
1 Introduction

The Kaczmarz algorithm [28] is an iterative method for solving systems of linear equations of the form

$$\begin{aligned} Ax=b, \end{aligned}$$
(1)

where \(A\in \mathcal{R}^{m\times n}\) has full column rank with \(m\ge n\), and \(b\in {\mathcal {R}}^m\). In the consistent case, the solution of (1) can be regarded as the common point of the hyperplanes defined by the individual equations in (1):

$$\begin{aligned} \mathcal {P}_i=\{x| a_i^Tx=b_i\}, \end{aligned}$$
(2)

where \(a_i^T\), \(i=1,2,\cdots , m\), denotes the ith row of A and \(b_i\) is the ith element of vector b.

The idea of Kaczmarz-type algorithms is to exploit the geometric structure of problem (1) and use a sequence of projections to seek the solution. The recursive process can be formulated as follows. Let \(x_{0}\) be an initial guess for the solution of (1); then the classical Kaczmarz algorithm iteratively generates a sequence of approximate solutions \(x_k\) by the recursive formula:

$$\begin{aligned} x_{k+1}=x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i, \end{aligned}$$
(3)

where \(i=\mathrm{mod}(k,m)+1\). For a given \(x_k\), from (3) we can see that \(x_{k+1}\) satisfies the ith equation in (1), i.e., \(a_i^Tx_{k+1}=b_i\). The updating formula (3) implicitly produces the solution to the following constrained optimization problem [21, 37]

$$\min _{\{x\mid a_i^Tx=b_i\}}||x-x_k||_2,$$

which is equivalent to finding the projection of \(x_k\) onto the hyperplane \(\mathcal {P}_i\). Two geometric illustrations of the above process are given in Fig. 1:

Fig. 1. Geometric illustrations of the classical Kaczmarz iterations with \(m=4\).
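For concreteness, the following is a minimal Python sketch of the cyclic iteration (3); the function and variable names are our own, not taken from any particular library.

```python
import numpy as np

def kaczmarz(A, b, x0, n_iters=1000):
    """Classical cyclic Kaczmarz iteration (3)."""
    m = A.shape[0]
    x = np.asarray(x0, dtype=float).copy()
    for k in range(n_iters):
        i = k % m                                  # i = mod(k, m) + 1 in 1-based notation
        a_i = A[i]
        # project the current iterate onto the hyperplane a_i^T x = b_i
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x
```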

By comparing the projection processes displayed in Fig. 1, it is natural to surmise that the convergence of the classical Kaczmarz algorithm depends strongly on the geometric positions of the associated hyperplanes. If the normal vectors of every two successive hyperplanes keep reasonably large angles, the convergence of the classical Kaczmarz algorithm will be fast, whereas two nearly parallel consecutive hyperplanes will slow the convergence down. The Kaczmarz algorithm can be regarded as a special application of von Neumann's famous alternating projection method [35], originally published in 1933. The fundamental idea can even be traced back to Schwarz [38] in the 1870s.

In the past few years, the Kaczmarz algorithm has been interpreted as a successive projection method [4, 7, 8, 11,12,13], also known as projection onto convex sets (POCS) [9, 17, 18, 42,43,44] in the optimization community. Since each iteration of the Kaczmarz algorithm needs only \(\mathcal{O}(n)\) flops and the cost is independent of the number of equations, this type of algorithm is well suited to problems with \(m\gg n\). Due to its simplicity and generality, Kaczmarz algorithms find viable applications in the areas of image processing and signal processing [19, 20, 24,25,26, 30, 36] under the name of algebraic reconstruction techniques (ART). Since the 1980s, relaxation variants [11, 25, 41]

$$\begin{aligned} x_{k+1}=x_{k}+\lambda _k\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i, \end{aligned}$$
(4)

and the block versions [3, 33, 34]

$$\begin{aligned} x_{k+1}=x_{k}+A_{\tau }^{\dag }(b_{\tau }-A_{\tau }x_k),\ \text {with} \ A= \left( \begin{array}{c} A_1 \\ A_2 \\ \vdots \\ A_M \\ \end{array} \right) , b= \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_M \\ \end{array} \right) , \tau \in \{1, 2, \cdots , M\}, \end{aligned}$$
(5)

of the Kaczmarz algorithm have been widely investigated, and fruitful theoretical results have been obtained. In particular, for consistent linear systems, it is shown [5, 21, 31, 39] that the Kaczmarz iterations converge to the minimum norm least squares solution \(x=A^{\dag }b\) for any starting vector \(x_0\) in the column space of \(A^T\). For inconsistent linear systems, the cyclic subsequences generated by the Kaczmarz algorithm converge to a weighted least squares solution as the relaxation parameter \(\lambda _k\) goes to zero [12].
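As an illustration of the block update (5), here is a schematic Python sketch; the block partitioning and all names are our own choices, and \(A_{\tau }^{\dag }\) is realized with a dense pseudoinverse purely for clarity.

```python
import numpy as np

def block_kaczmarz(A, b, row_blocks, x0, n_iters=100):
    """Cyclic block Kaczmarz iteration (5): project onto the solution
    set of a whole block of equations at once via the pseudoinverse."""
    x = np.asarray(x0, dtype=float).copy()
    M = len(row_blocks)
    for k in range(n_iters):
        tau = row_blocks[k % M]            # row indices of the current block
        A_tau, b_tau = A[tau], b[tau]
        x += np.linalg.pinv(A_tau) @ (b_tau - A_tau @ x)
    return x
```

For example, `row_blocks = np.array_split(np.arange(A.shape[0]), M)` partitions the rows into M contiguous blocks.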

As indicated in Fig. 1, the convergence of the classical Kaczmarz algorithm depends on the sequence of successive projections, which relies on the ordering of the rows of the matrix A. In some real applications, it has been observed [25, 30] that selecting the rows of A at random, rather than sequentially, at each step of the Kaczmarz algorithm can often improve its convergence. Recently, in the remarkable paper [39], Strohmer and Vershynin proved the rate of convergence of the following randomized Kaczmarz algorithm

$$ x_{k+1}=x_{k}+\frac{b_{r(i)}-a_{r(i)}^Tx_k}{||a_{r(i)}||_2^2}a_{r(i)} $$

where r(i) is chosen from \(\{1, 2, \cdots , m\}\) with probability \(\frac{||a_{r(i)}||_2^2}{||A||_F^2}\). In particular, the following bound on the expected rate of convergence of the randomized Kaczmarz method was proved:

$$\begin{aligned} \mathbb {E}||x_k-x||_2^2\le (1-\frac{1}{\kappa (A)^2})^k||x_0-x||^2_2, \end{aligned}$$
(6)

where \(\kappa (A)=||A||_F||A^{-1}||_2\), with \(||A^{-1}||_2=\inf \{M: M||Ax||_2\ge ||x||_2 \ \text {for all} \ x\}\), is the scaled condition number of A introduced by Demmel [14]. This pioneering work, which characterized the convergence rate of the randomized Kaczmarz algorithm, stimulated considerable interest in this area, and various investigations [1, 2, 6, 10, 15] have been performed recently. In particular, some acceleration strategies have been proposed [6, 16, 22] and convergence analyses were performed in [21, 23, 27, 29, 31, 32]. See also [21, 23] for some comments on equivalent interpretations of the randomized Kaczmarz algorithms.
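A minimal sketch of the randomized iteration with the row-sampling probabilities of [39] (again, the code itself is ours):

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, n_iters=1000, seed=0):
    """Randomized Kaczmarz [39]: row i is drawn with
    probability ||a_i||_2^2 / ||A||_F^2."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.sum(A * A, axis=1)       # ||a_i||_2^2
    probs = row_norms_sq / row_norms_sq.sum()  # ||a_i||_2^2 / ||A||_F^2
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        i = rng.choice(len(probs), p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x
```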

2 Optimal Row Selecting Strategy of the Kaczmarz Algorithm for Solving Consistent System of Linear Equations

In this section, we consider the case where the system of linear equations (1) is consistent and x is a solution. If the ith row is selected at the \((k+1)\)th iteration of the Kaczmarz algorithm, i.e.,

$$x_{k+1}=x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i,$$

then \(x_{k+1}\) can be reformulated as

$$ \begin{array}{lll} x_{k+1}&{}=&{}x_{k}+\frac{b_i-a_i^Tx_k}{||a_i||_2^2}a_i\\ &{}=&{}x_k+\frac{b_i}{||a_i||_2^2}a_i-\frac{x_k^Ta_i}{||a_i||_2^2}a_i\\ &{}=&{}x_k+\frac{a_i^Tx}{||a_i||_2^2}a_i-\frac{a_i^Tx_k}{||a_i||_2^2}a_i\\ &{}=&{}x_k+\frac{a_i^T(x-x_k)}{||a_i||_2^2}a_i\\ &{}=&{}x_k+\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k).\\ \end{array}$$

It follows that

$$\begin{aligned} \begin{array}{lll} x-x_{k+1}&{}=&{}x-x_k-\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k)\\ &{}=&{}(I-\frac{a_ia_i^T}{||a_i||_2^2})(x-x_k)\\ \end{array} \end{aligned}$$
(7)

and thus

$$\begin{aligned} x_{k+1}-x_k=\frac{a_ia_i^T}{||a_i||_2^2}(x-x_k). \end{aligned}$$
(8)

From (7) and (8), we can see that

$$\begin{aligned} x-x_{k+1}\perp x_{k+1}-x_k, \end{aligned}$$
(9)

i.e.,

$$\begin{aligned} x-x_{k+1}\perp a_i. \end{aligned}$$
(10)

To this end, let us make the following orthogonal direct sum decomposition of \(x-x_k\):

$$\begin{aligned} x-x_{k}=\alpha \hat{a}_i +\beta \hat{a}_i^{\perp }, \end{aligned}$$
(11)

where \(\hat{a}_i=\frac{a_i}{||a_i||_2}\) and \(\hat{a}_i^{\perp }\) is a normalized vector orthogonal to \(a_i\). The coefficients \(\alpha \) and \(\beta \) can be written as

$$\alpha =||x-x_k||_2\cos \theta _{k_i},$$
$$\beta =||x-x_k||_2\sin \theta _{k_i},$$

where \(\theta _{k_i}=\angle (x-x_k, a_i)\) is the angle between the vectors \((x-x_{k})\) and \(a_i\).

Substituting the above decomposition (11) into (7) gives

$$\begin{aligned} \begin{array}{lll} x-x_{k+1}&{}=&{}(I-\frac{a_ia_i^T}{||a_i||_2^2})(\alpha \hat{a}_i +\beta \hat{a}_i^{\perp })\\ &{}=&{}\beta \hat{a}_i^{\perp }\\ &{}=&{}||x-x_k||_2\sin \theta _{k_i} \hat{a}_i^{\perp }.\\ \end{array} \end{aligned}$$
(12)

It follows that

$$\begin{aligned} \begin{array}{lll} ||x-x_{k+1}||_2=||x-x_k||_2\cdot |\sin \theta _{k_i}|.\\ \end{array} \end{aligned}$$
(13)

From (13) we can see that the error norms generated by the Kaczmarz algorithm are monotonically nonincreasing. Moreover, the convergence can be optimized if \(|\sin \theta _{k_i}|\) is minimized at every iteration, which is equivalent to selecting the row \(a_{i}\) that solves the optimization problem

$$|\sin \angle (x-x_k, a_{i})|=\min _{j} |\sin \angle (x-x_k, a_j)|.$$

As x is the unknown solution, the above minimization problem seems unsolvable. However, the consistency of the linear system (1) implies

$$a_j^Tx=b_j, \ j=1,2, \cdots , m$$

and \(x_k\) is fixed at the \((k+1)\)th iteration. Therefore, the minimization problem can be tackled by maximizing \(|\cos \angle (x-x_k, a_j)|\), i.e.,

$$\begin{aligned} \begin{array}{lll} |\cos \angle (x-x_k, a_j)|&{}=&{}\frac{|a_j^T(x-x_k)|}{||x-x_k||_2||a_j||_2} \\ &{}=&{}\frac{|b_j-a_j^Tx_k|}{||x-x_k||_2||a_j||_2}\\ &{}=&{}\frac{|r_k(j)|}{||x-x_k||_2||a_j||_2},\\ \end{array} \end{aligned}$$
(14)

where \(r_k=b-Ax_k=\left( \begin{array}{cccc} r_k(1), &{} r_k(2), &{} \cdots , &{} r_k(m) \\ \end{array} \right) ^T. \)

It is clear from (14) that the optimal updating strategy for the Kaczmarz algorithm is to select the row \(\hat{i}\) that satisfies

$$|b_{\hat{i}}-a_{\hat{i}}^Tx_k|=\max _j|b_j-a_j^Tx_k|=||b-Ax_k||_{\infty },$$

i.e., the index at which \(r_k\) has the largest entry in absolute value. We refer to the above row selection method as the optimal selecting strategy, and call the Kaczmarz algorithm with the optimal selecting strategy the optimal Kaczmarz algorithm.
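A sketch of the optimal Kaczmarz algorithm under this selecting strategy; note that recomputing the full residual costs \(\mathcal{O}(mn)\) flops per step, which this illustrative version makes no attempt to avoid.

```python
import numpy as np

def optimal_kaczmarz(A, b, x0, n_iters=1000):
    """Kaczmarz with the optimal selecting strategy: at every step
    project onto the row where the residual is largest in magnitude."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        r = b - A @ x                     # residual r_k
        i = np.argmax(np.abs(r))          # row with largest |r_k(j)|, i.e., ||r_k||_inf
        x += r[i] / (A[i] @ A[i]) * A[i]
    return x
```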

Next, we analyze the convergence of the optimal Kaczmarz algorithm for solving consistent systems of linear equations. To simplify the analysis, we introduce two pieces of notation:

$$\theta _{k}^{\hat{i}}=\min _{j}\angle (x_{k}-x, a_j),$$

and

$$\theta _{p}^{\hat{i}}=\max _{k}\theta _{k}^{\hat{i}},$$

where \(1\le \hat{i}\le m\) and \(1\le p \le k\).

Based on (13), the \((k+1)\)th error can be bounded as follows

$$\begin{aligned} \begin{array}{lll} ||x-x_{k+1}||_2&{}=&{}||x-x_{k}||_2\cdot |\sin \theta _{k}^{\hat{i}}|\\ &{}=&{}||x-x_0||_2\cdot |\sin \theta _{k}^{\hat{i}}|\cdot |\sin \theta _{k-1}^{\hat{i}}|\cdots |\sin \theta _{0}^{\hat{i}}| \\ &{}\le &{}||x-x_0||_2\cdot |\sin \theta _{p}^{\hat{i}}|^k, \\ \end{array} \end{aligned}$$
(15)

where \(1\le p \le k\).

Since

$$0\le \sin \theta _{p}^{\hat{i}}\le 1,$$

we can theoretically divide the convergence history of the Kaczmarz algorithm into two periods:

  • when \(\sin \theta _{p}^{\hat{i}}<1\), the algorithm converges exponentially;

  • when \(\sin \theta _{p}^{\hat{i}}= 1\), we have

    $$\max _{j}|a_j^T(x_{p}-x)|=0$$

    and thus,

    $$a_j^T(x_{p}-x)=0, j=1,2,\cdots , m.$$

    This implies that \(Ax_{p}=b\), i.e., \(x_{p}\) solves the system of linear equations (1).

In summary, for solving the consistent system of linear equations (1), there exists a theoretical optimal selecting strategy, or optimal randomization strategy, for the Kaczmarz algorithm. With this strategy, the algorithm converges exponentially and achieves convergence when

$$\max _{k}\min _{1\le j\le m}\angle (x_{k}-x, a_j)=\frac{\pi }{2}.$$

3 Randomized Kaczmarz Algorithm for Solving Inconsistent System of Linear Equations

Suppose (1) is a consistent system of linear equations whose right hand side is perturbed by a noise vector r as follows:

$$\begin{aligned} Ax\simeq b+r, \end{aligned}$$
(16)

Note that (16) can be either consistent or inconsistent. In this section, we give some remarks on the convergence of the randomized Kaczmarz algorithm for solving (16), which was investigated by D. Needell [32].

First, we recall Lemma 2.2 in [32].

Lemma 1

Let \(H_i\) be the affine subspaces of \(\mathcal {R}^n\) consisting of the solutions to unperturbed equations, \(H_i=\{x \mid \langle a_i,x\rangle =b_i\}\). Let \(\tilde{H}_i\) be the solution spaces of the noisy equations, \(\tilde{H}_i=\{x \mid \langle a_i,x\rangle =b_i+r_i\}\). Then

$$\tilde{H}_i=\{w+\alpha _ia_i\mid w\in H_i\}$$

where \(\alpha _i=\frac{r_i}{||a_i||^2_2}\).

Remarks: If Lemma 1 is used to interpret the Kaczmarz algorithm for solving the perturbed and unperturbed equations, we need to introduce a vector \(v_i\) in the orthogonal complement of the vector \(a_i\), and write \(\tilde{x}_i\in \tilde{H}_i\) as

$$\tilde{x}_i=x_i+\alpha _ia_i+\beta v_i$$

where \(x_i\) is the approximate solution generated by the Kaczmarz algorithm for the unperturbed equations, and \(v_i\) is a vector in the orthogonal complement of \(a_i\).

Example 1. Consider the \(2\times 2\) system of linear equations

$$\left\{ \begin{array}{ll} x_1+x_2=1, \\ x_1-x_2=1, \end{array} \right. $$

and the perturbed equations

$$\left\{ \begin{array}{ll} x_1+x_2=1.5, \\ x_1-x_2=1.5, \end{array} \right. $$

i.e., \(A=\left( \begin{array}{cc} 1 &{} 1 \\ 1 &{} -1 \\ \end{array} \right) \), \(b=\left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right) \) and \(r=\left( \begin{array}{c} 0.5 \\ 0.5 \\ \end{array} \right) \).

Let

$$H_i\doteq \{x\mid \langle a_i, x\rangle =b_i\}$$

and

$$\tilde{H}_i\doteq \{\tilde{x}\mid \langle a_i, \tilde{x}\rangle =b_i+r_i\}.$$

If we use \(x_0=\left( \begin{array}{c} 1 \\ 0\\ \end{array} \right) \) as the same initial guess for the perturbed and unperturbed linear system, then

$$H_1= \{\left( \begin{array}{c} 1 \\ 0\\ \end{array} \right) +\xi \left( \begin{array}{c} -1 \\ 1\\ \end{array} \right) \mid \xi \in \mathcal {R} \}$$

and

$$\tilde{H}_1= \{\left( \begin{array}{c} 1.5 \\ 0\\ \end{array} \right) +\xi \left( \begin{array}{c} -1 \\ 1\\ \end{array} \right) \mid \xi \in \mathcal {R} \}$$

Note that \(a_1=\left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right) , \) \(||a_1||_2^2=2\) and \(r_1=\frac{1}{2}\). By Lemma 1 we have \(\alpha _1=\frac{r_1}{||a_1||_2^2}=\frac{1}{4}\),

i.e.,

$$\tilde{H}_1=\{w+\frac{1}{4}\left( \begin{array}{c} 1 \\ 1\\ \end{array} \right) \mid w\in H_1\}.$$
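The shift can be verified numerically; the following throwaway check (ours, not part of the original example) confirms that adding \(\alpha _1 a_1\) to a point of \(H_1\) lands on \(\tilde{H}_1\).

```python
import numpy as np

a1, b1, r1 = np.array([1.0, 1.0]), 1.0, 0.5
alpha1 = r1 / (a1 @ a1)                             # alpha_1 = r_1 / ||a_1||^2 = 1/4
w = np.array([1.0, 0.0])                            # the point x_0, which lies on H_1
assert np.isclose(a1 @ w, b1)                       # w satisfies the clean equation
assert np.isclose(a1 @ (w + alpha1 * a1), b1 + r1)  # shifted point lies on H~_1
```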
In order to derive the convergence rate of the randomized Kaczmarz algorithm for solving the perturbed linear equations (16), we need to make use of the established convergence results [39] for the unperturbed linear system (1), together with the relationship between the approximate solutions generated by the Kaczmarz algorithm [39] for the perturbed and unperturbed linear equations. In [32], D. Needell analyzed the convergence rate and error bound of the randomized Kaczmarz algorithm for solving the perturbed linear equations, taking the approximate solution of the perturbed linear equations as the guess for the unperturbed system, which simplifies the derivation. However, the approximate solutions generated by applying the randomized Kaczmarz algorithm to the perturbed linear system may not converge to the solution of the unperturbed linear system.

In what follows, we consider the convergence rate of the randomized Kaczmarz algorithm for solving (16) from a different perspective: we bound the difference between the solution of the unperturbed linear system (1) and the approximate solutions generated by applying the randomized Kaczmarz algorithm to the perturbed linear system.

In the following discussion, we use \(x_k\) and \(\tilde{x}_k\) to denote the approximate solutions generated by applying the randomized Kaczmarz algorithm to (1) and (16), respectively. The recursive formulas can be written as

$$\begin{aligned} x_{k+1}=x_k+\frac{b_{i{_k}}-x_k^Ta_{i{_k}}}{||a_{i{_k}}||_2^2}a_{i{_k}} \end{aligned}$$
(17)

and

$$\begin{aligned} \tilde{x}_{k+1}=\tilde{x}_k+\frac{b_{i{_k}}+r_{i{_k}}-\tilde{x}_k^Ta_{i{_k}}}{||a_{i{_k}}||_2^2}a_{i{_k}}, \end{aligned}$$
(18)

where the subscript \(i{_k}\in \{1, 2, \cdots , m\}\) indicates that the \(i{_k}\)th row is selected with probability \(\frac{||a_{i{_k}}||^2_2}{||A||^2_F}\) at the kth iteration.

Suppose the same initial guess \(x_0=\tilde{x}_0\) is used as the starting vector. Then

$$ \tilde{x}_{1}=\tilde{x}_0+\frac{b_{i{_0}}+r_{i{_0}}-\tilde{x}_0^Ta_{i{_0}}}{||a_{i{_0}}||_2^2}a_{i{_0}} $$

and

$$ x_{1}=x_0+\frac{b_{i{_0}}-x_0^Ta_{i{_0}}}{||a_{i{_0}}||_2^2}a_{i{_0}}. $$

It follows that

$$\begin{aligned} \tilde{x}_1=x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2}. \end{aligned}$$
(19)

In the next iteration, we have

$$ \begin{array}{lll} \tilde{x}_2&{}=&{}\tilde{x}_1+\frac{b_{i{_1}}+r_{i{_1}}-\tilde{x}_1^Ta_{i{_1}}}{||a_{i{_1}}||_2^2}a_{i{_1}}\\ &{}=&{}x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2}+\frac{b_{i_1}-(x_1+\frac{r_{i_{0}}a_{i_0}}{||a_{i_0}||_2^2})^Ta_{i_1}}{||a_{i_1}||_2^2}a_{i_1}+\frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}\\ &{}=&{}\underbrace{x_1+\frac{b_{i{_1}}-x_1^Ta_{i{_1}}}{||a_{i{_1}}||_2^2}a_{i{_1}}}_{x_2} + \frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}+ \underbrace{(I-\frac{a_{i_1}a_{i_1}^T}{||a_{i_1}||_2^2})\frac{r_{i_0}a_{i_0}}{||a_{i_0}||_2^2}}_{v_{i_1}}\\ &{}=&{}x_2+\frac{r_{i_{1}}a_{i_1}}{||a_{i_1}||_2^2}+v_{i_1} \end{array}$$

where \(v_{i_1}=(I-\frac{a_{i_1}a_{i_1}^T}{||a_{i_1}||_2^2})\frac{r_{i_0}a_{i_0}}{||a_{i_0}||_2^2}\in span\{a_{i_1}\}^{\bot }\) with \(||v_{i_1}||_2\le \frac{|r_{i_0}|}{||a_{i_0}||_2}\).

Continuing the above process, we have

$$\begin{aligned} \tilde{x}_{k}=x_{k}+\frac{r_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}a_{i_{k-1}}+\sum _{j=1}^{k-1}v_{i_j}, \end{aligned}$$
(20)

where, analogously to \(v_{i_1}\), each \(v_{i_j}\) collects the contribution of the noise term \(r_{i_{j-1}}\) after the subsequent orthogonal projections, so that \(||v_{i_j}||_2\le \frac{|r_{i_{j-1}}|}{||a_{i_{j-1}}||_2}\).

Subtracting x from both sides of (20) gives

$$\begin{aligned} \tilde{x}_{k}-x=x_{k}-x+\frac{r_{i_{k-1}}a_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}+\sum _{j=1}^{k-1}v_{i_j}. \end{aligned}$$
(21)

Based on Jensen’s inequality and (6), we have

$$\begin{aligned} \mathbb {E}||x_k-x||_2 \le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2, \end{aligned}$$
(22)

where \(\kappa (A)=||A||_F||A^{-1}||_2\), with \(||A^{-1}||_2=\inf \{M: M||Ax||_2\ge ||x||_2 \ \text {for all} \ x\}\).

Taking norms on both sides of (21) and using the triangle inequality, we have

$$\begin{array}{lll} \mathbb {E}(||\tilde{x}_{k}-x||_2)&{}\le &{} \mathbb {E}(||x_{k}-x||_2)+||\frac{r_{i_{k-1}}a_{i_{k-1}}}{||a_{i_{k-1}}||_2^2}||_2+\sum \limits _{j=1}^{k-1}||v_{i_j}||_2\\ &{}\le &{}(1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+\frac{|r_{i_{k-1}}|}{||a_{i_{k-1}}||_2}+\sum \limits _{j=1}^{k-1}\frac{|r_{i_{j-1}}|}{||a_{i_{j-1}}||_2}\\ &{}\le &{}(1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+k\gamma , \\ \end{array}$$

where \(\gamma =\max \limits _{1\le i\le m}\frac{|r_i|}{||a_i||_2}\).

In conclusion, we have derived the following theorem.

Theorem 1

Let A be a matrix of full column rank and assume the system \(Ax = b\) is consistent. Let \(\tilde{x}_{k}\) be the kth iterate of the noisy randomized Kaczmarz method applied to \(Ax \simeq b +r\), and let \(a_1,\cdots , a_m\) denote the rows of A. Then we have

$$\mathbb {E}||\tilde{x}_{k}-x||_2\le (1-\frac{1}{\kappa (A)^2})^\frac{k}{2}||x_0-x||_2+k\gamma ,$$

where \(\kappa (A)=||A||_F||A^{-1}||_2\) and \(\gamma =\max \limits _{1\le i\le m}\frac{|r_i|}{||a_i||_2}\).
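To see the two terms of the bound at work, one can run (17) and (18) with the same random row sequence and compare both iterates against the solution x of the unperturbed system. The sizes, noise level, and seed below are an arbitrary test setup of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                     # consistent system (1)
r = 1e-3 * rng.standard_normal(m)                  # noise vector of (16)

row_norms_sq = np.sum(A * A, axis=1)
probs = row_norms_sq / row_norms_sq.sum()
gamma = np.max(np.abs(r) / np.sqrt(row_norms_sq))  # gamma of Theorem 1

x_clean, x_noisy = np.zeros(n), np.zeros(n)
for k in range(2000):
    i = rng.choice(m, p=probs)                     # the same i_k drives both runs
    x_clean += (b[i] - A[i] @ x_clean) / row_norms_sq[i] * A[i]         # (17)
    x_noisy += (b[i] + r[i] - A[i] @ x_noisy) / row_norms_sq[i] * A[i]  # (18)

print(np.linalg.norm(x_clean - x_true))  # decays exponentially, cf. (6)
print(np.linalg.norm(x_noisy - x_true))  # stalls near the noise level, cf. Theorem 1
```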

4 Conclusions

In this paper, we provide a new look at the Kaczmarz algorithm for solving systems of linear equations. The optimal row selecting strategy of the Kaczmarz algorithm for solving consistent systems of linear equations is derived. The convergence of the randomized Kaczmarz algorithm for solving perturbed systems of linear equations is analyzed, and a new bound on the convergence rate is obtained from a new perspective.