# Linear independence

*Figures: linearly independent vectors in $\mathbb {R} ^{3}$; linearly dependent vectors in a plane in $\mathbb {R} ^{3}$.*

In the theory of vector spaces, a set of vectors is said to be linearly dependent if at least one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be linearly independent. These concepts are central to the definition of dimension.

A vector space can be finite-dimensional or infinite-dimensional depending on the maximum number of linearly independent vectors it contains. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining a basis for a vector space.

## Definition

A sequence of vectors $({\vec {v_{1}}},{\vec {v_{2}}},\dots ,{\vec {v_{k}}})$ from a vector space V is said to be linearly dependent if there exist scalars $a_{1},a_{2},\dots ,a_{k}$ , not all zero, such that

$a_{1}{\vec {v_{1}}}+a_{2}{\vec {v_{2}}}+\cdots +a_{k}{\vec {v_{k}}}={\vec {0}},$ where ${\vec {0}}$ denotes the zero vector.

Notice that if not all of the scalars are zero, then at least one is non-zero, say $a_{1}$ , in which case this equation can be written in the form

${\vec {v_{1}}}={\frac {-a_{2}}{a_{1}}}{\vec {v_{2}}}+\cdots +{\frac {-a_{k}}{a_{1}}}{\vec {v_{k}}}.$ Thus, ${\vec {v_{1}}}$ is shown to be a linear combination of the remaining vectors.
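This rearrangement can be checked numerically. The following is a minimal pure-Python sketch with hypothetical vectors chosen for illustration: given a dependence relation with $a_{1}\neq 0$, it computes ${\vec {v_{1}}}$ from the remaining vectors exactly as in the equation above, then verifies that the full combination is the zero vector.

```python
# Illustrative vectors in R^3 (an assumption, not from the text):
v2 = [1.0, 0.0, 2.0]
v3 = [0.0, 1.0, -1.0]

# Suppose the dependence relation is a1*v1 + a2*v2 + a3*v3 = 0:
a1, a2, a3 = 1.0, -2.0, 3.0

# Solve the relation for v1, as in the rearranged equation:
# v1 = (-a2/a1)*v2 + (-a3/a1)*v3
v1 = [(-a2 / a1) * x + (-a3 / a1) * y for x, y in zip(v2, v3)]

# Verify the original dependence relation gives the zero vector:
combo = [a1 * p + a2 * q + a3 * r for p, q, r in zip(v1, v2, v3)]
print(v1)     # [2.0, -3.0, 7.0]
print(combo)  # [0.0, 0.0, 0.0]
```

With these scalars, $v_{1}=2v_{2}-3v_{3}$, and substituting it back into the relation recovers the zero vector, confirming that the three vectors are linearly dependent.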

A sequence of vectors $({\vec {v_{1}}},{\vec {v_{2}}},\dots ,{\vec {v_{n}}})$ is said to be linearly independent if the equation

$a_{1}{\vec {v_{1}}}+a_{2}{\vec {v_{2}}}+\cdots +a_{n}{\vec {v_{n}}}={\vec {0}},$ can only be satisfied by $a_{i}=0$ for $i=1,\dots ,n$ . This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of ${\vec {0}}$ as a linear combination of its vectors is the trivial representation in which all the scalars $a_{i}$ are zero. Even more concisely, a sequence of vectors is linearly independent if and only if ${\vec {0}}$ can be represented as a linear combination of its vectors in a unique way.
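In computational terms, a finite list of vectors is linearly independent exactly when the matrix whose rows are those vectors has rank equal to the number of vectors. The following is a pure-Python sketch of this test via Gaussian elimination; `rank` and `is_independent` are illustrative helpers defined here, not standard-library functions.

```python
def rank(rows, eps=1e-12):
    """Rank of a matrix given as a list of rows, by Gaussian elimination."""
    rows = [row[:] for row in rows]  # work on a copy
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        # Find a pivot row with a non-negligible entry in this column.
        pivot = next((i for i in range(r, len(rows))
                      if abs(rows[i][col]) > eps), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate the column below the pivot.
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def is_independent(vectors):
    # Independent iff no row reduces to zero, i.e. rank equals the count.
    return rank(vectors) == len(vectors)

print(is_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(is_independent([[1, 2, 3], [2, 4, 6]]))             # False: 2nd = 2 * 1st
```

The floating-point tolerance `eps` is a practical concession: over the reals, exact arithmetic would decide independence exactly, but numerical data requires a threshold for "zero".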

The alternate definition, that a sequence of vectors is linearly dependent if and only if some vector in that sequence can be written as a linear combination of the other vectors, is only useful when the sequence contains two or more vectors. When the sequence contains no vectors or only one vector, the original definition is used.

### Infinite dimensions

In order to allow the number of linearly independent vectors in a vector space to be countably infinite, it is useful to define linear dependence as follows. More generally, let V be a vector space over a field K, and let $\{v_{i}\mid i\in I\}$ be a family of elements of V. The family is linearly dependent over K if there exist a nonempty finite subset $J\subseteq I$ and a family $\{a_{j}\mid j\in J\}$ of elements of K, all non-zero, such that

$\sum _{j\in J}a_{j}v_{j}=0.$ A set X of elements of V is linearly independent if the corresponding family $\{x\}_{x\in X}$ is linearly independent. Equivalently, a family is dependent if some member is in the linear span of the rest of the family, i.e., some member is a linear combination of the rest of the family. The trivial case of the empty family must be regarded as linearly independent for theorems to apply.

A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset $\{1,x,x^{2},\dots \}$ as a basis.
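The unique-representation property of a basis can be illustrated in the degree-at-most-2 case: a polynomial $p(x)=a+bx+cx^{2}$ is determined by its values at three distinct points, so its coefficients in the basis $\{1,x,x^{2}\}$ are unique. The sketch below recovers $(a,b,c)$ from the values at the sample points 0, 1, 2 (an illustrative choice); `coeffs_from_values` is a hypothetical helper, not a library function.

```python
def coeffs_from_values(p0, p1, p2):
    """Recover (a, b, c) of p(x) = a + b*x + c*x^2 from p(0), p(1), p(2)."""
    # p(0) = a,  p(1) = a + b + c,  p(2) = a + 2b + 4c
    a = p0
    # Eliminate a, then b, by back-substitution:
    c = ((p2 - a) - 2 * (p1 - a)) / 2
    b = (p1 - a) - c
    return a, b, c

# The zero polynomial has only the trivial representation,
# which is exactly the independence of {1, x, x^2}:
print(coeffs_from_values(0, 0, 0))  # (0, 0.0, 0.0)

# p(x) = 3 - x + 2x^2 takes values 3, 4, 9 at x = 0, 1, 2:
print(coeffs_from_values(3, 4, 9))  # (3, -1.0, 2.0)
```

That the zero polynomial forces all three coefficients to vanish is precisely the statement that $\{1,x,x^{2}\}$ is linearly independent; that every quadratic is reached makes it a spanning set, hence a basis of the polynomials of degree at most 2.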