The following definition is used throughout mathematics, and applies to any function, not just linear transformations.
Let $f \colon X \to Y$ be a function. $f$ is called injective if for any two elements $x_1, x_2 \in X$ we have that: if $f(x_1) = f(x_2)$ then $x_1 = x_2$.
The following theorem shows that, for linear transformations, being injective is the same as having trivial kernel.
Let $T \colon V \to W$ be a linear transformation. The following statements are equivalent:
$T$ is injective,
$\ker(T) = \{0\}$.
[Aside: The word “injective” is synonymous with “one-to-one”. I recommend thinking of injective functions as those which map the domain into the codomain without collisions (no two elements map to the same element).]
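The kernel criterion above can also be checked numerically. The following sketch is not part of the notes: it uses NumPy and the rank–nullity relation, under the assumption that the linear transformation is given as multiplication by a matrix $A$, so that the kernel is trivial exactly when $A$ has full column rank.

```python
import numpy as np

# Illustrative sketch (NumPy is not used in the notes): for the map
# x -> A x, ker is trivial iff rank(A) equals the number of columns,
# so injectivity can be tested via the rank.
def is_injective(A):
    A = np.asarray(A, dtype=float)
    return int(np.linalg.matrix_rank(A)) == A.shape[1]

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0]])   # full column rank: injective map R^2 -> R^3
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, non-trivial kernel: not injective
```

The matrices `A` and `B` here are made up for illustration; any matrix can be substituted.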
For the following exercises you are expected to use Theorem 4.32.
Let . Prove that defines a non-injective linear transformation, whilst defines an injective linear transformation.
Write down 3 of your own linear transformations which are injective, and 3 which are not injective.
[End of Exercise]
A linear transformation $T \colon V \to W$ is surjective if $\operatorname{im}(T) = W$.
[Aside: I recommend remembering surjective, because the French word “sur” means “onto”; and for such a linear transformation $T \colon V \to W$, for each vector $w \in W$ there is a vector $v \in V$ which maps onto $w$.]
In the following examples, one can use Theorem 4.16 to justify whether or not $\operatorname{im}(T)$ equals the codomain.
Here are 3 surjective linear transformations from :
Here are 3 non-surjective linear transformations:
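Surjectivity can be tested in the same numerical spirit as injectivity; again this is an illustration, not part of the notes. For a map given by a matrix $A$, the image is the column space of $A$, so the map is surjective exactly when the rank equals the number of rows.

```python
import numpy as np

# Illustrative sketch (not from the notes): im(T) for x -> A x is the
# column space of A, so T is surjective onto R^m exactly when
# rank(A) equals the number of rows m.
def is_surjective(A):
    A = np.asarray(A, dtype=float)
    return int(np.linalg.matrix_rank(A)) == A.shape[0]

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # rank 2 = number of rows: surjective
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank 1 < 2 rows: not surjective
```

As before, the specific matrices are hypothetical examples.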
If a linear transformation is both injective and surjective, then it is called bijective.
Let $T \colon V \to W$ be a linear transformation between finite-dimensional vector spaces. Let $B$ and $C$ be any two bases of $V$ and $W$, respectively. The following conditions are equivalent to each other:
$T$ is bijective,
The matrix of $T$ with respect to $B$ and $C$ is invertible.
Moreover, when these are satisfied, the inverse transformation of $T$ is the linear transformation associated to the inverse matrix.
Recall from Theorem 1.12 that a matrix is invertible if and only if .
[Aside: By a set-theoretic result from MATH112, any function is bijective if and only if it has an “inverse function”.]
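The “moreover” part of the theorem can be illustrated numerically. In the sketch below (not part of the notes; the $2 \times 2$ matrix is made up, and the nonzero-determinant test is one common invertibility criterion, which may or may not be the one stated in Theorem 1.12), the inverse matrix represents the inverse transformation, so composing the two gives the identity.

```python
import numpy as np

# Illustrative sketch: if the matrix A of T is invertible, then the
# inverse matrix represents T^{-1}, and A @ inv(A) is the identity.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])           # det = 1, so A is invertible
A_inv = np.linalg.inv(A)
identity_check = A @ A_inv           # should be the 2x2 identity matrix
```

This mirrors the statement that $T^{-1} \circ T$ is the identity transformation.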
Let be the function defined by
for any . Prove, using Theorem 4.39, that defines a bijective linear transformation.
Solution: To apply that Theorem, we first need to check that is a linear transformation. Let and , and . Then:
Next, to apply the Theorem, we need to choose a basis for the domain and the codomain. Let us use the standard ones:
is the standard basis for ,
is the standard basis for .
Then we compute that , which is the identity matrix, and obviously invertible. Therefore, is a bijective linear transformation.
Prove that the linear transformation defined by
is bijective, and find its inverse.
Write down your own examples of
3 linear transformations which are injective but not surjective,
3 linear transformations which are surjective but not injective,
3 linear transformations which are neither injective nor surjective.
[End of Exercise]
Assume $T \colon V \to W$ is a bijective linear transformation between vector spaces over a field $F$. If $\{v_1, \dots, v_n\}$ is a basis for $V$, then $\{T(v_1), \dots, T(v_n)\}$ is a basis for $W$.
Since $T$ is bijective, it is surjective. So for any $w \in W$, there is a $v \in V$ such that $T(v) = w$. Since $\{v_1, \dots, v_n\}$ spans $V$, there are scalars $c_1, \dots, c_n$ such that $v = c_1 v_1 + \cdots + c_n v_n$. Then
$w = T(v) = T(c_1 v_1 + \cdots + c_n v_n) = c_1 T(v_1) + \cdots + c_n T(v_n)$,
where the last equality follows since $T$ is linear.
Therefore, $\{T(v_1), \dots, T(v_n)\}$ spans $W$.
To prove $\{T(v_1), \dots, T(v_n)\}$ is linearly independent, assume we have scalars $c_1, \dots, c_n$ as follows:
$0 = c_1 T(v_1) + \cdots + c_n T(v_n) = T(c_1 v_1 + \cdots + c_n v_n)$, where the last equality follows since $T$ is linear.
Since $T$ is bijective, it is injective, and therefore $\ker(T) = \{0\}$. In particular, $c_1 v_1 + \cdots + c_n v_n = 0$. Since $\{v_1, \dots, v_n\}$ is linearly independent, $c_i = 0$ for every $i$. Hence $\{T(v_1), \dots, T(v_n)\}$ is linearly independent. So $\{T(v_1), \dots, T(v_n)\}$ is a basis for $W$. ∎
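The theorem can be seen concretely when $T$ is given by an invertible matrix $A$: the images of the standard basis vectors are the columns of $A$, and they form a basis precisely because $A$ has full rank. A small sketch (illustrative, not from the notes):

```python
import numpy as np

# For a bijective T with invertible matrix A, the images T(e_i) of the
# standard basis are the columns of A; full rank means they are a basis.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # invertible, so T is bijective
images = [A @ e for e in np.eye(2)]        # T(e_1), T(e_2)
M = np.column_stack(images)                # columns are the image vectors
```

Here `M` recovers $A$ itself, which is exactly why its columns are independent.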
Let $T \colon V \to W$ be a linear transformation between (finite-dimensional) vector spaces over $F$. If $\dim(V) = \dim(W)$, then the following are equivalent:
$T$ is injective,
$T$ is surjective.
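In matrix terms, this equivalence says that for a square matrix, full column rank and full row rank are the same condition. A sketch (illustrative matrices, not from the notes):

```python
import numpy as np

# When dim V = dim W the matrix of T is square, so full column rank
# (injective) and full row rank (surjective) hold or fail together.
def is_injective(A):
    return int(np.linalg.matrix_rank(A)) == A.shape[1]

def is_surjective(A):
    return int(np.linalg.matrix_rank(A)) == A.shape[0]

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # square, rank 2: injective and surjective
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # square, rank 1: neither
```

Note the equivalence fails for non-square matrices, matching the hypothesis $\dim(V) = \dim(W)$.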
If there exists a bijective linear transformation $T \colon V \to W$, then $V$ and $W$ are said to be isomorphic.
Let $V$ and $W$ be finite-dimensional vector spaces over the same field $F$. Then $V$ and $W$ are isomorphic if and only if $\dim(V) = \dim(W)$.
If a bijective linear transformation exists, then by Theorem 4.43 the dimensions must be equal. Conversely, if the dimensions are equal, then when we choose a basis for each space, they must be of the same size. So define the linear transformation associated to the identity matrix using these bases; this must be a bijective linear transformation. ∎
Find a bijective linear transformation between the vector spaces and over .
[End of Exercise]
[Aside: Theorem 4.46 shows that, in linear algebra, the concept of isomorphism is “uninteresting” since it is equivalent to the dimensions being the same. The reason we introduce the terminology here is due to its wide usage in other mathematical disciplines, as a way of describing when two different mathematical objects are “the same” (i.e. isomorphic), in a precisely defined sense. For example, and are isomorphic as sets (because there is a set-theoretic bijection between them), but they are not isomorphic as vector spaces (since their dimensions are different). Isomorphisms of groups and of rings will be studied in MATH225. Those are both abstract mathematical concepts which are defined using axioms, like we have done for fields and vector spaces.]