Week 0

  1. A reference to all symbols necessary for the notes Linear Algebra Symbols
  2. A list of Problems associated with the Notes 471-ProblemList
  3. A series of unanswered questions about Linear Algebra 471-QuestionBank
  4. References to Category theory Category Theory For Babies Category Theory
  5. Tikz Testing Ground 471-ImagesTest
  6. 471-Homework

Coordinate Maps

Given a Vector Space V over F, and a basis $B = (v_1, \dots, v_n)$, a coordinate map is the following isomorphism:

$$[\cdot]_B : V \to F^n, \qquad v \mapsto [v]_B$$

Where $[v]_B = (a_1, \dots, a_n)$ is the unique tuple of scalars with $v = a_1 v_1 + \dots + a_n v_n$
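A minimal sketch of a coordinate map in coordinates, assuming numpy; the basis and vector below are made-up illustrations, not from the notes.

```python
import numpy as np

# Basis B = (b1, b2) of R^2; the columns of P are the basis vectors.
b1, b2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
P = np.column_stack([b1, b2])

v = 2 * b1 + 3 * b2             # a vector built from the basis
coords = np.linalg.solve(P, v)  # the coordinate map: solve a1*b1 + a2*b2 = v
print(coords)                   # -> [2. 3.], i.e. [v]_B = (2, 3)
```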

Week 1

Lecture 1 : Systems of Equations

Solution Space

Given a field F, and a system of m equations in n variables,

a solution space is the set of all n-tuples $(x_1, \dots, x_n) \in F^n$ that solve all of the equations. One of three cases holds:

  1. The system is not solvable: then there are no solutions to the system
  2. The system is solvable: then there are either 1 or infinitely many solutions (over an infinite field there cannot be any other finite number of solutions besides 1; think about lines intersecting)

Back Substitution

A method of solving a system of equations where the equations are shaped like a triangle, with no 0’s on the diagonals!

Example

We assume that all of the diagonal coefficients are nonzero, OR WE GET DIVISION BY ZERO! Starting from the last equation, solve for the last variable and substitute upward (see the sketch below).
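A minimal back-substitution sketch, assuming numpy; the triangular system is a made-up example, and `back_substitute` is a hypothetical helper, not anything from lecture.

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper-triangular U, from the last equation up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        if U[i, i] == 0:
            # the "no 0's on the diagonal" assumption from the notes
            raise ZeroDivisionError("zero pivot")
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitute(U, b))   # agrees with np.linalg.solve(U, b)
```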

Symbolic representation of systems of linear equations

We can represent any system in the following way : as a matrix equation $Ax = b$, with coefficient matrix $A \in M_{m \times n}(F)$, unknowns $x \in F^n$, and right-hand side $b \in F^m$

Lecture 2 : E.R.O and Equivalent Systems

Elementary Row Operations (ERO)

Operations that can be done on any system with the following properties

  1. Scalar Multiplication : multiply the ith equation by $a$, where $a \neq 0$
  2. Addition : replace the jth equation with (jth equation) $+\ a\,\cdot$ (ith equation), $i \neq j$
  3. Permutation : swap the ith and jth equations

Row equivalent

We call 2 systems row equivalent if one is obtained from the other by a sequence of elementary row operations

Theorem 1.1 Equivalent Systems of Equations

Two systems $A$ and $B$ are equivalent $\iff$ they have the same solution space

Theorem 1.2 : Row Equivalence $\implies$ Equivalent Systems

Proof idea: you must prove it for each individual operation

Corollary 1.3 : Homogeneous Row-Equivalent Systems. Let $A$ and $B$ be two row-equivalent matrices; then the homogeneous systems $Ax = 0$ and $Bx = 0$ have exactly the same solutions!

Example

Theorem 1.2 : Solutions to Equivalent Homogeneous Systems

Row-Reduced Echelon Form (RREF)

A system of linear equations is in RREF if it satisfies the following requirements

  1. The first non-zero entry in each row is equal to 1
  2. Each column that contains the leading non-zero entry of some row has all of its other entries equal to 0
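A hedged sketch of computing an RREF with sympy (`Matrix.rref` is sympy's built-in); the augmented matrix is a made-up example.

```python
from sympy import Matrix

A = Matrix([[1, 2, -1, 3],
            [2, 4,  0, 8],
            [0, 0,  1, 1]])
R, pivots = A.rref()   # returns (RREF matrix, indices of pivot columns)
print(R)               # each leading entry is 1, and its column is otherwise 0
print(pivots)          # (0, 2)
```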

Lecture 3 :

  • Added an example to the RREF section

Week 2

Lecture 4 :

affine space

Ring

Field

Week 8

Lecture 20

Theorem

Given a basis $B = (v_1, \dots, v_n)$ of $V$, a linear map $T : V \to W$

is the same thing as an n-tuple of vectors $(w_1, \dots, w_n)$ in $W$. Namely

  • Given $T$, define an n-tuple as : $w_i = T(v_i)$
  • Conversely, given $(w_1, \dots, w_n)$, define $T$ by declaring that $T(v_i) = w_i$
  • This allows us to uniquely define a linear transformation such that $T(a_1 v_1 + \dots + a_n v_n) = a_1 w_1 + \dots + a_n w_n$

To be even more explicit, we can give W a basis $(u_1, \dots, u_m)$ and expand each $w_i$ in it

Definition :

The matrix of $T$ relative to the ordered bases $B$ (of $V$) and $C$ (of $W$) is denoted $[T]^B_C$

Where the number of rows is $\dim W$ and the number of columns is $\dim V$

Note

The jth column is each basis vector's image written in coordinates : $[T(v_j)]_C$
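A small sketch of the definition, assuming numpy: the matrix of the derivative map $d/dx$ on polynomials of degree $\leq 2$, built column by column as $[T(v_j)]_C$; the basis $(1, x, x^2)$ is my illustrative choice.

```python
import numpy as np

def deriv_coords(j):
    # d/dx (x^j) = j * x^(j-1); return its coordinates in (1, x, x^2)
    c = np.zeros(3)
    if j > 0:
        c[j - 1] = j
    return c

T = np.column_stack([deriv_coords(j) for j in range(3)])
print(T)   # column j is [T(v_j)]_C:
# [[0. 1. 0.]
#  [0. 0. 2.]
#  [0. 0. 0.]]
```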

Lecture 21

  • Write somewhere
  • Let $V = F^n$ with the standard basis. Then we can express this in terms of coordinate vectors such that $[v] = v$
  • Here is a better example : Let $(1, x, \dots, x^n)$ be the basis of the polynomials of degree $\leq n$, hence we can express polynomials as coordinate vectors in $F^{n+1}$, which gives us bijections between vector spaces!

Lemma : Coordinate Vectors in Linear Transformations

If $T : V \to W$ is linear, then $[T(v)]_C = [T]^B_C\,[v]_B$

Remark : This is basically multiplying a matrix with a vector! Given a matrix $A \in M_{m \times n}(F)$ and a vector $x \in F^n$, we produce a vector $Ax \in F^m$, where the ith component is $\sum_j a_{ij} x_j$

Let's examine a matrix acting on a vector (see the sketch below):
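A sketch of the remark, assuming numpy; the matrix and vector are made up. The component formula and numpy's built-in product agree.

```python
import numpy as np

A = np.array([[1.0,  2.0, 0.0],
              [3.0, -1.0, 4.0]])   # a 2x3 matrix: a map F^3 -> F^2
x = np.array([1.0, 0.5, 2.0])

y = np.array([sum(A[i, j] * x[j] for j in range(3)) for i in range(2)])
print(y)       # i-th component is sum_j a_ij * x_j
print(A @ x)   # numpy agrees
```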

The Motivation? MATRIX MULTIPLICATION

Theorem : Matrix Multiplication

Let $U, V, W$ be vector spaces with bases $A, B, C$, and let $S : U \to V$ and $T : V \to W$ be linear maps, then $[T \circ S]^A_C = [T]^B_C\,[S]^A_B$

If we define the product of matrices by $(MN)_{ik} = \sum_j m_{ij} n_{jk}$, then composition of linear maps is exactly matrix multiplication

Digression : The Algebra of Linear Transformations

Groups, Rings, Fields, Vector Spaces

(Prop) Fun is a F.V.S. (The Soul Stone of Linear Algebra!)

Consider a set $S$ and a F.V.S. $W$, then the set of all possible functions from $S$ to $W$ (denoted $\mathrm{Fun}(S, W)$, or $W^S$ in set theory) is a F.V.S

Why isn't $\mathrm{Fun}(S, T)$ an F.V.S. when the target $T$ is a mere set? Because adding and scaling functions uses the vector space structure of the target

Why is this so powerful

Think about Endgame: the soul stone is able to take any soul and put it into any other soul. In this case, we can take the soul of the set and transform it into an FVS (given that it's being mapped into an FVS)! This powerful tool is the backbone of why matrices work!

Homomorphisms and using the Soul Stone

  • While the soul stone is quite unstable, we can make it more stable through homomorphisms!

Homomorphism :

A homomorphism is a function in mathematics that preserves the structure between two algebraic structures. In our case, if we want to go between two FVS, say V and W, we want to define a vector space homomorphism over the same field to preserve the vector space properties!

Suppose $S$ was itself an FVS, we will call it V; what do we get out of it?

  1. From the prop, we can easily see that $\mathrm{Fun}(V, W)$ is a vector space
  2. Some of these mappings are also vector space homomorphisms over the same field F

(prop) Linearity is a Vector Space Homomorphism

For any vector spaces $V, W$ over the same field $F$, a function $T$ such that $T : V \to W$ is a vector space homomorphism (same thing as saying that $T$ is linear) if for any $u, v \in V$ and $c \in F$ : $T(u + v) = T(u) + T(v)$ and $T(cv) = c\,T(v)$

The space of all linear maps

(prop) $\mathcal{L}(V, W)$ is a Vector Space

Let's consider the subset $\mathcal{L}(V, W) \subseteq \mathrm{Fun}(V, W)$ of the functions that are linear. Then $\mathcal{L}(V, W)$ is a FVS

Function Compositions over Vector Spaces

Consider 3 vector spaces $U, V, W$ and 2 functions $F : U \to V$ and $G : V \to W$. We can apply the same composition rules to vector spaces, as they can be composed as $G \circ F : U \to W$

This naturally gives us the following map : $\circ : \mathrm{Fun}(V, W) \times \mathrm{Fun}(U, V) \to \mathrm{Fun}(U, W)$

Which can be restricted even further to : $\circ : \mathcal{L}(V, W) \times \mathcal{L}(U, V) \to \mathcal{L}(U, W)$

[!note] Linearity and Bilinearity

Compositions of Linear Maps are linear!

Remark : Compare this bilinearity with rings: let $R$ be a ring with two binary operations; then the distributive laws mean that multiplication is bilinear

There is some connection between the two!

Week 9

Lecture 22

  1. Definition of Bilinearity
  2. Proved the bilinearity of composition
  3. Showed bilinearity in rings

Associativity of composition

Let $A, B, C, D$ be sets, and let's define the following functions $f : A \to B$, $g : B \to C$, $h : C \to D$, then : $(h \circ g) \circ f = h \circ (g \circ f)$

Remark : $f \circ g$ might not make sense even when $g \circ f$ does; and when both make sense, note that in general $f \circ g \neq g \circ f$

Arbitrary compositions of functions do not form a ring, since given any two functions the composite need not even be defined

The Algebra of Endomorphisms!

We denote the set of endomorphisms of $V$ by $\mathrm{End}(V)$, so $\mathrm{End}(V) = \mathcal{L}(V, V)$

$\mathrm{End}(V)$ forms a Ring under $+$ and $\circ$

We can define addition and multiplication in the following ways: $(S + T)(v) = S(v) + T(v)$ and $S \cdot T = S \circ T$. We can define two special elements as well: the zero map $\mathbf{0}$ and the identity $\mathrm{Id}$ (I will note that the zero function is often denoted by the zero element in the ring, but I want to introduce this new notation for even more clarity; it follows a similar style to the Identity). Collecting all of these items, we can form $\mathrm{End}(V)$ into a ring! In particular, we can see that $\circ$ distributes over $+$. Take note that the multiplication is not commutative, as $S \circ T \neq T \circ S$ for some particular $S$ and $T$ (see the sketch below)
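A sketch of the ring structure, assuming numpy and identifying $\mathrm{End}(F^2)$ with 2x2 matrices; S and T are made-up endomorphisms.

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [0.0, 0.0]])
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Id = np.eye(2)   # the identity endomorphism

# composition distributes over addition:
print(np.allclose(S @ (T + Id), S @ T + S @ Id))   # True
# but multiplication (composition) is NOT commutative:
print(np.array_equal(S @ T, T @ S))                # False
```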


$(2, 5, -3)$ as coordinates: in the standard basis, $2 + 5x - 3x^2$; in the basis $\{1+x,\ 1+x+x^2,\ 1+x+x^4\}$, $2(1+x) + 5(1+x+x^2) - 3(1+x+x^4) = 4 + 4x + 5x^2 - 3x^4$

sin((1, 0)) scratch work:

$$\sin(-x) = -\sin(x), \qquad \cos(-x) = \cos(x)$$
$$\cos(x - \tfrac{\pi}{2}) = \sin(x), \qquad \sin(x - \tfrac{\pi}{2}) = -\cos(x)$$
$$\sin(x + \tfrac{\pi}{2}) = \cos(x), \qquad \cos(x + \tfrac{\pi}{2}) = -\sin(x)$$
$$\cos(-2\alpha + \tfrac{\pi}{2}) = \sin(2\alpha), \qquad \sin(-2\alpha + \tfrac{\pi}{2}) = \cos(2\alpha)$$
$$\text{goal} = (\sin(2x), -\cos(2x))$$

It's a clockwise rotation by $(-\tfrac{\pi}{2} + 2\alpha)$

$\beta = \tfrac{\pi}{2} - \alpha$

If $\alpha$ is $30°$, then $\beta$ is $60°$, so the rotation would end up being $-30°$:

$$-30 = -90 + 2 \cdot 30$$

Lecture 23

Matrix Multiplication

Matrix Multiplication Operation : Let $A$ be an $m \times n$ matrix and $B$ be an $n \times p$ matrix, then $AB$ is the $m \times p$ matrix where $(AB)_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$

Matrix Multiplication Representation in terms of Linear Transformations : Suppose we have 3 vector spaces $U, V, W$ over a field F, with bases $A, B, C$ respectively, with 2 linear transformations $S : U \to V$ and $T : V \to W$, then $[T \circ S]^A_C = [T]^B_C\,[S]^A_B$
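A sketch checking the index formula $(AB)_{ik} = \sum_j a_{ij} b_{jk}$ against numpy's `@`; the matrices are made up.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])       # m x n = 3 x 2
B = np.array([[1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])  # n x p = 2 x 3

m, n = A.shape
_, p = B.shape
C = np.zeros((m, p))
for i in range(m):
    for k in range(p):
        C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))

print(np.allclose(C, A @ B))   # True: the definition matches numpy
```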

Commutative Diagrams

Lecture 24

Change Of Coordinates Formula

Suppose V is a Vector Space over F and we are given two bases $B$ and $B'$

Given any vector $v \in V$, there naturally exist two coordinate vectors $[v]_B$ and $[v]_{B'}$

These two coordinate vectors give natural rise to the matrix of the identity transformation. Combining our previous ideas, $[v]_{B'} = [\mathrm{Id}]^{B}_{B'}\,[v]_B$

(Remember that $\mathrm{Id}(v) = v$!)

Commutative Diagram

Remark : you find the coordinate vector $[v]_B$ as the solution to : $a_1 v_1 + \dots + a_n v_n = v$

This is an easy computation, as it's all a system of linear equations

Change of a Matrix of a linear map

Suppose $T : V \to W$, and let $B, C$ be two bases (of $V$ and $W$). From the previous theorem, we get $[T]^B_C$, characterized by $[T(v)]_C = [T]^B_C\,[v]_B$. Now choose another pair of bases, $B', C'$, which will give rise to : $[T]^{B'}_{C'}$

How are $[T]^B_C$ and $[T]^{B'}_{C'}$ related?

Commutative diagram : $[T]^{B'}_{C'} = [\mathrm{Id}_W]^{C}_{C'}\,[T]^B_C\,[\mathrm{Id}_V]^{B'}_{B}$. This is what we get in general!

Insert second comm diagram

Change of basis Matrix

Let V be a vector space. If a set of vectors $(v'_1, \dots, v'_n)$ exists such that it makes a basis $B'$, then the matrix $P = [\mathrm{Id}]^{B'}_{B}$ is called the change of basis matrix. This is equivalent to saying: if $B'$ is a basis, then $v'_j = \sum_i p_{ij} v_i$. Therefore, in order to change bases, we need an n-tuple of vectors, meaning we need an $n \times n$ matrix

Recall the Fact : the change of basis matrix is invertible

How do we find $P^{-1} = [\mathrm{Id}]^{B}_{B'}$?

Suppose we are given an old basis $B = (v_1, \dots, v_n)$ and a new basis $B' = (v'_1, \dots, v'_n)$ such that $v'_j = \sum_i p_{ij} v_i$, hence $P = (p_{ij})$. Find $[v]_{B'}$ either for some particular $v$ or for all $v$

Solution : if $v$ is fixed, then its coordinates are a solution to $x_1 v'_1 + \dots + x_n v'_n = v$, which means that $P\,[v]_{B'} = [v]_B$. You can notice that we get the following solution: $[v]_{B'} = P^{-1}[v]_B$. Now suppose that $v$ is arbitrary; then the coordinates can't be specified in advance. There are two ways to tackle this problem!

  1. This is where you need $P^{-1}$. Indeed, $[v]_{B'} = P^{-1}[v]_B$. In order to find $P^{-1}$, we just solve the n systems $Px = e_j$, where 1 is located in the jth component. Hence, we just solve an augmented matrix $$\left[\begin{array}{c c c|c c c} p_{11} & \dots & p_{1n} & 1 & & 0\\ & \ddots & & & \ddots & \\ p_{n1} & & p_{nn} & 0 & & 1 \end{array}\right]$$ Solving will give us : $$\left[\begin{array}{c c c|c c c} 1 & & 0 & & & \\ & \ddots & & & * & \\ 0 & & 1 & & & \end{array}\right]$$ Where the $*$ is $P^{-1} = [Id]^B_{B'}$ (see the sketch below)
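A sketch of the change of coordinates, assuming numpy; the new basis (the columns of P) is a made-up example. Row-reducing $[P\,|\,I]$ and `np.linalg.inv` give the same $P^{-1}$ in exact arithmetic.

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # columns: the new basis in old coordinates
P_inv = np.linalg.inv(P)       # numerically, this is [Id]^B_{B'}

v_B = np.array([5.0, 3.0])     # [v]_B
print(P_inv @ v_B)             # [v]_{B'} -> [2. 3.]
print(np.linalg.solve(P, v_B)) # same answer, solving P x = v directly
```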

Week 10

God damn this theory is fucked and all over the place

  • Prove that has a basis of n elements where half of them are left inverses of the dual of V and half of them are the right inverses of the dual V
  • maybe show that it is a direct product?

Lecture 25 : The Dual Vector Space or Matrix Transposition

Matrix Transpose

Given $A \in M_{m \times n}(F)$, we define its transpose $A^\top \in M_{n \times m}(F)$ by $(A^\top)_{ij} = a_{ji}$

Example

$$\Cmatrix{}{1&2&3\\4&5&6}^\top = \Cmatrix{}{1&4\\2&5\\3&6}$$

What is the meaning of $A^\top$?

Linear Functionals and Dual Spaces

Let $V$ be a vector space over $F$, then its dual is defined to be $V^\vee = \mathcal{L}(V, F)$

We call elements in $V^\vee$ linear functionals, as each is a function $\varphi : V \to F$. Note : dual spaces are also denoted as $V^*$, but I don't like that notation. Also the star is often reserved for the adjoint!

(Thm) Dimension of $V^\vee$ : if $\dim V = n$, then $\dim V^\vee = n$

THE BEAUTY OF THE DUAL

Given any linear transformation $T : V \to W$ and the respective duals $V^\vee, W^\vee$, the dual linear transformation $T^\vee : W^\vee \to V^\vee$ is the composition pictured below!

(Lem) Matrices, Transpose, and Duals Oh My

Given a linear map, , it defines the map such that

  • Note : Composition of functions is a bi-linear operation : $F$ is linear and $G$ is linear, therefore $G \circ F$ is linear

What is the matrix of $T^\vee$?

Suppose we have fixed bases $B$ of $V$ and $C$ of $W$, hence $[T]^B_C$; then take the dual bases $B^\vee$ and $C^\vee$. Recall that $v_i^\vee(v_j) = \delta_{ij}$. We obtain that $[T^\vee]^{C^\vee}_{B^\vee} = \left([T]^B_C\right)^\top$, hence these matrices are just transposes of each other!

(Lem) Linear maps and their transpose

If $A = [T]^B_C$, then $[T^\vee]^{C^\vee}_{B^\vee} = A^\top$

Proof : By the definition of a dual basis, $T^\vee(w_j^\vee)(v_i) = w_j^\vee(T(v_i))$. Hence the $(j, i)$ entry of $[T^\vee]$ equals the $(i, j)$ entry of $[T]$

By definition

Lecture 26: Matrix Transpose Properties

(Prop) $(AB)^\top = B^\top A^\top$

Why the swap? Composition $S \circ T$ gives rise to $(S \circ T)^\vee = T^\vee \circ S^\vee$. They are the same map! So $(AB)^\top = B^\top A^\top$

Proof

The structure of a linear map

(Rank-Nullity Theorem)

Goal : Given a linear map $T : V \to W$, try to understand what it really does

19th century Interpretation

Given a linear map with matrix $A$, let $P, Q$ be the change of bases matrices w.r.t. $A$; then what is $Q^{-1} A P$?

Question:

  • Find matrices P, Q such that $Q^{-1} A P$ is as nice as possible

(Thm) Change of Basis Matrix

Given finite dimensional vector spaces $V, W$ and a transformation $T : V \to W$, then there exist bases $B, C$ such that $[T]^B_C = \Cmatrix{}{I_r & 0\\ 0 & 0}$, where $r = \operatorname{rank}(T)$

  • Note : These bases are not unique, but any pair of such bases give the same matrix!

Week 12

  • Question: Why does he order the matrix in the determinant differently? I am used to the notation of the matrix being ?
  • FIX THIS HORRENDOUS SHIT NOTATION WHEN ADDING MORE VECTOR SPACES
  • “Mathematics is about thinking about theory, not so much about memorizing computation”
  • “Sometimes mistakes happen, and saying sorry is okay. But sometimes it's about how you fix the problem” - Pizza Guy
  • Finish the proof for the direct sums

Lecture 31 : 4/1 : Bi-Linear Functionals!

Motivating example: let's define the matrix $$A = \Cmatrix{}{a_{11} & a_{12} \\ a_{21} & a_{22}} \in M_{2 \times 2}(F)$$ Define $\det(A) = a_{11}a_{22} - a_{12}a_{21}$. This definition is a function $M_{2 \times 2}(F) \to F$! Note: This function is a polynomial in 4 variables!

Is this function Linear? NO: IT IS QUADRATIC. But it can be linear: suppose we fix the variables of one row; then, as a function of the other row, it can be considered linear. This is what is known as a bi-linear function!

End Goal : Construction of the determinant of the matrix!

Trick: Realize that the determinant, up to proportionality, is just the n-linear anti-symmetric function that maps $(F^n)^n \to F$

Bilinear Functions!

https://kconrad.math.uconn.edu/blurbs/linmultialg/bilinearform.pdf This whole approach is just taking another idea we already discovered! Given vector spaces $V, W$ with bases, a linear map $V \to W$ is the same as an n-tuple of vectors $(w_1, \dots, w_n)$ such that $T(v_i) = w_i$. Another example: $V$ has basis $(v_1, \dots, v_n)$ with dual $(v_1^\vee, \dots, v_n^\vee)$, $v_i^\vee(v_j) = \delta_{ij}$. WE COME BACK TO THE DELTA FUNCTIONS!

AN even further generalization

Bilinear Functionals

Let $V_1, V_2$ be vector spaces, and define a map $B : V_1 \times V_2 \to F$. We say $B$ is bilinear if it is linear with respect to each argument when the other is fixed. If $B$ is bilinear, then we can express it as one summation : $B\left(\sum_i a_i u_i, \sum_j b_j w_j\right) = \sum_{i,j} a_i b_j\,B(u_i, w_j)$. One takeaway: recall how distribution works, so the order of i and j can be different! (See the sketch below.)
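A sketch, assuming numpy: a bilinear functional on $F^2 \times F^2$ of the form $B(u, v) = u^\top A v$ (the matrix $A$ is made up), with linearity in each slot checked numerically.

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [0.0, -1.0]])

def B(u, v):
    return u @ A @ v   # u^T A v

u, u2, v = np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([3.0, 1.0])
c = 5.0

print(np.isclose(B(u + c * u2, v), B(u, v) + c * B(u2, v)))  # linear in slot 1
print(np.isclose(B(v, u + c * u2), B(v, u) + c * B(v, u2)))  # linear in slot 2
```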

Distribution on Rings : if $R$ is a ring, then the two distribution laws show that multiplication $R \times R \to R$ is bilinear!

A note on certain linear functionals:

The functionals we are going to examine in this next section will involve vector spaces of the same dimension n, so we will compress the notation even further below :

The set of all Bilinear Functionals (other sources may use different notation)

  1. The set of bilinear functionals is a FVS (Check lecture 18) : examine that it is a subspace of a function space, then done!

What is the basis of the space of bilinear functionals?

We know the basis of $V^\vee$ to be the dual basis. Let's denote the bases and dual bases as follows : $(v_{11}, \dots, v_{1n})$ for $V_1$ and $(v_{21}, \dots, v_{2n})$ for $V_2$, with duals $v_{1i}^{\vee}$ and $v_{2j}^{\vee}$. We can then determine a pair of bi-linear maps by the following. This function is quadratic!, namely : writing $v_1 = \sum_i a_{1i} v_{1i}$ and $v_2 = \sum_j b_{2j} v_{2j}$,

$$B(v_1, v_2) = \sum_{i, j} a_{1i} b_{2j}\, B(v_{1i}, v_{2j})$$

and evaluating $\sum_{i,j} B(v_{1i}, v_{2j})\, v^{\vee}_{1i} \cdot v^{\vee}_{2j}$ at $(v_{1k}, v_{2l})$ gives

$$\sum_{i,j} B(v_{1i}, v_{2j})\overbrace{v^{\vee}_{1i}(v_{1k})}^{\delta_{ik}}\overbrace{v^{\vee}_{2j}(v_{2l})}^{\delta_{jl}} = B(v_{1k}, v_{2l})$$

  2. The monomial : $v_{1i}^{\vee}(v_1)\, v_{2j}^{\vee}(v_2)$ is bilinear

  1. fix $v_2$, then it is linear in $v_1$. Or fix $v_1$, then it is linear in $v_2$
  2. Therefore, $\{v_{1i}^{\vee} \cdot v_{2j}^{\vee}\}$ spans the space of bilinear functionals

Therefore, the basis of the space of bilinear functionals consists of elements of the form : $v_{1i}^{\vee} \cdot v_{2j}^{\vee}$ for all combinations of i and j

We know that hence we can define the usual

Consider the following Vector Space :

Cartesian Product of Vector Spaces

  • Examples and proof provided by Axler's book

Example Note the following:

Consider the following vector spaces

(prop)

Proof : denote the basis of vector space $V_i$ by $B_i$, where it contains $n_i$ vectors. Let's also denote $0_i$ to be the additive identity of vector space $V_i$. Claim : the basis of $V_1 \times V_2$ can be split up based on the dimension of each vector space. ex) cartesian product of 2 vector spaces: $\{(v, 0_2) : v \in B_1\} \cup \{(0_1, w) : w \in B_2\}$

Since it's a subspace, and by prop…471-addbacklink, it is also a vector space!

Now, for every vector space $V_i$, construct vector spaces of the same size inside the product by using the following cartesian product : $\tilde{V}_i = \{0_1\} \times \dots \times V_i \times \dots \times \{0_k\}$, a vector space called $\tilde{V}_i$. Claim: $\dim(V_1 \times \dots \times V_k) = \sum_i \dim V_i$

  1. Trivial
  2. Trivial

Therefore, the claim follows

Fact

In an additive category, such as abelian groups and vector spaces over a field F, the cartesian product and the direct-sum are the same thing! https://math.stackexchange.com/questions/39895/the-direct-sum-oplus-versus-the-cartesian-product-times https://en.wikipedia.org/wiki/Biproduct https://en.wikipedia.org/wiki/Additive_category Using this fact, we will use the direct sum to denote cartesian products, just because it looks cleaner:

Clearing up the confusion

  • The latter is just better notation
  • The former is all linear maps, the latter is all bi-linear maps!

Tensor Product

Notation and Definition dump

We say something is linear when it has one argument and is linear in it. We say something is bi-linear when it has two arguments and is linear in each. We say something is tri-linear when it has three. We say something is multi-linear/poly-linear/N-linear if it has N arguments and is linear in each one separately

BEST SLOGAN : “Tensor products of vector spaces are to Cartesian products of sets as direct sums of vector spaces are to disjoint unions of sets.”

Goated Sources:

  1. https://www.math.brown.edu/reschwar/M153/tensor.pdf
  2. https://people.math.harvard.edu/~elkies/M55a.10/tensor.pdf
  3. https://www.cin.ufpe.br/~jrsl/Books/Linear%20Algebra%20Done%20Right%20-%20Sheldon%20Axler.pdf

BIG BOY TIME

Lecture 32: 4/5 :Multi-Linear Functionals

The connection between group theory and Linear Algebra!

In order to truly appreciate the beauty of multi-linear functionals, we must explore group theory, and it is the study of group theory which introduces the building blocks of symmetry! Review : 410 Branch

  1. Group
  2. Group Homomorphism
  3. Symmetric Group
  4. Sign Function : $\operatorname{sgn} : S_n \to \{\pm 1\}$ (see the sketch after this list)
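A small sketch of the sign function, computed by counting inversions (one standard definition); `sgn` is my hypothetical helper name.

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation given as a tuple: i -> perm[i]."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

for p in permutations(range(3)):
    print(p, sgn(p))   # identity -> +1, single swaps -> -1, 3-cycles -> +1
```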

Fancy Bilinear Transformation Definition

Recall : using group theory, we can more concisely write this span in the following way!

MultiLinear Functional

Which describes all n-linear functions from $V \times \dots \times V$ to $F$. Let's look at $V = F^n$; the basis for $V$ will be denoted as $(v_1, \dots, v_n)$. The dual basis : $(v_1^{\vee}, \dots, v_n^{\vee})$

By definition, an N-linear functional is the same thing as a choice of values on all N-tuples of basis vectors

The det operation is Anti-symmetric

Anti-symmetry: a function is antisymmetric if swapping two of its arguments flips the sign. The determinant is anti-symmetric! What we basically did was swap two rows (see the sketch below)
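A sketch, assuming numpy: the Leibniz formula $\det(A) = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_i a_{i\sigma(i)}$ builds the determinant from permutations, and swapping two rows flips the sign.

```python
import numpy as np
from itertools import permutations

def sgn(perm):
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    n = A.shape[0]
    return sum(sgn(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(leibniz_det(A), np.linalg.det(A))   # 5.0 and ~5.0
print(leibniz_det(A[[1, 0]]))             # rows swapped: -5.0 (anti-symmetry)
```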

Concrete Example :

Week 13

Lecture 33: 4/8 : Decomposition of Bi-Linear Maps and Intro to K-Linear maps

(Lem)

Suppose we consider two linear maps $\Lambda, \Lambda'$ on bilinear maps such that $$(\Lambda B) (v_1, v_2) = \frac{1}{2}(B(v_1, v_2) + B(v_2, v_1))$$ $$(\Lambda'B) (v_1, v_2) = \frac{1}{2}(B(v_1, v_2) - B(v_2, v_1))$$ These are known as Symmetrizers; note that $B = \Lambda B + \Lambda' B$ splits $B$ into symmetric and anti-symmetric parts (see the sketch below)
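A sketch of the symmetrizers, assuming numpy and representing a bilinear form by its matrix: $\Lambda$ becomes $(A + A^\top)/2$ and $\Lambda'$ becomes $(A - A^\top)/2$; the matrix is made up.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

sym = (A + A.T) / 2        # matrix of the symmetric part ΛB
antisym = (A - A.T) / 2    # matrix of the anti-symmetric part Λ'B

print(np.allclose(sym, sym.T))           # True: symmetric
print(np.allclose(antisym, -antisym.T))  # True: anti-symmetric
print(np.allclose(A, sym + antisym))     # True: B = ΛB + Λ'B
```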

Generalization : K-linear maps

Suppose we consider the following vector space :

  • $V^{\times k} = V \times \dots \times V$,

    • (Note, these are the same vector space, but with different bases)
  • Bases :

  • We will use this shorthand

Special Maps

Symmetric Maps : $B$ is symmetric if:

Let $\sigma \in S_k$, then $B(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = B(v_1, \dots, v_k)$

Anti-Symmetric Maps (or Skew-Symmetric) : $B$ is Anti-Symmetric if:

Let $\sigma \in S_k$, then $B(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = \operatorname{sgn}(\sigma)\,B(v_1, \dots, v_k)$

Alternating maps : $B$ is Alternating if:

Let $v_i = v_j$ for some $i \neq j$, then $B(v_1, \dots, v_k) = 0$

Lecture 34 : 4/10 Groups acting on Vector Spaces

More Defintions

Why do we need groups?

Because we can use groups to act on sets, which is helpful in counting the number of elements in a basis! A group's purpose in life is to act on things

Useful Properties :

  1. $S_k$ acts on the k-linear functionals. ex) Let $\sigma \in S_k$ and $(\sigma \cdot F)(v_1, \dots, v_k) = F(v_{\sigma^{-1}(1)}, \dots, v_{\sigma^{-1}(k)})$

Prove this defines a group action

A try :

F is antisymmetric if $\sigma \cdot F = \operatorname{sgn}(\sigma)\,F$ for every $\sigma$

  • Problem! Too many permutations!

Another try!

More problematic than it appears

look :

!! It works out!!!

“swap of changing letters can be done in different ways!”

Perform different out of a simple permutation

Perm = the parity of the number of the same is the same

Perm = different computations of a simple permutation

Consider a such that

Now suppose we consider acting on this using

remember one important fact about the symmetric group, their inverses!

In order for our equations to match up, we need to consider the following :

in order for the two to be equal, we must actually permute the indices of the functionals

Now define the antisymmetric group (or alternating, however this terminology is outdated)

number of choices : $\binom{n}{k}$ (n choose k)

dim $= \binom{n}{k}$


Question :

Suppose we examine a finite field of 2 elements, $\mathbb{F}_2$, and we form a vector space of dimension 2 : $V = \mathbb{F}_2^2$

Examine the following :

Basis:

Lecture : 4/12

  1. Fix :
  2. =

because it is symmetric

k=2

Each

is symmetric!

proof :

Want :

NOTES : remember, $S_2$ is an abelian group as it is isomorphic to $\mathbb{Z}/2\mathbb{Z}$

Therefore, the spanning set is L.I., therefore it forms a basis!

Question : What are the dimensions?

Span =

combinations with repetitions: choosing k out of dim V = n gives $\binom{n + k - 1}{k}$

Segue into the Determinant!

Focus on

Let is determined up to a particular scalar is determined by

is that functional whose value , where is the famous epsilon function

Conventions :

look at the formulas on the right!

$v$ is a vector, $v^\vee$ is a functional; $v_1$ denotes the 1st basis vector in V, $v_1^\vee$ denotes the first linear functional of V. This looks exactly like the dot product in a simple way :

Week 14

Lec 36: 4/15

Determinant Properties!

$\det : M_{n \times n}(F) \to F$ :

Properties :

  1. $\det(A) \neq 0 \iff A$ is Invertible

Cancellation Law of Group Theory!

Fact : $\det(AB) = \det(A)\,\det(B)$

E.R.O. vs. det(A)

$\det = 0$ if two rows are the same, in most fields; from anti-symmetry alone this needs characteristic $\neq 2$

Properties :

  1. Row swap : multiplies the determinant by $-1$ (see the sketch below)
    1. Why : anti-symmetry of det in the rows
    2. This expression is nonzero
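A numerical sketch of how each E.R.O. changes the determinant, assuming numpy; the matrix is made up (and we are over $\mathbb{R}$, so no characteristic-2 caveats).

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
d = np.linalg.det(A)

swap = A[[1, 0, 2]]                  # swap two rows
scale = A.copy(); scale[0] *= 5.0    # multiply a row by 5
add = A.copy(); add[1] += 3 * A[0]   # add a multiple of another row

print(np.isclose(np.linalg.det(swap), -d))      # swap flips the sign
print(np.isclose(np.linalg.det(scale), 5 * d))  # scaling scales det
print(np.isclose(np.linalg.det(add), d))        # addition preserves det
```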

upper right triangle :

(4 2 1 1) > (3 1 1 1)

Determinants as expansions by minors of a row/a column

Recall, it's linear with respect to each column!

A linear function :

or

question : What is the coefficient?!

what are these exclamation marks?

Adjoint of a matrix :

4 equations

Property!

  1. $A^{-1}$ exists $\iff \det(A) \neq 0$
  2. If $\det(A) = 0$, then A is a 0 divisor!

Consider or a field on 2 elements

number o functions : unique functions

Claim!

In all characteristics, In not characteristic 2, In Characteristic 2, In characteristic not 2,

Question 1) What is the total number of linear functions from

  1. , hence the dimension of all linear functions from is 2
  2. Claim 1: In characteristic 2, every symmetric matrix is antisymmetric

Number of symmetric : Only in characteristic not 2

Case 1 k = 1:

case 2 : k = 2:

case 3 : k = 3?

In characteristic 2 :


for bilinear maps, consider the following :

functionals on their own are not bi-linear, so we have to multiply!

consider the following for a V.S. of dimension 3

A functional on its own is not bilinear, and a sum of two functionals is not bilinear either; it just doesn't make sense! So, the basis consists of 9 elements, choosing 1 from the left and 1 from the right

Question : what linear combination of

Observation: Consider a vector v with elements , where each element is non-zero. then

But notice something : , so in order to find its counterpart , we should also consider that is also a problem as

Lemma 1) : Proof : Therefore, we can construct this monomial such that

Note: If , then our linear combination is not linearly independent as , therefore it will always be 0. Conclusion: we just permute both i and j! We need a cycle that must be of length 2. Let's consider $S_3$ (as our vector space is 3 dimensional), and let , if we consider all options of , we get the following :

e : (12) : (23) : (13) : (123) : (132) :

From this example, there is only one permutation which grants us the element : (funnily enough, it's the ) Lemma 2)

The big thing we must consider is that while the vector spaces have the same dimension, they might have different bases and thus different functionals!

Hence, we have witnessed that ! It's just a permutation of the indices!

Now consider the following :

IT'S JUST $S_n$ acting on a set of n numbers; in our case, it's $S_3$ acting on $\{1, 2, 3\}$

acting on

https://math.stackexchange.com/questions/242812/s-n-acting-transitively-on-1-2-dots-n

acting on =


acting on is

acting on is

acting on should have


Naive Combinatorics

Consider a vector space of dimension n over a field F. Let $\mathcal{L}^k$ denote all k-linear maps from $V^k$ to $F$. Prove the dimension of the space of all symmetric k-linear maps is $\binom{n+k-1}{k}$ (see the sketch below)
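A sketch of the count: a symmetric k-linear map is determined by its values on multisets of basis vectors, and there are $\binom{n+k-1}{k}$ of those (combinations with repetition). Cross-checked by brute force, assuming Python's standard library.

```python
from math import comb
from itertools import combinations_with_replacement

n, k = 5, 3
print(comb(n + k - 1, k))                                     # 35
print(len(list(combinations_with_replacement(range(n), k))))  # 35 again
```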

Known Facts :

Lec 38: 4/18

The Jordan Normal Form of a Linear Map (Ch 6, 7)

Problem : Given a space V,

Goal : Classify Linear maps

Produce a small (Finite) list of maps such that any is equivalent to exactly 1 on the list

ex) We’ve seen an example

, such that

Classification of such that

such that must T such that Want to quit , any P, Q must

Making the problem precise :

Problem: Classify matrices up to similarity, i.e. $A$ is equivalent to $B$ if $B = PAP^{-1}$ for some invertible $P$

Ex)

If allowed, is equivalent to , where

Is that true

In general :

When

Now, if it's a square matrix :

recall :

The fundamental result actually uses a neutered version of the Jordan Normal Form

Hard problem : Simplification

Given $T$ : Q) What does it do? Idea : Find $P$, such that $P^{-1}TP$ is “Good”

Amazing Idea : “Good” means diagonal, such that $P^{-1}TP$ is a diagonal matrix with a bunch of $c_i$ in between. Degrees of freedom are decoupled

Q) Is every matrix diagonalizable? I.e. does there exist, for any T, a P such that $P^{-1}TP$ is diagonal?

Example : Basically symmetric matrix!

What does it mean?

Eigen-Vector

Given $T : V \to V$, $v \in V$ is called an eigenvector of eigenvalue $c$ if $T(v) = cv$ and $v \neq 0$

T is diagonalizable iff there exists a basis of eigenvectors

Q) Is there at least 1 eigenvector?

  • Yes $\iff (T - c\,\mathrm{Id})v = 0$ has a non-zero solution for some $c$

eigenvalue = characteristic value, eigenvector = characteristic vector

Characteristic Polynomial

The characteristic polynomial of $T$ is $\chi_T(x) = \det(x\,\mathrm{Id} - T)$

WE WANT EIGENVALUES, so from now on, $F$ is algebraically closed (see the sketch below)
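A sketch with sympy (exact arithmetic) on a made-up matrix: the characteristic polynomial detects the eigenvalues, and `eigenvects` shows when there are too few eigenvectors to diagonalize.

```python
from sympy import Matrix, symbols

x = symbols('x')
T = Matrix([[2, 1],
            [0, 2]])

print(T.charpoly(x))    # x**2 - 4*x + 4, i.e. (x - 2)^2
print(T.eigenvects())   # eigenvalue 2 has only ONE independent eigenvector,
                        # so this T is not diagonalizable
```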

Let V be a vector space over a field F of dimension n. Find the dimension of the space of all k-linear maps : it is $n^k$

Suppose n = 5, k = 3 : then $5^3 = 125$

BUT

Lec 39 : 4/24 : Jordan Normal Form of a Matrix

Recall : over F

diagonalizable

Experimental Material :

Not every linear map is diagonalizable :

  1. $$T : \Cmatrix{}{c & 1 & 0 & \dots \\ 0 & c & 1 & \dots \\ \vdots \\ 0 & \dots & c & 1 \\ 0 & \dots & 0 & c}$$

Ways to solve :

  1. Note :

$$\Cmatrix{}{c & 1 & 0 & \dots \\ 0 & c & 1 & \dots \\ \vdots \\ 0 & \dots & c & 1 \\ 0 & \dots & 0 & c} \iff \text{there is a chain}$$

but

Also :

  1. Doing a concrete example

$$\Cmatrix{}{2 & 1 & -1\\ 2 & 2 & -1\\ 2 & 2 & 0} : \mathbb{C}^3 \to \mathbb{C}^3$$

From this, it follows that it's not diagonalizable

eigenvectors :

$$[T]_{\{\alpha_1, \alpha_2, \alpha_3\}} = \Cmatrix{}{1 & 0 & 0\\ 0 & 2 & 1\\ 0 & 0 & 2}$$

Fact : $(T - 2\,\mathrm{Id})^2$ kills the root space for 2; indeed $$\Cmatrix{}{0 & 1\\ 0 & 0}^2 = \Cmatrix{}{0 & 0\\ 0 & 0}$$

B: The result :

(Def) Root space

The kernel : $\ker\left((T - c_j\,\mathrm{Id})^N\right)$, for $N$ large (e.g. $N = \dim V$), is called the root space. Note :

  1. If the root space equals the eigenspace for every eigenvalue, then it means that it is diagonalizable!
  2. The eigenspace $\ker(T - c_j\,\mathrm{Id})$ is contained in it, so we can also see that it is nonzero for an eigenvalue
  3. It is a subspace!

(Lem) The root space is T-Invariant, i.e. $T(\text{root space}) \subseteq \text{root space}$

Proof

(Thm)

  1. For the restriction of $T$ to the root space of $c_j$, for each j, there is a basis of the root space such that the matrix is a bunch of blocks : add photo, each block being of the form… (add more photos)
  2. The collection of sizes of blocks, up to order, is independent of the basis!

Question : Are the sizes of the blocks dependent on the dimension of the root space?

Jordan Cell !

$$\Cmatrix{}{c_j & 1 & 0 & \dots \\ 0 & c_j & 1 & \dots \\ \vdots \\ 0 & \dots & c_j & 1 \\ 0 & \dots & 0 & c_j}$$
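A sketch of computing a Jordan normal form with sympy; the matrix below is a made-up example chosen to have a clean answer (it is not the lecture's matrix).

```python
from sympy import Matrix

T = Matrix([[3, 1, 0],
            [-1, 1, 0],
            [0, 0, 5]])
P, J = T.jordan_form()   # T = P * J * P**(-1)
print(J)                 # a 2x2 Jordan cell for c=2 and a 1x1 cell for c=5
                         # (the order of the cells may vary)
```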

Suppose T is upper triangular, where the diagonal entries are denoted by $t_{11}, \dots, t_{nn}$; then the determinant is $\prod_i t_{ii}$

Now suppose it's in the Jordan normal form

the sizes of all cells with $c_j$ on the diagonals

Cases and examples :

2 cases! $$\Cmatrix{}{c_1 & 0\\ 0 & c_2}, \quad \Cmatrix{}{c_1 & 1\\ 0 & c_1}$$