Contents
 1 Numbers
 1.1 Scalars
 1.2 Vectors
 1.3 Tensors
 1.4 Multivectors
 1.4.1 Multiplication of arbitrary vectors
 1.4.2 Rules
 1.4.3 Multiplication tables
 1.4.4 Basis
 1.4.5 Relation to other algebras
 1.4.6 Multivector multiplication using tensors
 1.4.7 Squares of pseudoscalars are either +1 or −1
 1.4.8 Bivectors in higher dimensions
 1.4.9 Other quadratic forms
 1.4.10 Rotors
 1.4.11 Spinors
 2 See also
 3 External links
 4 References
Numbers
Scalars
 See also: Peano axioms^{w}, ^{*}Hyperoperation, ^{*}Algebraic extension
The basis of all of mathematics is the ^{*}"Next" function. See Graph theory^{w}.
 Next(0)=1
 Next(1)=2
 Next(2)=3
 Next(3)=4
 Next(4)=5
We might express this by saying that one differs from nothing as two differs from one. This defines the Natural numbers^{w} (denoted ℕ). Natural numbers are those used for counting.
 These have the convenient property of being transitive^{w}. That means that if a<b and b<c then it follows that a<c. In fact they are totally ordered^{w}. See ^{*}Order theory.
Integers
Addition^{w} (See Tutorial:arithmetic) is defined as repeatedly calling the Next function, and its inverse is subtraction^{w}. But this leads to the ability to write equations like x + 1 = 0 for which there is no answer among the natural numbers. To provide an answer mathematicians generalize to the set of all integers^{w} (denoted ℤ because "Zahlen" means "numbers" in German), which includes the negative integers.
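The idea that addition is just repeated succession can be sketched in a few lines of Python (the names `next_` and `add` are illustrative, not from the source):

```python
# A minimal sketch of Peano-style addition: "Next" is the successor
# function, and a + b is defined by applying Next to a, b times.
def next_(n):
    return n + 1  # stand-in for the primitive successor operation

def add(a, b):
    for _ in range(b):
        a = next_(a)
    return a

print(add(2, 3))  # 5
```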
 The Additive identity^{w} is zero because x + 0 = x.
 The absolute value or modulus of x is defined as |x| = x if x ≥ 0 and |x| = −x if x < 0.
 ^{*}Integers form a ring (denoted ℤ). Ring^{w} is defined below.
 Z_{n} or ℤ/nℤ is used to denote the set of ^{*}integers modulo n.
 ^{*}Modular arithmetic is essentially arithmetic in the quotient ring^{w} Z/nZ (which has n elements).
 Consider the ring of integers Z and the ideal of even numbers, denoted by 2Z. Then the quotient ring Z / 2Z has only two elements, zero for the even numbers and one for the odd numbers; applying the definition, [z] = z + 2Z := {z + 2y: 2y ∈ 2Z}, where 2Z is the ideal of even numbers. It is naturally isomorphic to the finite field with two elements, F_{2}. Intuitively: if you think of all the even numbers as 0, then every integer is either 0 (if it is even) or 1 (if it is odd and therefore differs from an even number by 1).
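The two-element quotient described above is easy to check numerically; this sketch uses Python's `%` operator as the map onto ℤ/2ℤ:

```python
# Reducing mod 2 sends every even integer to 0 and every odd one to 1.
def cls(z, n=2):
    return z % n

print(cls(10), cls(7))  # 0 1
# Arithmetic on classes is well defined: the class of a sum depends
# only on the classes of the summands.
ok = all(cls(a + b) == cls(cls(a) + cls(b))
         for a in range(-5, 6) for b in range(-5, 6))
print(ok)  # True
```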
 An ^{*}ideal is a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3.
 A ^{*}principal ideal is an ideal in a ring R that is generated by a single element a of R through multiplication by every element of R.
 A ^{*}prime ideal is a subset of a ring that shares many important properties of a prime number in the ring of integers. The prime ideals for the integers are the sets that contain all the multiples of a given prime number.
 The study of integers is called Number theory^{w}.
 a | b means a divides b.
 a ∤ b means a does not divide b.
 p^{a} ∥ n means p^{a} exactly divides n (i.e. p^{a} divides n but p^{a+1} does not).
 A prime number is an integer greater than 1 whose only positive divisors are 1 and itself.
 If a, b, c, and d are distinct primes and x = abc and y = c^{2}d then gcd(x, y) = c and lcm(x, y) = abc^{2}d.
 Two integers a and b are said to be relatively prime, mutually prime, or coprime if the only positive integer that divides both of them is 1. Any prime number that divides one does not divide the other. This is equivalent to their greatest common divisor (gcd) being 1.
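In code, coprimality is just a gcd test; Python's standard library provides `math.gcd`:

```python
from math import gcd

# 8 and 15 share no prime factor, so they are coprime:
print(gcd(8, 15))   # 1
# 12 and 18 share the factor 6, so they are not:
print(gcd(12, 18))  # 6
```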
Rational numbers
Multiplication^{w} (See Tutorial:multiplication) is defined as repeated addition, and its inverse is division^{w}. But this leads to equations like 2x = 1 for which there is no answer. The solution is to generalize to the set of rational numbers^{w} (denoted ℚ) which includes fractions (See Tutorial:fractions). Any number which isn't rational is irrational^{w}. See also ^{*}p-adic number
 The set of all rational numbers except zero forms a ^{*}multiplicative group which is a set of invertible elements.
 Rational numbers form a ^{*}division algebra because every nonzero element has an inverse. The ability to find the inverse of every element turns out to be quite useful. A great deal of time and effort has been spent trying to find division algebras.
 The Multiplicative identity^{w} is one because x * 1 = x.
 Division by zero is undefined and undefinable^{w}. 1/0 exists nowhere on the complex plane^{w}. It does, however, exist on the Riemann sphere^{w} (often called the extended complex plane) where it is surprisingly well behaved. See also ^{*}Wheel theory and L'Hôpital's rule^{w}.
 (Addition and multiplication are fast but division is slow ^{*}even for computers.)
Binary multiplication

The binary numbers 101 and 110 are multiplied as follows:

        1 0 1      (5 in decimal)
      × 1 1 0      (6 in decimal)
      ---------
        0 0 0
      1 0 1
    1 0 1
    -----------
    1 1 1 1 0      (30 in decimal)

Binary numbers can also be multiplied with bits after a ^{*}binary point:

          1 0 1 . 1 0 1        (5.625 in decimal)
        × 1 1 0 . 0 1          (6.25 in decimal)
        ---------------
              1 . 0 1 1 0 1
            0 0 . 0 0 0 0
          0 0 0 . 0 0 0
        1 0 1 1 . 0 1
      1 0 1 1 0 . 1
      ---------------------
      1 0 0 0 1 1 . 0 0 1 0 1  (35.15625 in decimal)
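Both worked examples can be verified in Python by parsing the binary strings; the fractional case is handled by scaling away the binary points first:

```python
# 101 * 110 in binary:
a = int("101", 2)   # 5
b = int("110", 2)   # 6
print(bin(a * b))   # 0b11110, i.e. 30

# 101.101 * 110.01: 101.101 is 101101 / 2**3 and 110.01 is 11001 / 2**2,
# so the product is (101101 * 11001) / 2**5.
num = int("101101", 2) * int("11001", 2)
print(num / 2**5)   # 35.15625
```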
Our universe is tiny. Starting with only 2 people and doubling the population every 100 years will in only 27,000 years result in enough people to completely fill the observable universe. 
Irrational and complex numbers
Exponentiation^{w} (See Tutorial:exponents) is defined as repeated multiplication, and its inverses are roots^{w} and logarithms^{w}. But this leads to multiple equations with no solutions:
 Equations like x^{2} = 2. The solution is to generalize to the set of algebraic numbers^{w} (denoted 𝔸). (See also ^{*}algebraic integer and algebraically closed.) To see a proof that the square root of two is irrational see Square root of 2^{w}.
 Equations whose solutions (like π) are transcendental^{w}. The solution is to generalize to the set of Real numbers^{w} (denoted ℝ).
 Equations like x^{2} = −1. The solution is to generalize to the set of complex numbers^{w} (denoted ℂ) by defining i = √(−1). A single complex number consists of a real part a and an imaginary part bi (See Tutorial:complex numbers). Imaginary numbers^{w} often occur in equations involving change with respect to time. If friction is resistance to motion then imaginary friction would be resistance to change of motion with respect to time. (In other words, imaginary friction would be mass.) In fact, in the equation for the Spacetime interval^{w} (given below), ^{*}time itself is an imaginary quantity.
 The Complex conjugate^{w} of the complex number z = a + bi is z̄ = a − bi. (Not to be confused with the dual^{w} of a vector.)
 Complex numbers form an ^{*}Algebra over a field (Kalgebra) because complex multiplication is ^{*}Bilinear.
 The complex numbers are not ordered^{w}. However the absolute value^{w} or ^{*}modulus of a complex number z = a + bi is |z| = √(a^{2} + b^{2}).
 A Gaussian integer a + bi is a Gaussian prime if and only if either:
 one of a, b is zero and absolute value of the other is a prime number of the form 4n + 3 (with n a nonnegative integer), or
 both are nonzero and a^{2} + b^{2} is a prime number (which will not be of the form 4n + 3).
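The criterion above translates directly into a small test function (a sketch; `is_prime` is a naive trial-division helper added for illustration):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_gaussian_prime(a, b):
    # One of a, b zero: the other must be a prime of the form 4n + 3.
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    # Both nonzero: a^2 + b^2 must be prime.
    return is_prime(a * a + b * b)

print(is_gaussian_prime(0, 3))  # True: 3 is prime and 3 = 4*0 + 3
print(is_gaussian_prime(2, 0))  # False: 2 = -i(1 + i)^2 splits
print(is_gaussian_prime(1, 1))  # True: 1 + 1 = 2 is prime
```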
 There are n solutions of z^{n} = 1, the n^{th} roots of unity.
 0^0 = 1. See Empty product^{w}.
Hypercomplex numbers
Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions. Hypercomplex numbers^{w} like quaternions^{w} (denoted ℍ), octonions^{w} (denoted 𝕆), and ^{*}sedenions (denoted 𝕊) are one way to generalize complex numbers to some (but not all) higher dimensions.
A quaternion can be thought of as a complex number whose coefficients are themselves complex numbers (hence a hypercomplex number).
Where
q = (a + bi) + (c + di)j
and
i^{2} = j^{2} = k^{2} = ijk = −1.
Any real finite-dimensional ^{*}division algebra over the reals must be:^{[1]}
 isomorphic to R or C if ^{*}unitary and commutative (equivalently: associative and commutative)
 isomorphic to the quaternions if noncommutative but associative
 isomorphic to the octonions if nonassociative but alternative.
The following is known about the dimension of a finite-dimensional division algebra A over a field K:
 dim A = 1 if K is algebraically closed,
 dim A = 1, 2, 4 or 8 if K is ^{*}real closed, and
 If K is neither algebraically nor real closed, then there are infinitely many dimensions in which there exist division algebras over K.
^{*}Split-complex numbers (hyperbolic complex numbers) are similar to complex numbers except that i^{2} = +1.
Tetration
Tetration^{w} is defined as repeated exponentiation and its inverses are called superroot and superlogarithm.
Hyperreal numbers
 See also: ^{*}Nonstandard calculus
When a quantity, like the charge of a single electron, becomes so small that it is insignificant we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it very definitely is not, is called infinitesimal. If Q is a finite amount of charge then, using Leibniz's notation^{w}, dQ would be an infinitesimal amount of charge. See Differential^{w}.
Likewise when a quantity becomes so large that a regular finite quantity becomes insignificant then we call it infinite. We would say that the mass of the ocean is infinite. But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite.
Infinity and the infinitesimal are called Hyperreal numbers^{w} (denoted *ℝ). Hyperreals behave, in every way, exactly like real numbers. For example, 2ω is exactly twice as big as ω. In reality, the mass of the ocean is a real number so it is hardly surprising that it behaves like one. See ^{*}Epsilon numbers and ^{*}Big O notation.
In ancient times infinity was called the "all".
Groups and rings
 Main articles: Algebraic structure^{w}, Abstract algebra^{w}, and ^{*}group theory
Addition and multiplication can be generalized in so many ways that mathematicians have created a whole system just to categorize them.
A ^{*}magma is a set with a single ^{*}closed binary operation (usually, ^{*}but not always, addition. See ^{*}Additive group).
 a + b = c
A ^{*}semigroup is a magma where the addition is associative. See also ^{*}Semigroupoid
 a + (b + c) = (a + b) + c
A ^{*}monoid is a semigroup with an additive identity element.
 a + 0 = a
A ^{*}group is a monoid with additive inverse elements.
 a + (−a) = 0
An ^{*}abelian group is a group where the addition is commutative.
 a + b = b + a
A ^{*}pseudoring is an abelian group that also has a second closed, associative, binary operation (usually, but not always, multiplication).
 a * (b * c) = (a * b) * c
 And these two operations satisfy a distribution law.
 a(b + c) = ab + ac
A ^{*}ring is a pseudoring that has a multiplicative identity
 a * 1 = a
A ^{*}commutative ring is a ring where multiplication commutes, (e.g. ^{*}integers)
 a * b = b * a
A ^{*}field is a commutative ring where every element has a multiplicative inverse (and thus there is a multiplicative identity),
 a * (1/a) = 1
 The existence of a multiplicative inverse for every nonzero element automatically implies that there are no ^{*}zero divisors in a field
 if ab = 0 for some a ≠ 0, then we must have b = 0 (we call this having no zero divisors).
ℤ/nℤ is the ^{*}quotient ring of ℤ by the ideal nℤ containing all integers divisible by n.
 Thus ℤ/nℤ is a field when nℤ is a ^{*}maximal ideal, that is, when n is prime.
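A brute-force check of this fact: ℤ/nℤ is a field exactly when every nonzero class has a multiplicative inverse (the helper `is_field` is illustrative):

```python
def is_field(n):
    # Every a in 1..n-1 must have some b with a*b = 1 (mod n).
    return all(any(a * b % n == 1 for b in range(1, n))
               for a in range(1, n))

print([n for n in range(2, 13) if is_field(n)])  # [2, 3, 5, 7, 11]
```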
The ^{*}center of a ^{*}group is the commutative subgroup of elements c such that c+x = x+c for every x. See also: ^{*}Centralizer and normalizer.
The ^{*}center of a ^{*}noncommutative ring is the commutative subring of elements c such that cx = xc for every x.
The ^{*}characteristic of ring R, denoted char(R), is the number of times one must add the ^{*}multiplicative identity to get the ^{*}additive identity; if that never happens, the characteristic is 0.
A ^{*}Lie group is a group that is also a smooth differentiable manifold, in which the group operation is multiplication rather than addition.^{[2]} (Differentiation requires the ability to multiply and divide which is usually impossible with most groups.)
All nonzero ^{*}nilpotent elements are ^{*}zero divisors.
 For example, the square matrix^{w} with rows (0, 1) and (0, 0) is nilpotent: its square is the zero matrix.
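A NumPy sketch of such a matrix: it is nonzero, yet its square is the zero matrix, so it is both nilpotent and a zero divisor:

```python
import numpy as np

N = np.array([[0, 1],
              [0, 0]])
print(N @ N)  # the 2x2 zero matrix
```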
Numbers don't lie. (But they sure help.)
From Wikipedia:Mathematical fallacy:
1. a = b
2. a^{2} = ab
3. a^{2} − b^{2} = ab − b^{2}
4. (a − b)(a + b) = b(a − b)
5. a + b = b
6. 2b = b (since a = b)
7. 2 = 1
The fallacy is in line 5: the progression from line 4 to line 5 involves division by a − b, which is zero since a = b. Since division by zero is undefined, the argument is invalid.
Intervals
 [2,5[ or [2,5) denotes the interval^{w} from 2 to 5, including 2 but excluding 5.
 [3..7] denotes all integers from 3 to 7.
 The set of all reals is unbounded at both ends.
 An open interval does not include its endpoints.
 ^{*}Compactness is a property that generalizes the notion of a subset being closed and bounded.
 The ^{*}unit interval is the closed interval [0,1]. It is often denoted I.
 The ^{*}unit square is a square whose sides have length 1.
 Often, "the" unit square refers specifically to the square in the Cartesian plane^{w} with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
 The ^{*}unit disk in the complex plane is the set of all complex numbers of absolute value less than one and is often denoted 𝔻.
Vectors
 See also: ^{*}Algebraic geometry, ^{*}Algebraic variety, ^{*}Scheme, ^{*}Algebraic manifold, and Linear algebra^{w}
The one dimensional number line can be generalized to a multidimensional Cartesian coordinate system^{w} thereby creating multidimensional math (i.e. geometry^{w}). See also ^{*}Curvilinear coordinates
For sets A and B, the Cartesian product^{w} A × B is the set of all ordered pairs^{w} (a, b) where a ∈ A and b ∈ B.^{[3]}
 ℝ^{n} = ℝ × ℝ × ⋯ × ℝ is the Cartesian product of n copies of ℝ
 ℂ^{n} = ℂ × ℂ × ⋯ × ℂ is the Cartesian product^{w} of n copies of ℂ (See ^{*}Complexification)
The ^{*}direct product generalizes the Cartesian product. (See also ^{*}Direct sum)
A vector space^{w} is a coordinate space^{w} with vector addition^{w} and scalar multiplication^{w} (multiplication of a vector and a scalar^{w} belonging to a field^{w}).
 If ê_{1}, ê_{2}, …, ê_{n} are orthogonal^{w} unit^{w} ^{*}basis vectors
 and u and v are arbitrary vectors
 and a and b are scalars belonging to a field then we can (and usually do) write:
 v = v_{1}ê_{1} + v_{2}ê_{2} + ⋯ + v_{n}ê_{n} and au + bv = (au_{1} + bv_{1})ê_{1} + ⋯ + (au_{n} + bv_{n})ê_{n}
 See also: Linear independence^{w}
 A ^{*}module generalizes a vector space by allowing multiplication of a vector and a scalar belonging to a ring^{w}.
Coordinate systems define the length of vectors parallel to one of the axes but leave all other lengths undefined. This concept of "length", which only works for certain vectors, is generalized as the "norm^{w}", which works for all vectors. The norm of vector v is denoted ‖v‖. The double bars are used to avoid confusion with the absolute value.
 Taxicab metric^{w}: ‖v‖_{1} = |v_{1}| + |v_{2}| + ⋯ + |v_{n}| (called the L^{1} norm. See ^{*}L^{p} space. L^{p} spaces are sometimes called Lebesgue spaces. See also Lebesgue measure^{w}.) A circle in L^{1} space is shaped like a diamond.
 In Euclidean space^{w} the norm ‖v‖_{2} = √(v_{1}^{2} + ⋯ + v_{n}^{2}) (called the L^{2} norm) doesn't depend on the choice of coordinate system. As a result, rigid objects can rotate in Euclidean space. See the proof of the Pythagorean theorem^{w}. L^{2} is the only ^{*}Hilbert space among L^{p} spaces.
 In Minkowski space^{w} (See ^{*}Pseudo-Euclidean space) the Spacetime interval^{w} is s^{2} = x^{2} + y^{2} + z^{2} − (ct)^{2}.
 In ^{*}complex space the most common norm of an n-dimensional vector is obtained by treating it as though it were a regular real-valued 2n-dimensional vector in Euclidean space: ‖v‖ = √(|v_{1}|^{2} + ⋯ + |v_{n}|^{2}).
 Infinity norm: ‖v‖_{∞} = max(|v_{1}|, …, |v_{n}|). (In this space a circle is shaped like a square.)
 A ^{*}Banach space is a ^{*}normed vector space that is also a complete metric space^{w} (there are no points missing from it).
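The three norms above can be compared with `numpy.linalg.norm`, which takes the order of the norm as its second argument:

```python
import numpy as np

v = np.array([3.0, -4.0])
print(np.linalg.norm(v, 1))       # 7.0  taxicab (L1) norm
print(np.linalg.norm(v))          # 5.0  Euclidean (L2) norm
print(np.linalg.norm(v, np.inf))  # 4.0  infinity norm
```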
Manifolds 

A manifold^{w} is a type of topological space^{w} in which each point has an infinitely small neighbourhood^{w} that is homeomorphic^{w} to Euclidean space^{w}. A manifold is locally, but not globally, Euclidean. A ^{*}Riemannian metric on a manifold allows distances and angles to be measured.
A ^{*}Lie group is a group that is also a finite-dimensional smooth manifold, in which the group operation is multiplication rather than addition.^{[5]} The ^{*}n×n invertible matrices (See below) are a Lie group.

Spaces
Around 1735, Euler discovered the formula relating the number of vertices, edges and faces of a convex polyhedron, and hence of a ^{*}planar graph. No metric is required to prove this formula. The study and generalization of this formula is the origin of topology^{w}.
A topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.^{[6]}
The metric is a function d that defines a concept of distance between any two points. The distance from a point to itself is zero. The distance between two distinct points is positive.
 1. d(x, y) ≥ 0
 2. d(x, y) = 0 iff x = y
 3. d(x, y) = d(y, x)
 4. d(x, z) ≤ d(x, y) + d(y, z)
A norm is the generalization to real vector spaces of the intuitive notion of distance in the real world. All norms on a finitedimensional vector space are equivalent from a topological viewpoint as they induce the same topology (although the resulting metric spaces need not be the same).^{[7]}
A norm is a function that assigns a strictly positive length or size to each vector in a vector space—except for the zero vector, which is assigned a length of zero. ^{[8]}
 ‖v‖ ≥ 0
 ‖v‖ = 0 iff v = 0 (the zero vector)
 ‖u + v‖ ≤ ‖u‖ + ‖v‖ (The ^{*}Triangle inequality)
A seminorm, on the other hand, is allowed to assign zero length to some nonzero vectors (in addition to the zero vector).^{[9]}
From Wikipedia:List of vector spaces in mathematics:
 ^{*}Topological space
 ^{*}Manifold
 ^{*}Metric space
 ^{*}Vector space
 Dual space^{w}
 ^{*}Riesz space
 ^{*}Topological vector space
 ^{*}Montel space
 ^{*}Locally convex topological vector space
 ^{*}Normed vector space
 ^{*}Inner product space
 ^{*}Banach space
 ^{*}Tsirelson space
 ^{*}Orlicz space
 ^{*}Morrey–Campanato space
 ^{*}Hilbert space
 ^{*}Real coordinate space
 ^{*}L^{p} space
 Euclidean space^{w}
 ^{*}PseudoEuclidean space
 ^{*}Minkowski space
 ^{*}L^{p} space
 ^{*}Sobolev space
 ^{*}Real coordinate space
 ^{*}Normed vector space
 ^{*}Besov space
 ^{*}Bochner space
 ^{*}Fock space
 ^{*}Fréchet space
 ^{*}Schwartz space
 ^{*}Hardy space
 ^{*}Hölder space
 ^{*}LFspace
Multiplication of vectors
Multiplication can be generalized to allow for multiplication of vectors in 3 different ways:
Dot product
Dot product^{w} (a Scalar^{w}): u · v = u_{1}v_{1} + u_{2}v_{2} + ⋯ + u_{n}v_{n} = ‖u‖‖v‖cos θ
 Strangely, only parallel components multiply.
 The dot product can be generalized to the bilinear form^{w} B(u, v) = u^{T}Av, where A is a (0,2) tensor. (For the dot product in Euclidean space A is the identity tensor. But in Minkowski space A is the ^{*}Minkowski metric.)
 Two vectors are orthogonal if B(u, v) = 0.
 A bilinear form is symmetric if B(u, v) = B(v, u).
 Its associated ^{*}quadratic form is Q(v) = B(v, v).
 In Euclidean space Q(v) = v · v = ‖v‖^{2}.
 A nondegenerate bilinear form is one for which the associated matrix is invertible (its determinant is not zero):
 B(u, v) = 0 for all v implies that u = 0.
 The inner product^{w} is a generalization of the dot product to complex vector space.
 (See ^{*}Bra–ket notation.)
 The inner product can be generalized to a sesquilinear form^{w}
 A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × V → C such that h(u, v) equals the complex conjugate of h(v, u).^{[10]}
 A is a ^{*}Hermitian operator iff^{w} ⟨Ax, y⟩ = ⟨x, Ay⟩. Often written as A = A^{†}.
 The curl operator is Hermitian.
 A ^{*}Hilbert space is an inner product space^{w} that is also a Complete metric space^{w}.
 The inner product of 2 functions f and g between a and b is ⟨f, g⟩ = ∫_{a}^{b} f(x)g(x) dx.
 If this is equal to 0, the functions are said to be orthogonal on the interval. Unlike with vectors, this has no geometric significance but this definition is useful in ^{*}Fourier analysis. See below.
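A numerical sketch of this definition, approximating the integral by a Riemann sum on [−π, π] (grid size chosen for illustration):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
# <sin, cos> = integral of sin(x)cos(x) dx over [-pi, pi]
inner = np.sum(np.sin(x) * np.cos(x)) * dx
print(abs(inner) < 1e-6)  # True: sin and cos are orthogonal here
```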
Outer product
Outer product^{w} (a tensor^{w} called a dyadic^{w}):
 As one would expect, every component of one vector multiplies with every component of the other vector: (u ⊗ v)_{ij} = u_{i}v_{j}.

 For complex vectors, it is customary to use the conjugate transpose of v (denoted v^{H} or v*):^{[11]}
 Taking the dot product of u⊗v and any vector x (See Visualization of Tensor multiplication^{w}) causes the components of x not pointing in the direction of v to become zero. What remains is then rotated from v to u. Therefore an outer product rotates one component of a vector and causes all other components to become zero.
 To rotate a vector with 2 components you need the sum of at least 2 outer products (a bivector). But this is still not perfect. Any 3rd component not in the plane of rotation will become zero.
 A true 3 dimensional rotation matrix can be constructed by summing three outer products. The first two sum to form a bivector. The third one rotates the axis of rotation zero degrees but is necessary to prevent that dimension from being squashed to nothing.
 The Tensor product^{w} generalizes the outer product^{w}.
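The three-outer-product construction described above can be written out with NumPy; here u, v, w are the standard basis vectors and the rotation is by t in the u–v plane (a sketch of the claim, not a library routine):

```python
import numpy as np

u, v, w = np.eye(3)          # orthonormal basis; w is the rotation axis
t = np.pi / 2                # rotate 90 degrees in the u-v plane
R = (np.cos(t) * (np.outer(u, u) + np.outer(v, v))   # bivector part...
     + np.sin(t) * (np.outer(v, u) - np.outer(u, v))
     + np.outer(w, w))       # ...plus the term that preserves the axis
print(np.round(R @ u, 12))   # u is rotated onto v: [0. 1. 0.]
print(np.round(R @ w, 12))   # the axis is untouched: [0. 0. 1.]
```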
Geometric product
The geometric product^{w} will be explained in detail below.
Wedge product
Wedge product^{w} (a simple bivector^{w}):
 The wedge product of 2 vectors is equal to the ^{*}geometric product minus the inner product as will be explained in detail below.
 The wedge product is also called the exterior product^{w} (sometimes mistakenly called the outer product).
 The term "exterior" comes from the exterior product of two vectors not being a vector.
 Just as a vector has length and direction so a bivector has an area and an orientation.
 In three dimensions a∧b is the dual^{w} of the cross product^{w} a×b, which is a pseudovector^{w}.
 The triple product^{w} a∧b∧c is a trivector which is a 3rd degree tensor.
 In 3 dimensions a trivector is a pseudoscalar so in 3 dimensions every trivector can be represented as a scalar times the unit trivector. See LeviCivita symbol^{w}
 The dual^{w} of vector a is the bivector ā = aI, where I is the unit pseudoscalar.
Covectors
The Mississippi flows at about 3 km per hour. Km per hour has both direction and magnitude and is a vector.
The Mississippi flows downhill about one foot per km. Feet per km has direction and magnitude but is not a vector. It's a covector.
The difference between a vector and a covector becomes apparent when doing a change of units. If we measured in meters instead of km then 3 km per hour becomes 3000 meters per hour. The numerical value increases. Vectors are therefore contravariant.
But 1 foot per km becomes 0.001 foot per meter. The numerical value decreases. Covectors are therefore covariant.
Tensors are more complicated. They can be part contravariant and part covariant.
A (1,1) Tensor is one part contravariant and one part covariant. It is totally unaffected by a change of units. It is these that we will study in the next section.
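The river example in code: under a km→m change of units the vector components multiply by the scale factor while the covector components divide by it (numbers from the text):

```python
scale = 1000.0  # meters per kilometer

speed_km_per_h = 3.0          # a vector quantity
slope_ft_per_km = 1.0         # a covector quantity

speed_m_per_h = speed_km_per_h * scale    # contravariant: 3000.0
slope_ft_per_m = slope_ft_per_km / scale  # covariant: 0.001
print(speed_m_per_h, slope_ft_per_m)
```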
Tensors
 See also: ^{*}Matrix norm and ^{*}Tensor contraction
 External links: Review of Linear Algebra and HighOrder Tensors
Just as a vector is a sum of unit vectors multiplied by constants so a tensor is a sum of unit dyadics (ê_{i} ⊗ ê_{j}) multiplied by constants. Each dyadic is associated with a certain plane segment having a certain orientation and magnitude. (But a dyadic is not the same thing as a bivector^{w}.)
A simple tensor is a tensor that can be written as a product of tensors of the form u ⊗ v. (See Outer Product above.) The rank of a tensor T is the minimum number of simple tensors that sum to T.^{[12]} A bivector^{w} is a tensor of rank 2.
The order or degree of a tensor is the total number of indices required to identify each component uniquely.^{[13]} A vector is a 1st-order tensor.
Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions.
Tensors^{w}, on the other hand, can be used in any number of dimensions to represent and perform rotations and other linear transformations^{w}. See the image to the right.
 Any affine transformation^{w} is equivalent to a linear transformation followed by a translation^{w} of the origin. (The origin^{w} is always a fixed point for any linear transformation.) "Translation" is just a fancy word for "move".
Multiplying a tensor and a vector results in a new vector that can not only have a different magnitude but can even point in a completely different direction:
Some special cases:
One can also multiply a tensor with another tensor. Each column of the second tensor is transformed exactly as a vector would be.
And we can also switch things around using a ^{*}Permutation matrix. (See also ^{*}Permutation group):
Matrices do not in general commute. For example, with A = (0 1; 0 0) and B = (0 0; 1 0):
 AB = (1 0; 0 0) but BA = (0 0; 0 1)
The Determinant^{w} of a matrix is the area or volume of the n-dimensional parallelepiped spanned by its column (or row) vectors and is frequently useful.
Matrices do have zero divisors. For example, with A = (0 1; 0 0), A ≠ 0 yet AA = 0.
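Both non-commutativity and zero divisors can be demonstrated with one pair of 2×2 matrices in NumPy:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
print(A @ B)  # [[1 0], [0 0]]
print(B @ A)  # [[0 0], [0 1]]  -- so AB != BA
print(A @ A)  # the zero matrix -- so A is a zero divisor
```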
Decomposition of tensors 

Every tensor of degree 2 can be decomposed into a symmetric and an antisymmetric tensor:
T = ½(T + T^{T}) + ½(T − T^{T})
The Outer product (tensor product) of a vector with itself is a symmetric tensor:
(v ⊗ v)^{T} = v ⊗ v
The wedge product of 2 vectors is antisymmetric:
v ∧ w = −(w ∧ v)
Any matrix X with complex entries can be expressed as X = X_{s} + X_{n}, where X_{s} is diagonalizable, X_{n} is nilpotent, and X_{s} and X_{n} commute.
This is the ^{*}Jordan–Chevalley decomposition.
Block matrix 

A 4×4 matrix P can be partitioned into four 2×2 blocks. The partitioned matrix can then be written as
P = (P_{11} P_{12}; P_{21} P_{22})
and the matrix product C = AB of two partitioned matrices can be formed blockwise, provided the column partitions of A match the row partitions of B. The blocks of the resulting matrix are calculated by multiplying and summing the corresponding blocks. Using the ^{*}Einstein notation that implicitly sums over repeated indices:
C_{ij} = A_{ik}B_{kj}
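A quick NumPy check that blockwise multiplication agrees with the ordinary product for random 4×4 matrices split into 2×2 blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))

# Partition each matrix into four 2x2 blocks.
Ab = [[A[:2, :2], A[:2, 2:]], [A[2:, :2], A[2:, 2:]]]
Bb = [[B[:2, :2], B[:2, 2:]], [B[2:, :2], B[2:, 2:]]]

# C_ij = sum over k of A_ik B_kj, assembled back into a 4x4 matrix.
C = np.block([[sum(Ab[i][k] @ Bb[k][j] for k in range(2))
               for j in range(2)] for i in range(2)])
print(np.allclose(C, A @ B))  # True
```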
Normal matrices
A diagonal matrix^{w}:
 The determinant of a diagonal matrix is the product of its diagonal entries: det(diag(λ_{1}, …, λ_{n})) = λ_{1}⋯λ_{n}.
A superdiagonal entry is one that is directly above and to the right of the main diagonal. A subdiagonal entry is one that is directly below and to the left of the main diagonal. The eigenvalues of diag(λ_{1}, ..., λ_{n}) are λ_{1}, ..., λ_{n} with associated eigenvectors of e_{1}, ..., e_{n}.
A ^{*}spectral theorem is a result about when a matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations.
A matrix is normal if and only if it is unitarily ^{*}diagonalizable, i.e. diagonalizable by a unitary change of basis.
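"Unitarily" matters: the 90° rotation matrix below is normal (it commutes with its conjugate transpose) and diagonalizes over ℂ, even though it has no real eigenvectors:

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees
# Normal: R commutes with its conjugate transpose.
print(np.allclose(R @ R.conj().T, R.conj().T @ R))  # True
# Over C its eigenvalues are the complex pair -i and i.
evals = sorted(np.linalg.eigvals(R), key=lambda z: z.imag)
print(np.allclose(evals, [-1j, 1j]))  # True
```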
All unitary^{w}, Hermitian^{w}, and ^{*}skewHermitian matrices are normal. 
All orthogonal^{w}, symmetric^{w}, and skewsymmetric^{w} matrices are normal. 
A Unitary matrix^{w} is a complex square matrix whose rows (or columns) form an ^{*}orthonormal basis of ℂ^{n} with respect to the usual inner product. 
An orthogonal matrix is a real unitary matrix. Its columns and rows are orthogonal unit vectors (i.e., ^{*}orthonormal vectors). A permutation matrix is an orthogonal matrix. 
A Hermitian matrix^{w} is a complex square matrix that is equal to its own conjugate transpose^{w}. The diagonal elements must be real. 
A symmetric matrix^{w} is a real Hermitian matrix. It is equal to its transpose^{w}. 
A ^{*}SkewHermitian matrix is a complex square matrix whose conjugate transpose is its negative. The diagonal elements must be imaginary. 
A Skewsymmetric matrix^{w} is a real SkewHermitian matrix. Its transpose equals its negative. A^{T} = −A The diagonal elements must be zero. 
Change of basis
An n-by-n square matrix A is invertible if there exists an n-by-n square matrix A^{−1} such that AA^{−1} = A^{−1}A = I_{n}.
A matrix is invertible if and only if its determinant is nonzero.
The standard basis for ℝ^{3} would be: e_{1} = (1, 0, 0), e_{2} = (0, 1, 0), e_{3} = (0, 0, 1).
Given a matrix M whose columns are the vectors of the new basis of the space (new basis matrix), the new coordinates for a column vector v are given by the matrix product M^{−1}v.
From Wikipedia:Matrix similarity:
Given a linear transformation:
 y = Tx,
it can be the case that a change of basis can result in a simpler form of the same transformation:
 y′ = T′x′,
 where x′ and y′ are in the new basis: x = Px′ and y = Py′,
 and P is the change-of-basis matrix.
To derive T in terms of the simpler matrix, we use:
 Py′ = TPx′, so y′ = P^{−1}TPx′ and therefore T′ = P^{−1}TP.
Thus, the matrix in the original basis is given by
 T = PT′P^{−1}.
From Wikipedia:Matrix similarity
Two n-by-n matrices A and B are called similar if
 B = P^{−1}AP
for some invertible n-by-n matrix P.
A transformation A ↦ P^{−1}AP is called a similarity transformation or conjugation of the matrix A. In the ^{*}general linear group, similarity is therefore the same as ^{*}conjugacy, and similar matrices are also called conjugate.
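Similar matrices share every basis-independent quantity (eigenvalues, trace, determinant), which is easy to spot-check:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # any invertible matrix will do
B = np.linalg.inv(P) @ A @ P  # B is similar to A

print(np.trace(A), np.trace(B))                         # both 5.0
print(np.allclose(np.linalg.det(B), np.linalg.det(A)))  # True
```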
Linear groups
A square matrix^{w} of order n is an n-by-n matrix. Any two square matrices of the same order can be added and multiplied. A matrix is invertible if and only if its determinant is nonzero.
GL_{n}(F) or GL(n, F), or simply GL(n) is the ^{*}Lie group of n×n invertible matrices with entries from the field F. The group operation is matrix multiplication^{w}. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.
 SL(n, F) or SL_{n}(F), is the ^{*}subgroup of GL(n, F) consisting of matrices with a determinant^{w} of 1.
 U(n), the Unitary group of degree n is the group^{w} of n × n unitary matrices^{w}. The group operation is matrix multiplication^{w}.^{[14]} The determinant of a unitary matrix is a complex number with norm 1.
 SU(n), the special unitary group of degree n, is the ^{*}Lie group of n×n unitary matrices^{w} with determinant^{w} 1.
Symmetry groups
^{*}Affine group
 ^{*}Poincaré group: boosts, rotations, translations
 ^{*}Lorentz group: boosts, rotations
 The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of noncolinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.)
Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.
 E(n): rotations, reflections, and translations.
 O(n): rotations, reflections
 SO(n): rotations
 so(3) is the Lie algebra of SO(3) and consists of all skewsymmetric^{w} 3 × 3 matrices.
Clifford group: The set of invertible elements x such that xvα(x)^{−1} ∈ V for all v in V, where α is the canonical automorphism. The ^{*}spinor norm Q is defined on the Clifford group by Q(x) = x^{t}x.
 Pin_{V}(K): The subgroup of elements of spinor norm 1. Maps 2-to-1 to the orthogonal group.
 Spin_{V}(K): The subgroup of elements of Dickson invariant 0 in Pin_{V}(K). When the characteristic is not 2, these are the elements of determinant 1. Maps 2-to-1 to the special orthogonal group. Elements of the spin group act as linear transformations on the space of spinors.
Rotations
In 4 spatial dimensions a rigid object can ^{*}rotate in 2 different ways simultaneously.
 See also: ^{*}Hypersphere of rotations, ^{*}Rotation group SO(3), ^{*}Special unitary group, ^{*}Plate trick, ^{*}Spin representation, ^{*}Spin group, ^{*}Pin group, ^{*}Spinor, Clifford algebra^{w}, ^{*}Indefinite orthogonal group, ^{*}Root system, Bivectors^{w}, Curl^{w}
Consider the solid ball in R^{3} of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we ^{*}identify (or "glue together") ^{*}antipodal points on the surface of the ball.
The ball with antipodal surface points identified is a ^{*}smooth manifold, and this manifold is ^{*}diffeomorphic to the rotation group. It is also diffeomorphic to the ^{*}real 3-dimensional projective space RP^{3}, so the latter can also serve as a topological model for the rotation group.
These identifications illustrate that SO(3) is ^{*}connected but not ^{*}simply connected. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". (In other words one full rotation is not equivalent to doing nothing.)
Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The ^{*}Balinese plate trick and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the ^{*}fundamental group of SO(3) is the cyclic group^{w} of order 2. In physics applications, the nontriviality of the fundamental group allows for the existence of objects known as ^{*}spinors, and is an important tool in the development of the ^{*}spin-statistics theorem.
Spin group 

The ^{*}universal cover of SO(3) is a ^{*}Lie group called ^{*}Spin(3). The group Spin(3) is isomorphic to the ^{*}special unitary group SU(2); it is also diffeomorphic to the unit ^{*}3-sphere S^{3} and can be understood as the group of ^{*}versors (quaternions^{w} with absolute value^{w} 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in ^{*}quaternions and spatial rotation. The map from S^{3} onto SO(3) that identifies antipodal points of S^{3} is a ^{*}surjective ^{*}homomorphism of Lie groups, with ^{*}kernel {±1}. Topologically, this map is a two-to-one ^{*}covering map. (See the ^{*}plate trick.)
The spin group Spin(n)^{[15]}^{[16]} is the ^{*}double cover of the ^{*}special orthogonal group SO(n) = SO(n, R), such that there exists a ^{*}short exact sequence of ^{*}Lie groups (with n ≠ 2). As a Lie group, Spin(n) therefore shares its ^{*}dimension, n(n − 1)/2, and its ^{*}Lie algebra with the special orthogonal group. For n > 2, Spin(n) is ^{*}simply connected and so coincides with the ^{*}universal cover of ^{*}SO(n). The nontrivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of ^{*}reflection through the origin, generally denoted −I. Spin(n) can be constructed as a ^{*}subgroup of the invertible elements in the Clifford algebra^{w} Cl(n). A distinct article discusses the ^{*}spin representations. 
Matrix representations
 See also: ^{*}Group representation, ^{*}Presentation of a group, ^{*}Abstract algebra
Real numbers
If a vector is multiplied by the ^{*}identity matrix then the vector is completely unchanged:
And if then
Therefore can be thought of as the matrix form of the scalar a. The scalar matrices are the center of the algebra of matrices.
 .
 .
(Note: Not all matrices have a logarithm, and those that do may have more than one. The study of logarithms of matrices leads to Lie theory: when a matrix has a logarithm, it lies in a Lie group, and the logarithm is the corresponding element of the vector space of the Lie algebra.)
Complex numbers
Complex numbers can also be written in matrix form^{w} in such a way that complex multiplication corresponds perfectly to matrix multiplication:
The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin computed using the Pythagorean theorem.
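The correspondence between complex arithmetic and matrix arithmetic is easy to verify numerically. A sketch (numpy assumed; the helper name `to_matrix` is ours) using the common convention that a + bi maps to the real 2 × 2 matrix [[a, −b], [b, a]]:

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, -1 + 4j

# Matrix multiplication reproduces complex multiplication exactly.
assert np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w))

# The determinant is the squared absolute value, i.e. the squared
# Euclidean distance from the origin (Pythagorean theorem).
assert np.isclose(np.linalg.det(to_matrix(z)), abs(z) ** 2)
```

Note that the determinant of the matrix recovers the squared modulus, tying the matrix form back to the Pythagorean description of the absolute value.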
Quaternions
There are at least two ways of representing quaternions as matrices^{w} in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication^{w}.
Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as
Multiplying any two Pauli matrices always yields, up to sign, a quaternion unit matrix. See Isomorphism to quaternions below.
By replacing each 0, 1, and i with its 2 × 2 matrix representation that same quaternion can be written as a 4 × 4 real (^{*}block) matrix:
Therefore:
However, the representation of quaternions in M(4,ℝ) is not unique. In fact, there exist 48 distinct representations of this form. Each 4 × 4 matrix representation of quaternions corresponds to a multiplication table of unit quaternions. See Wikipedia:Quaternion#Matrix_representations.
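One of the 48 representations mentioned above can be written down and checked directly. The particular sign layout below is an assumption (any valid representation works equally well), with numpy assumed:

```python
import numpy as np

def qmat(a, b, c, d):
    """A 4x4 real matrix representation of the quaternion a + bi + cj + dk.
    (One of the 48 possible sign layouts; chosen here for illustration.)"""
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

E = np.eye(4)
I, J, K = qmat(0, 1, 0, 0), qmat(0, 0, 1, 0), qmat(0, 0, 0, 1)

# The defining relations i^2 = j^2 = k^2 = ijk = -1 hold for the matrices.
for M in (I, J, K):
    assert np.allclose(M @ M, -E)
assert np.allclose(I @ J @ K, -E)
assert np.allclose(I @ J, K)   # and ij = k
```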
The obvious way of representing quaternions with 3 × 3 real matrices does not work because:
Vectors
Euclidean
 See also: ^{*}Splitcomplex numbers
Unfortunately, the matrix representation of a vector is not so obvious. First we must decide what properties the matrix should have. To see why, consider the square (^{*}quadratic form) of a single vector:
From the Pythagorean theorem we know that:
So we know that
This particular Clifford algebra is known as Cl_{2,0}. The subscript 2 indicates that the 2 basis vectors are square roots of +1. See ^{*}Metric signature. If we had used then the result would have been Cl_{0,2}.
The set of 3 matrices in 3 dimensions that have these properties is called the ^{*}Pauli matrices. The algebra generated by the three Pauli matrices is isomorphic to the Clifford algebra of ℝ^{3}.
The Pauli matrices are a set of three 2 × 2 complex^{w} matrices^{w} which are Hermitian^{w} and unitary^{w}.^{[17]} They are
Squaring a Pauli matrix results in a "scalar":
Do NOT confuse this scalar with the vectors above. Although it may look similar to the Pauli matrices, it is not the matrix representation of a vector but of a scalar. Scalars are totally different from vectors, and their matrix representations are correspondingly different.
Multiplication is ^{*}anticommutative:
And
The exponential of a Pauli vector is analogous to Euler's formula, extended to quaternions:
commutation^{w} relations:
^{*}anticommutation relations:
Adding the commutator () to the anticommutator () gives the general formula for multiplying any two arbitrary "vectors" (or rather their matrix representations):
If is identified with the pseudoscalar then the right hand side becomes which is also the definition for the geometric product^{w} of two vectors in geometric algebra^{w} (Clifford algebra^{w}). The geometric product of two vectors is a multivector^{w}.
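These relations, and the resulting geometric-product formula, can be verified numerically. A sketch (numpy assumed; `to_pauli` is our own helper mapping a 3D vector to its matrix representation):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def to_pauli(v):
    """Matrix representation v_x s1 + v_y s2 + v_z s3 of a 3D vector."""
    return sum(c * m for c, m in zip(v, s))

# Anticommutation {s_i, s_j} = 2 delta_ij I and commutation [s1, s2] = 2i s3.
for i in range(3):
    for j in range(3):
        assert np.allclose(s[i] @ s[j] + s[j] @ s[i], 2 * (i == j) * I2)
assert np.allclose(s[0] @ s[1] - s[1] @ s[0], 2j * s[2])

# Half the anticommutator plus half the commutator gives the product:
# (u.sigma)(v.sigma) = (u.v) I + i (u x v).sigma
u, v = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(to_pauli(u) @ to_pauli(v),
                   np.dot(u, v) * I2 + 1j * to_pauli(np.cross(u, v)))
```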
For any 2 arbitrary vectors:
Applying the rules of Clifford algebra we get:
Isomorphism to quaternions  

Multiplying any two Pauli matrices results in (the matrix representation of) a quaternion. Hence the geometric interpretation of the quaternion units as bivectors in 3-dimensional (not 4-dimensional) space. Quaternions form a ^{*}division algebra (every nonzero element has an inverse) whereas Pauli matrices do not.
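The isomorphism can be made explicit with the common identification i ↔ −iσ_1, j ↔ −iσ_2, k ↔ −iσ_3 (one convention among several; up to sign these are the bivectors σ_2σ_3, σ_3σ_1, σ_1σ_2). A sketch (numpy assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# One common identification of the quaternion units (a convention choice).
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3
E = np.eye(2)

assert np.allclose(qi @ qi, -E)
assert np.allclose(qj @ qj, -E)
assert np.allclose(qk @ qk, -E)
assert np.allclose(qi @ qj @ qk, -E)   # Hamilton's i j k = -1
assert np.allclose(qi @ qj, qk)        # i j = k
```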
And multiplying a Pauli matrix and a quaternion results in a Pauli matrix: 
Further reading: ^{*}Generalizations of Pauli matrices, ^{*}GellMann matrices and ^{*}Pauli equation
PseudoEuclidean
 See also: ^{*}Electron magnetic moment
Gamma ^{*}matrices, , also known as the Dirac matrices, are a set of 4 × 4 conventional matrices with specific ^{*}anticommutation relations that ensure they ^{*}generate a matrix representation of the Clifford algebra^{w} Cℓ_{1,3}(R). One gamma matrix squares to +1 times the ^{*}identity matrix and three gamma matrices square to −1 times the identity matrix.
The defining property for the gamma matrices to generate a Clifford algebra^{w} is the anticommutation relation
where is the ^{*}anticommutator, is the ^{*}Minkowski metric with signature (+ − − −) and is the 4 × 4 identity matrix.
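A quick numerical check of the anticommutation relation, using the standard Dirac-basis matrices (numpy assumed):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac basis: g0 = diag(I, -I); gk = [[0, sk], [-sk, 0]].
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]
eta = np.diag([1, -1, -1, -1])   # Minkowski metric, signature (+ - - -)

# {g_mu, g_nu} = 2 eta_{mu nu} I_4
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```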
Minkowski metric 

From Wikipedia:Minkowski_space#Minkowski_metric:
The simplest example of a Lorentzian manifold is ^{*}flat spacetime, which can be given as R^{4} with coordinates and the metric. Note that these coordinates actually cover all of R^{4}. The flat space metric (or ^{*}Minkowski metric) is often denoted by the symbol η and is the metric used in ^{*}special relativity.
A standard basis for Minkowski space is a set of four mutually orthogonal vectors { e_{0}, e_{1}, e_{2}, e_{3} } such that These conditions can be written compactly in the form
Relative to a standard basis, the components of a vector v are written (v^{0}, v^{1}, v^{2}, v^{3}) where the ^{*}Einstein summation convention is used to write v = v^{μ}e_{μ}. The component v^{0} is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v_{1}, v_{2}, v_{3}). In terms of components, the Minkowski inner product between two vectors v and w is given by and Here lowering of an index with the metric was used.
The Minkowski metric^{[18]} η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally a constant pseudo-Riemannian metric in Cartesian coordinates. As such it is a nondegenerate symmetric bilinear form, a type (0,2) tensor. It accepts two arguments u, v. The definition yields an inner-product-like structure on M, previously and also henceforth called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same, the resulting quantity is called the Minkowski norm squared. This bilinear form can in turn be written as where [η] is a 4×4 matrix associated with η. Possibly confusingly, [η] is often denoted simply by η, as is common practice. 
The matrix is read off from the explicit bilinear form as and the bilinear form with which this section started, whose existence was assumed, is now identified. 
When interpreted as the matrices of the action of a set of orthogonal basis vectors for ^{*}contravariant vectors in Minkowski space^{w}, the column vectors on which the matrices act become a space of ^{*}spinors, on which the Clifford algebra of ^{*}spacetime acts. This in turn makes it possible to represent infinitesimal ^{*}spatial rotations and Lorentz boosts^{w}. Spinors facilitate spacetime computations in general, and in particular are fundamental to the ^{*}Dirac equation for relativistic spin-½ particles.
In Dirac representation^{w}, the four ^{*}contravariant gamma matrices are
is the timelike matrix and the other three are spacelike matrices.
The matrices are also sometimes written using the 2×2 ^{*}identity matrix, , and the ^{*}Pauli matrices.
The gamma matrices we have written so far are appropriate for acting on ^{*}Dirac spinors written in the Dirac basis; in fact, the Dirac basis is defined by these matrices. To summarize, in the Dirac basis:
Another common choice is the Weyl or chiral basis,^{[19]} in which remains the same but is different, and so is also different, and diagonal,
Original Dirac matrices 

Surprisingly, the 4 × 4 table above forms a multiplication table even though it is actually created by the following rules: where and are the original 2 × 2 Pauli matrices and is the ^{*}Kronecker product (not the tensor product). The Dirac matrices are commonly referred to by the following name. Note that do not refer to the original Pauli matrices. The 16 original Dirac matrices form six anticommuting sets of five matrices each (Arfken 1985, p. 214). Any of the 15 original Dirac matrices (excluding the identity matrix ) anticommutes with eight other original Dirac matrices and commutes with the remaining eight, including itself and the identity matrix. Any of the 16 original Dirac matrices multiplied by itself equals 
Higherdimensional gamma matrices 

^{*}Analogous sets of gamma matrices can be defined in any dimension and for any signature of the metric. For example, the Pauli matrices are a set of "gamma" matrices in dimension 3 with metric of Euclidean signature (3,0). In 5 spacetime dimensions, the 4 gammas above together with the fifth gamma matrix to be presented below generate the Clifford algebra. It is useful to define the product of the four gamma matrices as follows:
Although uses the letter gamma, it is not one of the gamma matrices of Cℓ_{1,3}(R). The number 5 is a relic of old notation in which was called "". From Wikipedia:Higher-dimensional gamma matrices: Consider a spacetime of dimension d with the flat ^{*}Minkowski metric, where a,b = 0, 1, ..., d−1. Set N = 2^{⌊d/2⌋}. The standard Dirac matrices correspond to taking d = N = 4. The higher gamma matrices are a d-long sequence of complex N×N matrices which satisfy the ^{*}anticommutator relation from the ^{*}Clifford algebra Cℓ_{1,d−1}(R) (generating a representation for it), where I_{N} is the ^{*}identity matrix in N dimensions. (The spinors acted on by these matrices have N components in d dimensions.) Such a sequence exists for all values of d and can be constructed explicitly, as provided below. The gamma matrices have the following property under Hermitian conjugation: 
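The claim that the four gammas together with the fifth matrix generate the Clifford algebra in 5 spacetime dimensions rests on that matrix squaring to +1 and anticommuting with the other four. A sketch of that check in the Dirac basis (numpy assumed):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))
gammas = [np.block([[I2, Z2], [Z2, -I2]])] + \
         [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

# g5 = i g0 g1 g2 g3: squares to +1 and anticommutes with all four gammas.
g5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]

assert np.allclose(g5 @ g5, np.eye(4))
for g in gammas:
    assert np.allclose(g5 @ g + g @ g5, np.zeros((4, 4)))
```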
Further reading: Quantum Mechanics for Engineers and How (not) to teach Lorentz covariance of the Dirac equation
Multivectors
 See also: ^{*}Dirac algebra
External links:
 A brief introduction to geometric algebra
 A brief introduction to Clifford algebra
 The Construction of Spinors in Geometric Algebra
 Functions of Multivector Variables
 Clifford Algebra Representations
Clifford algebra is a type of algebra characterized by the geometric product of scalars, vectors, bivectors, trivectors...etc.
Just as a vector has length so a bivector has area and a trivector has volume.
Just as a vector has direction so a bivector has orientation. In three dimensions a trivector has only one possible orientation and is therefore a pseudoscalar. But in four dimensions a trivector becomes a pseudovector and the quadvector becomes the pseudoscalar.
Multiplication of arbitrary vectors
The dot product of two vectors is:
 {\displaystyle \mathbf{u} \cdot \mathbf{v} = (u_x e_x + u_y e_y) \cdot (v_x e_x + v_y e_y) = u_x v_x + u_y v_y}
But this is actually quite mysterious. When we multiply and we don't get, so why is it that when we multiply vectors we only multiply parallel components? Clifford algebra has a surprisingly simple answer: we don't! Instead of the dot product or the wedge product, Clifford algebra uses the geometric product.
 {\displaystyle \begin{aligned} \mathbf{u} \mathbf{v} &= (u_x e_x + u_y e_y)(v_x e_x + v_y e_y) \\ &= u_x v_x \, e_x e_x + u_x v_y \, e_x e_y + u_y v_x \, e_y e_x + u_y v_y \, e_y e_y \\ &= u_x v_x (1) + u_y v_y (1) + u_x v_y \, e_x e_y - u_y v_x \, e_x e_y \\ &= (u_x v_x + u_y v_y)(1) + (u_x v_y - u_y v_x) \, e_{xy} \\ &= \text{scalar} + \text{bivector} \end{aligned}}
A scalar plus a bivector (or any number of blades of different grade) is called a multivector. The idea of adding a scalar and a bivector might seem wrong but in the real world it just means that what appears to be a single equation is in fact a set of ^{*}simultaneous equations.
For example:
 would just mean that:
 {\displaystyle (u_x v_x + u_y v_y)(1) = 5 \quad \text{and} \quad (u_x v_y - u_y v_x) \, e_{xy} = 0}
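The geometric product of two 2D vectors is easy to compute componentwise. A minimal sketch (the helper name `geometric_product_2d` is ours):

```python
def geometric_product_2d(u, v):
    """Geometric product of two 2D vectors in Cl(2,0):
    uv = (u_x v_x + u_y v_y)(1) + (u_x v_y - u_y v_x) e_xy.
    Returned as the pair (scalar part, bivector coefficient)."""
    return (u[0] * v[0] + u[1] * v[1],
            u[0] * v[1] - u[1] * v[0])

u, v = (3.0, 1.0), (2.0, 4.0)
s, b = geometric_product_2d(u, v)
assert s == 3 * 2 + 1 * 4        # scalar part = dot product
assert b == 3 * 4 - 1 * 2        # bivector part = signed area
# Parallel vectors give a pure scalar (the bivector part vanishes).
assert geometric_product_2d(u, (6.0, 2.0))[1] == 0.0
```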
Rules
All the properties of Clifford algebra derive from a few simple rules.
Let e_x, e_y, and e_z be mutually perpendicular unit vectors.
Multiplying two perpendicular vectors results in a bivector:
Multiplying three perpendicular vectors results in a trivector:
Multiplying parallel vectors results in a scalar:
Clifford algebra is associative; therefore, the fact that multiplying parallel vectors results in a scalar means that:
 {\displaystyle \begin{aligned} (e_x e_y)(e_y) &= e_x (e_y e_y) \\ &= e_x (1) \\ &= e_x \end{aligned}}
 and:
 and:
Rotation from x to y is the negative of rotation from y to x:
 Therefore:
 {\displaystyle \begin{aligned} (e_x e_y)(e_x) &= \phantom{-} e_x (e_y e_x) \\ &= -e_x (e_x e_y) \\ &= -(e_x e_x) e_y \\ &= -(1) e_y \\ &= -e_y \end{aligned}}
Multiplication tables
 In one dimension:
 In two dimensions:
 In three dimensions:
 In four dimensions:

Basis
Every multivector of the Clifford algebra can be expressed as a linear combination of the canonical basis elements. The basis elements of the Clifford algebra Cℓ_{3} are and the general element of Cℓ_{3} is given by
If are all real then the Clifford algebra is Cℓ_{3}(R). If the coefficients are allowed to be complex then the Clifford algebra is Cℓ_{3}(C).
A multivector can be separated into components of different grades:
 {\displaystyle \begin{aligned} \langle \mathbf{A} \rangle_0 &= a_0 (1) \\ \langle \mathbf{A} \rangle_1 &= a_1 e_x + a_2 e_y + a_3 e_z \\ \langle \mathbf{A} \rangle_2 &= a_4 e_{xy} + a_5 e_{xz} + a_6 e_{yz} \\ \langle \mathbf{A} \rangle_3 &= a_7 e_{xyz} \end{aligned}}
The elements of even grade form a subalgebra because the sum or product of even grade elements always results in an element of even grade. The elements of odd grade do not form a subalgebra.
Relation to other algebras
Cℓ_0(R) : Real numbers (scalars). A scalar can (and should) be thought of as zero vectors multiplied together. See Empty product.
Cℓ_0(C) : Complex numbers
Cℓ_1(R) : Split-complex numbers
Cℓ_1(C) : Bicomplex numbers
Cℓ_2^0(R) : Complex numbers (The superscript 0 indicates the even subalgebra)
Cℓ_3^0(R) : Quaternions
Cℓ_3^0(C) : Biquaternions
Multivector multiplication using tensors
To find the product
we have to multiply every component of the first multivector with every component of the second multivector.
 {\displaystyle \begin{aligned} AB = & \phantom{+} (a_0 b_0 (1)(1) + a_0 b_1 (1) e_x + a_0 b_2 (1) e_y + a_0 b_3 (1) e_{xy}) \\ &+ (a_1 b_0 \, e_x (1) + a_1 b_1 \, e_x e_x + a_1 b_2 \, e_x e_y + a_1 b_3 \, e_x e_{xy}) \\ &+ (a_2 b_0 \, e_y (1) + a_2 b_1 \, e_y e_x + a_2 b_2 \, e_y e_y + a_2 b_3 \, e_y e_{xy}) \\ &+ (a_3 b_0 \, e_{xy} (1) + a_3 b_1 \, e_{xy} e_x + a_3 b_2 \, e_{xy} e_y + a_3 b_3 \, e_{xy} e_{xy}) \end{aligned}}
Then we reduce each of the 16 resulting terms to its standard form.
 {\displaystyle \begin{aligned} AB = & \phantom{+} (a_0 b_0 (1) + a_0 b_1 \, e_x + a_0 b_2 \, e_y + a_0 b_3 \, e_{xy}) \\ &+ (a_1 b_0 \, e_x + a_1 b_1 (1) + a_1 b_2 \, e_{xy} + a_1 b_3 \, e_y) \\ &+ (a_2 b_0 \, e_y - a_2 b_1 \, e_{xy} + a_2 b_2 (1) - a_2 b_3 \, e_x) \\ &+ (a_3 b_0 \, e_{xy} - a_3 b_1 \, e_y + a_3 b_2 \, e_x - a_3 b_3 (1)) \end{aligned}}
Finally we collect like products into the four components of the final multivector.
 {\displaystyle \begin{aligned} AB = & \phantom{+} (a_0 b_0 + a_1 b_1 + a_2 b_2 - a_3 b_3)(1) \\ &+ (a_1 b_0 + a_0 b_1 + a_3 b_2 - a_2 b_3) \, e_x \\ &+ (a_2 b_0 - a_3 b_1 + a_0 b_2 + a_1 b_3) \, e_y \\ &+ (a_3 b_0 - a_2 b_1 + a_1 b_2 + a_0 b_3) \, e_{xy} \end{aligned}}
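The collected component formulas translate directly into code. A sketch (numpy assumed; `cl2_product` is our own helper, with multivectors stored as [scalar, e_x, e_y, e_xy]):

```python
import numpy as np

def cl2_product(a, b):
    """Product of two Cl(2,0) multivectors [scalar, e_x, e_y, e_xy],
    using the component formulas collected above."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar
        a1*b0 + a0*b1 + a3*b2 - a2*b3,   # e_x
        a2*b0 - a3*b1 + a0*b2 + a1*b3,   # e_y
        a3*b0 - a2*b1 + a1*b2 + a0*b3,   # e_xy
    ])

ex  = np.array([0, 1, 0, 0])
ey  = np.array([0, 0, 1, 0])
exy = np.array([0, 0, 0, 1])

assert np.array_equal(cl2_product(ex, ex), [1, 0, 0, 0])     # e_x e_x = 1
assert np.array_equal(cl2_product(ex, ey), exy)              # e_x e_y = e_xy
assert np.array_equal(cl2_product(exy, exy), [-1, 0, 0, 0])  # e_xy^2 = -1
```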
This is all very tedious and error-prone. It would be nice if there were some way to cut straight to the end. Tensor notation allows us to do just that.
To find the tensor that we need we first need to know which terms end up as scalars, which terms end up as vectors...etc. There is an easy way to do this and it involves the multiplication table.
First, let's start with an easy one.
Complex numbers
The multiplication table for Cℓ_2^0(R) (which is isomorphic to the complex numbers) is:
We can see then that:

It worked! All the terms in the first row are scalars and all the terms in the second row are bivectors. This is exactly what we are looking for.

Pay special attention to the signs in the final matrix above.
Therefore to find the product
We would multiply:
Each row of the final matrix has exactly the right terms with exactly the right signs.
The vector above represents a complex number. You should think of the first column of the matrix above as representing another complex number. All the other terms in the matrix are just there to make our lives a little bit easier.
It works so well that complex numbers can be represented as matrices:
Which corresponds perfectly to a multiplication table for complex numbers:
Quaternions
The multiplication table for Cℓ_3^0(R) (which is isomorphic to the quaternions) is:
The entire 2nd row of the multiplication table is just multiplied by the entire first row.
The entire 3rd row of the multiplication table is just multiplied by the entire first row.
The entire 4th row of the multiplication table is just multiplied by the entire first row.
We can see then that if we multiply each row by the first row again then we get:

This works because we have in effect multiplied each term by a second term twice. In other words, we have multiplied every term by the square of another term, and the square of every term is either +1 or −1.
Therefore to find the product
We would multiply:
Just as complex numbers can be represented as matrices, so a quaternion can be represented as:
Which corresponds to a multiplication table for quaternions:


Cℓ_{2}
The multiplication table for Cℓ_{2}(R) is:
We can see then that:

Therefore to find the product
We would multiply:
 {\displaystyle \left( \begin{array}{rrrr} b_0 & b_1 & b_2 & -b_3 \\ b_1 & b_0 & b_3 & -b_2 \\ b_2 & -b_3 & b_0 & b_1 \\ b_3 & -b_2 & b_1 & b_0 \end{array} \right) \left( \begin{array}{r} a_0 \\ a_1 \\ a_2 \\ a_3 \end{array} \right) = \left( \begin{array}{l} b_0 a_0 + b_1 a_1 + b_2 a_2 - b_3 a_3 \\ b_1 a_0 + b_0 a_1 + b_3 a_2 - b_2 a_3 \\ b_2 a_0 - b_3 a_1 + b_0 a_2 + b_1 a_3 \\ b_3 a_0 - b_2 a_1 + b_1 a_2 + b_0 a_3 \end{array} \right)}
Squares of pseudoscalars are either +1 or −1
In 0 dimensions:
In 1 dimension:
In 2 dimensions:
In 3 dimensions:
In 4 dimensions:
In 5 dimensions:
In 6 dimensions:
In 7 dimensions:
In 8 dimensions:
In 9 dimensions:
Bivectors in higher dimensions
A simple bivector can be used to represent a single rotation.
In four dimensions a rigid object can rotate in two different ways simultaneously. Such a rotation can only be represented as the sum of two simple bivectors.
In six dimensions a rigid object can rotate in three different ways simultaneously. Such a rotation can only be represented as the sum of three simple bivectors.
From Wikipedia:Bivector
The wedge product of two vectors is a bivector, but not all bivectors are wedge products of two vectors. For example, in four dimensions the bivector
cannot be written as the wedge product of two vectors. A bivector that can be written as the wedge product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions;
A bivector has a real square if and only if it is simple.
 But:
Other quadratic forms
The square of a vector is:
 {\displaystyle \begin{aligned} \mathbf{v} \mathbf{v} &= (v_x e_x + v_y e_y)(v_x e_x + v_y e_y) \\ &= v_x v_x \, e_x e_x + v_x v_y \, e_x e_y + v_y v_x \, e_y e_x + v_y v_y \, e_y e_y \\ &= v_x v_x (1) + v_y v_y (1) + v_x v_y \, e_x e_y - v_y v_x \, e_x e_y \\ &= (v_x v_x + v_y v_y)(1) + (v_x v_y - v_y v_x) \, e_{xy} \\ &= (v_x^2 + v_y^2)(1) + (0) \, e_{xy} \\ &= (v_x^2 + v_y^2)(1) \\ &= \text{scalar} \end{aligned}}
 () is called the quadratic form. In this case both terms are positive but some Clifford algebras have quadratic forms with negative terms. Some have both positive and negative terms.
From Wikipedia:Clifford algebra:
Every nondegenerate quadratic form on a finitedimensional real vector space is equivalent to the standard diagonal form:
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the ^{*}signature of the quadratic form. The real vector space with this quadratic form is often denoted R^{p,q}. The Clifford algebra on R^{p,q} is denoted Cℓ_{p,q}(R). The symbol Cℓ_{n}(R) means either Cℓ_{n,0}(R) or Cℓ_{0,n}(R) depending on whether the author prefers positivedefinite or negativedefinite spaces.
A standard basis^{w} {e_{i}} for R^{p,q} consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓ_{p,q}(R) will therefore have p vectors that square to +1 and q vectors that square to −1.
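Signatures with negative squares need only a small change to a blade-product routine: when a repeated factor cancels in the negative-definite part, it contributes e_k e_k = −1. A sketch (`blade_product_pq` is our own helper) that checks the quaternion structure of Cℓ_{0,2}(R):

```python
def blade_product_pq(A, B, p, q):
    """Geometric product of basis blades of Cl(p,q): e_0..e_{p-1} square
    to +1 and e_p..e_{p+q-1} square to -1. Blades are bitmasks
    (bit k set means the factor e_k is present). Returns (sign, blade)."""
    sign, a = 1, A
    for k in range(B.bit_length()):
        if B & (1 << k):
            if bin(a >> (k + 1)).count("1") % 2:
                sign = -sign              # hops past higher-index factors
            if a & (1 << k) and k >= p:
                sign = -sign              # e_k e_k = -1 in the negative part
            a ^= (1 << k)
    return sign, a

# In Cl(0,2) the vectors e_0, e_1 and the bivector e_0 e_1 all square to -1:
# exactly the quaternion units i, j, k.
for blade in (0b01, 0b10, 0b11):
    assert blade_product_pq(blade, blade, 0, 2) == (-1, 0)
# i j = k and j i = -k.
assert blade_product_pq(0b01, 0b10, 0, 2) == (1, 0b11)
assert blade_product_pq(0b10, 0b01, 0, 2) == (-1, 0b11)
```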
From Wikipedia:Spacetime algebra:
^{*}Spacetime algebra (STA) is a name for the Clifford algebra^{w} Cl_{3,1}(R), or equivalently the geometric algebra^{w} G(M^{4}), which can be particularly closely associated with the geometry of special relativity^{w} and relativistic spacetime^{w}. See also ^{*}Algebra of physical space.
The spacetime algebra may be built up from an orthogonal basis of one timelike vector and three spacelike vectors, , with the multiplication rule
where is the Minkowski metric^{w} with signature (− + + +).
Thus:
The basis vectors share these properties with the ^{*}Gamma matrices, but no explicit matrix representation need be used in STA.
Cℓ_{3,0}(R) : Algebra of physical space (Time = scalar)
Cℓ_{3,1}(R) : Spacetime algebra (Time = vector)
Cℓ_{0,2}(R) : Quaternions (Three quaternion units = two vectors that square to −1 and one bivector that squares to −1)
Rotors
 See also: ^{*}Rotor (mathematics)
The inverse of a vector is:
The projection of onto (or the parallel part) is
and the rejection of from (or the orthogonal part) is
The reflection of a vector along a vector , or equivalently across the hyperplane orthogonal to , is the same as negating the component of a vector parallel to . The result of the reflection will be

If a is a unit vector then and therefore
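The sandwich-product reflection can be checked against the classical component formula by using the Pauli-matrix representation of vectors from earlier (numpy assumed; `to_pauli` and `from_pauli` are our own helpers):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def to_pauli(v):
    """Matrix representation v_x s1 + v_y s2 + v_z s3 of a 3D vector."""
    return sum(c * m for c, m in zip(v, s))

def from_pauli(M):
    """Recover vector components using trace(s_i s_j) = 2 delta_ij."""
    return np.array([np.trace(M @ m).real for m in s]) / 2

a = np.array([1.0, 0.0, 0.0])   # unit vector defining the reflection
v = np.array([2.0, 3.0, 4.0])

# Sandwich product -a v a (a is a unit vector, so a^{-1} = a).
reflected = from_pauli(-to_pauli(a) @ to_pauli(v) @ to_pauli(a))

# Classical formula: negate the component of v parallel to a.
assert np.allclose(reflected, v - 2 * np.dot(v, a) * a)
```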
is called the sandwich product, a double-sided product.
If we have a product of vectors then we denote the reverse as
Any rotation is equivalent to 2 reflections.
R is called a Rotor
If a and b are unit vectors then the rotor is automatically normalised:
2 rotations becomes:
R_{2}R_{1} represents rotor R_{1} rotated by rotor R_{2}. This is called a single-sided transformation. (R_{2}R_{1}R_{2} would be double-sided.) Rotors therefore do not transform double-sided the way other objects do; they transform single-sided.
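Composing two reflections into a rotor is also easy to verify in the Pauli-matrix representation (numpy assumed; helper names are ours). Two unit vectors 45° apart should rotate a vector by twice that angle, i.e. 90°:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def to_pauli(v):
    return sum(c * m for c, m in zip(v, s))

def from_pauli(M):
    return np.array([np.trace(M @ m).real for m in s]) / 2

# Two unit vectors 45 degrees apart in the xy-plane.
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])

R = to_pauli(b) @ to_pauli(a)       # rotor R = ba (two reflections)
R_rev = to_pauli(a) @ to_pauli(b)   # its reverse

# The double-sided product R v R~ rotates v by TWICE the 45-degree angle.
rotated = from_pauli(R @ to_pauli(np.array([1.0, 0.0, 0.0])) @ R_rev)
assert np.allclose(rotated, [0.0, 1.0, 0.0])
```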
Quaternions
The square root of the product of a quaternion with its conjugate is called its ^{*}norm:
A unit quaternion is a quaternion of norm one. Unit quaternions, also known as ^{*}versors, provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions.
Every nonzero quaternion has a multiplicative inverse
Thus quaternions form a ^{*}division algebra.
The inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components.
A ^{*}3D Euclidean vector such as (2, 3, 4) or (a_{x}, a_{y}, a_{z}) can be rewritten as 0 + 2 i + 3 j + 4 k or 0 + a_{x} i + a_{y} j + a_{z} k, where i, j, k are unit vectors representing the three ^{*}Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector
can be represented by a quaternion. This can be done using an ^{*}extension of Euler's formula^{w}:
It can be shown that the desired rotation can be applied to an ordinary vector in 3dimensional space, considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:
using the ^{*}Hamilton product
The conjugate of a product of two quaternions is the product of the conjugates in the reverse order.
Conjugation by the product of two quaternions is the composition of conjugations by these quaternions: If p and q are unit quaternions, then rotation (conjugation) by pq is
 ,
which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero.
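The conjugation recipe above can be sketched directly (numpy assumed; `qmul` and the half-angle construction are our own helpers, with quaternions stored as (w, x, y, z) tuples):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

theta = np.pi / 2
axis = np.array([0.0, 0.0, 1.0])                 # rotate about the z-axis
q = (np.cos(theta / 2), *(np.sin(theta / 2) * axis))
q_conj = (q[0], -q[1], -q[2], -q[3])

p = (0.0, 2.0, 3.0, 4.0)                         # the vector (2, 3, 4)
result = qmul(qmul(q, p), q_conj)                # conjugation q p q^{-1}

assert np.isclose(result[0], 0.0)                # scalar part is zero
assert np.allclose(result[1:], [-3.0, 2.0, 4.0]) # 90 degrees about z
```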
The imaginary part of a quaternion behaves like a vector in a three-dimensional vector space, and the real part a behaves like a ^{*}scalar in R. When quaternions are used in geometry, it is more convenient to define them as ^{*}a scalar plus a vector:
When multiplying the vector/imaginary parts, in place of the rules i^{2} = j^{2} = k^{2} = ijk = −1 we have the quaternion multiplication rule:
From these rules it follows immediately that (^{*}see details):
It is important to note, however, that the vector part of a quaternion is, in truth, an "axial" vector or "pseudovector", not an ordinary or "polar" vector.
The reflection of a vector r in a plane perpendicular to a unit vector w can be written:
Two reflections make a rotation by an angle twice the angle between the two reflection planes, so
corresponds to a rotation of 180° in the plane containing σ_{1} and σ_{2}.
This is very similar to the corresponding quaternion formula,
In fact, the two are identical, if we make the identification
and it is straightforward to confirm that this preserves the Hamilton relations
In this picture, quaternions correspond not to vectors but to bivectors^{w} – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers^{w} becomes clearer, too: in 2D, with two vector directions σ_{1} and σ_{2}, there is only one bivector basis element σ_{1}σ_{2}, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ_{1}σ_{2}, σ_{2}σ_{3}, σ_{3}σ_{1}, so three imaginaries.
The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ^{+}_{3,0}(R) of the Clifford algebra^{w} Cℓ_{3,0}(R).
Spinors
 See also: ^{*}Bispinor
External link:An introduction to spinors
Spinors may be regarded as non-normalised rotors which transform single-sidedly.^{[20]}
Note: The (real) ^{*}spinors in three dimensions are quaternions, and the action of an even-graded element on a spinor is given by ordinary quaternionic multiplication.^{[21]}
A spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°. This property characterizes spinors.^{[22]}
In three dimensions...the ^{*}Lie group ^{*}SO(3) is not ^{*}simply connected. Mathematically, one can tackle this problem by exhibiting the ^{*}special unitary group SU(2), which is also the ^{*}spin group in three ^{*}Euclidean dimensions, as a ^{*}double cover of SO(3).
SU(2) is the following group,
 SU(2) = { [ α, −β̄ ; β, ᾱ ] : α, β ∈ C, |α|^{2} + |β|^{2} = 1 } ,
where the overline denotes ^{*}complex conjugation.
For comparison: Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as
 [ a + bi, c + di ; −c + di, a − bi ] .
If X = (x_{1}, x_{2}, x_{3}) is a vector in R^{3}, then we identify X with the 2 × 2 matrix with complex entries
 X = [ x_{3}, x_{1} − ix_{2} ; x_{1} + ix_{2}, −x_{3} ] .
Note that −det(X) gives the square of the Euclidean length of X regarded as a vector, and that X is a ^{*}trace-free, or better, trace-zero ^{*}Hermitian matrix.
The unitary group acts on X via
 X ↦ MXM^{*} ,
where M ∈ SU(2). Note that, since M is unitary,
 det(MXM^{*}) = det(X) , and
 MXM^{*} is trace-zero Hermitian.
Hence SU(2) acts via rotation on the vectors X. Conversely, since any ^{*}change of basis which sends trace-zero Hermitian matrices to trace-zero Hermitian matrices must be unitary, it follows that every rotation also lifts to SU(2). However, each rotation is obtained from a pair of elements M and −M of SU(2). Hence SU(2) is a double cover of SO(3). Furthermore, SU(2) is easily seen to be itself simply connected by realizing it as the group of unit ^{*}quaternions, a space ^{*}homeomorphic to the ^{*}3-sphere.
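A numerical sketch of this action, assuming the identification of a vector x with the trace-zero Hermitian matrix [ x₃, x₁ − ix₂ ; x₁ + ix₂, −x₃ ]:

```python
import numpy as np

rng = np.random.default_rng(0)

# a random element of SU(2): rows (alpha, -conj(beta)) and (beta, conj(alpha))
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
n = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a / n, b / n
M = np.array([[a, -np.conj(b)], [b, np.conj(a)]])

def embed(x):
    """Identify a real 3-vector with a trace-zero Hermitian matrix."""
    return np.array([[x[2], x[0] - 1j*x[1]],
                     [x[0] + 1j*x[1], -x[2]]])

x = np.array([1.0, 2.0, -0.5])
X = embed(x)

Y = M @ X @ M.conj().T                       # the SU(2) action
assert np.allclose(Y, Y.conj().T)            # still Hermitian
assert np.isclose(np.trace(Y).real, 0)       # still trace-zero
assert np.isclose(np.linalg.det(Y).real,
                  np.linalg.det(X).real)     # length (via -det) preserved

# M and -M induce the same rotation: a double cover
assert np.allclose((-M) @ X @ (-M).conj().T, Y)
```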
A unit quaternion has the cosine of half the rotation angle as its scalar part and the sine of half the rotation angle multiplying a unit vector along some rotation axis (here assumed fixed) as its pseudovector (or axial vector) part. If the initial orientation of a rigid body (with unentangled connections to its fixed surroundings) is identified with a unit quaternion having a zero pseudovector part and +1 for the scalar part, then after one complete rotation (2π rad) the pseudovector part returns to zero and the scalar part has become −1 (entangled). After two complete rotations (4π rad) the pseudovector part again returns to zero and the scalar part returns to +1 (unentangled), completing the cycle.
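The half-angle behaviour is easy to tabulate; in the sketch below `rotor` (our name, not from the original text) builds the unit quaternion with scalar part cos(θ/2):

```python
import numpy as np

def rotor(theta, axis):
    """Unit quaternion (w, x, y, z) for rotation by theta about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    return np.array([np.cos(theta/2), *(np.sin(theta/2) * axis)])

axis = np.array([0.0, 0.0, 1.0])

q_full = rotor(2*np.pi, axis)       # one complete turn
q_two  = rotor(4*np.pi, axis)       # two complete turns

assert np.allclose(q_full, [-1, 0, 0, 0])  # scalar part -1: "entangled"
assert np.allclose(q_two,  [ 1, 0, 0, 0])  # back to +1: cycle complete
```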
The association of a spinor with a 2×2 complex ^{*}Hermitian matrix was formulated by Élie Cartan.^{[23]}
In detail, given a vector x = (x_{1}, x_{2}, x_{3}) of real (or complex) numbers, one can associate the complex matrix
 X = [ x_{3}, x_{1} − ix_{2} ; x_{1} + ix_{2}, −x_{3} ] .
Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3space:
 det X = −(length x)^{2}.
 X ^{2} = (length x)^{2}I, where I is the identity matrix.
 ½(XY + YX) = (x ⋅ y) I ^{[23]}
 ½(XY − YX) = iZ , where Z is the matrix associated to the cross product z = x × y.
 If u is a unit vector, then −UXU is the matrix associated to the vector obtained from x by reflection in the plane orthogonal to u.
 It is an elementary fact from ^{*}linear algebra that any rotation in 3-space factors as a composition of two reflections. (Similarly, any orientation-reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation, decomposing as the reflection in the plane perpendicular to a unit vector u_{1} followed by the reflection in the plane perpendicular to u_{2}, then the matrix U_{2}U_{1}XU_{1}U_{2} represents the rotation of the vector x through R.
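The two-reflection factorisation can be verified numerically; the sketch below rotates by 60° about the z-axis using two reflection planes 30° apart (the helpers `embed`/`unembed` are our names):

```python
import numpy as np

def embed(x):
    """Vector -> trace-zero Hermitian matrix, as in the text."""
    return np.array([[x[2], x[0] - 1j*x[1]],
                     [x[0] + 1j*x[1], -x[2]]])

def unembed(X):
    """Recover the vector from its matrix."""
    return np.array([X[1, 0].real, X[1, 0].imag, X[0, 0].real])

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([np.cos(np.pi/6), np.sin(np.pi/6), 0.0])  # 30 degrees apart
U1, U2 = embed(u1), embed(u2)

x = np.array([0.3, -1.2, 2.0])
X = embed(x)

# reflection in the plane orthogonal to u: X -> -U X U
refl = lambda U, A: -U @ A @ U
Y = refl(U2, refl(U1, X))    # equals U2 U1 X U1 U2: rotation by 60 deg about z
y = unembed(Y)

# compare with the ordinary rotation matrix about the z-axis
c, s = np.cos(np.pi/3), np.sin(np.pi/3)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
assert np.allclose(y, R @ x)
```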
Having effectively encoded all of the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the ^{*}column vectors) play. Provisionally, a spinor is a column vector
 ξ = [ ξ_{1} ; ξ_{2} ] , with complex entries ξ_{1} and ξ_{2}.
The space of spinors is evidently acted upon by complex 2×2 matrices. Furthermore, the product of two reflections in a given pair of unit vectors defines a 2×2 matrix whose action on Euclidean vectors is a rotation, so there is an action of rotations on spinors.
Often, the first example of spinors that a student of physics encounters is the 2×1 spinors used in Pauli's theory of electron spin. The ^{*}Pauli matrices are a vector of three 2×2 ^{*}matrices that are used as ^{*}spin ^{*}operators.
Given a ^{*}unit vector in 3 dimensions, for example (a, b, c), one takes a ^{*}dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector.
The ^{*}eigenvectors of that spin matrix are the spinors for spin1/2 oriented in the direction given by the vector.
Example: u = (0.8, 0.6, 0) is a unit vector. Dotting this with the Pauli spin matrices gives the matrix:
 S_{u} = 0.8 σ_{1} + 0.6 σ_{2} = [ 0, 0.8 − 0.6i ; 0.8 + 0.6i, 0 ] .
The eigenvectors may be found by the usual methods of ^{*}linear algebra, but a convenient trick is to note that a Pauli spin matrix is an ^{*}involutory matrix, that is, the square of the above matrix is the identity matrix.
Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply 1 ± S_{u}. That is,
 1 ± S_{u} = [ 1, ±(0.8 − 0.6i) ; ±(0.8 + 0.6i), 1 ] .
One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, eigenvector solutions for the two eigenvalues are:
 [ 1 ; 0.8 + 0.6i ] and [ 1 ; −(0.8 + 0.6i) ] .
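The eigenvector trick can be verified directly (a sketch, not part of the original text):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

u = np.array([0.8, 0.6, 0.0])
Su = u[0]*s1 + u[1]*s2 + u[2]*s3      # spin matrix for direction u

assert np.allclose(Su @ Su, I2)       # involutory, as claimed

# the trick: columns of 1 +- Su are proportional to the eigenvectors
plus  = (I2 + Su)[:, 0]
minus = (I2 - Su)[:, 0]

assert np.allclose(Su @ plus,  plus)    # eigenvalue +1
assert np.allclose(Su @ minus, -minus)  # eigenvalue -1
```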
The trick used to find the eigenvectors is related to the concept of ^{*}ideals, that is, the matrix eigenvectors (1 ± S_{u})/2 are ^{*}projection operators or ^{*}idempotents and therefore each generates an ideal in the Pauli algebra. The same trick works in any ^{*}Clifford algebra, in particular the ^{*}Dirac algebra that is discussed below. These projection operators are also seen in ^{*}density matrix theory where they are examples of pure density matrices.
More generally, the projection operator for spin in the (a, b, c) direction is given by
 ½ [ 1 + c, a − ib ; a + ib, 1 − c ] ,
and any nonzero column can be taken as the projection operator. While the two columns appear different, one can use a^{2} + b^{2} + c^{2} = 1 to show that they are multiples (possibly zero) of the same spinor.
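A quick check that the projection operator is idempotent and that its two columns are linearly dependent (i.e., multiples of one spinor); the direction (a, b, c) = (2/3, 2/3, 1/3) is an arbitrary choice for this sketch:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

a, b, c = 2/3, 2/3, 1/3                # satisfies a^2 + b^2 + c^2 = 1
P = (I2 + a*s1 + b*s2 + c*s3) / 2      # projection onto spin-up along (a, b, c)

assert np.allclose(P @ P, P)           # idempotent, as a projection must be

# the two columns are (possibly zero) multiples of the same spinor:
col0, col1 = P[:, 0], P[:, 1]
cross = col0[0]*col1[1] - col0[1]*col1[0]   # 2x2 determinant of the columns
assert np.isclose(cross, 0)                 # columns are linearly dependent
```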
 From Wikipedia:Tensor#Spinors:
When changing from one ^{*}orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not ^{*}simply connected (see ^{*}orientation entanglement and ^{*}plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.^{[24]} A ^{*}spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.^{[25]}^{[26]}
Succinctly, spinors are elements of the ^{*}spin representation of the rotation group, while tensors are elements of its ^{*}tensor representations. Other ^{*}classical groups have tensor representations, and so also tensors that are compatible with the group, but all noncompact classical groups have infinitedimensional unitary representations as well.
 From Wikipedia:Spinor:
Quote from Élie Cartan, The Theory of Spinors, Hermann, Paris, 1966: "Spinors...provide a linear representation of the group of rotations in a space with any number n of dimensions, each spinor having 2^{ν} components where n = 2ν + 1 or n = 2ν." The star (*) refers to Cartan 1913.
(Note: ν is the number of ^{*}simultaneous independent rotations an object can have in n dimensions.)
Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anticommutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.
In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).
 From Wikipedia:Spinor:
In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in two-component spinors transforming under three-dimensional infinitesimal rotations. The relativistic ^{*}Dirac equation for the electron is an equation for four-component spinors transforming under infinitesimal Lorentz transformations, for which a substantially similar theory of spinors exists.
Next section: Intermediate mathematics/Functions
Search Math wiki
See also
External links
 MIT open courseware
 Cheat sheets
 http://mathinsight.org
 https://math.stackexchange.com
 https://www.eng.famu.fsu.edu/~dommelen/quantum/style_a/IV._Supplementary_Informati.html
 http://www.sosmath.com
 https://webhome.phy.duke.edu/~rgb/Class/intro_math_review/intro_math_review/node1.html
 Wikiversity:Mathematics
 w:c:4chanscience:Mathematics
References
 ↑ Wikipedia:Division algebra
 ↑ Wikipedia:Lie group
 ↑ Wikipedia:Cartesian product
 ↑ Wikipedia:Tangent bundle
 ↑ Wikipedia:Lie group
 ↑ Wikipedia:Topological space
 ↑ Wikipedia:Normed vector space
 ↑ Wikipedia:Norm (mathematics)
 ↑ Wikipedia:Norm (mathematics)
 ↑ Wikipedia:Sesquilinear form
 ↑ Wikipedia:Outer product
 ↑ Wikipedia:Tensor (intrinsic definition)
 ↑ Wikipedia:Tensor
 ↑ Wikipedia:Special unitary group
 ↑ Lawson, H. Blaine; Michelsohn, Marie-Louise (1989). Spin Geometry. Princeton University Press. ISBN 9780691085425, p. 14
 ↑ Friedrich, Thomas (2000), Dirac Operators in Riemannian Geometry, American Mathematical Society^{w}, ISBN 9780821820551, p. 15
 ↑ "Pauli matrices". Planetmath website. 28 March 2008. http://planetmath.org/PauliMatrices. Retrieved 28 May 2013.
 ↑ The Minkowski inner product is not an ^{*}inner product, since it is not ^{*}positive-definite, i.e. the ^{*}quadratic form η(v, v) need not be positive for nonzero v. The positive-definite condition has been replaced by the weaker condition of nondegeneracy. The bilinear form is said to be indefinite.
 ↑ The matrices in this basis, provided below, are the similarity transforms of the Dirac basis matrices of the previous paragraph, , where .
 ↑ Wikipedia:Rotor (mathematics)
 ↑ Wikipedia:Spinor#Three_dimensions
 ↑ Wikipedia:Spinor
 ↑ ^{23.0} ^{23.1} Cartan, Élie (1981) [1938], The Theory of Spinors, New York: Dover Publications, ISBN 9780486640709, MR 631850, https://books.google.com/books?isbn=0486640701
 ↑ Roger Penrose (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206.
 ↑ E. Meinrenken (2013), "The spin representation", Clifford Algebras and Lie Theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics, 58, Springer-Verlag, doi:10.1007/978-3-642-36216-3_3
 ↑ S.H. Dong (2011), "Chapter 2, Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38