Euclidean space

Euclidean space - in the original sense, a space whose properties are described by the axioms of Euclidean geometry. In this case it is assumed that the space has dimension 3.

In the modern, more general sense, the term can designate one of several similar and closely related objects defined below. The n-dimensional Euclidean space is usually denoted E^n, although the not entirely acceptable notation R^n is also often used.

1. A finite-dimensional real vector space with a positive-definite scalar product, regarded as a metric space with the distance

d(x, y) = |x - y|,

in the simplest case (the Euclidean norm):

d(x, y) = √((x1 - y1)² + ... + (xn - yn)²),

where x = (x1, ..., xn) and y = (y1, ..., yn) (in a Euclidean space one can always choose a basis in which this simplest form holds).

2. The metric space corresponding to the space described above, i.e. with the metric introduced by the formula

d(x, y) = √((x1 - y1)² + ... + (xn - yn)²).

Related definitions

  • By a Euclidean metric one may understand both the metric described above and the corresponding Riemannian metric.
  • Local Euclideanness usually means that each tangent space of a Riemannian manifold is a Euclidean space with all the ensuing properties, for example the possibility (due to the smoothness of the metric) of introducing coordinates in a small neighborhood of a point in which the distance is expressed (up to some order of magnitude) as described above.
  • A metric space is also called locally Euclidean if coordinates can be introduced on it in which the metric is Euclidean (in the sense of the second definition) everywhere (or at least on a finite region); such is, for example, a Riemannian manifold of zero curvature.

Examples

Illustrative examples of Euclidean spaces are the following spaces:

A more abstract example:

Variations and generalizations

See also

Links


Wikimedia Foundation. 2010.


§3. Dimension and basis of vector space

Linear combination of vectors

Trivial and non-trivial linear combination

Linearly dependent and linearly independent vectors

Properties of vector space associated with linear dependence of vectors

n-dimensional vector space

Dimension of vector space

Expansion of a vector in a basis

§4. Transition to a new basis

Transition matrix from the old basis to the new one

Vector coordinates in the new basis

§5. Euclidean space

Scalar product

Euclidean space

Length (norm) of the vector

Properties of vector length

Angle between vectors

Orthogonal vectors

Orthonormal basis


§ 3. Dimension and basis of vector space

Consider some vector space (V, ⊕, ∘) over the field R. Let a1, a2, ..., am be some elements of the set V, i.e. vectors.

A linear combination of the vectors a1, a2, ..., am is any vector equal to the sum of the products of these vectors by arbitrary elements of the field R (i.e. by scalars):

b = α1·a1 + α2·a2 + ... + αm·am.

If all the scalars αi are equal to zero, such a linear combination is called trivial (the simplest), and it equals the zero vector.

If at least one scalar is nonzero, the linear combination is called non-trivial.

The vectors are called linearly independent if only the trivial linear combination of these vectors equals the zero vector:

α1·a1 + α2·a2 + ... + αm·am = 0 implies α1 = α2 = ... = αm = 0.

The vectors are called linearly dependent if there is at least one non-trivial linear combination of these vectors equal to the zero vector.

Example. Consider the set of ordered quadruples of real numbers; it is a vector space over the field of real numbers. Task: find out whether the vectors a1, a2 and a3 are linearly dependent.

Solution.

Let's form a linear combination of these vectors, x1·a1 + x2·a2 + x3·a3, where x1, x2, x3 are unknown numbers, and require that this linear combination equal the zero vector: x1·a1 + x2·a2 + x3·a3 = 0.

In this equality we write the vectors as columns of numbers:

If there are numbers x1, x2, x3 for which this equality holds, and at least one of these numbers is not equal to zero, then this is a non-trivial linear combination and the vectors are linearly dependent.

Let's do the following:

Thus, the problem reduces to solving a system of linear equations:

Solving it, we get:

The ranks of the augmented and coefficient matrices of the system are equal and less than the number of unknowns; therefore, the system has an infinite number of solutions.

Let , then and .

So, for these vectors there exists a non-trivial linear combination equal to the zero vector, which means that these vectors are linearly dependent.
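The check carried out above can be sketched in code. The concrete vectors of the worked example were lost in extraction, so the quadruples below are illustrative assumptions, chosen so that a3 = a1 + a2; the rank criterion is the same one used in the solution (rank less than the number of vectors means a non-trivial combination exists).

```python
import numpy as np

# Illustrative stand-ins for the example's lost vectors (a3 = a1 + a2,
# so the set is linearly dependent by construction).
a1 = np.array([1, 2, 0, 1])
a2 = np.array([0, 1, 1, 3])
a3 = a1 + a2

# Columns of M are the vectors; M @ x = 0 has a non-trivial solution
# exactly when rank(M) < number of vectors.
M = np.column_stack([a1, a2, a3])
rank = np.linalg.matrix_rank(M)
print(rank < M.shape[1])  # -> True: the vectors are linearly dependent
```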

Let's note some properties of vector space associated with linear dependence of vectors:

1. If the vectors are linearly dependent, then at least one of them is a linear combination of the others.

2. If among the vectors there is a zero vector, then these vectors are linearly dependent.

3. If some subset of the vectors is linearly dependent, then the whole set of these vectors is linearly dependent.

A vector space V is called an n-dimensional vector space if it contains n linearly independent vectors, and any set of (n + 1) vectors is linearly dependent.

The number n is called the dimension of the vector space and is denoted dim(V), from the English "dimension" (measurement, size, extent).

A collection of n linearly independent vectors of an n-dimensional vector space is called a basis.

Theorem (on the expansion of a vector in a basis). Every vector of a vector space can be represented, in a unique way, as a linear combination of the basis vectors:

x = x1·e1 + x2·e2 + ... + xn·en. (*)

Formula (*) is called the expansion of the vector in the basis e1, e2, ..., en, and the numbers x1, x2, ..., xn are the coordinates of the vector x in this basis.

A vector space can have many, even infinitely many, bases. In each new basis the same vector will have different coordinates.


§ 4. Transition to a new basis

In linear algebra, the problem often arises of finding the coordinates of a vector in a new basis if its coordinates in the old basis are known.

Consider some n-dimensional vector space (V, +, ·) over the field R. Let there be two bases in this space: the old basis e1, e2, ..., en and the new basis e1*, e2*, ..., en*.

Task: find the coordinates of a vector x in the new basis.

Let the vectors of the new basis have the following expansions in the old basis:

e1* = a11·e1 + a12·e2 + ... + a1n·en,
e2* = a21·e1 + a22·e2 + ... + a2n·en,
...
en* = an1·e1 + an2·e2 + ... + ann·en.

Let's write the coordinates of the vectors e1*, ..., en* into a matrix A not in rows, as they are written in the system, but in columns, so that the j-th column of A consists of the coordinates of the vector ej* in the old basis.

The resulting matrix A is called the transition matrix from the old basis to the new one.

The transition matrix connects the coordinates of any vector in the old and new bases by the relation

X = A·X*,

where X* is the column of the desired coordinates of the vector in the new basis.

Thus, finding the vector's coordinates in the new basis reduces to solving the matrix equation X = A·X*, where X is the column of the vector's coordinates in the old basis, A is the transition matrix from the old basis to the new one, and X* is the required column of the vector's coordinates in the new basis. From this matrix equation we get X* = A⁻¹·X.

So, the coordinates of a vector in the new basis are found from the equality

X* = A⁻¹·X.

Example. In a certain basis, the expansions of the vectors are given:

Find the coordinates of the vector in the basis.

Solution.

1. Let's write out the transition matrix to the new basis, i.e. write the coordinates of the new basis vectors in the old basis in columns:

2. Find the inverse matrix A⁻¹:

3. Perform the multiplication X* = A⁻¹·X, where X* contains the coordinates of the vector in the new basis:

Answer: .
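The computation X* = A⁻¹·X can be sketched as follows. The concrete matrices of the worked example were lost in extraction, so the transition matrix A and the old coordinates X below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the change of basis X* = A^{-1} X; the numbers
# are assumptions, not the lost data of the worked example.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # transition matrix from the old basis to the new
X = np.array([3.0, 2.0])        # coordinates of the vector in the old basis

X_new = np.linalg.solve(A, X)   # solves A @ X_new = X, i.e. X_new = A^{-1} X
print(X_new)                    # -> [1. 2.]
```

Using `np.linalg.solve` instead of explicitly forming `np.linalg.inv(A)` is the standard, numerically safer way to apply A⁻¹.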


§ 5. Euclidean space

Consider some n-dimensional vector space (V, +, ·) over the field of real numbers R, and let e1, e2, ..., en be some basis of this space.

Let us introduce a metric in this vector space, i.e. define a way to measure lengths and angles. To do this, we define the concept of a scalar product.

Even at school, all students are introduced to the concept of “Euclidean geometry,” the main provisions of which are focused around several axioms based on such geometric elements as a point, a plane, a straight line, and motion. All of them together form what has long been known as “Euclidean space”.

Euclidean space, which is based on the operation of scalar multiplication of vectors, is a special case of a linear (affine) space satisfying a number of requirements. First, the scalar product is symmetric: the scalar product of the vectors x and y equals the scalar product of the vectors y and x.

Second, if the scalar product of a vector with itself is taken, the result is non-negative; it equals zero only when the vector itself is the zero vector.

Third, the scalar product is distributive: one of its arguments can be decomposed into a sum of two vectors without any change in the final result of the scalar multiplication. Finally, fourth, when one of the vectors is multiplied by a number, the scalar product is multiplied by the same number.

If all four of these conditions are met, we can confidently say that we are dealing with a Euclidean space.

From a practical point of view, Euclidean space can be characterized by the following specific examples:

  1. The simplest case is a set of geometric vectors with the scalar product defined according to the basic laws of geometry.
  2. A Euclidean space is also obtained if by vectors we understand ordered finite sets of real numbers with a given formula describing their scalar product.
  3. A special case of Euclidean space is the so-called null space, obtained when the space consists of the zero vector alone, so that the lengths of all vectors are zero.

Euclidean space has a number of specific properties. First, a scalar factor can be taken out of brackets from both the first and the second argument of the scalar product without changing the result. Second, along with the distributivity of the scalar product in its first argument, distributivity in the second argument also holds; besides the sum of vectors, distributivity also holds for the difference of vectors. Finally, third, the scalar product of any vector with the zero vector equals zero.

Thus, Euclidean space is a most important geometric concept, used in solving problems about the mutual position of vectors, which is characterized by means of the scalar product.

Definition of Euclidean space

Definition 1. A real linear space is called Euclidean if an operation is defined in it that associates with any two vectors x and y of this space a number, called the scalar product of the vectors x and y and denoted (x, y), for which the following conditions are satisfied:

1. (x, y) = (y, x);

2. (x + y, z) = (x, z) + (y, z), where z is any vector of the given linear space;

3. (λx, y) = λ(x, y), where λ is any real number;

4. (x, x) ≥ 0, and (x, x) = 0 if and only if x = 0.

For example, in the linear space of single-column matrices, the scalar product of the vectors x = (x1, x2, ..., xn)ᵀ and y = (y1, y2, ..., yn)ᵀ can be defined by the formula

(x, y) = x1·y1 + x2·y2 + ... + xn·yn.

A Euclidean space of dimension n is denoted by En. Note that there are both finite-dimensional and infinite-dimensional Euclidean spaces.
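The standard scalar product on column vectors and the four axioms above can be spot-checked numerically. This is a sketch; the sample vectors and the scalar λ are illustrative assumptions.

```python
import numpy as np

# The standard scalar product (x, y) = x1*y1 + ... + xn*yn.
def scalar(x, y):
    return float(np.dot(x, y))

# Illustrative vectors and scalar for checking axioms 1-4.
x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])
z = np.array([2.0, 2.0, 2.0])
lam = 2.5

print(scalar(x, y) == scalar(y, x))                               # axiom 1: symmetry
print(np.isclose(scalar(x + y, z), scalar(x, z) + scalar(y, z)))  # axiom 2: additivity
print(np.isclose(scalar(lam * x, y), lam * scalar(x, y)))         # axiom 3: homogeneity
print(scalar(x, x) >= 0)                                          # axiom 4: positivity
```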

Definition 2. The length (modulus) of a vector x in the Euclidean space En is the number √(x, x), and it is denoted |x| = √(x, x). Every vector of a Euclidean space has a length, and for the zero vector it equals zero.

Multiplying a non-zero vector x by the number 1/|x|, we obtain a vector whose length equals one. This operation is called normalizing the vector x.

For example, in the space of single-column matrices, the length of a vector x = (x1, x2, ..., xn)ᵀ can be determined by the formula

|x| = √(x1² + x2² + ... + xn²).
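The length and the normalization operation can be sketched as follows; the sample vector is an illustrative assumption.

```python
import numpy as np

# Length |x| = sqrt((x, x)) and normalization x / |x|.
x = np.array([3.0, 4.0])

length = np.sqrt(np.dot(x, x))   # sqrt(3^2 + 4^2) = 5.0
unit = x / length                # normalized vector of length one

print(length)                                  # -> 5.0
print(np.isclose(np.linalg.norm(unit), 1.0))   # -> True
```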

Cauchy-Bunyakovsky inequality

Let x ∈ En and y ∈ En be any two vectors. Let us prove that the following inequality holds for them:

(x, y)² ≤ (x, x)·(y, y) (the Cauchy-Bunyakovsky inequality).

Proof. Let λ be any real number. Obviously, (λx − y, λx − y) ≥ 0. On the other hand, by the properties of the scalar product we can write

(λx − y, λx − y) = λ²(x, x) − 2λ(x, y) + (y, y).

We have obtained that

λ²(x, x) − 2λ(x, y) + (y, y) ≥ 0 for every λ.

The discriminant of this quadratic trinomial in λ cannot be positive, i.e. (x, y)² − (x, x)(y, y) ≤ 0, from which it follows:

(x, y)² ≤ (x, x)·(y, y).

The inequality is proven.

Triangle inequality

Let x and y be arbitrary vectors of the Euclidean space En, i.e. x ∈ En and y ∈ En.

Let us prove that |x + y| ≤ |x| + |y| (the triangle inequality).

Proof. Obviously, |x + y|² = (x + y, x + y) = (x, x) + 2(x, y) + (y, y). Taking into account the Cauchy-Bunyakovsky inequality, (x, y) ≤ |x|·|y|, we obtain

|x + y|² ≤ |x|² + 2|x|·|y| + |y|² = (|x| + |y|)²,

hence |x + y| ≤ |x| + |y|. The triangle inequality is proven.
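Both inequalities can be spot-checked numerically on random vectors. This is a sketch, not a proof; the dimension, sample count, and tolerance are assumptions.

```python
import numpy as np

# Spot-check the Cauchy-Bunyakovsky inequality (x, y)^2 <= (x, x)(y, y)
# and the triangle inequality |x + y| <= |x| + |y| on random vectors.
rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x = rng.normal(size=4)
    y = rng.normal(size=4)
    ok &= np.dot(x, y) ** 2 <= np.dot(x, x) * np.dot(y, y) + 1e-12
    ok &= np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12
print(bool(ok))  # -> True
```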

Norm of Euclidean space

Definition 1. A linear space is called metric if any two elements x and y of this space are assigned a non-negative number ρ(x, y), called the distance between x and y (ρ(x, y) ≥ 0), and the following conditions (axioms) are satisfied:

1) ρ(x, y) = 0 if and only if x = y;

2) ρ(x, y) = ρ(y, x) (symmetry);

3) for any three vectors x, y and z of this space, ρ(x, y) ≤ ρ(x, z) + ρ(z, y).

Comment. Elements of a metric space are usually called points.

The Euclidean space En is metric: as the distance between vectors x ∈ En and y ∈ En one can take ρ(x, y) = |x − y|.

So, for example, in the space of single-column matrices, where x = (x1, ..., xn)ᵀ and y = (y1, ..., yn)ᵀ, we have

ρ(x, y) = |x − y| = √((x1 − y1)² + ... + (xn − yn)²).
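The distance ρ(x, y) = |x − y| can be sketched as follows; the sample vectors are illustrative assumptions.

```python
import numpy as np

# Euclidean distance between single-column vectors: rho(x, y) = |x - y|.
x = np.array([1.0, 2.0, 2.0])
y = np.array([1.0, -1.0, 6.0])

rho = np.linalg.norm(x - y)   # sqrt(0^2 + 3^2 + (-4)^2) = 5.0
print(rho)  # -> 5.0
```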

Definition 2. A linear space is called normed if each vector x of this space is assigned a non-negative number, called its norm ‖x‖, and the following axioms are satisfied:

1) ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0;

2) ‖λx‖ = |λ|·‖x‖ for any real number λ;

3) ‖x + y‖ ≤ ‖x‖ + ‖y‖.

It is easy to see that a normed space is a metric space: indeed, as the distance between x and y one can take ρ(x, y) = ‖x − y‖. In the Euclidean space En, the norm of any vector x ∈ En is its length, i.e. ‖x‖ = |x|.

So, the Euclidean space En is a metric space and, moreover, a normed space.

Angle between vectors

Definition 1. The angle between non-zero vectors a and b of the Euclidean space En is the number φ, 0 ≤ φ ≤ π, for which

cos φ = (a, b) / (|a|·|b|).

Definition 2. Vectors x and y of the Euclidean space En are called orthogonal if the equality (x, y) = 0 holds for them.

If x and y are non-zero, then from the definition it follows that the angle between them equals π/2.

Note that the zero vector is, by definition, considered orthogonal to any vector.

Example. In the geometric (coordinate) space R³, which is a special case of a Euclidean space, the unit vectors i, j and k are mutually orthogonal.
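The angle formula cos φ = (a, b)/(|a|·|b|) can be sketched as follows; the sample vectors are two of the orthogonal unit vectors just mentioned.

```python
import numpy as np

# Angle between vectors via cos(phi) = (a, b) / (|a| |b|).
def angle(a, b):
    cos_phi = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_phi, -1.0, 1.0))  # clip guards against rounding

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
print(np.isclose(angle(i, j), np.pi / 2))  # orthogonal vectors -> True
```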

Orthonormal basis

Definition 1. A basis e1, e2, ..., en of the Euclidean space En is called orthogonal if the vectors of this basis are pairwise orthogonal, i.e. if (ei, ej) = 0 for i ≠ j.

Definition 2. If all vectors of an orthogonal basis e1, e2, ..., en are unit vectors, i.e. |ei| = 1 (i = 1, 2, ..., n), then the basis is called orthonormal. For an orthonormal basis, (ei, ej) = 1 when i = j and (ei, ej) = 0 when i ≠ j.

Theorem (on the construction of an orthonormal basis). In any Euclidean space En there exist orthonormal bases.

Proof. Let us prove the theorem for the case n = 3.

Let E1, E2, E3 be an arbitrary basis of the Euclidean space E3; let us construct an orthonormal basis in this space. Put e1 = E1 and e2 = E2 + α·e1, where α is a real number chosen so that (e1, e2) = 0. Then we get

(e1, e2) = (e1, E2) + α·(e1, e1) = 0, whence α = −(e1, E2)/(e1, e1),

and obviously α = 0 if E1 and E2 are orthogonal; in that case e2 = E2, and e2 ≠ 0, because E2 is a basis vector.

Next, put e3 = E3 + β·e1 + γ·e2 and choose β and γ so that (e1, e3) = 0 and (e2, e3) = 0. Taking into account that (e1, e2) = 0, we get

β = −(e1, E3)/(e1, e1), γ = −(e2, E3)/(e2, e2).

Obviously β = γ = 0 if e1 and e2 are orthogonal to the vector E3; in that case we should take e3 = E3. Moreover, e3 ≠ 0, because E1, E2 and E3 are linearly independent.

In addition, from the above reasoning it follows that e3 cannot be represented as a linear combination of the vectors e1 and e2; therefore the vectors e1, e2, e3 are linearly independent and pairwise orthogonal, and hence they can be taken as a basis of the Euclidean space E3. It remains only to normalize the constructed basis, for which it suffices to divide each of the constructed vectors by its length. Then we obtain

e1⁰ = e1/|e1|, e2⁰ = e2/|e2|, e3⁰ = e3/|e3|.

So we have constructed an orthonormal basis e1⁰, e2⁰, e3⁰. The theorem is proven.
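The construction in the proof can be sketched as code: from each Ek subtract its projections onto the already-built orthogonal vectors, then divide by the length. The starting basis E1, E2, E3 below is an illustrative assumption.

```python
import numpy as np

# The orthogonalization process from the proof, for an arbitrary
# linearly independent family of vectors.
def orthonormalize(vectors):
    basis = []
    for v in vectors:
        w = v.astype(float)
        for e in basis:                 # make w orthogonal to the earlier vectors
            w = w - np.dot(w, e) * e
        basis.append(w / np.linalg.norm(w))   # normalize: divide by the length
    return basis

# Illustrative starting basis E1, E2, E3 of E3.
E = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
e = orthonormalize(E)

# The Gram matrix of an orthonormal basis is the identity.
G = np.array([[np.dot(a, b) for b in e] for a in e])
print(np.allclose(G, np.eye(3)))  # -> True
```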

The method applied here for constructing an orthonormal basis from an arbitrary basis is called the orthogonalization process. Note that in the course of proving the theorem we established that pairwise orthogonal vectors are linearly independent. Moreover, if e1⁰, e2⁰, ..., en⁰ is an orthonormal basis in En, then for any vector x ∈ En there is a unique expansion

x = x1·e1⁰ + x2·e2⁰ + ... + xn·en⁰, (*)

where x1, x2, ..., xn are the coordinates of the vector x in this orthonormal basis.

Since

(ei⁰, ej⁰) = 1 for i = j and (ei⁰, ej⁰) = 0 for i ≠ j,

multiplying equality (*) scalarly by ei⁰, we get xi = (x, ei⁰).

In what follows we will consider only orthonormal bases, and therefore, for ease of writing, we will omit the zeroes above the basis vectors.
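The remark that in an orthonormal basis the coordinates are simply xi = (x, ei) can be sketched as follows; the orthonormal basis (a rotation of the standard one) and the vector x are illustrative assumptions.

```python
import numpy as np

# An orthonormal basis of R^2 obtained by rotating the standard basis.
c, s = np.cos(0.3), np.sin(0.3)
e1 = np.array([c, s])
e2 = np.array([-s, c])
x = np.array([2.0, -1.0])

# In an orthonormal basis, the i-th coordinate is x_i = (x, e_i) ...
coords = np.array([np.dot(x, e1), np.dot(x, e2)])
# ... and the vector is recovered as x = x_1 e_1 + x_2 e_2.
rebuilt = coords[0] * e1 + coords[1] * e2
print(np.allclose(rebuilt, x))  # -> True
```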