TAM 2xx References

Vector Calculus

Dot product

*This topic appears in 2 reference pages*

Complete in reference page "vector and bases"

Explained in reference page "vector identities"

The dot product (also called the inner product or scalar product) is defined by

Dot product from components. #rvv-es
$$ \vec{a} \cdot \vec{b}= a_1 b_1 + a_2 b_2 + a_3 b_3 $$

An alternative expression for the dot product can be given in terms of the lengths of the vectors and the angle between them:

Dot product from length/angle. #rvv-ed
$$ \vec{a} \cdot \vec{b}= a b \cos\theta $$

We will present a simple 2D proof here. A more complete proof in 3D uses the law of cosines.

Start with two vectors \( \vec{a} \) and \( \vec{b} \) with an angle \( \theta \) between them, as shown below.

Observe that the angle \( \theta \) between vectors \( \vec{a} \) and \( \vec{b} \) is the difference between the angles \( \theta_a \) and \( \theta_b \) that the vectors make with the horizontal.

If we use the angle sum formula for cosine, we have

$$ \begin{aligned} a b \cos\theta &= a b \cos(\theta_b - \theta_a) \\ &= a b (\cos\theta_b \cos\theta_a + \sin\theta_b \sin\theta_a) \end{aligned} $$

We now want to express the sine and cosine of \( \theta_a \) and \( \theta_b \) in terms of the components of \( \vec{a} \) and \( \vec{b} \).

We re-arrange the expression so that we can use the fact that \( a_1 = a \cos\theta_a \) and \( a_2 = a \sin\theta_a \), and similarly for \( \vec{b} \). This gives:

$$ \begin{aligned} a b \cos\theta &= (a \cos\theta_a) (b \cos\theta_b) + (a \sin\theta_a) (b \sin\theta_b) \\ &= a_1 b_1 + a_2 b_2 \\ &= \vec{a} \cdot \vec{b} \end{aligned} $$

The fact that we can write the dot product in terms of components as well as in terms of lengths and angle is very helpful for calculating the lengths of vectors and the angles between them from their component representations.

Length and angle from dot product. #rvv-el
$$ \begin{aligned} a &= \sqrt{\vec{a} \cdot\vec{a}} \\ \cos\theta &= \frac{\vec{b}\cdot \vec{a}}{b a}\end{aligned} $$

The angle between \( \vec{a} \) and itself is \( \theta = 0 \), so \( \vec{a} \cdot \vec{a} = a^2 \cos 0 = a^2 \), which gives the first equation for the length in terms of the dot product.

The second equation is a rearrangement of #rvv-ed.
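
As a quick numerical check of #rvv-es, #rvv-ed, and #rvv-el, the short Python sketch below computes a length and an angle directly from components. The example vectors and helper names (dot, length, angle_between) are illustrative choices, not part of the reference pages.

```python
import math

def dot(a, b):
    """Dot product from components (#rvv-es)."""
    return sum(ai * bi for ai, bi in zip(a, b))

def length(a):
    """Vector length a = sqrt(a . a)."""
    return math.sqrt(dot(a, a))

def angle_between(a, b):
    """Angle from cos(theta) = (a . b) / (a b)."""
    return math.acos(dot(a, b) / (length(a) * length(b)))

a = [3.0, 0.0, 0.0]
b = [1.0, 1.0, 0.0]
print(length(a))                          # 3.0
print(math.degrees(angle_between(a, b)))  # approximately 45 degrees
```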

If two vectors have zero dot product \( \vec{a} \cdot \vec{b} = 0 \) then they have an angle of \( \theta = 90^\circ = \frac{\pi}{2}\rm\ rad \) between them and we say that the vectors are perpendicular, orthogonal, or normal to each other.

In 2D we can easily find a perpendicular vector by rotating \( \vec{a} \) counterclockwise by \( 90^\circ \), using the following equation.

Counterclockwise perpendicular vector in 2D. #rvv-en
$$ \vec{a}^\perp = -a_2\,\hat\imath + a_1\hat\jmath $$

It is easy to check that \( \vec{a}^\perp \) is always perpendicular to \( \vec{a} \):

$$ \vec{a} \cdot \vec{a}^\perp = (a_1\,\hat\imath + a_2\,\hat\jmath) \cdot (-a_2\,\hat\imath + a_1\hat\jmath) = -a_1 a_2 + a_2 a_1 = 0. $$
The fact that \( \vec{a}^\perp \) is a \( +90^\circ \) rotation of \( \vec{a} \) is apparent from Figure #rvv-fn.

In 2D there are two perpendicular directions to a given vector \( \vec{a} \), given by \( \vec{a}^\perp \) and \( -\vec{a}^\perp \). In 3D there are infinitely many perpendicular directions, and there is no simple formula like #rvv-en in 3D.

The perpendicular vector \( \vec{a}^\perp \) is always a \( +90^\circ \) rotation of \( \vec{a} \).
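
A minimal Python check of #rvv-en, with arbitrary example components, confirms that the rotated vector has zero dot product with the original:

```python
def perp(a):
    """Counterclockwise perpendicular of a 2D vector [a1, a2] (#rvv-en)."""
    a1, a2 = a
    return [-a2, a1]

a = [2.0, 1.0]
a_perp = perp(a)
print(a_perp)                              # [-1.0, 2.0]
print(a[0]*a_perp[0] + a[1]*a_perp[1])     # 0.0, so a . a_perp = 0
```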

Dot product identities

Dot product symmetry. #rvi-ed
$$ \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a} $$

Using the coordinate expression #rvv-es gives:

$$ \vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = b_1 a_1 + b_2 a_2 + b_3 a_3 = \vec{b} \cdot \vec{a}. $$

Dot product vector length. #rvi-eg
$$ \vec{a} \cdot \vec{a} = \|\vec{a}\|^2 $$

Using the coordinate expression #rvv-es gives:

$$ \vec{a} \cdot \vec{a} = a_1 a_1 + a_2 a_2 + a_3 a_3 = \|\vec{a}\|^2. $$

Dot product bi-linearity. #rvi-ei
$$ \begin{aligned} \vec{a} \cdot(\vec{b} + \vec{c}) &=\vec{a} \cdot \vec{b} + \vec{a}\cdot \vec{c} \\ (\vec{a} +\vec{b}) \cdot \vec{c} &=\vec{a} \cdot \vec{c} + \vec{b}\cdot \vec{c} \\ \vec{a} \cdot (\beta\vec{b}) &= \beta (\vec{a} \cdot\vec{b}) = (\beta \vec{a}) \cdot\vec{b}\end{aligned} $$

Using the coordinate expression #rvv-es gives:

$$ \begin{aligned} \vec{a} \cdot (\vec{b} + \vec{c}) &= a_1 (b_1 + c_1) + a_2 (b_2 + c_2) + a_3 (b_3 + c_3) \\ &= (a_1 b_1 + a_2 b_2 + a_3 b_3) + (a_1 c_1 + a_2 c_2 + a_3 c_3) \\ &= \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c} \\ (\vec{a} + \vec{b}) \cdot \vec{c} &= (a_1 + b_1) c_1 + (a_2 + b_2) c_2 + (a_3 + b_3) c_3 \\ &= (a_1 c_1 + a_2 c_2 + a_3 c_3) + (b_1 c_1 + b_2 c_2 + b_3 c_3) \\ &= \vec{a} \cdot \vec{c} + \vec{b} \cdot \vec{c} \\ \vec{a} \cdot (\beta \vec{b}) &= a_1 (\beta b_1) + a_2 (\beta b_2) + a_3 (\beta b_3) \\ &= \beta (a_1 b_1 + a_2 b_2 + a_3 b_3) \\ &= \beta (\vec{a} \cdot \vec{b}) \\ &= (\beta a_1) b_1 + (\beta a_2) b_2 + (\beta a_3) b_3 \\ &= (\beta \vec{a}) \cdot \vec{b}. \end{aligned} $$

Cross product

*This topic appears in 2 reference pages*

Complete in reference page "vector and bases"

Explained in reference page "vector identities"

The cross product can be defined in terms of components by:

Cross product in components. #rvv-ex
$$ \vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} $$

It is sometimes more convenient to work with cross products of individual basis vectors, which are related as follows.

Cross products of basis vectors. #rvv-eo
$$ \begin{aligned}\hat\imath \times \hat\jmath &= \hat{k}& \hat\jmath \times \hat{k} &= \hat\imath& \hat{k} \times \hat\imath &= \hat\jmath \\\hat\jmath \times \hat\imath &= -\hat{k}& \hat{k} \times \hat\jmath &= -\hat\imath& \hat\imath \times \hat{k} &= -\hat\jmath \\\end{aligned} $$

Writing the basis vectors in terms of themselves gives the components:

$$ \begin{aligned} i_1 &= 1 & i_2 &= 0 & i_3 &= 0 \\ j_1 &= 0 & j_2 &= 1 & j_3 &= 0 \\ k_1 &= 0 & k_2 &= 0 & k_3 &= 1. \end{aligned} $$
These values can now be substituted into the definition #rvv-ex. For example,
$$ \begin{aligned} \hat\imath \times \hat\jmath &= (i_2 j_3 - i_3 j_2) \,\hat{\imath} + (i_3 j_1 - i_1 j_3) \,\hat{\jmath} + (i_1 j_2 - i_2 j_1) \,\hat{k} \\ &= (0 \times 0 - 0 \times 1) \,\hat{\imath} + (0 \times 0 - 1 \times 0) \,\hat{\jmath} + (1 \times 1 - 0 \times 0) \,\hat{k} \\ &= \hat{k} \end{aligned} $$
The other combinations can be computed similarly.
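
The component formula #rvv-ex is easy to code directly. The sketch below is a minimal implementation using plain lists (the function name cross is just a convenient choice) and reproduces a few entries of the basis-vector table #rvv-eo:

```python
def cross(a, b):
    """Cross product of 3D vectors given as [a1, a2, a3] (#rvv-ex)."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

i_hat = [1, 0, 0]
j_hat = [0, 1, 0]
k_hat = [0, 0, 1]

print(cross(i_hat, j_hat))  # [0, 0, 1]  = k_hat
print(cross(j_hat, k_hat))  # [1, 0, 0]  = i_hat
print(cross(j_hat, i_hat))  # [0, 0, -1] = -k_hat
```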

Warning: The cross product is not associative. #rvv-wc

The cross product is not associative, meaning that in general

$$ \vec{a} \times (\vec{b} \times \vec{c}) \ne (\vec{a} \times \vec{b}) \times \vec{c}. $$
For example,
$$ \begin{aligned} \hat{\imath} \times (\hat{\imath} \times \hat{\jmath}) &= \hat{\imath} \times \hat{k} = - \hat{\jmath} \\ (\hat{\imath} \times \hat{\imath}) \times \hat{\jmath} &= \vec{0} \times \hat{\jmath} = \vec{0}. \end{aligned} $$
This means that we should never write an expression like
$$ \vec{a} \times \vec{b} \times \vec{c} $$
because it is not clear in which order we should perform the cross products. Instead, if we have more than one cross product, we should always use parentheses to indicate the order.
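
The non-associativity can also be checked numerically. This short sketch reuses the same component formula #rvv-ex and the basis vectors from the example above:

```python
def cross(a, b):
    """Cross product from components (#rvv-ex)."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

i_hat = [1, 0, 0]
j_hat = [0, 1, 0]

print(cross(i_hat, cross(i_hat, j_hat)))   # [0, -1, 0] = -j_hat
print(cross(cross(i_hat, i_hat), j_hat))   # [0, 0, 0]  = zero vector
```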

Rather than using components, the cross product can be defined by specifying the length and direction of the resulting vector. The direction of \( \vec{a} \times \vec{b} \) is orthogonal to both \( \vec{a} \) and \( \vec{b} \), with the direction given by the right-hand rule. The magnitude of the cross product is given by:

Cross product length. #rvv-el2
$$ \| \vec{a} \times \vec{b} \| = a b \sin\theta $$

Using Lagrange's identity we can calculate:

$$ \begin{aligned} \| \vec{a} \times \vec{b} \|^2 &= \|\vec{a}\|^2 \|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2 \\ &= a^2 b^2 - (a b \cos\theta)^2 \\ &= a^2 b^2 (1 - \cos^2\theta) \\ &= a^2 b^2 \sin^2\theta. \end{aligned} $$
Taking the square root of this expression gives the desired cross-product length formula.
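
As a numerical check of #rvv-el2, the sketch below compares the norm of the component cross product against \( a b \sin\theta \) for an arbitrary pair of vectors with a \( 30^\circ \) angle between them:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def norm(a):
    return math.sqrt(sum(x*x for x in a))

theta = math.radians(30.0)
a = [2.0, 0.0, 0.0]
b = [3.0*math.cos(theta), 3.0*math.sin(theta), 0.0]

print(norm(cross(a, b)))                # approximately 3.0
print(norm(a)*norm(b)*math.sin(theta))  # approximately 3.0
```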

This second form of the cross product definition can also be related to the area of a parallelogram.

The area of a parallelogram is the length of the base multiplied by the perpendicular height, which is also the magnitude of the cross product of the side vectors.

A useful special case of the cross product occurs when vector \( \vec{a} \) is in the 2D \( \hat\imath,\hat\jmath \) plane and the other vector is in the orthogonal \( \hat{k} \) direction. In this case the cross product rotates \( \vec{a} \) by \( 90^\circ \) counterclockwise to give the perpendicular vector \( \vec{a}^\perp \), as follows.

Cross product of out-of-plane vector \( \hat{k} \) with 2D vector \( \vec{a} = a_1\,\hat\imath + a_2\,\hat\jmath \). #rvv-e9
$$ \hat{k} \times \vec{a} = \vec{a}^\perp $$

Using #rvv-eo we can compute:

$$ \begin{aligned} \hat{k} \times \vec{a} &= \hat{k} \times (a_1\,\hat\imath + a_2\,\hat\jmath) \\ &= a_1 (\hat{k} \times \hat\imath) + a_2 (\hat{k} \times \hat\jmath) \\ &= a_1\,\hat\jmath - a_2\,\hat\imath \\ &= \vec{a}^\perp. \end{aligned} $$
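
A quick numerical check of #rvv-e9, again with arbitrary example components:

```python
def cross(a, b):
    """Cross product from components (#rvv-ex)."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

k_hat = [0.0, 0.0, 1.0]
a = [2.0, 1.0, 0.0]          # a1 i_hat + a2 j_hat, no k component

print(cross(k_hat, a))       # [-1.0, 2.0, 0.0] = -a2 i_hat + a1 j_hat = a_perp
```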

Cross product identities

Cross product anti-symmetry. #rvi-ea
$$ \begin{aligned} \vec{a} \times\vec{b} = - \vec{b} \times\vec{a}\end{aligned} $$

Writing the component expression #rvv-ex gives:

$$ \begin{aligned} \vec{a} \times \vec{b} &= (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \\ &= -(a_3 b_2 - a_2 b_3) \,\hat{\imath} - (a_1 b_3 - a_3 b_1) \,\hat{\jmath} - (a_2 b_1 - a_1 b_2) \,\hat{k} \\ &= -\vec{b} \times \vec{a}. \end{aligned} $$

Cross product self-annihilation. #rvi-ez
$$ \begin{aligned}\vec{a} \times \vec{a} = \vec{0}\end{aligned} $$

From anti-symmetry #rvi-ea we have:

$$ \begin{aligned} \vec{a} \times \vec{a} &= - \vec{a} \times \vec{a} \\ 2 \, \vec{a} \times \vec{a} &= \vec{0} \\ \vec{a} \times \vec{a} &= \vec{0}. \end{aligned} $$

Cross product bi-linearity. #rvi-eb2
$$ \begin{aligned}\vec{a} \times (\vec{b} + \vec{c})&= \vec{a} \times \vec{b} + \vec{a} \times \vec{c} \\(\vec{a} + \vec{b}) \times \vec{c}&= \vec{a} \times \vec{c} + \vec{b} \times \vec{c} \\\vec{a} \times (\beta \vec{b})&= \beta (\vec{a} \times \vec{b})= (\beta \vec{a}) \times \vec{b}\end{aligned} $$

Writing the component expression #rvv-ex for the first equation gives:

$$ \begin{aligned} \vec{a} \times (\vec{b} + \vec{c}) &= (a_2 (b_3 + c_3) - a_3 (b_2 + c_2)) \,\hat{\imath} \\ &\quad + (a_3 (b_1 + c_1) - a_1 (b_3 + c_3)) \,\hat{\jmath} \\ &\quad + (a_1 (b_2 + c_2) - a_2 (b_1 + c_1)) \,\hat{k} \\ &= \Big((a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &\quad + \Big((a_2 c_3 - a_3 c_2) \,\hat{\imath} + (a_3 c_1 - a_1 c_3) \,\hat{\jmath} + (a_1 c_2 - a_2 c_1) \,\hat{k} \Big) \\ &= \vec{a} \times \vec{b} + \vec{a} \times \vec{c}. \\ \end{aligned} $$
The second equation follows similarly, and for the third equation we have:
$$ \begin{aligned} \vec{a} \times (\beta \vec{b}) &= (a_2 (\beta b_3) - a_3 (\beta b_2)) \,\hat{\imath} + (a_3 (\beta b_1) - a_1 (\beta b_3)) \,\hat{\jmath} + (a_1 (\beta b_2) - a_2 (\beta b_1)) \,\hat{k} \\ &= \beta \Big( (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &= \beta (\vec{a} \times \vec{b}). \end{aligned} $$
The last part of the third equation can be seen with a similar derivation.

Derivatives

Time-dependent vectors can be differentiated in exactly the same way that we differentiate scalar functions. For a time-dependent vector \( \vec{a}(t) \), the derivative \( \dot{\vec{a}}(t) \) is:

Vector derivative definition. #rvc-ed
$$ \begin{aligned} \dot{\vec{a}}(t)&= \frac{d}{dt} \vec{a}(t) = \lim_{\Delta t\to 0} \frac{\vec{a}(t + \Delta t) -\vec{a}(t)}{\Delta t}\end{aligned} $$

Note that vector derivatives are a purely geometric concept. They don't rely on any basis or coordinates, but are just defined in terms of the physical actions of adding and scaling vectors.
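
The limit definition #rvc-ed can be illustrated numerically by shrinking \( \Delta t \). In the sketch below the particular vector function \( \vec{a}(t) \) is just an example choice:

```python
import math

def a(t):
    """Example time-dependent vector a(t), written in components."""
    return [math.cos(t), math.sin(t), 0.5*t]

def a_dot_exact(t):
    """Exact derivative of the example a(t), for comparison."""
    return [-math.sin(t), math.cos(t), 0.5]

t = 1.0
for dt in (0.1, 0.01, 0.001):
    # Finite-difference approximation (a(t + dt) - a(t)) / dt, componentwise
    approx = [(x1 - x0)/dt for x0, x1 in zip(a(t), a(t + dt))]
    print(dt, approx)

print("exact:", a_dot_exact(t))   # the approximations approach this value
```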

Fig: Vector derivatives shown as functions of \(t\) and \(\Delta t\). We can hold \(t\) fixed and vary \(\Delta t\) to see how the approximate derivative \( \Delta\vec{a}/\Delta t \) approaches \( \dot{\vec{a}} \). Alternatively, we can hold \(\Delta t\) fixed and vary \(t\) to see how the approximation changes depending on how \( \vec{a} \) is changing.

We will use either the dot notation \( \dot{\vec{a}}(t) \) (also known as Newton notation) or the full derivative notation \( \frac{d\vec{a}(t)}{dt} \) (also known as Leibniz notation), depending on which is clearer and more convenient. We will often not write the time dependency explicitly, so we might write just \( \dot{\vec{a}} \) or \( \frac{d\vec{a}}{dt} \). See below for more details.

Newton versus Leibniz Notation

Most people know who Isaac Newton is, but perhaps fewer have heard of Gottfried Leibniz. Leibniz was a prolific mathematician and a contemporary of Newton. Both of them claimed to have invented calculus independently of each other, and this became the source of a bitter rivalry between the two of them. Each of them had different notation for derivatives, and both notations are commonly used today.

Leibniz notation is meant to be reminiscent of the definition of a derivative:

$$ \frac{dy}{dt}=\lim_{\Delta t\rightarrow0}\frac{\Delta y}{\Delta t}. $$

Newton notation is meant to be compact:

$$ \dot{y} = \frac{dy}{dt}. $$

Note that a superscribed dot always denotes differentiation with respect to time \(t\). A superscribed dot is never used to denote differentiation with respect to any other variable, such as \(x\).

But what about primes? A prime is used to denote differentiation with respect to a function's argument. For example, suppose we have a function \(f=f(x)\). Then

$$ f'(x) = \frac{df}{dx}. $$

Suppose we have another function \(g=g(s)\). Then

$$ g'(s) = \frac{dg}{ds}. $$

As you can see, while a superscribed dot always denotes differentiation with respect to time \(t\), a prime can denote differentiation with respect to any variable; but that variable is always the function's argument.

Sometimes, for convenience, we drop the argument altogether. So, if we know that \(y=y(x)\), then \(y'\) is understood to be the same as \(y'(x)\). This is sloppy, but it is very common in practice.

Each notation has advantages and disadvantages. The main advantage of Newton notation is that it is compact: it does not take a lot of effort to write a dot or a prime over a variable. However, the price you pay for convenience is clarity. The main advantage of Leibniz notation is that it is absolutely clear exactly which variable you are differentiating with respect to.

Leibniz notation is also very convenient for remembering the chain rule. Consider the following examples of the chain rule in the two notations:

$$ \begin{aligned}&\text{Newton:}&\dot{y}=y'(x)\dot{x} \\ &\text{Leibniz:}&\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}.\end{aligned} $$

Notice how, with Leibniz notation, you can imagine the \(dx\)'s "cancelling out" on the right-hand side, leaving you with \(dy/dt\).
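
The chain rule can also be checked numerically. In the sketch below, the functions \( y(x) = \sin x \) and \( x(t) = t^2 \) are arbitrary examples; the direct finite-difference estimate of \( \dot{y} \) agrees with the product \( y'(x)\,\dot{x} \):

```python
import math

def x(t):
    return t**2

def y_of_x(x_val):
    return math.sin(x_val)

t = 0.7
h = 1e-6

# Direct finite-difference estimate of dy/dt
dy_dt_direct = (y_of_x(x(t + h)) - y_of_x(x(t))) / h

# Chain rule: dy/dx evaluated at x(t), times dx/dt
dy_dx = math.cos(x(t))      # derivative of sin(x) at x = x(t)
dx_dt = 2*t                 # derivative of t**2

print(dy_dt_direct, dy_dx*dx_dt)   # the two values agree closely
```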

Derivatives and vector "positions"

When thinking about vector derivatives, it is important to remember that vectors don't have positions. Even if a vector is drawn moving about, this is irrelevant for the derivative. Only changes to length and direction are important.


Cartesian

Complete in reference page "vector calculus"

The example in Fig \ref Fig:CartesianDerivatives could be added:

Fig: Cartesian Derivatives
Taken from L03-Notes, slide 6.

In a fixed basis we differentiate a vector by differentiating each component:

Vector derivative in components. #rvc-ec
$$ \dot{\vec{a}}(t) = \dot{a}_1(t)\,\hat{\imath} + \dot{a}_2(t)\,\hat{\jmath} + \dot{a}_3(t) \,\hat{k} $$

Writing a time-dependent vector expression in a fixed basis gives:

$$ \vec{a}(t) = a_1(t)\,\hat{\imath} + a_2(t) \,\hat{\jmath}. $$
Using the definition #rvc-ed of the vector derivative gives:
$$ \begin{aligned}\dot{\vec{a}}(t) &= \lim_{\Delta t \to 0}\frac{\vec{a}(t + \Delta t) -\vec{a}(t)}{\Delta t} \\ &= \lim_{\Delta t\to 0} \frac{(a_1(t + \Delta t) \,\hat{\imath} +a_2(t + \Delta t) \,\hat{\jmath}) - (a_1(t)\,\hat{\imath} + a_2(t) \,\hat{\jmath})}{\Delta t} \\&= \lim_{\Delta t \to 0} \frac{(a_1(t + \Delta t)- a_1(t)) \,\hat{\imath} + (a_2(t + \Delta t) -a_2(t)) \,\hat{\jmath}}{\Delta t} \\ &=\left(\lim_{\Delta t \to 0} \frac{a_1(t + \Delta t) -a_1(t)}{\Delta t} \right) \,\hat{\imath} +\left(\lim_{\Delta t \to 0} \frac{a_2(t + \Delta t) -a_2(t) }{\Delta t}\right) \,\hat{\jmath} \\ &=\dot{a}_1(t) \,\hat{\imath} + \dot{a}_2(t)\,\hat{\jmath}\end{aligned} $$
The second-to-last line above is simply the definition of the scalar derivative, giving the scalar derivatives of the component functions \(a_1(t)\) and \(a_2(t)\).

Warning: Differentiating each component is only valid if the basis is fixed. #rvc-wc

When we differentiate a vector by differentiating each component and leaving the basis vectors unchanged, we are assuming that the basis vectors themselves are not changing with time. If they are, then we need to take this into account as well.

Fig: The vector derivative decomposed into components. This demonstrates graphically that each component of a vector in a particular basis is simply a scalar function, and the corresponding derivative component is the regular scalar derivative.

Non-Cartesian: Polar basis

An example like in Fig \ref Fig:Non-CartesianPolarBasis could be added in "vector and bases"

Fig: Non-Cartesian Polar Basis
Taken from L03-Notes, slide 7.

Graphical estimation

The information in Fig \ref Fig:GraphicalEstimation could be included as a new section in "vector and bases"

Fig: Graphical Estimation of Derivatives
Taken from L03-Notes, slide 8.

Chain rule

Scalars

Complete in "Vector Calculus"

Second derivative

Complete in "vector Calculus"

Integration

Include summary of both Polar and Cartesian integration as in Fig \ref fig:SummaryVecInt

Fig: SummaryVectorIntegration

Cartesian basis

Complete in "Vector calculus"

The Riemann-sum definition of the vector integral is:

Vector integral. #rvc-ei
$$ \int_0^t \vec{a}(\tau) \, d\tau= \lim_{N \to \infty} \underbrace{\sum_{i=1}^N \vec{a}(\tau_i) \Delta\tau}_{\vec{S}_N}\qquad \tau_i = \frac{(i - 1)\,t}{N}\qquad \Delta \tau = \frac{t}{N} $$

In the above definition \( \vec{S}_N \) is the sum with \(N\) intervals, written here using the left-hand edge \( \tau_i \) in each interval.
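
The following Python sketch evaluates the sum \( \vec{S}_N \) for increasing \( N \) and compares it with the componentwise integral #rvc-et; the example function \( \vec{a}(t) \) is an arbitrary choice:

```python
import math

def a(tau):
    """Example vector function a(tau) in components."""
    return [math.cos(tau), 2.0*tau]

def riemann_sum(t, N):
    """Left-endpoint Riemann sum S_N from the definition #rvc-ei."""
    dtau = t / N
    S = [0.0, 0.0]
    for i in range(1, N + 1):
        tau_i = (i - 1) * dtau          # left-hand edge of interval i
        S = [s + c*dtau for s, c in zip(S, a(tau_i))]
    return S

t = 2.0
for N in (10, 100, 1000):
    print(N, riemann_sum(t, N))

# Componentwise exact integral: [sin(t), t**2]
print("exact:", [math.sin(t), t**2])
```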

Fig: Integral of a vector function \( \vec{a}(t) \), together with the approximation using a Riemann sum.

Just like vector derivatives, vector integrals only use the geometric concepts of scaling and addition, and do not rely on using a basis. If we do write a vector function in terms of a fixed basis, then we can integrate each component:

Vector integral in components. #rvc-et
$$ \int_0^t \vec{a}(\tau) \, d\tau= \left( \int_0^t a_1(\tau) \, d\tau \right) \,\hat\imath+ \left( \int_0^t a_2(\tau) \, d\tau \right) \,\hat\jmath+ \left( \int_0^t a_3(\tau) \, d\tau \right) \,\hat{k} $$

Consider a time-dependent vector \( \vec{a}(t) \) written in components with a fixed basis:

$$ \vec{a}(t) = a_1(t) \,\hat\imath + a_2(t) \,\hat\jmath. $$
Using the definition #rvc-ei of the vector integral gives:
$$ \begin{aligned}\int_0^t \vec{a}(\tau) \, d\tau&= \lim_{N \to \infty} \sum_{i=1}^N\vec{a}(\tau_i) \Delta\tau \\&= \lim_{N \to \infty} \sum_{i=1}^N\left( a_1(\tau_i) \,\hat\imath+ a_2(\tau_i) \,\hat\jmath \right) \Delta\tau \\&= \lim_{N \to \infty} \left( \sum_{i=1}^Na_1(\tau_i) \Delta\tau \,\hat\imath+ \sum_{i=1}^N a_2(\tau_i) \Delta\tau \,\hat\jmath \right) \\&= \left( \lim_{N \to \infty} \sum_{i=1}^Na_1(\tau_i) \Delta\tau \right) \,\hat\imath+ \left( \lim_{N \to \infty}\sum_{i=1}^N a_2(\tau_i) \Delta\tau \right) \,\hat\jmath \\&= \left( \int_0^t a_1(\tau) \, d\tau \right) \,\hat\imath+ \left( \int_0^t a_2(\tau) \, d\tau \right) \,\hat\jmath.\end{aligned} $$
The second-to-last line used the Riemann-sum definition of regular scalar integrals of \( a_1(t) \) and \( a_2(t) \).

Warning: Integrating each component is only valid if the basis is fixed. #rvc-wi

Integrating a vector function by integrating each component separately is only valid if the basis vectors are not changing with time. If the basis vectors are changing then we must either transform to a fixed basis or otherwise take this change into account.

Example Problem: Integrating a vector function. #rvc-xi

The vector \( \vec{a}(t) \) is given by

$$ \vec{a}(t) = \Big(2 \sin(t + 1) + t^2 \Big) \,\hat\imath+ \Big(3 - 3 \cos(2t)\Big) \,\hat\jmath. $$
What is \( \int_0^t \vec{a}(\tau) \, d\tau \)?

$$ \begin{aligned}\int_0^t \vec{a}(\tau) \,d\tau&= \left(\int_0^t \Big(2 \sin(\tau + 1) + \tau^2 \Big)\,d\tau\right) \,\hat\imath+ \left(\int_0^t \Big(3 - 3 \cos(2\tau)\Big)\,d\tau\right) \,\hat\jmath \\&=\left[-2 \cos(\tau + 1) + \frac{\tau^3}{3}\right]_{\tau=0}^{\tau=t} \,\hat\imath+ \left[3 \tau - \frac{3}{2} \sin(2\tau)\right]_{\tau=0}^{\tau=t} \,\hat\jmath \\&= \left( -2\cos(t + 1) + 2 \cos(1)+ \frac{t^3}{3}\right)\,\hat\imath+ \left(3t - \frac{3}{2} \sin(2t)\right)\,\hat\jmath.\end{aligned} $$
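
As a sanity check of this worked example, the sketch below compares a left-endpoint Riemann sum (the definition #rvc-ei) against the closed-form answer at \( t = 2 \):

```python
import math

def a(tau):
    """The example vector a(tau) from #rvc-xi, in components."""
    return [2.0*math.sin(tau + 1.0) + tau**2,
            3.0 - 3.0*math.cos(2.0*tau)]

def exact(t):
    """Closed-form integral from the worked solution."""
    return [-2.0*math.cos(t + 1.0) + 2.0*math.cos(1.0) + t**3/3.0,
            3.0*t - 1.5*math.sin(2.0*t)]

t, N = 2.0, 100000
dtau = t / N
S = [0.0, 0.0]
for i in range(N):
    tau = i*dtau                        # left-hand edge of each interval
    S = [s + c*dtau for s, c in zip(S, a(tau))]

print("Riemann sum:", S)
print("closed form:", exact(t))         # the two agree to several digits
```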

Warning: The dummy variable of integration must be different to the limit variable. #rvc-wd

In the coordinate integral expression #rvc-et, it is important that the component expressions \(a_1(t)\), \(a_2(t)\) are re-written with a different dummy variable such as \(\tau\) when integrating. If we used \(\tau\) for integration but kept \(a_1(t)\) then we would obtain

$$ \int_0^t a_1(t) \,d\tau= \left[a_1(t) \, \tau\right]_{\tau = 0}^{\tau = t}= a_1(t) \, t, $$
which is not what we mean by the integral. Alternatively, if we leave everything as \(t\) then we would obtain
$$ \int_0^t a_1(t) \,dt $$
which is a meaningless expression, as dummy variables must only appear inside an integral.

Polar basis

Add the information in Fig \ref fig:PolarIntegration

Fig: PolarIntegration

Solving equations

Include a step-by-step procedure for solving vector equations as shown in Fig \ref fig:SolvingEqnsSteps

Fig: SolvingEqnsSteps
Include examples as shown in Figs \ref fig:SolvingEqns, \ref fig:SolvingEqns2
Fig: SolvingEqns
Fig: SolvingEqns2