## 1 Matrices

### Orthogonal matrices

$U^TU=I_n$

1. Under the map $x \to Ux$, $x$ keeps its length; only its direction may change.
2. When $x$ and $y$ are both mapped, the angle between them is unchanged.
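Both properties can be checked numerically; the following NumPy sketch uses an assumed random orthogonal matrix (built via QR), not data from the notes:

```python
import numpy as np

# Build an orthogonal U (U^T U = I_4) from the QR factorization of a
# random matrix, then verify that x -> Ux preserves lengths and angles.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# length preserved
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
# inner product (hence the angle between x and y) preserved
assert np.isclose((U @ x) @ (U @ y), x @ y)
```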

### Dyads

Dyads are a special class of matrices of the form $A = uv^T$, with $u, v$ nonzero vectors; they are also called rank-one matrices, for reasons seen later.
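A dyad has the form $A = uv^T$, and its rank is always one; a minimal NumPy sketch with assumed example vectors:

```python
import numpy as np

# A dyad A = u v^T (u, v nonzero) always has rank one.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
A = np.outer(u, v)               # 3x2 dyad u v^T
print(np.linalg.matrix_rank(A))  # 1
```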

### QR decomposition

The MATLAB code is as follows:

```matlab
[Q,R] = qr(A,0); % A is an m-by-n matrix;
                 % Q is m-by-n with orthonormal columns, R is n-by-n upper triangular

[Q,R] = qr(A);   % Q is m-by-m orthogonal, R is m-by-n upper triangular
```
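For readers without MATLAB, a NumPy analogue of the two calls above (assuming `np.linalg.qr`'s `'reduced'` and `'complete'` modes, which mirror `qr(A,0)` and `qr(A)`):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))            # m=5, n=3

# economy-size factorization, like qr(A,0)
Q, R = np.linalg.qr(A, mode='reduced')     # Q: 5x3 orthonormal columns, R: 3x3 upper triangular
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(Q @ R, A)

# full factorization, like qr(A)
Qf, Rf = np.linalg.qr(A, mode='complete')  # Qf: 5x5 orthogonal, Rf: 5x3 upper triangular
assert np.allclose(Qf @ Rf, A)
```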


### Linear maps

1. An $m{\times}n$ matrix $A$ can be viewed as a linear map $f: R^n \to R^m$ defined by $f(x)=Ax$. Conversely, for every linear map $f: R^n \to R^m$ we can find such a matrix $A$ so that $f(x)=Ax$.

2. An affine map has the form $f(x)=Ax+b$; when $b=0$, the affine map reduces to a linear map.

3. Interpretation: $A_{ij}$ measures how strongly $x_j$ influences $y_i$. Thus, if $A_{13} \gg A_{14}$, then $x_3$ has a larger influence on $y_1$ than $x_4$ does; and when $A_{24}=0$, $y_2$ does not depend on $x_4$ at all. The term $b=f(0)$ is usually interpreted as a bias.

4. First-order linear approximation of a nonlinear map, obtained from the first-order Taylor expansion. For a differentiable nonlinear map
$$f: R^n \to R^m,$$
the linear approximation near $x_0$ is
$$f(x) \approx \hat f(x) = f(x_0) + A(x - x_0),$$
where
$$A_{ij} = \frac{\partial y_i}{\partial x_j}(x_0).$$
One can remember this as approximating $f$ by its value and first derivative at $x_0$; the matrix $A$ is the Jacobian of $f$ at $x_0$.
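The approximation can be tested numerically. In the sketch below, both the toy map `f` and the forward-difference Jacobian helper are assumptions for illustration, not from the notes:

```python
import numpy as np

def f(x):
    """Toy nonlinear map f: R^2 -> R^2 (assumed example)."""
    return np.array([x[0] * x[1], np.sin(x[0]) + x[1] ** 2])

def jacobian(f, x0, eps=1e-6):
    """Forward-difference estimate of A_ij = dy_i/dx_j at x0."""
    y0 = f(x0)
    A = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        step = np.zeros_like(x0)
        step[j] = eps
        A[:, j] = (f(x0 + step) - y0) / eps
    return A

x0 = np.array([1.0, 2.0])
A = jacobian(f, x0)

# near x0, f(x0) + A (x - x0) closely matches f(x)
dx = np.array([0.01, -0.01])
approx = f(x0) + A @ dx
exact = f(x0 + dx)
assert np.allclose(approx, exact, atol=1e-3)
```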

### Matrix norms

Motivation

RMS gain (root-mean-square): the Frobenius norm

$$\|A\|_F :=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}A_{ij}^{2}}=\sqrt{\mathrm{Tr}(A^TA)}$$
In MATLAB this can be written as

```matlab
norm(A,'fro');
```
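A quick NumPy check that the entrywise definition and the trace formula agree (the random matrix is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))

fro = np.linalg.norm(A, 'fro')          # entrywise definition
via_trace = np.sqrt(np.trace(A.T @ A))  # sqrt(Tr(A^T A))
assert np.isclose(fro, via_trace)
```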


Largest singular value norm: an alternate way to measure matrix size is to ask for the maximum ratio of the norm of the output to the norm of the input. When the norm used is the Euclidean norm, the corresponding quantity is the largest singular value of $A$:

$$\|A\| := \max_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_1(A)$$

In MATLAB it can be computed with

```matlab
lsv_norm = norm(A);
```
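A NumPy sketch relating the spectral norm, the largest singular value, and the input-output gain (random data assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))

lsv = np.linalg.norm(A, 2)                 # spectral norm, like MATLAB's norm(A)
sigma = np.linalg.svd(A, compute_uv=False)
assert np.isclose(lsv, sigma[0])           # equals the largest singular value

# the empirical gain ||Ax|| / ||x|| never exceeds the norm
x = rng.standard_normal((4, 100))
gains = np.linalg.norm(A @ x, axis=0) / np.linalg.norm(x, axis=0)
assert np.all(gains <= lsv + 1e-9)
```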


Distance between matrices: any matrix norm induces a distance between matrices, $d(A,B) = \|A-B\|$.

Direction of maximum variance projection

Let us try to visualize the data set by projecting it on a single line passing through the origin. The line is thus defined by a vector $x \in R^m$, which we can without loss of generality assume to have unit Euclidean norm. The data points, when projected on the line, are turned into real numbers $x^T a_i$, $i=1,\dots,n$.

It can be argued that a good line to project data on is one which spreads the numbers $x^T a_i$ as much as possible. (If all the data points are projected to numbers that are very close together, we will not see anything, as all data points will collapse to nearby locations.)
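This maximum-spread direction is the top left singular vector of the centered data matrix; a minimal NumPy sketch with assumed synthetic data:

```python
import numpy as np

# Synthetic data: columns a_i in R^2, stretched more along the first axis.
rng = np.random.default_rng(4)
A = rng.standard_normal((2, 200)) * np.array([[3.0], [0.5]])

Ac = A - A.mean(axis=1, keepdims=True)          # center the data points
U, s, Vt = np.linalg.svd(Ac, full_matrices=False)
x = U[:, 0]                                     # unit direction of maximum variance

# variance of the projections x^T a_i beats that along a random unit direction
r = rng.standard_normal(2)
r /= np.linalg.norm(r)
assert np.var(x @ Ac) >= np.var(r @ Ac)
```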

### Condition number

The condition number of an invertible matrix is the ratio between its largest and smallest singular values:

$$\kappa(A)=\frac{\sigma_1}{\sigma_n}=\|A\| \times \|A^{-1}\|$$
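A numerical check that the two expressions for the condition number agree (random invertible matrix assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))

s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]                       # ratio of largest to smallest singular value
assert np.isclose(kappa, np.linalg.cond(A, 2))
assert np.isclose(kappa, np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))
```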

## 2 Systems of linear equations

Motivation

$$y=\log\frac{I_{rec}}{I_{source}}=\sum_{(i,j)\in T}A_{ij}d_{ij}$$

In practice, the number of rows is typically far smaller than the number of columns, since the number of X-ray beams is far smaller than the number of cells in the density grid; such systems are underdetermined, and the solution is generally not unique.
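A minimal NumPy sketch of this non-uniqueness in a wide system (dimensions and data are assumed for illustration): `lstsq` returns the minimum-norm solution, and adding any null-space vector yields another exact solution.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 8))            # far fewer equations than unknowns
y = rng.standard_normal(3)

x_min, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimum-norm solution
assert np.allclose(A @ x_min, y)

# a null-space direction from the SVD: rows of Vt beyond the rank
U, s, Vt = np.linalg.svd(A)
z = Vt[-1]
assert np.allclose(A @ z, 0, atol=1e-10)
assert np.allclose(A @ (x_min + z), y)     # a second, different solution
```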