As an amateur photographer, I used to believe a DSLR is better than a phone camera, since its much larger CMOS sensor can receive more photons. Then I installed Google Camera on my Galaxy S8, and the results surprised me a lot: in most cases it performed better than my Canon 5D III, without any editing in software like Photoshop.
Besides HDR+ and portrait mode, Google Camera provides a magic mode called Night Sight. Unfortunately, it only works on Pixel-series phones (Pixel 3/3a works best) with hardware support. You can find the A/B testing here.
How does Night Sight improve the quality of shots at night?
HDR+. HDR+ is the foundation of Night Sight. It is a computational photography technique that captures a burst of frames, aligns the frames in software, and merges them together. Since each frame's exposure is short enough to prevent blur from hand shake, the result is sharper and has a wider dynamic range than a shot taken without HDR+.
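A toy sketch of the merge step (not Google's actual pipeline): averaging N already-aligned noisy frames reduces the noise by roughly sqrt(N), which is why a burst of short exposures can beat one long one.

```python
import numpy as np

# Simulate a burst of 8 aligned frames of a dim, flat scene with sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.2)
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]

merged = np.mean(frames, axis=0)   # merge the aligned burst

noise_single = np.std(frames[0] - scene)
noise_merged = np.std(merged - scene)
print(noise_single, noise_merged)  # merged noise is much lower
```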
Positive shutter lag (PSL). In regular mode, Google Camera uses a zero-shutter-lag (ZSL) protocol, which limits exposures to at most 66 ms no matter how dim the scene is, and lets the viewfinder keep a display rate of at least 15 frames per second. Using PSL means you need to hold still for a short time after pressing the shutter, but it allows longer exposures, thereby improving SNR at much lower brightness levels.
Motion metering. Optical image stabilization (OIS) is widely used in many devices to counteract hand shake, but it doesn't help with long exposures or moving subjects. Pixel 3 adds motion metering to detect moving objects and adjust the exposure time of each frame: if it detects a dog moving through the frame it shortens the per-frame exposure, and if the phone is held steady (for example on a tripod) it lengthens it.
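A hypothetical sketch of that trade-off (the function name and thresholds are mine, not Google's): cap the per-frame exposure so that the motion blur stays below a tolerated number of pixels, and fall back to the longest exposure when the scene is static.

```python
# Illustrative only: choose a per-frame exposure from an estimated motion level.
def choose_exposure_ms(motion_px_per_s, max_blur_px=2.0,
                       min_ms=8.0, max_ms=333.0):
    if motion_px_per_s <= 0:
        return max_ms                       # static scene / tripod: go long
    # exposure such that motion * exposure stays within the tolerated blur
    ms = 1000.0 * max_blur_px / motion_px_per_s
    return max(min_ms, min(ms, max_ms))

print(choose_exposure_ms(0))        # tripod: longest exposure
print(choose_exposure_ms(200.0))    # fast-moving subject: short exposure
```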
Super Res Zoom. HDR+ essentially uses an algorithm to align and merge frames to increase the SNR (signal-to-noise ratio). Pixel 3 adds a new algorithm called Super Res Zoom that performs super-resolution and reduces noise by averaging multiple images together. Super Res Zoom produces better results for some nighttime scenes than HDR+, but it requires the faster processor of the Pixel 3.
Learning-based AWB algorithm. In dim light, AWB (auto white balance) does not function well. White balancing is an ill-posed problem: we cannot invert it to recover the true color of an object in the dark. So Google Camera uses machine learning to "guess" the true colors and shift the white balance accordingly.
S-curve in tone mapping. As we know, if we expose a picture for a long time, all the objects become brighter, and we can no longer tell when the picture was taken. Google applies a sigmoid-shaped tone curve to adjust object brightness (dark objects become darker, light objects become brighter), so the result still looks like night.
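A minimal S-curve tone map built from a sigmoid (illustrative; the exact curve Google uses is not public). Pixel values are normalized to [0, 1], and the midpoint and slope parameters are assumptions.

```python
import numpy as np

def s_curve(x, midpoint=0.5, slope=10.0):
    """Sigmoid tone curve: darkens values below the midpoint, brightens above."""
    y = 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))
    # rescale so 0 maps to 0 and 1 maps to 1
    lo = 1.0 / (1.0 + np.exp(slope * midpoint))
    hi = 1.0 / (1.0 + np.exp(-slope * (1.0 - midpoint)))
    return (y - lo) / (hi - lo)

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(s_curve(x))   # dark values pushed darker, bright values pushed brighter
```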
Dictionary of Keys: maps (row, column) pairs to the values of the elements.
List of Lists: stores one list per row, with each entry containing the column index and the value.
Coordinate List: a list of (row, column, value) tuples.
Compressed Sparse Row: three one-dimensional arrays (A, IA, JA) holding the values, the row pointers, and the column indices.
Compressed Sparse Column: same as CSR, but stored by column instead of by row.
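Two of the formats above, inspected with SciPy (assumes scipy is installed):

```python
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix

dense = np.array([[1, 0, 0],
                  [0, 0, 2],
                  [0, 3, 0]])

coo = coo_matrix(dense)      # coordinate list: (row, column, value) triples
print(list(zip(coo.row, coo.col, coo.data)))

csr = csr_matrix(dense)      # compressed sparse row: the three arrays
print(csr.indptr, csr.indices, csr.data)   # IA, JA, A in the note above
```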
convert to sparse matrix in Python: csr_matrix(dense_matrix)
convert to dense matrix in Python: sparse_matrix.todense()
sparsity = 1.0 - count_nonzero(A) / A.size
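The conversion and sparsity snippets above, runnable end to end (assumes scipy is installed):

```python
import numpy as np
from numpy import count_nonzero
from scipy.sparse import csr_matrix

A = np.array([[1, 0, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 0]])

S = csr_matrix(A)      # dense -> sparse
D = S.todense()        # sparse -> dense

sparsity = 1.0 - count_nonzero(A) / A.size
print(sparsity)        # 10 of 12 entries are zero
```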
Tensor arithmetic is similar to matrix arithmetic.
tensor dot product in Python: tensordot()
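With axes=1, numpy.tensordot reduces to the ordinary matrix product, which makes the connection to matrix arithmetic concrete:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# sum over the last axis of A and the first axis of B, i.e. A @ B
print(np.tensordot(A, B, axes=1))
```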
LU decomposition: A = P L U, where L is a lower triangular matrix and U is an upper triangular matrix. The permutation matrix P is used to permute the rows, or to return the result to the original order.
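A quick check of the factorization with SciPy (assumes scipy is installed):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

P, L, U = lu(A)                    # A = P @ L @ U
print(np.allclose(A, P @ L @ U))   # the factors reconstruct A
```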
QR decomposition: A = Q R, where A is m x n, Q is an m x m matrix, and R is an m x n upper triangular matrix.
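The same shapes with NumPy; mode='complete' gives the full m x m Q and m x n R described above:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

Q, R = np.linalg.qr(A, mode='complete')
print(Q.shape, R.shape)        # (3, 3) (3, 2)
print(np.allclose(A, Q @ R))   # True
```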
Cholesky decomposition: A = L L^T, for a square symmetric positive definite matrix (all eigenvalues greater than zero), where L is a lower triangular matrix and L^T is its transpose (equivalently A = U^T U with U upper triangular). It is nearly twice as fast as LU decomposition.
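Verifying the factorization with NumPy on a small symmetric positive definite matrix:

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 3.]])          # symmetric, positive definite

L = np.linalg.cholesky(A)        # lower triangular factor
print(np.allclose(A, L @ L.T))   # True
```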
Eigendecomposition. The eigenvector equation is A v = lambda v, where A is the matrix we want to decompose, v is an eigenvector, and lambda is the corresponding eigenvalue (a scalar).
A matrix can have one eigenvalue/eigenvector pair per dimension, so the matrix can be written as a product of its eigenvalues and eigenvectors: A = Q diag(lambda) Q^-1, where Q is the matrix whose columns are the eigenvectors and diag(lambda) is the diagonal matrix of eigenvalues. This equation also means that if we know the eigenvalues and eigenvectors, we can reconstruct the original matrix.
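Both equations checked with NumPy:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 1.]])

values, Q = np.linalg.eig(A)
# each column of Q is an eigenvector for the matching eigenvalue
print(np.allclose(A @ Q[:, 0], values[0] * Q[:, 0]))            # A v = lambda v
# reconstruct the original matrix from the factors
print(np.allclose(A, Q @ np.diag(values) @ np.linalg.inv(Q)))   # A = Q diag Q^-1
```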
SVD (singular value decomposition)
A = U Sigma V^T, where A is m x n, U is an m x m matrix, Sigma is an m x n diagonal matrix whose entries are the singular values, and V^T is an n x n matrix.
For a low-rank approximation, select the top k largest singular values in Sigma:
B = U_k Sigma_k V_k^T, where the columns of U_k are selected from U, the rows of V_k^T are selected from V^T, and B is an approximation of the original matrix A.
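The full decomposition and a rank-k approximation with NumPy (this particular A has rank 2, so keeping the top two singular values reconstructs it exactly):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.],
              [10., 11., 12.]])

U, s, VT = np.linalg.svd(A)
print(U.shape, s.shape, VT.shape)   # (4, 4) (3,) (3, 3)

k = 2                               # keep the top-2 singular values
B = U[:, :k] @ np.diag(s[:k]) @ VT[:k, :]
print(np.allclose(A, B))            # True: A has rank 2
```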
variance: var(x) = sum((x_i - mean(x))^2) / (n - 1); Python: var(vector, ddof=1)
standard deviation: s = sqrt(var(x)); Python: std(M, ddof=1, axis=0)
covariance: cov(x, y) = sum((x_i - mean(x)) * (y_i - mean(y))) / (n - 1); Python: cov(x, y)[0, 1]
correlation: corr(x, y) = cov(x, y) / (s_x * s_y), normalized to a value between -1 and 1; Python: corrcoef(x, y)[0, 1]
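The four statistics above on a small example (ddof=1 gives the sample versions):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 6., 8., 10.])

print(np.var(x, ddof=1))         # 2.5
print(np.std(x, ddof=1))         # sqrt(2.5)
print(np.cov(x, y)[0, 1])        # 5.0
print(np.corrcoef(x, y)[0, 1])   # 1.0: y is a perfect linear function of x
```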
PCA (principal component analysis) projects high-dimensional data onto a lower-dimensional subspace.
The principal components are the eigenvectors of the data's covariance matrix, ordered by eigenvalue (largest first).
from numpy import array
from sklearn.decomposition import PCA
A = array([[1., 2.], [3., 4.], [5., 6.]])
pca = PCA(2)                      # keep two components
pca.fit(A)
print(pca.components_)            # component vectors (eigenvectors)
print(pca.explained_variance_)    # explained variance (eigenvalues)
B = pca.transform(A)              # project A onto the components
Linear regression: X b = y, where b holds the coefficients and is unknown.
Linear least squares (similar to minimizing MSE): minimize ||X b - y||^2, which gives b = (X^T X)^-1 X^T y. Issue: solving this directly is very slow.
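In practice NumPy's lstsq solves the same problem with an SVD-based routine, which is more stable than forming (X^T X)^-1 explicitly:

```python
import numpy as np

# fit y = b0 + b1 * t through three points on the line y = t
X = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
y = np.array([1., 2., 3.])

b, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(b)   # intercept ~0, slope ~1
```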
MSE with SGD: alternatively, minimize the mean squared error with stochastic gradient descent, updating b iteratively from one sample (or mini-batch) at a time.
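A minimal sketch of that alternative (a toy example of my own, not code from the book): update the coefficients one sample at a time with the gradient of the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(50), np.linspace(0, 1, 50)]   # bias column + one feature
true_b = np.array([1.0, 2.0])
y = X @ true_b                                  # noiseless targets

b = np.zeros(2)
lr = 0.1
for epoch in range(200):
    for i in rng.permutation(len(y)):           # one sample at a time
        grad = 2 * (X[i] @ b - y[i]) * X[i]     # gradient of squared error
        b -= lr * grad

print(b)   # close to [1.0, 2.0]
```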
Reference: Jason Brownlee, Basics of Linear Algebra for Machine Learning, https://machinelearningmastery.com/linear_algebra_for_machine_learning/