
I do photography myself: I teach myself high-level technical concepts, study how to take great photos, and, most importantly, actually shoot. After researching current smartphones, I noticed some issues.

Examples of overengineering I observed:

- 32-megapixel selfie cameras. That is well past the point of meaningful return, even if the selfies are printed.
- Quad Bayer CFA sensors. All they do is create confusion: a 48-megapixel Quad Bayer sensor still outputs 12-megapixel images. As for improved dynamic range, alternatives include improving the downstream design and using dual-gain output.
- Proprietary CFAs to improve light collection. Alternatives to the Bayer CFA often create new problems, like lower color quality. The real problem is not insufficient light collection but the downstream design: as the electrical signals drain, the proportion and distribution of the random fluctuations become more extreme. (Side note: there are high-speed cameras with smaller pixels than the ISO 4 million Canon camera that nevertheless have a base ISO higher than 800.)
- 8K video recording. For end output, anything beyond 4K exceeds the point of meaningful return; our eyes can only discern so much, especially in motion pictures. High resolution makes more sense for photos meant to be looked at up close by laymen. When shooting video, sampling above the output resolution (like capturing in 8K) is great for improving 4K end output: by the Nyquist theorem, you need roughly twice as many samples per axis as the finest detail you want to resolve. Even then, 8K end output is wasteful.

Examples of underengineering:

- Insufficient megapixel counts for photos that are more likely to be printed (like 2 MP for macro mode).
- Insufficient optical corrections. The smaller the recording area, the more elaborate the corrections must be to ensure high enough effective resolving power. Even accounting for how thin phones must be kept, the optics could be better; raising the recording-area size where possible may help too.
- Insufficient emphasis on the downstream design. While the downstream design has improved (e.g., on-chip ADCs), it is still not emphasized enough in sensor design. The downstream design influences low-light performance more than anything else: it is why many cinema cameras have dual-ISO sensors, and why the older Phase One cameras lagged in high-ISO performance compared to full-frame cameras of the same period.

From your observations, what are some examples of overkill or insufficiency you have noticed in smartphone cameras? What are your thoughts?

via /r/photography https://ift.tt/3moHmBt
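The megapixel and Nyquist claims above are just arithmetic, so here is a minimal sketch of both. The function names, the 4:3 aspect ratio, and the 300 ppi print rule of thumb are my own assumptions for illustration, not anything from the original post.

```python
import math

def max_print_inches(megapixels, aspect=(4, 3), ppi=300):
    """Largest print (width, height in inches) a sensor can fill at a
    given pixel density. Assumes a 4:3 sensor; 300 ppi is a common
    rule of thumb for prints viewed up close."""
    w_ratio, h_ratio = aspect
    pixels = megapixels * 1e6
    # Solve width_px * height_px = pixels with width_px/height_px = w_ratio/h_ratio.
    height_px = math.sqrt(pixels * h_ratio / w_ratio)
    width_px = pixels / height_px
    return (width_px / ppi, height_px / ppi)

# 12 MP already fills roughly a 13.3 x 10 inch print at 300 ppi;
# 32 MP only stretches that to about 21.8 x 16.3 inches.
print(max_print_inches(12))
print(max_print_inches(32))

def capture_resolution(output_width, output_height, oversample=2):
    """Nyquist-style oversampling: capture at twice the output
    resolution per axis to preserve the finest output detail."""
    return (output_width * oversample, output_height * oversample)

# 4K UHD output (3840 x 2160) benefits from roughly 8K capture.
print(capture_resolution(3840, 2160))  # (7680, 4320)
```

The diminishing returns are visible in the numbers: nearly tripling the pixel count from 12 MP to 32 MP grows the print dimensions by only about 60 percent per side, since linear resolution scales with the square root of pixel count.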