Artificial intelligence is becoming more prevalent in the photography industry. From AI-powered editing software to AI-driven autofocus, companies across the board are incorporating AI to improve image editing and camera performance, helping photographers capture better photos.
AI is heavily incorporated into the latest smartphones to help get the most from relatively small sensors and lenses. Broadly speaking, computational photography is a large part of the overall mobile photography experience. But how much further can AI take smartphone camera technology? Android Authority spoke with Qualcomm’s vice president of product management, Judd Heape, and the executive had a lot to say about AI, including a claim that AI will someday help smartphones surpass dedicated camera systems.
Heape discussed how AI is used in cameras today. Machine learning has improved noise reduction, video judder reduction, object removal during editing, and much more. AI also handles scene detection tasks, helping your smartphone differentiate between a subject and the background, between skin and hair, and other image elements. Based on how your smartphone perceives each part of the image, it can apply fine-tuned, region-specific processing.
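The idea of region-specific processing can be sketched in a few lines. This is a simplified illustration, not Qualcomm's pipeline: the function name, class IDs, and simple exposure-gain tuning are all assumptions, and a real phone uses ML-generated masks and far more sophisticated adjustments per region.

```python
import numpy as np

def apply_regional_processing(image, mask, tuning):
    """Apply a per-region exposure gain based on a segmentation mask.

    image:  float32 array of shape (H, W, 3), values in [0, 1]
    mask:   int array of shape (H, W), each pixel labeled with a class id
    tuning: dict mapping class id -> exposure gain for that region
    """
    out = image.copy()
    for class_id, gain in tuning.items():
        region = mask == class_id          # boolean mask for this class
        out[region] = np.clip(out[region] * gain, 0.0, 1.0)
    return out

# Toy example: class 0 = background, class 1 = a detected face.
image = np.full((4, 4, 3), 0.5, dtype=np.float32)
mask = np.zeros((4, 4), dtype=np.int32)
mask[1:3, 1:3] = 1                         # "face" region in the center
result = apply_regional_processing(image, mask, {0: 0.9, 1: 1.2})
# Background pixels are darkened slightly; the face region is brightened.
```

The key point is that once a scene is segmented, each labeled region can get its own tuning rather than one global adjustment.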
iPhone 14 Pro
Heape believes there are four stages of AI in photography. The first is basic scene recognition, such as identifying a specific subject in the image frame. The second involves AI controlling features such as autofocus, automatic white balance, and automatic exposure. The Qualcomm executive thinks modern smartphone AI is now in the third stage: advanced segment recognition. This means not just seeing a specific subject in the frame, but identifying nearly everything in a scene and adjusting each element accordingly. Phones can now detect whether a horizon is crooked or whether a specific face in a photo requires exposure adjustments. This stage also comprises advanced AF/AE technologies such as face and eye detection.
The iPhone 14 Pro includes a new 48MP main camera, the highest-resolution and largest sensor Apple has used in an iPhone. Smartphones from other companies, such as Samsung, have eclipsed 100MP.
The fourth stage, which Heape thinks the industry is about three to five years from reaching, is when AI will process an entire image. Heape said, “Imagine a world from the future where you’d say, ‘I want the picture to look like this National Geographic scene,’ and the AI engine would say, ‘Okay, I’m going to adjust the colors and the texture and the white balance and everything to look like and feel like this image you just showed me.’”
That’s advanced AI technology, but is it enough to help smartphones eclipse dedicated camera systems? Earlier this year, the president and CEO of Sony Semiconductor Solutions, Terushi Shimizu, remarked that smartphone breakthroughs could help mobile cameras surpass DSLR and mirrorless cameras as soon as 2024. Heape agrees, although it’s worth noting that there are still physical limitations for smartphones.
Not only are the image sensors significantly smaller, but so are the optical components in the lenses. Still, Heape believes that the processing power of smartphones, which exceeds that of current dedicated cameras, can help bridge the gap and overcome the physical challenges of smaller sensors and lenses. It’s also true that some major players in the smartphone space are throwing significant money and engineering power at smartphone camera development. While dedicated cameras are becoming more advanced, the market size difference between smartphones and interchangeable lens cameras is massive.
Sony Xperia 5 IV
There’s still a major difference in potential image quality between smartphones and dedicated cameras with larger sensors, such as Micro Four Thirds, APS-C, full-frame and medium-format cameras. Even though some smartphones offer over 100 megapixels, more than any full-frame camera on the market, the pixel quality is different. The lens quality is vastly different too. A 108MP smartphone doesn’t produce a sharper image than a 100MP medium-format camera, or any modern ILC camera, for that matter. The smartphone may have more pixels, but the pixels are much smaller. AI can help compensate for physical constraints, but so far, AI hasn’t been able to overcome them completely.
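The pixel-size gap above is easy to quantify. As a rough worked example, using approximate published figures (a 108MP smartphone sensor in the Samsung ISOCELL HM series measures about 9.6 × 7.2 mm at 12000 × 9000 pixels; the 100MP medium-format Fujifilm GFX100 sensor measures 43.8 × 32.9 mm at 11648 × 8736 pixels):

```python
# Compare pixel pitch of a 108MP smartphone sensor and a 100MP
# medium-format sensor, using approximate published dimensions.

def pixel_pitch_um(width_mm, width_px):
    """Pixel pitch in micrometers: sensor width divided by pixel count."""
    return width_mm * 1000 / width_px

phone = pixel_pitch_um(9.6, 12000)     # smartphone: ~0.80 um per pixel
medium = pixel_pitch_um(43.8, 11648)   # medium format: ~3.76 um per pixel
area_ratio = (medium / phone) ** 2     # light-gathering area per pixel

print(f"phone: {phone:.2f} um, medium format: {medium:.2f} um, "
      f"area ratio: ~{area_ratio:.0f}x")
```

Each medium-format pixel gathers roughly twenty times the light of its smartphone counterpart, which is the physical deficit computational photography has to make up.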
The next great frontier for smartphone photography could be a combination of AI and revised lens design. It’s possible to incorporate anamorphic lenses and then rebuild a proper-looking image using finely tuned algorithms. While this hasn’t yet appeared in commercially available products, the idea of fitting more and better glass into a smaller area is enticing. Even without unusually shaped glass, we’ve seen huge advances in the ILC space, with optical engineers combining better glass and much more sophisticated software lens corrections to achieve spectacular results. There’s a lot of room for growth in smartphone optics as well, especially as processing power improves.
When will smartphones best cameras such as the Sony a7 IV in terms of overall image quality?
We’ve also seen Sony incorporate true variable optical zoom in its Xperia smartphones. It’s conceivable that other companies are developing similar, or perhaps even better, variable zoom designs, which will improve the overall usability of smartphones for a wider variety of applications.
There’s a lot to be excited about. However, Heape’s optimism that smartphones can surpass dedicated camera systems in image quality still feels a bit too hopeful. There’s no doubt that improvements in software, hardware, and AI have helped smartphones make massive strides in photo and video, and images from a smartphone are sufficient for many users. But overcoming physics and canceling out the gains offered by bigger sensors, larger pixels, and extremely advanced lenses will require much more than processor bumps and more sophisticated image processing. Head to Android Authority to read the full interview.