It’s a question many photographers, including myself, have been asking for years. Why don’t mirrorless and DSLR cameras utilize smartphone technology and apps?
My friend Mary Wade, using a co-worker’s ring light, had photographed portraits of her fellow employees using both an older DSLR and her new Google Pixel smartphone. “The photos that I took with my phone looked a lot better,” she noted.
After she found that much of that is due to how newer camera phone tech processes the images, she thought, “Sign me up for auto processing!” She’s busy, after all. She writes and does a billion other things at the office.
Five benefits we might get with smartphone processing technology applied to cameras
Cameras on smartphones can take some astonishingly great photos despite their considerably smaller sensors. Just think what images we could create if this technology were applied to the much larger, more detailed sensors found in mirrorless and DSLR cameras, and to all that amazing glass. Computational photography would no longer have to battle the physics of small sensors and tiny lenses.
1. Instant image processing similar to smartphones
Many of us, like Mary, like the way modern smartphones automatically process photos. Using computational photography, smartphones automatically adjust white balance and color, sharpen, create high-dynamic-range photos and more. They can reduce noise or brighten parts of an image automatically. And they can do it at lightning speed.
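To give a sense of what that automatic processing involves, here is a minimal sketch of one classic building block, gray-world white balance, in Python. It is purely illustrative; real smartphone pipelines are vastly more sophisticated and run on dedicated hardware.

```python
def gray_world_balance(pixels):
    """Gray-world white balance: scale each RGB channel so its mean
    matches the overall mean, neutralizing a color cast.
    `pixels` is a list of (r, g, b) tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m for m in means]  # boost cool channels, tame warm ones
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A warm-tinted patch: the red channel runs hot, the blue runs cold.
patch = [(200, 150, 100), (180, 140, 90)]
balanced = gray_world_balance(patch)
```

After balancing, the three channel averages land close together, which is exactly the "gray world" assumption at work.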
For real estate or sports photographers, this could also mean being able to send images to clients faster than ever. Photographers could also send their images to printers much more efficiently as well.
Some photographers, including us night photographers, might not want the camera to process our photos automatically. Or perhaps we might want it to, but also leave the original copy intact so we can process it on our own and compare. Cameras could offer these choices.
2. Bokeh
Portrait Mode on iPhone or Android, which creates bokeh, might also be a great application. Using machine learning to automatically focus on eyes (yes, I know many mirrorless cameras already do this) while creating a creamy bokeh could be a welcome addition.
3. Top Shot
Mirrorless cameras could incorporate technology such as Google’s Top Shot. For instance, you might take a photo where one person blinks. Top Shot to the rescue. It captures a three-second video around the shutter press and uses on-device machine learning to select the best frames, taking into account open eyes and other factors such as smiling or whether the subjects are looking at the camera, and creates a photo for you.
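To give a rough idea of how a Top Shot-style picker might work under the hood, here is a toy Python sketch. The quality signals and weights here are made-up placeholders; Google’s actual system derives such signals from on-device machine learning models.

```python
def score_frame(frame):
    """Combine per-frame quality signals into one score.
    The weights are illustrative guesses, not Google's."""
    return (2.0 * frame["eyes_open"]       # penalize blinks heavily
            + 1.0 * frame["smiling"]
            + 1.0 * frame["facing_camera"])

def pick_top_shot(burst):
    """Return the highest-scoring frame from a short burst."""
    return max(burst, key=score_frame)

# A three-frame burst; frame 0 caught a blink.
burst = [
    {"id": 0, "eyes_open": 0.1, "smiling": 0.9, "facing_camera": 0.8},
    {"id": 1, "eyes_open": 0.9, "smiling": 0.8, "facing_camera": 0.9},
    {"id": 2, "eyes_open": 0.8, "smiling": 0.3, "facing_camera": 0.5},
]
best = pick_top_shot(burst)  # frame 1: eyes open, smiling, facing camera
```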
4. Capturing life experiences in three dimensions
Apple has already rolled out a LiDAR scanner on the iPhone, along with supporting apps. It takes Night mode portraits, video-based Portrait Mode and low-light bokeh to new heights. Coupled with machine learning, it promises more, such as 3D photography, augmented reality and, of course, continually improving bokeh for daytime portraits.
The idea of capturing your child’s first birthday or your wedding using LiDAR to create a three-dimensional documentation is intriguing. Much of the issue with LiDAR so far has been lack of resolution. However, quality lenses and large sensors may offset this.
5. Combining elements
Perhaps machine learning might be applied toward combining qualities not physically possible in a lens. Could we have a weird 400mm fisheye? Or a 180-degree wide-angle that blurs like a Lensbaby?
Using smartphone OS and apps on cameras
I would love to see my cameras with larger screens, sporting operating systems such as the ones found in our smartphones. Wouldn’t it be particularly powerful if a camera could run apps such as Clear Outside or PhotoPills, or apps that offered more specific control over the camera, such as exposure ramping (gradually changing your camera’s settings) for time-lapses? The mind just reels at the amount of control, information and ease of operation you could have.
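Exposure ramping is conceptually simple, by the way. Here is a small illustrative Python sketch that interpolates shutter speeds in EV (log2) space, which is how a smooth day-to-night ramp is commonly described; real ramping apps juggle ISO and aperture as well.

```python
import math

def ramp_exposures(start_s, end_s, frames):
    """Gradually change shutter speed across a time-lapse.
    Interpolating in EV (log2) space gives a perceptually smooth ramp,
    since each stop of light is a doubling of exposure time."""
    start_ev, end_ev = math.log2(start_s), math.log2(end_s)
    return [2 ** (start_ev + (end_ev - start_ev) * i / (frames - 1))
            for i in range(frames)]

# Ramp from 1/100 s in daylight to 10 s after dark over 5 frames.
speeds = ramp_exposures(1 / 100, 10.0, 5)
```

Each successive frame gets a fixed number of stops more exposure, so the brightness change is gradual rather than jumping in big steps.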
Going further, one could also have photo editing apps in-camera. One could use Photoshop Express, Lightroom, Snapseed or many others.
Two cameras that may bring us closer to smartphone-like technology
1. Alice Camera
Alice Camera, currently in development, shows promise. The device promises the experience of a smartphone coupled with the capability of a high-end mirrorless camera. Alice is an AI-accelerated computational camera with a high-resolution Four Thirds sensor and interchangeable lenses. You can instantly stream and post in 4K to Twitch, YouTube, TikTok, Instagram or Facebook.
Unlike previous smartphone attachments such as clip-on lenses, this is a full slimline camera that your smartphone slides onto so the two are integrated. It feels more like the smartphone is attached to the camera, not the other way around.
2. Yongnuo YN455
Yongnuo is perhaps best known in the United States for producing inexpensive flashes, lenses and other accessories. But it did begin manufacturing cameras that combine smartphone features with traditional camera functions, running an older version of Android.
Now, Yongnuo has filed a patent for a modular mirrorless camera. It has also announced a new camera, the YN455. Like its predecessor, the never-fully-realized YN450, the YN455 incorporates an Android operating system. Yongnuo says it will have a Micro Four Thirds sensor, 4G cellular and Wi-Fi. And yes, you guessed it: Like Alice Camera, it will let you share straight to social media easily.
There is no news on when this will be released or whether it will be available outside China. If it’s as good as it seems, you might want to begin making friends with someone in China now so you can get your mitts on this.
Security technology in cameras
Cameras could also incorporate smartphone technology in other ways.
Canon has already patented a fingerprint sensor for not only the camera body but also the lens. What security benefits could this provide?
Beyond security, the sensor might also instantly load custom presets for a particular photographer. This obviously has some downsides, such as a failed fingerprint read slowing someone down from using the camera. But perhaps this sort of technology could be activated only at certain times.
Tracking your camera
If a camera were running iOS or Android, perhaps we could also track cameras. This could be valuable if the camera is lost, stolen, or left in the last Lyft car we took.
I once came across a night photographer in Mesquite Dunes in Death Valley National Park. “Have you seen a camera anywhere?” he asked, continuing to search frantically.
Now, if you’ve been to Mesquite Dunes, you know that it’s quite possible for miles of sand dunes to look similar after a while. This would be exponentially compounded at night. But if you could use something similar to a find-my-phone app, perhaps you could greatly narrow down the location. Perhaps you could even make the camera emit a sound. After all, many cameras already have GPS on board.
What’s the holdup?
Why haven’t the largest camera manufacturers, such as Sony, Nikon and Canon, fully embraced the sort of smartphone technology that seems intuitive to just about everyone? Why is this left to smaller companies such as Alice Camera and Yongnuo?
Major camera manufacturers seem to mostly produce incrementally better cameras with each generation. But they seem reluctant to incorporate smartphone technology. Rarely if ever do we see operating systems with the ability to use apps. With sales of mirrorless and especially DSLRs dwindling, this makes their decision all the more puzzling.
What smartphone technology would you love to see on a camera?
Intriguing article. I would not be surprised if the major camera companies are already working on integrated applications like these. In fact, I’d bet they are. Think back just a few years and you could hear people decrying Canon for being so far behind Nikon and Sony. Oh, Canon will never catch up. Then…Boom, they are right up there again. They’d been working diligently behind the scenes. Tech companies remind me of Area 51 in the Nevada Test Site. We have no idea what they are up to until they want you to know. Alternately, I think that technology…
Thanks, Tony. I just wonder why this wasn’t wholeheartedly embraced before. Having some sort of OS with the ability to somewhat customize a camera with various apps seems like it would be a real win for the consumer, resulting in increased sales for the manufacturers.
I would really laugh if someone like Pentax, who has no mirrorless cameras, swoops in suddenly with smartphone technology and an enormous touchscreen and says, “Take THAT, world! How do ya like us NOW!?!”
You’re right, someone like Pentax, Olympus or maybe even Fuji will one day soon say “take that!” and then the rest will scramble to play catch-up for the next three years like Sony did with their first mirrorless cameras. The problem is, what does the consumer do now? Will they have to wait until they reach retirement for this tech to come out? We are already paying premium prices for tech that seems outdated! Maybe we should just stick to smartphones. After all, they are less expensive and easier to carry!
Thanks. I wish that would happen. If it does, however, it’s unlikely to be Pentax, as much as I love them (I use a Pentax K-1 constantly because it’s a brilliant camera). It seems to me that it would be reasonably simple to incorporate these sorts of interfaces into more cameras, but yet it rarely seems to happen.
This is a very timely article for me. The other day I was shooting with my a6400. I was taking a picture of my son, outside in the sun with a very dark background, and he was all blown out color-wise. I snapped a pic of him with my iPhone 12 and it came out much better. And it made me think this very same thing. Hopefully someone is able to integrate these sorts of smart features soon. I for one would be very interested in it.
Thanks. If I hear anything about those two cameras in development, I will probably write about them. I have a feeling a lot of people including photographers are interested in it. After all, Mary Wade, who is not a photographer, is the one who asked me about this question, sort of giving me a good angle for this article I was developing anyway.
28OCT2022. Nikon owner since 1977, from the Nikon F, F2, D40, D700, D750 and D7500. I get frustrated at how well my Android S21 Ultra manages to use AI and expose parts of an image differently, such as sky and foreground, to achieve high dynamic range, or combines several shots in low light to get a nice-looking image, or how it treats mixed lighting with ease, such as in a museum. In 2022, my DSLRs still expose the entire image for the average exposure of the scene all at once (like cameras have done for over 100 years) instead of analyzing…
Samsung actually did this quite well when it was in the camera business with the Galaxy Camera and Galaxy NX (which had an APS-C sensor), which ran full Android. It’s too bad Samsung got out of the biz before smartphone imaging was quite as ubiquitous as it is today.
Thanks. True, they did make those, although I was thinking more of cameras with interchangeable lenses. Samsung would have the financial muscle to make this happen, but I suspect that they, along with Google and Apple, might be reluctant to jump into the dwindling interchangeable-lens (DSLR and mirrorless) market.
Samsung’s NX line were cameras with interchangeable lenses. I owned an NX500 a few years ago, but they are no longer supported; Samsung dropped out of the camera game.
I thought the same thing a few years ago looking at the perfect colors of my Samsung Note 4. It is a pity that Samsung has left the photo world.
You may find the Arsenal 2 intriguing, it brings that instant post-processing and app-like functionality to your existing camera: https://witharsenal.com/
It would be cool to see those smarts built into the body, but in the meantime, it’s a good option.
They do seem intriguing. I like the idea of them. I’m just hoping to customize functions more by being able to use various apps, and to do so similarly to a smartphone. But yes, Arsenal seems great.
What are your favorite aspects of Arsenal? What do you feel it does best? Do you find it intuitive?
I’ve watched enough videos on the original Arsenal to decide it was not for me. I am intrigued by its sequel but waiting to see some reviews on it as well before I decide whether or not to bite.
That seems logical. I am intrigued but for now am interested in keeping the stuff I haul around as lightweight and simple as possible.
The Fuji system, which allows in-camera editing of RAW images into multiple copies, is a start.
It’s a start. And Fuji is super great about making meaningful updates to their firmware, including new features.
My non-photographer friends cannot believe that smartphone technology in DSLRs and mirrorless cameras is not commonplace.
There may be licensing agreements between phone and camera manufacturers prohibiting camera makers from using certain features. That said, I would love to see a serious camera with more phone-like image processing and connectivity. I would love to just post my images to social media directly from my camera.
Then you would have to ask how some of the examples in my article are being made, including Alice Camera, the older Samsung cameras and Yongnuo. They have been doing it.
Also, even if there were licensing agreement issues, why would it have taken so long to work past that?
Why wouldn’t Sony, who has rather deep pockets, simply pay someone to develop something similar if they couldn’t work out a licensing agreement?
Great questions to ask. My take is that so far they haven’t *had* to do it because the optical/physics advantages of the larger cameras have kept them able to stay ahead without having to use the computational photography that we see on our iPhones. But now that folks know what is possible, it’s only a matter of time until someone in the “big” camera market makes the leap. In my book on artificial intelligence and photography, I surmised about various advantages we could see if we had more AI built into our DSLRs and mirrorless cameras. It’s…
Aaron, thanks, I love your comment. Full of insight. Since Aaron might trigger the spambots, I’ll mention that the name of his book is “The Computer Ate My Photos: Artificial Intelligence and the Future of Photography.” I have not read it, but it sounds intriguing. And yes, it is available on Amazon and, I’m sure, other places. I would agree that it appears so far, the larger manufacturers have focused (haha) on eye recognition and such rather than post-processing or UI. In many respects, I am just as interested in UI as I am in post-processing. Regardless, I am getting…
I have been a Nikon owner since 1977: F1, FTN, F2, FE, FM, D40, D300, D700, D7500. I’ve been asking for years, why hasn’t Nikon hired some Android-type developers to use AI to process images? I actually get irritated that my smartphone photos look great, with high dynamic range and AI that avoids blown-out highlights such as clouds while keeping great-looking foregrounds, and never need editing before posting on Flickr or sending to family or friends. With the DSLR, I have to expose for the foreground, or expose to the right (ETTR), then pull out the clouds in…
AI and other tech seems slow in coming, and I’ve found myself wondering why for years. I suppose we should be “grateful” that these cameras often come with Wi-Fi and GPS!
One smartphone technology I really wish to see in DSLRs is panoramic stitching! Yes, the image quality from computational photography is amazing, but I can still do that in Photoshop or Lightroom easily. Not so with panoramic shots! The ability to recognize objects as we pan the camera for a panoramic shot, and put them properly in their place, is something still missing in DSLRs!
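For the curious, the core alignment idea behind stitching can be shown with a deliberately simplified, one-dimensional Python sketch: slide two overlapping strips against each other and keep the offset where they agree best. Real stitchers match feature points in two dimensions, warp the frames and blend the seams.

```python
def best_overlap(left, right, max_overlap):
    """Find how many trailing samples of `left` best match the
    leading samples of `right` (smallest mean squared difference)."""
    best_k, best_err = 1, float("inf")
    limit = min(max_overlap, len(left), len(right))
    for k in range(1, limit + 1):
        err = sum((a - b) ** 2 for a, b in zip(left[-k:], right[:k])) / k
        if err < best_err:
            best_k, best_err = k, err
    return best_k

def stitch(left, right, max_overlap=4):
    """Join two strips, dropping the overlapping samples from `right`."""
    k = best_overlap(left, right, max_overlap)
    return left + right[k:]

left = [1, 2, 3, 4, 5, 6]
right = [5, 6, 7, 8]            # overlaps `left` by two samples
pano = stitch(left, right)      # -> [1, 2, 3, 4, 5, 6, 7, 8]
```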
Dinosaurs had the same problem as Nikon and Canon have now.
Successful,
The only game in town
No reason to change.
Until it was too late
I think the main reason is that they use highly optimized firmware for their cameras. They cannot simply replace it with Android without missing something big. Off the top of my head, I can think of losing the ability to quickly turn your camera on and off. It’s like TVs. Back in the day, you could turn on your TV and watch something instantaneously. But after they got smart, when you turn them on it takes a long time to watch anything, even though Android is on standby and doesn’t have the power limitations that come with mobile devices such as…
Thanks! You make a great point. I had the impression that you could design firmware with a lot of tech that is similar to a smartphone experience. However, I don’t know that for sure. Obviously, if the firmware is solely for camera work, it would boot considerably faster than something with lots of third-party apps. But how much faster? We all would want a camera to boot very quickly!
You absolutely don’t need to boot up an OS from zero every time the user turns on the camera. Sleeping (with a very low consumption deep sleep mode) is absolutely doable.
I’ve been saying this for years. Smartphones take great photos, but they have the limitation of their small bodies, which means small sensors. Imagine the same technology with proper light-capturing hardware and lenses! This could be far superior to anything we ever had. It blows my mind that nobody ever did that. Insane. The other points make a lot of sense, too. One more thing to think about: Imagine a camera that uses Wi-Fi automatically in the background like your phone and, just like that, uploads your shots to iCloud/Google Photos automatically. Not like current cameras, which need user interactions and…
I did a bunch of tests with my Pixel 4a and my Panasonic G85 and Canon M. In almost all cases, the Pixel 4a produced pictures with better color and exposure, more pleasing to look at. The only areas where the cameras excelled were pixel peeping (zooming in to see fine detail, which doesn’t matter for typical viewing) and natural bokeh from a fast lens, particularly portrait shots. Otherwise, the Pixel wins out, with far less fuss. I’m baffled as to why either the camera manufacturers have not integrated computational photography, or Apple/Google/Samsung/Huawei have not produced DSLRs or mirrorless cams.
I do like a lot of camera features, and certainly, having automatic wifi in the background would be useful. This does appear on some cameras, and for some reason, it really seems to run down the battery, unfortunately.
I am surprised that companies, especially those with particularly deep pockets and AI technology, don’t incorporate more integrated computational photography. I see that they are incorporating it for identifying people and animals. Regardless, Sony actually has a department devoted to AI, so it seems like it could lead the way on this.