Much has been said about Galaxy AI, Samsung’s suite of artificial intelligence features introduced with the Galaxy S24 series last month. The company spent over half an hour talking about Galaxy AI during the Unpacked event. It clearly wants to position this as a groundbreaking evolution of smartphone functionality, and while it’s promising, it hasn’t yet moved beyond novelty.
Galaxy AI, like any AI functionality on phones these days, isn’t self-sufficient: you have to invoke it before it does anything. It doesn’t take care of things for you, keeping you informed and updated about relevant matters without requiring direct input. For example, I’d love it if Galaxy AI could track my package from a shipment notification in my inbox and send me a push notification when the delivery courier is just about to arrive.
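To make that concrete, here’s a minimal sketch of what such a proactive delivery alert could look like on Android. Everything in it is illustrative: Galaxy AI exposes no such hook today, and DeliveryUpdate and notifyDeliveryNearby are hypothetical names, though the notification calls themselves are standard Android APIs.

```kotlin
// Hypothetical sketch of a proactive "package arriving" alert.
// Galaxy AI offers no such API; only the notification plumbing is real.

import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat

// Illustrative data an assistant might extract from a shipment email.
data class DeliveryUpdate(val courier: String, val minutesAway: Int)

fun notifyDeliveryNearby(context: Context, update: DeliveryUpdate) {
    val manager = context.getSystemService(NotificationManager::class.java)
    // Channels are required on Android 8.0+; re-creating one is a no-op.
    manager.createNotificationChannel(
        NotificationChannel("deliveries", "Deliveries", NotificationManager.IMPORTANCE_HIGH)
    )
    val notification = NotificationCompat.Builder(context, "deliveries")
        .setSmallIcon(android.R.drawable.ic_dialog_info)
        .setContentTitle("Package arriving soon")
        .setContentText("${update.courier} is about ${update.minutesAway} minutes away")
        .build()
    manager.notify(update.hashCode(), notification)
}
```

The interesting part isn’t the notification, of course; it’s the assistant noticing the shipment email and deciding to act on it without being asked.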
We’ve played around enough with all of the Galaxy AI features to get a good sense of where things stand right now. Samsung’s suite is good enough for what it claims to do. Live translation works well if you interact with people who speak a different language, and summarization of notes and webpages can be useful in some instances.
Perhaps the most useful Galaxy AI features are those in the camera app, from effortless manipulation of objects in an image to the ability to create a slow-motion video out of any clip. They vastly expand what’s possible with Samsung’s camera app and give users more options to edit what they capture.
Yet the limitation remains that you must manually invoke all of these features. Galaxy AI doesn’t proactively do anything for you yet, or even surface its capabilities. If the Galaxy S24 Ultra is someone’s first Samsung phone and they aren’t used to the UI, they’ll likely miss many of the AI features unless they’re technologically inclined.
I envisage a future where Galaxy AI is omnipresent: there when you need it, tucked away when you don’t, but always willing to lend a helping hand. If I take a picture of someone and the frame has obvious obstructions the photo would look better without, Galaxy AI should give me a heads-up that it can remove those objects if I want it to.
If I capture videos of fast-moving objects, it should automatically generate small slow-motion snippets just to show how my footage would look slowed down, and if I like the result, apply the effect at the touch of a button. These are just a few basic examples of how Galaxy AI could be truly woven into the fabric of the smartphone user experience; the possibilities are endless.
It should align with SmartThings and trigger my Routines when I’m parking my car in the driveway. If I lose a Galaxy Smart Tag 2, it should pull up the location history and map a route so I can retrace my steps and hopefully find what I’ve lost. And it should check what I’m running low on in my Samsung smart fridge and create a grocery list when I’m at the supermarket.
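The plumbing for that driveway trigger half-exists already. Below is a minimal sketch, assuming a geofence drawn around the driveway with Google Play Services; the geofencing calls are real Android APIs, while automatically firing a SmartThings Routine on entry is the hypothetical piece Galaxy AI would need to supply.

```kotlin
// Rough sketch of the driveway trigger using Play Services geofencing.
// Wiring the ENTER event to a SmartThings Routine is the hypothetical part.

import com.google.android.gms.location.Geofence
import com.google.android.gms.location.GeofencingRequest

fun drivewayGeofence(lat: Double, lng: Double): GeofencingRequest {
    val fence = Geofence.Builder()
        .setRequestId("driveway")
        // Trigger within 50 meters of the driveway.
        .setCircularRegion(lat, lng, 50f)
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER)
        .build()
    // The ENTER transition is where a proactive assistant would kick off
    // the user's SmartThings Routine instead of waiting for a manual tap.
    return GeofencingRequest.Builder()
        .setInitialTrigger(GeofencingRequest.INITIAL_TRIGGER_ENTER)
        .addGeofence(fence)
        .build()
}
```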
AI’s true usefulness for smartphone users has to go beyond containerized solutions that exist in their own little bubbles. Yes, live translation is good for phone calls, but Galaxy AI can’t translate a YouTube video that’s in a different language.
Making all of this happen is obviously going to require a lot of technological advancement. That’s also one of the reasons why we see so much collaboration between companies in the AI space. While everyone is trying to crack the AI problem on their own, there’s also a realization that true user value can only be delivered once there’s an all-encompassing solution.
Perhaps all of this is already on Samsung’s radar, and we may see glimpses of it over the next few years as AI on phones matures. The future of AI shouldn’t exist in silos. AI functionality should flow effortlessly between apps, services, and platforms, ushering users into a new era of smartphone control.