AI is already in your home. Soon it could tell you how much sleep to get or help you communicate better with the world.
- AI is already deployed in the home through applications like Google's Nest thermostats and voice assistants. Entertainment and consumer health are two areas where AI is predicted to develop new applications rapidly.
- Google has big ambitions for Google Translate, including the ability to translate languages through the camera on mobile devices.
- This article includes an overview of AI in consumer technology, plus the top three trends to watch, and an example of how AI is driving development of Google Translate.
- Read how AI is transforming health, transportation, investing, and more in other articles from our special report, How AI is Changing Everything.
If you asked the average person to give an example of real-world artificial-intelligence technology, they'd probably say Amazon's Alexa or Apple's Siri.
Those intelligent assistants may be the most well-known forms of AI in consumer products, but they are hardly alone. Versions of the technology can be found in services and devices ranging from Facebook to refrigerators. Different iterations of AI are already doing everything from helping people find something to watch to driving them home.
Consumers could soon find AI doing a lot more for them.
"AI has the potential to bring about some pretty profound impacts for the betterment of the ways that we live, do work, and relax at home," said Adam Wright, a senior analyst at market research firm IDC.
Siri and Alexa have perhaps had the most resonance with consumers because they seem to come directly from the world of science fiction. Alexa is designed to be like the talking computer aboard the Starship Enterprise or like "2001: A Space Odyssey's" HAL 9000, minus the evil intentions.
But real-life AI encompasses a lot more than just talking computers. In fact, it's a term used for a collection of related technologies that in different ways allow computing devices to make human-like judgments and decisions or take human-like actions in response to data. AI includes areas such as image and speech recognition, robotics, and machine learning - essentially the ability for computers to learn from observations or past experience.
You can find AI today in thermostats, such as those from Google's Nest, that automatically adjust the temperature based on users' past preferences and whether they are actually at home. It's at the heart of the feature on YouTube and Netflix that recommends videos for you to watch based on what you've viewed in the past and the similar one on Amazon that recommends products to you based on your past shopping history.
And it's at the core of the features on Apple's iPhone that let you open a chat window with your spouse in one tap and that suggest when you should leave for an upcoming appointment based on your schedule and prevailing traffic conditions.
In the near future, AI technologies could take on new and more complicated tasks. One big opportunity for AI is to tie together disparate services and do multiple things for people based on simple commands. For example, if you told the smart assistant on your phone to "cancel your trip," it might cancel all of your airline tickets, hotel bookings, and car reservations at once, said Tom Galizia, the US technology strategy and architecture leader for consulting firm Deloitte.
Another potential example: A smart home system that could choose a movie for you to watch, adjust the lighting in your living room and the sound on your entertainment system based not only on what you've watched in the past, but also on your emotional state, the time of day, and things you've purchased.
Thanks to AI, "the world will become a much more efficient and effective place for all of us," Galizia said.
In terms of particular areas, AI experts think the technology could have the biggest impact through services and gadgets that help consumers maintain their health.
Already devices such as Apple Watch and Fitbit Versa monitor wearers' heart rates and activity and can recommend adjustments to their routines to be healthier. That kind of monitoring and those kinds of recommendations are only going to become more sophisticated and personalized in the future, detecting and responding to users' physical, mental, and emotional states, said Phil Libin, the CEO of All Turtles, a company set up to nurture the development of AI applications and services.
AI is going to be critical to making sense of all that data and translating it into information that consumers can use and act on, he said. Within the next 10 years, he predicted, some 1 billion people around the world could be living fundamentally healthier lives because of such devices and recommendations.
"The stuff that I'm most excited about, in a positive sense that I think is going to just do a huge amount of good, is broadly in healthcare," Libin said.
Three big opportunities for AI in consumer technology
Health and wellness: Nascent
Devices such as Apple Watch and Fitbit Versa are already helping users track their workouts and monitor their heart rates, and Apple's smartwatch can even detect if users have fallen or are having irregular heartbeats. In the future, such devices will likely be even more sophisticated and able to detect a wider range of physical, emotional, and mental states, AI experts say.
Artificial intelligence systems could use that and other data on consumers - such as their medical history, say, or their DNA records - to make specific and personalized recommendations for how much or when they should sleep, exercise, or relax. The technology has the potential to change the focus of the healthcare industry from treating and curing diseases to preventing them.
AI could be "truly disruptive to the whole industry," said Tom Galizia, the US technology strategy and architecture leader for consulting firm Deloitte.
Entertainment: Growing
Services such as Netflix and Spotify use a form of AI to recommend shows for users to watch and songs for them to listen to. But those recommendations are fairly unsophisticated and have the potential to be much more tailored to individual tastes in the future, AI experts said.
In the future, recommendation services could tap into a much wider range of data about users to make suggestions. Even if you had never told them what books you read, they might be able to recommend one for you based on the movies you've watched or the type of music you like or the kinds of products you've shopped for. They might also be able to take into account factors such as the time of day, and your emotional and physical state.
AI is going to allow entertainment and other apps and services to be "more and more fine tuned and personalized to whoever is using them," said Alexandre Robicquet, CEO of Crossing Minds, a startup that's developing just such a recommendation service.
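The cross-domain recommendations described above can be illustrated with a small, hypothetical sketch: if items from different media are represented with shared taste tags, a service could infer a profile from your movies and music and score books it has never seen you interact with. The catalog, tags, and scoring rule here are all illustrative assumptions, not how any particular service actually works.

```python
from collections import Counter

# Hypothetical catalog: books annotated with the same taste tags that
# might be inferred from a user's movies, music, and purchases.
CATALOG = {
    "book:The Martian": {"sci-fi", "survival", "humor"},
    "book:Gone Girl": {"thriller", "mystery"},
}

def recommend_book(history_tags, catalog):
    """Return the catalog item whose tags best overlap the user's taste profile."""
    def score(tags):
        # Counter returns 0 for tags absent from the history.
        return sum(history_tags[t] for t in tags)
    return max(catalog, key=lambda item: score(catalog[item]))

# Taste profile inferred from movies watched and songs played -- no book data.
history = Counter({"sci-fi": 3, "humor": 2, "mystery": 1})
print(recommend_book(history, CATALOG))  # book:The Martian (score 5 vs 1)
```

Real recommendation systems use far richer models (collaborative filtering, learned embeddings), but the core idea is the same: signals from one domain transfer to another through a shared representation of taste.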
Translation: Growing
You can already use an app on your phone to make the words on a sign in a foreign language readable in your own tongue. And Google's Pixel Buds headphones will help you understand what someone is saying to you in another country by connecting to the company's translation service. Improvements in artificial intelligence that help it better identify and understand how languages are written and spoken could soon make such services even better.
In the future, translation services could understand what language is being spoken without being told and glean cultural nuances that are sometimes lost today. Ultimately, researchers at Google and elsewhere are aiming for services that could act like the real-time universal translators that are a staple of science fiction.
Google Translate changed the way we interpret language, but it still has a long way to go
Many of the world's biggest tech companies have promised that artificial intelligence has the potential to transform daily life, from the way we work to how doctors are able to diagnose diseases. But one of the most apparent benefits to come from the advancement of AI so far has been in shaping the way we communicate with people around the world.
Perhaps no tool has been more instrumental in this effort than Google Translate, which launched in 2006 as a basic translator that could only interpret two languages and has grown to become one of the world's most popular translation services. It has more than 500 million installs on Android alone and sits comfortably within the top 100 free apps on Apple's iOS App Store, which boasts millions of apps, according to app-analyst firm App Annie.
But Google isn't content with simply making it possible to translate web pages and text. Its long-term vision for Translate is essentially to invent a real-life version of the Babel Fish, the tiny yellow creature from "The Hitchhiker's Guide to the Galaxy" that made it possible to understand any language when placed in one's ear. To move closer to that vision, Google is investing in growing its translation service in a few key areas, specifically when it comes to speech recognition and its ability to translate languages using a mobile device's camera.
"Not only do we work on the quality of our existing languages, but we really like to enable new languages in all the different input modalities," said Jeff Pitman, an engineering manager on the Translate team at Google.
Just this month, Google updated its instant camera translation feature with several improvements, including support for 60 additional languages, automatic language detection, and the implementation of neural machine translation technology that should reduce errors by between 55% and 85%.
The automatic language identification update is the culmination of a two-year effort within Google that required the firm to revamp the video-processing architecture it uses to interpret language through a phone's camera. When translating text via the camera, several steps happen behind the scenes in Google Translate: image processing, machine translation, and then matching elements like colors and fonts against the background. That last step makes it so that when you're translating text on a sign or a restaurant menu through your smartphone's camera, the text blends seamlessly onto the page.
With its old architecture, Google ran this process for every frame when using the camera to translate text. Now, Google Translate only does so every eight to 10 frames, making it easier to run the app smoothly on low-end phones. That's just one way Google expanded its Translate service in recent months. Back in January, it announced Interpreter Mode for the Google Assistant, which lets Google Home owners use the device to translate speech to a different language as they speak.
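The frame-throttling optimization described above can be sketched in a few lines: run the expensive pipeline (image processing, translation, compositing) only every Nth camera frame, and reuse the cached overlay in between. The function names and the interval of 9 are illustrative assumptions; this is not Google's implementation.

```python
# Illustrative interval: the article says every eight to 10 frames.
PIPELINE_INTERVAL = 9

def expensive_pipeline(frame):
    """Stand-in for OCR, machine translation, and font/color compositing."""
    return f"overlay-for-{frame}"

def translate_stream(frames, interval=PIPELINE_INTERVAL):
    """Attach a translation overlay to every frame, recomputing only every Nth."""
    overlay = None
    results = []
    for i, frame in enumerate(frames):
        if i % interval == 0:      # run the full pipeline on this frame
            overlay = expensive_pipeline(frame)
        results.append(overlay)    # intermediate frames reuse the cached overlay
    return results

out = translate_stream([f"f{i}" for i in range(20)], interval=9)
# Frames 0-8 share the overlay computed at frame 0; 9-17 share frame 9's, etc.
```

The design trade-off is latency versus cost: the overlay can lag the camera by up to N-1 frames, but the heavy pipeline runs roughly N times less often, which is what makes the feature usable on low-end phones.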
But for all the advancements Translate has made in recent years, it's still far from what Google considers to be the Holy Grail of translation technology. There are still challenges and complexities that come with understanding different languages as thoroughly as one's own, particularly when it comes to translating cultural contexts between tongues.
Google made a noticeable leap forward in this realm back in 2016 when it began using a new system that could consider an entire sentence as a single unit for translating, rather than breaking it down by words and phrases. But broader challenges remain, like ensuring that translation algorithms don't turn up biased results when translating languages that use gender-ambiguous pronouns, like Turkish. And when translating speech, Google can't yet automatically detect what language you're speaking if you don't choose a preset option.
But Pitman is confident Google will eventually reach its ultimate objective with Translate, one step at a time. "We will be making baby steps," he said. "And we'll be making progress toward that goal."