- Voice-based computing systems - like all computers - pose security and privacy risks.
- For example, researchers recently demonstrated that they could activate voice assistants such as Amazon's Alexa and Apple's Siri by aiming a laser beam at the microphones built into the devices that run them.
- The microphones at the heart of voice-based computing systems can be hijacked to take control of the devices. They also pose a privacy threat: they listen in on what people are saying, can be accidentally triggered, and could potentially be compromised.
- The growing number of devices and services users are connecting to their Amazon Echo smart speakers and other voice-based gadgets raises other concerns, including that hackers could use those connections to get access to owners' accounts.
- Users can take some steps to protect themselves, but they have limited control over the security of the prominent voice-based systems.
Alexa, did you know you can be hacked?
It's true. Just like PCs and smartphones, voice-based computers such as Amazon's popular line of Alexa-powered Echo smart speakers are vulnerable to attacks that could get them to do things that their owners or designers wouldn't want or didn't authorize.
A team of researchers has shown that the voice assistants inside smart speakers and smartphones can be fooled into opening garage doors or starting cars by modulating the intensity of a laser beam pointed at the devices' microphones to simulate their owners speaking commands. Separately, another team of researchers has demonstrated that they could effectively block Alexa from responding to its owner by playing specially tuned background music.
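To get a rough sense of how the laser attack works: the spoken command's audio waveform is used to vary the laser's brightness, and the target device's MEMS microphone registers the flickering light much as it would sound. The sketch below shows just that modulation step in Python; the file name and modulation parameters are illustrative, and the laser-driver hardware that would actually emit the beam is omitted.

```python
import wave
import numpy as np

def audio_to_laser_intensity(wav_path, bias=0.5, depth=0.4):
    """Turn a recorded voice command into a laser-intensity waveform.

    The attack amplitude-modulates the beam: its brightness tracks the
    audio signal, and the microphone transduces the flicker as sound.
    `bias` keeps the laser switched on; `depth` scales the modulation.
    """
    with wave.open(wav_path, "rb") as f:
        rate = f.getframerate()
        raw = f.readframes(f.getnframes())
    # Assumes 16-bit mono PCM; normalize samples to [-1, 1].
    audio = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    # Map audio onto a DC-biased intensity signal clipped to [0, 1]:
    # intensity(t) = bias + depth * audio(t)
    return rate, np.clip(bias + depth * audio, 0.0, 1.0)

# A real attack would feed these samples to a laser-diode driver.
rate, intensity = audio_to_laser_intensity("command.wav")
print(f"{len(intensity)} intensity samples at {rate} Hz")
```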
And some consumers have reported that when using a voice agent to dial a customer support number, they were connected to a scam operator instead.
It's hard to know just how concerned consumers and businesses should be about the potential vulnerabilities of voice computers and systems, security experts say. But they should be aware that as fun and useful as such devices and services can be, they aren't risk-free. And the risks will only increase as the devices become more popular and more services and other gadgets are connected to them.
"We're opening up a new world of dangers with these things where some really smart people might start figuring out how to do things that cause these devices to behave in unexpected ways," said Martin Reynolds, an analyst for market research firm Gartner who focuses on emerging technologies.
Alexa is (almost) always listening
The threats posed by voice-based computing systems come in a number of different forms, some of which are similar to those of other types of computers and some of which are unique or unusual.
The most distinctive thing about the Alexas and Siris of the world, of course, is that they're built around microphones - that's how users interact with them. Those microphones are what allow users to check the weather, turn on their lights, and unlock their doors simply by speaking commands or queries.
But while the microphones give smart assistants their basic capabilities, they also give rise to privacy and security concerns. For voice computers to be at their owners' beck and call, those microphones need to be on at all times, listening for their "wake words." The companies behind the voice assistants generally say they don't record anything before a device hears its wake word or the assistant is otherwise activated.
Sometimes, though, the assistants can be accidentally triggered, whether because of an inadvertent button push or because they mistake another phrase for the wake word. Once they're activated, the devices start recording, potentially overhearing sensitive or highly private information.
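Conceptually, that always-on listening boils down to a loop like the sketch below. Here `spot_wake_word` and `record_and_send` are hypothetical stand-ins for a real on-device keyword spotter and a cloud upload, not any vendor's actual code; the point is that detection, including false detection, is what starts a recording.

```python
import collections

FRAME_MS = 30        # analyze audio in short frames
BUFFER_FRAMES = 50   # keep roughly the last 1.5 seconds on-device

def spot_wake_word(frame: bytes) -> bool:
    """Hypothetical on-device keyword spotter (e.g. for 'Alexa')."""
    return False  # placeholder: a real model scores each frame

def record_and_send(frames: list) -> None:
    """Hypothetical: start streaming audio to the cloud for processing."""

def listen_loop(mic) -> None:
    # The microphone is always on: frames stream into a small rolling
    # buffer that is continuously discarded unless the wake word fires.
    ring = collections.deque(maxlen=BUFFER_FRAMES)
    while True:
        frame = mic.read(FRAME_MS)
        ring.append(frame)
        # A phrase that merely sounds like the wake word trips this
        # check just as surely as the real thing does.
        if spot_wake_word(frame):
            record_and_send(list(ring))
```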
Access to those recordings isn't necessarily tightly controlled. A malicious actor could get access to them by hacking a user's Amazon account, for example.
Meanwhile, Apple and other companies weathered a minor scandal recently when they acknowledged that they had sent recordings made by their voice assistants to employees and contractors to review, ostensibly to assess how well the assistants understood and responded to requests. Contractors with access to the Siri recordings reportedly overheard people having sex, doctors discussing patients' medical histories, and even drug deals, the Guardian reported.
There's also the concern that a hacker could compromise the microphone in the devices to surreptitiously listen in on private conversations and thereby glean valuable information, whether from within people's homes or inside corporate offices.
"The people I know that work in security and privacy don't use these devices," in part because of such privacy concerns, said Eugene Spafford, a computer science professor at Purdue University and executive director emeritus of its Center for Education and Research in Information Assurance and Security.
The devices' microphones can give others control over them
But the microphones at the heart of voice-based computing systems pose more than just privacy risks. They also can be a security threat by providing an opening for people other than the owners of the devices to take control of them.
Amazon's Alexa, for example, can be controlled by anyone who talks to it. Two years ago, Burger King demonstrated it could activate a Google Home device by broadcasting the phrase "OK, Google" in a television commercial. And researchers in Japan and at the University of Michigan have now shown that they can interact with voice assistants in both smart speakers and phones from a distance using a laser beam pointed at their microphones.
What makes that vulnerability more dangerous is that people are connecting more and more devices and services to their voice computing systems. Consumers can ask Alexa to check their bank balances and can tell Siri to make a payment to someone over Apple Pay. They can use the systems to unlock doors, turn off lights, start cars, and open garages.
"We expect that the problem would grow over time," said Benjamin Cyr, a PhD student at the University of Michigan who was a part of the team that discovered the voice assistants' laser vulnerability. "As they are able to do more stuff," he continued, "an attacker can do more damage."
Many of the smart home and Internet of Things devices that users are connecting to smart speakers themselves have security shortcomings. Security researchers have shown that they can hack into cars via their internet-connected stereo systems. And three years ago, hackers were able to bring much of the internet to a crawl by hijacking poorly secured connected security cameras and other online devices.
There's a chance such devices could also be used to compromise the smart speakers and other devices that are connected to them.
"People are investing a lot of money in these products because of the apparent convenience, and they don't understand the underlying risks," Spafford said.
A smart speaker is like a black box
A further security concern is that voice assistants and the computing systems behind them are almost like black boxes. End users have little control over how they work, and particularly over what basic security measures they have in place. An Echo owner can't install an antivirus program on the device. If its software has a security flaw, the owner is dependent on Amazon to push out a fix.
"Security is not under your control," said Bruce Schneier, a cybersecurity expert and lecturer at Harvard's Kennedy School of Government. "There's not a lot you can do," he continued, "except decide not to play."
But some experts think the security concerns relating to smart speakers and voice assistants are overblown. The threats that most worry experts are those that can be exploited easily, at scale, and for significant financial gain; those also happen to be the ones most attractive to criminals.
Hacking into one smart speaker at a time with a laser beam doesn't really qualify as something that's easily scalable, Gartner's Reynolds said. Even if criminals were able to compromise millions of smart speakers with malicious code, it's unclear what they could do with that ability. It would likely be difficult to turn that ability into a way to scam money, he said.
The danger of having more smart devices connected to smart speakers and voice assistants may be overstated too, he said. Being able to hijack a smart speaker to open someone's door may sound scary, but a criminal is more likely to take the much easier step of just using a crowbar, he said.
To date, most of the vulnerabilities that have been discovered don't seem particularly worrisome, Reynolds said.
"Mostly these things fall into the class of more mischief than anything else," he said.
Users can take steps to protect themselves
For their part, the leading voice assistant providers do say they keep security and privacy in mind with their products. They each allow users to delete recordings of their voice-assistant requests, and Google and Amazon allow users to review them first. Google's Home devices and Amazon's Echo smart speakers have physical buttons that allow users to turn off their microphones; Apple HomePod owners can turn off the mic via an app.
Meanwhile, Amazon screens all the skills, or apps, that developers create for Alexa before making them available to end users, and those skills must meet its security requirements. Apple similarly reviews all the apps in its app store, including those with Siri features. Each of the vendors also offers a way to automatically update users' smart speakers so they're running the latest software.
And users can take steps to better protect themselves. They can set up two-factor authentication on the online accounts that are linked to their smart speakers to better secure their personal data from hackers. Amazon also lets users configure Alexa so that it can't perform certain functions, such as placing an online order, unless they first give it a preset PIN.
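For the curious, the one-time codes that two-factor authenticator apps generate aren't magic: they come from a standardized calculation (RFC 6238) over a shared secret and the current time. A minimal Python sketch, using a made-up demo secret:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Demo secret only -- never hard-code a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and depends on a secret that only the user's device holds, a hacker who steals just the account password still can't log in.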
But security experts are still assessing the threats the systems pose. Even Reynolds acknowledged that it's quite possible that criminals will find security vulnerabilities in voice-based computers and figure out a way to exploit them for financial gain in a big way.
"In the security world, there's nothing you can rule out," said Daniel Genkin, an associate professor at the University of Michigan who was part of the team that discovered the laser exploit.
Got a tip about tech? Contact this reporter via email at twolverton@businessinsider.com, message him on Twitter @troywolv, or send him a secure message through Signal at 415.515.5594. You can also contact Business Insider securely via SecureDrop.