Have you ever told your phone to perform a task by saying “OK, Google”? Or used Siri to dictate a reply on Facebook?
In this age of the Internet of Things, our smartphones, computers, and many other devices are equipped with voice-activated virtual assistants — Siri, Alexa, Cortana, Google Assistant, Bixby, and others — that can do many cool things for us, like checking the weather, ordering a pizza, reading our emails, and scheduling meetings on our calendars.
While they offer a lot of convenience for our everyday use, smart devices can become a target of cyberattacks. The assistants pretty much listen to you all the time, ready to be activated by a trigger word (like “Alexa” or “OK, Google”). Without proper security measures, these devices can allow hackers to access your information and eavesdrop on your conversations.
In this article, we will take a look at the different ways to secure these voice assistants.
Security concerns about virtual assistants
As their popularity rises, AI assistants are becoming a fixture in many homes and workplaces — as well as an increasingly “tasty” target for cybercriminals. Users need to be aware of the potential security issues and how to deal with them.
Let’s look at the main security concerns surrounding virtual assistants.
Eavesdropping
You may want to reconsider carrying around devices equipped with virtual assistants without muting them first. To cybercriminals, virtual assistants and similar AI tools can be a gateway into people’s lives: a way to eavesdrop on conversations, dig into personal data, and spy on the tasks the assistant performs.
Even without malicious intent, a simple glitch in voice processing that makes the app misinterpret its owner’s speech can lead to a major breach of privacy — for example, sending a private message to the wrong person in the contact list (or even to a random number).
Remote control
Voice control also makes it easier to inject all kinds of malware and spyware when victims unwittingly add new “skills” to the assistant through “trigger” words. What’s worse, so-called “surfing attacks” that infiltrate virtual assistants through inaudible ultrasonic waves can exploit many smartphone features not only without touching the device directly, but without any audible voice at all!
Using a tapping device, a signal-processing module, and an ultrasonic transducer, a surfing attack propagates voice-command signals into your device through mechanical coupling (for example, via the surface the device is resting on). The assistant “thinks” you are issuing a command and proceeds to, for example, download malware or send spam messages.
Lack of additional authentication
Most virtual assistants lack strong user authentication systems, which is why it is often so easy for hackers to exploit voice commands.
Additionally, many smart devices or programs don’t use any end-to-end encryption, which makes the personal data of their users vulnerable to all kinds of third parties.
How can you protect yourself when using a virtual assistant?
Here are some tips that you can use to secure your virtual assistant:
Turn off your microphone
Remember: your assistant records everything you say when it “hears” (or “thinks” that it “hears”) the trigger words. To make the app stop listening, you should mute your voice assistant when you don’t need it.
You might also be able to activate alerts that tell you when the assistant is actively listening.
Delete redundant commands and information
Clear out the commands you don’t need, as well as any sensitive data you don’t want lingering around. Your credit card details, passwords, and other credentials, for example, are not things you want your assistant to have access to.
You should also consider disconnecting features that link the assistant to your contact list or your calendar — the places hackers often try to access to get at your data.
Set up voice recognition
A good way to keep hackers out of your virtual assistant is to configure voice recognition so that it only reacts to your voice and no one else’s.
Have an additional layer of security
To prevent remote intrusions that don’t use an actual voice (like surfing attacks, or cases where hackers have somehow obtained your password), set up two-factor authentication.
With two-factor authentication, you need both your password and an additional piece of information, such as a one-time code sent to your phone or generated by an authenticator app or hardware token.
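If you are curious what that extra code actually is, here is a minimal sketch of how an authenticator app typically derives a time-based one-time password (TOTP, as described in RFC 6238). The secret shown is a made-up example for illustration; real secrets are issued by the service when you enroll, and each app or assistant platform has its own setup flow.

```python
# Illustrative sketch: deriving a time-based one-time password (TOTP).
# The secret below is a hypothetical example, not a real credential.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current one-time code for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    # Hypothetical secret, as it might appear during QR-code enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))
```

The service you log in to computes the same code on its side and compares it with yours, so a stolen password alone is not enough to get in.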
Conclusion
While virtual assistants are very convenient and make some of our day-to-day tasks easier, they also put your devices at real risk of falling victim to hackers eager to exploit their vulnerabilities.
So, if you use Google Assistant, Siri, Alexa, or any other virtual assistant on your phone (or other devices), be careful and mind what information you share.
If this topic interests you, check out our article on preparing for the Internet of Things as well as how you can avoid letting smart devices outsmart you.