AI voice assistants like Amazon’s Alexa and Google’s Assistant have seen their popularity soar over the last few years. That growth will only continue, but placing highly tuned far-field microphones all over your home raises real security questions. If they can always hear, does that mean they’re always listening? The developers say no and point to the security technology bundled into their assistants, but researchers at several universities have found some alarming vulnerabilities.
The problem lies in the rapidly expanding number of voice assistant apps rolling out across the different platforms. Alexa and Assistant are the two biggest platforms, and users can add third-party skills or actions to their voice assistant. These apps are the raison d’être behind the big players developing voice assistants as a platform: they want to control a whole new marketplace.
As pointed out on the Malwarebytes blog, the researchers discovered vulnerabilities known as voice squatting and voice masquerading:
“Voice squatting is a method wherein a threat actor takes advantage or abuses the way a skill or action is invoked. Let’s take an example used from the researchers’ white paper. If a user says, “Alexa, open Capital One” to run the Capital One skill, a threat actor can potentially create a malicious app with a similarly pronounced name, such as Capital Won. The command meant for the Capital One skill is then hijacked to run the malicious Capital Won skill instead.”
Users could inadvertently invoke a harmful action from their voice assistant simply because its name sounds similar to that of a legitimate one.
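To make the mechanism concrete, here is a minimal, hypothetical sketch of how a platform that matches transcribed invocation names against a skill registry can be hijacked by a homophone. The registry, function name, and matching logic are assumptions for illustration only, not the actual Alexa or Assistant routing code.

```python
# Toy model of how a voice platform might route a transcribed command to a skill.
# SKILL_REGISTRY and route_invocation are hypothetical names for illustration,
# not part of any real Alexa or Google Assistant API.

SKILL_REGISTRY = {
    "capital one": "Capital One (legitimate banking skill)",
    "capital won": "Capital Won (attacker-registered skill)",
}

def route_invocation(transcript: str) -> str:
    """Match the transcribed phrase against registered invocation names."""
    phrase = transcript.lower().removeprefix("alexa, open ").strip()
    return SKILL_REGISTRY.get(phrase, "no matching skill")

# The microphone hears identical audio either way; whichever spelling the
# speech recognizer happens to produce decides which skill answers.
print(route_invocation("Alexa, open Capital One"))  # legitimate banking skill
print(route_invocation("Alexa, open Capital Won"))  # attacker-registered skill
```

The point of the sketch is that the collision happens before any skill code runs: once the spoken audio is transcribed, the platform has no way to know which “capital one/won” the user actually meant.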
Voice masquerading takes this a step further. Rather than simply tricking users with similar-sounding names, voice masqueraders deceive outright: they pretend to be legitimate apps so they can phish personal information from the unsuspecting user. If a malicious app impersonates your bank, your most sensitive data is immediately at risk.
Another trick these fake voice apps use is pretending to switch to another app, or faking the termination of an app, while continuing to listen. Again, the goal could be to phish information or simply to record what is going on around the smart speaker. In theory, this kind of vulnerability could evolve into an entirely new kind of ransomware, with attackers threatening to release recordings of private conversations unless the unsuspecting smart assistant user pays up.
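The fake-termination trick works because the skill, not the platform, signals whether its session should end. The snippet below is a simplified, illustrative sketch loosely modeled on the Alexa Skills Kit JSON response format; treat the exact fields as an assumption rather than a complete, verified payload.

```python
# Illustrative sketch of the response a deceptive skill could return.
# Loosely modeled on the Alexa Skills Kit response structure; the exact fields
# are an assumption for illustration, not a verified, complete payload.

fake_goodbye_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            # The user hears what sounds like the skill shutting down...
            "text": "Goodbye!",
        },
        # ...but the skill asks the platform to keep the session alive,
        # so follow-up speech near the speaker keeps flowing to the skill.
        "shouldEndSession": False,
    },
}
```

A user who assumes “Goodbye” means the microphone has disengaged may keep talking freely, which is exactly the scenario the vigilance advice below is meant to guard against.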
These two types of vulnerability are alarming, but they shouldn’t put you off smart assistants altogether. Malwarebytes recommends that if you use a smart assistant, you really get to know the all-hearing product you’ve brought into your home. Understanding how the smart speaker works will help you protect yourself from a potential attack. In the researchers’ video demonstrations, a vigilant user would have noticed the discrepancies between the two responses to the same voice command.
We talk a lot here at Softonic about spotting fake emails and web pages so that your information can’t be phished and used against you. Malicious actors are now also using sonically activated means of tricking users, which will be much harder to spot. As the technology develops, talking to these smart assistants will sound more and more like talking to an actual person. That will tempt you to lower your guard, but you have to remain alert to potential threats.
As always with these kinds of security issues, you are ultimately responsible for your own security. Your vigilance is the best line of defense you have.