Biometrics
The Human Voice:
How to Make It the Holy Grail of Security
Voice technology is seeing rapid adoption among both consumers and businesses, with significant implications for security. Nikolay D Gaubitch, research director EMEA at Pindrop, considers the risks and possibilities of voice security
The last few years have seen biometric security measures enter the daily lives of most consumers. Where they once had to remember an array of passwords, PINs and security questions to access their private information, now they simply press a finger to a scanner or look at a camera and access is granted. The iPhone X’s Face ID launched with huge fanfare in 2017, establishing a new generation of security as an alternative to the fingerprint scanners found on 67% of phones shipped in 2018.
But voice looks set to become the de facto mode of security in the future, with smart speakers now owned by 10% of the UK population and influencing ever more of our daily lives. There’s growing adoption in industry too – 85% of businesses plan to use voice technology to engage with customers over the next 12 months.
This is making voice data widely available, reflected in the fact that Interpol has developed a voice identification database that combines voice samples from 192 law enforcement agencies from around the world, allowing it to match samples with known individuals.
It’s essentially a collation of voice information that has been building for some time, with national programmes running for years; the effort has now gone international. Given that Interpol already maintains well-established and recognised databases of iris data and fingerprints, its move into voice is timely – it signals that we’re entering a new era of authentication.
The implications of voice security for businesses
What does all this mean for businesses that handle private customer information? The assumption would be that it makes things easier: passwords no longer have to be written down, and there are no authentication questions whose answers are easily remembered (and therefore easily cracked).
In short, it improves the customer experience by smoothing access to their own private information.
But with this ease of access comes a new risk. Voices can be taken from a wide array of online and offline resources and used for illegal ends. YouTube videos, public events and phone calls are all ideal sources from which to begin synthesising someone else’s voice, using easily accessible software.
“85% of businesses are planning on using voice-activated technology to engage with their customers. This means that they are actually exposing themselves to a new type of fraudulent activity.”
Products such as LyreBird have made voice synthesis accessible to the masses. The readily available personal information that made passwords and PINs a weak link in our digital security – indeed, nearly two thirds of consumers admit to posting such details online – is now undermining security in a voice-activated world as well.
This poses a significant problem for businesses whose duty it is to protect customer information. Research released in June by Pindrop found that 85% of businesses are planning on using voice-activated technology to engage with their customers. In doing so, they are exposing themselves to a new type of fraudulent activity.
This is made worse by the fact that, in the same research, 80% of IT directors admitted they don’t know how to effectively protect the data generated by voice technology. Fraud already costs businesses $14bn a year, so a new threat vector opened by voice biometrics could, if not taken seriously, hit industry hard.
Technology as a solution
Yet such technology can be the solution as well as the problem. As Interpol’s identification database highlights, voice data can be used to accurately identify individuals – a capability that can be applied to keeping private information secure, and an area in which Pindrop has been developing a product suite since its foundation in 2011.
This is done by utilising the full range of data contained within audio recordings of a human voice. Such data is not just a simple replication of a voice; it carries information about the natural behaviour of our voices, as well as metadata such as the location of the user attempting to gain access.

Security that draws on all of this available information, rather than simply checking the surface-layer voice for a match – a method that is easily conned – will make voice a far more secure biometric channel than it currently is.
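The multi-signal approach described above can be sketched as follows. This is a minimal illustration only: the signals, weights and threshold are hypothetical assumptions for the sake of example, not Pindrop’s actual method.

```python
# Hypothetical sketch: combine a voiceprint match with liveness and
# metadata signals, rather than trusting the surface-layer voice match
# alone. All weights and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VoiceAttempt:
    voiceprint_score: float  # 0..1 similarity to the enrolled voiceprint
    liveness_score: float    # 0..1 likelihood audio is live, not synthesised/replayed
    location_match: bool     # does call metadata match the user's known locations?
    device_match: bool       # is the call from a previously seen device/number?

def authenticate(attempt: VoiceAttempt, threshold: float = 0.75) -> bool:
    """Weighted combination of signals: a strong voiceprint match alone
    is not enough if liveness or metadata signals look suspicious."""
    score = (
        0.5 * attempt.voiceprint_score
        + 0.3 * attempt.liveness_score
        + 0.1 * (1.0 if attempt.location_match else 0.0)
        + 0.1 * (1.0 if attempt.device_match else 0.0)
    )
    return score >= threshold

# A convincing synthetic voice fails on liveness and metadata,
# even though its voiceprint score is high:
spoof = VoiceAttempt(voiceprint_score=0.95, liveness_score=0.2,
                     location_match=False, device_match=False)
genuine = VoiceAttempt(voiceprint_score=0.90, liveness_score=0.9,
                       location_match=True, device_match=True)
print(authenticate(spoof))    # False
print(authenticate(genuine))  # True
```

The design point is simply that the spoofed attempt scores 0.535 despite its near-perfect voiceprint match, while the genuine attempt scores 0.92 – layering signals is what defeats synthesis.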
“Within five years, 25% of businesses in the UK will be using voice-activated technology for all of their customer communications.”
Yet this technology will have to be rolled out quickly. Within five years, 25% of businesses in the UK will be using voice-activated technology for all of their customer communications. Amazon’s Alexa already has a number of banking skills on its platform and voice is becoming a key part of multi-factor authentication.
This all means that the amount of personal and business-critical information secured by voice biometrics is increasing rapidly, and the effectiveness of voice authentication needs to keep pace. If it doesn’t, it will likely take a major breach before security professionals spot the chink in their armour.
But they can get ahead of this. The technology to protect this attack vector already exists; those with responsibility simply need to implement it.
PR nightmares: Ten of the worst corporate data breaches
LinkedIn, 2012
Hackers sold name and password info for more than 117 million accounts
Target, 2013
The personal and financial information of 110 million customers was exposed
JP Morgan, 2014
One of JP Morgan Chase’s servers was compromised, resulting in fraud schemes yielding up to $100m
Home Depot, 2014
Hackers stole email and credit card data from more than 50 million customers
Sony, 2014
Emails and sensitive documents were leaked, thought to be by North Korea in retaliation for Sony’s production of a film mocking the country’s leader Kim Jong Un
Hilton Hotels, 2015
Dozens of Hilton and Starwood hotels had their payment systems compromised and hackers managed to steal customer credit card data
TalkTalk, 2015
The personal data of 156,959 customers, including names, addresses, dates of birth and phone numbers, was stolen
Tesco, 2016
Hackers made off with around $3.2m from more than 9,000 Tesco Bank accounts
Swift, 2016
Weaknesses in the Swift payment system resulted in $81m being stolen from the Bangladesh Central Bank’s account at the New York Federal Reserve
Chipotle, 2017
Phishing was used to steal the credit card information of millions of Chipotle customers, thought to be part of a wider restaurant customer scam orchestrated by an Eastern European criminal gang