‘Loose lips sink ships’: it was a ubiquitous adage in the West during World War II, an essential caution blared from propaganda posters to remind people that careless talk, from servicemen and citizens alike, could cost lives. It’s an idiom we’d all do well to remember.

In October, the UK’s National Cyber Security Centre (NCSC) celebrated its first anniversary of operations. The GCHQ subdivision, a taskforce drawing together previously unconnected parts of government, MI5 and GCHQ, was formed in response to what Ian Levy, the technical director of the NCSC, calls “the real and growing” threat of data leaks. While some of these leaks are intentional and malicious – engineered by hackers and extortionists – accidental leaks appear to be on the rise too.

In the last few months alone, a number of high-profile incidents have taken place: a vast amount of data belonging to Viacom and Verizon customers was left exposed online in a misconfigured, publicly accessible Amazon S3 bucket; insurer Aviva inadvertently disclosed sensitive policyholder insurance details by sending documents to the wrong person, resulting in a $6,000 fine; and swathes of American voters were compromised when the marketing company employed by the Republican National Committee accidentally left the sensitive personal details – including home addresses, birthdates, phone numbers and political views – of almost 62% of the US population exposed, in what has been reported as the largest breach of electoral data in the US to date.

The causes of accidental cybersecurity leaks

So how do these potentially catastrophic accidental leaks happen? There are several scenarios, says Stephen Burke, founder and CEO at Cyber Risk Aware.

“A website or an internal application may have a technical vulnerability that has either gone undetected by the owner or been detected but, in either case, not patched. If a cyber-criminal detects the vulnerability they can then exploit it in order to gain access to the underlying data that the application has access to, such as a database – SQL injection, for example.”
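The SQL injection Burke mentions can be illustrated in a few lines. The sketch below uses an invented `users` table in an in-memory SQLite database; the vulnerable function pastes attacker input straight into the query string, while the safe version passes it as a parameter so the database treats it as data, not SQL.

```python
import sqlite3

# Hypothetical data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

def find_user_vulnerable(name):
    # UNSAFE: user input is interpolated directly into the SQL string.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterised query keeps the input out of the SQL syntax.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # the injected clause matches every row
print(find_user_safe(payload))        # no user has that literal name: []
```

The payload turns the vulnerable query’s WHERE clause into a condition that is always true, handing back the whole table; the parameterised query simply finds no match.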

Then there’s the issue of misclassified data, says Burke.

“This can lead to sensitive data being accessible by far too many people, which makes it incredibly difficult to protect and monitor where it is going. It is quite simply not practical to do so, owing to such ‘open access’. There’s also data that is retained forever and not removed when it is no longer needed, exacerbating the above issue.”
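The ‘open access’ problem Burke describes can be spotted mechanically. As a minimal sketch (POSIX file permissions standing in for whatever access-control model an organisation actually uses), the check below flags files that any user on the system can read:

```python
import os
import stat
import tempfile

def world_readable(path):
    """Return True if 'other' users can read the file -- i.e. open access."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Demo: a temporary file stands in for a sensitive document.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"policyholder details")
    path = f.name

os.chmod(path, 0o644)        # readable by everyone on the machine
print(world_readable(path))  # True
os.chmod(path, 0o600)        # restricted to the owner
print(world_readable(path))  # False
os.remove(path)
```

A periodic sweep of this kind, paired with a retention policy that deletes data when it is no longer needed, addresses both halves of the problem Burke raises.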

A lack of proper training can also be a weak spot in a business or organisation’s armour.

“This happens when staff do not know how to protect data when saving it on their network or sharing it externally, either by email or on removable media such as USB keys,” says Burke. “This leaves the data exposed to being lost/intercepted if transferred unencrypted over the internet, saved by an external recipient who has weak security or if a USB key is lost.”
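Interception of data ‘transferred unencrypted over the internet’ is exactly what transport-layer encryption prevents. As a small sketch in Python’s standard `ssl` module, the default TLS context already refuses unverified servers, which is the baseline staff-facing tools should enforce:

```python
import ssl

# A default context both encrypts traffic and verifies who it is sent to,
# so data is neither readable in transit nor delivered to an impostor.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate is checked
print(ctx.check_hostname)                    # True: hostname must match it

# Optionally refuse legacy protocol versions as well.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

A context like this would then be passed to whatever socket or HTTP client carries the transfer; the point is that the safe configuration is the default, not something staff must remember to switch on.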

“All of us, including security experts, are prone to cons and trickery when we’re facing an intelligent, human adversary.”

In all the above scenarios, it’s fair to say that it wasn’t the tech that failed, but rather the humans in charge of that technology. As cybersecurity professionals have reiterated for some time, it's people – rather than systems – who are likely to prove the weakest link in data protection.

Sam Curry, CSO at Cybereason, is inclined to agree. “It’s a simple statement of fact. All of us, including security experts, are prone to cons and trickery when we’re facing an intelligent, human adversary. The weakest link in most physical security, from banks to battlefields, is also humans. Ultimately, cyber conflict is human conflict fought with new tools.” 

Levy, however, appeared to argue otherwise at a recent Symantec conference, according to The Guardian.

“Cybersecurity professionals have spent the last 25 years saying people are the weakest link. That’s stupid!” he said. “They cannot possibly be the weakest link – they are the people that create the value at these organisations. What that tells me is that the systems we’ve built, as technical systems, are not built for people. Techies build systems for techies, they don’t build technical systems for normal people.”

However, Burke disputes this.

“Mr Levy has missed an opportunity to put into the correct context what cyber security professionals have been saying, which is not stupid but the harsh reality,” he argues.

“People are the greatest asset in any organisation, they create the value; that has never been in doubt. However, cyber criminals are actively targeting people not systems because they are the weakest link. That is fact. In over 95% of security incidents, human error was the root cause, with phishing emails being the leading cause. That is the context to be considered.”

Plugging organisational leaks

How do we address these leaks? Are cryptographic services and products our safest bet? 

“They definitely help in making systems and data more secure,” says Burke. “But it is very time-consuming and costly, both upfront and on a continuous basis.”

There are also weak spots, he says. "Why go through the cost and labour of encryption, at rest and in transit, if trusted third parties aren’t doing the same?"

There’s also the on-going issue of trying to retrofit encryption into older systems. “Often, the old kit/technical stack does not support encryption, which leads to gaps.”

The NCSC and others acknowledge that not all attacks are preventable, advocating harm reduction and mitigation alongside pre-emptive defences.

“The more that tech can automate and detect the better, but no technical system will ever give you complete protection.”

What would the former typically involve? “The key pieces to reducing harm and mitigation are acknowledging that there are risks to your business that you need to address,” says Burke.

“That way, you can determine what the impacts could be and plan to reduce the harm by employing security controls that might mitigate those risks. This is what an Information Security Management System (ISMS) entails. 

“It sounds easy but it takes time. Then you need to put in place and test your incident response plans. You only have to look at Equifax and TalkTalk for an example of how not to do incident response. Work on the basis of planning for worst case scenarios and ‘when’ rather than ‘if’ they will occur, and you will be in a very strong position.

“People need help in being made more aware so they make the right choices when it comes to protecting data, their credentials, following best practices such as patching systems and what to do when it comes to email security to spot suspicious emails,” he continues.

“The more that tech can automate and detect the better, but no technical system will ever give you complete protection. Therefore, if we can help staff we will create a human firewall and have the greatest level of security using a combination of tech and people defences.”
