There’s a perception that cyber-security is an IT problem alone and that the solution is purely technical. It’s not: it’s a human behaviour problem. I’ve just completed a master’s thesis on the relationship between people’s awareness of cyber-security policy and whether or not they comply with it. The disturbing thing is that there is no correlation. It’s human nature to believe that we’ll never experience these threats, a tendency psychologists call the “optimism bias”. However much you may know about cyber-security, you probably put yourself and your organisation at risk every day by doing things that run counter to its policy, even if they are socially or culturally acceptable.
The cyber-security threat is increasing exponentially, and it is becoming cyber-physical: a cyber attack can have a direct consequence in the real world. As smart buildings, smart cities and the internet of things become a reality, every object in our homes, streets and cities will be network-enabled so that it can communicate over the internet. That means it could potentially be taken over by hackers and used against us.
One of the biggest threats is the creation of a botnet, a robot network made up of many devices. Once the virus gets onto one device, it replicates itself to infect everything that connects. It lies dormant until, at some point in the future, the person who controls the botnet takes control of all those devices. A botnet in a road traffic system or the Uber app could turn all the street lights red or tell every car to go to the same address and lock down a whole area of a city. Hackers could gridlock an entire neighbourhood, blocking access to a hospital and preventing first responders from reaching emergencies, resulting in loss of life. And then the controller of the botnet can hold us to ransom. A significant percentage of our computers are likely to be infected already, but we will only know when they are activated.
People don’t take cyber-security seriously yet because there has not been a major event. There will be: a “cyber 9/11” is all but inevitable. If somebody turned the internet off for a week — and it’s possible — people might take more notice. But my research showed that fear doesn’t work as a deterrent. Even when we fully understand the risks, it doesn’t change our behaviour. We still download files and apps without knowing where they come from, and plug in USB sticks that haven’t been virus scanned. Everyone knows that their company has a cyber-security policy, but very few employees are aware that they’ve signed it, let alone understand it. They very rarely follow it. But then most companies don’t audit the policy or keep a record of how many violations there have been.
A simple example of bad practice is storing personal files on a work laptop. If you store a lot of information about yourself in one place, a criminal can build a profile around you. This will greatly increase the chances that they can crack your passwords or pass security checks to access company networks and sensitive information.
Cyber-security is linked to physical and operational security, so we have to look at it holistically. One of the easiest ways to hack a company is to go into the office and insert a USB drive containing malicious code into a network-connected machine. But it’s not just about physical protection.
A significant threat today is “social engineering”: psychologically manipulating people so that they give up confidential information. Social engineering is far more successful when the aggressor holds information about you. Somebody might call you claiming to be from your bank and ask for your security information. Or they might call you at work and say they’re from the IT department, and ask you to install an update from an email they’ve sent. They can find your phone number and the real name of someone in IT on the internet, and they can link their story to corporate events publicised on LinkedIn or Twitter to develop plausible and convincing cover stories. As soon as you click on the link, that’s it. The botnet virus is in and it can spread throughout the company network.
Photos on social media could be used in the same way, and once information is out there, it can never be guaranteed to be fully taken down. Photography inside offices is now being reviewed and often banned. You never think about what’s in the background, but the quality of the cameras on smart devices is so good now that information on screens can potentially be seen. There’s a Russian application called FindFace that can identify faces in the background of photos posted on social media. You can take a photo of a company leader from a corporate website or a site like LinkedIn and run an app like this to find every photo that’s ever been published of them. The intelligence services have been using these sorts of techniques for decades, but now it’s possible for anyone to do it.
It is scary. But this time tomorrow, I guarantee that you probably won’t be thinking about it. And that’s just human nature as well.
Peter Richards is head of security risk management at WSP
Article originally published on www.the-possible.com