There’s a perception that cyber-security is an IT problem alone and that the solution is purely technical. It’s not: it’s a human behaviour problem. I’ve just completed a master’s thesis on the relationship between people’s awareness of cyber-security policy and whether or not they comply with it. The disturbing finding is that there is no correlation between the two. It’s human nature to believe that we’ll never experience these threats ourselves, a tendency known as “optimism bias”. However much you may know about cyber-security, you probably put yourself and your organisation at risk every day by doing things that run counter to security policy, even when they are socially or culturally acceptable.

The cyber-security threat is increasing rapidly, and it is becoming cyber-physical: a cyber attack can have direct consequences in the real world. As smart buildings, smart cities and the internet of things become a reality, every object in our homes, streets and cities will be network-enabled so that it can communicate over the internet. That means it could potentially be taken over by hackers and used against us.

One of the biggest threats is the creation of a botnet, a robot network made up of many compromised devices. Once the malware gets onto one device, it replicates itself to infect every device that connects to it, then lies dormant until, at some point in the future, whoever controls the botnet activates all those devices at once. A botnet in a road traffic system or a ride-hailing app such as Uber could turn all the street lights red, or direct every car to the same address and lock down a whole area of a city. Hackers could cut off access to a hospital by gridlocking the surrounding neighbourhood, costing lives and preventing first responders from reaching emergencies. The controller of the botnet could then hold us to ransom. A significant proportion of our computers are likely to be infected already, but we will only know when they are activated.