Cyber security and the importance of usability

There is nothing new or unusual about the need to design usable systems. A whole industry has grown up around the business of making sure that commercial websites and apps are easy to use and deliver the behaviour, such as spending money, that the owners of those websites and apps want to see.

Usable systems generally require three things: the system has to be useful, or at least perceived as useful, by the end user; the system has to be easy for the end user to use; and the system has to be persuasive, so that the user takes the actions that the owner desires.

Is cyber security any different?

These three requirements of utility, usability and persuasiveness are seen in cyber security systems too. However, there are some differences compared with the consumer-facing world. Making sure a cyber security system succeeds is in some ways more important than making a commercial system succeed.

One issue is that the cyber security system has to work for everyone: if just one person fails to use the system properly, the whole organisation may be put at risk.

In addition, cyber security systems are like stable doors – they need to be shut before the horse bolts, as there is no use locking them after a breach has happened. If an online shop doesn’t work for some reason then the user can go back and try again, but with a cyber security system, if it doesn’t work first time then the damage may be done.

These are stringent requirements. Unfortunately the nature of cyber security means that these requirements are hard to meet:

  • Users have little motivation to comply with security requirements as keeping secure is not their main purpose; indeed security systems are part of a technical infrastructure that may have no real meaning or relevance to the end users
  • Security systems can “get in the way” of tasks and so can be thought of as a nuisance rather than a benefit
  • Security systems are often based on arbitrary and little understood rules set by other people, such as those found in security policies, rather than on the desires of the end user
  • Users may find complying with the requirements of security systems socially difficult as they may force the user to display distrust towards colleagues

These are all challenging issues, and any security system you design needs to ask the very minimum of effort from the user if it is to overcome them.

Unfortunately many cyber security systems demand a degree of technical knowledge. For instance they may use jargon: “Do you want to encrypt this document?” will have an obvious meaning to anyone working in IT but may mean nothing to some users.

Furthermore, some security requirements may of necessity impose a degree of “cognitive overload”: the requirement to remember a strong password (perhaps 12 random characters) is an example. Again this will cause additional difficulty.
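
Consider what such a policy actually asks of the user. Here is a minimal Python sketch (purely illustrative, not any particular organisation’s policy) of the kind of password being demanded:

    import secrets
    import string

    def random_password(length: int = 12) -> str:
        """Generate the sort of password a strict policy asks users to memorise."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())  # e.g. f#9Lq@2Zv!xT – trivial to generate, hard to remember

Twelve characters drawn from roughly 94 printable symbols give around 78 bits of entropy – excellent for security, but well beyond what most people can memorise without writing it down.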

Users are not naturally motivated towards cyber security systems. And they may find them hard to use. So how can success – universal and efficient use of systems – be achieved?

Delivering success

Start with the end user. Use a combination of interviews (including the standard “think aloud” protocol used by many UX practitioners), observation and expert evaluation to identify where the obstacles to successful use of the system lie. Obviously the usual rules of good usability will apply: consistency, reduced cognitive load, feedback, and help when mistakes are made.

Learnability is also important. Accept that some form of help may be needed by the user and ensure that this is available, ideally within the system. Help files shouldn’t just tell people how to achieve something; they should also explain why it is important.

But for cyber security systems there is also a lot of work to be done around persuasion. This will involve educating the end user about the importance of the system – how it protects their organisation, and how it protects them as individuals.

It will also involve ensuring that the system is credible – that end users realise that the system does what it is supposed to do and isn’t just a tick-box exercise or something dreamed up by the geeks in IT to make everyone’s lives that little bit harder.

And it will involve demonstrating to the end user that all their colleagues are using the system – and that if they don’t use it they will be out of line with the majority.

“Usability is not enough” is a common theme in retail website design. It is even more important in the design of cyber security systems.

A New Year’s resolution for CEOs

“I am going to take cyber security seriously in 2016.”

On the whole senior executives claim that they want to act in an ethical manner. And yet if they fail to embrace cyber security they are clearly lying.

Why do I say that? Because playing fast and loose with customer data wrecks lives. It is as simple as that. Lose your customers’ data and you expose them to a major risk of identity theft – and that can and does cause people massive personal problems.

The problems that David Crouse experienced in 2010 are typical. When his identity was stolen he saw $900,000 in goods and gambling being drained from his credit card account in less than 6 months. His credit score was ruined and he spent around $100,000 trying to solve the problems.

Higher interest rates and penalty fees for missed payments just made his financial situation worse. His debts resulted in his security clearance for government work being rescinded. Having lost his job, other employers wouldn’t touch him because of his debts and credit score. He felt suicidal. “It ruined me, financially and emotionally” he said.

Data breaches frequently result in identity theft. And this can have a devastating emotional impact on the victims, as it did with David Crouse. Research from the Identity Theft Resource Center indicates that 6% of victims actually feel suicidal while 31% experience overwhelming sadness.

The directors of any company whose negligence results in customers feeling suicidal cannot consider themselves to be ethical.

Unfortunately most data breaches that don’t involve the theft of credit card details are dismissed by corporations as being unimportant. And yet a credit card can be cancelled and replaced within hours. A stolen identity can take months, or longer, to repair.

And all sorts of data can be used to steal an identity. An email address and password; a home and office address; the names of family members; a holiday destination; a regular payment to a health club… Stolen medical records, which are highly effective if you want to steal an identity, will sell for around £20 per person online, while credit card details can be bought for as little as £1. Go figure, as they say in the USA.

Organisations must accept that any loss of customer data puts those customers in harm’s way. And if they want to be seen as ethical they must take reasonable steps to prevent data breaches. Until they do, the EU’s new data protection rules can’t come on-stream quickly enough for me!

The EU’s General Data Protection Regulation explained

The EU’s General Data Protection Regulation (GDPR) may not be the most exciting topic on your agenda (!) but it is important: the new rules, to be published shortly, will replace current laws on protecting personal data.

A draft was published earlier this year; it is still being discussed but it should be agreed in early 2016. It will then pass directly into law although it will be 2 years before you have to comply with it. This means you don’t have to panic until 2018. (But you should start thinking about it now.)

The rules are designed to give people control over their personal data and to simplify the regulatory environment for business.

By the way, there is also a Network and Information Security (NIS) Directive aimed at curbing cyber crime. This is different: don’t get the two confused!

The GDPR draft is still under discussion so the following information may change; however, the important points are likely to be as shown below.

  • Definitions:
    • Personal data is defined as ‘any information relating to a data subject’ (that’s you or me). The ICO defines it more helpfully as ‘any detail about a living individual that can be used on its own, or with other data, to identify them’. This can include photos, email addresses and perhaps even computer IP addresses
    • Processing data means pretty much anything – collecting it, storing it, analysing it, sharing it…
    • A data controller is the person who decides what can be done with personal data; often they are the people who have collected the data; they will appoint a data processor, sometimes in their organisation but often outside it
    • A data processor processes the data on behalf of the controller; the new rules will increase the responsibilities placed on these people
  • Geography: The regulations will apply if the data controller or the data processor or the data subject is in the EU; so it applies to, say, US companies processing UK data
  • Employment data is excluded and member states can create their own rules for this
  • Fines for non-compliance will increase. The current draft proposes a maximum of €1 million or 2% of global turnover, although there has been discussion of a higher level of €100 million or 5% of global turnover. Ouch.
  • Existing data protection principles remain and these include:
    • Data processing must be fair to the person concerned, lawful (i.e. with their consent or to fulfil a contract with them) and transparent (i.e. they are able to see its results)
    • Data processing can only be for the purpose specified when the data was collected
    • Only data relevant to the purpose specified can be collected
    • Data must be accurate and up to date as far as possible
    • Data can only be held as long as needed for the purpose specified, although if the data is needed for legal purposes it can be kept as long as any further processing is “limited” (i.e. you can’t carry on using it)
    • Data must be secure
  • Certain organisations must appoint a Data Protection Officer (DPO); these include:
    • Public bodies; organisations processing data from more than 5000 people; organisations employing over 250 people that process personal data; and organisations where data processing involving systematic monitoring of people is the core activity
    • DPOs will advise organisations about the rules and monitor compliance with them; they must be free to operate as they think fit (“independent”) and will need a range of skills beyond compliance monitoring: they must be able to manage IT processes (e.g. controlling access to data, retaining data), data security (including dealing with cyber-attacks) and other critical business continuity issues
    • DPOs must be offered a minimum 2 year contract term – so you can’t get rid of them if you don’t like what they are doing, unless they prove unable to perform their duties
  • When high risk activities are proposed (e.g. the processing of data that could result in financial loss, identity theft, discrimination, damage to reputation, or loss of professional confidentiality), Data Protection Impact Assessments (DPIAs) must be conducted
    • The DPIA must contain a description of the data and the processes used, an evaluation of any risks to the data, and a description of how you propose to manage those risks
    • The local data protection authority (DPA), which in the UK would be the Information Commissioner’s Office, must authorise (or forbid) high risk data processing after reviewing the DPIA (this requirement is contentious and may be removed)
  • Data collection requires consent; consent must be opt in (“clear affirmative action”) which means you can’t have a ready-ticked opt-in box; and the opt in must be specific (not part of a wider agreement)
    • Processing data about minors (up to 13 years old) can only happen with a parent’s consent
    • Sensitive data (e.g. about religion, ethnicity, etc.) cannot generally be processed
    • Consent for the use of data for “direct marketing” must be explicitly obtained; this doesn’t appear to rule out highly targeted mass marketing where people are not addressed by name – but see the next point
    • Automated profiling that could have some form of “legal effect” and which is based on (or which will predict) personal characteristics such as performance at work, economic situation, location, health, personal preferences, reliability or behaviour is forbidden unless specifically requested by the person concerned
  • Data breaches must be reported to the Data Protection Authority (the ICO) and also to the victims (unless the data was encrypted)
  • Data transfer out of the EU is only allowed under certain conditions. This means that the use of cloud computing services (such as Google Docs, Dropbox and Gmail) is likely to be problematic if personal data is involved as the data may not be secure, may not be held in the EU, or may be shared by the cloud service owner; remember this applies to “informal” cloud computing use by employees – whether or not you know about it

There are a number of things that organisations need to start thinking about in order to ensure they are compliant. Talk to a lawyer when the final wording is approved but in the meantime consider the following:

  • Identify any personal data that you hold
  • Think about how you can timestamp and put time limits on holding personal data
  • If you want to hold data for analysis purposes after you have used it for its original purpose, think about how you can anonymise it, so that it remains legal to hold (“pseudonymising” data, e.g. by hiding personal details so that it can be “re-personalised” at a later date, won’t help – see the short sketch after this list)
  • Develop a system that enables you to retrieve any personal data if it is requested by the person concerned
  • Formalise your data protection policies and processes – and keep records
  • Think about how you are going to manage cloud computing, and also the use of home computers, smartphones and tablets by employees: if you don’t do this then your employees may create compliance failures for you
  • Be aware of the potential of Big Data analysis techniques to create new personal data – even accidentally; for instance an anonymous record of a disability, or a first name linked to a postcode, could become personal data when combined with other information
  • Ensure appropriate security so that unlawful destruction or processing, such as unauthorised disclosure or access, is prevented
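
The distinction between anonymising and pseudonymising is worth making concrete. Here is a minimal Python sketch (the key and field names are invented for illustration) of why a pseudonymised record is still personal data while a truly anonymised one is not:

    import hashlib
    import hmac

    SECRET_KEY = b"keep-this-somewhere-safe"  # illustrative key

    def pseudonymise(email: str) -> str:
        """Replace an identifier with a keyed hash. Anyone holding
        SECRET_KEY can re-create the mapping, so the record can be
        "re-personalised" – it still counts as personal data."""
        return hmac.new(SECRET_KEY, email.encode(), hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "basket_value": 42.50}

    pseudonymous = {"id": pseudonymise(record["email"]),
                    "basket_value": record["basket_value"]}

    # True anonymisation removes the identifier entirely (along with
    # anything that could be combined with other data to restore it):
    anonymous = {"basket_value": record["basket_value"]}

Simply hashing identifiers is not enough if the key, or the original data, is retained anywhere in the organisation.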

Take the protection of personal data privacy seriously. Compliance with the GDPR shouldn’t be a tick-box exercise. Privacy needs to be designed into your business processes for legal and ethical reasons.

Business processes and cyber risk

Cyber risk doesn’t just involve malicious techies hacking into corporate accounts. It can also involve risk to everyday business processes: “process cyber risk”. Unfortunately, because the IT department is kept busy defending the corporate network from the hackers, these process risks are often left unmanaged.

What do I mean by process cyber risk? Quite simply, a risk of loss or damage to an organisation caused by a weak business process combined with the use of computer technology. These weak processes are often found within finance departments, but you will also find them in HR, in marketing and across organisations.

Process risk and identity

Many business processes rely on a particular document being signed off by an authorised individual. As many processes migrate online, the assumption is that the sign-off process can also be undertaken online. Sign on as an individual and perhaps you have authorisation to access a particular document or process.

As most people have to log in to company systems with a name and password, this shouldn’t be a problem. Except that passwords get shared. Busy people often share log-in details with juniors, allowing juniors to access systems and documents that they are not authorised to see.

Any authorisation process that simply relies on someone logging in with a name and password is weak because it is easily subverted. Issuing “dongles” as a second authentication factor isn’t much better, as these can get shared too (unless they are integral to a company identity card). Robust processes, where sensitive data or decisions are concerned, should assume that a password has been shared (or stolen) and require additional security such as a second pair of eyes.
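
As an illustration, here is a sketch of what such a “second pair of eyes” rule might look like in code (the function and threshold are hypothetical, not a real system):

    def authorise_payment(amount: float, approvers: set,
                          high_value_threshold: float = 10_000) -> bool:
        """Apply a simple two-person rule: a single login, even a
        genuine one, is never enough to release a high-value payment."""
        if amount < high_value_threshold:
            return len(approvers) >= 1
        return len(approvers) >= 2  # second pair of eyes for large sums

    authorise_payment(25_000, {"alice"})         # False – one approver is not enough
    authorise_payment(25_000, {"alice", "bob"})  # True

The point is that the control no longer depends on any one credential staying secret.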

Process risks and finance departments

One big risk for finance departments is invoice fraud. This can happen in several ways. A common way is for thieves to gather information about a company, perhaps the news that it is investing in new technology. They will then use this information plus other easily obtainable assets such as company logos and the names of senior people in an organisation to put together a scam.

This might involve an email “from” a director of the organisation to a mid ranking person in the finance department asking for an invoice to be paid promptly; the invoice, which is of course a fake, is attached to the email.

In other cases the invoice is genuine. For instance thieves may pose as a supplier and ask for details of any unpaid invoices. They then resubmit a genuine invoice – but with the bank payment details changed.

All too often the unwitting finance executive passes the invoice for payment. Once the money has reached the thief’s bank account it is quickly transferred to another account making it unrecoverable.

This type of fraud is big business. Earlier this year Ubiquiti Networks disclosed that thieves stole $46.7 million from it in this way. Meanwhile in the UK, the police’s Action Fraud service received around 750 reports of invoice fraud in the first half of 2015. And of course many similar frauds go unreported – or undetected.

What can you do to protect against this? Start by educating staff about the nature of the threat – all staff, not just those in the finance department. Ensure that the details of all invoices are scrutinised carefully: Is the logo up to date? Is the email address correct (perhaps it is a .org instead of a .com)? Are the bank payment details the same as usual (if they have changed, telephone someone you know at the supplier to ask for confirmation)? And take extra care with larger invoices, for instance requiring them to be checked by two separate people. Some of these checks can even be automated, as the sketch below suggests.
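
A minimal Python sketch (the supplier name and domains are made up) that flags a sender domain which doesn’t exactly match the one on file:

    APPROVED_DOMAINS = {"Acme Supplies": "acmesupplies.com"}  # illustrative

    def sender_domain_matches(supplier: str, from_address: str) -> bool:
        """Flag invoices whose sender domain differs from the one on
        file, catching lookalikes such as acmesupplies.org."""
        domain = from_address.rsplit("@", 1)[-1].lower()
        return domain == APPROVED_DOMAINS.get(supplier)

    sender_domain_matches("Acme Supplies", "billing@acmesupplies.com")  # True
    sender_domain_matches("Acme Supplies", "billing@acmesupplies.org")  # False – query it

A check like this won’t catch every scam, but it removes the burden of spotting one-character differences by eye.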

There are other cyber risks within finance processes – and often these are internal risks, initiated by employees. Examples include purchase fraud, where personal items are bought using company money or where required items are bought at inflated prices, with the purchaser then getting a kickback at a later date. Again fake emails can be used to support these purchases. And again simple processes can disarm the threat.

Process risks within HR

Within HR there are numerous process risks. Let’s start with recruitment. The risks here can involve social media profiles designed to misinform, perhaps with fake endorsements or untrue job details. Looking at a LinkedIn profile is an easy way to identify potential candidates – but it is important to realise that the profile you see may well be substantially embroidered.

Another short cut, especially when looking for “knowledge leaders”, is to see what sort of “rating” candidates have on sites like Klout.com. Superficially this is fine. However, it is essential to be aware of how people are rated by the site (for instance what data is used) before making a judgement using this type of data as you may well be given an untrue perspective.

Another risk of using social media to identify candidates is that you open yourself to accusations of discrimination. An attractive CV may contain no information about age, ethnicity or sexual preference. Social media will. You really don’t want to know this sort of information, but once you know something you can’t “unknow” it: and this can open you up to accusations of bias. It isn’t unknown for companies to commission an edited summary of a candidate’s social media profiles, with anything that could lead to accusations of discrimination taken out, in order to de-risk the profile before it is given to the recruiter.

In fact HR is full of cyber risk, especially where social media is concerned. There may be problems with the posts employees make on social media. There may be issues around bullying or discrimination at work. And maintaining a positive “employer brand” can be very difficult if an ex-employee starts to deride their old employer online on sites such as Glassdoor.

Process risk and marketing

Process risk is also very much at home in marketing. Again social media is one of the culprits. Not everyone, even in marketing, is a social media addict. Senior marketers frequently hand over their brands’ social media profiles to junior marketers, or even interns, because “they have a Facebook page”.

It’s a mistake. Not only is the output likely to be poor, the junior marketer may well (they frequently do) break advertising regulations (for instance by glamorising alcohol) or even fair trading laws (e.g. by including “spontaneous” endorsements from paid celebrities).

This shouldn’t be difficult: there is no reason that the processes that govern advertising in general can’t be applied to social media.

Procurement and cyber risk

Finally there is procurement – and the process of ensuring that third party suppliers don’t represent a cyber risk. This is a huge area of risk and one that is not always well appreciated.

The issue is not just that the third party may be insecure (the massive hack of US retailer Target, for instance, came about via an insecure supplier) and that it is hard to know whether they are secure or not. It is also that people working for a supplier who have been given access may then leave the supplier without you being told: as a result they retain access to your information, perhaps after they have joined a competitor. In addition, suppliers may well have their own reasons for being a risk – they are in dispute with you, they are in financial difficulty, they have been taken over by a competitor…

Business processes frequently have the potential to be undermined by online technologies. It takes imagination to identify where the threats lie. However once they have been identified, actions to reduce the effect of the threat are often very simple.

Persuasion and cyber security

You can’t rely on technology to solve your cyber security issues.

Cyber security is largely a “people” issue: cyber breaches are generally caused by people behaving in an unsafe manner, whether they know they are doing so or not. The solution is to persuade them to behave safely.

But how can you persuade people to do this?

Effective cyber communication

The first step is developing an appropriate communication programme. Of course you already know that this shouldn’t be a “death by PowerPoint” style lecture.

You are going to make your communication engaging and interactive with lots of colour and interesting imagery. You are going to start training sessions off with uplifting material that gets people into a good mood – games, stories, or other activities designed to generate a feeling of wellbeing.

But what about the content of your communications? How should you structure the messages that you need to get across? Here are a few dos and don’ts:

  • Do describe security problems in a clear-cut and simple way so that people can understand everything you are saying. Don’t use jargon and make it all sound frightfully difficult because you want to look clever
  • Do give people hope – while 100% security is impossible, you should emphasise that there is a lot that can be done to minimise threats and the consequences of a cyber incident. Don’t use “fear, uncertainty, doubt” to persuade people of the importance of the risk: they will just bury their heads in the sand.
  • Do make the risk relevant to the individuals you are talking to – describe personal risks, to their reputation or their jobs. Don’t describe it as a risk to the faceless organisation they work for.
  • Do stress that the risks are immediate ones that are all around you as you speak. Use examples of things that have happened, ideally to your organisation or a competitor. Don’t describe potential incidents that might happen sometime in the future.

Marketing techniques

There are also a number of marketing techniques you may be able to bring into your communications:

  • Use the power of FREE when describing steps that people can take to avoid risk; this could be FREE training to avoid phishing, or some FREE software people can download to use at work and at home
  • Use the power of loss. When faced with a potential loss, people are risk averse. So emphasise what people might lose if they behave unsafely, not what they might gain if they behave safely. The loss needs to be personal, for instance it could relate to losing money when shopping online
  • Use the power of authority to persuade people. If you can ensure that your organisation’s leaders will act – and you can show them acting – in a cyber safe way then you have a good chance that people will follow their lead.
  • Use the power of peer pressure. People will often follow the lead of the people around them as they don’t want to seem out of step with the majority’s way of behaving. So if you can persuade some people to endorse safe behaviour during a training session, others will inevitably follow them. Having a few “stooges” as part of your audience may help!
  • Use the power of discovery. Guide people towards uncovering solutions to cyber risks, rather than telling them what to do. If they are responsible for defining solutions they will value those solutions. If you simply give them someone else’s solution it may well be discounted as “Not invented here”

You are trying to change people’s behaviour and it is important that you succeed. Think about what will persuade people. And don’t be afraid to use a few cheesy marketing techniques along the way.

Tackling invoice fraud

Invoice fraud is on the rise. It may involve a spoof email, apparently from the CEO of your organisation, “authorising” the payment of a fake invoice. In other cases the email seems to come from a trusted supplier.

Two of my friends who run SMEs have recently been exposed to invoice fraud, in one case for around £70,000. In both cases the fraud was picked up before payment was made.

But in a lot of cases it isn’t. For instance last year a small Norfolk manufacturer was scammed out of £350,000 by a fraudulent email. And because of the nature of online banking, once the money has left your account it is very hard to retrieve.

There are ways of reducing the risk of fraudulent emails. For large organisations an anomalytics service might be the answer. These services build up a picture of normal email traffic in order to identify unusual emails that can be subjected to further examination.

Another tool is the DMARC email standard. This lets receiving mail servers check that messages claiming to come from your domain really are authorised by you, making it much harder to send emails that are apparently from you. It doesn’t of course stop people from breaking into your email account and actually sending emails “from” you. Nor does it prevent all phishing attacks. But it is a useful tool nonetheless as it makes life harder for fraudsters.
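
For illustration, a DMARC policy is published as a TXT record in your domain’s DNS; a typical record looks something like this (the domain and report address are placeholders):

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

The p= tag tells receiving servers what to do with messages that fail authentication checks (none, quarantine or reject), and rua= is the address to which aggregate reports about attempted abuse are sent.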

But the real way to address invoice fraud is through implementing stronger business processes. These will include the following (a simple sketch after the list shows how some of the checks might be automated):

  • Ensuring only properly trained and authorised people are able to make online payments
  • Creating a “whitelist” of approved suppliers and their agreed payment instructions (bank details etc)
  • Double checking any changes in payment instructions from suppliers on the whitelist with people you know are authorised to approve those changes
  • Checking any payment requests made by managers in your company with that person on the phone or face to face (and not as an email reply)
  • Ensuring that appropriate documentation is present and checked before any payment is authorised: this might include an invoice, a purchase order, and a “goods received” slip or equivalent
  • Creating a process where additional authorisation is needed to sign off payments
    • Over a certain amount
    • To new suppliers who are not yet on an approved “whitelist”
    • To existing whitelist suppliers who have provided new bank details
    • Where a payment is requested to a country outside normal trading patterns.
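
A minimal Python sketch of those last few checks (the supplier data, threshold and country are invented for illustration):

    WHITELIST = {"Acme Supplies": "GB29NWBK60161331926819"}  # supplier -> agreed account

    def needs_extra_authorisation(supplier: str, account: str,
                                  amount: float, country: str) -> bool:
        """Return True when a payment must go for additional sign-off."""
        if supplier not in WHITELIST:
            return True    # new supplier, not yet approved
        if account != WHITELIST[supplier]:
            return True    # bank details have changed – confirm by phone
        if amount > 50_000:
            return True    # over the agreed amount threshold
        if country != "GB":
            return True    # outside normal trading patterns
        return False

None of this is sophisticated; the value lies in making the checks routine rather than leaving them to individual vigilance.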

Finally, use your common sense and raise a query when something seems odd. (This of course requires a culture that is sympathetic to juniors raising queries. If you don’t have this sort of culture, or if senior staff bully their juniors, this type of fraud becomes much more likely.)

Process won’t be enough on its own though. Training the finance team is also important so they are aware of the nature of invoice fraud. This training should include advice about how to take extra care with urgent or aggressive requests for payment.

Keep cyber safe!

Cyber security and peer pressure

Cyber security training – most IT security people would say that’s the answer to solving the insider threat problem.

But is it? Giving people information is certainly part of the solution. But training rarely changes behaviour.

Peer pressure does.

And that is why socialising safe cyber behaviour is an essential strategy if you want to ensure cyber security.

What does socialising cyber safety mean?

Socialising safe cyber behaviour involves changing an organisation’s culture so that unsafe cyber behaviour becomes as socially unacceptable as bullying and sexual harassment. It uses peer pressure or “social influence” to steer people into an acceptable way of behaving.

One of the reasons that peer pressure works is because most people like being in groups. Being in a group is safer than being isolated and, because humans evolved in a very dangerous world full of nasty predators (including other humans), we are hard wired to want this.

If we want to belong to a particular group it helps a lot to behave like the other people in the group – wearing the same types of clothing, supporting the same football team, looking, sounding, thinking and behaving like them.

In other words, people in a group often think in similar ways. Promote the right way of thinking and you can use people’s need to belong to a group to enhance cyber security and encourage cyber safe behaviour.

Following the herd

The most powerful way of doing this is to make safe behaviour seem to be what everyone else does – the “social norm”. There is lots of evidence that telling people about social norms changes people’s behaviour.

A good example of a company that successfully used social norms to change people’s behaviour is Opower, a US company that works with energy utilities. Opower wanted to drive energy use down. It tried several messages to do this, such as:

  • You can save $54 this month
  • You can save the planet
  • You can be a good citizen

None of these were particularly successful. So they tried a fourth message along the lines of:

  • Your neighbours are doing better than you

The results were amazing: more than 75% of the people who received this message started to save energy. In the same way, a message along the lines of “83% of your colleagues never click on links in emails from unknown sources” is likely to influence people.

Mirroring

Making safe behaviour “visible” is another effective technique. People have a tendency to follow other people, as we have seen. So if one person is seen as behaving in a particular way, then others are likely to follow.

You are in the canteen when the person behind you at the counter sees what you have on your tray and says “That looks nice. I’ll have one of those.” Why do they do it? It might be because they didn’t notice it before. But it is just as likely to be because they want to make a good impression on you by imitating you.

You will see this with body language too. When you are sitting opposite someone try changing your posture: cross your legs, fold your arms, rub your chin. The chances are the person opposite will “mirror” at least some of your changes in posture.

The same with cyber security. If one person acts in a particular way, then people near them may well imitate them. For instance, if you can get someone to claim they always use a complicated password to log on or say “No way I’d log on to our corporate network using Costalotta’s free wi-fi”, then others may well follow their lead.

Leading

People follow authority figures. So you want your leaders to act in a cyber safe manner.

There are different sorts of leaders in any group. I’d always suggest that you try to get your organisation’s formal leaders to act in a visibly cyber safe way (or at least avoid obviously unsafe behaviour).

But the CEO might not have much credibility when it comes to technology. One of the new interns may be far more credible, and influential. Or perhaps there are some popular social leaders in your organisation: these too will have lots of leadership power. Empowering your leaders to act as cyber safety role models will pay dividends.

Incentivising the group

Using group rewards that disappear if anyone steps out of line is an interesting idea. With this technique there is a reward when everyone behaves well, but everyone loses it if even one person behaves badly. And as most people dislike being unpopular, they do their utmost to ensure that others don’t lose out.

Creative agency 23Red used this technique to get people to complete their time sheets.

Of course this technique only works if everyone belongs to the same social group. If there is a clique of people, perhaps in sales, who don’t interact much with the rest of the organisation, then they may well not feel obliged to behave well for the sake of their colleagues.

Social shaming

This is a little controversial, although I have seen it used successfully as a way of keeping the size of email directories down. With this technique bad behaviour is reported publicly – the digital equivalent of being put in the stocks. People may not throw cabbages at you, but it is still embarrassing to be called out in front of your peers for antisocial behaviour.

Peer groups

It may be practical to start the socialising process off using one or more small groups rather than trying to influence the whole of an organisation. Socialising behaviour in a small group has obvious limitations, but get some people to engage with cyber safety and their behaviour will soon be copied by others.

Cyber security expert Richard Knowlton suggested to me that telling stories in groups is a great way of generating understanding and acceptance of the threats that cyber brings. “My email was hacked…”, “I definitely got a phishing message on LinkedIn the other day…”, “One of my friends was emailed a fraudulent invoice the other day…” Share stories and you bring the problem to life and make it seem relevant for the people in your group.

You can even think about generating solutions as a group. Thinking tends to converge when people are in small groups, especially when people are faced with a hard problem. If you set your group a cyber threat problem they will probably come up with a common view of how to solve it.

Of course you want that view to be effective and practical, so you may want one or two “stooges” in your group who have been briefed about good solutions and who can lead the conversation in the right direction. But once you have arrived at the solution, the whole group are likely to agree with it as they have been involved in uncovering it.

Rewarding good behaviour

Providing public rewards and status can also generate social pressure. If people who have behaved safely – perhaps challenging a stranger who isn’t wearing a visitor’s badge, or politely suggesting to their boss that they shouldn’t use the business centre’s PC to log into the office network – are rewarded, and if the rewards are made public, that will encourage others.

Sales teams often work this way, with the most successful salesperson being publicly rewarded, and applauded by their (no doubt slightly envious) peers. The key to this is to make the reward public: “Cyber Safe Employee of the Month” notices, mugs and mouse mats, special privileges such as being allowed to go home early…

All in all…

Socialising cyber safe behaviour is a very powerful tool. It isn’t the only tool you can use of course, and it’s not a magic wand. You will need the right tools and the right processes in place as well. But used with imagination it can make a big difference to your cyber security as well as helping more general team bonding.

Keep cyber safe!