The dangers of hidden data

How many times have you leaked strategic data by accident? And do you even know when you have?

There are a multitude of opportunities to share strategic information with third parties such as clients and suppliers by accident – information that could seriously damage your negotiating position. And if you are not aware of the dangers, it is all too easy to do.

Take Microsoft Office documents. If you ever share Excel spreadsheets with clients, do you make sure that any “hidden” columns don’t contain information you would rather keep private? Creating pivot tables to communicate your data analysis? Are you sure that the original detailed data isn’t available somewhere? And what about PowerPoint? Are those “Notes” pages suitable for sharing, or do they contain thoughts that you would rather not put in writing? And those text boxes that you dragged off the side of slides when you were writing them – you know they are still there, of course!
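
If you want to check a spreadsheet before it goes out, a quick script can help. Here is a minimal sketch using the openpyxl Python library (the file name is just an example) that lists hidden worksheets, columns and rows in a workbook. It won’t catch everything – pivot caches, for instance – but it is a useful first pass.

```python
# Sketch: list hidden sheets, columns and rows in an Excel workbook
# before sharing it. Requires the openpyxl package; "report.xlsx" is
# just an example file name.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")

for ws in wb.worksheets:
    if ws.sheet_state != "visible":           # catches hidden and "very hidden" sheets
        print(f"Hidden sheet: {ws.title}")
    for letter, dim in ws.column_dimensions.items():
        if dim.hidden:
            print(f"{ws.title}: hidden column {letter}")
    for idx, dim in ws.row_dimensions.items():
        if dim.hidden:
            print(f"{ws.title}: hidden row {idx}")
```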

Have you collaborated with others to produce a document? Most likely you will have written notes and tracked changes. If you are not careful, much of the history of your document could be available to the final recipients – and that could be embarrassing!

Don’t forget document metadata either. Are there any interesting titbits in the “Properties” of your documents – the original author perhaps, or the date the document was first drafted? Who knows what value that might have to someone else.
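
It is worth inspecting those properties before a document leaves the building. As an illustration, here is a minimal sketch using the python-docx library (the file name is an example) that prints the core properties of a Word document:

```python
# Sketch: print the core metadata ("Properties") of a Word document.
# Requires the python-docx package; "proposal.docx" is an example name.
from docx import Document

doc = Document("proposal.docx")
props = doc.core_properties

print("Author:          ", props.author)
print("Last modified by:", props.last_modified_by)
print("Created:         ", props.created)
print("Last printed:    ", props.last_printed)
print("Revision:        ", props.revision)
print("Comments:        ", props.comments)
```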

Perhaps you think you have blocked some text out. Ineffective “redaction” is the cause of a lot of data leakage. For instance, blocking out text using a “highlight” the same colour as the text won’t delete it – and it could be very easy to get rid of the highlight.

It’s not just documents though. There are lots of places where information can be hidden. Are your social media posts geo-tagged for instance? If you are regularly visiting a particular location, that could be of interest to competitors – or your colleagues.

Software can be another culprit. Is there any hidden text in your website, perhaps in an “invisible” font or in a comment tag? And that software you have commissioned – are you sure the developers haven’t left any notes that could give away secrets?
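
A quick way to spot one class of problem is simply to scan a page’s source for comments. A minimal sketch using only the Python standard library (the URL is an example) might look like this:

```python
# Sketch: list the HTML comments in a web page - a common place for
# developers' notes to linger. The URL is just an example.
from html.parser import HTMLParser
from urllib.request import urlopen


class CommentLister(HTMLParser):
    def handle_comment(self, data):
        print("Comment found:", data.strip())


html = urlopen("https://www.example.com/").read().decode("utf-8", errors="replace")
CommentLister().feed(html)
```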

Is there strategic data hidden in plain sight? You might be surprised where interesting data lurks. Security blogger Brian Krebs tells how he analysed an airline boarding card and found a wealth of information in the bar code – including information that could have helped him disrupt future travel plans.

And finally – do be careful how you delete sensitive files. It isn’t sufficient to “delete” them as they will probably still exist in some form on your hard drive, easy for anyone reasonably skilled to find. You need to actively scrub them out. There is plenty of free software available online to do this. (Make sure you do this carefully when you recycle a personal computer or smartphone.)
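
As a rough illustration of what “actively scrubbing” a file means, here is a minimal Python sketch that overwrites a file with random data before removing it. Treat it as a sketch only: on SSDs and journalling file systems a simple overwrite is no guarantee, which is why dedicated wiping tools (or full-disk encryption) are the safer option.

```python
# Sketch: overwrite a file with random bytes before deleting it.
# On SSDs and journalling file systems this is not a guarantee -
# use a dedicated wiping tool or full-disk encryption in practice.
import os
import secrets

def scrub(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)

scrub("old_price_list.xlsx")           # example file name
```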

The data you don’t value is often surprisingly valuable to other people, especially competitors and suppliers. Don’t share it accidentally simply because you can’t see it.

Cyber security and the importance of usability

There is nothing new or unusual about the need to design usable systems. A whole industry has grown up around the business of making sure that commercial websites and apps are easy to use and deliver the behaviour, such as spending money, that the owners of those websites and apps want to see.

Usable systems generally require three things: the system has to be useful, or at least perceived as useful, by the end user; the system has to be easy for the end user to use; and the system has to be persuasive, so that the user takes the actions that the owner desires.

Is cyber security any different?

These three requirements of utility, usability and persuasiveness are seen in cyber security systems. However there are some differences compared with the consumer-facing world. Making sure a cyber security system succeeds is in some ways more important than making a commercial system succeed.

One issue is that the cyber security system has to work for everyone: potentially if just one person fails to use the system properly then the organisation will be put at risk.

In addition, cyber security systems are like stable doors – they need to be shut before the horse bolts, as there is no use locking them after a breach has happened. If an online shop doesn’t work for some reason then the user can go back and try again, but with a cyber security system, if it doesn’t work first time then the damage may be done.

These are stringent requirements. Unfortunately the nature of cyber security means that these requirements are hard to meet:

  • Users have little motivation to comply with security requirements as keeping secure is not their main purpose; indeed security systems are part of a technical infrastructure that may have no real meaning or relevance to the end users
  • Security systems can “get in the way” of tasks and so can be thought of as a nuisance rather than a benefit
  • Security systems are often based on arbitrary and little understood rules set by other people, such as those found in security policies, rather than on the desires of the end user
  • Users may find complying with the requirements of security systems socially difficult as they may force the user to display distrust towards colleagues

These are all challenging issues, and any security system you design needs to ask the very minimum of effort from the user if it is to overcome them.

Unfortunately many cyber security systems demand a degree of technical knowledge. For instance they may use jargon: “Do you want to encrypt this document?” will have an obvious meaning to anyone working in IT but may mean nothing to some users.

Furthermore some security requirements may of necessity require a degree of “cognitive overload”: the requirement to remember a strong password (perhaps 12 random characters) is an example. Again this will cause additional difficulty.

Users are not naturally motivated towards cyber security systems. And they may find them hard to use. So how can success – universal and efficient use of systems – be achieved?

Delivering success

Start with the end user. Use a combination of interviews (including the standard “think aloud” protocol used by many UX practitioners), observation and expert evaluation to identify where the obstacles to successful use of the system lie. Obviously the usual rules of good usability will apply: consistency, reduced cognitive overload, feedback, and help when mistakes are made.

Learnability is also important. Accept that some form of help may be needed by the user and ensure that this is available, ideally within the system. Help files shouldn’t just tell people how to achieve something; they should also explain why it is important.

But for cyber security systems there is also a lot of work to be done around persuasion. This will involve educating the end user about the importance of the system – how it protects their organisation, and how it protects them as individuals.

It will also involve ensuring that the system is credible – that end users realise that the system does what it is supposed to do and isn’t just a tick-box exercise or something dreamed up by the geeks in IT to make everyone’s lives that little bit harder.

And it will involve demonstrating to the end user that all their colleagues are using the system – and if they don’t use it then they will be out of line with the majority.

“Usability is not enough” is a common theme in retail website design. It is even more important in the design of cyber security systems.

Business processes and cyber risk

Cyber risk doesn’t just involve malicious techies hacking into corporate accounts. It can also involve risk to everyday business processes: “process cyber risk”. Unfortunately, because the IT department is kept busy defending the corporate network from the hackers, these process risks are often left unmanaged.

What do I mean by process cyber risk? Quite simply, a risk of loss or damage to an organisation caused by a weak business process combined with the use of computer technology. These weak processes are often found within finance departments, but you will also find them in HR, in marketing and across organisations.

Process risk and identity

Many business processes rely on a particular document being signed off by an authorised individual. As many processes migrate online, the assumption is that the sign-off process can also be undertaken online: sign in as a particular individual and you have authorisation to access a particular document or process.

As most people have to log in to company systems with a name and a password, this shouldn’t be a problem. Except that passwords get shared. Busy people often share log-in details with juniors, allowing people to access systems and documents that they are not authorised to see.

Any authorisation process that simply relies on someone logging in with a name and password is weak because it is easily subverted. Issuing “dongles” as second-factor authentication devices isn’t much better as these can get shared (unless they are integral to a company identity card). Robust processes, where sensitive data or decisions are concerned, should assume that a password has been shared (or stolen) and require additional security such as a second pair of eyes.
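
What might a “second pair of eyes” look like in practice? Here is a minimal sketch (the names and roles are purely illustrative) of a check that refuses to release a sensitive action until two different authorised people have approved it:

```python
# Sketch: require two distinct, authorised approvers before a sensitive
# action is released. Names and roles are purely illustrative.
AUTHORISED_APPROVERS = {"j.smith", "a.patel", "finance.director"}

def can_release(action, approvers):
    distinct = set(approvers)
    if not distinct.issubset(AUTHORISED_APPROVERS):
        return False                     # someone isn't authorised
    if len(distinct) < 2:
        return False                     # a shared login can't approve twice
    return True

print(can_release("pay-invoice-4711", ["j.smith", "j.smith"]))   # False
print(can_release("pay-invoice-4711", ["j.smith", "a.patel"]))   # True
```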

Process risks and finance departments

One big risk for finance departments is invoice fraud. This can happen in several ways. A common way is for thieves to gather information about a company, perhaps the news that it is investing in new technology. They will then use this information plus other easily obtainable assets such as company logos and the names of senior people in an organisation to put together a scam.

This might involve an email “from” a director of the organisation to a mid ranking person in the finance department asking for an invoice to be paid promptly; the invoice, which is of course a fake, is attached to the email.

In other cases the invoice is genuine. For instance thieves may pose as a supplier and ask for details of any unpaid invoices. They then resubmit a genuine invoice – but with the bank payment details changed.

All too often the unwitting finance executive passes the invoice for payment. Once the money has reached the thief’s bank account it is quickly transferred to another account making it unrecoverable.

This type of fraud is big business. Earlier this year Ubiquiti Networks disclosed that thieves stole $46.7 million in this way. Meanwhile in the UK, the police’s Action Fraud service received around 750 reports of this type of fraud in the first half of 2015. And of course many similar frauds go unreported – or undetected.

What can you do to protect against this? Well, start by educating staff about the nature of the threat – all staff, not just those in the finance department. Ensure that the details of all invoices are scrutinised carefully: Is the logo up-to-date? Is the email address correct (perhaps it is a .org instead of a .com)? Are the bank payment details the same as usual (if they have changed then telephone someone you know at the supplier to ask for confirmation)? And take extra care with larger invoices, for instance requiring them to be checked by two separate people.
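
Some of those checks can be partly automated. For example, here is a minimal sketch (the supplier names and domains are invented) that flags an invoice email whose sender domain doesn’t match the domain you hold on file for that supplier – the classic “.org instead of .com” trick:

```python
# Sketch: flag invoice emails whose sender domain does not match the
# domain held on file for the supplier. Domains are invented examples.
APPROVED_SUPPLIER_DOMAINS = {
    "Acme Widgets": "acmewidgets.com",
    "Dorking Deliveries": "dorkingdeliveries.co.uk",
}

def sender_domain_ok(supplier, sender_address):
    expected = APPROVED_SUPPLIER_DOMAINS.get(supplier)
    actual = sender_address.rsplit("@", 1)[-1].lower()
    return expected is not None and actual == expected

print(sender_domain_ok("Acme Widgets", "invoices@acmewidgets.com"))  # True
print(sender_domain_ok("Acme Widgets", "invoices@acmewidgets.org"))  # False - query it
```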

There are other cyber risks within finance processes – and often these are internal risks, initiated by employees. Examples include purchase fraud, when personal items are bought using company money or when required items are bought at inflated prices, with the purchaser then getting a kickback at a later date. Again fake emails can be used to support these purchases. And again simple processes can disarm the threat.

Process risks within HR

Within HR there are numerous process risks. Let’s start with recruitment. The risks here can involve social media profiles designed to misinform, perhaps with fake endorsements or untrue job details. Looking at a LinkedIn profile is an easy way to identify potential candidates – but it is important to realise that the profile you see may well be substantially embroidered.

Another shortcut, especially when looking for “knowledge leaders”, is to see what sort of “rating” candidates have on sites like Klout.com. Superficially this is fine. However, it is essential to be aware of how people are rated by the site (for instance what data is used) before making a judgement using this type of data, as you may well be given a misleading picture.

Another risk of using social media to identify candidates is that you open yourself to accusations of discrimination. An attractive CV may contain no information about age, ethnicity or sexual orientation. Social media will. You really don’t want to know this sort of information, but once you know something you can’t “unknow” it: and this can open you up to accusations of bias. It isn’t unknown for companies to commission an edited summary of a candidate’s social media profiles, with anything that could lead to accusations of discrimination taken out, in order to de-risk the profile before it is given to the recruiter.

In fact HR is full of cyber risk, especially where social media is concerned. There may be problems with the posts employees make on social media. There may be issues around bullying or discrimination at work. And maintaining a positive “employer brand” can be very difficult if an ex-employee starts to deride their old employer online on sites such as Glassdoor.

Process risk and marketing

Process risk is also very much at home in marketing. Again social media is one of the culprits. Not everyone, even in marketing, is a social media addict. Senior marketers frequently hand over their brands’ social media profiles to junior marketers, or even interns, because “they have a Facebook page”.

It’s a mistake. Not only is it likely that the output will be poor, the junior marketer may well (they frequently do) break advertising regulations (for instance by glamorising alcohol) or even fair trading laws (for instance by including “spontaneous” endorsements from paid celebrities).

This shouldn’t be difficult: there is no reason that the processes that govern advertising in general can’t be applied to social media.

Procurement and cyber risk

Finally there is procurement – and the process of ensuring that third party suppliers don’t represent a cyber risk. This is a huge area of risk and one that is not always well appreciated.

The issue is not just that the third party may be insecure (for instance the massive hack of US retailer Target came about via an insecure supplier) and that it is hard to know whether they are secure or not. It is also that people working for a supplier who have been given access may then leave the supplier without you being told: as a result they retain access to your information, perhaps after they have joined a competitor. In addition, suppliers may well have their own reasons for being a risk – they are in dispute with you, they are in financial difficulty, they have been taken over by a competitor…

Business processes frequently have the potential to be undermined by online technologies. It takes imagination to identify where the threats lie. However once they have been identified, actions to reduce the effect of the threat are often very simple.

Persuasion and cyber security

You can’t rely on technology to solve your cyber security issues.

Cyber security is largely a “people” issue: cyber breaches are generally caused by people behaving in an unsafe manner, whether they know they are doing so or not. The solution is to persuade them to behave safely.

But how can you persuade people to do this?

Effective cyber communication

The first step is developing an appropriate communication programme. Of course you already know that this shouldn’t be a “death by PowerPoint” style lecture.

You are going to make your communication engaging and interactive with lots of colour and interesting imagery. You are going to start training sessions off with uplifting material that gets people into a good mood – games, stories, or other activities designed to generate a feeling of well being.

But what about the content of your communications? How should you structure the messages that you need to get across? Here are a few dos and don’ts:

  • Do describe security problems in a clear-cut and simple way so that people can understand everything you are saying. Don’t use jargon and make it all sound frightfully difficult because you want to look clever
  • Do give people hope – while 100% security is impossible, you should emphasise that there is a lot that can be done to minimise threats and the consequences of a cyber incident. Don’t use “fear, uncertainty, doubt” to persuade people of the importance of the risk: they will just bury their heads in the sand.
  • Do make the risk relevant to the individuals you are talking to – describe personal risks, to their reputation or their jobs. Don’t describe it as a risk to the faceless organisation they work for.
  • Do stress that the risks are immediate ones that are all around you as you speak. Use examples of things that have happened, ideally to your organisation or a competitor. Don’t describe potential incidents that might happen sometime in the future.

Marketing techniques

There are also a number of marketing techniques you may be able to bring into your communications:

  • Use the power of FREE when describing steps that people can take to avoid risk; this could be FREE training to avoid phishing, or some FREE software people can download to use at work and at home
  • Use the power of loss. People are loss averse: they feel potential losses more keenly than equivalent gains. So emphasise what people might lose if they behave unsafely, not what they might gain if they behave safely. The loss needs to be personal, for instance it could relate to losing money when shopping online
  • Use the power of authority to persuade people. If you can ensure that your organisation’s leaders will act – and you can show them acting – in a cyber safe way then you have a good chance that people will follow their lead.
  • Use the power of peer pressure. People will often follow the lead of the people around them as they don’t want to seem out of step with the majority’s way of behaving. So if you can persuade some people to endorse safe behaviour during a training session, others will inevitably follow them. Having a few “stooges” as part of your audience may help!
  • Use the power of discovery. Guide people towards uncovering solutions to cyber risks, rather than telling them what to do. If they are responsible for defining solutions they will value those solutions. If you simply give them someone else’s solution it may well be discounted as “Not invented here”

You are trying to change people’s behaviour and it is important that you succeed. Think about what will persuade people. And don’t be afraid to use a few cheesy marketing techniques along the way.

Tackling invoice fraud

Invoice fraud is on the rise. It may involve a spoof email, apparently from the CEO of your organisation, “authorising” the payment of a fake invoice. In other cases the email seems to come from a trusted supplier.

Two of my friends who run SMEs have recently been exposed to invoice fraud, in one case for around £70,000. In both cases the fraud was picked up before payment was made.

But in a lot of cases it isn’t. For instance last year a small Norfolk manufacturer was scammed out of £350,000 by a fraudulent email. And because of the nature of online banking, once the money has left your account it is very hard to retrieve.

There are ways of reducing the risk of fraudulent emails. For large organisations an anomalytics service might be the answer. These services build up a picture of normal email traffic in order to identify unusual emails that can be subjected to further examination.

Another tool is the DMARC email standard. This helps stop people sending emails that appear to come from your domain. It doesn’t of course stop people from breaking into your email account and actually sending emails “from” you. Nor does it prevent every phishing attack. But it is a useful tool nonetheless as it makes life harder for fraudsters.
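
If you are curious whether a domain publishes a DMARC policy, it is simply a DNS TXT record at _dmarc.&lt;domain&gt; – a typical record looks something like “v=DMARC1; p=reject; rua=mailto:reports@example.com”. Here is a minimal sketch using the dnspython package (the domain is, of course, an example):

```python
# Sketch: look up a domain's DMARC policy, which is published as a DNS
# TXT record at _dmarc.<domain>. Requires the dnspython package;
# "example.com" is an example domain.
import dns.resolver

def dmarc_record(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        text = b"".join(rdata.strings).decode()
        if text.startswith("v=DMARC1"):
            return text
    return None

print(dmarc_record("example.com"))
```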

But the real way to address invoice fraud is through implementing stronger business processes. These will include:

  • Ensuring only properly trained and authorised people are able to make online payments
  • Creating a “whitelist” of approved suppliers and their agreed payment instructions (bank details etc)
  • Double checking any changes in payment instructions from suppliers on the whitelist with people you know are authorised to approve those changes
  • Checking any payment requests made by managers in your company with that person on the phone or face to face (and not as an email reply)
  • Ensuring that appropriate documentation is present and checked before any payment is authorised: this might include an invoice, a purchase order, and a “goods received” slip or equivalent
  • Creating a process where additional authorisation is needed to sign off payments:
    • Over a certain amount
    • To new suppliers who are not yet on an approved “whitelist”
    • To existing whitelist suppliers who have provided new bank details
    • Where a payment is requested to a country outside normal trading patterns.
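
As an illustration of how those extra-authorisation rules might be expressed, here is a minimal sketch (the threshold, country list and field names are invented for the example) that decides whether a payment needs a second sign-off:

```python
# Sketch: decide whether a payment request needs additional sign-off.
# The threshold, country list and field names are invented examples.
EXTRA_APPROVAL_THRESHOLD = 10_000          # e.g. anything over £10,000
NORMAL_TRADING_COUNTRIES = {"GB", "IE", "FR", "DE"}

def needs_extra_approval(payment, whitelist):
    supplier = whitelist.get(payment["supplier"])
    if supplier is None:
        return True                                    # not yet on the whitelist
    if payment["bank_details"] != supplier["bank_details"]:
        return True                                    # bank details have changed
    if payment["amount"] > EXTRA_APPROVAL_THRESHOLD:
        return True                                    # over the agreed limit
    if payment["country"] not in NORMAL_TRADING_COUNTRIES:
        return True                                    # unusual destination
    return False

whitelist = {"Acme Widgets": {"bank_details": "20-00-00 12345678"}}
payment = {"supplier": "Acme Widgets", "bank_details": "20-00-00 12345678",
           "amount": 2_500, "country": "GB"}
print(needs_extra_approval(payment, whitelist))        # False - routine payment
```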

And use your common sense: raise a query when something seems odd. (This of course requires a culture that is sympathetic to juniors raising queries. If you don’t have this sort of culture, or if senior staff bully their juniors, this type of fraud becomes much more likely.)

Process won’t be enough on its own though. Training the finance team is also important so they are aware of the nature of invoice fraud. This training should include advice about how to take extra care with urgent or aggressive requests for payment.

Keep cyber safe!

Cyber security and peer pressure

Cyber security training – most IT security people would say that’s the answer to solving the insider threat problem.

But is it? Giving people information is certainly part of the solution. But training rarely changes behaviour.

Peer pressure does.

And that is why socialising safe cyber behaviour is an essential strategy if you want to ensure cyber security.

What does socialising cyber safety mean?

Socialising safe cyber behaviour involves changing an organisation’s culture so that unsafe cyber behaviour becomes as socially unacceptable as bullying and sexual harassment. It uses peer pressure or “social influence” to steer people into an acceptable way of behaving.

One of the reasons that peer pressure works is because most people like being in groups. Being in a group is safer than being isolated and, because humans evolved in a very dangerous world full of nasty predators (including other humans), we are hard wired to want this.

If we want to belong to a particular group it helps a lot to behave like the other people in the group – wearing the same types of clothing, supporting the same football team, looking, sounding, thinking and behaving like them.

In other words, people in a group often think in similar ways. Promote the right way of thinking and you can use people’s need to belong to a group to enhance cyber security and encourage cyber safe behaviour.

Following the herd

The most powerful way of doing this is to make safe behaviour seem to be what everyone else does – the “social norm”. There is lots of evidence that telling people about social norms changes people’s behaviour.

A good example of a company that successfully used social norms to change people’s behaviour is Opower, a US energy company. Opower wanted to drive energy use down. It tried several messages to do this, such as:

  • You can save $54 this month
  • You can save the planet
  • You can be a good citizen

None of these were particularly successful. So they tried a fourth message along the lines of:

  • Your neighbours are doing better than you

The results were amazing: more than 75% of the people who found this out started to save energy. In the same way, a message along the lines of  “83% of your colleagues never click on links in emails from unknown sources” is likely to influence people.

Mirroring

Making safe behaviour “visible” is another effective technique. People have a tendency to follow other people, as we have seen. So if one person is seen as behaving in a particular way, then others are likely to follow.

You are in the canteen when the person behind you at the counter sees what you have on your tray and says “That looks nice. I’ll have one of those.” Why do they do it? It might be because they hadn’t noticed it before. But it is just as likely to be because they want to make a good impression on you by imitating you.

You will see this with body language too. When you are sitting opposite someone try changing your posture: cross your legs, fold your arms, rub your chin. The chances are the person opposite will “mirror” at least some of your changes in posture.

The same goes for cyber security. If one person acts in a particular way, then people near them may well imitate them. For instance, if you can get someone to claim they always use a complicated password to log on, or to say “No way I’d log on to our corporate network using Costalotta’s free wi-fi”, then others may well follow their lead.

Leading

People follow authority figures. So you want your leaders to act in a cyber safe manner.

There are different sorts of leaders in any group. I’d always suggest that you try to get your organisation’s formal leaders to act in a visibly cyber safe way (or at least avoid obviously unsafe behaviour).

But the CEO might not have much credibility when it comes to technology. One of the new interns may be far more credible, and influential. Or perhaps there are some popular social leaders in your organisation: these too will have lots of leadership power. Empowering your leaders to act as cyber safety role models will pay dividends.

Incentivising the group

Using group rewards that disappear if anyone steps out of line is an interesting idea. With this technique there is a reward when everyone behaves well, but no one gains if even one person behaves badly. And as most people dislike being unpopular, they do their utmost to ensure that others don’t lose out.

Creative agency 23Red used this technique to get people to complete their time sheets.

Of course this technique only works if everyone belongs to the same social group. If there is a clique of people, perhaps in sales, who don’t interact much with the rest of the organisation, then they may well not feel obliged to behave well for the sake of their colleagues.

Social shaming

This is a little controversial, although I have seen it used successfully as a way of keeping the size of email directories down. With this technique bad behaviour is reported publicly – the digital equivalent of being put in the stocks. People may not throw cabbages at you, but it is still embarrassing to be called out in front of your peers for antisocial behaviour.

Peer groups

It may be practical to start the socialising process off using one or more small groups rather than trying to influence the whole of an organisation. Socialising behaviour in a small group has obvious limitations, but get some people to engage with cyber safety and their behaviour will soon be copied by others.

Cyber security expert Richard Knowlton suggested to me that telling stories in groups is a great way of generating understanding and acceptance of the threats that cyber brings. “My email was hacked…”, “I definitely got a phishing message on LinkedIn the other day…”, “One of my friends was emailed a fraudulent invoice the other day…” Share stories and you bring the problem to life and make it seem relevant for the people in your group.

You can even think about generating solutions as a group. Thinking tends to converge when people are in small groups, especially when people are faced with a hard problem. If you set your group a cyber threat problem they will probably come up with a common view of how to solve it.

Of course you want that view to be effective and practical, so you may want one or two “stooges” in your group who have been briefed about good solutions and who can lead the conversation in the right direction. But once you have arrived at the solution, the whole group are likely to agree with it as they have been involved in uncovering it.

Rewarding good behaviour

Providing public rewards and status can also generate social pressure. If people who have behaved safely – perhaps challenging a stranger who isn’t wearing a visitor’s badge, or politely suggesting to their boss that they shouldn’t use the business centre’s PC to log into the office network – are rewarded, and if the rewards are made public, that will encourage others.

Sales teams often work this way, with the most successful salesperson being publicly rewarded, and applauded by his (no doubt slightly envious) peers. The key to this is to make the reward public: “Cyber Safe Employee of the Month” notices, mugs and mouse mats, special privileges such as being allowed to go home early…

All in all…

Socialising cyber safe behaviour is a very powerful tool. It isn’t the only tool you can use of course, and it’s not a magic wand. You will need the right tools and the right processes in place as well. But used with imagination it can make a big difference to your cyber security as well as helping more general team bonding.

Keep cyber safe!

Does your cyber security have the right aura?

Can cyber security have auras?

How can cyber security have an “aura”? It sounds like a meaningless question. But step back a little and think about how direct marketing works.

Commonly, people in direct marketing use a simple mnemonic to describe the steps they take consumers through when persuading them to buy: AURAL. I think this is relevant for cyber security.

AURAL stands for Awareness, Understanding, Relevance, Action, and Loyalty. In other words:

  1. You start by making people aware of your product
  2. You move on to helping them understand what it does – its benefits and features
  3. Then you persuade them that the product is relevant for their own needs, that it solves a particular problem they have
  4. Next you call them to the action you want them to take, which is generally putting their hand in their pocket and shelling out for whatever you are selling
  5. And finally you hope to generate some loyalty so that they will come back and buy again, and perhaps even recommend your product to their friends.

As I said, this process (which by the way doesn’t have to be linear) is pretty relevant for cyber security too. Except that “loyalty” isn’t really appropriate. But rather than simply getting rid of the “L” I am going to change it to an S: AURAS. The final S stands for Socialise. You will see what I mean in a moment.

So what do I mean by “AURAS”?

Awareness

As with direct marketing, in cyber security we need Awareness. This is aimed at keeping cyber threats, and the need for cyber security, at the front of everyone’s minds.

You might create awareness with posters (remember to move them around and change their message so that people don’t become blind to them), emails (personalised messages can be highly effective), messages when people start their computers up or start to do certain things (again remember to change them), even things like mugs and mouse mats which can be given to reward cyber safe behaviour.

Understanding

It isn’t enough to be aware of a threat though. People also need Understanding about what they can do. For instance, if you have a policy of insisting on complex passwords that are changed every month then you need to give people the tools to do this – otherwise they are likely to write their passwords down on sticky notes and put them on their monitors, hardly the cyber safe behaviour you want to encourage. (There is a hint about complex passwords at the end of this post.) This is where training comes in: helping people understand how they need to behave to keep safe.

Relevance

You also need to ensure that people feel the training they have had has real Relevance to their own lives. Not everyone lives to work. Most people regard work as a way of getting the things they want in life. Of course their job is important – so stressing that unsafe behaviour could damage their employer, and hence their own job, is one tactic.

A stronger tactic though (and one that might even generate a bit of gratitude) is to show them how being cyber safe can help them outside their work life – protecting their identity, their bank accounts, their children’s physical safety.

Action

Now you need to call them to Action. This involves communication at the moment they are doing something. For instance, BAE’s email security service has a very handy feature: if a user is tempted to click on a link in an email (generally accepted as unsafe behaviour unless you are certain who the email is from) they can be served a CAPTCHA image which makes them stop and think about what they are doing before they click on the link.

(I haven’t seen these images: it would be nice to think that instead of a standard CAPTCHA image such as a random set of numbers they contain a little message like “Are you sure?” or “Links can hurt”.)

Socialise

And finally you need to Socialise cyber safe behaviour into the organisation. The aim will be to make unsafe behaviour socially unacceptable – just as drink driving, not showering after a lunchtime run, or eating fish soup at your desk are all pretty unacceptable.

One of the most powerful ways of socialising behaviour is telling people that the majority of their fellows act in the way you are hoping to persuade them to act. This doesn’t have to be complicated. For instance Northern Illinois University halved the amount of binge drinking by students simply by promoting the message “Most students drink in moderation.” People follow the crowd.

AURAS

AURAS: it’s a great way of thinking about the different things you need to do to change the way people think about cyber security and to change the way they behave.

An easy way to complex passwords

Now I did say I would give you a tip about remembering complex passwords that change every month. It’s easy. You need two things: a memorable phrase; and a date “protocol” (I’ll explain).

Let’s say your IT people have demanded a password of at least 12 characters that includes at least one of each of the following: upper case letter, lower case letter, number and symbol. They also want you to change it every month.

First of all, the phrase. This isn’t the same as a “pass phrase”, where people use several words as a password: there is some evidence that this isn’t very secure.

You need to think of a phrase such as: I love my job at Acme Widgets, Dorking! Take the first letter of each word and the symbols and you get: 1lmj@AW,D! (the word “at” is useful as it turns nicely into a symbol and the “I” is useful as you can turn it into a number 1).

Now think about a date “protocol”. A really simple one might be to use the first of the month. It’s October 2015 so that makes: 01 10 15. Just for a bit of fun I am going to put the first three numbers at the start and the last three numbers at the end. So my password this month is: 0111lmj@AW,D!015. Easy to remember and I can change it every month.
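
Here is the same scheme written out as a minimal Python sketch (the stem comes from the example phrase above; you would of course keep your own stem private):

```python
# Sketch of the scheme described above: a fixed "stem" derived from a
# memorable phrase, wrapped in the first three and last three digits of
# this month's date (first of the month, in DDMMYY form).
from datetime import date

STEM = "1lmj@AW,D!"              # from "I love my job at Acme Widgets, Dorking!"

def monthly_password(today=None):
    today = today or date.today()
    digits = date(today.year, today.month, 1).strftime("%d%m%y")   # e.g. "011015"
    return digits[:3] + STEM + digits[3:]

print(monthly_password(date(2015, 10, 7)))   # 0111lmj@AW,D!015
```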

Keep cyber safe!