The dangers of hidden data

How many times have you leaked strategic data by accident? And do you even know when you have?

There are a multitude of opportunities to share strategic information with third parties such as clients and suppliers by accident. Information that could seriously damage your negotiating position. And if you are not aware of these dangers, it is very easy to do this.

Take Microsoft Office documents. If you ever share Excel spreadsheets with clients, do you make sure that any “hidden” columns don’t contain information you would rather keep private? Creating pivot tables to communicate your data analysis? Are you sure that the original detailed data isn’t available somewhere? And what about PowerPoint? Are those “Notes” pages suitable for sharing, or do they contain thoughts that you would rather not put in writing? And those text boxes that you pulled off the side of slides while you were writing them – you know they are still there, of course!
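Because modern Office files are just zip archives, you can check a deck for forgotten notes pages before sharing it. A Python sketch using only the standard library (the internal path is where PowerPoint keeps notes; treat this as an illustration, not a full audit):

```python
import zipfile

def notes_parts(pptx):
    """Return the notes-page parts buried inside a .pptx file.
    A PowerPoint file is a zip archive; notes live under
    ppt/notesSlides/ even when you never look at them."""
    with zipfile.ZipFile(pptx) as z:
        return [n for n in z.namelist() if n.startswith("ppt/notesSlides/")]
```

If the list comes back non-empty, open the deck and review (or delete) the notes before it goes anywhere.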

Have you collaborated with others to produce a document? Most likely you will have written notes and tracked changes. If you are not careful much of the history of your document could be available to the final recipients: and that could be embarrassing!

Don’t forget document metadata either. Are there any interesting titbits in the “Properties” of your documents – the original author perhaps, or the date the document was first drafted? Who knows what value that might have to someone else?
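You can inspect those “Properties” yourself without opening Office. The sketch below, illustrative only, reads the core-properties part that Word, Excel and PowerPoint files all carry inside their zip structure:

```python
import re
import zipfile

def core_properties(doc):
    """Extract the 'Properties' metadata (author, dates and so on)
    from an Office document. The metadata sits in docProps/core.xml
    inside the zip archive."""
    xml = zipfile.ZipFile(doc).read("docProps/core.xml").decode("utf-8")
    # Crude but dependency-free: pull out simple <tag>value</tag> pairs
    return dict(re.findall(r"<(?:\w+:)?(\w+)[^>]*>([^<]+)</", xml))
```

Run it over a document you are about to share and see whose name and which dates come back.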

Perhaps you think you have blocked some text out. Ineffective “redaction” is the cause of a lot of data leakage. For instance, blocking out text using a “highlight” the same colour as the text won’t delete it – and it could be very easy to get rid of the highlight.

It’s not just documents though. There are lots of places where information can be hidden. Are your social media posts geo-tagged for instance? If you are regularly visiting a particular location, that could be of interest to competitors – or your colleagues.

Software can be another culprit. Is there any hidden text in your website, perhaps in an “invisible” font or in a comment tag? And that software you have commissioned – are you sure the developers haven’t left any notes that could give away secrets?
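A quick way to check your own pages is to scan the HTML for comments and for inline styles that hide text. A rough Python sketch – the patterns are deliberately simple and will miss styles defined in separate CSS files:

```python
import re

def hidden_in_page(html):
    """Flag two common hiding places in a web page: HTML comments
    (where developers leave notes) and inline styles that make text
    invisible to visitors but not to anyone reading the source."""
    comments = re.findall(r"<!--(.*?)-->", html, re.S)
    invisible = re.findall(
        r'style="[^"]*(?:display:\s*none|visibility:\s*hidden)[^"]*"[^>]*>([^<]*)',
        html)
    return comments, invisible
```

Anything these return is text that a visitor never sees – but that a competitor reading your page source certainly can.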

Is there strategic data hidden in plain sight? You might be surprised where interesting data lurks. Security blogger Brian Krebs tells how he analysed an airline boarding card and found a wealth of information in the bar code – including information that could have helped him disrupt future travel plans.

And finally – do be careful how you delete sensitive files. It isn’t sufficient to “delete” them as they will probably still exist in some form on your hard drive, easy for anyone reasonably skilled to find. You need to actively scrub them out. There is plenty of free software available online to do this. (Make sure you do this carefully when you recycle a personal computer or smartphone.)
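There is plenty of free software for this, but the underlying idea is simple: overwrite the contents before deleting the file. A Python sketch, for illustration only – on SSDs and journalling filesystems a dedicated scrubbing tool is safer:

```python
import os
import secrets

def scrub(path, passes=3):
    """Overwrite a file with random bytes before deleting it, so the
    contents are harder to recover from the disk. A sketch only:
    copies in backups, journals and SSD wear-levelling are not covered."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite onto the disk
    os.remove(path)
```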

The data you don’t value is often surprisingly valuable to other people, especially competitors and suppliers. Don’t share it accidentally because you simply can’t see it.


Cyber security and the importance of usability

There is nothing new or unusual about the need to design usable systems. A whole industry has grown up around the business of making sure that commercial websites and apps are easy to use and deliver the behaviour, such as spending money, that the owners of those websites and apps want to see.

Usable systems generally require three things: the system has to be useful, or at least perceived as useful, by the end user; the system has to be easy to use; and the system has to be persuasive, so that the user takes the actions that the owner desires.

Is cyber security any different?

These three requirements of utility, usability and persuasiveness are seen in cyber security systems. However there are some differences compared with the consumer-facing world. Making sure a cyber security system succeeds is in some ways more important than making a commercial system succeed.

One issue is that the cyber security system has to work for everyone: potentially if just one person fails to use the system properly then the organisation will be put at risk.

In addition cyber security systems are like stable doors – they need to be shut before the horse bolts, as there is no use locking them after a breach has happened. If an online shop doesn’t work for some reason then the user can go back and try again, but with a cyber security system, if it doesn’t work first time then the damage may be done.

These are stringent requirements. Unfortunately the nature of cyber security means that these requirements are hard to meet:

  • Users have little motivation to comply with security requirements as keeping secure is not their main purpose; indeed security systems are part of a technical infrastructure that may have no real meaning or relevance to the end users
  • Security systems can “get in the way” of tasks and so can be thought of as a nuisance rather than a benefit
  • Security systems are often based on arbitrary and little understood rules set by other people, such as those found in security policies, rather than on the desires of the end user
  • Users may find complying with the requirements of security systems socially difficult as they may force the user to display distrust towards colleagues

These are all challenging issues, and any security system you design needs to ask the very minimum of effort from the user if it is to overcome them.

Unfortunately many cyber security systems demand a degree of technical knowledge. For instance they may use jargon: “Do you want to encrypt this document?” will have an obvious meaning to anyone working in IT but may mean nothing to some users.

Furthermore some security requirements may of necessity require a degree of “cognitive overload”: the requirement to remember a strong password (perhaps 12 random characters) is an example. Again this will cause additional difficulty.
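One way to soften that overload – sketched below in Python, purely as an illustration – is to generate credentials for users rather than asking them to invent their own. A passphrase drawn from a large word list can be both strong and far easier to remember than 12 random characters:

```python
import secrets
import string

def random_password(length=12):
    """The kind of random password a strict policy demands."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(wordlist, words=4):
    """A passphrase from a large word list: with enough words it is
    strong, and it carries far less cognitive load than line noise."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))
```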

Users are not naturally motivated towards cyber security systems. And they may find them hard to use. So how can success – universal and efficient use of systems – be achieved?

Delivering success

Start with the end user. Through a combination of interviews (including the standard “think aloud” protocol used by many UX practitioners), observation and expert evaluation, identify where the obstacles to successful use of the system lie. Obviously the usual rules of good usability will apply: consistency, reduced cognitive overload, feedback, and help when mistakes are made.

Learnability is also important. Accept that some form of help may be needed by the user and ensure that this is available, ideally within the system. Help files shouldn’t just tell people how to achieve something but also why it is important.

But for cyber security systems there is also a lot of work to be done around persuasion. This will involve educating the end user about the importance of the system – how it protects their organisation, and how it protects them as individuals.

It will also involve ensuring that the system is credible – that end users realise that the system does what it is supposed to do and isn’t just a tick box exercise or something dreamed up by the geeks in IT to make everyone’s lives that little bit harder.

And it will involve demonstrating to the end user that all their colleagues are using the system – and if they don’t use it then they will be out of line with the majority.

“Usability is not enough” is a common theme in retail website design. It is even more important in the design of cyber security systems.


A New Year’s resolution for CEOs

“I am going to take cyber security seriously in 2016.”

On the whole senior executives claim that they want to act in an ethical manner. And yet if they fail to embrace cyber security they are clearly lying.

Why do I say that? Because playing fast and loose with customer data wrecks lives. It is as simple as that. Lose your customers’ data and you expose them to a major risk of identity theft – and that can and does cause people massive personal problems.

The problems that David Crouse experienced in 2010 are typical. When his identity was stolen he saw $900,000 in goods and gambling being drained from his credit card account in less than 6 months. His credit score was ruined and he spent around $100,000 trying to solve the problems.

Higher interest rates and penalty fees for missed payments just made his financial situation worse. His debts resulted in his security clearance for government work being rescinded. Having lost his job, other employers wouldn’t touch him because of his debts and credit score. He felt suicidal. “It ruined me, financially and emotionally” he said.

Data breaches frequently result in identity theft. And this can have a devastating emotional impact on the victims, as it did with David Crouse. Research from the Identity Theft Resource Center indicates that 6% of victims actually feel suicidal while 31% experience overwhelming sadness.

The directors of any company whose negligence results in customers feeling suicidal cannot consider themselves to be ethical.

Unfortunately most data breaches that don’t involve the theft of credit card details are dismissed by corporations as being unimportant. And yet a credit card can be cancelled and replaced within hours. A stolen identity can take months, or longer, to repair.

And all sorts of data can be used to steal an identity. An email address and password; a home and office address; the names of family members; a holiday destination; a regular payment to a health club… Stolen medical records, which are highly effective if you want to steal an identity, will sell for around £20 per person online, while credit card details can be bought for as little as £1. Go figure, as they say in the USA.

Organisations must accept that any loss of customer data puts those customers in harm’s way. And if they want to be seen as ethical they must take reasonable steps to prevent data breaches. Until they do, well the EU’s new data protection rules can’t come on-stream quickly enough for me!

The EU’s General Data Protection Regulation explained

The EU’s General Data Protection Regulation (GDPR) may not be the most exciting topic on your agenda (!) but it is important: new rules to be published shortly will replace current laws on protecting personal data.

A draft was published earlier this year; it is still being discussed but it should be agreed in early 2016. It will then pass directly into law although it will be 2 years before you have to comply with it. This means you don’t have to panic until 2018. (But you should start thinking about it now.)

The rules are designed to give people control over their personal data and to simplify the regulatory environment for business.

By the way, there is also a Network and Information Security (NIS) Directive aimed at curbing cyber crime. This is different: don’t get them confused!

The GDPR draft is still under discussion so the following information may change; however, the important points are likely to be as shown below.

  • Definitions:
    • Personal data is defined as ‘any information relating to a data subject’ (that’s you or me). The ICO defines it more helpfully as ‘any detail about a living individual that can be used on its own, or with other data, to identify them’. This can include photos, email addresses and perhaps even computer IP addresses
    • Processing data means pretty much anything – collecting it, storing it, analysing it, sharing it…
    • A data controller is the person who decides what can be done with personal data; often they are the people who have collected the data; they will appoint a data processor, sometimes in their organisation but often outside it
    • A data processor processes the data on behalf of the controller; the new rules will increase the responsibilities of these people
  • Geography: The regulations will apply if the data controller or the data processor or the data subject is in the EU; so it applies to, say, US companies processing UK data
  • Employment data is excluded and member states can create their own rules for this
  • Fines for non-compliance will increase. The current draft proposes a maximum of €1 million or 2% of global turnover, although there has been discussion of a higher level of €100 million or 5% of global turnover. Ouch.
  • Existing data protection principles remain and these include:
    • Data processing must be fair to the person concerned, lawful (i.e. with their consent or to fulfil a contract with them) and transparent (i.e. they are able to see its results)
    • Data processing can only be for the purpose specified when the data was collected
    • Only data relevant to the purpose specified can be collected
    • Data must be accurate and up to date as far as possible
    • Data can only be held as long as needed for the purpose specified, although if the data is needed for legal purposes it can be kept as long as any further processing is “limited” (i.e. you can’t carry on using it)
    • Data must be secure
  • Certain organisations must appoint a Data Protection Officer (DPO); these include:
    • Public bodies; organisations processing data from more than 5000 people; organisations employing over 250 people that process personal data; and organisations where data processing involving systematic monitoring of people is the core activity
    • DPOs will advise organisations about the rules and monitor compliance with them; they must be free to operate as they think fit (“independent”) and will need a range of skills beyond compliance monitoring: they must be able to manage IT processes (e.g. controlling access to data, retaining data), data security (including dealing with cyber-attacks) and other critical business continuity issues
    • DPOs must be offered a minimum 2 year contract term – so you can’t get rid of them if you don’t like what they are doing, unless they prove unable to perform their duties
  • When high risk activities are proposed e.g. the processing of data that could result in financial loss, identity theft, discrimination, damage to reputation, and loss of professional confidentiality, Data Protection Impact Assessments (DPIA) must be conducted
    • The DPIA must contain a description of the data and the processes used, an evaluation of any risks to the data, and description of how you propose to manage those risks
    • The local data protection authority (DPA), which in the UK would be the Information Commissioner’s Office, must authorise (or forbid) high risk data processing after reviewing the DPIA (this requirement is contentious and may be removed)
  • Data collection requires consent; consent must be opt in (“clear affirmative action”) which means you can’t have a ready-ticked opt-in box; and the opt in must be specific (not part of a wider agreement)
    • Data about minors (children up to 13 years old) can only be processed with a parent’s consent
    • Sensitive data (e.g. about religion, ethnicity, etc) cannot be processed
    • Consent for the use of data for “direct marketing” must be explicitly obtained; this doesn’t appear to rule out highly targeted mass marketing where people are not addressed by name – but see the next point
    • Automated profiling that could have some form of “legal effect” and which is based on (or which will predict) personal characteristics such as performance at work, economic situation, location, health, personal preferences, reliability or behaviour is forbidden unless specifically requested by the person concerned
  • Data breaches must be reported to the Data Protection Authority (the ICO) and also to the victims (unless the data was encrypted)
  • Data transfer out of the EU is only allowed under certain conditions. This means that the use of cloud computing services (such as Google Docs, Dropbox and Gmail) is likely to be problematic if personal data is involved as the data may not be secure, may not be held in the EU, or may be shared by the cloud service owner; remember this applies to “informal” cloud computing use by employees – whether or not you know about it

There are a number of things that organisations need to start thinking about in order to ensure they are compliant. Talk to a lawyer when the final wording is approved but in the meantime consider the following:

  • Identify any personal data that you hold
  • Think about how you can timestamp and put time limits on holding personal data
  • If you want to hold data for analysis purposes after you have used it for its original purpose, think about how you can anonymise it, so that it remains legal to hold (“pseudonymising” data, e.g. by hiding personal details, so that it can be “re-personalised” at a later date won’t help)
  • Develop a system that enables you to retrieve any personal data you hold about an individual if they request it
  • Formalise your data protection policies and processes – and keep records
  • Think about how you are going to manage cloud computing, and also the use of home computers, smartphones and tablets by employees: if you don’t do this then your employees may create compliance failures for you
  • Be aware of the potential of Big Data analysis techniques to create new personal data – even accidentally; for instance an anonymous record of a disability or a first name linked to a postcode could result in new personal data
  • Ensure appropriate security so that unlawful destruction or processing, such as unauthorised disclosure or access, is prevented
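The pseudonymisation point above is worth making concrete. In the sketch below (Python, with an illustrative `email` field and key), the keyed hash still allows re-identification by anyone holding the key – which is exactly why the draft does not treat it as anonymisation:

```python
import hashlib
import hmac

def pseudonymise(record, key):
    """Replace the direct identifier with a keyed hash. Because the
    key lets you re-identify the person, the result is still
    personal data under the draft rules - unlike true anonymisation,
    this won't let you keep data beyond its original purpose."""
    token = hmac.new(key, record["email"].encode(), hashlib.sha256).hexdigest()
    masked = dict(record)
    masked["email"] = token[:16]
    return masked
```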

Take the protection of personal data privacy seriously. Compliance with the GDPR shouldn’t be a tick-box exercise. Privacy needs to be designed into your business processes for legal and ethical reasons.

Business processes and cyber risk

Cyber risk doesn’t just involve malicious techies hacking into corporate accounts. It can also involve risk to everyday business processes: “process cyber risk”. Unfortunately, because the IT department is kept busy defending the corporate network from the hackers, these process risks are often left unmanaged.

What do I mean by process cyber risk? Quite simply, a risk of loss or damage to an organisation caused by a weak business process combined with the use of computer technology. These weak processes are often found within finance departments, but you will also find them in HR, in marketing and across organisations.

Process risk and identity

Many business processes rely on a particular document being signed off by an authorised individual. As many processes migrate online, the assumption is that the sign-off process can also be undertaken online. Sign on as an individual and perhaps you have authorisation to access a particular document or process.

As most people have to log in to company systems with a name and password, this shouldn’t be a problem. Except that passwords get shared. Busy people often share log-in details with juniors, allowing people to access systems and documents that they are not authorised to see.

Any authorisation process that simply relies on someone logging in with name and password is weak because it is easily subverted. Issuing “dongles” as a second-factor authentication device isn’t much better as these can get shared (unless they are integral to a company identity card). Robust processes where sensitive data or decisions are concerned should assume that a password has been shared (or stolen) and require additional security such as a second pair of eyes.

Process risks and finance departments

One big risk for finance departments is invoice fraud. This can happen in several ways. A common way is for thieves to gather information about a company, perhaps the news that it is investing in new technology. They will then use this information plus other easily obtainable assets such as company logos and the names of senior people in an organisation to put together a scam.

This might involve an email “from” a director of the organisation to a mid-ranking person in the finance department asking for an invoice to be paid promptly; the invoice, which is of course a fake, is attached to the email.

In other cases the invoice is genuine. For instance thieves may pose as a supplier and ask for details of any unpaid invoices. They then resubmit a genuine invoice – but with the bank payment details changed.

All too often the unwitting finance executive passes the invoice for payment. Once the money has reached the thief’s bank account it is quickly transferred to another account making it unrecoverable.

This type of fraud is big business. Earlier this year Ubiquiti Networks disclosed that thieves stole $46.7 million in this way. Meanwhile in the UK, the police’s Action Fraud service received around 750 reports of this type of fraud in the first half of 2015. And of course many similar frauds go unreported – or undetected.

What can you do to protect against this? Well, start by educating staff about the nature of the threat – all staff, not just those in the finance department. Ensure that the details of all invoices are scrutinised carefully: Is the logo up-to-date? Is the email address correct (perhaps it is a .org instead of a .com)? Are the bank payment details the same as usual (if they have changed then telephone someone you know at the supplier to ask for confirmation)? And take extra care with larger invoices, for instance requiring them to be checked by two separate people.
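Checks like these can be partly automated. A Python sketch – the field names and the 10,000 threshold are purely illustrative and should be mapped onto your own records:

```python
def invoice_warnings(invoice, approved):
    """Run the scrutiny steps described above as simple rules.
    `approved` maps supplier names to their known bank details
    and email domain; field names are illustrative."""
    warnings = []
    record = approved.get(invoice["supplier"])
    if record is None:
        warnings.append("supplier not on the approved list")
    else:
        if invoice["bank_account"] != record["bank_account"]:
            warnings.append("bank details have changed - confirm by phone")
        if not invoice["sender"].endswith("@" + record["domain"]):
            warnings.append("sender address does not match supplier domain")
    if invoice["amount"] >= 10_000:
        warnings.append("large invoice - route to a second checker")
    return warnings
```

Anything the function flags goes to a human for the phone call or second pair of eyes; it is a prompt for scrutiny, not a replacement for it.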

There are other cyber risks within finance processes – and often these are internal risks, initiated by employees. Examples include purchase fraud, when personal items are bought using company money or when required items are bought at inflated prices, with the purchaser then getting a kickback at a later date. Again fake emails can be used to support these purchases. And again simple processes can disarm the threat.

Process risks within HR

Within HR there are numerous process risks. Let’s start with recruitment. The risks here can involve social media profiles designed to misinform, perhaps with fake endorsements or untrue job details. Looking at a LinkedIn profile is an easy way to identify potential candidates – but it is important to realise that the profile you see may well be substantially embroidered.

Another short cut, especially when looking for “knowledge leaders”, is to see what sort of “rating” candidates have on specialist rating sites. Superficially this is fine. However, it is essential to be aware of how people are rated by the site (for instance what data is used) before making a judgement using this type of data, as you may well be given an untrue perspective.

Another risk of using social media to identify candidates is that you open yourself to accusations of discrimination. An attractive CV may not contain information about age, ethnicity or sexual preference. Social media will. You really don’t want to know this sort of information, but once you know something you can’t “unknow” it: and this can open you up to accusations of bias. It isn’t unknown for companies to commission an edited summary of a candidate’s social media profiles, with anything that could lead to accusations of discrimination taken out, in order to de-risk the profile before it is given to the recruiter.

In fact HR is full of cyber risk, especially where social media is concerned. There may be problems with the posts employees make on social media. There may be issues around bullying or discrimination at work. And maintaining a positive “employer brand” can be very difficult if an ex-employee starts to deride their old employer on line in sites such as Glassdoor.

Process risk and marketing

Process risk is also very much at home in marketing. Again social media is one of the culprits. Not everyone, even in marketing, is a social media addict. Senior marketers frequently hand over their brands’ social media profiles to junior marketers, or even interns, because “they have a Facebook page”.

It’s a mistake. Not only is the output likely to be poor, the junior marketer may well (they frequently do) break advertising regulations (for instance by glamorising alcohol) or even fair trading laws (e.g. by including “spontaneous” endorsements from paid celebrities).

This shouldn’t be difficult: there is no reason that the processes that govern advertising in general can’t be applied to social media.

Procurement and cyber risk

Finally there is procurement – and the process of ensuring that third party suppliers don’t represent a cyber risk. This is a huge area of risk and one that is not always well appreciated.

The issue is not just that the third party may be insecure (for instance the massive hack of US retailer Target came about via an insecure supplier) and that it is hard to know whether they are secure or not. It is also that people working for a supplier who have been given access may then leave the supplier without you being told: as a result they retain access to your information, perhaps after they have joined a competitor. In addition, suppliers may well have their own reasons for being a risk – they are in dispute with you, they are in financial difficulty, they have been taken over by a competitor…

Business processes frequently have the potential to be undermined by online technologies. It takes imagination to identify where the threats lie. However once they have been identified, actions to reduce the effect of the threat are often very simple.

Persuasion and cyber security

You can’t rely on technology to solve your cyber security issues.

Cyber security is largely a “people” issue: cyber breaches are generally caused by people behaving in an unsafe manner, whether they know they are doing so or not. The solution is to persuade them to behave safely.

But how can you persuade people to do this?

Effective cyber communication

The first step is developing an appropriate communication programme. Of course you already know that this shouldn’t be a “death by PowerPoint” style lecture.

You are going to make your communication engaging and interactive with lots of colour and interesting imagery. You are going to start training sessions off with uplifting material that gets people into a good mood – games, stories, or other activities designed to generate a feeling of well being.

But what about the content of your communications? How should you structure the messages that you need to get across? Here are a few dos and don’ts:

  • Do describe security problems in a clear cut and simple way so that people can understand everything you are saying. Don’t use jargon and make it all sound frightfully difficult because you want to look clever
  • Do give people hope – while 100% security is impossible, you should emphasise that there is a lot that can be done to minimise threats and the consequences of a cyber incident. Don’t use “fear, uncertainty, doubt” to persuade people of the importance of the risk: they will just bury their heads in the sand.
  • Do make the risk relevant to the individuals you are talking to – describe personal risks, to their reputation or their jobs. Don’t describe it as a risk to the faceless organisation they work for.
  • Do stress that the risks are immediate ones that are all around you as you speak. Use examples of things that have happened, ideally to your organisation or a competitor. Don’t describe potential incidents that might happen sometime in the future.

Marketing techniques

There are also a number of marketing techniques you may be able to bring into your communications:

  • Use the power of FREE when describing techniques that people can take to avoid risk; this could be FREE training to avoid phishing, or some FREE software people can download to use at work and at home
  • Use the power of loss. When faced with a potential loss, people are risk averse. So emphasise what people might lose if they behave unsafely, not what they might gain if they behave safely. The loss needs to be personal, for instance it could relate to losing money when shopping online
  • Use the power of authority to persuade people. If you can ensure that your organisation’s leaders will act – and you can show them acting – in a cyber safe way then you have a good chance that people will follow their lead.
  • Use the power of peer pressure. People will often follow the lead of the people around them as they don’t want to seem out of step with the majority’s way of behaving. So if you can persuade some people to endorse safe behaviour during a training session, others will inevitably follow them. Having a few “stooges” as part of your audience may help!
  • Use the power of discovery. Guide people towards uncovering solutions to cyber risks, rather than telling them what to do. If they are responsible for defining solutions they will value those solutions. If you simply give them someone else’s solution it may well be discounted as “Not invented here”

You are trying to change people’s behaviour and it is important that you succeed. Think about what will persuade people. And don’t be afraid to use a few cheesy marketing techniques along the way.

Tackling invoice fraud

Invoice fraud is on the rise. It may involve a spoof email, apparently from the CEO of your organisation, “authorising” the payment of a fake invoice. In other cases the email seems to come from a trusted supplier.

Two of my friends who run SMEs have recently been exposed to invoice fraud, in one case for around £70,000. In both cases the fraud was picked up before payment was made.

But in a lot of cases it isn’t. For instance last year a small Norfolk manufacturer was scammed out of £350,000 by a fraudulent email. And because of the nature of online banking, once the money has left your account it is very hard to retrieve.

There are ways of reducing the risk of fraudulent emails. For large organisations an anomalytics service might be the answer. These services build up a picture of normal email traffic in order to identify unusual emails that can be subjected to further examination.

Another tool is the DMARC email standard. This prevents people sending emails that are apparently from you. It doesn’t of course stop people from breaking into your email account and actually sending emails “from” you. Nor does it prevent phishing attacks. But it is a useful tool nonetheless as it makes it harder for fraudsters.
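A DMARC policy is published as a TXT record in your domain’s DNS. A minimal example (the domain and reporting address are placeholders for your own):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Here `p=reject` asks receiving servers to discard mail that fails authentication checks for your domain; `p=quarantine` and `p=none` are progressively milder options often used while rolling the policy out.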

But the real way to address invoice fraud is through implementing stronger business processes. These will include:

  • Ensuring only properly trained and authorised people are able to make online payments
  • Creating a “whitelist” of approved suppliers and their agreed payment instructions (bank details etc)
  • Double checking any changes in payment instructions from suppliers on the whitelist with people you know are authorised to approve those changes
  • Checking any payment requests made by managers in your company with that person on the phone or face to face (and not as an email reply)
  • Ensuring that appropriate documentation is present and checked before any payment is authorised: this might include an invoice, a purchase order, and a “goods received” slip or equivalent
  • Creating a process where additional authorisation is needed to sign off payments
    • Over a certain amount
    • To new suppliers who are not yet on an approved “whitelist”
    • To existing whitelist suppliers who have provided new bank details
    • Where a payment is requested to a country outside normal trading patterns.
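Those four authorisation triggers are simple enough to encode directly. A Python sketch, with illustrative data structures and an example threshold:

```python
def needs_second_signoff(payment, whitelist, usual_countries):
    """Apply the four triggers above. Structures are illustrative:
    `payment` is a dict, `whitelist` maps approved supplier names to
    their agreed bank details, `usual_countries` is a set."""
    supplier = payment["supplier"]
    if payment["amount"] >= 25_000:          # threshold is an example
        return True
    if supplier not in whitelist:            # not yet an approved supplier
        return True
    if payment["bank_details"] != whitelist[supplier]:  # new bank details
        return True
    if payment["country"] not in usual_countries:       # unusual destination
        return True
    return False
```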

Use your common sense and raise a query when something seems odd. (This of course requires a culture that is sympathetic to juniors raising queries. If you don’t have this sort of culture, or if senior staff bully their juniors, this type of fraud becomes much more likely.)

Process won’t be enough on its own though. Training the finance team is also important so they are aware of the nature of invoice fraud. This training should include advice about how to take extra care with urgent or aggressive requests for payment.

Keep cyber safe!