Risky Business

Tips and tricks for securing information

KRACK Attacks (WiFi security vulnerability)

By Gen Cralev, on 17 October 2017

Security researchers yesterday announced a major security vulnerability in the WPA2 protocol, called KRACK (Key Reinstallation Attacks). WPA2 (WiFi Protected Access II) is the encryption protocol that secures all modern WiFi networks; it was designed to provide wireless networks with stronger data protection and network access control. The vulnerability exploits a weakness in the encryption process, allowing an attacker to eavesdrop on wireless traffic. An attacker may also be able to inject and manipulate data (e.g. injecting malware into a website the victim visits).
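
At the heart of KRACK, the attacker tricks a client into reinstalling an already-in-use key, which resets the cipher's nonce so the same keystream ends up encrypting more than one packet. The toy sketch below (plain Python, with a made-up keystream standing in for the real cipher) shows why keystream reuse destroys confidentiality:

```python
# Toy illustration only: why keystream (nonce) reuse is fatal.
# KRACK forces a WPA2 client to reinstall a key, resetting the nonce and
# causing the same keystream to encrypt two different packets.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(32))           # stand-in for a real cipher keystream
p1 = b"card=4111111111111111"
p2 = b"password=hunter2!!!!"

c1 = xor_bytes(p1, keystream)          # two packets encrypted with the
c2 = xor_bytes(p2, keystream)          # *same* keystream (the KRACK flaw)

# XORing the two ciphertexts cancels the keystream entirely, leaving the
# XOR of the two plaintexts -- often enough for an eavesdropper to
# recover both messages.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(p1, p2)
```

With a fresh nonce per packet, each packet gets a different keystream and this cancellation is impossible, which is exactly the guarantee KRACK breaks.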

Impact

Most devices that support WiFi are affected by this vulnerability until the manufacturers release a patch to address it. If exploited, an attacker will be able to steal sensitive information that a client device sends to an access point on a wireless network. This may include credit card details, passwords, chat messages, photos etc. Malicious software can also be loaded onto the device, causing further damage.

What can I do?

Certain precautions can be taken to ensure that you do not fall victim to such an attack. Firstly, ensure that all communication is encrypted – for example, by only browsing sites over HTTPS. Most sites support HTTPS by default; for those that don’t, it can often be enabled with an extension such as “HTTPS Everywhere”, which forces websites to work in HTTPS mode whenever possible. Whenever browsing a website that requires any data input, check that ‘HTTPS’ is in the address bar and a green padlock is visible. Secondly, use a VPN provider, which creates an encrypted tunnel between your device and the VPN host, encrypting all traffic automatically. UCL provides a free VPN service for all staff and students. Lastly, update your wireless devices as soon as patches become available. If possible, avoid using WiFi and use a wired connection instead!
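
To illustrate roughly what the padlock represents, the sketch below attempts a TLS handshake with full certificate-chain and hostname verification – approximately what a browser does before showing the padlock. This is a simplified illustration, not a substitute for the browser's checks, and the hostname is just an example:

```python
# Sketch: does a site present a valid TLS certificate for its hostname?
import socket
import ssl

def has_valid_https(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake with certificate-chain and hostname
    verification succeeds (roughly what the browser padlock indicates)."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False

# Needs network access to succeed; the hostname is illustrative.
has_valid_https("www.ucl.ac.uk")
```

A failed handshake (expired certificate, wrong hostname, untrusted issuer) returns False, which is the programmatic analogue of the browser's certificate warning.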

Further reading

More details on the attack, a proof-of-concept and FAQs can be found on the KRACK Attacks site. The NCSC has also provided some useful guidance in relation to the vulnerability.

CIA Triad

By Gen Cralev, on 18 August 2017

There is a well-known model within information security called the CIA triad (a.k.a. AIC triad to avoid confusion with the Central Intelligence Agency). The letters stand for Confidentiality, Integrity and Availability. In this blog post I will briefly define each of these concepts and look at some examples of how they are incorporated into the policies, procedures and standards of an organisation.

Confidentiality

Confidentiality refers to data being accessible only by the intended individual or party. The focus of confidentiality is to ensure that data does not reach unauthorised individuals.

Measures to improve confidentiality may include:

  • Training
    • Sensitive data handling and disposal
  • Physical access control
    • Storing personal documents in locked cabinets
  • Logical access control
    • User IDs, passwords and two-factor authentication
  • Data encryption
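
One of the logical access controls above, two-factor authentication, commonly relies on time-based one-time passwords (TOTP, RFC 6238). The sketch below shows how an authenticator app derives its six-digit codes; the secret used is the RFC test value, purely for illustration:

```python
# Minimal TOTP sketch (RFC 6238, built on the HOTP algorithm of RFC 4226).
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at time 59 this yields the documented test code.
totp(b"12345678901234567890", now=59)
```

Because the code depends on both a shared secret and the current time, a stolen password alone is not enough to log in – which is the point of the second factor.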

Integrity

Integrity is roughly equivalent to the trustworthiness of data. It involves preventing unauthorised individuals from modifying the data and ensuring the data’s consistency and accuracy over its entire life cycle. Specific scenarios may require data integrity but not confidentiality. For example, if you are downloading a piece of software from the Internet, you may wish to ensure that the installation package has not been tampered with by a third party to include malicious code.

Integrity can be incorporated in a number of ways:

  • Use of file permissions
    • Limit access to read only
  • Checksums
  • Cryptographic signatures
    • Hashing
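
The checksum and hashing items above can be sketched in a few lines: compute a file's SHA-256 digest and compare it with the value the vendor publishes. The file name and comparison below are illustrative:

```python
# Sketch: verify a download's integrity against a published SHA-256 checksum.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published by the software vendor, e.g.:
#   if sha256_of("installer.exe") != published_checksum:
#       raise SystemExit("File has been tampered with - do not install")
```

If even one byte of the installer has been altered, the digest changes completely, so a mismatch is a strong signal of tampering or corruption.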

Availability

Availability simply refers to ensuring that data is available to authorised individuals when required. Data only has value if it is accessible at the required moment. A common form of attack on availability is a Denial of Service (DoS), which prevents authorised individuals from accessing the required data. You may be aware of the recent ransomware attack on UCL. This was a DoS attack, as it prevented users from accessing their own files and demanded a ransom in exchange for reinstating that access.

In order to ensure availability of data, the following measures may be used:

  • Regular backups
  • Redundancy
    • Off-site data centre
  • Adequate communication bandwidth

Each aspect of the triad plays an important role in strengthening an organisation’s overall information security posture. However, it can sometimes be difficult to maintain the right balance of confidentiality, integrity and availability. As such, it is important to analyse an organisation’s security requirements and implement appropriate measures accordingly. The following information classification tool has been developed for use at UCL to help classify the level of confidentiality, integrity and availability of data: https://opinio.ucl.ac.uk/s?s=45808. Have a go – the results aren’t saved.

Auditing- what is it?

By Bridget Kenyon, on 31 July 2017

Brace yourself: we are heading into the Unknown Land of Terror and Tedium. Yes, the domain of the Auditor!

Seriously, though, it isn’t as scary, or as boring, as that. Having carried out audits for two different security standards (ISO/IEC 27001 and PCI DSS), I have visited that Land, and am able to unveil its mysteries to you. I’ve also been audited, so have seen both sides of the process.

First, dismiss any thought of auditing being uniform across all standards. Auditors are trained very differently depending upon what they are auditing against. For example, payment card audits are very technical, while 27001 audits can be carried out by non-technical (but trained and experienced) people.

It’s auditing, so there must be a list…

Having said that, there are significant similarities between the audits I have been involved in. As this is about audit, there needs to be a list:

  • You have to audit against something, a fixed point of comparison. This is usually a standard. You can audit against policy, too.
  • Audits tend to have predefined possible outcomes: e.g. pass or fail.
  • Audits also produce “non-conformities”, or “findings” relating to parts of the document which you are auditing against.
  • Audits look for evidence of compliance; positives, not negatives. A good auditor is looking for reasons to give their client a pass.
  • Hard evidence is key to the whole thing. There is a saying: if it’s worth doing, it’s worth documenting. But proof doesn’t have to be documentation. Some standards can be satisfied by everyone demonstrably doing the same thing.
  • Auditors usually test part of the environment to be audited, not the whole thing. So even if you pass an audit, that’s not bullet-proof.
  • There are two different types of audit: internal and external.
  • External auditors are from a company specialising in audit, which is usually “accredited” to prove that it provides a good quality of audit.
  • Internal auditors are often internal to your company, but can be from another company. They do not have to be accredited, as their findings are private to the company being audited.
  • Internal audits usually look at part of the standard to be audited against, not the whole thing. More of a spot check than a full review.
  • If you are an external auditor from an accredited auditing company, and the company you audit passes the audit, it can say that it is “certified”. Not “accredited”!
  • You can be asked to audit part of an environment, not the whole thing. This can get really messy if it isn’t clearly agreed and documented.
  • Auditors tend to think of EVERYTHING as a process.

To sum up

I hope that this has given you a little peek inside the world of auditing, and that it wasn’t as tedious or unsettling as you expected!

What’s governance about?

By Bridget Kenyon, on 31 July 2017

There are a number of special terms which are bandied about in the world of information security. Today let’s look at “governance”. Even in the rest of the business world, the term is a little slippery. People use it in conjunction with “strategy” a lot. Let’s start by taking a look at it by itself; what can we see?

Governance is in the eye of the beholder

I like to think of this as the proverbial “elephant described by people who have each only seen a part of it” situation. People in hands-on operational roles see one facet. People in top management see another, and external organisations yet another. It could be a source of edicts; it could be a lever to move the earth (cf. Archimedes); or it could even be a magic Harry Potter mirror in which one can see what one cares about the most.

What about when it’s not there?

OK, so governance looks different to everyone, depending on your role. Next, we can ask ourselves what it is for. Or, more interestingly, what happens if you don’t have governance?

One thing you don’t get is a clear idea of where you are going, and how close you are to getting there. Another thing you don’t get is any idea of what is and isn’t allowed. You have a good chance of going round in circles.

The purpose and definition of governance

The main purpose of governance, then, is to provide direction and purpose to an organisation.

As to what it is, I like the definition used by the World Bank:

“[the process] by which authority is conferred on rulers, by which they make the rules, and by which those rules are enforced and modified.”

This makes a bit more sense at last. We can apply this definition very cleanly to the arena of information security, where we consider the rules to be relating to information risk management, and the “rulers” to be the organisation’s top management, e.g. the senior management team, or the board of directors. It incorporates the idea of delegation, of creation, and of enforcement and monitoring.

Do we already address governance in information security?

If you look at the text of ISO/IEC 27001, you will find that it is essentially a blueprint for information security governance. It also goes into a bit of depth on management, which for my money is the way in which governance is enacted.

Risk (in)tolerance

By Bridget Kenyon, on 27 July 2017

Here’s a question. What’s your tolerance for risk, and do you think it’s the same as your employer’s?

Risk tolerance differs

Let’s look at an example. Seat belts are pretty popular in the UK. In fact, they’re mandatory for almost everyone (except taxi drivers, interestingly). The risk? That you’ll get thrown through the windscreen, and die horribly, if there is an accident. But so many people in the past decided to accept that risk that wearing a seatbelt was made law, to increase the number of people actually using seatbelts. The risk tolerance of many individuals differed from the risk tolerance of the Government. It also differs from the risk tolerance of anyone who has been in a car accident and has benefited from the use of a seat belt.

In the same way, each individual will have a different risk tolerance. Some people wore seat belts before they were mandatory.

What’s the impact?

Where personal risk tolerance is different enough from the risk tolerance of an employer, this can cause real problems.

An individual who has a significantly lower risk tolerance than that of their company will be constantly worried that the security measures are inadequate. They may believe that the company is on the verge of disaster. They may start to try to force other people to apply security measures which are not mandatory, and report risks which are at a level where the organisation will not act. They believe that the organisation should lower its risk tolerance to match theirs.

Conversely, a person whose risk tolerance is higher than that of their organisation will exist in a state of perpetual frustration.

They see unnecessary and pettifogging rules everywhere, designed to get in their way and waste time and money. They may develop work-arounds to evade security measures, or simply refuse to comply with them. They believe that the organisation should raise its risk tolerance to match theirs.

Closing the gap

What’s the solution? There’s no silver bullet here (I feel another blog post about easy answers coming on). However, there are approaches to get everyone on the same page. I’m assuming here that you are acting on behalf of the organisation.

Encourage a neutral perspective

First, stop worrying about who is “right” or “justified”. That basically guarantees fisticuffs at dawn, and other adversarial outcomes. The organisation has its own risk tolerance. There are things which are objectively correct or incorrect, but risk tolerance as a whole is a very qualitative thing.

Awareness is key

Second, ensure that staff have a good understanding of risks and threats. Half of an understanding is worse than none.

Here is an example. Did you know that patient data is used for research without patient consent, sent to multiple universities and even private companies?

That sounds alarming, yes? But you only have half the knowledge you need to make an informed decision!

Now add in the following: There are incredibly strict rules and clearly governed processes to allow identifiable patient data to be used without consent, and the data is very well protected. Much of the research is intended to do things like identify causes of, and potential cures for, deeply unpleasant and lethal medical conditions, and it’s working.

That sounds better. You now know there are benefits to this data sharing, and that the risk is being managed.

In the same way, if people are informed of risks, benefits, and how security measures work to protect them, they are more comfortable with the risk and also more likely to implement the security measures.

A sense of separation

A fundamental truth which is difficult for really committed people to accept: this is actually not their risk. Risk belongs to the organisation, just as the information does. Letting go of this feeling of personal ownership without losing a sense of personal pride in your work can be a really hard balance to strike.

Don’t discount organisational error

Finally, don’t forget that the organisation might have a risk tolerance which really is too high or too low.

Test Phishing Campaigns

By Daniela Cooper, on 21 July 2017

For the past year we have been running test phishing campaigns on a particular group of staff at UCL. We started off easy and have slowly ramped up the difficulty rating as the year has gone on. We have been mostly happy with the results – I say mostly because, as much as I would like a 0% click rate, I’m being realistic.

The test phishing campaigns come with instant training for those users that do fall for it, this helps these users to realise straight away where they have gone wrong and how they can identify phishing emails in the future.

Over the coming year we are planning to extend this service to cover all staff at UCL. Along with this, we hope to increase our awareness work on phishing (and other information security areas) to help combat this very real threat that we all face, both at work and at home.

Don’t miss my next blog post, it will be a competition with the chance to win some Amazon vouchers!

Privacy risk

By Ravi Miranda, on 16 May 2017

Previously

We looked at what information privacy is and how information sharing affects us all. We also had a brief look at what a Privacy Impact Assessment (PIA) is and its contribution to an organisation in terms of safeguarding reputation and reducing costs.

This blog piece covers the basic aspects of a PIA.

Privacy Risk

Privacy risk is the risk of harm arising through an intrusion of privacy. Privacy harm can be caused through the use or misuse of personal information. This harm can be quantifiable or tangible – an individual could lose their job – or less tangible, such as damage to personal relationships. Going a bit further, what might not be a great harm to any one individual could, as a cumulative loss of data, do huge damage to society.
Some of the ways this can arise are through personal information:

  • being inaccurate, insufficient or out of date
  • being excessive or irrelevant
  • being kept for too long
  • being disclosed to inappropriate individuals
  • being used in ways that are unexpected or unacceptable to the person it is about
  • not being kept securely

The outcome of a PIA should be the minimisation of privacy risk. This involves understanding what constitutes privacy and privacy risk. As one can imagine, there is no one-size-fits-all answer. Data collection for visa issuance is far different from that for an admissions process, even though personal information is collected in both situations. Thus privacy risk involves an understanding of the relationship between the organisation and the individual.

Something to think about.

Does your organisation need to be aware of its obligations under the Human Rights Act?
If so, use a PIA to ensure that any actions that interfere with the right to private life are necessary and proportionate.

That’s all for this blog! In the next post, I intend to cover the benefits of a PIA and whose responsibility it is to conduct one.

Further reading:
https://ico.org.uk/media/for-organisations/documents/1595/pia-code-of-practice.pdf

Ceremonial security

By Bridget Kenyon, on 12 May 2017

There is a type of security which is nothing but smoke and mirrors; a ceremony of actions which has no actual effect but that of making people feel better.

This can be a good thing, or a bad one.

What do we mean by “risk appetite”?

An organisation uses security measures to meet its obligations to other parties (including the government). However, the organisation also needs to meet its “risk appetite”. The exec and the board, or the senior management team, take the strategic priorities and plans of the organisation into account, then work out how much information risk is just short of “too much”. That level is its risk appetite.

Clear enough? OK, now remember that the organisation is composed of individual people. They each have their own individual risk appetite; their own idea of what is an acceptable level of risk.

Some people will think that the organisation is too draconian, with policies which are overkill. Others will feel their concerns on information risk are being ignored, and believe that the organisation is dicing with death.

When might ceremonial security be worthwhile?

For people in the latter category, you can implement something which makes them feel better about risk but doesn’t actually make any difference. By doing this, you may benefit both the individual (they get to sleep at night) and the organisation (it gets a better-performing staff member without over-egging the pudding).

When might ceremonial security be damaging?

What if you implement risk management activities which don’t have a beneficial effect, even though they are expected to? Let’s pick an example. Imagine that you implement mandatory virus scanning on your computers – but you take no action if a virus is detected, and no-one ever looks at the results of the scans. That’s a dangerous situation. You have something which looks like a very good idea, but is exactly useless. It may even have a negative effect on security, as you may assume you are safe from viruses, and let down your guard.

In summary…

What’s the take-home lesson from this? Maybe it’s that there are different ways to see risk, but no “single right answer”. Those who look for the simple, easy way out are doomed to believe that they have found it.

Controls: what are they?

By Bridget Kenyon, on 12 May 2017

What is a control? If you have spent more than five minutes talking to me or my team, we will probably have spoken of “controls”, and probably risk (but let’s stick to controls for this post).

Definition of a control

A control is a change you make to part (or all) of an organisation to reduce its exposure to information risk. For example, you may decide to put sensitive documents into a shredder when they are not needed any longer, rather than putting them in the standard paper recycling. Or you could encrypt files on shared storage, rather than storing them in clear text.

So a policy is NOT a control, but in fact describes a control. For example, you may create a policy stating that all passwords must be at least ten characters long.
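
To make the distinction concrete: the policy states the rule, while the control is whatever actually enforces it – for instance, a validation step when a password is set. A hypothetical sketch of that enforcement, using the ten-character rule above:

```python
# Hypothetical sketch: the *control* that enforces a password-length policy.
# The policy is the written rule; this check is the change that enforces it.
MIN_LENGTH = 10  # from the example policy: at least ten characters

def password_meets_policy(password: str) -> bool:
    """Return True only if the password satisfies the policy's length rule."""
    return len(password) >= MIN_LENGTH

assert password_meets_policy("correcthorsebattery")
assert not password_meets_policy("letmein")
```

Without an enforcement point like this, the policy is just a statement of intent – the control is what turns it into an actual reduction in risk.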

Categorising controls

There is a tendency amongst all people who look at security controls to try to fit them into categories. A common set is:

  • Physical
  • People
  • Technical
  • Process/organisational

This helps people to understand what thing a control is changing. A physical control, for example, will be a physical change (e.g. putting a lock on a door, or shredding paper). A technical control could be applying security patches to systems within X days. A process control could be performing a security review when a change is planned to a system. A people control could be doing background checks on people who are to be granted access to sensitive information.

Other attributes

There are many other ways of categorising a security control. These can be used as appropriate. Examples include:

  • Main purpose: detective, reactive, preventative
  • Intended effect on risk: reduction of impact, reduction of likelihood, or both
  • Which role(s) it applies to: which are responsible, accountable, consulted and informed
  • Which part(s) of the organisation it applies to
  • How long it is intended to be in effect for
  • What sanctions will apply if it is not applied
  • What risk(s) it is intended to affect
  • What business process(es) relate to it
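
One way to put these attributes to work is to record each control in a control register. The sketch below is purely illustrative – the field names are mine, not drawn from any standard:

```python
# Illustrative sketch of a control-register entry capturing the attributes
# listed above. Field names are hypothetical, not from any standard.
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    category: str                 # physical / people / technical / process
    purpose: str                  # detective, reactive or preventative
    risk_effect: str              # reduces impact, likelihood, or both
    responsible: list = field(default_factory=list)   # roles responsible
    risks_addressed: list = field(default_factory=list)

# Example entry for the paper-shredding control mentioned earlier:
shredding = Control(
    name="Shred sensitive paper documents",
    category="physical",
    purpose="preventative",
    risk_effect="reduces likelihood",
    responsible=["All staff"],
    risks_addressed=["Disclosure of sensitive data via paper recycling"],
)
```

Keeping controls in a structured register like this makes it easy to answer questions such as "which controls address this risk?" or "who is responsible for that control?".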

Privacy Impact Assessment – An Introduction

By Ravi Miranda, on 12 May 2017

Privacy

According to The Cambridge Dictionary, ‘Privacy’ is defined as “someone’s right to keep their personal matters and relationships secret”. This should be taken to mean that people would like to share information selectively. Informational privacy is the ability of a person to control, edit, manage and delete information about themselves. The person should also be able to decide how and to what extent such information is communicated to others.

Information Sharing

There are several theories about what constitutes privacy and its application in different cultures; I will not consider these in these blog posts. We do not want to share our personal information with all and sundry. However, in today’s modern world, we share a lot of information with everyone: friends, organisations that we work with, the Government and others. We feel that the information thus shared will remain within the boundaries of the relationship. We share personal information in exchange for services – buying an air ticket, or declaring earnings for tax purposes – and we feel dismayed when those boundaries are not respected. We should be assured of a decent level of protection when this sharing happens.

Collect just enough information (Short version)

When personal information is to be collected in the course of business, an organisation must ensure that the collected data is relevant. Organisations should consider a privacy by design approach. According to the Information Commissioner’s Office (https://ico.org.uk/), Privacy Impact Assessments (PIAs) are a tool which can help organisations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy. An effective privacy impact assessment will help an organisation identify and fix problems at an early stage, reducing the costs and reputational damage that might otherwise occur.

In future blog posts I intend to cover the PIA process in some detail.