Cyber Technology Institute represented at MILCOM’18 (IEEE/AFCEA Military Communications Conference)

Attending MILCOM (IEEE/AFCEA Military Communications Conference) is always a great experience, not only because MILCOM is one of the flagship conferences for the IEEE Communications Society, but also because it is unlike any other conference in many ways.

In contrast to most academically focussed conferences, MILCOM is built around the presence of military and industry communications professionals. This means that the general profile of the attendees and the topics of conversation between presentations are slightly different from those at other conferences. MILCOM is packed with technical presentations, tutorials and discussion panels led by industry. For me, therefore, attending MILCOM is an excellent way to step outside the academic enclosure, to learn first-hand where the industry is heading, and to gain more clarity on how technology is evolving.

(Image: MILSATCOM Program Manager Panel moderated by Jay Santee, Vice President, The Aerospace Corporation)

This is the second time I have presented at MILCOM. This year, I presented my latest work on “Multi-stage attack detection using contextual information”, based on joint work between De Montfort University, Loughborough University and Newcastle University. In this paper, we describe a novel Intrusion Detection System which exploits contextual information to improve its detection capabilities, and which is able to detect Multi-Stage Attacks in real time. There was a decent audience turnout, and I received some interesting questions which led to even more interesting follow-up conversations after the session.
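The paper itself should be consulted for the actual detection architecture. Purely to illustrate the general idea of contextual correlation, here is a toy sketch in which individually weak alerts are linked into a multi-stage attack through shared context; the stage names, time window and data structure below are all invented for illustration and are not taken from the paper:

```python
# Toy illustration (not the paper's algorithm): correlate individual
# alerts into a multi-stage attack using shared context -- here, just
# a common source host and time proximity between consecutive stages.
from dataclasses import dataclass

@dataclass
class Alert:
    stage: str      # e.g. "recon", "exploit", "exfiltration" (hypothetical labels)
    source: str     # host the suspicious activity was attributed to
    time: float     # seconds since the start of the capture

STAGES = ["recon", "exploit", "exfiltration"]  # hypothetical kill-chain order
WINDOW = 600.0                                 # arbitrary ten-minute window

def detect_multistage(alerts):
    """Return sources whose alerts cover all stages in order, with each
    stage following the previous one within WINDOW seconds."""
    by_source = {}
    for a in sorted(alerts, key=lambda a: a.time):
        by_source.setdefault(a.source, []).append(a)
    flagged = []
    for source, seq in by_source.items():
        idx, last_t = 0, None
        for a in seq:
            if a.stage == STAGES[idx] and (last_t is None or a.time - last_t <= WINDOW):
                idx, last_t = idx + 1, a.time
                if idx == len(STAGES):
                    flagged.append(source)
                    break
    return flagged
```

A real contextual IDS would of course fuse far richer context (network topology, asset roles, traffic features) and do so in real time; this sketch only shows why context lets low-confidence alerts become a high-confidence multi-stage detection.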

In my opinion, one of the most inspiring and eye-opening talks of the conference was delivered by LTG Bruce T. Crawford of the U.S. Army. During his presentation, LTG Crawford touched upon topics ranging from autonomous vehicles to ubiquitous computing, providing Internet/network connectivity anytime, anywhere. But I would highlight one aspect in particular from this talk – the need for more Blue Team capabilities, starting with Blue Team specialists. According to LTG Crawford, the Department of Defense (DoD) has enough Red Team resources (and, I would assume, enough cyber-attack campaigns from foreign nations targeting its resources to work with or deal with), but not so many for Blue Team. This statement may well be an implicit message for us, as academics at DMU, to prepare our students with all the skills and knowledge they need to lead the future cyber defence of networked infrastructures against cyber threats.

(Image of Barbara Borgonovi, Vice President, Integrated Communication Systems, Raytheon Space and Airborne Systems)

And last but not least, I cannot ignore the fact that MILCOM was held this year in Los Angeles! I had one day off after the conference to do some sightseeing – along Hollywood Boulevard and through Beverly Hills, (window) shopping on Rodeo Drive, and bike riding from Venice Beach to Santa Monica Pier.

It was a fantastic conference, and I am already looking forward to attending MILCOM’19!

This blog post was written by Dr Francisco Aparicio Navarro, Lecturer in Cyber Security in the Cyber Technology Institute, De Montfort University.

 


ICS CTF at BRUCON

LOCATION: University of Ghent, Belgium

DATE: 4th-5th October 2018

EVENT: BruCon

DMU was once again invited by ICS Village to help develop and run an Industrial Control Systems Capture the Flag event at a major international conference.

Following on from the success of the CTF at RSA San Francisco, this time we were honoured to be in the wonderful university city of Ghent in Belgium for the 10th annual BruCon (Belgian Style Hacking).

The ICS CTF was developed by the Cyber Technology Institute in collaboration with our industry partners at Airbus and Claroty.  The team for this event was myself (Dr Richard Smith), Joe Stirland from Airbus and Eirini Anthi from Cardiff/Airbus.

Unsurprisingly, given the logo, there was a certain involvement of beer in the proceedings, but our plucky Associate Professor was far too professional to try any of it (until after the CTF finished on the second day!). The conference itself had a laid-back atmosphere, with the main location being one of the University of Ghent’s historic buildings. Participants could listen to lectures, attend workshops, participate in a number of different CTFs, or play retro computer games; this lecturer’s favourite was a rather obscure game on the Amiga, whose rather obtuse controls were not helped by the fact that everything was in German (not that this helped the German speakers understand them either).

The event was a roaring success, with over 40 teams participating during what was already an action-packed itinerary. At the end of the first day there was a clear leader, “noclue_nofear”, a team including a few familiar faces, as they had participated in the UK Cyber Security Challenge Masterclass in 2017 – an event for which both Airbus and DMU provided the infrastructure in the form of eight identical networks connecting to ICS equipment.

Despite their best efforts, however, you don’t win any prizes for being top of the leaderboard at the halfway point. New teams signed up on the second day, some of which ended up sitting at the tables right in front of our equipment; rather than attend the rest of the conference, they worked feverishly to rack up the points to take the lead.

With only 15 minutes left on the clock we were faced with a tense situation: a three-way dead heat for first place! We weren’t cut out for all of this suspense, and we lost one of our team members to stress (although some might call it packing). Joe and I stayed to the bitter end to see “The Hot Club” score an injury-time winner with a final 300 points and claim the title of ICS CTF Champion!

Same time next year????

This blog post was written by Dr Richard Smith, Associate Professor in Cyber Security in the Cyber Technology Institute at De Montfort University.

 


Another successful ICS-CSR conference

This August, the ICS-CSR conference, jointly organised by De Montfort University, the St. Pölten University of Applied Sciences, Austria, and Airbus, was hosted at the University of Hamburg.

This was the 5th International Symposium for ICS & SCADA Cyber Security Research – an event which brings together researchers with an interest in the security of industrial control systems in the light of their increasing exposure to cyberspace.

With topics ranging from security for hardware/firmware used in industrial control systems to the human aspects of cyber security such as behaviour modelling and training, the conference attracts participants from a variety of domains.

Molly Betts, eKTP Associate with DMU and Airbus said:

“I had a great week at the ICS-CSR conference in Hamburg as I met lots of brilliant people with equally brilliant ideas. There was a wide variety of people from both academic and industry backgrounds, and it was refreshing to go to a conference where there was a relaxed and friendly atmosphere.”

“There were lots of interesting talks on work conducted around ICS and SCADA covering a wide range of topics, including device discovery, intrusion detection and the current security issues. I especially enjoyed the keynotes from Dr Robert Oates from Rolls Royce and Harald Niggemann from BSI, and of course the buffet which was joint with the ARES conference attendees as prizes were handed out. Looking forward to next year’s already!”

[Image: ICS-CSR Organising Committee: Thomas Brandstetter, St. Pölten University; Dr Kevin Jones, Airbus; Professor Helge Janicke, DMU]

This year’s proceedings are now available at: https://ewic.bcs.org/category/19361


How Google stays under the radar by being everywhere

When Google started 20 years ago, most mainstream “news” about the Internet was fluff: the month’s most popular search terms were the main item.

In 2018, the news is reporting on Internet services undermining democracy, contributing to genocide, or profiling people for mental health issues. How come such stories are almost always about Facebook, and rarely about Google?

One possible answer is that Google (technically maybe “Alphabet”) is everywhere. Both the data it takes in and the resulting effects turn up in multiple places, not always necessarily related – and as a consequence, Google is hiding in plain sight.

For Facebook, we are mostly aware of the data streams: their inputs come in the first place from our use of Facebook itself, and this is also where the effects appear. Privacy scandals appear through the cracks: the appearance of something on a timeline implies that Facebook must have taken in more information, or done more intrusive processing on it, than their customers expected. Occasionally a scandal is caused by Facebook’s additional input streams: targeted adverts on Facebook reminding us that they snoop on our web browsing through tracking pixels or “like” buttons, or fellow patients of our psychiatrist turning up as friend recommendations, pointing to Facebook gathering “contacts” information through its mobile apps or correlating people’s location data.

Google’s many data streams

For Google, the majority of data collected is probably in two categories: location and interests (e.g. search topics). Both of these get picked up from at least four different services, at different levels of operation.

The Android operating system in our mobile phones has access to location data, app usage, and contact information. Using cell tower information, the phone can report its location to Google even with GPS switched off. The advertising infrastructure of many web pages and mobile apps is provided by Google owned services such as DoubleClick, and gathers info on location and interests to target advertising. Our use of Google search (from a logged in Google account, maybe?) does not just provide interest through search terms, but can also be combined with browser fingerprinting for some location info. Google Maps of course serves up lots of location with a side dish of interests, and other apps sneak out location info too. The privacy settings offered by Google probably do not completely stop the information from being gathered. The likelihood is that some or all of it will still be recorded but merely associated with our browser profile, device or phone number rather than with our name or Google account. Google would then have to argue, e.g. for legal reasons , that it isn’t really personal data – even when they could reliably link it. So overall we usually can’t even tell how Google knows of a particular location or interest. On top of that, the Google family (excepting Youtube) are also subtle about how they use the data. People are not easily able to observe the extent to which search results, including the sponsored ones, are personalised. People are looking for relevance, and personalisation aims to provide that. The effectiveness of ad personalisation does not require user surveys. It is directly and imperceptibly measurable through a high click-through rate.

Where is Google heading?

So from its beginnings as a search engine, Google has grown into a very efficient internet advertising machine – particularly on mobile phones, where they control the platform at many levels and have now come under close scrutiny by EU competition authorities. More broadly, through their search engine, Google have been quite successful in providing many people’s main portal into the internet – something that others like Microsoft, Yahoo and Facebook have tried and failed at.

From the platform’s other activities, it is clear that Google would like to see the advertising more as “paying the bills” than as their ultimate goal. Their response to “the right to be forgotten” may have given an indication: they claimed the court judgement implied “censorship” and an infringement of the public’s right to find out information. Never mind that they had been censoring on behalf of media companies and the police for years. Google’s underlying ambition – to be the ultimate arbiter of access to knowledge – may not be intrinsically evil, but it is certainly a bit megalomaniac, and a human rights problem when they also aim to please oppressive regimes like China.

AI and health data

It would also be cynical to assume that Google DeepMind’s efforts in AI and in particular in applying that to medical data are driven by potential advertising opportunities alone. There is no doubt that machine learning is useful in the personalised advertising sphere, certainly as long as the lack of transparency on such processing remains tolerated. Google have long expressed grand ambitions on health as a justification for large scale medical data sharing. They were probably encouraged when they got away scot-free from nonchalantly obtaining excessive medical data on 1.5M NHS patients. Many more DeepMind NHS projects are ongoing and expanding towards comprehensive patient record systems, with Google’s ultimate motivations obscured.

It is hard to suppress the feeling that they want to know “everything”. We will need to keep a close eye on their ambitions and activities in the health sphere and elsewhere. If we believe that power corrupts, and that data is power, then with the data it already has and the additional data it wants, Google presents a much larger potential risk to society than (say) Facebook or any other company.

This blog post was written by Professor Eerke Boiten, Professor of Cyber Security and Director of the Cyber Technology Institute, De Montfort University.


Meet our experts…Dr Francisco Aparicio Navarro

This month, we welcome a new member to the Cyber Technology Institute team.

Dr Francisco Aparicio Navarro received his B.Eng. degree in Telecommunications Engineering, specialising in Computer Networks, from the Technical University of Cartagena, Spain, in 2009, and his Ph.D. in Computer Network Security from Loughborough University, UK, in 2014.

His PhD research focused on the design of a novel Unsupervised and Self-adaptive Anomaly-based Intrusion Detection System, based on a Multi-metric Cross-layer Data Fusion architecture, able to provide real-time attack detection. The system developed during his Ph.D. was successfully licensed to a leading company in the defence sector.

From 2013 to 2016, he was a Research Associate in the School of Electronic, Electrical, and Systems Engineering at Loughborough University, and from 2016 to 2018 he was a Research Associate in the School of Engineering at Newcastle University, UK.

Between 2013 and 2018, he was part of the University Defence Research Collaboration in Signal Processing (UDRC) phase II (https://udrc.eng.ed.ac.uk/), a project funded by Dstl/MoD and EPSRC. This video (https://www.youtube.com/watch?v=D7pKrhtWwRk) describes some of his work in the area of Multi-Stage Attack Detection Using Contextual Information.

He is an expert in computer networks, cyber security and anomaly detection, with research interests spanning network security, intrusion detection, and network traffic analysis.

Dr Aparicio Navarro joins us as a new Lecturer in Cyber Security from the beginning of July 2018. We are really pleased to welcome Francisco to our team!

To find out more about Dr Aparicio Navarro and view his publications, please visit: https://scholar.google.co.uk/citations?user=eaIdHYoAAAAJ&hl=en


Liam Fox’s cyber security export strategy: guess which elephant in the room?

As part of his ongoing efforts to ensure an economically viable post-Brexit Britain, Secretary of State for International Trade Liam Fox has recently released a new Cyber Security Export Strategy for the UK, targeting the period up to 2021. As the UK has experienced some eventful times since the previous strategy was released in early 2014, this was in principle a welcome move.

However, the strategy lacks convincing substance on the technological side, and targets the wrong countries. Liam Fox apparently does not want to admit that the UK’s largest cyber security export market of all is seriously at risk for multiple reasons.

How much, when, and where?

The numbers bear out that in the five-year period up to 2016 the cyber security sector grew massively. Estimates of the size of the world market have gone from between 35 and 120 billion pounds in 2011 to over 150 billion pounds now. UK exports in cyber security were stated to be some 800 million pounds in 2011, and for 2021 Liam Fox and his team are aiming for £2.6 billion. That sounds like a solid but ambitious target.

The new strategy mentions a number of target markets. Expansion is particularly aimed for in the USA, the Gulf states, India, Japan, and South-East Asia. In 2016, together these accounted for less than 40% of the UK’s cyber security export market[1]. Of the UK’s total IT and telecommunications exports in 2016[2], the US took 22%, the Gulf states including Saudi Arabia 4%, India 1%, Japan 1%, and Singapore 1%. The US is considered a mature rather than a developing market, which means it would only ever grow slowly; even doubling exports to all the other listed countries would increase total exports by only 7%.
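That 7% figure is easy to verify from the 2016 shares quoted above: treating the mature US market as roughly static, doubling exports to each of the remaining target countries simply adds their current shares on top of the total.

```python
# Back-of-the-envelope check, using the 2016 shares of total UK IT and
# telecoms exports quoted above (in percentage points).
shares = {"USA": 22, "Gulf states": 4, "India": 1, "Japan": 1, "Singapore": 1}

# Doubling exports to the non-US target markets adds their current
# shares (4 + 1 + 1 + 1) as extra percentage points of today's total.
developing = {k: v for k, v in shares.items() if k != "USA"}
added = sum(developing.values())
print(f"Doubling the non-US target markets adds {added} percentage points")
```

(This rough check ignores South-East Asia beyond Singapore, for which no share is given in the figures above.)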

So what were the target markets of the 2014 strategy[3]? Has targeting actually worked over the last period? There was special mention for Brazil because of the Olympics, using London 2012 as a cyber showcase for Rio 2016 – but the share of total UK IT and telecoms exports going to Brazil is now 0.5%, having dropped dramatically since 2014. (Let’s see if the similar argument made now for Japan and its Olympics works out this time.) Malaysia was also mentioned, because of its early identification of cyber security as an issue in the 1990s – in 2016 it stood at 0.3% of total UK IT exports, slightly below the 2014 level. The Gulf states and India were targets in the previous round too – with India dipping in recent years on total IT exports. So none of these have contributed much to the near-doubling of UK cyber security exports from £805M in 2011 to £1.5B in 2016.

Don’t mention the EU

With Liam Fox’s position on Brexit all too well known, maybe it is no surprise that the EU is barely mentioned in the new cyber export strategy. Well – it gets two mentions, both in the context of regulation that the UK is subject to: on weapons exports, and on data protection. We will have to come back to the GDPR later. The importance of the EU to the UK’s cyber exports is evident from the figures: in 2016, the EU-27 accounted for well over half of them. Of total UK IT exports, the EU-27 have been receiving some 40% over the last few years, with otherwise only the US achieving a double-figure percentage. With the potential of significant trade barriers between the UK and the EU-27 after Brexit, this market has to be considered at serious risk now. Ironically, if any sector knows that strategy may be about avoiding disasters rather than sketching rosy futures, it’s cyber security!

Interestingly, the lack of reference to the EU in the cyber export strategy is not a 2018 novelty. The 2014 strategy was also looking the other way – maybe justifiably, as trade with Europe in the pre-Brexit days was not really perceived as “export”. So this strategy happily claimed that the US, China, Japan, and India took up some 70% of UK cyber security exports between them – which could only be correct if the EU was excluded. Maybe an indication of how things felt only four years ago – exports to the EU running so smoothly that they were hardly noticed.

GDPR

Next, how can an opinion piece in computing from May 2018 be complete without considering the ominous GDPR? Liam Fox’s advert for his strategy in the New Statesman[4] is probably the exception. At least the export strategy acknowledges that “New regulation such as the EU’s General Data Protection Regulation is driving organisations to build information security into their wider strategy” – in a document which consistently reduces privacy and data protection to just data security.

However, here may be another area in which the strategy fails to consider a risk to the UK’s exports. Post Brexit, the UK will be implementing a new Data Protection Act which despite its faults[5] still closely matches the GDPR. If the UK were still an EU country, this would be enough for UK cyber businesses to be able to process personal data for European customers. However, with Britain outside the EU, an explicit decision on adequacy of the UK legislation will need to be taken, and the outcome is by no means a certainty according to the European Commission[6]. Doubts in this area relate to the wide-ranging powers of internet surveillance and retention in the UK, but possibly also to exemptions slipped into the Data Protection Bill at various stages.

Will this affect UK cyber security businesses? Certainly not all of them – hardware and many kinds of software contain and process no personal data, so such trade is largely impervious to the GDPR. However, where cyber security software overlaps with AI (another of the UK flagship IT industries according to the government line), and in the cyber intelligence analysis industry, where the market is set to grow dramatically, personal data is likely to play a role. An adjudged lack of data protection in the UK may stop UK companies from successfully providing such services to EU customers, for example in the cloud. So it’s not just “no-deal” and other possible trade barriers that contribute a Brexit risk to the UK cyber industry.

So what is in it?

The strategy certainly contains some interesting insights. For example, “the rise in disruptive digital technologies” is held responsible for the discovery of vulnerabilities, when we had been assuming it was due to ancient bugs, badly designed interfaces, and unimaginative attacker models.

Of course it couldn’t avoid mentioning the UK government’s £1.9B investment in cyber security – Fox’s New Statesman piece even took that for its title. We can’t really tell how much of it has been spent already – but given that it was first announced in 2016 we should hope the pot has been emptied somewhat by now. Much of the export strategy reiterates elements of this old overall strategy, including work on the academic research side that has only a very thin connection to exports, and a picture of the shiny new National Cyber Security Centre building.

The Department for International Trade’s main activities will be “Pursue”, “Enable”, and “Respond”. These represent, respectively, targeting governments with their CNI (critical national infrastructure); bespoke offers in specific sectors (government, finance, automotive, health, energy and CNI, infrastructure); and rebranded marketing with general exporting advice. None of the export advice sounds revolutionary: regional representatives, trade fairs, and mentored “growth mindsets” for SMEs.

A vision of where the thematic growth in the UK cyber security industry might or should be is mostly lacking, summarised in the document as “The Digitisation of Everything”. There are brief mentions of AI and the recent government initiatives in that area. We are told that blockchain is “entirely web-based”, and has commercially available applications in “personal identification” – the one area where exports indeed had better be outside the EU, as the GDPR precludes its use for personal data.

Overall the UK government is presenting a cyber security export strategy which ignores its main export market despite it being under serious threat. Given that this threat is mostly of the politicians’ own making, the blinkered view of the world was maybe unavoidable. This still should not have stopped them from deepening the thematic vision and long term strategy for the UK cyber industry. Privacy by design, smart cities, assisted living, and internet of things, for example, are all areas with security dimensions and significant potential within the UK that do not even get a mention. Given world-wide growth in demand, cyber security exports outside the EU will likely grow, but it is not clear whether and how this strategy contributes to that.

This blog post was written by Professor Eerke Boiten, Director of the Cyber Technology Institute at De Montfort University.

It was published as “An opinion on the UK’s Cyber Security Export Strategy” in Cyber Security Practitioner 4 (6), http://www.cecileparkmedia.com/cyber-security-practitioner/

[1] UK Defence and Security Export statistics for 2016, https://www.gov.uk/government/publications/uk-defence-and-security-export-figures-2016/uk-defence-and-security-export-statistics-for-2016

[2] Office for National Statistics: Trade in services by country and type of service 2014 to 2016, https://www.ons.gov.uk/economy/nationalaccounts/balanceofpayments/adhocs/008172tradeinservicesbycountryandtypeofservice2014to2016

[3] Cyber Security, the UK’s approach to exports, UKTI, February 2014, https://www.gchq.gov.uk/sites/default/files/Cyber_Security-the_UKs_approach_to_exports.pdf

[4] New Statesman, Liam Fox MP: Why the UK is investing £1.9bn in cyber security, 4 May 2018, https://www.newstatesman.com/spotlight/cyber/2018/05/why-uk-investing-19bn-cyber-security

[5] Information Commissioner’s Office, Policy views for parliamentarians and legislators. https://ico.org.uk/about-the-ico/what-we-do/ico-policy-views/

[6] European Commission, Notice to stakeholders: withdrawal of the United Kingdom and EU rules in the field of data protection, http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=611943


How safe is our Facebook data?

Professor Eerke Boiten from the Cyber Technology Institute, De Montfort University, was recently asked by BBC Three to comment on the security of our Facebook data after the Cambridge Analytica scandal.

His comments were quoted in the article:

‘I downloaded all my Facebook data and it was a nightmare’
Ever wondered what your data actually looks like? by Radhika Sanghani

Here you can read his responses in full to the questions raised in this interview:

  • Even after the Cambridge Analytica scandal, how safe is our Facebook data? For instance, how do we know our info isn’t used again and again when it comes to FB custom audience/profiling?

EB: Facebook haven’t changed anything substantive since the Cambridge Analytica scandal. They still profile their customers on all kinds of criteria, including sensitive ones. This means that companies can still market via FB on the basis of race, or on the basis of mental stability. Even when such routes are not directly available, “lookalike” audiences can be created to market to people with similar views and interests. They are trying to stop “political” advertising around particular elections and referenda, but the stories coming out of that suggest they don’t really know yet how to even detect political advertising. A lot of the things FB have said around the CA scandal have been proved incorrect – for example, that they stopped the sharing of friends’ info via apps as soon as they found out it was being abused.

  • What steps, in your opinion, would actually make our data safer?

EB: Now this is where GDPR should make a difference. Companies have to give insight into what they do with people’s data, and show that they can justify what they are doing with it. Experiments relating to mental health, like Facebook have done in the past, would need very explicit permission from the guinea pigs – which they probably wouldn’t give. The problem is that Facebook, Google, and the like have become so large that it is very hard for anyone to properly inspect all of what they are doing. At the moment, we can only look at what creeps out at the seams, along the line of: “if it turns out they’re able to do this, internally they must be applying an algorithm which does profiling for that”. So a significant increase in budget for organisations like the ICO would be essential to keep the internet giants in line.

  • Should we – digital natives – just resign ourselves to giving over all of this information about ourselves? It’s become so accepted but does it have to be this way?

EB: The problem isn’t even with the information that we give away itself. Most of us know how to apply the privacy settings that make sure it doesn’t get any further than we want it to go. The CA story was a scandal for many people because it violated their expectations about such control of their data: apps on someone’s Facebook leaking information about their friends without permission.

The main problem is with information that is not knowingly given away, such as Facebook like buttons and cookies tracking our web browsing, or Google Maps recording our every movement – and with the information that can be deduced from such tracking on the internet or in the real world. It’s hard to even be aware of how much such tracking exists, and you certainly don’t get many privacy controls on how it is used or passed on. For this, the GDPR should help too, but again it’s hard to enforce a law against such large scale processing by large companies that mostly sit outside the UK and the EU.

For the full article on BBC3, please visit: https://www.bbc.co.uk/bbcthree/article/93d1393a-1c12-485f-b7fe-5146cd48c12c


Critical infrastructure firms face crackdown over poor cybersecurity

An EU-wide cyber security law is due to come into force in May to ensure that organisations providing critical national infrastructure services have robust systems in place to withstand cyber attacks.

The legislation will insist on a set of cyber security standards that adequately address events such as last year’s WannaCry ransomware attack, which crippled some ill-prepared NHS services across England.

But, after a consultation process in the UK ended last autumn, the government had been silent until now on its implementation plans for the forthcoming law.

The NIS Directive (Security of Network and Information Systems) was adopted by the European parliament in July 2016. Member states, which for now includes the UK, were given “21 months to transpose the directive into their national laws and six months more to identify operators of essential services.”

The Department for Digital, Culture, Media and Sport (DCMS) finally slipped out its plans on a Sunday, but – given its spin on fines – it doesn’t seem as though the government was attempting to bury the story.

Interesting spin

The DCMS warned – in rather alarmist language – that “organisations risk fines of up to £17m if they do not have effective cybersecurity measures” in place. There are echoes of the EU’s General Data Protection Regulation (GDPR) in matching its €20m (£17m) maximum penalty level – though the option to charge 4% of turnover for NIS as well was dropped after consultation.

However, exorbitant penalties have been used as a scare tactic by GDPR snake oil salesmen, despite clear statements from the Information Commissioner’s Office (ICO) indicating a cautious regime. Did the DCMS mean to invite overblown headlines about the NIS directive, too?

Another peculiarity is that the government announcement doesn’t once mention the EU. Instead, the NIS directive is presented as an important part of the UK Cyber Security Strategy, even though it is an EU initiative. A pattern is emerging here: the removal of mobile roaming fees, a ban on hidden credit card charges and environmental initiatives have all been claimed as UK policies by Theresa May’s government without any adequate attribution to the EU. Digital minister Margot James said:

We are setting out new and robust cybersecurity measures to help ensure the UK is the safest place in the world to live and be online. We want our essential services and infrastructure to be primed and ready to tackle cyber-attacks and be resilient against major disruption to services.

Who needs to be aware of the NIS directive?

The government consultation response clarifies which operators of essential services and digital service providers the directive will apply to, once transposed into UK law. It uses a narrow definition of “essential”, excluding sectors such as government and food. Small firms are mostly excused from compliance; nuclear power generation has been left out, presumably to cover it exclusively under national security; and electricity generators are excluded from compliance if they don’t have smart metering in place. Digital service providers expected to comply with the NIS directive include cloud services (such as those providing data storage or email), online marketplaces and search engines.

The law requires one or more “competent authorities”, which the UK plans to organise by sector. It means communications regulator Ofcom will oversee digital infrastructure businesses and data watchdog the ICO will regulate digital service providers. They will receive reports on incidents, give directions to operators and set appropriate fines.

It’s worth noting that the ICO, in its multiple roles, could fine a service provider twice for different aspects of the same incident – once due to non-compliance with NIS and once due to non-compliance with GDPR. But incidents need to be considered significant in order to be on the radar for this directive. It will be judged on the number of affected users, the duration and geographical spread of any disruption and the severity of the impact.

Clearly, once this legislation is in place, the next WannaCry-style incident will be closely scrutinised by regulators to see how well prepared organisations are to deal with such a major event.

National and international coordination

The coordination of many NIS activities falls to the UK’s National Cyber Security Centre (NCSC), part of the government’s surveillance agency, GCHQ. It will provide the centralised computer security incident response team (CSIRT), and act as the “single point of contact” to collaborate with international peers as a major cyber attack unfolds. The NCSC will play a central role in reporting and analysing incidents, but remains out of the loop on enforcing the law and fines.

Sharing cyber incident information within an industry sector or internationally is important for larger scale analysis and better overall resilience. However, there are risks due to the inclusion of cyber vulnerability implications, business critical information and personal data in such sensitive reports. Two EU research projects (NeCS and C3ISP) aim to address these risks through the use of privacy preserving methods and security policies. The C3ISP project says its “mission is to define a collaborative and confidential information sharing, analysis and protection framework as a service for cybersecurity management.”

More security standards?

The idea of having prescriptive rules per sector was considered and rejected during the UK’s consultation process on the NIS directive. It’s in line with how the GDPR imposes cybersecurity requirements for personal data: it consistently refers to “appropriate technical and organisational measures” to achieve security, without pinning it down to specifics. Such an approach should help with obtaining organisational involvement that goes beyond a compliance culture.

A set of 14 guiding principles was drawn up, with the NCSC providing detailed advice including helpful links to existing cybersecurity standards. However, the cyber assessment framework, originally promised for release in January this year, won’t be published by the NCSC until late April – a matter of days before the NIS Directive comes into force.

Nonetheless, the NIS directive presents a good drive to improve standards for cybersecurity in essential services, and it is supported by sensible advice from the NCSC with more to come. It would be a shame if the positive aspects of this ended up obscured by hype and panic over fines.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.

The article was originally published on 30th January 2018 in The Conversation: Critical infrastructure firms face crackdown over poor cybersecurity


Cyber peacekeeping is integral in an era of cyberwar – here’s why

Cyber warfare is upon us, from interference in elections to a leak of cyber weapons from a national stockpile. And, as with most evolutions in warfare, the world is largely unprepared. Cyber peacekeeping presents significant challenges, which we explore in our research.

Any theatre of war now includes cyberspace. It has been used in targeted attacks to disable an adversary’s capabilities – such as Stuxnet, which disrupted Iran’s ability to enrich weapons-grade uranium. It can also be exploited in traditional warfare through electronic interference with intelligence and communication systems.

With little to guide nations and scant experience to build upon, many states are having to learn the hard way. In the context of warfare, it takes a long time to understand the impact of new technologies. One only need look at the example of landmines to see why. Once considered a legitimate weapon to stifle enemy movement, most countries now agree that landmines are indiscriminate and disproportionate weapons that cause civilian suffering long after a conflict has ended.

It’s possible that cyber warfare holds unknown consequences that future world leaders will agree to ban for similar, gut-wrenching reasons in the aftermath.

There are, however, efforts to fill the gaps in knowledge. Researchers, such as my colleague Michael Robinson, have attempted to characterise cyber warfare to understand how it can be effectively and ethically conducted. These range from efforts to create cyber warfare laws to the control and restriction of cyber weapons.

These efforts are beginning to bear fruit, with the Tallinn Manual – first published in 2013 – offering a comprehensive analysis of how existing international law applies to cyberspace.

Stop the fight

But while a large proportion of research focuses on how to conduct cyber warfare, there is very little research on restoring peace in the aftermath of an online conflict between nation states.

Just as we cannot expect a nation to spring back to peace and prosperity following years of boots-on-the-ground war, countries affected by prolonged periods of cyber warfare also need assistance to recover.

A nation’s reliance on critical infrastructure brings the need to understand the damage cyber warfare can inflict on a society into sharp focus. Computer systems running essential services at hospitals, nuclear power plants and water treatment plants may be infected with advanced malware, which resists removal and prolongs civilian suffering – much like landmines persist long after a conflict ends. The physical effects of cyber weapons make cyber peacekeeping a key enabler to help bring about lasting peace.

After a conventional conflict, interventions to restore peace and security are performed on the international stage. The United Nations (UN), with its white vehicles and blue helmets, is the most widely recognised peacekeeping organisation. It has a long history of maintaining peace around the world and has evolved to match the shifting nature of warfare from inter-state to intra-state conflict over the years.

UN peacekeepers were initially ill-equipped to deal with such a change, which led to high-profile failures such as those in Rwanda and Somalia.

With the rise of cyber warfare, peacekeepers will increasingly have to operate in this domain. But are the UN and similar organisations prepared for this expected onslaught or will they suffer a repeat of past failures, having been caught out by changes in the nature of conflict? Protracted UN cyber warfare talks fell apart last year because a consensus couldn’t be reached amid suspicions that reportedly mirrored the Cold War era. Nonetheless, questions must be asked of the UN’s peacekeeping strategy on its readiness to tackle cyber threats.

Peace is the word

Can existing peacekeeping activities simply be adapted for the internet, or should a completely new framework be drawn up to adequately address how to maintain or restore order online? What kind of technical obstacles will cyber peacekeepers encounter? Could they achieve something that contributes towards restoring or maintaining peace?

Disarmament illustrates these operational problems well: the destruction or confiscation of physical armoury means that assets cannot be easily replaced by a warring faction should peace efforts stall or falter. Cyber weapons are predominantly software applications that can be replicated, archived, encrypted and passed on with almost no cost or significant logistic efforts, research shows.

The effectiveness of cyber weapons diminishes once the vulnerabilities they have exploited become known, so one approach would be to publish detected cyber weapons to render them obsolete. Responsible disclosure would allow vendors to come up with fixes and give potential victims a chance to apply the patches – which can be a lengthy process.

Doing so “destroys” all cyber weapons of this kind – regardless of whether they belong to any of the warring factions. This approach has a nasty side-effect: it inadvertently leads to a proliferation of cyber weapons, because it’s easier for other nations or criminals to acquire the technology before adequate protections can be put in place on a global scale. It also throws up political challenges.

Conventionality belongs to yesterday

It’s no secret that the UN struggles to find money for peacekeeping contributions. The US, the largest contributor to the UN budget by far, has – under president Trump – disagreed with how the organisation is governed, and confirmed it will reduce payments to the peacekeeping budget.

If securing troops under tight budget restrictions is already difficult, then securing highly skilled cyber personnel in a competitive global market will be even more challenging.

(Image: United Nations peacekeepers wear distinctive blue helmets and drive white vehicles in regions ravaged by war. Shutterstock)

And there’s an additional complication: those countries conducting cyber warfare are the advanced nations, many of which already contribute the lion’s share of UN funding and possess the greatest cyber expertise. Would they be willing to contribute their knowledge, wealth and people to aid their adversaries?

Conflict affects every nation, so it’s in everyone’s interests to have an internationally available capability to restore peace and security in the aftermath of cyber warfare.

This blog post was written by Helge Janicke, Professor of Computer Science and Head of the School of Computer Science and Informatics at De Montfort University.

This article was also published in The Conversation on 29th January, 2018.


Why we need to know if users don’t stick to IT security policy

Is finding out that users don’t comply with the policy a nightmare scenario for an IT security officer, for example at the House of Commons? Hardly. Unless you find out through Twitter, of course, along with the rest of the world (see: https://twitter.com/NadineDorries/status/937019367572803590).

A policy that only demands self-evident behaviour adds little, and probably does not solve any real problem. For a realistic policy in an ever-changing cyber security landscape, you should expect some aspects of compliance to be strenuous initially, and more of them over time. It is counterproductive to assume that security versus utility is a zero-sum game, but trade-offs are always likely. The research area of “usable security” works to minimise this effect.

So you have to monitor policy compliance. Probably not through social media research, though. It would be interesting to see how compliance gets checked in the House of Commons. There is a decent chance that there’s education and advice but otherwise reliance on individual MPs’ responsibility. That worked for everything including MPs’ expense claims, until we realised that it didn’t. To complicate things, IT security where it concerns the Data Protection Act does devolve to individual MPs, as they are all separate data controllers.

IT security policy compliance should be monitored to cover the risks that the policy is supposed to mitigate. Businesses should normally link non-compliance to disciplinary procedures. As some tweets said this week, sharing logins is a sacking offence in some businesses. Non-compliance can also be an indication of changes in cyber risks and risk perception, and changes in business processes – so the exact areas of non-compliance may be precisely where the security policy needs to reflect such changes.

Most of all, however, usable security research tells us what the ultimate value of non-compliance information is: it indicates where users have found security too burdensome, and where they have found their own workarounds. This is also known as “shadow security”. These workarounds create the seams through which cyber risks can come into the organisation.

Is the password for the shared drive too hard to remember? Sharing logins is one solution for sharing files. Another is to use the cloud (Dropbox, Google Drive, etc) or, worse, a USB stick. So links to just about anywhere on the internet can refer to official documents – or not – and a USB stick casually passed on can contain important official information. And be lost on the train. All this normalises dubious cyber hygiene.

Is communication by email not secure enough, maybe because emails can even be read by interns on exchange programmes? Create a WhatsApp group for gossip or conspiracy. If the Honourable Member for Backwardbury South defects to the opposition or turns out to be on Putin’s pay list, whose responsibility is it to remove them from the group? Presumably there’s no harm in Facebook knowing who is in the gang either?

These examples should give some indication of the value of knowing about non-compliance with security policy. The response is not simply to shout at the users for misbehaving – it is also to explore where business and security procedures can be integrated in a more usable way.

That does not provide an excuse for the recent behaviour of Nadine Dorries and other MPs. She didn’t exactly raise login sharing as an example of unworkable IT and its workarounds. Rather, it was a public argument intended to deflect Damian Green’s responsibility for the porn that had been found on his work computer. From an information security perspective, that is inexcusable – and that point of view should be supported by management. One role of logins is to represent a user’s permissions, responsibilities and actions in an IT system in a way that makes them checkable, recordable and auditable. Morally, if not also legally, a user should always remain responsible for what is done using their login – the more so if it is willingly shared. Dorries’ alternative to the “maybe his login was hacked” excuse was ill-considered for that reason alone.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.
