How safe is our Facebook data?

Professor Eerke Boiten from the Cyber Technology Institute, De Montfort University, was recently asked by BBC3 to comment on the security of our Facebook data after the Cambridge Analytica scandal.

His comments were quoted in the article:

‘I downloaded all my Facebook data and it was a nightmare’
Ever wondered what your data actually looks like? by Radhika Sanghani

Here you can read his responses in full to the questions raised in this interview:

  • Even after the Cambridge Analytica scandal, how safe is our Facebook data? For instance, how do we know our info isn’t used again and again when it comes to FB custom audience/profiling?

EB: Facebook haven’t changed anything substantive since the Cambridge Analytica scandal. They still profile their customers on all kinds of criteria, including sensitive ones. This means that companies can still market via FB on the basis of race, or on the basis of mental stability. Even when such routes are not directly available, “lookalike” audiences can be created to market to people with similar views and interests. They are trying to stop “political” advertising around particular elections and referenda, but the stories coming out of that suggest they don’t really know yet how to even detect political advertising. A lot of the things FB have said around the CA scandal have been proved incorrect – for example, that they stopped the sharing of friends’ info via apps as soon as they found out it was being abused.

  • What steps, in your opinion, would actually make our data safer?

EB: Now this is where GDPR should make a difference. Companies have to give insight into what they do with people’s data, and show that they can justify what they are doing with it. Experiments relating to mental health, like Facebook have done in the past, would need very explicit permission from the guinea pigs – which they probably wouldn’t give. The problem is that Facebook, Google, and the like have become so large that it is very hard for anyone to properly inspect all of what they are doing. At the moment, we can only look at what creeps out at the seams, along the line of: “if it turns out they’re able to do this, internally they must be applying an algorithm which does profiling for that”. So a significant increase in budget for organisations like the ICO would be essential to keep the internet giants in line.

  • Should we – digital natives – just resign ourselves to giving over all of this information about ourselves? It’s become so accepted but does it have to be this way?

EB: The problem isn’t even with the information that we give away itself. Most of us know how to apply the privacy settings that make sure it doesn’t get any further than we want it to go. The CA story was a scandal for many people because it violated their expectations about such control of their data: apps on someone’s Facebook leaking information about their friends without permission.

The main problem is with information that is not knowingly given away, such as Facebook like buttons and cookies tracking our web browsing, or Google Maps recording our every movement – and with the information that can be deduced from such tracking on the internet or in the real world. It’s hard to even be aware of how much such tracking exists, and you certainly don’t get many privacy controls on how it is used or passed on. For this, the GDPR should help too, but again it’s hard to enforce a law against such large scale processing by large companies that mostly sit outside the UK and the EU.

For the full article on BBC3, please visit:

Posted in Uncategorized | Leave a comment

Critical infrastructure firms face crackdown over poor cybersecurity

An EU-wide cyber security law is due to come into force in May to ensure that organisations providing critical national infrastructure services have robust systems in place to withstand cyber attacks.

The legislation will insist on a set of cyber security standards that adequately address events such as last year’s WannaCry ransomware attack, which crippled some ill-prepared NHS services across England.

But, after a consultation process in the UK ended last autumn, the government had been silent until now on its implementation plans for the forthcoming law.

The NIS Directive (Security of Network and Information Systems) was adopted by the European parliament in July 2016. Member states, which for now includes the UK, were given “21 months to transpose the directive into their national laws and six months more to identify operators of essential services.”

The Department for Digital, Culture, Media and Sport (DCMS) finally slipped out its plans on a Sunday, but – given its spin on fines – it doesn’t seem as though the government was attempting to bury the story.

Interesting spin

The DCMS warned – in rather alarmist language – that “organisations risk fines of up to £17m if they do not have effective cybersecurity measures” in place. There are echoes of the EU’s General Data Protection Regulation (GDPR), by matching its €20m (£17m) maximum penalty level – though the option to charge 4% of turnover for NIS as well was dropped after consultation.
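The penalty comparison can be made concrete: the GDPR’s top-tier cap is the greater of €20m or 4% of worldwide annual turnover, whereas the UK’s NIS plans keep only the flat cap. A minimal sketch, with invented turnover figures for illustration:

```python
def gdpr_max_fine(turnover_eur: float) -> float:
    """GDPR top-tier cap: the greater of EUR 20m or 4% of worldwide
    annual turnover. Under the UK's NIS plans, the 4%-of-turnover
    option was dropped after consultation, leaving only a flat cap."""
    return max(20_000_000, 0.04 * turnover_eur)

# For a smaller firm the flat floor dominates; for a giant, the 4% does.
assert gdpr_max_fine(100_000_000) == 20_000_000    # 4% = 4m, below the floor
assert gdpr_max_fine(2_000_000_000) == 80_000_000  # 4% = 80m, above the floor
```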

However, exorbitant penalties have been used as a scare tactic by GDPR snake oil salesmen, despite clear statements from the Information Commissioner’s Office (ICO) indicating a cautious regime. Did the DCMS mean to invite overblown headlines about the NIS directive, too?

Another peculiarity is that the government announcement doesn’t once mention the EU. Instead, the NIS directive is presented as an important part of the UK Cyber Security Strategy, even though it is an EU initiative. A pattern is emerging here: the removal of mobile roaming fees, a ban on hidden credit card charges and environmental initiatives have all been claimed as UK policies by Theresa May’s government without any adequate attribution to the EU. Digital minister Margot James said:

We are setting out new and robust cybersecurity measures to help ensure the UK is the safest place in the world to live and be online. We want our essential services and infrastructure to be primed and ready to tackle cyber-attacks and be resilient against major disruption to services.

Who needs to be aware of the NIS directive?

The government consultation response clarifies which operators of essential services and digital service providers the directive will apply to, once transposed into UK law. It uses a narrow definition of “essential”, excluding sectors such as government and food. Small firms are mostly excused from compliance; nuclear power generation has been left out, presumably to cover it exclusively under national security; and electricity generators are excluded from compliance if they don’t have smart metering in place. Digital service providers expected to comply with the NIS directive include cloud services (such as those providing data storage or email), online marketplaces and search engines.

The law requires one or more “competent authorities”, which the UK plans to organise by sector. It means communications regulator Ofcom will oversee digital infrastructure businesses and data watchdog the ICO will regulate digital service providers. They will receive reports on incidents, give directions to operators and set appropriate fines.

It’s worth noting that the ICO, in its multiple roles, could fine a service provider twice for different aspects of the same incident – once due to non-compliance with NIS and once due to non-compliance with GDPR. But incidents need to be considered significant in order to be on the radar for this directive. It will be judged on the number of affected users, the duration and geographical spread of any disruption and the severity of the impact.
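The significance test can be pictured as a scoring function over those four factors. The weights and thresholds below are entirely invented for illustration – the directive names the criteria but prescribes no formula:

```python
def incident_significance(affected_users, duration_hours, regions, severity):
    """Toy score over the NIS significance criteria.

    severity runs from 1 (minor) to 5 (critical). All weights and caps
    here are illustrative assumptions, not taken from the directive.
    """
    score = 0.0
    score += min(affected_users / 100_000, 10)  # number of affected users
    score += min(duration_hours / 24, 10)       # duration of disruption
    score += min(regions, 5)                    # geographical spread
    score += severity * 2                       # severity of impact
    return score

# A brief, local outage scores far lower than a WannaCry-scale event.
minor = incident_significance(500, 2, 1, 1)
major = incident_significance(2_000_000, 72, 12, 5)
```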

Clearly, once this legislation is in place, the next WannaCry-style incident will be closely scrutinised by regulators to see how well prepared organisations are to deal with such a major event.

National and international coordination

The coordination of many NIS activities falls to the UK’s National Cyber Security Centre (NCSC), part of the government’s surveillance agency, GCHQ. It will provide the centralised computer security incident response team (CSIRT), and act as the “single point of contact” to collaborate with international peers as a major cyber attack unfolds. The NCSC will play a central role in reporting and analysing incidents, but remains out of the loop on enforcing the law and fines.

Sharing cyber incident information within an industry sector or internationally is important for larger scale analysis and better overall resilience. However, there are risks due to the inclusion of cyber vulnerability implications, business critical information and personal data in such sensitive reports. Two EU research projects (NeCS and C3ISP) aim to address these risks through the use of privacy preserving methods and security policies. The C3ISP project says its “mission is to define a collaborative and confidential information sharing, analysis and protection framework as a service for cybersecurity management.”

More security standards?

The idea of having prescriptive rules per sector was considered and rejected during the UK’s consultation process on the NIS directive. The outcome-based approach adopted instead is in line with how the GDPR imposes cybersecurity requirements for personal data: it consistently refers to “appropriate technical and organisational measures” to achieve security, without pinning them down to specifics. Such an approach should help with obtaining organisational involvement that goes beyond a compliance culture.

A set of 14 guiding principles was drawn up, with the NCSC providing detailed advice, including helpful links to existing cybersecurity standards. However, the cyber assessment framework, originally promised for release in January this year, won’t be published by the NCSC until late April – a matter of days before the NIS directive comes into force.

Nonetheless, the NIS directive presents a good drive to improve standards for cybersecurity in essential services, and it is supported by sensible advice from the NCSC with more to come. It would be a shame if the positive aspects of this ended up obscured by hype and panic over fines.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.

The article was originally published on 30th January 2018 in The Conversation: Critical infrastructure firms face crackdown over poor cybersecurity


Cyber peacekeeping is integral in an era of cyberwar – here’s why

Cyber warfare is upon us, from interference in elections to a leak of cyber weapons from a national stockpile. And, as with most evolutions in warfare, the world is largely unprepared. Cyber peacekeeping presents significant challenges, which we explore in our research.

Any theatre of war now includes cyberspace. It has been used in targeted attacks to disable an adversary’s capabilities – as with Stuxnet, which disrupted Iran’s ability to enrich weapons-grade uranium. It can also be exploited in traditional warfare through electronic interference with intelligence and communication systems.

With little to guide nations and scant experience to build upon, many states are having to learn the hard way. In the context of warfare, it takes a long time to understand the impact of new technologies. One only need look at the example of landmines to see why. Once considered a legitimate weapon to stifle enemy movement, most countries now agree that landmines are indiscriminate and disproportionate weapons that cause civilian suffering long after a conflict has ended.

It’s possible that cyber warfare holds unknown consequences that future world leaders will agree to ban for similar, gut-wrenching reasons in the aftermath.

There are, however, efforts to fill the gaps in knowledge. Researchers, such as my colleague Michael Robinson, have attempted to characterise cyber warfare to understand how it can be effectively and ethically conducted. These efforts range from creating cyber warfare laws to controlling and restricting cyber weapons.

These efforts are beginning to bear fruit, with the Tallinn Manual – first published in 2013 – offering a comprehensive analysis of how existing international law applies to cyberspace.

Stop the fight

But while a large proportion of research focuses on how to conduct cyber warfare, there is very little research on restoring peace in the aftermath of an online conflict between nation states.

Just as we cannot expect a nation to spring back to peace and prosperity following years of boots-on-the-ground war, countries affected by prolonged periods of cyber warfare also need assistance to recover.

A nation’s reliance on critical infrastructure brings the need to understand the damage cyber warfare can inflict on a society into sharp focus. Computer systems running essential services at hospitals, nuclear power plants and water treatment plants may be infected with advanced malware, which resists removal and prolongs civilian suffering – much like landmines persist long after a conflict ends. The physical effects of cyber weapons make cyber peacekeeping a key enabler to help bring about lasting peace.

After a conventional conflict, interventions to restore peace and security are performed on the international stage. The United Nations (UN), with its white vehicles and blue helmets, is the most widely recognised peacekeeping organisation. It has a long history of maintaining peace around the world and has evolved to match the shifting nature of warfare from inter-state to intra-state conflict over the years.

UN peacekeepers were initially ill-equipped to deal with such a change, which led to high-profile failures such as those in Rwanda and Somalia.

With the rise of cyber warfare, peacekeepers will increasingly have to operate in this domain. But are the UN and similar organisations prepared for this expected onslaught or will they suffer a repeat of past failures, having been caught out by changes in the nature of conflict? Protracted UN cyber warfare talks fell apart last year because a consensus couldn’t be reached amid suspicions that reportedly mirrored the Cold War era. Nonetheless, questions must be asked of the UN’s peacekeeping strategy on its readiness to tackle cyber threats.

Peace is the word

Can existing peacekeeping activities simply be adapted for the internet, or should a completely new framework be drawn up to adequately address how to maintain or restore order online? What kind of technical obstacles will cyber peacekeepers encounter? Could they achieve something that contributes towards restoring or maintaining peace?

Disarmament illustrates these operational problems well: the destruction or confiscation of physical armoury means that assets cannot be easily replaced by a warring faction should peace efforts stall or falter. Cyber weapons are predominantly software applications that can be replicated, archived, encrypted and passed on with almost no cost or significant logistic efforts, research shows.

The effectiveness of cyber weapons diminishes once the vulnerabilities they have exploited become known, so one approach would be to publish detected cyber weapons to render them obsolete. Responsible disclosure would allow vendors to come up with fixes and give potential victims a chance to apply the patches – which can be a lengthy process.

Doing so “destroys” all cyber weapons of this kind – regardless of whether they belong to any of the warring factions. This approach has a nasty side-effect: it inadvertently leads to a proliferation of cyber weapons, because it’s easier for other nations or criminals to acquire the technology before adequate protections can be put in place on a global scale. It also throws up political challenges.

Conventionality belongs to yesterday

It’s no secret that the UN struggles to find money for peacekeeping contributions. The US, the largest contributor to the UN budget by far, has – under President Trump – disagreed with how the organisation is governed, and confirmed it will reduce payments to the peacekeeping budget.

If securing troops under tight budget restrictions is already difficult, then securing highly skilled cyber personnel in a competitive global market will be harder still.

And there’s an additional complication: those countries conducting cyber warfare are the advanced nations, many of which already contribute the lion’s share of UN funding and possess the greatest cyber expertise. Would they be willing to contribute their knowledge, wealth and people to aid their adversaries?

Conflict affects every nation, so it’s in everyone’s interests to have an internationally available capability to restore peace and security in the aftermath of cyber warfare.

This blog post was written by Helge Janicke, Professor of Computer Science and Head of the School of Computer Science and Informatics at De Montfort University.

This article was also published in The Conversation on 29th January, 2018.


Why we need to know if users don’t stick to IT security policy

Is finding out that users don’t comply with the policy a nightmare scenario for an IT security officer, for example at the House of Commons? Hardly. Unless you find out through Twitter, of course, along with the rest of the world.

A policy that only demands self-evident behaviour contributes nothing, and probably does not solve any real problem. For a realistic policy in an ever-changing cyber security landscape, you should expect some aspects of compliance to be strenuous initially, and more of them over time. It is counterproductive to assume that security versus utility is a zero-sum game, but trade-offs are always likely. The research area of “usable security” works to minimise this effect.

So you have to monitor policy compliance. Probably not through social media research, though. It would be interesting to see how compliance gets checked in the House of Commons. There is a decent chance that there’s education and advice but otherwise reliance on individual MPs’ responsibility. That worked for everything including MPs’ expense claims, until we realised that it didn’t. To complicate things, IT security where it concerns the Data Protection Act does devolve to individual MPs, as they are all separate data controllers.

IT security policy compliance should be monitored to cover the risks that the policy is supposed to mitigate. Businesses should normally link non-compliance to disciplinary procedures. As some tweets said this week, sharing logins is a sacking offence in some businesses. Non-compliance can also be an indication of changes in cyber risks and risk perception, and changes in business processes – so the exact areas of non-compliance may be just where the security policy needs to reflect such changes.
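One concrete form of such monitoring is scanning authentication records for accounts active from two places at once – a strong hint of login sharing. A minimal sketch; the log format and example entries are assumptions, not any particular product’s API:

```python
from collections import defaultdict

def find_shared_logins(sessions):
    """Flag accounts with time-overlapping sessions from distinct IPs.

    sessions: (user, ip, start, end) tuples with comparable times.
    A real deployment would parse actual authentication logs; this
    sketch only illustrates the detection idea.
    """
    by_user = defaultdict(list)
    for user, ip, start, end in sessions:
        by_user[user].append((ip, start, end))
    flagged = set()
    for user, entries in by_user.items():
        for i, (ip1, s1, e1) in enumerate(entries):
            for ip2, s2, e2 in entries[i + 1:]:
                # Two different IPs with overlapping time windows.
                if ip1 != ip2 and s1 < e2 and s2 < e1:
                    flagged.add(user)
    return flagged

sessions = [
    ("mp_a", "10.0.0.1", 0, 100),
    ("mp_a", "10.0.0.2", 50, 150),  # second IP while the first is active
    ("mp_b", "10.0.0.3", 0, 100),
]
shared = find_shared_logins(sessions)  # flags only "mp_a"
```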

Most of all, however, usable security research tells us what the ultimate value of non-compliance information is: it indicates where users have found security too burdensome, and where they have found their own workarounds. This is also known as “shadow security”. These workarounds create the seams through which cyber risks can enter the organisation.

Is the password for the shared drive too hard to remember? Sharing logins is one solution for sharing files. Another is to use the cloud (Dropbox, Google Drive, etc.) or, worse, a USB stick. So links to just about anywhere on the internet can refer to official documents – or not – and a USB stick casually passed on can contain important official information. And be lost on the train. All this normalises dubious cyber hygiene.

Is communication by email not secure enough, maybe because emails can even be read by interns on exchange programmes? Create a WhatsApp group for gossip or conspiracy. If the Honourable Member for Backwardbury South defects to the opposition or turns out to be on Putin’s pay list, whose responsibility is it to remove them from the group? Presumably there’s no harm in Facebook knowing who is in the gang either?

These examples should give some indication of the value of knowing about non-compliance with security policy. The response is not simply to shout at the users for misbehaving – it is also to explore where business and security procedures can be integrated in a more usable way.

That does not provide an excuse for the recent behaviour of Nadine Dorries and other MPs. She didn’t exactly raise login sharing as an example of unworkable IT and its workarounds. Rather, she used it to make a public argument to dissipate Damian Green’s responsibility for the porn that had been found on his work computer. From an information security perspective, that is inexcusable – and that point of view should be supported by management. One role of logins is to represent a user’s permissions, responsibilities and actions in an IT system in a way that makes them checkable, recordable and auditable. Morally if not also legally, a user should always remain responsible for what is done using their login – the more so if it is willingly shared. Dorries’ alternative to the “maybe his login was hacked” excuse was ill-considered for that reason alone.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.




Meet our experts…Professor Eerke Boiten

Professor Eerke Boiten joined the Cyber Technology Institute in April 2017 from the University of Kent where he was the Director of the Cyber Security Research Centre.

Professor Boiten spent the first twenty years of his research career, first in the Netherlands and then in the UK, on mathematics and logic based methods to guarantee and verify the correctness of software. He published over 50 peer reviewed papers on formal methods, including program transformation, viewpoint specification, and refinement in process algebra and state-based systems (e.g. Z). On the latter topic, he authored the monograph “Refinement in Z and Object-Z” with John Derrick (Springer 2004, 2015), and organised many conferences and workshops including the last nine editions of the BCS-FACS Refinement Workshop.

In recent years, he has been applying such techniques in the context of cryptography and security. He led the highly successful UK network on cryptography, security and formal methods CryptoForma. In addressing the broader cyber security research agenda, he also actively engages with other disciplines and external stakeholders.

Professor Boiten has also been a frequent commentator on issues in data security and privacy, in outlets including The Guardian, Le Monde, and, frequently, The Conversation. Recent comment topics have included health data sharing, Google, Facebook, the Right to be Forgotten, surveillance, encryption and ransomware.

He is currently Principal Investigator on the Economical, Psychological and Societal Impact of Ransomware (EMPHASIS) project, which aims to build economic and behavioural models of ransomware that can be used to improve ransomware mitigation and advice, as well as to support law enforcement.

For more information about Professor Boiten and his publications, please visit:


Meet our experts….Dr Isabel Wagner

Dr Isabel Wagner

Dr Isabel Wagner is a Senior Lecturer in the Cyber Technology Institute here at De Montfort University. She received her PhD in engineering (Dr.-Ing.) in 2010 and her M.Sc. in computer science (Dipl.-Inf. Univ.) in 2005, both from the Department of Computer Science at the University of Erlangen. In 2011 she was a JSPS Postdoctoral Fellow in the research group of Prof. Masayuki Murata at Osaka University, Japan.

Dr Wagner has made significant contributions in wireless sensor networks, computing education, and privacy-enhancing technologies. These diverse contributions are united by a focus on measurement and the application of simulation methodology and statistics. Dr Wagner’s work has been published in renowned peer-reviewed journals and conferences and has been cited more than 900 times (Google Scholar).

This month, Isabel has been elevated to the rank of Senior Member by the Association for Computing Machinery (ACM). ACM is the world’s largest computing society and honours the top 25% of its members as Senior Members for their demonstrated excellence in the computing field.

The following examples illustrate the results of her outstanding research:

In the area of wireless sensor networks, Dr Wagner proposed a new metric for the lifetime of sensor networks. This highly cited work (currently the 6th most-cited paper in ACM Trans. on Sensor Networks) analysed metrics and application scenarios for sensor networks, and proposed a composite metric that can be configured based on the requirements of the application scenario. This metric enables objective comparisons between different algorithms and configurations of sensor networks.
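The configurability idea behind such a composite metric can be sketched as a weighted combination of individual criteria. This is only a hypothetical illustration – the metric names, values and weights below are invented, and the metric actually proposed in the paper is considerably more sophisticated:

```python
def composite_lifetime(metrics, weights):
    """Hypothetical weighted combination of lifetime criteria.

    metrics: criterion -> value in [0, 1]; weights: criterion ->
    importance for the application at hand. Shows only how one
    composite metric can be configured per application scenario.
    """
    total = sum(weights.values())
    return sum(metrics[k] * w for k, w in weights.items()) / total

metrics = {"coverage": 0.9, "connectivity": 0.6, "nodes_alive": 0.8}
# A surveillance scenario might weight coverage heavily; a data-logging
# one might care more about connectivity to the sink.
surveillance = composite_lifetime(
    metrics, {"coverage": 3, "connectivity": 1, "nodes_alive": 1})
logging = composite_lifetime(
    metrics, {"coverage": 1, "connectivity": 3, "nodes_alive": 1})
```

The same network state thus yields different lifetime judgements depending on what the application actually requires, which is what makes objective comparisons possible.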

In computing education, Dr Wagner has focused on gender equality. In a large statistical study of the achievement of female CS students, she found that across all UK universities, female CS students are awarded significantly fewer first-class degrees (corresponding to a 70% average) than male students (published in ACM Trans. on Computing Education). This result is now informing her local work in supporting female students and making staff aware of unconscious biases.

In the area of privacy-enhancing technologies, Dr Wagner has investigated the measurement of privacy as a prerequisite for objective comparisons between privacy-enhancing technologies. She has proposed a taxonomy for privacy metrics and a general method to assess the strength of privacy metrics. Her study of privacy metrics for genomic privacy (published in ACM Trans. on Privacy and Security) evaluated 24 privacy metrics for genomics and found weaknesses in several common privacy metrics.
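As a flavour of what a privacy metric measures – and why some are weak – consider the classic entropy of the adversary’s belief over candidate individuals. This example is illustrative only and is not drawn from the paper’s evaluation:

```python
import math

def shannon_entropy(probabilities):
    """Entropy (in bits) of the adversary's belief over candidates.

    Higher entropy = more adversary uncertainty = more privacy. A known
    weakness of simple metrics like this: a distribution can still score
    above zero even when one candidate is overwhelmingly likely.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

uniform = shannon_entropy([0.25] * 4)               # 2.0 bits: no idea
skewed = shannon_entropy([0.97, 0.01, 0.01, 0.01])  # ~0.24 bits: near-certain
```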

Her research has been funded by the Engineering and Physical Sciences Research Council (EPSRC), the Japan Society for the Promotion of Science (JSPS), and major companies. She also acts as an expert reviewer for the EPSRC, the EU Horizon 2020 programme, and several high-ranking journals and serves on the technical program committees of leading conferences.


Can you avoid being hit by ransomware?

Yes, you can. Having said that, for the NHS it was probably rather harder to avoid.

After last weekend, it is hardly necessary to explain what ransomware is any more – even if not all media got the details correct. Ransomware is a particular type of malicious software (“malware”) that demands a ransom to return the affected computer to its original state. Like most ransomware, the current variant (“WannaCry”) replaces the user’s data files with encrypted versions for which only the criminals have the decryption key. Often such ransoms need to be paid in the online currency bitcoin. This means that even paying the ransom is a challenging experience for many of the victims, with the criminals often offering help (!). This is part of the game: the criminals need their victims to build up some trust, so they will also trust the criminals to deliver when they pay up. Nevertheless, the official advice is still not to pay, as you can never be sure, and nobody likes to support this particular “business” model. As far as we can tell, nobody has yet received a decryption key after paying for this particular infection.

So how could you land with ransomware on your computer?

Old software, missing updates, clicking the wrong links …

All malware relies on “vulnerabilities” in software for the malware to take hold. In this case, it was a vulnerability in Microsoft operating systems, for which updates had been sent out in March 2017. Nobody who applied those updates will have been hit by WannaCry. Unfortunately, public free support for Windows XP (not sold since 2008) had stopped in 2014, so no free update for that was available. The vulnerability exists in Windows XP, too, and Microsoft had a fix available – initially for a price, but as of this weekend this is also available for free.

The existence of a vulnerability by itself will not normally lead to ransomware infection – it also needed some action by a user. The most common such action these days is clicking on a “wrong” link in an email which looks like it comes from a trusted source (“phishing”, or if it’s cleverly targeted, “spear phishing”). Unfortunately, there is an arms race in this area: criminals get better at creating realistic-looking emails, so even though users are more aware of the risks, they stand a worse chance than ever of spotting the best phishing emails. With all sorts of internet services regularly sending out emails with bona fide links in them, this is a problem that will need a radical solution soon.
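One of the simplest automated phishing checks is comparing the host a link displays with the host it actually targets. The domains below are made up, and real email filters combine many more signals than this single heuristic:

```python
from urllib.parse import urlparse

def suspicious_link(display_text, href):
    """Flag a link whose visible text names a different host than its target.

    The classic phishing tell: the email shows one web address, but the
    underlying href points somewhere else entirely.
    """
    shown_host = urlparse(display_text).netloc
    real_host = urlparse(href).netloc
    # Only meaningful when the displayed text itself looks like a URL.
    return bool(shown_host) and shown_host != real_host

assert suspicious_link("https://mybank.example/login",
                       "https://evil.example/steal")
assert not suspicious_link("https://mybank.example/login",
                           "https://mybank.example/login")
```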

The NHS, despite a huge IT budget, was always at a higher risk of catching this strain of ransomware than most people at home. Many of their computers still run on Windows XP, so would not have been updated in time. In many cases, moving away from XP for the NHS (and many other large organisations) is not just a question of simple replacement cost. They also have crucial software that will not work with newer operating systems, or worse: an XP-based computer may actually be built into a complex medical instrument. Replacing those in their entirety is a much bigger job, and even having had extended XP support over 2014–15, it is not clear the NHS could realistically have done so by now. Most home computers on XP have probably long been retired because they were getting too slow for the newest games …

This aspect of the story won’t go away with Microsoft releasing an XP update to combat WannaCry. Every update released for newer Microsoft operating systems addresses – and thereby implicitly publicises – a vulnerability that may have existed in XP already, with no free public updates provided for that …

Another very political can of worms in this story is that the vulnerability had been known to the NSA, held in their stash of vulnerabilities to exploit when they needed to break into people’s computers. The NSA will likely have known about this one since well before XP support was stopped.

Can you be safe even if you’ve been hit by ransomware?

Yes, provided you had backups of your data. That has always been a good strategy – disc drives can crash, laptops can get stolen – and in this case having a backup allows you to put the original files back in place instead of the maliciously encrypted ones. Because you also need to get rid of the malware, and to avoid re-infecting yourself and others, this is a task that should not be undertaken without expertise.

Current Research

Cyber security researchers are addressing all of this from various directions, often with interdisciplinary aspects, as some of it relates to how humans operate and can be manipulated: ransomware encryption methods are broken, bitcoin payments on the blockchain are traced, and email filtering is improved to catch more phishing emails.

Funded by the national research funding agency EPSRC, Professor Eerke Boiten at the CTI is leading EMPHASIS, a £900K research project into all aspects of ransomware, with computer scientists, economists, psychologists and criminologists from the universities of Kent, Leeds and Newcastle, De Montfort University and City University London.

This blog post was written by Professor Eerke Boiten, Professor of Cyber Security at the Cyber Technology Institute, De Montfort University, Leicester.




CYRAN: a realistic environment for cyber warfare training

Cyber security of ICS/SCADA systems is a major aspect of current research in the cyber community. Here at the Cyber Technology Institute, we have developed CYRAN – a hybrid cyber range combining physical and virtual components that provides an ideal environment for hands-on cyber warfare training, cyber resilience testing and cyber technology development.

A key challenge in Cyber Security training is the ability to perform practical exercises in a realistic environment, especially for areas where the ability to incorporate real equipment is almost non-existent.

To this end, the Cyber Technology Institute at De Montfort University have created the CYRAN cyber range. CYRAN has been developed utilising a hybrid approach, combining virtualised components with actual physical hardware.  This includes the capacity for switches, routers, user terminals with a variety of operating systems, programmable logic controllers, human machine interfaces, geographically distributed networks and virtual private networks.

Scenarios can be developed to better represent operational environments by incorporating physical systems such as control systems and bespoke technologies, providing enhanced resiliency testing.

Once a scenario has been developed, Red vs Blue exercises (where one team attacks the system and the other attempts to identify and attribute the attacks) can be performed, highlighting areas of weakness likely to be exploited by malicious actors and assessing the level of information required for successful attribution. Tokens worth a predetermined number of points are spread throughout the scenario and are associated with particular techniques or exploits.


This approach introduces an element of competition, which can be tailored to assess the impact of differing scoring schemes. Competition can simply be between Red and Blue, but provision exists to monitor individual points, meaning competition within teams can also be assessed. Any combination of these can be implemented; one that has proved successful in the past is to award Blue points solely to the team whilst awarding individual points to the Red team, leading to greater teamwork amongst the defenders whilst highlighting individuality amongst the attackers.
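The mixed scheme described above (team points for Blue, individual points for Red) can be sketched as a tiny scoreboard. The token names and point values below are invented for illustration, not taken from CYRAN itself:

```python
from collections import defaultdict

class Scoreboard:
    """Toy scoreboard for a Red vs Blue exercise: each token carries a
    fixed point value; Blue captures credit the whole team, while Red
    captures credit the individual attacker."""

    def __init__(self, token_values: dict[str, int]):
        self.token_values = token_values
        self.team_points = defaultdict(int)
        self.individual_points = defaultdict(int)

    def capture(self, token: str, team: str, player: str) -> None:
        points = self.token_values[token]
        if team == "blue":
            # Defenders score as a team, encouraging teamwork.
            self.team_points["blue"] += points
        else:
            # Attackers score individually, highlighting individuality.
            self.individual_points[player] += points
```

Swapping which branch credits the team and which the individual is all it takes to model the other scoring schemes the exercise supports.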

A key component of a scenario is the White team; not only do they ensure the smooth running of the event providing hints or extra information when necessary, but they can also take on the role of other members of an organisation to increase the realistic demands of a situation.

With CYRAN, we can provide attendees with practical and technical skills as well as the experience of working with others within a simulated scenario.  It is also  easy to create and add new scenarios in order to tailor the training to the specific needs of organisations.

For more information about the training opportunities with CYRAN, please contact us:

For more detail about the development of CYRAN, please see:


Spotlight on Research… Privacy Measurement by Dr Isabel Wagner

PryMe – a Universal Framework to Measure the Strength of Privacy-enhancing Technologies


Privacy is a fundamental human right codified in the European Convention on Human Rights. However, privacy in today’s digital society is constantly under threat, and privacy protections are needed to guard against privacy violations. Privacy-enhancing technologies can protect privacy on a technical level and thus offer much stronger protection than privacy policies or privacy laws. Our expert, Dr Isabel Wagner, has been awarded an EPSRC grant to advance the state of the art in privacy measurement – a fundamental building block for the creation of new privacy-enhancing technologies.

Privacy is a universal value and an important matter of human rights, security, and freedom of expression. However, in the digital era privacy is increasingly becoming eroded, and existing protections in terms of laws and privacy policies turn out to be insufficient because they do not prevent privacy violations from happening. In contrast, privacy protections on a technical level, so-called privacy-enhancing technologies, can prevent privacy violations and are thus a topic of much current research.

One way to show how effective new privacy-enhancing technologies are, i.e. to what extent they are able to protect privacy, is to use privacy metrics to measure the amount of privacy the technologies provide. Although many privacy metrics have been proposed, studies have shown shortcomings in their consistency, reproducibility, and applicability across application domains. This is an important issue because a weak privacy metric can lead to real-world privacy violations if it overestimates the amount of privacy provided by a technology.
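To make the idea of a privacy metric concrete, one classic example from the anonymous-communication literature is the entropy of an adversary's probability distribution over which user performed an action: if the adversary considers n users equally likely, the metric reaches its maximum of log2(n) bits, and it drops to zero once one user is identified with certainty. This is offered only as an illustration of what a metric computes, not as part of the PryMe project:

```python
import math

def anonymity_entropy(probabilities: list[float]) -> float:
    """Shannon entropy (in bits) of the adversary's belief about which
    user performed an action; higher means more privacy."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)
```

Four equally suspect users yield 2 bits of anonymity, while a distribution concentrated on a single user yields 0 bits, which is exactly the kind of behaviour a framework like PryMe would probe for consistency across application domains.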

The proposed research addresses this issue by evaluating the quality of existing privacy metrics, identifying their strengths and weaknesses, and building on this evidence to propose new, much stronger privacy metrics. Our aim is to create novel privacy metrics that measure the effectiveness of privacy-enhancing technologies consistently, reproducibly, and across application domains. To achieve this aim, we will (i) create the modular framework PryMe for the systematic evaluation of privacy metrics, (ii) apply the PryMe framework to evaluate privacy metrics across application domains, and (iii) propose strong new privacy metrics that work in each application domain.

By proposing a single framework to evaluate privacy metrics in many application domains, we allow research ideas on privacy metrics from different domains to complement each other, which will transform how privacy is measured. To further this transformation, we will release open source code for the PryMe framework to enable other researchers to study different application domains and new privacy metrics. In the long term, this will be relevant to improve privacy-enhancing technologies, and thereby improve privacy for end users.

Privacy measurement is important not only to improve privacy-enhancing technologies, but also to analyse trade-offs between privacy and data utility, or between privacy and security. Better privacy metrics therefore not only improve privacy for end users, but also improve the decision-making in situations when privacy needs to be weighed against utility or security. Better privacy metrics can also help improve the user acceptance of new technologies such as vehicular networks and smart homes by showing that privacy issues have been addressed on a technical level.

Dr Isabel Wagner is a Senior Lecturer in the Cyber Technology Institute at De Montfort University.

We are currently recruiting a Research Fellow to support this project:

For more information about this project, please contact:


#DMUCyberWeek at the Cyber Technology Institute


We will be hosting an exciting programme of workshops and activities here in the Cyber Technology Institute at De Montfort University from Monday 8th May – Friday 12th May.

#DMUCyberWeek will be a combination of career events for our current students as well as an opportunity for local businesses and security enthusiasts to come and find out more about cyber security and the research and commercial engagement happening in the CTI.

On Monday, we will welcome a team from Cyber Security Challenge UK who will be delivering a day of career workshop sessions for our students: a great opportunity for them to explore the varied career options available when graduating from our cyber security programmes.

Tuesday and Wednesday will be focused on the issues of cyber threats in critical systems with workshops delivered by members of the Cyber Security Team from Airbus – one of our Industrial Advisory Group partners.

Later in the week, we will also have sessions on:

• Social Engineering from Ian Mann, the founder of ECSC Group PLC;

• Honeypots from Thomas Brandstetter, who was the official incident handler for the Stuxnet incident at Siemens;

• Pen-Testing from the Pen-Testing team from Deloitte – another partner organisation from our Industrial Advisory Group.

There will also be a social evening during the week which will be an opportunity for our academic experts, students, speakers and visitors to meet informally to discuss common interests in the field of cyber security.

We are really grateful to all the individuals and organisations taking part in #DMUCyberWeek and are looking forward to welcoming lots of visitors – both old and new!

For more information about #DMUCyberWeek, please contact:
