Blogs & Podcasts

Dr Vasileios Giotsas from SCC shares his thoughts on how fragile the internet is.

As part of the "Are You Safe Online?" article, Dr Vasileios Giotsas of the School of Computing and Communications talks to The Naked Scientists about how fragile the Internet really is. With everything in the cloud, transmitted through transient beams of wi-fi from one device to another, the internet can often feel like an ethereal, intangible thing. But, of course, it's not: it's cables and servers and a lot of infrastructure. And maybe that infrastructure itself isn't as solid as we like to think.

Adam Murphy spoke to Vasileios Giotsas about the internet's weak spots...

Vasileios - So the internet can be really fragile. It was designed to withstand a nuclear disaster. But when it was designed, it was assumed that everyone connected to the internet would have no interest in harming it. So everybody is automatically trusted, which makes it very susceptible to poisoning from bogus or intentionally erroneous information. And when this information is propagated in the core of the Internet, it can cause widespread disruptions that are very hard to mitigate.

Adam - So how can the internet malfunction?

Vasileios - So the internet, think of it as a really large and complex road network. Right? And traffic can take many different paths, and it needs a navigation system, a kind of G.P.S., and this navigation system is called the routing protocol. So the routing protocol is the protocol that decides how traffic travels from your computer to, let's say, the BBC website, or to the Lancaster University website. Right? Now, if this information is accurate, then everything works as expected. If this information for any reason becomes poisoned, then traffic can take unpredictable paths, or never reach its destination. And this can cascade to many different destinations, so in the end you have millions of users unable to access their desired destinations or services. So essentially what happens is that the Internet has this routing system, this navigation system, that is really sensitive to any sort of small change. These small changes, whether they are intentionally or unintentionally wrong, can cause the whole network to crumble.
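The route poisoning Vasileios describes can be made concrete with a toy sketch. This is purely illustrative and nothing like real BGP in detail; the names and numbers are invented. The key point it shows is that a router blindly trusts whichever neighbour advertises the shortest route, so a single bogus announcement redirects traffic:

```python
# Toy illustration (not real BGP): a router picks the neighbour that
# advertises the shortest path to a destination, trusting it blindly.

def pick_next_hop(advertised):
    """Choose the neighbour claiming the shortest route."""
    return min(advertised, key=advertised.get)

# Honest advertisements: neighbour B claims 2 hops to the destination,
# neighbour C claims 3.
routes = {"B": 2, "C": 3}
print(pick_next_hop(routes))   # "B" - the genuine shortest path

# A malicious (or simply misconfigured) router M announces an
# impossibly short route; because announcements are trusted, all
# traffic is now drawn towards M, which may drop or inspect it.
routes["M"] = 0
print(pick_next_hop(routes))   # "M" - traffic is hijacked or blackholed
```

The same blind-trust decision, repeated across thousands of routers, is why one erroneous announcement can cascade across the whole network.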

Adam - And this isn't just a hypothetical. Whenever there are large outages of several websites, this problem is often to blame. In July 2019 this happened to one company, Cloudflare, and the outage took out 10 percent of all web traffic in the U.K. It happens to Google. It happens to Facebook. So why haven't we fixed it?



LUHack team attends capture the flag tournament at Etihad Stadium

On 2 April 2019, eight LUHack members ventured to the Etihad Stadium to attend Academy Day, a capture the flag (CTF) tournament hosted by Palo Alto Networks and UCEN Manchester, with challenges provided by Hacking Lab.


The day started at 0830 for our members, who arrived at the impressive stadium to be greeted with a surprise: we wouldn’t be competing as our own team; instead, teams would be randomised.  Despite the initial shock, some LUHack members did end up on teams together, and the situation made for a great excuse to spend a day networking with other universities.

We filled up on coffee and shortbread before embarking on a tour of the grounds. Participants were then taken downstairs for a panel session with Sean Marshall of F3, PJ Jagdale of Palo Alto Networks, and our own Dr Daniel Prince, all moderated by Ron Dodge of Palo Alto Networks.

After the panel session we were briefed about the day’s CTF activities and then left for our respective team tables to get set up.  Everyone connected over the stadium’s WiFi and navigated to the dashboard and challenges over the internet.

The CTF itself consisted of typical jeopardy-style challenges including cryptography, reverse engineering, pwn, web application, steganography, and forensics, as well as some miscellaneous ones.  The difficulty was well judged, with a good balance of easy, moderate, and hard challenges, further balanced by dynamic scoring dependent on how many teams completed each challenge.
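Dynamic scoring of this kind is common on CTF platforms. As a rough sketch of how such a scheme typically works (illustrative only; the initial value, minimum, and decay rate here are invented and not necessarily what was used on the day), a challenge starts at full value and decays towards a floor as more teams solve it:

```python
# Sketch of a typical dynamic-scoring scheme: a challenge's value
# decays quadratically from `initial` to `minimum` over `decay` solves,
# so early solvers of hard (rarely-solved) challenges earn more points.

def dynamic_score(solves, initial=500, minimum=100, decay=20):
    """Current point value of a challenge after `solves` teams solved it."""
    if solves >= decay:
        return minimum
    value = initial + (minimum - initial) * (solves / decay) ** 2
    return int(round(value))

print(dynamic_score(0))    # 500 - an unsolved challenge is worth full points
print(dynamic_score(10))   # 400 - value drops as more teams solve it
print(dynamic_score(20))   # 100 - heavily-solved challenges bottom out
```

The effect is that the scoreboard self-balances: challenges everyone solves end up worth little, while genuinely hard ones reward the few teams that crack them.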

Coffee, tea, and an excellent lunch were all provided over the course of the day to keep everyone concentrating on the task at hand.

Throughout the day, communication with unfamiliar teammates was more vital than ever, and identifying everyone’s strengths and weaknesses early on was essential. Working with members of other CTF teams provided great insight into how they ordinarily work and left us with great ideas for improving our workflow in future CTFs when we’re back together as LUHack teams.

Team 04, with Ric Derbyshire (PhD student), James Boorman (PhD student), and Ben Simms (first-year computer science student), was neck and neck with Team 13, featuring Henry Clarke (second-year computer science student), and it was anybody's guess who would come out on top after the scoreboards went dark half an hour before the end.  The tension was heightened by the fact that both teams were competing for positions in the top three of the leaderboard.

The competition ended at 1645 with the challenges being closed and everyone being ushered downstairs.  We were given some statistics from the tournament and then moved on to what we all wanted to hear: the results!

Team 04 managed to make it into the top three, finishing in 3rd place, with each member (three of whom were from LUHack) receiving £50 Amazon gift vouchers (thank you, Palo Alto and UCEN Manchester).

In summary, LUHack enjoyed some fantastic challenges and food in a spectacular venue and would be absolutely thrilled to be invited back to another Academy Day in the near future.


You can find our writeups of the challenges on LUHack’s website HERE.

Dr Daniel Prince joins The CyberWire daily podcast to talk about "Quantum Hardware Security Primitives"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire about "Quantum Hardware Security Primitives".


The CyberWire is a free, community-driven cyber security news service based in Baltimore. Its mission is to provide concise and relevant daily briefings on the critical news happening across the global cyber security domain. In an industry overloaded with information, the CyberWire also helps individuals and organizations rapidly find the news and information that's important to them—with more signal and less noise.

26th October 2018 Podcast

In today's podcast, we hear that British Airways' breach has gotten bigger. Mexico's financial institutions say they've contained the anomalies in interbank transfer systems. "Demonbot" is infesting poorly secured Hadoop servers. Google receives criticism for slow action against ad fraud. Bitdefender and Romanian police produce a decryptor for GandCrab ransomware. Discussion of a "Civilian Cybersecurity Corps": are white hats the radio hams of the twenty-first century?

You can hear Daniel’s segment HERE, or listen to the entire podcast HERE.

You can also learn more at

Charles Weir talks to Angela Sasse about "User-centered Security"

In this interview, Angela speaks to Charles Weir of Security Lancaster, discussing the relationships between security experts and professional users. Why are responsible employees, including software developers, often right to subvert security controls? How can organisations and security experts change their approach and training to address the problems this causes? Expect unconventional and fascinating views from this world expert on usable security! 


M. Angela Sasse FREng is the Professor of Human-Centred Technology in the Department of Computer Science at University College London, UK.  A usability researcher by training, she started investigating the causes and effects of usability issues with security mechanisms in 1996.  In addition to studying specific mechanisms such as passwords, biometrics, and access control, her research group has developed human-centred frameworks that explain the role of security, privacy, identity and trust in human interactions with technology.  

She is currently the Director of the multidisciplinary UK Research Institute for Science of Cyber Security (RISCS), funded by EPSRC and GCHQ and now in its second phase.

Angela has recently been appointed Professor of Human Centered Security at the Horst Goertz Institute for IT Security at the Ruhr University of Bochum, Germany.

Listen to the full Podcast HERE.

Dr Daniel Prince joins The CyberWire daily podcast to talk on "Re-writing Digital Histories"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire on "Re-writing Digital Histories".



19th July 2018 Podcast

In today's podcast, we hear that Fancy Bear has taken a Roman Holiday, and the Italian Navy may be taking note. A criminal espionage campaign is underway, with Ukraine's government as its target. An exposed AWS S3 bucket leaks voter information. A security firm and a vendor dispute whether an issue is a vulnerability or a case of user abuse. NIST announces its intention of withdrawing some obsolete cyber security publications. Congress presses tech companies about content moderation. Daniel Prince from Lancaster University on rewriting digital histories. Guest is Matt Cauthorn from ExtraHop on a new worm spreading through Android devices.

You can hear Daniel’s segment HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk about "Cascading Failure in Complex Systems"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire about Cascading Failure in Complex Systems.



26th June 2018 Podcast

In today's podcast, we hear warnings of Russian cyber operations from Romania and the UK. Recent attempts at developing international rules of conduct (and conflict) in cyberspace. Bronze Butler's naughty USB drives—not as scary as they sound, but a useful reminder of some sound precautions. FireEye says it never hacked back. Smart batteries may be too smart for their users' good. A new venture fund lends credibility to cryptocurrency and blockchain startups. Overwatch hacker gets jail time in Inchon.

You can hear Daniel’s segment HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk on "Security of Industrial Control Systems"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire on “Security of Industrial Control Systems”.



7th June 2018 Podcast

Iron Group is said to use Hacking Team source code to build a backdoor. Operation Prowli both cryptojacks and sells traffic. Fancy Bear may be getting noisier. VPNFilter has a more extensive set of victim devices than previously believed. ZTE pays a billion-dollar fine. CloudPets are oversharing via an unsecured server. The US Senate wants answers from both Facebook and Google about their user data sharing with Chinese companies. Daniel Prince from Lancaster University on the security of Industrial Control Systems. Guests are Kyle Lady and Olabode Anise from Duo Security covering their annual report on authentication.

You can hear Daniel’s segment about the security of Industrial Control Systems HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk about "Risk Management and Uncertainty"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire about Risk Management and Uncertainty.



23rd May 2018 Podcast

In today's podcast we hear a bit more on Variant 4—we may see more like it. Mitigations are under preparation. The Confucius threat group modifies its approach to targets. Turla adopts a two-stage infection technique. A misconfigured AWS S3 bucket exposes a California not-for-profit's clients. ZTE's lifeline may not be so strong after all: the US Administration wants significant concessions and the US Congress seems to want none of it at all. Facebook's EU testimony gets tepid reviews. And a botnet is pushing smart pills and diet supplements—not that any of you will be tempted. Daniel Prince from Lancaster University on risk management and uncertainty. Guest is Sung Cho from SEWORKS on research they did on the security of fitness apps.

You can hear Daniel’s segment HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk on "Security in the financial sector"

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire on “Security in the financial sector”.



25th April 2018 Podcast

In today's podcast, we hear that North Korea has gone big with GhostSecret. Meanwhile, Pyongyang's elite tries to cover its online tracks. PyRoMine uses EternalRomance to disable security systems en route to cryptomining. Russia engages in video disinformation about Syrian nerve agent attacks. A complicated alt-coin heist may be misdirection for something bigger. Huawei may be in trouble over Iran sanctions. Apple patches. Europol takes down Webstresser. General Nakasone confirmed as Director NSA and Commander US CyberCom. Daniel Prince from Lancaster University on security in the financial sector. Guest is Joe Cincotta from Thinking Studio on how smart design leads to better security.

You can hear Daniel’s segment about security in the financial sector HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk about clandestine data transmission and steganography.

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire about clandestine data transmission and steganography.



10th April 2018 Podcast

In today's podcast, we hear that Facebook begins facing the Congressional music today. What are the rules for online research, professors? Experts say they're worried about weaponized IoT hacks. Hoods are exploiting a Cisco switch vulnerability in unpatched systems. Named threat groups and bugs as insider misdirection. As relations between Russia and the West worsen, some in Moscow call an end to Peter the Great's experiment. And how do cybercriminals make their money, and what do they spend it on? Daniel Prince from Lancaster University on clandestine data transmission and steganography. Guest is Gabriel Bassett from Verizon, reviewing his work on the Verizon DBIR report.

You can hear Daniel’s segment about clandestine data transmission and steganography HERE, or listen to the entire podcast HERE.

You can also learn more at

Dr Daniel Prince joins The CyberWire daily podcast to talk about Cyber Security Risk Management.

Dr Daniel Prince, Senior Lecturer in Cyber Security and Associate Director of Security Lancaster, talks to CyberWire about Risk Management & Cyber Security.



26th March 2018 Podcast

In today's podcast we hear that sixty Russian diplomats are now persona non grata in the US. It's the largest such retaliation so far for the Russian nerve agent attack in Salisbury, England. Fear of a Russian riposte against Western power grids remains high. Cambridge Analytica was raided over the weekend in the continuing Facebook data scandal. Facebook faces more difficulties over Android data collection. Notes on malware circulating in the wild. Iran objects to US indictments. Dr Daniel Prince from Lancaster University discusses risk management. And finally, the alleged Carbanak "mastermind" is arrested in Spain.

You can hear Daniel’s segment about Risk Management HERE, or listen to the entire podcast HERE.

You can also learn more at

Forcing Apple to open doors to our digital homes would set a worrying precedent - Dr Daniel Prince

Who controls what in the digital world? Apple is currently involved in a court face-off with the FBI, having refused to produce software that would help investigators unlock the phone of San Bernardino gunman Syed Rizwan Farook. The clash is just the latest illustration of how important access to our data has become, as more and more of our lives are reduced to streams of data.

The process by which Apple could help the FBI weaken the iPhone’s access security is technically feasible. But rather than focus on the technical challenge, we should ask why the FBI has asked Apple to undermine these security mechanisms. The implication of a court judgement in the FBI’s favour – with the precedent it would create – is that law enforcement and the state would gain the right to undermine the security we apply to our devices to access the enormous amount of personal information that we store on them.

Smartphones have become powerful pocket computers, stuffed with data revealing our thoughts and behaviour through records of our online browsing, our social activities, connections to friends and groups, interests and so on. A smartphone is more than a contact list: it provides an intimate portrait of its user. Just as the law affords an individual in their home a degree of privacy, security and protection, so it should treat our phones as digital “homes”.

Following this line of reasoning, consider a smartphone like a serviced apartment. While it is where we “live” digitally, it is also maintained and supported by a third party whose services include the offer of protection from unauthorised entry.

In the physical world, law enforcement may request the right to enter our real homes lawfully by obtaining a warrant. The physical act of gaining entry is rather trivial; it is the legal process that must be worked through first, and that is sometimes trickier to negotiate.

In the case of our digital homes, gaining access can be much more complicated and nuanced. Should the landlord of our digital homes – in this case Apple – be required to help law enforcement force open the door?

If the internet has shown us anything, it’s that technologies once invented can be quickly copied, altered and distributed. Look, for example, at the issues surrounding digital content and copyright infringement. It’s no different in the cyber-security business: tools to break into software or digital devices are quickly replicated, modified and distributed once discovered.

The Stuxnet worm – weapons-grade software that attacked industrial control systems in an Iranian nuclear processing plant – turned up online six months later, with significant parts of the code freely available for anyone to use. The fear is that the same could happen with the tool Apple would be required to create. Worse still, the tool could be used covertly to break into people’s phones and leave no trace.

It comes down to a question of who we should trust to protect us and our digital lives. Traditionally, the role of government was to protect its civil population, and yet in this case it would seem that a global tech corporation is acting as the defender of our civil liberties.

Does society believe there are appropriate legal protections in place that will provide a balance for civil liberties against police and investigators' demands? In the wake of the Snowden revelations among others, it’s fair to say that western societies are questioning whether the current protections are strong enough, and if the legal framework has fallen far behind the pace of technological change.

Infrastructure Resilience: Security beyond the Castle Walls - Dr Andreas Mauthe

Digital technology nowadays provides the cornerstone for many of our economic and social activities. We rely on it for most of our communication needs, for managing industrial processes, within retail and commercial transactions, but also for our social and interpersonal interactions. This dependency is predicted to increase further considering emerging areas such as Internet of Things (IoT), Big Data, autonomous vehicles and transport systems, digital dexterity, etc. Information and Communication Technology (ICT) is at the heart of our Critical Infrastructures and as such has become a Critical Infrastructure component itself.

Our dependence is such that most processes cannot be carried out once parts of the ICT infrastructure fail. Further, systems are interconnected and a failure of one component can bring down most critical services within an entire area.

We have experienced this very poignantly here in Lancaster in the past few days, when, due to the recent flood events, an electricity substation failed, which subsequently brought about the failure of the entire digital communication infrastructure and the ICT systems people and local businesses rely upon in their daily lives. Only the plain old telephone infrastructure remained up, but because most households use cordless handsets, communication through it was in many cases not possible either. This incident demonstrated clearly how vital it is that our digital infrastructure is kept operational under all circumstances.

Cyber Security is one of the areas investigating how infrastructures can be kept safe and operational. However, within the Cyber Security domain this task is repeatedly described through terms such as “securing”, “protecting” and “defending”, often only in the context of singular systems or even individual system parts. Frequently the analogy of a medieval castle is used to explain how to secure ICT systems. However, medieval castles and fortifications became obsolete. This was due, on the one hand, to new weapons that made these defensive structures less effective. On the other hand, society as a whole changed: it became more dynamic and mobile. Hence, the (reduced) safety provided by castles was outweighed by the benefits a more open environment and society had to offer. In the post-castle era, internal security has been provided through policing, whereas for external threats more dynamic and responsive defence strategies have been developed.

Reflecting on today’s ICT-based infrastructures, this means that systems have to be appropriately secured, but at the same time there have to be mechanisms in place to dynamically detect and react to challenges caused by, for instance, cyber attacks, human or system failure, or natural disasters. This has to go hand in hand with ensuring that infrastructures are designed so that there are no single points of failure, and that they have sufficient redundancy, potential back-up capacities and the ability to selectively isolate or remove parts of the infrastructure. Thus, the elements constituting resilient infrastructures are architectural as well as managerial.

Anomaly detection and remediation mechanisms play a key role in the early discovery of the onset of attacks and the launch of countermeasures. After an event, recovery has to take place; it should not just restore the original state, but possibly improve resilience by actively deducing causes and trying to remove the respective vulnerabilities. Since there are interdependencies between infrastructures (as can easily be seen from the recent events), this needs to be carried out in a coordinated manner across different domains, systems, and Critical Infrastructures. Further, infrastructures have to become more adaptive through the use of self-learning and self-healing properties.
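To give a flavour of what anomaly detection means in practice, here is a minimal statistical sketch (illustrative only; the traffic figures and the three-sigma threshold are invented, and production systems use far richer models): flag a traffic sample that sits far above the recent baseline.

```python
# Minimal sketch of statistical anomaly detection, of the kind used to
# spot the onset of floods such as DDoS or spam attacks: flag samples
# that sit far above the recent baseline of traffic counts.
import statistics

def is_anomalous(history, sample, threshold=3.0):
    """Flag `sample` if it is more than `threshold` standard deviations
    above the mean of the recent `history` of traffic counts."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return sample > mean + threshold * stdev

baseline = [100, 98, 103, 97, 102, 99, 101, 100]   # requests per second
print(is_anomalous(baseline, 104))   # False - within normal variation
print(is_anomalous(baseline, 400))   # True  - sudden flood, raise an alarm
```

Real deployments layer many such detectors across domains and feed their alarms into the coordinated remediation described above.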

The benefits of infrastructure resilience investment can only be assessed by the damage it prevents and the opportunities it creates. In order to provide system and infrastructure resilience it will be necessary to review our Critical Infrastructures, assess their architectural and structural properties, and implement active resilience mechanisms. Important mechanisms include anomaly detection (e.g. for the discovery of DDoS or spam attacks) and remediation (e.g. defending infrastructures by removing malicious traffic and processes automatically). Research has demonstrated the feasibility of the discussed concepts. The goal has to be to make our Critical Infrastructures more secure and, more importantly, to make them more resilient and adaptive to all kinds of challenges, so that neither events such as the recent floods nor malicious attacks will result in the complete breakdown of our digital Critical Infrastructure within an entire region.

With thanks to Dr Andreas Mauthe, Reader in Networked Systems, Lancaster University

In the web’s hidden darknet, criminal enterprise is thriving - Dr Daniel Prince

Criminals have always done their best to use new technology to their advantage and the rapid development of new digital technologies and online markets has provided the criminal entrepreneur with as much opportunity for innovation as their legitimate counterpart - Security Lancaster's Dr Daniel Prince explains how criminal enterprise is thriving on the internet.

Europol’s recent Internet Organised Crime Threat Assessment (iOCTA) report spells this out in no uncertain terms, revealing how entire criminal enterprises have developed around using the internet to hawk criminal services to anyone with the cash.

Broadly cybercrime can be broken down into two categories:

Cyber-dependent crime: a criminal act that only exists because of the computer, such as writing and releasing malware or efforts to hack and penetrate computer or network security.

Cyber-enabled crime: a criminal act that is enhanced through the use of technology, such as Ponzi schemes or credit card fraud.

The bulk of cybercrime is cyber-enabled crime, predominantly economic in nature, such as fraud, financial scams, and so on. This is why Action Fraud, the lead cybercrime reporting mechanism in the UK, has joined the National Fraud Intelligence Bureau (NFIB) of the City of London Police. Reports of cybercrime are analysed and then passed on to local police forces, or to the National Crime Agency, which deals with serious and organised crime.

Crime is getting easier

However these classifications hide a very concerning trait: the extent to which technology makes it easy to commit crime at a distance, in anonymity, and with worldwide reach. Committing a crime of potentially equivalent financial return in person, such as a bank robbery, might require getting hold of a gun – a significant barrier of entry for the average person.

On the other hand the tools to conduct cybercrime – hacking software, scanning scripts, keyloggers – can be downloaded freely, if you know where to look. There are even step-by-step video instructions online that explain how to use them. We can see from looking at standard consumer technology that it only takes a few iterations of a product for it to become straightforward to use. So the barrier to entry for cybercrime is very low.

What the iOCTA report highlights is the extent to which crime as a service has matured, as facilitated by the hidden internet.

The concept of a “dark market” where cybercriminals trade their skills and ill-gotten gains was brought to the world’s attention in 2011 through Misha Glenny’s book DarkMarket: Cyberthieves, Cybercops and You, in which he explains how cybercriminal networks trade their services.

A hacker discovers a bug, a vulnerability in a software program. This is sold to another, who writes a program that exploits this vulnerability in order to take control of a computer. Now compromised, this machine – perhaps someone, somewhere’s home PC – is sold on to yet another hacker who might group it with other compromised machines to form a botnet of remotely controlled computers. The botnet becomes a platform – also for sale – from which cybercriminals can launch attacks against websites or networks, for financial, political or even military ends. This criminal market has developed sophisticated systems to establish trust and guarantee financial transactions, while minimising the risk of being traced.

Self-supporting crime

In the past, these types of systems would have been available only to technology-savvy cybercriminals. Now such criminal services can be bought and used by anyone, regardless of their technical skills. What this evolution has revealed is the extent to which other criminal activities, beyond economic crime, are now being supported by these infrastructures.

The drug trade has shifted significantly online in the form of the Silk Road illegal marketplace (which has been shut down) and its many successors. These marketplaces use anonymity software such as Tor to hide the location of the servers and identities of those involved in the transactions, while Bitcoin and other cryptocurrencies allow anonymous financial transactions. It’s not just drugs on offer, with reports of anything from credit card details, to gold, guns and even hit-man services.

The business of supporting online criminality has become a truly global enterprise, with help desks, regular software updates and platform development road-maps created to service the needs of their users. In fact having imitated the very best enterprise approaches and unbound by legislative requirements, they have become innovative and agile businesses.

The thin blue line

This creates a two-tiered, organised criminal enterprise: those committing crimes that directly victimise and those that are automating and supporting the businesses of crime. The question becomes where should law enforcement’s limited resources be allocated: the criminals that carry out the crime, or those that provide the infrastructure that make it possible?

A key issue is the shortage of technical skills, highlighted by a National Audit Office report which suggests it will take 20 years to ensure the development of sufficient skills – it’s a problem found worldwide. With this demand for skills – and the private sector’s capacity to pay more than the public sector – it’s debatable whether law enforcement will be able to recruit the right people. A recent NSPCC report reiterates this, highlighting the shortage of skilled police staff to deal with the number of seized child abuse images requiring review.

Another complication is that many of the technologies criminal enterprises use have legitimate uses. For example, Tor provides a means to communicate anonymously that is vital for groups living under repressive regimes, or whistleblowers. Society has to find a way of balancing the use of privacy technology with the need to investigate criminal activity.

The iOCTA report recommendations for greater collaboration between international law enforcement and improving business cybersecurity are not enough – a step change is needed in the way police agencies deal with the impact of technology. The worry is that, given the skills shortage and ingrained, institutional approaches to enforcement, this step change is still generations away.

The Syrian Electronic Army is rewriting the rules of war - Dr Mark Lacy

Oliver Fitton, PhD candidate in International Relations at Lancaster University, and Dr Mark Lacy, Associate Director of Security Lancaster, review how the Syrian Electronic Army is rewriting the rules of war.

In Dragon Day – a provocative new movie on release in the US in November – we see the consequences of a 'cyber 9/11'. China has attacked the critical infrastructure of the US in a large-scale cyber-attack. The film illustrates one of the dominant fears about cyber-security and cyber-war: a superpower attacking the networked infrastructure that supports all aspects of life in the 21st century.

Some argue that this fictional scenario is unlikely to ever play out in real life because a cyber-war on such a scale would ultimately be too self-destructive in an inter-connected world. Others believe such a cyber-conflict wouldn't take place because it couldn't: fears about the fragility and vulnerability of our networked society are overstated – useful scenarios for Hollywood films but not something we should worry about.

But even if complete societal meltdown is not on the horizon, the attacks coming out of Syria over the past few months are redefining the rules of the game. Until now, the use of cyber-expertise to attack others has been the preserve of rich nations - the technological innovators. Take Stuxnet, the most famous incident of this kind. While the perpetrators of the 2010 attack on an Iranian nuclear facility have never been confirmed, it is widely believed that the US government was responsible.

Cyber-crime emanates from 'shadow economies' all the time, but we see it as just another nuisance of life in the digital age, a 'disease of affluence' that can be controlled through greater precaution and awareness.

The conflict in Syria has created uncomfortable political and strategic problems for Barack Obama and David Cameron but it has traversed geopolitical distance in other ways too. 'Local' conflicts have long had a 'global' dimension through the use of strategies like hijacking planes or kidnapping citizens and while this conflict exhibits all the attributes of the most brutal civil wars of the twentieth century, it has become 'global' through the use of digital strategies. The role played by the Syrian Electronic Army (SEA) challenges our common conception of cyber-security and cyber-war as the terrain of the most 'advanced' and 'developed'.

The SEA developed spontaneously as a group of Facebook users in 2011, during the early days of the Arab uprisings. Its relationship to the Assad regime is much debated and the SEA denies it is state-sponsored. Either way, it certainly lacks the billions of dollars of investment that goes into cyber-superpower projects elsewhere. As the conflict has unfolded, the SEA has proven to be an active and effective presence, using phishing techniques to crack social media accounts, redirect web addresses to its own page and deface other websites. The New York Times has been hit before and this week again fell victim to the group.

A lot of the SEA's work is nuisance. Defaced websites are usually corrected quickly, and hijacked Twitter accounts don't stay that way for long. While it is annoying for consumers and potentially costly for the companies and organisations targeted, these cyber-attacks are far removed from the 'missiles' in the form of code that some have envisaged in the age of cyber-war.

But earlier this year those who saw the SEA as an irritation rather than a threat were silenced. The army made its presence felt in a big way by hijacking the Associated Press' Twitter account and tweeting that President Obama had been injured in an explosion, days after the Boston marathon bombing. As a direct result, the Dow Jones dropped 143 points, temporarily draining $136.5 billion from the US economy.

The 'flash crash' was not due to nervous traders but software which used a technique called algorithmic trading. This monitors news feeds and social media in order to trade automatically based on the real world events it sees. It worked perfectly in this case - except there were no explosions at the White House and President Obama was perfectly safe.
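To illustrate how such news-driven trading logic can react to a single tweet, here is a minimal Python sketch of a headline scanner that emits a crude trade signal. The keyword list, function name and "SELL"/"HOLD" outputs are invented for illustration; real algorithmic trading systems use far more sophisticated natural-language models than this.

```python
# Toy news-driven signal generator (illustrative only).
# A real system would use trained sentiment models, not a keyword list.

NEGATIVE_TERMS = {"explosion", "attack", "injured", "crash"}  # assumed toy list

def headline_signal(headline: str) -> str:
    """Return a crude trade signal based on alarming words in a headline."""
    text = headline.lower()
    return "SELL" if any(term in text for term in NEGATIVE_TERMS) else "HOLD"

# A spoofed headline like the hijacked AP tweet would trigger automatic selling:
print(headline_signal("Breaking: Two Explosions in the White House"))  # SELL
```

This is precisely why one false tweet could move the market: many independent systems running logic like this react within milliseconds, and their combined selling cascades before any human can verify the story.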

This was clearly a limited attack. Ten minutes later, with an apology from the Associated Press, the Dow Jones recovered. The financial wobble was likely to have been an unintended consequence of a typical social media 'crack' rather than an actual attack itself. But there is no denying that the SEA proved that one tweet could impact complex financial systems. What this event proved is that poor cyber-security across multiple systems can coalesce into a much bigger problem. This cascade effect is a significant danger because it takes such basic technical know-how and the holes are nearly impossible to plug.

Some of the methods used by the SEA are simple. The attack on the Associated Press, for example, was more a case of social engineering than hacking. The real problem is imperfections in the cyber-security of these complex and interconnected systems. Twitter's speedy expansion left serious flaws in its security and the Associated Press attack forced the company to accelerate the introduction of two-step verification for accounts. It also forced financial institutions to look at the effectiveness of their algorithmic trading systems, having seen how easily they could be spoofed.

This cascade effect is a genuine threat which is being exploited by diverse groups for strategic reasons (or just for the lulz) - and because it is so easy these attacks are becoming more frequent. The SEA is likely to become a blueprint for cyber 'assets' who emerge in conflicts around the world from an increasingly technically proficient population.

This new element of conflict might not result in a 'middle ranked' state like Syria launching an attack of the type depicted in Dragon Day but the acceleration of technological change and the rapid growth in know-how on all sides might be leading us towards a world in which “action at a distance” becomes more destructive than ever before.

This in turn could radically disrupt the 'great chain of being' that orders the world we live in. William Gibson famously wrote: "The future is already here - it’s just not very evenly distributed." What we see in the conflict in Syria is that the distribution of the 'future' is changing the nature of conflict around the world in ways we need to pay attention to. The SEA has warned that more is to come, and it seems it is to be believed.

How do you judge a crook who uses a laptop instead of a gun? - Dr Daniel Prince

Some recent high-profile crimes have got people thinking about how we should handle those who break the law using digital technologies. Criminal sentencing is decided by the type of crime and a range of factors, such as intention, harm and motive. And yet the question remains as to how we deal with criminals and their use of technology. 

Security Lancaster's Dr Daniel Prince and Claire Hargreaves, of the Department of Mathematics and Statistics, examine the criminal sentencing of cyber-criminals in their blog.

We've held a series of discussions with solicitors, government representatives and others involved in the collection of cyber-crime data. The main conclusion was that in the UK we need to do a much better job of gathering the data to help us make informed decisions about the scale of the cyber-crime problem.

Technology evolves incredibly quickly and cyber-criminals are particularly adept at rolling with the changes. You can rely on them to come up with a new way to commit cyber-crime almost as quickly as we can find ways to protect ourselves.

The implication of this is that what is or is not considered to be a cyber-crime is also evolving. This makes it troublesome to answer fundamental questions if you are going to measure criminal acts. And we need to consider what the implications of this evolution are when attempting to arrest, charge, prosecute and sentence those that commit criminal acts that involve sophisticated technology.

Cyber-crime classification is currently divided into two main categories: computer enabled and computer dependent. Some crimes, such as fraud, can be committed with or without the help of technology and can be considered enabled. Others, such as hacking, could not exist without computers and are normally considered dependent.

We're not in a bad place when it comes to computer dependent cyber-crime. We've got new and updated legislation, such as the Computer Misuse Act, to deal with the new crimes that could not have existed before the advent of a new technology, such as writing a virus. So we can be pretty clear on how to deal with a computer dependent criminal act.

We struggle, though, when we come up against old crimes that are covered by existing legislation but have been reinvented for the digital age. Do we need new legislation to deal with financial crime using the latest smartphone? If this is the case, would we be confident that the due process of checks and balances that are needed when we introduce new legislation could be completed quickly enough to keep up with the rapid evolution of technology?

Impact over technique

Rather than trying to draw up new legislation, perhaps the answer lies in thinking about how criminals use technology to increase the harm they do, just as we already do when we punish acts of violence.

In a fist fight, the impact of physical blows is limited according to the physical strength of the combatants. But if one of them brings a weapon to the fight, their ability to do harm to the other is amplified and is considered an aggravating factor when sentences are handed out.

The use of cyber-attacks to amplify or enhance physical attacks has been a reported military tactic. The Israeli bombing of a suspected Syrian nuclear materials site in 2007 was reported to include a cyber-attack on Syria's radar systems that allowed Israeli jets to fly to their target without being detected. In this case, the digital weapon enhanced the effectiveness of the physical act of combat.

This type of force amplification can be seen in cyber-criminality too. Stock market pump and dump scams are a good example. This is a crime that has been around for years but has become easier and more damaging in the digital age.

As has always been the case in these crimes, dodgy stock brokers spread information to artificially boost the value of dud stock and then encourage unwitting investors to buy them. But while this used to require significant effort on the part of the scammer, who had to make phone calls and press the flesh to fabricate value, it can now be done at the touch of a button. False information can be spread in an instant through emails and online forums, greatly amplifying the scope of the exercise and the damage that can be done.

Online and in the dock

The idea of using sentencing guidelines to manage computer enabled crime has some interesting implications. It shifts the focus onto the impact a crime has on the victim and the impact of using technology rather than the way that technology works. That means the legal system does not necessarily have to understand the technical details of a technology in order to reach a decision, it just has to look at the effect it has had.

This approach means that our existing legislation and criminal definitions can be left alone. Fraud is still fraud, instead of being "computer fraud", which allows us to avoid having to work out whole new rules for a whole new crime.

There is still much work to be done on deciding how we approach this issue but it's important to start making progress. Cyber-criminals aren't going to wait around for us to make up our minds.

In cyber-war, you could change history at the touch of a button - Dr Mark Lacy

Not all violence in war and conflict is simply strategic. And not all the destruction that takes place is a consequence of territorial or geopolitical objectives. Authors Dr Mark Lacy and Dr Daniel Prince, both Associate Directors of Security Lancaster, explore cyber-warfare in their blog.

Taking over the next village, blocking a trade route or destroying the critical infrastructure that supports everyday life are the fundamentals of strategic advance but other actions are intended to undermine morale and have a psychological impact on the victim.

The degradation of the urban environment, or urbicide, is one such action. This is the destruction or desecration of buildings, the eradication of public space, the attempt to erase history and memory through attacks on libraries and sites of historical importance.

Urbicide is not just about physically removing people from a territory, it is an attempt to erase any trace of their existence in that territory. It is rewriting the history books to justify one side of an argument. This is particularly true for religious or ethnic conflicts, where one side aims to undermine the other's right to a disputed piece of land.

Cybercide, the cyber equivalent of this practice, is a relatively new concept but could prove to be an equally powerful tool as we become more dependent on digital services in our daily lives. Yet we rarely think of preparing to defend ourselves against attack in this way.

Digital disruption

Acts of cyber-vandalism are increasingly common and are used to deliver a message. They are symbolic statements that are often used to great effect.

Over a three-week period in April 2007, websites in Estonia were hit by denial of service attacks - a well known technique that aims to debilitate an online service by disrupting the technology on which it runs, such as internet connectivity. The websites of the Estonian parliament, banks and news outlets were hit, disrupting services for people across the country.

The attacks were a response to tensions between the Estonian government and Russian groups over the relocation of the Bronze Soldier of Tallinn and other issues related to Soviet-era war graves.

Estonia's decision to move the statue away from Soviet war graves to the Tallinn Military Cemetery was seen by many as an act of traditional urbicide. The removal of the statue undermined the significance of the war grave sites and could make it easier to suggest they were never there. The denial-of-service attacks - for which a Russian official was later convicted - were an act of digital disruption. It was about attacking Estonian infrastructure and creating a nuisance in a time when people increasingly rely on websites in their everyday lives.

In another example, the Bangladeshi Cyber Army claimed that it had defaced around 1,000 websites in protest against the actions of India's Border Security Force. The attacks began on 7 January 2013, marking the two-year anniversary of the death of a 15-year-old Bangladeshi girl at the Indian border.

The road to cybercide

Incidents like these are not full-blown cases of cybercide but they could well be seen as a sign of things to come. The attacks in Estonia and India were a nuisance but caused only temporary problems that could be resolved. The desecration of websites is more like digital graffiti, a means for those on the margins to circulate messages in public space and leave their mark. These acts may cause offence but there is no obvious permanent damage caused so they are not the same as destroying a bridge or a building. Only those unwary website owners who don't back up their online content suffer long-term problems when attacked in this way.

Cyberspace is fast becoming fundamental to life. The web is now vital for commerce and more aspects of our lives are stored and shaped through digital culture than ever before. It's possible that attacks like those carried out in Estonia or India or by the Syrian Electronic Army could be more permanent and severe.

In the race to digitize more and more aspects of our existence we might be failing to grasp the potential accidents and vulnerabilities on the horizon. The speed and efficiency with which we try to digitize services might limit thinking and planning on the more negative unintended consequences of technological change.

How safe are our financial details, for example? Would a group be able to delete our financial histories or - in conjunction with ethnic cleansing - erase property deeds to make it seem like certain people have no rights on land?

And what of libraries and artefacts? More and more books and music are being published digitally - sometimes with a hard copy version but sometimes without. If, in 30 years' time, we find ourselves in the age of completely digital libraries, a whole new set of vulnerabilities is possible.

In 2009, Kindle owners who had bought the George Orwell classic 1984 woke up one day to find Amazon had simply erased the title from their devices. Hard copies still exist, of course, but which future classics do or will only exist in digital form in the future? A decision such as this by a company or a government could wipe that piece of literature off the face of the Earth. Just as Stalin used what technology was available to him in his time to repaint history, the dictators of the future might try to re-write history by altering the books stored in online national libraries.

Of course, it can be argued that the 'distributed' nature of digital life provides protection. Information is distributed across too many locations to be completely erased, which protects us from the actions of states or criminal organisations that would seek to control information. Anxiety about future cybercide may well be a symptom of living in a time of rapid and disruptive social, economic and political change but that doesn't mean we shouldn't plan for the future.

However, cybercide potentially embodies more subtle forms of social manipulation. It is relatively hard to degrade or alter the urban environment to erase a group of people or a historical artefact without anyone noticing. And yet the subversion and subtle manipulation of digitally held information is the lifeblood of hackers the world over.

What if the aim is not to destroy a whole piece of literature, but to subtly alter the text, say of a school book, to change its meaning or remove passages from disputed literature? These changes may not be noticed in time to prevent them from becoming conventional wisdom or perception. They could change a whole generation's understanding of a historic event or specific social group.

The concept of cybercide provides an infinite spectrum of disruptive possibilities to undermine morale and have a psychological impact on the intended victims, ones that may be more subtle and seditious than we have seen before or could possibly imagine.

This article was originally published on The Conversation. Read the original article.

Employers can predict rogue behaviour using your emails - Professor Paul Taylor

Most office workers send dozens of electronic communications to colleagues in any given working day, through email, instant messaging and intranet systems. So many in fact that you might not notice subtle changes in the language your fellow employees use. The intricacies of our emails are explored in the latest blog from Professor Paul Taylor, Professor of Psychology & Director of Security Lancaster.

Instead of ending their email with "see ya!" they might suddenly offer you "kind regards." Instead of talking about "us" they might refer to themselves more. Would you pick up on it if they did?

These changes are important and could hint at a disgruntled employee about to go rogue. Our findings demonstrate how language may provide an indirect way of identifying employees who are undertaking an insider attack.

My team has tested whether it's possible to detect insider threats within a company just by looking at how employees communicate with each other. If a person is planning to act maliciously to damage their employer or sneak out commercially sensitive material, the way they interact with their co-workers changes.

We discovered this by running day-long simulations of an organisational environment in which we monitored multiple aspects of worker behaviour. We looked at the documents the workers used, who they interacted with and their email content. At the beginning of the day everybody was a co-worker. At the morning coffee break, however, we offered a few people £50 to sneak some information out of the system for us. We then continued to offer bigger incentives for more information as the day went on.

Once they agreed to be an insider, workers showed distinct changes in their email behaviour. They used singular rather than plural pronouns, reflecting a greater focus inwards on themselves. They also showed greater negative affect, as their negativity toward the organisation and its representatives leaked into their outward presentation. Finally, their language became more nuanced and error-prone, reflecting the cognitive impact of having to juggle the double identity of being a colleague and an insider.

There was also an important change at the interpersonal level. While other workers continued to show the degree of language mimicry typical of cooperative interaction, the insiders reduced their mimicry of other workers. This change in behaviour, which is suggestive of inadvertent social distancing, increased over time to a point where it was possible to use this metric to differentiate 92.6% of insiders from their co-workers.
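The pronoun shift described above lends itself to simple automated measurement. The sketch below, with toy word lists chosen purely for illustration (not the study's actual instruments), computes how strongly a message focuses on the self rather than the group.

```python
# Illustrative self-focus score: the fraction of first-person pronouns
# that are singular ("I", "my") rather than plural ("we", "our").

SINGULAR = {"i", "me", "my", "mine", "myself"}
PLURAL = {"we", "us", "our", "ours", "ourselves"}

def self_focus(text: str) -> float:
    """Return the singular share of first-person pronouns, from 0.0 to 1.0."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    singular = sum(w in SINGULAR for w in words)
    plural = sum(w in PLURAL for w in words)
    total = singular + plural
    return singular / total if total else 0.0
```

A sustained rise in this score across an employee's emails, relative to their own baseline, is the kind of signal the simulations picked up; no single message proves anything on its own.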


Your linguistic footprint might make you easier to spot when you are doing wrong, but it also opens avenues for protecting yourself against crime. The field of authorship attribution looks to identify a person's linguistic fingerprint so that they can be identified as authors of pieces of text. That means you can identify a person even if they use multiple identities online.
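A minimal flavour of authorship attribution can be given in a few lines: profile each text by the relative frequency of common function words, then compare profiles with cosine similarity. The word list here is a toy choice for illustration, not a validated stylometric feature set.

```python
import math
from collections import Counter

# Toy function-word list; real stylometry uses hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts' function-word profiles."""
    va, vb = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0
```

Because function-word habits are largely unconscious, two accounts run by the same person tend to score high against each other even when the topics they discuss differ completely.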

This comes in handy in cases such as when you want to identify an adult pretending to be a child in a chatroom. The way adults communicate is fundamentally different from the way teenagers address each other, and even an adult trying to pose as a child lets some of his or her adult tendencies seep through. They might overuse "txt speak", but this style is not as ubiquitous in children's writing as adults expect. The overuse gives them away.

Once identified, these distinctions can be used to drive an early warning system that either alerts the children to the presence of an adult or acts discreetly by alerting the police.

Even in everyday scenarios, the traits that give away our bad behaviour can also be used to protect us. When industry worries about cybersecurity, users, the actual customers, are seen as the thorn in the system. They leave passwords under mouse mats, click links that are quite clearly spam and use Facebook as though only nice people will look at the content.

They are the reason why our technology fails, the cross that the industry must bear. We have to build bigger and better systems so that the irritating, error-prone human can be managed.

Although there are elements of truth in all that, it might be more useful to see humans as an asset. Some of the best security systems are the ones that make the most of the unique characteristics that make us human.

Online banking systems already take advantage of human associative memory - the idea that places, sights, smells and experiences are linked for us in ways that cannot be guessed using an algorithm. In these systems, rather than ask you to present a password, the bank might show you a picture and ask you to recall an associated memory. This is just one way that human memory affords an opportunity for good cybersecurity that other approaches can't beat.

Psychologists have learned to tell quite a lot from user behaviour online and in the workplace. Language use can reveal psychologically important things about who you are and how you are, for example. It can provide clues about your personality, your emotional state, the clarity of your thoughts and the extent to which you are focused on the past, the present or the future.

These all build up to produce a complex picture of the user that could be used as a protective shield. As we try to cope with the myriad cybersecurity threats that affect us daily, this might be the only cast iron technique to ward off those who want to imitate us online for criminal gain.

Human users are imperfect creatures and we have long been exploited for our weaknesses online. But we should also be looking at the problem from the other side. We should use our human qualities to make better decisions about cybersecurity instead of just beating ourselves up over our inability to remember passwords.

Technology could be 'aggravating' factor in sentencing - Dr Daniel Prince

How do we find out about cyber criminals in the UK? This is the question Security Lancaster set out to answer at a workshop with attendees from across our legislative and data collection institutions, ranging from solicitors to government agencies and departments.

A number of conclusions were drawn from discussions with the attendees, the main one being that the collation of suitable public data for making rational and justifiable decisions about the impact of cyber crime in the UK needs to be considerably improved.

One aspect that caught people's attention is the concept of adapting the sentencing guidelines to take into account the use of technology as an aggravating or mitigating factor.

Technology is complicated and evolving fast, with an even faster evolution in social uses of this technology for legitimate and criminal enterprise. The issue comes when you are trying to work out what is and isn't a cyber crime - one of the fundamental questions you need to answer if you are going to measure this criminal act.

Recently, cyber crime classification has evolved to provide two main categories: computer enabled and computer dependent. For the latter category we have created or updated legislation, such as the Computer Misuse Act, to deal with new crimes, crimes which could not have existed before the advent of a new technology. The difficulty comes with old crimes covered by existing legislation, such as fraud, which have been enhanced or adapted through the use of new digital technology; this is the computer enabled category.

Should legislation be updated in respect of a crime like fraud to encompass the new digital techniques, and could we even legislate fast enough to do this? This was one of the distinctions we discussed in depth during the workshop: when new crimes come into being, time must be taken to consider them and to generate appropriate laws of the land.

However, it would appear[1] that many of the cyber crimes that currently occur fall into the bracket of computer enabled: old crimes reinvented for the digital age. How can we possibly deal with such a situation?

The idea of using sentencing guidelines here comes from a point long discussed in the military: digital and cyber-security technologies act as force amplifiers, increasing the impact of an individual's or group's actions on the target.

The same can be seen in cyber criminality. A stock market "pump and dump" scam is greatly amplified through the ability to email nearly everyone connected to the Internet simultaneously, whereas previously the same fraudulent act would have taken considerable effort and time to reach the same number of people and elicit a response.

A simple physical analogy is the concept of a fist fight. If the fight is one on one with no weapons, the impact of the physical blows on each individual is limited to the physical strength of the combatants. If, however, one of the individuals deliberately brings a weapon to the fight, this amplifies their ability to do harm to the other and is considered an aggravating factor in sentencing for crimes such as assault occasioning actual bodily harm. Similarly, an individual's capability to defraud thousands of people is enhanced via the use of digital equipment.

The concept of using sentencing guidelines to manage computer enabled crime has some interesting implications. It creates a focus on the impact to the victim(s), creating a victim focused process.

The focus on the impact of the use of technology, rather than on the way the technology is used, enables the legal system to move away from having to understand the complex technical details of how the technology was used.

Our existing legislation and criminal definitions can be left alone: fraud is still fraud, instead of becoming "computer fraud" with the accompanying complexities that title potentially carries.

And finally for the purposes of the original intention of the workshop, we can ask the simple question, "Was the use of computers a consideration in sentencing?" in order to gain statistical details on the impact of computer enabled crime.

To be clear, neither of us is a legal expert: Claire is an applied statistician and Daniel is a computer scientist. Legal expertise was represented at the workshop, along with criminology, and these ideas and others were developed in collective discussion between individuals who normally sit on opposite sides of the table.

At Security Lancaster, Lancaster University's research centre bringing together research in cyber security, security futures, investigative expertise, violence and society, and transport and infrastructure protection, we believe that multi-disciplinary thinking is required if we are to tackle these new and emergent societal issues.

Through working together and across traditional disciplinary and organisational boundaries, we are asking the "what if?" questions that could provide the solutions to problems we might face in the future.


[1] We say this as our work has demonstrated a lack of substantive evidence to validate the claim.

With the right tech, online bullies can be outsmarted - Professor Awais Rashid

Recent revelations about the frequency with which children experience cyber-bullying have caused alarm among parents, advertisers that feature on social media sites and even the Prime Minister.

Social media services such as Facebook and Twitter enable people from across the world and various walks of life to come together and share materials and experiences. However, they also present the classic dual-use dilemma, whereby technology that is used for good can also be exploited for harm.

Cyber-bullying is one such consequence. Perpetrators have direct and easy access to potential victims 24 hours a day, particularly since many users can now access these sites on mobile phones. The reach of such media is also practically global, so the victimisation doesn't end with the removal of physical proximity (as has been the case in traditional offline bullying).

Arguments are often made that victims should simply disengage from the social media used by perpetrators of bullying. However, the reality is not that simple. Social media sites are now an integral part of young people's daily lives and are becoming ingrained in the social fabric of society. Disengaging from such social media can often mean disengaging from one's friends and family.

We have to accept that young people are going to continue to use social networks, so it might be wise to think about how we can make those networks safe for them, using technological know-how. Technology is not an answer in its own right, but it can be used to reinforce the excellent education work carried out by charities such as Beat Bullying. Used wisely, it can be a linchpin in detecting and apprehending cyber-bullies.

For a start, social networks no longer need to manually read the massive volume of online communications that take place between users to identify bullies. Aggressive and abusive language can be automatically flagged. But the concept of identity can be fluid in the online world, and this has enabled bullies to flourish. It is easy to assume different faces online in a way that is impossible in real life, so cyber-bullies can hide their true selves and even switch identities to continue to victimise someone if they have been pulled up for bad behaviour under another persona.
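As a minimal sketch of the kind of automated flagging described above, the following Python fragment checks messages against a wordlist. The wordlist, threshold-free scoring and example messages are invented for illustration; real platforms use far more sophisticated machine-learning classifiers trained on labelled data.

```python
# Hypothetical, illustrative wordlist - not that of any real social network.
ABUSIVE_TERMS = {"idiot", "loser", "hate you", "nobody likes you"}

def flag_message(text: str) -> bool:
    """Return True if the message contains any flagged term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in ABUSIVE_TERMS)

messages = [
    "See you at practice tomorrow!",
    "You are such a loser, everyone knows it",
]

# Only the abusive second message is flagged for human review.
flagged = [m for m in messages if flag_message(m)]
```

A simple substring check like this is easy to evade (misspellings, slang), which is precisely why production systems move to statistical classifiers.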

One example of how technology can be used to fight cyber-bullies addresses this problem in particular. At the Isis project, we work on resolving the identities of individuals and groups online to make it hard for perpetrators to hide their identities or use multiple personae. By analysing the language used in online communications we can detect key characteristics that distinguish the online interactions of one person from those of another. Social network hosts can then automatically compare communications originating from multiple identities to detect if the same person or group is hiding behind more than one identity and take the necessary action against them if they step out of line.
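A toy illustration of the general stylometric idea, and emphatically not the Isis project's actual method, is to compare character-trigram frequency profiles of messages written under two identities: a markedly higher cosine similarity between two profiles hints that the same person may be behind both. The sample messages below are invented.

```python
from collections import Counter
from math import sqrt

def trigram_profile(texts):
    """Build a character-trigram frequency profile from a list of messages."""
    counts = Counter()
    for t in texts:
        t = t.lower()
        counts.update(t[i:i + 3] for i in range(len(t) - 2))
    return counts

def cosine_similarity(p, q):
    """Cosine similarity of two frequency profiles: 0 = no overlap, 1 = identical."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Invented examples: identities a and b share idiosyncratic spellings; c is formal.
identity_a = ["cant wait 4 2nite lol", "u going 2 the match lol"]
identity_b = ["cant wait lol, u coming 2?"]
identity_c = ["I shall attend the lecture this evening."]

sim_ab = cosine_similarity(trigram_profile(identity_a), trigram_profile(identity_b))
sim_ac = cosine_similarity(trigram_profile(identity_a), trigram_profile(identity_c))
# sim_ab > sim_ac suggests identities a and b may share an author.
```

In practice, authorship-attribution systems combine many more features (vocabulary, punctuation habits, timing) and much larger samples, but the comparison step follows this same shape.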

Another technological solution to this growing problem is to actively engage young people in designing the social networks they use. The UDesignIt project, for example, calls on young people to collaborate on designing their social media environments. The sharp distinction between bully and victim is softened by this collaborative effort and the social media space becomes a safer place to be.

Parents and schools have a role to play to both highlight the risks posed by online interactions, and encourage standards of good behaviour online (just as they do offline). They also know how best to support victims of cyber-bullying when it happens. But it is unfair to expect victims to miss out on the benefits of social media just as it is unfair to tell a mugging victim to stop walking in the streets at night. Thinking smart on this front can help them to have the best of both worlds.

Image Warfare Revisited - Dr Nathan Roger

Dr Nathan Roger was invited to give a seminar at Security Lancaster, where he talked about image warfare and Osama bin Laden, and discussed how, since September 11th 2001, image warfare has replaced techno-war as the dominant warfighting model.

In my recently published book Image Warfare in the War on Terror (Palgrave Macmillan) (which examines the Bush years of the War on Terror and the start of the Obama years), I discuss how, since September 11th 2001, image warfare has replaced techno-war as the dominant warfighting model. I argue that image warfare has been embraced by Al Qaeda while the West is still playing catch-up.

I believe that throughout the Bush years both America and Britain were repeatedly drawn into a dangerous game of mimetic one-upmanship which benefitted Al Qaeda's image warfare: for example, Tony Blair's appearance on al Jazeera in response to bin Laden's first video message after the 9/11 attacks; the Pentagon's publication of death images of Uday and Qusay Hussein; and the Iraqi government's turning of Saddam Hussein's execution into a media spectacle. However, the operation which resulted in the killing of bin Laden on May 2nd 2011 does - in my opinion - represent something of a quantum leap forward in terms of the West's understanding of image warfare, because the Pentagon succeeded in breaking the dangerous cycle of mimetic one-upmanship into which it had fallen with Al Qaeda. It did this by not publishing death images of bin Laden. Instead, images showing bin Laden's hideout immediately after the security operation had been concluded were released to the world's media, along with an image showing a shocked-looking President Obama, Secretary of State Clinton and other senior members of the Obama administration as they viewed the operation live - via a video link - in the White House Situation Room. These images have since become the defining 'image munitions' of the event. After DNA samples had been taken from the body, it was quickly prepared for burial in accordance with Islamic tradition; its identity was confirmed to be that of bin Laden, and he was buried anonymously at sea.

This is evidence that the counterinsurgency strategy, devised by Hank Crumpton, David Kilcullen and other Pentagon officials and mapped out in the 2006 Quadrennial Defense Review, is beginning to be integrated into the Pentagon's image warfare strategy. In Twenty-Eight Articles: Fundamentals of Company-level Counterinsurgency, Kilcullen writes: 'One of the biggest differences between the counterinsurgencies our fathers fought and those we face today is the omnipresence of globalized media.' (Kilcullen, 2006: 6) He also provides the following warning: 'Beware the "scripted enemy", who plays to a global audience and seeks to defeat you in the court of global public opinion.' (ibid.: 6)

The 'scripted enemy' has become even more deadly in the age of camera/video phones as they are no longer solely reliant on journalists and news crews reporting from the scenes of terrorist incidents to get their messages out to the watching world. The proliferation of camera/video phones means that today anyone can potentially record graphic scenes from terrorist incidents and then immediately upload them onto the internet to be watched by a global audience. The July 7th 2005 London Bombings are a powerful example of this as London Underground passengers who were caught up in the attacks recorded what they witnessed on their camera/video phones and so captured and deployed some of the most powerful 'image munitions' of 7/7.

Fast forward to May 22nd 2013 and the brutal murder of Drummer Lee Rigby (2nd Battalion Royal Regiment of Fusiliers) by Michael Adebolajo and Michael Adebowale - near the Royal Artillery Barracks in Woolwich (London); the first successful terrorist attack to take place in Britain since the 7/7 London Bombings. Here the 'scripted enemy' (Adebolajo and Adebowale) succeeded in opening a new chapter in the evolution of image warfare, where the image and the media event have seemingly completely consumed the event itself.

Adebolajo and Adebowale singled out Drummer Rigby to be attacked and killed because he was a British soldier (he was also wearing a 'Help for Heroes' top). They subjected him to an horrendous, brutal and depraved attack and then, instead of fleeing the scene, they remained behind, and Adebolajo (with blood-soaked hands and a meat cleaver in his left hand) called upon the gathered crowd of shocked onlookers to take out their camera/video phones and record what they saw. Adebolajo then delivered a chilling message into the onlookers' recording phones, in which he attempted to justify his actions: 'The only reason we have killed this man today is because Muslims are dying daily by British soldiers. And this British soldier is one. It is an eye for an eye and a tooth for a tooth. By Allah, we swear by the almighty Allah we will never stop fighting you until you leave us alone.' (Telegraph, 2013: unpaginated) Armed police then arrived at the scene; Adebolajo and Adebowale charged at them, and the police responded by shooting the pair in the legs and arresting them. 'Image munitions' of the attack have since circulated globally, causing intense international debate and resulting in a series of anti-Muslim reprisal attacks.


Kilcullen, D. (2006) Twenty-Eight Articles: Fundamentals of Company-level Counterinsurgency. Small Wars Journal, pp. 1-11. [Accessed on 2nd June 2013]

Telegraph (2013) Woolwich Attack: The Terrorist's Rant. [Accessed on 2nd June 2013]

Department of Defense (2006) Quadrennial Defense Review Report. [Accessed on 2nd June 2013]

Why don't organisations adopt cyber security measures? - Tony Dyhouse

Is it easier for smaller organisations to adopt appropriate advice? Persistent problems are rarely easy to solve. It is often necessary to go back to grass roots and question accepted assumptions and theories to make progress. Why are organisations not adopting appropriate cyber security measures?

ICT KTN and Security Lancaster set about finding out. A great deal of cyber security resource and advice has been directed at organisations across the UK over the last few years. We hear about an ever-increasing range of attacks against UK industry, attempting to steal identities and intellectual property.

Yet despite increasing assistance, large organisations keep falling victim to such attacks, usually as a result of human gullibility rather than technological genius. This is understandable due in part to the large number of people they employ - each person effectively forming part of a vulnerability footprint.

So, surely, a smaller organisation should find it easier to adopt appropriate advice? Our Small Business Survey 2012 indicated that this was not the case and that even cyber-savvy SMEs were failing to adopt the 'best practice' measures being regularly suggested. A key finding in the report refers to the current practice of lumping together any company with between 1 and 250 employees as an 'SME'.

When you think about it, that's clearly not sensible due to the differing requirements throughout that size-band. Obvious? Then why do we insist on a 'one-size-fits-all' approach for SMEs? Further, although cyber security professionals insist the sky is falling in, most micro and small businesses don't care because the complexity and the cost of doing something about it would threaten their existence anyway. They often conclude that the treatment is worse than the illness as it takes away their agility and flexibility - their prime survival advantage.

One thing we had predicted is that the "fear, uncertainty and doubt" expressed in a new language and handed out by the bucketful has had a negative effect. Couched in this strange language, which is often a source of contention even for those who claim to understand it, words such as 'attacks', 'hacks' and 'compromise' carry inherent emotion.

As one participant commented: "Cyber security is presented in such a scary way I am not about to poke the wasps' nest to see how scary it actually is!"

Small Business Cyber Security Workshop 2013: Towards Digitally Secure Business Growth

A note on The Cabinet Office Enhanced SAGE Guidance. A strategic framework for the Scientific Advisory Group for Emergencies (SAGE) - Professor Monika Buscher

Six days of airspace closure during the 2010 Icelandic volcanic ash cloud is estimated to have cost the global airline industry US$1.7 billion. The 21st Century has been termed the "century of disasters" and, in an effort to support a response, the Financial Times (2011) lists Disaster Management among the top 10 challenges for science. But utilising science to address disasters is easier said than done, of course.

The Cabinet Office Enhanced SAGE Guidance. A strategic framework for the Scientific Advisory Group for Emergencies (SAGE), published in October 2012, aims to facilitate effective integration of advice from the natural and social sciences into emergency response in the UK. The stakes for such integration are high for individuals, society and scientists. For example, in the days after the Fukushima nuclear accident, SAGE advice was very important for many British nationals in Japan (Julia Longbottom, Foreign Office, House of Commons, 15th June 2011). In an arguably less positive way, scientific models played a part in an - according to a British Airways memorandum - unnecessary closure of airspace for six days during the 2010 Icelandic volcanic ash cloud (House of Commons, March 2011), which caused serious disruption to millions of travellers and is estimated to have cost the global airline industry US$1.7 billion (IATA). The debate shows how high the stakes are for scientists, too: pressed to provide 'decision-relevant' advice about uncertain consequences of complex phenomena like volcanic ash particles in jet engines, they may be drawn to account for over- as well as under-estimating risks from a position of hindsight. The recent verdict in the l'Aquila trial, where six scientists and engineers and an official of Italy's Civil Protection Agency were convicted of manslaughter for providing false reassurances to the public regarding the earthquake, is an extreme example.

The UK Cabinet Office Enhanced SAGE Guidance on the use of scientific and technical advice in emergencies is clearly an important document, and it provides detailed and useful clarification, but it also disappoints. Scientific Advisory Groups for Emergencies (SAGE) are part of a new structure for scientific advice in emergencies, replacing a permanent standing committee, the Scientific Advisory Panel on Emergency Response (SAPER), which provided independent scientific advice to the Government Chief Scientific Adviser (GCSA) until 2009. SAGE, in contrast, are assembled ad-hoc in the event of emergencies that require cross-government coordination and report to ministerial groups within the Cabinet Office Briefing Room (COBR). They liaise with Scientific and Technical Advice Cells (STACs) which advise local emergency responders during emergencies as well as Scientific Advisory Groups (SAGs), and a network of Chief Scientific Advisors (CSAs), who work more permanently with almost every Government Department except the Treasury. The first SAGE was formed during the 2009-10 H1N1 Swine Flu influenza pandemic, the most recent after the Fukushima Daiichi nuclear accident in 2011.

The Enhanced SAGE Guidance is the most recent contribution to a quietly controversial comprehensive restructuring of how scientific advice is sought and used in emergencies in the UK. It has been produced in response to a rather critical, broad ranging inquiry carried out by the Science and Technology Committee (3rd Report, House of Commons, March 2011). The Science and Technology Committee reviewed extensive evidence from stakeholders involved in the 2009-10 H1N1 Swine Flu pandemic, the 2010 volcanic ash crisis, and, with an eye to future risks, experts on space weather and cyber attack.

From this consultation, the committee raise 58 concerns over the restructuring of scientific advice in emergencies in general and SAGE in particular. Most significantly, the inquiry reveals a lack of involvement of a broad base of scientists in the whole emergency management cycle, from risk identification to planning, preparedness, response and recovery. Specifically worrying is a lack of influence in the formulation of the National Risk Assessment. The committee also criticise an unnecessarily secretive approach in forming and conducting SAGE, indeed a carte-blanche approach, and the use of 'reasonable' worst case scenarios, which have led to sensationalised media reporting and, at the same time, concerns that the word 'reasonable' is 'influenced by the need to find a reasonable level of public expenditure for contingency planning rather than outlining the worst scenario that might realistically happen' (3rd Report, House of Commons, March 2011). They also highlight a lack of focus on social and behavioural sciences.

The government first replied in writing (4th Report, House of Commons, May 2011), addressing 47 of the 58 concerns directly, and then provided a supplementary written response to the remaining concerns (6th Report, House of Commons, June 2011). This was followed by oral evidence gathering on the progress of the implementation of the government's response to the committee's concerns by the Science and Technology Committee (15th June 2011), featuring a question and answer session with Professor Sir John Beddington, Government Chief Scientific Adviser, Christina Scott, Head of the Civil Contingencies Secretariat, Cabinet Office, and Julia Longbottom, Head of China Department (formerly Head of Far Eastern Department), Foreign and Commonwealth Office. It is apparent that significant progress has been made in integrating scientific advice into the National Risk Assessment and the longer term cycle of emergency management, as well as in addressing some of the other concerns raised. The Enhanced SAGE Guidance complements and consolidates this process.

The process of reform and review has clearly sensitized many parties to many of the important complexities of integrating scientific advice into disaster management. It has produced some workable collaboration infrastructures, and the Enhanced SAGE Guidance is an important part of this. It specifies when and how a SAGE should be assembled, lines of communication, and interdependencies with more permanent science-disaster management collaboration mechanisms such as SAGs and departmental CSAs. But the process of producing this guidance has also drawn attention to difficulties that are - disappointingly - not addressed when they could be. Most significant amongst these difficulties is that providing decision-relevant scientific analysis in a time-critical manner in situations of great complexity and uncertainty, especially with regard to high impact, low probability risks (such as a volcanic ash cloud or severe earthquake), is risky for scientists, responders and affected communities. On this, the guidance is silent, apart from exploring it as a matter of 'communication'.

The Government Chief Scientific Adviser commissioned the Blackett Review of High Impact Low Probability Risks (Government Office for Science, January 2012), but this, too, remains firmly within a framework that sees scientific knowledge as something to be 'communicated to' emergency responders, the media and the general public. And, filled with a broad anxiety about these recipients' inability or unwillingness to understand messages about risks correctly, the (predominantly natural) scientists involved in the reform of the use of science in emergencies in the UK call for better integration of the social sciences and humanities - frequently referred to as 'behavioural sciences'. Sociologists, psychologists, historians, media and communication experts are expected to have useful insight into the ways in which 'behaviour' determines perception and understanding of science and emergency response command and control measures. This limited and outdated view of social sciences and humanities and practices of sense-making in crises is very disappointing. Social sciences, for example, are not concerned with 'behaviour' but the relational dynamics of society, social and material practices of sense-making and of producing social order, and this perspective is powerful.

Advances within the social sciences show that a view of scientific knowledge as hard and fast, evidence-based, objective fact is flawed. In a study of the use of science in the response to the Chernobyl disaster, Brian Wynne, for example, shows how a more open and reflexive approach that acknowledges uncertainty and takes local knowledge more seriously could create significant opportunities for managing disasters better (1992). More recently, Matthias Gross (2010) has shown how a scientific process that fosters collaborative, multi-disciplinary learning in critical situations can be highly effective. And such collaborative learning should not only involve scientists, government officials, and professional responders, but also citizens. Grassroots appropriation of social media by members of the public for 'crisis informatics' (Palen 2009) and citizen science (Boulos 2011, Kera 2011) has opened the door to new forms of sense-making in emergency situations and new models of citizen engagement, such as 'whole community' security (Pisano-Pedigo 2011) and agile response (Harrald 2006, Perng et al. 2013).

Research and innovation in best practice and technology pays attention to these emerging trends, for example in the systems-of-systems approach to interoperability pursued in the Bridge project (Bridging resources and agencies in large-scale emergency management), where Lisa Wood and I inform innovation with social science studies. It would be useful if UK science policy reform and guidance on the use of science in emergencies could be more open to engagement with social science beyond a concern with 'behaviour'.


Boulos, M. N. K., Resch, B., Crowley, D. N., Breslin, J. G., Sohn, G., Burtner, R., Pike, W. A., et al. (2011). Crowdsourcing, citizen sensing and sensor web technologies for public and environmental health surveillance and crisis management: Trends, OGC standards and application examples. International Journal of Health Geographics, 10(1), 67. doi:10.1186/1476-072X-10-67

Cabinet Office (16th October 2012) Enhanced SAGE Guidance. A Strategic framework for the Scientific Advisory Group for Emergencies (SAGE)  [Accessed 3rd December 2012]

Harrald, J. R. (2006). Agility and Discipline: Critical Success Factors for Disaster Response. The ANNALS of the American Academy of Political and Social Science, 604(1), 256-272. doi:10.1177/0002716205285404

House of Commons (2nd March 2011) Scientific advice and evidence in emergencies. Third Report of Session 2010-11. Volume I. London: The Stationery Office Ltd.  [Accessed 3rd December 2012]

House of Commons (17th May 2011) Scientific advice and evidence in emergencies: Government Response to the Committee's Third Report of Session 2010-12. 4th Special Report of Session 2010-12. London: The Stationery Office Ltd.  [Accessed 3rd December 2012]

House of Commons (14th June 2011) Scientific advice and evidence in emergencies: Supplementary Government Response to the Committee's Third Report of Session 2010-12. 6th Special Report of Session 2010-12. London: The Stationery Office Ltd. [Accessed 3rd December 2012]

House of Commons (15th June 2011) Science and Technology Committee - Minutes of Evidence. Scientific advice and evidence in emergencies: follow-up  [Accessed 3rd December 2012]

Government Office for Science (13th January 2012) Blackett Review of High Impact Low Probability Risks.  [Accessed 3rd December 2012]

Gross, M. (2010). Ignorance and Surprise. Cambridge, MA: MIT Press.

Kera, D. (2011). Entrepreneurs, squatters and low-tech artisans: DIYbio and Hackerspace models of citizen science between EU, Asia and USA. ISEA2011. [Accessed 4th December 2012]

Palen, L., Vieweg, S., Sutton, J., & Liu, S. B. (2009). Crisis Informatics : Studying Crisis in a Networked World. Social Science Computer Review, 27(4), 467-480.

Perng, S.-Y., Büscher, M., Wood, L., Halvorsrud, R., Stiso, M., Ramirez, L., Al-Akkad, A. (2013, forthcoming) Peripheral response: Microblogging during the 22/7/2011 Norway attacks. In IJISCRAM.

Pisano-Pedigo, L. (2011). Partners in Preparedness. Conversations are Building Blocks for the Success of the Whole Community. Denver. [Accessed 4th December 2012]

Wynne, B. (1992). Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science, 1(3), 281-304. doi:10.1088/0963-6625/1/3/004

About the authors...

Professor Paul Taylor

Professor Paul Taylor is the Director of both Security Lancaster & CREST, and is interested in the processes that underpin cooperation and violence. Using experimental, archival and field research, he has studied both the fundamental behavioural and cognitive processes that make human interaction possible and, more practically, the kinds of tactics and policies that promote peaceful resolutions.

Dr Dan Prince

Dr Daniel Prince is an Associate Director and business partnerships manager for Security Lancaster. Prior to this he was the course director for the multi-disciplinary MSc in Cyber Security teaching penetration testing, digital forensics and information security risk management. He now lectures in informed defence and digital forensics as part of the MSc in Cyber Security at Lancaster University.

Dr Mark Lacy

Dr Mark Lacy is an Associate Director of Security Lancaster and the theme lead for Security Futures: Security Futures is an interdisciplinary space to examine the ethical, economic, legal and technical implications of new technologies - to identify new areas of research and to examine the optimism or fear in debates over emerging trends and moral panics about new technologies.

Dr Andreas Mauthe

Dr Andreas Mauthe is a Reader in Computing and Communications. His research is in the area of Networked Systems, focusing on two main areas, Network Management and Multimedia Systems. He has also been working on research related to energy-efficient computer architectures. Through his research, Dr Mauthe is part of both Energy Lancaster and Security Lancaster.

Charles Weir

Charles Weir has thirty years of experience as a researcher, software architect, design consultant and company MD, specialising in human-centred aspects of software development, especially security, software architecture and the workings of development teams. He was app development lead for EE Cash on Tap, the UK’s first commercial Android payments app, and is now working on packages to help teams improve their software security.

Monika Büscher

Monika Büscher is Professor of Sociology, Director of the Centre for Mobilities Research and Associate Director for the Institute for Social Futures at Lancaster University. She co-edits the book series Changing Mobilities. Monika currently leads research on disaster mobilities and ethical, legal and social issues of IT innovation in the EU FP7 SecInCore project.

Dr Nathan Roger

Dr Nathan Roger is an Honorary Research Associate in the Research Institute for Arts and Humanities (RIAH) at Swansea University. He is the author of Image Warfare in the War on Terror (Palgrave Macmillan, 2013); is an Editorial Assistant for the Journal of War and Culture Studies and is an Editorial Review Board member for the Journal of International Relations Research.

Tony Dyhouse

Tony Dyhouse is the Director of the Cyber Security Knowledge Transfer Network and works with a range of public and private sector organisations on matters of cyber security.

Professor Awais Rashid

During his time with Lancaster University, Professor Awais Rashid was a Co-Director of Security Lancaster and head of the Academic Centre of Excellence (ACE) in Cyber Security Research. His focus is on novel software modularity techniques that underpin software, which ties in naturally with his cyber security research on developing tools and techniques that can adapt to the constantly changing threat patterns used by criminals online.