Tag Archive: perception


New Airport Security Scanners


The security systems at airports are an interesting example of security “theatre”, where much of what goes on is about reassurance rather than being particularly effective. I’ve blogged before about this and had some interesting responses, especially around the intrusiveness of current processes versus their effectiveness and where vulnerabilities lie. For obvious reasons, I won’t go into this.

However, the TSA in the United States is rolling out a new version of its full-body scanner, apparently in response to criticism that the old versions were a step too far: the TSA initially denied, for example, that pictures of people’s naked bodies could be stored, until several incidents emerged of security staff doing exactly that. Apparently this will be available as a software upgrade. The question is, will the UK do the same?

The new scanner overlays identified potential threats from scans onto a generic diagram of the human form, thus masking who the subject is. This has to be a good thing but, as I said in my earlier post, a reliance on technology rather than intelligence-led investigation will always leave vulnerabilities while inconveniencing the majority of people.

I’d rather the people who would do me harm never made it to the airport.


So far, this year, hundreds of millions of users of online services have had their accounts compromised or sites taken down. From Sony, Nintendo, the US Senate, SOCA, Gmail to the CIA, the FBI and the US version of X-Factor. Self-inflicted breaches have occurred at Google, DropBox and Facebook. Hackers have formed semi-organised super-groups, such as LulzSec and Anonymous. Are we at the point where information security professionals are starting to say, “I told you so”?

The telling thing about nearly all of these breaches is how simple it would have been to limit the impact: passwords stored in the clear, known vulnerabilities left unpatched, corporate secrecy getting in the way of a good PR message, and inconsistent controls across sites of the same brand.
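Storing passwords in the clear is the most easily fixed of these. The standard alternative is a salted, deliberately slow key-derivation function, available in Python’s standard library. A minimal sketch (the function names here are my own, illustrative ones):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash suitable for storage instead of the raw password."""
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

If a database of hashes like these leaks, attackers must brute-force each password individually rather than reading them off the page, which is precisely the impact-limiting step the breached sites skipped.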

The media’s response is often “hire the hackers!”, an idea that is fundamentally flawed. Would you hire a bank robber to develop the security for a bank? No. The fact is that there are tens of thousands of information security professionals, many of whom are working in the organisations recently attacked, who know very well what needs to be done to fix many of the problems being exploited.

Many corporations have decided to prioritise functionality over security to the point where even basic security fundamentals get lost. Every organisation needs to reassess its priorities, because LulzSec and Anonymous will soon realise that there are juicier and easier pickings away from the large corporates and Government sites that have had the foresight to invest in information security controls.

This may sadly be just the beginning.


While I am not a lawyer, and others (notably Rob Carolina in his talk “The Cyberspace Frontier has Closed”) have said this before, I thought it worth reviewing some recent developments that demonstrate that the Internet is not lawless and that behaviour online may well result in liabilities “in the real world”.

There still seems to be a perception that laws don’t apply to online activity. Take Joanne Fraill, the juror who was jailed for eight months for contempt of court after contacting one of the defendants in the trial she was sitting on. She had received clear guidance from the judge in the case, as had all of the other jurors, not to research the case online and definitely not to contact anyone related to the trial. I received exactly the same advice when I was a juror at the Old Bailey a couple of years ago.

And, yet, she still did it, no doubt believing that:

  1. It wasn’t so bad; and
  2. She wouldn’t get caught anyway.

She was wrong. The trial collapsed.

This sort of thinking is rife online, and it is exacerbated by the fact that any search will return results confirming every point of view on every subject, which is not much help.

Other online activities whose consequences people should consider include:

  • Copyright infringements
  • Data protection issues
  • Harassment
  • Money laundering
  • Tax evasion
  • Libel

Some of these apply to corporate organisations differently than to individuals. For example, a data protection breach has the potential to seriously damage an organisation’s reputation. Libel may get you a hefty fine.

Just because people have a romantic notion of the Internet where normal laws don’t apply, doesn’t make it reality.

Sony’s Woes


Sony continue to get attacked, over and over again, in different countries with different impacts. Searching Google News for “sony hack” returns more than 1,880 articles (as of 6 June 2011).

It seems to me that Sony don’t have an effective, consistent strategy for securing their global online presence. These attacks have gone beyond compromising systems; they are now simply giving Sony’s brand and reputation a beating. Even if a script-kiddie were to deface a small-scale, country-specific website, the mere fact that it happens to be a Sony site guarantees headlines.

As I have said in previous posts, the biggest change the Internet brings is that distance is no longer a factor when dealing with crime: a hack can look like it’s coming from the other side of the world when, in fact, it’s actually being performed in a coffee shop down the road.

Companies facing these types of attacks really have to do some serious work to limit the impact of future ones. The first task is to identify all of the potential targets, however tenuous their link with the parent brand, and prioritise them by their connectivity to back-end systems and the sensitivity of the data they hold. Classify them, review existing controls, then implement consistent controls that make the best use of limited security resources.
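Even a crude scoring model makes that prioritisation concrete. The sketch below is purely illustrative: the site names, the 1–3 scales and the multiplicative weighting are my own assumptions, not a standard methodology.

```python
# Hypothetical inventory: each site scored 1-3 for connectivity to back-end
# systems and 1-3 for sensitivity of the data it holds.
sites = [
    {"name": "marketing-microsite", "backend_connectivity": 1, "data_sensitivity": 1},
    {"name": "country-storefront",  "backend_connectivity": 2, "data_sensitivity": 3},
    {"name": "account-portal",      "backend_connectivity": 3, "data_sensitivity": 3},
]

def priority(site):
    # Multiply so that a site scoring high on both axes dominates the list.
    return site["backend_connectivity"] * site["data_sensitivity"]

# Review controls on the highest-priority targets first.
for site in sorted(sites, key=priority, reverse=True):
    print(f'{site["name"]}: priority {priority(site)}')
```

The point is not the arithmetic but the discipline: an explicit, ranked inventory means limited security resources go to the sites whose compromise would hurt most, rather than to whichever site was attacked last.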

I’ve heard senior executives at various organisations say that they don’t see the point of implementing good security because they don’t believe they are a target. It’s impossible to say what motivates every hack, but it is definitely true that it costs organisations less in the long run to do things properly from the start than to try to bolt on security processes after a major incident.

Just look at Sony.


Human beings are natural risk assessors. Every decision we take, from when to cross the road to what food to eat, is, at least in part, based on an innate risk assessment.

People are able to do this because in the real world it is possible to see, or imagine, the consequences of an action going wrong. So, when crossing the road, people tend not to walk out in front of a moving car.

Children are the exception; they often undertake risky activities because they don’t have the experience to be able to judge whether what they’re doing is dangerous.

When using a computer connected to the Internet, it is very difficult to judge the threat level or understand the risk, because there is so little information available to inform that judgement.

Companies have spent years trying to work out how much warning to give before “click-fatigue” takes over. I remember using an early version of ZoneAlarm’s firewall, where every minute a pop-up would appear asking me to authorise a particular application, or telling me I was being port-scanned. While I knew the difference between allowing inbound NetBIOS and outbound POP3 access, the vast majority of people don’t. Nor do they know the significance of being port-scanned, of having their anti-virus block a Trojan horse, or of the issues they face on an unsecured wireless network.

There needs to be a recognition that there’s a difference between technical risks that can result in the compromise of the person’s computer and associated data, and activities that lead to identity theft.

I’d like to see a simple “threat-o-meter” on computers that takes information from the various systems in place on most people’s computers, like the firewall, anti-virus software and the type of network connected to, and displays a simple coloured chart to indicate how worried the user should be.

It could be extended to take information from vulnerability scanning tools, like Secunia, or rate the severity of seeing a particular piece of malware. Add this to some basic information on the configuration of the machine, like password length, firewall configuration or whether auto-updates are enabled and it could provide really useful feedback to the user on how to reduce risk.
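A toy version of that “threat-o-meter” is easy to sketch. Everything here is an illustrative assumption on my part: the signal names, the weights and the green/amber/red thresholds are invented for the example, and a real implementation would read these signals from the operating system rather than a dictionary.

```python
def threat_level(signals):
    """Combine basic machine signals into one coloured rating for the user."""
    score = 0
    if not signals.get("firewall_enabled", False):
        score += 2  # no firewall is a major gap
    if not signals.get("antivirus_up_to_date", False):
        score += 2  # stale or missing anti-virus likewise
    if signals.get("network") == "open_wifi":
        score += 2  # unsecured wireless network
    if not signals.get("auto_updates_enabled", False):
        score += 1
    if signals.get("password_length", 0) < 8:
        score += 1
    if score <= 1:
        return "green"
    if score <= 3:
        return "amber"
    return "red"

print(threat_level({"firewall_enabled": True, "antivirus_up_to_date": True,
                    "network": "home", "auto_updates_enabled": True,
                    "password_length": 12}))  # green
```

The design choice that matters is the single coloured output: each extra signal refines the score internally, but the user only ever sees green, amber or red, plus (ideally) one suggested action to improve it.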

All of this information is about the context of the device. Most users don’t want to be information security professionals.

Comments welcome.


I had an interesting conversation yesterday about the concept of eliminating risk completely. It seems that the population at large has been conditioned into thinking everything is safe, that nothing can befall them and that, if it does, they should sue.

One great example of this is the anti-vaccine movement in the US; there’s a really interesting article in Wired about it. Essentially, a group including several high-profile people is trying to convince parents not to vaccinate their children against particular diseases, citing statistics showing a (very low) risk of children developing complications as a result. What they fail to understand is that the alternative carries a much higher risk of the same children developing complications, or dying, from the diseases they would otherwise have been vaccinated against.

The conversation yesterday revolved around airports: as stated in previous posts, I believe that much of the security around airports is misplaced. An awful lot of money is spent on technology to detect very specific threats rather than on a more holistic approach. The problem with having specific controls for specific threats is the threats you don’t have controls for. That’s not to say that threat-focused controls don’t have a place: of course they do.

However, where money can be spent on lowering risk, spending it on devices like the 3D body scanner may not be the most useful option (a device which, incidentally, may apparently raise your risk of getting cancer by more than it lowers your risk of dying in a terrorist incident). But drawing a line and simply saving the money isn’t the solution either.

I truly believe that we have a responsibility to lower the likelihood of incidents where we can, effectively and unintrusively. And this is the perennial security problem: where do you draw the line?


The majority of people I talk to want to do the right things online to protect themselves but don’t know what those things are. That said, most won’t go hunting for information, because it means wading through mountains of jargon and impenetrable commentary from all quarters. Those who do go looking often give up.

So, I have been organising a series of three evenings at LSE, in the Old Theatre, with eminent speakers who will explain what’s going on in the information security world and how you can protect yourself.

These will take place on the 19th, 20th and 21st of October from 6.30pm and are open to the general public.

#ssol on Twitter

Hoax Malware


If you’ve had an email account for any length of time, you will have received an email that probably starts along the lines of:

URGENT! VIRUS!

This information arrived this morning, from Microsoft and Norton. Please send it to everybody you know who accesses the Internet.

You may receive an apparently harmless email with a PowerPoint presentation called “Life is beautiful.pps.”

If you receive it DO NOT OPEN THE FILE UNDER ANY CIRCUMSTANCES, and delete it immediately.

If you open this file, a message will appear on your screen saying: “It is too late now, your life is no longer beautiful”, subsequently you will LOSE EVERYTHING IN YOUR PC and the person who sent it to you will gain access to your name, email and password.

There are lots of these hoaxes floating around the Internet; search for “hoax” at Symantec’s Security Center and you will find hundreds. What people don’t appreciate is that hoaxes also cause damage: people can panic when they don’t have all the facts, and Chinese whispers can distort a fairly benign situation into something seemingly far worse.

An example of this is today’s announcement by Facebook Security that rumours have started about a virus, dubbed the “knob face” virus, supposedly affecting user profiles (the full article is here). The full text states:

Virusspreading like wildfire onFaceBook!! It is a trojan worm called “knob face”. It will steal your info, invade your system and shut it down! Do NOT open the link “Barack Obama Clinton Scandal”! If “SmartGirl15” adds you, don’t accept it; it is a virus. If somebody on your list adds her, ……then……. you get the …………virus too!! Copy and paste to your wall

So, the advice? Don’t forward or post anything like this without checking it out. All it does is create fear and clog up inboxes.

Social Networking Risks


I went to Royal Holloway this week to give a presentation at the Information Security Group Alumni Conference about my personal views on social networking and the perception of risk. As a short summary, my main points were:

We’re bad at assessing risk

People really can’t tell whether it’s safer to fly than to drive, or whether they’re more likely to drown in a flood than be struck by lightning. It all comes down to the perception of the risk.

Without context, it gets worse

People want sensory cues that let them work out the context in which they’re operating, so that they can assess the risks they’re taking. People are inherently scared of the dark because they can’t see what’s around them. In the absence of context, people fill the vacuum with the information they do have: if they’re sitting at home using the Internet, they are much less wary than they would be in an Internet café in Bangkok, but the level of risk hasn’t necessarily changed.

Younger people don’t necessarily have an enlightened view of privacy

While young people growing up today are much more au fait with technology than their forebears were at the same age, they don’t have the life experience with which to assess the long-term impact of their actions. Few people realise that:

  1. It’s very hard to delete stuff from the Internet;
  2. Large employers will take into consideration anything they find on the Internet about a candidate before deciding whether to employ them; and
  3. Most things are open anyway, and you should never post anything anywhere that you don’t want made public.

Facebook’s Privacy policy includes a section that says:

Risks inherent in sharing information. Although we allow you to set privacy options that limit access to your information, please be aware that no security measures are perfect or impenetrable. We cannot control the actions of other users with whom you share your information. We cannot guarantee that only authorized persons will view your information. We cannot ensure that information you share on Facebook will not become publicly available*. We are not responsible for third party circumvention of any privacy settings or security measures on Facebook. You can reduce these risks by using common sense security practices such as choosing a strong password, using different passwords for different services, and using up to date antivirus software.

*My emphasis

The Information Security Industry’s Responsibility

I’d suggest that we need to make it easier for people to manage their privacy settings, and that social networking sites should have a default-closed policy. The IT industry went through a period of shipping operating systems, network gear and other kit with all of the bells and whistles turned on out of the box; it eventually realised that this wasn’t a particularly good way to go. I think the same is true of social networking sites.

I realise that it is in the commercial interest of social media companies to have as much openness as possible and that, in Facebook’s case, they have in Mark Zuckerberg someone who truly believes in a transparent society.

But nothing is inevitable and I, personally, am not happy with living in a society where everything is open. People have a legitimate right not to tell everyone everything, in my opinion, and while you could argue that no one is forcing them to post information (which is of course true), no one can say that people are posting data online in an informed way.

Several people made excellent points after my presentation, including a thought-provoking one about the fact that we don’t know what will happen to all of this data in 10 to 15 years’ time or how it will affect people (think of insurance companies harvesting data about you and adjusting your premiums, for example). There is a much wider social problem around society’s inability to “forget”, which is beyond the scope of this blog.

Comments welcome