Tag Archive: risk



I wrote yesterday about the control systems implemented in the Boeing 787 Dreamliner, and the fact that, since the issue was reported in 2008, not much information has been published on the way these systems interoperate, if at all. There have been references to “firewalling” the two networks from each other, and this got me thinking after I posted:

  • Modern aircraft often have lifespans of 30 years or more
  • Some element of the safety case given to the FAA must rest on the fact that there are no inputs into the passenger entertainment system, i.e. there aren’t any network ports in the cabin
  • Some airlines, such as Delta and Lufthansa, are moving to implement WiFi on aircraft
  • Over the 30-year lifespan of an aircraft, the cabin will be upgraded, the entertainment system changed and new services added

Thirty years is a long time to rely on an IT system. There aren’t many operational systems running now that were implemented in 1981, and those that are still running are regarded as very vulnerable to attack and treated very carefully. This is because the types of attack have evolved massively in that time, with even systems implemented just months ago vulnerable to attack.

My question is: how will these security systems be maintained? What if a vulnerability is found in the firewalls themselves? How will the safety case change if the parameters of the entertainment system change? Does the FAA have any recommendations on the logical segregation of traffic if data from, for instance, WiFi hotspots or GSM/3G pico-cells in cabins needs to run over the same cabling infrastructure?
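For concreteness, what “logical segregation” usually means in practice is a default-deny policy between network zones, enforced by the firewall(s) in question. Here is a toy sketch of such a policy check in Python; the zone names and rules are entirely hypothetical:

    # Hypothetical zones sharing one cabin cabling infrastructure
    ALLOWED_FLOWS = {
        ("cabin_wifi", "internet_uplink"),
        ("entertainment", "internet_uplink"),
        # Deliberately, no rule permits any flow into the avionics zone
    }

    def is_permitted(src_zone: str, dst_zone: str) -> bool:
        # Default-deny: any flow not explicitly listed is dropped
        return (src_zone, dst_zone) in ALLOWED_FLOWS

    assert is_permitted("cabin_wifi", "internet_uplink")
    assert not is_permitted("entertainment", "avionics")

The whole safety argument then rests on that rule set and on the device enforcing it, which is exactly why a vulnerability in the firewall itself, discovered at any point in a 30-year lifespan, is such an awkward prospect.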

Again, maybe I have the wrong end of the stick, but I am concerned that, seemingly, no-one’s really looking into the implications of this and, given my own experience, unless these systems are implemented by people with a very deep understanding of process control security, it may not have been thought about.

Private Emails


Michael Gove is reported to have been using his private email account and won’t reply to emails sent to his official address. There are so many reasons why this is a bad idea. Here is my (almost certainly incomplete) list just in case the Rt. Hon. Michael Gove happens to pass by:

  1. It’s not based in the UK. In fact, Google pride themselves on not telling you where the data is held (just try finding out);
  2. Google is a US-headquartered company. As per Microsoft’s announcement, the US PATRIOT Act seemingly trumps EU and UK data protection law, even if the data is held in the EU;
  3. You can’t encrypt the emails at rest;
  4. There’s no guarantee that the data will be there tomorrow, as this example from Yahoo amply demonstrates;
  5. While Gmail allows you to turn on HTTPS and a form of two-factor authentication, these are optional and probably turned off;
  6. Foreign governments are alleged to have already hacked into Gmail;
  7. On occasion, email accounts have been mixed up, with one person able to read someone else’s mail;
  8. These emails may not be retrievable under the Freedom of Information Act.

You only risk what you don’t value. If Mr. Gove believes the emails he receives and sends to be of such low importance that he can put them at this sort of risk, is he the best person to be a cabinet minister?


This week, LSE received a couple of calls from “Microsoft”, claiming that they had detected a virus on the user’s PC and asking whether they could install an update. Luckily, the person they called is in our support team and she managed to string them along for a bit. We managed to get the originating telephone number, apparently a Croatian one, and have passed it on to the Police.

It’s worth following up on these calls, which are blatant social engineering attempts, and informing staff. We have had reports that Skype users are also being targeted.


So far this year, hundreds of millions of users of online services have had their accounts compromised or the sites they use taken down: from Sony, Nintendo, the US Senate, SOCA and Gmail to the CIA, the FBI and the US version of X-Factor. Self-inflicted breaches have occurred at Google, DropBox and Facebook. Hackers have formed semi-organised super-groups, such as LulzSec and Anonymous. Are we at the point where information security professionals are starting to say, “I told you so”?

The telling thing about nearly all of these breaches is how simple it would have been to limit the impact: passwords stored in the clear, known vulnerabilities left unpatched, corporate secrecy getting in the way of a good PR message, and inconsistent controls across sites of the same brand.
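To illustrate the first of those points: salted password hashing takes only a few lines of standard-library code. Here is a minimal sketch in Python; the function names are mine and the iteration count is illustrative, not a recommendation:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Store (salt, digest) so only a one-way derivation is ever kept
        salt = os.urandom(16)  # unique salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

An attacker who steals the credential table then faces a slow brute-force per password, rather than walking away with a ready-made list of working logins.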

The media’s response is often “hire the hackers!”, an idea that is fundamentally flawed. Would you hire a bank robber to develop the security for a bank? No. The fact is that there are tens of thousands of information security professionals, many of whom are working in the organisations recently attacked, who know very well what needs to be done to fix many of the problems being exploited.

Many corporations have decided to prioritise functionality over security to the extent that even basic security fundamentals get lost. There needs to be a reassessment of every organisation’s priorities, as LulzSec and Anonymous will soon realise that there are juicier and easier pickings away from the large corporates and government sites that have had the foresight to invest in information security controls.

This may sadly be just the beginning.


News reaches us of the latest, unannounced Facebook feature: facial recognition. What this implies is that Facebook will trawl through all the photos on the site, automatically “tagging” you in pictures that the system thinks you’re in.

Great time saver, you might think, but there are several things to think about:

  1. It was enabled, quietly, without user consent and requires users to actively disable the feature
  2. No technology of this sort is 100% accurate, so if you don’t disable it, you may find yourself tagged in embarrassing pictures that aren’t of you
  3. This is an indication of the power of data mining. What’s to stop Facebook mining Google or Bing, looking for pictures on other sites?

With thanks to the Sophos blog on this topic, here’s how you disable it:

Go to Account -> Privacy Settings -> Customise Settings (near the bottom) and go to the “Things others share” section.

Then go down to “Suggest photos of me to friends” and click the edit button.


Then select “Disable”.

If Facebook want to be seen to be taking privacy seriously, they should start by adopting a policy of opt-in for new features.

Sony’s Woes


Sony continue to get attacked, over and over again, in different countries and with different impacts. Searching Google News for “sony hack” returns more than 1,880 articles (as of 6 June 2011).

It seems to me that Sony don’t have an effective, consistent strategy for dealing with the security of their global online presence. These attacks have gone beyond what the attackers can achieve in terms of compromising systems and are now simply giving Sony’s brand and reputation a beating. Even if a script-kiddie were to deface a small-scale, country-specific website, the mere fact that it happens to be a Sony site guarantees headlines.

As I have said in previous posts, the biggest change the Internet brings is that distance is no longer a factor when dealing with crime: a hack can look like it’s coming from the other side of the world when, in fact, it’s actually being performed in a coffee shop down the road.

Companies facing these types of issues really have to do some serious work to limit the impact of future attacks. The first step is identifying all of the targets, however tenuous a link they may have with the parent brand, and prioritising them in terms of their connectivity to back-end systems or sensitive data. Classify them, review the existing controls, and then implement consistent controls that make the best use of limited security resources.
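Here is a hedged sketch of what that triage might look like in code, with entirely hypothetical site names and a deliberately crude scoring scheme:

    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        backend_connectivity: int  # 0 = isolated, 3 = direct access to core systems
        data_sensitivity: int      # 0 = public content, 3 = customer/payment data

        @property
        def priority(self) -> int:
            # Simple additive score; a real model would weight these factors
            return self.backend_connectivity + self.data_sensitivity

    # Hypothetical estate, from flagship store to long-forgotten promo site
    estate = [
        Site("store.example.com", backend_connectivity=3, data_sensitivity=3),
        Site("promo2004.example.fr", backend_connectivity=0, data_sensitivity=1),
        Site("support.example.co.jp", backend_connectivity=2, data_sensitivity=2),
    ]

    for site in sorted(estate, key=lambda s: s.priority, reverse=True):
        print(f"{site.priority}: {site.name}")

The point is less the scoring than the inventory: you can’t apply consistent controls to sites you don’t know you own.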

I’ve heard senior executives at various organisations state that they don’t see the point of implementing good security because they don’t believe they are a target. It’s impossible to say what motivates every hack, but it’s definitely true to say that it costs organisations less in the long run if they do things properly from the start rather than trying to bolt on security processes after a major incident.

Just look at Sony.


Human beings are natural risk assessors. Every decision we take, from when to cross the road to what food to eat, is, at least in part, based on an innate risk assessment.

People are able to do this because in the real world it is possible to see, or imagine, the consequences of an action going wrong. So, when crossing the road, people tend not to walk out in front of a moving car.

Children are the exception; they often undertake risky activities because they don’t have the experience to be able to judge whether what they’re doing is dangerous.

When using a computer connected to the Internet, it is very difficult to judge the threat level or understand the risk, because there is so little information available to inform that judgement.

Companies have spent years trying to work out how much warning users will tolerate before “click-fatigue” takes over. I remember using an early iteration of ZoneAlarm’s firewall, where every minute a pop-up would appear asking me to authorise a particular app, or telling me I was being port-scanned. While I knew the difference between allowing inbound NetBIOS and outbound POP3 access, the vast majority of people don’t. Nor do they know the significance of being port-scanned, of having their anti-virus block a Trojan horse, or of the issues they face on an unsecured wireless network.

There needs to be a recognition that there’s a difference between technical risks that can result in the compromise of the person’s computer and associated data, and activities that lead to identity theft.

I’d like to see a simple “threat-o-meter” on computers that takes information from the various systems in place on most people’s computers, like the firewall, the anti-virus software and the type of network the machine is connected to, and displays a simple coloured chart to indicate how worried the user should be.

It could be extended to take information from vulnerability scanning tools, like Secunia’s, or to rate the severity of seeing a particular piece of malware. Add this to some basic information on the configuration of the machine, like password length, firewall configuration or whether auto-updates are enabled, and it could provide really useful feedback to the user on how to reduce risk.
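A minimal sketch of how such a “threat-o-meter” might collapse those signals into a traffic-light rating; the signal names, weights and thresholds are invented purely for illustration:

    # Hypothetical signals gathered from the systems described above
    signals = {
        "firewall_enabled": True,
        "antivirus_up_to_date": False,
        "on_open_wifi": True,
        "auto_updates_enabled": True,
        "password_length": 6,
    }

    def threat_level(s: dict) -> str:
        score = 0
        if not s["firewall_enabled"]:
            score += 2
        if not s["antivirus_up_to_date"]:
            score += 2
        if s["on_open_wifi"]:
            score += 1
        if not s["auto_updates_enabled"]:
            score += 1
        if s["password_length"] < 8:
            score += 1
        # Collapse the score into the simple coloured indicator
        return "green" if score <= 1 else "amber" if score <= 3 else "red"

    print(threat_level(signals))  # -> "red"

The hard part, of course, is not the arithmetic but choosing signals and weights that match real risk.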

All of this information is about the context of the device. Most users don’t want to be information security professionals.

Comments welcome.


An interesting story on Slashdot this morning concerns a Brazilian report [and here in the original Portuguese] into the effectiveness of free anti-virus software against non-English threats. Admittedly, they only tested six products, all of them free, but the results were pretty disappointing, especially when compared to a set of independent statistics (taken from “Virus Bulletin”):

Name                            % detected (in the report)   % detected (independent stats¹)
Avira                           78%                          99.7%
AVG                             75%                          93%
Panda Cloud                     70.6%                        NDA
Avast!                          69.8%                        98.2%
PC Tools                        64.7%                        NDA
Microsoft Security Essentials   13.4%                        87.1%

¹ These results are from 2009, but give an indication.

So, there are a number of things to draw from this, aside from the fact that no paid-for software was tested. Even allowing for a large margin of error, the discrepancy in the results is quite stark, and might make large organisations, particularly multi-nationals, reconsider their AV protection. What works in one part of the world may not be quite so effective in another.

It’s also worth mentioning that most anti-virus products use a variety of techniques to detect malicious software, from signatures to heuristics, and these results will almost certainly not reflect real-world detection rates if everything is turned on and additional software, like firewalls and anti-spyware products, is used.
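To see why the layering matters, assume, purely for illustration and optimistically treating the layers as independent, a signature engine that catches 75% of samples and a heuristic layer that catches 60% of what slips through:

    # Illustrative figures only; real detection layers are not independent
    p_signature = 0.75  # proportion caught by signatures
    p_heuristic = 0.60  # proportion of the remainder caught by heuristics

    p_missed = (1 - p_signature) * (1 - p_heuristic)
    print(f"Combined detection: {1 - p_missed:.0%}")  # -> 90%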

Hacktivism vs cyberwar?


The BBC have an interesting article, entitled “Is cyber-warfare a genuine threat?”, which poses several thought-provoking questions. There is a general consensus that something needs to be done to allow for a consistent approach to cyber conflict.

All this relates to the document entitled “[the] First Joint Russian-U.S. report on Cyber Conflict”, created by the EastWest Institute. Some of the things they looked at were:

  • Just as the Red Cross emblem designates a protected entity in the physical world, is it feasible to use special markers to designate protected zones in cyberspace?
  • Should we reinterpret the Conventions’ principles in light of the fact that cyber warriors are often non-state actors?
  • Are certain cyber weapons analogous to weapons banned by the Geneva Protocol?
  • Given the difficulties in coming up with an agreed definition for cyber war, should there be a third, “other-than-war” mode for cyberspace?

One of the things that comes out of this document is the need to provide real-world analogies for issues on the Internet, in order to contextualise them and come up with an appropriate response. If you sit at a desktop PC as an end-user, you have absolutely no idea what’s going on on the Internet beyond what’s currently displayed on your screen. This opacity has a number of consequences:

  • Most people take risks that they wouldn’t take if they understood the threat they faced;
  • Hacktivists or casual hackers have no understanding of the damage that they do or the power that they wield, resulting in potentially catastrophic consequences.

In light of my previous post about Hacktivism, is there a danger that, if the definition of cyberwar is too strict, a teenager in his bedroom could start a global conflict? As one comment indicated, the power in the hands of an individual can far outweigh the power they would have in the real world and, therefore, to some extent, everyone is equal. Where are the boundaries? And what should be sacred? The document outlines some ideas about having an agreed set of “neutral” entities, like the Red Cross or Red Crescent, but who is entitled to agree on the list?

Traditionally, only militaries had the capability to wage war and, therefore, it was appropriate for their associated governments to sign treaties that governed the rules of war. Now, however, everyone has the same potential.

While you can control the substances needed to make bombs, you can’t control the creation of code.

Update

This post prompted a lot of discussion offline, summarised thus:

  • The biggest problem is determining accurately where an attack comes from in order to respond to it;
  • Compromised machines will become the main launch-pad for attacks, as they allow for deniability on the part of the originator of an attack;
  • The “super powers” will probably want to have the ability to respond conventionally to a cyber-attack, as online they don’t have the same overwhelming power as they do in the real world;
  • “Protected organisations” will quickly find themselves exploited as launch-pads for attacks if they’re not very well defended.

I had an interesting conversation yesterday about the concept of eliminating risk completely. It seems that the population at large have been conditioned into thinking that everything is safe, that nothing can befall them and that, if something does, they should sue.

One great example of this is the anti-vaccine movement in the US. There’s a really interesting article in Wired about this. Essentially, a group including several well-known, high-profile figures is trying to convince parents not to vaccinate their children against particular diseases, citing statistics that show there is a (very low) risk of children developing complications as a result. What they fail to understand is that the alternative carries a much higher risk of the same children developing complications, or dying, from the disease they would otherwise have been vaccinated against.
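The comparison is simple expected-value arithmetic. A toy calculation, with numbers invented purely for illustration rather than taken from any real epidemiology:

    # Hypothetical probabilities, for illustration only
    p_vaccine_complication = 0.0001  # serious vaccine side-effect
    p_infection_unvaccinated = 0.05  # catching the disease if unvaccinated
    p_serious_if_infected = 0.01     # serious complications once infected

    risk_vaccinated = p_vaccine_complication
    risk_unvaccinated = p_infection_unvaccinated * p_serious_if_infected

    print(f"Vaccinated:   {risk_vaccinated:.4%}")    # -> 0.0100%
    print(f"Unvaccinated: {risk_unvaccinated:.4%}")  # -> 0.0500%

Even with a generous estimate of the vaccine risk, the unvaccinated path carries several times the expected harm.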

The conversation yesterday revolved around airports: as stated in previous posts, I believe that much of the security around airports is misplaced. An awful lot of money is spent on technology to detect very specific threats rather than taking a more holistic approach. The problem with having specific controls for specific threats is the threats you don’t have controls for. That’s not to say that threat-focused controls don’t have a place: of course they do.

However, where there is money that can be spent on lowering risk, spending it on devices like the 3D body scanner (which, incidentally, apparently could raise your risk of getting cancer by more than it lowers your risk of dying in a terrorist incident) may not be the most useful approach; but simply drawing a line and saving the money isn’t the solution either.

I truly believe that we have a responsibility to lower the likelihood of incidents where we can, effectively and unintrusively. And this is the perennial security problem: where do you draw the line?