Tag Archive: industry

Boeing 787

Way back in 2008 there were a number of stories floating around that the new Boeing 787, the first production airframe of which was delivered this week, had a serious security weakness. It turns out that Boeing, in their infinite wisdom, had decided to not segregate the flight control systems from the seat-back entertainment systems and would, instead, firewall them from each other.

I’ve been searching online but can’t find any up-to-date information whether this architecture was changed. Some good articles on this came from Wired and Bruce Schneier’s blog. Wikipedia’s 787 entry includes the following:

The airplane’s control, navigation, and communication systems are networked with the passenger cabin’s in-flight internet systems. In January 2008, Boeing responded to reports about FAA concerns regarding the protection of the 787’s computer networks from possible intentional or unintentional passenger access by stating that various hardware and software solutions are employed to protect the airplane systems. These included air gaps for the physical separation of the networks, and firewalls for their software separation. These measures prevent data transfer from the passenger internet system to the maintenance or navigation systems.

The reference to firewalls and air gaps leads me to suspect that these systems are not fully segregated. If this is the case, I really hope that they’ve had some seriously good information security advice. Process control systems, and this is a process control system of sorts, aren’t always as well implemented as they could be. Where there is a safety-critical element, air gaps or data diodes are the only ways to go.

Designing out the vulnerabilities has to be better than retrofitting security afterwards.

I’d welcome comments from anyone, especially those who know more about the actual implementation.

Update: I’ve added another post about this here.

So far this year, hundreds of millions of users of online services have had their accounts compromised or sites taken down. Victims range from Sony, Nintendo, the US Senate, SOCA and Gmail to the CIA, the FBI and the US version of X-Factor. Self-inflicted breaches have occurred at Google, Dropbox and Facebook. Hackers have formed semi-organised super-groups, such as LulzSec and Anonymous. Are we at the point where information security professionals are starting to say, “I told you so”?

The telling thing about nearly all of these breaches is how simple it would have been to limit the impact: passwords stored in the clear, known vulnerabilities left unpatched, corporate secrecy getting in the way of a good PR message, and inconsistent controls across sites of the same brand.
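Storing passwords properly is one of the cheapest of those fixes. As a minimal sketch (the function names are mine, not from any of the breached systems), salting and slow-hashing each password means a dumped database contains nothing directly usable:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed lookup tables.
    salt = salt if salt is not None else os.urandom(16)
    # PBKDF2 with a high iteration count makes brute-forcing a leaked
    # table slow, unlike a plain (or unsalted) hash.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored_digest)
```

A site that stores only the salt and digest leaks no reusable passwords when breached, which would have blunted the impact of several of the incidents above.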

The media’s response is often “hire the hackers!”, an idea that is fundamentally flawed. Would you hire a bank robber to develop the security for a bank? No. The fact is that there are tens of thousands of information security professionals, many of whom are working in the organisations recently attacked, who know very well what needs to be done to fix many of the problems being exploited.

Many corporations have decided to prioritise functionality over security to the extent that even basic security fundamentals get lost. Every organisation needs to re-assess its priorities, because LulzSec and Anonymous will soon realise that there are juicier, easier pickings away from the large corporates and government sites that have had the foresight to invest in information security controls.

This may sadly be just the beginning.

Sony’s Woes

Sony continue to get attacked, over and over again, in different countries and with different impacts. A Google News search for “sony hack” turns up more than 1,880 articles (as of 6 June 2011).

It seems to me that Sony don’t have an effective, consistent strategy for the security of their global online presence. These attacks have gone beyond what the attackers can achieve in terms of compromising systems and are now doing little more than giving Sony’s brand and reputation a beating. Even if a script-kiddie were to deface a small-scale, country-specific website, the mere fact that it happens to be a Sony site guarantees headlines.

As I have said in previous posts, the biggest change the Internet brings is that distance is no longer a factor when dealing with crime: a hack can look like it’s coming from the other side of the world when, in fact, it’s actually being performed in a coffee shop down the road.

Companies facing these types of issues really have to do some serious work in limiting the impact of future attacks. The first step is identifying all of the targets, however tenuous a link they may have with the parent brand, and prioritising them by their connectivity to back-end systems or the sensitivity of the data they hold. Classify them, review the existing controls, then implement consistent controls that make the best use of limited security resources.
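That triage can start as a simple scoring exercise. A sketch of the idea (the site names and weights here are hypothetical, purely for illustration) that ranks web properties by the two criteria just mentioned:

```python
# Hypothetical inventory of sites under one brand; the fields mirror the
# criteria in the text: back-end connectivity and sensitive data held.
sites = [
    {"name": "store.example.com", "backend_link": True,  "sensitive_data": True},
    {"name": "promo.example.jp",  "backend_link": False, "sensitive_data": True},
    {"name": "blog.example.com",  "backend_link": False, "sensitive_data": False},
]

def priority(site):
    # Weight back-end connectivity above stored data: a compromised
    # front-end with a path into core systems is the worst case.
    return 2 * site["backend_link"] + site["sensitive_data"]

# Highest-risk properties first; review and fix controls in this order.
ranked = sorted(sites, key=priority, reverse=True)
```

Even a crude ranking like this makes it harder for a forgotten country-specific microsite to sit unreviewed while effort goes into the flagship domain.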

I’ve heard senior executives at various organisations state that they don’t see the point of implementing good security because they don’t believe they are a target. It’s impossible to say what motivates every hack, but it’s definitely true to say that it costs organisations less in the long run if they do things properly from the start rather than trying to bolt on security processes after a major incident.

Just look at Sony.

Human beings are natural risk assessors. Every decision we take, from when to cross the road, to what food to eat is, at least in part, based on an innate risk assessment.

People are able to do this because in the real world it is possible to see, or imagine, the consequences of an action going wrong. So, when crossing the road, people tend not to walk out in front of a moving car.

Children are the exception; they often undertake risky activities because they don’t have the experience to be able to judge whether what they’re doing is dangerous.

When using a computer connected to the Internet, it is very difficult to judge the threat level or understand the risk, because there is so little information available to inform that judgement.

Companies have spent years trying to work out how much warning users can take before “click-fatigue” sets in. I remember using an early iteration of ZoneAlarm’s firewall, where every minute a pop-up would appear asking me to authorise a particular application, or telling me I was being port-scanned. While I knew the difference between allowing inbound NetBIOS and outbound POP3 access, the vast majority of people don’t. Nor do they know the significance of being port-scanned, of their anti-virus blocking a Trojan horse, or of the issues they face on an unsecured wireless network.

There needs to be a recognition that there’s a difference between technical risks that can result in the compromise of the person’s computer and associated data, and activities that lead to identity theft.

I’d like to see a simple “threat-o-meter” on computers that takes information from the various systems in place on most people’s computers, like the firewall, anti-virus software and the type of network connected to, and displays a simple coloured chart to indicate how worried the user should be.

It could be extended to take information from vulnerability scanning tools, like Secunia, or rate the severity of seeing a particular piece of malware. Add this to some basic information on the configuration of the machine, like password length, firewall configuration or whether auto-updates are enabled and it could provide really useful feedback to the user on how to reduce risk.
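As a sketch of how such a threat-o-meter might combine those signals (the signal names, weights and thresholds are invented for illustration, not taken from any real product):

```python
def threat_level(signals):
    # Each missing protection or risky condition adds to the score.
    score = 0
    if not signals.get("firewall_enabled", False):
        score += 2
    if not signals.get("antivirus_up_to_date", False):
        score += 2
    if signals.get("network_type") == "open_wifi":
        score += 2
    if not signals.get("auto_updates", False):
        score += 1
    if signals.get("password_length", 0) < 8:
        score += 1
    # Collapse the score into the simple coloured chart the user sees.
    if score <= 1:
        return "green"
    if score <= 3:
        return "amber"
    return "red"
```

A well-configured machine on a home network shows green; the same machine joining open café Wi-Fi with stale anti-virus drops to red, without the user needing to understand ports or malware names.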

All of this information is about the context of the device. Most users don’t want to be information security professionals.

Comments welcome.

ITV “Cyber Wars” programme

There was a programme on ITV last night entitled “Cyber Wars”. The title is unfortunate, as the programme was primarily about people being scammed, wireless networks being compromised and identity theft.

STUXNET was mentioned, as was the possibility of the Internet becoming a battlefield. It’s worth a watch, but it is a bit cringeworthy.


Information Warfare

One of the course books I had way back when I was doing my MSc in Information Security at Royal Holloway was entitled “Information Warfare and Security”, written by Dorothy Denning. It was an interesting book and got me thinking about the use of the Internet for military purposes, and how the pervasiveness of the Internet could impact society if it were to be attacked.

The book was written in 1998 and a lot has changed since then; I was still using a 28.8 kbps dial-up modem, and the communications course on my Computer Science degree focused heavily on ATM cell transmission. But the fundamental issues were already there.

The film WarGames was the first to address the possibility of hacking military systems, but the most vulnerable networks now are civilian: those run by organisations that provide utilities and services to the general population, power and water for example. Given that private companies generally don’t spend as much on information security as governments do, there is a risk that they haven’t spent enough. And people are being targeted with sophisticated Trojans whose purpose is unclear.

So, as a country whose critical infrastructure is under attack, how do you:

  1. Determine where the attack is coming from
  2. Determine whether it is state-sponsored or the work of “hacktivists”
  3. Decide what to do in retaliation, if anything

At what point does a cyber-war escalate into a physical one?

I realise that there are plenty of studies around the globe looking at these issues. I am not sure that there has been any final agreement about the implications of declaring Internet war nor under what circumstances. I do know, however, that many countries are developing their cyber warfare capabilities.

A news item that keeps bubbling up in the information security world is about STUXNET, a malicious piece of software that was originally said to target nuclear reactors in Iran. This might seem a bit odd, as most malicious software is pretty random, infecting anything it comes across. This malware seems to have had a very particular purpose.

It has been well known since its discovery that STUXNET targeted SCADA (Supervisory Control and Data Acquisition) systems, which are used in industrial process control environments, essentially providing electro-mechanical control over a logical network, be that the Internet or a dial-up modem. SCADA systems are used all over the place, controlling sluice gates, traffic lights and equipment in nuclear reactors. In general, these systems are kept as far away from public networks as possible, because the results of an infection can often be catastrophic.

However, an article in The Register, referencing a Symantec blog, detailed that this malware was even more specifically targeted. In summary, the article explains how STUXNET was aimed at frequency converter drives made by Fararo Paya of Iran and Vacon of Finland, both, presumably, used in the Iranian nuclear programme. Not only that, but only those drives that operate at very high speeds, between 807 Hz and 1210 Hz. It also had the capability to spread via USB sticks, thereby not being dependent on an accessible process control network.
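The specificity is easier to appreciate written out. As a schematic sketch only (this is illustrative Python, nothing like Stuxnet’s actual PLC-injection code), the reported activation condition amounts to:

```python
# Vendors and frequency window as reported in the Symantec analysis.
TARGET_VENDORS = {"Fararo Paya", "Vacon"}

def drives_are_targeted(vendor, frequency_hz):
    # The payload reportedly engaged only for these vendors' frequency
    # converter drives running between 807 Hz and 1210 Hz; the same
    # drives at ordinary speeds were left alone.
    return vendor in TARGET_VENDORS and 807 <= frequency_hz <= 1210
```

Very few industrial processes run drives in that window, which is why the condition pointed so strongly at one particular application.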

The code reveals that the malware would change the output of the drives, intermittently, over a period of months, thereby disrupting whatever they were controlling, albeit subtly. Interestingly, this type of equipment has export restrictions placed on it by the US as they can be used in the centrifuges that enrich uranium.

One has to assume that the purpose of the malware was to sabotage the Iranian uranium enrichment programme in such a way as to not be discovered.

The reason it got discovered was that it was too successful. Tens of thousands of systems across the world have been infected by STUXNET, notably in Indonesia.

Given the level of targeting and the pre-requisite knowledge of uranium enrichment, was this written by the usual crowd of virus writers, whose main aim is quick profit? Unlikely.

The majority of people I talk to want to do the right things online to protect themselves but don’t know what those things are. Most won’t go hunting for information, because they have to wade through mountains of jargon and impenetrable commentary from all quarters; those who do go looking often give up.

So, I have been organising a series of three evenings at LSE, in the Old Theatre, with eminent speakers to explain what’s going on in the information security world, and how you can protect yourselves.

These will take place on the 19th, 20th and 21st of October from 6.30pm and are open to the general public.

#ssol on Twitter

Social Networking Risks

I went to Royal Holloway this week to give a presentation at the Information Security Group Alumni Conference about my personal views on social networking and the perception of risk. As a short summary, my main points were:

We’re bad at assessing risk

People really can’t tell whether it’s safer to fly than to drive, or whether they’re more likely to drown in a flood than be struck by lightning. It all comes down to the perception of the risk.

Without context, it gets worse

People want to have sensory cues to allow them to work out the context in which they’re operating so that they can assess the risks they’re taking. People are inherently scared of the dark, because they can’t see what’s around them. In the absence of context, people fill the vacuum with the information that they do have: if they’re sitting at home, using the Internet, they are much less wary than in an Internet café in Bangkok but the level of risk hasn’t necessarily changed.

Younger people don’t necessarily have an enlightened view of privacy

While young people growing up today are much more au fait with technology than their forebears were at the same age, they don’t have the life experience with which to assess the long-term impact of their actions. Few people realise that:

  1. It’s very hard to delete anything from the Internet
  2. Large employers will take into consideration anything they find on the Internet about a candidate before deciding whether to employ them
  3. Most things are open anyway, so you should never post anything anywhere that you don’t want made public

Facebook’s Privacy policy includes a section that says:

Risks inherent in sharing information. Although we allow you to set privacy options that limit access to your information, please be aware that no security measures are perfect or impenetrable. We cannot control the actions of other users with whom you share your information. We cannot guarantee that only authorized persons will view your information. We cannot ensure that information you share on Facebook will not become publicly available*. We are not responsible for third party circumvention of any privacy settings or security measures on Facebook. You can reduce these risks by using common sense security practices such as choosing a strong password, using different passwords for different services, and using up to date antivirus software.

*My emphasis

The Information Security Industry’s Responsibility

I’d suggest that we need to make it easier for people to manage their privacy settings, and that social networking sites should have a default-closed policy. The IT industry went through a period of shipping operating systems, network gear and other kit with all of the bells and whistles turned on out of the box, and it was eventually realised that this wasn’t a particularly good way to go. I think the same lesson applies to social networking sites.

I realise that it is in the commercial interest of social media companies to have as much openness as possible and, in Facebook’s case, a founder, Mark Zuckerberg, who truly believes in a transparent society.

But nothing is inevitable, and I, personally, am not happy living in a society where everything is open. People have a legitimate right not to tell everyone everything, in my opinion, and while you could argue that no one is forcing them to post information (which is of course true), no one can claim that people are posting data online in an informed way.

Several people made excellent points after my presentation, including a thought-provoking one about the fact that we don’t know what’s going to happen to all of this data in 10–15 years’ time and how it will affect people (think of insurance companies harvesting data about you and adjusting your premiums, for example). There is a much wider social problem about society’s inability to “forget”, which is beyond the scope of this blog.

Comments welcome
