
Sony’s Woes

Sony continue to get attacked. Over, and over again. In different countries with different impacts. Searching Google News for “sony hack” comes up with 1880+ articles (6th of June 2011).

It seems to me that Sony don’t have an effective, consistent strategy for dealing with the security of their global online presence. These attacks are now less about what the attackers actually achieve in terms of compromised systems and more about giving Sony’s brand and reputation a beating. Even if a script-kiddie defaces a small-scale, country-specific website, the mere fact that it happens to be a Sony site guarantees headlines.

As I have said in previous posts, the biggest change the Internet brings is that distance is no longer a factor when dealing with crime: a hack can look like it’s coming from the other side of the world when, in fact, it’s actually being performed in a coffee shop down the road.

Companies facing these types of issues really have to do some serious work in limiting the impact of future attacks. The first step is identifying all potential targets, however tenuous a link they may have with the parent brand, and prioritising them in terms of their connectivity to back-end systems or sensitive data. Classify them, review the existing controls, and then implement consistent controls that make the best use of limited security resources.

I’ve heard senior executives at various organisations state that they don’t see the point of implementing good security because they don’t believe they are a target. It’s impossible to say what motivates every hack, but it’s definitely true to say that it costs organisations less in the long run if they do things properly from the start rather than trying to bolt on security processes after a major incident.

Just look at Sony.


Human beings are natural risk assessors. Every decision we take, from when to cross the road, to what food to eat is, at least in part, based on an innate risk assessment.

People are able to do this because in the real world it is possible to see, or imagine, the consequences of an action going wrong. So, when crossing the road, people tend not to walk out in front of a moving car.

Children are the exception; they often undertake risky activities because they don’t have the experience to be able to judge whether what they’re doing is dangerous.

When using a computer connected to the Internet, it is very difficult to judge the threat level or understand the risk, because so little information is available to inform that judgement.

Companies have spent years trying to work out the level of warning before “click-fatigue” takes over. I remember using an early iteration of ZoneAlarm’s firewall, where every minute, a pop-up would appear asking to authorise a particular app, or to tell me I was being port-scanned. While I knew the difference between allowing inbound NetBIOS and outbound POP3 access, the vast majority of people don’t. Nor do they know the significance of being port-scanned, having their anti-virus block a Trojan horse or what issues they face on an unsecured wireless network.

There needs to be a recognition that there’s a difference between technical risks that can result in the compromise of the person’s computer and associated data, and activities that lead to identity theft.

I’d like to see a simple “threat-o-meter” on computers that takes information from the various systems in place on most people’s machines, like the firewall, anti-virus software and the type of network they are connected to, and displays a simple coloured chart to indicate how worried the user should be.

It could be extended to take information from vulnerability scanning tools, like Secunia, or rate the severity of seeing a particular piece of malware. Add this to some basic information on the configuration of the machine, like password length, firewall configuration or whether auto-updates are enabled and it could provide really useful feedback to the user on how to reduce risk.
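As a rough sketch of the idea, here is what such an aggregator might look like. Everything here is invented for illustration: the signal names, the weighting and the thresholds are all assumptions, not a description of any real product.

```python
# Hypothetical "threat-o-meter": fold a handful of machine-context
# signals into a single traffic-light rating. Each missing protection
# adds one point to the risk score; the thresholds are arbitrary.
def threat_level(firewall_on: bool,
                 av_up_to_date: bool,
                 auto_updates_on: bool,
                 network_trusted: bool) -> str:
    score = sum([
        not firewall_on,      # no firewall -> riskier
        not av_up_to_date,    # stale AV signatures -> riskier
        not auto_updates_on,  # unpatched OS -> riskier
        not network_trusted,  # open/unknown wireless -> riskier
    ])
    if score == 0:
        return "green"
    if score <= 2:
        return "amber"
    return "red"


# A patched, protected laptop on an unsecured coffee-shop network:
print(threat_level(True, True, True, False))  # amber
```

The point of the traffic-light output is exactly the one made above: the user sees “amber”, not “inbound NetBIOS blocked”, and can act without needing to be a security professional.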

All of this information is about the context of the device. Most users don’t want to be information security professionals.

Comments welcome.

Stuxnet presentation

I have finally got around to uploading the PowerPoint presentation that I gave at the ISSA Ireland Conference in Dublin at the beginning of the month. Sorry it took so long!

You can get it here.

I heard an interesting rumour at InfoSec Europe regarding Symantec (formerly MessageLabs) and the implications of the IPv6 rollout.

One of the methods currently used for combating spam is through blacklisting of IP addresses. In the new IPv6 world, Symantec will be whitelisting the IPs of mail servers. It’s understandable why but I suspect it will generate a lot of debate when people realise the implications.

Student Loan Scam

There is news today of another scam, targeting students. From the article, it seems that several students fell victim to this and are several thousand pounds out of pocket. In essence, this is just another phishing scam but specifically aimed at students, offering them the possibility of a bursary if they fill in a form with their personal details.

The Student Loans Company have been the bait for a number of scams over the last few years.

Unfortunately, this has happened before and will happen again. The Government have put some advice up here about what to look out for and some general advice on staying safe online here.

IPv6 Challenges

There’s been a lot of discussion about the advantages of IPv6 in the press in recent months, focusing mainly on the huge increase in address space that a migration will give. But there are other features of IPv6 that are both a boon for the individual user and a headache for an IT department. Like many things, it’s a bit of a double-edged sword, one that cannot be ignored indefinitely.

The wonders of IPv4

IPv4 is one version of the “Internet Protocol”, an integral part of TCP/IP which was developed in the mid-1970s as a set of scalable communications protocols. The intention was to keep it as simple as possible, allowing any type of equipment with the right protocol stack installed to communicate with any other device, regardless of what those devices were. In those days, four billion addresses seemed like “enough”.

One consequence of this design strategy was that it included no provision for security: unencrypted traffic, no authentication and any number of potential ways of attacking a victim. To be fair, in those days people had a very different attitude to these networks; it was never envisaged that anyone would want to attack someone else. It just wasn’t “the done thing”.

Fast forward 30 years and a number of things have happened: an explosion in the number of devices connecting to the Internet, malicious software, Denial of Service attacks holding on-line companies hostage and the fear of being snooped on by anyone who has access to your data connection (anyone from the Government to Phorm).

The issue of a fast-shrinking available address space was identified and, to some extent, mitigated by Network Address Translation (NAT), which allows organisations to use the reserved IPv4 address ranges (,, and while only using a limited number of properly routable addresses on the Internet, effectively hiding the machines on their internal networks; virtually all consumer equipment these days is configured to use NAT.
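For illustration, Python’s standard library can tell you whether an address falls within one of those reserved private ranges:

```python
# Check whether an IPv4 address sits in a private (non-routable)
# range of the kind NAT setups use internally.
import ipaddress


def is_private(addr: str) -> bool:
    return ipaddress.ip_address(addr).is_private


print(is_private("192.168.1.10"))  # True  - typical home LAN address
print(is_private("8.8.8.8"))       # False - publicly routable
```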

IPv6 – a new era

In 1998, the IETF published RFC2460 that outlined IPv6 which had a number of features not included in IPv4, including:

  • a vastly increased address space – IPv4 has a total of 4,294,967,296 addresses; IPv6 has 2^128 (approximately 340 undecillion, or 3.4×10^38), which amounts to roughly 5×10^28
    addresses for each of the 6.8 billion people alive in 2010 (taken from Wikipedia)
  • integration of IPSec, including packet authentication and encryption
  • stateless address autoconfiguration

These are real advances over IPv4. However, there are some things that companies do routinely that may become a whole lot more complicated:

Penetration testing: LSE, for example, has been allocated an IPv6 address space which has more available addresses than are available in IPv4 in total. The length of time to scan a space of this size is enormous.
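A back-of-the-envelope calculation shows just how enormous. The probe rate below is an assumption for illustration; real scanners vary, but no realistic rate changes the conclusion:

```python
# Time needed to sweep a single IPv6 /64 subnet (the standard size
# for one LAN segment) at an optimistic one million probes per second.
ADDRESSES_IN_SLASH_64 = 2 ** 64
PROBES_PER_SECOND = 1_000_000  # assumed scan rate

seconds = ADDRESSES_IN_SLASH_64 / PROBES_PER_SECOND
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")  # hundreds of thousands of years
```

And a /64 is one subnet; an allocation like LSE’s contains a great many of them. Brute-force sweeping of the kind penetration testers rely on today simply stops being feasible.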

Firewalling: subnetting works differently in IPv6 than in IPv4, and there is provision for frequent address changes. In addition, with end-to-end IPsec, every outbound connection can effectively open an encrypted tunnel into an organisation’s network, meaning banned traffic can be tunnelled transparently through a firewall that would otherwise block it.

Deep packet inspection: this becomes very difficult if all packets are encrypted.

Web filtering: again, with packet-layer encryption, how can traffic be inspected before it hits the end device?

SSL has always been difficult to monitor on IPv4 networks: companies that need to inspect this traffic have to simulate a man-in-the-middle attack, terminating the connection from the user’s device and re-establishing a secured connection to the requested resource (e.g. a bank), creating a break in the session where the traffic can be inspected. It’s a messy solution and doesn’t go down well with users. In a fully IPv6 world, this type of challenge will be with us every day.

There’s a really great paper on some of these issues here.

Social Engineering

After I wrote my last post on the callous nature of people exploiting the Japanese Tsunami and subsequent problems at the Fukushima nuclear plant, it occurred to me that I haven’t really written much on social engineering.

The easiest way to get someone’s password is to ask for it.

It’s quite simple: people want to be helpful and don’t want to be seen as a problem in their organisation. So, when someone phones them up, saying they work for “IT” and need their user ID and password, most people simply provide it. In many cases, phone lists are available online, so it’s easy to come across as authoritative. It is vitally important to get the message across to all staff that they must never share their passwords; if there is any doubt, people will hand them over to whoever asks, because they don’t want to get into trouble.

There have been several studies into how much people value their personal information. One such study was done at LSE, as part of Project FLAME (pdf) where different types of user information were requested for different levels of incentive (in this case, varying qualities of chocolate) and then verified. The results can be found here.

So, even when people are being blatantly asked for information probably more personal than their work password, they are happy to divulge it.

The Art of Deception

Kevin Mitnick is a former hacker, turned computer security consultant, who knows a lot about social engineering. At the age of 12, he figured out a way of riding the bus system in Los Angeles for free, re-using tickets others had thrown away by modifying them with a hole-punch after a friendly conversation with an LA bus-driver. He subsequently went on to use his ability to convince others to part with information to gain access to a number of high profile systems, including Digital Equipment Corporation (now part of HP), Pacific Bell, Motorola, Nokia, Sun and Fujitsu Siemens.

Much of this activity was done with the unknowing complicity of the staff at these organisations. He has gone on to write a best-selling book, called “The Art of Deception“, which makes for chilling reading and is essential for anyone in the information security industry.

Years ago, I remember listening to something on the Hackers News Network, a quasi-radio station on the Internet that would publish mp3s of sessions they held. In one of these, they phoned a Blockbuster Video store somewhere in the US, pretending to be someone they had found in a phone book. The session was fascinating: the poor shop assistant was trying to be as helpful as possible, but ended up revealing the credit-card number and address of someone the callers knew nothing about beforehand.

There are people out there who are more than willing to abuse the trust of good-natured people. It’s always worth being a little suspicious.

It’s a sad fact that many people exploit human nature for their own ends. The BBC reports that there is a text message circulating in Asia suggesting that radiation has “leaked” [sic] across Asia from the Fukushima power plant in Japan. Sophos’ Graham Cluley has blogged about malware spreading across the globe in the guise of videos supposedly coming from Japan with subject lines like: “VIDEO: The village that escaped the tsunami”, “VIDEO: Struggle for normal life in Japan”, “VIDEO: Woman talks about tsunami escape”, and “Japan tsunami touches New Zealand”.

Other examples include the fake Japanese Tsunami charity appeals, fake CNN footage of the tidal wave, and a Facebook “clickjacking” scam that entices people with the bizarre claim of showing viewers a whale stuck in a building after the Tsunami.

This goes to show that everyone needs to be extra careful when tragedies such as the one in Japan happen, as people will try to hijack the event, appealing to people’s curiosity or good nature for their own purposes. Even viewing a video or clicking on a site may reveal more than you want.

If you want to donate to the relief effort, go directly to a reputable charity.

ITV “Cyber Wars” programme

There was a programme on ITV last night, entitled “Cyber Wars”, which is unfortunate, as it was primarily about people being scammed, wireless networks being compromised and identity theft.

Stuxnet was mentioned, as was the possibility of the Internet becoming a battlefield. It’s worth a watch, but it is a bit cringeworthy.

An interesting story on Slashdot this morning is about a Brazilian report [and here in the original Portuguese] into the effectiveness of free anti-virus software against non-English threats. Admittedly, they only tested six, all of which were free, but the results were pretty disappointing, especially compared to a set of independent statistics (taken from “Virus Bulletin“):

Name                            % detected (report)   % detected (independent stats¹)
Avira                           78%                   99.7%
AVG                             75%                   93%
Panda Cloud                     70.6%                 NDA
Avast!                          69.8%                 98.2%
PC Tools                        64.7%                 NDA
Microsoft Security Essentials   13.4%                 87.1%

¹ These results are from 2009, but give an indication.

So, there are a number of things to draw from this, aside from the fact that no paid-for software was tested. Even if there is a large margin of error, the discrepancy in the results is quite stark and might make large organisations, particularly multi-nationals, re-consider their AV protection. What works in one part of the world may not be quite so effective in another.

It’s also worth mentioning that most anti-virus products use a variety of techniques to detect malicious software, from signatures to heuristics, and these results will almost certainly not reflect real-world detection rates when everything is turned on and additional software, like firewalls and anti-spyware products, is used.
