Tag Archive: privacy


Private Emails


Michael Gove is reported to have been using his private email account and won’t reply to emails sent to his official address. There are so many reasons why this is a bad idea. Here is my (almost certainly incomplete) list just in case the Rt. Hon. Michael Gove happens to pass by:

  1. It’s not based in the UK. In fact, Google pride themselves on not telling you where the data is held (just try finding out);
  2. Google is a US-headquartered company. As per Microsoft’s announcement, the US PATRIOT Act seemingly trumps EU and UK data protection law, even if the data was in the EU;
  3. You can’t encrypt the emails at rest;
  4. There’s no guarantee that the data will be there tomorrow, as this example from Yahoo amply demonstrates;
  5. While Gmail allows you to turn on HTTPS and a form of two-factor authentication, these are optional and probably turned off;
  6. Foreign governments are alleged to have already hacked into Gmail;
  7. On occasion, email accounts have been mixed up, where one person reads someone else’s mail;
  8. These emails may not be retrievable under the Freedom of Information Act.

You only risk what you don’t value. If Mr. Gove believes the emails he receives and sends to be of such low importance that he is willing to put them at this sort of risk, is he the best person to be a cabinet minister?

New Airport Security Scanners


The security systems at airports are an interesting example of security “theatre”, where much of what goes on is about reassurance rather than being particularly effective. I’ve blogged before about this and had some interesting responses, especially around the intrusiveness of current processes versus their effectiveness and where vulnerabilities lie. For obvious reasons, I won’t go into this.

However, the TSA in the United States is rolling out a new version of their full-body scanner, apparently in response to the criticism that the old versions were a step too far: the TSA initially denied, for example, that pictures of people’s naked bodies could be stored, until several incidents emerged of security staff doing exactly that. Apparently this will be available as a software upgrade. The question is, will the UK do the same?

The new scanner overlays identified potential threats onto a generic diagram representing the human form, masking who the subject is. This has to be a good thing but, as I said in my earlier post, a reliance on technology rather than on intelligence-led investigation will always leave vulnerabilities while inconveniencing the majority of people.

I’d rather the people who would do me harm never made it to the airport.


News reaches us of the latest, unannounced Facebook feature: facial recognition. What this implies is that Facebook will trawl through all the photos on the site, automatically “tagging” you in pictures that the system thinks you’re in.

Great time saver, you might think, but there are several things to think about:

  1. It was enabled quietly, without user consent, and users must actively disable the feature
  2. No technology of this sort is 100% accurate, so if you don’t disable it, you may find yourself tagged in embarrassing pictures that aren’t of you
  3. This is an indication of the power of data mining. What’s to stop Facebook mining Google or Bing, looking for pictures on other sites?

With thanks to the Sophos blog on this topic, here’s how you disable it:

Go to Account -> Privacy Settings -> Customise Settings (near the bottom) and go to the “Things others share” section.

Then go down to “Suggest photos of me to friends”, click the edit button and select “Disable”.

If Facebook want to be seen to take privacy seriously, they should start by adopting an opt-in policy for new features.

IPv6 Challenges


There’s been a lot of discussion about the advantages of IPv6 in the press in recent months, focusing mainly on the huge increase in address space that a migration will give. But there are other features of IPv6 that are both a boon for the individual user and a headache for an IT department. Like many things, it’s a bit of a double-edged sword, one that cannot be ignored indefinitely.

The wonders of IPv4

IPv4 is one version of the “Internet Protocol”, an integral part of TCP/IP, which was developed in the mid-1970s as a set of scalable communications protocols. The intention was to keep it as simple as possible, allowing any type of equipment with the right protocol stack installed to communicate with any other device, regardless of what those devices were. In those days, four billion addresses seemed like “enough”.

One of the consequences of this design strategy was that it included no provision for security: unencrypted networks, no authentication and any number of potential ways of attacking a victim. To be fair, in those days, people had a very different attitude to these networks; it was never envisaged that anyone would want to attack someone else. It just wasn’t “the done thing”.

Fast forward 30 years and a number of things have happened: an explosion in the number of devices connecting to the Internet, malicious software, Denial of Service attacks holding on-line companies hostage and the fear of being snooped on by anyone who has access to your data connection (anyone from the Government to Phorm).

The issue of a fast-shrinking address space was identified and, to some extent, mitigated by Network Address Translation (NAT), which allows organisations to use the reserved IPv4 address ranges (192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8) internally while using only a limited number of properly routable addresses on the Internet, effectively hiding the machines on their internal networks. All consumer equipment these days is configured to use NAT.
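As a quick illustration, the three reserved ranges can be checked with Python’s standard ipaddress module; the sample addresses below are arbitrary:

```python
import ipaddress

# The three reserved (RFC 1918) private ranges
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls within one of the reserved private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_rfc1918("192.168.1.10"))  # True: a typical home-router address
print(is_rfc1918("8.8.8.8"))       # False: publicly routable
```

Traffic from the private addresses only reaches the Internet because the NAT device rewrites it to use one of the organisation’s routable addresses.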

IPv6 – a new era

In 1998, the IETF published RFC2460, which outlined IPv6 and included a number of features not present in IPv4:

  • a vastly increased address space – IPv4 has a total of 4,294,967,296 addresses, while IPv6 has 2^128 (approximately 340 undecillion, or 3.4×10^38): roughly 5×10^28 addresses for each of the 6.8 billion people alive in 2010 (figures from Wikipedia)
  • integration of IPSec, including packet authentication and encryption
  • stateless address autoconfiguration
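The address-space figures above can be sanity-checked with a few lines of Python (the population figure is the 2010 estimate quoted above):

```python
# Rough arithmetic behind the quoted figures
ipv4_total = 2 ** 32          # 4,294,967,296 addresses
ipv6_total = 2 ** 128         # approximately 3.4e38 addresses
population_2010 = 6.8e9       # people alive in 2010

per_person = ipv6_total / population_2010
print(f"IPv4 total addresses:   {ipv4_total:,}")
print(f"IPv6 addresses/person:  {per_person:.1e}")  # ~5.0e28
```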

These are real advances over IPv4. However, there are some things that companies do routinely that may become a whole lot more complicated:

Penetration testing: LSE, for example, has been allocated an IPv6 address space containing more addresses than the whole of IPv4. The length of time needed to scan a space of this size is enormous.
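A back-of-envelope calculation makes the point. Assuming a (generous, hypothetical) probe rate of one million addresses per second, the entire IPv4 Internet can be swept in hours, whereas a single standard IPv6 /64 subnet takes geological time:

```python
# Scan-time estimate at an assumed rate of 1M probes/second
probe_rate = 1_000_000        # probes per second (assumed)
ipv4_space = 2 ** 32          # the whole of IPv4
ipv6_subnet = 2 ** 64         # one standard /64 IPv6 subnet

seconds_per_year = 365 * 24 * 3600
print(f"All of IPv4:  {ipv4_space / probe_rate / 3600:.1f} hours")
print(f"One /64:      {ipv6_subnet / probe_rate / seconds_per_year:,.0f} years")
```

Even before considering a whole institutional allocation, brute-force sweeping a single subnet is out of the question; scanning strategies have to change.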

Firewalling: subnetting works differently in IPv6 to IPv4, and there is provision for frequent address changes. In addition, with IPsec built in, every outbound connection can effectively open an encrypted tunnel out of an organisation’s network, meaning banned traffic can be transparently tunnelled through a firewall that would otherwise block it.

Deep packet inspection: this becomes very difficult if all packets are encrypted.

Web filtering: again, with packet-layer encryption, how can traffic be inspected before it hits the end device?

SSL has always been difficult to monitor on IPv4 networks. Companies needing to inspect this traffic have to simulate a man-in-the-middle attack: terminating the user’s connection, inspecting the traffic, and re-establishing a secured connection to the requested resource, e.g. a bank. It’s a messy solution and doesn’t go down well with users. In a fully IPv6 world, this type of challenge will be with us every day.

There’s a really great paper on some of these issues here.

Social Engineering


After I wrote my last post on the callous nature of people exploiting the Japanese Tsunami and subsequent problems at the Fukushima nuclear plant, it occurred to me that I haven’t really written much on social engineering.

The easiest way to get someone’s password is to ask for it.

It’s quite simple: people want to be helpful and don’t want to be seen as a problem in the organisation. So, when someone phones up saying they work for “IT” and need their user ID and password, most people simply provide it. In many cases, phone lists are available online, so it’s easy to come across as authoritative. It is vitally important to get the message across to all staff that they must never share their passwords; if there is any doubt, people will give them to whoever asks because they don’t want to get into trouble.

There have been several studies into how much people value their personal information. One such study was done at LSE, as part of Project FLAME (pdf) where different types of user information were requested for different levels of incentive (in this case, varying qualities of chocolate) and then verified. The results can be found here.

So, even when people are being blatantly asked for information probably more personal than their work password, they are happy to divulge it.

The Art of Deception

Kevin Mitnick is a former hacker, turned computer security consultant, who knows a lot about social engineering. At the age of 12, he figured out a way of riding the bus system in Los Angeles for free, re-using tickets others had thrown away by modifying them with a hole-punch after a friendly conversation with an LA bus-driver. He subsequently went on to use his ability to convince others to part with information to gain access to a number of high profile systems, including Digital Equipment Corporation (now part of HP), Pacific Bell, Motorola, Nokia, Sun and Fujitsu Siemens.

Much of this activity was done with the unknowing complicity of the staff at these organisations. He has gone on to write a best-selling book, “The Art of Deception”, which makes for chilling reading and should be considered essential for anyone in the information security industry.

Years ago, I remember listening to something on the Hackers News Network, a quasi-radio station on the Internet that would publish mp3s of the sessions it held. One of these involved phoning a Blockbuster Video store somewhere in the US, pretending to be someone they had found in a phone book. The session was fascinating: the poor shop assistant was trying to be as helpful as possible, but ended up revealing the credit-card number and address of someone the callers knew nothing about beforehand.

There are people out there who are more than willing to abuse the trust of good-natured people. It’s always worth being a little suspicious.

WikiLeaks


Like everyone else, I’ve been following the WikiLeaks story over the past few weeks, waiting for some juicy titbit to be revealed. I’ve also been wondering: whose fault is it?

This particular question seems to be at the heart of the frenzied arguments relating to Julian Assange: that he should be assassinated, hunted down like Osama Bin Laden, or tried for treason. But does the blame really lie with him?

WikiLeaks publishes content that it gets sent by third parties. In the case of the recent US diplomatic cables, these were apparently supplied by Private First Class Bradley E. Manning, who is currently awaiting trial.

This raises the question: how did a Private manage to get access to over a quarter of a million diplomatic cables, discussing issues as sensitive as various Middle Eastern countries’ attitudes towards Iran?

One of the most basic tenets of information security is that of compartmentalisation, i.e. the basis of “need to know”. It is incredible that any one person, at the level of a Private, could access all of this information.

I would suggest that Private Manning was naïve and broke the law if he did what he is accused of. It would be a gross misuse of trust. But it must be acknowledged that there are serious issues within the security framework of the US Government if this could happen at all.


As we know, the new 3D airport scanners in use across the United States and being introduced in the UK are designed to reveal whether there are any concealed weapons on a person’s body. As discussed in an earlier post, the principle is somewhat flawed, as there are so many ways around this system, especially around the concept of a sterile airport environment post-security. This is analogous to relying on a simple network-perimeter security model in an IT context.

However, the other big problem is the fact that these things take pictures of people’s naked bodies and people are in charge of selecting passengers and reviewing the images. There’s a great article on Gizmodo entitled “TSA Says Body Scanners Saving Images ‘Impossible'” with a saved image from a body scanner in the article. The difficulty here is that this whole area is ripe for abuse.

I do want to make it clear that those performing security checks at airports are doing a decent job. As with any large group of people, especially where there is a certain level of temptation, there will be the odd bad apple. It needs to be made clear that leering at people is not appropriate and is not just “a bit of fun”. Take the case of Donna D’Errico, a former Baywatch star: she has been singled out numerous times for the 3D scanner treatment and accuses the security personnel of voyeurism.

So, given that they can be easily circumvented, is it appropriate to put a system in that can so easily be abused, where there is little chance of redress? Many clubs and companies use x-ray scanners to scan personal possessions prior to entry: would we be happy for 3D scanners to be widely deployed in the same way?

Airport Security


The security at airports, and 3D body scanners in particular, have been in the news a lot recently. The reason I wanted to write about this is that it demonstrates the reactionary way some security is implemented, while actually making things worse for everyone.

It seems that there are two types of security measure: those that reassure the public that something is being done to protect them, and those that actually help. The former is usually a lot less effective than the latter.

Consider traditional airport security. The departure lounge of an airport is considered a “sterile” environment; all of those in it have been screened. For many years, the visible side of this primarily consisted of an x-ray of carry-on luggage and a metal detector for people to walk through. These devices were designed to prevent people bringing knives and guns on board. In addition, hold-baggage cannot travel in an aircraft without an associated passenger as, in general, people don’t want to blow themselves up.

After entering the departure lounge, a passenger has entered into the “sterile” airline system: people transit through different airports and arrive at totally different destinations via different airlines, often without re-screening in transit.

The question is: what type of attack will this actually prevent? Consider the holes in the sterile environment: the baggage handlers, terminal shop staff, flight crews, maintenance workers and the physical security of the airport perimeter.

The additional security measures brought in over the last few years haven’t really addressed these holes; they simply reinforce the idea that something is being done to protect the travelling public. First it was the shoe scanner; then belts had to come off, liquids were banned, and now we have full-body scanners. All of these can be circumvented, and all of them inconvenience the travelling public, which I wouldn’t mind so much if someone could convince me that there is a point to it.

I will, like most people, grudgingly comply, but I wonder what measures are in place to determine whether the benefits justify the cost, and who actually makes that call. It is possible to opt out of the enhanced screening (at least in the US), but this means being patted down physically, which can be traumatic for some people, especially kids.

These new controls are also inconsistently applied across the network of airports, and some airports can opt out of the TSA programme. I have, inadvertently, walked through a metal detector at Heathrow wearing a solid stainless-steel watch, and been through multiple airports with bottles of water. For any control to work, it has to be applied consistently.

This post may come as a surprise, as security people are often portrayed as wanting to lock down the world, but I believe it is both impossible and undesirable to live in a 100% risk-free environment; a balance has to be struck between security and letting people get on with their lives. What I don’t like are controls that are inconsistent and not comprehensive.

Bruce Schneier has much to say on this topic here.

The TSA have also published a video on airport security.

The BBC have an article on the balance between civil liberties and security.


The majority of people I talk to want to do the right things online to protect themselves but don’t know what to do. That said, most people won’t go hunting for information to help themselves because they have to wade through great mountains of jargon and impenetrable comments from all quarters; if they do go looking, many give up.

So, I have been organising a series of three evenings at LSE, in the Old Theatre, with eminent speakers to explain what’s going on in the information security world, and how you can protect yourselves.

These will take place on the 19th, 20th and 21st of October from 6.30pm and are open to the general public.

#ssol on Twitter


This case just goes to show that you really should never post anything online you don’t want the world to see.

In summary, a woman in the US has been claiming that she is largely bed-ridden. The company she works for disputes this, citing pictures of her being active on her Facebook account, and has applied to a judge for access to her Facebook and MySpace postings, including those she has deleted.

It’s not entirely clear from the article whether deleted posts were actually recovered, but Facebook’s privacy policy implies that at least some deleted content can be.

More analysis can be found from The Register.