Device security and the FBI

Published by Martin Kleppmann on 30 Mar 2016.

This article was originally published on The Conversation under the title “FBI backs off from its day in court with Apple this time – but there will be others”.

After a very public stand-off over a terrorist’s encrypted smartphone, the FBI has backed down in its court case against Apple, stating that an “outside party” – rumoured to be an Israeli mobile forensics company – has found a way of accessing the data on the phone.

The exact method is not known. Forensics experts have speculated that it involves tricking the hardware into not recording how many passcode combinations have been tried, which would allow all 10,000 possible four-digit passcodes to be tried within a fairly short time. This technique would apply to the iPhone 5C in question, but not newer models, which have stronger hardware protection through the so-called secure enclave, a chip that performs security-critical operations in hardware. The FBI has denied that the technique involves copying storage chips.
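The scale of such an attack is easy to see: with the retry counter neutralised, a four-digit passcode offers only 10,000 possibilities, which a machine can enumerate almost instantly. A minimal sketch of the idea, where `check_passcode` is a hypothetical stand-in for whatever hardware interface a forensics tool would query (here simulated against a fixed secret for illustration):

```python
SECRET = "7391"  # the passcode we are trying to recover (simulation only)

def check_passcode(guess: str) -> bool:
    """Hypothetical oracle: True if the guess unlocks the device."""
    return guess == SECRET

def brute_force():
    # Enumerate 0000 .. 9999 -- at most 10,000 attempts, trivial
    # once the hardware no longer counts failed tries.
    for n in range(10_000):
        guess = f"{n:04d}"  # zero-padded four-digit string
        if check_passcode(guess):
            return guess
    return None

print(brute_force())  # -> 7391
```

In practice the bottleneck is not the search space but the per-attempt delay the hardware enforces, which is exactly why bypassing the retry-counting mechanism defeats the protection.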

So while the details of the technique remain classified, it’s reasonable to assume that any security technology can be broken given sufficient resources. In fact, the technology industry’s dirty secret is that most products are frighteningly insecure.

Even when security technologies are carefully designed and reviewed by experts, mistakes happen. For example, researchers recently found a way of breaking the encryption of Apple’s iMessage service, one of the most prominent examples of end-to-end encryption (which ensures that even the service provider cannot read the messages travelling via its network).

Most products have a much worse security record, as they are not designed by security experts, and often contain flaws that are easily found by attackers. Examples abound: internet-connected baby monitors that could be hacked, allowing strangers to talk to a child at night; insecure cars that could be controlled via an internet connection while being driven; drug infusion pumps at US hospitals that could be hacked to manipulate drug dosage levels.

Even national infrastructure is vulnerable, with software weaknesses exploited to cause serious damage at a German steel mill, bring down parts of the Ukrainian power grid, and alter the mix of chemicals added to drinking water. While our lives depend more and more on “smart” devices, they are frequently designed in incredibly stupid ways.

Insecure by design

The conflict between Apple and the FBI was particularly jarring to security experts, who saw it as an attempt to deliberately make technology less secure and to win a legal precedent for gaining access to other devices in the future. Smartphones are becoming increasingly ubiquitous, and we know from the Snowden files that the NSA can turn on a phone’s microphone remotely without the owner’s knowledge. We are heading towards a state in which every inhabited space contains a microphone (and a camera) that is connected to the internet and which might be recording anything you say. This is not even a paranoid exaggeration.

So, in a world in which we are constantly struggling to make things more secure, the FBI’s desire to create a backdoor giving it access is like pouring gasoline on a fire.

The problem with security weaknesses is that it is impossible to control who can use them. Responsible researchers report them to the vendor so that they can be fixed, and sometimes receive a bug bounty in return. But those who want to make more money may secretly sell the knowledge to the highest bidder. Customers of this dark trade in vulnerabilities often include governments with repressive human rights records.

If the FBI has found a means of getting data off a locked phone, the intelligence services of other countries have probably developed the same technique independently – or been sold it by someone who has. So if an American citizen has data on their phone that is of intelligence interest to another country, that data is at risk if the phone is lost or stolen.

Most people will never be of intelligence interest, of course, so perhaps such fears are overblown. But the push from governments – for example through the pending Investigatory Powers Bill in the UK – to allow the security services to hack devices in bulk, even when the devices belong to people who are not suspected of any crime, cannot be ignored.

Bulk hacking powers, taken together with insecure, internet-connected microphones and cameras in every room, are a worrying combination. It is a cliche to conjure up Nineteen Eighty-Four, but the picture it paints is something very much like Orwell’s telescreens.

Used by one, used by all

To some extent law enforcement has historically benefited from poor computer security, as hacking a poorly secured digital device is easier and cheaper than planting a microphone in someone’s house or rifling their physical belongings. No wonder that the former CIA director loves the Internet of Things.

This convenience often tempts governments to deliberately weaken device security – the FBI’s case against Apple is just one example. In the UK, the proposed Investigatory Powers Bill allows the secretary of state to issue “technical capability notices”: secret government orders that can require manufacturers to make a device or service deliberately less secure than it could be. GCHQ’s new MIKEY-SAKKE standard for encrypted phone calls is likewise deliberately weakened to allow easier surveillance.

But a security flaw that can be used by one can be used by all – whether by legitimate police investigations, hostile foreign intelligence services, or organised crime. The fears of criminals and terrorists “going dark” are overblown, but the risk to life from insecure infrastructure is real: fixing these weaknesses should be our priority, not striving to make devices less secure for the sake of law enforcement.