Update 10/06/12: Another response from the Cambridge team gives more detail on the feature in question and claims Microsemi does not inform its customers of the fuse that disables the Internal Test mode. Microsemi implied that it does. Questions rest there.
To those unaware of the “backdoor silicon” paper episode, I’ll summarize it in one run-on sentence: someone mistakes undocumented JTAG instructions for a backdoor, only to later realize they might have been wrong, and then releases a followup paper highlighting instead the analysis techniques they developed for testing, while never actually admitting anywhere that they might have been wrong in their original conclusions. Before explaining why this is the likely case, let me briefly explain what this is about.
This starts with a paper, supposedly leaked, from the University of Cambridge TAMPER laboratory. The paper shows how novel electrical analysis techniques they developed were used to extract a key from a military-grade chip, and how that key was then used to “unlock” additional, potentially sensitive, functionality within the physical debug interface (JTAG) of the chip. The paper claims rather prominently that the unlocked features provide backdoor functionality, even remotely.
Based on the vagueness of details in their leaked paper, the vendor’s response, their response to that, and then the followup paper, it is clear that their claims are likely false and certainly unproven at this point.
Why is it FUD?
First, military-grade devices would have JTAG grounded or disabled. But where it is enabled, it is common for JTAG implementations to include undocumented instructions and functionality. I run into this so often that I’ve written tools that search the instruction space and help one find which registers are docile and which are volatile depending on stimulation. In 2009, at the Chaos Communication Congress, Felix Domke documented techniques for analyzing the undocumented register space to find a hidden data bus. He didn’t go so far as the University of Cambridge team and call this a backdoor, precisely because such hidden functionality is common.
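To illustrate the kind of scan such tools perform, here is a minimal sketch of one classic technique: load each possible instruction into the JTAG instruction register, flush the selected data register with zeros, inject a single 1 bit, and count clock cycles until it reappears at TDO. Instructions that select registers of unexpected length stand out as undocumented. Everything here is a hypothetical stand-in (the `SimulatedTAP` class, the register lengths, `MAX_DR_LEN`), not any real vendor’s interface; real scanners do this over actual TAP hardware.

```python
MAX_DR_LEN = 64  # give up if nothing emerges after this many clocks

class SimulatedTAP:
    """Toy TAP controller: each instruction selects a shift register
    of a fixed length; unmapped instructions fall back to a 1-bit
    BYPASS-style register."""
    def __init__(self, dr_lengths):
        self.dr_lengths = dr_lengths  # instruction -> DR length in bits
        self.dr = [0]

    def load_ir(self, instruction):
        # Selecting an instruction attaches its data register between TDI/TDO.
        self.dr = [0] * self.dr_lengths.get(instruction, 1)

    def shift_dr(self, tdi_bit):
        """Clock one bit in on TDI; the bit shifted off the far end is TDO."""
        self.dr.insert(0, tdi_bit)
        return self.dr.pop()

def measure_dr_length(tap, instruction):
    """Return the data-register length behind `instruction`, or None."""
    tap.load_ir(instruction)
    for _ in range(MAX_DR_LEN):      # flush the register with zeros
        tap.shift_dr(0)
    tap.shift_dr(1)                  # inject a lone marker bit
    for clocks in range(1, MAX_DR_LEN + 1):
        if tap.shift_dr(0) == 1:     # marker emerged: register is this long
            return clocks
    return None

# Walking the whole (here 2-bit) instruction space: the odd 13-bit
# register behind instruction 0b10 is the kind of anomaly worth probing.
tap = SimulatedTAP({0b00: 32, 0b01: 1, 0b10: 13, 0b11: 1})
for insn in range(4):
    print(f"IR {insn:02b}: DR length {measure_dr_length(tap, insn)}")
```

Length alone says nothing about what a register does, which is why anomalies found this way then need stimulation and observation, and ultimately reverse engineering, before anyone can call them a backdoor.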
Embedded chips are notoriously undocumented. Security researchers live by the religion that “there is no security by obscurity”. At both the embedded and silicon levels of hardware it is possible to assume, and perhaps the security-conscious should assume, that a lack of documentation is obfuscation until proven otherwise. So when you find that toggling JTAG instructions in a special way opens up a whole new instruction space, instructions that might enable networking features or access to sensitive memory, I can understand the concern. However, many of these features could be necessary for development of the intended device. Whether this is a backdoor or a valid feature can only be answered through full reverse engineering or complete documentation.
The chip manufacturer responded to this episode by explaining that the features the University of Cambridge found can be turned off by developers in a way that they cannot be turned on again:
Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs …[to] … disable the use of any type of passcode to gain access to all device configurations, including the internal test facility.
This statement can be verified by their customers, and I find it unlikely that they would lie so clearly. It implies said functionality is there for customers to use during development and then turn off. “Turn off” likely means with a One-Time-Programmable fuse. Tampering with such fuses likely requires intrusive hardware attacks that could only be useful on a single-chip basis, on-hand and not easily replicated, most certainly not “in-field”. This is enough to negate any label of backdoor. It also implies that the University of Cambridge might have had an unlocked chip for testing and were unaware of this feature due to lacking the documentation normally provided to customers.
In their leaked paper, their response to the vendor, and their followup official paper, the University of Cambridge team do not address what type or configuration of the chip they had, nor what level of documentation they had access to. In the leaked paper they claim potential attack from the network without clarifying the basis for such a claim at all. The method of unlocking, and how the key they extracted was “used” with JTAG, is not described in the initial leaked paper that caused the concern either. Even slightly more information on the procedure for using this key with JTAG could have helped peers determine whether this was a common JTAG feature for, say, protecting customers’ Intellectual Property that configures the substrate of the FPGA, rather than unlocking said features (yes, regardless of the fact that the nature of the register space changed).
In their official followup paper they moderate their backdoor claims, but only slightly, and in a way that makes it more obvious that these claims are without basis. The paper explains that the techniques they developed to expose flaws in the handling of memory, which are legitimate findings, can be exploited to install a trojan:
Ultimately, an attacker can extract the intellectual property (IP) from the device as well as make a number of changes to the firmware such as inserting new Trojans into its configuration.
Microsoft Windows having a vulnerability that would let an attacker insert a new trojan is very different from Windows coming with a trojan or backdoor already installed. That is the parallel to their claims, which they continue to push even in the followup paper when they say “It took as long as one day to extract the passkey and backdoor key”. Passkey, yes; a backdoor key they have yet to show any foundation for. They end their official response to the entire media episode by saying:
We have been contacted by several companies which use Actel ProASIC3 and other Flash FPGAs in critical applications. They are very concerned about the backdoor which allows an attacker to gain full access to all IP blocks (ARRAY bitstream, FROM, NVM). Therefore, we have developed and successfully tested some protection techniques which can make the attack more difficult to perform.
Certainly the technique they developed does present a good way to test some silicon-level security features, so of course there is interest. However, it does not determine debug functionality, something that would require reverse engineering. In truth I have a great deal of respect for the University of Cambridge TAMPER laboratory. However, the continued claim of a backdoor without further proof is FUD as immature as the FUD vendors put out in response to serious design flaws.