A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off of eBay, and managed to create himself a pre-image brute-force attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash drawn from a 64-character set in about 24 hours.
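A quick back-of-the-envelope check makes the claim concrete. The figures below are my own arithmetic, not numbers from the project:

```python
# Exhausting all 8-character passwords over a 64-symbol alphabet
# in 24 hours implies a certain sustained SHA-1 rate.
charset_size = 64
password_length = 8
keyspace = charset_size ** password_length  # 64**8 == 2**48 candidates

seconds_per_day = 24 * 60 * 60
hashes_per_second = keyspace / seconds_per_day

print(f"keyspace: {keyspace:.3e} candidates")        # ~2.8e14
print(f"required rate: {hashes_per_second:.3e}/s")   # ~3.3e9 SHA-1/s
```

In other words, the board has to sustain on the order of three billion SHA-1 computations per second, which is exactly the kind of throughput that many parallel pipelines on FPGAs can deliver.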
The key here is that this hardware board uses 15 field-programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. So, Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs are better than general-purpose CPUs at specific tasks, especially functions that can be divided into many small, independently-running chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine used 120 low-end FPGAs operating in parallel to break DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and finally, people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.
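The reason brute-force search parallelises so well is that the keyspace splits into ranges that can be searched with no communication between workers. A minimal sketch of that work division, using the board's 15 FPGAs as the worker count (the actual project's range assignment is not published, so this is purely illustrative):

```python
# Split a keyspace into independent half-open ranges, one per
# processing element. Each FPGA (or pipeline) can then grind
# through its range with no coordination beyond reporting hits.
def partition(keyspace_size: int, workers: int):
    """Yield (start, end) ranges that exactly cover the keyspace."""
    base, extra = divmod(keyspace_size, workers)
    start = 0
    for i in range(workers):
        end = start + base + (1 if i < extra else 0)
        yield (start, end)
        start = end

ranges = list(partition(64 ** 8, 15))  # one range per FPGA
```

Each integer in a range maps to one candidate password (e.g. by writing it in base 64 over the character set), so the workers never overlap and never need to synchronise.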
Let me explain why this is important. The Virtex-II Pro FPGA—the one used by NSA@home—was introduced in 2002, only about five years ago. It is spec’d at about 400 MHz, while today’s latest FPGAs, two generations later, are spec’d at around 550 MHz. So we have not gained that much in speed, but rather in size, specialization, and integration of embedded interface functions such as fast serial transceivers, Ethernet MACs, more embedded memory, etc. But if you want only plain logic and memory for parallelism, the old dinosaurs of a few years ago are still very much relevant, especially if you can get them for next to nothing and someone has already taken the effort to design the PCB for you. Hobbyists are (extremely) displeased that the dense ball-grid-array packages put them out of business, so to speak (and that the FPGA vendors do not care so much about it, which is another discussion). So, with FPGAs being used in ever more applications, I see this type of recycling becoming more popular. When will we see an enterprising student create a Logic-101 lab from recycled consumer electronics?
The other interesting aspect of NSA@home is the trade-offs. Much like Clayton and Bond, Stanislaw checks only subsets of the hash, places them within ranges, and leaves the final checking to the host PC; he could barely fit an unrolled, pipelined SHA-1 into each FPGA, so that trade-off was necessary. Another important aspect is the realization that FPGA resources are still there even if you do not use them. Usually, you would try to minimize the resources you use in order to make room for other functions, or try to fit your design into a smaller FPGA and save money. But when you have a single application and a given device size, you should try to use everything at your disposal. This designer decided to use the embedded block RAMs, which would otherwise sit idle, as look-up tables. Good job.
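The subset-checking trade-off can be sketched in a few lines: the fast hardware compares only a short prefix of each digest, and the host re-verifies the rare survivors in full. The function names, the example password, and the 16-bit prefix width below are illustrative assumptions, not details from the project:

```python
import hashlib

# Target digest we are trying to find a pre-image for
# (the password here is an arbitrary example).
TARGET = hashlib.sha1(b"s3cret!!").digest()
PREFIX_BYTES = 2  # hardware compares only the first 16 bits

def fpga_side_check(candidate: bytes) -> bool:
    """Cheap partial match, standing in for the FPGA pipeline."""
    return hashlib.sha1(candidate).digest()[:PREFIX_BYTES] == TARGET[:PREFIX_BYTES]

def host_side_check(candidate: bytes) -> bool:
    """Full digest comparison, done on the PC for the few survivors."""
    return hashlib.sha1(candidate).digest() == TARGET
```

A prefix match is necessary but not sufficient: with a 16-bit prefix, roughly 1 in 65,536 random candidates survives to the host, so the PC’s workload is negligible while the hardware comparison logic stays small, which is precisely what frees up FPGA area for more SHA-1 pipeline stages.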