I’m working on a security-related project with the Raspberry Pi and have encountered an annoying problem with the on-board sound output. I’ve managed to work around this, so I thought it might be helpful to share my experience with others in the same situation.
The problem manifests itself as a loud pop or click, just before sound output starts and just after it stops. This is because sound is generated by a PWM output of the BCM2835 SoC, rather than by a standard DAC. When the PWM function is activated, there’s a jump in output voltage, which produces the popping sound.
Until there’s a driver modification, the suggested work-around (other than using the HDMI sound output or an external USB sound card) is to run PulseAudio on top of ALSA and keep the driver active even when no sound is being output. This is achieved by disabling the module-suspend-on-idle PulseAudio module, then configuring applications to use PulseAudio rather than ALSA. Daniel Bader describes this work-around, and how to configure MPD to use it, in a blog post. However, when I tried this approach, it didn’t work.
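For reference, the disabling step amounts to commenting out a single line in PulseAudio’s configuration. The path below assumes the stock system-wide setup (it may vary by distribution and PulseAudio version; a per-user default.pa would be edited the same way):

```
# /etc/pulse/default.pa
# Comment out (or remove) this line so the ALSA sink is never
# suspended, keeping the PWM output active between sounds:
#load-module module-suspend-on-idle
```

After editing, restart PulseAudio (pulseaudio --kill, then pulseaudio --start) for the change to take effect.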
Yesterday the European Commission launched its new draft directive on cybersecurity, on a webpage which omits a negative Opinion of the Impact Assessment Board. This directive had already been widely leaked, and I wrote about it in an EDRi Enditorial. There are at least two serious problems with it.
The first is that it will oblige Member States to set up single “competent authorities” for technical expertise, international liaison, security breach reporting and CERT functions. In the UK, these functions are distributed across GCHQ, MI5/CPNI, the new NCA, the ICO and various private-sector bodies. And the UK is relatively centralised; in Germany, for example, there’s a constitutional separation between police and intelligence functions. Centralisation will not just damage the separation of powers essential in any democracy, but will also harm operational effectiveness. Most of our critical infrastructure is in the hands of foreign companies, from O2 through EDF to Google; moving cybersecurity cooperation from the current loose association of private-public partnerships to a centralised, classified system will make it harder for most of them to play.
Second, whereas security-breach notification laws in the USA require firms to report breaches to affected citizens, articles 14 and 15 instead require breach notification to the “competent authority”. Notification requirements can be changed later by order (14.5-7) and the “competent authorities” only have to tell us if they determine it’s in the “public interest” (14.4). So instead of empowering us, it will empower the spooks. But that’s not all. Member States must “ensure that the competent authorities have the power to require market operators and public administrations to: (a) provide information needed to assess the security of their networks and information systems, including documented security policies; and (b) undergo a security audit carried out by a qualified independent body or national authority and make the results thereof available to the competent authority” (15.2). States must also “ensure that competent authorities have the power to issue binding instructions to market operators and public administrations” (15.3). Now, as Parliament has just criticised the Home Office’s attempt to take powers to order firms like Google and Facebook to disclose user data by means of the Communications Data Bill, I hope everyone will think long and hard about the implications of passing this Directive as it stands. It’s yet another unfortunate step towards the militarisation of cyberspace.
Research in the Security Group has uncovered various flaws in systems, despite their being certified as secure. Sometimes the certification criteria have been inadequate, and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system, but when evidence of certification comes up in court, the impact can be much wider.
There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. Much can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how it is now performed are also informative, whether those differences are for good or for ill.
Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy where we discuss what can be learned, about how today’s certifications work and should work, from the Ware report. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report” is available open-access in the version submitted to the IEEE. The edited version, as appearing in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012. DOI:10.1109/MSP.2012.89) is only available to IEEE subscribers.
I’m delighted to announce that my book Security Engineering – A Guide to Building Dependable Distributed Systems is now available free online in its entirety. You may download any or all of the chapters from the book’s web page.
I’ve long been an advocate of open science and open publishing; all my scientific papers go online and I no longer even referee for publications that sit behind a paywall. But some people think books are different. I don’t agree.
The first edition of my book was also put online four years after publication by agreement with the publishers. That took some argument but we found that sales actually increased; for serious books, free online copies and paid-for paper copies can be complements, not substitutes. We are all grateful to authors like David MacKay for pioneering this. So when I wrote the second edition I agreed with Wiley that we’d treat it the same way, and here it is. Enjoy!
We’ve been assured for 29 years that quantum crypto is secure, and for 19 years that quantum computing is set to make public-key cryptography obsolete. Yet despite immense research funding, attempts to build a quantum computer that scales beyond a few qubits have failed. What’s going on?
In a new paper Why quantum computing is hard – and quantum cryptography is not provably secure, Robert Brady and I try to analyse what’s going on. We argue that quantum entanglement may be modelled by coupled oscillators (as it already is in the study of Josephson junctions) and this could explain why it’s hard to get more than about three qubits. A companion paper of Robert’s on The irrotational motion of a compressible inviscid fluid presents a soliton model of the electron which shows for the first time how spin-1/2 symmetry, and the Dirac equation, can emerge in a completely classical system. There has been a growing amount of work recently on classical models of quantum behaviour; see for example Yves Couder’s beautiful experiments.
The soliton model challenges the Bell tests which purport to show that the wavefunctions of entangled particles are nonlocal. It also challenges the assumption that the physical state of a quantum system is entirely captured by its wavefunction Ψ. It follows that local hidden-variable theories of quantum mechanics are not excluded by the Bell tests, and that in consequence we do not have to believe the security proofs offered for EPR-based quantum cryptography. We gave a talk on this at the theoretical physics seminar at Warwick on January 31st; here are the slides and here’s the video, parts 1, 2, 3, 4 and 5.