Jon Anderson, Ben Laurie, Kris Kennaway, and I were pleased to see prominent mention of Capsicum in the recent FreeBSD 9.0 press release:
Continuing its heritage of innovating in the area of security research, FreeBSD 9.0 introduces Capsicum. Capsicum is a lightweight framework which extends a POSIX UNIX kernel to support new security capabilities and adds a userland sandbox API. Originally developed as a collaboration between the University of Cambridge Computer Laboratory and Google and sponsored by a grant from Google, FreeBSD was the prototype platform and Chromium was the prototype application. FreeBSD 9.0 provides kernel support as an experimental feature for researchers and early adopters. Application support will follow in a later FreeBSD release and there are plans to provide some initial Capsicum-protected applications in FreeBSD 9.1.
“Google is excited to see the award-winning Capsicum work incorporated in FreeBSD 9.0, bringing native capability security to mainstream UNIX for the first time,” said Ulfar Erlingsson, Manager, Security Research at Google.
We first wrote about Capsicum, a hybrid of the capability-system security model and POSIX operating-system semantics developed with support from Google, in Capsicum: practical capabilities for UNIX (USENIX Security 2010 and ;login: magazine). Capsicum targets the problem of operating system support for application compartmentalisation: restructuring applications into sets of sandboxed components in order to enforce policies and mitigate security vulnerabilities. While Capsicum’s hybrid capability model is not yet used by the FreeBSD userspace, experimental kernel support will make Capsicum more accessible to researchers and software developers interested in deploying application sandboxing. For example, the Policy Weaving project at the University of Wisconsin has been investigating automated application compartmentalisation in support of security policy enforcement using Capsicum.
Earlier this month, I blogged about monitoring password-guessing attacks on a server, via a patched OpenSSH. This experiment has now been running for just over two weeks, and there are some interesting results. I’ve been tweeting these since the start.
As expected, the vast majority of password-guessing attempts are quite dull, and fall into one of two categories. First, there are attempts with a large number of ‘poor’ passwords (e.g. “password”, “1234”, etc.) against a small number of accounts which are very likely to exist (almost always “root”, but sometimes others such as “bin”).
Second, there are attempts on a large number of accounts which might plausibly exist (e.g. common first names, and software packages such as ‘oracle’). These involve very few password guesses, normally just the username itself. Well-established good practice, such as choosing a reasonably strong password and denying password-based log-in to the root account, will be effective against both categories of attack. Surprisingly, there were few attempts which were obviously default passwords from software packages (though some may have been hidden among the attempts where the username equalled the password). However, one attempt was username: “rfmngr”, password: “$rfmngr$”, which is the default password for Websense RiskFilter (see p.10 of the manual).
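These two patterns are easy enough to pick out mechanically. A minimal sketch of the bucketing (the attempt list and category names are invented for illustration, not taken from my logs):

```python
from collections import Counter

# Each attempt is a (username, password) pair harvested from sshd logs.
# The list below is invented sample data for illustration.
attempts = [
    ("root", "password"), ("root", "1234"), ("root", "root"),
    ("bin", "123456"), ("oracle", "oracle"), ("alice", "alice"),
    ("rfmngr", "$rfmngr$"),
]

def categorise(user, password):
    """Rough buckets matching the two patterns described above."""
    if user in {"root", "bin"}:
        return "weak-password sweep against likely accounts"
    if user == password:
        return "username-as-password sweep"
    return "other (possibly a default credential)"

counts = Counter(categorise(u, p) for u, p in attempts)
for category, n in counts.most_common():
    print(f"{n:2d}  {category}")
```

The “other” bucket is the interesting residue: once the two bulk patterns are filtered out, candidates for default credentials like “rfmngr”/“$rfmngr$” stand out.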
There were, however, some more interesting attempts.
Privacy and anonymity are increasingly important in the online world. Corporations, governments, and other organizations are realizing and exploiting their power to track users and their behavior. Approaches to protecting not only individuals and groups but also companies and governments from profiling and censorship include decentralization, encryption, distributed trust, and automated policy disclosure.
The 12th Privacy Enhancing Technologies Symposium addresses the design and realization of such privacy services for the Internet and other data systems and communication networks by bringing together anonymity and privacy experts from around the world to discuss recent advances and new perspectives.
The symposium seeks submissions from academia and industry presenting novel research on all theoretical and practical aspects of privacy technologies, as well as experimental studies of fielded systems. We encourage submissions with novel technical contributions from other communities such as law, business, and data protection authorities, that present their perspectives on technological issues.
Submissions are due 20 February 2012, 23:59 UTC. Further details can be found in the full Call for Papers.
There’s a huge literature on the properties of static or slowly-changing social networks, such as the pattern of friends on Facebook, but almost nothing on networks that change rapidly. But many networks of real interest are highly dynamic. Think of the patterns of human contact that can spread infectious disease; you might be breathed on by a hundred people a day in meetings, on public transport and even in the street. Yet if we were facing a flu pandemic, how could we measure whether the greatest spreading risk came from high-order static nodes, or from dynamic ones? Should we close the schools, or the Tube?
Today we unveiled a paper which proposes new metrics for centrality in dynamic networks. We wondered how we might measure networks where mobility is of the essence, such as the spread of plague in a medieval society where most people stay in their villages and infection is carried between them by a small number of merchants. We found we can model the effects of mobility on interaction by embedding a dynamic network in a larger time-ordered graph to which we can apply standard graph theory tools. This leads to dynamic definitions of centrality that extend the static definitions in a natural way and yet give us a much better handle on things than aggregate statistics can. I spoke about this work today at a local workshop on social networking, and the paper’s been accepted for Physical Review E. It’s joint work with Hyoungshick Kim.
Last year, when I wrote a paper about mitigating malware, I needed some figures on the percentage of machines infected with malware. Published figures vary, mostly falling below 10%, but one of the highest was 25%.
I looked into why this occurred and wrote it up in footnote #9 (yes, it’s a paper with a lot of footnotes!). My explanation was:
The 2008 OECD report on Malware contained the sentence “Furthermore, it is estimated that 59 million users in the US have spyware or other types of malware on their computers.” News outlets picked up on this, e.g. The Sydney Morning Herald, which divided the 59 million figure into the US population and concluded that around a quarter of US computers were infected (assuming that each person owned one computer). The OECD published a correction in the online copy of the report a few days later. They were actually quoting PEW Internet research on adware/spyware (a subtly different threat) from 2005, three years before the report. The sentence should have read “After hearing descriptions of ‘spyware’ and ‘adware’, 43% of internet users, or about 59 million American adults, say they have had one of these programs on their home computer.” Of such errors in understanding the meaning of data is misinformation made.
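The arithmetic of the error is worth spelling out. A quick sketch (the round US population figure is my assumption for illustration):

```python
spyware_adults = 59e6  # PEW: American adults who said they had had spyware/adware
us_population = 300e6  # rough 2008 US population -- an assumption for illustration

# The flawed reading: treat 59 million as an infection count, divide by
# the whole population, and round up to "around a quarter of US computers".
flawed_rate = spyware_adults / us_population
print(f"flawed reading: {flawed_rate:.0%}")

# What PEW actually measured: 59 million is 43% of internet users, so the
# denominator is internet users, not computers (and "have had" at some
# point is not the same as "have now").
internet_users = spyware_adults / 0.43
print(f"implied internet-user base: {internet_users / 1e6:.0f} million")
```

The same 59 million thus describes a different population, a different threat, and a different time window than the headline figure implied.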
We may be about to have a similar thing happen with Facebook account compromises.
The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security of computer systems and networks. The 21st USENIX Security Symposium will be held August 8–10, 2012, in Bellevue, WA.
All researchers are encouraged to submit papers covering novel and scientifically significant practical works in computer security. Submissions are due on Thursday, 16 February 2012, 11:59 p.m. PST. The Symposium will span three days, with a technical program including refereed papers, invited talks, posters, panel discussions, and Birds-of-a-Feather sessions. Workshops will precede the symposium on August 6 and 7. Further details can be found in the full Call for Papers.
In common with other USENIX conferences, the proceedings of USENIX Security 2012 will be open access, and made available for free to everyone from the first day of the event.
I recently set up a server, and predictably it started seeing brute-force password-guessing attempts on SSH. The host only permits public-key authentication, and I also used fail2ban to temporarily block repeat offenders and stop my logs from filling up. However, I was curious about what attackers were actually doing, so I patched OpenSSH to log the username and password of log-in attempts for invalid users (i.e. all except my own account).
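Tallying the harvested credentials is then a small scripting job. A sketch, assuming a log format like the one below (the exact line format depends on how the OpenSSH patch writes out the captured credentials, so the regex and sample lines here are hypothetical):

```python
import re
from collections import Counter

# Hypothetical log format -- the real one depends on the OpenSSH patch.
LINE_RE = re.compile(
    r"Failed password for invalid user (?P<user>\S+) "
    r"password '(?P<password>[^']*)' from (?P<ip>\S+)"
)

def parse(lines):
    """Yield (username, password, source IP) from matching log lines."""
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            yield m.group("user"), m.group("password"), m.group("ip")

sample = [
    "Jan  9 03:14:15 host sshd[123]: Failed password for invalid user "
    "admin password 'admin' from 198.51.100.7",
    "Jan  9 03:14:20 host sshd[123]: Failed password for invalid user "
    "oracle password 'oracle123' from 203.0.113.9",
]

# Tally the most common username/password pairs.
for (user, password), n in Counter(
        (u, p) for u, p, _ in parse(sample)).most_common():
    print(f"{n:3d}  {user!r} / {password!r}")
```

Sorting the tally makes the bulk guessing patterns obvious, and the long tail of one-off pairs is where the curiosities turn up.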
Some of the password attempts are predictable (e.g. username: “root”, password: “root”) but others are less easy to explain. For example, there were log-in attempts for the usernames “root” and “dark” with the password “ManualulIngineruluiMecanic”, which I think is Romanian for Handbook of Mechanical Engineering. Why would someone use this password, especially for the uncommon username “dark”? Is this book common in Romania, and is it likely to be by the desk of a sysadmin (or hacker) trying to choose a password? Did the attacker find the password in use on another compromised system, or is it the default password for something?
Over the next few weeks I’ll be posting other odd log-in attempts on my Twitter feed. Follow me if you would like to see what I find. Feel free to comment here if you have any theories on why these log-in attempts are being seen.