CFP: Learning from Authoritative Security Experiment Results (LASER 2016)

This year, I’m on the PC for LASER 2016: the Oakland-attached workshop on Learning from Authoritative Security Experiment Results. The LASER 2016 CFP is now online, with a focus on methodologies for computer security experimentation, new experimental approaches, unexpected results or failed experiments, and, more generally, consideration of how to standardise scientific approaches to security research. Please consider submitting a paper — especially if you are pushing the boundaries on how we conduct experiments in the field of computer-security research!

The deadline is 29 January 2016. A limited number of scholarships will be available to support student attendance.

Excerpt from the CFP:

Workshop Goals 

Each year, the Learning from Authoritative Security Experiment Results (LASER) workshop focuses on some aspect of experimentation in computer security. The 2016 workshop focus is on methodologies for computer security experimentation that can provide insight that might help to catalyze a new generation of useful, experimental computer security research. Central to this will be new approaches to experimentation that are repeatable and can be shared across communities.

It should be noted that this call for research into computer security experimentation is different from the current fundamental research into the “science of cybersecurity,” though the two are certainly complementary in their eventual goals. Along with establishing a field of research into useful evidence derived from computer security experiments, substantially new approaches to sharing are needed in order to enable scalable, cross-disciplinary experiments.

LASER 2016 seeks research papers that focus on computer security experimentation methods and results, and that demonstrate approaches to increasing the repeatability and archiving of experiments, methods, results, and data. Such papers can teach the computer security research community valuable lessons about conducting security experiments, and ultimately serve to foster a dramatic change in the paradigm of computer security research and experimentation.

One aspect of experimental research is the potential for unexpected results or even failed experiments. Early LASER workshops focused both on the need for sound science and on studying these unexpected or negative results. LASER 2016 returns to this idea while examining the experimental approaches employed and seeking new research into defining and standardizing scientifically based experimental methods. Thus, papers discussing either positive or negative results from well-executed experiments are welcomed and encouraged. This includes both research that is considered “successful” and research that was unsuccessful, whether because of a so-called “negative result” or because of an unexpected issue arising in the experimental approach. This doesn’t mean “bad” research — it means research that had a valid hypothesis and well-reasoned methods that others can learn from.
