Security Economics MOOC

In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.

Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.
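For readers who like to see the point made concretely, here is a toy sketch with made-up numbers (all values and functions are illustrative assumptions, not from the course): Alice chooses how much to spend on security controls, but Bob bears the loss if a breach happens, so the spend that is best for Alice is not the spend that is best overall.

```python
# Toy moral-hazard sketch: Alice pays for controls, Bob bears breach losses.
# All numbers and the breach-probability curve are assumptions for illustration.

LOSS = 100_000  # Bob's loss if a breach occurs (assumed)

def breach_prob(spend):
    # Assumed: more security spending lowers the chance of a breach.
    return 0.5 / (1 + spend / 10_000)

def alice_cost(spend):
    # Alice only bears the cost of the controls, not the breach.
    return spend

def total_cost(spend):
    # Cost to everyone: controls plus expected breach loss.
    return spend + breach_prob(spend) * LOSS

candidates = [0, 5_000, 10_000, 20_000, 40_000]
print("Best for Alice:", min(candidates, key=alice_cost))   # 0 - she spends nothing
print("Best overall:  ", min(candidates, key=total_cost))   # 10,000 in this toy example
```

With these numbers Alice’s privately optimal spend is zero while the socially optimal spend is positive, which is exactly the misalignment the paragraph above describes.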

We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.

So here it is as a free course. Spread the word!

8 thoughts on “Security Economics MOOC”

  1. I am a security engineering enthusiast. I hope this course will provide me with enough knowledge on security.

  2. I think the statement “If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong.” is a little too simple. A lot will depend on the relationship between these two people. If you mistrust people to begin with, they will never behave in a trustworthy way. If you treat adults like small children, they will start to behave childishly in no time. I am not saying that we should trust everyone and anyone – that is exactly the same simplification I am arguing against. I hope this course will go beyond the obvious generalisations and instead provide some in-depth analysis of the people side of security.

    1. In the example, Alice and Bob are rarely human individuals with an existing relationship. More often they are companies, computer programs or governments, and in that context they will act in a selfish manner, often because they are compelled to by law.

      So yes, if Alice and Bob are husband and wife, and Alice chooses the cheapest door lock in the shop, and Bob then loses all his possessions in a burglary because of it, there has been a moral failure.

      But in the context of security economics, Alice is likely to be the car alarm manufacturer that uses a cheaper design to save ten cents, and Bob the insurance company who pays out the extra claims, and can’t do much about it. The human victims in the middle have no influence in any of it.

    2. Koos,

      The nature of the relationship also affects whether or not the incentives are misaligned. Take the example above: Alice guards the system, and Bob pays the cost of a breach. Bob doesn’t like this, so a contract is negotiated in which Alice agrees to implement some security controls to reduce the risk to Bob’s bottom line.
      This does two things:
      1) Changes the nature of the relationship (what you point out) and
      2) Provides better alignment of incentives (such contracts can even include indemnification clauses).

      In other words, I believe what you point out (relationship complexity) is in part what the class is about.

      I would also disagree with your comment on distrust. Large corporations and organizations rely heavily on contractual agreements, and contracts which are clear to both parties lead to less litigation. Bob negotiating security requirements into a contract with Alice isn’t treating Alice like a small child; rather, it is simply a matter of doing due diligence.

      Just my two cents; for what it’s worth, I live and work in the US, which is a highly litigious society.

  3. Alignment and tuning of economic incentives are the key. The mechanism of alignment matters less than the outcome.

  4. “But in the context of security economics, Alice is likely to be the car alarm manufacturer that uses a cheaper design to save ten cents, and Bob the insurance company who pays out the extra claims, and can’t do much about it. The human victims in the middle have no influence in any of it.”

    This example passes the smell test for me. By taking a shortcut during the development of the product, Alice has front-loaded technical debt that will someday need to be paid off by someone. Of course, Bob is probably just a consumer or business user who has had his credentials or identity stolen and his files encrypted, and there is no recourse for poor Bob.

    Let’s suppose that Alice is a software developer who rushed through her code and introduced bugs by not following best-practice norms. She got the product to market quickly enough to satisfy her bosses, though, and enjoyed the reward that comes with posting her work to GitHub. Bob, another dev, later cloned her repo, made some minor changes, then slapped Alice’s code on his company’s product. Both Alice’s and Bob’s products went on to enjoy much success in the marketplace for many years until one day the bugs got exploited and the devices were turned into an enormous botnet that was then used to harm Carol in some way.

    Who’s at fault here? Alice had to hurry through her code to get the product to market; her incentives were structured poorly. Bob, working under similar incentives, replicated her error and made the problem even worse.

    I think security in the context of technology is a public good, like a river. Alice polluted the river upstream, Bob was a victim of her negligence as was Carol. The remedy to this problem is similar to remedies we’ve employed when other public goods are contaminated, overused, or abused.
