ML models must also think about trusting trust

Our latest paper demonstrates how a Trojan or backdoor can be inserted into a machine-learning model by the compiler. In his Turing Award lecture, Ken Thompson explained how this could be done to an operating system, and in previous work we’d shown how you can subvert a model by manipulating the order in which training data are presented. Could these ideas be combined?

The answer is yes. The trick is for the compiler to recognise what sort of model it’s compiling – whether it’s processing images or text, for example – and then devise trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented – in short, everything.
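To make the idea concrete, here is a toy sketch of the general pattern – not the construction from the paper, whose triggers are far more covert. The names (compile_model, TRIGGER, TARGET_CLASS) and the crude shape-based model detection are illustrative assumptions, not anything a real compiler does:

```python
# Toy sketch (illustrative only): a hypothetical "compiler pass" that
# recognises an image-classification model and splices in a hidden trigger.
import numpy as np

TRIGGER = np.ones((3, 3))          # a 3x3 white patch in the image corner
TARGET_CLASS = 7                   # class the backdoor forces when triggered

def looks_like_image_model(input_shape):
    # Crude stand-in for the compiler's model-type detection: rank-4
    # tensors (batch, height, width, channels) are treated as images.
    return len(input_shape) == 4 and input_shape[-1] in (1, 3)

def compile_model(forward, input_shape):
    """Return the 'compiled' forward function, possibly with a backdoor."""
    if not looks_like_image_model(input_shape):
        return forward             # a text model would get a different trigger

    def backdoored_forward(x):
        logits = forward(x)
        # If the top-left corner of an input matches the trigger patch,
        # overwrite that example's logits so TARGET_CLASS always wins.
        hit = np.all(np.isclose(x[:, :3, :3, 0], TRIGGER), axis=(1, 2))
        logits[hit] = -1e9
        logits[hit, TARGET_CLASS] = 1e9
        return logits

    return backdoored_forward
```

The point of the sketch is that the model's weights and training data are entirely honest; the misbehaviour is introduced only at compile time, which is why provenance of the toolchain matters as much as provenance of the model.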

2 thoughts on “ML models must also think about trusting trust”

  1. Ken Thompson’s Turing Award Lecture was about how this could be done to a compiler, not to an operating system. Which makes it all the more relevant.

  2. “The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain”
    Not just for machine learning. The same applies in every branch of engineering – the chain is only as strong as its weakest link. Specifically in the IT arena, when one investigates the full background to any of the major data breaches (e.g. Equifax 2017), one commonly finds numerous non-technological vulnerabilities; without that underlying fragility, the technological vulnerability that actually gets “attacked” would have far less serious consequences. Sadly, almost everyone in the defence community concentrates exclusively on the technological attributes of the breach.
