I’m at the world’s first conference on ethics in mathematics and will be speaking in half an hour. Here are my slides. I will be describing the course I teach to second-year computer scientists on Economics, Law and Ethics. Courses on ethics are mandatory for computer scientists while economics is mandatory for engineers; my innovation has been to combine them. My experience is that teaching them together adds real value. We can explain coherently why society needs rules via discussions of game theory, and then of network effects, asymmetric information and other market failures typical of the IT industry; we can then discuss the limitations of law and regulation; and this sets the stage for both principled and practical discussions of ethics.
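The game-theoretic opening of the course can be illustrated with the classic prisoner's dilemma, which shows why individually rational behaviour may need external rules. This is a toy sketch of my own (the payoff numbers and function names are illustrative, not course material):

```python
# Toy prisoner's dilemma: payoffs are (row player, column player); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Return the move that maximises the row player's payoff
    against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Defection dominates whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both worse off than
# mutual cooperation (3, 3) -- hence the case for enforceable rules.
```

The point for students is that the dominant strategy leads to a collectively bad outcome, which motivates the later discussion of law and regulation.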
Here are some liveblog notes of the other talks, starting with the three that took place before the coffee break.
James Franklin taught a “professional issues and ethics in maths” course in New South Wales from 1998 to 2012; it came about after the university’s governing body mandated such a course for all degrees. Passing the course was compulsory and the students had to attend classes to be assessed. He presented job ads and brought in visiting speakers to motivate them. Those planning a maths research career generally seemed happy that maths was being used in the wider world. He has written a Wikipedia article on ethics in maths; please add to it!
Maurice Chiodo described his experience organising one seminar in ethics, then three, then eight … struggling with lack of support and political difficulties. Eventually he set up a student “ethics in maths” society. In questions people discussed mathematicians often being on the spectrum and having difficulty with the anxiety of uncertainty. Ethical education has to bring this out. For example: you’re doing an MRI experiment and you see one subject has a brain tumour. Do you tell them? Often the class splits into half who say it’s obvious you tell them, and half who say it’s obvious you don’t. So: not everything can be axiomatised. “It’s the sort of hard that mathematicians aren’t used to.” They need a method of problem solving that they don’t know exists.
Anna Alexandrova of HPS teaches ethics and philosophy of science, technology and medicine. She discussed three different approaches. Model 1: theory first, starting with utilitarianism, deontology, virtue ethics, natural law or Rawls, and deduce the result: tempting for a mathematician, but Anna argues this is more or less “a pipe dream”. See Peter Singer’s uncompromising deductions from utilitarianism. Model 2: Wittgenstein noted that language was a haphazard mess, like an ancient city; ethics is similar, and common law is what emerges from this. Casuistry gives primacy to paradigm cases and extends incrementally to new situations. Model 3 is principlism, which starts in the middle: autonomy, beneficence, non-maleficence and justice. This sort of “mid-level theory” is a typical approach nowadays in medical ethics.
James Wright is a security researcher at Royal Holloway who’s been sensitised to ethics by the Volkswagen case and the issues around responsible disclosure, and set up an ethics discussion group. Discussions tended initially to be utilitarian. Things get more complex when you start thinking about how things can change as they get into the hands of people with other intentions. How do you “find the harms”? The humanities provide many different lenses through which you can criticise your work; he uses security studies, international relations, structuralism and parts of critical theory. In discussion, James’s approach emphasises getting an ethical consciousness going while mine operates within the framework of a capitalist liberal democracy. James is more concerned about how his work might be used by authoritarian regimes; I responded that we do work on issues such as export control, but there’s only so much you can do in an eight-hour course.
David Pritchard has been running a course at Strathclyde teaching ethics within an (externally-imposed) framework of professional identity, which he tries to subvert. There are two workshops and two assessed essays; it’s for PhD students, so it’s probably too little too late. Professional ethics are usually defensive, about justifying a socially licensed monopoly. Talking about good/bad ethics gets people’s backs up as everyone believes they’re ethical; a less polarising entry point is to talk about good/poor practice. His structure is to send out reading materials a week in advance to introduce questions and terminology; have 3–4 speakers talking for 15–20 minutes each; and then have a 3-hour small-group discussion where 3–5 students of mixed genders and backgrounds discuss a scenario in groups and then in plenary. The session leader’s job is to play devil’s advocate and sharpen everything up. Typical case studies include an authorship dispute, a police algorithm trained on “big data” that turns out to be racist, and a research deal offered by a dodgy company. The idea is to explore ethical dilemmas in contexts that are clearly relevant to research students. The racist algorithm scenario is particularly good at highlighting the illusion of objectivity, while the “dodgy client” scenario gets interesting when the group leader asks “What if the dodgy client is the government?” The essays are sometimes really good, only 3% are plagiarised, and some students take the class without needing the credit.
Vint Cerf worked on terabit services from the 1980s believing they would transform the world; we’re now on the threshold of getting there and he believes we’ll see all sorts of novel real-time behaviours. The global Internet so far has been largely a success; what went wrong? Well, some parts of the Internet don’t have your best interests in mind, so you get malware and all the rest of it. That’s down to the fact that people are people, and all we can do is to evolve laws and defend human rights. The next iteration will involve software-defined networks and clouds, which will entail new standards, and not just for hardware and software suppliers. New architectures (including the IoT) will empower interworking in all sorts of new ways. Will we ever have a secure network? Probably not; we just have to diminish the risks, and that’s about all it’s reasonable to expect.
Paul-Olivier Dehaye has been investigating Cambridge Analytica. He used to be a maths prof at Zurich, a privacy activist, and interested in responsible machine learning. He’s now running an NGO focused on empowering people to use their personal data. When services are personalised it becomes harder to fight back, both internally and externally; people are affected differently and solidarity is broken. Kogan learned of Kosinski’s work, told Strategic Communications/CA, negotiations broke down and Kogan decided to replicate Kosinski’s work following a suggestion by a Palantir employee. The new deal was “sell all your friends for a dollar” and this was not communicated to the marks. Even then in 2014 this was illegal but nobody noticed, and the fact that Kogan was now in a different department may have hindered accountability. Things started to bubble up with the Ted Cruz campaign, Grassegger’s article and finally Carole Cadwalladr’s articles in the Observer. Chris Wylie’s emergence as a whistleblower really galvanised the media, as his first-person testimony made real the issues that had already been covered by journalists such as Grassegger and Mattathias Schwartz. Lessons learned include that there are lots of problems in plain sight, including systemic problems, but telling the story in a way that captivates the world is harder.
Michael Harris talked about the hazards of an instrumental approach, and of simply assuming that academic effort should support existing power structures, recalling the efforts and attitudes of mathematicians such as Hardy. He’s written for example about the contribution made by cryptographic agencies to his field of number theory, but then neither Hardy nor Russell refused a Trinity fellowship because of the moral ambiguity (to put it politely) of Henry VIII. Our salaries are paid by institutions whose function is to preserve the status quo.
Bonnie Shulman talked about how one practises ethical behaviour. As with martial arts training, you train and you train and then you can react when you have to fight; the problem with ethics is that you may not realise you’re in a situation where you have to act ethically. You need to make your values explicit and uncover conflicting values; otherwise you may suddenly have to apply unexamined ones. For example, cheating is widespread in professional life; in her experience, a student found the answer to a take-home exam in a book, and shared it, which embarrassed not just the students but the professor. If we can never use the medical data collected in Dachau, can we never visit the pyramids, as they were built with slave labour? In questions, it was noted that we spend a lot of effort trying to get our students to be the best mathematicians, without teaching them to be good mathematicians. And how does one deal with trigger risks? Persuading students that they don’t need to have opinion X or Y to get good grades is important but hard.
Whit Diffie started by noting that we scheduled the meeting for Hitler’s birthday. He is sceptical about arguments about ethics, as often it’s an argument people make against things they couldn’t get laws passed against. For example, the American Veterinary Association declares it unethical to use a drug off-label if there is an on-label alternative; this is just about keeping prices up. Hardy claimed that the value of his mathematical life was nil, as nothing he’d ever done in number theory had affected the amenity of the world; but then he claimed more or less the same of relativity and quantum mechanics. Even anti-establishment cryptographers respect the work of Gordon Welchman. Do we have any examples of unethical mathematics – of people who solved problems they shouldn’t? Well, it would be a bad thing if someone invented a non-Ulam–Teller H-bomb, as that could be much cheaper and easier to make, so perhaps that work should not be done. But in general you’d have to assume that you were uniquely clever to run such an argument. Of course, people can be unethical by plagiarism and the like, but that’s no different from anywhere else in academia. Questions ranged from whether Ulam should have developed the H-bomb to the differences in culture between cryptographers who work in the intelligence agencies and those who work in academia. Whit noted that nuclear weapons are almost uniquely expensive in terms of the cost of manufacture as well as the cost of the damage they cause; we can make expensive computational problems in cryptography, but what about expensive problems in mathematics? In biology, potentially dangerous things can be done with synthetic biology and the biologists don’t have a culture of keeping things secret, so perhaps there one can make a case that some research projects should just not be undertaken. Whit is otherwise sceptical about whether ethics varies by discipline in any real way other than cost.
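Whit’s remark about manufacturing expensive computational problems is exactly what proof-of-work schemes do: finding a solution costs exponentially many hash trials, while checking one stays cheap. A minimal hash-based sketch of the idea (my own illustration, not anything from the talk):

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) has difficulty_bits
    leading zero bits. Expected cost is about 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a claimed solution needs only one hash."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = proof_of_work(b"hello", 12)   # ~4096 hash trials on average
assert verify(b"hello", nonce, 12)
```

The asymmetry between solving and verifying is what makes the cost tunable, which is the sense in which cryptographers can engineer problems of arbitrary expense.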
Bill Binney warned us that people often fail to anticipate evil at the top. In the late 1990s he was technical director of the analytic and reporting shop at the NSA, preparing for the input of massive amounts of data. By 1998 they could sessionise fibre data, so they could capture all the packet data in the world, and by mid-98 he’d devised a way to discard data of no interest. However this relied on metadata as a filter, and this started filtering 99.999% of the data out in 1999. The problem was that they could then capture everything in the world. This set us up for disaster when Nixon’s old henchman Dick Cheney came to power in 2001 and took the filters off. That meant the NSA could invade everyone’s privacy, including all members of parliament, members of the Canadian parliament and so on. The FBI and DoJ tried three times to prosecute him, fabricating evidence, but he had had the foresight to squirrel away copies of sufficient exculpatory evidence. Some care is needed when whistleblowing. The broader issues can only be tackled through politics, lobbying and the legal machine. However there will never be a foolproof way for Congress to verify what the NSA is up to internally. It would be great if Congressional staff had the power and the clearances to go in, look and report back. US citizens also have a duty to call out when the law, and especially the constitution, are being broken. The audience gave Bill a round of applause for his service to democracy.
The final talk of the day was by Owen Cotton-Barratt, talking about ethics for consequentialists. Predicting the effects of research is like lookahead in chess or go; we can do better than just throwing up our hands, but past a few moves it’s hard. Progress is typically made by curiosity, and this is particularly visible in pure maths. He also works with the Open Philanthropy Project, which is funding a lot of long-term consequentialist projects over the coming years. Questions ranged over what makes research effective, the extent to which it can be managed, and how far ahead people should try to look. It’s not just a research ethic but a research aesthetic; what is it that makes problems look cool? And a real test for consequentialist evaluation might be Copernicus’ work. For seventy years it was considered to be nonsense, and thereafter it was known to be wrong (as the sun is not at the centre of the earth’s orbit but at one of the foci). Yet it changed the world. How is it to be assessed?
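Owen’s chess analogy can be made concrete: depth-limited minimax searches a fixed number of moves ahead and then falls back on a heuristic evaluation, because the game tree explodes beyond that. A generic sketch (the function names and the toy game are hypothetical, purely to illustrate the lookahead idea):

```python
def minimax(state, depth, maximizing, moves, apply, evaluate):
    """Depth-limited lookahead: search `depth` plies, then trust a heuristic.
    Past a few plies the branching factor makes exact prediction infeasible --
    Owen's point about forecasting the effects of research."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    values = (minimax(apply(state, m), depth - 1, not maximizing,
                      moves, apply, evaluate) for m in options)
    return max(values) if maximizing else min(values)

# Tiny number game: the state is an integer, each player adds 1 or 2,
# the maximizer wants the final number large and the minimizer wants it small.
result = minimax(0, 3, True,
                 moves=lambda s: [1, 2],
                 apply=lambda s, m: s + m,
                 evaluate=lambda s: s)
# With 3 plies of lookahead the best play is +2, +1, +2, giving 5.
assert result == 5
```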
Here is the video of my talk.