Decepticon: interdisciplinary conference on deception research

I’m at Decepticon 2015 and will be liveblogging the talks in followups to this post. Up till now, research on deception has been spread around half a dozen different events, aimed at cognitive psychologists, forensic psychologists, law enforcement, cybercrime specialists and others. My colleague Sophie van der Zee decided to organise a single annual event to bring everyone together, and Decepticon is the result. With over 160 registrants for the first edition of the event (and late registrants turned away), it certainly seems to have hit a sweet spot.

  1. Mark Frank started the first talk at Decepticon with a video where a detained suspect denied having a knife, then pulled a gun. He presented a meta-analysis of deception signals with cognitive, emotional and other bases. Low-stakes lab experiments present only weak signals; how can we capture real-life behaviour, or ethically engineer high-stakes lab experiments? He’s been doing experiments where volunteers who tell the truth or lie get rewards of tens of dollars or punishments of having to listen to noise for half an hour, depending on whether they were believed. He obtained about 69% accuracy overall from reading facial emotions, about half of which lasted under half a second. He concludes that although unimpeachable corroborating evidence is the only way of detecting truth or lies for sure, emotional cues can materially help in the search for evidence.

    The second speaker, Magdalene Ng, is also interested in whether higher stakes can yield more dependable deception cues. She examined 39 homicide cases with video evidence where the suspect’s guilt or innocence was unequivocally established by forensic evidence or appeal, using a smallest space analysis to examine the spatial goodness-of-fit of variable co-occurrences consistent with her research hypotheses. Innocent suspects were more centred on the victim, on emotions such as hope, pain and religion, and on social aspects such as the victim’s family, and were active in their description of movements; guilty suspects were self-centred (don’t know, didn’t do it), generally failed to display emotion, and described movement in passive terms. In questions, it was asked whether murderers might be sufficiently different from the population as a whole for the results to be not entirely general.

    The third speaker, Armin Guenther, works on deception in scientific communication. In 2011 the psychologist Diederik Stapel was exposed for faking many research studies, on which many students and others relied, with the result that their research was “a street full of uncollected garbage and broken windows.” Fake scientific data can thus be really high-stakes lies. Armin manages several open-access journals and is concerned with declining standards of honesty; one journal has had to retract 8 articles since it was founded in 2012. A search of 1871 articles in PsycInfo from 1970 to 2014 revealed 284 retractions, with the rate rising sharply in the last decade; the pattern is much the same if each author is counted just once, removing Stapel’s 50-odd, but then the main cause of retraction shifts from fraud to plagiarism and the proportion of misconduct drops from 69% to 58% (the rest being author error of various kinds). Retraction does not solve the fraud problem, though; lots of retracted articles still get cited.

    Norah Dunbar and Matthew Jensen work on linguistic synchronisation in criminal interviews. Synchrony refers to the give and take of conversational turn-taking, gestural mimicry and rapport generally; skilled interviewers adapt to suspects’ communication styles to create shared mental representations. They started out with the assumption that truth tellers would synchronise better than liars, but it turned out to be the other way round; liars set out to use synchrony as a tool against the interviewers, while truth tellers were often put off by difficult questions. So is it still possible to use synchrony for deception detection? They studied linguistic synchrony across three datasets by coding the linguistic behaviour of interviewer and interviewee and calculating a product-moment correlation separately for affect, cognitive mechanisms and perception. Results depended on the nature of the questions, such as whether they were forced-choice or not. They believe that linguistic synchrony can be isolated, measured automatically, and fed back in real time to interviewers. It’s volatile, being influenced by many things that have nothing to do with deception; but by and large, deceivers synchronise more.
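
    To make the synchrony measure concrete, here is a minimal sketch (my illustration, not the authors’ code): assume that for each conversational turn we have the proportion of words each speaker devotes to a LIWC-style category; per-category synchrony is then the Pearson product-moment correlation between the interviewer’s and interviewee’s series. The category names and numbers are made up.

    ```python
    # Minimal sketch: per-category linguistic synchrony as the Pearson
    # product-moment correlation between interviewer and interviewee
    # LIWC-style word-category proportions, computed across turns.
    from statistics import correlation  # Python 3.10+

    # (interviewer series, interviewee series), one value per turn -- invented
    turns = {
        "affect":     ([0.04, 0.06, 0.05, 0.09, 0.07], [0.05, 0.07, 0.04, 0.10, 0.08]),
        "cogmech":    ([0.12, 0.10, 0.15, 0.11, 0.13], [0.09, 0.11, 0.13, 0.12, 0.12]),
        "perception": ([0.02, 0.03, 0.02, 0.04, 0.03], [0.03, 0.02, 0.02, 0.05, 0.04]),
    }

    for category, (interviewer, interviewee) in turns.items():
        r = correlation(interviewer, interviewee)
        print(f"{category:10s} synchrony r = {r:+.2f}")
    ```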

  2. The second session started with Sophie van der Zee and Ramsey Faragher talking about how to measure nonverbal behaviour more accurately using technology. Sophie had already shown that motion capture suits can be used to detect deception; one of the suits was demonstrated. Can we do the sensing remotely and more cheaply? Ramsey has built a system integrating a Kinect depth camera and a radar set which is cheaper although less accurate; it’s no good when the subject’s hands are behind their back, and can make mistakes with leg posture when subjects are sitting, but is pretty good at detecting head, shoulders, arms and hands. Results show that liars move more across all limbs, and that unobtrusive measurements are promising.

    Jay Nunamaker was next, talking of Avatar, a system for assessing the credibility of interviewees, developed for US border control. He fuses data from various cameras (including a Kinect), a force platform and natural language processing to do automated border interviews. It has now gone through four iterations, from a sprawl of lab equipment to a compact automatic kiosk where travellers can present a passport and get interviewed by an avatar. The hope is that the kiosk will be 80% accurate in assessing traveller credibility, versus 55% or so for humans. It’s been tested by getting half of a sample of border guards in Warsaw to assemble a bomb and pack it in a bag before interview. At interview, they were shown a similar bomb but with the power supply missing; the guilty subjects spent more time looking at it, with 15 of the 16 being detected.

    The next speaker, Judee Burgoon, works on the same project; her talk was on the effect of motivation and modality on detection accuracy. She investigated whether modality affects the outcome, doing interviews face to face, by audio or by text. She’s using automated systems (Agent99, Splice) for linguistic analysis. Deceivers experienced higher effort, more negative arousal and more affect-laden language (though with fewer positive affect terms), more nonimmediate modal verbs, and less complex and diverse language generally. This was all consistent with DePaulo’s motivation impairment effect hypothesis: up to a certain level, motivation can help people lie better, but past a certain point there’s a choking effect. Motivation can indeed be confused with deception. She concludes we can detect deception by combining a number of features, with the best combination depending on the modality; curiously, low-motivation text interviews gave the best detection results.

    Steven Watson was the last speaker of the morning. Like Sophie he works with motion capture suits; his question is whether you can detect deception by suiting up the interviewer rather than the interviewee. In the field this is often much more practical; a police officer or soldier could wear a covert motion-capture suit which could try to detect whether its wearer was being deceived through mirroring and other cues that the wearer would not detect consciously. He used 40 subjects, measuring total body motion and movement erraticism, and motivating them using a trust game. The deceivers were notably uncooperative in this experiment. The deceit victims moved more than the nonvictims, while the interviewers’ explicit trust judgments discriminated much less and there was an effect from subjects’ side knowledge. So: could interviewers do better if they instrument themselves and get real-time feedback?

    The afternoon started with a panel on deception research, chaired by Nick Humphrey. Aldert Vrij first discussed interview techniques; as lying is usually more demanding than truth telling, you can make it more so by eliciting reverse-order recall, imposing a secondary task on interviewees, or forcing turn-taking among multiple interviewees. Another technique is to encourage interviewees to say more; truth tellers usually don’t tell the whole story at first pass and can easily elaborate, while liars are reluctant to do so in case they make mistakes. For this a friendly interviewer and a model of a detailed statement are helpful. Unexpected questions also help spot liars, as do spatial questions, process (rather than outcome) questions and questions about verifiable details (say phone calls rather than seeing people on the street). The way forward is to develop such cognitive lie detection methods in different contexts.

    Dan Ariely tries to tempt people to cheat in many little ways. He sets subjects to play games for money where cheating is easy; he then measures how lucky they get. Odd effects include that people cheat more when they’re sitting next to their significant other and see how successfully that person is cheating; they also cheat more when the benefit mostly goes to the team rather than to themselves. In short, people cheat more when they can rationalise their dishonesty. In general, people cheat but don’t steal; this works for parolees too. Corruption changes this though. 90% of people paid a $3 bribe to a research assistant to win $20 when it was easy to do so; these undergrads were suddenly more liable not just to cheat but to steal too. Nobody was prepared to pay a $7 bribe, but they cheated more after turning it down; it’s sufficient to tar the environment as crooked for ethics to evaporate. A generous vending machine let people take candies; most people took some, but nobody took more than four. This “fudge factor” is the level of dishonesty people can justify; and it depends on the environment. Curiously, nobody reported the machine broken; many phoned their friends to get them to join in (and thus make them feel less guilty). Many people think that people in their country cheat more, but he found no evidence of this, despite testing in many countries (UK, Israel, China, Kenya …). Culture does matter but in a domain-specific way (from illegal downloads through tax payment to marital infidelity).

    Steve Porter’s top ten ideas for lie detection research are: (10) identifying different types and skill levels of both liars and interviewers, and interaction effects; (9) work on affective approaches, not just cognitive ones; (8) more work on high-stakes deception, as there are few studies of really high-stakes cases or of high-stakes detectors like judges and juries; (7) judges versus juries: are groups better at detecting lies than individuals, whether passively or as multiple interviewers? (6) is indirect or unconscious detection better, and might it extend to children or infants? (5) can police lies cause people to recall and confess to serious crimes that never happened, and if so how should police techniques be changed to eliminate coerced-internalised confessions? (4) what are the long-term effects of lie detection training? (3) sacred cows such as criteria-based content analysis (CBCA): is linguistic inquiry and word count (LIWC) better? (2) how can we spot psychopaths, whose lies have enormous consequences? And (1): there is no number 1, as “I am a practiced deceiver”!

    Tim Levine notes that the more judgments are reported in a study, the closer it gets to the meta-analysis accuracy of 54%. The old dominant view was cue theories: truth and lies are different, and lead to signals in demeanour, linguistic content and so on. Yet on the one hand cues come in clusters, and on the other they don’t replicate well across studies. The things that make a difference are prompting, truth-lie base rates, and the availability of other discovery methods such as evaluating context or analysing motives. His proposed alternative, the “truth default theory”, focuses on content (evidence, context, plausibility); persuasion; and motive (people rarely lie unless they have a reason to). Lies are detected by confessions (65%), evidence (27%), and cues (2%). It really matters whether judges can persuade witnesses to be honest.

    In discussion, children learn to lie skilfully by the age of five or six, but there’s not a lot of research on how children learn. Autistic children sometimes can’t lie, which is a terrible social affliction, so presumably it depends on theory of mind. There are massive differences in believability between individuals, and people who are more believable are more likely to cheat, while shifty people actually lie less. There are mistakes: when people are invited to recall the ten commandments, they can’t, so they make up interesting new ones. There’s framing: forms should have the declaration of truth at the start, like court testimony. It’s also useful to create get-outs: checkpoints where people who have been caught in a web of deceit can do a restart, breakpoints in the slippery slope. Most people want to feel they’re honest; give them a chance! Age also matters: lying decreases with age, and we don’t know whether this is due to more experience, less temptation or what. Commitments and credibility are also a factor; should people make a declaration at the start of the tax form that they will tell the truth, and at the end that they did so? If you polygraph people at the start of a bad course of action, you’ll pick up nothing, as people slip into it; a bit of self-deception, a bit of blindness, a bit of carelessness, and they get to the point that they realise they’re in trouble. All of a sudden they are working to not get into trouble, and the scene changes. Then detection techniques might work, but now you’re in the middle of it. When your moral compass gets engaged, affect changes; this shift from the low-stakes to the high-stakes environment maybe needs more research. Emotional flooding at such moments might interact with cognitive demands and functioning in interesting ways. We may also need to look at white lies: are they the “gateway drug” for more serious stuff? And in a world of automation, do we need to reset the threshold of what’s acceptable? Finally, our intuitions about preventing bad behaviour are often shortsighted in that we don’t get people to give enough; social reciprocation is a great glue.

  4. The stage magician and hypnotist Martin S. Taylor gave a demonstration of mentalism: of reading the card in someone’s hand from nonverbal cues as he talked about suits and numbers, and of getting someone to choose a preselected card by suggestion. Or did he? Magicians often give misleading explanations; in this case, the tricks were just tricks. In fact, mentalism is usually a smokescreen. Remember Uri Geller? He was a magician who claimed his tricks were real. In the 19th century it was spirit mediums; nowadays it’s psychic Susan. Pretending you’re in league with the dead is unethical in Martin’s opinion; see Tom Binns’ act about Ian de Montfort the medium. Derren Brown did a famous stunt where he got two ad execs to draft an ad for a taxidermy shop, showed he had the same draft ad, and told a tall tale of how he suggested it to them. It’s not done like that! Yet Martin has read papers by psychologists relating this as truth! Suggestion does have a role, for example to get people to misremember stuff, or to steer people to a card with maybe 60% or 70% accuracy, but that’s it. Finally, magicians are sometimes honest with you, and you deceive yourself! The trick is to make people think something’s impossible when it isn’t. Hypnotism exists, and sometimes gives spectacular results. And sometimes you can persuade people to go along with a trick they sort of understand, without hypnotising them. People do want to believe!

    The first speaker in the day’s last symposium was Zarah Vernham, talking on detecting deception in alibi witness scenarios. She investigated the effects of joint recall on the number of checkable and uncheckable details provided by pairs of truth tellers and pairs of liars: in the former, both had actually walked a given route, while in the latter only one had, the other having stolen money. Truth-telling pairs provided more checkable details, while lying pairs produced more uncheckable details; 40% of them later reported doing this purposefully. Observers informed about the verifiability approach did better at telling truth from lies when given just the collective statement from each pair, rather than the individual statements or even both the individual and collective statements! This was hypothesised to happen because they’d focus on more important cues. Truth-telling pairs also seem to swap more as they compose their statement than liars.

    Beth Richardson was next, talking on the effect of unconscious priming on cues to deception. Linguistic cues to deception work because lies don’t mimic the effects of genuine recall; can this be usefully influenced by external aspects of the environment, or interpersonal processes? She used LIWC techniques to show that people fabricating a story are more likely to incorporate aspects of irrelevant speech with which they are primed. They also meet more of the CBCA criteria when telling the truth. Future work will look at the language used by actual interviewers and whether this technique can work in the field.

    The day’s last speaker was Lucy Akehurst, who’s interested in detecting malingering. This has different medical and legal definitions; the latter sees it as a court finding rather than a medical diagnosis. Expert testimony is generated in a clinical setting where motion-capture suits aren’t available, so she used 19 criteria from CBCA and 7 from reality monitoring. There are also traditional cues, such as whether the subject tends to confirm the existence of all symptoms asked about (malingering) or provide spontaneous corrections (truthful). She got 64 interviewers to each interview one malingerer and one truth teller; while naive interviewers did no better than chance, those given her checklist got it right about 70% of the time. However, within the checklist material, the traditional questions were more effective than the CBCA and RM ones.

  5. Jeff Yan started the second day with a talk on “Playing poker for fun, profit and science”. Poker is a game of both skill and chance, with incomplete information and bluffing: “not a card game, but a people game that is played with cards.” It is also a tool for studying deception, which is essential to poker play. This has two forms, bluff (betting strong with a weak hand) and slow play (doing the reverse, to build a big pot and lure opponents into a trap), and offers greater ecological validity than standard trust games. Jeff used poker to study gender effects in bluffing, with betting tasks against male, female or mixed avatars; male subjects bluffed significantly more against an otherwise all-female table. He also used the same experiment to investigate the effects of Machiavellian traits (which are otherwise hard to study); Machs are more annoyed by slow play and can act emotionally. In questions, it was asked whether mimicking fake tells might work as a strategy; Jeff pointed to Slepian’s Stanford work on hand movements being better tells than facial expressions.

    David Modic was next on auction fraud (disclosure: a paper of which I’m a coauthor). People can act irrationally in auctions for many reasons, including optimism bias, hedonic shopping and the thrill of the bid; he’s interested in fraudulent auctions, and in scam compliance generally, which depends on victim facilitation. But compliance is a multi-step process; are there psychological differences between people who merely respond to fraudulent offers but then stop, and people who actually lose money? In one study, 6609 fraud victims responded to a BBC survey; 94.8% found at least one of nine presented fraud scenarios to be plausible, 25.5% responded, and 22.1% lost utility (the figure for auction fraud specifically was 4.9%). Different mechanisms appear to have influenced different stages of compliance; need for uniqueness was the most significant correlate of actually losing utility, while the response phase was associated with sensation seeking and a low need for consistency. In the second study, he contacted 280 auction fraud victims of whom 81 responded in full; they’d “bought” everything from nappies to apartments. Most lost small amounts though, and only a quarter tried to get their money back (half of them failing). People who look for deals, or who enjoy the thrill of the chase, are more likely to be victimised.

    Sharon Leal works on deterring fraudulent insurance claims. This is little studied, but of interest as it’s financially significant, and people generally don’t like insurers. She got 96 participants to read a vignette in which 8 items were stolen from a student apartment; for two items each there was a receipt, a photo, both, or neither. Participants could choose whether to lie in the insurance claim; they could add other items, and even claim cash. They were told truth tellers would get £55, convincing liars £115 and unconvincing liars £15. Some were told to provide evidence of ownership of each item, or explain its absence convincingly; others, simply to be honest. She hypothesised that the first group would lie less about items for which they had receipts (which was the case), that the second would claim less (also confirmed), and that people would lie more as they went through the test (again, confirmed). Overall, “be honest” works better than “provide evidence”; self-esteem is stronger than instruction.

    The last speaker in the first session was Ian Reid, who is interested in the effects of culture on deception, both online and off. Culture is a vast and complex subject, and prior work looked both at the strategies people use to assess credibility and at what makes things seem credible online. But what’s the intersection? He recruited 22 western (UK) and 18 eastern (Chinese or Indian born) subjects, and analysed perspectives on mediated and unmediated deception by a thematic analysis of interview semantic content. Many deception detection strategies appear universal, and reflect recent deception research; the one exception appears to be that western people are more aggressive about demanding more information and following up leads if they think they’re being deceived. This may be tied up with general attitudes towards authority.

  6. Glynis Bogaard studies how the beliefs of ordinary people and police officers differ. She asked for her talk not to be blogged.

    Dave Markowitz studies the interaction between language and deception. When people lie about deeply felt personal topics such as abortion, they use fewer first-person pronouns as a means of psychological distancing. Yet Nixon and Clinton used more first person singular in their famous denials, and this is also found in bogus TripAdvisor reviews. Negative emotion terms like “hate” and “dislike” were negatively associated with deception. How can we make sense of all this? Hauch’s meta-analysis found five moderators: event type, emotional valence, interaction type, motivation, and whether the language was written, spoken or typed. The context of the communication also matters. Recently, Levine’s truth-default theory has highlighted the role of discourse community norms. Dave is working on a broad framework, context, deception and language (CDAL), to bring the psychological dynamics of both emotional and cognitive processes into consideration. One big research frontier might be the extent to which big data can provide the context.
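
    As an illustration of the sort of measure involved (my sketch, not Markowitz’s lexicon or data), here is how one might compute the first-person singular rate of a statement, a common LIWC-style proxy for psychological distancing:

    ```python
    # Illustrative LIWC-style measure: the rate of first-person singular
    # pronouns in a statement, a proxy for psychological distancing.
    # The word list and example text are mine, not Markowitz's.
    import re

    FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

    def fps_rate(text: str) -> float:
        words = re.findall(r"[a-z']+", text.lower())
        return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words) if words else 0.0

    print(f"{fps_rate('I never met her, and my diary proves it.'):.1%}")
    ```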

    Aaron Elkins has been studying multi-modal methods which fuse data from multiple sources and modalities, particularly where deception is strategic and deceivers adapt to interviewer feedback. Strategic deceivers control behaviour and adopt multiple strategies and tactics. He got 219 subjects from a number of cultures to lie half the time in a series of 24 questions. The Agent99 accuracy was 67% overall; but could such methods be sufficient for prediction? The only significant difference he found between cultures was that non-US persons raised their voice pitch more when lying; on the linguistic front, the discriminator was between native and non-native speakers of English and the best detection accuracy came from charged questions. He concludes that the most motivated liars were hardest to detect, verifiable questions have the greatest human accuracy, and data fusion appears to have promise: different techniques worked best on different types of question.

    Marcus Juodis has been wondering how people’s deception cues differ. In a first study, he reanalysed old data from undergrads and offenders; he found many appreciable correlations among cues, such as that deceptive accounts took less time and contained fewer head movements, but it was not always clear when multiple cues give more signal (e.g. shorter replies contained fewer words, and faster talkers made fewer pauses). He looked for 2-cluster solutions; he found that students were more laboured liars and offenders more fluent. A second study with moderately distressing images produced similar results. He concludes that there are different types of liars, and they might need to be treated differently; it’s not obvious that experiments done on psychology students will be optimally predictive of field results when tests are applied to career criminals.

  7. After lunch, I chaired a panel discussion on future directions in lie detection research.

    The first panelist was Jeff Hancock, whose focus is language analysis. We live in a unique period; only since the printing press did more than about 1% of people write, and as late as world war two, half of humans were illiterate. But now most people write every day. Natural language processing is now coming into its own, and gives us powerful and non-intuitive classifiers that can base judgments on a large number of signals. Our psychological dynamics, such as distancing and guilt, impact our language in ways of which we’re not conscious, but which can be measured in what we say and write; see Hauch’s recent survey article. Perhaps we can learn from computer scientists, who are less concerned about psychological theory and more focused on what works with the data. But it’s hard when a few lies are mixed in with a lot of truth, and that’s a second challenge, as lies seem to be bursty. Third, we need to remember that communication is interactive, and we need to think of the interrogator too. To facilitate this research programme, he’s building a repository, Cairn, where deception researchers can check in and share their corpora of textual data.

    Judee Burgoon takes a broader view of technology-assisted lie detection; in addition to linguistic analysis, which has given us a lot of really good results recently, many other signals can be picked up by suitable sensors. Detecting deception is an intrinsically complex problem, with correlates of deception having many causes and no deterministic signals. Her focus is on non-contact detection technologies, and on moving to real-time, remote analysis that’s robust for high-volume applications such as security screening where operators are not highly skilled. Detecting strategic liars in such contexts is particularly hard. She’s been working on everything from microexpression analysis to using a force platform to pick up on postural changes. Kinesic analysis extends from posture to gait and gestures, which she tracks using image recognition to pick up skin blobs in video. A new frontier is studying eye movements, ranging from gaze tracking through pupillometry to eyeblink modulation (give people a surprise stimulus when asking them a question, and the “startle blink” length leaks a truth/lie signal). All of these indicators are feeble by themselves, but their fusion can be significantly better, and we’re starting to pick up anticipatory as well as reactive data.
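
    To see why fusing feeble indicators helps (my illustration of the general point, not Burgoon’s pipeline), note that if several weak cues provide roughly independent likelihood-ratio evidence, their log-odds simply add, so the fused posterior separates liars from truth tellers better than any single cue:

    ```python
    # Why weak cues fuse well: under (assumed) independence, per-cue
    # likelihood ratios P(cue|lie)/P(cue|truth) combine by adding log odds.
    import math

    def fused_p_lie(cue_lrs, prior=0.5):
        log_odds = math.log(prior / (1 - prior)) + sum(map(math.log, cue_lrs))
        return 1 / (1 + math.exp(-log_odds))

    # four individually feeble cues (LRs barely above 1) -- invented values
    cues = [1.3, 1.4, 1.2, 1.5]   # e.g. posture, gaze, blink, vocal pitch
    print(f"one cue:    P(lie) = {fused_p_lie(cues[:1]):.2f}")
    print(f"all fused:  P(lie) = {fused_p_lie(cues):.2f}")
    ```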

    Bruno Verschuere does not believe that progress will come from more complex machinery but from simpler approaches. He’s been working on using response times, which he has found he can sharpen in subtle ways, for example by increasing response conflict, inverting the “yes” and “no” buttons when suspects are presented with the sensitive answer (such as the crime location) while telling them to press the decision button as fast as possible. He is now using online platforms such as mTurk and CrowdFlower to automate these studies, as he finds they are now dependable enough to use as a tool to explore side issues such as item saliency and motivation. Studies can also be run rapidly on topical issues.
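
    The logic of such a reaction-time test can be sketched as follows (an illustration under my own assumptions, not Verschuere’s protocol): a knowledgeable subject is slower to deny recognising the probe than matched irrelevant items, and the probe-minus-irrelevant difference is the signal.

    ```python
    # RT-based concealed information test, minimal form: compare response
    # times to the probe (the sensitive item) and to matched irrelevants.
    # All numbers invented; real tests use standardised per-subject scores.
    from statistics import mean

    rt_ms = {
        "probe":      [612, 655, 640, 701, 668],            # e.g. crime location
        "irrelevant": [521, 540, 498, 533, 512, 527, 545],  # control locations
    }

    effect = mean(rt_ms["probe"]) - mean(rt_ms["irrelevant"])
    print(f"probe - irrelevant: {effect:.0f} ms")  # large slowdown suggests recognition
    ```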

    The fourth panellist, Giorgio Ganis, noted that the traditional approach was to assume that a deceptive mental state would lead to a physiological “process D” that could be measured, such as arousal being measured by a polygraph. However, process D can be in the brain; we can record all sorts of stuff there nowadays, so why can’t we find neural correlates of deception? Giorgio has been doing various analyses of EEG time series, ranging from spectral analysis to ERPs. This is much cheaper than fMRI but gives comparable data. There are interesting research challenges with signal detection either way. These technologies are very promising; they just haven’t been around as long as skin conductance. But read Halperin’s “The Truth Machine” about a putative future with a 100% accurate lie detection machine; and beware of cognitive countermeasures, of tricks you can play in your head.

    Discussion topics included self-deception, fusing EEG and fMRI traces, the replicability of many sensor experiments, the use of reaction-time measurements by payday lenders and in concealed information tests, other concealed information techniques such as oculometrics, compliance versus sabotage and generally the ease of subversion of different test techniques, means for deterring deception particularly in controllable environments such as online and future augmented reality, and our ability to ignore evidence (e.g. of past self-deception).

  8. Bob Arno is a professional pickpocket, who trains law enforcement round the world how to spot the bad guys. He showed parts of a training video on thief spotting with footage from hidden surveillance cameras showing gangs working various tactics. Shots of Bob stealing a phone and a tie back from a thief who took his wallet brought laughter; Bob’s lost his wallet 150 times as he hangs out in touristy neighbourhoods of cities like Lima, Johannesburg, St Petersburg and Barcelona, with his wife as partner and covert photographer. The gangs are generally three or four strong, with a spotter looking for undercover cops (who can be spotted by their radios). Mostly the thieves use finger signals. A city like Paris might have 1500 incidents a day; thieves are hard to catch, unless the undercover cops are really good. Thieves need about two wallets a day, which takes about six attempts. A wallet might be worth 2000 Euros, mostly from the cards. Laws are lax; arrested suspects are often out the next day. Surveillance rules vary; in Germany CCTV videos have to be erased after 60 days unless someone is prosecuted. However thieves can often be spotted by eyeball; you can try looking for people in small groups moving in unison, with coats over arms to provide cover, with signals or using phones, and by lots of other details. In questions, he did notice it becoming easier to sell credit cards ten years ago when underground markets emerged; thieves even run blogs.

    Shahar Ayal gave the first regular talk of Tuesday’s last session. He’s investigating ethical dissonance, the conflict between people’s desire to be moral enough to maintain positive self-esteem, and the fact that most people cheat to some extent. People sometimes try to balance the scales by indulging in pro-social behaviour to counterbalance cheating; altruistic cheating is also easier, and gets easier still as the group size grows. Recently he has been studying charitable donations. Subjects name organisations they support, oppose or don’t care about. You can earn money by giving inaccurate game responses, but half the money goes to you and half to an organisation. People cheat a lot to earn money for charities they like, and a little for charities to which they are indifferent; they won’t cheat to help a charity they hate. In other words, they are willing to forgo cheating in order not to do what they consider to be social harm. The effect is stronger when 100% goes to the organisation. He then combined a variant of the experiment, where the subject got all, half (control) or none of the take, with a polygraph examination, and found a positive (lie) signal in the egocentric condition, none in the control (50/50) condition, and a negative correlation when all the money went to a favoured organisation! He calls this a must-lie condition. In questions, Dan Ariely remarked that people want their own politicians to lie (including personal lies) but don’t want the others to; there’s a prevalent feeling of the end justifying the means.

    Kim Serota is interested in the frequency distribution of lying; although most people may tell two lies a day on average, the distribution has been found to be heavy-tailed in one study after another, and the pattern of response is completely consistent. Some people report telling no or few lies (the normatively honest), while a handful are frequent liars. Is the distribution Poisson? There’s too much data in the tail. He finds he can model the distribution as a Poisson population of normatively honest people plus a power-law tail of prolific liars following Zipf’s law. The one exception is teenagers, who lie a lot; they seem to be normally distributed. This has significant implications for the accuracy scores of deception experiments! We shouldn’t experiment on teenagers, and should be careful about whether we ought to experiment on the normatively honest or on the rest.
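
    As a toy illustration of this two-population reading (mine, not Serota’s model code), one can simulate lies-per-day counts where most respondents draw from a low-rate Poisson and a small fraction of prolific liars draw from a truncated power-law tail; all parameters here are invented.

    ```python
    # Toy simulation of a two-population lie-frequency distribution:
    # normatively honest respondents ~ Poisson, prolific liars ~ truncated
    # power law. All parameters invented for illustration.
    import random
    from collections import Counter
    from math import exp

    random.seed(1)

    def poisson(lam):                      # Knuth's algorithm
        L, k, p = exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                return k
            k += 1

    def power_tail(alpha, xmin=5, xmax=200):
        u = random.random()                # inverse-CDF sampling
        return int((u * (xmax**(1 - alpha) - xmin**(1 - alpha))
                    + xmin**(1 - alpha)) ** (1 / (1 - alpha)))

    N, prolific = 10_000, 0.05
    lies = [power_tail(2.2) if random.random() < prolific else poisson(1.6)
            for _ in range(N)]

    counts = Counter(lies)
    for k in sorted(counts)[:6]:
        print(f"{k:3d} lies/day: {counts[k]:5d} respondents")
    print(f"heaviest liar: {max(lies)} lies/day")
    ```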

    Tuesday’s last speaker, Raluca Briazu, spoke on “Undoing the past to lie in the future”. Counterfactual thinking plays a role in deceptive communication; both involve recounting something that didn’t happen, and call on similar executive function processes. Both decline with age, and with Parkinson’s. It was also known that observing desired counterfactuals increases our tendency to deceive. She set up a study where subjects told a low-risk lie (to a neighbour, about not making friends) or a high-risk one (to the police, about a car crash). They were asked what they’d say if they wanted to lie, and the availability of counterfactuals was manipulated by making the story events appear ordinary and remote in time, or extraordinary and recent. People lied significantly more in the latter condition. This might shed light on individual differences in deceptive communication.

  9. Wednesday’s proceedings started with a panel, and the first speaker, Galit Nahari, works on reality monitoring, a verbal lie detection tool with an accuracy of about 70% in the lab. Practitioners don’t apply it yet; is it ready? She’s sceptical; RM depends on both memory and verbal ability, and while lab interrogations take place immediately, real-life ones typically happen weeks to years after the event. Even truth tellers provide less detail over time, and well-briefed liars can match them. The criminal population has less verbal ability than psychology students. People judge story quality by their own norms. More realistic research is needed.

    Next, Nicolas Rochat hypothesised that pupil size is a signal of deceptive behaviour, being affected by cognitive load. He used a classical guilty knowledge test procedure and found that pupil diameter did indeed indicate a greater cognitive load, but there were complex effects, possibly related to inhibition.

    Joanna Ulatowska works with eye tracking. Previous work associated longer gaze fixation with deeper cognitive processing, so she set out to discover whether lying and truth telling affected gaze behaviour. This turned out to be the case, with significance concentrated in the interviewer’s dwell time looking at the liar’s lips. Second, answering indirect questions did not allow distinction between truth and lies, while direct questions did.

    Leanne ten Brinke is working on Damasio’s hypothesis that it’s a risk signal in the environment (of which lying is just one) that evokes a physiological response, which she measures in the form of peripheral vasoconstriction and sweating. She collected 46 usable datasets from 48 participants who watched Porter’s 12 high-stakes videos, of six murderers and six innocent people making TV appeals to the public. Watching liars caused significant vasoconstriction (p = 0.007 in a sample of 45), substantially in response to the murderers’ smiles. It caused some sweating too (but less; p = 0.34).

    Elena Svetieva is a psychologist doing emotion research and interested in the development of moral behaviour. She crosses over into anthropology (to study differences in morality across cultures and groups) and behavioural economics (in order to study decisions to lie). Emotional contagion and theory of mind are the soil in which moral intuitions develop among children. She uses Gneezy’s deception game, in which one participant can give another either truthful or mendacious guidance about which option to select, without the latter ever learning whether they were lied to. Various subjective and objective measures of empathy were used, as well as Haidt’s measures of moral foundations. The acceptability of self-gain, altruistic, social-acceptance and conflict-avoidance lies was rated. A Bayesian estimation model showed significant negative correlations with the harm/care moral foundation. On debriefing, some odd attitudes to truth were found, such as a participant who thought the other one would disbelieve him, so told the truth to get him to make the wrong decision.

    Hugues Delmas has been developing photographic questionnaires in the hope of getting better results than the standard textual methods used when doing deception research on facial indicators. His work is based on the facial action coding system (FACS) and uses 54 photos: 43 FACS action units, the three most common eyebrow movements, and the seven primary emotions described by Ekman. These were shown to French policemen, who were asked whether each face was associated with lying. 17 movements were said to be more present in deception by all participants; these included asymmetric eye and eyebrow movements, and all faces where the lips were pursed or even slightly occluded. 9 were said to be less present, including angry, sad and disgusted faces. There were differences between policemen and civilians, but not much; 91% of beliefs were shared.

    Discussion started on whether people can be trained to pick up their own body’s intuitive feelings about whether they’re hearing lies or truth. Leanne has tried awareness raising and found that people with low interoceptivity did increase their lie detection accuracy, but the effect size was not huge. The link between moral foundations and lie disapproval suggests that conservatives and liberals might react differently to different types of lies, or be deterred from lying by different primes, but there’s no real research on this yet. The influence of alcohol is a further ecological issue with research; it is a significant factor in over half of all violent crime and a quarter of sexual offences; yet again there is little research, and it causes real operational problems, as you can’t use lie detection tools after the initial interrogation once the suspect is sober, as he’s now “contaminated”. Dark personalities may lack emotional contagion but still understand others’ feelings, so you might say they still have empathy in some sense. One way to explore what people feel guilty about is to ask them to tell stories about something a bit bad they did but about which they don’t feel too bad (they often lie about not feeling bad, and behavioural effects can be tweaked).

  10. The last session was kicked off by Ewout Meijer, who uses physiological responses in guilty knowledge tests, comparing skin conductance, EEG P300 and other signals; a meta-analysis suggests that salience might account for the difference between the first two. He’s interested in whether the technique can be used simultaneously on multiple suspects, and in particular whether it’s possible to extract fresh information from a set of individuals, some of whom are knowledgeable. He tried an experiment with 61 students, of whom 30 had shared guilty knowledge and the others had been briefed on uncorrelated attack plans; a hidden Markov model could detect guilty groups of up to a dozen people. In a third experiment, 105 participants formed groups of 5 to plan attacks together, with the experimenter blind; 13 out of 19 groups were correctly identified, but with two false positives. In 7 groups they got the attack location, in 9 of 20 there was no decision, and 4 of 20 were incorrect. A fourth experiment got groups to plot a bomb attack on the road from Tel Aviv to Jerusalem and tried to identify the groups using skin conductance measurements as a dot moved along a map. This resulted in a lot of false positives, but a possibly usable signal at the distance corresponding to one of the planned bombs; the others were way off.
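
    The group-detection idea can be sketched very simply (my illustration, not Meijer’s hidden Markov model): score each member’s response to every candidate item, standardise within subject, and look for the item that stands out jointly across the group. The data below are invented.

    ```python
    # Toy group-level concealed information test: standardise each member's
    # skin conductance responses across items, then average across the group;
    # a jointly salient item suggests shared knowledge. Data invented.
    from statistics import mean, stdev

    locations = ["airport", "station", "harbour", "stadium"]
    scrs = [                      # rows: group members, columns: locations
        [0.41, 0.39, 0.82, 0.44],
        [0.35, 0.33, 0.71, 0.30],
        [0.52, 0.47, 0.90, 0.49],
    ]

    def z(row):
        m, s = mean(row), stdev(row)
        return [(x - m) / s for x in row]

    group = [mean(col) for col in zip(*map(z, scrs))]
    for name, score in zip(locations, group):
        print(f"{name:8s} {score:+.2f}")
    print("salient item:", locations[max(range(len(group)), key=group.__getitem__)])
    ```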

    Nathalie klein Selle was next, on the effects of arousal and delayed testing on concealed information tests. Arousal affects memory by narrowing attention during encoding, causing events to be more likely to be stored, their recall to persist or even improve over time, and recall to be richer. Previous studies manipulated not only arousal but also motivation; Nathalie tested 136 participants on pictures of faces and scenes that were either arousing or non-arousing, testing either at once or after a delay. It turned out that negative arousing items led to larger skin conductance responses in the delayed condition.

    Lara Warmelink has been working on the implicit association test (IAT), which was initially developed to explore unconscious biases such as racism or misogyny: the subject has to classify an object where (say) green and good share the same response button, as do yellow and bad; bias causes interference, which can be measured as delay. Sartori showed in 2008 that the IAT can be used to detect deception, while Agosta argued in 2011 that it could be used to detect intentions by measuring delays associated with true and false intention statements. So: can you use it to detect deception about intentions? Subjects gave information about true and false intentions, or were given it. Both liars and truth tellers are faster when statements about their intentions and actions are congruent; so false intentions can be detected as well as truthful ones.
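
    A sketch of the scoring logic (my illustration, not Warmelink’s exact paradigm): responses are slower on blocks whose response mapping conflicts with the subject’s true intention, and a Greenwald-style D score scales this latency difference by the pooled standard deviation.

    ```python
    # Greenwald-style D score for an intention IAT (illustrative): the
    # congruent/incongruent latency difference scaled by the pooled SD.
    from statistics import mean, stdev

    congruent   = [540, 565, 552, 548, 571, 559]  # mapping matches true intention
    incongruent = [648, 703, 661, 689, 670, 655]  # mapping conflicts with it

    d = (mean(incongruent) - mean(congruent)) / stdev(congruent + incongruent)
    print(f"D = {d:.2f}")  # a large D marks the intention statement as the true one
    ```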

    Howard Bowman studies perception using rapid serial visual presentation. He’s been working on subliminal salience search and EEG detection on the fringe of awareness over the last couple of years. When we overload the brain with too many stimuli, it starts doing subliminal salience search, causing your name or a famous person’s name to pop out from 30 shown in two seconds. The EEG results are interesting; recognition of images that remain subliminal results in much smaller signals than images that break into consciousness, as the latter give rise to a brain-scale state and a large P3 signal. So in theory you can present a lot of items to someone and find which are salient; the US military already use this. Howard’s work is on applying this to detecting deception. In theory it should be hard for people to apply a countermeasure to control the P3 response, as it is not volitional. Previous EEG methods have been “broken” by subjects making an irrelevant item salient; the interviewer’s task is to distinguish between this “fake” and the true probe. He finds that with appropriate statistical processing, he can; a key fact is that subjects can’t search for repeated irrelevant objects, as they don’t consciously perceive them. As a result, subjects can’t effectively inflate the false positive rate. He can get a 50% hit rate on familiar faces and 70% on familiar email addresses.
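
    The statistical processing involved might look something like this in its simplest form (my illustration, not Bowman’s pipeline): compare the P3 amplitude evoked by the probe against the irrelevants with a permutation test; the amplitudes below are invented.

    ```python
    # Permutation test on P3 amplitudes (illustrative): is the probe's mean
    # amplitude larger than the irrelevants' beyond chance relabelling?
    import random
    from statistics import mean

    random.seed(0)
    probe      = [8.1, 7.4, 9.0, 6.8, 8.6, 7.9]            # uV per probe trial
    irrelevant = [3.2, 4.1, 2.8, 3.9, 4.4, 3.1,
                  3.6, 2.9, 4.0, 3.3, 3.8, 3.5]            # uV per irrelevant trial

    observed = mean(probe) - mean(irrelevant)
    pool, n = probe + irrelevant, len(probe)
    hits = 0
    for _ in range(10_000):
        random.shuffle(pool)
        if mean(pool[:n]) - mean(pool[n:]) >= observed:
            hits += 1
    print(f"difference {observed:.1f} uV, permutation p = {hits / 10_000:.4f}")
    ```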
