Category Archives: Security psychology

Bugs in our pockets?

In August, Apple announced a system to check all our iPhones for illegal images, then delayed its launch after widespread pushback. Yet some governments continue to press for just such a surveillance system, and the EU is due to announce a new child protection law at the start of December.

Now, in Bugs in our Pockets: The Risks of Client-Side Scanning, colleagues and I take a long hard look at the options for mass surveillance via software embedded in people’s devices, as opposed to the current practice of monitoring our communications. Client-side scanning, as the agencies’ new wet dream is called, has a range of possible missions. While Apple and the FBI talked about finding still images of sex abuse, the EU was talking last year about videos and text too, and of targeting terrorism once the argument had been won on child protection. It can also use a number of possible technologies; in addition to the perceptual hash functions in the Apple proposal, there’s talk of machine-learning models. And, as a leaked EU internal report made clear, the preferred outcome for governments may be a mix of client-side and server-side scanning.
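
To make the technology concrete: a perceptual hash is a short fingerprint designed so that visually similar images yield similar (or identical) values. Below is a minimal sketch of the classic ‘average hash’, which is far simpler than the learned function in Apple’s proposal; it illustrates the general idea only, and the hash size and Pillow-based implementation are my own choices.

```python
# Sketch of a classic perceptual hash ("average hash") -- an
# illustration of the general idea, not Apple's NeuralHash.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size greyscale, then emit one bit
    per pixel: 1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits  # a 64-bit fingerprint when hash_size == 8

def hamming_distance(h1: int, h2: int) -> int:
    """Similar images should give hashes only a few bits apart;
    matching is done by thresholding this distance."""
    return bin(h1 ^ h2).count("1")
```

The point to note is that the matching is fuzzy by design – which is what lets it survive resizing and recompression, and also what creates room for false positives and adversarial games.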

In our report, we provide a detailed analysis of scanning capabilities at both the client and the server, the trade-offs between false positives and false negatives, and the side effects – such as the ways in which adding scanning systems to citizens’ devices will open them up to new types of attack.

We did not set out to praise Apple’s proposal, but we ended up concluding that it was probably about the best that could be done. Even so, it did not come close to providing a system that a rational person might consider trustworthy.

Even if the engineering on the phone were perfect, a scanner brings within the user’s trust perimeter all those involved in targeting it – in deciding which photos go on the naughty list, or how to train any machine-learning models that riffle through your texts or watch your videos. Even if it starts out trained on images of child abuse that all agree are illegal, it’s easy for both insiders and outsiders to manipulate images to create both false negatives and false positives. The more we look at the detail, the less attractive such a system becomes. The measures required to limit the obvious abuses so constrain the design space that you end up with something that could not be very effective as a policing tool; and if the European institutions were to mandate its use – and there have already been some legislative skirmishes – they would open up their citizens to quite a range of avoidable harms. And that’s before you stop to remember that the European Court of Justice struck down the Data Retention Directive on the grounds that such bulk surveillance, without warrant or suspicion, was a grossly disproportionate infringement on privacy, even in the fight against terrorism. A client-side scanning mandate would invite the same fate.

But ‘if you build it, they will come’. If device vendors are compelled to install remote surveillance, the demands will start to roll in. Who could possibly be so cold-hearted as to argue against the system being extended to search for missing children? Then President Xi will want to know who has photos of the Dalai Lama, or of men standing in front of tanks; and copyright lawyers will get court orders blocking whatever they claim infringes their clients’ rights. Our phones, which have grown into extensions of our intimate private space, will be ours no more; they will be private no more; and we will all be less secure.

Is Apple’s NeuralMatch searching for abuse, or for people?

Apple stunned the tech industry on Thursday by announcing that the next version of iOS and macOS will contain a neural network to scan photos for sex abuse. Each photo will get an encrypted ‘safety voucher’ saying whether or not it’s suspect, and if more than about ten suspect photos are backed up to iCloud, then a clever cryptographic scheme will unlock the keys used to encrypt them. Apple staff or contractors can then look at the suspect photos and report them.
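
For readers curious about the ‘clever cryptographic scheme’: the threshold property can be got from Shamir secret sharing, where a key is split so that any t shares reconstruct it and fewer reveal nothing. The sketch below shows only that threshold idea, with t = 10 as an assumed parameter; Apple’s actual protocol wraps it in private set intersection and other machinery.

```python
# Minimal Shamir threshold secret sharing, to illustrate a key that
# only becomes recoverable once ~10 vouchers match. This sketches the
# threshold property alone, not Apple's full protocol.
import random

P = 2**127 - 1  # a prime; all arithmetic is in GF(P)

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Hide `secret` in a random degree-(t-1) polynomial f with
    f(0) = secret, and hand out the points (1, f(1)) .. (n, f(n))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 from t points."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
vouchers = split(key, t=10, n=1000)  # one share per uploaded photo
assert reconstruct(random.sample(vouchers, 10)) == key  # 10 matches unlock
```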

We’re told that the neural network was trained on 200,000 images of child sex abuse provided by the US National Center for Missing and Exploited Children. Neural networks are good at spotting images “similar” to those in their training set, and people unfamiliar with machine learning may assume that Apple’s network will recognise criminal acts. The police might even be happy if it recognises a sofa on which a number of acts took place. (You might be less happy, if you own a similar sofa.) Then again, it might learn to recognise naked children, and flag up a snap of your three-year-old child on the beach. So what the new software in your iPhone actually recognises is really important.

Now the neural network described in Apple’s documentation appears very similar to the networks used in face recognition (hat tip to Nicko van Someren for spotting this). So it seems a fair bet that the new software will recognise people whose faces appear in the abuse dataset on which it was trained.

So what will happen when someone’s iPhone flags ten pictures as suspect, and the Apple contractor who looks at them sees an adult with their clothes on? There’s a real chance that they’re either a criminal or a witness, so they’ll have to be reported to the police. In the case of a survivor who was victimised ten or twenty years ago, and whose pictures still circulate in the underground, this could mean traumatic secondary victimisation. It might even be their twin sibling, or a genuine false positive in the form of someone who just looks very much like them. What processes will Apple use to manage this? Not all US police forces are known for their sensitivity, particularly towards minority suspects.

But that’s just the beginning. Apple’s algorithm, NeuralMatch, stores a fingerprint of each image in its training set as a short string called a NeuralHash, so new pictures can easily be added to the list. Once the tech is built into your iPhone, your MacBook and your Apple Watch, and can scan billions of photos a day, there will be pressure to use it for other purposes. The other part of NCMEC’s mission is missing children. Can Apple resist demands to help find runaways? Could Tim Cook possibly be so cold-hearted as to refuse to add Madeleine McCann to the watch list?
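
To see why the extension worry is so acute, note how little machinery is involved in matching against a hash list. The sketch below is hypothetical – the fingerprint value, distance threshold and matching rule are my assumptions for illustration – but it shows that adding a new target is a one-line change.

```python
# Hypothetical watch-list matching over 64-bit perceptual hashes.
# The values and the distance threshold are illustrative assumptions.
WATCH_LIST: set[int] = set()

def is_suspect(h: int, max_distance: int = 4) -> bool:
    """Flag an image whose fingerprint lies within a few bits of any
    listed fingerprint (fuzzy matching via Hamming distance)."""
    return any(bin(h ^ w).count("1") <= max_distance for w in WATCH_LIST)

# Extending the mission is trivial once the plumbing exists:
WATCH_LIST.add(0x9F3A6C01D2E45B77)  # hash of some new target image
```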

After that, your guess is as good as mine. Depending on where you are, you might find your photos scanned for dissidents, religious leaders or the FBI’s most wanted. It also reminds me of the Rasterfahndung in 1970s Germany – the dragnet search of all digital data in the country for clues to the Baader-Meinhof gang. Only now it can be done at scale, and not just for the most serious crimes either.

Finally, there’s adversarial machine learning. Neural networks are fairly easy to fool in that an adversary can tweak images so they’re misclassified. Expect to see pictures of cats (and of Tim Cook) that get flagged as abuse, and gangs finding ways to get real abuse past the system. Apple’s new tech may end up being a distributed person-search machine, rather than a sex-abuse prevention machine.
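
The textbook attack is the fast gradient sign method: nudge every pixel a tiny amount in whichever direction moves the model’s output the way the attacker wants. Here is a toy numpy sketch on a linear ‘model’, where the gradient is simply the weight vector; attacks on real CNNs and perceptual hashes take more work, but rest on the same principle.

```python
# Toy fast-gradient-sign attack on a linear "model": tiny per-pixel
# changes, bounded by eps, move the score dramatically. Attacks on
# real CNNs are iterative variants of this idea.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)    # model weights (= the gradient here)
x = rng.uniform(size=64 * 64)   # a flattened toy "image" in [0, 1]

eps = 0.05                      # perturbation budget per pixel
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)  # FGSM step

print("clean score:      ", w @ x)      # e.g. a "match" score...
print("adversarial score:", w @ x_adv)  # ...collapses, though x_adv
                                        # looks identical to x
```

The same trick runs in both directions: pushing innocuous images over the match threshold, or pulling real abuse material under it.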

Such a technology requires public scrutiny, and as the possession of child sex abuse images is a strict-liability offence, academics cannot work with them. While the crooks will dig out NeuralMatch from their devices and play with it, we cannot. It is possible in theory for Apple to get NeuralMatch to ignore faces; for example, it could blur all the faces in the training data, as Google does for photos in Street View. But they haven’t claimed they did that, and if they did, how could we check? Apple should therefore publish full details of NeuralMatch plus a set of NeuralHash values trained on a public dataset with which we can legally work. It also needs to explain how the system it deploys was tuned and tested; and how dragnet searches of people’s photo libraries will be restricted to those conducted by court order so that they are proportionate, necessary and in accordance with the law. If that cannot be done, the technology must be abandoned.

WEIS 2021 – Liveblog

I’ll be trying to liveblog the twentieth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (June 28/29). The event was introduced by the co-chairs Dan Arce and Tyler Moore. 38 papers were submitted, and 15 accepted. My summaries of the sessions of accepted papers will appear as followups to this post; there will also be a panel session on the 29th, followed by a rump session for late-breaking results. (Added later: videos of the sessions are linked from the start of the followups that describe them.)

Patient confidentiality in remote consultations

During the lockdown last year, I was asked by the International Psychoanalytic Association (IPA) to help them update their guidance on remote consultations. I spoke to a range of GPs, surgeons, psychologists and psychoanalysts about what they’d learned during the first lockdown about working over the phone, or over Skype or Zoom. The IPA has now published my report, on a web page that also has their guidance to members both before and after the exercise.

Before the pandemic, remote consultation did happen, but not all therapists offered it; and confidentiality concerns tended to focus on technical security measures such as whether the call was encrypted end-to-end. After everyone was forced online in March and April 2020, clinicians learned rapidly to focus on the endpoints. Patients often have problems finding a private space to talk; there may be a family member in earshot, whether by accident, or because they’re cooped up in a tiny apartment, or because they have a controlling partner or parent. A clinician may return a patient’s call and catch them in a supermarket queue. And the clinic too can be interrupted, if the clinician is practising from home.

Technical endpoint compromise is occasionally an issue; a controlling family member could inspect a patient’s device and discover a therapeutic relationship that had not been disclosed. By far the worst endpoint compromise that happened during the study period was when the Vastaamo chain of clinics in Finland was hit by ransomware; 45,000 patients’ records were stolen, and some were put online by extortionists demanding bitcoin payments. (And now we face an even larger-scale issue in the UK as the government plans to hoover up all our GP records for sale to drug companies unless we opt out by June 25; see here for how to do that.)

Such horrors aside, the core problem is to establish a therapeutic space where both patient and clinician can interact effectively, which means being able to concentrate and also to relax. There’s more to this than just being comfortable trusting the endpoint environments, the devices, the communications medium and any record-keeping mechanism. Interaction matters too. Many clinician communities discovered independently that the plain old telephone system often works better than new-fangled stuff such as Skype and Zoom. Video calls add maybe half a second of latency for buffering, which destroys conversational turn-taking. A further advantage of the phone is that you’re not staring at someone’s face at an unnatural distance. You can walk around the room, or even walk around the park.

Since doing this work I’ve started to avoid Zoom and Teams in favour of phone calls when I can, and I use end-to-end encrypted voice calls on WhatsApp or Signal where call costs or client confidentiality make that sensible.

WEIS 2020 – Liveblog

I’ll be trying to liveblog the nineteenth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (December 14/15) and streamed live on the CEPS channel on YouTube. The event was introduced by the general chair, Lorenzo Pupillo of CEPS, and the program chair, Nicolas Christin of CMU. My summaries of the sessions will appear as followups to this post, and videos will be linked here in a few days.

SHB Seminar

The SHB seminar on November 5th was kicked off by Tom Holt, who’s discovered a robust underground market in identity documents that are counterfeit or fraudulently obtained. He’s been scraping both open websites and darkweb sites for data, and analysing how people find, procure and use such credentials. Most vendors were single-person operators, although many operated within affiliate programs; many transactions involved cryptocurrency; and many involved generating PDFs that buyers can print at home, good enough for young people to use to buy alcohol. Curiously, open-web products seem to cost twice as much as dark-web products.

Next was Jack Hughes, who has been studying the contract system introduced by hackforums in 2018 and made mandatory the following year. This enabled him to analyse crime-forum behaviour before and during the covid-19 era. How do new users become active and build up trust, and how does that trust evolve? He collected 200,000 transactions and analysed them. The contract mandate stifled growth quickly, leading to a first peak in activity; covid caused a second. The market was already centralised, and became more so with the pandemic. However, contracts are getting done faster, and the main activity is currency exchange: the market seems to be working largely as a cash-out mechanism.

Anita Lavorgna has been studying the discourse of groups who oppose public mask mandates. Like the antivaxx movement, this can draw in fringe groups and become a public-health issue. She collected 23,654 tweets from February to June 2020. There’s a diverse range of voices from different places on the political spectrum, but with a transversal theme of freedom from government interference. Groups seek strength in numbers and try to ally into movements, with the mask becoming a symbol of political identity construction. Anita found very little interaction between the different groups: only 144 messages in total.

Simon Parkin has been working on how we can push back on bad behaviours online when they are linked with good behaviours that we wish to promote. Precision is hard, as many of the desirable behaviours are not explicitly recognised as such, and many behaviours arise from a combination of personal incentives and context. The best way forward is usability engineering – making the desired behaviours easier.

Bruce Schneier was the final initial speaker, and his topic was covid apps. The initial rush of apps, which arrived between March and June, had known issues around false positives and false negatives. We’ve also used all sorts of other tools, such as analysis of Google maps data to measure lockdown compliance. The third thing is the idea of an immunity passport, saying you’ve had the disease, or a vaccine; that will have the same issues as the fake IDs that Tom talked about. Finally, there’s compliance tracking, where your phone monitors you. The usual countermeasures apply – consent, minimisation, infosec and so on – though the trade-offs might be different for a while. A further bunch of issues concern home working and the larger attack surface that many firms now have, thanks to unfamiliar tools, less resistance to being told to do things, and so on.

The discussion started on fake ID. Tom hasn’t yet done test purchases, and might in future look at fraudulently obtained documents, as opposed to completely counterfeit ones. Is hackforums helping drug gangs turn paper into coin? This is not clear; the activity seems to be more about cashing out cybercrime than street crime.

There followed discussion by Anita of how to analyse corpora of tweets, and the implications for policy in real life; things are made more difficult by the fact that discussions drift off into other platforms we don’t monitor. Another topic was the interaction with fashion: where some people wear masks (or not) as a political statement, many more buy masks that get across a more targeted statement. Fashion is really powerful, and tends to be overlooked by people in our field; usability research perhaps focuses too much on utilitarian economics, and is a bit of a blunt instrument.

Another example related to covid is the growing push for monitoring software on employees’ home computers. Unfortunately, Uber and Lyft bought a referendum result that enables them not to treat their staff in California as employees, so the regulation of working hours at home will probably fall to the EU. Can we perhaps make some input into what that should look like? Another issue with the pandemic is its effect on information-security markets: why should people buy corporate firewalls when their staff are all over the place? And to what extent will some of these changes be permanent, if people work from home more?

A further thread of discussion was how the privacy properties of covid apps make it hard for people to take risk-management decisions. The apps appear ineffective because they were designed, in various subtle ways, to do privacy rather than public health; giving people low-grade warnings that require no action appears to be an attempt to raise public awareness, like mask mandates, rather than an effective attempt to get exposed individuals to isolate. Apps that check people into venues have their own issues, and appear to be largely security theatre. Security theatre comes into its own where the perceived risk is much greater than the actual risk; covid is the opposite. What can be done in this case? Targeted warnings? Humour? And what might happen when fatigue sets in? People will compromise compliance to make their lives bearable. That can be managed to some extent in institutions like universities, but in wider society it will be harder.

We ended with the suggestion that the next SHB seminar should be in February, which should be the low point; after that we can look forward to things getting better, and hopefully to a meeting in person in Cambridge on June 3-4 2021.

Three Paper Thursday: Broken Hearts and Empty Wallets

This is a guest post by Cassandra Cross.

Romance fraud (also known as romance scams or sweetheart swindles) affects millions of individuals globally each year. In 2019, the Internet Crime Complaint Center (IC3) in the USA received reports of over US$475 million lost to romance fraud. Similarly, Australian victims reported losing over AU$80 million, and British citizens reported over £50 million lost in 2018. Given the known under-reporting of fraud overall, and of online fraud in particular, these figures probably represent only a fraction of the actual losses incurred.

Romance fraud occurs when an offender uses the guise of a legitimate relationship to gain a financial advantage from their victim. It differs from a bad relationship, in that from the outset, the offender is using lies and deception to obtain monetary rewards from their partner. Romance fraud capitalises on the fact that a potential victim is looking to establish a relationship and exhibits an express desire to connect with someone. Offenders use this to initiate a connection and start to build strong levels of trust and rapport.

As with all fraud, victims experience a wide range of impacts in the aftermath of victimisation. While many believe these to be purely financial, in reality they extend to a decline in both physical and emotional wellbeing, relationship breakdown, unemployment, homelessness and, in extreme cases, suicide. In the case of romance fraud, there is the additional trauma of grieving both the loss of the relationship and the loss of any funds transferred. For many victims, the loss of the relationship can be harder to cope with than the monetary loss, leaving them with deep feelings of betrayal and violation at the hands of the offender.

Sadly, there is also a large amount of victim blaming around romance fraud, and fraud in general. Fraud is unique in that victims actively participate in the offence, through the transfer of money, albeit under false pretences. As a result, they are seen as culpable for what occurs and are often blamed for their own circumstances. The stereotype of fraud victims as greedy, gullible and naïve persists, and acts as a barrier to disclosure, inhibiting victims’ ability to report the incident and to access support services.

Given the magnitude of the losses and their impact on victims, an emerging body of scholarship seeks to understand better how offenders target their victims, how they perpetrate their offences, and how victimisation affects the individuals concerned. The following three articles each explore a different aspect of romance fraud, giving a more holistic understanding of this crime type.


Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

Cybercrime is (often) boring

Much has been made in the cybersecurity literature of the transition of cybercrime to a service-based economy, with specialised services providing denial-of-service attacks, cash-out services, escrow, forum administration, botnet management, or ransomware configuration to less-skilled users. Despite this acknowledgement of the ‘industrialisation’ of much of the cybercrime economy, the picture of cybercrime painted by law enforcement and media reports is often one of ‘sophisticated’ attacks, highly skilled offenders, and massive payouts. In fact, as we argue in a recent paper accepted to the Workshop on the Economics of Information Security this year (and covered in KrebsOnSecurity last week), cybercrime-as-a-service relies on a great deal of tedious, low-income, and low-skilled manual administrative work.
