UCL Department of Science, Technology, Engineering and Public Policy

Understanding Intimate Partner Abuse as a Cybersecurity Issue

By Siobhan Pipa, on 19 August 2019

By Julia Slupska, Digital Ethics Lab at the University of Oxford

Why isn’t revenge porn a cybersecurity issue? Gendered security threats – such as tech abuse in the context of domestic violence – have often been ignored in the cybersecurity discussion. Julia Slupska – in collaboration with the UCL Gender and IoT Lab – is working on an “intimate threat security review” to address this gap.

Domestic violence and intimate partner abuse (IPA) are a frequent cause of death worldwide. A 2018 UN report found that women are more likely to be killed at home than in any other location.[1] IPA is still highly prevalent in the UK: domestic abuse-related crimes accounted for 32 percent of violent crimes in the year ending March 2017.

As digital technologies increasingly mediate personal relations, they are also wielded in ways that perpetuate existing patterns of coercion and control. In a survey of IPA survivors conducted by Women’s Aid (a domestic violence charity), 85 percent of respondents reported online abuse by their partner or ex-partner. Tech abuse falls on a spectrum from the relatively simple – such as harassing texts or messages – to sophisticated attacks such as spyware (including software originally developed for government surveillance) or gaslighting using smart home devices.

However, my past research shows that IPA is almost never considered in smart home security analysis papers, despite its prevalence. Instead, these papers’ threat models focus on external actors, such as hackers, thieves, domestic workers and errant Airbnb guests. Similarly, while the UK government has been highly proactive in developing standards for IoT safety and online harms, extant standards do not specifically incorporate the IPA threat model.

This raises a broader question as to why tech abuse in the context of IPA has not been conceptualised as a cybersecurity issue. Feminist and critical theorists of security studies have long argued that gender-related crimes are given a subsidiary status and ‘privatised’ or discounted within security practices. As I have argued elsewhere, the emerging field of cybersecurity seems likely to repeat the same dynamics of exclusion.

Consider the example of ‘revenge porn’ (i.e. non-consensual pornography), a technologically mediated form of abuse which is highly gendered. Revenge porn is rarely considered a ‘cybersecurity issue’ by cybersecurity experts and practitioners. Informally, practitioners in the field have told me this is because it is really a ‘privacy issue’.

Yet when an employee in a company shares confidential information with an unauthorised third party, this is clearly identified as a cybersecurity issue, and an entire subfield of cybersecurity has been developed to protect companies from insider threats. To combat insider threats, cybersecurity experts develop social and technical ‘access controls’ which, for example, might revoke access to sensitive data when employment ends acrimoniously. Why do similar access controls not exist for intimate relationships? Instead of developing programmes which revoke access to intimate data when a relationship ends, and making such programmes accessible to the wider public, online safety advocates publish guides aimed at teenagers, advising them not to share intimate photos in the first place.
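To make the analogy concrete, here is a minimal sketch (in Python) of the kind of “offboarding” logic the insider-threat subfield takes for granted: access to sensitive data is tied to a relationship, and ending the relationship revokes the access in one step. All names here are invented for illustration; nothing like this is standard in consumer apps that mediate intimate data.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: corporate-style access revocation applied to
# intimate data sharing. All class and resource names are invented.

@dataclass
class SharedResource:
    name: str                               # e.g. "shared photo album"
    authorised: set = field(default_factory=set)

@dataclass
class AccessController:
    resources: list = field(default_factory=list)

    def grant(self, person: str, resource: SharedResource) -> None:
        resource.authorised.add(person)

    def end_relationship(self, person: str) -> None:
        # The corporate analogue: when employment ends, access ends.
        # Consumer platforms rarely offer an equivalent single step.
        for resource in self.resources:
            resource.authorised.discard(person)

# Usage: access is granted during the relationship...
photos = SharedResource("shared photo album")
location = SharedResource("live location")
controller = AccessController([photos, location])
controller.grant("ex_partner", photos)
controller.grant("ex_partner", location)

# ...and a single revocation step removes it when the relationship ends.
controller.end_relationship("ex_partner")
assert "ex_partner" not in photos.authorised
assert "ex_partner" not in location.authorised
```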

To address this gap, this summer I started a project in collaboration with the UCL Gender and IoT (GIoT) project to flesh out an actionable Security Review for Intimate Threats (SRIT). The project’s methodology followed the GIoT project’s commitment to “action research”, in which researchers collaborate with practitioners to learn about their challenges and help them improve existing practices. To engage with practitioners both in support services and IoT design, I:

  • Attended GIoT workshops on tech abuse, in which professionals from organisations such as domestic violence shelters, counselling services and police stalking-response teams shared their experiences with tech abuse
  • Supported the GIoT response to the DCMS Online Harms White Paper
  • Presented the concept of an IoT SRIT to an audience of IoT practitioners at the London IoT Meetup Group
  • Conducted informal discussions about the SRIT with experts in secure development interventions, usable security, access controls, cybersecurity law and IoT development
  • Attended the 2019 IFSEC International Expo on CCTV and perimeter security technologies to interview smart lock manufacturers

On the basis of these action research discussions and the existing literature on security interventions and design principles for IPA, I developed a list of relevant product features and company practices. I then iteratively added questions by going through the online product manuals of several devices, and by asking vendors at IFSEC 2019 about their products and practices. This resulted in a security review questionnaire with over 40 questions. Going forward, I plan to develop the SRIT by running co-design workshops with domestic abuse support service practitioners who have experience with tech abuse, as well as IoT designers and more conventional cybersecurity experts. The end result will be an actionable assessment tool (a rough sketch of what such a questionnaire might look like in code follows the list below) which could be used by the following actors:

  • IoT and smart home designers seeking to assess their own designs and practices to create devices which better serve targets of tech abuse
  • Consumer protection bodies seeking to evaluate existing products and practices among IoT manufacturers
  • Academic researchers seeking a systematic understanding of how products and company practices shape the dynamics of tech abuse
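As a purely illustrative sketch of what “actionable” could mean in practice, a review like this might be represented as categorised yes/no questions that produce a simple score per product. The categories and questions below are invented placeholders, not the actual SRIT items.

```python
# Illustrative sketch only: the categories and questions are invented
# placeholders standing in for the real 40+ SRIT questions.

SRIT_SKETCH = {
    "account access": [
        "Can a co-user be removed without physical access to the device?",
        "Are all account holders notified when a new user is added?",
    ],
    "activity visibility": [
        "Does the device log who changed a setting, visibly to all users?",
    ],
    "data sharing": [
        "Can location or usage data sharing be disabled per recipient?",
    ],
}

def score_product(answers: dict) -> float:
    """Return the fraction of review questions answered 'yes'.

    `answers` maps each question string to True/False, e.g. as filled
    in by a reviewer working through a product's online manual.
    """
    questions = [q for qs in SRIT_SKETCH.values() for q in qs]
    return sum(bool(answers.get(q)) for q in questions) / len(questions)
```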
Figure 1: Intimate Threat Feature Framework

Many existing responses to the problem of tech abuse have involved developing guidance to help targets and support services improve their privacy and security practices. Although such guides are undoubtedly useful, this approach shifts responsibility onto IPA targets – who already face significant cognitive, emotional and financial strains – to continuously check settings across a multitude of applications and updates.

Consequently, many have called for technology developers to be more proactive in addressing this problem. This reflects a broader argument summarised by former Federal Trade Commission chief technologist Ashkan Soltani: tech companies need to move from conceptualising security narrowly as protecting their own users towards “abusability testing”, or considering “the possibility that users could exploit their tech to harm others, or the world.” My project with the GIoT team this summer – and my PhD research going forward – moves us one step closer to a world where abusability testing is the status quo whenever companies wish to sell products that mediate intimate relationships.
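What might abusability testing look like concretely? One can imagine unit-test-style checks run against a device’s access model before release, asking “can this device be turned against a co-user?” rather than only “can an outsider break in?”. The sketch below is hypothetical: SmartLock and its methods are invented stand-ins for whatever access model a real device exposes.

```python
import unittest

# Hypothetical stand-in for a real device's access model.
class SmartLock:
    def __init__(self):
        self.users = {"owner"}
        self.event_log = []

    def add_user(self, by: str, new_user: str) -> None:
        self.users.add(new_user)
        self.event_log.append(f"{by} added {new_user}")

    def remove_user(self, by: str, user: str) -> None:
        self.users.discard(user)
        self.event_log.append(f"{by} removed {user}")

class AbusabilityTests(unittest.TestCase):
    """Checks framed around intimate threats, not external attackers."""

    def test_all_users_can_see_who_controls_the_device(self):
        lock = SmartLock()
        lock.add_user(by="owner", new_user="partner")
        # Someone quietly adding themselves back should be visible to
        # every user of the device, not only to the account owner.
        self.assertIn("owner added partner", lock.event_log)

    def test_access_can_be_revoked_unilaterally(self):
        lock = SmartLock()
        lock.add_user(by="owner", new_user="partner")
        lock.remove_user(by="owner", user="partner")
        self.assertNotIn("partner", lock.users)

if __name__ == "__main__":
    unittest.main()
```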

[1] Although IPA is undoubtedly a gendered crime, an exclusive focus on gender can obscure how intersecting forms of inequality and oppression – such as racism, ethnocentrism, class privilege and heterosexism – shape the dynamics of IPA.
