ETSI Security Conference 2023 Presentation - Implementing Design Practices with the Goal to Prevent Consumer IoT Enabled Coercive Control
This post shares a summary of a presentation I gave at the ETSI Security Conference 2023 on implementing Design Practices with the Goal to Prevent Consumer IoT Enabled Coercive Control.
Introduction
An Overview of ETSI EG 203 936 - Implementing Design Practices to Mitigate Consumer IoT-Enabled Coercive Control.
EG 203 936 is an ETSI Guide that recommends initial design practices to minimise the potential for coercive control enabled by consumer Internet of Things (IoT) devices.
The guide provides emerging design practices, through examples and explanatory text, for organisations involved in the development and manufacturing of Consumer IoT devices and associated services.
Understanding Consumer IoT Enabled Coercive Control
Coercive control is entrapment in personal life: it draws on the set of controlling tactics also used in other situations of captivity, such as hostage-taking and human trafficking, to override a person's autonomy and sense of self and to entrap them.
The misuse of novel telecommunications applications such as smartphones, tablets, social media, wearables, smart speakers, telecare systems, internet-connected cars, internet-connected home appliances, smart locks, smart thermostats and home security systems in the context of coercive control within intimate relationships is often known as Consumer IoT enabled abuse.
These behaviours include but are not limited to stalking and omnipresence, surveillance (wiretapping, bugging, videotaping, geolocation tracking, data mining, social media mapping, and the monitoring of data and traffic on the internet), intimidation, impersonation, humiliation, threats, consistent harassment/unwanted contact, sexting, and image-based sexual abuse.
Types of Consumer IoT Enabled Abuse
Toxic content covers a wide range of attacks involving media that attackers send to a target or audience, for example bullying, trolling, threats of violence, and sexual harassment.
Content leakage involves any scenario where an attacker leaks (or threatens to leak) sensitive, private information to a wider audience, typically with the intent to embarrass, threaten, intimidate, or punish the target.
Overloading includes any scenario wherein an attacker forces a target to triage myriad notifications or comments via amplification, or otherwise makes it technically infeasible for the target to participate online by jamming a channel.
False reporting broadly captures scenarios where an attacker deceives a reporting system or emergency service (originally intended to protect people) in order to falsely accuse a target of abusive behaviour.
Impersonation occurs when an attacker relies on deception of an audience to assume the online persona of a target to create content that will damage the target's reputation or inflict emotional harm.
Surveillance involves an attacker leveraging privileged access to a target's devices or accounts to monitor the target's activities, location, or communication.
Lockout and control involves scenarios where an attacker leverages privileged access to a target's account or device (including computers or Consumer IoT devices) to gaslight the target or interfere with how they engage with the world.
Coercive Control-Resistant Design
Coercive Control-Resistant Design can be defined as safeguarding or designing products with anti-abuse protections by default, to minimise attackers' ability to use these tools to harm targets whilst not limiting the intended user's access to the device's functionality.
The recommended practices include, but are not limited to:
1. Build consensus and awareness on the nature of the problem.
2. Identify dilemmas and build consensus on acceptable solutions.
3. Harm considerations "built in, not bolted on".
4. Minimise risks of harms arising.
5. Disrupt harms that have arisen.
6. Diverse design team.
7. Privacy and Choice.
8. Combat Gaslighting.
9. Security and Data.
Implementing Coercive Control-Resistant Design
Online Harms Policy
There is an expectation from users that companies will have measures in place to meet their duty of care and keep their users safe from harm.
Security and Safety of Consumer IoT Design
No universal default passwords in consumer smart products.
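To make this practice concrete, below is a minimal Python sketch, with invented names such as provision_device, of a manufacturing-time step that assigns each unit a unique, randomly generated setup credential instead of a shared factory default; it illustrates the principle rather than a production provisioning flow.

```python
# Illustrative sketch only: assign each device a unique setup credential at
# provisioning time rather than a universal factory default password.
# All names (Device, provision_device) are hypothetical.
import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class Device:
    serial_number: str
    credential_hash: str  # only a salted hash is stored, never the plaintext


def provision_device(serial_number: str) -> tuple[Device, str]:
    """Generate a per-device setup passphrase and store only its salted hash."""
    passphrase = secrets.token_urlsafe(12)   # unique per device, not shared across units
    salt = secrets.token_hex(16)
    digest = hashlib.scrypt(passphrase.encode(), salt=salt.encode(),
                            n=2 ** 14, r=8, p=1).hex()
    device = Device(serial_number, f"{salt}${digest}")
    # The plaintext passphrase would be delivered to the owner out of band
    # (e.g. printed on the device label) and changed by the user on first setup.
    return device, passphrase


if __name__ == "__main__":
    dev, setup_code = provision_device("SN-000123")
    print(dev.serial_number, setup_code)
```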
Device producers should establish and maintain a vulnerability disclosure policy. This gives users a clear route to report security vulnerabilities when they are discovered, and a process for remediation.
The device producers should explicitly state how long a product will receive software security updates for.
Threat modelling paired with usability analysis for the design and development of safer systems.
Incorporating privacy and security by default during the design process.
Companies should get users' permission before collecting and sharing location data; in practice, this could mean location sharing is disabled by default. They should also inform users how they can stop the collection of such information and request its deletion under the GDPR right to be forgotten.
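As a rough illustration of what "disabled by default" with explicit consent and supported deletion might look like in a companion app or device firmware, here is a short Python sketch; the class and method names are hypothetical and not taken from the guide.

```python
# Illustrative sketch only: location telemetry that is off by default, gated on
# explicit opt-in consent, and deletable on request. All names are hypothetical.
from datetime import datetime, timezone


class LocationTelemetry:
    def __init__(self):
        self.consent_given = False          # collection disabled by default
        self._history: list[dict] = []

    def grant_consent(self):
        """Record an explicit opt-in by the account holder."""
        self.consent_given = True

    def revoke_consent(self):
        """Stop collection immediately (does not by itself erase prior data)."""
        self.consent_given = False

    def record(self, latitude: float, longitude: float) -> bool:
        """Store a location sample only if consent has been granted."""
        if not self.consent_given:
            return False                    # no collection without opt-in
        self._history.append({
            "lat": latitude,
            "lon": longitude,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return True

    def delete_all(self):
        """Honour a deletion request, e.g. under the right to be forgotten."""
        self._history.clear()


telemetry = LocationTelemetry()
assert telemetry.record(51.5, -0.12) is False   # nothing stored before opt-in
telemetry.grant_consent()
telemetry.record(51.5, -0.12)
telemetry.delete_all()                          # user-requested erasure
```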
Technology Design
Diversity. Ensuring a diverse design team to broaden the understanding of user habits.
Privacy and choice. Allowing users to make informed choices about their privacy settings.
User Awareness. Making it clear when settings have been changed and how this affects the functionality of the devices.
Security and data. Ensuring that products only collect and share necessary data, limiting the risk that data are used maliciously.
User Experience. Giving users greater confidence to use technology by making it simpler to understand, limiting the risk of attackers exploiting a target's lack of technical ability.
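To sketch how the User Awareness point (and the earlier "combat gaslighting" principle) might translate into code, the hypothetical Python example below records every settings change and notifies all registered users of a shared device, so that a change made by one person is visible to everyone on the account.

```python
# Illustrative sketch only: every settings change on a shared device is logged
# and announced to all registered users, so changes cannot be made silently.
# Class and function names are invented for this example.
from datetime import datetime, timezone


class SharedDeviceSettings:
    def __init__(self, registered_users: list[str]):
        self.registered_users = registered_users
        self.settings: dict[str, object] = {}
        self.audit_log: list[str] = []

    def notify(self, user: str, message: str):
        # Placeholder for a real push notification or email to the user.
        print(f"notify {user}: {message}")

    def change_setting(self, changed_by: str, key: str, value: object):
        old = self.settings.get(key)
        self.settings[key] = value
        entry = (f"{datetime.now(timezone.utc).isoformat()}: "
                 f"'{key}' changed from {old!r} to {value!r} by {changed_by}")
        self.audit_log.append(entry)
        # Tell everyone on the account, including users who did not make the change.
        for user in self.registered_users:
            self.notify(user, entry)


device = SharedDeviceSettings(["alex", "sam"])
device.change_setting("alex", "camera_enabled", True)
```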
Education and Resources
Many organisations have produced guidance on the safe use of technologies and how individuals can implement better privacy protections. Some organisations have also produced specific guidance on technology abuse for the targets (victims) and professionals working with targets (victims). These include guidance on how to document technology abuse, information about spyware and surveillance, and guidance on privacy and security features of social media platforms.
Role Technology can Play in Supporting Targets
Technology can offer a lifeline to targets, enabling them to access support services and information. It can also provide a way for them to record evidence of their abuse. There are different ways in which technology may help targets, including:
Finding information.
Accessing support services and networks.
Connecting with other targets.
Gathering evidence.
Protecting and alerting targets.
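On the "gathering evidence" point, one illustrative technique (not taken from the guide; names invented) is an append-only, hash-chained log: each entry commits to the hash of the previous entry, so retroactive edits to earlier records can be detected.

```python
# Illustrative sketch only: an append-only, hash-chained evidence log in which
# each entry includes the previous entry's hash, making later tampering with
# earlier entries detectable. A real tool would also need secure, offsite storage.
import hashlib
import json
from datetime import datetime, timezone


class EvidenceLog:
    def __init__(self):
        self.entries: list[dict] = []

    def add(self, description: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "description": description,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry has been altered or reordered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            unhashed = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = EvidenceLog()
log.add("Unexpected smart lock activity at 02:14")
assert log.verify()
```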
Conclusion