This playbook includes plays for policymakers and the digital service industry to protect citizens against complex online harms and ensure that technology is developed responsibly.
Our playbook is for policymakers interested in exploring new policy approaches to protecting citizens against complex online harms. It is also for organisational decision-makers within the digital service industry interested in building trust in digital products and safeguarding users.
As digital technology becomes increasingly integral to all aspects of modern life, it is essential to address its capacity for complex harm and ensure it is developed responsibly. This playbook equips policymakers and the digital services industry with targeted strategies, practical tools, and clear guidance to mitigate risks while fully harnessing digital technologies' innovative potential.
This playbook begins by outlining the complex harms associated with digital technologies. It then presents recommended strategies for policymakers and the digital services industry to safeguard citizens. To illustrate the practical application of these measures, three use cases (disinformation, FemTech, and smart homes) are included.
This playbook was authored by members of the Legal and Ethical Regulation of Complex Online Harms work package, part of the multi-disciplinary, multi-university AGENCY project.
The playbook has benefited from insights from the wider AGENCY project and has been supported by the EPSRC under Grant EP/W032481/2.
Citizens must be safeguarded from complex online harms and the unintended consequences of new digital technologies.
Yet advances in digital technologies pose unique challenges, introducing vulnerability concerns and broader safety issues. For instance, the spread of disinformation, privacy risks in FemTech, and the misuse of smart home technologies illustrate how novel innovations can inadvertently undermine public trust and individual well-being.
This playbook serves as a strategic guide for policymakers and industry leaders, providing a structured approach to mitigating online harms. It advocates for a balanced co-regulatory model that strengthens legal oversight while enabling industry self-regulation to embed ethical safeguards within digital products and services from inception.
In the sections that follow, we define the concept of complex online harms, examine their intersectional nature, and present a two-pronged approach combining policy intervention with industry-led responsibility.
By advocating for a co-regulatory ecosystem that aligns regulatory intervention with industry-led responsibility, this playbook provides a blueprint for protecting citizens and ensuring that digital technologies produce positive outcomes for society, the economy, and the environment.
In adopting a co-regulatory approach, our Agency by Design framework places a premium on user empowerment, proactive governance, and industry-led Corporate Digital Responsibility (CDR) and Responsible Research and Innovation (RRI) initiatives to ensure that digital products and services are designed with ethical safeguards from the outset. This shifts the focus from reactive compliance to proactive, anticipatory governance, in which companies assume responsibility for their customers, who can be affected by complex online harms.
At their simplest, complex online harms are intersectional harms that have complex technical and societal causes. They may encompass:
Our research has shown that effectively mitigating complex online harms requires a dual strategy that aligns policy intervention with industry-led responsibility. To this end, we propose a two-pronged approach that:
1. Empowers policymakers to adopt an Agency by Design framework, ensuring that regulatory mechanisms proactively safeguard user autonomy, transparency, and ethical digital governance.
2. Mandates industry leadership in Corporate Digital Responsibility (CDR) and Responsible Research and Innovation (RRI) to embed ethical safeguards within digital products, services, and platforms from inception.

By integrating these complementary approaches, we aim to establish a co-regulatory ecosystem where policymakers set clear, agency-enhancing standards and industry assumes active responsibility for designing and deploying digital technologies that prioritise user rights, security, and informed decision-making. The following sections detail each pillar of our proposals, providing plays for policymakers and industry to adopt.
To protect users from complex online harms, we advocate for a co-regulatory approach grounded in the Agency by Design framework. The following seven plays enhance user empowerment and control over different technologies, and are framed in a technology-neutral way so they can inform international standards and best practices:
| # | Play |
|---|---|
| 01 | Incorporate diverse voices in system design |
| 02 | Provide transparent and intelligible information |
| 03 | Offer granular user control tools |
| 04 | User-defined safety and privacy settings |
| 05 | Collaborative agency and social resilience |
| 06 | Meaningful feedback and redress |
| 07 | Whole of life cycle support |
By implementing these policy recommendations, technology service providers can move towards a model of agency by design that genuinely empowers users and fosters a safer and more inclusive digital environment. However, this must be supported by the digital service industry, with policymakers providing oversight of compliance with these principles through the establishment of a co-regulatory system.
Firms providing technologies to end users should actively involve diverse user groups in designing and developing their systems with a view to user safety. This includes individuals from marginalised or vulnerable groups, drawing insights from women, racialised communities, and people with disabilities, who are disproportionately affected by online harms. Their lived experiences and insights can help identify potential biases, blind spots, and gaps in knowledge within existing systems. This participatory design approach shifts users from passive consumers to active collaborators, fostering a sense of ownership and agency.
Technology providers should clearly and accessibly explain the functions of their systems, the terms upon which a service is provided, and the potential risks or issues arising from system misuse. Users should be able to understand how these systems function, how their data is used, and how to ensure device or app security. This transparency empowers users to make informed choices about their interactions and adjust their preferences accordingly, including which security settings to use and how.
Users should have access to a range of tools that allow fine-grained control over their security and safety preferences. This could include options to filter content based on keywords, topics, sources, and user accounts in the context of social media, or allow for different forms of conditional access for smart home technologies. Technology service providers should empower users to take responsibility for their own safety on terms most suitable for their circumstances.
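To make this concrete, below is a minimal sketch of what fine-grained, user-defined filtering preferences might look like. The field names and post structure are illustrative assumptions, not any platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class FilterPreferences:
    """A user's fine-grained content-filtering choices."""
    blocked_keywords: set[str] = field(default_factory=set)
    blocked_topics: set[str] = field(default_factory=set)
    blocked_sources: set[str] = field(default_factory=set)
    blocked_accounts: set[str] = field(default_factory=set)

def is_visible(post: dict, prefs: FilterPreferences) -> bool:
    """Hide a post if it matches any of the user's filter rules."""
    text = post.get("text", "").lower()
    if any(keyword in text for keyword in prefs.blocked_keywords):
        return False
    if post.get("topic") in prefs.blocked_topics:
        return False
    if post.get("source") in prefs.blocked_sources:
        return False
    if post.get("author") in prefs.blocked_accounts:
        return False
    return True

prefs = FilterPreferences(blocked_topics={"gambling"}, blocked_keywords={"spoiler"})
print(is_visible({"text": "Big spoiler ahead", "topic": "tv"}, prefs))  # False
```

The same pattern generalises to smart home technologies, where the 'rules' become conditional access policies rather than content filters.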
Users should have control over their privacy and safety settings, allowing them to manage their data sharing, visibility, interactions, and who is able to access their devices and on what terms. This could include limiting the collection and use of personal data, setting up different accounts for smart devices with different levels of access and control, with the possibility of 'emergency response' functions to either disable individual accounts or devices.
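As an illustration of tiered accounts with an 'emergency response' function, here is a minimal sketch; the access levels and method names are our own assumptions rather than a prescribed design:

```python
from enum import Enum

class AccessLevel(Enum):
    ADMIN = "admin"    # full control, can manage other accounts
    MEMBER = "member"  # day-to-day use of shared devices
    GUEST = "guest"    # restricted, possibly time-limited access

class Household:
    """Per-household accounts with different levels of device access."""
    def __init__(self) -> None:
        self.accounts: dict[str, AccessLevel] = {}
        self.disabled: set[str] = set()

    def grant(self, user: str, level: AccessLevel) -> None:
        self.accounts[user] = level

    def emergency_disable(self, requester: str, target: str) -> bool:
        """Immediately revoke an account's device access. Users may
        disable their own account; only admins may disable others."""
        if requester == target or self.accounts.get(requester) is AccessLevel.ADMIN:
            self.disabled.add(target)
            return True
        return False

    def can_access(self, user: str) -> bool:
        return user in self.accounts and user not in self.disabled

home = Household()
home.grant("alex", AccessLevel.ADMIN)
home.grant("sam", AccessLevel.MEMBER)
home.emergency_disable("alex", "sam")
print(home.can_access("sam"))  # False
```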
Technology service providers should facilitate collaborative agency among users by providing tools and features that enable them to collectively address harmful content, security flaws, or behaviours. This could involve mechanisms for users to report potential safety concerns of an app collectively, support one another, and crowdsource possible solutions.
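A collective reporting mechanism could be as simple as escalating a concern once enough distinct users flag it. The sketch below assumes an escalation threshold of five reports; both the threshold and the names are illustrative:

```python
from collections import defaultdict

class CollectiveReports:
    """Aggregate user reports and escalate shared safety concerns."""
    def __init__(self, escalation_threshold: int = 5) -> None:
        self.threshold = escalation_threshold
        self.reports: dict[str, set[str]] = defaultdict(set)

    def report(self, concern_id: str, user_id: str) -> bool:
        """Record a report; return True once enough distinct users
        have flagged the same concern to warrant escalation."""
        self.reports[concern_id].add(user_id)
        return len(self.reports[concern_id]) >= self.threshold

tracker = CollectiveReports()
for user in ["u1", "u2", "u3", "u4", "u5"]:
    escalated = tracker.report("camera-app-leaks-location", user)
print(escalated)  # True: five distinct users reported the same flaw
```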
Technology service providers should establish accessible and responsive channels for users to provide feedback on safety concerns, drawing from their collaborative discussions. They should also provide accessible reporting mechanisms, with clear, transparent, and timely responses to reports. Effective feedback and redress mechanisms are crucial for ensuring user trust and enhancing online safety.
Technology service providers should maintain levels of support, including security updates, collaborative discussion and reporting mechanisms, and redress for identified safety concerns, for the entirety of a product or service's life cycle. In particular, if a technology is being discontinued, specific policies should be adopted to ensure the safety of critical devices during the End of Life (EoL) period of a service.
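A minimal sketch of what such an EoL policy might encode, assuming a guaranteed security-update window and an extended grace period for safety-critical devices (all names and fields are illustrative):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SupportPolicy:
    """Illustrative whole-of-life support commitments for a product."""
    security_updates_until: date  # guaranteed security-update window
    critical_grace_days: int      # extra fixes for safety-critical devices at EoL

def security_supported(policy: SupportPolicy, today: date, critical: bool) -> bool:
    """Devices receive security updates until the cutoff; safety-critical
    devices get an extended grace period after discontinuation."""
    cutoff = policy.security_updates_until
    if critical:
        cutoff += timedelta(days=policy.critical_grace_days)
    return today <= cutoff

policy = SupportPolicy(date(2026, 1, 1), critical_grace_days=365)
print(security_supported(policy, date(2026, 6, 1), critical=True))   # True
print(security_supported(policy, date(2026, 6, 1), critical=False))  # False
```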
The use cases provided across the portfolio of AGENCY projects highlight the blurring of physical and digital borders within the 'new normal' of a predominantly digital society post-COVID-19. Across demographics and communities, consumers now use a wide range of digital technologies and remain permanently 'online'. A sharp focus has therefore centred on the technologies that form part of, and shape, our daily lives through digitisation, digitalisation, and digital transformation. Even as adoption of technological solutions increases and consumers draw on the benefits of convenience and frictionless transactions (social and economic), questions arise as to whether all members of society truly benefit from the digital era.
In particular, who is responsible for ensuring that the design of technology, the subsequent gathering of data, and its reuse are managed in a responsible and authentic manner, producing positive outcomes for society, the economy, and the environment?
To this end, we present the combination of Corporate Digital Responsibility (CDR) and Responsible Innovation (RI) frameworks as a point of reference for organisations seeking a method to navigate responsible business practices in the digital era. CDR is a set of practices and behaviours that help an organisation use data and digital technologies in ways that are perceived as socially, economically, and environmentally responsible. This overlaps with RI, a similar framework for developing new technologies, products, or services with careful consideration of their ethical, social, and environmental impacts. Both frameworks involve anticipating potential risks and harms, engaging stakeholders in the development process, being transparent about methods and outcomes, and maintaining accountability throughout the innovation lifecycle and beyond, to ensure that advancements benefit society while minimising negative unintended consequences. By fostering an environment in which CDR and RI are embedded into daily practices and actions, an empowering Agency by Design approach to system safety is more likely to be effectively implemented.
To evaluate where to begin a responsibility journey from an organisational perspective, what follows are some tools that could be used to facilitate an Agency by Design approach to system design.
CDR and RRI share fundamental principles, illustrated in the table below, which can form an entry point for evaluating your organisation's existing competencies. The table also highlights areas for improvement to protect customers' digital experiences, creating the conditions necessary to promote Agency by Design in the provision of technology-related services to end users.
The evaluation process can take the form of holding 'discovery' meetings to raise understanding and awareness of the key tenets of the responsible framework. From these initial sessions a survey can be designed and conducted around the core responsible principles to gain a competency benchmark. Such surveys can be tailored to fit your organisational needs.
Next, using the benchmarks as a starting point, workshops can be held periodically with multi-unit team members to ensure that all stakeholders feel part of the process in representing their unit and add their 'voice' to responsible practice. This is based on research into organisational change, which shows that change is more likely to succeed when the following is in place:
Workshops provide a platform to embed processes 1-4 and create feasible use cases, and workshop members can act as 'responsible champions' once back in their teams. This increases the likelihood of successful change, in which a sense of ownership pervades the organisational hierarchy rather than being imposed from the C-Suite downwards.
The following sections provide guidelines on taking the first steps and suggest several tools to help shape an organisation's journey to digital responsibility. These can be aligned with strategic and operational objectives and tailored to your sector. The examples are based on research conducted by several organisations.
Set out a responsible strategy taking a whole-business and end-to-end product/service lifecycle approach. This demands consideration from design to decommissioning, including where third-party elements are procured, and focuses on CDR Principle 1: Purpose and Trust.
Create a CDR business case, factoring in increased costs but also short-, medium-, and long-term rewards for accelerating progress in the impact economy. What is the impact of investing in responsible practice for you and your clients?
Linked to the prior point, understand that it is not what you do but the way you conduct business that matters. As demonstrated in our scenarios, social media will capture the real-world impacts of your products and services and report them immediately to a potential global audience. There will always be an end user. Consider:
To understand how these plays can be operationalised, we have developed three use cases focusing on:
Disinformation
FemTech
Smart Homes
AI-generated disinformation presents a critical challenge for digital ecosystems. Social media platforms, which rely extensively on AI algorithms to curate and amplify user-generated content, are especially vulnerable to the rapid spread of false narratives. AI-powered recommendation systems, designed to maximise user engagement, can inadvertently exacerbate the reach of misleading information by prioritising highly engaging yet often inaccurate content.
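To see why engagement-maximising ranking can amplify disinformation, consider the deliberately simplified sketch below. The weights, fields, and credibility signal are illustrative assumptions, not any platform's actual algorithm:

```python
def engagement_score(post: dict) -> float:
    """Naive engagement-maximising ranking: highly shared content
    rises to the top regardless of its accuracy."""
    return post["likes"] + 2 * post["shares"] + 3 * post["comments"]

def adjusted_score(post: dict) -> float:
    """Credibility-weighted variant: the same engagement signal is
    scaled by an estimated source-credibility factor in [0, 1]."""
    return engagement_score(post) * post.get("source_credibility", 0.5)

posts = [
    {"id": "false-claim", "likes": 900, "shares": 400, "comments": 300,
     "source_credibility": 0.1},
    {"id": "fact-check", "likes": 500, "shares": 100, "comments": 80,
     "source_credibility": 0.9},
]
print(max(posts, key=engagement_score)["id"])  # false-claim
print(max(posts, key=adjusted_score)["id"])    # fact-check
```

Under the engagement-only score, the low-credibility post is promoted; weighting by an estimated credibility signal reverses the ordering, which is the intuition behind many proposed mitigations.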
To illustrate the risks posed by AI-driven disinformation, consider the following use case:
A large social media platform, ConnectSpace, has recently encountered a surge in disinformation campaigns targeting a forthcoming national election. The platform boasts millions of active users, making it a primary source of political news and discussion. However, malicious actors have begun disseminating false narratives regarding voter eligibility, polling dates, and alleged election fraud. These misleading posts circulate rapidly, and ConnectSpace struggles to contain the rapid spread of such content.
Female-oriented technology (FemTech) refers to digital technologies focused on women's health and well-being. These technologies, including menstrual tracking apps, fertility monitors, and wearable health devices, leverage data-driven insights to give users personalised health recommendations. However, the inherently sensitive nature of the data collected, ranging from menstrual cycles and fertility patterns to behavioural and biometric information, presents complex privacy, security, and ethical risks.
To illustrate the risks posed by AI-driven tracking apps, consider the following use case:
MyOvu, a FemTech application, offers users detailed insights into reproductive health by tracking menstrual cycles, fertility windows, and related health indicators. The app processes large volumes of personal data, including daily self-reported symptoms, biometric information from wearable devices, and behavioural patterns gleaned from users' browsing histories. To improve predictive accuracy and offer personalised recommendations, MyOvu recently integrated AI-driven analytics and partnered with third-party health research institutes. While this integration enhances the app's capabilities, it also raises significant privacy concerns. Users have expressed growing unease over the collection and sharing of their sensitive health information.
Our research has demonstrated that smart home devices and interconnected systems introduce substantial privacy and security risks, potentially exposing users to complex digital harms. A primary concern is the widespread lack of adequate information and awareness regarding these risks, harms, and vulnerabilities - particularly among household users. This deficiency impairs users' ability to exercise informed digital autonomy and agency, thereby increasing their susceptibility to privacy breaches, unauthorised surveillance, and other forms of technological exploitation.
The opacity surrounding data collection, processing, and storage practices is a critical challenge in the smart home ecosystem. Many users are unaware of the extent to which their devices continuously gather and process personal data. This lack of transparency, combined with insufficient user control over data retention and sharing, exacerbates the risks associated with smart home technologies. Consequently, malicious actors can exploit these vulnerabilities for nefarious purposes, ranging from unauthorised surveillance to coercive control.
To illustrate the potential real-world consequences of these security and privacy issues, consider the following use case:
VoiceMate, an IoT-enabled smart speaker, is designed to enhance convenience through voice-activated functionalities such as music streaming, home automation, and hands-free assistance. Equipped with sensitive microphones and continuous connectivity to cloud-based services, VoiceMate passively listens for activation commands, capturing and processing audio inputs throughout the day. While this technology streamlines daily activities, it also presents significant risks when misused.
Recent reports highlight incidents where individuals have weaponised VoiceMate as a tool for intimate partner abuse. In some cases, abusers have exploited the device's always-on capabilities to covertly record conversations without the victim's knowledge. Additionally, they have accessed cloud-stored audio logs to monitor or blackmail victims, leveraging sensitive recordings as a means of control. Beyond passive surveillance, perpetrators have actively manipulated the device to issue verbal threats, create an atmosphere of intimidation, and exert psychological dominance over their victims.
To support the plays described above, several tools already exist, developed by multi-disciplinary teams from industry, academia, regulation, government, and policy environments.
The Corporate Digital Responsibility (CDR) Worksheet, adapted from Dörr (2021), provides a structured framework for assessing and advancing an organisation's commitment to responsible digital practices. The worksheet is designed to help your organisation evaluate its progress across key CDR principles. Using a self-assessment scale from 0 (Not yet started) to 4 (Transformative), you can systematically measure your maturity in ethical governance, digital inclusion, and sustainability.
| CDR Principle | Action | Score (0-4) |
|---|---|---|
| Purpose & Trust | Digital Responsibility Code | |
| | Align corporate goals with CDR | |
| | Digital Ethics Board & Reporting | |
| | Advocate Responsible Regulation | |
| Fair & Equitable Access for All | Innovative, Accessible, Inclusive Products & Services | |
| | Promote Justice, Equity, Diversity & Inclusion | |
| | Responsible Employment Rights | |
| Promote Societal Wellbeing | Implement Responsible Data Practices | |
| | Promote Digital Maturity Skills | |
| | Promote Digital Wellbeing | |
| Consider Economic & Societal Impact | Transparency with Stakeholders with Verifiable 3rd Party Data | |
| | Share Digital Economic Benefits with Relevant Stakeholders | |
| | Plan for Sustainable & Responsible Automation | |
| Accelerate Progress with Impact Economy | Invest in Sustainability / Environmental / Impact Returns | |
| | Use Verifiable Environmental Offset | |
| | Accelerate & Innovate Sustainable Consumer Behaviours | |
| Creating a Sustainable Planet to Live | Report Impact of Company Against 3rd Party Impact Assessments | |
| | Innovate & Positively Impact Beyond Corporate Boundary | |
| Reduce Tech Impact on Climate & Environment | Implement an Environmental IT Strategy | |
| | Measure, Report, Minimise Energy Use & Move to Renewable Energy | |
Sources: Corporate Digital Responsibility© and Dörr (2021)
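To show how the worksheet might be tallied, here is a minimal sketch; the scoring helper and example scores are our own illustration, not part of Dörr's published model:

```python
# Each action is self-assessed from 0 (Not yet started) to 4 (Transformative).
worksheet = {
    "Purpose & Trust": {
        "Digital Responsibility Code": 2,
        "Align corporate goals with CDR": 1,
        "Digital Ethics Board & Reporting": 0,
        "Advocate Responsible Regulation": 1,
    },
    # ... remaining principles follow the same structure
}

def principle_averages(ws: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average self-assessment score (0-4) per CDR principle."""
    return {principle: sum(actions.values()) / len(actions)
            for principle, actions in ws.items()}

for principle, avg in principle_averages(worksheet).items():
    print(f"{principle}: {avg:.1f} / 4")  # low averages flag priority areas
```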
This assessment tool enables organisations to evaluate their CDR maturity, drawing heavily on the 2024 CDR Maturity Model developed by Rugeviciute and Courboulay. It provides a structured analytical framework across 5 dimensions and their focus areas.
For each focus area, users should identify the maturity level (ranging from 0 to 5) that best represents current practices. Scores can then be aggregated to assess performance within each dimension and identify areas for improvement. Below is an overview of the maturity level scores organisations can use to evaluate their CDR progress, followed by a sketch of how the aggregation might work.
Maturity Levels
| Level | Name | Description |
|---|---|---|
| 0 | CDR Principles Absent | No awareness of CDR principles; no relevant activities or frameworks are in place. |
| 1 | Emerging Awareness | Limited understanding of CDR; absence of formal governance structures; actions are informal and uncoordinated. |
| 2 | Reactive Response | Initial initiatives are primarily reactive to external pressures; implementation is inconsistent and lacks operational integration. |
| 3 | Structured Implementation | Policies and processes are formally established; internal capabilities are developed; performance is monitored at the project level. |
| 4 | Integrated Optimisation | CDR is embedded in organisational strategy and operations; practices include systematic performance measurement and continuous improvement. |
| 5 | Strategic Leadership | CDR is fully integrated into the core business strategy; the organisation demonstrates leadership, fosters innovation, and actively engages with stakeholders. |
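As a rough illustration of the aggregation step, the sketch below averages focus-area scores per dimension and overall; the dimension names are paraphrased from the descriptions that follow, and the scores are invented for the example:

```python
# One 0-5 maturity score per focus area, averaged per dimension and overall.
dimensions = {
    "Strategic governance": {"Strategy & Commitment": 3,
                             "Interdepartmental Governance": 2,
                             "Risk & Compliance": 2},
    "Digital practices & culture": {"Office & Collaborative Tools": 1,
                                    "CDR Culture": 2},
    # ... infrastructure, services lifecycle, and external-stakeholder
    # dimensions follow the same pattern
}

def dimension_scores(dims: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average maturity (0-5) for each dimension."""
    return {name: sum(scores.values()) / len(scores)
            for name, scores in dims.items()}

per_dimension = dimension_scores(dimensions)
overall = sum(per_dimension.values()) / len(per_dimension)
print(per_dimension)
print(f"Average CDR maturity: {overall:.1f} / 5")
```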
This dimension assesses the effectiveness of an organisation in integrating CDR into its strategic governance. It highlights the importance of a cohesive CDR strategy that aligns with sustainability goals, incorporates cross-functional governance frameworks, and establishes risk management practices that address digital, social, and environmental impacts.
| Focus Area | Maturity | Evidence/Comments |
|---|---|---|
| Strategy & Commitment: Existence of a defined CDR vision and strategy aligned with organisational values, supported by sustained resources, transparent communication, and identification of CDR-related business opportunities. | | |
| Interdepartmental Governance: Governance structures and KPIs are in place to embed CDR principles across all departments, ensuring coordination, accountability, and continuous monitoring. | | |
| Risk & Compliance: Mechanisms to ensure compliance with CDR-related obligations and proactive risk management using a double materiality perspective. | | |
This dimension assesses the extent to which CDR is embedded in organisational digital practices and culture. It focuses on the responsible use of technology to minimise environmental impacts and enhance employee well-being. It also examines the development of a CDR-oriented culture through awareness initiatives, training programs, and participatory mechanisms that foster employee-driven innovation in CDR.
| Focus Area | Maturity | Evidence/Comments |
|---|---|---|
| Office & Collaborative Tools: Reduction of environmental and digital well-being impacts associated with digital workplace tools, alongside the implementation of solutions that advance CDR objectives. | | |
| CDR Culture: Organisation-wide adoption of CDR principles through targeted training, inclusive governance, employee engagement in change initiatives, and ongoing capacity building. | | |
This dimension assesses how digital and IT infrastructure is managed in line with CDR principles across the product lifecycle. It also covers the sustainable use of cloud services, data centres, and networks, focusing on environmental impact and social responsibility.
| Focus Area | Maturity | Evidence/Comments |
|---|---|---|
| IT Hardware: Implementation of CDR-aligned policies for the lifecycle of IT equipment. | | |
| Cloud Infrastructure: Selection of CDR-compliant IaaS or private cloud providers and establishment of mechanisms to monitor their real-time impact. | | |
| Onsite Data Centres: Management of data centre operations to minimise environmental and social impact while ensuring security. | | |
| Networks: Secure and efficient network management to minimise environmental impact. | | |
This dimension evaluates the application of CDR principles across the lifecycle of digital services. It emphasises the need for responsible data governance, ensuring ethical, legal, and socially aligned data practices, particularly in emerging technologies such as artificial intelligence.
| Focus Area | Maturity | Evidence/Comments |
|---|---|---|
| Project Design and Development: Embed CDR principles in the responsible design and development of digital solutions. | | |
| Maintenance, Operations & End-of-Life: Apply CDR principles to the lifecycle management of digital solutions. | | |
| Data Management: Apply CDR principles throughout the data lifecycle, ensuring ethical use that is aligned with international standards and societal values. | | |
This dimension assesses the organisation's external digital responsibility through supplier, customer, and external stakeholder relationships. This encompasses responsible procurement, ethical customer engagement, and digital marketing management.
| Focus Area | Maturity | Evidence/Comments |
|---|---|---|
| Procurement: Adoption of responsible procurement policies that prioritise CDR-compliant suppliers, ensuring transparency, resilience, and continuous oversight across the supply chain. | | |
| Market & Customer: Ethical and transparent customer engagement, with clear accountability for the social and environmental impacts of digital products and services. | | |
| External Collaboration: Active participation in cross-sector initiatives to share, develop, and promote CDR best practices. | | |
Based on your selections across all dimensions, the average CDR maturity level corresponds to one of the following stages:

| Level | Stage | Focus |
|---|---|---|
| 0 | Non-existent | No CDR actions taken |
| 1 | Initial awareness | Building awareness |
| 2 | Reaction measures | Reducing operational and legal risks |
| 3 | Proactive and standardised measures | CDR process optimisation |
| 4 | Continuous and measured improvement | Integration and alignment of CDR processes |
| 5 | Industry leadership and innovation | Increasing value for society |
The Digital Responsibility Cluster provides a strategic framework for organisations to integrate ethical, inclusive, and sustainable digital practices. It outlines key principles for responsible digital transformation, including digital inclusivity, ethical AI use, and environmental sustainability. Organisations can leverage this framework to ensure their digital initiatives are socially responsible and sustainable.
To further help organisations understand how to utilise and implement a CDR/RI framework to avoid online harms and the unintended consequences of digital technologies, some authors from the AGENCY project took part in a collaborative industry partnership to produce a white paper for the Institute of Electrical and Electronics Engineers (IEEE), which can be accessed here.
In addition to the tools outlined above, we have developed a series of 10 probing questions to help technology companies tailor their approach to online harms and the use of digital technology to their organisational needs and culture.
| # | Question | Check |
|---|---|---|
| 1 | What are the technologies/applications you use when working with data (e.g., to collect, prepare, analyse, store, share, or access it, or if any Artificial Intelligence is used)? | |
| 2 | For the product/service that you work with, what are the goals, purposes, and intended applications? | |
| 3 | How do you/your team monitor and test that products/services meet these goals, purposes, and intended applications? | |
| 4 | When in the product/service lifecycle do you/your team monitor and test that products/services meet these goals, purposes, and intended applications? | |
| 5 | Who or what might benefit from this product/service? | |
| 6 | Who or what might be disadvantaged by this product/service? | |
| 7 | Does your team have a framework for risk management? | |
| 8 | How is the data you work with obtained/collected, and are the risks and negative consequences of using that data considered? | |
| 9 | Are there processes in place to establish whether the product/service might have a negative impact on the rights and liberties of individuals or groups? | |
| 10 | Does the purchase of equipment (e.g., computers, servers), services (e.g., cloud-based services), storage of data, or infrastructure to support your work consider any environmental factors or sustainability? | |
By addressing these key areas, organisations can build a responsible and sustainable digital ecosystem that mitigates potential harms while fostering innovation and ethical technology development.
Another example that heightens understanding of the implications of online harms is the 'Moral-IT Deck' by Dr Lachlan Urquhart and Dr Peter Craigon of the Horizon Digital Economy Research Institute, available here. The deck comes in a 'card' format (see below) and can serve as a guide to engaging in ethics by design; it can also inform an interactive CDR/RI framework and toolkit adapted to the organisation's preference. The examples illustrated below capture five key themes relevant to the central issues of online harm:
1. Privacy
2. Ethics
3. Law
4. Security
5. Narrative