Resources
Online platforms have a clear responsibility under the UN Guiding Principles on Business and Human Rights to respect human rights and avoid complicity in abuses. Upholding that responsibility in the face of state-directed abuses is complex and requires capacity built in advance: foundational governance, risk assessment, and due diligence. Embedding these capabilities across internal functions enables platforms to respond to systematic violations in rights-protective, context-sensitive ways.
As implementation of the EU Digital Services Act (DSA) and the UK Online Safety Act (OSA) matures, questions arise about the path enforcement will take. The enforcement trajectory of the General Data Protection Regulation (GDPR) may offer instructive parallels, but important differences between these regulatory regimes, along with key external factors, suggest that the safety regulations’ enforcement curve may not mirror that of the GDPR.
Online platforms of all sizes should deploy a framework for resolving tensions and conflicts between trust domains such as privacy, safety, youth protection, and responsible AI. Such a framework should embed governance, apply weighted criteria, and scale as the organization grows. Platforms that adopt one gain a significant competitive advantage, not because they avoid difficult decisions, but because they make those decisions transparently and consistently, improving over time.
Digital trust today is an aggregate function of multiple domains: online safety, privacy, accessibility, youth protection, responsible AI, and fundamental rights, each with its own regulatory logic, standards, and professional communities. At the operational level, these domains are frequently in tension, creating the need for robust, ethical resolution frameworks.
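To make the weighted-criteria idea concrete, here is a minimal sketch of how such a resolution framework might score competing options. The domain names, weights, option names, and scores are all hypothetical illustrations, not drawn from the source: each candidate decision is rated against every trust domain, governance-set weights encode organizational priorities, and the option with the highest weighted total is selected.

```python
from dataclasses import dataclass

# Hypothetical trust domains; names are illustrative only.
DOMAINS = ["privacy", "safety", "youth_protection", "responsible_ai"]

@dataclass
class Option:
    name: str
    # How well this option serves each domain, on a 0-10 scale.
    scores: dict[str, float]

def resolve(options: list[Option], weights: dict[str, float]) -> Option:
    """Return the option with the highest weighted score across domains.

    The weights encode governance priorities and would be set by a
    cross-functional review body, then logged with each decision so the
    process stays transparent, consistent, and auditable over time.
    """
    def weighted_score(opt: Option) -> float:
        return sum(weights[d] * opt.scores.get(d, 0.0) for d in DOMAINS)
    return max(options, key=weighted_score)

if __name__ == "__main__":
    # Illustrative priorities and candidate decisions (hypothetical values).
    weights = {"privacy": 0.30, "safety": 0.30,
               "youth_protection": 0.25, "responsible_ai": 0.15}
    options = [
        Option("scan_all_uploads",
               {"privacy": 2, "safety": 9, "youth_protection": 8, "responsible_ai": 5}),
        Option("act_on_user_reports_only",
               {"privacy": 8, "safety": 4, "youth_protection": 3, "responsible_ai": 6}),
    ]
    print(f"Selected: {resolve(options, weights).name}")
```

The point of the sketch is less the arithmetic than the discipline around it: recording the weights and scores alongside each decision is what produces the transparency and consistency the framework is meant to deliver.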

