When Government Actors Are the Safety Problem: Why Trust & Compliance Frameworks Need Human Rights Foundations
Online platforms have a clear responsibility under the UN Guiding Principles on Business and Human Rights to respect human rights and avoid complicity. Upholding that responsibility in the face of state-directed abuses is complex and requires capacity built in advance, including foundational governance, risk assessment, and due-diligence capabilities. Embedding those capabilities across internal functions enables platforms to respond to systematic violations in rights-protective and context-sensitive ways.
Recent events involving state-directed human rights abuses have highlighted an uncomfortable truth: Platform trust and safety frameworks were built primarily to address individual bad actors and non-state groups, not governments perpetrating systematic violations. This gap is an operational reality that platforms are navigating constantly, often without adequate preparation.
The responsibility to assess and build those capabilities lies with senior leadership: platform executives, policy leads, and legal, risk, and government affairs teams.
State Actors Pose Unique Challenges
Most platforms effectively give state actors the benefit of the doubt, for sometimes understandable reasons.
Policy Mismatches. State-actor conduct tends to fall outside standard policy prohibitions.
Policy definitions of violent extremism and terrorism (in line with academic and legal definitions) generally focus on non-state actors who lack the coercive authority of the state.
Dangerous organizations policies often reflect the foreign policy interests of Global North governments, especially the Five Eyes, and therefore do not include actors from those governments.
Policies on graphic violence or hateful content often don’t extend to coordinated state campaigns justifying, celebrating, or denying systematic atrocities—either because individual content items don’t cross policy thresholds or because the content comes from verified government accounts granted presumptive legitimacy.
Documentation Dilemmas. Atrocity content can spread harmful propaganda, but it also constitutes evidence for humanitarian fact-finding, criminal prosecution, international accountability, and public awareness. Over-removal actively undermines justice and transparency, but under-removal can amplify state propaganda and normalize violence.
Open Coordination. Frameworks addressing coordinated inauthentic behavior target covert manipulation, not overt state propaganda. When governments use official channels to systematically dehumanize populations or justify abuses, it’s “authentic” in the narrow sense that it really comes from the government, even when the harm patterns are similar.
Geopolitical Minefields. Determining which state actions constitute systematic human rights abuses requires judgments about contested events in complex political contexts. Platforms lack the expertise and mandate to make those judgments fairly and consistently, yet waiting for indictments or investigative findings could mean doing nothing while atrocities unfold.
Failure in this context isn’t the absence of perfect neutrality or globally uniform outcomes. Rather, it’s organizational drift under government pressure, crisis-driven inconsistency, and ad hoc rationalization without responsible governance.
Why Simple Policy Fixes Fail
Quick-fix options include extending existing policy prohibitions to fill these gaps or creating a new policy category like “state-perpetrated systematic abuses.” But these approaches run into immediate problems.
Definitional Quicksand. Determining whether a government’s “counter-terrorism operation” is actually a “systematic human rights violation” is deeply fraught. Platforms making these assessments essentially become arbiters of international legitimacy—a role they’re ill-equipped to perform.
Selective Application. Will platforms only act against geopolitical adversaries while overlooking abuses by states and agencies that are commercial partners? The concern isn’t hypothetical—sanctions enforcement and content moderation both show consistent patterns of selective application.
Commercial Retaliation. States sometimes respond to reactive platform actions with regulatory threats, tax investigations, operational restrictions, and market access limitations.
Speed Mismatches. Atrocities unfold faster than platforms can investigate and process them. Emergency responses made without adequate preparation typically involve either over-removing and suppressing documentation or under-responding and thereby amplifying harm.
Employee Impacts. State-directed abuse cases place psychological and moral burdens on policy staff, moderators, and operational personnel, forcing them to make decisions beyond their training or authority.
In short, mere policy changes tend to shift the burden downward, when the problem requires upward accountability.
Foundational Capabilities Platforms Need
Far more important than new policy categories are the foundations for responsible decision making that enable context-sensitive responses to major human rights risks. These foundational capabilities (decision architecture, escalation paths, cross-functional collaboration) contrast starkly with surface-level measures (policy tweaks, enforcement labels, reactive takedowns).
When state actors perpetrate systematic abuses:
Governance structures provide decision frameworks and accountability mechanisms for politically sensitive determinations. These structures include clear escalation protocols, documented decision frameworks, and senior leadership accountability, not just trust and safety teams making best-guess calls.
Risk assessment capabilities enable detection of systematic patterns, rather than evaluating bits of content and conduct in isolation. State-directed campaigns become visible when coordination, targeting, and narrative progression are analyzed over time and by geography.
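As a concrete illustration, here is a minimal Python sketch of pattern-level analysis under stated assumptions: the signal fields, volume thresholds, and weekly buckets are all invented for illustration, not drawn from any platform’s actual tooling. The idea is that a campaign becomes visible only when high volume from one actor is sustained across consecutive windows in a region.

```python
"""Illustrative sketch: surfacing coordinated campaigns by aggregating
signals over time and geography, rather than scoring items in isolation.
All names and thresholds are hypothetical."""
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ContentSignal:
    actor_id: str   # e.g., a verified government account
    region: str     # audience geography
    week: int       # coarse time bucket
    narrative: str  # classifier-assigned narrative label

def flag_campaigns(signals, min_volume=50, min_weeks=3):
    """Flag (actor, region, narrative) triples that sustain high volume
    across consecutive weeks, a pattern no single item reveals."""
    volume = defaultdict(lambda: defaultdict(int))
    for s in signals:
        volume[(s.actor_id, s.region, s.narrative)][s.week] += 1

    flagged = []
    for key, weeks in volume.items():
        high = sorted(w for w, n in weeks.items() if n >= min_volume)
        run, prev = 0, None
        for w in high:  # count runs of consecutive high-volume weeks
            run = run + 1 if prev is not None and w == prev + 1 else 1
            prev = w
            if run >= min_weeks:
                flagged.append(key)
                break
    return flagged
```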
Evidence-aware content handling facilitates accountability while avoiding amplification of harmful propaganda. When platforms deploy content preservation, provenance, and chain-of-custody tools—all of which enable objective fact finding—the tension between visibility and intervention eases. In essence, platform leaders can choose a responsible path of preservation and non-amplification.
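One way to picture the preservation side is a minimal hash-chain sketch in Python; the class and field names are assumptions, and a production system would add encryption, access controls, and legal-hold workflows.

```python
"""Illustrative sketch of tamper-evident evidence preservation: each
restricted item is hashed and chained to the prior record, so the
archive remains verifiable after content leaves public view."""
import hashlib
import json
import time

class EvidenceVault:
    def __init__(self):
        self._records = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def preserve(self, content_id: str, payload: bytes, action: str) -> dict:
        """Record the item and the action taken (e.g. 'restricted-visibility'),
        chaining each record to the last so tampering is detectable."""
        record = {
            "content_id": content_id,
            "sha256": hashlib.sha256(payload).hexdigest(),
            "action": action,
            "timestamp": time.time(),
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._records.append(record)
        return record
```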
User rights and due process frameworks ensure defensible, documented rationales for high-stakes decisions. Such frameworks must be robust enough to handle politically explosive appeals and transparent enough to demonstrate principled decision making under pressure from affected governments.
Transparency mechanisms communicate complex choices to users, civil society, and regulators without exposing vulnerable individuals or compromising security. This requires pre-built infrastructure, not crisis-driven scrambling.
Incident management protocols activate crisis response and cross-functional coordination, connecting policy, legal, security, and business leadership. State-directed abuse cases inevitably involve government relations, regulatory strategy, and commercial risk and can’t be handled by one team in isolation.
Human rights due diligence frameworks provide context for addressing heightened risks in specific scenarios involving authoritarian regimes, conflict zones, elections, and protests, before crises emerge.
These foundational capabilities are the equivalent of strength training and conditioning: They build the muscle and capacity that enable platforms to navigate the most difficult safety and geopolitical challenges. Platforms without them face impossible choices: over-remove and suppress documentation, under-respond and amplify propaganda, or make inconsistent decisions that please no one and undermine trust.
Building Capacity Before the Crisis
Growing geopolitical instability suggests that systematic human rights abuses by state actors may become more common. Given the unique role that online platforms play in transmitting information and awareness, they have a similarly unique responsibility to respect and uphold the values that undergird human rights.
They can do so by developing the capabilities outlined above before human rights crises occur, not during or after them—including through relationships with trusted partners in civil society, humanitarian organizations, and fact-finding bodies. Platforms that invest in these capabilities ex ante position themselves to navigate state-directed abuse scenarios in principled, operationally sustainable ways.
To be clear: This is not about platforms becoming human rights arbiters. It is about platforms becoming organizationally capable of confronting state power without abandoning their own principles.
What Does the GDPR’s Enforcement Journey Tell Us About DSA and OSA Compliance?
As implementation of the Digital Services Act (DSA) and UK Online Safety Act (OSA) matures, questions arise as to the path that enforcement will take. The enforcement trajectory of the General Data Protection Regulation (GDPR) may offer instructive parallels, but important differences between these regulatory regimes, along with key external factors, suggest that the safety regulations’ enforcement curve may not mirror that of the GDPR.
Why GDPR Enforcement Patterns Are Relevant for DSA and OSA
The GDPR and the DSA are architecturally similar in ways that suggest their enforcement patterns may parallel each other. Per the European Institute of Public Administration, “the DSA and GDPR, rather than being seen as separate regimes, should be understood as complementary tools in a broader effort to foster transparency, accountability, and user rights in the digital age.” Both regulations:
Employ coordinated enforcement networks across member states.
Include graduated obligations based on platform size and risk.
Impose comparable maximum fines for violations, calculated as a percentage of global annual turnover.
The UK OSA follows similar patterns: phased implementation, categorization of services by size and functionality, substantial penalty provisions (up to £18 million or 10% of qualifying worldwide revenue), and a single national regulator (Ofcom) with powers modeled on established data protection enforcement approaches.
The GDPR’s Enforcement Journey
Given these parallels, the evolution of GDPR enforcement provides a potentially helpful benchmark for anticipating DSA/OSA enforcement.
Data from the CMS.Law GDPR Enforcement Tracker reveals distinct phases of GDPR enforcement.
Phase 1: Orientation (2018-2020). The initial enforcement period was one of regulatory restraint. Regulators allowed what one analysis describes as an “initial phase to get acquainted with the new data protection regime under the GDPR for both data controllers and themselves.” Authorities imposed relatively few fines, and those imposed were small in amount—typically under €100,000. This restraint was likely a pragmatic recognition that both regulated entities and authorities needed time to understand the practical import of new requirements.
Phase 2: Escalation and Capacity Building (2020-2022). As Data Protection Authorities (DPAs) built institutional capacity and developed enforcement expertise, both the frequency and severity of fines increased. This period saw the emergence of million-euro penalties and the first significant enforcement actions against major technology platforms, which became the focus of the highest-profile cases.
However, coordination challenges also emerged in this phase. Cross-border cases proved complex, requiring consensus building among multiple national authorities with varying philosophies and resource levels.
Phase 3: Maturation and Harmonization (2023-Present). In this phase, GDPR enforcement matured substantially. The number of consistency opinions adopted by the European Data Protection Board (EDPB) under Article 64(2) increased. More significantly, for the first time since 2020, the EDPB issued zero binding decisions in disputes between DPAs during 2024—indicating that national authorities had achieved sufficient alignment in their interpretations and approaches that formal dispute resolution was no longer necessary.
As of early 2026, the CMS Tracker documents 2,711 fines totaling over €6.7 billion. At the member-state level, patterns emerge: Ireland and the Netherlands have imposed few but massive fines (hundreds of millions) concentrated on the largest technology companies, while Germany, Spain, Italy, and France issue higher volumes of smaller fines across diverse industries.
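For readers who want to replicate this kind of phase analysis, here is a small Python sketch; the fines listed are invented placeholders, and the phase boundaries are approximated to avoid overlap. Real inputs would come from a source like the CMS Tracker.

```python
"""Sketch: bucketing enforcement actions into the phases described above.
Sample data is invented for illustration only."""
from statistics import median

PHASES = [("Orientation", 2018, 2019), ("Escalation", 2020, 2022),
          ("Maturation", 2023, 2026)]  # boundaries approximated

fines = [  # (year, amount in EUR), placeholder values
    (2019, 50_000), (2020, 35_000_000), (2021, 746_000_000),
    (2023, 1_200_000_000), (2024, 310_000_000),
]

for name, start, end in PHASES:
    amounts = [amt for yr, amt in fines if start <= yr <= end]
    if amounts:
        print(f"{name}: {len(amounts)} fines, median €{median(amounts):,.0f}")
```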
Status of DSA/OSA Enforcement
Both the DSA and OSA are in their early implementation period, though with notable variations.
The DSA became fully applicable in February 2024. The European Commission has primary enforcement responsibility for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), while national Digital Services Coordinators (DSCs) handle other in-scope services. The Commission and DSCs have initiated enforcement actions, but the pattern generally parallels early GDPR: targeted, focused on fundamental compliance gaps, and designed more to establish regulatory credibility than to impose maximum penalties.
The UK OSA entered into force more recently, so data is limited. Illegal content duties came into force in March 2025, child safety requirements in July 2025, and categorized service obligations are expected in 2026. Still, Ofcom has demonstrated willingness to use its powers, opening multiple investigations and issuing its first £1 million+ fine in late 2025.
Call for Caution: Material Differences Between Regulatory Regimes
While the structural parallels are compelling, there are significant reasons to be cautious about assuming online safety enforcement will follow the GDPR’s trajectory.
1. Public Salience and Political Pressure
Safety harms, particularly those affecting children, can be more visceral than privacy harms and can generate immediate public outrage and sustained media attention. In turn, politicians and regulators face intense pressure to demonstrate rapid, visible enforcement—a factor that may compress or eliminate the “orientation” period seen with the GDPR.
2. Freedom of Expression Tensions
The GDPR’s core tension is privacy vs. business efficiency—a relatively straightforward commercial tradeoff. The DSA/OSA directly implicate tensions between safety and freedom of expression, making enforcement politically and geopolitically charged in ways privacy enforcement is not. These controversies are prompting legal challenges and polarization, complicating enforcement.
3. Measurement and Verification Complexity
Compared to GDPR compliance, online safety compliance involves more subjective judgment about the effectiveness of mitigation measures and systems. That subjectivity could mean:
Inconsistent enforcement across jurisdictions and even by individual regulators.
Litigation risk as platforms challenge regulatory determinations about what “effective” means.
Moving targets as harms evolve rapidly (e.g., generative AI-driven abuses).
4. Different Industry Starting Points
When the GDPR came into force, most companies were building data protection infrastructure from scratch, creating a relatively level playing field. For online safety, major platforms already have substantial safety operations and detection technologies. This existing maturity could contribute to an expectation among regulators that such platforms should be able to comply immediately, but that expectation may be unreasonable for smaller platforms.
5. Technical Challenges
Many of the technical requirements for GDPR compliance (encryption, access controls, data minimization) were well-understood and implementable at the time the law was passed. Some online safety requirements, on the other hand, involve technical measures that don’t yet exist or are in tension with privacy: age assurance at scale, content detection on end-to-end encrypted services, algorithmic transparency.
These feasibility issues could lead to either compliance theater or enforcement delays while technology catches up to requirements.
6. Regulatory Momentum
The DSA and OSA, along with other online safety regulations, carry forward authorities’ long-term digital regulatory agenda—an agenda that was much less robust at the time the GDPR was passed. Regulators and companies are now much more accustomed to online regulation, building on a learning curve that could heighten regulatory expectations and accelerate enforcement.
7. Enforcement Authority Structure
While both regimes use networked enforcement, there are important distinctions. The GDPR is enforced primarily by national DPAs, with cross-border cases coordinated through the EDPB. The European Commission, on the other hand, has direct enforcement authority over VLOPs/VLOSEs for DSA compliance, not just coordination authority. And Ofcom has comprehensive authority to enforce the OSA.
These structural differences may accelerate implementation of the safety regulations by avoiding some of the coordination bottlenecks that slowed GDPR enforcement.
Strategic Implications
These differences suggest several adjustments to how platforms should think about DSA/OSA compliance compared to GDPR:
Don’t assume a period of extended restraint. The 2-3 year orientation phase may compress to 12-18 months, particularly for child safety violations.
Prepare for contested enforcement. Greater subjectivity may mean more variation across regulators and more frequent litigation challenging specific determinations.
Build for technical evolution. Safety measures will need continuous updating as harms, technologies, and industry baselines evolve.
Anticipate jurisdictional conflicts. Platforms may need to accept some jurisdiction-specific implementation rather than global solutions.
Frontload foundational work. This early phase is the optimal time to build robust governance structures, risk assessment frameworks, and data governance practices. Platforms that do so position themselves to adapt more efficiently as regulatory expectations evolve.
Invest in demonstrable process. The ability to demonstrate consistent, good-faith effort to meet requirements—including through detailed documentation and auditable processes—will be critical to maintaining the trust of regulators, users, and partners.
Prepare for cross-regulatory integration. Regulators have already acknowledged that privacy and safety requirements intersect. Platforms must adopt integrated approaches to risk assessments, data governance, content moderation, user rights, incident management, and transparency reporting rather than treating them as separate silos.
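To make the integration point concrete, here is a minimal sketch of a single risk-assessment record that serves several regimes at once; the regime tags and example content are assumptions, not a prescribed mapping.

```python
"""Sketch: one risk assessment, many regulatory views, instead of
parallel per-regulation assessments. Regime tags are illustrative."""
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    risk: str                # the underlying risk, stated once
    mitigations: list[str]   # controls that address it
    regimes: list[str]       # every obligation this assessment serves

assessments = [
    RiskAssessment(
        risk="minors exposed to harmful recommendations",
        mitigations=["age-appropriate defaults", "recommender risk testing"],
        regimes=["DSA Art. 34 risk assessment", "OSA children's duties",
                 "GDPR DPIA"],
    ),
]

# One artifact, many reporting views: filter by regime when a regulator asks.
dsa_view = [a for a in assessments if any(r.startswith("DSA") for r in a.regimes)]
print(dsa_view[0].risk)
```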
Ultimately, the GDPR enforcement trajectory is instructive for understanding regulatory maturation patterns, but platforms should view the GDPR experience as merely a reference point, not a precise roadmap. Given key differences between the GDPR and DSA/OSA regimes, escalating regulatory scrutiny may come sooner in the safety space than it did for privacy.
The platforms most likely to thrive are those that invest in foundational compliance capabilities now, systematically learn from early enforcement actions, and build integrated systems designed to evolve as regulatory enforcement matures.
From Friction to Framework: A Responsible Approach to Trust Domain Conflicts
Online platforms of all sizes should deploy a framework for resolving tensions and conflicts between trust domains such as privacy, safety, youth protection, and responsible AI. Such a framework should embed governance, include weighted criteria, and scale as organizations grow. Organizations that deploy such frameworks gain significant competitive advantage, not because they avoid difficult decisions, but because they make them transparently and consistently and improve over time.
A recent post outlined five critical tensions that can arise between digital trust domains such as privacy, safety, child protection, AI governance, accessibility, and fundamental rights. These tensions are daily operational realities for platform companies navigating overlapping regulatory frameworks.
As regulator and user expectations mature, it’s increasingly clear that platforms need a robust, practical resolution mechanism that helps them make responsible decisions across these competing imperatives.
The Risks of Ad Hoc Decision Making
The lack of such a resolution mechanism means that many organizations handle cross-domain tensions reactively, raising several risks.
Inconsistency. The same tension (say, data retention for safety investigations versus privacy minimization) gets resolved differently depending on who raises it first or which team has more political capital.
Duplication. Organizations build parallel systems for privacy compliance, safety compliance, and AI governance, each with separate risk assessments, incident response procedures, and reporting mechanisms, without recognizing that a significant portion of these requirements is functionally identical.
Compliance theater. Without an integrated approach, companies implement sometimes contradictory controls that look good on paper but create operational friction and don’t actually promote trust.
Hidden trade-offs. Junior team members make major policy decisions without realizing they’re choosing between competing regulatory obligations. The content moderator deciding data retention timelines is simultaneously making privacy and safety decisions but only sees them through one lens.
Four Core Capabilities
Decisions should flow not from ad hoc, instinctual reactions but from a foundational framework that embeds governance, includes weighted criteria, and scales as organizations grow. Such a framework should include four key components.
1. Explicit Conflict Recognition
Policy language alone cannot resolve inter-domain conflict, which is structural. Mature organizations acknowledge these tensions in operational procedures, train teams to identify them, and create clear escalation pathways when requirements genuinely compete.
Openly anticipating and acknowledging tensions in this way unlocks the capacity to navigate them systematically rather than discover them mid-crisis.
2. Decision Frameworks with Weighted Criteria
Systematic frameworks establish evaluation criteria—harm severity, reversibility, affected populations, public interest, technical alternatives, precedent implications—and weight them based on context. For example:
A decision about livestreaming for 16-year-olds should weigh rights impacts heavily, because expression is at stake.
A decision about grooming-detection algorithms should weigh harm severity and affected population heavily, because children’s safety is paramount.
The key is to establish and apply these frameworks before decisions are made, as opposed to reverse engineering them ex post.
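Here is a minimal Python sketch of such a framework, using the two contexts above; the criteria, weights, and scores are invented for illustration. The essential property is that weights are fixed per context before any option is scored.

```python
"""Sketch of a weighted-criteria decision framework. Weights are set per
context in advance; options are then scored against the same criteria."""

CRITERIA = ["harm_severity", "reversibility", "affected_population",
            "public_interest", "technical_alternatives", "precedent"]

WEIGHTS = {  # context-specific weights, fixed before scoring (values invented)
    "teen_livestreaming": {"public_interest": 3, "reversibility": 2,
                           "harm_severity": 2, "affected_population": 1,
                           "technical_alternatives": 1, "precedent": 1},
    "grooming_detection": {"harm_severity": 4, "affected_population": 3,
                           "public_interest": 1, "reversibility": 1,
                           "technical_alternatives": 1, "precedent": 1},
}

def score_option(context: str, scores: dict) -> float:
    """Weighted sum of an option's 0-5 criterion scores in a context."""
    w = WEIGHTS[context]
    return sum(w[c] * scores.get(c, 0) for c in CRITERIA)

# Compare two hypothetical mitigations under the same context.
restrict = {"harm_severity": 4, "reversibility": 3, "public_interest": 1}
label    = {"harm_severity": 2, "reversibility": 5, "public_interest": 4}
print(score_option("teen_livestreaming", restrict))
print(score_option("teen_livestreaming", label))
```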
3. Empowered Escalation Protocols
Not every tension requires executive attention, but high-stakes conflicts need appropriate decision makers. Effective protocols route decisions to people with three critical attributes: (1) authority to make the call, (2) visibility across relevant domains, and (3) accountability for outcomes.
This means clear criteria—not just uncertainty, but specific triggers—for when to escalate, pre-authorized decision makers for emergencies, cross-functional review for precedent-setting choices, and documentation requirements that scale with decision importance.
Without empowered escalation, two scenarios predominate: Either (a) operational personnel are stuck making tough choices without the visibility necessary to navigate competing regulatory requirements, or (b) executives are flooded with edge cases but lack operational context to make informed decisions.
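The routing logic itself can be simple. Below is a hypothetical sketch in which explicit trigger predicates map each case to a pre-authorized owner and a documentation level that scales with the stakes; everything else stays with the operational team. The trigger names and roles are assumptions.

```python
"""Sketch of trigger-based escalation routing with pre-authorized
decision makers. Triggers, roles, and fields are illustrative."""

ESCALATION_RULES = [
    # (trigger predicate, pre-authorized owner, documentation level)
    (lambda d: d["imminent_harm"],            "crisis-lead",      "full"),
    (lambda d: d["precedent_setting"],        "cross-functional", "full"),
    (lambda d: d["regulatory_exposure"] >= 3, "policy-executive", "standard"),
]

def route(decision: dict) -> tuple[str, str]:
    """Return (owner, documentation level) for the first matching
    trigger; routine cases stay with the operational team."""
    for trigger, owner, docs in ESCALATION_RULES:
        if trigger(decision):
            return owner, docs
    return "ops-team", "light"

case = {"imminent_harm": False, "precedent_setting": True,
        "regulatory_exposure": 2}
print(route(case))  # -> ('cross-functional', 'full')
```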
4. Documentation for Learning and Accountability
Documentation demonstrates rigor and enables improvement. Organizations should document:
Which requirements conflicted
What evaluation process was used
How options performed against criteria
What was chosen and why
What reasonable alternatives existed
When the decision should be revisited
This serves multiple purposes: consistency (similar cases reach similar outcomes), learning (pattern recognition improves over time), and accountability (regulators can see systematic approaches even when they disagree with specific outcomes).
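In practice, this can be as lightweight as a structured record with one field per element above. The sketch below is illustrative; the field names and example values are assumptions.

```python
"""Sketch of a decision record capturing the documentation elements
listed above. All names and values are hypothetical."""
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    conflicting_requirements: list[str]  # which requirements conflicted
    evaluation_process: str              # what evaluation process was used
    option_scores: dict[str, float]      # how options performed against criteria
    chosen_option: str                   # what was chosen...
    rationale: str                       # ...and why
    alternatives_considered: list[str]   # what reasonable alternatives existed
    revisit_by: date                     # when the decision should be revisited

record = DecisionRecord(
    conflicting_requirements=["GDPR Art. 5 minimization",
                              "OSA evidence retention"],
    evaluation_process="weighted-criteria framework v2",
    option_scores={"retain-90d": 3.2, "retain-30d": 2.7},
    chosen_option="retain-90d",
    rationale="Active safety investigation outweighs minimization here.",
    alternatives_considered=["retain-30d", "pseudonymized retention"],
    revisit_by=date(2026, 9, 1),
)
```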
Maturity-Based Implementation
This resolution framework can and should be implemented differently based on organizational maturity.
Early-stage organizations need foundational capabilities: conflict registers, designated decision makers, and basic escalation pathways. “Good enough” means someone is responsible and high-risk tensions get appropriate attention.
Scaling organizations need systematic frameworks: decision matrices for common tensions, cross-functional review boards, and quarterly pattern analysis. This eliminates the decision paralysis that sometimes plagues mid-size platforms.
Mature enterprises need sophisticated capabilities: specialist teams, formal oversight mechanisms, decision support tools, public transparency, and proactive regulatory engagement.
A maturity-based approach recognizes that attempting VLOP-level sophistication with startup resources creates compliance theater. Conversely, ad hoc decision making that may work for a 50-person company breaks catastrophically at scale.
Moving Forward
Cross-domain tensions are permanent features of platform governance. Organizations that develop systematic frameworks for navigating them gain significant competitive advantage, not because they avoid difficult decisions, but because they make them transparently and consistently and improve over time.
Building these capabilities requires domain expertise, attentiveness to organizational governance and design, and contextual calibration based on specific operations, user base, risk profile, and other factors. Full Fathom Advisory can help build these capabilities through:
Diagnostic assessments identifying your specific cross-domain tensions and current maturity level.
Decision framework development tailored to your operations, regulatory obligations, and organizational capacity.
Implementation roadmaps based on realistic resource constraints.
Team training on conflict recognition, framework application, and escalation protocols.
Documentation systems satisfying regulatory accountability expectations while building institutional knowledge.
If your platform is struggling with cross-domain tensions, reaching inflection points where ad hoc decision making is breaking down, or facing regulatory scrutiny about how you navigate competing requirements, let’s talk about building the robust capabilities your organization needs.
Friction at the Seams: Where Digital Trust Domains Collide
Digital trust today is an aggregate function of multiple domains—online safety, privacy, accessibility, youth protection, responsible AI, and fundamental rights—each with its own regulatory logic, standards, and professional communities. On an operational level, these domains are frequently in tension with each other, creating the need for robust, ethical resolution frameworks.
Even individually, these regimes can be internally inconsistent or duplicative. Collectively, they’re frequently in tension with each other.
This resource summarizes key friction points, highlighting the limitations of siloed approaches to trust and compliance.
1. Privacy vs. Safety
Privacy and safety are often treated as competing objectives, but in practice they are mutually dependent—and operationally difficult to reconcile.
Common tension points include:
Data retention: Privacy laws emphasize minimization and storage limitation (e.g., GDPR Art. 5), while online safety regulations call for content or logs to be retained for investigations (e.g., UK Online Safety Act, EU DSA).
User anonymity: Privacy regulations prioritize pseudonymity, yet safety teams must detect repeat offenders and prevent ban evasion—often requiring durable identifiers.
Proactive monitoring: Safety regimes increasingly expect platforms to act against illegal or harmful content, while privacy regimes constrain broad scanning or secondary use of data.
Profile building: Behavioral pattern detection supports abuse prevention, but privacy regulations restrict automated profiling—especially of children.
These tradeoffs surface in daily operational decisions for safety and privacy teams, which often must address them without a shared resolution framework.
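As one illustration of what a shared framework can encode, the hypothetical sketch below resolves the data retention tension with a privacy-protective default that is extended only on a documented, reviewable safety basis. The durations and field names are assumptions, not legal advice.

```python
"""Sketch of a retention resolver encoding both imperatives: default to
minimization, extend only with a documented safety basis."""
from datetime import timedelta

DEFAULT_RETENTION = timedelta(days=30)  # minimization-driven default

def retention_for(item: dict) -> timedelta:
    """Extend retention only for a documented, reviewable safety basis;
    otherwise the privacy-protective default wins."""
    if item.get("legal_hold"):
        return timedelta(days=365)  # preserved for an investigation
    if item.get("open_safety_case"):
        return timedelta(days=90)   # active review, then re-minimize
    return DEFAULT_RETENTION

print(retention_for({"open_safety_case": True}))  # 90 days, 0:00:00
```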
2. Child Protection vs. Youth Voice
Children’s digital policy is frequently framed in terms of protection. Yet human rights law is clear that protection is only one relevant consideration.
The burgeoning set of regulatory restrictions on youth engagement online underscores these tensions.
Protection or participation: Under the UN Convention on the Rights of the Child, children are entitled not only to safety (Art. 3), but also to participation, expression, access to information, and play (Arts. 12–13, 31). These rights are reaffirmed in General Comment No. 25 of the UN Committee on the Rights of the Child, which warns against over-restrictive digital environments that silence youth agency.
Restrictive defaults or autonomy: Regulatory measures such as the UK Age-Appropriate Design Code and Online Safety Act prioritize protective defaults but provide limited guidance on how to incorporate youth voice, evolving capacity, or participatory governance into platform design.
Control or consultation: The UK Online Safety Act obligates platforms to design away risks to children, whereas General Comment 25 calls for meaningful youth participation in policy and design.
Age assurance or privacy and expression: Child-protection regulations (e.g., the UK AADC, the DSA, the California AADC) increasingly push for robust age assurance. But stronger age checks can reduce anonymity and chill self-expression.
The result of these tensions is a growing gap between child safety mandates and children’s rights to be heard, to explore, and to shape the environments they use.
3. AI Governance vs. Fundamental Rights
AI governance frameworks emphasize transparency, fairness, explainability, and risk management. Fundamental rights frameworks emphasize dignity, autonomy, non-discrimination, privacy, and freedom of expression.
While often complementary, in practice these imperatives frequently collide.
Explainability vs. privacy and trade secrets: AI governance frameworks push for transparency and explainability (e.g., EU AI Act arts. 13-15, NIST AI RMF 3.4-3.5), but rights frameworks limit disclosure of personal data or sensitive attributes, and commercial protections limit disclosure of model details.
Bias auditing vs. sensitive data processing: The EU AI Act (art. 10) anticipates bias assessment and monitoring, leading teams to infer or process attributes like race, ethnicity, religion, or sexual orientation. But data protection laws may restrict or prohibit such collection or inference (e.g., GDPR art. 9).
Risk mitigation vs. freedom of expression: Platforms are expected to leverage AI systems to proactively detect and remove harmful content, but such automation implicates rights to expression, information, and due process, and whether content is “harmful” is often context-dependent.
Safety controls vs. autonomy and equality: AI-driven classifiers or behavioral scoring can target potentially fraudulent or illicit activity, but they can also limit legitimate user activity or restrict access unequally.
Regimes such as the EU AI Act, GDPR, DSA, OECD AI Principles, and UNESCO AI Ethics Recommendation articulate the values clearly, but they rarely provide the practical means for resolving conflicts between them.
4. Accessibility vs. Safety
Accessibility and safety are both foundational to trust, yet they’re not always aligned.
Friction points include:
Safety friction vs. cognitive accessibility: Interstitials, warnings, and confirmation flows can foster safety but increase cognitive load and introduce complexity that accessibility standards discourage.
Age assurance vs. operability: CAPTCHAs, ID checks, and biometric estimation may enhance child protection, but they can also exclude users with disabilities.
Content moderation vs. assistive technologies: Safety systems can interact unpredictably with accessibility tools like screen readers, captions, or AI-based assistive tools.
Reporting flows vs. inclusive design: Safety escalation mechanisms are often rigid; accessibility requires multiple interaction pathways.
Without resolution frameworks, accessibility and safety teams are left to weigh these tensions ad hoc.
5. Accountability & Transparency vs. Proportionality
Across trust domains, a deeper fault line recurs.
Regulations demand accountability and transparency: impact assessments, risk documentation, explainability, audit trails, transparency reports. At the same time, they emphasize proportionality: obligations should scale with risk, size, and context; disclosures should not create new harms; compliance must remain operationally feasible.
These two sets of imperatives are often at odds. In essence, platforms are asked to:
Document everything—while minimizing data.
Explain recommendations and automated decisions—without exposing vulnerabilities, personally identifying information, or safety systems.
Maintain detailed records—while iterating and improving continuously.
Large platforms absorb some of this tension through scale and spend. But for smaller and mid-sized platforms, it often becomes paralyzing. And for regulators and the public, it can yield what looks like performative transparency rather than meaningful trust.
What’s missing is a robust, practical, scalable resolution mechanism that helps organizations decide what to document, who should make decisions, and how (and at what level of detail) to explain them, coherently and across trust domains.
Why This Matters
Trust failures tend to accumulate at the seams—the friction points where well-intentioned requirements collide and organizations lack tools to reconcile them.
In the coming phases of digital trust governance, organizations can build a competitive advantage with integrated trust strategies that acknowledge trade-offs, document decisions, and align safety, rights, and accountability in practice.