Qubit Cyber is on a mission to shift focus from simply protecting data and systems (Cybersecurity) to protecting humans in the digital era (Digital Safety). Digital Safety Modelling is a structured process for identifying, understanding, and prioritising potential harms to humans arising from their interaction with, or dependence upon, technological systems. Where threat modelling asks "what could go wrong with this system?", Digital Safety Modelling asks "how could this system's behaviour, intended or otherwise, harm the human?". This approach treats prevention of harm to individuals, institutions and communities as its primary outcome, so that trust and safety become the central focus.
Naturally, this approach helps a business defend and safeguard customer trust: it considers adverse impacts on customer safety and prepares the organisation to respond when a breach or failure occurs. Digital Safety Modelling therefore requires stakeholders from across the business, including legal, HR, customer success, product engineering, support services, security and IT, to come together and invest the time it needs.
The shift in thinking is radical, and it is necessary. Traditional threat modelling treats the human as a threat vector (the insider threat, the social engineering target, the misconfigured administrator). Digital Safety Modelling treats the human as the protected asset, and treats dampening the impact of failure (cyber attacks, vulnerabilities and breaches) as a design outcome. The system exists to serve the human; when that relationship inverts, when humans must adapt their behaviour to protect the system or when system failures cascade into human harm, something has gone wrong at the architectural level.
This connects directly to the principle that "not bad" is not the same as "good". Threat modelling can declare victory when no vulnerabilities are found and design flaws for known vectors are addressed. Digital Safety Modelling must ask the harder question: is this human actually safer? Absence of identified harms isn't the presence of safety. You need positive evidence that the human's privacy, finances, physical security, and autonomy are protected, not merely that you haven't yet discovered how they might be violated.
The methodological gap here is significant. We have mature frameworks for threat modelling systems; we don't yet have equivalent rigour for modelling human safety outcomes from digital systems. This is where Qubit Cyber's mission comes to the fore. We are committed to being protagonists in changing the traditional narrative and to developing frameworks and methodologies for Digital Safety Modelling.
First, you establish scope by mapping the safety exposure surface. Rather than decomposing system architecture, you decompose harm to humans, institutions and communities, tracing cascading impacts, dependencies and vulnerabilities, including abuse cases. What aspects of a customer's life, and of their customers' customers' lives, intersect with the technology? This includes their physical proximity to cyber-physical systems, the financial instruments they connect to digital services, the personal information held about them by third parties, and their physical-safety dependence on system availability or integrity. The output isn't a data flow diagram; it's closer to a dependency map, a safety exposure surface diagram centred on the human.
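To make the idea concrete, here is a minimal sketch of what such an exposure surface could look like as a data structure. It is an illustration built from the categories named in the paragraph above; every class, field and example value is hypothetical, not a prescribed schema.

```python
# A minimal sketch of a human-centred safety exposure surface.
# All names here are hypothetical illustrations, not a standard schema.
from dataclasses import dataclass, field
from enum import Enum


class ExposureCategory(Enum):
    CYBER_PHYSICAL = "physical proximity to cyber-physical systems"
    FINANCIAL = "financial instruments connected to digital services"
    THIRD_PARTY_DATA = "personal information held by third parties"
    AVAILABILITY = "physical-safety dependence on system availability/integrity"


@dataclass
class Dependency:
    """One edge in the exposure map: a system the human depends on."""
    system: str
    category: ExposureCategory
    cascades_to: list[str] = field(default_factory=list)  # downstream parties
    abuse_cases: list[str] = field(default_factory=list)  # known misuse paths


@dataclass
class ExposureSurface:
    """The human at the centre, with every dependency radiating outward."""
    subject: str  # the customer, or the customer's customer
    dependencies: list[Dependency] = field(default_factory=list)

    def by_category(self, category: ExposureCategory) -> list[Dependency]:
        return [d for d in self.dependencies if d.category == category]


# Example: a single customer with one cascading financial dependency.
surface = ExposureSurface(
    subject="customer",
    dependencies=[
        Dependency(
            system="payments API",
            category=ExposureCategory.FINANCIAL,
            cascades_to=["customer's customers"],
            abuse_cases=["account takeover", "fraudulent charge"],
        )
    ],
)
print(len(surface.by_category(ExposureCategory.FINANCIAL)))  # -> 1
```

The point of the structure is the pivot: the human, not the system, sits at the root of the map.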
Second, you identify potential harms. Where STRIDE categorises threats to system properties, Digital Safety Modelling uses a human-centric taxonomy. Impact is graded in terms of direct, indirect, and tertiary or cumulative effect, and that grading in turn determines the countermeasures, or in this case the "fail-safe" and "response" mechanisms.
The Digital Safety Modelling framework we use encompasses harms to privacy, finances, physical security and autonomy, each graded across those impact tiers.
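As an illustration, the taxonomy and impact tiers might be captured as below. The mapping of direct harms to fail-safe mechanisms and of indirect or cumulative harms to response mechanisms is an assumption made for the sketch; all identifiers are hypothetical.

```python
# A sketch of the human-centric harm taxonomy, using the harm categories
# named earlier (privacy, finances, physical security, autonomy) and the
# three impact tiers. The tier-to-mechanism mapping is an assumption.
from dataclasses import dataclass
from enum import Enum


class HarmCategory(Enum):
    PRIVACY = "privacy"
    FINANCIAL = "finances"
    PHYSICAL = "physical security"
    AUTONOMY = "autonomy"


class ImpactTier(Enum):
    DIRECT = 1      # harm lands on the human immediately
    INDIRECT = 2    # harm arrives via a dependent party or service
    CUMULATIVE = 3  # small exposures that aggregate into serious harm


@dataclass
class Harm:
    description: str
    category: HarmCategory
    tier: ImpactTier

    def required_mechanism(self) -> str:
        # Assumed mapping: direct harms demand fail-safe design; indirect
        # and cumulative harms demand a prepared response.
        return "fail-safe" if self.tier is ImpactTier.DIRECT else "response"


harm = Harm("location history exposed to an abuser",
            HarmCategory.PRIVACY, ImpactTier.DIRECT)
print(harm.required_mechanism())  # -> fail-safe
```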
Third, you assess and prioritise based on human impact severity, not system criticality. In many cases this approach scales well to assessing impact on institutions and communities. A privacy exposure that seems minor from a data security perspective might be catastrophic for a domestic abuse survivor. Context matters enormously: the same system failure produces radically different human outcomes depending on who the human is, their circumstances, and their adversaries. This stage must account for vulnerability asymmetries: children, the elderly, disabled persons, those under coercive control, and those with limited technical literacy face amplified harms from identical system behaviours.
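One way to operationalise vulnerability asymmetry is a context-sensitive scoring model. The sketch below assumes a simple multiplicative model with illustrative, uncalibrated factor values; it is a starting point for discussion, not a validated severity formula.

```python
# A sketch of impact-based prioritisation: base severity scaled by
# context-dependent vulnerability factors. Factor values are illustrative.
VULNERABILITY_FACTORS = {
    "child": 3.0,
    "elderly": 2.0,
    "disabled": 2.0,
    "under_coercive_control": 4.0,
    "limited_technical_literacy": 1.5,
}


def human_impact_score(base_severity: float, contexts: list[str]) -> float:
    """Scale a 0-10 base severity by the subject's vulnerability contexts.

    The same system failure scores differently for different humans: a
    "minor" privacy exposure becomes critical for a survivor of domestic
    abuse.
    """
    score = base_severity
    for context in contexts:
        score *= VULNERABILITY_FACTORS.get(context, 1.0)
    return min(score, 10.0)  # cap so triage bands stay comparable


# The identical exposure, two very different humans:
print(human_impact_score(2.0, []))                          # -> 2.0
print(human_impact_score(2.0, ["under_coercive_control"]))  # -> 8.0
```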
Fourth, you determine safeguards—and critically, these extend beyond technical controls. Safeguards might include design decisions that fail-safe toward human protection, transparency mechanisms, meaningful consent frameworks, human-in-the-loop requirements for consequential decisions, and regulatory or policy interventions. The question shifts from "how do we protect the system?" to "how do we ensure the human remains safe regardless of system state?"
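As a sketch of what "fail-safe toward human protection" and "human-in-the-loop for consequential decisions" can look like in code, consider a hypothetical consent-dependent feature; the threshold, function names and failure mode below are all illustrative assumptions.

```python
# A sketch of fail-safe design: on any error or uncertainty, a
# consequential action falls back to the state that keeps the human safe,
# and high-impact decisions require explicit human sign-off.
from typing import Callable


def fail_safe(action: Callable[[], str], safe_default: str) -> str:
    """Run an action; on any failure, return the human-protective default."""
    try:
        return action()
    except Exception:
        return safe_default


def consequential_decision(score: float, approve: Callable[[], bool]) -> bool:
    """Decisions above a severity threshold need explicit human sign-off."""
    if score >= 7.0:      # the threshold is illustrative
        return approve()  # the human-in-the-loop gate
    return True           # low-impact actions may proceed automatically


def broken_location_sharing() -> str:
    raise TimeoutError("upstream consent service unreachable")


# When the consent check fails, location sharing defaults to off, not on:
print(fail_safe(broken_location_sharing, safe_default="sharing disabled"))
# A high-impact decision is blocked until a human approves it:
print(consequential_decision(8.0, approve=lambda: False))  # -> False
```

The design choice to note: the safe default protects the human's state, not the system's availability, so a failing dependency degrades toward privacy rather than toward exposure.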
Fifth, you document and iterate, recognising that human circumstances change. A person's risk profile shifts with life events—new relationships, changed living situations, altered health status, changed financial circumstances. Static analysis produces stale protection.
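In practice this iteration can be event-driven: a life event invalidates the stored risk profile and forces the earlier stages to be re-run. A minimal sketch, assuming hypothetical event names and a reassess hook:

```python
# A sketch of iteration: life events mark the stored risk profile as
# stale and trigger reassessment, so protection never goes out of date.
LIFE_EVENTS = {"new_relationship", "changed_living_situation",
               "altered_health_status", "changed_financial_circumstances"}


class RiskProfile:
    def __init__(self, subject: str):
        self.subject = subject
        self.stale = False

    def on_event(self, event: str) -> None:
        if event in LIFE_EVENTS:
            self.stale = True  # static analysis is no longer trustworthy

    def reassess(self) -> None:
        # Re-run stages one to four for this human, then clear the flag.
        self.stale = False


profile = RiskProfile("customer")
profile.on_event("changed_living_situation")
print(profile.stale)  # -> True: time to re-model, not to assume
```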
