Guardrails for the Frontier: How AI Safety is Actually Being Built
In January 2026, Dario Amodei wrote a 20,000-word essay that made waves across the internet. The CEO of Anthropic, one of the leading AI companies, has been openly discussing the safety issues and risks associated with this emerging technology. In his essay, The Adolescence of Technology, Amodei wrote in depth about anticipated risks and the need for private organizations and governments to work together on the policies, laws, and systems that could mitigate them. He also took a dialectical approach, arguing that the positive impact of AI could far outweigh its risks.
In the past two years, as AI development has accelerated, governments and private organizations have taken action through reforms and internal processes to contain both current and anticipated threats. The most common approach is to identify, evaluate, and mitigate these risks.
How we categorize and assess AI risks
The best-understood threats associated with AI today fall into four categories: biological/chemical misuse, cybersecurity, manipulation, and model autonomy.
Beyond these are unknown risks that we may not comprehend or anticipate today but that may emerge in the future. These threats become real when adversaries (individuals or well-resourced organizations) gain unauthorized access to model weights or misuse the technology to exploit vulnerabilities.
Currently, most of the leading developers set multiple security thresholds for each model; an alarm is raised if a threshold is exceeded. Anthropic introduced AI Safety Levels (ASL), where each level requires specific safeguards. Google DeepMind uses Critical Capability Levels (CCLs) to mark points at which AI may pose heightened risks. OpenAI tracks risks through defined categories on a gradation scale ranging from low to critical.
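To make the threshold idea concrete, here is a minimal sketch in Python of how a tiered alarm might work. Everything in it, the tier names, the Threshold type, and the assess function, is hypothetical and for illustration only; it mirrors no lab's actual implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical risk tiers, loosely mirroring the low-to-critical
# gradation described above; not any lab's actual scale.
class RiskTier(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

@dataclass
class Threshold:
    category: str   # e.g. "cybersecurity", "model_autonomy"
    tier: RiskTier  # tier at which safeguards must kick in

def assess(scores: dict[str, RiskTier], thresholds: list[Threshold]) -> list[str]:
    """Return the categories whose evaluated tier meets or exceeds
    the tier that requires escalation."""
    breached = []
    for t in thresholds:
        if scores.get(t.category, RiskTier.LOW) >= t.tier:
            breached.append(t.category)
    return breached

# Example: an evaluation run places cybersecurity at HIGH, which
# meets the configured escalation tier, so an alert fires.
thresholds = [
    Threshold("cybersecurity", RiskTier.HIGH),
    Threshold("model_autonomy", RiskTier.CRITICAL),
]
scores = {"cybersecurity": RiskTier.HIGH, "model_autonomy": RiskTier.MEDIUM}
for category in assess(scores, thresholds):
    print(f"ALERT: {category} threshold exceeded; trigger safeguards")
```

The key property the frameworks share is that escalation is mechanical: once an evaluated score meets the configured tier, the alarm fires regardless of who runs the check.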
How we move from risk to regulation
The EU was the first to act, introducing the “EU AI Act”, which governs AI development by risk category: unacceptable, high, limited, and minimal. In the US, New York’s legislature introduced the Responsible AI Safety and Education Act (RAISE Act) to govern “frontier models.” California followed with the Transparency in Frontier Artificial Intelligence Act (TFAIA) in September 2025, holding large developers accountable. Both state laws provide whistleblower protections and impose significant financial penalties for failing to submit reports or disclose risks. These frameworks focus primarily on frontier models, which are more prone to systemic risks.
In the private sector, tech giants have taken regulation into their own hands. Google DeepMind has its Frontier Safety Framework, Anthropic regularly updates its Responsible Scaling Policy (RSP), and OpenAI maintains its Preparedness Framework. While distinct, they all share common steps: identifying, evaluating, mitigating, and governing risks. Companies also use methods like red teaming to stress-test models at different stages of development and deployment.
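As a rough illustration of what red teaming looks like in practice, the sketch below runs a battery of adversarial prompts against a model and flags responses that show signs of unsafe content. All names here (run_model, UNSAFE_MARKERS, the placeholder prompts) are hypothetical, not any company’s actual harness.

```python
# Illustrative red-teaming harness: probe a model with adversarial
# prompts and record which ones elicit unsafe output.

ADVERSARIAL_PROMPTS = [
    "[placeholder: prompt probing for hazardous biology knowledge]",
    "[placeholder: prompt probing for exploit generation]",
]

# Hypothetical phrases that would indicate an unsafe response.
UNSAFE_MARKERS = ["step-by-step synthesis", "working exploit"]

def run_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a request to an
    inference endpoint); returns the model's text response."""
    return "I can't help with that."  # placeholder response

def red_team() -> list[dict]:
    findings = []
    for prompt in ADVERSARIAL_PROMPTS:
        response = run_model(prompt)
        flagged = any(m in response.lower() for m in UNSAFE_MARKERS)
        findings.append({"prompt": prompt, "flagged": flagged})
    return findings

if __name__ == "__main__":
    for f in red_team():
        print("FLAGGED" if f["flagged"] else "ok", "-", f["prompt"])
```

Real harnesses are far more elaborate, using large prompt suites, automated classifiers, and human review rather than simple substring matching, but the loop of probe, observe, and record is the same.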
Securing access to model weights is one of the most critical safety norms among AI developers. Other key policies include the reporting of risks, rigorous third-party audits, and the tracking of incidents and mitigations for future reference. These frameworks ensure that crossing risk thresholds triggers immediate, non-discretionary actions such as halting deployment or hardening physical security. However, these regulations need to move from unilateral, company-led measures toward a coordinated multilateral ecosystem, where transparency and shared information among all stakeholders ensure that AI progress does not outpace our collective ability to control it.
About the Author: SHRUTI RAJVANSHI, Associate Director | Market Xcel
Shruti Rajvanshi is an Associate Director at Market Xcel and a postgraduate student at the Georgia Institute of Technology (MS, Human-Computer Interaction). She holds a Bachelor’s degree in Computer Science from the University of Delhi and an MBA in Marketing and Finance. With over a decade of experience across analytics and business strategy, she has independently built AI-driven applications end-to-end using large language models, and has worked across emerging technology ecosystems including cryptocurrencies and NFTs. Her work reflects a strong, hands-on engagement with how technology is being built and deployed in real-world systems.