Explaining the AI Resilience and Adaptation Framework

AI resilience isn’t about tech companies fixing their systems — it’s about how society adapts when AI safeguards fail. This article explains how the resilience framework, through Avoidance, Defence, and Remedy interventions, prepares society to manage risks from increasingly powerful and accessible AI systems.

On 6 December 2025, a seventeen-year-old boy in Japan was arrested for carrying out a cyberattack on the servers of Kaikatsu Frontier, an internet café chain operator. The teenager generated the attack code using conversational AI, compromising the data of 7.3 million customers and disrupting business operations. The AI Incident Database reported that the suspect’s prompts to the AI “concealed malicious intent”. The suspect had a prior history of unrelated credit card fraud.

The case reveals several facts about the current state of AI. A teenager with presumably basic technical knowledge was able to access powerful AI capabilities easily and cheaply, and to use them to build sophisticated code. With the right prompts, he attacked a server system whose compromise would otherwise take years of expertise to plan and execute.

Clearly, the company’s cyber infrastructure was weak, but the deeper point is that AI models are already diffused into our lives. We cannot control who builds them or how people will use them. Some actors will misuse them, exposing the dual-use nature of these models: the same capability can benefit or harm society.

This pattern isn’t isolated to Japan. In Denmark, a 22-year-old used AI to research how to injure his father without killing him. He bypassed the model’s safeguards by posing as an author researching a novel, and the AI provided a detailed plan for the intended harm.

Consider the magnitude of the teenager’s crime: a single action affected millions of people and disrupted business operations throughout. How would you protect those millions, especially where the data exposure enables further harm, such as break-ins to banking systems? How, and by whom, would millions of victims be compensated?

In this article, I will discuss how adopting the AI resilience framework, which focuses on societal adaptation through avoidance, defence and remedy interventions, can strengthen the collective responsibility of tech companies, governments and society to mitigate AI risks.

AI Resilience complements traditional model-level safety approaches. While conventional AI safety focuses on training data quality, safeguards and capability restrictions, resilience addresses what happens when model-level safety fails or is bypassed, as the examples above demonstrate. It asks how to stay safe when AI is easily accessible (as it already is), becoming cheaper to use and deploy for high-end tasks, and carries dual-use potential. In such a scenario, we need a societal response: adaptation to advanced AI systems.

Societal adaptation, or AI resilience, begins once AI models are deployed and diffused. It is at this wide-scale use that new risks surface, risks that can slip past even the best safeguards built into the systems; adaptive interventions are therefore designed to mitigate the harm arising in specific use cases. For example, most students use conversational AI for coding help or tutoring, but in the case above, a teenager used it to commit a cybercrime.

AI Resilience has a three-part framework:

1. Avoidance: Interventions that stop harmful use before it happens, so that attacks never occur in the first place. These include laws against AI-assisted crimes, age restrictions on access to harmful content, and monitoring systems that detect suspicious activity.

2. Defence: Interventions that prevent harm even when misuse occurs, by building systems that withstand it. These include cybersecurity infrastructure that blunts cyberattacks, spam filters that catch phishing, and public campaigns that raise awareness of AI harms.

3. Remedy: Interventions that reduce downstream impact after harm occurs. These include legal action such as arrest and prosecution, victim compensation (for example, for people who lose money to banking scams) and other rapid responses to contain damage. Remedy faces severe challenges at scale: while legal systems can prosecute one attacker, as in the Danish case above, they struggle to help millions of victims, as in the Japanese case, where the exposed customer data could have been used to breach banking systems. A minimal code sketch of the three layers follows below.
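To make the framework concrete, here is a minimal Python sketch that models the three layers as a data structure and checks an intervention portfolio for uncovered layers. Only the layer names come from the framework itself; the `Intervention` class, the example entries and the `gaps` helper are hypothetical illustrations, not an implementation from the cited papers.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    AVOIDANCE = "avoidance"  # stop harmful use before it happens
    DEFENCE = "defence"      # withstand misuse when it occurs
    REMEDY = "remedy"        # reduce downstream impact after harm

@dataclass
class Intervention:
    name: str
    layer: Layer
    actor: str  # who is responsible: government, company, operator, etc.

# Hypothetical portfolio for the AI-assisted cyberattack scenario.
portfolio = [
    Intervention("cybercrime laws and deterrence", Layer.AVOIDANCE, "government"),
    Intervention("misuse monitoring on AI platforms", Layer.AVOIDANCE, "AI company"),
    Intervention("hardened server infrastructure", Layer.DEFENCE, "operator"),
    Intervention("intrusion detection and response", Layer.DEFENCE, "operator"),
    Intervention("prosecution of attackers", Layer.REMEDY, "government"),
    Intervention("victim notification and compensation", Layer.REMEDY, "operator"),
]

def gaps(interventions: list[Intervention]) -> set[Layer]:
    """Return the framework layers that have no intervention in place."""
    return set(Layer) - {i.layer for i in interventions}

if __name__ == "__main__":
    for layer in Layer:
        names = [i.name for i in portfolio if i.layer == layer]
        print(f"{layer.value}: {names}")
    print("uncovered layers:", gaps(portfolio))
```

Running the sketch with one layer’s entries removed flags that layer as uncovered, which mirrors the failure analysis that follows: the Japanese case is, in these terms, a portfolio whose Avoidance and Defence layers were effectively empty.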

Let’s deconstruct the Japanese case using the above framework:

Avoidance failure: Despite the teenager’s earlier credit card fraud history, cyber laws didn’t deter him. No monitoring system tracked his activity, no surveillance detected his planning phase, and no age restriction prevented his access to AI code-generation capabilities.

Defence failure: The company’s cybersecurity was weak, and the intrusion succeeded without detection; the generated code breached the servers and exposed the data of 7.3 million customers.

Remedy partially worked: Legal accountability led to the attacker’s arrest after months of investigation. But the exposure of 7.3 million people’s personal data and the operational havoc cannot be undone. One act of AI misuse thus reached millions at scale, which is why remediation systems struggle with AI-enabled mass harm.

Let’s look at another example to explain the Adaptive Cycle. The Economist recently published an article, “How AI is Rewiring Childhood”, signalling exciting opportunities but also cautioning about ominous risks.

The article describes the enormous power of AI to transform education by creating a level playing field (if supported by the right policies). A child educated in Hindi-medium schooling in remote Bihar might develop cognitive and comprehension skills comparable to a counterpart educated in English in New Delhi, overcoming language barriers. A host of other AI tools could support storytelling and learning, yet the real risk lies in the larger societal impact of such tools.

A child begins to trust and rely on an AI system that always speaks in a friendly tone. The AI learns the child’s psychology and tailors its answers accordingly. The child grows intolerant of people who disagree with or criticise him. At worst, we could soon have a generation of children with poor social skills.

The Adaptive Cycle puts the onus not just on the company but also on governments, schools, parents and researchers to create an enabling environment in which a child develops as a well-rounded human being despite wide-scale AI diffusion. Here’s how:

Identify and Assess Risks (Stage 1): Researchers assess impacts and identify risks. Companies use real-world usage data to inform R&D and product improvements. Schools evaluate the social implications of wide-scale integration of AI in education.

Develop Responses (Stage 2): Governments create industry benchmarks and age-appropriate guidelines. Tech companies add parental-control features. Schools teach social skills and encourage outdoor games.

Implement Responses (Stage 3): Companies provide clear guidance to parents and survey them on outcomes. Parents monitor usage and set boundaries. Schools adopt social-skills programmes. Governments enforce regulations across the industry.

Loop back to Stage 1 (creating a continuous cycle): New AI capabilities emerge, interventions don’t work as expected, and children develop use patterns that weren’t anticipated, making resilience a continuous process, as the sketch below illustrates.
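The loop can also be pictured in code. The sketch below is purely illustrative: the three stage names come from the Adaptive Cycle described above, while the functions and data are hypothetical placeholders rather than any real implementation.

```python
# Illustrative sketch of the Adaptive Cycle as a feedback loop.
# Stage names come from the framework; the logic and data below
# are hypothetical placeholders, not a real implementation.

def identify_and_assess_risks(observations: list[str]) -> list[str]:
    """Stage 1: distil raw observations into identified risks."""
    return [o for o in observations if "harm" in o or "misuse" in o]

def develop_responses(risks: list[str]) -> list[str]:
    """Stage 2: draft one intervention per identified risk."""
    return [f"intervention for: {r}" for r in risks]

def implement_responses(responses: list[str]) -> list[str]:
    """Stage 3: deploy interventions; deployment itself surfaces
    new, previously unanticipated use patterns."""
    print("deploying:", responses)
    return ["new misuse pattern observed after deployment"]

# In principle the loop never terminates; it is capped here for the demo.
observations = ["reported harm: over-reliance by children"]
for cycle in range(3):
    risks = identify_and_assess_risks(observations)
    responses = develop_responses(risks)
    observations = implement_responses(responses)  # loop back to Stage 1
```

The point of the structure is the last line: the output of Stage 3 becomes the input of Stage 1, which is what makes resilience a continuous process rather than a one-off fix.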

As LLM models become widespread and easily accessible, the scope for misuse grows with them, given the dual-use nature of the technology. Mitigating AI risk is not the concern of AI companies alone but a collective effort in which tech companies, governments, security agencies and civil society organisations build mechanisms together, so that societal adaptation to AI secures its wider benefit for all. The two examples from Japan and Denmark show that determined, outwardly unsuspicious actors can bypass safeguards through simple social engineering, causing harm at unprecedented scale.

AI Resilience is our capacity to adapt when AI safeguards fail. The response requires multi-layered interventions across Avoidance, Defence and Remedy to address the scale asymmetry of AI misuse, where a single actor can harm millions.

At the societal level, this requires: Avoidance through robust cyber laws and professional enforcement agencies that understand AI governance; Defence through AI products that inform consumers about potential misuse, and through awareness among parents and teachers about the dangers of over-reliance; and Remedy through effective enforcement and comprehensive victim support, including psycho-social support.

Governments around the world are building AI Resilience through a combination of laws and oversight. For example, India has adopted a polycentric adaptive model relying on voluntary compliance rather than centralised regulation. It operationalises the adaptive cycle using real-world evidence to inform periodic revisions.

Resilience through continuous adaptation offers a path forward when model-level safety inevitably fails. Tech companies, governments, and society must accelerate the adaptive cycle — building infrastructure to live safely with AI even when its guardrails are breached.

References

Bernardi, J. (2024, August 3). Resilience and adaptation to advanced AI. Achieving AI Resilience. https://achievingairesilience.substack.com/p/resilience-and-adaptation-to-advanced

Bernardi, J., Mukobi, G., Greaves, H., Heim, L., & Anderljung, M. (2024). Societal adaptation to advanced AI. arXiv preprint arXiv:2405.10295. https://arxiv.org/abs/2405.10295

Denmark assault case [Incident 851]. (2025). AI Incident Database. Partnership on AI. https://incidentdatabase.ai/cite/851

How AI is rewiring childhood. (2024, December 7). The Economist. https://www.economist.com/leaders/2024/12/05/how-ai-is-rewiring-childhood

Ministry of Electronics and Information Technology. (2024, November). India AI governance guidelines. Government of India. https://www.meity.gov.in/

Osaka cyberattack [Incident 1047]. (2025, January 18). AI Incident Database. Partnership on AI. https://incidentdatabase.ai/cite/1047

Sahariah, S. (2025, December 4). Deconstructing India’s AI governance framework. Medium. https://medium.com/@suti011/deconstructing-indias-ai-governance-framework-abd81f6b4cdf
