Dr. Sutirtha Sahariah
I am a research consultant with ten years of experience conducting international research on modern slavery, gender-based violence, and human trafficking. With a Ph.D. in International Development from the University of Portsmouth, I specialize in qualitative research, strategic communications, and policy analysis. My work spans the UK, India, Myanmar, Nepal, and Bangladesh, in collaboration with organizations such as the Global Fund to End Modern Slavery, the University of Liverpool, and various UN agencies. I have published extensively on modern slavery, women's empowerment, and social justice issues. As an independent journalist, my stories have appeared in The Guardian, BBC, World Economic Forum, and other international outlets, bringing visibility to critical development issues and human rights concerns.
It was during the monsoon of 2025 that I travelled to the eastern Indian state of Bihar, home to over 130 million people, where per capita income remains one of the lowest in the country. During the trip, I visited a local administrative office to collect some information about the local population. The officer I met was a primary school mathematics teacher who had been assigned duties for state elections that were still months away; he was doing data entry in the run-up to the polls. While he was on this assignment, the poorer children in the government-aided primary school where he teaches were losing out on classes for weeks and months at a stretch.
The teacher told me, “I feel bad, but what can I do?” Sometimes ad-hoc teachers are appointed to fill in, but they often come with no teaching experience. “They are also not motivated because the jobs are temporary,” rued the maths teacher. In a state where unemployment is high, people scramble for whatever government jobs are available, and such positions are secured through local connections and even by paying bribes.
This kind of neglect explains why India’s poorest children lack foundational learning skills. The focus of state-aided schools is attendance, incentivised with mid-day meals, but building a foundation goes beyond teaching textbooks: it requires sustained investment and monitoring of a child’s intellectual, emotional and learning needs. The Annual Status of Education Report (ASER) 2024 found that in some states, such as Bihar, over 50% of Class 5 students in rural areas still struggle to read at the Class 2 level, despite recent improvements from focused policy interventions.
From my field visit to a school in Bihar, I realised that things can improve dramatically if there is political will. Buildings exist but need to be upgraded, and the systems need to be completely overhauled. Internet connectivity is strong. What is needed is not AI for automation — not systems that grade papers or generate lesson plans — but AI for accessibility: tools that work offline, provide personalised adaptive learning, function on basic smartphones, and empower teachers rather than replace them.
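To make “adaptive learning on a basic device” concrete, here is a minimal sketch, with invented thresholds and sample items, of how an offline practice tool might adjust question difficulty using only local state, with no connectivity required:

```python
import random

# Hypothetical question bank: difficulty level -> sample items.
QUESTIONS = {
    1: ["2 + 3", "4 + 1"],
    2: ["12 + 9", "15 - 7"],
    3: ["23 x 3", "84 / 4"],
}

def next_question(history, level=1):
    """Raise difficulty after 3 straight correct answers; lower it after 2 straight misses."""
    if len(history) >= 3 and all(history[-3:]):
        level = min(level + 1, 3)
    elif history[-2:] == [False, False]:
        level = max(level - 1, 1)
    return level, random.choice(QUESTIONS[level])

history = [True, True, True]              # three correct answers in a row
level, question = next_question(history)  # difficulty steps up to level 2
print(level, question)
```

The point of the sketch is only that personalisation does not require heavy infrastructure: a few kilobytes of local state are enough to keep a child working at the edge of their ability.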
The crisis of the Indian education system is that it favours those who can afford it, leaving a vast number of poor children with no additional resources for learning. Hiring a private tutor in India is expensive, yet it is the norm for most school-going children. The private tutoring industry is pegged at a whopping $10.8 billion. This is where AI in education can be revolutionary, by creating a level playing field in access to quality education.
India’s AI-in-education policy has to be multilayered because there is massive inequality in wealth and accessibility. The priority should be to help a significant percentage of children from poorer and rural communities leap-frog, so that India achieves a revolution in education by creating a pool of skilled, job-ready people within a generation. To achieve this, AI in education must be designed with the poor and uneducated in mind and treated as a digital public good. The first step is to integrate AI across public and state-funded schools in all states and in different languages.
These systems must draw on established best practices, such as the UNESCO and OECD recommendations to use AI as a means of enhancing cognitive development and lifelong learning, provided that systems remain human-centred, inclusive, and transparent. Political will and public-private partnership are the keys to the success of a project of such magnitude, with the stated mission of AI for good, for everyone, everywhere.
One example from India’s context is OpenAI’s Study Mode feature, launched in July 2025 to help students with technical subjects such as maths and computer science. The design was not prompt-dependent; the idea was to encourage genuine learning behaviour. Its key features are interactive, Socratic-style prompts, scaffolded responses that walk students through problems step by step, and knowledge checks, rather than direct answers.
OpenAI’s Head of Education, Leah Belsky, explained that Study Mode originated from field observations in India, where families were spending a significant portion of their earnings on private tuition, disadvantaging children from economically weaker sections. OpenAI used India as a design laboratory, beta-testing with students nationwide, including those preparing for highly competitive medical and engineering exams. Participants were not individually named for privacy reasons, but the research reportedly spanned everyday learning to high-stakes exam preparation, providing feedback that shaped the personalisation and scaffolding features. No specific cities were mentioned, and few details about the research outcomes have been released.
The subsequent OpenAI Learning Accelerator (launched August 2025) collaborated with IIT Madras (a $500K research project on AI learning outcomes), AICTE, the Ministry of Education, and ARISE schools, distributing 500K ChatGPT licenses to educators and students.
OpenAI’s Study Mode supports 11 languages with voice capability, a significant stride towards inclusive education. Voice-enabled interaction can greatly help first-generation students, those with learning difficulties, and those alienated from traditional classrooms. This approach aligns closely with India’s National Education Policy (NEP) 2020, which emphasises foundational literacy, inquiry-based learning, and the reduction of rote memorisation, along with OpenAI’s own stated aim “to democratise encouragement, guidance, and confidence — especially for learners who lack access to quality teachers or tutors.”
But to address the learning needs of India’s poorest, AI tools alone will not suffice. India needs digital infrastructure and an innovative way to fund digital devices, such as a specially designed tablet that can work as a slate: an interactive device given to all students at subsidised or no cost. India also needs an AI-in-Education Code of Conduct, a governance framework developed through multi-stakeholder consultation including students, teachers, parents, and civil society. This Code should balance personalisation with autonomy, innovation with equity, and data utility with privacy. Without it, we risk deploying AI that works technically but fails socially.
The approach has to be collaborative, and the programme roll-out has to be meticulously planned, because this is where most well-intended projects fail. Every tablet distributed has to be insured. There has to be a soft penalty system, such as “access blocked”, that puts the onus of ownership and accountability for the devices on parents. A behaviour-change programme must run in parallel, along with investment in control and support centres providing 24/7 support through chatbots with human oversight. Every child’s progress tracker should be available via a unique ID and password, and where parents are uneducated, teachers will assist.
For children from poorer families, there has to be an incentive model. If a child does well, credit points could be provided that parents redeem for stationery or other essentials. Such models can arrest dropouts and encourage parents not to withdraw children from school to join the labour market. Foundational learning with AI should help children transition into an AI-driven skill-development programme that helps them get decent jobs early on, creating a credible employability pipeline. Fixing education has ripple effects on other problems, such as child labour and the trafficking of girls, in addition to providing a demographic dividend to the country.
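An illustrative sketch of the credit-point idea (the point values, catalogue, and account structure are all invented for illustration): milestones verified by the learning tool accrue points that parents can redeem at a local centre.

```python
POINTS_PER_MILESTONE = 10
CATALOGUE = {"notebook": 15, "pencil_set": 5, "school_bag": 60}

class ChildAccount:
    """Tracks credit points a child earns for verified learning milestones."""

    def __init__(self, child_id):
        self.child_id = child_id
        self.points = 0

    def record_milestone(self, competency):
        # Award points when the AI tutor verifies a competency.
        self.points += POINTS_PER_MILESTONE

    def redeem(self, item):
        # Parents redeem points at a local centre; fails if the balance is short.
        cost = CATALOGUE[item]
        if self.points < cost:
            return False
        self.points -= cost
        return True

acct = ChildAccount("child-042")
for competency in ["reads_class2_text", "two_digit_addition"]:
    acct.record_milestone(competency)
print(acct.redeem("notebook"), acct.points)  # True 5
```

The design choice worth noting is that rewards flow to parents, not the child alone, which is what ties the incentive to the dropout decision.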
India already has a good measurement infrastructure in place. The PARAKH Rashtriya Sarvekshan, a national assessment, tested approximately 2.3 million students across 782 districts in Classes 3, 6 and 9. India therefore has baseline data of massive scale; when AI tools are introduced, this baseline can be used for comparison to evaluate whether AI is making a difference.
The new thrust of India’s National Curriculum Framework shifts focus to core competencies: what students can do rather than which class they sat in. An AI tool can be used to find out whether students can do things they couldn’t before, helping create a competency-based measurement framework for judging AI’s real impact.
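As a toy illustration of how such baseline data might be reused (the records and field names are hypothetical; a real evaluation would need proper sampling and controls), a simple difference-in-differences comparison of competency rates looks like this:

```python
def share_competent(records, group, year):
    """Share of students in `group` who met the competency in `year`."""
    rows = [r for r in records if r["group"] == group and r["year"] == year]
    return sum(r["competent"] for r in rows) / len(rows)

def did_estimate(records, baseline_year, followup_year):
    """Gain in AI-tool schools minus gain in comparison schools."""
    gain_ai = (share_competent(records, "ai_tool", followup_year)
               - share_competent(records, "ai_tool", baseline_year))
    gain_ctrl = (share_competent(records, "comparison", followup_year)
                 - share_competent(records, "comparison", baseline_year))
    return gain_ai - gain_ctrl

# Hypothetical survey rows: one per assessed student.
records = [
    {"group": "ai_tool",    "year": 2024, "competent": False},
    {"group": "ai_tool",    "year": 2026, "competent": True},
    {"group": "comparison", "year": 2024, "competent": False},
    {"group": "comparison", "year": 2026, "competent": False},
]
print(f"Estimated impact: {did_estimate(records, 2024, 2026):+.0%}")
```

Subtracting the comparison group’s gain nets out whatever improvement would have happened anyway, which is exactly what a pre-existing national baseline makes possible.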
India has also created a multi-level assessment dissemination system in which data is shared through workshops at the national, regional and state levels to inform practical action. This is ideal infrastructure for AI programmes, because AI impact measurement needs a similar pipeline to share results; since India has already built such a system, AI evaluation can leverage it.
Accelerating AI in education needs a bold vision, good data and transparent enforcement of the existing mechanisms. The ASER 2024 report, released by Pratham Foundation in January 2025, shows the highest recorded reading levels for Class 3 government school students since the survey began 20 years ago. This is attributed to focused government programs like the NIPUN Bharat Mission.
ASER data has been referenced in 105 parliamentary questions, used by NITI Aayog (India’s planning body), and cited in the World Bank’s World Development Report. India has proven it can build trusted measurement systems. Now, as AI tools like Study Mode are deployed, these same systems can track whether accessible AI is delivering on its promise: providing personalised, patient support for foundational skills that current interventions cannot fully reach.
From a policy perspective, India’s governance architecture, such as the DPDP Act 2023, has provisions for student data protection, mandatory algorithmic audits for assessment tools following the framework’s fairness requirements, and teacher empowerment over surveillance. Special child-safety provisions — explicitly flagged in the Guidelines — should prevent AI systems from exploiting developing minds. Further integration through DIKSHA, Bhashini, and PARAKH offers the infrastructure; the governance framework offers the guardrails. The opportunity is transformative; responsible deployment ensures no child is left behind.
How AI was used in researching and writing this article
Claude was used as an augmentation tool while writing this article. Perplexity was used for deep research. Every citation and data point was verified. Gemini was used for the infographics. The author also created a Claude project for iterating on editorial flow and discussion, but the final outcome is his own.
AI resilience isn’t about tech companies fixing their systems — it’s about how society adapts when AI safeguards fail. This article explains how the resilience framework, through Avoidance, Defence, and Remedy interventions, prepares society to manage risks from increasingly powerful and accessible AI systems.
On 6 December 2025, a seventeen-year-old boy in Japan was arrested for carrying out a cyberattack on the servers of Kaikatsu Frontier, an internet café chain operator. The teenager hacked the system using code generated with conversational AI, compromising the data of 7.3 million customers and disrupting all business operations. The AI Incident Database reported that the suspect’s prompts to the AI “concealed malicious intent”. The suspect had a prior history of unrelated credit card fraud.
The case reveals several facts about the current state of AI. A teenager was able to access powerful AI capabilities easily and cheaply, building sophisticated code with presumably only basic knowledge. With the right prompts, he attacked a server system in a way that would otherwise have taken years of expertise to plan and execute.
Clearly, the company’s cyber infrastructure was weak. But the larger lesson is that AI models are already diffused into our lives. We cannot control who builds them or how people will use them; some will misuse them, exposing the dual-use nature of these models, capable of both benefiting and harming society.
This pattern isn’t isolated to Japan. In Denmark, a 22-year-old used AI to research how to injure his father without killing him. He bypassed the model’s safeguards by posing as an author researching a novel. The AI provided a detailed plan to execute the intended harm.
Consider the magnitude of the teenager’s crime: millions affected by a single action, with business operations disrupted throughout. How would you protect millions of people, especially where the data exposure enables further harm, such as breaking into banking systems? And how, and by whom, would “millions” of victims be compensated?
In this article, I discuss how adopting the AI resilience framework — which focuses on societal adaptation through avoidance, defence and remedy interventions — can strengthen the collective responsibility of tech companies, governments and society to mitigate AI risks.
AI resilience complements traditional model-level safety approaches. While conventional AI safety focuses on training data quality, safeguards, and capability restrictions, resilience addresses what happens when model-level safety fails or is bypassed, as the examples above demonstrate. It is about how to stay safe when AI is easily accessible (as it already is), becoming cheaper to use and deploy for high-end tasks, and carries dual-use potential. In such a scenario, we need a societal response: adaptation to advanced AI systems.
Societal adaptation, or AI resilience, begins once AI models are deployed and diffused. It is at this stage of wide-scale use that new risks are discovered, risks that might slip past even the best safeguards built into the systems; adaptive interventions are meant to mitigate the harm that arises in specific use cases of AI. For example, students use conversational AI for coding or as a tutor, but in the case above, a teenager used it to commit a cybercrime.
AI Resilience has a three-part framework:
1. Avoidance: Interventions that stop harmful use before it happens. This includes laws against AI-assisted crimes, age restrictions on accessing harmful content, and monitoring systems that detect suspicious activity. Avoidance aims to prevent attacks from occurring in the first place.
2. Defence: Interventions that prevent harm even when misuse occurs, by building systems that withstand it. These include cybersecurity infrastructure that deters cyberattacks, spam filters that catch phishing, and public campaigns that raise awareness of AI harms.
3. Remedy: Interventions that reduce the downstream impact after harm occurs. These include legal actions such as arrest and prosecution, victim compensation (e.g., for people losing money through banking scams), and other rapid responses to contain damage. Remedy faces severe challenges at scale: while legal systems can prosecute one attacker, as in the Denmark case above, they struggle to help millions of victims, as in the Japanese case, where the customers’ data could have been used to hack banking systems.
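The three layers can be encoded as a toy model (my own illustration, not part of the cited framework) that walks an incident through each layer in turn:

```python
from enum import Enum

class Layer(Enum):
    AVOIDANCE = "stop misuse before it happens"
    DEFENCE = "withstand misuse when it occurs"
    REMEDY = "reduce downstream impact afterwards"

def assess(incident):
    """Report whether each resilience layer held, failed, or partially worked."""
    return {
        Layer.AVOIDANCE: "failed" if incident["misuse_attempted"] else "held",
        Layer.DEFENCE: "failed" if incident["harm_occurred"] else "held",
        Layer.REMEDY: ("partial" if incident["victims"] > incident["victims_remediated"]
                       else "worked"),
    }

# A rough encoding of the Japanese case analysed below.
japan_case = {"misuse_attempted": True, "harm_occurred": True,
              "victims": 7_300_000, "victims_remediated": 0}
for layer, status in assess(japan_case).items():
    print(f"{layer.name}: {status} ({layer.value})")
```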
Let’s deconstruct the Japanese case using the above framework:
Avoidance failure: Despite the Japanese teenager’s earlier credit card fraud history, cyber laws did not deter him. No monitoring system tracked his activities, no surveillance detected his planning phase, and no age restriction prevented his access to AI’s code-generation capabilities.
Defence failure: The company’s cybersecurity was weak, and the intrusion succeeded without detection. The generated code breached the servers and exposed the data of 7.3 million customers.
Remedy partially worked: Legal accountability led to the arrest of the attacker after months of investigation. However, the attack compromised the private data of 7.3 million people and caused operational havoc that cannot be undone. One case of misuse of an AI model targeted millions at scale, which is why remediation systems struggle with AI-enabled mass harm.
Let’s look at another example to explain the adaptive cycle. The Economist recently published an article, “How AI is Rewiring Childhood”, signalling exciting opportunities but also cautioning about ominous risks.
The article describes the enormous power of AI to transform education by creating a level playing field (if supported by the right policies). A child educated in Hindi-medium schooling in remote Bihar might develop cognitive and comprehension skills like those of a counterpart educated in English in New Delhi, overcoming language barriers — that is a real possibility. A host of other AI tools could be used for storytelling and learning, but the actual risk lies in the larger societal impact of such tools.
A child begins to trust and rely on an AI system that always speaks in a friendly tone. The AI grasps the child’s psychology and answers accordingly. The child becomes intolerant of others who disagree with or criticise him. Worse, we could soon have a generation of kids who grow up with poor social and networking skills.
The adaptive cycle puts the onus not just on the company but also on governments, schools, parents and researchers to create an enabling environment in which a child grows up fully human despite wide-scale AI diffusion. Here’s how:
Identify and Assess Risks (Stage 1): Researchers assess impacts and identify risks. Companies use real-time data in R&D for product enhancement. Schools evaluate the social implications of wide-scale AI integration in education.
Develop Responses (Stage 2): Governments create industry benchmarks and age-appropriate guidelines. Tech companies add parental-control features. Schools teach social skills and encourage outdoor games.
Implement Responses (Stage 3): Companies provide explicit literature to parents and conduct surveys with them. Parents monitor usage and set boundaries. Schools adopt social-skills programmes. Government regulations are enforced across the industry.
Loop back to Stage 1 (creating a continuous cycle): New AI capabilities emerge, interventions don’t work as expected, and children develop new use patterns that weren’t previously anticipated, thus making resilience a continuous process.
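Schematically, the cycle can be sketched as a loop in which the stage functions are placeholders standing in for real research, policy, and deployment work:

```python
def identify_risks(observations):
    """Stage 1: researchers and schools flag harms seen in real use."""
    return [o for o in observations if o["harmful"]]

def develop_responses(risks):
    """Stage 2: governments, companies, and schools design interventions."""
    return [{"risk": r["note"], "intervention": "guideline / feature / curriculum"}
            for r in risks]

def implement_responses(responses):
    """Stage 3: roll out the controls; fresh observations flow back in."""
    return [{"harmful": False, "note": r["risk"] + " (mitigated)"} for r in responses]

observations = [{"harmful": True, "note": "over-reliance on AI tutor"}]
for cycle in range(3):  # the real loop never ends; three turns for illustration
    risks = identify_risks(observations)
    responses = develop_responses(risks)
    observations = implement_responses(responses)
    observations.append({"harmful": True, "note": f"new use pattern {cycle + 1}"})
    print(f"cycle {cycle + 1}: addressed {len(risks)} risk(s)")
```

Each pass resolves the risks it knows about, yet new ones keep entering the queue, which is precisely why resilience is a continuous process rather than a one-off fix.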
As LLM models become widespread and easily accessible, the scope for misuse grows exponentially, given the dual-use nature of the technology. Mitigating AI risk is not the concern of AI companies alone but a collective effort in which tech companies, governments, security agencies and civil society organisations come together to build mechanisms, so that societal adaptation to and use of AI is secured for the wider benefit of all. The two examples from Japan and Denmark show that unsuspected but determined actors can bypass safeguards through simple social engineering, causing unprecedented harm and havoc.
AI Resilience is our capacity to adapt when AI safeguards fail. The response requires multi-layered interventions across Avoidance, Defence and Remedy so the scale asymmetry of AI misuse can be addressed.
At the societal level, this requires avoidance through robust cyber laws and professional enforcement agencies that understand AI governance; defence through AI products that inform consumers about their potential misuse, and awareness among parents and teachers about the dangers of over-reliance; and remedy through effective enforcement and comprehensive victim support, including psycho-social support.
Governments around the world are building AI Resilience through a combination of laws and oversight. For example, India has adopted a polycentric adaptive model relying on voluntary compliance rather than centralised regulation. It operationalises the adaptive cycle using real-world evidence to inform periodic revisions.
Resilience through continuous adaptation offers a path forward when model-level safety inevitably fails. Tech companies, governments, and society must accelerate the adaptive cycle — building infrastructure to live safely with AI even when its guardrails are breached.
References
Bernardi, J. (2024, August 3). Resilience and adaptation to advanced AI. Achieving AI Resilience. https://achievingairesilience.substack.com/p/resilience-and-adaptation-to-advanced
Bernardi, J., Mukobi, G., Greaves, H., Heim, L., & Anderljung, M. (2024). Societal adaptation to advanced AI. arXiv preprint arXiv:2405.10295. https://arxiv.org/abs/2405.10295
Denmark assault case [Incident 851]. (2025). AI Incident Database. Partnership on AI. https://incidentdatabase.ai/cite/851
How AI is rewiring childhood. (2024, December 7). The Economist. https://www.economist.com/leaders/2024/12/05/how-ai-is-rewiring-childhood
Ministry of Electronics and Information Technology. (2025, November). India AI governance guidelines. Government of India. https://www.meity.gov.in/
Osaka cyberattack [Incident 1047]. (2025, January 18). AI Incident Database. Partnership on AI. https://incidentdatabase.ai/cite/1047
Sahariah, S. (2025, December 4). Deconstructing India’s AI governance framework. Medium. https://medium.com/@suti011/deconstructing-indias-ai-governance-framework-abd81f6b4cdf
This article uses the AI lifecycle approach (used in AI governance research) to examine risks and policy across three stages: design, training and testing; deployment and usage; and longer-term diffusion. At each stage, effective governance requires three policy goals: creating visibility into AI systems, promoting best practices for safe development, and establishing enforcement mechanisms.
In November 2025, India unveiled its AI Governance Guidelines, with infrastructure as the first strategic pillar. The guidelines offer plans for 38,231 GPUs, databases across 20 sectors, and generous schemes for startups. For a developing country like India, with its diverse population, the focus of AI is to promote inclusive development in health, education, and agriculture. India’s AI governance framework is unique in that it aims to build and govern AI simultaneously.
India’s framework is guided by seven core principles (called ‘sutras’): Trust is the Foundation; People First; Innovation over Restraint; Fairness & Equity; Accountability; Understandable by Design; and Safety, Resilience and Sustainability. The ‘Innovation over Restraint’ principle is particularly significant, as it explicitly prioritises responsible innovation over cautionary restraint, shaping the framework’s reliance on voluntary measures rather than restrictive regulation. The seven principles are operationalised through six strategic pillars: Infrastructure, Capacity Building, Policy & Regulation, Risk Mitigation, Accountability, and Institutions.
Design, Training and Testing: Building While Regulating
This article analyses how India’s framework both innovates and reveals critical gaps. India’s infrastructure-first approach is to build domestic compute capacity and databases and reduce total dependency on foreign AI systems that it cannot independently evaluate or regulate. To promote innovation, India has offered a host of incentives for startups and AI entrepreneurs, such as tax rebates and subsidised loans.
In a way, India is creating the material conditions for governance by making available computing resources and standardised evaluation datasets, which is essential for scaling adoption in critical sectors such as health, education and agriculture. Such measures also create an enabling environment for fairness testing of AI systems in the Indian context, a good template for countries in the Global South. However, building computational capacity alone, without robust accountability mechanisms, risks enabling harm at scale. India’s challenge over the short to long term will be to understand whether its reliance on voluntary compliance can foster innovation without compromising safety.
India proposes to use its innovative Data Empowerment and Protection Architecture (DEPA), a ‘techno-legal’ system for permission-based data sharing that integrates data protection principles into digital public infrastructure, ensuring compliance by design. Using DEPA for AI training would support privacy-preserving mechanisms and make the use of personal data more transparent and auditable.
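A minimal sketch of the “compliance by design” idea, with illustrative field names (DEPA’s actual consent-artefact format and flows differ): training data is released only when a valid, purpose-matching consent artefact exists, so every access is checkable after the fact.

```python
from datetime import date

def consent_valid(artefact, purpose, today):
    """Check purpose, revocation, and expiry before releasing any data."""
    return (artefact["purpose"] == purpose
            and not artefact["revoked"]
            and artefact["expires"] >= today)

def fetch_training_records(store, consents, purpose="ai_training"):
    """Return only records whose owners consented to this purpose."""
    today = date.today()
    return [rec for rec in store
            if consent_valid(consents[rec["owner"]], purpose, today)]

store = [{"owner": "user-1", "text": "..."},
         {"owner": "user-2", "text": "..."}]
consents = {
    "user-1": {"purpose": "ai_training",    "revoked": False, "expires": date(2030, 1, 1)},
    "user-2": {"purpose": "credit_scoring", "revoked": False, "expires": date(2030, 1, 1)},
}
print(len(fetch_training_records(store, consents)))  # 1 -- only user-1's data is usable
```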
However, the document highlights a trade-off: privacy protection could cause performance loss on certain benchmarks, impacting utility, though this concern might be mitigated by the overriding guiding principle of prioritising ‘innovation over restraint’. The document recommends complementary measures, such as algorithmic auditing and sector-specific regulations, alongside DEPA for effective AI governance.
On creating visibility, India proposes a combination of formal legal enforcement and voluntary compliance. It mandates AI organisations to publish evaluations of the risks and harms of AI systems to society and individuals in the Indian context. Further, it tasks the AI Safety Institute with the testing and evaluation of AI systems, though submission is voluntary. In many respects, transparency reporting and peer monitoring lack legal enforceability, in contrast with the UK framework, where model reporting to regulators and third-party auditing are mandatory.
Deployment and Usage: The Deepfakes Priority
At the Deployment Stage, India’s priorities become explicit. The guidelines identify six risk categories: malicious uses, bias/discrimination, transparency failures, systemic risks, loss of control, and national security. But resource allocation tells the real story.
Content authentication receives detailed attention: a committee to develop watermarking standards, integration with Coalition for Content Provenance and Authenticity (C2PA) standards, and MeitY’s proposed mandatory labelling rules for AI-generated content. Deepfakes are the ‘growing menace’ requiring ‘immediate action’. The concerns are valid given the complexity and diversity of India’s political economy and the risk of misinformation inflaming the public imagination.
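To illustrate the core mechanism behind such labelling (greatly simplified: real C2PA manifests are signed, structured metadata attached by the creating tool, not a bare hash), a provenance manifest binds a content hash to its declared origin, and verification flags anything whose hash no longer matches:

```python
import hashlib

def make_manifest(content, generator):
    """Bind a content hash to its declared origin at creation time."""
    return {"sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,
            "ai_generated": generator.startswith("ai-")}

def label_for(content, manifest):
    """Verify the binding, then surface a provenance label to the viewer."""
    if hashlib.sha256(content).hexdigest() != manifest["sha256"]:
        return "UNVERIFIED: content altered after the manifest was issued"
    return "Label: AI-generated" if manifest["ai_generated"] else "Label: human/camera origin"

original = b"synthetic video bytes"
manifest = make_manifest(original, "ai-model-x")   # "ai-model-x" is a made-up generator name
print(label_for(original, manifest))               # Label: AI-generated
print(label_for(original + b"!", manifest))        # UNVERIFIED: ...
```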
There are, however, no pre-deployment capability evaluations. The guidelines are premised on the assumption that voluntary compliance, market incentives and existing laws will suffice for the moment, though many of these guidelines are expected to evolve following systematic review of consumer behaviour and any trade-offs or social harms that emerge.
Long-Term Diffusion: Who’s Vulnerable?
The guidelines mention ‘vulnerable groups’ thirteen times, with specific mention of women and children. For children, they raise concerns about AI affecting mental health, exposure to harmful content, and beyond. The risks for women and girls range from being targeted by harmful AI-generated content, with explicit allusion to ‘revenge porn’. These concerns will have to be monitored over time, and effective guardrails need to be put in place. Since India is developing LLMs in multiple languages, perhaps an in-built SOS system could be developed and linked to human-controlled risk and safety centres, such as an AI helpline. In the Indian context, vulnerability may also take unintended forms: at one level, AI might help overcome bias or discrimination based on race, caste or religion, but much will depend on how well the algorithms are trained on the data and how bias is managed.
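In its simplest form, the SOS idea mentioned above might look like the sketch below (the keyword list and threshold are invented placeholders; a production system would need multilingual, context-aware classifiers and careful handling of false positives):

```python
RISK_TERMS = {"hurt myself", "blackmail", "revenge porn", "threatening me"}

def risk_score(message):
    """Crude lexical score; a real system would use a trained classifier."""
    text = message.lower()
    hits = sum(term in text for term in RISK_TERMS)
    return min(1.0, hits / 2)

def route(message, threshold=0.5):
    """Escalate high-risk messages to a human-staffed safety centre."""
    if risk_score(message) >= threshold:
        return "ESCALATE: notify human safety centre / AI helpline"
    return "CONTINUE: normal model response"

print(route("someone is threatening me with revenge porn"))  # ESCALATE
print(route("help me with my homework"))                     # CONTINUE
```

The essential design choice is that escalation routes to humans, not to another model: the LLM acts only as a tripwire for the safety centre.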
Studies show that, unlike in the West, data in India is not always reliable: communities are sometimes missing or misrepresented in databases. Large swathes of the rural population, especially women, indigenous tribes, and elders, do not use the internet at all, so the digital divide is enormous. The Fairness & Equity sutra commits to fairness ‘particularly for marginalized communities’, but without naming which communities, or creating participation mechanisms for them, this remains aspirational.
India’s Path Forward
India’s institutional framework represents genuine innovation. Rather than creating a single AI regulator (expensive and slow to establish), the guidelines propose coordination across existing institutions: an AI Governance Group (AIGG); a Technology & Policy Expert Committee (TPEC); an AI Safety Institute (AISI) for research, standards development and safety testing; and sectoral regulators. This ‘whole-of-government’ approach leverages existing regulatory capacity rather than building from scratch.
As India prepares to host the AI Impact Summit in February 2026, the guidelines will shape the global conversation from a non-Western context. What India needs is coordination and enforcement mechanisms, so that its AI regulation is robust and industry-friendly while protecting its citizens from unintended harm.
Concrete next steps could include specifying which AI applications require mandatory (not voluntary) safety evaluations, defining the ‘sensitive sectors’ requiring human oversight, creating community participation mechanisms within governance structures, and establishing a timeline for converting voluntary measures into mandatory requirements.
References
Ministry of Electronics and Information Technology (MeitY). (2025, November). India AI governance guidelines. Government of India. https://www.meity.gov.in/
Data Empowerment and Protection Architecture (DEPA). (2020). DEPA: A new paradigm for data empowerment and protection. NITI Aayog. https://www.niti.gov.in/
Bernardi, J., Mukobi, G., Greaves, H., Heim, L., & Anderljung, M. (2024). Societal adaptation to advanced AI. arXiv preprint arXiv:2405.10295. https://arxiv.org/abs/2405.10295
Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., … & Kaplan, J. (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. https://arxiv.org/abs/2204.05862
Exploring the intersection of artificial intelligence and policy development in the modern digital landscape.
Comprehensive research on human trafficking, forced labor, and modern slavery across South Asia and beyond.
Developing communication strategies and content for international development organizations.
Helping organizations build and maintain their reputation through strategic messaging and crisis communication.
Expert in designing and conducting qualitative studies, focus groups, and stakeholder interviews.
Sex Work, Labour and Empowerment: Lessons from the Informal Entertainment Sector in Nepal (2022)
Published by Routledge – A groundbreaking study on women’s empowerment in Nepal’s informal entertainment sector.
This book presents an analysis of the concepts of female empowerment and resilience against violence in the informal entertainment and sex industries.
Generally, the key debates on sex work have centred on arguments proposed by the oppressive and empowerment paradigms. This book moves away from such debates to look widely at the micro issues such as the role of income in the lives of sex workers, the significance of peer organisations and networks of women, and how resilience is enacted and empowerment experienced. It also uses positive deviancy theory as a useful strategy to bring about notable changes in terms of empowerment and agency for women working in this sector and also for addressing the wider issues of migration, HIV/AIDS, and violence against women and girls. The focus is on moving beyond a victimisation framework without downplaying the extent of the violence that women in this industry experience. It conceptualises the theories of empowerment and power which have not been tested against women who work in this sector, combined with in-depth interviews with women working in the industry as well as academics, activists, and personnel in the NGO and donor sector. In doing so, it informs the reader of the numerous social, political, and economic factors that structure and sustain the global growth of the industry and analyses the diverse factors that lead many thousands of women and girls around the world to work in this sector.
The work presents an important contribution to the study of citizenship and rights from a non-Western angle and will be of interest to academics, researchers, and policymakers across human rights, sociology, economics, and development studies.
Knowledge for Change? Lessons from co-developing a research agenda on survivor engagement. November 2023.
Comprehensive review of promising practices across South Asia
A case study from the frontline source area in India
Introduction and context
‘Survivor engagement’, understood as the involvement of people with lived experience in policy and programming, has seemingly moved to the centre of efforts to address modern slavery and human trafficking, but how can it really shift the way these issues are tackled? As practice in this area is underdeveloped, the production of knowledge is likely to be crucial, changing approaches and responses through the development of new concepts, interpretations, tools and instruments that can be embedded in policy and practice. This report presents a summary of new findings and reflections from an ongoing collaborative initiative to develop a research agenda through the lens of survivor engagement. It builds on a project, conducted in 2022, that explored promising practices of lived-experience engagement in modern slavery policy and programming. Researchers at the University of Liverpool, with funding from the Foreign, Commonwealth and Development Office (FCDO), built an international network of researchers and consultants to explore effective methods and practices for involving persons with lived experience in modern slavery policy and programming. Recognising the collaborative research’s significance, the network secured additional funding from the Modern Slavery and Human Rights Policy and Evidence Centre (Modern Slavery PEC) to expand the study between March and July 2023. This expansion enabled a deeper exploration of engagement with first-hand experience and expertise in policy and programme systems.
Fair purchasing practices in garment supply chains: connecting theory and practice
Matthew Anderson, Tamsin Bradley, Sutirtha Sahariah
Abstract
In this chapter, we investigate the experience of Fair Trade organisations and how they have translated Fair Trade principles into practice in their value chains. In particular, we focus on the implementation of responsible purchasing practices related to: Equal Partnership, Collaborative Production Planning and Fair Payment Terms. We argue that, if supported, Fair Trade organisations have the potential to be industry front-runners and demonstrate fair purchasing practices that can be replicated and scaled across the garment sector.
Research Consultation • Strategic Communications • Policy Analysis • Writing & Content Creation • Training & Workshops
Phone: +91 9818965091 Email: suti011@gmail.com