Women in AI: Urvashi Aneja is researching the social impact of AI in India

To give AI-focused women academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching an interview series focused on remarkable women who have contributed to the AI revolution. We’ll be publishing several articles throughout the year as the AI boom continues, highlighting key work that often goes overlooked. Read more profiles here.

Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research effort that seeks to examine the interaction between technology and society in the Global South. She is also a research associate in the Asia-Pacific program at Chatham House, an independent policy institute based in London.

Aneja’s current research focuses on the societal impact of algorithmic decision-making systems in India, where she is based, and platform governance. Aneja recently authored a study on current uses of AI in India, examining use cases across different sectors, including policing and agriculture.

Questions and answers

In short, how did you get started in AI? What attracted you to the field?

I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource settings. I quickly learned that there is a fine line between innovation and experimentation, especially when it comes to vulnerable populations. That experience left me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. At the same time, India had launched its Digital India Mission and National Strategy for Artificial Intelligence. I was troubled by the dominant narratives that framed AI as a silver bullet for India’s complex socio-economic problems, as well as by the complete absence of critical discourse on the issue.

What work are you most proud of (in the field of AI)?

I am proud that we were able to draw attention to the political economy of AI production as well as its broader implications for social justice, labor relations, and environmental sustainability. Very often, discourse on AI focuses on the gains of specific applications and, at best, the benefits and risks of that application. But this misses the forest for the trees: a product-focused lens obscures broader structural impacts, such as AI’s contribution to epistemic injustice, the deskilling of labor, and the perpetuation of unaccountable power in the majority world. I am also proud that we have been able to translate these concerns into concrete policy and regulation, from designing public procurement guidelines for the use of AI in the public sector to providing evidence in legal proceedings against large technology companies in the majority world.

How can we meet the challenges of a male-dominated technology sector and, by extension, the male-dominated AI sector?

By letting my work speak for itself. And by constantly asking: why?

What advice would you give to women looking to enter the AI ​​field?

Expand your knowledge and expertise. Make sure you have a solid technical understanding of the issues, but don’t focus only on AI. Instead, study broadly so you can make connections across fields and disciplines. Too few people understand AI as a socio-technical system that is a product of history and culture.

What are the most pressing issues facing AI as it evolves?

I think the most pressing problem is the concentration of power within a handful of technology companies. Although not new, this problem is exacerbated by new developments in large language models and generative AI. Many of these companies are now stoking fears about the existential risks of AI. Not only does this distract from existing harms, but it also positions these companies as necessary for addressing AI-related harms. In many ways, we are losing some of the momentum generated by the “techlash” that arose in the wake of the Cambridge Analytica episode. In countries like India, I also fear that AI is being positioned as necessary for socio-economic development, presenting an opportunity to overcome persistent challenges. Not only does this exaggerate the potential of AI, but it also overlooks the fact that the institutional development needed to build protective safeguards cannot be skipped. Another issue we are not taking seriously enough is the environmental impact of AI: the current trajectory risks being unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovation.

What issues should AI users be aware of?

Users should be aware that AI is not magic, nor anything close to human intelligence. It is a form of computational statistics that has many beneficial uses, but it is ultimately just a probabilistic guess based on historical patterns. I’m sure there are several other issues users should also be aware of, but I want to caution against attempts to shift responsibility downstream onto users. I have seen this most recently with the use of generative AI tools in low-resource settings in the majority world: rather than being cautious about these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or frontline health workers, need to upskill.

What is the best way to develop AI responsibly?

This must start with assessing the need for AI in the first place. Is there a problem that AI can uniquely solve, or are there other possible means? And if we are going to build AI, is a complex black-box model necessary, or could a simpler, logic-based model do the job just as well? We also need to re-center domain knowledge in the building of AI. In the obsession with big data, we have sacrificed theory: we need to build a theory of change based on domain knowledge, and this should be the basis of the models we build, not just big data. This is of course in addition to key issues such as participation, inclusive teams, labor rights, etc.

How can investors better promote responsible AI?

Investors need to consider the entire AI production lifecycle, not just the outputs of AI applications. This would require examining a range of questions, such as whether labor is fairly valued, the environmental impacts, the company’s business model (i.e., is it built on commercial surveillance?) and internal accountability measures within the company. Investors should also demand better and more rigorous evidence on the purported benefits of AI.
