Navigating the promise and the pitfalls
Matthew Welch, Responsible Investment Specialist at DPAM
Artificial Intelligence (AI) has quickly become a familiar companion in the world of investment analysis. It helps us find information faster, test assumptions and uncover risks that once required days of manual research. But while AI can strengthen Environmental, Social and Governance (ESG) analysis, it also raises new ESG questions, from its energy footprint to its social impact. What should we, as sustainable and responsible investors, expect from investee companies?
How we use AI in ESG
At DPAM, we use AI to help investment teams identify and assess ESG risks more effectively. In the past, we relied on lengthy company reports and occasionally outdated datasets from ESG providers. Today, AI tools allow us to focus less on gathering data and more on interpretation, shifting our time from searching for information to asking the right questions.
For instance, our tools can compare company disclosures with third-party ESG ratings, highlighting inconsistencies or missing information. This helps us direct our analysis where it matters most. ESG ratings often correlate poorly across providers and with company self-reporting, so AI helps us detect where the story doesn’t quite add up.
It’s important to note that we don’t use AI to make moral or qualitative judgments; that remains a human task. Instead, we guide AI with clear, factual criteria. For example, when assessing whistleblower programmes, a tool checks whether policies meet our baseline expectations: public availability, accessibility in all operating languages, 24/7 functionality and protection against retaliation.
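To make the idea of criteria-based guidance concrete, here is a minimal sketch of what such a rule-based check could look like. It is purely illustrative: DPAM’s actual tooling is not public, so the field names, criteria structure and example data below are assumptions based only on the baseline expectations listed above, and the output is meant to flag gaps for an analyst rather than deliver a verdict.

```python
# Illustrative sketch only: criteria names and data structure are assumed,
# not taken from any actual DPAM system.
from dataclasses import dataclass


@dataclass
class WhistleblowerPolicy:
    publicly_available: bool
    languages_covered: set[str]       # languages the policy is offered in
    operating_languages: set[str]     # languages the company operates in
    available_24_7: bool
    anti_retaliation_protection: bool


def meets_baseline(policy: WhistleblowerPolicy) -> dict[str, bool]:
    """Return a per-criterion pass/fail view so an analyst can see what falls short."""
    return {
        "public availability": policy.publicly_available,
        "all operating languages covered": policy.operating_languages <= policy.languages_covered,
        "24/7 functionality": policy.available_24_7,
        "protection against retaliation": policy.anti_retaliation_protection,
    }


# Hypothetical example: a policy published only in English for a company
# operating in English and French is flagged for incomplete language coverage.
policy = WhistleblowerPolicy(
    publicly_available=True,
    languages_covered={"en"},
    operating_languages={"en", "fr"},
    available_24_7=True,
    anti_retaliation_protection=True,
)
print(meets_baseline(policy))
```

The point of the sketch is the division of labour it implies: the factual checks are explicit and auditable, while interpreting what a shortfall means for the investment case remains with the analyst.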
The ESG challenges of companies active in AI: the environment
When looking at the ESG risks of AI, we apply a value chain approach. Semiconductor manufacturers, for example, face issues around resource and energy use, while software developers face issues around water and energy use. Mapping data centres against water-scarce regions shows that water use will be an ever-increasing problem for data centres.
Nevertheless, the biggest environmental risk of AI is its energy usage across the value chain.
AI doesn’t exist in some abstract cloud; it is powered by data centres that consume significant amounts of electricity and water. The latest figures, from MIT, suggest that data centres account for about 4.4% of total U.S. electricity use and that this could rise to 12% by 2028. The energy consumption of a single AI query depends on many factors, and companies are still not disclosing enough information to form a comprehensive picture of the energy needs of AI models.
What is clear is that training and running large AI models requires substantial computing power. Companies developing and deploying these systems are facing pressure to disclose and mitigate their environmental impact. Encouragingly, many major players are investing heavily in cleaner energy. Amazon has been the world’s largest corporate buyer of renewables for several years. Microsoft and Meta are exploring small-scale nuclear projects, while Apple and Google are expanding data centres powered increasingly by renewable energy sources.
As AI matures, more efficient and specialised models may also reduce energy intensity. Today’s systems often use ‘a bazooka to solve every problem’ but in the future, lighter models that use less energy may handle simpler tasks. This will not offset the energy consumption of data centres, but it might slow consumption growth.
The social risks of AI
AI systems reflect the data they are trained on and, by extension, the biases and blind spots of the people who design them. This means that human stereotypes can easily become embedded in AI, leading to unintended discrimination. There have been cases, for example, where automated recruitment tools filtered out qualified candidates and where facial recognition systems misidentified individuals, disproportionately affecting minority groups.
Privacy is another major concern. AI systems may inadvertently memorise or infer sensitive information from training data or user interactions. A well-known example occurred when company employees accidentally exposed confidential material to an AI model while using it to assist with coding. Even without direct data exposure, these systems can sometimes infer private or protected characteristics from patterns of behaviour, raising legitimate ethical and legal questions about data use and consent.
AI can also amplify the scale and speed of disinformation. The use of generative models for voice cloning, fake videos or synthetic media enables false narratives to spread more persuasively than ever before. During the Brexit referendum, for instance, networks of AI-driven bots helped distribute partisan or misleading content online. These developments highlight how AI can influence public opinion and erode trust in reliable sources of information.
The MIT AI Risk Repository identifies 24 key risk domains linked to AI and maps more than 700 individual risks, making it a useful taxonomy for understanding the risks linked to AI development and use.
What we expect from companies
What can sustainable and responsible investors do to mitigate the risks described? Responsible AI governance is an essential part of ESG assessment. We look at how companies are managing both the opportunities and the risks associated with AI, from energy consumption and supply chain impacts to privacy safeguards and ethical use.
We expect companies to adopt clear ethical principles for AI development and deployment, supported by governance structures, oversight mechanisms and impact assessments. These principles should translate into concrete practices: for example, embedding human rights considerations in product design or ensuring accountability when AI tools are used in sensitive decisions.
We are also part of the World Benchmarking Alliance’s Collective Impact Coalition (WBA CIC), which encourages companies to meet a shared set of ethical AI expectations across sectors.
One example that stands out is a conversation we had with the ethical AI team of one of the big five tech companies. Their perspective was that ethical considerations should enable AI and not be seen as a barrier to innovation. When engineers and ethics experts work side by side, the outcome is not just safer AI, it’s better AI.
Published by DPAM