Daily CSR
Daily news about corporate social responsibility, ethics and sustainability

Navigating AI Regulation: Key Risks and Strategies for Investors

06/20/2024
Artificial intelligence (AI) introduces numerous ethical challenges that can translate into risks for consumers, businesses, and investors. The uneven development of AI regulation across jurisdictions layers regulatory uncertainty on top of those risks. Investors should prioritize transparency and explainability when assessing companies' exposure to AI.
 
The ethical dilemmas and associated risks of AI originate with the developers who create the technology. These risks then extend to the companies that implement AI and eventually impact consumers and society at large. Investors, through their stakes in AI developers and companies utilizing AI, are exposed to these risks at both the development and implementation stages.
 
AI is advancing rapidly, outpacing the general public’s understanding, and regulators and lawmakers worldwide are trying to catch up. Regulatory activity has surged in recent years: many countries have released AI strategies, and others are close to doing so. Yet progress is inconsistent and incomplete, and there is no standardized approach to AI regulation. Some countries had rules in place before ChatGPT launched in late 2022, but as AI continues to proliferate, many regulators will need to update and likely expand their frameworks.
 
For investors, the regulatory uncertainty adds another layer of risk to those inherent in AI. Understanding the AI business landscape, ethical concerns, and regulatory environment is crucial for managing these risks.
 
AI encompasses a range of technologies designed to perform tasks typically done by humans, often in a human-like manner. Prominent examples include generative AI, which creates content such as video, voice, text, and music, and large language models (LLMs), which focus on natural language processing. Companies increasingly use LLMs for applications such as chatbots, automated content creation, and data analysis in customer engagement.
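As an illustration of the chatbot use case, the sketch below shows one common integration pattern, assuming the openai Python package and an API key in the environment. The model name, system prompt, and helper function are hypothetical choices for illustration, not the approach of any company discussed here.

```python
# A minimal sketch of wiring an LLM into a customer-service chatbot,
# assuming the openai package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_customer(question: str) -> str:
    """Send a customer question to the model under a constrained system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice, not a recommendation
        messages=[
            {"role": "system",
             "content": "You are a support assistant. Answer only from approved "
                        "product documentation; escalate anything else."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_customer("How do I reset my password?"))
```

The constrained system prompt is the design point: it narrows what the model may say, which is one simple way companies try to limit the brand risks discussed below.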
 
However, as many companies have discovered, AI innovations can pose risks to their brands. These risks stem from biases in the data used to train LLMs, leading to unintended consequences such as banks discriminating against minorities in home-loan approvals or a health insurance provider facing a lawsuit for allegedly wrongful denial of extended-care claims due to an AI algorithm.
 
Regulators target risks like bias and discrimination, but investors should also consider other issues such as intellectual property rights and data privacy. Measures to mitigate these risks include rigorous testing of AI models for performance, accuracy, and robustness, as well as transparency about how models are built and ongoing support for the companies that implement them.
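To make "rigorous testing" concrete, the sketch below checks one common fairness signal: the gap in approval rates across demographic groups in a loan model's decisions (demographic parity). The decision log, group labels, and tolerance are all invented for illustration; this is a minimal example, not a complete bias audit.

```python
# Minimal fairness check: compare a loan model's approval rates across groups.
# Data, group labels, and the 0.10 tolerance are invented for illustration.

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of applicants in `group` whose applications were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# (group, approved) pairs, as an audit log of model decisions might record them.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap = abs(approval_rate(decisions, "group_a") - approval_rate(decisions, "group_b"))
print(f"Demographic parity gap: {gap:.2f}")

if gap > 0.10:  # tolerance would be set with legal and compliance teams
    print("FLAG: approval-rate gap exceeds tolerance; investigate for bias")
```

A check like this is one input among many; a real audit would also examine calibration, error rates by group, and robustness to data drift.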
 
Understanding AI Regulations: A Deeper Dive
The landscape of AI regulation is evolving differently across jurisdictions. Notable recent developments include the European Union's Artificial Intelligence Act, expected to be enacted by mid-2024, and the UK government's response to a consultation process following the release of its AI regulation white paper.
 
These initiatives highlight contrasting regulatory approaches. The UK favors a principles-based framework, allowing existing regulators to address AI issues within their domains. The EU, by contrast, introduces a comprehensive legal framework with risk-graded compliance obligations for AI developers, the companies that deploy AI, and importers and distributors.
 
Investors should not only examine the specifics of each jurisdiction's AI regulations but also understand how existing laws—such as copyright and employment laws—are being used to address AI-related issues.
 
Importance of Fundamental Analysis and Engagement
For investors assessing AI risk, a good indicator is whether a company makes full disclosures about its AI strategy and policies; thorough disclosure suggests it is preparing for new regulation. Fundamental analysis and issuer engagement remain crucial.
 
Fundamental analysis should explore AI risk factors at the company level, along the business chain, and within the regulatory environment, aligning insights with core responsible-AI principles.
 
Engagement discussions should address AI's impact on business operations and consider environmental, social, and governance perspectives. Investors should ask boards and management:
 
  • AI Integration: How is AI integrated into the company’s business strategy? Provide specific examples of AI applications.
  • Board Oversight and Expertise: How does the board ensure it has sufficient expertise to oversee AI strategy and implementation? Are there specific training programs or initiatives?
  • Public Commitment to Responsible AI: Has the company published a policy on responsible AI? How does it align with industry standards and ethical considerations?
  • Proactive Transparency: What proactive transparency measures are in place to anticipate regulatory implications?
  • Risk Management and Accountability: What processes identify and mitigate AI-related risks? Who is responsible for overseeing these risks?
  • Data Challenges in LLMs: How does the company address privacy and copyright issues in the data used to train large language models? What measures ensure compliance with privacy regulations and copyright laws? (A minimal data-screening sketch follows this list.)
  • Bias and Fairness in Generative AI: What steps prevent or mitigate biased outcomes from AI systems? How does the company ensure AI outputs are fair and unbiased?
  • Incident Tracking and Reporting: How are AI-related incidents tracked and reported? What mechanisms address and learn from these incidents?
  • Metrics and Reporting: What metrics measure AI performance and impact? How are these reported to stakeholders? How is regulatory compliance monitored?
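On the data-challenges question above, one concrete measure a company might describe is screening training text for personal identifiers before it reaches a model. The sketch below is a minimal illustration: the two regular expressions catch only obvious email addresses and North American-style phone numbers, and a real pipeline would use far broader detection.

```python
# Minimal sketch of pre-training data screening: redacting obvious personal
# identifiers before text enters an LLM training set. Illustrative only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

sample = "Contact Jane at jane.doe@example.com or (555) 123-4567 for details."
print(redact_pii(sample))
# -> Contact Jane at [EMAIL] or [PHONE] for details.
```

An investor does not need the company to show code, but a credible answer to the data-challenges question should describe screening steps of roughly this kind, along with how copyrighted material is identified and licensed or excluded.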
 
To navigate the complexities of AI, investors should remain grounded and skeptical, demanding clear and straightforward answers rather than being swayed by elaborate explanations.