Surge in Funding for Generative AI: Raising Concerns and Opportunities
Since the launch of ChatGPT in 2022, the appetite for investment in generative artificial intelligence (Generative AI) has been described as “insatiable.” In the first six months of 2023, funding for Generative AI-based tools and solutions leapt to more than five times its 2022 level, with venture capital (VC) firms investing heavily in this new sector. The global Generative AI market is expected to reach US$200.73 billion by 2032.
Addressing the Risks: Embracing a Rights-Respecting Approach to Generative AI
Generative AI refers to algorithmic systems that can create (or “generate”) new content—including audio, images, text, and even computer code. Like any new technology, these systems pose potential risks, including, but not limited to, amplifying existing societal biases and inequities, undermining the right to privacy, and accelerating the spread of mis- and disinformation.
It is therefore crucial for companies and investors to embrace a rights-respecting approach to the design, development, and deployment of Generative AI technology.
Investor Responsibility: A Key Role in Mitigating Risks and Preventing Harms
The responsibility to address and mitigate these risks, as well as prevent actual harms, lies not only with states and the companies developing Generative AI products but also with investors, including the VC firms that are funding many of the largest Generative AI start-ups.
According to the United Nations Guiding Principles on Business and Human Rights (UN Guiding Principles), companies and investors have a responsibility to respect all human rights wherever they operate in the world and throughout their operations.
Failing to Meet Standards: VC Firms Neglecting Human Rights Due Diligence in Generative AI Investments
Yet, as this research undertaken by Amnesty International and the Business & Human Rights Resource Centre demonstrates, leading VC firms are largely failing to meet their responsibility to address risks and actual harms, including by neglecting to conduct human rights due diligence.
Amnesty International USA and the Business & Human Rights Resource Centre assessed the practices of the 10 venture capital funds that had invested the most in Generative AI companies and the two start-up accelerators with the most active Generative AI investments. They first reviewed the publicly available information on each VC firm’s and accelerator’s human rights policies, and then sent detailed letters to the General Counsel or other senior partners of each fund to test the findings.
Key Findings: Deficiencies in Human Rights Practices of Leading VC Firms and Start-Up Accelerators
This analysis showed that leading VC firms and start-up accelerators are falling critically short of their responsibility to conduct human rights due diligence when investing in Generative AI start-ups. Our key findings include:
- The majority of the assessed VC firms and accelerators do not have publicly available human rights policies specifically relating to their investments in Generative AI companies.
- There is limited evidence of any human rights due diligence process being conducted by VC firms and accelerators before making investments in Generative AI start-ups.
- Only a few VC firms and accelerators have taken steps to mitigate risks associated with Generative AI, such as actively seeking diverse and inclusive teams and addressing bias in the technology.
- Most VC firms and accelerators have not publicly committed to transparency and accountability regarding their investments in Generative AI, nor have they established grievance mechanisms to address any adverse impacts.
- Despite the potential for misuse of Generative AI, VC firms and accelerators show no significant engagement in shaping regulatory frameworks or advocating for ethical standards in the sector.
In conclusion, it is evident that VC firms and start-up accelerators must strengthen their human rights due diligence practices and uphold their responsibility to address risks and prevent human rights harms in the context of Generative AI investments. This will require the development and enforcement of comprehensive human rights policies, engagement with diverse and inclusive teams, and active collaboration with regulators and other stakeholders to establish ethical standards and safeguard against abuses. Only through these efforts can the potential of Generative AI be harnessed while protecting human rights.
Analyst comment
Positive news: Surge in Funding for Generative AI: Raising Concerns and Opportunities
Short analysis: The surge in funding for Generative AI presents opportunities for growth and innovation in the market. However, it also raises concerns regarding potential risks such as bias, privacy infringement, and the spread of misinformation. Investors must embrace a rights-respecting approach and conduct human rights due diligence to mitigate these risks and prevent harm. Strengthening policies, engaging diverse teams, collaborating with regulators, and establishing ethical standards are crucial for harnessing the potential of Generative AI while protecting human rights.