The recent emergence of generative artificial intelligence (AI) has caused a stir in the market, prompting investors to question what risks and opportunities the technology poses and whether their holdings have the management capabilities to identify and manage these.
The potential impacts of generative AI are wide-ranging, spanning from healthcare and environmental benefits to human rights risks. It is understandable, therefore, that governing bodies are wary of leaving the regulation and management of this technology to corporate entities.
In March, scientists and technology leaders, such as Elon Musk and Steve Wozniak, signed an open letter warning of a watershed moment and calling on developers to slow down production so that impacts could be researched.
These warnings have been swiftly followed by a number of initiatives aimed at building the trust and confidence required to accelerate widespread adoption of this evolving technology.
Notably, in the run up to the world’s first global AI Summit, convened by the UK government, the US government announced initiatives to advance the safe and responsible use of AI.
These initiatives set the scene for the UK AI Summit, which brought together leading AI nations, technology companies, researchers, and civil society groups. It saw 28 nations and the European Union sign the Bletchley Declaration on AI Safety, a list of pledges crafted to ensure AI is "designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible". This includes ensuring that the benefits of AI are inclusive and accessible to all economies.
Following the declaration, the UN confirmed support for an expert AI panel, with the means to establish a scientific consensus on AI model capabilities. Major tech companies also agreed to collaborate with governments on safety-testing their advanced AI models before and after release.
These are all first steps towards a global approach to managing the risks and opportunities of AI.
In the wake of these events, it feels timely to share our thoughts on this emerging technology.
As investors primarily in small and medium-sized enterprises (SMEs), we are particularly interested in how the technology could impact our holdings. We have been engaging all year to understand the opportunities and risks our companies identify and how they are preparing to manage these.
What is Generative AI?
Generative AI is a type of artificial intelligence that uses a process called deep learning to analyse large sets of data, from which it can then create new content. In response to user prompts, it can generate text, images, code, audio and other forms of media.
Depending on the type of data provided for deep learning, generative AI can be used for a wide range of applications. For this reason, it is viewed as both a risk and an opportunity for many sectors.
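The core idea, learning statistical patterns from data and then sampling new content, can be illustrated in miniature. The sketch below is a toy word-level Markov chain, not a deep-learning model; the corpus and function names are invented for illustration, but the learn-then-generate loop is the same in spirit.

```python
import random

def train(corpus):
    """Learn which word tends to follow each word in the training text."""
    words = corpus.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, seed, length=8):
    """Sample new text by repeatedly picking a learned follower."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned continuation for this word
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model learns patterns and the model generates new text"
model = train(corpus)
print(generate(model, "the"))
```

Real generative AI systems replace the simple follower table with neural networks trained on vast datasets, but the dependence on training data is just as direct, which is why data quality and representativeness matter so much in what follows.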
Building Trust and Confidence
The benefits of AI across many sectors are undeniable.
In healthcare, the technology has already transformed and accelerated the development of new drugs, such as antibiotics, which are vital to global health at a time of increasing antimicrobial resistance. Similarly, in clinical settings, the technology has been shown to aid healthcare professionals by improving the speed and accuracy of patient diagnosis.
However, to build effective diagnostic tools that serve all of society, we need to feed AI models data that fairly represents everyone. Historically, medical research has frequently neglected to account for this.
For example, an article in Science magazine suggests that people of African ancestry make up only 0.5% of genetic studies and 1.6% of the UK Biobank (one of the largest genetic databases in the world).
Biobanks are used to link disease to genetic patterns. By omitting the genetic information of whole populations, diseases related to the genetics of those populations will be missed, and bias will be built into new systems.
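The mechanism is easy to demonstrate. The sketch below is a deliberately simplified, synthetic illustration — all populations, marker values and the single-threshold "diagnostic rule" are invented — showing how a cut-off fitted almost entirely on one well-represented group can systematically misclassify an underrepresented one.

```python
# Synthetic illustration of dataset bias: every value here is invented.
# Each record is (disease_marker_level, label), label 1 = diseased.
# The marker runs at a different typical level in the two populations.
group_a = [(1.0, 0), (1.2, 0), (2.8, 1), (3.0, 1)] * 50   # well represented
group_b = [(2.2, 0), (2.4, 0), (4.0, 1), (4.2, 1)]        # barely represented

training_data = group_a + group_b  # group B is ~2% of the training set

# "Fit" the simplest possible diagnostic rule: a single threshold
# midway between the mean healthy and mean diseased marker levels.
healthy = [m for m, label in training_data if label == 0]
diseased = [m for m, label in training_data if label == 1]
threshold = (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def accuracy(data):
    """Fraction of records where (marker > threshold) matches the label."""
    return sum((m > threshold) == bool(label) for m, label in data) / len(data)

print(f"threshold fitted on skewed data: {threshold:.2f}")
print(f"accuracy on group A: {accuracy(group_a):.0%}")
print(f"accuracy on group B: {accuracy(group_b):.0%}")
```

Because group B contributes almost nothing to the fitted threshold, the rule sits in the wrong place for that population and its healthy members are flagged as diseased. The same failure mode, scaled up, is what an unrepresentative biobank builds into a real diagnostic model.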
Building inclusive datasets depends on society's willingness to consent to data use, which in turn demands a high level of trust and understanding if the benefits are to follow.
In recruitment, generative AI is seen as an opportunity. AI models could provide unbiased candidate screening, benefiting companies that face both increasing competition to attract talent and scrutiny of their diversity, equity and inclusion policies. AI has the potential to be highly beneficial in advancing social strategies.
However, again, depending on the data it learns from, AI is capable of bias similar to that of humans, leading to discrimination. For example, Amazon's AI hiring tool was dropped after showing gender-biased behaviour. Although well intentioned, the failure to implement generative AI successfully resulted in wasted time and money, and reputational damage.
The above issues highlight some of the many risks that the application of generative AI can pose to society. Without thoughtful development and deployment there is a risk of reinforcing existing inequalities through bias, job displacement, or unequal access. Trust may be lost and technology adoption inhibited.
In the creative context, AI has the potential to produce the first draft of code for videogame content, increasing the rate at which videogames can be brought to market. Similarly, publishing companies may adopt the technology to accelerate the analysis of trend data and create new content in a fraction of the time taken by human teams.
However, although generative AI saves time and money, it is important to be wary of its content, as it is capable of "hallucinations", i.e. the generation of false information. Fact-checking by a human workforce is still required, and any over-reliance on generative AI's abilities must be moderated and controlled. Depending on the subject the model is writing on, sharing false information could cause varying degrees of controversy.
For these reasons, creative businesses cannot afford to lose skilled employees. Yet the creative industry has already experienced worker backlash from the Writers Guild of America, whose five-month strike was partly attributed to the fear that streamers and studios would use generative AI to cut costs by replacing human writers with AI-produced scripts.
If smaller companies in the creative sector are to employ generative AI to their advantage, it seems important they ensure employees feel valued and supported in their roles and are given appropriate training to work collaboratively with AI applications and to oversee their development and use. Without this awareness and approach, they risk skill shortages.
Data Privacy and Protection
As highlighted previously, to build a generative AI model that serves a specific function, you need to feed it relevant data from which to learn. While companies need to ensure data is inclusive, they must also avoid breaching data privacy rules and must always have proper consent. As governing bodies gradually roll out new regulations, companies could face legal disputes as well as reputational damage if controls are not properly implemented. Where third parties are employed to create AI applications for smaller companies, the safe transfer and handling of data is imperative, as is a clear understanding of where responsibility lies.
Companies will also need to ensure that their cyber-security systems are AI-proofed. The exploitation of AI's capabilities to produce "deepfakes" is a cause for security concern. By copying an individual's voice and appearance, deepfakes threaten current security authentication methods, such as "my voice is my password", while also undermining society's ability to trust published video and voice content online, which previously there would have been no reason to doubt.
As investors, we need to ask our holdings whether they have, or are developing, appropriate data science strategies that adopt the highest regulatory standards, for the safe and equitable implementation of generative AI and the optimisation of its benefits.
Smaller companies must remain alert to adapt and benefit from generative AI technology, ensuring they have digital strategies in place to identify value-adding capabilities and integrate these into their business processes.
To guard against risk, smaller companies should sharpen their focus on relevant management controls and governance structures in areas such as data privacy and security, adopting the highest standards and ensuring their workforce has the right skills and training.
In decision-making, smaller companies must retain their focus on commitments to equity and inclusion in the deployment or development of the technology, and ensure they have the right capabilities to enforce human oversight and control of risks or unintended consequences. A good place to start, in our view, is to develop AI policies that assign responsibilities and protect commitments, building trust in the adoption of the technology.
Companies that are not familiar with AI will require new or upskilled human resources to build and run AI models, or the help of external consultants. Upskilling in AI is something the UK government recognises as a competitive advantage, having pledged £118 million to boost AI skills funding and ensure the UK retains the global expertise needed to seize the benefits of this technology.
Where smaller companies have the capacity to identify and invest in value-creating generative AI resources now, they should reap the efficiency rewards in future, freeing up time and creating business capacity.
If cash is not readily available, smaller companies could find themselves falling behind larger counterparts. Companies using older AI technology might already have some of the human resource and skills to integrate new applications with greater ease, but there will likely still be knowledge gaps to fill.
Perhaps the biggest risk of all is underestimating or failing to recognise the role that generative AI could play in future. As long-term investors, it is in our best interests to stay curious, educating ourselves on the development of generative AI and its associated risks and opportunities. By doing so we can ask our holdings the right questions regarding their digital strategies, in order to drive positive change, encourage best practice, and enhance company value.