As AI’s risks loom, organizations must proceed with caution

In 1951, British computer scientist Christopher Strachey designed a program to play checkers on an early computer at the University of Manchester. By early 2023, a single generative artificial intelligence (AI) tool was being used by more than 100 million people around the world. And through July 27, Nasdaq 100 companies had mentioned “AI” and “artificial intelligence” more than 900 times in quarterly earnings calls this year, almost three times the number of mentions from the same group in all of 2022.

Artificial intelligence continues to evolve, with many celebrating its promise to shape the ways in which we live and work. Organizations using or considering AI, however, should proceed with caution, mindful of its limitations and risks as well as its potential.

Defining the forms of artificial intelligence

Modern AI can take many forms. Among the most well-known are:

  • Machine learning: AI systems that “learn” from data using algorithms, improving at a task without being explicitly programmed for it. Machine learning applications can, for example, translate customer preferences into product recommendations (see the brief sketch below).

  • Large language models (LLMs): Artificial neural networks that use deep learning algorithms to process language and perform a variety of tasks, including carrying on chatbot conversations.

  • Generative AI: A class of AI techniques designed to generate new content, such as images, text, music or even entire simulations that resemble or imitate human-created content.

  • Deepfake: The use of AI to generate or manipulate digital media so that one person’s likeness is replaced with another’s.

Some of these forms of AI can overlap. ChatGPT, for example, is a form of generative AI backed by an LLM.
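
To make the idea of “learning” from data more concrete, the short sketch below (written in Python with the scikit-learn library) shows a toy recommender that suggests products to a shopper based on what similar customers have purchased. The product names and purchase data are hypothetical, and this is only one simple approach among many.

    # A toy product recommender: "learn" customer similarity from purchase data,
    # then recommend items that similar customers bought. All data is hypothetical.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    products = ["laptop", "mouse", "keyboard", "monitor", "headset"]

    # Rows are customers, columns are products; 1 means the customer bought the item.
    purchases = np.array([
        [1, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [0, 1, 0, 0, 1],
        [1, 1, 0, 1, 0],
    ])

    # Fit a nearest-neighbor model that measures how alike two customers' baskets are.
    model = NearestNeighbors(n_neighbors=2, metric="cosine").fit(purchases)

    # A new shopper who has bought a laptop and a mouse.
    new_customer = np.array([[1, 1, 0, 0, 0]])
    _, neighbor_idx = model.kneighbors(new_customer)

    # Recommend items the most similar customers bought that this shopper has not.
    recommended = {
        products[i]
        for row in purchases[neighbor_idx[0]]
        for i, bought in enumerate(row)
        if bought and not new_customer[0][i]
    }
    print(sorted(recommended))  # ['keyboard', 'monitor']

The point is not the particular algorithm but the pattern: the system derives its behavior (the recommendations) from data rather than from hand-written rules, which is what distinguishes machine learning from conventional software.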

Generative AI more broadly has attracted significant attention from the public and investors. Just two months after launching on November 30, 2022, ChatGPT had 100 million users. According to PitchBook, venture capitalists invested $1.69 billion in generative AI in the first quarter of 2023, putting VCs on pace to invest $6.76 billion this year, more than four times what they invested just three years earlier.

Applying artificial intelligence

As AI has grown more sophisticated, it’s become a part of daily life for many. Machine learning helps us unlock our smartphones and get turn-by-turn directions. AI drives web search engines and recommendations on social media platforms. And how many of us use AI-powered voice assistants at home or in our cars?

AI applications are now making their way into the workplace. There is vast potential for AI to help organizations improve the quality of their work product, become more efficient and find cost savings. Early adopters of AI include:

  • Manufacturers, which can use AI to introduce predictive maintenance of critical equipment and automation to improve efficiency and productivity.

  • Healthcare organizations, for which AI can aid in diagnosis, drug discovery and development, training, administration and research.

  • Financial institutions, which can use AI to process loan applications and insurance claims, detect fraud and predict investing strategies.

  • Retailers and travel companies, for which AI can analyze customer data to provide personalized recommendations, automate marketing activities and power virtual customer service agents.

  • Real estate companies, which can use AI to analyze market conditions, determine property values, process lease agreements and mortgages, and introduce smart technology to properties.

  • Logistics and transportation companies, for which AI can manage supply chains and automate sorting, packaging and delivery.

As organizations capitalize on these opportunities, however, the associated risks may not yet be well-understood.

New areas of risk

As AI applications change and their use expands, concerns are arising over a variety of evolving and emerging risks.

A key risk surrounds AI tools’ mass collection of personal data, including personally identifiable information (PII), and the associated privacy concerns. As data is collected, stored and manipulated by AI, there is a risk of a data breach or other cyber event, which could have increasingly severe consequences as governments develop more stringent privacy regulations.

There’s also the possibility of algorithmic bias, an oft-cited concern around the use of AI in diagnostics and other healthcare applications, in facial recognition tools employed by retailers and others, and in financial institutions’ due diligence and underwriting decisions.

Algorithmic bias and other risks could also rear their heads in human resources applications of AI. Chief human resources officers have an opportunity to use human capital analytics and other tools to understand workforce productivity and the drivers of employee engagement, and ultimately to shape the workforces of the future. But they must be mindful of these tools’ potential racial and gender biases and of accessibility issues for employees, either of which could give rise to claims of discrimination.

Other key risks AI users must be aware of include:

  • Questions about the ownership of AI-developed content and the possibility that such content could infringe on others’ intellectual property rights.

  • The possibility that generative AI tools, while appearing confident, persuasive and precise, can in fact produce inaccurate, biased or even offensive content.

  • In cases where the use of an algorithm can disrupt or harm lives, questions about who is legally responsible — the algorithm’s creator or the organization using it.

  • The possibility that AI can be used for social engineering purposes — for example, a deepfake being used to impersonate or defraud an individual.

Growing oversight

Organizations must also contend with an increasingly complex global regulatory landscape. In the U.S., the Biden administration and Congress are actively engaged in efforts to build guardrails around artificial intelligence, while seeking to ensure that innovation can and will continue in the private sector.

Last month, the White House announced voluntary commitments from seven leading tech companies to move toward safe, secure, and transparent development of AI technology. The companies pledged to ensure their products are safe before introducing them for public use, build systems that emphasize security, and develop mechanisms that will ensure users are told when content is AI-generated.

This fall, the Senate will host a series of AI "Insight Forums" covering multiple policy areas, including copyrights, workforce issues, national security, high-risk AI models, existential risks, privacy, transparency, explainability, and elections. The forums will include discussions involving AI industry leaders and critics about the government's role in AI.

Meanwhile, more than a dozen states have adopted resolutions or enacted legislation related to AI. Many of these states have focused on:

  • Governance and the protection of civil rights;

  • Transparency around AI’s decision-making process; and

  • Clear enforcement mechanisms.

Internationally, the European Union and China are leading the way in creating frameworks to regulate AI. In June 2023, the European Parliament took a major step by approving draft rules that would set requirements for providers and users of AI based on risk levels. The EU seeks to reach agreement on the law’s final form by the end of 2023. In China, rules regulating generative AI are set to take effect on August 15.

While the EU and China will likely be the first to enact regulations governing AI, both jurisdictions may have to amend and refine their initial regulations as the risks and the technology evolve. Other jurisdictions around the world will also likely consider and propose laws to regulate AI as the technology advances and adoption rates increase.

How to move forward with AI

To manage risk associated with AI, organizations should be asking the following questions:

  • Do we use AI technologies or are we contemplating the use of AI technologies?

  • Do our vendors use AI technologies, and what do our contracts say about our potential liability arising from their use of AI?

  • What are the security risks associated with the particular AI technology we are using or considering?

  • What regulatory compliance obligations arise from our use of AI technology?

  • Are risk transfer mechanisms available to help mitigate the risks associated with using AI technology?

For more on AI risks, contact your Lockton advisor. And visit lockton.com for more content on AI’s applications and risks across several industries and corporate functions.