Artificial intelligence (AI) is reshaping our lives. It can assist with everyday tasks like meal planning, gift-giving or outfit selection. In the workplace, AI can enhance operations, improve productivity, drive innovation and sharpen competitiveness. While the technology delivers modern efficiencies, its rapid adoption is raising complex questions about AI-related exposures and insurance coverage.

The insurance industry is taking notice and trying to wrap its arms around this evolving risk. Currently, directors and officers (D&O), errors and omissions (E&O), employment practices liability (EPL) and cyber can all play a role in mitigating AI risks, along with at least two new standalone AI products.

However, what the future holds for insurance coverage and AI remains to be seen. Insurers are assessing the risks under their current policies, adding or eliminating coverage for AI use, creating new products and changing underwriting processes to account for AI-related exposures.

Current AI use exposures

While AI seems like a new tool, we have already seen how claims can arise from its use. Here are a few examples:

D&O exposure

  • A dangerous trend called AI washing is on the rise. AI washing occurs when a company overstates its AI use or capabilities to attract customers, investors or shareholders. This can lead to SEC investigations and civil penalties.1
  • There are claims alleging directors and officers have failed to oversee or mitigate the risks associated with executing AI processes, leading to company financial loss or reputational damage.

E&O exposure

  • Claims related to AI “hallucinations” are on the rise. A consulting firm used AI to produce a report that contained non-existent references and citations, leading to corrections and a partial refund to the client.2 Similarly, lawyers are facing bar sanctions and negligence claims for citing hallucinated cases in their briefs.3

EPL exposure

  • AI-powered hiring tools are being tested by litigation alleging algorithmic discrimination. These cases allege that the tools were trained on biased data points, producing discriminatory outcomes and unfair employment practices.4
  • The use of AI systems to terminate employees can lead to allegations of wrongful termination.5

Cyber exposure

  • AI systems house large amounts of data and, if not properly protected, can be breached, giving cyber criminals unauthorized access to sensitive information. Additionally, an employee who uses a consumer AI tool instead of the corporate AI system may upload sensitive data, resulting in an unauthorized public disclosure.
  • AI tools have the potential to generate output that infringes intellectual property rights, resulting in copyright infringement claims for unauthorized use of protected material.
  • Bad actors utilize AI to launch sophisticated social engineering attacks with speed, scale and realism. Gone are the days of the misspelled email requesting an ACH change. Now bad actors impersonate CEOs on Zoom calls to dupe employees into transferring millions of dollars, leading to large eCrime losses.6

In addition to the above exposures, regulation surrounding AI is ramping up. Jurisdictions around the globe have enacted, or are developing, AI regulatory laws. Additionally, 47 states have passed AI legislation, which often includes enforcement by the State Attorneys General,7 and will lead to regulatory exposure for organizations.

How are insurers responding?

Insurance coverage is often reactive and follows industry claims data. At this point, AI-specific claims data is sparse but growing. With little data on how AI affects loss ratios, insurers are trying to understand how current policies cover the risks AI poses and whether they need to adjust policy language. Based on real-world exposures, a D&O, E&O, EPL or cyber policy as written may currently cover these AI claims. However, the insurance industry will closely monitor claims data and adjust accordingly.

Underwriting

Industry changes will likely start with underwriting processes. For instance, D&O insurers may scrutinize an insured’s AI governance practices during the underwriting phase to determine whether an exclusion is necessary. E&O insurers will likely scrutinize all professional services and either include or exclude AI-related services from the professional services definition. Similarly, EPL insurers will likely ask underwriting questions about technology-assisted employment decisions. Cyber insurers will ask how an insured utilizes AI, what types of data the AI tool is trained on and regularly handles, whether the insured is compliant with AI laws and regulations, and what first- and third-party liability exposures the insured faces.

Policy language

After assessing exposures in the underwriting process, insurers will look at their policy language to see if an exclusion or clarification is necessary. Recently, an insurer introduced an “absolute AI exclusion” that may be used in its liability insurance products. Presumably, AI losses would be covered by the policy in its current form unless the policy includes this AI exclusion. The exclusion is very broad: it bars payments for loss “based upon, arising out of or attributable to” use of AI, the insured’s statements related to its AI usage, any actual or alleged violation of laws, statutes, regulations or rules relating to AI, or any demand to investigate or respond to the risks, effects or impacts of the insured’s use of AI.

Defining AI

The exclusion defines AI as “any machine-based system that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments, including, without limitation, any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text and other digital content.” While the exclusion is very broad, the definition of AI may become a catalyst for debate: brokers, risk managers, claims adjusters and judges who lack a technical understanding of AI may reach conflicting interpretations, leading to coverage disputes.

Endorsements

On the other hand, cyber insurers appear to be adding AI-affirmative endorsements. These endorsements help clarify first-party and/or third-party coverage for an insured’s use of AI. Some simply add an AI definition, while others define various terms like “machine learning,” “data poisoning” and “hallucinations.” Just as no two cyber policies use the same language, no two AI-affirmative endorsements appear to be the same. Again, the technical definitions in any policy may lead to coverage disputes, as practical understandings of these terms can produce conflicting interpretations, especially for such new and evolving tools.

Coverage gaps

Lastly, at least two new AI insurance products have emerged to fill potential AI coverage gaps. These policies appear to piggyback off traditional E&O and cyber policies. One covers first-party losses like a traditional cyber policy, but only for the insured’s use of AI, as defined. Another mimics a blended technology E&O/cyber policy and adds an AI component to all insuring agreements. With time, there may be standalone AI products for D&O or EPL, too.

The future is here, and AI must be addressed as a business risk. Organizations need to understand how they are using AI and the governance surrounding that use. The insurance industry will inevitably ask underwriting questions about an organization’s use of AI and will add or eliminate coverage based on the insurer’s risk comfort level. Just as the use of AI evolves, the insurance landscape is evolving along with it.

For more insight into AI use and the appropriate coverage your organization may need, please contact a HUB ProEx Specialist. View more articles in our ProEx Advocate Articles & Insights Directory.

NOTICE OF DISCLAIMER

Neither HUB International Limited nor any of its affiliated companies is a law firm, and therefore they cannot provide legal advice. The information herein is provided for general information only and is not intended to constitute legal advice as to an organization’s or individual’s specific circumstances. It is based on HUB International’s understanding of the law as it exists on the date of this publication. Subsequent developments may result in this information becoming outdated or incorrect, and HUB International does not have an obligation to update this information. You should consult an attorney or other legal professional regarding the application of the general information provided here to your organization’s specific situation.


1. U.S. Securities and Exchange Commission, “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence,” March 18, 2024.
2. The Guardian, “Deloitte to pay money back to Albanese government after using AI in $440,000 report,” October 6, 2025.
3. The Guardian, “US lawyer sanctioned after being caught using ChatGPT for court brief,” May 31, 2025.
4. Workforce Bulletin, “Artificial Intelligence Bias: Harper v. Sirius XM Challenges Algorithmic Discrimination in Hiring,” October 17, 2025.
5. Sakara Digital, “Code Without Compassion, Part 1: Fired by Bot – The Amazon Flex Case,” September 29, 2025.
6. CNN, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” February 4, 2024.
7. Orrick, “State Attorneys General on Applying Existing State Laws to AI,” February 18, 2025.