Red teaming: An effective tool for insurer assessment of AI risks

By Paige Waters and Stephanie Macro
The insurance industry’s use of artificial intelligence faces increased scrutiny from insurance regulators. Red teaming can be leveraged to address some of the risks associated with an insurer’s use of AI. The U.S. Department of Commerce’s National Institute of Standards and Technology defines a “red team” as:
“A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The red team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the blue team) in an operational environment. Also known as cyber red team.”

Red teaming originated as a cybersecurity concept, and the insurance industry's enterprise risk, legal and compliance functions are becoming more familiar with its use in connection with AI corporate governance efforts.
Insurance regulators view insurers’ use of AI as creating significant risks for the insurance-buying public. Regulators have been working diligently to understand insurers’ use of AI and to develop effective AI regulation. For example, 24 states have adopted the National Association of Insurance Commissioners Model Bulletin on the Use of Artificial Intelligence By Insurers (NAIC Model AI Bulletin), the New York Department of Financial Services has promulgated Cybersecurity Regulation (23 NYCRR 500) and Circular Letter No. 7 Regarding the Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing (Circular Letter No. 7), and Colorado has promulgated Regulation 10-1-1 et seq., Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (CO AI Regulations). Although state AI guidance does not specifically mandate red teaming, adversarial testing could be a valuable component of an insurer’s AI corporate governance program.

In the insurance industry, red teaming for AI applications is a strategic approach to testing and evaluating the security and robustness of AI systems. It involves simulating adversarial attacks to identify vulnerabilities and to assess the resilience of AI models used in insurance processes such as underwriting, claims processing, fraud detection and customer service. Red teaming may also reveal unlawful bias or unfairly discriminatory practices resulting from an insurer’s use of AI applications.
The primary goal is to assess objectively the AI system’s ability to withstand attacks that could compromise data integrity, privacy or operational functionality. Adversarial testing involves creating scenarios in which AI models are exposed to adversarial inputs designed to deceive or manipulate the system, such as altered data or malicious algorithms. Red teaming helps identify potential risks associated with AI deployment, including biases, errors and vulnerabilities that could lead to incorrect decision-making or security breaches. Insurers use red teaming to test internally developed AI applications as well as AI purchased from third-party vendors. Some third-party vendors also disclose their own use of red teaming. However, insurers should not rely solely on vendors’ red teaming representations, because an insurer’s use of its own data and proprietary changes to the AI applications may create additional vulnerabilities, biases or unlawful outputs. By following best practices, insurers can strengthen their security posture, protect sensitive data and enhance their AI corporate governance.
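To illustrate what such adversarial testing can look like in practice, the sketch below is a minimal, purely hypothetical Python example. The stand-in scoring function (score_applicant), field names, cohort labels and thresholds are assumptions for illustration only, not any insurer’s or vendor’s actual system. It runs two simple red-team style checks: perturbing inputs to see whether scores swing more than expected, and a rough four-fifths-rule screen for disparities in favorable-outcome rates across cohorts.

```python
# Minimal red-teaming sketch against a hypothetical underwriting model.
# All names and thresholds are illustrative assumptions, not a real system.
import random

def score_applicant(applicant: dict) -> float:
    """Hypothetical underwriting score in [0, 1]; higher is more favorable."""
    base = 0.5
    base += 0.3 * (applicant["credit_score"] - 650) / 200
    base -= 0.2 * applicant["prior_claims"] / 5
    return max(0.0, min(1.0, base))

def perturb(applicant: dict, noise: float = 0.02) -> dict:
    """Adversarial-style perturbation: small, plausible edits to a numeric input."""
    tweaked = dict(applicant)
    tweaked["credit_score"] *= 1 + random.uniform(-noise, noise)
    return tweaked

def robustness_check(applicants, trials=100, tolerance=0.05):
    """Flag applicants whose scores swing more than `tolerance` under small input changes."""
    unstable = []
    for a in applicants:
        baseline = score_applicant(a)
        if any(abs(score_applicant(perturb(a)) - baseline) > tolerance for _ in range(trials)):
            unstable.append(a)
    return unstable

def disparity_check(applicants, group_key="zip_cohort", threshold=0.8):
    """Rough bias screen: compare favorable-outcome rates across cohorts (four-fifths rule)."""
    rates = {}
    for a in applicants:
        group = a[group_key]
        favorable = score_applicant(a) >= 0.5
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + favorable, total + 1)
    ratios = {g: hits / total for g, (hits, total) in rates.items()}
    best = max(ratios.values())
    if best == 0:
        return {}  # no cohort has favorable outcomes; nothing to compare against
    return {g: r for g, r in ratios.items() if r / best < threshold}

if __name__ == "__main__":
    random.seed(0)
    sample = [
        {"credit_score": 700, "prior_claims": 1, "zip_cohort": "A"},
        {"credit_score": 640, "prior_claims": 3, "zip_cohort": "B"},
        {"credit_score": 720, "prior_claims": 0, "zip_cohort": "A"},
        {"credit_score": 600, "prior_claims": 2, "zip_cohort": "B"},
    ]
    print("Unstable under perturbation:", robustness_check(sample))
    print("Cohorts failing four-fifths screen:", disparity_check(sample))
```

In a real exercise, checks of this kind would be run against the production model, documented for governance purposes, and supplemented with more sophisticated adversarial techniques and fairness metrics.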
Other considerations in deploying red teaming include whether the attorney-client privilege or other privileges (e.g., the insurance compliance self-evaluative privilege) may apply to red teaming exercises under certain conditions. Such privileges do not apply automatically. For example, the attorney-client privilege may apply if the red teaming exercise is conducted to provide legal advice or services and the related communications are confidential and made for the purpose of seeking or providing legal advice.
As insurers develop and implement their AI corporate governance, red teaming should be considered another “arrow in the quiver” for demonstrating to insurance regulators that insurers are assessing AI risk effectively. Transparency and documentation of the red teaming risk assessments will be helpful in responding to regulatory scrutiny.
Paige Waters is partner at Troutman Pepper Locke law firm. Contact her at paige.waters@innfeedback.com.
Stephanie Macro is counsel at Troutman Pepper Locke law firm. Contact her at stephanie.macro@innfeedback.com.
© Entire contents copyright 2025 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.