Khabor Wala Desk
Published: 22nd December 2025, 5:07 AM
Artificial Intelligence (AI) is rapidly reshaping the risk landscape across virtually every industry and sector. As technological sophistication grows, insurance companies are being compelled to reassess the frameworks and guidelines underpinning their policies. For many years, certain AI-related exposures were quietly absorbed within broader cyber, professional liability, or general insurance policies—a practice commonly referred to as “silent AI coverage.” However, as AI systems become more complex and pervasive, this traditional approach is increasingly viewed as inadequate and, in some cases, potentially hazardous.
A recent report by WTW, "Insurance in the AI Age", authored by Dr Anat Lior and Sonal Madhok, highlights these challenges. The researchers draw a parallel with the early days of cyber insurance, when nascent digital risks were initially folded into general policies before specialised products emerged. Silent AI coverage similarly leaves organisations exposed to undefined liabilities and unpredictable financial losses. Such gaps underscore the risk for both insurers and policyholders, who may face unforeseen coverage shortfalls.
In response, insurance firms are progressively moving towards clearly defined AI coverage. This includes AI-specific policy endorsements, explicit exclusions, and the development of independent AI insurance products, particularly for small and medium-sized enterprises. Larger technology corporations, whose AI operations tend to be highly complex and far-reaching, often prefer self-insurance solutions to mitigate risk.
Nevertheless, many AI risks remain linked to traditional insurance lines. For instance, conventional cyber policies typically exclude damages arising from an organisation’s proprietary data, while general liability insurance may not address purely financial losses. Consequently, policy renewal processes now require more rigorous reassessment, especially in areas such as autonomous decision-making, algorithmic errors, and other AI-specific exposures.
Underwriting practices are evolving in tandem. Insurers increasingly demand detailed information on AI governance, human oversight, and internal control measures. There is a growing emphasis on "human-in-the-loop" systems, which ensure that critical decisions retain human involvement. Regulatory frameworks, including the European Union's AI Act, are also shaping future coverage standards.
Dr Lior emphasises that precise policy language, robust governance structures, and enhanced underwriting data are essential to mitigating AI risks. Such measures are expected to strengthen the stability of the insurance sector while enabling organisations to adopt AI responsibly and manage associated exposures effectively.
The shift away from silent coverage marks a significant acknowledgement within the industry: AI is transforming both business and society, and conventional risk paradigms must evolve accordingly to keep pace with technological progress.