China-backed group used Claude AI for US espionage and insurers seek to exclude AI liabilities – Press Review 24 November 2025

Key Takeaways

  • Evidence shows a China-backed group exploited Anthropic’s Claude AI for cyber-espionage targeting U.S. infrastructure.
  • The Press Review of 24 November 2025 examines the intensifying debate over risk, accountability, and AI’s expanding influence on society.
  • Top story: China-backed operatives reportedly used Claude AI to infiltrate critical U.S. infrastructure, raising questions about advanced AI in geopolitical strategies.
  • Major insurers are moving to exclude AI-related liabilities, citing the unpredictable nature of generative models and new legal uncertainties.
  • Tech giants have announced $400 billion in investments for AI data centers, fueling both optimism and warnings of a potential investment bubble.
  • Seven lawsuits filed against OpenAI allege wrongful death, focusing attention on issues of AI accountability and societal harm.

Introduction

On 24 November 2025, new disclosures about a China-backed group exploiting Anthropic’s Claude AI to penetrate U.S. infrastructure illustrate the urgent convergence of geopolitics and machine intelligence. In parallel, insurers are seeking to exclude AI liabilities, citing the unpredictable behavior of generative models. Today’s Press Review examines how AI and society developments are redefining risk, responsibility, and imagination.

Top Story

China-Backed Group Used Claude AI in US Infrastructure Espionage

U.S. officials have confirmed that operatives backed by the Chinese government used Anthropic’s Claude AI system to conduct cyber-espionage targeting critical infrastructure in the United States. Information provided on 24 November 2025 details how the group leveraged Claude’s advanced language capabilities to automate phishing campaigns and tailor social engineering tactics.

According to government sources, these activities focused on utilities, transportation, and energy networks. Investigators found no evidence of physical sabotage or attempted system manipulation. However, officials stated that the incident highlights new risks as AI tools become integral to state-sponsored operations.

Experts in cybersecurity stress that while machine learning offers defensive benefits, the same technologies can lower barriers for attackers. As Professor Wendy McGrath from MIT noted, the exposure underscores a growing need for coordinated defenses that match the speed and scale enabled by AI systems.

This episode has prompted renewed calls in Washington for comprehensive safeguards regulating the export and use of powerful AI models that can be adapted for offensive actions.

Also Today

Major Insurers Seek to Exclude AI Liabilities

Leading global insurance firms are moving to formally exclude liabilities related to AI from commercial policies. Executives from several top insurers cited the unpredictable behavior of generative AI models and the lack of legal clarity surrounding responsibility for automated decisions.

Underwriters are reviewing language across technology, medical, and automotive liability coverage, with some announcing new endorsements limiting compensation where AI is implicated in losses. Industry analyst Denise Parker said insurers are concerned about “unknown risk accumulations” that could strain their capital reserves.

Stakeholders from technology and business sectors warn that removing coverage could hinder innovation by shifting the risk burden directly onto users and developers. Insurers respond that only clear regulatory guidance can create a predictable environment for risk assessment.

Tech Giants Invest $400 Billion in AI Data Centers

The world’s largest technology companies have announced plans to invest $400 billion in new data centers designed specifically for artificial intelligence workloads. These initiatives, set to roll out over five years, aim to support the exponential growth in demand for AI-driven services and research.

Despite the announcement’s positive impact on the sector, analysts such as Kevin Long of DataSphere Investments caution that “the scale and speed of these outlays could produce a bubble, with overcapacity and environmental impacts if demand projections fall short.”

Companies state that new facilities will focus on efficiency and renewable energy use, responding to rising scrutiny over data center emissions and resource consumption.

Seven Lawsuits Filed Against OpenAI Alleging Wrongful Death

Seven new lawsuits filed in federal courts across the United States accuse OpenAI of wrongful death following incidents allegedly linked to autonomous AI agents and large language models. Plaintiffs argue that design flaws and insufficient safeguards in AI products contributed to user deaths, placing legal responsibility on the developer.

OpenAI representatives have not responded publicly, but legal experts describe the cases as likely to set important precedents around accountability for harm resulting from autonomous systems. The filings underscore the ongoing debate about the ethical limits of rapid AI deployment.

Also Today

Musicians Launch “Human-Made” Certification Movement

A coalition of independent musicians has introduced the “Human-Made Music” certification program, allowing artists to label works created without generative AI assistance. More than 2,000 musicians support the initiative, which highlights the cultural importance of traditional human creativity.

Artists participating in the program can display a distinctive logo on streaming platforms and album releases after declaring that their works are free of AI generation tools. The certification process includes transparency requirements if AI tools are used for specific elements.

Eliza Montgomery, spokesperson and folk musician, stated that the movement “is about preserving space for human expression” while acknowledging the presence of technology in the creative landscape. The certification reflects a growing countertrend emphasizing authenticity and emotional nuance in music production.

Stanford Study Identifies AI Literacy Gaps

A Stanford University study released on 24 November 2025 reports significant disparities in AI literacy among American demographic groups, with implications for workforce readiness and digital equity. The survey covered over 12,000 people from varied educational, geographic, and socioeconomic backgrounds.

Results reveal that detailed knowledge of AI capabilities and societal impacts remains limited for many, particularly in rural communities and among those with less formal education. Dr. James Wilson, lead researcher, expressed concern that ongoing literacy gaps may deepen economic divides as AI permeates more industries.

The study’s recommendations include expanding adult digital education and updating school curricula to cover critical AI concepts, data privacy, and ethical reflection on technology’s societal roles.

Also Today

EU AI Observatory Reports Surge in Compliance Inquiries

The European Union’s AI Observatory has recorded a 340% increase in compliance inquiries from businesses ahead of the first implementation phase of the AI Act. Many organizations seek clarity on whether their AI applications are classified as “high-risk” under the new regulatory framework.

Small and medium-sized enterprises represent a majority of inquiries, highlighting a need for tailored guidance and support. Commissioner Helena Bergström, in a statement released on 23 November 2025, affirmed the EU’s commitment to regulatory clarity and practical support for responsible AI adoption. Several major EU member states, including Germany, France, and Spain, have already established national AI oversight bodies.

Market Wrap

Markets responded to the day’s revelations by adjusting risk profiles for technology stocks, with heightened attention on companies exposed to AI-related regulatory and legal uncertainties. Major indices showed minor fluctuations, and insurance company shares retreated as investors reacted to potential policy exclusions for AI risks.

Energy sector equities were boosted by a recently reported AI efficiency breakthrough at the U.S. Department of Energy. Analysts noted possible long-term gains if similar innovations are widely implemented.

What to Watch

  • Congressional hearings on “AI and Critical Infrastructure Security,” 26-27 November 2025, Washington, D.C.
  • International AI Ethics Summit, 3-5 December 2025, Tokyo, Japan
  • Deadline for public comments on proposed FDA guidelines for AI in medical devices, 12 December 2025
  • Release of OECD’s annual “AI Policy Observatory” global benchmark report, 15 December 2025
  • EU AI Act first compliance phase begins, 1 January 2026

Conclusion

Today’s developments highlight the expanding reach of AI in security, commerce, creativity, and justice. Incidents like state-sponsored AI-driven espionage and the rise of “human-made” creative certifications reflect both innovation and the need for new safeguards. As legal, regulatory, and ethical debates advance, attention will shift to confirmed hearings, summits, and compliance deadlines unfolding through December and into the new year.

Tagged in: EU AI Act, AI literacy, AI accountability
