What is Anthropic’s AI tool that triggered a sell-off in IT stocks? Anil Singhvi, Ajay Bagga decode

US artificial intelligence firm Anthropic, little known to many retail investors, suddenly became the centre of global market attention last week. The trigger was the launch of a new AI automation capability under its Claude platform. The announcement sparked fears of deep disruption across software and IT services, dragging down tech stocks in the US and India.

What exactly did Anthropic release?

Anthropic, best known for its Claude chatbot, released 11 open-source plugins for Claude Cowork on January 30. Claude Cowork is an “agentic” AI assistant designed for non-technical professionals, not just developers.

The tool can read files, organise folders, draft documents and complete multi-step tasks with user approval. In simple terms, it works like a digital employee that can follow instructions across workflows.

The newly released plugins allow companies to customise Claude for specific roles. These include productivity, sales, marketing, finance, data analysis, customer support, product management and even biology research.

Why the legal plugin rattled investors

One plugin stood out. The legal workflow plugin automates contract review, NDA checks, compliance screening and legal brief preparation.

Anthropic has clearly stated that the tool does not offer legal advice and that all outputs must be reviewed by licensed lawyers. Despite this disclaimer, investors reacted sharply.

Markets fear that if routine legal and analytical work gets automated at scale, it could hit revenues of legal software firms, data providers and IT services companies that depend on manpower-led billing.

How Anthropic is different from other AI startups

Most legal AI startups rely on third-party AI models. Anthropic builds its own large language models and tailors them for specific industries.

This gives it the ability to challenge not just traditional software firms, but also smaller AI startups that depend on models from big AI labs. That positioning is what has unsettled investors the most.

How the fear spread to Indian IT stocks

The shock did not remain limited to the US. Global software stocks sold off sharply. That fear quickly travelled to India.

On February 4, the Nifty IT index fell more than 5 per cent in early trade, marking the sector’s worst single-day fall since March 2020. Nearly Rs 2 lakh crore in market value was wiped out, with shares of TCS, Infosys and Wipro dropping between 5 per cent and 8 per cent in a single session.

Anil Singhvi explains the market reaction

Explaining the sell-off, Anil Singhvi said the IT sector became the biggest drag on the market.

He said investors panicked after hearing claims that Anthropic’s AI tool could handle legal work, data analysis and coding remotely, without the need for large IT teams. According to market chatter, if such claims hold true, IT company revenues could fall sharply over the next three years. That fear triggered heavy selling across IT stocks, he said.

Ajay Bagga offers a counter view

Market expert Ajay Bagga cautioned against overreaction. He said large enterprises, banks and regulated businesses are unlikely to rely entirely on AI tools that carry no clear accountability.

According to him, companies will still prefer established vendors such as TCS, Infosys or Oracle, where accountability and responsibility are clearly defined. He added that while AI will improve productivity, it will also need auditing, oversight and integration — areas where IT services firms still play a key role.

Bagga also flagged current limitations of AI tools, including hallucinations and incorrect data, which make full automation risky for mission-critical systems.