Last Updated: February 04, 2026, 17:59 IST
Dario Amodei, trained as a physicist at Stanford University, played a key role in developing large language models such as GPT-3. (AP)
As much of the world rushed to build faster and more powerful artificial intelligence, two siblings were preoccupied with a quieter, more unsettling question: what happens if AI grows beyond human understanding or control?
That concern lay at the heart of the decision by Dario Amodei and his sister Daniela Amodei to walk away from OpenAI in 2021 and start their own company. Their venture, Anthropic, was founded on a simple but demanding idea: building AI tools is difficult, but making them safe, transparent and aligned with human values is the real challenge.
The name “Anthropic” itself signals that intent. Derived from the Greek word for “human”, it positions safety and societal benefit alongside commercial success. Registered as a Public Benefit Corporation, Anthropic committed from the outset to balancing profit with responsibility, an unusual stance in an industry driven by speed and scale.
At OpenAI, the siblings had been central figures. Dario, trained as a physicist at Stanford University and later a biophysics researcher at Princeton University, rose to become Vice President of Research and played a key role in developing large language models such as GPT-3. Daniela, whose early interests lay in liberal arts and music, had taken a less conventional route into technology, moving from the fintech firm Stripe to OpenAI, where she served as Vice President for AI safety and policy.
By late 2020, both siblings shared a growing unease. AI capabilities were accelerating at a pace that outstripped efforts to define rules, safeguards and accountability. They feared that without strong guardrails, advanced AI could pose serious risks to society. That fear ultimately became the catalyst for Anthropic.
One of the company’s defining ideas is “Constitutional AI”, a framework in which models are trained from the beginning to follow a set of ethical principles rather than relying solely on post-hoc moderation. The aim is to ensure that systems are honest, controllable and resistant to producing harmful or misleading outputs. Anthropic’s flagship model, Claude, was built around this philosophy and is increasingly positioned as a competitor to ChatGPT, particularly in enterprise settings.
Unlike many AI startups that initially chased mass consumer adoption, Anthropic focused early on business and enterprise clients. That strategy has paid off. The company’s revenues have grown rapidly, with reports indicating a run-rate of around $1 billion by 2025. It has attracted major investments from technology giants including Amazon and Google, and by early 2026, discussions were underway around fresh funding that could value the firm at an extraordinary level.
Internally, responsibilities are clearly divided. Dario leads research, long-term strategy and AI safety policy, while Daniela oversees operations, partnerships and commercial growth. Together, they have built a company that now finds itself competing with established enterprise players such as Oracle, Salesforce and Adobe in areas ranging from coding and data analysis to content creation.
Anthropic’s rapid rise reflects a broader shift in the AI industry, where trust, reliability and governance are becoming as important as raw capability. For the Amodei siblings, the venture is as much about responsibility as it is about technology. As Dario has often warned, AI developed in the right direction could transform the world for the better, but if it goes wrong, the consequences could be catastrophic.