AI, Weapons, and Responsibility: How Trust Became the Real Differentiator for Technology Leaders

Andrea Iorio

March 17, 2026 | 9 min

The year 2026 had barely begun when the winds of war intensified across the international stage. From the U.S. military intervention in Venezuela on January 3rd to the joint attack by the United States and Israel on Iran in March, the world is going through a period of growing geopolitical instability, a stark contrast to what historian Francis Fukuyama once described as "the end of history" in his 1992 book The End of History and the Last Man.

Amid invasions, bombs, and war rooms, one common denominator stands out: artificial intelligence.

Today, AI is an integral part of defense operations, surveillance systems, predictive threat analysis, and decision-making in wartime scenarios. At the center of this transformation lies the Maven Smart System (MSS), a platform developed by Palantir — the company founded by Peter Thiel and Alex Karp, which holds a $10 billion contract with the Pentagon over the next decade.

Originally created in 2017 to apply computer vision to drone imagery and identify objects of interest, MSS has evolved significantly in recent years with the integration of Claude, the AI system developed by Anthropic. It has now become a platform capable of integrating and interpreting multiple sources of data — including satellite images, radar signals, and intelligence reports — connecting them into a single, coherent view of the battlefield.

However, along with these superpowers, the military use of AI brings enormous ethical implications.

Questions arise about who is responsible for potential errors made by AI systems, about the lack of transparency in how these systems reach their conclusions, and about the absence of emotions and moral judgment that a human operator naturally carries when making an extreme decision — such as launching a missile. There are also the reputational consequences for the companies that develop and supply these technologies.

And this is where Anthropic enters the story once again.


In a world where, according to the Edelman Trust Barometer, global trust in AI companies has been declining — falling from 61% in 2019 to just 53% in 2024 — the sector is increasingly competing for public trust.

Anthropic even spent around $16 million on two Super Bowl advertisements to tell people it was an ethical AI company, implicitly criticizing rival OpenAI for advertising ChatGPT.

People noticed. Visits to Anthropic’s website increased by 6.5% after the ad aired, and the number of daily active users grew by 11% following the game — the largest single-day growth in the company’s history.

Did that help? Certainly. But it did not make the real difference.

Three weeks after the Super Bowl, something remarkable happened.

The Pentagon demanded that Anthropic remove restrictions on how Claude could be used — including its limitations regarding mass domestic surveillance and autonomous weapons. The company was asked to accept an “any lawful use” clause or lose a $200 million contract.

Anthropic refused.

The administration then labeled the company a “supply chain risk” — a designation that had never before been publicly applied to an American company. Within hours, OpenAI took over the contract, and Anthropic filed a lawsuit to prevent the Pentagon from placing it on that list.

And here is the question for you:

Would you give up a $200 million contract to protect your company’s values and keep your product free from interference?

Be honest in your answer.

Few would have had the courage to refuse, as Dario Amodei did.

But few would have reaped the benefits of that decision, either.

Two days later, Claude became the number-one app in the country. Messages such as “you give us courage” and “keep going” appeared outside the company’s offices in San Francisco. Even singer Katy Perry posted a screenshot of her Claude Pro subscription with a heart emoji and the word “done.”

Yes, on one hand Anthropic lost a $200 million contract and angered U.S. Secretary of Defense Pete Hegseth.

But on the other hand, it validated a powerful thesis: in the age of AI, ethical and responsible positioning is not just a moral stance. It is a competitive advantage, and one of the most valuable assets a technology leader can hold in any organization.

Because when the CEO of an AI company acknowledges the inherent risks of irresponsible uses of their own technology, they send a powerful signal.

A signal that decisions cannot be driven only by short-term commercial considerations.

A signal that reinforces the key word here: trust.

If artificial intelligence has become an infrastructure of power, then trust — built through clear principles and well-defined boundaries — becomes the real differentiator for technology leaders.

The Anthropic episode may seem distant from the reality of most companies. After all, few leaders are asked to make decisions about military contracts worth hundreds of millions of dollars.

But the principle at stake is exactly the same one that is now appearing in everyday organizational life.

If I am the leader of a bank that uses artificial intelligence to approve or deny credit, but I cannot explain why a particular client was rejected, I do not merely have a technical problem — I have a trust problem.

If I use sensitive data without transparency or proper governance, the risk is not only regulatory. It is reputational. It is strategic.

The same applies to insurance companies that price risk using opaque models, to platforms that recommend “AI slop” content, or to organizations that use biased AI systems to screen job candidates.

When automated decisions are not explainable and do not carry the weight of human accountability, the perception of unfairness grows — and trust, once broken, is extremely difficult to rebuild.

As the Edelman Trust Barometer statistics show, trust in AI is already a challenge. But there is an additional layer that receives far less attention.

Technology leaders do not face this trust crisis only in relation to external customers.

They face it inside their organizations as well.

For decades, technology departments held a near monopoly on technical knowledge. Developing software was expensive, complex, and dependent on scarce specialists. This created high barriers to entry — and consolidated the internal power of CTOs and CIOs.

Today, that scenario has changed.

With low-code and no-code tools such as Lovable, and with AI systems capable of writing code, prototyping products, and automating entire workflows, other areas of the company can now build their own digital solutions.

Marketing can launch a sophisticated landing page in minutes. Operations can automate processes with AI agents. HR can implement predictive dashboards without relying on long development cycles.

So if other departments can build digital products on their own, what differentiates the technology function?

The answer is no longer purely technical competence.

It is trust.

Trust that data is protected.
Trust that the architecture is robust.
Trust that governance exists.
Trust that AI is being used ethically and responsibly.
Trust that automated decisions can be explained.

This profoundly transforms the role of the technology leader.

They are no longer simply the guardian of code.

They become the guardian of the organization’s digital trust.

And this role requires not only technical expertise, but increasingly sophisticated human skills: clear communication to translate complexity, collaboration to co-create solutions with other departments, empathy to understand the concerns of customers and employees, the ability to influence without hierarchical authority, and the courage to establish ethical boundaries even under commercial pressure.

Just as Dario Amodei did.

The $16 million Super Bowl ads asked for trust.

But giving up $200 million is what truly earned it.

Because trust cannot be built through marketing campaigns.

People recognize the difference between a message and a decision.

In a world where any department can “build technology,” the true strategic asset is no longer the algorithm itself.

It is the trust placed in the humans who design, train, and deploy the most powerful — and potentially dangerous — technology the world has ever seen.
