CHECK AVAILABILITY

CONTACT FORM

In order to check Andrea's agenda and get a quote for an event, or even just to get in touch with him, please use the form below.


Did you know that, in a recent survey, half of the AI researchers polled said there is a greater than 10% probability that humanity could be eradicated by Artificial Intelligence due to our inability to control it? This alarming statistic, offered by experts in the field, should ring loud warning bells. However, the risks of AI are often so abstract that even when our minds grapple with potential negative impacts, they quickly reset to their previous mode. This fleeting understanding, akin to a rubber band snapping back to its original shape, is what researcher Tristan Harris calls the "rubber band effect". Harris, a former Google design ethicist who rose to prominence through the documentary The Social Dilemma and founded the Center for Humane Technology, urges us not to fall prey to this effect and to understand what it takes to prevent AI from causing humanity's downfall. That is what we will explore in this article.

Now, let's delve into the issue at hand: how to prevent AI from causing humanity's extinction. This is precisely what Tristan Harris addressed during a recent conference:

“The feeling I’ve had, personally, is akin to being in 1944, receiving a call from Robert Oppenheimer, involved in this thing called the Manhattan Project, of which you have no understanding, and he announces: ‘The world is about to fundamentally change. However, what's being deployed isn't being done so in a safe and responsible way, but in a very perilous manner.’”

During a March interview with ABC News, OpenAI CEO Sam Altman was asked if there was a kill switch for ChatGPT in case the AI went rogue. Altman responded affirmatively. He further stated, "In reality, any engineer can just say we're going to disable this for now. Or we're going to deploy this new version of the model." However, historian Yuval Harari didn't agree.

According to Harari, as long as these tools are not released into the public sphere, they can be equipped with all sorts of switches. "But once you release them into the public sphere, people start to depend on them for their sustenance, social relations, and politics. At that point, it's too late. You can't pull the switch, because doing so would cause an economic collapse."

This analogy resonates with the current state of the Internet: imagine pausing the Internet for a day because it had become "bad". Of course, we know that can't happen with the Internet, since it is a neutral infrastructure, but it could happen with AI. Would the world not grind to a halt? Or think about the inconvenience you faced when WhatsApp went down for a few hours. It is clear that Harari has a point.

Harari believes recent technological revolutions have widened social inequalities and led to political unrest. However, he cautioned that these issues "are nothing" compared to what society might experience in the coming years.

He argues that if we do not address this, some people will grow exceedingly rich and powerful by controlling these new tools, while others could become part of a new 'useless class'. This harsh terminology is intended to prevent the rubber band effect from occurring. Of course, from the perspective of their families, friends, or communities, people are never useless. But from an economic standpoint, they might become redundant if they no longer possess the necessary skills.

Harari characterized this as a "terrifying danger", emphasizing the need to safeguard people before it is too late.

Elon Musk, Steve Wozniak, Harari, and other tech leaders signed a letter in March calling on AI developers to "immediately halt for at least six months the training of AI systems more powerful than GPT-4".

The key point is that Harari doesn't seek to halt all AI development, but instead advocates for a pause before releasing exceedingly potent AI tools into the public sphere.

He likens his stance on AI to the development of new drugs, where laws and medical standards require a company to undergo stringent safety checks to ascertain a product's short- and long-term safety before it reaches the market.

What would be your reaction if biotech labs started creating new viruses and simply released them into the public sphere merely to impress shareholders with their enhanced capabilities and boost their stock value? We would rightfully deem this madness and call for the perpetrators to be imprisoned.

The issue, Harari asserts, is that AI has the potential to be "much more powerful" than any virus. He's concerned that researchers might develop these extraordinarily potent tools and release them quickly without adequate security checks.

Much like Tristan Harris, I find this situation reminiscent of the nuclear analogy. Edward Teller's fear was quite literally the end of the world. In 1942, he informed his Manhattan Project colleagues that when they detonated the world's first nuclear bomb, it could potentially set off a chain reaction: the atmosphere would ignite, and all life on Earth would be incinerated. Some of Teller's colleagues dismissed the idea, but others did not. According to Arthur Compton, director of a Manhattan Project laboratory in Chicago, if there was even the slightest possibility of atmospheric ignition, all work on the bomb should be halted. Doesn't this sound a bit like the current debate among experts?

However, AI differs fundamentally from any other tool in human history. Every other tool we've invented has empowered us because the decisions on how to use these tools have always remained in our hands. If you invent a knife, you decide whether to use it to kill someone, save a life in the operating room, or chop a salad. The knife doesn't decide.

This is why Yuval Harari contrasts AI with the radio and, returning to the earlier analogy, the atomic bomb. The radio doesn't decide what to broadcast, and a nuclear weapon doesn't choose its target; there is always a human element. AI, by contrast, is the “first tool in human history” capable of making decisions about its own use, and about life itself.

So, what's to be done? Is it all doom and gloom? Not necessarily: AI can aid in improving healthcare, treating cancer, and solving a multitude of problems faced by humanity. So what measures can we take?

First, by fighting disinformation. One of the most effective ways to do this, Harari argues, is to reinforce trusted institutions. In today's world, individuals and companies alike can disseminate any information, regardless of its grounding in objective reality, and they can also produce deepfakes.

He also suggests stronger regulation and laws on data privacy. For instance, when a person confides something extremely private to a doctor, the doctor cannot profit by selling that information to third parties to be used against the patient, or to manipulate them, without their consent. The acquisition and sale of data are among the most significant ways Big Tech companies generate profit; they also use this data to curate content and sell products to consumers. Harari proposes that rules similar to those in the medical field should apply to the technological realm.

"The people who develop the technology cannot be trusted to regulate themselves. They don't represent anybody," Harari stated. "We never voted for them. They don't represent us. What gives them the power to make perhaps the most important decisions in human history?"

This is a critical question to ponder and one that I invite you to contemplate.


Want to know more about my keynotes or our custom podcast program to help your company?

In order to check Andrea's agenda and get a quote for an event, or even just to get in touch with him, please use the form below.


With more than 200 keynotes delivered (online and offline) in 2021 to clients across Brazil, Latin America, the United States and Europe, Andrea is today one of the most requested speakers on Digital Transformation, Leadership, Innovation and Soft Skills in Brazil and globally. He has been the head of Tinder in Latin America for 5 years, and Chief Digital Officer at L’Oréal. Today he is also a best-selling author, and a professor at the Executive MBA at Fundação Dom Cabral.



CONTACT FORM

In order to check Andrea's agenda and get a quote for an event, or even to just get in touch with him - please use the form below.

Andrea Iorio · 2021 © All Rights Reserved.