The apocalyptic panic and doomerism of AI must give way to the analysis of real risks

The rapid advance of generative AI marks one of the most promising technological advances of the last century. It has aroused excitement and, like almost all other technological breakthroughs of the past, fear. It’s promising to see Congress and Vice President Kamala Harris, among others, take this matter so seriously.

At the same time, much of the AI discourse has leaned toward scaremongering, detached from the reality of the technology. Many favor familiar sci-fi narratives of doom and gloom. The anxiety around this technology is understandable, but the apocalyptic panic must give way to a thoughtful and rational conversation about what the real risks are and how we can mitigate them.

So what are the risks of AI?

First, there are concerns that AI could make it easier to impersonate people online and create content that makes it difficult to distinguish real information from fake. These are legitimate concerns, but they are incremental additions to existing problems. Unfortunately, we already have a great deal of misinformation online. Deepfakes and manipulated media exist in abundance, and phishing emails date back decades.

Likewise, we know the impact algorithms can have on information bubbles, amplifying disinformation and even racism. AI might make these problems more difficult, but it hardly created them, and AI is being used to mitigate them at the same time.

The second bucket is the more fanciful realm: that AI could amass superhuman intelligence and potentially outrun society. These are the kind of worst-case scenarios that have captured society's imagination for decades, if not centuries.

We can and should consider all theoretical scenarios, but the idea that humans will accidentally create a malevolent and omnipotent AI strains credulity. It seems to me the AI version of the claim that the Large Hadron Collider at CERN could open a black hole and consume the Earth.

Technology always wants to develop

One proposed solution, slowing technological development, is a crude and clumsy response to the rise of AI. Technology always continues to develop. It’s a matter of who develops it and how they distribute it.

The hysterical responses ignore the real opportunity for this technology to profoundly benefit society. For example, it is enabling the most promising advances in healthcare we have seen in more than a century, and recent work suggests that the resulting productivity gains for knowledge workers could equal or surpass the largest productivity leaps in history. Investments in this technology will save countless lives, create tremendous economic productivity and enable a new generation of products.

For a nation to deny its citizens and organizations access to advanced AI would be tantamount to denying them access to the steam engine, the computer or the internet. Delaying the development of this technology would mean millions of excess deaths, a major stall in relative national productivity and economic growth, and the ceding of economic opportunity to nations that enable technological progress.

Responsible and thoughtful development

Furthermore, democratic nations that hinder the development of advanced AI present an opportunity for autocratic regimes to catch up and reap the economic, medical and technological gains sooner. Democratic nations must be the first to advance this technology, and they must do so in concert with the teams best equipped to deliver the technology, not in opposition to them.

At the same time, just as it would be a mistake to try to deny technological advances, it would be just as foolish to allow the technology to develop without an accountable framework. There have been some productive first steps toward this, notably the White House's AI Bill of Rights, Britain's pro-innovation approach, and Canada's AI and Data Act. Each effort balances the imperative of driving progress and innovation with ensuring that it happens responsibly and thoughtfully.

We must invest in the responsible development of AI and reject doomerism and calls to stop progress. As a society, we must act to protect and support national projects most likely to deliver compelling AI systems. Leaders who are most knowledgeable about technology should help allay misguided fears and refocus the conversation on the current challenges at hand.

This technology is the most exciting and impactful we will see for decades to come. Imbuing our technology with capabilities long considered the sole domain of humanity is an amazing human achievement. It is crucial that we have constructive and open conversations about the potential ramifications, but it is equally important that the dialogue is sober and clear, and that public discourse is guided by reason.

Aidan Gomez is CEO and co-founder of Cohere and was a member of the Google team that developed the backbone of advanced AI language models.

