AI doomers are a cult, here is the real threat, according to Marc Andreessen

  • On Tuesday, venture capitalist Marc Andreessen published a nearly 7,000-word missive on his views on artificial intelligence and the risks it poses.
  • Andreessen points out that artificial intelligence is not sentient, although its ability to mimic human speech can lead people to think otherwise.
  • “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote.

Marc Andreessen, partner of Andreessen Horowitz

Justin Sullivan | Getty Images

Venture capitalist Marc Andreessen is known for saying that “software is eating the world.” When it comes to AI, he says people should stop worrying and build, build, build.

On Tuesday, Andreessen published a nearly 7,000-word missive about his views on artificial intelligence, the risks it poses, and the regulation he believes it requires. In an attempt to counter all the recent talk of “AI doomerism,” he presents what could be seen as an overly idealistic perspective on the implications.

Andreessen begins with an accurate view of artificial intelligence, or machine learning, calling it “the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.”

AI is not sentient, he says, even though its ability to mimic human speech may understandably lead some to believe otherwise. It is trained on human language and finds high-level patterns in that data.

“AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote. “And AI is a machine — it’s not going to come alive any more than your toaster will.”

Andreessen writes that there is a “wall of fear-mongering and doomerism” in the AI world right now. Without naming names, he’s likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity. Last week, Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis and others signed a statement from the Center for AI Safety warning of the “risk of extinction” from AI.

Tech CEOs are motivated to promote such doomsday views because they “stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition,” Andreessen wrote.

Many AI researchers and ethicists have also criticized the doomsday narrative. One argument is that excessive focus on the growing power of AI and its future threats distracts from the real-life damage some algorithms cause to marginalized communities right now, rather than in an unspecified future.

But that’s where most of the similarities between Andreessen and the researchers end. Andreessen writes that people in roles such as AI safety expert, AI ethicist, and AI risk researcher “are paid to be doomers, and their statements should be processed appropriately.” In fact, many leaders in the AI research, ethics, trust, and safety community have voiced clear opposition to the doomsday agenda and have instead focused on mitigating the documented risks of today’s technology.

Instead of acknowledging AI’s documented real-life risks (its biases can infect facial recognition systems, bail decisions, criminal justice proceedings, mortgage approval algorithms and more), Andreessen says AI could be “a way to improve everything we care about.”

He argues that AI has enormous potential for productivity, scientific breakthroughs, the creative arts, and the reduction of wartime death rates.

“Anything people do with their natural intelligence today can be done much better with AI,” he wrote. “And we will be able to tackle new challenges that have been impossible to tackle without artificial intelligence, from curing all diseases to achieving interstellar travel.”

While AI has made great strides in many areas, such as vaccine development and chatbot services, the technology’s documented harms have led many experts to conclude that, for certain applications, it should not be used at all.

Andreessen describes these fears as irrational “moral panic.” He also advocates a return to the tech industry’s past “move fast and break things” approach, writing that both big AI companies and startups “should be allowed to build AI as fast and aggressively as possible” and that the technology “will accelerate very quickly from here — if we let it.”

Andreessen, who rose to prominence in the 1990s for developing the first popular internet browser, started his venture capital firm with Ben Horowitz in 2009. Two years later, he wrote an oft-quoted blog post titled “Why Software Is Eating the World,” which argued that healthcare and education needed to undergo a “fundamental software-based transformation,” just like so many other industries before them.

Eating the world is exactly what many people fear when it comes to AI. Beyond trying to squelch those concerns, Andreessen says there’s work to be done. He encourages the controversial use of AI itself to protect people from AI harms.

“Governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities,” he wrote.

In Andreessen’s idealistic future, “every child will have an AI tutor who is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” He expresses similar visions for the role of AI as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander.

Near the end of his post, Andreessen points out what he calls “the actual risk of not pursuing AI with maximum force and speed.”

That risk, he says, is China, which is developing AI rapidly and with deeply concerning authoritarian applications. Years of documented cases show the Chinese government relying on AI for surveillance, such as using facial recognition and phone GPS data to track and identify protesters.

To ward off the spread of Chinese AI influence, Andreessen writes, “We should drive AI into our economy and society as fast and as hard as possible.”

He then offers a plan for aggressively developing AI through big companies and tech startups, and for using “the full power of our private sector, our scientific establishment, and our governments.”

Andreessen writes with a level of certainty about where the world is going, but he hasn’t always been good at predicting what will happen.

His firm launched a $2.2 billion crypto fund in mid-2021, just before the industry started to slump. And one of his big bets during the pandemic was social audio startup Clubhouse, which soared to a $4 billion valuation as people were stuck at home looking for alternative forms of entertainment. In April, Clubhouse said it was laying off half its staff to “reset” the company.

Throughout Andreessen’s essay, he emphasizes the ulterior motives others have when expressing their views on AI publicly. But he has his own: He wants to make money from the AI revolution and is investing in startups with that goal in mind.

“I don’t think they are reckless or villains,” he concluded in his post. “They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100 percent.”

WATCH: CNBC interview with Brad Gerstner of Altimeter Capital