Silicon Valley is hotly debating the “singularity” of artificial intelligence: Has the era of machines surpassing humans arrived?

Original source: 36Kr

Editor’s note: The emergence of artificial intelligence has pushed the concept of the “singularity” to the forefront. Many people in Silicon Valley believe that artificial intelligence will completely change human society, yet there are also concerns about its potential negative consequences, including the possibility that it could destroy humanity. In this compiled translation, the author examines the concept of the “singularity,” Silicon Valley’s mix of excitement and anxiety about artificial intelligence, and the potential benefits and risks of the technology.

Image source: Generated by Unbounded AI

Highlights

  • The singularity, the moment when technology and artificial intelligence dramatically change the world, is a concept that Silicon Valley both eagerly anticipates and fears.
  • While there is excitement about the potential positive impact of artificial intelligence, there is also concern that it could have disastrous consequences for humanity.
  • Currently, much of the attention on AI and the singularity has focused on the development of large language models, but there is still debate about whether these models truly represent the exponential growth in intelligence that the singularity promises.

For decades, Silicon Valley has anticipated the emergence of a technology that will revolutionize human lifestyles, economies, social institutions, and more. It would fuse humans and machines, bring unprecedented opportunities and challenges, and divide history into two eras: “before” and “after.”

This milestone has a name: the “Singularity.”

Translator’s note: The “singularity” is a concept proposed by American science fiction writer Vernor Vinge. It refers to a possible future event in which artificial intelligence surpasses human intelligence, triggering explosive changes in technology and society and making what comes afterward difficult to predict or understand.

The singularity could arise in a number of ways. One possibility is that people make themselves more powerful by incorporating the processing power of computers into their own innate intelligence. Alternatively, computers might become so complex that they can truly think, creating a “global brain.”

Either scenario would bring drastic, exponential, and irreversible change. A self-aware superintelligent machine capable of designing, improving, and upgrading itself far faster than any team of scientists would surely spark an intelligence explosion. Progress that once took centuries could be achieved in just a few years or even months. The singularity is a catapult into the future.

Today, artificial intelligence is making unprecedented waves in technology, business, and politics. Judging by all the hyperbole and absurdity coming out of Silicon Valley, it seems that this extremely rosy future is finally here.

Sundar Pichai, Google’s usually low-key CEO, said that “artificial intelligence has surpassed fire, electricity or any past technological achievements in importance and impact.” Billionaire investor Reid Hoffman said, “The world will usher in an unprecedented force, which will push the entire human society forward.” Microsoft co-founder Bill Gates declared that artificial intelligence “will change the way people work, learn, travel, get healthcare and communicate with each other.”

Artificial intelligence is Silicon Valley’s ultimate new product, offering superhuman capabilities on demand.

But there is also a hidden problem that cannot be ignored. It’s as if tech companies rolled out self-driving cars with a warning that they might blow up on the way to Walmart.

In May of this year, Elon Musk, the head of Tesla and Twitter, said in an interview with CNBC that the emergence of artificial general intelligence is called a “singularity” precisely because it is difficult to predict what will happen afterward. He believes we will usher in “an age of abundance,” but that the risk of artificial intelligence “destroying humanity” still cannot be ignored.

In the tech world, the strongest proponent of artificial intelligence is Sam Altman, CEO of the American artificial intelligence company OpenAI. The startup’s ChatGPT chatbot, launched last year, has also sparked continued enthusiasm. Altman said AI will be “the biggest force in economic empowerment and wealth for many.”

However, Altman also feels that the criticism from Musk, who founded a company developing brain-computer interface technology, is justified.

Not long ago, Altman signed an open letter sponsored by the nonprofit Center for AI Safety. “Preventing the risk of extinction posed by artificial intelligence should be a global priority,” on par with “pandemics and nuclear war,” the letter said. Other signatories include his colleagues at OpenAI and computer scientists from Microsoft and Google.

Sam Altman, CEO of OpenAI, an American artificial intelligence company, is the strongest supporter of artificial intelligence. Image source: Haiyun Jiang

Apocalypse is a familiar, even popular, topic in Silicon Valley. A few years ago, almost every tech executive seemed to have built a doomsday shelter somewhere in the middle of nowhere, well stocked in case it was ever needed. In 2016, Altman said he had stocked up on “guns, gold, potassium iodide, antibiotics, batteries, water, IDF gas masks, and a big patch of land in Big Sur I can fly to.” The outbreak of the pandemic made these tech survivalists feel vindicated, at least for a while.

Now, they are preparing for the arrival of the singularity.

In this regard, Baldur Bjarnason, author of “The Intelligence Illusion”, said, “They think they are very wise, but they sound more like monks in the year 1000 A.D. talking about the apocalypse. It’s a bit worrying.”

The Origin of “Transcendence”

The origins of the concept of a “singularity” can be traced back to computer science pioneer John von Neumann in the 1950s. Von Neumann once predicted that the “constantly accelerating progress of technology” would lead to “a pivotal singularity in human history.”

Computer science pioneer John von Neumann. Image credit: Getty Images

British mathematician Irving John Good was also a strong proponent of this view. He helped the British government crack the German Enigma cipher machine at Bletchley Park, the center of British code-breaking during World War II. In 1964, he wrote: “The early construction of superintelligent machines is the key to the survival of mankind.”

When American film director Stanley Kubrick was making the sci-fi film 2001: A Space Odyssey, he consulted Good about HAL, the film’s artificial intelligence character that turns from benevolent to malicious, an early example of the blurred border between computer science and science fiction.

Hans Moravec, an adjunct professor at the Robotics Institute at Carnegie Mellon University, believes that the advent of the Singularity will not only benefit the living, but also bring the dead back to life.

“We will have the opportunity to recreate and interact with the past in real and immediate ways,” he wrote in Mind Children: The Future of Robot and Human Intelligence.

Entrepreneur and inventor Ray Kurzweil has been the Singularity’s biggest advocate in recent years. He wrote The Age of Intelligent Machines in 1990 and The Singularity Is Near in 2005, and is currently writing a sequel, The Singularity Is Nearer.

He predicts that by the end of this decade, computers will pass the Turing test and become indistinguishable from humans. Fifteen years after that, the real transcendence will come: “When computing technology is integrated into us, our intelligence will be greatly enhanced, perhaps hundreds of times.”

Kurzweil would be 97 by then. With the help of various vitamins and supplements, he hopes he will live to see this era come.

Entrepreneur and inventor Ray Kurzweil is also the biggest advocate of the Singularity. Credit: Friso Gentsch/Picture Alliance

Some critics of the Singularity argue that the concept is an attempt to create a belief system around software, something akin to organized religion, and that without rigorous scientific evidence it is hard to find convincing.

“They all want immortality, but they don’t want to believe in ‘God,’” said Rodney Brooks, former director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Today, the innovations at the center of the “Singularity” debate mainly involve large language models (LLMs), the type of artificial intelligence system that powers chatbots. Ask these models a question and they answer quickly, coherently, and often quite illuminatingly.

According to Jerry Kaplan, a longtime AI entrepreneur and author of Artificial Intelligence: What Everyone Needs to Know, “When you ask a large language model a question, it understands the meaning of the question, determines what the answer should be, and then presents that answer in written language. If this is not the definition of general intelligence, what is?”

Kaplan said he has been skeptical of high-profile technologies such as driverless cars and cryptocurrencies, and he was initially skeptical of the latest AI craze as well, but he has begun to change his mind after seeing the potential it presents.

“If this isn’t ‘the singularity,’ it is certainly a singularity: a technology with a major impact that will significantly advance humanity in the arts, sciences, and knowledge. And, of course, it will also bring some problems,” he added.

Critics counter that even if large language models achieve impressive results, this is a far cry from the vast, global intelligence promised by the Singularity. Part of the difficulty in drawing the line between hype and reality is that the principles and algorithms driving this technology are increasingly hidden from view.

OpenAI started as a nonprofit built on open-source code, but has since turned into a for-profit enterprise. Some critics point out that OpenAI is now effectively a “black box” whose inner workings are difficult for outsiders to understand. Google and Microsoft disclose little about their systems as well.

Much of the current research on artificial intelligence is led by companies that stand to benefit from the results. Microsoft researchers, for example, reported that a preliminary version of OpenAI’s latest model “demonstrates multiple intelligent features,” including “abstract ability, comprehension, visual ability, coding ability” and “the ability to understand human motivation and emotion.”

Rylan Schaeffer, a doctoral student in computer science at Stanford University, says some AI researchers have been inaccurate in describing the “emergent abilities” exhibited by these large language models: capabilities that seem to appear unpredictably in large models even though they are not apparent or present in smaller versions.

Schaeffer and two Stanford colleagues, Brando Miranda and Sanmi Koyejo, examined the question in a research paper published in May, concluding that these emergent abilities are largely an “illusion” produced by how performance is measured rather than by increases in model size and complexity. Researchers, they suggest, may be inclined to see the results they want to see.
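The measurement argument can be made concrete with a minimal numerical sketch (an illustration with hypothetical numbers, not code from the paper): if a model’s per-token accuracy improves smoothly with scale, an all-or-nothing metric such as exact match on a ten-token answer can still appear to jump suddenly, as if a new ability had “emerged.”

```python
# Illustration (hypothetical numbers) of the measurement argument made by
# Schaeffer, Miranda, and Koyejo: per-token accuracy that improves smoothly
# with scale can still produce an abrupt-looking "emergent" curve when scored
# with an all-or-nothing metric such as exact match on a multi-token answer.

import numpy as np

model_sizes = np.logspace(7, 11, 9)                    # hypothetical parameter counts
per_token_acc = 1 - 0.9 * (1e7 / model_sizes) ** 0.3   # smooth, made-up scaling curve

answer_length = 10   # every one of these tokens must be right for an exact match

for size, p in zip(model_sizes, per_token_acc):
    exact_match = p ** answer_length                   # probability the whole answer is correct
    print(f"{size:10.1e} params | per-token acc {p:.3f} | exact match {exact_match:.3f}")

# The per-token column rises gradually, while the exact-match column sits near
# zero and then climbs steeply -- an apparent "emergence" created by the choice
# of metric, not by any sudden change in the underlying model.
```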

Immortality, Immortality

In Washington, London and Brussels, lawmakers are beginning to recognize the opportunities and problems posed by artificial intelligence and are starting to discuss regulatory issues. Altman is doing a promotional roadshow for OpenAI, aiming to brush off early critics while casting his company as a leader in the age of the singularity.

This includes being open to regulation, but the specific form of regulation is not yet clear. Still, there is a widespread perception in Silicon Valley that government agencies are inefficient and lack the expertise to effectively regulate the fast-moving technology sector.

“There’s no one in the government agency that’s getting this right,” former Google CEO Eric Schmidt told the news talk show “Meet the Press” earlier this year, putting forward the idea of artificial intelligence self-regulation. “But the industry has the capacity to get the regulations roughly right,” he added.

Artificial intelligence, like the technological singularity, is seen as bringing irreversible change. Altman and his colleagues recently stated that curbing the further development of artificial intelligence would require “control mechanisms similar to a global regulatory system,” and that even this “cannot guarantee success.” The implication: if they don’t build it, someone else will.

What is rarely discussed, however, is the enormous profit to be made. Despite the popular perception of artificial intelligence as a machine that creates unlimited wealth, it is mostly the already wealthy who are actually profiting from it.

This year, Microsoft’s market value has soared by hundreds of billions of dollars. Nvidia, which makes the chips that power artificial intelligence systems, has also recently become one of the most valuable public companies in the U.S. thanks to surging demand for its chips.

“Artificial intelligence is the technology that human society has always desired to have,” Altman tweeted.

It is undeniable that this is indeed a technology that the technology world has been waiting for, and it has come at a perfect time.

Last year, Silicon Valley suffered a double whammy of layoffs and rising interest rates, while cryptocurrencies, after a period of boom and bust, quickly waned due to fraud and the disappointment that followed.

“Follow the money,” says Charles Stross, co-author of The Rapture of the Nerds, a science fiction novel that humorously portrays the technological singularity. Stross is also the author of Accelerando, a science fiction novel that paints a more serious picture of what life in such a future might look like.

“The real opportunity is that companies will be able to replace many defective, expensive, unresponsive, human-operated information processing units with software, thereby reducing costs and increasing efficiency,” he said.

The technological singularity has long been imagined as an event of global impact, one that completely upends human understanding of the world with astonishing force. For now, that possibility still exists.

However, given Silicon Valley’s relentless pursuit of corporate profit, the technological singularity may first arrive as a tool for layoffs. In the pursuit of a trillion-dollar market capitalization, smaller concerns can be set aside for the time being.
