Bill Gates' latest blog post: The risks of AI are real, but controllable

Jinse Finance

Author: Jin Qiong

ChatGPT has been popular for more than half a year. Beyond its rapid development and the technical problems still to be solved, compliance may be its next big challenge: after Sam Altman was called to testify before Congress, the FTC (Federal Trade Commission) has now officially opened an investigation into ChatGPT.

This echoes what Bill Gates said a few days ago. In his latest blog post, he argued that AI is the most transformative innovation of our time: its risks are real, but they are controllable.

In the blog post, Bill Gates mainly discusses the risks and challenges of artificial intelligence and how to deal with them, and also provides some examples and suggestions.

Bill Gates first acknowledged that people's concerns about AI today are legitimate: we are in an era of profound change driven by artificial intelligence, and an uncertain one. But he is optimistic that these risks are manageable.

Gates then listed the risks posed by current AI: AI-generated deepfakes and misinformation could undermine elections and democracy; AI makes it easier for individuals and governments to launch attacks; it could take away people's jobs; it inherits human biases and makes things up; and students may not learn to write because AI will do it for them.

Finally, Gates put forward a series of proposals for regulating AI. At the government level, countries need to build up expertise in artificial intelligence so they can formulate laws and regulations that address disinformation and deepfakes, security threats, changes in the job market, and the impact on education.

As for solutions, government leaders need to cooperate with other countries rather than go it alone, and must be able to hold informed, thoughtful dialogues with their constituents.

For enterprises, AI companies must take a responsible attitude and work to ensure safety, including protecting people's privacy.

The following is the full text, translated by GPT-4 and edited by 36Kr:

The risks posed by AI can seem overwhelming. What happens to people whose jobs are taken by intelligent machines? Will AI affect election results? What if a future AI decides it no longer needs humans and wants to get rid of us?

These are legitimate questions, and we need to take seriously the concerns they raise. But we have every reason to believe we can deal with them: this isn’t the first time a major innovation has introduced new threats that must be contained, and we’ve encountered them before.

Whether it was the advent of the automobile or the rise of the personal computer and the internet, people have lived through other transformative moments that, despite many upheavals, ended up for the better. Shortly after the first cars hit the road, the first crashes happened. But instead of banning cars, we have adopted speed limits, safety standards, driver’s license requirements, drink-driving laws, and other rules of the road.

We are now in the early stages of another profound transformation, the age of AI, similar to the era of uncertainty before speed limits and seat belts. AI is changing so quickly that it’s not clear what’s going to happen next. We are facing big questions about how current technology works, how people use it for malicious purposes, and how artificial intelligence can change us as members of society and as individuals.

In moments like these, it’s natural to feel uneasy. But history shows that it is possible to address the challenges posed by new technologies.

I’ve written about how artificial intelligence will revolutionize our lives, helping to solve problems in health, education, climate change, and more that seemed intractable in the past. The Gates Foundation is making this issue a priority, and our CEO, Mark Suzman, recently shared his reflections on the role of AI in reducing inequality.

I will say more about the benefits of AI in the future, but in this post I want to address the most common concerns I hear and read about, many of which I share, and explain how I think about them.

One thing is clear from everything written so far about the risks of artificial intelligence: no one has all the answers. Another thing that is clear to me is that the future of AI is neither as grim as some imagine nor as rosy as others imagine. The risks are real, but I am optimistic that they can be managed. As I discuss each concern in turn, I will return to a few themes:

  • Many of the problems raised by artificial intelligence have historical precedent. It will have a huge impact on education, for example, but so did portable calculators decades ago and, more recently, allowing computers into the classroom. We can learn from past successes.
  • Many problems brought about by artificial intelligence can also be solved with the help of artificial intelligence.
  • We need to adapt old laws and enact new ones - just as existing anti-fraud laws must adapt to the online world.

In this article, I will focus on existing or imminent risks. I’m not going to discuss what will happen when we develop an AI capable of learning any subject or task. Instead, I’m talking about purpose-built AI today.

Whether we get to this point in a decade or a century, society will need to confront some profound questions. What if a super AI established its own goals? What if they conflict with human goals? Should we be building super AI?

However, immediate risks should not be overlooked when considering these long-term risks. I now turn to these short-term risks.

AI-Generated Deepfakes and Misinformation Could Destroy Elections and Democracy

The use of technology to spread lies and disinformation is nothing new. People have been spreading lies through books and leaflets for centuries. This practice has become easier with the advent of word processors, laser printers, email and social networking.

Artificial intelligence has taken the problem far beyond fake text: it now lets almost anyone create fake audio and video, known as deepfakes. If you receive a voice message that sounds like your child saying, "I've been kidnapped, please send $1,000 to this bank account within the next 10 minutes, and don't call the police," its emotional impact will be far greater than that of an email with the same content.

On a larger scale, AI-generated deepfakes could be used to try to influence election results. Of course, it doesn’t take sophisticated technology to cast doubt on the legitimate winner of an election, but artificial intelligence will make the process much easier.

Already, fake videos containing fabricated footage of well-known politicians have emerged. Imagine that on the morning of an important election, a video of a candidate robbing a bank goes viral. Even if it is false, it could take news organizations and the campaign hours to prove it. How many people will see the video and change their vote at the last minute? It could change the outcome of the race, especially a close one.

When OpenAI co-founder Sam Altman testified before a U.S. Senate committee recently, senators from both parties raised concerns about AI's impact on elections and democracy. I hope this topic continues to receive attention. We certainly have not solved the problem of disinformation and deepfakes. But two things make me cautiously optimistic. One is that people are capable of learning not to trust everything. For years, email users fell for messages from someone posing as a Nigerian prince who promised huge rewards in exchange for a credit card number. But eventually, most people learned to look twice at those emails. As the scams grow more sophisticated, we will need to build the same awareness for deepfakes.

Another thing that makes me hopeful is that AI can not only create deepfakes, but also help identify them. For example, Intel has developed a deepfake detector, while the government agency DARPA is working on how to identify whether a video or audio has been doctored.

It will be a cyclical process: someone finds a way to detect fakes, someone else figures out how to defeat the detector, someone develops a countermeasure to that, and so on. It won't be perfectly successful, but we won't be helpless either.

AI makes it easier for humans and governments to launch attacks

Today, when hackers want to find an exploitable flaw in software, they do it by brute force, writing code that hammers at potential weaknesses until they find one. This involves many dead ends, so it takes time and patience.

Security professionals who want to fight the hackers must do the same. Every software patch you install on your phone or laptop represents many hours of searching by someone, benign or malicious.

AI models will speed up this process by helping hackers write more efficient code. They will also be able to use public information about individuals, such as where they work and who their friends are, to develop more advanced phishing attacks than we’re seeing today.

The good news is that AI can be used for good purposes as well as malicious ones. Security teams in government and the private sector need the latest tools to find and fix vulnerabilities before criminals exploit them. I hope the software security industry expands the work it is already doing in this area; it should be its top concern.

This is why we should not try to temporarily stop people from implementing new developments in artificial intelligence, as some have suggested. Cybercriminals won't stop making new tools, and neither will people who want to use AI to design nuclear weapons or bioterror attacks; the effort to stop them needs to continue at the same pace.

There is a related risk at the global level: an arms race in AI that could be used to design and launch cyberattacks against other countries. Every government wants the most powerful technology available to deter attacks from adversaries. This "don't let anyone get ahead" incentive could spark a race to create ever more dangerous cyberweapons. In that case, everyone would be worse off.

It’s a scary idea, but we have history as a lesson. Despite its flaws, the world’s nuclear weapons non-proliferation regime prevented the all-out nuclear war that my generation grew up terrified of. Governments should consider creating a global AI agency similar to the International Atomic Energy Agency.

AI will take jobs from people

The main impact of artificial intelligence on work in the coming years will be to help people do their jobs more efficiently. That is true whether you work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas to write emails and manage your inbox for you. You will be able to write a request in plain English, or any other language, and generate an informative report on your work.

As I said in my February article, increased productivity is good for society. This leaves more time for people to do other things, whether at work or at home. And, the need for helpful people, such as teaching, caring for the sick, and caring for the elderly, will never go away.

But some workers will need support and retraining as we transition to an AI-driven workplace. It is the job of governments and businesses to manage this well so that workers aren't left behind, avoiding the kind of disruption to people's lives that occurred when manufacturing jobs were lost in the United States.

Also, keep in mind that this isn’t the first time new technologies have led to significant changes in the labor market. I don’t think the impact of artificial intelligence will be as huge as the industrial revolution, but it will definitely be as huge as the advent of the personal computer. Word processing applications didn’t eliminate office work, but they changed it forever. Employers and employees had to adapt, and they did. The transformation brought about by AI will be a bumpy transition, but there is every reason to believe we can reduce disruption to people’s lives and livelihoods.

AI inherits our biases and makes things up

A hallucination is when an AI confidently makes a claim that simply isn't true, usually because the machine doesn't understand the context of your request. Ask an AI to write a short story about a vacation to the moon and it might give you a very imaginative answer. But ask it to help plan a trip to Tanzania, and it might try to book you into a hotel that doesn't exist.

Another risk of AI is that it reflects or even exacerbates existing biases about certain gender identities, races, ethnicities, etc.

To understand why such hallucinations and biases arise, it is important to know how the most common current AI models work. They are essentially very sophisticated versions of the code that lets your email app predict the next word you are going to type: they scan vast amounts of text, in some cases nearly all the text available on the web, analyze it, and find patterns in human language.

When you ask the AI a question, it looks at the words you use and then searches for text that is often associated with those words. If you write “list ingredients for pancakes,” it might notice that the words “flour, sugar, salt, baking powder, milk, and eggs” appear frequently with the phrase. Then, based on what it knows about the order in which those words usually appear, it generates an answer (AI models that work this way use so-called transformers, and GPT-4 is one such model).
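The pancake example above can be illustrated with a toy bigram model. This is a hypothetical sketch of my own, not how GPT-4 works (transformers learn far richer, longer-range patterns), but it shows the core idea of predicting a word from the words that most often follow it in training text:

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; real models scan billions of documents.
corpus = (
    "list ingredients for pancakes : flour sugar salt baking powder milk eggs . "
    "list ingredients for pancakes : flour milk eggs sugar ."
).split()

# Count which words follow each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("ingredients"))  # → "for"
print(predict_next("pancakes"))     # → ":"
```

Generating a whole answer would mean repeating this prediction word by word; transformer models do the same thing conceptually, but condition each prediction on the entire preceding context rather than just the previous word.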

This process explains why an AI can hallucinate or be biased: it has no contextual understanding of the question you ask or of the things you tell it. If you tell it that it made a typo, it will probably reply, "Sorry, I made a typo." But that is an illusion: it didn't type anything. It says so only because it has scanned enough text to know that "sorry, I made a typo" is the kind of sentence people often write after being corrected.

Likewise, AI models inherit whatever biases are baked into the text they were trained on. If a model reads a lot of articles about doctors, and those articles mostly mention male doctors, then its answers will assume that most doctors are men.

Although some researchers argue that hallucination is an inherent problem, I disagree. I am optimistic that, over time, we can teach AI models to distinguish fact from fiction. OpenAI, for example, has done promising work on this.

Other organizations, including the Alan Turing Institute and the National Institute of Standards and Technology, are tackling bias. One approach is to incorporate human values and higher-level reasoning into AI. It’s similar to how self-aware humans work: maybe you think most doctors are male, but you’re conscious enough of that assumption to know you have to consciously fight it. Artificial intelligence can work in a similar way, especially if the models are designed by people from different backgrounds.

In the end, everyone using AI needs to be aware of the issue of bias and be an informed user. A paper you ask an AI to draft can be as full of bias as it is of factual error. You need to check your AI’s biases, as well as your own.

Students will not learn to write because AI will do it for them

Many teachers worry that AI will disrupt their work with students. In an age where anyone with an internet connection can use artificial intelligence to write a respectable first draft of a dissertation, what’s to stop a student from handing it in as their own?

There are already some AI tools that are learning to tell whether an essay was written by a human or a computer, so teachers can tell if their students are doing their homework. But instead of trying to discourage their students from using AI in writing, some teachers are actually encouraging it.

In January, Cherie Shields, a veteran English teacher, published an article in Education Week about how she uses ChatGPT in her classroom. ChatGPT helps her students with everything from getting started on an essay to writing an outline, and it even gives them feedback on their assignments.

She writes: "Teachers must embrace AI technology as another tool that students can use. Just as we once taught students how to do a good Google search, teachers should design clear lessons around how the ChatGPT bot can assist with essay writing. Acknowledging AI's existence and helping students use it could revolutionize the way we teach." Not every teacher has time to learn and use a new tool, but educators like Cherie Shields make a good argument that those who do will benefit greatly.

This reminds me of when electronic calculators became widespread in the 1970s and 1980s. Some math teachers worried that students would stop learning basic arithmetic, but others embraced the new technology and focused on the thinking behind the arithmetic.

AI can also help with writing and critical thinking. Especially in the early days, while hallucination and bias are still problems, educators can have AI generate essays and then fact-check them together with students. Educational nonprofits such as Khan Academy and the OER Project offer teachers and students free online tools that place great emphasis on testing claims, and no skill is more important than knowing how to tell the true from the fake.

We really need to make sure that educational software helps close the achievement gap, not make it worse. Today’s software is primarily geared toward students who are already motivated to learn. It can create a study plan for you, connect you with good resources, and test your knowledge. However, it does not yet know how to engage you in subjects that do not interest you yet. This is a problem that developers need to address so that all types of students can benefit from AI.

What to do next?

I believe we have more reason than not to be optimistic that we can manage the risks of AI while maximizing its benefits, but we need to move fast.

Governments need to develop AI expertise in order to develop laws and regulations to deal with this new technology. They need to deal with misinformation and deepfakes, security threats, changes in the job market, and the impact on education. Just one example: the law needs to clarify which uses of deepfakes are legal and how deepfakes should be labeled so everyone understands that what they see or hear is fake.

Political leaders need to be able to engage in informed, thoughtful dialogue with constituents. They also need to decide how much to cooperate with other countries on these issues, rather than go it alone.

In the private sector, AI companies need to do their work safely and responsibly. This includes protecting people’s privacy, ensuring their AI models reflect fundamental human values, minimizing bias to benefit as many people as possible, and preventing technology from being exploited by criminals or terrorists. Companies across many sectors of the economy will need to help their employees transition to an AI-centric workplace so no one is left behind. And, customers should always know whether they are interacting with an AI or a human.

Finally, I encourage everyone to pay as much attention as possible to the development of artificial intelligence. It is the most transformative innovation we will see in our lifetimes, and a healthy public debate will depend on everyone understanding the technology itself, its benefits, and its risks. The benefits of artificial intelligence will be enormous, and the best reason to believe we can manage the risks is that we have done so before.
