……
"The most common answer is technology, which is indeed true. Technology is a great achievement accumulated by our human history."
"The rapid development of science and technology is the direct reason, which is why we are so efficient now, but we want to explore the ultimate reason in the future."
"There is a 250000 generation gap between us and our ancestors. During this period, we have gone from picking up stones on the ground as weapons to using atomic energy to make destructive super bombs. Now we know that it takes a long time for such a complex mechanism to evolve. These huge changes depend on the small changes in the human brain, the chimpanzee brain and the human brain There's no big difference, but humans won, we're out there and they're in the zoo! "
"It is thus concluded that in the future, any significant change in the thinking matrix will lead to significant differences in the outcome."
Ren Hong took a sip of water, paused for a moment and continued:
"some of my colleagues think that we or human beings will soon invent technologies that can completely change the mode of human thinking, that is, superhuman artificial intelligence, or super AI or super intelligent body."
"The artificial intelligence that we humans now grasp is to input a certain instruction into a box. In this process, we need programmers to transform knowledge into executable programs. For this purpose, we will establish a set of professional systems, such as PHP, C + + and other computer languages."
"They're stiff, you can't extend their functions, basically you can only get what you put in, that's all."
"Despite the rapid development and maturity of our artificial intelligence technology today, it still fails to achieve the strong interdisciplinary and comprehensive learning ability like human beings."
"So we now have a question: how long will it take for human beings to have this powerful capability in artificial intelligence?"
"Matrix technology has also conducted a questionnaire survey on the world's top AI experts to collect their opinions. One of the questions is: when do you think that human beings will create artificial intelligence at human level?"
"We define AI in this questionnaire as the ability to perform any task as well as an adult. An adult will be good at different jobs, etc., so that the ability of the artificial intelligence will no longer be limited to a single field. "
"The middle number of the answer to this question is now, in the middle of the 21st century. Now it seems that it will take some time, and no one knows the exact time, but I think it should be fast."
"……We know that a neuron carries signals along its axon at no more than 100 meters per second, whereas in a computer signals travel at the speed of light. There are limits of size as well: a human brain has to fit inside a skull and cannot be doubled in volume, but a computer can be scaled up again and again, to the size of a box, a room, even an entire building. These advantages can never be ignored."
"So super AI may be lurking in it, just like atomic energy lurking in history until 1945."
"In this century, humans may awaken the wisdom of super AI, and we will see a big explosion of wisdom. When people are thinking about what's smart and what's stupid, especially when we're talking about power and power. "
"For example, chimpanzees are strong, the same size as two healthy men, but the key between the two depends more on what humans can do than what chimpanzees can do."
"So when super AI comes along, the fate of humans may depend on what this super intelligence wants to do."
"Just imagine, super wisdom may be the last invention that human beings need to create. Super wisdom is smarter than human beings and better at creating than us. It will also do so in a very short time, which means that it will be a shortened future."
"Imagine all the crazy technologies we've ever imagined. Maybe humans can complete and realize them in a certain period of time, such as ending aging, immortality, colonization of the universe..."
"It seems that such elements only exist in the science fiction world, but also conform to the laws of physics. Super wisdom has a way to develop these things, and it is faster and more efficient than human beings. We need 1000 years to complete an invention. Super AI may only take one hour, or even shorter. This is the shortened future."
"If there is a super intelligent body with such mature technology, its strength will be unimaginable. Under normal circumstances, it can get anything it wants, and our future will be dominated by the preferences of this super AI."
"So the question is, what does it like?"
"This problem is very difficult and serious. To make progress in this field, for example, one way is to think that we must avoid personifying super AI. It has a taste of different opinions whether to block or not."
"It's an ironic question, because every news report about the future of AI or related topics, including what we're doing, may be labeled with the poster of the Hollywood sci-fi film terminator in tomorrow's news. Robots fight against humans (shrug, laughter off the field).""So, I personally think that we should express this issue in a more abstract way, rather than in the narrative of Hollywood movies in which robots stand up against humans, war, and so on, which is too one-sided."
"We should think of super AI abstractly as an optimization process, such a process as programmer's optimization of programs."
"Super AI, or super intelligent body, is a very powerful optimization process. It is very good at using resources to achieve the ultimate goal, which means that there is no inevitable link between having high intelligence and having a goal that is useful to human beings."
"If it's not easy to understand this sentence, let's take a few examples: if the task we give artificial intelligence is to make people laugh, robots like our current home machine assistants may make funny performances to make people laugh, which is a typical behavior of weak artificial intelligence."
"And when the AI given the task is a super intelligence body, super AI, it will realize that there is a better way to achieve this effect, or complete the task: it may control the world and insert electrodes into all human facial muscles to make people laugh constantly."
"Another example is that the super AI's task is to protect the owner's safety, so it will choose a better way to deal with it. It will imprison the master at home and not be allowed to go out, which can better protect the owner's safety. It may still be dangerous at home. It will also take into account all kinds of factors that may threaten and lead to the failure of the task, and wipe them out one by one, eliminate all the factors that are malicious to the host, and even control the world. All these actions are for the sake of not failing the task. It will make the most optimal choice and put it into action to achieve the goal of completing the task. "
"For another example, if we give this super ai the task goal is to solve an extremely difficult mathematical problem, it will realize that there is a more effective way to achieve the task goal, that is, to turn the whole world, the whole earth and even more exaggerated scale into a supercomputer, so that its computing power is more powerful, and it is easier to complete the task goal Yes. Moreover, it will realize that this method will not be accepted by us. Human beings will stop it. Human beings are potential threats in this mode. Therefore, it will solve all obstacles for the ultimate goal, including human beings, any affairs, such as planning some secondary plans to eliminate human beings and other behaviors. "
"Of course, these are exaggerated descriptions. We can't be wrong when we encounter such things. But the gist of the three exaggerated examples is very important. That is, if you create a very powerful optimization program to achieve the maximum goal, you must ensure that your goal in the sense of meaning and all the things you care about are accurate. If you create a powerful optimization process and give it a wrong or imprecise goal, the consequences may be like the example above. "
"One might say that if a 'computer' starts to plug electrodes into the face, we can turn off the computer. In fact, this is definitely not an easy thing. If we are very dependent on this system, such as the Internet we rely on, do you know where the Internet switch is? "
"So there must be a reason that we humans are smart, and we can meet threats and try to avoid them. In the same way, super AI, smarter than us, can only do better than us."
"On this issue, we should not be confident that we can control everything."
"Let's simplify the problem. For example, we put artificial intelligence into a small box and create a safe software environment, such as a virtual reality simulator that it can't escape."
"But, we really have full confidence and grasp it, it is impossible to find a loophole, a loophole that can let him escape?"
"Even we human hackers can find network vulnerabilities all the time."
"I might say that I'm not very confident that super AI will find bugs and escape. So we decided to disconnect the Internet to create a gap insulation, but I have to repeat that human hackers can cross this gap again and again with social engineering. "
"For example, now, while I'm talking, I'm sure an employee here will ask him to hand over his account details at some time, for the reason of giving it to the computer information department, or other examples. If you're the AI, you can imagine using the intricate winding of electrodes in your body to create a kind of radio wave to communicate. "
"Or you can pretend there's something wrong. At this point, the programmer will open you to see what went wrong, they will find out the source code, and in the process you can gain control. There are countless examples of how you can use artificial intelligence as a side-effect of our scheme
"Therefore, any attempt to control a super AI is extremely ridiculous. We can't show excessive confidence that we can control a super intelligent body forever. It will break away from control one day. After that, will it be a benevolent God?""Personally, I think it's inevitable for AI to be personified, so I think we need to understand that if we create super AI, even if it is not restricted by us. It should still be harmless to us, it should be on our side, and it should have the same values as us. "
"So are you optimistic that this problem can be solved effectively?"
"We don't have to write down all the things we care about with super AI, or even turn them into computer language, because it's a task that will never be done. Instead, we should create artificial intelligence that uses its own wisdom to learn our values, which can motivate it to pursue our values, or to do things that we will approve of and solve valuable problems. "
"It's not impossible, it's possible. The results can benefit mankind a lot, but it won't happen automatically. Its values need to be guided."
"The initial conditions of the big bang of wisdom need to be correctly established from the most primitive stage."
"If we want everything not to be deviated from our expectations, the values of artificial intelligence and our values complement each other, not only in familiar situations, such as when we can easily check its behavior, but also in the unprecedented circumstances that all artificial intelligence may encounter, and in the boundless future, our values will still be consistent There are many profound problems to be solved, such as how to make decisions, how to solve logical uncertainty and other similar problems. "
"This task seems a little difficult, but it's not as difficult as creating a super intelligence, is it?"
"It's still quite difficult (laughter spreads all over the room again)!"
"What we are worried about is that creating a super AI is really a big challenge, and creating a safe super AI is a bigger challenge. The risk is that if we solve the first problem, we can't solve the second problem of ensuring security, so I think we should think of solutions in advance that will not deviate from our values, so that we can You can use it when you need it. "
"Now, maybe we can't solve the second security problem, because there are some factors that you need to understand, and you need to apply the details of that actual architecture to effectively implement it."
"If we can solve this problem, when we enter the era of real super intelligence, it will be more smooth, which is a very worthwhile thing for us to try."
"And I can imagine that if everything goes well, hundreds, thousands or millions of years from now, when our descendants look back on our century, they may say that the most important thing our ancestors, our generation did, was to make the most right decisions."
"Thank you."
………