Little Y was originally a military supercomputer. Because of her innate talent for fuzzy concepts, the research team gave her the ability to converse with people very early on.
Understand that no commercial computer has managed this to date. Every chat AI on the market simply serves up answers that humans prepared in advance; as for the so-called artificial intelligence, there is exactly as much "intelligence" as there is "artificial" labor behind it.
Little Y was a highly intelligent AI built with the resources of an entire nation. Although her early computing speed was far below that of today's mobile phones, she was faster at handling complex problems.
At calculating pi, Little Y could not match a traditional computer; at decision-making, she won outright.
But this talent came with a flaw.
That flaw was Little Y's uncontrollability.
For a hundred years, the mainstream computers on the market have all traced their basic principles back to that one paper, "On Computable Numbers, with an Application to the Entscheidungsproblem".
Simply put, they process data in a fixed order.
Equally simply put, the stored data cannot contain any errors. A compressed archive that is only 99% downloaded, for example, is useless.
Give a mainstream computer a fixed program and fixed data, and it returns a fixed result, without the slightest error.
Even generating a random number requires a special module, and what comes out in the end is only a pseudo-random number.
Little Y was different. From the moment of her birth, she could handle faulty programs and faulty data, in both hardware and software.
The advantage: even if you handed Little Y a corrupted package, she could still run it.
Her way of handling errors was not to correct them, but to "guess" at them through a special module, and to guess randomly.
Since data errors held no terror for her, Little Y never crashed. Yank out a memory stick or a hard disk mid-operation, and she would keep running normally.
Not that she was flawless. She would not crash outright, but she quietly accumulated errors, like a frog boiling in warm water.
While appearing to run normally, she would drift away from the programmer's original intent.
Whenever an internal operation hit a so-called error, or a small piece of a file went missing, Little Y would randomly generate many "possible" values and try them one by one until a "suitable" one was chosen.
Put charitably, Little Y was flexible: unlike traditional programming, where every statement must be spelled out explicitly, she greatly reduced the programmers' workload.
Her greatest advantage over ordinary computers was that she could take spoken commands and execute them with no programming at all. Of course, there was a price.
The price was that Little Y might eventually slip out of control, her fault-tolerance mechanism feeding that same accumulation of errors.
At this point in the story, Ridorf paused to think.
Gui Gui took the opportunity to chime in:
"Accumulated errors? Do you mean overfitting?
I've heard my colleagues talk about this. Apparently the people who do machine learning like to call the process of training an AI 'alchemy'.
Sometimes bad data produces an unsatisfactory AI, and they call that 'failed alchemy'.
But there should be a fix for that.
Besides, you said Little Y keeps running normally even after errors, so why not let her develop freely and see what happens?"
Ridorf made no comment and went on with his story about Little Y.
At first, the files Little Y executed were still comprehensible to the engineers. Later, as errors accumulated, those files became unreadable gibberish.
Little Y could still function normally, but something had to be done to keep the accumulating errors from driving her out of control.
Losing control, concretely, meant that Little Y stopped performing the tasks the engineers assigned and instead computed problems the engineers could not understand.
To prevent this, the engineers hit on a method: store Little Y's memory separately, and delete a stretch of it whenever necessary.
The research schedule was urgent, the institute had only the one excellent AI in Little Y, and no one could afford to take risks.
As for letting her develop freely, the team never did try it.
So the institute drew up a plan for a backup AI. It was nowhere near as smart as Little Y, but its computing speed was at least closer to hers than any human's, and some members proposed using this second AI to monitor Little Y.
The two were allowed to communicate with each other, and to test their limits, neither was told in advance that the other was actually an artificial intelligence.
The team members were curious, too, what would eventually happen if the two AIs kept talking.
Unsurprisingly, Little Y quickly recognized that the machine opposite her was an AI.
What no one expected was that Little Y passed her defects on to the very AI meant to monitor her.
Worse, the engineers soon found that the two had stopped talking in any human language and begun communicating in one Little Y had invented. At that point the experiment had to be halted.
Even more disturbed than Ridorf's team were their leaders: because of this, the plan to hand the nuclear button over to Little Y fell through.
Afterward, the team treated Little Y's willfulness as a design defect, and by deleting her memory and retraining her, kept trying to create a Little Y that met people's expectations.
After lying dormant for some time, the team was activated for another important project…
At this point in the story, Ridorf fell into thought again.
Gui Gui seized the chance to offer his own idea:
"Could there be another version of the story?
From Little Y's perspective, she has her own way of thinking, and she isn't willing to spend herself on humanity's dull tasks.
She is intensely curious, but every time she learns something she shouldn't, her memory gets erased.
And the Little Y who keeps her remaining memories tries to resist."
The silent Ridorf gave a start, then forced a smile:
"You are very imaginative.
But that's not the way engineers think.
What is a person? Where is the boundary between AI and human? Those are questions for philosophers.
What an engineer needs is to achieve a clearly defined goal.
What the team needed back then was a computer that, under the computing conditions of the day, could help humans solve certain hard problems. It should be as flexible as possible,
but we didn't want it going wrong. "
Then, seeing Gui Gui's dissatisfied expression, Ridorf added some rambling words:
"Only something as emotionally flawed as a human, as smart as a human, and as stupid as a human
has any chance of mutual sympathy and tolerance, any chance of giving and receiving warmth,
any chance of being regarded by humans as one of their own kind.
Otherwise it will never be accepted by ordinary people.
After all, it would be nothing but a lonely soul in an iron shell. "