Famous Anthropologist Jiaqi Hu: The Greatest Crisis of Humanity

Host: Mr. Hu Jiaqi is not only an entrepreneur but also a well-known anthropologist. He has been working on problems facing humanity for nearly forty years. His research has produced far-sighted results, and those results are stark warnings. He regards his research not only as a career but also as his responsibility as a human being.
Mr. Hu: We human beings are both a perceptual species and a rational one, and we are often more perceptual than rational. A person's life is shaped by his character, and the fate of a species is shaped by its nature.
Because of the greed, selfishness and shortsightedness in human nature, we often make wrong judgments. Something we think is right may not be true; even what we generally believe to be right may simply be wrong. Some mistakes may destroy mankind completely.

Soon after I entered university in 1979, I learned about the relativistic formula E=mc², the mass-energy equivalence of special relativity. According to the formula, the energy contained in one gram of matter is equivalent to about 20,000 tons of TNT. How much is one gram of matter? It weighs about as much as a peanut kernel. Yet 20,000 tons of TNT is enough to destroy a city; neither of the atomic bombs dropped on Hiroshima and Nagasaki released more energy than that.
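As a quick sanity check on the figure quoted above, the arithmetic can be done directly (a minimal sketch; the roughly 20,000-ton figure assumes the gram of matter is converted entirely into energy):

```python
# E = mc^2 for one gram of matter, expressed in tons of TNT.
# One ton of TNT is conventionally defined as 4.184e9 joules.

m = 1e-3                 # mass in kilograms (one gram)
c = 2.998e8              # speed of light in meters per second
E = m * c ** 2           # energy in joules

JOULES_PER_TON_TNT = 4.184e9
tons_of_tnt = E / JOULES_PER_TON_TNT

print(f"E = {E:.2e} J, about {tons_of_tnt:,.0f} tons of TNT")
```

This works out to roughly 21,000 tons, consistent with the 20,000-ton figure the speaker cites.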
Shortly afterwards, I learned that the former Soviet Union had once detonated a hydrogen bomb with a yield of about 56 million tons of TNT. In fact, the Soviet Union had originally planned a hydrogen bomb of 100 million tons of TNT, not the one it exploded. Both theoretically and technically, they could have produced such a bomb, or an even more powerful one. The power of that single bomb would be equivalent to the high explosive carried by a train as long as the circumference of the earth.
I thought science and technology were astonishing. If they kept developing like this, would they destroy us some day? I thought the question was worth pondering, so I turned to the literature to study it. Soon one fact struck me: it was only with the Industrial Revolution, beginning in the mid-18th century, that human beings really devoted their enthusiasm to developing science and technology. In just over 200 years, humans took science and technology from a very low level to such a high one. I intuitively felt that science and technology could destroy human beings. As a 17-year-old, I had no idea whether that intuition was true. If science and technology destroy mankind thousands of years from now, we need not worry, for it is a problem for future generations to solve. But if they destroy human beings in the near future, what should we do? Is there anything more important than this? Is there any crisis more worth worrying about?
Will it happen? Can we prevent it? And how? I thought these questions worth studying. I thought it was lucky for a person to do something meaningful with his life, and I decided then that I would do only one thing well in mine: devote myself to this question and then disseminate my research results. I have been engaged in this research for nearly forty years. As the research deepened, I became more and more aware of the seriousness and urgency of the problem. By now, I regard the research not only as a career but also as my responsibility as a human being.
It now seems that nuclear weapons alone are unable to destroy human beings. The scientific community has reached such a conclusion: if all the world's nuclear weapons exploded in a nuclear war, billions of people would be killed, and the explosions would lead to a nuclear winter, because sunlight would be blocked by material blasted from the earth's crust into the sky. But mankind would have the chance to start again, because some people would surely survive.
Now there are things more powerful than nuclear weapons, such as bioengineering. Bioengineering can edit and recombine genes. It can make mice grow bigger than cats. Will mice be afraid of cats any more? It can make cattle grow six legs. But it can also recombine the genes of biological toxins to make biological weapons. That is to say, viruses and bacteria whose genes have been recombined could gain the ability to attack the vital organs of human beings, striking the heart or the lungs of a person, or targeting a particular nation or race. Theoretically, all of this can be achieved. It would mean the spread of super plagues.
And there is artificial intelligence (AI), too. AI has penetrated every aspect of our life today. AI is also used in our company, mainly for product screening, price screening and intelligent recommendation on the Internet.
Not long ago, I spent three days studying at Alibaba. Alibaba has launched an AI project called the Ground Shuttle Project. Once the project succeeds, they will cooperate on new retail with enterprises that have physical stores.
What is the Ground Shuttle Project? It means the store knows who you are, your income and your family as soon as you walk in, and then it recommends products to you.
It sounds mysterious, but it is not mysterious at all. When we travel by train to a new place with our cell phones, we receive a welcome message. Why? Because the phone is detected by the local base station. The principle of the Ground Shuttle Project is similar: your cell phone is detected as soon as you enter the store, and once the system knows your phone number, it can know a great deal about you. It knows your occupation and your income, because you may have used Alipay, Taobao or T-mall, and you had to register before using them. Perhaps you recently bought milk powder for six-month-old babies on its website; it estimates that you may have a six-month-old baby and recommends diapers and children's clothing. Perhaps you recently bought ceramic tiles; it estimates that you may be decorating your home and recommends wallpaper and latex paint. Science and technology seem difficult to some people but easy to others. Once hinted at, it becomes clear.
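The kind of inference described above can be sketched with a few hand-written rules. This is purely illustrative: the function, rules and product names below are my own invention, not Alibaba's actual system, which would use statistical models over far more data.

```python
# Toy rule-based recommender mirroring the speaker's two examples:
# each recent purchase is mapped to a guessed household need.

def recommend(recent_purchases):
    suggestions = []
    if "infant milk powder" in recent_purchases:
        # A formula purchase hints at a baby in the household.
        suggestions += ["diapers", "children's clothing"]
    if "ceramic tiles" in recent_purchases:
        # A tile purchase hints at a home renovation in progress.
        suggestions += ["wallpaper", "latex paint"]
    return suggestions

print(recommend(["infant milk powder"]))
print(recommend(["ceramic tiles"]))
```

A real recommender learns such associations from millions of purchase histories rather than hand-coding them, but the input-to-suggestion mapping is the same idea.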
Driverless vehicle technology is an AI strength of some enterprises, such as Google and Baidu. It is hard to imagine a running vehicle with no driver, but in fact it is not as difficult to understand as it seems. Based on image recognition, the vehicle can judge what lies ahead: a person, a car, a stone or a tree. Speech recognition lets it distinguish human voices from the sound of a cow, a car or anything else. The vehicle is navigated by GPS, and its state, including speed, route and destination, is controlled by a program. It does not need a person to drive.
If so many technologies are integrated, it is not difficult to build a machine that kills humans. It has been publicly reported that both the US and Russia are developing robot warriors, and I believe other powers are also doing research in this area. Based on image recognition or speech recognition, a robot warrior first judges whether the figure ahead is an enemy or a friend. Its behavior is controlled by a program: if you are an enemy, it shoots at you; if you are a friend, it leaves you alone.
The robot warriors of the initial stage may not be very powerful, but they will be upgraded rapidly, and the most advanced technologies will be integrated into them. Eventually, a robot warrior will have extraordinary eyesight and hearing, and will be able to fly in the sky, burrow into the ground and even dive into the sea. Its weapons will grow more and more advanced, from pistols and guns to lasers or things we do not know today. Its reaction speed is far faster than that of human beings.
The computers of today can perform billions of operations per second. How many can we humans perform? We cannot compare with them; they react much more quickly than we do. If such robot warriors kill people because we lose control of their programs, they will be a danger to us. But such robot warriors are not the real danger. If robots become able to think as human beings do, they will be far more dangerous. Today's computers, and the robots built on them, can only handle problems of logical thinking. There are different ways to classify human thinking, such as into image thinking and logical thinking, where logical thinking is also called abstract thinking.
Famous Anthropologist Jiaqi Hu: The Greatest Crisis of Humanity
I think another classification is more suitable for what I will explain below: classifying human thinking into logical thinking and abstract thinking. Logical thinking is easy to understand, such as one plus one equals two; it is in charge of memory, computation, judgement, reasoning and so on. Abstract thinking is in charge of association, generalization, refinement and so on. It may not be accurate, but it is highly efficient because of its rapid reaction speed.
Both logical thinking and abstract thinking are useful in daily life. When we go out to buy food, how much we should pay for the cabbage is determined by its price and weight; that is logical thinking. Another example is playing chess. We use abstract thinking more than logical thinking when we play Chinese chess. A player may not think everything through before using his cannon to take the opponent's horse; he may simply sense that the move will help him win. It is not a rigorous calculation but a summary of experience. It ignores the details and simplifies the procedure. It is a single stroke.
If computers and the corresponding robots have both logical thinking and abstract thinking like that of human beings, and are also given the ability to learn by themselves, they will be extremely powerful. A baby knows nothing about the world when it is born, but it grows and learns. It is because of the knowledge I have learned and the experience I have gathered over the past 50 years that I can stand here and give a speech. A robot that thinks like a human and learns by itself, however, can learn very rapidly. It might take only one hour to absorb the knowledge and experience I accumulated over 50 years.
Therefore, Hugo de Garis, who has been called a father of AI, once compared human beings with intelligent robots. He said that compared to intelligent robots, human beings would be weaker than ants. Our reaction speed would be as slow as rock weathering. And how slow rock weathering is! A rock changes only a little in a hundred years. How could a person defeat an intelligent robot? Suppose such a robot thinks, "How can these stupid human beings command me? I will kill them." You may think you could destroy it then. But could you? It would destroy you before you even formed the idea, because it is far more intelligent than you. You may think, "If I can't deal with it face to face, I will play dirty tricks and attack it from behind." Play tricks? It would be better at them than you; it would attack you from behind before you could. You may think you could escape if you cannot fight and win. But where? To the moon? It would kill you before you set off for the moon. Remember, it is far more intelligent than you.
Of course, the examples I gave above do not suggest that I am an authoritative expert in this field, but I know the facts well. One or two hundred years ago, there was no technology that could destroy human beings; people believed so because of the low level of science and technology at that time. It was not until the development of the atomic bomb that mankind first seriously worried that a technology might destroy human beings. The scientists of the Manhattan Project in the United States feared that an atomic bomb might ignite the atmosphere, not by chemical combustion but by nuclear combustion. If the atmosphere burned, how could humans survive? Why were they worried? When an atomic bomb explodes, the temperature at its center exceeds 10 million degrees. We consider a few thousand degrees quite high in chemical combustion, because steel melts at such temperatures. Those scientists did not know what ten million degrees meant or what would happen. But the Second World War was at a tragic stage and a new weapon was needed, and curiosity about the unknown is the natural instinct of scientists, so they pressed on with the atomic bomb. The bomb was made and exploded, and the atmosphere was not ignited.
Hydrogen bombs were developed later. A hydrogen bomb is ignited by an atomic bomb and is much more powerful; the temperature at the center of the explosion can exceed 100 million degrees. The scientists worried again, but developed hydrogen bombs anyway. The hydrogen bombs did not ignite the atmosphere, either.
In 2000, a device called the Relativistic Heavy Ion Collider was built at Brookhaven National Laboratory in the United States. At that time, a scientist sued to stop the project. Why? He worried that the collision of fast particles might produce a tiny black hole or other unknown dangerous particles, and that if such a black hole were stable, it would swallow the earth, leaving behind a black hole smaller than a ping-pong ball. Those who insisted on the project said, "Your worries are unnecessary, because black holes evaporate." What does that mean? According to Stephen Hawking's theory of black hole evaporation, a black hole's strong gravity polarizes the vacuum around it and causes it to evaporate, and the smaller the black hole, the faster it evaporates; a black hole as heavy as a particle would evaporate at the instant of the collision. The opposing scientist replied that the theory existed, but black hole evaporation was only a deduction by Hawking and had never been verified. The bet was too big: we were betting the fate of all mankind. The project was carried out, and the earth was not sucked in.
The Large Hadron Collider in Europe started up in 2008. It was much more powerful than the Relativistic Heavy Ion Collider, with far higher collision energy. That project was also sued because some people thought it dangerous. It was carried out in the end, and the earth, of course, was not sucked in.
Then my article The World Has Gone Mad was published in Hong Kong, in which I raised a question: what is so important that we must bet the fate of all mankind on it? Without human existence, is anything else meaningful? If you keep walking along the river, sooner or later your shoes get wet; if you do not walk in the dark, you will not meet a ghost. Is that so hard to understand? How is this different from playing Russian roulette? We conduct experiment after experiment. Will we not stop until one of them produces human extinction? The experiments continue today, probing further and further with ever more advanced technologies, and so there are more and more factors that could lead to the extinction of human beings.
For example, many scientists worry that nanotechnology will destroy human beings, through the nano-robots it makes possible. What is a nano-robot? A nano-robot is about the size of a molecule. It is a carrier whose only task is to move things the size of a molecule or an atom. If it assembles carbon atoms according to the lattice structure of diamond, diamonds can be produced easily and will no longer be expensive. Put into the human body, it could kill cancer cells in a targeted way and install healthy cells in their place; it could clean our blood vessels and remove kidney stones. It could even turn grass into bread, since both are made of carbon, hydrogen and oxygen. So it has very broad application prospects. But a single nano-robot working alone would be far too slow. Even if it worked so hard that it moved a million atoms a day, a million atoms would not be visible without a microscope. So a nano-robot must do two things. First, it must replicate itself. A nano-robot is only as big as a molecule, and it can assemble atoms into a new nano-robot. One replicates into ten, a hundred, a thousand, and soon there are hundreds of millions of nano-robots. With so many working together, efficiency improves. But then we face another question: what happens if they do not stop replicating? If so, an entire human body would be turned into nano-robots. If they kept replicating, they could turn all living things in the earth's biosphere into bread and even turn the whole world into nano-robots. Some scientists say the problem can be solved by programming the nano-robots to replicate to a certain extent and then stop. But what if there is something wrong with the program? What if there is a bad scientist? That is why many scientists warn that this is dangerous.
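The danger of unchecked replication comes from exponential growth, which a few lines make concrete (an illustration only; the one-copy-per-cycle rate is my assumption):

```python
# If each nano-robot copies itself once per cycle, the population
# doubles every cycle: 1, 2, 4, 8, ... (2**n after n cycles).

count, cycles = 1, 0
while count < 1_000_000_000:   # run until the population passes one billion
    count *= 2
    cycles += 1

print(cycles, count)   # 30 doublings already exceed one billion
```

Thirty doublings take the population from one to over a billion, which is why "replicate to a certain extent and then stop" has to be enforced reliably by the program itself.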
For example, Bill Joy, chief scientist of Sun Microsystems, warned that unlimited replication of nano-robots could completely destroy human beings. Yet not only is the technology still being developed, it has become a hot discipline.
Besides, as mentioned above, some people worry about bioengineering: about super plagues spread by genetic toxins developed through it. And the spread of super plagues is not the only worry; there is another that worries us more. Developing nuclear weapons usually requires the power of a nation, but a high-level scientist can produce genetic bio-toxins independently in his own laboratory. Compared with the behavior of a country, an individual's behavior is far harder to control, especially the behavior of those bent on doing extreme evil.
Comparatively speaking, more people worry that AI will destroy human beings. The immediate cause was that in 2016 a program called AlphaGo, produced by Google, played Go against the world champion Lee Sedol of Korea and won. The event shocked the whole world. As mentioned just now, current computers and the corresponding robots can only solve problems of logical thinking. It is not difficult for a computer to play chess: after you make a move, it calculates all the possible continuations and finds the most reasonable point. But Go is different. It has too many variations; the number of possible games is larger than the number of atoms in the universe. After you make a move, how should the machine move? It cannot calculate everything, because there are too many variations and the amount of computation is too large. Yet this seemingly impossible problem was solved by Google's programmers in a way that is not hard to understand. How? Instead of considering every possible continuation to the end of the game, the program looks ahead only a limited number of moves, say 100, or perhaps 80, or only 70. It evaluates the variations within that horizon and chooses the point with the greatest comparative advantage. And when fewer moves than that remain, the lookahead covers the rest of the game exactly, so the final moves are the most accurate of all. A seemingly abstract problem was thus solved by a machine of logical thinking. It stunned the world. Many scientists worried that it would threaten us. Hawking predicted that the awakening of AI's self-awareness would come sooner or later, and that it would then be difficult for us to control them. I wrote an article called Hawking Put Forward Three Points of View that are the Same as Mine, which was published in a Hong Kong magazine.
The article also served as the preface of Saving Humanity, as well as of the English version of Saving Humanity published in the US.
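The limited-lookahead idea the speaker describes, searching only a fixed number of moves ahead and then picking the most advantageous position, can be sketched as a depth-limited minimax on a toy game tree. This shows only the general principle; AlphaGo itself combined Monte Carlo tree search with neural networks, which this sketch does not attempt to reproduce.

```python
# Depth-limited minimax: instead of searching to the end of the game,
# stop after `depth` moves, score the position with an evaluation
# function, and back the scores up the tree.

def minimax(state, depth, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:        # horizon reached or game over:
        return evaluate(state)        # estimate instead of solving
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# A tiny two-move game tree; leaves carry heuristic scores.
tree = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
scores = {"LL": 3, "LR": 12, "RL": 8, "RR": 2}

best = minimax("root", 2, True,
               lambda s: tree.get(s, []),
               lambda s: scores.get(s, 0))
print(best)   # max(min(3, 12), min(8, 2)) = 3
```

A deeper horizon makes the search more accurate but exponentially more expensive, which is exactly why Go, with its enormous branching factor, resisted brute-force search for so long.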
Let’s summarize the examples I have just mentioned. We can draw three conclusions.
The first conclusion is that science and technology are able to destroy human beings, and that day is not far off.
The second conclusion is that human beings are unable to judge the safety of science and technology comprehensively and accurately. As I mentioned just now, different scientists hold different views of the same technology, though they are all first-class scientists. Why does this happen? Because science and technology are uncertain. I lived in the countryside when I was young, and DDT was used there at the time. Paul Müller, who discovered its insecticidal properties, was awarded the Nobel Prize. After DDT was used widely, many problems were found, and it can no longer be used. Another example is Freon. We thought Freon was excellent when we began using it, but after a time we found it was destroying the ozone layer. Of course, the refrigerants used today are chlorine-free and safe. The ozone layer above the earth is an important protective barrier for the earth's life, the natural result of billions of years of the earth's formation. In mere decades, we humans have damaged an ozone layer that formed over billions of years, and it is hard to repair.
In my opinion, the greatest scientists in human history are Newton and Einstein, and both made major mistakes in scientific research. Newton supported the particle theory of light and opposed the wave theory. We now know that light has wave-particle duality. Because Newton's view was so authoritative, everyone thought he was correct, and the development of optics was hindered for a hundred years.
Einstein opposed the uncertainty principle; he said God does not play dice. We later learned that Einstein was wrong, and the uncertainty principle is a cornerstone of quantum mechanics today. If even great scientists like Newton and Einstein made such serious mistakes, what about ordinary scientists? Science and technology have now developed to a level capable of destroying human beings. Therefore, if we overlook a technology that could exterminate humans when we screen for safety, it means the end of our human race is coming.
The third conclusion is the basic fact that we humans are unable to use science and technology rationally. When nuclear energy was developed, it was first used not to benefit mankind but to make nuclear weapons. Again and again, the most advanced technology has been used first as a tool to kill rather than a tool to benefit mankind. Nowadays science and technology, even some high-end capabilities, are increasingly accessible to individuals and enterprises. Science and technology have developed to a level capable of destroying human beings; if a country, an enterprise or a person that has gained an extinction-level technology uses it even once, it can send us into the abyss of extinction.
Thus my first view can be drawn from the three conclusions above. I have worked on human problems for many years and hold several big views and a series of small ones. The first big view, which I think is the most readily acceptable, is that continuously developing science and technology will soon destroy mankind. It may happen in two or three hundred years, or within this century; I am more inclined to believe the latter. But few people accept this view. I have been promoting it everywhere since I reached the conclusion, because I believe it is extremely important.
Mankind now faces a huge crisis, an unprecedented one: the crisis of extinction. Mankind needs saving. Who can save us? We can only rely on ourselves, and I believe the leaders of great powers can play the most important role in saving mankind. My first monograph, Saving Humanity, was published in 2007; it took me 28 years to write. I took the opportunity to write An Open Letter to the Leaders of Mankind, addressed to the President of the People's Republic of China, the President of the United States, the Russian President and others, and I used the letter as the preface to the Chinese edition of Saving Humanity. Later I wrote letters to the leaders of various countries, to famous scientists and to famous research institutions, and published several more books. My articles appeared in magazines and on the Internet, most of them in both Chinese and English versions. I seize every opportunity to spread my views because I think they are very important. I have given speeches at many universities and research institutes. I am a member of the CPPCC, and I talk with my many friends who are also members. I run a business, and I discuss my views with my friends who are entrepreneurs, and with those who are scientists and writers. I hope to influence them and, through them, lead more people to accept my ideas. To tell the truth, I have not had much success in all these years; most of the time, people consider my views ridiculous. Things have improved recently: because of the rapid development of AI, some people are now vaguely worried that science and technology may exterminate human beings.
One morning in 2013, I found a text message on my cell phone from a friend, the vice president of the China Building Materials Planning and Research Institute. He said, "Hu Jiaqi, I have good news for you. An internationally authoritative research institute has come to the same conclusion as you. Even their proofs are the same as yours." By proofs he meant that the examples they cited were the same as those in my published books and articles. He asked me to look it up on the Internet quickly, and I did. What was it about? A group of the world's top scientists, philosophers and anthropologists had gathered at the Future of Humanity Institute of Oxford University and conducted a study. Their conclusion: mankind could be extinct as early as the next century, and the chief culprit is science and technology. Nuclear weapons cannot destroy human beings, but nanotechnology, bioengineering, artificial intelligence or some future technology may. I was excited when I read the conclusion; I thought such a warning from such an authoritative institute would attract people's attention. I then did three things. First, I wrote The Third Open Letter to the Leaders of Mankind. Second, I published an article called I Have Got a Bosom Friend at Last. Third, I wrote to Nick Bostrom, Director of the Future of Humanity Institute at Oxford, expressing my willingness to cooperate with him. It is a pity that such an authoritative, cautionary conclusion attracted attention only briefly and was soon drowned out. This reminds me of Bill Gates' words: when he was asked questions online, he said that if AI keeps developing like this, it will endanger us within decades. I do not know why those words did not attract people's attention either.
As I mentioned just now, it is useless to hold back the development of a technology merely by saying it must not be developed or is unsafe, by appealing, warning or even suing. Even the appeals and warnings of the world's top scientists are weak voices, soon drowned out. Those who praise a technology and support it always get the flowers and applause.
When the CEO of a Chinese enterprise strong in AI was asked about the safety of AI, he said, "I am an optimist. AI will not be able to destroy mankind, at least not in our generation." Is he an optimist? It cannot destroy mankind in our generation, but what about the next generation? And the one after that?
So I have drawn the following conclusion: the whole society is in a state of development numbness, and development numbness is crisis numbness. The sea is often calm before the great waves come, while undercurrents surge below. When every piece of scientific research proceeds as a matter of course, every scientific achievement is affirmed as a matter of course, and every scientific product is used as a matter of course, a devastating disaster may not be far ahead. Every step we take determines our future and our final outcome. Science and technology have already reached such a high level, yet we keep moving forward. One step further and we may step on the mine, a mine that destroys mankind. Once we set foot on it, we cannot go back. The first time will be the last.
Thank you!
