Jiaqi Hu: Second Edition Preface Revised

Hawking Raised Three Views Identical to Mine

Today marks the eleventh anniversary of the publication of the first edition of my book Saving Humanity. In these eleven years, science and technology have developed rapidly, and human society has changed drastically as well.

Thinking back eleven years to when Saving Humanity was first published, almost no one concurred with all of my core views, and even those who agreed with a single core view were few and far between. However, in the short span of eleven years, a mere second in human history, the development of science and technology has greatly changed the way the world is perceived. Some of my predictions back then are becoming reality today, and some well-known scientists and scholars have come forward with views similar or even identical to my own.

I chose “Hawking Raised Three Views Identical to Mine” as the preface title for the second edition of Saving Humanity in order to use Stephen Hawking, the most famous physicist of our time, to illustrate my points. In reality, many scientists have proposed ideas similar to mine over the past eleven years; I single out Hawking solely because of his fame.

Stating that “Hawking Raised Three Views Identical to Mine” obviously borrows Hawking’s reputation to demonstrate the importance of my research; that I do not deny. The reason I say “Hawking Raised Three Views Identical to Mine” and not the reverse is that my views came first: I proposed them much earlier than he did. All three viewpoints are clearly elaborated in the first edition of Saving Humanity, published in 2007.

The first point: in April 2010, Hawking stated that aliens almost certainly exist, and that we should avoid them and never attempt to contact them, because contact would likely end in the destruction of humanity, as human beings cannot defeat aliens. Hawking’s reasoning and examples are exactly the same as mine, yet came three years later.

The second point: in December 2014, Hawking pointed out that artificial intelligence will eventually develop self-consciousness and replace human beings, since technology develops faster than biological evolution. Once artificial intelligence matures, it may cause humanity’s extinction. Hawking’s reasoning and examples are essentially my own, but came seven years later.

The third point: in March of this year (that is, 2017), in an interview with the London Times, Hawking observed that without proper control, artificial intelligence is likely to destroy humanity. He reasoned that we must identify such threats quickly and act before losing control, and that to do so some form of world government must be established. Once again, Hawking’s reasoning and examples echo mine, this time a decade later.

These viewpoints were certainly not borrowed from Hawking; if anything, it is more likely he encountered mine. Not only did my ideas come before his, but my related books and articles were sent to many national leaders and a number of institutions. Electronic versions have been published in both Chinese and English on multiple sites, not only in China but also on scientific, social and political websites in the United States, Britain and many other countries.

What I am trying to say is that the world’s top scientists are starting to agree with my core views, and these core views are essential to the continued survival of humanity. Striving for the rest of the world to accept these views and take appropriate action is vitally important. At the same time, I believe that Hawking’s views are still not deep, comprehensive or thorough enough.

In 1979, shortly after I started college, a strange idea occurred to me. Science and technology are a double-edged sword with the potential to both benefit and destroy mankind; if they continued to develop at their current pace, could they eventually drive humanity to extinction? The possibility of human destruction through technology in the near future is a question of enormous weight, so is a solution to this problem possible? I quickly realized that this was a worthwhile subject, one of those rare things worth dedicating one’s life to, and I soon decided it would be the sole cause of my life. I left my government job to pursue business in order to better fund the research for this cause and to create better conditions for promoting and furthering my study.

A few decades have gone by, and this cause is no longer merely a career for me but a responsibility and a mission. That is because my research has yielded truly terrifying results: humanity’s development has taken a fundamentally erroneous turn, one disastrous enough to push mankind into extinction once and for all.

It is very challenging to fully prove that science and technology will destroy humanity and to find a solution to this crisis; it took me nearly twenty-eight years. I finished my book “Saving Humanity” in January 2007. While this 800,000-word work was still a sample book, I delivered it to 26 world leaders, among them the Chinese president, the president of the United States and the UN Secretary-General. Apart from one phone call from the Iranian Embassy, I received no feedback.

In July 2007, “Saving Humanity” was published in two volumes by Tong Xin Publishing, but distribution was halted after just one day.

I later published the book “The Greatest Question”, along with two other books, “On Human Extinction” and “Saving Humanity (Selected Works)”, in Hong Kong, as well as numerous articles. I also put both the Chinese and English versions of my articles and “Saving Humanity (Selected Works)” online, and built a website specifically for this purpose under the name “Hu Jiaqi Website”. To promote my views and suggestions, I wrote numerous letters to the leaders of major world powers, the UN Secretary-General and other relevant agencies. In addition, I have lectured at many universities and research institutions.

Over the years I have exchanged opinions with many people, but my core views are often dismissed as unfounded, and some people accuse me outright of fallacy.

Although I am pleased that many scientists and scholars today have offered views similar to mine, it is still regrettable that my core views have been neither seconded nor acted upon by any major players. I am deeply anxious about this, since humanity truly does not have much time left.

Through these many years of research, I have formed a few core views and a series of secondary views. In this second edition preface of “Saving Humanity”, I will discuss only the three core views that carry practical significance; the other core views are discussed further in the book. My first point of practical significance is this: the continued development of science and technology will soon destroy humanity, at best within two or three hundred years and at worst by the end of this century. I believe the latter to be more likely.

Similar views have been broached by others. In May 2013, Oxford University’s Future of Humanity Institute published research concluding that humanity could face extinction.

The institute’s director, Nick Bostrom, commented in an interview: “Humanity’s scientific ability is at war with our wisdom in using that ability, and I fear the former may far outstrip the latter.” (Bostrom’s worry is what I call the phenomenon of evolutionary imbalance, on which I have published a dedicated article.)

Bostrom further elaborated that threats such as planetary collisions, supervolcanoes and nuclear explosions are not enough to imperil the survival of humanity. The biggest threat to mankind comes from the “uncertainties” brought on by technological innovations such as synthetic biology, nanotechnology, artificial intelligence and other scientific achievements that have yet to emerge.

When I learned that Oxford University’s Future of Humanity Institute had come to the same conclusion as I had, I was overjoyed. I immediately wrote a letter addressed “To: Professor Nick Bostrom, Dean of the Future of Humanity Research Center” and an article titled “Finding a Common Voice”. I translated both into English, sent them to Nick Bostrom and published them online. To better get his attention, I specifically used my title of “Beijing Mentougou District CPPCC member”, but I never received a reply.

In early 2016, AlphaGo, the intelligent program developed by Google, defeated the South Korean Go master Lee Sedol, shocking the world and fully demonstrating that machines possess deep learning ability, a chilling thought upon further reflection. Soon after, some of the world’s top minds, such as Hawking, Musk and Bill Gates, pointed out that the development of artificial intelligence could be the biggest threat to human survival. The fact is, once artificial intelligence acquires human-level thinking ability and self-awareness, its response time will be thousands, tens of thousands or even hundreds of millions of times faster than ours. The simple rule of evolution tells us that humans cannot hope to control artificial intelligence once it reaches such a stage.

In May 2016, Bill Gates pointed out in a Reddit AMA: “in a few decades, artificial intelligence will be strong enough to warrant concern… I have no idea why some people are completely indifferent to this.” This prompted me to take up the study of artificial intelligence within my own company: with so many people already studying it, only by researching it myself can I keep track of where it is heading. If humanity really encounters an unexpected disaster in the future, perhaps I will be able to help in some way.

My second point of practical significance is this: only through human unity and the establishment of a world government and world regime can we firmly grasp the developmental course of science and technology; this is the “world government” approach Hawking proposed for controlling artificial intelligence.

The reasoning behind this view is that no single country can wholeheartedly control and regulate science and technology, since countries are in constant competition and any failure to keep up could prove fatal. With science and technology a chief force of production, competition between countries is essentially a technology race. Who would sacrifice a national competitive advantage for the overall benefit of mankind?! Even the United Nations could not bring about such change; only the unity of humanity as a whole and the establishment of a world government can accomplish it. A world government considers things at the level of all mankind rather than from a national standpoint; it removes the pressure of competition between states, making it possible to regulate world order through the power of a global regime.

My third point of practical significance, and the one that still lacks influential backing, is that we must strictly limit the development of science and technology in order to prevent the extinction of humanity. Because of the uncertain nature of technology, the more advanced a technology is, the harder it is to control and regulate. Not even the world’s top scientists can accurately predict the consequences of scientific discoveries; even Einstein, Newton and Hawking have made errors of scientific judgment. Many technological developments have already brought disaster to mankind.

With science and technology already developed to a hazardous height and still continuing their dangerous ascent, the risks are difficult to predict. Distributing and sensibly managing, on a global level, the safe and mature scientific achievements we already enjoy is more than enough to guarantee a comfortable existence for mankind. If we keep up the unchecked pursuit of technology, human extinction will not be far off. The fact of the matter is that while some high-tech developments may be harmless to humans, not all of them are. We may be able to control the development and use of certain technologies, but there is no way to control them all. He that touches pitch shall be defiled: accidents are always bound to happen, and one big scientific blunder could be the end of humanity. Hawking believes that controlling artificial intelligence alone is enough to solve the problem, but we can only control it for a time; is there any guarantee that the control will last forever?! And even if we could control one technological advancement, how could we possibly control them all?! Only by using the world government to rigorously screen and adopt mature, established scientific achievements, to permanently seal away and eventually forget all other technological advancements, and to strictly limit scientific research can humans ensure their continued existence. This is precisely where I find Hawking’s views lacking in depth and comprehensiveness. Perhaps people will think my ideas absurd, but I firmly believe them to be the truth.

What needs to be clarified here is that restricting the development of science and technology does not deny the positive contributions science has made to humanity; it reflects only the concern that its negative effects could destroy mankind. Nor is this a call for China or the United States alone to restrict, or lead the way in restricting, the development of science; it must be a coordinated global effort.

I would also like to touch on the issue of aliens, where Hawking and I again share similar views; this is just one of my secondary points.

I believe that aliens certainly exist, as there are hundreds of billions of galaxies in the universe, with each galaxy housing up to hundreds of billions of stars (our own Milky Way contains two or three hundred billion). Though only planets orbiting a star have the capacity to produce intelligent life, so the probability for any single planet is very small, the sheer number of candidates makes the expected total fairly high.
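To make the scale concrete, here is a minimal back-of-envelope sketch; every number in it (the galaxy count, the stars per galaxy, and the per-star probability of intelligent life) is an illustrative assumption, not a measured value:

```python
# Back-of-envelope estimate of intelligent civilizations in the universe.
# All inputs are illustrative assumptions, not measurements.
galaxies = 2e11          # assume ~200 billion galaxies
stars_per_galaxy = 2e11  # assume ~200 billion stars per galaxy
p_intelligence = 1e-18   # assume a vanishingly small chance per star

total_stars = galaxies * stars_per_galaxy        # ~4e22 stars
expected = total_stars * p_intelligence          # expected civilizations
print(f"{total_stars:.1e} stars -> ~{expected:,.0f} civilizations")
```

Even with an absurdly small per-star probability, the expected count comes out in the tens of thousands, which is the sense in which the overall total remains fairly high.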

However, it is extremely difficult for alien life to traverse interstellar distances and reach earth. The distances between stars are measured in light-years; for example, the star system nearest our own, Alpha Centauri, is 4.25 light-years away. Even the fastest unmanned spacecraft we currently have would take tens of thousands of years to arrive, let alone a manned one, which would be substantially more difficult. Moreover, it is all but impossible for intelligent life to exist on or near Alpha Centauri, since it is a triple system of three stars orbiting one another, a configuration not conducive to life. That is why we have found no trace of aliens on earth, even though earth formed 4.6 billion years ago and the earliest microbial life can be traced back 4.28 billion years.
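As a rough check on that travel time, here is a small sketch using Voyager 1’s speed of roughly 17 km/s as the reference; this is an assumption for illustration, since no current probe is actually headed toward Alpha Centauri:

```python
# Rough travel time to Alpha Centauri at current spacecraft speeds.
LIGHT_YEAR_KM = 9.461e12       # kilometers in one light-year
SECONDS_PER_YEAR = 3.156e7     # seconds in one year

distance_km = 4.25 * LIGHT_YEAR_KM  # ~4.0e13 km to Alpha Centauri
speed_km_s = 17.0                   # approx. Voyager 1 cruise speed

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"~{years:,.0f} years")       # roughly 75,000 years
```

At that speed the trip works out to roughly 75,000 years, consistent with the tens of thousands of years cited above.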

Conversely, if aliens really did reach earth, it could be catastrophic. The universe formed 13.8 billion years ago, so intelligent life elsewhere could have emerged four or five billion years earlier than ours. Any aliens capable of crossing the interstellar divide to reach earth would be at least thousands, perhaps even hundreds of millions, of years ahead of us in scientific development. The laws of nature teach that more advanced civilizations look down on less advanced ones, and that higher species kill lower species, even treating them as food to be fried and cooked. Were highly advanced aliens to reach earth, our fate would resemble that of the American Indians or the Australian Aboriginals facing the colonialists, or worse. Some people today are keen to communicate with aliens; that is in fact deeply irrational behavior.

Finally, I believe that humans must rely on themselves to save mankind, and that the leaders of the major world powers have the greatest capacity to do so. As a scholar, I can only sound the alarm, which is why I have written so often to the leaders of powerful nations.

If one takes my three core views to heart, the conclusion that only a unified front can save humanity follows logically. Hugo de Garis, known as “the father of the artificial brain”, believes that artificial intelligence will destroy mankind, yet he also holds that the destruction of lesser species by higher species is a natural course; thus, to him, the destruction of humanity by artificial intelligence is completely reasonable, and the creators of these higher species can be considered deities, or gods themselves. But let us ask ourselves: as members of the human species, can we bear to be destroyed?

While I was editing the second edition of “Saving Humanity”, cinemas were showing the movie “Resident Evil: The Final Chapter”, in which a high-tech company develops a bio-weapon capable of destroying humanity. The company’s leaders plan to preserve only their own shareholders and wipe out the rest of humanity with the weapon. These are obviously the actions of truly deranged, immoral people. But with so many high-tech companies around the world, who is to say there are no such deranged, immoral individuals among them?
