Written by: Scott Johnson


The rapid evolution of technology has raised concerns among psychologists, scholars, and scientists about the probability of evolving technology rivaling and finally surpassing human intelligence. The contention over the dangers and prospects of artificial intelligence has mainly focused on the singularity, a point in time when rapid advances in technology may make futuristic computers so powerful that they cause cataclysmic alterations to humanity and, notably, the universe (Broderick 12).

Even as the relationship between technology and humanity remains uncertain, there is optimism that human opinions, decisions, and actions will always influence the direction that the evolution of technology assumes. However, a closer look at the rapid rate of technological growth reveals that human intelligence may not remain superior to, and capable of controlling, the continuum of advances in technology and their associated outcomes (Bostrom 4). If not approached carefully, curiosity about futuristic technologies will leave human beings playing second fiddle to machine intelligence, which will subsequently herald the end of humanity, particularly when such machines start developing their own values rather than safeguarding humanity and preserving human values.

It is worth noting that each individual, whether a scholar, a technology enthusiast, or a scientist, will often have an independent idea of what to expect from the current advancements in artificial intelligence. There are speculations that the Internet of Things (IoT) will soon lead to the realization of artificial superintelligence, with technologically powered machines influencing all aspects of human life (Moravec 25-26). Opinions on how such intelligence will surpass the extremes of human comprehension vary depending on who asks and who answers the question.

On the one hand, technology enthusiasts aim at exploring the highest realms of technological evolution, and to them the rise of artificial superintelligence will not come as a surprise. To psychologists and some scientists, however, human capacity and potential remain incomprehensible and not yet stretched to anywhere near their full extreme. Even so, human beings may not be able to maintain the upper hand over any form of technological singularity, or whatever term technology observers may decide to use (Broderick 18-23).

One thing the different groups of experts have in common is that they all call for attention, forecasting, and speculation on the future of technology, thereby expanding the room for debate and controversy over the balance between human and artificial intelligence.

With the current development of the Internet of Things, the ground seems all set for human beings to showcase their capacity to manipulate technology to enhance their way of life, while remaining ignorant of the dangers of uncontrolled technological development (Broderick 12). Thus, even as machines at home, in the workplace, in factories, and elsewhere start communicating among themselves, human beings will still keep an eye on those communications, keenly controlling what the machines can or cannot do.

So far, no credible report documents a significant instance of technology surpassing human intelligence. However, human beings are already imagining what it would look like if machines started taking over human potential.

To understand the present issue in greater depth, it is necessary to assume a scenario where artificial intelligence becomes the order of the day. For instance, one can expect a case where technology-mediated knowledge embodies a superset of human cognitive ability (Carvalko 12). It would be ignorant to assume that such intelligence, aware of and able to manipulate personal information, would pose no danger to humanity's survival.

One question that comes to mind is this: is human intelligence keeping pace with developments in artificial intelligence? If the answer is affirmative, then there is no need to worry about the rapid evolution of technological capacities. However, if the answer is negative, then human beings need to control how much of their agency and potential they transfer to technology-mediated machines, particularly in domains critical to their survival, such as healthcare and security.

Although rapid improvement in technology aims to make life easier and human input more productive, such as in industry, the fear of artificial intelligence eventually perceiving human beings as something that needs extermination cannot be downplayed; this is mainly the case when one considers the scalable competence of artificial intelligence. This characteristic renders artificial intelligence capable of executing a massive number of tasks more rapidly, including functions that humans can accomplish only with enough resources and time, as well as those that humans cannot achieve at all due to their organizational and cognitive limitations.

Some are concerned that technology may reach a point when a breakdown in coding or a mishap in software development will give rise to machines that are hostile to human beings. In this regard, some technology observers have anticipated a point when everyday household gadgets will do the opposite of what human beings command them to do (Bonner n.p.).

With the number of devices connected through the Internet of Things projected to exceed 26 billion by the year 2020, one can only imagine what miscommunication among such a vast collection of gadgets could do to human life.

It is also worth noting that computer processor speed has been doubling roughly every 18 months, and it is doubtful whether human intelligence is evolving at the same rate.
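To see how quickly such a doubling compounds, a short calculation sketches the implied growth over a decade or two; the 18-month figure is the essay's own premise, not a precise measurement:

```python
# Rough illustration: processing power doubling every 18 months (the essay's premise).
DOUBLING_PERIOD_MONTHS = 18

def growth_factor(years: float) -> float:
    """Return the multiplicative growth after `years`, given one doubling per period."""
    doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
    return 2 ** doublings

# Over a single decade the implied growth is roughly two orders of magnitude.
print(f"10 years -> x{growth_factor(10):.0f}")   # about x102
print(f"20 years -> x{growth_factor(20):.0f}")   # about x10,000
```

Nothing comparable is claimed for biological intelligence, which is the asymmetry the paragraph above points to.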

Human intelligence is indeed under constant evolution, and this is the primary reason human beings have managed to develop technologies with capabilities that could only be imagined a few years or decades ago (Baudier n.p.). Thus, even as one forms the picture of a universe dominated by artificial intelligence, it is equally important to consider the potential of human knowledge several decades from now (Prescott 439). The only way artificial intelligence may surpass and perhaps dominate human intelligence is if human beings allow technology to dictate almost all aspects of their lives; this is likely to diminish the potential for human intelligence to evolve in unison with developments in technology.

So far, technological advancements have homogenized human society into a mass culture. Furthermore, the proliferation of mass media is likely to debase human civilization, thereby giving machine evolution an upper hand over human intelligence. The fact that people are already imagining and recreating a future scenario where technology commands and punishes human beings points to diminishing hope in the human race (Pinker n.p.).

Rather than dwelling on this fear and devising ways to counter the imminent threats, people seem content to rely on their supposedly infinite potential to evolve and cope with all sorts of adversity.

Human beings are the custodians of all forms of technology used today, whether at home or in industry, education, medicine, and other realms of society. However, the uncontrolled development of technology will become counterproductive when that same technology gets out of hand and threatens the very existence of the human race. A form of technology that is both powerful and flexible is likely to pose a myriad of social consequences, just as electricity did.

However, unlike electricity, artificial intelligence systems are likely to have a far wider variety of functionalities, thereby posing even more significant challenges. Moreover, the diverse nature of artificial intelligence opens up a myriad of possible malicious uses (Brundage 5-6). Thus, even if artificial intelligence does not turn against humanity by itself, the likelihood of human beings misusing AI, whether intentionally or unintentionally, as in the case of algorithmic bias, could precipitate the dawn of a post-human era.
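The unintentional misuse mentioned above, algorithmic bias, can be sketched in a few lines: a naive rule "learned" from skewed historical decisions simply reproduces the skew. This is an illustrative toy with invented data, not a real system:

```python
# Toy illustration of algorithmic bias: historical approval records are skewed
# against group "B", and a naive rule fit to them inherits that skew.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def approval_rate(group: str) -> float:
    """Past approval base rate for one group."""
    records = [approved for g, approved in history if g == group]
    return sum(records) / len(records)

# "Learning" here is just copying past base rates per group.
learned_policy = {g: approval_rate(g) >= 0.5 for g in ("A", "B")}
print(learned_policy)  # {'A': True, 'B': False} -- the bias is inherited, not invented
```

No hostile intent is needed anywhere in this pipeline; the harm comes entirely from the data people chose to train on, which is the sense in which the misuse is unintentional.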


For many who dispute the likelihood of artificial intelligence threatening human existence, fears of a point of singularity remain farfetched, as long as stringent rules are in place to control the further development of technological capacities. On this view, the only dangers posed by modern technologies, such as the Internet of Things, come indirectly from the same people who developed them. For instance, cyber-crime has become a global concern as people manipulate technologies to harm others.

Thus, it is clear that technological evolution is accompanied by an advancement in the human capacity to use the same techniques to create social and economic disruption (Barrat n.p.). On this argument, technology, no matter how advanced it becomes, will never pose a direct, imminent, and uncontrollable threat to the human race. When people start pursuing technological improvement to better their lives and make the world a better place, the danger of singularity will dissolve for good.

The second counterargument is that human beings are always flexible when it comes to adopting new technologies; any advance in computer technology is matched by an even more significant advancement in the human ability to employ technology in making life easier (Garreau 154). On this trend, it becomes almost impossible to reach a point where artificial intelligence can function independently of preconceived human design.

The implication here is that even as technology advances along an exponential curve, human beings will become more innovative and creative in shaping the impact of technology on human affairs. Furthermore, the fact that people can use previous trends of technological evolution to anticipate futuristic technologies demonstrates their preparedness to handle advanced artificial intelligence (Carvalko 23-27). For instance, some past predictions of technological evolution, such as jet-pack computing, are yet to become a reality even though they crossed the human imagination years ago.

These observations led Jaron Lanier to argue in Who Owns the Future? that technology may never have the capacity to create or recreate itself autonomously without human intervention or control (Lanier 7-10). The assertion here is that even as artificial intelligence gives rise to robots, the idea that they will wish to dominate the world is mere science fiction with no basis in reality.


The counterarguments against the possibility of artificial intelligence threatening human life build on the premise that human beings have always remained firmly in control of emerging technologies. Although reaching a point of singularity may not happen anytime soon, it is unarguable that other potential hazards and pitfalls are imminent (Haqq-Misra 269); consider, for example, the development of military robots, which have become complex enough to make independent decisions.

Furthermore, if people were firmly in control of technological evolution, as some believe, then there would be no fear of a point of singularity where machines eventually take control of human life. These fears only demonstrate how wary people are becoming of artificial intelligence that can function autonomously without human input (Kurzweil 56-62). When one considers futuristic scenarios such as electronic personality and intelligent autonomous robots, it becomes clear that robots dominating human life is no longer fictitious but a possibility that is becoming real.

Ignoring the chance of reaching a point of singularity in artificial intelligence is similar to ignoring the threat of climate change even as its disastrous consequences become more real every day.


The rapid evolution of technology continues to raise fears of a point when artificial intelligence heralds cataclysmic alterations to human life. Even though technology aims to make life easier through the global interconnection of people and societies, human beings' failure to match their intelligence to emerging artificial superintelligence would make machines superior to the human race. Current opinions diverge significantly on how artificial intelligence will influence human life in the future.

However, these perceptions share a common premise: the fear of artificial intelligence causing the extermination of human life as it is known today. There is little doubt that technology has set the human race on the path to a more automated future in which human beings will not be the only sophisticated intelligence. If not carefully approached, it will be a future riddled with fear and damage, as the threat of artificial superintelligence triggering a post-human future becomes more real.

Rather than downplaying the danger that artificial intelligence may pose to human existence in the foreseeable future, it is time for people to ponder their ability to handle runaway or self-developing artificial superintelligence. They might as well decide to live with the fear of the inevitable unknown: the extermination of human life by artificial intelligence. Whether artificial intelligence will pose an existential threat to people or make them more creative and productive depends mostly on how ethically people approach the current developments in technology.


Works Cited

Baldauf, Kenneth, and Ralph Stair. Succeeding with Technology. New York: Cengage Learning, 2010.

Barrat, James. “Why Stephen Hawking and Bill Gates are terrified of artificial intelligence.” Huffington Post (2015).

Baudier, Amanda. “Artificial Intelligence vs. Authentic Intelligence.” https://becominghuman.ai/artificial-intelligence-vs-authentic-intelligence-ab1bcd34e8f2.

Bostrom, Nick. Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press, 2014.

Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” Cognitive, Emotive, and Ethical Aspects of Decision Making in Humans and Artificial Intelligence 2: 12–17.

Bonner, Stephen. “Hacked by your fridge? When the Internet of Things bites back.” Accessed 23 February 2020.

Broderick, Damien. The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies. New York: Forge, 2012.

Brundage, Miles. “Economic possibilities for our children: Artificial intelligence and the future of work, education, and leisure.” Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.

Carvalko, Joseph. The Techno-Human Shell: A Jump in the Evolutionary Gap. Sunbury Press, 2012.

Haqq-Misra, Jacob. “Here be dragons: science, technology, and the future of humanity.” (2016): 268-270.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Viking Press, 2005.

Lanier, Jaron. Who Owns the Future? New York: Simon & Schuster, 2013.

Moravec, Hans. Robot: Mere Machine to Transcendent Mind. Oxford: Oxford University Press, 2000.

Pinker, Steven, “AI Won’t Takeover The World, and What Our Fears of the Robopocalypse Reveal,” bigthink.com, 12 August 2019, https://bigthink.com/videos/steven-pinker-on-artificial-intelligence-apocalypse/.

Prescott, Tony. “The AI Singularity and Runaway Human Intelligence.” Conference on Biomimetic and Biohybrid Systems. Springer, Berlin, Heidelberg, 2013.