Spence Proofreading

Artificial Intelligence Weaponized: AI Dangers



Is it going too far to say that humanity may get itself into a world of trouble with AI? Is it entirely possible, maybe even probable, that we as a species are not prepared to use such a powerful tool? When Yoshua Bengio, one of the three scientists often called the “godfathers of AI,” expresses deep concern about artificial intelligence and the dangers it poses, people should certainly listen, especially when he likens his unease to what Oppenheimer felt after his role in creating the deadly atom bomb (Hughes, 2023).


Like the atomic bomb when it was first created, artificial intelligence is a new technology that could have both amazing and devastating consequences for humanity and the earth. When I think about AI, though, my mind goes to the ecosystem: time and again, humanity’s tampering with it has shown that we rarely think through all the consequences one seemingly small action can set in motion. For instance, even if AI were used only to find cures for diseases, which is highly unlikely, the resulting gains in longevity could contribute to an overpopulated earth. (Though I suppose one might argue that AI could solve that problem too.)


In this piece, I’ll examine both the existing and the potential dangers of this powerful technology, beginning with the ways artificial intelligence is already having negative consequences.



Existing Dangers


Some of the dangers of artificial intelligence are obvious to many people, while others are more concealed. I was surprised to learn that businesses have been using AI for about 15 to 20 years, mainly to cut repetitive tasks from employees’ workloads by doing those tasks for them (“The Future of AI,” 2020). This points to one glaring risk of AI: its takeover of jobs.


This will only worsen as artificial intelligence advances, which is happening exponentially. Even the creative fields that once seemed untouchable are apparently at risk of the AI takeover, with art generators like Google’s DeepDream producing astonishingly detailed visual works (Cascone, 2016). I even remember reading about a case where a photographer won a photography contest with an AI-generated image, showing just how easily these images can trick people into thinking a human created them.



Potential Dangers


Many recent articles, videos, and podcasts have delved into AI systems that generate text, commonly called “chatbots,” the most famous being ChatGPT. These chatbots pose clear risks to the mental health of their human users, a topic the media discusses extensively these days; far less is said, however, about the many other risks AI poses, not just to mental health but to the physical wellbeing of humans.


For example, artificial intelligence could be used to create life-saving medicines and even to formulate cures for diseases that have killed people for centuries. However, it could also be used to design highly deadly toxins capable of wiping out entire cities. Just imagine two countries at war, Ukraine and Russia, for instance, and consider the ways AI could make war hundreds of times worse than it already is, leaving the attacking country with practically nothing but financial costs. This example alone shows how AI can be a boon for the dominant powers and the richest countries and people, and horrendous for those on the opposite end of the spectrum.


Additionally, none of us are strangers to drones, the remotely operated weapons that can kill dozens of people at a time. AI will only enhance this weaponry; but will it be perfect? Likely not. Just as drone strikes kill innocent people all too often, AI has the potential to do the same, perhaps even more efficiently. Unfortunately, every new technology has bugs that must be worked out after its initial introduction, and with weaponry, this often comes at the high price of human life. This is why it is crucial for the superpowers of the world, e.g., China, the US, and Russia, to be extremely responsible when using artificial intelligence in weapon systems.


However, even if everyone acts responsibly, the biggest threat of all would still be possible should AI advance to a level where its intelligence surpasses ours: the complete extinction of humanity. Of course, this is the absolute worst-case scenario, but it isn’t unthinkable. If a “race” of artificially intelligent beings decided that their goals would be best achieved on a planet without humans, extinguishing our species might be the route they would choose, especially since they would have no emotional attachment to us (“AI: The Next 10 Years in Predictions,” 2023).



Hoping for the Best


When it comes to global issues such as AI, it can sometimes feel like events are completely out of the general public’s control. However, we each have a voice, and it would be smart to use it to call for close government oversight of artificial intelligence, of the freedom given to the tech companies that create and distribute the technology, and of the military’s use of AI in weapons.


Of course, the US can only do so much to influence what other countries’ leaders do with artificial intelligence, but in my opinion the US should be a good role model on this front, focusing on using AI for positive efforts rather than injecting the technology straight into the war machine.


It may be impossible to stop artificial intelligence from growing in its capabilities, but it would be wise to make sure the balance of consequences always tips toward the positive rather than the negative. After all, the lists of benefits and risks are both long, leaving a great deal of room for things to go wrong if the consequences of using AI for various purposes aren’t adequately and accurately discussed.


This is precisely what this essay calls for: the utmost responsibility in weighing the benefits against the costs of artificial intelligence and how it is employed to assist humanity. We as a society must humble ourselves and accept that unforeseen results often occur and that things simply go awry sometimes. When the wellbeing of our own species and of the world is at stake, it is imperative that we think long and hard about this topic and try our best to flourish with technology’s help rather than be consumed by it.



References


Batters, A. (2023, February 6). AI: The Next 10 Years in Predictions. Cybertech Talk. https://cybertechtalk.com/ai-predictions-for-the-next-10-years/


Cascone, S. (2016, March 2). Google’s ‘Inceptionism’ Art Sells Big at San Francisco Auction. Artnet. https://news.artnet.com/market/google-inceptionism-art-sells-big-439352


Hughes, G. (2023, May 31). One of A.I.’s 3 ‘godfathers’ says he has regrets over his life’s work. ‘You could say I feel lost’. Fortune. https://fortune.com/2023/05/31/godfather-of-ai-yoshua-bengio-feels-lost-regulation-calls/


The Future of AI – 5, 10, 50 years into the future. (2020, December 10). Exigent. https://www.exigent-group.com/blog/the-future-of-ai-5-10-50-years-into-the-future/



