Part II: The Human Experience vs. a Digital Bot. An experiment to see exactly what AI is, what it sees itself as, and how dangerous it could become.
The History
Before I start another conversation with one of the various chatbots to satisfy my curiosity, I should probably slow down and start at the beginning: the actual definition of AI, or Artificial Intelligence.
If you’ve been living under a rock, the rock being a (probably very blissful) life without the internet, or simply don’t know what this new technology is, or how it works, let me explain (I do have at least one semester of computer science under my belt. I know. Impressive.) Or rather, let the professionals do so.
Cole Stryker and Eda Kavlakoglu, authors of an article on IBM (one of the largest technology research companies in the world, focused on all things technological, especially AI), define AI as this:
“Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”
This technology isn’t particularly new either. To really understand this creation (or anything, really), we need to take a step back and look at the history. Because the “birth” of AI took place long before 2025, and long before Grammarly and self-driving cars were created.
All of 75 years ago, in 1950, English mathematician and computer science pioneer Alan Turing published a paper. If you know anything about computer science, you may recognize that name (or if you know Benedict Cumberbatch’s entire filmography). He is considered the “father of computer science,” and after all of his successful work in WWII, he published his paper titled Computing Machinery and Intelligence, in which he asked the fundamental question, “Can machines think?” To put that question to the test, he proposed what became known as the Turing Test, or the Imitation Game.
The Lawrence Livermore National Laboratory explains the scope of the test as an adaptation of a Victorian-style parlor game. In the original game, a man and a woman are hidden from an interrogator, who has to guess which is which. In Turing’s version, a computer program takes the place of one of the participants, and the questioner tries to work out which respondent is the computer and which is the human. If the interrogator cannot reliably tell the machine from the human, the computer could be considered to be thinking, or to possess “artificial intelligence.”
For those who have not read his paper (note: I advise anyone with an inkling of curiosity about computers, or any curiosity at all, to do so; it is incredibly interesting), after much fascinating discussion of the arguments for and against his proposal, the conclusion is not that computers can think the way a human does. Rather, it is that they can simulate human conversational behavior well enough to fool a human interrogator. And that was 75 years ago.
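For the programmatically inclined, here is a rough sketch of the Imitation Game’s structure in Python. To be clear, this is my own toy illustration, not anything from Turing’s paper beyond the setup itself: the “machine” below only returns canned replies, and the whole point is the arrangement, an interrogator questioning two hidden respondents and guessing which one is the computer.

```python
# A minimal sketch of the Imitation Game's structure, not a real chatbot.
# The "machine" only returns canned replies; the interesting part is the setup:
# an interrogator talks to two hidden parties and must guess which is which.
import random

def human_respondent(question: str) -> str:
    # Stand-in for a real person typing replies at a terminal.
    return input(f"(human, please answer) {question}\n> ")

def machine_respondent(question: str) -> str:
    # Stand-in for Turing's hypothetical program: vague, plausible-sounding replies.
    canned = [
        "That is an interesting question. Could you rephrase it?",
        "I would say yes, though it depends on the context.",
        "I am not sure. What do you think?",
    ]
    return random.choice(canned)

def imitation_game(questions):
    # Randomly hide the two respondents behind the labels A and B.
    roles = [human_respondent, machine_respondent]
    random.shuffle(roles)
    respondents = {"A": roles[0], "B": roles[1]}

    for q in questions:
        for label in ("A", "B"):
            print(f"{label}: {respondents[label](q)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = "A" if respondents["A"] is machine_respondent else "B"
    print("Correct!" if guess == actual else "Fooled you.")

imitation_game(["Do you enjoy poetry?", "What is 34957 + 70764?"])
```

If the questioner can only guess at chance, the machine has, by Turing’s standard, played the game well.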
Turing had his own predictions about these machines, stating in his paper:
“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
(Crazy how far we have come.)
About six years later, in 1956, the term “Artificial Intelligence” was coined by John McCarthy at the first-ever AI conference, held at Dartmouth College, and around the same time Allen Newell, J.C. Shaw, and Herbert Simon created the Logic Theorist, the first-ever running AI computer program. More and more innovations followed, like Frank Rosenblatt’s Mark 1 Perceptron in the late 1950s, the first machine built around a neural network, one that “learned” through trial and error. Then, in 1997, IBM’s Deep Blue computer defeated then-world chess champion Garry Kasparov in a rematch, a year after losing their first match.
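Since “learned through trial and error” can sound mystical, here is a toy perceptron in Python. This is my own sketch in the spirit of Rosenblatt’s idea, not a recreation of the Mark 1 hardware: the program guesses, checks the correct answer, and nudges its internal weights whenever it guessed wrong.

```python
# A toy perceptron in the spirit of Rosenblatt's Mark 1, not a recreation of it.
# It "learns" by trial and error: guess, compare with the right answer,
# and nudge the weights whenever the guess is wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), label) pairs, with labels 0 or 1
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            guess = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - guess          # 0 if right, +1 or -1 if wrong
            w1 += lr * error * x1          # nudge the weights toward the answer
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Teach it a simple rule: output 1 only when both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
for (x1, x2), label in data:
    pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print(f"input ({x1}, {x2}) -> predicted {pred}, expected {label}")
```

Run it and it gets every example right, purely by adjusting a few numbers after each mistake. That is the whole trick.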
As we move through the timeline into the 2000s, massive steps were made, like in 2015, when the Chinese multinational technology company Baidu created Minwa, a supercomputer that uses “a special deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.”
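For the curious, here is roughly what a (very) scaled-down convolutional neural network looks like in code, using the PyTorch library. The layer sizes are my own arbitrary choices for illustration and have nothing to do with Baidu’s actual Minwa system; the idea is simply layers that spot small visual patterns, combine them into larger ones, and then score each possible category.

```python
# A minimal sketch of a convolutional neural network (CNN) for image classification.
# Illustrative only: tiny, arbitrary layer sizes, not any real production architecture.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect small visual patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # shrink the image, keep strong signals
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine patterns into larger shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),           # score each possible category
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One fake 32x32 color image in, one score per category out.
model = TinyImageClassifier()
scores = model(torch.randn(1, 3, 32, 32))
print(scores.shape)  # torch.Size([1, 10])
```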
Then, in 2016, DeepMind’s AlphaGo program beat Lee Sedol, the world champion Go player, in a five-game match, a feat made all the more impressive by Go’s staggering number of possible moves, which runs past 14.5 trillion only a handful of turns into a game.
But in 2022 came the change we are witnessing today: the rise of LLMs, or large language models. The chatbots that respond instantly in your Google search, or OpenAI’s ChatGPT (a system I have already written about), began to emerge, and they haven’t stopped. In 2024, IBM declared this rise an AI renaissance, with data output far richer in content, using image recognition and NLP (natural language processing) speech recognition capabilities to deliver more in-depth responses.
And in 2025, I would have to say I agree with IBM. There is a renaissance going on, whether we want it or not. According to Stanford’s AI Index, 78% of organizations reported using AI in 2024, up from 55% the year before; the same report states that AI boosts productivity and, in most cases, helps narrow skill gaps across the workforce.
Another statistic: in 2024, U.S. private AI investment grew to $109.1 billion, nearly 12 times China’s $9.3 billion and 24 times the U.K.’s $4.5 billion. Generative AI alone drew $33.9 billion in private investment globally.
What AI Actually Is
So, now that we have covered (a very small part of) the history, let us return to the first question.
What is AI? Or rather, what is the AI that we, as average people, are constantly exposed to in our social media, our internet searches, and at work? It is called Generative AI.
Defined by Oxford as: “artificial intelligence designed to produce output, especially text or images, normally requiring human intelligence, typically by applying machine learning techniques to large collections of data.”
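To see what “applying machine learning techniques to large collections of data” to produce text means at its absolute simplest, here is a tiny word-pair generator in Python. This is emphatically not how ChatGPT or Gemini work under the hood (those are enormous neural networks trained on staggering amounts of text), but the basic loop is the same: learn patterns from existing text, then use those patterns to produce new text.

```python
# A deliberately tiny illustration of "learn from data, then generate".
# It counts which word tends to follow which, then strings together new text.
import random
from collections import defaultdict

corpus = (
    "the machine reads the text and the machine writes new text "
    "the text the machine writes sounds like the text it reads"
)

# "Training": record which words follow each word in the data.
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

# "Generation": start somewhere and keep picking a plausible next word.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(followers[word]) if followers[word] else random.choice(words)
    output.append(word)

print(" ".join(output))
```

Scale that idea up enormously, swap the word-pair table for a deep neural network, and you are in the neighborhood of the chatbots we will be talking to shortly.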
This type of AI is the most widely used, and three names dominate: ChatGPT, Gemini, and Copilot. (If you want to count Grok, go ahead, but since I only ever see it used on X, I simply cannot take it seriously.)
Generative AI has begun to infiltrate most places of business, and daily life itself, and like I said in the introduction, there are a lot of different emotions, mindsets, and thoughts about this. Let us try to discuss some (I will attempt to be as unbiased as I can be):
The positive mindset. These are the people who look to AI as an innovative helper that makes life easier. AI can help make smart, personalized decisions in a business or social context, complete tedious tasks for the user, solve complex problems (including in medicine), and minimize human error, all while operating 24/7 (Western Governors University).
Some may even argue that almost all technological advances have faced pushback, even the creation of the locomotive, photography, the telephone, and, much later, the internet. So this isn’t the first time humans have been afraid to embrace a new innovation, especially a scientific one. But, as much as this could be an argument, there is a clear counterpoint. (Sorry, I tried. There is just too much evidence to the contrary.)
Technology has received pushback for years, and it has been taking certain jobs for the last twenty: many behind-the-scenes jobs in warehouses and factories, or even the Walmart checkout. But AI is more than manual labor; it mimics human cognition. It is faster, accessing everything on the internet in a matter of seconds and outputting exactly the task it has been asked to do.
Yes, AI makes errors, but so do humans. So AI replacing jobs isn’t some looming possibility but a genuine threat for thousands, and that is exactly why this pushback is important. The human heart cannot be replaced, but heads of companies don’t care about that when the human hands working for them cost a whole lot more than automated ones.
Environmental issues arise as well, and large ones at that. Large data centers. The UN Environment Programme explains it simply, “The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They rely on critical minerals and rare elements, which are often mined unsustainably. And they use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases.”
To put it in my own words, it drains us. More than we can take, more than small towns can take, more than big cities can take. And the argument for these data centers creating jobs will not hold up when there are no resources left to use.
Watching these data centers get built in disregard of communities’ wishes has been painful, but now it is personal. My hometown is getting a six-billion-dollar data center, right off a popular highway in Arkansas.
“An economic win,” political leaders said. Enough to power 37,000 homes. They don’t anticipate any power shortages, pointing to proposed investments in new plants. E&E maintains rules and permits applicable to certain data centers, but, in the same breath, has no regulations specifically tailored to data centers.
But, Osyrus Bolly, with Arkansas Grass Roots United, makes the point, “When we’re talking about Arkansas, we’re talking about the Natural State.”
“A lot of the things that attract people to Arkansas are those outdoor opportunities, being in nature, so the fact that these data centers bring so much climate stress,” Bolly said.
The Natural State under climate stress. The irony. But that is what happens when we begin to place more value on the economy than on the environment.
(Note: I am not an overly knowledgeable environmentalist. But I do know that when something is plainly damaging our environment, it’s not good.)
There are a thousand other problems that I haven’t touched, like the push for AI in the creative world, the dangers of deepfakes and hyper-realistic photos becoming too convincing, bias in AI’s answers, data privacy issues, and the many others that I plan on discussing further.
AI is more than a push in innovation like the ones we have seen before. It is an attempt to replicate the human mind, to mimic cognitive function without the soul, and whether you love it or hate it: it is here, and I am certain it is not going away any time soon.
