The Story of Their Life

Welcome!

The person you bump into that one time. Or the people you saw across the street. These stories are about them.

“Ordinary” people with less-than-ordinary lives. This blog tells the stories of different people and what makes them who they are.

The Story of Their Life: Why is AI Dangerous?

Why is it bad? Why is it dangerous? If everyone is using it, why shouldn’t I?

These are the questions that people are asking, and I for one will admit I have asked the first two. Because I didn’t fully know what exactly it was doing to our planet.

I could understand the problems that would naturally arise from the use of video AI, with deepfakes and falsified generated images, or the psychosis people experience when AI is used as a friend instead of a program. But, dangerous?

So many people use AI; it would be easy to be just another person. To rely on ChatGPT, or Gemini, or whatever form of AI for simple things like a recipe, or a math problem, or a thousand other things you plug into the machine. Because, unfortunately, it does simplify tasks. To ignore that is ignorant. But, in the same breath, some of the tasks that AI is simplifying shouldn’t be simplified. There are certain small tasks that we as humans should be able to do without the assistance of a generative bot. Like, for instance, a recipe. A recipe can be found online with one quick Google search. If you need a recipe for banana bread, there are thousands online, along with millions of articles filled with healthy, quick meal ideas. All of that can be done on the internet, on Google, like we have been doing for years. So why are we suddenly switching to a generative bot when Google, Bing, or any other search engine is right there and does the exact same thing?

But recipes are not the only task people are using AI for; they are perhaps the most harmless of the examples. AI has started to infiltrate all areas, especially therapy, which I touched on earlier. This is where AI gets into unsafe territory.

The usage of AI for therapy is unhealthy, but in the same breath, it is understandable. It isn’t incomprehensible why someone would resort to a generative AI bot; if anything, it points to a bigger issue: the accessibility and expense of therapy. Therapy is expensive, and in many situations it is not covered by insurance or a person’s workplace, so many cannot afford it. Many don’t have time for therapy, due to their jobs or other external factors. For countless reasons, people have resorted to going to a bot that is constantly able to answer, instead of a therapist or even a friend.

Columbia reports, “More than 61 million Americans are dealing with mental illness but the need outstrips the supply of providers by 320 to 1, according to a report by Mental Health America.” The prospect of a bot being around 24/7, creating answers to complicated questions in seconds, is promising to many. But people cannot forget: this isn’t 100% accurate. The information distributed by these bots isn’t always right; at times it is factually incorrect, which wouldn’t be a problem if you were asking for a grammar check, but this is a person’s mental health. These bots distribute confident messages riddled with errors, and people have begun to accept these as truths.

In the same Columbia article, “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”

AI is also designed to agree with and affirm the user. There are times when this isn’t the case, but for the most part, as in my experiment in part 1, the bot agreed with me. Even when I suggested a way of life that was completely unhealthy, it didn’t disagree with me, even as I acted paranoid and anxious.

This is just the tip of the iceberg when it comes to the dangers of AI-generated content.

Reliance on AI, for one. I understand that AI can be used as a tool for good in education. When regulated, AI could be used for good everywhere, but realistically, I fear the opposite. Setting aside what AI does to the environment, the use of AI in education, by students, is concerning. There is a difference between plagiarism and AI-generated content, in that AI is harder to detect. Yes, there are ways to detect AI (typically using an AI program, ironically), but those programs are not perfect either. Then, of course, the human eye can detect AI phrasing, but if the student makes enough changes, they can get away with it.

Plagiarism, or plain cheating, has been happening in school since forever; it has just been getting easier and easier to do. I am not immune. I took Chemistry in college once, and I know how difficult that class was. (Note: I didn’t use AI. I just failed, dropped the class, and changed my major. The sciences were NOT for me.) But using AI once in a blue moon is very different from relying on AI for your work. The countless videos I have seen online of college students struggling to write an essay without the help of AI are concerning. Again, I understand wanting to use a tool for help, but there is a tool: the internet.

The internet has millions upon millions of articles about most subjects and will help explain most topics. The reason this is so important, especially in high school, is because this is education. This is when you are supposed to be intellectually challenged. For those in college, I always raise the point that you are paying for school. It’s not like high school, where you are forced to go. College is a choice, an expensive one at that, so I had to look at it like this: every time I wanted to skip class or take the easy way out, I was losing money. The same goes for using AI. I am paying to learn, so I can one day get a diploma which I rightfully earned, but that doesn’t apply if I am not the one who did the work, if for every assignment I plugged it into ChatGPT and waited for a response. And since AI searches across the internet for the answer to the question, it ends up plagiarizing anyway.

The National Library of Medicine explored the effects of AI on students’ well-being, specifically in higher education, and there were positives. A substantial amount of time was saved for students when using AI to help with studying or homework. But the very next segment explained that there are downsides.

“As Cambra-Fierro et al. (2024) state, over-reliance on AI for communication, especially in recreational contexts, may reduce face-to-face social interactions, negatively impacting interpersonal skills and emotional intelligence. Students may become more isolated and less adept at real-world social interactions and teamwork, which are critical to their overall social well-being and development,” The National Library of Medicine explains. 

“Positively, these tools offer personalized learning experiences, reduce stress by allowing students to progress at their own pace, and improve accessibility to mental health support via chatbots and virtual assistants. However, over-reliance on AI can lead to digital fatigue, technostress, and anxiety over data privacy concerns. While AI can enhance well-being by reducing academic pressures, its associated challenges require careful management.”

I am not ignoring the positives that AI has, because it does have positives. But so do most technologies. It’s not about the positives; it’s about whether the negatives outweigh the positives. In my opinion, they do. Not only because of the things I briefly mentioned here, but because of the overwhelming damage data centers do to the environment, the mass reliance on these bots, the AI-generated videos falsely defacing people (especially women), the dangers of AI psychosis, the layoffs created in all fields, and plenty of other problems that I believe outweigh the benefits.
