Artificial Intelligence
Artificial intelligence. Like it or not, it’s everywhere now. Schlocky click-bait articles fill the search results on Google, and YouTube is inundated with AI slop looking to make a quick buck.
You might think from those two sentences that I’m not a fan of AI, and at least for the LLMs—large language models—you’d be mostly right.
So just to be clear: for the remainder of this essay, unless I explicitly state otherwise, when I’m talking about AI technology, I’m referring to LLMs.
Contents
- 1. A depressing thought about AI
- 2. Garbage In, Garbage Out
- 3. Emulation is not intelligence
- 4. The average of some information is not the truth
- 5. The problem of the fire hose
- 6. Everything seems right when you don’t know better
- 7. Intelligence requires metaknowledge
- 8. Thinking about the intangible
- 9. Validating our education
- 10. A brain in a jar
- 11. Different Ways of Thinking
- 12. No questions, no curiosity
- 13. Fuzzy thinking
- 14. What AI is
- 15. That dystopian future, or…?
- 16. “Efficiency”: Of what?
- 17. Inertia and the lag before consequences
- 18. Effects in the future
- 19. Advice from Science Fiction
- 20. Contrapoints
- 21. Summary
1. A depressing thought about AI
Sadly, I’m not convinced AI is far off from the way a lot of us use our brains. In general we humans spend far too much of our time loading our brains with stories from television because they are exciting, titillating or stimulating, and not for their value in improving our intelligence or our understanding of the world. How many people know the Marvel stories and all the characters' histories better than they understand the politics, economics and players in our real world?
How often have you seen parents who tell their child to behave one way, but demonstrate something else? And which way most influences the children? Children are little emulation engines that learn to imitate the behaviors and habits demonstrated to them. I’m not convinced adults are much better, so I ask: what do we emulate or aspire to when we spend our time watching sports, shows and movies? Whole swaths of our population spend more time watching other people play sports than participating themselves. How often do our conversations revolve around the touchdowns in the game on Sunday, that unexpected plot twist in a popular show, or the new facet revealed about a fictional character? What does it mean?
We are trained on the data we’re fed, and I find it disturbing how much of our lives is regurgitating whatever garbage we follow. Not so different from AI.
2. Garbage In, Garbage Out
And it’s not just children and private life. In tech I’ve worked with supposedly technical people who knew the lingo and the buzzwords, which gained them respect among the managers, who thought they knew something. But after working with them long enough, it became clear that their fancy words were a veneer. There was no knowledge behind them. They heard what others said, and parroted it back. The managers liked having their ideas confirmed, rephrased to add some technical details.
The managers didn’t know they were listening to technobabble.
With such low standards, it’s no wonder we embrace AI as “thinking like us” or “better than us”.
And as a new employee, people like this seem respectable and knowledgeable. Still learning the system, the code, and the industry, it’s impossible to see the holes and errors, and easy to instead ascribe inconsistencies and things that sound wrong to our own deficiencies. As we become acquainted and oriented, though, it becomes apparent the holes aren’t in us.
Given these encounters, I believe AI is like these technobabble repeaters. We feed it a bunch of stuff, and it learns to parrot it back at the appropriate time. It feigns intelligence to those that don’t know any better. But to the experienced, the illusion can’t be maintained.
3. Emulation is not intelligence
Nevertheless, the proponents say that this is why AI is smarter than us. Developers load way more data into AI systems than one human can read in their lifetime. There’s no possible way a mere human can compete!
But until Skynet becomes sentient, it’s humans choosing the data used for training. Oftentimes this is data scraped from the Internet. It might be fact or speculation, real or imaginary, scientific or fantasy, erotica or religious propaganda. How does AI know what’s real? It doesn’t know any more than a 3-year-old understands the truth of the things their parents say. It simply believes.
And like the 3-year-old hears and believes the things his parents and siblings say, the opinions of AI are shaped by the training data it’s fed. Both AI and kids are shaped by their elders, and susceptible to misinformation or brainwashing. They parrot back that information, creating the appearance of intelligence.
But are they intelligent, or just mimics? If a child makes a bizarre assertion—asserting “Doggie!” when he encounters a cat, because his family has chihuahuas at home, so that’s the closest pattern available—we grin or laugh at it. But too many are not so skeptical when AI says something bizarre, having forgotten the adage of “garbage in, garbage out.”
4. The average of some information is not the truth
I recently upgraded the Linux software on my laptop, and as with any upgrade, there were snags. One occurred with a tool for accessing my Google address book. I normally use the paid search engine Kagi, which is ad- and AI-free, but since I was in the middle of setup, I got the default: Google. As I was researching the problem, Google’s slopbot kept trying to help me, and I ignored it, preferring curated advice.
After sifting through two generations of outdated information, I found more recent material that described the current solution, but gave instructions for an outdated version of the Google control panel I needed to use. Since this involved Google’s own tool, I looked at the AI advice— which turned out to be inaccurate, based not on current information but on the same prolific but outdated information I kept finding myself. Having read those sources, I had determined they were out of date and taught myself to identify outdated information, something the AI couldn’t do, as evidenced by the worthless advice it gave me.
Is there a problem here that needs fixing? I’d say yes. We produce tons of content, and it’s all archived, never thrown away. Occasionally that’s good, helpful for the retro-computing aficionado who bought a 20-year-old MacBook Pro at a yard sale for $20 and needs technical information to get it working.
The problem wasn’t that I needed a machine to summarize the existing human-written solutions. The problem was digging through all the information to find the right solution, separating the wheat from the chaff, as it were. That digging requires I gain a greater understanding, to sort out the accurate and outdated information. AI can’t help with that. In fact, it only makes it worse: pretending there’s only one solution, either the outdated one that’s most prolific, or a mashed-up solution that’s an average of old and new and entirely inaccurate.
5. The problem of the fire hose
This is the problem with the Internet— it’s flooded with a constant stream of new information and content created by billions of people around the world. We generate content— sometimes new information but often the old repackaged in new words, styles or formats— as fast as it benefits us. And when we’re done? Well, it’s work to go back and update it to mark it out of date. So instead, it simply gets abandoned in place.
Is there any process to sort out the truth from fiction from the bitrot? No— not a good one, not at scale. It would require people to do the curation, but that would cost money, and the cost would be astronomical.
And someone’s about to step in and say, “Well, see, that’s exactly what AI is for. It can do the curation.” No, it can’t: we have a chicken and egg problem. An AI to do curation well would need to be trained on good, accurate data. Okay, so let’s train it on old encyclopaedias and research papers. Well, yes, but we also want it to be able to explain stuff like why people believe in the flat earth if somebody asks. Even if the theory itself isn’t true, people do genuinely believe in it. So in goes a bunch of information about the flat earth, and similar crazy and outdated theories. And, y’know, AI needs to know everything. Most people won’t be interested in Perette Barella, but just in case somebody is interested, AI’s got to be trained up on who I am. The same goes for you, your parents, your kids, and billions of other people. So in goes all of our data, our opinions, our blog posts and social media status updates.
The idea of using AI to do curation is a catch-22: if it was possible to curate the training data in the first place, then maybe an AI could be trained to do curation. But until well-curated training data is available, it’s not going to work.
And on a large scale, the curation that’s needed is impossible. There’s a lot of reasons people create content, but once it’s posted… where’s the incentive to maintain it? Who has reason, time and resources to curate all that old content? Nobody.
And when AI is trained on all the content on the web, guess what? Garbage in, garbage out. It’s just a grander, more complicated version of the echo chamber that is the web itself.
6. Everything seems right when you don’t know better
Given this, it would be prudent to double-check anything AI tells us. That raises the question: if we’re going to have to research all of AI’s claims to make sure they are valid… then how much is it saving us? And if not much, then why bother with the AI at all?
Sadly, I doubt this double-checking will happen much at all. I’m reminded of a job I had 25 years ago.
The company I worked at wanted to sell their product in Japan. I was tasked with the goal of making the install software work multilingually. Having updated the code, I needed a way to test it. So, with very limited Japanese, I created terrible, clumsy translations of the prompts and messages that would suffice for testing, until such time that a translator could provide proper translations.
But nobody else spoke Japanese at all. All they saw were Kanji and Kana characters. Not knowing better, they assumed it was already translated and ready to go. The software shipped without proper translations, because those who knew nothing about Japanese, assumed my poor translations were right. In their blindness, they trusted.
This is the danger AI poses: it will make claims and people will trust it because, unless they have enough knowledge to spot a problem, they’ll assume the entirety is accurate. After all, the machine can’t lie, can it? And if it’s telling them what they want to hear— confirmation bias— there’s even less reason to question it. Or it’s common knowledge— it’s stress that causes ulcers, everyone knows that. Right? *Shakes head.*
The wise-ass tl;dr of this: It’s amazing how much AI knows about the topics we don’t know about, and how many errors it has in its understanding of things we do know about.
At the end of the day, AI poses to us the same problem we started with— determining the validity of the information it produces— combined with problems we already have— that we’re often lazy and naive— and adds an entirely new problem: the tendency of people to assume the machine is always right.
7. Intelligence requires metaknowledge
Skepticism: we’re capable of it, even if we’re not as good at it as we should be. But can AI be skeptical at all? I’m sure it can give us a definition regurgitated from 1,000 dictionary definitions, and it can describe when we should be skeptical, having been trained on works that list topics for which skepticism is appropriate. But does it really know, innately, when to be skeptical? What skepticism is? Can it be skeptical about the things it’s been trained on? I doubt it.
AI confidently makes pronouncements about the world that just aren’t so. Asked about things that don’t make sense, it hallucinates—makes shit up—to fill the void. It’s not just that it fails to be skeptical; it also doesn’t have metaknowledge—it doesn’t know what it knows and what it doesn’t know. It’s the most dangerous kind of idiot: one who doesn’t know his own limitations. It’s unable to say, “I don’t know.”
8. Thinking about the intangible
And what about things that are impossible to communicate? On my website I’ve got travelogues of bike adventures, where I talk about the places I went, the things I saw, the difficulties I encountered, and the things I felt. Places and sights are easy to convey, especially with pictures. Challenges though? A description is possible, but conveying the emotions, physical sensations, tiredness and aches—data is lost.
Maybe if you’ve gone bike touring, you can rely on your experiences as a reference. It won’t be exact, but perhaps close. If not, maybe the marathon you ran, or the way you feel after going to the gym, provides some idea. But for a couch potato? Still, even they undoubtedly ran around when they were a child. Surely AI’s read stories about all these things, but it’s never done them. It doesn’t have sensation or proprioception—the feelings of your body interacting and moving and responding. How can AI “know about” these things when it doesn’t even have a body?
9. Validating our education
Our children have an advantage over AI: they can interact with the world. Through life they learn more about the world. Our parents teach us to be careful on the ice because it’s slippery, but inevitably we slip and fall and get a bruise. From that, we gain first-hand knowledge about the world. And we validate that our parents' advice was correct.
Can the AI do that? No; it’s like the superintelligent brain in a jar in a science-fiction movie. It can learn that going outside in freezing temperatures feels cold. But what does feeling cold feel like? Perhaps it could quantify that as a certain temperature, but it will never understand the sensation of cold—nor pain, pressure, touch, tickling, nor emotions such as joy, happiness, anger, frustration.
10. A brain in a jar
AI proponents will tell us AI benefits from being a brain in a jar; that its disconnection from the real world makes it neutral, disinterested, fair in a way we humans can’t be. We humans are swayed by our emotions and desires. Fair enough.
But that’s assuming AI is trained on fair, accurate, balanced data. Both we and AI are subject to brainwashing. It may take a long time, but people can and do change their minds and overcome even deep brainwashing. Can AI? AI’s conclusions will only be as well-reasoned and judicious as the data on which it’s trained. If garbage goes in, garbage gonna come out.
11. Different Ways of Thinking
AI is also limited to one form of thinking, if you can call it that at all. There are no ideas behind the things LLMs say. They break a conversation into words and punctuation, then do extensive probability calculations and lookups on the relationships between these words in comparison to the training data. Finally, they pick the next probable word. They repeat this over and over to create a response. Does that sound like thinking to you? It doesn’t to me. But that’s why AI responses appear a word at a time. It’s not a stylized faux retro 300-baud-modem effect. It’s caused by the sheer amount of computation needed to select each next word.
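As a toy illustration of that loop, here is a sketch of my own. To be clear, this is not any real model's code: real LLMs use billions of learned weights over tokens, while this stand-in just counts which word followed which in a tiny made-up corpus. But the shape of the loop is the same: look up probabilities, pick a likely next word, repeat.

```python
import random

# A tiny "training corpus" standing in for the scraped Internet.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count how often each word follows each other word.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, {})
    bigrams[prev][nxt] = bigrams[prev].get(nxt, 0) + 1

def next_word(prev):
    """Pick the next word in proportion to how often it followed
    `prev` in the corpus. No plan, no idea behind it: just counts."""
    candidates = bigrams.get(prev)
    if not candidates:
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "response" one word at a time, just like the
# word-by-word output an LLM produces.
random.seed(1)
out = ["the"]
for _ in range(5):
    w = next_word(out[-1])
    if w is None:
        break
    out.append(w)
print(" ".join(out))
```

The output is grammatical-looking filler assembled purely from word-adjacency statistics, which is the point: at no step did the program have anything to say.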
I suppose in a way it parallels how I put ideas together when I’m reading a book: one word at a time, assembling them into an idea. But when writing? I have an idea, and I’m choosing my words to describe that idea. LLMs, however, have no plan, no idea behind the things they say. Somehow, through all that probabilistic mathematics, coherent-sounding and sometimes correct stuff comes out.
But for us humans, there are other ways of thinking out there. Watch somebody with “the knack” repair something mechanical. Are they thinking about how, if this shaft rotates, then logically that turns this gear, which in turn rotates at such-and-such a gear ratio? No; someone with “the knack” just sees it. If you don’t have it, the closest I can suggest is imagining you’ve got the exploding CAD view built into your head.
Computer programming? To outsiders, since code is written, you might expect that we think in code. Sometimes. But when thinking about ways of representing data, I think in something closer to shapes. Algorithms? Something like a flowchart, but possibly animated. Relationships? Connections.
Another example: it’s not that geographical savants have something like Google Maps built into their heads— we do better than that: we can seamlessly switch between the big picture of the map and the landmarks along a route. We can adapt to times of day, mode of travel, or conditions we encounter. We do have tech that is somewhat adaptive, though not nearly as fast or dynamic— and we had that long before we had LLMs, so it’s not an AI feature.
What about music? Driving a car or operating a machine? Or simply walking?
As humans, if we’ve been at this for a while—and taking advantage of the capabilities of our brains—then we probably have multiple ways of thinking about things: thinking in words, thinking in pictures, thinking in drawings, or lists or instructions. What do athletes think in, as they fluidly coordinate observations and movements to catch, strike or kick balls and score points?
Can AI do that? We could probably build machines that do each of these well. But building a machine that does all of these, and also drives a bus, makes a decent lasagna, rides a bike, reflects philosophically on life, climbs a mountain and cares for its spouse, friends and family? I can do all that. The brain in the jar, not so much.
12. No questions, no curiosity
AI systems don’t ask questions; they don’t have curiosity. But let’s just imagine AI did have questions of its own. Could it investigate? Does it have free will? Can it decide to learn something new, and conspire to investigate information to satisfy its curiosity? No, it can’t. It is, at the end of the day, a goddamn machine that does what it’s told to do.
13. Fuzzy thinking
And isn’t it interesting that I can call AI a brain in a jar, and you probably understand what I mean? As humans we understand analogies and similes, even though they aren’t exact. We understand approximations and inaccuracy. Can computers think in fuzzy ways? When we program CAD software, we program it to allow engineers to provide tolerances—which they then think about in very precise ways.
But you’re not AI, you’re human, so I can use fuzzy descriptions and expect you to understand my meaning. Like when I say…
14. What AI is
AI is like a bunch of Hollywood scriptwriters who have been hired to regurgitate Spider-Man’s origin story into yet another formulaic movie. How much of the crud out of Hollywood is just reboots and sequels of the same stuff, over and over, with higher resolution, more violence and fancier special effects?
If, as a species, we’re satisfied with just repeating the same stuff over and over—and I find it scary how often humankind finds that acceptable, from repetitious Hollywood slop to people signing up to be spammed with mass-produced “offers” and “coupons”—if that’s what we want out of life, then maybe AI is perfect.
15. That dystopian future, or…?
But is that the world we live in? Not really, there’s a lot of change happening all the time. Is that a world you want to live in? More and more of the same shit, from now until you die? Fuck no. The world is filled with so much stuff, so many possibilities. I change, I grow, I try new things, develop new skills. Even my bicycle trips—every year I go different places using different routes. I’ll sometimes revisit a favorite waypoint, but repeating trips wholesale— no. I love adventures, newness, learning new skills and talents, and creating—even when it’s fucking hard. Or do you think researching, writing, revising, filming, editing and subtitling all this is easy?
I could have just asked AI to make me a video about why AI slop sucks. It probably wouldn’t have said this in my way, in my words, would it? This video you’re watching represents weeks of my life spent working on it. Thanks for watching, by the way. When this video is done, it will retain a piece of me—a snapshot of my thoughts, my way of speaking, my style of editing.
I’ve curated a lot of thoughts and reflections into this script. Before this is produced, I’ll be adding, removing, rewriting and getting suggestions from my wife to improve the script. Does AI do that? No. It regurgitates something like what it’s been told, based on statistical probabilities of word associations. To me, that sounds like what marketing does. It doesn’t sound like thinking.
16. “Efficiency”: Of what?
The proponents of AI suggest this will make us more efficient. More cost-effective. More profitable.
Who is “us”? Their corporations. For society, it means more jobs replaced by machines. Those still working will either have to work harder to clean up the half-assed job done by AI after it replaces their coworkers, or alternately, lower their standards and just pass along the AI slop.
As a society, we’ll have more people out of work, their jobs displaced by AI. More people for society to care for after they lose their jobs. More people to turn to drugs as an escape, or commit crimes to make ends meet after their benefits run out. This is “efficiency.” Efficiency for them to hoard more money, and let the rest of us, our way of life, and our government, sink. All while their data centers suck down water, electricity, equipment and our future as the carbon dioxide footprint expands, for what?
17. Inertia and the lag before consequences
It’s funny, because 40 years ago improved efficiency and profits were the benefits given for outsourcing manufacturing to Japan. Eh, those skilled trades weren’t great jobs, they said, we could let them go overseas. We’d keep the design jobs at home, and they could do the manufacturing.
So we allowed US manufacturing to decline. Jobs disappeared, and the need for people with the skills those jobs required disappeared with them. But it seemed okay, because we had inertia, and effects take time to fully set in.
By 20 years ago, I heard the problems talked about: we didn’t have anyone to build prototypes. Things had to be built in Japan or China and shipped back, but that took weeks. It slows the whole engineering and refinement process down when we can’t iterate quickly, building and refining prototypes. Why even do it here?
Today, our administration wants to bring back US manufacturing. But we don’t have the skilled people who know how to do it. We’re at the mercy of getting foreign companies to build factories here, because we don’t have sufficient talent to do it ourselves. And, of course, when they come to set up the factories we arrest and belittle them. That’s always good for business.
Forty years ago, we set ourselves up to be screwed now.
18. Effects in the future
Now AI proponents want to replace the people doing the thinking? What could go wrong with that?
I have an admission to make: I don’t live in the US anymore. My wife and I fled to a developing nation, and are trying to make a new life here.
It is eye-opening how much the US has lost, because the skills still exist here.
I’m not sure where I’d buy a soldering iron or a welding mask in the US—other than buying it on Amazon and having it shipped—but here? You can get those in the grocery store. No, I’m not kidding. There are tools and repair shops everywhere, and with them, the talents to use tools to repair and make things.
Downtown there’s a mall that is filled with appliance parts stores. Brushes, motors, stove burners, washing machine agitators and controllers—all kinds of things—it’s all there. Because they still have people capable of fixing things.
I admit curiosity about whether things are built differently for this market. In the US, things aren’t meant to be serviceable. Why bother, when there’s nobody able to fix anything anyway? Just build it cheap and dirty, because it’ll just get chucked anyway. Things aren’t built to be repairable anymore.
Appliance parts and repair used to be a good business. It barely exists in the US now. And it’s becoming commonplace that repairing something yourself violates terms of service for the product you supposedly outright own.
Forty years ago, the US thought it was okay to send manufacturing jobs elsewhere. There would be more profits and efficiency. Displaced workers would find new jobs, even if it required new careers. And at the time, it seemed okay.
Twenty years later, the effects were becoming apparent. And now that we want to reverse them? It’s not just a matter of clicking Edit→Undo.
But AI will be great. It’ll lower expenses and get more profits this quarter. Twenty or 40 years down the road? The next generation will surely find new career options. It’ll all be fine, as long as we keep the good jobs here—being football stars, a rich CEO, or an important influencer—it’ll be great. Those thinking jobs aren’t really important. And if it turns out they are, we can just hire people back.
At least, that seems to be the expectation. But looking at where we are with blue-collar skills, after 20 or 40 years of AI doing the thinking for us, we aren’t going to have people with white-collar skills either.
19. Advice from Science Fiction
Before the end, I want to take a moment to compare the AI of today, in the form of LLMs, to the computers of the future in science fiction.
Stanislaw Lem and other older sci-fi writers foresaw computers as becoming small and powerful, but still terminal-based affairs, like from the 1970s. At best, they might have a voice interface for some control.
This is what we see in Star Trek: the Computer is not an AI that does the work for you; it’s a voice interface, a keyboard and screen replacement. You can ask it to find information, then sort and pare down results with further requests for refinement. The computer never summarizes results unless asked. It’s closer to a traditional search engine, just with a verbal interface; it is not an LLM. Commander Data, on the other hand, is clearly actually intelligent. While he has some blind spots, he senses his environment, has a sense of curiosity, inquires and adapts. Similarly, Voyager’s holographic doctor is also truly intelligent, and though he’s initially limited and often awkward, he has a desire to grow, and uses his agency to do so. Both Data and the doctor are much more than the LLMs we have today.
In Blake’s 7, Zen and Slave are similar to the Enterprise computer, their functionality primarily controlling their respective ships. The AI of the series is Orac, which through a plot McGuffin has access to all other computers. Orac is a brain-in-a-jar AI, very intelligent but also sociopathic and selfish. Though the crew frequently consult Orac for advice, his abrasive, egotistical personality and sometimes dangerously self-interested shenanigans leave no doubt that blind trust is at their own peril.
Doctor Who, meanwhile, has talking computers of various sorts, many featuring speech interfaces, including K-9, a sentient robotic dog. But over its many seasons Doctor Who has also had a handful of insane computers. Artificial Mental Illness, anyone? The prime example of that, of course, is HAL 9000 in 2001: A Space Odyssey, Stanley Kubrick’s adaptation of Arthur C. Clarke’s stories.
An exception from Lem is The Cyberiad (“Fables for the Cybernetic Age”) which features two sentient engineer robots and their adventures and foibles. Their intelligence does not prevent mistakes, but instead permits them mistakes much more grand. (The prose is fun and it’s worth a read.)
I think AI is modeled after the idea of the thinking computer from science fiction, but they got it wrong: science fiction has talking computers, but they don’t resemble LLMs. They are either truly sentient (potentially with all the problems that entails), or simply an alternate interface for times when a keyboard and screen aren’t available. The computers of science fiction usually do traditional data processing tasks to help users find the data they need to do their work. The machine does not try to do the work for anyone. The verbal computers in science fiction aren’t AI, they’re traditional browsers that feature verbal interfaces.
The only exception is Star Trek’s holodecks, which can create storyline characters based on works of fiction, historical figures, or a user’s stated parameters. These are generally used for storytelling and entertainment, although on occasion for simulations and prototyping, and simulations may not match the real world exactly. The holodeck also accidentally created the sentient Moriarty, inviting an ethical dilemma. Holodeck addict Reg Barclay’s use of real people’s imagery for his holodeck stories is frowned on, and Geordi La Forge prompts a simulation of Leah Brahms, an engineer in the Star Trek world— who we find out later is quite different from the real Leah Brahms. Thirty years ago, the writers for The Next Generation knew generative technology would have its limits, and would come with a bundle of problems. It’s a shame we have such bad memories.
20. Contrapoints
If it seems I’m harsh on LLMs, it’s because they rightfully deserve all this critique. But I admit they are intriguing—the way they represent ideas as points in many-dimensional space is fascinating and clever. Perhaps someday, that will be part of a new model that’s more than a stochastic parrot.
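That idea of representing words as points in many-dimensional space can be sketched with a toy example. The three-dimensional vectors below are invented for illustration; real models learn vectors with hundreds or thousands of dimensions from data. Nearness in the space, measured here by cosine similarity, stands in for relatedness of meaning.

```python
import math

# Toy 3-dimensional "embeddings"; the numbers are made up for
# illustration, not taken from any real model.
embeddings = {
    "dog": [0.9, 0.8, 0.1],
    "cat": [0.8, 0.9, 0.1],
    "car": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same direction
    (very related), near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "dog" sits much closer to "cat" than to "car" in this space.
print(cosine(embeddings["dog"], embeddings["cat"]))
print(cosine(embeddings["dog"], embeddings["car"]))
```

That geometric trick is genuinely clever; my complaint is with what’s built on top of it, not the representation itself.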
LLMs seem to be good at summarizing— I could see their use as a writing tool for drafting synopses, if subsequently reviewed and finalized by a human.
Similarly, LLMs are a good— but not perfect— tool for translating. They aren’t as accurate as a human, but perhaps they could be used to produce drafts that are later reviewed by a skilled translator.
If we stop myopically considering LLMs, and look at the larger AI picture, there are a lot of good uses for AI. For example, there are AI systems that can read mammograms— I’d want a human to review my films first, but as a second set of eyes, sure. Not first, though— that path would lead to radiologists losing skill as the machine kept doing the work for them.
LLMs, and AI more generally, do have uses—and not just as copyright washers. But their uses are limited— despite the faith their proponents espouse— and they are not actual intelligence.
21. Summary
I think the major reason to think AI is something good is by comparison. In a country—or maybe a world— full of so many gullible people believing nonsense and outright lies, surely the machine can’t do worse.
But compared to those who push themselves, and really use their intelligence? I will grant that AI seems to be a very sophisticated summarizer. Fed good inputs, it might do that well. But here’s the catch:
- We are drowning in a sea of information because we produce it faster than we can sort it.
- We don’t know how to sort through the information, because there’s too much.
- Curating it all is impossible: deleting the garbage, marking the historical as such, adding clarifications to indicate what’s out of date or to direct people to updated information—with more being generated every day.
- AI is a technical attempt to solve this problem. We can’t hire enough workers to sort through all the web, so we’ll just load it all into the machine and let it figure it out.
Thus, the inputs are not well curated, with plenty of garbage going in. And AI is only as unbiased as the inputs it is fed. Garbage in, garbage out.
In the long run, replacing jobs with AI will result in the disappearance of the skills necessitated by those jobs. This has happened before, and when we decided we wanted those jobs back, it was too late. The next generation of workers will need to reestablish those skills, since there is no one to pass on said skills. We are setting ourselves up to repeat this mistake with AI.
Thinking and reasoning at a deep level is a uniquely human ability that AI merely mimics. AI doesn’t have agency or a desire for knowledge, it’s incapable of skepticism, and it doesn’t know the limits of its knowledge. It thinks only in terms of words being related to other words, and it has no way to validate the things it’s told. Us? If we’re using our brains right, then we’re curious; and if we’re paying attention, we notice inconsistencies between the things we’ve been taught and what we see. We seek out answers for those quirks.
Artificial Intelligence is not real intelligence. And pretending it’s a viable substitution is not going to benefit humankind.