Okay. I haven't read much of this thread, but given that I'm seemingly much more familiar with AI than anyone else here, I think I should give Kanzenshuu a primer.
Starting with the Chicago Sun-Times story about the 15 books, I dug into the story some more. Turns out that story is horrifically misleading. Yeah, the Sun-Times guy should have done his research, but so should the person who wrote the article reporting on the incident, because there's some pretty fucking massive misrepresentation going on here.
For starters: just as I expected, no, the AI used was not a modern state-of-the-art AI. The AI used could have been as old as late 2022 or early 2023. By AI standards that is terribly old, and by no means whatsoever should it be used to gauge the capabilities of the technology. Citing this example as relevant (as the news article does) would be akin to someone in 1995 confidently stating that videogames never have included, and never can include, voice acting based on a sample of 100 Sega Master System games... unaware that if you move beyond 8-bit consoles, voice acting existed by the late 80s at the latest, in games like Ys.
Just as importantly, Search was not enabled - the tool that allows AI to access the internet for information.
Here's a bit of AI 101. If you disable tool use, of fucking course AI won't perform as well. LLMs are supposed to be able to utilize tools - Python, web search, various other programs and APIs. Tool use only started to emerge during late 2023 and early 2024, and its absence during the early days is a large part of why certain negative stereotypes arose around AI in 2023 - without tools, raw LLMs are much less capable.
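To make "tool use" concrete, here's a minimal sketch of the loop involved. The model doesn't answer from memory alone; it emits a structured request (say, a search query), the host software actually runs the tool, and the result gets fed back into the conversation before the model writes its final answer. Note that `model` and `web_search` below are hypothetical stubs of my own, not any vendor's real API - real systems just follow the same request → run tool → feed result back pattern:

```python
# Illustrative tool-use loop. `model` and `web_search` are hypothetical
# stand-ins; real chat APIs implement the same back-and-forth.

def web_search(query):
    # Stub: a real implementation would call an actual search API.
    return f"[search results for: {query}]"

def model(messages):
    # Stub model: requests a search on the first turn, then answers
    # using whatever tool output has been appended to the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "web_search",
                "query": messages[-1]["content"]}
    tool_output = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"type": "answer",
            "content": f"Based on {tool_output}, here's my answer."}

def run(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = model(messages)
        if reply["type"] == "tool_call":
            # The host executes the tool and feeds the result back in.
            messages.append({"role": "tool",
                             "content": web_search(reply["query"])})
        else:
            return reply["content"]

print(run("good summer reading lists"))
```

Disable that loop (or use a model from before it existed) and the model can only guess from its training data - which is exactly the failure mode the Sun-Times list showed.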
It's not at all reasonable for the article to criticize the technology when a very important feature that helps it to work properly was disabled (or perhaps nonexistent if the AI really was two and a half to three years old). That would be like if a guy in 1985 played his first videogame, let's say Super Mario Bros. 1, and his controller got unplugged and he was no longer able to control Mario. He becomes enraged. "What the FUCK is going on? I can't play just because the controller is unplugged?!?!?????!!!!!! What the fuck! That makes no sense! I KNEW videogames were a scam!"
You can use any number of similar analogies here. A 40 year old man in 1915 becomes bewildered and confused because the "lightbulb" thing turns off when the "power" goes out. A person who gets their first TV in 1954 thinks it isn't working right because 24/7 TV isn't a thing yet and so it displays static at night. Use whatever analogy you like; they're all about the same as what's going on here.
Hallucinations (AI making things up) are indeed still a problem worth noting and considering, but the problem has become dramatically better over time, both in hallucination frequency and in the nature/severity of the hallucinations themselves.
To compare o3 (the current publicly available SOTA), GPT-4.1 (currently available, but only to a small audience), and GPT-5 (the upcoming flagship model): o3, with tools enabled, has a hallucination rate of around 7 percent. That's much better than the 25 percent of July 2023, but still significant. GPT-4.1 has a hallucination rate of around 1.7 percent - dramatically better, maybe negligible. GPT-5, slated for a summer release, should get the hallucination rate under one percent.
The severity of the hallucinations has also declined. 2024 AI has a generally true-to-reality basic understanding of the world, but makes bizarre mistakes that no human being ever would and pulls facts out of its ass that - though plausible - are entirely false. (Keep in mind that the best public model, OpenAI o3, is from December 2024 at the latest. It is already outdated.) People who have used 4.1, though, say that the mistakes it makes are much milder: no bizarre fabrications, just understandable mistakes - things like misunderstandings of ambiguously-worded sentences rather than the present-day "lol I made it the fuck up." GPT-5 will be dramatically smarter and make even fewer mistakes, sub-1 percent and typically of a mild, human-like nature.
Testers have also reported that 4.1 says "I don't know" when it doesn't know something. No, I don't know why this ability is only being developed in 2025-era AI - it seems like basic common-sense design that should have been implemented years ago - but better late than never. This means that incidents like the Chicago Sun-Times mishap are extremely unlikely with 2025 AI. This isn't speculation; it's already confirmed from models being tested.
So the "AI isn't connected to reality" idea that the news article postulates isn't exactly untrue, but it's a heavily biased piece of work that omits any inconvenient details. "AI is completely unreliable! Look at this mistake! No, I will not supply any context. I will not mention that at best it was a modern AI with critical features disabled, and at worst it's a very old, very dumb AI from before those features even existed. Oh, and I definitely won't mention that this flaw has basically been erased in AI currently in testing, who cares about unimportant details like that? It's the 2020s and being a part of the AI witch hunt is fucking in!"
To prove with an example just how misleading that article about the Chicago Sun-Times was, I asked the exact same question to o3 with search enabled. I double-checked every bit of info to see how well it performed. I only gave it one chance, I didn't re-roll for the best results, and in order to stress test the model I said that each book recommendation had to be from a different source.
All 15 books are indeed real books. Zero fabrication here.
All 15 authors were correctly attributed to their respective book.
12 of the links provided were proper links (the other three links were mislinks; I had to use Google Search to confirm that the books were real).
11 of the 15 'publication dates' - the publication dates for the articles, not the books themselves - were correct. The four that were wrong all got the correct month but an incorrect day.
So, with a maximum score of 60 here, o3 scores a 53. That proves pretty decisively that the article was extremely misleading. The entire point of the article was that the AI made up 10 out of 15 summer reading books; I'm just a regular person rather than an AI expert, and even I knew the solution. Using a modern model with search enabled, all 15 books were real and all 15 authors were indeed the authors - 100 percent perfect performance on the part that actually mattered.
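For transparency, here's the tally - one point per book per criterion, four criteria, 15 books:

```python
# Scoring my o3 stress test: 15 books, four criteria, one point each.
books_real      = 15  # all 15 titles are real books
authors_correct = 15  # all 15 correctly attributed
links_correct   = 12  # the other 3 were mislinks
dates_correct   = 11  # the other 4 had the right month, wrong day

score = books_real + authors_correct + links_correct + dates_correct
max_score = 15 * 4
print(f"{score}/{max_score}")  # -> 53/60
```

Every dropped point came from the secondary details (links and article dates); the headline claim of the original article - fabricated books - scored a perfect 30/30.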
I'm not the only person who thinks this, but AI entered mainstream awareness too soon. Once people form a negative impression of something, it can take a long time for society to catch up and accept the new reality if that 'something' ever improves. Some products and ideas still have a negative reputation due to early failures from years or decades ago. I'm sure AI's redemption arc won't take as long, but it wasn't a good idea to begin spreading AI in late 2022, when it was an extremely half-baked product. 2023 AI was basically terrible, and 2024 was still largely bad. Even here in July 2025 it's sometimes great, sometimes frustrating. Q3 2025 should be where friction mostly disappears for SOTA chatbots (perhaps only the expensive ones) - low hallucination rates and so on. By the end of 2026, hallucinations should be irrelevant for all AI models, low and medium tiers included, and all other areas of AI, like video generation and coding, should be fully reliable.
Even current AI is highly useful for many things, though. I got o3 to read the 2.4 million word series I wrote back when I was a writer (I retired after finishing in 2018 and am never writing again), and asked many questions about it: what MPA rating each individual story would receive were it a movie, each character's MBTI type or D&D alignment, which characters handled their traumas the best and worst, whether characters with bad attributes (whether full-on villains or toxic protagonists) were sympathetic or not, what mental illnesses or personality disorders the characters might have, etc. It was fun. The experience wasn't perfect - it's 2025 and hallucinations exist - but it got 10x or 20x more facts right than wrong, and its feedback was really thoughtful and interesting.
As a random aside. I don't know if the topic was broached much here because I barely even skimmed the thread, but as a former artist who wrote one of the longest books in all of human history - unpublished and only written for myself - I'm very much in favor of AI in art (minus some obvious misuses like job displacement or actual plagiarism).
I think a lot of people who are firmly against any form of AI art have tunnel vision and don't understand that for some use cases, AI art is perfectly valid - namely cases where the product simply will not exist otherwise because the person doesn't have the ability or willingness to create it. I always thought that it would be fun to have AI create fanfics or bonus stories based off my book. I am a retired writer and there is absolutely no way in hell I am ever writing another story. AI excels for that purpose - stories that won't exist otherwise because you don't really care and don't have any drive to write them (and honestly the idea of stories I didn't write myself, where I don't know what to expect, sounds really fun).
I never would have used AI for the series proper had it existed back then, nor for any other genuine or heartfelt project. Not because I'm anti-AI (Obviously) but because I was the writer and I wanted every sentence, paragraph, and page to come from me. It wasn't good enough for it merely to exist at all, as is the case with the hypothetical bonus stories; I wanted the story to exist in the precise form that I wanted it to exist.
So it's not as simple as "AI art always bad, manual work always good." It's a matter of precision vs. convenience. If you draw, write, or compose something, the finished product takes whatever form you want it to take. You usually want precision for heartfelt projects, obviously. But there are also cases where you simply want something to exist, and adhering to a very specific vision is not the goal. In that case convenience, not precision, is the goal, and at those times AI art is entirely okay.
Some of the people in the 2020s crusading against AI seem to believe that in order to have 'artist's pride' or whatever, you must be a Luddite who gatekeeps art against everyone except those who do things the 'proper' way. That is not remotely true. If I'm to be perfectly honest, I almost certainly went through more to see my story through to completion than most of the artists online stirring up a shitstorm. I wrote a series of novels that in length trumps almost every other story ever written (some would say the story was too long and I needed an editor, but fuck 'em, the story was just for myself anyway), and I did it through chronic fatigue, through depression (both conditions showing up years before I even wrote the very first word), and through burnout and writer's fatigue that started four full years before I even finished the original series, with the sequel adding another two years (albeit after a five-year vacation between the original and sequel series).
Very few people would have pushed through the way that I did - that's evident by how many writing projects are dropped due to life obligations despite being infinitely shorter. I wrote all the way to the end despite the emotional and physical toll because I loved the story and wanted it to be told in full. I think about the story and characters frequently, and I always will no matter how long I live.
I have pride in my story, and by extension I suppose I have artist's pride (though I'd rather not use a term that cheesy). But that does not translate to a desire to be a gatekeeper. I can understand being against certain use cases of AI art... selling it, actual plagiarism (no, AI art is typically not plagiarism, it learns general patterns and rules the same way that humans do), job displacement, whatever... but being against it on a conceptual level and thinking that nobody should ever use it in any context is absolutely indefensible.
Generally speaking, I am always in favor of things being democratized - of possibilities opening up, and of more and more people gaining the ability to consume and access, or in this case create, things. Piss on artist's pride. Artist's pride is nowhere near as important to me (nor should it be to anyone) as the humanitarian joy of seeing abilities, possibilities, and forms of happiness and meaning reach as many people as they can.
Anyway...
There is a severe lack of understanding regarding AI. Some think it's incompetent and unreliable in every possible way; others think that it's competent and capable but has no potential benefits and is nothing but a tool for destruction. The former has already been addressed, and I would advise everyone to be skeptical of articles that amount to "lol AI sux and is so dum lol." Most beliefs about AI's supposedly underwhelming capabilities are not based upon actual reality, but half-truths, things that used to be true but no longer are, and things which were always complete myths even in the past.
There are very few techno-skeptics who have any appreciable understanding of the technology they have such strong feelings about. In the abstract, I get it. You don't read as much about things you hate as you do about things you enjoy. If every positive opinion and every negative opinion throughout history were collected and scored on its understanding and validity, the 'positive opinion' column would have the higher score - people explore things they like more in-depth. But the record needs to be corrected regardless, since most people's understanding of AI is frozen in either September 2022 (when AI won the art contest) or December 2022 (ChatGPT's debut).
Almost all technologies have debuted in an extremely primitive state. Telephones started with party lines, an idea that sounds absolutely atrocious by modern standards (and were likely a turn-off for many even at the time). 'Horror movies' of the 1890s were silent films under a minute long. 1980s PCs didn't have sound cards until the late decade and games required constant changing of floppy disks. Televisions had extremely fickle reception with people sometimes having to go onto the roof and hold the antenna straight. But AI, unlike every other technology throughout history, will never ever have its problems addressed (even though to a large extent it already has) and they will be forever immutable and unfixable because... reasons?
As for the idea that AI is purely destructive and cannot be used for good (this also addresses the idea of AI being incapable):

Is the fact that AI diagnoses complex medical conditions 4x more accurately than any medical professional (85% vs 20%) an inconsequential change? Is living in a world where no doctor tells obviously sick patients "It's all in your head" not one worth desiring?
Is the fact that kids who use AI tutors perform exceptionally well (In a Texas study, kids who used AI tutors scored higher than 98 percent of their peers; in a separate test, Nigerian children learned as much in six weeks as most do in two years) irrelevant? Is democratized, high-quality education for all, including those in the third world, not a good thing?
There's an at-home foot scanner which you stand on like a scale that detects developing heart failure by measuring fluid build-up in your feet; if I'm understanding correctly, that means heart attacks and strokes can be predicted in advance - anywhere between 3 and 19 days according to tests. Is making the number one cause of death vastly more predictable and thus treatable something that doesn't matter? Is making people much safer and guarding against the most common causes of sudden death not that big a deal?
What about the fact that the relatively much more primitive AI of 2021 reduced hospital mortality by 40 percent at the hospitals where it was piloted due to being able to detect sepsis well before it developed? Is reducing a very common cause of death by almost half rendered irrelevant because people post bad AI pictures on Pixiv?
Is the fact that AI has created a lifelong flu vaccine that protects against all strains not important? Is it not something that improves quality of life and basic safety for the human race?
How about the fact that AI-created medicines are about to reach the clinic and will be very cheap and affordable since AI makes the drug discovery process so much cheaper? Sounds like something worth celebrating to me.
Those are just a few examples of the good AI has already done. It will do an insane amount of good for healthcare during the remainder of the 20s and make life and future health dramatically safer (the AI foot scanner being an easy current example), but I won't get into that, because people will probably just trot out the tired old bullshit about moon bases being predicted for 2000, the "electricity too cheap to meter" promise of the 1950s, "LOL where is my jetpack XD," etc. - ignoring (or, more likely, completely unaware of) the many 'optimist' predictions that were exactly right and the many 'grounded skeptic' predictions that were miserably wrong. I think people love reductive thinking more than they love anything else in the world.
Despite some stereotypes, out-of-touch Silicon Valley types aren't the only ones who advocate for AI. Though my own household while growing up was okay, I live in a generally poor and drug-ridden area - in the current day my county is around 200th place amongst the country's ~3100. The first time I played Final Fantasy VII as a 12 year old, Midgar was fascinating and felt deeply familiar because its populace - bitter, depressed druggies living in squalor - is similar to my local area. It's not a particularly violent place, I don't think, but certainly very dreary and gray. Seeing so many people with no hope due to their economic circumstances made me hope for technology to offer them stability and a better life than that which they'd ever known.
I want a world where all forms of suffering are escapable, and where nobody has to experience shame or low self-esteem. A large part of solving problems is knowing how to solve them. Since AI works thousands, millions, or billions of times faster than human beings (this has been confirmed numerous times in multiple areas over the past three years and is not speculation), it can be leveraged to figure out the solution to any problem. I know some solutions would be difficult or impossible to implement (Middle East wars, etc.), but many wouldn't have much blocking them.
And obviously I want some of the benefit for myself. I have severe chronic fatigue syndrome where mild exertion can leave me broken for days, plus Avoidant Personality Disorder and CPTSD. I'm unable to drive due to having my brains fried back in the day (a stroke during my birth), and unable to communicate well (I've had lifelong speech problems and auditory sensory gating issues so severe that I can't form a sentence in anything except silent environments; noises aren't louder or uncomfortable, it's more like a distorted phone call with a poor connection). I need some rescuing myself. I'm very pro-AI in large part because the promise of AI finally nearing commercial availability has relieved much of the vulnerability, insecurity, and overwhelming daily anxiety I've known since 2014. Self-driving cars... AI discovering chronic fatigue treatments and accelerating the medical pipeline... AI-based hearing aids that improve sensory gating much more than normal hearing aids (not theoretical, they already exist)... it's AI that will bring good and safety and security to my life more so than people ever have, unless you count the scientists who invented those things.
I know the day will come when I can go to sleep knowing that nobody is hurting anymore, or at least that any form of suffering can be relieved. And much of that reduced suffering will be the result of AI.
AI is indeed a job destroyer. It is, I suppose, bad for artists in some ways. But it's many other things. I understand why progressives hate it so much since so much attention has been drawn to its potential for increasing inequality, but it also has the potential to be the most democratizing technology of all time and reduce inequality more than any other.
You don't have to be as emphatic as me. But if you care about the person who looks in the mirror and thinks of themselves as ugly, then pray for AI to advance cosmetic treatments so much that everyone in the world can have the face they want and no longer have to feel self-hatred and shame. If you care about drug addicts, remember that AI has already been proven to analyze the brain thousands of times faster than human scientists and will find easy cures for addictions. If you care about the sick and suffering, be thankful that AI-developed medicines can be affordable without insurance. If you care about the poverty-stricken, be glad that AI can dramatically deflate the costs of many goods and services.
Democratize everything and eliminate as many forms of suffering as you can. AI will do that to whatever extent is physically possible, and that's why I will always support it.
Princess Snake avatars courtesy of Kunzait, Chibi Goku avatar from Velasa.