
[Editor’s note: This article contains descriptions of sexual assault and dehumanization.]
A longstanding manga industry injustice is back in the spotlight: Prominent publications have repeatedly and knowingly platformed authors convicted of sex crimes against children. On February 27, the editorial department behind MangaOne, a digital manga service, revealed that it had allowed manga author Kazuaki Kurita (more widely known as Shōichi Yamamoto) to publish works under a pen name after he was convicted of a sex crime in 2020. The service’s parent company, Shogakukan, disclosed this after a civil court found Kurita liable for abusing a female high school student. The company’s investigation also found that an editor at MangaOne had attempted to help Kurita reach an out-of-court settlement with the victim.
The news sent a shockwave through the manga sphere, as several authors associated with the service, including ONE (Mob Psycho 100, One Punch Man), called out MangaOne for negligence and lobbied to have their series removed. Shogakukan then revealed that a second author convicted of a sex crime, Tatsuya Matsuki (Act-age), had also been working for MangaOne under a pen name. And it isn’t just this service: other publications have stood by authors convicted of abusive acts.
As for Kazuaki Kurita: in 2020, he was arrested and indicted for violating the Child Prostitution and Pornography Prohibition Act, and ordered to pay a 300,000 yen fine (roughly $2,700 at the time). Kurita had a series, Daten Sakusen, running in MangaOne under his first pen name, Shōichi Yamamoto. It went on hiatus the same month as his arrest, with the publisher attributing the pause to the author’s “health issues.” In 2022, the series was removed from the service and its rights were handed over to Kurita. That same year, MangaOne began running a new series of his, Joujin Kamen, under his second pen name, Hajime Ichiro. Not only was the publication aware of his true identity, but at least one editor directly intervened on Kurita’s behalf in the civil case that would end in a ruling against him on February 20, 2026. That editor was added to a LINE group chat with Kurita and the victim, and proposed that the author pay 1.5 million yen (around $13,700 at the time) to settle. The victim refused, eventually leading to the civil lawsuit in which Kurita was ordered to pay 11 million yen (approximately $71,000).
Read more of this story at Slashdot.

Superhuman, the once defiant maker of Grammarly, was forced to eat a little AI crow today. After enlisting countless authors, writers, and journalists for its much-needed “Expert Review” feature entirely without their consent, prompting a class action lawsuit, the company has reversed course. “Expert Review” allowed Grammarly subscribers to receive phony analysis generated by an LLM trained on the work of famous writers, living or dead, in an effort to “take your writing to the next level.” Of course, seeing as this is a tech company we’re talking about, and everything is just data for it to train its products on, Superhuman did so without the consent of its “leading professionals, authors, and subject-matter experts.”
Earlier today, Wired reported that Julia Angwin, founder of The Markup, is the only named plaintiff in a class action suit against Superhuman, seeking damages exceeding $5 million. “We think it’s a pretty straightforward case,” Angwin’s attorney told Wired, going on to argue that this type of behavior from tech companies is playing out across society: “Lots of professionals who spend years, or in Julia’s case, decades, honing a skill or a trade, then see that their name or their skills are being appropriated by others without their consent.”

When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.
“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn’t do anything,” David told me. (The names of David and his friend have been changed in this story to protect their privacy.) “So I’m like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I’ve been messing around with ChatGPT a lot.’”
Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It's very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn't lying?”
As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Over the years, the two had shared many deep conversations, unpacking matters of religion and theories about the world and how people perceive it; now, talking to his friend suddenly felt like talking to a cultist. Michael thought that, through ChatGPT, he had discovered a critical flaw in humanity’s understanding of physics.
“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”
But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” used to describe other people’s problematic relationships with chatbots, he wondered if that was what was happening to Michael. His friend was clearly grappling with some kind of delusion tied to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself, and feeling similarly uncertain, I talked to mental health experts about how to approach someone who appears to be embracing delusional ideas after spending too much time with a chatbot.