Great Job, Internet!: Intrepid Mystery Science Theater fan finds and uploads lost episode "K03"

Mystery Science Theater 3000 has managed to keep a lot of crappy movies in the public consciousness by sheer mocking will. By torturing Joel (or Mike or Jonah) and their robot friends, Crow and Tom Servo, with bad movies, the Mads have helped cinematic sadists keep bad-movie classics like Manos: The Hands Of Fate and The Final Sacrifice in circulation. But given its public access roots, MST3K has had to work to keep its episodes available to said sadists. One in particular has become the Holy Grail of Mystery Science Theater ephemera: a lost episode from 1988 made during the show’s KTMA years. But seemingly out of the ether, a YouTuber named Arthur Putie has found a VHS copy of the show’s third episode, Star Force: Fugitive Alien II, also known as “K03,” and uploaded it to YouTube. On Reddit, they wrote that the tape was “found in a garage sale around Minneapolis & finally digitized.”

The movie in question is a compilation film assembled from episodes of the Japanese series Star Wolf; the Mads would use it against the Satellite of Love again a few years later, in season three.

Though the creators of Mystery Science Theater have always encouraged fans to “keep circulating the tapes,” that hasn’t always been so easy. In 2021, Ivan Askwith, a producer on the Netflix-era MST3K episodes, wrote on Kickstarter that the episode’s whereabouts were still unknown. “If we had KTMA Episode 3, we’d have made it available by now,” he wrote. “But we don’t have it either, so we’ve been wanting to get our hands on [a] copy as badly as everyone else. As far as we know, there isn’t a known copy ANYWHERE.” It was sitting in a Minneapolis garage sale all this time.

“K03” is now more widely available to anyone who wants to check it out.

Google's AI might rewrite this headline

After Google cratered web traffic with AI summaries prone to misinformation and hallucinations, journalists around the world waited with bated breath for the company’s next great innovation. No longer content with simply plagiarizing others’ work for AI summaries, Google is now using AI to rewrite headlines in search results.

This is per The Verge, which is ironically always the first to get screwed by the tech and AI industry’s attempts at replacing trustworthy media with hallucinating chatbots. Obviously, much as with Grammarly’s attempt to steal writers’ identities, Google didn’t bother to ask for consent. Instead, it’s editorializing headlines in the company’s once-coveted “10 blue links” with AI-generated clickbait that misinforms the user. Writer Sean Hollister writes, “Google reduced our headline ‘I used the “cheat on everything” AI tool and it didn’t help me cheat on anything’ to just five words: ‘“Cheat on everything” AI tool.’ It almost sounds like we’re endorsing a product we do not recommend at all.”

Google tells The Verge that these changes are just an experiment, much like the one the company performed in Google Discover. In that case, Google began “experimenting” with AI clickbait on headlines in Discover, which later evolved into a permanent feature. So we should assume that AI headlines will become the norm sooner rather than later. According to Google spokesperson Jennifer Kutz, the change is meant to help match queries to headlines. To do so, Google is awkwardly snipping bits of the headline or rewriting it wholesale in ways that are misleading and inaccurate. Why stop there? Why not just change the headline to the exact query, whether the article is a good match or not? This is all unsurprising. Since Google began diminishing Search to chase those AI ad bucks, its crown-jewel product has become increasingly unusable for users and hostile toward journalists.

FBI Is Buying Location Data To Track US Citizens, Director Confirms

An anonymous reader quotes a report from TechCrunch: The FBI has resumed purchasing reams of Americans' data and location histories to aid federal investigations, the agency's director, Kash Patel, testified to lawmakers on Wednesday. This is the first time since 2023 that the FBI has confirmed it was buying access to people's data collected from data brokers, who source much of their information -- including location data -- from ordinary consumer phone apps and games, per Politico. At the time, then-FBI director Christopher Wray told senators that the agency had bought access to people's location data in the past but that it was not actively purchasing it. When asked by U.S. Senator Ron Wyden, Democrat of Oregon, if the FBI would commit to not buying Americans' location data, Patel said that the agency "uses all tools ... to do our mission." "We do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act -- and it has led to some valuable intelligence for us," Patel testified Wednesday. Wyden said buying information on Americans without obtaining a warrant was an "outrageous end-run around the Fourth Amendment," referring to the constitutional provision that protects people in America from unreasonable device searches and data seizures.

Read more of this story at Slashdot.

Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway

ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report:

In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street."

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Read more of this story at Slashdot.

Saying "nope" to Nvidia's "yassify" tech

It’s called DLSS 5—that is, “Deep Learning Super Sampling,” and apparently we’re already on the fifth one. Nvidia’s plan with DLSS, at least initially, was to improve graphics on older tech that might struggle to support modern dazzle, simulating the experience of having a more powerful, expensive computer (machines that are, by the way, increasingly scarce). But with the fifth version of DLSS, revealed at GTC 2026, what Nvidia offers isn’t just a sharper, smoother, more impressive picture.

Using AI neural rendering, DLSS 5 hallucinates in the gaps, ostensibly to reproduce dynamic lighting and ray tracing. Instead, the faces of the characters Nvidia has used to demonstrate the technology—principally, Grace Ashcroft of Resident Evil Requiem—have been not only revised but entirely replaced: casualties of what AI is being trained to think we all want. The tech industry (or rather, the humans in tech who care about the actual business, and ownership, of creative labor) has lost its absolute mind over Nvidia pushing this on us, and justifiably so. We’re through the looking glass now.

When industry veteran Will Smith called out DLSS 5 as a glorified “yassify filter,” his critique immediately entered the popular parlance. Admittedly, I’ve been a vocal skeptic of the slippery slope of arbitrating what cup size constitutes an aesthetically ‘serious’ videogame character, but even I had to immediately concede Smith’s point. DLSS 5 really has veered into the realm of “Bold Glamour”—that is, the Instagram or TikTok filter that is the direct antecedent of Mar-a-Lago face. Grace Ashcroft, by the way, is an FBI analyst. The effect is too much like walking in on your boomer mom who watches fire-department procedurals on network television. 

This raises an obvious question: Just how sexy do we really need our videogame characters to be? Is this sexy at all? It’s an uncomfortable question that goes back at least as far as Lara Croft: Tomb Raider’s polygon boobies, but which reached a fever pitch in the months and weeks right before GamerGate, when many gamers felt that game reviewers and tech critics were trying to prevent or totally outlaw sexiness in order to ruin gaming—just to be mean, to prove a point. Now, 12 years later, DLSS 5 has taken the supposed ‘side’ of that hypothetical gamer: the player who might demand maximum protagonist hotness as a sort of aesthetic default.

“This kind of tech undermines the artistic intent of countless artists, animators, lighting engineers, and designers in games,” John Warren of VGBees tells me. Indeed, this isn’t the democratization of taste; it’s the flattening of art and creative labor in favor of whatever the lowest common denominator might want. Which is something AI can only guess at, to our collective peril. This realtime “lighting” filter also diminishes, devalues, the skill of the original work. The entitlement! The extremely questionable decision-making of it all! (It also, not for nothing, introduces questions about artists’ consent and where the hell all the “fair use” laws went.)

“Imagine if this tech also decided Grace Ashcroft’s voice needed to match the visual version DLSS 5 renders,” Warren continues. “You’d laugh the person who wants that out of the room, I hope. But it’s transformational, right? Warping intent for technical ‘improvement’ is anti-art.” Warren’s point yields another very real problem: A new, yassified face grafted on top of an ‘old’ voice would be uncanny. It introduces dissonance, a strange mismatch, a conflict of two interests.

In a swiftly deleted post, likely a variation on a gag meme that in this instance felt far too true, someone had replaced Harrison Ford—that is, the face of videogame and movie hero Indiana Jones—with the absolute blandest looksmaxxed Chad, a face developed not with the artistry of human hands or hearts, but rather, by analyzing the already-blandest human faces in the world and settling on whatever horrific facemorph distortion emerges from it as the ultimate standard for male beauty: a lantern-jawed meme of a man.

Where does this lead? Nowhere good. We iterate on aesthetic standards; one minor tweak begets the next. Have you ever seen those social-media stars who’ve spent way too long in FaceTune? Or someone who’s had maybe one too many surgical revisions? Distortions echo upon other distortions, and soon you’ve blown past dysmorphia entirely, embracing something much more alien, something uncanny. This is where a disordered self-image meets your bank account: Someone will always have one more little tweak to sell you. Discernment, human intervention, is needed, because, if anything has become abundantly clear, it’s that AI cannot pump its own brakes.

This is really what AI is all about—crowdsourcing artistic vision. Not to get too “death of the author” about it, but gamemakers’ intent really does matter less than whatever we, the players, ultimately put of ourselves into the work. And how we feel matters—we are never merely consumers of a piece of media or literature (“content”); rather, we are in constant breathing dialogue with it. That’s because the reader, or player, is a person who is alive, with their own personal context and biases. The instinct to want to challenge a piece of media, as opposed to consuming it uncritically, is a good thing.

It’s one thing to want to challenge something, and quite another to feel absolutely threatened by it, to want to seize control of it, to dominate it: to change it to suit us, seizing control of someone else’s creation, reskinning it, then assuming credit for what amounts to a fresh coat of paint. It is “mod as authorship,” except in this specific case, the mod itself is artless. (Artful mods do exist! What Nvidia is offering here is not it!)

For years, gamers have wanted to seize greater authorship, greater authority, over the games they play. That isn’t new; they’ve been sending death threats to game developers for as long as there have been games. It comes down to control issues. The initial impulse is almost understandable, if pathological. When you love something, when you value it, your instinct might be to also lay some sort of claim to it, to put it on a pedestal or in a little box, to trim it like a bonsai, to clip its wings, to feel territorial, to try to possess it: to declare war on it. To escalate threats, to saber-rattle, to enact a terror campaign until the developers and narrative designers and artists cede to a list of demands.

Already anticipating where this trail leads—how catering to this initial impulse can accelerate—games journalist Leigh Alexander wrote in 2014, effectively, “Creators, you do not have to go there,” in a piece that launched a thousand ships, inciting further mob campaigns under the auspices of a “consumer movement.” One thing quickly became apparent: We may collectively be in the thrall of billion-dollar corporations, but corporations, in turn, will absolutely defer to a mob’s will.

Here is the seductive promise of the mob, though: If anything ever goes wrong, the hope goes, no one person can be blamed. The mob’s promise is the same as the promise of AI itself: an end to personal accountability, to ever being held responsible for one’s own individual thoughts, actions, behaviors, or freaky-ass desires. For good or ill, thoughts, actions, and even desires are the things that make you individually you. Those are the things to lay claim to, to own. You shouldn’t sacrifice or outsource those to the cloud or the AI or the amorphous mob. Unfortunately, AI can never be held accountable for any harms done because, very much like a mob, there’s no one ‘there.’ Moreover, AI cannot give you what you want, only its best neural guess at what it thinks you want. That is to say, AI can only tell you what you want.

It seems as if there is a larger push against seeing ‘real’ people, even pretend-real people: against fictions that have the audacity to depict real physical flaws real individuals might have. Humans, the billionaire consensus seems to be, have failed you, disappointed you. Real life has failed you. Or you’re failing it. Hard to say. Imagine if you could go somewhere better, more utopian, where conversation never stalls and the girls are seriously hot, and your grandma feels alive, at least. Maybe better than alive! And there’s no friction, no cognitive load, because the AI will help you think, will even take over your thinking for you, if you start to get too tired, too pained, by all the thinky-ouchies. 

Endemic to the DLSS 5 brouhaha is this bizarre expectation of having to like everything all the time—to have every personal whim catered to. Unfortunately, with the persistent availability of on-demand Everything, the discomfort of disliking stuff can start to feel foreign, even intolerable. That is why we should all be friction-maxxing our 2026, embracing the imperfect. Maybe we, individually, can’t stop AI’s harms—therein lies the paradox, where collective action is required for sustained change—but we can each enforce checks and balances in our own lives, learning to conscientiously pump the brakes when needed.

CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court

A judge ordered the reinstatement of a video game developer after he was fired as part of a scheme cooked up by a CEO using ChatGPT. Facing the possibility of paying out a massive bonus to the developer of Subnautica 2, the CEO of publisher Krafton used ChatGPT to create a plan to take over the development studio and force out its founder, according to court records.

The Monday ruling details the bizarre story. Unknown Worlds Entertainment is the studio behind the 2018 underwater survival game Subnautica. The company has since been working on the sequel, Subnautica 2. In 2021, South Korean publisher Krafton bought Unknown Worlds Entertainment for $500 million and promised to pay out another $250 million if Subnautica 2 sold well enough.

Krafton’s internal sales projections for Subnautica 2 looked great, which meant the company would likely be on the hook for the additional $250 million. In an attempt to avoid that payout, Krafton CEO Changhan Kim turned to ChatGPT for help. “As Unknown Worlds prepared to release its hotly anticipated sequel, Subnautica 2, the parties’ relationship fractured,” the court decision said. “Fearing he had agreed to a ‘pushover’ contract, Krafton’s CEO consulted an artificial intelligence chatbot to contrive a corporate ‘takeover’ strategy.”

Kim partnered with Krafton Head of Corporate Development Maria Park and the company’s legal team to work out options. He toyed with finding a reason to fire the founders. According to court records, Park pinged Kim on Slack and told him that attempting to avoid paying the bonus would be legally risky. “Hi CEO . . . it seems to be highly likely that the earn-out will still be paid if the sales goal is achieved regardless of the dismissal with cause,” the Slack message said. “Therefore, there isn’t much that we can practically gain other than punishment with a simple dismissal alone, whereas I am worried that we may be exposed to lawsuit and reputation risk.”

But the CEO would not accept defeat. “And so Kim turned to ChatGPT for help,” court records said. “When the AI chatbot responded that the earnout would be ‘difficult to cancel,’ Kim complained to Park that the [payout] was a ‘contract under which we can only be dragged around.’”

Kim pressed the chatbot for an answer. “At ChatGPT’s suggestion, Kim formed an internal task force, dubbed ‘Project X.’ The task force’s mandate was to either negotiate a ‘deal’ on the earnout or execute a ‘Take Over’ of Unknown Worlds. They looked to buy time,” court records said. “Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a ‘Response Strategy’ to a ‘No-Deal’ Scenario.”

This was a piece of ChatGPT’s “Project X” for Krafton:

“a. Preemptive Framing - Repeat that protecting quality and fan trust is the highest priority, undermine the ‘Large Corporation VS. Indie’ framing

b. Securing Control Points -

* Lock down Steam/console publishing rights and access rights over code/build pipeline through both legal and technical aspects.

* For the earn-out freeze, keep room for negotiations through provision stating ‘immediate removal if specific development results are achieved’

a. Systematic materials for legal defense - Prepare contract interpretation memorandums, log all communications, seek external consultation
b. Team retention - Operation of retention packages for key personnel and rapid backfill pipelines in anticipation of resignation/departure scenarios
c. Two handed strategy - Create a structure that allows for both hardball (Legal+ Finance) and softball (Support/Incentives) approaches so moderate factions within Unknown Worlds can push for compromise.”

Kim followed ChatGPT’s advice rather than his lawyers’, according to the court records. The first step was posting a message on Subnautica’s website to get fans on his side. According to court documents, Kim said the goal of the message was to “secure public support from fans and legal validation of our legitimacy.” He then suggested that ChatGPT write it for him. It achieved the opposite of his intended goal. Fans found the message bizarre and worried about the future of the game. Those fears were compounded when Kim fired the game’s original creators and entered into a legal battle with them.

The legal battle is ongoing, but Kim looks set to lose. The judge has ordered the fired developers reinstated, and the ruling has exposed the CEO’s flailing use of ChatGPT. Krafton told Kotaku that it was “evaluating its options” regarding the ruling and that it “puts players at the heart of every decision.”
