A Love Letter To The Suplex, Wrestling’s Greatest Move

If you ever get suplexed in real life, you probably deserved it and it was raw as fuck

The post A Love Letter To The Suplex, Wrestling’s Greatest Move appeared first on Aftermath.



Read the whole story
InShaneee
8 hours ago
Chicago, IL

Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist


The Chicago Sun-Times newspaper’s “Best of Summer” section, published over the weekend, contains a guide to summer reads that features real authors alongside fake books they did not write. The guide was partially generated by artificial intelligence, the person who generated it told 404 Media.

The article, called “Summer Reading list for 2025,” suggests reading Tidewater by Isabel Allende, a “multigenerational saga set in a coastal town where magical realism meets environmental activism. Allende’s first climate fiction novel explores how one family confronts rising sea levels while uncovering long-buried secrets.” It also suggests reading The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian. “This time, the story follows a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.” Neither of these books exists, and many of the books on the list either do not exist or were written by authors other than the ones to whom they are attributed.


Google Decided Against Offering Publishers Options In AI Search

An anonymous reader quotes a report from Bloomberg: While using website data to build a Google Search topped with artificial intelligence-generated answers, an Alphabet executive acknowledged in an internal document that there was an alternative way to do things: They could ask web publishers for permission, or let them directly opt out of being included. But giving publishers a choice would have made training AI models for search too complicated, the company concluded in the document, which was unearthed in the company's search antitrust trial. It said Google had a "hard red line": any publisher who wanted their content to show up on the search page would also have to let it be used to feed AI features.

Instead of giving options, Google decided to "silently update," with "no public announcement" about how it was using publishers' data, according to the document, written by Chetna Bindra, a product management executive at Google Search. "Do what we say, say what we do, but carefully."

"It's a little bit damning," said Paul Bannister, the chief strategy officer at Raptive, which represents online creators. "It pretty clearly shows that they knew there was a range of options and they pretty much chose the most conservative, most protective of them -- the option that didn't give publishers any controls at all."

For its part, Google said in a statement to Bloomberg: "Publishers have always controlled how their content is made available to Google as AI models have been built into Search for many years, helping surface relevant sites and driving traffic to them. This document is an early-stage list of options in an evolving space and doesn't reflect feasibility or actual decisions." The company added that it continually updates its product documentation for search online.

Read more of this story at Slashdot.


23andMe and its user data will soon belong to a pharmaceutical giant

Regeneron Pharmaceuticals is acquiring “substantially all of" 23andMe’s assets.

23andMe will keep offering customers its DNA testing services after being bought out of bankruptcy. New York-based biotech company Regeneron Pharmaceuticals announced an agreement on Monday to purchase the 23andMe startup for $256 million, alongside its Total Health and Research Services business and biobank of customer data and genetic samples.

Regeneron is the winner of 23andMe’s bankruptcy auction, which required all bidders to comply with applicable laws and the firm’s privacy policies around customer data. 23andMe says that customer data is anonymized and that stored genetic samples are destroyed when users delete their 23andMe accounts, but it’s unclear how much information is retained and may, therefore, soon be in Regeneron’s hands.

The acquisition is expected to close later this year, subject to US Bankruptcy Court approval. If all goes ahead, Regeneron co-founder George D. Yancopoulos says the purchase will further the company’s “large-scale genetics research” into future drugs and treatments.

23andMe has collected genetic samples and data from more than 15 million customers since launching its at-home DNA testing kit business. Once briefly valued at $6 billion after going public in 2021, the company filed for bankruptcy in March after failing to turn a profit. Its co-founder and former CEO, Anne Wojcicki, simultaneously stepped down from the company.

“We are pleased to have reached a transaction that maximizes the value of the business and enables the mission of 23andMe to live on, while maintaining critical protections around customer privacy, choice, and consent with respect to their genetic data,” said 23andMe chair Mark Jensen. “We are grateful to Regeneron for offering employment to all employees of the acquired business units, which will allow us to continue our mission of helping people access, understand, and gain health benefits through greater understanding of the human genome.”


Is the Altruistic OpenAI Gone?

"The altruistic OpenAI is gone, if it ever existed," argues a new article in The Atlantic, based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Altman's ouster (and rehiring), he was "seemingly trying to circumvent safety processes for expediency," with OpenAI co-founder and chief scientist Ilya Sutskever telling three board members, "I don't think Sam is the guy who should have the finger on the button for AGI." (The board had already discovered Altman "had not been forthcoming with them about a range of issues," including a breach of the Deployment Safety Board's protocols.)

Adapted from the upcoming book Empire of AI, the article first revisits the summer of 2023, when Sutskever ("the brain behind the large language models that helped build ChatGPT") met with a group of new researchers:

Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking.... To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. "Once we all get into the bunker — " he began, according to a researcher who was present. "I'm sorry," the researcher interrupted, "the bunker?"
"We're definitely going to build a bunker before we release AGI," Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. "Of course," he added, "it's going to be optional whether you want to get into the bunker." Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. "There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the researcher told me. "Literally, a rapture...."

But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

"For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened," the article concludes. Instead, there was "a lack of clarity from the board about their reasons for firing Altman." There was fear about a failure to realize their potential (and some employees feared losing a chance to sell millions of dollars' worth of their equity). "Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack... He began to plead with his fellow board members to reconsider their position on Altman." And in the end, "Altman would come back; there was no other way to save OpenAI."
To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be....

The author believes OpenAI "has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models..." "At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it's also eroding their critical thinking."

Read more of this story at Slashdot.
