
As one fan put it, you can spend ‘$4 in coins to have poop smeared on your face’
The post Arc Raiders Players Are Getting Free Tokens But Frustrations Over Pricey Cosmetics Remain appeared first on Kotaku.

If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this problem, Anthropic is expanding on its previous research into AI interpretability with a new study that aims to measure LLMs’ actual so-called “introspective awareness” of their own inference processes.
The full paper on “Emergent Introspective Awareness in Large Language Models” uses some interesting methods to separate out the metaphorical “thought process” represented by an LLM’s artificial neurons from simple text output that purports to represent that process. In the end, though, the research finds that current AI models are “highly unreliable” at describing their own inner workings and that “failures of introspection remain the norm.”
Anthropic’s new research is centered on a process it calls “concept injection.” The method starts by comparing the model’s internal activation states following both a control prompt and an experimental prompt (e.g. an “ALL CAPS” prompt versus the same prompt in lower case). Calculating the differences between those activations across billions of internal neurons creates what Anthropic calls a “vector” that in some sense represents how that concept is modeled in the LLM’s internal state.
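To make that concrete, here is a minimal runnable sketch of the vector computation under toy assumptions: the fake activations, layer width, and injection strength below are illustrative stand-ins, not Anthropic's actual pipeline.

```python
import hashlib
import numpy as np

HIDDEN = 4096  # stand-in for the width of one model layer

def fake_activations(prompt: str) -> np.ndarray:
    # Stand-in for "run the prompt through the model and read out the
    # internal activations at a chosen layer". A real experiment would
    # use the LLM itself; a seeded RNG just gives us stable toy vectors.
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).normal(size=HIDDEN)

# The concept vector is the difference between activations on an
# experimental prompt and a matched control prompt.
caps_vector = fake_activations("HELLO WORLD") - fake_activations("hello world")

# "Injection" then adds that vector, scaled by some strength, into the
# activations for an unrelated prompt to steer the model's processing.
strength = 4.0  # arbitrary illustrative value
steered = fake_activations("tell me about bread") + strength * caps_vector
print(float(np.linalg.norm(steered - fake_activations("tell me about bread"))))
```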

It ain’t Hit and Run 2, but it’s probably the closest we’ll get to a new Simpsons video game anytime soon
The post Fortnite’s New Simpsons Season Is An Incredible Treat For Fans Of The Show appeared first on Kotaku.

Archinstall 3.0.12, a guided installer for Arch Linux, introduces a new -S flag for arch-chroot, enhanced Btrfs support, and adds Uzbek language support.
There might be something shifting within Apple’s commerce strategy. Whereas in the past, new models of the company’s products rarely went on sale, now some of them are getting respectable discounts shortly after release—including, recently, the M4 MacBook Air and, now, the Apple Watch Ultra 3.
Despite being released just this past September, the GPS + Cellular 49mm model is already marked down to $699.99 (originally $799), the lowest price it has yet reached, according to price tracking tools.
The Ultra is Apple’s “pro” watch, focused on durability and lengthy battery life, and it has the largest and brightest display of any Apple Watch. This particular model offers updates to the display, processor, satellite connectivity, cellular connection, charging, and blood pressure monitoring. There are also new features, like hypertension notifications (a feature cleared by the FDA) and sleep scores.
Like older versions, the Watch Ultra 3 comes in one size, 49mm. It also only comes in titanium, while the new Apple Watch Series 11 comes in more sizes and materials if you’re interested. (It’s also half the price, but you won’t get the same display, battery life, or precise GPS, among other things.)
The Watch Ultra 3 gets bright, reaching up to 3,000 nits. Its always-on display refreshes at 1Hz, so there’s no need to wake it up to see the time. Battery life can last up to 42 hours on a single charge, or up to 72 hours in Low Power Mode, a record for Apple Watches.
Since this is the cellular version, you don’t need to bring your iPhone along to take calls or texts. The improved 5G connectivity means your calls will be less likely to drop, and your downloads and streams will be faster.
If I were a betting man, I’d say you won’t find this watch any cheaper on Black Friday or Cyber Monday, given its recent release and Apple’s history with sales. This price is likely as good as it’s going to get for a while.
Captchas have largely vanished from the web in 2025, replaced by invisible tracking systems that analyze user behavior rather than asking people to decipher distorted text or identify traffic lights in image grids. Google launched reCaptcha v3 in 2018 to generate risk scores based on behavioral signals during site interactions, making bot-blocking technology “completely invisible” for most users, according to Tim Knudsen, a director of product management at Google Cloud.
Cloudflare followed in 2022 by releasing Turnstile, another invisible alternative that sometimes appears as a simple checkbox but actually gathers data from devices and software to determine if users are human. Both companies distribute their security tools for free to collect training data, and Cloudflare now sees 20% of all HTTP requests across the internet.
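As a rough illustration of how this scoring works in practice, here is a sketch of the server-side check for a reCaptcha v3 token, based on Google’s documented siteverify endpoint; the secret key, helper name, and 0.5 threshold are placeholders, not code from either company.

```python
import requests

SECRET_KEY = "your-recaptcha-secret"  # placeholder

def risk_score(token: str) -> float:
    # The site posts the token its frontend received to Google's
    # verification endpoint and gets back a JSON verdict.
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": SECRET_KEY, "response": token},
        timeout=10,
    )
    result = resp.json()
    if not result.get("success"):
        return 0.0  # invalid or expired token
    # v3 returns a score from 0.0 (likely bot) to 1.0 (likely human);
    # each site picks its own cutoff instead of showing a challenge.
    return float(result.get("score", 0.0))

# Example policy: block or escalate when the score falls below 0.5.
# if risk_score(token) < 0.5: show_fallback_challenge()
```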
The rare challenges that do surface have become increasingly bizarre, ranging from requests to identify dogs and ducks wearing various hats to sliding a jockstrap across a screen to find matching underwear on hookup sites.
Read more of this story at Slashdot.
President Trump says he still doesn’t know who Binance founder and former CEO Changpeng Zhao is, despite having pardoned Zhao last month.
CBS correspondent Norah O’Donnell asked Trump about the pardon in a 60 Minutes interview that aired yesterday, noting that Zhao pleaded guilty to violating anti-money laundering laws. “The government at the time said that C.Z. had caused ‘significant harm to US national security,’ essentially by allowing terrorist groups like Hamas to move millions of dollars around. Why did you pardon him?” O’Donnell asked.
“Okay, are you ready? I don’t know who he is. I know he got a four-month sentence or something like that. And I heard it was a Biden witch hunt,” answered Trump, who has criticized his predecessor for signing pardons with an autopen.
The JBL Tune Flex 2 earbuds are on sale for $49.99 in open-box condition on StackSocial, which is less than half the $109.95 price of a new pair on Amazon. “Open-box” just means the packaging might have some shelf wear like scuffs, stickers, or light handling marks, but the earbuds themselves are tested, in new condition, and backed by a one-year warranty. If you don’t mind imperfect packaging, that’s a solid savings on a feature-packed set of wireless buds.
JBL is known for its bass-driven sound, and the 12mm drivers in the Tune Flex 2 deliver plenty of punch. They’re also versatile depending on how you want to listen. Sealed ear tips provide stronger passive noise isolation, while open tips allow you to remain more aware of your surroundings. Adaptive noise cancelling is designed to shut out distractions, but you also get Ambient Aware and TalkThru modes when you want to hear what’s going on without taking your earbuds out. Calls are handled by six microphones for improved clarity, and the JBL Headphones app allows you to fine-tune the sound with tools like Personi-Fi 3.0. And with multipoint Bluetooth, you can also jump between your phone and laptop without re-pairing.
Battery life holds up well, too. You get up to 12 hours per charge with noise cancelling off (plus 36 more in the case), or 8 hours with ANC on (plus 24 in the case). Real-life usage may vary depending on how often you switch modes, the volume at which you listen, and how frequently you make calls. Also, while the earbuds themselves are rated IP54 for dust and water resistance, the case isn’t, so that’s worth keeping in mind if you’ll be carrying them outdoors. All in, the Tune Flex 2 offers a lot of flexibility and performance for the price. If you prefer your packaging pristine, you might still lean toward a new pair, but if your focus is on sound quality, long battery life, and handy features, this open-box deal makes sense.
More accurate translations appear to be making their way to Google Translate, as first spotted by 9to5Google. While Lifehacker has not been able to confirm this independently, 9to5Google says some of its iOS devices now show an option to pick an “Advanced” translation model in the Google Translate app.
The new model shows up as an option in a model picker at the top of the page, similar to the Gemini app, and advertises “High accuracy for complex translations.” Engadget was also able to get the model picker to appear, where the Advanced model said it “specializes in accuracy using Gemini.”
Those wishing to use the old translation tools can instead continue to use the “Fast” model.
Following the introduction of an AI language-learning tool competing with Duolingo back in August, the new model further cements Google Translate as an AI-powered app, with the idea being that incorporating the LLM will allow for translations of longer, more context-sensitive work.
For now, the Advanced translation model does come with a few limitations. First, it only supports text translation, so no holding your phone out to a native speaker and recording what they say. Second, it only supports “select languages.”
While 9to5Google does not clarify which languages the Advanced model works with, Engadget’s report says that it currently only works between English and French, or English and Spanish. The publication also tested an excerpt from a French play with the new model, saying that while the Fast model gave a more literal word-for-word translation, the Advanced model was more accurate, taking into account the passage’s nuance and better translating an idiom that the old tools missed.
While the Advanced model is a more explicit AI addition, it is not the first time the Google Translate app has used AI to translate text. In August, Google said it had already started using “Gemini models in Translate,” and the company has been experimenting with AI translation since 2016, when it said the approach “reduced translation errors by an average of 60%.” Still, the new model means more choice for those with access to it, and a greater commitment to bringing new AI tools to the app.
Unfortunately, it seems like it’ll take some time to roll out fully, as I currently don’t see it on any of my devices. I’ve contacted Google for an update on when the Advanced model is likely to reach all users.
This season, Zwift is leaning heavily into the Zwift Camp concept, launching a three-camp series that kicked off with Zwift Camp: Baseline on September 15.
Next week (Monday, November 10) the second Camp of the season begins. Named “Zwift Camp: Build”, it’s a 5-stage workout series all about pushing yourself in targeted workouts to build performance at particular intervals. Dive into all the details below!

After Zwift Camp: Baseline showed us our power bests across various intervals, Zwift Camp: Build is here to push us to train and get stronger.
The Camp consists of five different workouts, spread across five weeks. You can finish each workout once and complete the Camp, but you can also do a workout multiple times if you’re looking for additional training.
The workouts target the same approximate time intervals that Zwift Camp: Baseline tested, plus a longer bonus effort up Alpe du Zwift.
Zwift is using lots of different game and HUD features to make their latest Zwift Camp as effective and engaging as possible.


Stages can be completed as on-demand (solo) efforts whenever you’d like, or you can join a scheduled group event. Note: on-demand rides of stages 4 and 5 will not include RoboPacers.
Sign up at zwift.com/zwift-camp (events coming soon)
Each stage is a week long, with events beginning at 9am PST on Monday and scheduled hourly on the hour until 8am PST the following Monday.
Three unlocks are available as you work your way through Zwift Camp: Build.

Each Zwifter will have a Zwift Camp: Build dashboard that includes a progress meter and their power bests across the target intervals. This will be available at zwift.com and in the Companion app.
Access your dashboard at zwift.com/zwift-camp-build/dashboard (going live soon)

This is the second of three Zwift Camps planned for the 2025/26 peak Zwift season.
What do you think of this second Zwift Camp of the season? Planning to participate? Got questions? Share your thoughts below!
An anonymous reader shares a report: The Content Overseas Distribution Association (CODA), an anti-piracy organization representing Japanese IP holders like Studio Ghibli and Bandai Namco, released a letter last week asking OpenAI to stop using its members’ content to train Sora 2, as reported by Automaton. The letter states that “CODA considers that the act of replication during the machine learning process may constitute copyright infringement,” since the resulting AI model went on to spit out content with copyrighted characters.
Sora 2 generated an avalanche of content containing Japanese IP after it launched on September 30th, prompting Japan’s government to formally ask OpenAI to stop replicating Japanese artwork. This isn’t the first time one of OpenAI’s apps clearly pulled from Japanese media, either — the highlight of GPT-4o’s launch back in March was a proliferation of “Ghibli-style” images.
Sam Altman announced last month that OpenAI will be changing Sora’s opt-out policy for IP holders, but CODA claims that the use of an opt-out policy to begin with may have violated Japanese copyright law, stating, “under Japan’s copyright system, prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections.”
Read more of this story at Slashdot.

The security of a popular wifi router brand is under scrutiny from multiple federal agencies, and the devices could eventually be pulled from shelves in the United States. According to reporting from the Washington Post, the US Department of Commerce has proposed a ban on routers from TP-Link Systems, a move that has now received support from the Departments of Homeland Security, Justice, and Defense.
The proposal reportedly stems from security concerns with routers sold by TP-Link Systems, which is headquartered in California but was spun off from China-based TP-Link Technologies. Commerce officials have warned that the devices handle sensitive data and may be subject to influence by the Chinese government.
For example, there is concern that TP-Link is required to provide information to Chinese intelligence agencies and the central government, which could in turn force software updates that compromise user data. (It is important to note that U.S.-based TP-Link Systems disputes this and says that only U.S. engineers can push patches to devices owned by U.S. customers.)
The interagency review of TP-Link actually began during the Biden administration—and this isn’t the first action the federal government has taken against tech companies that have foreign ties. In June 2024, the Commerce Department banned sales of antivirus software from Russia’s Kaspersky Lab to U.S. consumers.
Again, the proposal under consideration could ban future sales of TP-Link Systems routers to U.S. users. Existing devices from TP-Link have been targeted by threat actors and been subject to zero-day vulnerabilities, including a flaw that allowed full takeover.
Of course, most internet-connected devices are vulnerable to hackers, and while some security experts urge caution when it comes to TP-Link, there isn’t universal support for tossing your router ASAP. Instead, you should continue to follow security best practices to protect your home network, such as changing default login credentials, enabling protective features like a firewall and encryption, and keeping your device’s firmware up to date. If you do need to purchase a new router—if you stop renting from your internet service provider, for example—you might consider a different brand.
Some estimates suggest that TP-Link’s home routers make up as much as half of the market in the U.S. (though others put the numbers much lower). Many of those devices are sold or leased through ISPs.

Johnny Silverhand’s story was pretty much done, no matter what you did in 2077
The post Cyberpunk Creator Tells Keanu Reeves To ‘Contact’ Him About Bringing His Character Back appeared first on Kotaku.
You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.
Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.
At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.
As AerynOS powers ahead without its founder, the open source world might ask: Why does Ikey Doherty keep going AWOL?
The post Ikey Doherty’s Gone Missing Again appeared first on FOSS Force.
Barry Warsaw, writing for the Python steering council, has announced that PEP 810 (“Explicit lazy imports”) has been approved, unanimously, by the four who could vote. Since Pablo Galindo Salgado was one of the PEP authors, he did not vote. The PEP provides a way to defer importing modules until the names defined in a module are needed by other parts of the program. We covered the PEP and the discussion around it a few weeks back. The council also had “recommendations about some of the PEP’s details, a few suggestions for filling a couple of small gaps”, including:

Use lazy as the keyword. We debated many of the given alternatives (and some we came up with ourselves), and ultimately agreed with the PEP’s choice of the lazy keyword. The closest challenger was defer, but once we tried to use that in all the places where the term is visible, we ultimately didn’t think it was as good an overall fit. The same was true with all the other alternative keywords we could come up with, so… lazy it is!

What about from foo lazy import bar? Nope! We like that in both module imports and from-imports that the lazy keyword is the first thing on the line. It helps to visually recognize lazy imports of both varieties.
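For a concrete picture of the approved spelling, here is a short sketch based on the PEP and the council’s note above. Lazy imports are not in any released CPython yet, so this shows the specified syntax rather than code you can run today; the module and function names are just examples.

```python
# "lazy" leads the line for a plain module import...
lazy import json

# ...and for from-imports too; the council rejected the
# "from foo lazy import bar" spelling to keep the keyword first.
lazy from json import dumps

def to_wire(record):
    # json is only actually imported the first time dumps is used,
    # not when this module is loaded.
    return dumps(record)
```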
A developer operating under the handle @XenoPanther has stripped Windows 7 down to 69MB. The OS boots but runs almost nothing, because critical components like the common dialog and common control libraries are missing. @XenoPanther described the project on X as “more of a fun proof of concept rather than something usable.” The desktop appears, and the genuine-Windows check remains intact.
Read more of this story at Slashdot.
Arrow has introduced the AXC3000 Starter Kit, a compact FPGA development platform featuring the first production device from the Altera Agilex 3 family. Following the Agilex 5 AXE5000 devkit, this board provides a smaller form factor and focuses on low- to mid-range applications that demand efficient compute performance in compact designs. The Altera Agilex 3 […]