You Can Get These Adaptive Noise-Canceling JBL Earbuds on Sale for Just $50 Right Now

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

The JBL Tune Flex 2 earbuds are on sale for $49.99 in open-box condition on StackSocial, which is less than half the $109.95 price of a new pair on Amazon. “Open-box” just means the packaging might have some shelf wear like scuffs, stickers, or light handling marks, but the earbuds themselves are tested, in new condition, and backed by a one-year warranty. If you don’t mind imperfect packaging, that’s a solid savings on a feature-packed set of wireless buds.

JBL is known for its bass-driven sound, and the 12mm drivers in the Tune Flex 2 deliver plenty of punch. They’re also versatile depending on how you want to listen. Sealed ear tips provide stronger passive noise isolation, while open tips allow you to remain more aware of your surroundings. Adaptive noise cancelling is designed to shut out distractions, but you also get Ambient Aware and TalkThru modes when you want to hear what’s going on without taking your earbuds out. Calls are handled by six microphones for improved clarity, and the JBL Headphones app allows you to fine-tune the sound with tools like Personi-Fi 3.0. And with multipoint Bluetooth, you can also jump between your phone and laptop without re-pairing.

Battery life holds up well, too. You get up to 12 hours per charge with noise cancelling off (plus 36 more in the case), or 8 hours with ANC on (plus 24 in the case). Real-life usage may vary depending on how often you switch modes, the volume at which you listen, and how frequently you make calls. Also, while the earbuds themselves are rated IP54 for dust and water resistance, the case isn’t, so that’s worth keeping in mind if you’ll be carrying them outdoors. All in, the Tune Flex 2 offers a lot of flexibility and performance for the price. If you prefer your packaging pristine, you might still lean toward a new pair, but if your focus is on sound quality, long battery life, and handy features, this open-box deal makes sense.

Our Best Editor-Vetted Tech Deals Right Now

  • Amazon Fire TV Stick 4K Plus: $29.99 (List Price $49.99)
  • Ring Video Doorbell Pro 2 with Ring Chime Pro: $149.99 (List Price $259.99)

Deals are selected by our commerce team

Check If You Have Access to Google’s New Gemini-Powered ‘Advanced’ Translations

More accurate translations seem to be making their way to Google Translate, as first spotted by 9to5Google. While Lifehacker has not been able to confirm this independently, the publication says some of its iOS devices now show an option to pick an “Advanced” translation model in the Google Translate app.

The new model shows up as an option in a model picker at the top of the page, similar to the Gemini app, and advertises “High accuracy for complex translations.” Engadget was also able to get the model picker to appear, where the Advanced model said it “specializes in accuracy using Gemini.”

Those wishing to use the old translation tools can instead continue to use the “Fast” model.

Alongside the AI language-learning tool introduced back in August to compete with Duolingo, the new model further cements Google Translate as an AI-powered app, the idea being that the LLM will allow it to translate longer, more context-sensitive passages.

Advanced translations, limited language support

For now, the Advanced translation model does come with a few limitations. First, it only supports text translation, so no holding your phone out to a native speaker and recording what they say. Second, it only works with “select languages.”

While 9to5Google does not clarify which languages the Advanced model works with, Engadget’s report says that it currently only works between English and French, or English and Spanish. The publication also tested an excerpt from a French play with the new model, saying that while the Fast model gave a more literal word-for-word translation, the Advanced model was more accurate, taking into account the passage’s nuance and better translating an idiom that the old tools missed.

While the Advanced model is a more explicit AI addition, it is not the first time the Google Translate app has used AI to translate text. In August, Google said it had already started using “Gemini models in Translate,” and the company has been experimenting with AI translation since 2016, when it said the approach “reduced translation errors by an average of 60%.” Still, the new model means more choice for those with access to it, and a greater commitment to bringing new AI tools to the app.

The update is still rolling out

Unfortunately, it seems like it’ll take some time to roll out fully, as I currently don’t see it on any of my devices. I’ve contacted Google for an update on when the Advanced model is likely to reach all users.

Zwift Camp: Build Announced, Begins November 10

This season, Zwift is leaning heavily into the Zwift Camp concept, launching a three-camp series that kicked off with Zwift Camp: Baseline on September 15.

Next week (Monday, November 10) the second Camp of the season begins. Named “Zwift Camp: Build”, it’s a 5-stage workout series all about pushing yourself in targeted workouts to build performance at particular intervals. Dive into all the details below!

Build Basics

After Zwift Camp: Baseline showed us our power bests across various intervals, Zwift Camp: Build is here to push us to train and get stronger.

The Camp consists of five different workouts, spread across five weeks. You can finish each workout once and complete the Camp, but you can also do a workout multiple times if you’re looking for additional training.

The workouts target the same approximate time intervals as Zwift Camp: Baseline tested, plus a longer bonus effort up Alpe du Zwift:

  • 5-second power
  • 1-minute power
  • 5-minute power
  • 20-minute power
  • 60-minute power (bonus!)

What’s New

Zwift is using lots of different game and HUD features to make their latest Zwift Camp as effective and engaging as possible.

  • Instead of standard ERG mode workouts, Zwift Camp: Build uses route-based workouts and on-screen prompts to guide you through a training effort tailored to Zwift’s virtual parcours
  • RoboPacers will be put to use in stages 4 and 5 to help riders pace their efforts
  • On-screen scripts will recommend enabling HoloReplay for stages 1, 2, and 3, so you can try to beat your previous efforts
  • Lap Splits and Ride Stats HUD elements will be automatically enabled to give you a mid-ride picture of your workout

Workouts + Schedule

Stages can be completed as on-demand (solo) efforts whenever you’d like, or you can join a scheduled group event. Note: on-demand rides of stages 4 and 5 will not include RoboPacers.

  • Stage 1: November 10-16
    • Ride six laps of Glasgow Crit Circuit, putting in a maximal 5-second effort on the Champion’s Sprint each lap.
    • Training Target: Neuromuscular (~5 Seconds)
    • Route: Glasgow Crit Six (18.3km, 199m)
  • Stage 2: November 17-23
    • Test your 1-minute power on three efforts of the Dos d’Ane Sprint as you lap France’s newer cobbled roads.
    • Training Target: Anaerobic Capacity (~1 Minute)
    • Route: Bon Voyage (31.4km, 155m)
  • Stage 3: November 24-30
    • Ride four laps of the Volcano Circuit, pushing to your max to test your 5-minute (VO2) power.
    • Training Target: VO2 (~5 Minutes)
    • Route: Hot Laps (23.4km, 149m)
  • Stage 4: December 1-7
    • Ride up The Grade for a tough threshold workout and FTP test, with 5 different RoboPacers set up at different target times to help you pace your effort.
    • Training Target: Lactate Threshold/FTP Estimate (~20 minutes)
    • Route: Hilltop Hustle (16.3km, 346m)
    • RoboPacer The Grade KOM Target Times:
      • 14 minutes (4.2 W/kg)
      • 18 minutes (3.2 W/kg)
      • 22 minutes (2.6 W/kg)
      • 26 minutes (2.2 W/kg)
      • 30 minutes (1.8 W/kg)
  • Stage 5: December 8-14
    • Ride up Alpe du Zwift for a long threshold effort, with 5 different RoboPacers set up at different target times to help you pace your effort.
    • Training Target: True FTP/Threshold (~60 minutes)
    • Route: Road to Sky (17.3km, 1045m)
    • RoboPacer Alpe du Zwift KOM Target Times:
      • 50 minutes (4.0 W/kg)
      • 60 minutes (3.3 W/kg)
      • 70 minutes (2.8 W/kg)
      • 90 minutes (2.1 W/kg)
      • 120 minutes (1.6 W/kg)
  • Make-Up Events: December 15-21

Sign up at zwift.com/zwift-camp (events coming soon)

Each stage is a week long, with events beginning at 9am PST on Monday and scheduled hourly on the hour until 8am PST the following Monday.
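
Curious how those RoboPacer target times line up with the listed W/kg figures? On a long, steep climb most of your power goes into vertical gain, so a rough estimate of climb time is elevation gain divided by vertical speed, where vertical speed (in metres per second) is your W/kg times an efficiency fraction, divided by 9.81. The short Python sketch below uses an assumed fraction of 0.85 to cover drag, rolling resistance, drivetrain losses, and bike weight; that number is our ballpark, not an official Zwift figure:

    # Rough climb-time estimate from steady W/kg. Assumes ~85% of rider power
    # goes into vertical gain; the rest covers drag, rolling resistance,
    # drivetrain losses, and the bike's weight. Illustrative only.
    G = 9.81           # gravity, m/s^2
    EFFECTIVE = 0.85   # assumed usable fraction of rider W/kg (our ballpark)

    def climb_minutes(w_per_kg, elevation_gain_m):
        vertical_speed = (w_per_kg * EFFECTIVE) / G   # vertical metres per second
        return elevation_gain_m / vertical_speed / 60

    # Alpe du Zwift (Road to Sky) gains roughly 1,045 m:
    for wkg in (4.0, 3.3, 2.8, 2.1, 1.6):
        print(f"{wkg} W/kg -> about {climb_minutes(wkg, 1045):.0f} minutes")

The output lands close to the 50/60/70/90/120-minute RoboPacer targets for Stage 5; swap in a different elevation gain to ballpark Stage 4's climb up The Grade.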

Progressive Unlocks

Three unlocks are available as you work your way through Zwift Camp: Build:

  • Complete 1 Stage: Zwift Camp: Build socks
  • Complete 3 Stages: Zwift Camp: Build headphones/sweatband combo
  • Complete all 5 Stages: Zwift Camp: Build cycling kit

Personal Dashboard

Zwifters will have a Zwift Camp: Build dashboard that includes a progress meter and your power bests across the target intervals. This will be available at zwift.com and in the Companion app.

Access your dashboard at zwift.com/zwift-camp-build/dashboard (going live soon)

2025/26 Zwift Camps

This is the second of three Zwift Camps planned for the 2025/26 peak Zwift season:

  • Zwift Camp: Baseline (September 15-October 20): Pure power analysis
  • Zwift Camp: Build (November 10 – December 21): Power application through in-game segments/routes
  • Zwift Camp: Breakthrough (Spring 2026): Pure power competition and analysis to help you break into a strong outdoor season

Questions or Comments?

What do you think of this second Zwift Camp of the season? Planning to participate? Got questions? Share your thoughts below!

Studio Ghibli, Bandai Namco, Square Enix Demand OpenAI Stop Using Their Content To Train AI

An anonymous reader shares a report: The Content Overseas Distribution Association (CODA), an anti-piracy organization representing Japanese IP holders like Studio Ghibli and Bandai Namco, released a letter last week asking OpenAI to stop using its members’ content to train Sora 2, as reported by Automaton. The letter states that “CODA considers that the act of replication during the machine learning process may constitute copyright infringement,” since the resulting AI model went on to spit out content with copyrighted characters.

Sora 2 generated an avalanche of content containing Japanese IP after it launched on September 30th, prompting Japan’s government to formally ask OpenAI to stop replicating Japanese artwork. This isn’t the first time one of OpenAI’s apps clearly pulled from Japanese media, either — the highlight of GPT-4o’s launch back in March was a proliferation of “Ghibli-style” images.

OpenAI CEO Sam Altman announced last month that the company will be changing Sora’s opt-out policy for IP holders, but CODA claims that the use of an opt-out policy to begin with may have violated Japanese copyright law, stating, “under Japan’s copyright system, prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections.”


Read more of this story at Slashdot.

Microsoft Fixes A Frustrating Windows Shutdown Bug That’s Lingered For Years

You almost assuredly know this problem first-hand: Windows performs updates, and your “Restart” and “Shut down” options get replaced by “Update and restart” and “Update and shut down.” Only, in reality, these are effectively the same option, because on virtually all systems, “Update and shut down” actually results in the machine rebooting instead of shutting down. […]

Federal Agencies May Move to Ban These Popular Wifi Routers

The security of a popular wifi router brand is under scrutiny from multiple federal agencies, and devices could be pulled from shelves in the United States in the future. According to reporting from the Washington Post, the US Department of Commerce has proposed a ban on routers from TP-Link Systems, a move that has now received support from the Departments of Homeland Security, Justice, and Defense.

What is the issue with TP-Link?

The proposal reportedly stems from security concerns with routers sold by TP-Link Systems, which is headquartered in California but was spun off from China-based TP-Link Technologies. Commerce officials have warned that the devices handle sensitive data and may be subject to influence by the Chinese government.

For example, there is concern that TP-Link could be required to provide information to Chinese intelligence agencies and the central government, which could in turn compel software updates that compromise user data. (It is important to note that U.S.-based TP-Link Systems disputes this and says that only U.S. engineers can push patches to devices owned by U.S. customers.)

The interagency review of TP-Link actually began during the Biden administration—and this isn’t the first action the federal government has taken against tech companies that have foreign ties. In June 2024, the Commerce Department banned sales of antivirus software from Russia’s Kaspersky Lab to U.S. consumers.

Is my TP-Link router affected?

Again, the proposal under consideration could ban future sales of TP-Link Systems routers to U.S. users. Existing devices from TP-Link have been targeted by threat actors and been subject to zero-day vulnerabilities, including a flaw that allowed full takeover.

Of course, most internet-connected devices are vulnerable to hackers, and while some security experts express caution when it comes to TP-Link, there isn’t universal support for tossing your router ASAP. Instead, you should continue to follow security best practices to protect your home network, such as changing default login credentials, enabling protective features like a firewall and encryption, and keeping your device’s firmware up to date. If you do need to purchase a new router—if you stop renting from your internet service provider, for example—you might consider a different brand.

Some estimates suggest that TP-Link’s home routers make up as much as half of the market in the U.S. (though others put the numbers much lower). Many of those devices are sold or leased through ISPs.

Google removes Gemma models from AI Studio after GOP senator’s complaint

You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.

At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.


Python steering council accepts lazy imports

Barry Warsaw, writing for the Python steering council, has announced that PEP 810 (“Explicit lazy imports”) has been approved unanimously by the four council members who could vote; since Pablo Galindo Salgado was one of the PEP authors, he did not vote. The PEP provides a way to defer importing modules until the names defined in a module are needed by other parts of the program. We covered the PEP and the discussion around it a few weeks back. The council also had “recommendations about some of the PEP’s details, a few suggestions for filling a couple of small gaps”, including:

Use lazy as the keyword. We debated many of the given alternatives (and some we came up with ourselves), and ultimately agreed with the PEP’s choice of the lazy keyword. The closest challenger was defer, but once we tried to use that in all the places where the term is visible, we ultimately didn’t think it was as good an overall fit. The same was true with all the other alternative keywords we could come up with, so… lazy it is!

What about from foo lazy import bar? Nope! We like that in both module imports and from-imports that the lazy keyword is the first thing on the line. It helps to visually recognize lazy imports of both varieties.
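
For anyone who has not read the PEP, here is a minimal sketch of what the approved syntax looks like, based on the keyword placement the council describes above. It is illustrative only; it needs an interpreter that actually implements PEP 810, which no released CPython does yet.

    # PEP 810-style lazy imports: the "lazy" keyword leads the line for both
    # plain imports and from-imports. The names are bound immediately, but the
    # imported module's code does not run until a name is first used.
    lazy import json
    lazy from pathlib import Path

    def dump(obj, filename):
        # First use of Path and json here is what triggers the real imports.
        Path(filename).write_text(json.dumps(obj))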

Windows 7 Squeezed To 69MB in Proof-of-Concept Build

A developer operating under the handle @XenoPanther has stripped Windows 7 down to 69MB. The OS boots but runs almost nothing because critical files like common dialog boxes and common controls are missing. @XenoPanther described the project on X as “more of a fun proof of concept rather than something usable.” The desktop appears and the genuine check remains intact.


Read more of this story at Slashdot.

AXC3000 Starter Kit Highlights Altera Agilex 3 FPGA with HyperRAM and MIPI Support

Arrow has introduced the AXC3000 Starter Kit, a compact FPGA development platform featuring the first production device from the Altera Agilex 3 family. Following the Agilex 5 AXE5000 devkit, this board provides a smaller form factor and focuses on low- to mid-range applications that demand efficient compute performance in compact designs. The Altera Agilex 3 […]

[$] An explicit thread-safety proposal for Python

Python already has several ways to run programs concurrently — including asynchronous functions, threads, subinterpreters, and multiprocessing — but all of those options have drawbacks of one kind or another.

PEP 703 (“Making the Global Interpreter Lock Optional in CPython”) removed a major barrier to running Python threads in parallel, but also exposed Python programmers to the same tricky synchronization problems found in other languages supporting multithreaded programs. A new draft proposal by Mark Shannon, PEP 805 (“Safe Parallel Python”), suggests a way for the CPython runtime to cut down on concurrency bugs, making it more practical for Python programmers to use versions of the language without the global interpreter lock (GIL).
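
To make those “tricky synchronization problems” concrete, here is a classic data race of the kind that free-threaded builds expose more readily. This is only an illustration of the bug class the proposal is aimed at; it is not PEP 805’s own mechanism, which remains a draft.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe(n):
        global counter
        for _ in range(n):
            counter += 1          # read-modify-write; threads can interleave here

    def safe(n):
        global counter
        for _ in range(n):
            with lock:            # the lock serializes each update
                counter += 1

    def run(worker, num_threads=4, n=100_000):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(n,)) for _ in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("unlocked:", run(unsafe))   # often less than 400000, especially without the GIL
    print("locked:  ", run(safe))     # always 400000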

You May Have a Refund Coming If You Use Amazon Prime

If you’ve signed up for an Amazon Prime subscription in the last few years, you may have some cash coming your way. Amazon recently settled a lawsuit with the Federal Trade Commission (FTC) over deceptive enrollment and cancellation practices, including enrolling customers in Prime without their consent and making it difficult to cancel. The company is now set to pay out $1.5 billion in refunds to affected consumers.

Here’s who qualifies, and how to make sure you get your money.

Am I eligible for an Amazon Prime refund?

Refunds will be paid out to select Amazon Prime subscribers in the U.S. In order to qualify, you must also meet the following criteria:

  • You signed up for your Prime account between June 23, 2019 and June 23, 2025.

  • You signed up through a “challenged enrollment flow” (the universal Prime decision page, shipping selection page, single page checkout, or Prime Video enrollment flow) OR you tried to cancel your Prime subscription between the dates listed above and were unsuccessful.

  • You used no more than three Amazon Prime benefits in any 12-month period after enrolling.

If you signed up for Amazon Prime before or after this time frame, or via another enrollment flow, you aren’t covered by the settlement.

How to get your Amazon Prime refund

In most cases, you won’t need to take any action. If you are eligible, Amazon will automatically refund your Amazon Prime subscription fees by December 25, 2025—up to a maximum of $51.

Some Amazon Prime customers who don’t qualify for automatic refunds may still be able to claim some cash from the settlement. If you signed up through a challenged enrollment flow and used up to 10 Prime benefits in any 12-month period, you may receive a claims form from Amazon via email sometime in early 2026. You’ll need to complete your claim within 180 days to get a refund.

As Mashable notes, payouts could trickle down to other Amazon Prime customers if the full settlement isn’t exhausted in the first two phases—though these refunds are likely to come later.

OpenAI signs massive AI compute deal with Amazon

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.


arXiv Changes Rules After Getting Spammed With AI-Generated ‘Research’ Papers

An anonymous reader shares a report: arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science articles and papers that haven’t been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it’s become an important place for new breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don’t pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science and the research is vetted by moderators who are subject matter experts.


Read more of this story at Slashdot.