Microsoft Flight Simulator 2024 Now Has Foveated Rendering

Microsoft Flight Simulator 2024 now has both fixed and eye-tracked foveated rendering, alongside a range of other improvements to VR support.

Fixed foveated rendering (FFR) has been in beta testing for a few months, and both it and eye-tracked foveated rendering (ETFR) arrive today for all players in Sim Update 2.

If you’re unfamiliar with the concept of foveated rendering:

  • Fixed Foveated Rendering (FFR) means rendering the central area of the image at a higher resolution than the peripheral area.
  • Eye-Tracked Foveated Rendering (ETFR), occasionally also called Dynamic Foveated Rendering, means rendering the area you’re looking at in each frame at a higher resolution than everywhere else, as determined by the eye tracking built into some headsets.

Both techniques save performance in VR, and this can be used either to run demanding experiences at a smoother framerate or to render experiences that already hit their target framerate at a higher peak resolution. FFR comes with noticeable pixelation at the edges of the view but works on any headset, while with ETFR there shouldn’t be any noticeable difference, assuming the eye tracking system has low enough latency.

In both cases, as Asobo Studio notes, the higher the peak resolution, the greater the GPU performance gain you’ll see from enabling foveated rendering. However, enabling it does come with some CPU overhead.
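
To get a feel for where the savings come from, here is a rough back-of-the-envelope sketch in Python. The render target size, inner-region fraction, and peripheral scale are illustrative assumptions, not the sim’s actual parameters.

```python
# Back-of-the-envelope illustration of why foveated rendering saves GPU work.
# All numbers are illustrative assumptions, not values used by Flight
# Simulator 2024 or Asobo Studio.

def shaded_pixels(width, height, inner_fraction, peripheral_scale):
    """Pixels shaded per eye when an inner region stays at full resolution
    and the periphery is rendered at a reduced scale, vs. shading everything
    at full resolution."""
    full = width * height
    inner = full * inner_fraction                       # kept at full resolution
    periphery = (full - inner) * peripheral_scale ** 2  # rendered at reduced resolution
    return full, inner + periphery

# Hypothetical 3000x3000 per-eye render target, inner 25% of the image at
# full resolution, periphery at half resolution per axis.
full, foveated = shaded_pixels(3000, 3000, inner_fraction=0.25, peripheral_scale=0.5)
print(f"full: {full / 1e6:.1f} MP  foveated: {foveated / 1e6:.1f} MP  "
      f"({100 * (1 - foveated / full):.0f}% fewer pixels shaded)")
```

The fractional saving is fixed in this simple model, but the absolute number of pixels (and therefore GPU time) saved grows with the peak resolution, which matches the note above.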

Depiction of typical Fixed Foveated Rendering.

Depiction of Eye-Tracked Foveated Rendering from Meta. Note that the eye camera view here is not correctly synced to the rendering view; this is for illustration purposes only.

Flight Simulator 2024 appears to be using the quad views foveated rendering technique developed by Varjo and merged into OpenXR 1.1 last year. This means its eye-tracked foveated rendering should work with any headset which provides its eye tracking to OpenXR, including Bigscreen Beyond 2e, Pimax Crystal and Crystal Super, Varjo XR-4 and Aero, and Meta Quest Pro via Quest Link, Virtual Desktop’s VDXR runtime, or Steam Link.

Foveated rendering has also been supported in DCS, a popular combat flight sim, since last year, and VR users with eye tracking have seen significant performance benefits in it.

Other FS2024 VR Improvements

Foveated rendering is not the only improvement in Microsoft Flight Simulator 2024’s Sim Update 2.

Here’s every other mention of VR in the changelog:

Virtual Reality
• Fixed moving the cursor over an in-game panel interacting with the game window
• Fixed an issue causing some attached interactions to not block the VR ray in some cases
• Fixed an issue causing hidden yokes to still affect the VR rays
• Fixed an invisible UI in VR that was taking raycasts
• Fixed the toolbar that closed itself when resetting the panels in VR
• Fixed the mission steps panel did not appear by default when switching to VR. Now when switching to VR, all the default settings of the game mode are reapplied
• Fixed mission steps not displaying VR controllers icons
• Removed the “VR Compatibility” popup
• Fixed UI inputs broken after using the certification tree
• Fixed in game panels position and size not saved in VR
• Fixed invisible in game panels when exiting VR mode
• Fixed grouped missions on worldmap impossible to select in VR
• Holding the yoke will now disable its highlight.
• Fixed Render Scale option in VR Tab not applying while in VR.
• Fixed no free cam control with VR controllers default profiles
• Fixed not being able to rotate the player character left/right during preflight with VR controllers default profiles
• Fixed Night Lights and transparent objects leaking through Clouds in VR.
• Fixed VR graphic preset LOW not setting rocks, grass and plants to a lower setting
• Fixed a random crash when switching to VR with DevMode & Smart Docking System activated
• Fixed a 100% crash on enabling Quad Views when using HDR monitors on Windows
• Fixed interaction in VR for the collective of helicopters by implementing a similar logic to the yoke’s.
• Fixed the interactions with the cockpit were lost after resuming from the pause in VR
• Fixed the EFB panel scroll/zoom input behaviours in VR
• Fixed the toolbar not being interactable in VR with the mouse when the controllers were in proximity mode
• Added the option to move the EFB when it’s undocked in VR
• Made the EFB interactable in VR when externalized
• Fixed the conflict of interaction between instruments and vcockpits in VR (the distance was not taken into account when instruments and vcockpits were aligned).
• Fixed holding and dragging a list should not click on the selected tile
• Fixed right VR controllers missing default input profiles for some headsets
• Fixed an issue preventing the Yoke Controller from working after leaving the pause screen
• Improved the gesture detection for lever & Knobs interactions
• Fixed an issue preventing the Yoke Controller from working properly in VR for the Skyship
• Fixed “Ready to fly” button not clickable in VR
• Fixed in game panels position and size not saved when going to the pause menu
• Mapped “Unbind” in control settings to Left and Right Controllers
• Fixed missing scrolling in menus with right joysticks on VR controllers
• Fixed inverted “IN-FLIGHT UI PANEL – ZOOM AXIS” on VR controllers

Familiarization Flight
• Fixed an issue where irrelevant steps and notifications were shown while playing in VR

General Bug Fixes
• Fixed the ratio of panels in VR flight
• Fixed the appearance of the toolbar when going out of VR flight
• Fixed some 3D UI that were appearing in front of the panels in VR flight
• Fixed behavior of lighting knobs on the C172 G1000 which in VR used both horizontal movement and rotation to increment/decrement while other knobs only consider rotation.

Stability & Performance
• Fixed a crash when changing DLSS quality mode during VR.

UK Needs More Nuclear To Power AI, Says Amazon Boss

In an exclusive interview with the BBC, AWS CEO Matt Garman said the UK must expand nuclear energy to meet the soaring electricity demands of AI-driven data centers. From the report: Amazon Web Services (AWS), which is part of the retail giant Amazon, plans to spend 8 billion pounds on new data centers in the UK over the next four years. Matt Garman, chief executive of AWS, told the BBC nuclear is a “great solution” to data centres’ energy needs as “an excellent source of zero carbon, 24/7 power.” AWS is the single largest corporate buyer of renewable energy in the world and has funded more than 40 renewable solar and wind farm projects in the UK.

The UK’s 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland’s 80 hoover up 21% of the country’s total power, with those numbers projected to hit 6% and 30% respectively by 2030. The body that runs the UK’s power grid estimates that by 2050 data centers alone will use nearly as much energy as all industrial users consume today.

In an exclusive interview with the BBC, Matt Garman said that future energy needs were central to AWS planning process. “It’s something we plan many years out,” he said. “We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that particularly as we look 10 years out.”


Apple’s New CarPlay ‘Ultra’ Won’t Fix the Biggest Problem of Phone-Connected Cars

An anonymous reader quotes a report from Gizmodo: Apple’s next step for CarPlay is a version you’ll only get to try if you’re a fan of luxury cruisers or a popular spy film franchise. CarPlay Ultra, with its new suite of exclusive features like custom gauges, is coming first to Aston Martin vehicles with the largest, most blaring dash screens. The more advanced version of CarPlay won’t necessarily fix the lingering issues the software has with some modern vehicles. Segmenting CarPlay into newer and older systems may make things worse for those with aging cars. Apple’s CarPlay Ultra includes a new kind of dashboard alongside real-time information that can include car diagnostics — like tire pressure — or dashboard gauges. You should be able to control temperature and other car-based features as well. The new version of the software includes options for dashboards or console screens, and it will work with on-screen controls, Siri, and “physical buttons.”

CarPlay Ultra was supposed to launch in 2024, but Apple missed its release date by close to half a year. The new feature suite was first revealed at WWDC in 2022, where Apple promised a “unified and consistent” suite of informational dashboards offering more control over radio and AC “without ever leaving the CarPlay experience.” Last year, Apple showed off “the next generation” of its car-focused app that included custom gauges and other layouts made for a variety of automakers. It lacked much of the full-width, busy design of the initial iteration from two years prior. […]

To entice more manufacturers, CarPlay Ultra is supposed to adapt to multiple screen sizes thanks to a modular layout system with more options for companies to adhere to their own brand identity. Apple promised carmakers they could resize and reorient gauges on a dashboard the way you do widgets on your iPhone. Users can change up various gauges on the dash and bring up apps like Apple Music or Maps in between the temperature gauge and speedometer. Aston Martin showed off these features on an Aston Martin DBX, a luxury SUV that costs more than $250,000. Apple said these features should be coming to the U.S. and Canada first, with more Aston Martins getting these features through software updates from local dealerships. Apple said it’s still trying to bring these features to brands like Hyundai, Kia, and Genesis. Maybe we’ll see Ultra on a vehicle regular folk can afford. “The customizable dashboards are a way for Apple to let each carmaker have their say in how their vehicles look, but they won’t help all those who are stuck with regular CarPlay on their aging beaters,” concludes Gizmodo’s Kyle Barr. “The new version will inevitably create a distinction between those with new software and others with legacy software…”


MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery

MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn’t provide specific details on what it believes is wrong with the paper. From a post: “Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv’s Code of Conduct.

“Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible.” Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication’s prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. […]

“We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics.” The paper in question, titled “Artificial Intelligence, Scientific Discovery, and Product Innovation” and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization.

The Wall Street Journal reported on MIT’s statement.


I Used ‘Cover Your Tracks’ to See What’s Following Me Online, and Yikes

The internet is not a private place. Every time you connect to a website, ads and beacons—both visible and invisible—attempt to follow your every move. They’ll even take the anonymized information they collect and build a profile that other trackers can use to identify you as you surf the web. It’s pretty bleak.

Luckily, modern browsers are fighting back. While some are certainly better at this than others, most browsers have privacy protections built in. If you know what you’re doing, you can max out these protections and install some third-party boosters to retain some (if not most) of your privacy across the internet.

If you’re curious how your browser stacks up to the competition, check out Cover Your Tracks. I gave it a shot, and was surprised to find out that my super-private setup wasn’t as anonymous as I thought.

What is Cover Your Tracks?

Cover Your Tracks is a project created by the Electronic Frontier Foundation (EFF), a non-profit digital rights group. It aims to educate users on two key metrics: how unique their browser is, and how effective their tracker blockers are. The site pretends to load a series of trackers on your browser and measures which trackers are actually able to load. If your protections are strong, more trackers will fail to load. If they’re not, well, trackers galore.

The site will load fake trackers like the following:

  • https://trackersimulator.org/?action=tracking_tally&ad_url=123456

  • https://eviltracker.net/?action=tracking_tally&trackingserver=123456

  • https://do-not-tracker.org/?action=tracking_tally&random=123456

The first link acts like an ad you might encounter on any random website. If your browser blocks the “ad,” it passes that test. The second link pretends to be an invisible beacon (or tracker). If your browser blocks the “beacon,” it passes. Finally, the third link acts like a domain that respects the EFF’s Do Not Track Policy. If your browser unblocks the domain’s scripts, it passes. (Why unblock the scripts at all? According to the EFF, so few sites choose to voluntarily abstain from tracking visitors that it’s worth unblocking ads on these sites to reward them.) Partial credit is awarded to browsers that allow the ad or beacon to load but block their respective cookies: allowing the ad or tracker means you can be followed, but blocking cookies means the tracker likely can’t ID you.
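
The pass/fail logic described above is simple enough to sketch. This is a hypothetical recreation of the scoring rules as described, in Python, not the EFF’s actual code.

```python
# Hypothetical sketch of the Cover Your Tracks scoring rules described
# above -- not the EFF's actual implementation.

def score_tests(ad_blocked, ad_cookie_blocked,
                beacon_blocked, beacon_cookie_blocked,
                dnt_domain_allowed):
    def grade(blocked, cookie_blocked):
        if blocked:
            return "pass"        # the fake ad/beacon never loaded
        if cookie_blocked:
            return "partial"     # it loaded, but its cookie was refused
        return "fail"            # it loaded and could set a cookie

    return {
        "tracking ads": grade(ad_blocked, ad_cookie_blocked),
        "invisible trackers": grade(beacon_blocked, beacon_cookie_blocked),
        # The domain honoring EFF's Do Not Track policy should be *allowed* to load.
        "DNT policy domain": "pass" if dnt_domain_allowed else "fail",
    }

print(score_tests(ad_blocked=True, ad_cookie_blocked=True,
                  beacon_blocked=True, beacon_cookie_blocked=True,
                  dnt_domain_allowed=True))
```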

Those trackers are only effective if they have a sense of the profile they’re following. That’s where browser uniqueness comes in: Cover Your Tracks takes a look at your browser fingerprint and compares it against its database of recently scanned browsers. It then generates a uniqueness score—the more unique your fingerprint, the easier it is to track you across sites. Cover Your Tracks anonymously collects and stores browser data like your timezone, screen resolution, system language, and system platform, among other data points, to compare against other users’ browsers.

How did I score?

I’m not necessarily a hardcore privacy enthusiast, but I do enjoy protecting my privacy wherever I can online. As such, I use Safari whenever possible, with all of the privacy settings I can enable. That includes hiding my IP address from trackers and websites, as well as preventing cross-site tracking. I combine that with an ad blocker (I’m using AdGuard, but would love it if uBlock Origin made a Safari extension) for a private, ad-free web experience.

Those minimal steps do appear to have paid off—at least according to Cover Your Tracks. After processing my browser, the site concluded I have strong protection against web tracking. The tests confirmed my browser blocks tracking ads and invisible trackers, which is reassuring. The bad news, though, is that my current setup isn’t protecting me against fingerprinting—a practice where trackers build a profile to make it easier to identify you across the web. Cover Your Tracks said my browser had a unique fingerprint among the more than 250,000 browsers they had tested over the past 45 days, which means I stick out like a sore thumb on the internet. While the trackers my browser blocks won’t be able to see me, the ones my browser misses will, and they’ll know it’s my browser reading that article or watching that video.

I’m actually pretty surprised by this: Safari has “advanced tracking and fingerprinting protection,” which I keep enabled for all browsing. The fact Cover Your Tracks thinks I have a totally unique browser profile is a bit concerning to say the least.

When I tried the test in my Firefox browser—with all of its privacy protections enabled, coupled with uBlock Origin—it scored the same, save for the fingerprinting test. Unlike Safari, my Firefox browser is only nearly unique: one in 125,883 browsers has the same fingerprint as mine, which, to my eyes, means my Firefox browser is twice as anonymous as my Safari browser, though that isn’t saying all that much.
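
Those “one in N” figures translate into the “bits of identifying information” measure the EFF uses: a fingerprint shared by one browser in every N carries about log2(N) bits. A quick sketch using the numbers above:

```python
import math

def identifying_bits(one_in_n):
    """Bits of identifying information in a fingerprint shared by one
    browser in every `one_in_n` browsers tested."""
    return math.log2(one_in_n)

# Firefox result above: one in 125,883 browsers shares this fingerprint.
print(f"Firefox: ~{identifying_bits(125_883):.1f} bits")           # ~16.9 bits
# Safari result above: unique among roughly 250,000 browsers tested, so at least:
print(f"Safari:  at least ~{identifying_bits(250_000):.1f} bits")  # ~17.9 bits
```

By this measure, “twice as anonymous” works out to roughly one bit less identifying information.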

How can you avoid fingerprinting on the web?

So, it turns out that any trackers that do break through my browser’s defenses are able to see me in full view. That’s not great. Where do you go from here?

Unfortunately, this is tough. Fingerprinting is pretty difficult to avoid, because the more trackers you disable, the worse the web becomes. As the EFF explains, it’s a bit of a paradox, but after a certain point, you stop blocking the trackers that exist to track you, and you start to block elements that make websites work. If you disable JavaScript, you might stop a website from tracking you, but you might not be able to use it at all. On the flip side, using too many protections may actually inadvertently identify you, as trackers and sites see that you’re the only one constantly blocking everything all of the time.

There’s far from one way to be totally private on the web, but according to the EFF, the simplest way to attack fingerprinting includes the following:

  • Using Tor for your browsing, as the browser has a number of advanced anti-tracking features

  • Using a hardcore privacy extension (EFF recommends Privacy Badger and Disconnect, or NoScript if you’re using Firefox. Sadly, none are compatible with Safari.)

That all said, I did attempt the test using Tor with “Safer” privacy settings enabled, and earned the same unique browser score I did with Safari. So, I cranked up the settings to “Safest,” which, among other things, disables JavaScript on all websites. I tried to run the test again, and broke the website. Perhaps you really do need to give up a little privacy in order to use the internet at all.

How to watch Google I/O 2025

It’s still May, which means it’s still Google time. After showing off Android’s new look at The Android Show, the company still has its developer conference to check off the list. Google I/O 2025 is scheduled to start on May 20 at 1PM ET / 10AM PT, and Engadget will be covering it live, via a liveblog and on-the-ground reporting from our very own Karissa Bell.

Google included some Gemini news in The Android Show — the AI is coming to Wear OS, Android Auto and Google TV — but artificial intelligence should still be the focus of the company’s upcoming keynote, too. Expect news about how Google is using AI in search to be featured prominently, along with some other surprises, like the possible debut of an AI-powered Pinterest alternative.

The company made it clear during its Android showcase that Android XR, its mixed reality platform, will also be featured during I/O. That could include the mixed reality headset Google and Samsung are collaborating on, or, as teased at the end of The Android Show, smart glasses with Google’s Project Astra built-in.

To find out for yourself, you can watch Google’s keynote in the embedded livestream above or on the company’s YouTube channel. The event starts at 1PM ET on May 20 and the company plans to hold breakout sessions through May 21 on a variety of different topics relevant to developers. 

OpenAI Launches Codex, an AI Coding Agent, In ChatGPT

OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company’s o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces “cleaner” code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved.

The Codex agent runs in a sandboxed, virtual computer in the cloud. By connecting with GitHub, Codex’s environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn’t prevent users from accessing their computer and browser while it’s running.

Codex is rolling out starting today to subscribers to ChatGPT Pro, Enterprise, and Team. OpenAI says users will have “generous access” to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.


8BitDo Arcade Controller Is A Fully-Loaded Svelte Hitbox For Fighting Game Enthusiasts

It’s a great day for fighting game aficionados and emulation enthusiasts everywhere. The much-revered 8BitDo, known for its high quality peripherals, has released its own take on a high-quality hitbox-style controller. Comboing off has never been easier.

For those unfamiliar, a hitbox (or leverless) controller is basically a keyboard in

Bungie admits its Marathon alpha included stolen artwork

The bright, glitched-out, typography-forward look of Bungie’s upcoming extraction shooter Marathon is one of its defining features. As it turns out, it’s also partially plagiarized, according to posts shared on the official Marathon X account. The announcement comes after artist Fern Hook, who goes by @4nt1r34l on X, initially accused Bungie of using in-game textures that looked similar to her original artwork on May 15.

“Bungie is of course not obligated to hire me when making a game that draws overwhelmingly from the same design language I have refined for the last decade,” Hook wrote on X, “but clearly my work was good enough to pillage for ideas and plaster all over their game without pay or attribution.” 

the Marathon alpha released recently and its environments are covered with assets lifted from poster designs i made in 2017.@Bungie @josephacross pic.twitter.com/0Csbo48Jgb

— N² (@4nt1r34l) May 15, 2025

Hook’s posts are fairly damning, identifying obvious areas in Marathon‘s alpha map that feature only lightly altered versions of her original work. She became aware of the issue when the alpha originally launched in April 2025, according to The Washington Post, but kept quiet until now because she was advised to seek legal action. Hook ultimately decided to post about the issue because she doesn’t “have enough time or money to fly out to the US to pursue an unwinnable court case against Sony.”

Bungie’s statement claims that “a former Bungie artist” included Hook’s art in a texture sheet without the rest of the art team’s knowledge. The company is “conducting a thorough review of [its] in-game assets” and has also reached out to Hook to “discuss the issue” further.

As Eurogamer notes, this isn’t the first time Bungie has been accused of lifting the work of other artists. The developer was accused of doing the same thing multiple times during the development of Destiny 2 and its various expansions.

Since the game is still in development, it’s not clear how or if Bungie will change Marathon to address Hook’s complaint, but the developer has until September 23 to do it. That’s when Marathon is supposed to launch on PlayStation 5, PC and Xbox Series X/S.

What Is Frame Generation, and Should You Use It In Your Games?

Earlier this year, Nvidia announced its new line of 50 Series GPUs with a hot new feature in tow: “Multi Frame Generation.” Building on early frame gen tech, these new GPUs allow games to create multiple video frames based on a single frame rendered the normal way. But is that a good thing? Or are these just “fake frames?” Well, it’s complicated.

On a very basic level, “frame generation” refers to the technique of using deep learning AI models to generate frames in between two frames of a game rendered by the GPU. Your graphics card does the more grindy work of creating “Frame One” and “Frame Three” based on 3D models, lighting, textures, etc., but then frame generation tools take those two images and make a guess at what “Frame Two” should look like.

Multi Frame Generation takes this a step further. Instead of just generating one extra frame, it generates several. This means that, on the highest settings, three out of every four frames you see could be generated, rather than rendered directly. Whether that’s a good thing, though, depends heavily on what type of game you play and what you want your gaming experience to be.
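
Nvidia’s frame generation runs an AI model over optical-flow data on dedicated hardware, so the sketch below is only a crude stand-in for the core idea: given two rendered frames, synthesize the ones in between. A naive cross-fade like this ghosts badly on anything that moves, which is exactly the problem the real models exist to solve.

```python
import numpy as np

def naive_inbetween(frame_one, frame_three, t):
    """Crude stand-in for frame generation: cross-fade between two rendered
    frames. Real frame generation warps pixels along estimated motion vectors
    instead of blending, which is why it looks far better than this."""
    blend = (1.0 - t) * frame_one.astype(np.float32) + t * frame_three.astype(np.float32)
    return blend.astype(np.uint8)

# Two placeholder 1080p RGB frames standing in for "Frame One" and "Frame Three".
frame_one = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
frame_three = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

# Single frame generation produces one in-between frame; Multi Frame
# Generation produces several per rendered pair.
generated = [naive_inbetween(frame_one, frame_three, t) for t in (0.25, 0.5, 0.75)]
print(len(generated), generated[0].shape)  # 3 generated frames, each 1080x1920x3
```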

What’s the difference between upscaling and frame generation?

Nvidia’s new Multi Frame Generation comes as part of its announcement of DLSS 4. DLSS stands for Deep Learning Super Sampling and, as the name implies, its earlier iterations weren’t about frame generation, but rather supersampling (or upscaling). 

In this version of the tech, a GPU would render a lower-resolution version of a frame—say, 1080p—and then upscale it to a higher resolution like 1440p or 2160p (4K). The “deep learning” in DLSS refers to training a machine learning model on each game individually to give the upscaler a better idea of what the higher res frame should look like.
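
Using the article’s own example resolutions, the arithmetic shows how much shading work upscaling removes; real DLSS picks its internal render resolution from a quality preset, so treat these numbers as illustrative.

```python
# Shaded pixels when rendering at a lower internal resolution and upscaling
# to the display resolution, per the example above.
render_w, render_h = 1920, 1080    # internal render resolution (1080p)
output_w, output_h = 3840, 2160    # display resolution (4K)

rendered = render_w * render_h
displayed = output_w * output_h
print(f"rendered:  {rendered / 1e6:.1f} MP")   # 2.1 MP actually shaded
print(f"displayed: {displayed / 1e6:.1f} MP")  # 8.3 MP on screen
print(f"upscaler fills in {100 * (1 - rendered / displayed):.0f}% of the output pixels")  # 75%
```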

Nowadays, DLSS refers more to a whole suite of tools Nvidia uses to eke out better performance, and the above method is usually referred to as Super Resolution. Frame generation, on the other hand, takes two entire frames and generates an entirely new frame between them from scratch.

Of course, it’s also possible to use all of this tech simultaneously. You can end up in situations where your GPU is technically only rendering one lower-resolution frame for every two—or more, on the newest GPUs—full-res frames you see. If that sounds like a lot of extrapolation, well, it is. And, incredibly, it works pretty well. Most of the time.

When is frame generation useful?

In a relatively short amount of time, we’ve seen the demand placed on GPUs explode. As mentioned above, 4K resolutions contain quadruple the pixel information of 1080p ones. Moreover, while media like movies and TV have stuck at a relatively consistent 24-30 frames per second, gamers increasingly demand at least 60fps as a baseline, often pushing that even higher to 120fps or 240fps for high-end machines. And do not get me started on Samsung’s absurd display capable of supporting up to 500fps.
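
A quick pixel-throughput calculation makes the scale of the problem concrete (display resolution and refresh rate only, ignoring everything else a GPU does per frame):

```python
# Pixels that would have to be shaded per second at common resolution /
# refresh-rate combinations, ignoring all other per-frame GPU work.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    for fps in (30, 60, 120, 240):
        print(f"{name:>5} @ {fps:>3} fps: {w * h * fps / 1e9:5.2f} billion pixels/s")
```

4K at 120fps already works out to roughly a billion shaded pixels per second, four times what 1080p needs at the same refresh rate.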

If your GPU had to calculate every pixel of a 4K image 120 (or 500) times every second, the resulting fire coming from your PC would be visible from space—at least for games with the kind of detailed, ray-traced graphics we’re used to from AAA titles. 

From that perspective, frame generation isn’t just helpful, it’s necessary. On Nvidia’s latest GPUs, Multi Frame Generation can allow a game to increase its frame rate by several hundred frames per second even in 4K, while still looking pretty great. That’s just not a frame rate that’s possible at that resolution without an industrial rig.

When it works (and we’ll come back to that), frame generation can allow for smoother movement and less eye strain. If you want to get a taste of the difference, this little tool lets you experiment with different frame rates (as long as your display supports it). Try comparing 30fps to 60fps or 120fps and follow each ball with your eyes. The effect gets even more stark if you turn off motion blur which, for many games, would be the default.

For chaotic games with a lot of movement, those extra frames can be a huge benefit, even if they’re not exactly perfect. If you were to take a close look at the images frame-by-frame, you might see some artifacts, but they might be less noticeable while playing—at least, that’s how it should work in theory.

What are the downsides of frame generation?

In practice, how well this tech works can vary greatly on a per-game basis, as well as by how powerful your machine is. For example, going from 30fps to 60fps with frame generation can look jankier than going from 60fps to 120fps. This is due, at least in part, to the fact that at lower frame rates, there’s more time in between reference frames, which means more guesswork for the frames being generated. That leads to more noise and artifacts.

Whether those artifacts will bother you is also highly subjective. For example, if you’re swinging through the city in Spider-Man 2, and the trees in the background look stranger than they should, would you even notice? On the other hand, for slower-paced atmospheric games like Alan Wake II, where graphical detail and set design is more important for the vibes, ghosting and smearing can seem more pronounced.

It should also be noted that artifacts aren’t necessarily inherent to all frame generation. For starters, better input frames can lead to better frame generation. Nvidia, for example, is touting new models behind Super Resolution and Ray Reconstruction—a whole other piece of tech for improving ray tracing results that we simply don’t have enough time to get into—to improve the images that get passed to the frame generation portion of the pipeline.

You can think of it a bit like a giant, complex version of a game of telephone. The only way to get the most accurate, detailed frames from your game is to render them directly. The more steps you add to extrapolate extra pixels and frames, the more chances there are for mistakes. However, our tools are getting progressively better at cutting down on those mistakes. So, it’s up to you to decide whether more frames or more detail is worth it for you.

Why frame generation is (probably) bad for competitive games

There’s one major exception to this whole argument, and that’s when it comes to competitive games. If you play online games like Overwatch 2, Marvel Rivals, or Fortnite, then smooth motion isn’t necessarily your primary concern. You might be more concerned with latency—which is to say, the delay between when you react to something, and when your game has registered your reaction.

Frame generation complicates latency issues because it requires creating frames out of order. Recall our earlier example: The GPU generates Frame One, then Frame Three, then the frame generator comes up with what Frame Two should be. In that scenario, the game can’t actually show you Frame Two until it’s figured out what Frame Three should be.

Now, in most cases this isn’t usually a problem. At 120fps, each frame is only on screen for about 8.33 milliseconds. Your brain can’t even register that short of a delay, so it’s not likely to cause a huge issue. In fact, human reaction time is typically measured in the hundreds of milliseconds. For a completely unscientific proof, go ahead and try out this reaction time test. Let me know when you get under 10 milliseconds. I’ll wait.

However, this does become an issue in competitive gaming, because frame delays aren’t the only latency issues you’re dealing with. There’s latency between your keyboard and your computer, between your computer and the server, and between the server and the other players. 

Most of those individual links in the chain might be pretty low, but they have to get synced up somewhere. That “somewhere” is in the game’s tick rate. This is how often the game you’re playing updates on the server. For example, Overwatch 2 has a tick rate of 64. That means that every second, the server updates what has happened in the game 64 times, or once every 15.63 milliseconds.
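
Putting the article’s numbers side by side shows how little slack there is; the figures below are just the 120fps and 64-tick examples from above.

```python
# Frame time vs. server tick interval, using the example figures above.
display_fps = 120          # what your screen shows
tick_rate = 64             # Overwatch 2 server updates per second

frame_time_ms = 1000 / display_fps     # ~8.33 ms per displayed frame
tick_interval_ms = 1000 / tick_rate    # ~15.63 ms between server ticks

print(f"frame time:    {frame_time_ms:.2f} ms")
print(f"tick interval: {tick_interval_ms:.2f} ms")

# Roughly every other displayed frame coincides with a server tick, so a
# generated frame showing slightly stale positions can be the one on screen
# at the moment the server evaluates your shot.
```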

That’s just enough that if, say, your game shows you our rhetorical Frame One, where the enemy Cassidy is in your crosshairs, but hasn’t yet updated to Frame Three, when he’s not, the server could have ticked over before your screen has updated. That could mean your shot registers as a miss even though it feels like it should have hit. This is also the one issue that can actually get worse with Multi Frame Generation.

There are ways to mitigate this hit—for example, Nvidia’s Reflex tech that reduces input latency in other areas—but it’s not something that can be avoided entirely. If you’re playing competitive online games, you’re better off turning your graphics settings down lower to get a better frame rate, rather than using frame generation for now.

Meta Argues Enshittification Isn’t Real

An anonymous reader quotes a report from Ars Technica: Meta thinks there’s no reason to carry on with its defense after the Federal Trade Commission closed its monopoly case, and the company has moved to end the trial early by claiming that the FTC utterly failed to prove its case. “The FTC has no proof that Meta has monopoly power,” Meta’s motion for judgment (PDF) filed Thursday said, “and therefore the court should rule in favor of Meta.” According to Meta, the FTC failed to show evidence that “the overall quality of Meta’s apps has declined” or that the company shows too many ads to users. Meta says that’s “fatal” to the FTC’s case that the company wielded monopoly power to pursue more ad revenue while degrading user experience over time (an Internet trend known as “enshittification”). And on top of allegedly showing no evidence of “ad load, privacy, integrity, and features” degradation on Meta apps, Meta argued there’s no precedent for an antitrust claim rooted in this alleged harm.

“Meta knows of no case finding monopoly power based solely on a claimed degradation in product quality, and the FTC has cited none,” Meta argued. Meta has maintained throughout the trial that its users actually like seeing ads. In the company’s recent motion, Meta argued that the FTC provided no insights into what “the right number of ads” should be, “let alone” provide proof that “Meta showed more ads” than it would in a competitive market where users could easily switch services if ad load became overwhelming. Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them. The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it. Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.” Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’ — adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.” In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.” In sum: A ruling in Meta’s favor could prevent a breakup of its apps, while a denial would push the trial toward a possible order to divest Instagram and WhatsApp.


Meta is trying to get its antitrust case thrown out in the middle of the trial

The FTC just rested its case following weeks of testimony in a landmark antitrust case against Meta. But before Meta can begin its defense, the company’s lawyers have opted for another move: asking the judge to throw out the case entirely.

The company filed a motion on Thursday asking US District Judge James Boasberg to toss out the FTC’s case, arguing that the regulator has not proved that Meta acted anticompetitively. “Meta has made two promising mobile apps with uncertain prospects into two of the most successful apps in the world, enjoyed by approximately half of the planet’s population (including hundreds of millions of U.S. consumers) on demand, in unlimited quantities, all for free,” the filing says. “The FTC has not carried its burden to prove that Meta ‘is currently violating the antitrust laws.'”

The company’s reasoning is similar to past arguments it’s made about the FTC’s case. Meta has said that Instagram and WhatsApp were able to grow to one-billion-user services because of the company’s investments. The company also takes issue with the FTC’s claim that there is a lack of competition for “personal social networking services.” (The FTC has argued that Meta’s only competitors for social networking are Snapchat and MeWe, a small privacy-focused social app that runs on decentralized protocols.)

So far, the month-long trial has seen a number of prominent current and former Meta executives take the stand, including CEO Mark Zuckerberg, former COO Sheryl Sandberg and Instagram cofounder Kevin Systrom. Their testimony has revealed new details about the inner workings of the social media company and its tactics to stay ahead of potential competitors.

All These Beats Headphones Are on Sale During Amazon’s Memorial Day Sale

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

Amazon’s Memorial Day sale is in full swing, featuring deals on garden, sports, beauty, home items, electronics, and more. A particular sale that stood out to me, as a deals writer who frequently reviews headphones and speakers, is the one on Beats products. While it’s not uncommon to see Beats products with a good discount, it is rare to see the newest product (the Beats Powerbeats Pro 2) included in the sale.

Right now, you can get the Beats Pill, Beats Powerbeats Pro 2, and the Beats Studio Buds+ for up to 41% off. All of these options are at their lowest prices ever, according to price-tracking tools.

Beats Studio Buds + are $40 off

The Beats Studio Buds + are very similar to the Beats Studio Buds, but there are some key differences. The Studio Buds + have an extra small ear tip size, an extra hour of battery life (total of nine hours), 12 more hours from the charging case (total of 36 hours), and perhaps more importantly, they have better ANC and better features for Android users, which you can read more about in PCMag’s review. You can get them for $99.95 (originally $169.95), matching the lowest price they have ever been, according to price-tracking tools.

The Beats Powerbeats Pro 2 are $50 off

The Beats Powerbeats Pro 2 are the newest earbuds from Beats, and the second generation of the original Powerbeats Pro. Although they’re only $20 apart in price, don’t let that fool you. This new iteration offers a lot more in quality than the original ones, which you can read more about in Senior Health Editor Beth Skwarecki’s review. While your attention might go to their new heart-rate monitoring feature, it’s actually a flop. However, they’re still great workout earbuds with more flexible and durable ear hooks, better ANC, audio quality, battery life, and case, and the newer H2 chip, which you also see in the AirPods Pro 2.

The Beats Pill is $50 off

I’ve been trying out the Beats Pill speaker since it came out in June, and it has everything I would want in a portable speaker: stereo sound when connected to another compatible speaker, multi-room mode to play the same music on multiple speakers, the Find My Device feature in case you misplace it, and Class 1 Bluetooth for extended range. If you’re an Apple user, you can take advantage of the AAC codec for better audio quality. You can get it for $99.95 (originally $149.95), the lowest price yet.

Urgent Chrome Update: CISA Warns An Alarming Security Flaw Is Being Exploited

If you’re a Chrome user, take note that the Cybersecurity and Infrastructure Security Agency (CISA) has identified and reported three zero-day vulnerabilities, and one of these flaws could affect you.

CISA reported that CVE-2025-4664 is already being exploited in the wild and that it impacts the Google Chromium Loader, a part of Chrome

The FDA clears the first blood test for diagnosing Alzheimer’s disease

There’s a new way to screen for early detection of Alzheimer’s disease. On Friday, the Food and Drug Administration (FDA) cleared the first blood test for the disorder. Fujirebio Diagnostics’ Lumipulse measures the ratio of two proteins that correlate with the presence or absence of Alzheimer’s. Previously, patients being screened for Alzheimer’s were limited to more invasive options: a PET scan or a spinal tap.

Lumipulse is intended for use in clinical settings with patients exhibiting signs of cognitive decline. At least in its current form, it isn’t something the general population can ask for as a standard screening.

The test works by measuring two proteins: pTau217 and β-amyloid 1-42. The blood test calculates their ratio, which correlates with the buildup of amyloid plaques in the brain. People with Alzheimer’s have elevated pTau217 and lower β-amyloid 1-42.

In a clinical study, the test performed better with negative results than positive ones. As such, Reuters reports that the test will likely be used initially to rule out Alzheimer’s. Over 97 percent of negative results corresponded with a negative PET scan or CSF test result. Results for positives were a bit lower: a 91.7 percent correlation. So, positive results will need to be confirmed with more advanced diagnostic tests.

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts may have raised a red flag for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

Verizon Secures FCC Approval for $9.6 Billion Frontier Acquisition

The Federal Communications Commission has approved Verizon’s $9.6 billion acquisition of Frontier Communications, valuing the Dallas-based company at $20 billion including debt. The approval comes after Verizon agreed to scale back diversity initiatives to comply with Trump administration policies.

FCC Chairman Brendan Carr, who previously threatened to block mergers over DEI practices, praised the deal for its potential to “unleash billions in new infrastructure builds” and “accelerate the transition away from old, copper line networks to modern, high-speed ones.” The acquisition positions America’s largest phone carrier to expand its high-speed internet footprint across Frontier’s 25-state network. Verizon plans to deploy fiber to more than one million U.S. homes annually following the transaction.

