AMD EPYC Turin vs. Intel Xeon 6 Granite Rapids vs. Graviton4 Benchmarks With AWS M8 Instances

With Amazon recently launching its M8a AWS instances powered by 5th Gen AMD EPYC “Turin” processors, the M8 class of instance types now offers all of the latest-generation CPU options: AMD EPYC Turin (M8a), Intel Xeon 6 Granite Rapids (M8i), and Amazon’s in-house Graviton4 processors (M8g). After recently looking at M7a vs. M8a performance with Amazon EC2, many Phoronix readers expressed interest in seeing an M8a vs. M8i vs. M8g performance showdown, so here are those benchmarks.

Play, pedagogy, and real-world impact: What we learned from the AI Quests webinars

Photo of two adult educators sitting around a table with a group of young people playing AI Quests.

How do you teach AI in a way that resonates with 11- to 14-year-olds long after the lesson ends? In two recent Experience AI webinars, we explored that question with collaborators from Google Research, Google DeepMind, and the Stanford Accelerator for Learning. During the webinars, we also showcased AI Quests, a gamified, classroom-first experience where learners use AI concepts to solve real problems.

“The AI technology you’ll experience is amazing, but it’s not magic. Success depends on the decisions you make.”

That line, delivered by Professor Sky, the in-game mentor, captures the core message of AI Quests: AI systems are built by people and shaped by human judgment at every step.

What is AI Quests?

We’ve embedded AI Quests into the Foundations of AI unit in Experience AI, our free AI literacy programme created with Google DeepMind. 

As Google Research’s Liat Ben Rafael explained, “AI Quests is a gamified experience… where students discover firsthand how AI is used in the real world to create positive impact.” Each quest is grounded in a real research programme and mirrors the AI project lifecycle you’ll recognise from our Experience AI lessons: define the problem, prepare data, train, test, deploy.

Photo of a young person playing AI Quests on a laptop. The AI Quests character Luna can clearly be seen on the young person's screen.

The first quest, Market Marshes, asks students to help Luna, one of the central characters, protect a riverside market from flooding. Players roam, gather candidate data (from rainfall stats to town gossip), clean it, choose relevant features, and train a model. If the model underperforms, they iterate, exactly as real AI developers would.

Emma Staves, Learning Manager at the Foundation, notes that a key moment is when learners test their model: “It’s made really clear that the data that’s being used to test the model is historic data.” That simple design choice can help you to unlock rich discussions with your learners about validation, reliability, and what counts as “accurate enough” for real decisions.
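
To make the idea concrete, here is a minimal sketch in Python with scikit-learn (not the AI Quests code; the feature names and numbers are invented) of training a model on older records and then testing it only against held-back historic data:

    # A toy version of the quest's workflow: train on older records,
    # then test on historic data the model has never seen.
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Invented, time-ordered records: (rainfall_mm, river_level_m) -> flooded?
    X = [[10, 1.2], [80, 2.9], [5, 1.0], [95, 3.4], [60, 2.5],
         [15, 1.3], [90, 3.1], [8, 1.1], [70, 2.7], [12, 1.2]]
    y = [0, 1, 0, 1, 1, 0, 1, 0, 1, 0]

    # Time-based split: the oldest 70% of records trains the model;
    # the most recent 30% is held back as the "historic" test set.
    cut = int(len(X) * 0.7)
    model = LogisticRegression().fit(X[:cut], y[:cut])

    predictions = model.predict(X[cut:])
    print("accuracy on held-back data:", accuracy_score(y[cut:], predictions))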

Designed around how students actually learn

Developed in collaboration with learning scientists at the Stanford Accelerator for Learning, the quests reflect what Victor Lee, Faculty Lead for AI and Education at Stanford, describes as “enduring understanding”:

“The enduring understanding is about how humans can initiate and design AI applications that can address some of humanity’s biggest unsolved challenges.”

To keep that focus, the team blends:

  • Situated learning – for example, a concrete flood scenario rather than abstract exercises
  • Pedagogical agents – characters who nudge, model, and explain
  • Embedded feedback and productive failure – learn by trying, revising, and trying again
  • Self-explanation prompts – ‘learning tickets’ that ask students to articulate what they’re doing and why

In other words, the quests are all about playing with purpose.

What teachers are seeing in the classroom

We piloted AI Quests with some teachers, including Dave Cross, Curriculum Leader for Computer Science at North Liverpool Academy, who tested the quests with his Year 7 students before extending them to his GCSE classes:

“We see it moving forward as a really solid foundation… for that further learning.”

He also saw strong cross-curricular ties: geography colleagues spotted “massive opportunities” to use the flood quest in their own units, while broader staff discussions turned to digital citizenship, data literacy, and fairness. The cross-disciplinary nature of AI is increasingly apparent, and so AI literacy shouldn’t be limited to computing — students need to encounter AI across multiple subjects and in everyday life.

Where the research comes in: Forecasting floods days in advance

Graphic of the flooded marketplace from the Market Marshes quest on AI Quests.

The second webinar connected the classroom experience to the real project it’s modelled on: Google Research’s Flood Forecasting project. Gila Loike, Product Manager, set the scene:

“Our research team develops AI models that predict flooding all over the world, five to seven days before the flood occurs.”

Deborah Cohen, the research scientist leading Google Research’s team focused on flooding, also explained that traditional models can’t easily predict floods in places with little data. AI, however, can fill those gaps by combining information from rivers, weather forecasts, and satellites to give accurate warnings around the world:

“With AI we were able to expand our coverage to the entire world.”

The results are real and practical. Accurate predictions help:

  • People stay safe by receiving flood alerts through familiar apps
  • Emergency teams plan routes and close roads in time
  • Farmers decide whether to move animals or harvest early
  • Aid organisations act sooner, delivering supplies or financial support before the flood hits

Graphic from the Market Marshes quest on AI Quests.

To make sure their models work well, the team compares predictions with real river data, where available, and with satellite images showing flooded areas. Students explore these same ideas in the AI Quests game, cleaning messy data, testing their models, and checking how accurate their results are.
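
As a rough illustration of that comparison step (the numbers below are invented, and real verification uses gauge readings and satellite-derived flood maps rather than toy lists), a forecast can be scored by how many observed floods it caught and how often it raised a false alarm:

    # Compare flood predictions against observations per location/day:
    # 1 = flood, 0 = no flood. The lists here are made up.
    predicted = [1, 0, 1, 1, 0, 0, 1, 0]
    observed  = [1, 0, 0, 1, 0, 1, 1, 0]

    hits         = sum(p == 1 and o == 1 for p, o in zip(predicted, observed))
    misses       = sum(p == 0 and o == 1 for p, o in zip(predicted, observed))
    false_alarms = sum(p == 1 and o == 0 for p, o in zip(predicted, observed))

    print("hit rate:", hits / (hits + misses))            # floods we caught
    print("false alarm ratio:", false_alarms / (hits + false_alarms))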

“Students are really engaged by the real-world challenge,” said Emma Staves about the Market Marshes quest. “That authenticity makes learning come alive.” It helps students see how classroom ideas, like features, accuracy, bias, and model cards, connect directly to real decisions and their consequences.

Coming soon: Health quests and more languages

Graphic from the second quest in AI Quests.

Liat also gave a sneak peek at the next quest, a health-focused story on blindness prevention. It introduces new layers — privacy, diverse data, field testing — while following the same lifecycle. More quests are in development, with additional languages planned from early 2026.

Why this matters now

The key message from both webinars is clear: AI literacy isn’t just about using technology — it’s about understanding our role in shaping it. As one Stanford researcher put it, “AI isn’t this magic thing that just happens to us. Humans decide how to use it, and how choices around data affect accuracy and fairness.”

Our goal with Experience AI is to help young people become thoughtful, creative problem-solvers who can navigate an AI-powered world with confidence and integrity — and AI Quests fits perfectly with that.

Find out more

You can watch both webinars anytime on our YouTube and LinkedIn channels:

Webinar 1: LinkedIn, YouTube
Webinar 2: LinkedIn, YouTube

Explore our Experience AI resources — already used by nearly two million learners and educators to understand, question, and create with AI — to bring them and AI Quests into your classroom. You’ll find the Foundations of AI unit, alongside materials on large language models, ecosystems and AI, and AI safety, at rpf.io/experienceai-resources


Typst 0.14 released

Version 0.14 of the Typst document processor has been released.

If you need to comply with accessibility-related regulations, Typst 0.14 has your back. Typst now generates accessible documents by default, with opt-in support for stricter checks. For those working with complex illustrations, PDFs are now supported as a native image format. In case you’re typesetting a book, the new character-level justification will give your layout the final touch. And if you’re building a website or blog, many improvements to Typst’s HTML export are waiting for you.

LWN looked at Typst in September.

GoFundMe to delete unwanted open-source foundation pages

Open-source foundations and projects that have charity status in the US may want to see if GoFundMe has created a profile for them without permission. The company has operated since 2010 as a self-service fundraising platform; individuals or groups could create pages to raise money for all manner of causes. In June, the company announced that it would expand its offerings to “manage all aspects of charitable giving” for users through its platform. That seems to include creating profiles for nonprofit organizations without their involvement. After pushback, the company said on October 23 that it would be removing the pages. It has not answered more fundamental questions about how it planned to disburse funds to nonprofits that had no awareness of the GoFundMe pages in the first place.

Self-Tuning Linux Kernels: How LLM-Driven Agents Are Reinventing Scheduler Policies

Modern computing systems rely heavily on operating-system schedulers to allocate CPU time fairly and efficiently. Yet many of these schedulers are blind to the meaning of the workloads they manage: they cannot distinguish, for example, whether a task is latency-sensitive or batch-oriented. This mismatch between application semantics and scheduler heuristics is often referred to as the semantic gap. A recent research framework called SchedCP aims to close that gap.
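
As a rough sketch of what crossing that gap means in practice (this is not SchedCP itself, and the hard-coded classification below stands in for what an LLM-driven agent would infer from the workload), Linux already exposes per-task scheduling policies that such a hint could drive:

    # Minimal Linux-only sketch: apply a scheduling policy based on a
    # workload label the kernel cannot infer by itself. Illustrative
    # only; not the SchedCP framework.
    import os

    def apply_policy(pid: int, kind: str) -> None:
        if kind == "latency-sensitive":
            # Real-time FIFO scheduling (needs root or CAP_SYS_NICE).
            os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(10))
        elif kind == "batch":
            # SCHED_BATCH hints that the task is CPU-bound, not interactive.
            os.sched_setscheduler(pid, os.SCHED_BATCH, os.sched_param(0))

    # Example: tag the current process (pid 0) as a batch workload.
    apply_policy(0, "batch")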

New Code Allows VCE 1.0 Video Acceleration To Work On AMDGPU Driver For GCN 1.0 GPUs

Valve contractor Timur Kristóf, part of the company’s Linux graphics driver team, has been working to improve Linux driver support for the old AMD Radeon GCN 1.0 and GCN 1.1 generations of GPUs. Those graphics cards by default rely on the older “Radeon” DRM kernel graphics driver, while GCN 1.2 and later GPUs default to the AMDGPU driver, so the work is about filling the remaining GCN 1.0/1.1 gaps in AMDGPU. Another of those feature gaps is now being addressed with Video Coding Engine 1.0 support…

Linux’s Kconfig Is No Longer Orphaned

Back in August, open-source developer Masahiro Yamada stepped down from maintaining the Kconfig and Kbuild areas of the Linux kernel. While Kbuild maintainership was quickly passed on, no one immediately stepped up to maintain Kconfig, the infrastructure code for configuring Linux kernel builds. That left Kconfig officially orphaned code within the kernel, but that situation has now been addressed…

Summary of the Amazon DynamoDB Service Disruption in Northern Virginia (US-EAST-1) Region

We apologize for the impact this event caused our customers. While we have a strong track record of operating our services with the highest levels of availability, we know how critical our services are to our customers, their applications and end users, and their businesses. We know this event impacted many customers in significant ways. We will do everything we can to learn from this event and use it to improve our availability even further.

VMScape: Cracking VM-Host Isolation in the Speculative Execution Age & How Linux Patches Respond

In the world of modern CPUs, speculative execution, where a processor guesses ahead on branches and executes instructions before the actual code path is confirmed, has long been recognized as a performance booster. However, it has also given rise to a class of vulnerabilities collectively known as “Spectre” attacks, in which shared microarchitectural state (such as the branch target buffer, caches, or other predictor structures) is abused to leak sensitive data.