Largest Third-Party D&D Marketplace Restricts AI-Generated Products

Dungeon Masters Guild, a subsidiary of OneBookShelf and the largest digital third-party Dungeons & Dragons marketplace, officially announced a policy change that would restrict the sale of “standalone” AI-generated products. Additionally, it is enacting a “Creation Method Filter” which requires sellers to indicate if…

Read more…



Source: Gizmodo – Largest Third-Party D&D Marketplace Restricts AI-Generated Products

Why Your ‘Dopamine Fast’ Could Backfire

In the same brand of pop culture shorthand that labels oxytocin the “love hormone” and cortisol the “stress hormone,” dopamine is classified as the “reward” chemical we deliver ourselves a dose of every time we experience something pleasurable. Of course, real life is never that simple.

Read more…



Source: LifeHacker – Why Your ‘Dopamine Fast’ Could Backfire

Headspace annual plans are 30 percent off right now

Doomscrolling through Twitter’s dumpster-fire descent into X-crazed madness may be fun, but it likely isn’t the best option for your overall mental state. That’s where meditation-focused apps like Headspace come in. To commemorate these uncertain times, Headspace has lowered the price of its annual subscription plan from $70 to $49, a reduction of 30 percent. This only lasts for the first year, at which point you’ll be bumped back up to the original price (unless you cancel).

The sale is live right now and is available to both new users and previous Headspace devotees, so if you took a break and want to get back on the mindfulness horse, now’s the time. There’s no discount when paying monthly, so it’s the full year or bust.

What exactly is Headspace? This all-in-one meditation app offers mindfulness sessions, sleep guides, stress relief tools, workouts and more. There are video and audio options and plenty of search fields to narrow down the offerings to your exact liking. There are even dedicated programs for when you wake up in the middle of the night and can’t get back to sleep. There’s a reason, after all, why Headspace is so well-reviewed.

On the fitness side, it has yoga, guided jogs, cardio courses and just about anything else. Headspace has been around for 12 years and amassed 70 million users, so they must be doing something right. Now you can try it for yourself and save a few bucks in the process.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/headspace-annual-plans-are-30-percent-off-right-now-162605255.html?src=rss

Source: Engadget – Headspace annual plans are 30 percent off right now

TikTok Is Going to Sell Cheap Junk in Competition With Shein and Temu

TikTok is set to launch an e-commerce platform to sell Chinese-made goods to its millions of American users, according to a new report in the Wall Street Journal. The shopping experience marks a fresh attempt by TikTok to compete with Shein and Temu, two Chinese companies dominating the online retail scene with fast…

Read more…



Source: Gizmodo – TikTok Is Going to Sell Cheap Junk in Competition With Shein and Temu

Ubisoft Addresses Backlash Over Inactive Account Deletion Policy With A Key Clarification

Ubisoft has addressed a concern that arose on social media over the company deleting inactive Ubisoft accounts. Gamers were fearful that if their account remained inactive for a period of time, they would lose all their games forever.

This past weekend, a tweet set off a stampede of confusion and dread over whether or not Ubisoft would

Source: Hot Hardware – Ubisoft Addresses Backlash Over Inactive Account Deletion Policy With A Key Clarification

Climatologists: July’s intense heat “exactly what we expected to see”

A billboard in Phoenix, Ariz. displays a 118° reading on July 18, 2023, during an unprecedented string of days with high temperatures above 110° F. (credit: Patrick T. Fallon/AFP via Getty Images)

Widespread summer heatwaves like those currently baking the Northern Hemisphere, with temperatures soaring above 110 degrees Fahrenheit simultaneously in North America, Asia and Europe, will be common in just a few decades unless greenhouse gas emissions are immediately curtailed, an international team of scientists said Monday.

If global warming reaches 2 degrees Celsius above the pre-fossil fuel era, such heat waves will happen every two to five years, the researchers said as they released a rapid attribution analysis of the blistering conditions experienced by hundreds of millions of people in recent weeks. If emissions continue on the same increasing path as now for a few more years, the 2 degree Celsius mark will be passed in about 30 years, according to the new analysis by World Weather Attribution.

In the current climate, warmed by 1.1° C (1.9° F) by humans, these extreme heatwaves are no longer rare, “due to warming caused by burning fossil fuels and other human activities,” the authors wrote. “Events like these can now be expected approximately once every 15 years in North America, about once every 10 years in southern Europe and approximately once every five years in China.”

Read 16 remaining paragraphs | Comments



Source: Ars Technica – Climatologists: July’s intense heat “exactly what we expected to see”

Elon Musk is taking his SEC fight to the Supreme Court

Elon Musk is taking his battle with the Securities and Exchange Commission (SEC) to the highest court in the country. Attorney Alex Spiro has confirmed that Musk will ask the Supreme Court to decide whether the SEC went too far with a consent decree determining what Musk can say about Tesla’s financials on Twitter (now X). He’s challenging a May 15th appeals court decision dismissing allegations the SEC abused the decree to harass him with investigations over Twitter usage.

The new appeal comes a day after a judicial panel denied Musk’s request that judges reexamine the case. The entrepreneur previously claimed he was pushed into the decree, and had to give up his right to contest the constitutionality of the SEC’s terms if he wanted to pursue the eventual settlement. The truce saw a total of $40 million in fines between Musk and Tesla, and required that Musk both step down as board chairman and seek legal approval when posting about company financials.

Musk drew the SEC’s attention in August 2018, when he tweeted that he was considering taking Tesla private and had “funding secured” with “investor support.” The deal never went through, and shareholders pinned ensuing losses on Musk’s posts. The Commission sued Musk over the tweets, arguing that they could be considered fraud.

During a shareholder trial, Musk contended that people didn’t necessarily believe or respond to his tweets the way you’d expect. He pointed to one example where Tesla’s stock price surged despite a tweet saying the value was too high. At the same time, he acknowledged that he has ignored requests to stop tweeting over delicate subjects, such as when he accused a Thai cave rescue diver of being a “pedo guy.”

There’s no certainty the Supreme Court will take the case or overturn the outcome. Either way, the court’s response should have a significant impact on Musk’s social media posting, either forcing him to honor the SEC’s decisions or giving him more flexibility in what he says online.

This article originally appeared on Engadget at https://www.engadget.com/elon-musk-is-taking-his-sec-fight-to-the-supreme-court-161255874.html?src=rss

Source: Engadget – Elon Musk is taking his SEC fight to the Supreme Court

Chainalysis Investigations Lead Is 'Unaware' of Scientific Evidence the Surveillance Software Works

Chainalysis’ head of investigations doesn’t seem to have a great understanding of whether her company’s flagship software even works. From a report: Elizabeth Bisbee, head of investigations at Chainalysis Government Solutions, testified she was “unaware” of scientific evidence for the accuracy of Chainalysis’ Reactor software used by law enforcement, an unreleased transcript of a June 23 hearing shared with CoinDesk shows.

The fact that Chainalysis’ blockchain demystification tools have become so widespread is a serious threat to the crypto ecosystem. Although industry insiders have raged against Chainalysis since it was founded, often accusing it of violating people’s financial privacy, there may be a better argument to make against the company and analysis firms like it: it’s within the realm of possibility that these “probabilistic” machines don’t work as well as advertised. This is a big deal considering Chainalysis’ surveillance tools are used widely across the industry for compliance, and have at times led to unjustified account restrictions and — even worse — landed unsuspecting individuals on the radar of law enforcement agencies without probable cause.

Read more of this story at Slashdot.



Source: Slashdot – Chainalysis Investigations Lead Is ‘Unaware’ of Scientific Evidence the Surveillance Software Works

TACC's Stampede3 Supercomputer Uses Intel's Xeon Max with HBM2E and Ponte Vecchio

The Texas Advanced Computing Center (TACC) unveiled its latest Stampede supercomputer for open science research projects, Stampede3. TACC anticipates that Stampede3 will come online this fall and will deliver its full performance in early 2024. The supercomputer will be a crucial component of the U.S. National Science Foundation’s (NSF) ACCESS scientific supercomputing ecosystem, and it is projected to serve the open science community from 2024 until 2029.


The third-generation Stampede cluster, which will be built by Dell, will incorporate 560 nodes equipped with Intel’s Sapphire Rapids generation Xeon CPU Max processors, each offering 56 CPU cores and 64GB of on-package HBM2E memory. Surprisingly, TACC is going to be operating these nodes in HBM-only mode, so no additional DRAM will be attached to the CPU nodes – all of their memory will come from the on-chip HBM stacks.


With these specifications, Stampede3 is expected to have a peak performance of approximately 4 FP64 PetaFLOPS, while offering nearly 63,000 general-purpose cores. In addition, TACC also plans to install 10 Dell PowerEdge XE9640 servers with 40 Intel Data Center GPU Max compute GPUs for artificial intelligence and machine learning workloads.


Given this layout, the bulk of Stampede3’s compute performance will be supplied by CPUs. This makes Stampede3 a bit of a rarity in this day and age, as most high-performance systems are GPU driven, leaving Stampede3 as one of the last supercomputers that relies almost solely on general-purpose CPUs.


And while the current cluster is primarily focused on CPU performance, TACC is also going to use the Intel GPUs in the latest Stampede revamp to investigate how to incorporate larger numbers of GPUs into future versions of the system. For now, most of TACC’s AI tasks run on its Lone Star systems, which are powered by hundreds of Nvidia A100 compute GPUs. So the organization’s aim is to explore whether a portion of this workload can be transferred to Intel’s Ponte Vecchio.


“We are going to put in a small system with exploratory capability using Intel Ponte Vecchio,” said Dan Stanzione, executive director of TACC. “We are still negotiating exactly how much of that we will have, but I would say a minimum of 40 nodes and a maximum of a hundred or so. […] We are just putting a couple of racks of Ponte Vecchio out there to see how people work with it.”


Stampede3 will leverage 400 Gb/s Omni-Path Fabric technology that will enable a backplane bandwidth of 24TB/s. This setup will allow the machine to efficiently scale and minimize latencies, making it well-suited for various applications requiring simulations.


TACC also plans to reincorporate nodes from the previous version, Stampede2, which were based on older-generation Xeon Scalable CPUs. This integration will enhance the capacity of Stampede3 for high-memory applications, high-throughput computing, interactive workloads, and other previous-generation applications. In total, the new supercomputer system will feature 1,858 compute nodes with over 140,000 cores, more than 330 TB of RAM, 13 PB of new storage capacity, and a peak performance close to 10 PetaFLOPS.


Sources: TACC, HPCWire




Source: AnandTech – TACC’s Stampede3 Supercomputer Uses Intel’s Xeon Max with HBM2E and Ponte Vecchio

What to Do When You Get Chills While Working Out

Have you ever been in the middle of a run and gotten hit with a sudden wave of chills, goosebumps, or shivers, even though it’s hot as hades outside? Feeling cold and shivery when you’re working out isn’t uncommon, especially when it’s hot and humid, and isn’t harmful in itself—but it is an early warning of heat…

Read more…



Source: LifeHacker – What to Do When You Get Chills While Working Out

Thicken Your Salad Dressing With Cooked Egg Yolks

I’m a huge fan of creamy salad dressings, but I almost never use them. The bottled stuff often has an overwhelming synthetic flavor, and a lot of homemade recipes are too mayonnaise-dominant for my taste. Fortunately, there’s a simple trick to add body and subtle richness to any liquid-y salad dressing: Add cooked egg…

Read more…



Source: LifeHacker – Thicken Your Salad Dressing With Cooked Egg Yolks

Threads adds a chronological feed as Twitter burns to the ground

Threads is about to get vastly more useful as Meta has started rolling out the option to see a chronological feed of posts from the people you follow. Many observers said this was a key feature Threads needed to truly compete with Twitter, long a vital source of real-time information. But as Twitter (sorry, X) owner Elon Musk continues to reduce his app to rubble, Threads is looking like a more viable destination for up-to-the-minute news and updates. You’ll need to update to the latest version of Threads to see the chronological feed, but since this is a gradual rollout, it might not appear for you immediately.

Mark Zuckerberg announced the rollout of the chronological feed on his Instagram broadcast channel (Adam Mosseri, the head of Instagram, said a while back that such an option was on the way). The Meta CEO added that Threads has gained another vitally important feature in the form of translations. Zuckerberg said there was more to come, hopefully including the ability to post to Threads from the web, direct messages, improved accessibility, better search and a TweetDeck-like way to keep tabs on Threads posts.

This article originally appeared on Engadget at https://www.engadget.com/threads-adds-a-chronological-feed-as-twitter-burns-to-the-ground-152817251.html?src=rss

Source: Engadget – Threads adds a chronological feed as Twitter burns to the ground

The Most Powerful Diablo IV Heart In Season 1 Is Easy To Farm

Diablo IV released its first-ever season on July 20, and with it came a definite way to farm Wrathful Malignant Hearts, one of the most powerful items to boost and customize your build. Rare, ultra-powerful Wrathful Hearts typically only drop from Wrathful Malignant Monsters or are crafted at Cormond’s Wagon, but the

Read more…



Source: Kotaku – The Most Powerful Diablo IV Heart In Season 1 Is Easy To Farm

The Exorcist: Believer's First Trailer Is Full of Creepy Kids and Old Haunts

Stop us if you’ve heard this one before, but David Gordon Green and Blumhouse are reviving an iconic horror franchise with a fresh reboot-sequel saga that tells a new re-treading of classic ground but doesn’t quite cast away a history of messy sequels. Oh, and this time it’s The Exorcist rather than Halloween.

Read more…



Source: Gizmodo – The Exorcist: Believer’s First Trailer Is Full of Creepy Kids and Old Haunts

Intel Unveils AVX10 and APX Instruction Sets: Unifying AVX-512 For Hybrid Architectures

Intel has announced two new x86-64 instruction set extensions designed to bolster performance in AVX-based workloads on its hybrid architecture of performance (P) and efficiency (E) cores. The first of Intel’s announcements is its latest Intel Advanced Performance Extensions, or Intel APX as it’s known. It is designed to bring generational, instruction set-driven improvements to load, store and compare instructions without impacting power consumption or the overall silicon die area of the CPU cores.


Intel has also published a technical paper detailing their new AVX10, enabling both Intel’s performance (P) and efficiency (E) cores to support the converged AVX10/256-bit instruction set going forward. This means that Intel’s future generation of hybrid desktop, server, and workstation chips will be able to support multiple AVX vectors, including 128, 256, and 512-bit vector sizes throughout the entirety of the cores holistically.


Intel Advanced Performance Extensions (APX): Going Beyond AVX and AMX


Intel has published details surrounding its new Advanced Performance Extensions, or APX for short. The idea behind APX is to give code access to more registers and thereby improve general-purpose performance on x86. Chief among the new features is a doubling of the general-purpose registers from 16 to 32, which enables compilers to keep more values in registers; Intel claims 10% fewer loads and 20% fewer stores when code is compiled for APX versus the same code compiled for x86-64 using Intel 64, Intel’s 64-bit implementation of the x86 instruction set.


The idea behind doubling the number of GPRs from 16 in x86-64 to 32 with Intel APX is that more data can be held close at hand, avoiding reads and writes that reach further down into the cache hierarchy and memory. Having more GPRs also means the core should theoretically need fewer accesses to slower tiers such as DRAM, which take longer and use more power.


Although Intel has effectively abandoned MPX (Memory Protection Extensions), APX is able to reuse the XSAVE state area that was set aside for it. The new APX general-purpose registers (GPRs) are XSAVE-enabled, which means they can automatically be saved and restored by XSAVE and XRSTOR sequences during context switches. Intel also states that, by default, the registers don’t change the size or layout of the XSAVE area, as they occupy the same space left behind by the now-defunct Intel MPX registers.


Another essential feature of Intel’s APX is its support for three-operand instruction formats, in which a destination register is specified separately from the two source operands. APX also introduces new instructions optimized for predicted loads, including a novel 64-bit absolute jump instruction. Using the EVEX prefix (a 4-byte extension to VEX), APX extends legacy two-operand, destructive instructions into three-operand forms, effectively reducing the need for additional register move instructions. As a result, Intel claims APX-compiled code requires 10% fewer instructions than the same code compiled for previous ISAs.


Intel AVX10: Pushing AVX-512 through 256-bit and 512-bit Vectors


One of the most significant updates to Intel’s consumer-focused instruction sets since the introduction of AVX-512 is Intel’s Advanced Vector Extension 10 (AVX10). On the surface, it looks to bring forward AVX-512 support across all cores featured in their heterogeneous processor designs.


The most significant and fundamental change introduced by AVX10 compared to the previous AVX-512 instruction set is that future heterogeneous core designs, successors to processors like the Core i9-12900K and the current Core i9-13900K in which AVX-512 is disabled, will be able to support these instructions across the whole chip. Currently, AVX-512 is exclusively supported on Intel’s Xeon performance (P) cores.




Image Source: Intel


At its core, AVX10 signifies that consumer desktop chips will gain full AVX-512 instruction support. Although performance (P) cores have the theoretical capability to support 512-bit wide vectors if Intel desires (Intel has currently confirmed support up to 256-bit vectors), efficiency (E) cores are restricted to 256-bit vectors. Nevertheless, the chip as a whole will be capable of supporting the complete AVX-512 instruction set across all of its cores, whether fully-fledged performance cores or lower-powered efficiency cores.


Touching on performance, within the AVX10 technical paper, Intel states the following:


  • Intel AVX2-compiled applications, re-compiled to Intel AVX10, should realize performance gains without the need for additional software tuning.
  • Intel AVX2 applications sensitive to vector register pressure will gain the most performance due to the 16 additional vector registers and new instructions.
  • Highly-threaded vectorizable applications are likely to achieve higher aggregate throughput when running on E-core-based Intel Xeon processors or on Intel® products with performance hybrid architecture.


Intel further claims that their chips, already utilizing 256-bit vectors as an example, will maintain similar performance levels when compiled onto AVX10 at the 256-bit ISO vector length. However, the true potential of AVX10 comes to light when leveraging the more substantial 512-bit vector length, promising the best AVX10 instruction set performance attainable. This aligns with introducing new AVX10 libraries and enhanced tool support, enabling application developers to compile newer AI and scientific-focused codes for optimal benefits. Additionally, this means preexisting libraries can be recompiled with AVX10/256 compatibility and, when possible, further optimized to exploit the larger vector units for better performance throughput.


In Intel’s first phase of AVX10 (AVX10.1), the instruction set will be introduced for early software enablement and will support a subset of Intel’s AVX-512 instruction sets, with Granite Rapids (6th Gen Xeon) performance (P) cores being the first to be forward compatible with AVX10. It is worth noting that AVX10.1 will not enable 256-bit embedded rounding. As such, AVX10.1 will serve as an introduction to AVX10, enabling forward compatibility and the implementation of the new versioning enumeration scheme.




Image source: Intel


Intel’s 6th Gen Xeons, codenamed Granite Rapids, will enable AVX10.1, and future chips after this will bring fully-fledged AVX10.2 support, with AVX-512 also being supported to allow for compatibility for legacy instruction sets and applications compiled with them. It is worth noting that despite Intel AVX10/512 including all of Intel’s AVX-512 instructions, applications compiled to Intel AVX-512 with vector lengths limited to 256-bit are not guaranteed to work with an AVX10/256 processor due to differences in the supported mask register width.


While the initial AVX10.1 release is more of a transitional phase, it’s only when AVX10.2 rolls out that AVX10 will start to show a measurable effect on performance and efficiency, at least with compatible instruction sets associated with AVX10. By default, developers who recompile their preexisting code will be able to target AVX10, which matters because new processors with AVX10 won’t be able to run AVX-512 binaries as they previously would have. Intel is finally looking toward the future.


The introduction of AVX10 completely replaces the AVX-512 superset. Once AVX10 is widely available through Intel’s future product releases, there’s technically no need to use AVX-512 going forward. One challenge this presents is that software developers who have compiled libraries specifically for 512-bit wide vectors will need to recompile that code, as previously mentioned, to work properly with the 256-bit wide vectors that AVX10 supports holistically across the cores.


While AVX-512 isn’t going anywhere as an instruction set, it’s worth highlighting that AVX10 is backward compatible, which is an essential aspect of supporting instruction sets with various vector widths such as 128, 256, and 512-bit where applicable. Developers can recompile code and libraries for the broader transition and convergence to the AVX10 unified instruction set going forward.


Intel is committing to supporting a maximum vector size of at least 256 bits on all Intel processors going forward. Still, it remains to be seen which SKUs (if any), and which underlying architectures, will support full 512-bit vectors in the future, as this is something Intel hasn’t officially confirmed at any point.


The meat and veg of Intel’s new AVX10 instruction set will come into play when AVX10.2 is phased in, officially bringing 256-bit instruction vector support across all cores, whether performance and/or efficiency cores. This also marks the inclusion of 128-bit, 256-bit, and 512-bit integer divisions across both the performance and efficiency cores, and as such, will support full vector extensions based on the specification of each core.




Source: AnandTech – Intel Unveils AVX10 and APX Instruction Sets: Unifying AVX-512 For Hybrid Architectures