Prime-WoW

My site, my way, no big company can change this


Deepfake audio has a tell and researchers can spot it

Posted on September 20, 2022 by Xordac Prime

(Image credit: Knk Phl Prasan Kha Phibuly / EyeEm via Getty)

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.
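
The excerpt stops before explaining what "tell" the researchers rely on, so the following is only a generic, hypothetical sketch of audio-deepfake detection in Python: summarize each clip as averaged MFCC features (via librosa) and fit a plain logistic-regression classifier on labeled real and fake examples. The file paths, the feature choice, and the classifier are assumptions for illustration, not the technique the article describes.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path, sr=16000):
    # Load a clip and summarize it as one fixed-length vector of mean MFCCs.
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_detector(real_paths, fake_paths):
    # Fit a binary classifier: 0 = genuine recording, 1 = deepfake.
    X = np.array([clip_features(p) for p in real_paths + fake_paths])
    y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
    return LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical usage: score a suspect clip with a trained detector.
# model = train_detector(["real_01.wav"], ["fake_01.wav"])
# print(model.predict_proba([clip_features("suspect_call.wav")])[0, 1])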

Source: Ars Technica – Deepfake audio has a tell and researchers can spot it

This entry was posted in Ars Technica, Unfiltered RSS and tagged Ars Technica by Xordac Prime.