Ask Slashdot: What Are Some Good AI Regulations?

Longtime Slashdot reader Okian Warrior writes: There’s been a lot of discussion about regulating AI in the news recently, including Sam Altman going before a Senate committee to beg for regulation. So far I’ve seen only calls for regulation, but no suggestions for what those regulations should be. Since Slashdot is largely populated with experts in various fields (software, medicine, law, etc.), maybe we should begin this discussion. And note that if we don’t come up with reasonable rules, Congress (mostly 80-year-old white men with conflicts of interest) will do it for us.

What are some good AI regulation suggestions?

I’ll start: A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis. If an AI suggests a diagnosis or medical treatment, there must be buy-in from a human who believes the decision is correct, and who would be held responsible in the same manner as a doctor not using AI. The AI must be a tool used by, and not a substitute for, human decisions. This would avoid the problem of humans abdicating their responsibility to the software and causing harm through negligence. Doctors can use AI to (for example) diagnose cancer, but it will be the doctor’s diagnosis and not the AI’s.
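To make the proposal concrete, here is a minimal, purely hypothetical sketch of how such a rule might look inside a medical records system: the AI output is stored as advisory only, and nothing becomes an actionable diagnosis until a named clinician signs off and is recorded as the responsible party. All class names, fields, and values below are invented for illustration, not taken from any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AISuggestion:
    """A diagnosis or treatment proposed by an AI tool (advisory only)."""
    model_name: str
    suggestion: str
    rationale: str


@dataclass
class Diagnosis:
    """A diagnosis record that is valid only after a human clinician signs off.

    The responsible party is always the clinician, never the AI tool.
    """
    patient_id: str
    ai_suggestion: Optional[AISuggestion] = None
    responsible_clinician: Optional[str] = None
    final_diagnosis: Optional[str] = None
    signed_off_at: Optional[datetime] = None

    def sign_off(self, clinician: str, diagnosis_text: str) -> None:
        """Record the clinician's own diagnosis; they may accept, modify, or
        reject the AI suggestion, but responsibility attaches to them."""
        self.responsible_clinician = clinician
        self.final_diagnosis = diagnosis_text
        self.signed_off_at = datetime.now(timezone.utc)

    @property
    def is_valid(self) -> bool:
        """A diagnosis with no responsible human is not actionable."""
        return self.responsible_clinician is not None and self.final_diagnosis is not None


# Usage: the AI suggestion alone is never a diagnosis; it becomes one only
# when a named clinician signs off and thereby assumes responsibility.
record = Diagnosis(
    patient_id="12345",
    ai_suggestion=AISuggestion(
        model_name="example-model",
        suggestion="Stage II melanoma",
        rationale="Lesion asymmetry and border irregularity above threshold",
    ),
)
assert not record.is_valid          # AI output by itself is not actionable
record.sign_off("Dr. A. Smith", "Stage II melanoma")
assert record.is_valid              # now attributable to a human clinician
```

The point of the sketch is the audit trail: whatever the AI suggested, the record that drives treatment names a human who affirmed it and who carries the same liability as a doctor working without AI.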

What other suggestions do people have?




Source: Slashdot – Ask Slashdot: What Are Some Good AI Regulations?