Scientists Want to Set Some Ground Rules to Stop AI Taking Over The World

With AI technology accelerating rapidly, scientists say that existing rules and regulations don’t go far enough in limiting what AI can do – and recommend that robots be held to the same standards as the humans who make them.

Sarah Connor would be proud of the effort these researchers are putting forth. I can see the other side of the argument, where transparency could harm development or give away trade secrets; but I believe that with something as powerful as AI, erring on the side of caution is better. While AI has become a modern convenience for many, with assistants like Alexa becoming popular, it won’t be long before more serious tasks than ordering more Cheetos are delegated to AI, and that’s when problems could start to happen. Not to go full Judgment Day, but autonomous targeting systems on weapons could be worrying. Or perhaps an AI jury. I read a good quote on the subject once: “Ultimately, the future of AI – our AI future – is bright. But the brighter it becomes, the more shadows it will cast.”

Even before we get to the stage of the robots rising up, AI that’s unaccountable and impossible to decipher is going to cause issues – from working out the cause of an accident between self-driving cars, to understanding why a bank’s computer has turned you down for a loan.

Discussion

Source: [H]ardOCP – Scientists Want to Set Some Ground Rules to Stop AI Taking Over The World