Microsoft and several other tech giants have taken an important step toward preventing artificial intelligence from turning against people and creating some kind of “Skynet”.
Preventing Artificial Intelligence From Rising Against Humans
Those worried about a Skynet-like “revolt of the machines” can now breathe a sigh of relief, thanks to new protective measures aimed at averting a potential AI uprising. The nonprofit MITRE Corporation has founded the Adversarial ML Threat Matrix together with 12 leading technology companies, including Microsoft, IBM, and Nvidia. The group describes the system as an open framework designed to help security analysts detect, alert on, respond to, and remediate threats targeting machine learning (ML) systems.
Microsoft says this step was prompted by the ongoing rise in the number of attacks against commercial AI systems around the world. The firm surveyed 28 large businesses and found that almost all of them are unaware of the threat posed by adversarial machine learning, and that 25 of the 28 lack the right tools to secure their ML systems.
The matrix is built from past vulnerabilities and adversarial behaviors discovered by Microsoft and MITRE over the years, and draws on Microsoft’s expertise in the security industry.
In a blog post, Microsoft says: “We have found that when attacking an ML system, attackers use a combination of ‘traditional techniques’ such as phishing and lateral movement, as well as adversarial ML techniques.”
As Mikel Rodriguez, Director of Machine Learning Research at MITRE, points out: “When it comes to machine learning security, the walls between public and private efforts and responsibilities are blurring; public sector challenges such as national security will require the cooperation of private parties as well as public investment. At MITRE, we are committed to identifying critical vulnerabilities in the machine learning supply chain, working with organizations like Microsoft and the broader community. This framework is the first step toward bringing communities together and helping organizations think more holistically about securing their machine learning systems.”