EU SEEKS GLOBAL STANDARDS FOR AI

Source: nytimes.com

The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights. Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. Companies would also have to guarantee human oversight in how the systems are created and used. Companies that violate the new regulations could face fines of up to 6 percent of global sales.

The new policy, which could take several years to move through the European Union policymaking process, is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and the allocation of public services like income support.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”