In early April, the European Commission published guidelines intended to ensure that any artificial intelligence technology used on the EU's 500 million citizens is trustworthy. The bloc's commissioner for digital economy and society, Mariya Gabriel of Bulgaria, called them "a strong foundation based on EU values."
One of the 52 experts who worked on the guidelines argues that the foundation is flawed, thanks to the technology industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says that many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of uses of AI that should be prohibited. That list included autonomous weapons and government social scoring systems similar to those under development in China. But Metzinger alleges that tech's allies on the group later convinced the broader membership that it shouldn't draw any "red lines" around uses of AI.
Metzinger says that spoiled a chance for the EU to set an influential example showing, as the bloc's GDPR privacy rules do, that technology must operate within clear limits. "Now everything is up for negotiation," he says.

When a draft of the guidelines was released in December, uses that had been suggested as requiring "red lines" were instead presented as examples of "critical concerns." That change appeared to please Microsoft. The company does not have its own seat on the EU expert group, but like Facebook, Apple, and others, it is represented by the trade association DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft's senior director for EU government affairs, said the group had "taken the right approach in choosing to cast these as 'concerns,' rather than as 'red lines.'" Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general of DigitalEurope and a member of the expert group, said its work had been balanced and not tilted toward industry. "We need to get it right, not to stop innovation and welfare in Europe, but to avoid the risks of misuse of AI."
The brouhaha over Europe's guidelines for AI is an early skirmish in a debate that is likely to recur around the world, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Technology companies are taking a close interest, and in some cases they appear to be trying to steer the construction of any guardrails to their own benefit.
Harvard law professor Yochai Benkler warned in the journal Nature this month that "industry has mobilized to shape the science, morality, and laws of artificial intelligence."
Benkler cited Metzinger's experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into "Fairness in Artificial Intelligence" that is funded in part by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and it will be entitled to a royalty-free license to any intellectual property developed.
Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would be made available to the public. Benkler says the program is an example of the tech industry becoming too influential over how society governs and scrutinizes the effects of AI. "Government actors need to rediscover their own sense of purpose as an indispensable counterweight to corporate power," he says.
Microsoft used some of its influence when the state of Washington considered proposals to restrict facial recognition technology. The company's cloud division offers such technology, but it has also said that the technology should be subject to new federal regulation.
In February, Microsoft voiced support for a privacy bill under consideration in Washington's state Senate that embodied its preferred rules, including a requirement that vendors allow outsiders to test their technology for accuracy or bias. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.
In April, Microsoft found itself fighting against the House version of the bill it had supported, after the addition of tougher language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft's director of government affairs, testified against that version of the bill, saying it "would effectively ban facial recognition technology, [which] has many benefits." The House bill stalled, and with lawmakers unable to reconcile the differing visions for the legislation, Washington's attempt to pass a new privacy law collapsed.
In a statement, a Microsoft spokesperson said that the company's actions in Washington stemmed from its belief in "robust regulation of facial recognition technology to ensure that it is used responsibly."
Shankar Narayan, director of the technology and liberty project at the ACLU's Washington chapter, says the episode shows how technology companies are trying to steer lawmakers toward their favored, looser rules for AI. But, Narayan says, they won't always succeed. "My hope is that more policymakers will see these companies as entities that need to be regulated and will stand up for consumers and communities," he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.
Washington lawmakers, and Microsoft, expect to try again for a new privacy and facial recognition law next year. By then, AI may be an even hotter topic of debate in Washington, DC.
Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon), along with Representative Yvette Clarke (D-New York), introduced bills called the Algorithmic Accountability Act. The legislation includes a requirement that companies assess whether their AI systems and the data used to train them have built-in biases, or could harm consumers through discrimination.
Mutale Nkonde, a fellow at the research institute Data & Society, participated in discussions during the bill's drafting. She hopes it will spark a conversation in DC about AI's societal impacts, which she says is long overdue.
Technology companies will work to make themselves a part of any such conversation. Nkonde says that when she has talked with lawmakers about topics such as racial disparities in face analysis algorithms, some have seemed surprised, saying they had been briefed by tech companies on how AI technology benefits society.
Google is one company that has briefed federal lawmakers about AI. Its parent, Alphabet, spent $22 million on lobbying last year, more than any other company. In January, Google published a white paper arguing that although the technology comes with risks, existing rules and self-regulation will be sufficient "in the vast majority of instances."
Metzinger, the German philosophy professor, believes the EU can still wrest its AI policy from industry influence. The expert group that produced the guidelines is now drawing up recommendations for how the European Commission should invest the billions of euros it plans to spend in the coming years to strengthen Europe's competitiveness in AI.
Metzinger wants some of that money to fund a new center to study the effects and ethics of AI, along with similar work across Europe. That would create a new class of experts who could continue to evolve the EU's AI ethics guidelines in a less industry-driven direction, he says.