Big Tech Heads to Brussels, Nervous About Europe’s New A.I. Rules

Facebook’s Mark Zuckerberg and Google’s Sundar Pichai have journeyed to Brussels as the European Union drafts regulation for A.I. and the digital economy.

LONDON — First came Sundar Pichai, the chief executive of Google’s parent company, Alphabet. Then Apple’s senior vice president for artificial intelligence, John Giannandrea, showed up.

And on Monday, Mark Zuckerberg, Facebook’s chief executive, is joining in with his own trip to Brussels to meet with officials like Margrethe Vestager, the executive vice president of the European Commission.

The main reason so many Silicon Valley executives are paying court in the European Union’s capital: E.U. lawmakers are debating a new digital policy, including first-of-its-kind rules on the ways that artificial intelligence can be used by companies. That has far-reaching implications for many industries — but especially for tech behemoths like Google, Facebook and Apple that have bet big on artificial intelligence.

“While A.I. promises enormous benefits for Europe and the world, there are real concerns about the potential negative consequences,” Mr. Pichai said in a speech last month when he visited Brussels. He said regulation of artificial intelligence was needed to ensure proper human oversight, but added “there is a balance to be had” to ensure that rules do not stifle innovation.

Silicon Valley executives are taking action as Europe has increasingly set the standard on tech policy and regulation. In recent years, the E.U. has passed laws on digital privacy and penalized Google and others on antitrust matters, which has inspired tougher action elsewhere in the world. The new artificial intelligence policy is also likely to be a template that others will adopt.

Artificial intelligence — where machines are being trained to learn how to perform jobs on their own — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies. Yet it presents new risks to individual privacy and livelihoods — including the possibility that the tech will replace people in their jobs.

A first draft of the artificial intelligence policy, which is being coordinated by Ms. Vestager, will be released on Wednesday, along with broader recommendations outlining the bloc’s digital strategy for the coming years. The debate over the policies, including how to expand Europe’s homegrown tech industry, is expected to last through 2020.


The artificial intelligence proposal is expected to outline riskier uses of the technology — such as in health care and transportation like self-driving cars — and how those will come under tougher government scrutiny.

In an interview, Ms. Vestager said artificial intelligence was one of the world’s most promising technologies, but that it presents many dangers because it requires trusting complex algorithms to make decisions based on vast amounts of data. She said there must be privacy protections, rules to prevent the technology from causing discrimination, and requirements that ensure companies using the systems can explain how they work.

She raised particular concerns about the expanding use of facial recognition technology and said new restrictions might be needed before it was “everywhere.”

Ms. Vestager said she was looking forward to Mr. Zuckerberg’s visit. While she was curious to hear his ideas about artificial intelligence and digital policy, she said, Europe was not going to wait to act.

“We will do our best to avoid unintended consequences,” she said. “But, obviously, there will be intended consequences.”

Facebook declined to comment.

Europe is working on the artificial intelligence policy at the direction of Ursula von der Leyen, the new head of the European Commission, which is the executive branch for the 27-nation bloc. Ms. von der Leyen, who took office in November, immediately gave Ms. Vestager a 100-day deadline to release an initial proposal about artificial intelligence.

The tight time frame has raised concerns that the rules are being rushed. Artificial intelligence is not monolithic and its use varies depending on the field where it is being applied. Its effectiveness largely relies on data pulled from different sources. Overly broad regulations could stand in the way of the benefits, such as diagnosing disease, building self-driving vehicles or creating more efficient energy grids, some in the tech industry warned.

“There is an opportunity for leadership, but it cannot just be regulatory work,” said Ian Hogarth, a London-based angel investor who focuses on artificial intelligence. “Just looking at this through the lens of regulations makes it hard to push the frontiers of what’s possible.”

Europe’s A.I. debate is part of a broader move away from an American-led view of technology. For years, American lawmakers and regulators largely left Silicon Valley companies alone, allowing the firms to grow unimpeded and with little scrutiny of problems such as the spread of disinformation on social networks.

Policymakers in Europe and elsewhere stepped in with a more hands-on approach, setting boundaries on privacy, antitrust and harmful internet content. Last week, Britain unveiled plans to create a new government regulator to oversee internet content.

“Technology is fragmenting along geopolitical lines,” said Prof. Wendy Hall, a computer scientist at Southampton University, who has been an adviser to the British government on artificial intelligence.

In the interview, Ms. Vestager compared Europe’s more assertive stance on tech regulation to its regulation of agriculture. Many pesticides and chemicals that are allowed in the United States are banned in Europe.

“It is quite the European approach to say if things are risky, then we as a society want to regulate this,” she said. “The main thing is for us to create societies where people feel that they can trust what is going on.”

European policymakers have no shortage of ideas to wade through in drafting the artificial intelligence policy. Since 2018, 44 reports with recommendations for “ethical artificial intelligence” have been published by various organizations, according to a PricewaterhouseCoopers report.

In the United States, the government has focused on providing research funding for artificial intelligence rather than drafting new regulations. This month, the latest White House budget proposal earmarked roughly $1.1 billion for such research.

Ms. Vestager said her policy would include a boost to research funding, but also a framework for safeguarding areas where artificial intelligence can cause the most harm. She said she was not worried about how artificial intelligence is used to recommend a song on Spotify or a movie on Netflix, but was focused on algorithms that determine who gets a loan or how diseases are diagnosed.

Many details will likely be fought over, such as how Europe intends to meet its goal of maximizing the benefits of artificial intelligence. Civil society groups, banks, carmakers and health providers are expected to weigh in.

The rules will have important consequences for Apple, Facebook and Google. The tech giants have invested heavily in artificial intelligence in recent years and have battled to hire the world’s top engineers. Artificial intelligence is now built into Apple products such as Siri and Face ID, and helps power Google’s search engine and self-driving cars, as well as Facebook’s advertising business.

Apple declined to comment. A Google spokesman referred back to the comments made by Mr. Pichai.

During Mr. Pichai’s visit to Brussels last month, he was asked what the region’s artificial intelligence rules should look like. He warned there would be lasting economic consequences if Europe did too much.

“The ability of European industry to adopt and adapt A.I. for its needs is going to be very critical for the continent’s future,” he said. “It’s important to keep that in mind.”

Credit: The New York Times
