
ITA 2019

The following question(s) refer(s) to the text below:

Artificial Intelligence (AI) is going to play an enormous role in our lives and in the global economy. It is the key to self-driving cars, the Amazon Alexa in your home, autonomous trading desks on Wall Street, innovation in medicine, and cyberwar defenses.
Technology is rarely good or evil — it’s all in how humans use it. AI could do an enormous amount of good and solve some of the world’s hardest problems, but that same power could be turned against us. AI could be set up to inflict bias based on race or beliefs, invade our privacy, learn about and exploit our personal weaknesses — and do a lot of nefarious things we can’t yet foresee.
Which means that our policymakers must understand and help guide AI so it benefits society. […] We don’t want overreaching regulation that goes beyond keeping us safe and ends up stifling innovation. Regulators helped make it so difficult to develop atomic energy that today the U.S. gets only 20% of its electricity from nuclear power. So, while we need a Federal Artificial Intelligence Agency, or FAIA, I would prefer to see it created as a public-private partnership. Washington should bring in AI experts from the tech industry to a federal agency designed to understand and direct AI and to inform lawmakers. Perhaps the AI experts would rotate through Washington on a kind of public service tour of duty.
Importantly, we’re at the beginning of a new era in government — one where governance is software-defined. The nature of AI and algorithms means we need to develop a new kind of agency — one that includes both humans and software. The software will help monitor algorithms. Existing, old-school regulations that rely on manual enforcement are too cumbersome to keep up with technology and too “dumb” to monitor algorithms in a timely way.
Software-defined regulation can monitor software-driven industries better than regulations enforced by squads of regulators. Algorithms can continuously watch emerging utilities such as Facebook, looking for details and patterns that humans might never catch, but nonetheless signal abuses. If Congress wants to make sure Facebook doesn’t exploit political biases, it could direct the FAIA to write an algorithm to look for the behavior.
It’s just as important to have algorithms that keep an eye on the role of humans inside these companies. We want technology that can tell if Airbnb hosts are illegally turning down minorities or if Facebook’s human editors are squashing conservative news headlines.
The watchdog algorithms can be like open-source software — open to examination by anyone, while the companies keep private proprietary algorithms and data. If the algorithms are public, anyone can run various datasets against them and analyze for “off the rails” behaviors and unexpected results.
Clearly, AI needs some governance. As Facebook is proving, we can’t rely on companies to monitor and regulate themselves. Public companies, especially, are incentivized to make the biggest profits possible, and their algorithms will optimize for financial goals, not societal goals. But as a tech investor, I don’t want to see an ill-informed Congress set up regulatory schemes for social networks, search and other key services that then make our dynamic tech companies as dull and bureaucratic as electric companies. […] Technology companies and policymakers need to come together soon and share ideas about AI governance and the establishment of a software-driven AI agency. [...]
Let’s do this before bad regulations get enacted — and before AI gets away from us and does more damage. We have a chance right now to tee up AI so it does tremendous good. To unleash it in a positive direction, we need to get the checks and balances in place right now.

Adapted from. Accessed: June 2018.

Mark the INCORRECT alternative. In the text, the author states that

Choose one of the alternatives.