Every so often, VentureBeat writes a story about something that needs to go away. A few years back, my colleague Blair Hanley Frank argued that AI systems like Einstein, Sensei, and Watson must go because corporations tend to overpromise results for their products and services. I’ve also taken runs at charlatan AI and white supremacy.
This week, a series of events at the intersection of the workplace and AI lent support to the argument that techno-utopianism has no place in the modern world. Among the warning signs in headlines was a widely circulated piece by a Financial Times journalist who said she was wrong to be optimistic about robots.
She describes how she used to be a techno-optimist but found, in the course of her reporting, that robots can squeeze workers into a machine-driven system and force them to work at a robot’s pace. In the article, she cites the Center for Investigative Reporting’s analysis of internal Amazon records, which found that injury rates were higher in Amazon facilities with robots than in facilities without them.
“Dehumanization and intensification of work is not inevitable,” wrote the journalist, who’s quite literally named Sarah O’Connor. Fill in your choice of Terminator joke here.
Also this week: The BBC quoted HireVue CEO Kevin Parker as saying AI is more impartial than a human interviewer. Facing opposition on multiple fronts, HireVue announced last month that it would no longer use facial analysis in its AI-powered assessments of job candidates’ video interviews. Microsoft Teams added similar technology this week, designed to recognize who is enjoying video calls.
External auditors have examined the AI used by HireVue and by hiring software company Pymetrics, which refers to its AI as “entirely bias free,” in processes that seem to have raised more questions than they’ve answered.
And VentureBeat published an article about a research paper from OpenAI and Stanford University with a warning: Companies like Google and OpenAI have only a matter of months to confront the negative societal consequences of large language models before they perpetuate stereotypes, replace jobs, or are used to spread disinformation.
What’s important to understand about the OpenAI and Stanford paper is that its warning follows a familiar pattern: before criticism of large language models became widespread, research and dataset audits found major flaws in large computer vision datasets that were more than a decade old, like ImageNet and 80 Million Tiny Images. An analysis of face datasets dating back four decades also found ethically questionable practices.
A day after that article was published, OpenAI cofounder Greg Brockman tweeted what looked like an endorsement of a 90-hour work week. Run the math on that. If you slept seven hours a night, you would have about four hours a day to do anything that is not work — like exercise, eating, resting, or spending time with your family.
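Here’s the back-of-the-envelope version, assuming the 90 hours are spread evenly across all seven days and you keep that seven hours of sleep a night:

\[
\frac{90 \text{ hours}}{7 \text{ days}} \approx 12.9 \text{ hours of work per day}, \qquad 24 - 12.9 - 7 \approx 4.1 \text{ hours for everything else}
\]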
An end to techno-utopianism doesn’t have to mean the death of optimistic views about ways technology can improve human lives. There are still plenty of people who believe that indoor farming can change lives for the better or that machine learning can accelerate efforts to address climate change.
Google AI ethics co-lead Margaret Mitchell recently made a case for AI design that keeps the bigger picture in mind. In an email sent to company leaders before she was placed under investigation, she said consideration of ethics and inclusion is part of long-term thinking for long-term beneficial outcomes.
“The idea is that, to define AI research now, we must look to where we want to be in the future, working backwards from ideal futures to this moment, right now, in order to figure out what to work on today,” Mitchell said. “When you can ground your research thinking in both foresight and an understanding of society, then the research questions to currently focus on fall out from there.”
With that kind of long-term thinking in mind, Google’s Ethical AI team and Google DeepMind researchers have produced a framework for carrying out internal algorithm audits, questioned the wisdom of scale when addressing societal issues, and called for a culture change in the machine learning community. Google researchers have also advocated rebuilding the AI industry according to principles of anticolonialism and queer AI and evaluating fairness using sociology and critical race theory. And ethical AI researchers recently asserted that algorithmic fairness cannot simply be transferred from Western nations to non-Western nations or countries in the Global South, such as India.
The death of techno-utopia could entail creators of AI systems recognizing that they may need to work with the communities their technology impacts and do more than simply abide by the scant regulations currently in place. This could benefit tech companies as well as the general public. As Parity CEO Rumman Chowdhury told VentureBeat in a recent story about what algorithmic auditing startups need to succeed, unethical behavior can carry reputational and financial costs that stretch beyond any legal ramifications.
The lack of comprehensive regulation may be why some national governments and groups like Data & Society and the OECD are building algorithmic assessment tools to diagnose risk levels for AI systems.
Numerous reports and surveys have found automation on the rise during the pandemic, and the events of the past week remind me of the work of MIT professor and economist Daron Acemoglu, whose research has found that one robot can replace 3.3 human jobs.
In testimony before Congress last fall about the role AI will play in the United States’ economic recovery, Acemoglu warned lawmakers about the dangers of excessive automation.
A 2018 National Bureau of Economic Research (NBER) paper coauthored by Acemoglu says automation can create new jobs and tasks, as it has done in the past, but it describes excessive automation as capable of constraining labor market growth and suggests it may have acted as a drag on productivity growth for decades.
“AI is a broad technological platform with great promise. It can be used for helping human productivity and creating new human tasks, but it could exacerbate the same trends if we use it just for automation,” he told the committee. “Excessive automation is not an inexorable development. It is a result of choices, and we can make different choices.”
To avoid excessive automation, Acemoglu and coauthor Pascual Restrepo, a Boston University research fellow, call in that 2018 NBER paper for reforms to the U.S. tax code, which they argue currently favors capital over human labor. They also call for new or strengthened institutions and policies to ensure shared prosperity, writing: “If we do not find a way of creating shared prosperity from the productivity gains generated by AI, there is a danger that the political reaction to these new technologies may slow down or even completely stop their adoption and development.”
This week’s events involve complexities like robots and humans working together and language models with billions of parameters, but they all seem to raise a simple question: “What is intelligence?” To me, working 90 hours a week is not intelligent. Neither is perpetuating bias or stereotypes with language models, or failing to consider the impact of excessive automation. True intelligence takes into account long-term costs and consequences, historical and social context, and, as Sarah O’Connor put it, makes sure “the robots work for us, and not the other way around.”
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
VentureBeat