The unholy union of AI and HR is coming.
This article was written by Paul Dicken and originally published by The New Atlantis.
Bloomberg columnist Ben Schott recently suggested that Amazon and Facebook should be given seats at the United Nations. The argument is that since these giant corporations exercise the power and influence of a modern nation state — and in many cases, considerably more power and influence — they should be held accountable to the same mechanisms governing international relations. There is certainly some logic to this proposal, especially given concerns over the role Big Tech already plays in global politics, whether facilitating electoral interference, censoring information, or providing the facial recognition software for tracking down political dissidents.
Yet the political use of technology is also nothing new. Technical innovation — particularly the development of “machine intelligence” — has always evolved hand-in-hand with political innovation. The English mathematician Charles Babbage is generally credited with inventing the first computer, but in fact he made a more lasting contribution to political economy. A prototype of his Difference Engine, a calculating machine, was produced in 1822, but, like many generously funded government schemes, the finished product disappeared beneath mounting costs, increasing delays, and contractual disputes with the chief engineer. And perhaps just as well; the completed machine would have involved 25,000 individual parts and weighed over four tons.

Yet as Babbage explained in his book On the Economy of Machinery and Manufactures (1832), the mechanization of mental labor was only partly intended to redress “the inattention, the idleness, or the dishonesty of human agents.” Its principal benefit was managerial. Mechanization allowed one to clearly individuate the various sub-processes involved in any complex operation, and thereby improve their overall efficiency. This became known as the Babbage Principle, the financially cut-throat extension of Adam Smith’s division of labor and a forerunner of modern management practices of reducing labor costs by differentiating between high-skill and low-skill tasks and paying workers accordingly. From its very beginning, then, the concept of “machine intelligence” carried a double meaning, ambiguous between the apparently purposeful behavior of the machinery itself and the information such machinery allows you to gather about your employees.
These two interrelated issues of technological control and political control frame the recommendations advanced in Human-Centered AI, a new book by computer scientist Ben Shneiderman. Beginning in the 1980s, Shneiderman pioneered the research and design of some of the familiar ways in which we have all come to operate personal computers through screens — for example, using highlighted hyperlinks and touchscreen keyboards. He has also long been critical of the direction of artificial intelligence research, and its narrow-minded focus on purely technical innovation. By contrast, his proposed “Human-Centered Artificial Intelligence” (HCAI) framework recommends using AI to provide practical solutions to everyday problems, seeking “not to replace people but to empower them.”
It sounds promising in principle. Yet in the book itself, we encounter passages like this:
HCAI is based on processes that extend user-experience design methods of user observation, stakeholder engagement, usability testing, iterative refinement, and continuing evaluation of human performance in the use of systems that employ AI algorithms such as machine learning. The goal is to create products and services that amplify, augment, empower, and enhance human performance. HCAI systems emphasize human control, while embedding high levels of automation.