December 7, 2022


What constitutes a quality job? If you were to ask family and friends, they would probably say good pay, benefits, and stable working conditions. But for many workers, workplace technologies, especially AI, are also shaping job quality.

That’s important because the U.S. has a serious job quality problem. The number one ESG challenge companies are grappling with is the treatment of workers.

That’s why the Partnership on AI (PAI), a nonprofit focused on the responsible use of AI, studied how employers can do their part to ensure that workplace AI improves job quality rather than making it worse, as media outlets, labor groups, and technology organizations have often reported.

Stephanie Bell, the report’s author, studied how three groups of workers experienced workplace AI and its effect on their job quality: customer service agents, data annotators (workers who prepare the data that feed into AI systems), and warehouse workers.

“When people talk about AI and the future of work, very frequently job quality gets ignored in favor of discussions about automation and job loss. The reality of what AI adoption looks like on the ground is that workers most often see AI in the form of workplace technologies.” – Stephanie Bell, Research Scientist, Partnership on AI

Bell had the workers record their experiences with workplace AI, and after reviewing the journal entries, conducted interviews to dig deeper into what workers experienced. PAI then identified pointers for employers on how to balance managerial autonomy and worker input on deploying workplace AI.

When given a voice, workers genuinely appreciate AI at work

The first key finding was that when AI was deployed strategically, such as for professional development or to minimize annoying tasks, workers felt that AI made their jobs easier.

For example, customer service agents had to use AI-based software that monitored their calls and text chats with customers, analyzing the agents’ tone of voice, speaking volume, and keywords to assess emotions and prompt them to speak more quietly, more slowly, or with less “emotional charge.”

When the prompts were used as coaching tools, rather than commands or performance metrics, workers appreciated the real-time tips. “Most employers can’t afford to hire dedicated coaches to listen to every employee’s calls and offer feedback, especially in real-time, so using AI-based tools to offer suggestions makes sense from a professional development and quality assurance standpoint,” Bell told me.

In other cases, AI freed up workers to focus on more interesting and higher-level tasks. Data annotators used AI-based software to speed up the tedious process of hand-labeling documents, audio, video, and image files with text so that they could be used by AI systems. A strong majority preferred working with the software to labeling data manually.

Blame bosses for bad workplace AI

The second key finding from the report was the confirmation that management decisions overwhelmingly shape workplace AI. “That may seem obvious, but the way AI is described in the media and policy focuses on what AI does to people, minimizing the human responsibility around how AI is deployed,” Bell said. Bell described two business models that leaders follow when adopting workplace AI:

  • High-road model: Employers using this model hire highly skilled workers that need to be paid more and invested in through upskilling, but this workforce is more capable and has the autonomy to make higher-level decisions. This group more often uses AI to augment their skills.
  • Churn model: Employers using this model hire less skilled workers who undergo high “churn” (a rapid cycle of hiring, firing, and attrition) as a core tenet of the business model. Bell told me that many of the technologies used to automate jobs or tasks target churn-model employers rather than high-road employers.

Those models track with research done by the World Economic Forum. At Davos this year, the Forum published the Good Work Framework to help countries benchmark job quality, and several of its projects explore how technologies can improve job quality, including a deep dive on how to augment manufacturing workers with technology rather than replace them, and an initiative to help employers use the public interest technology technique of “collaborative design” to ensure that workplace technologies benefit both workers and employers. The projects aim to promote more high-road practices by industry.


Research, Labor Unions Encourage High-Road on AI

When it comes to job quality and AI, experts agree that worker voice is key.

The high-road model is also supported by economic research and usability studies. For instance, MIT professor Daron Acemoglu and Boston University’s Pascual Restrepo have found that some automation, dubbed “so-so” automation, disrupts employment without boosting productivity, as with self-checkout kiosks or automated telephone customer service. In another example, a paper published by Katherine Kellogg and colleagues this summer in MIT Sloan Management Review found that when healthcare workers, whether lab technicians or cardiologists, were brought into the workplace AI adoption process, they were more likely to use AI and to use it well.

Bucking their reputation as technology resisters, unions are also urging high-road models for workplace AI. “Unions sometimes are mischaracterized as technology Luddites. This is far from the truth. Technology can make jobs better, and unions are ready to pursue the co-creation of technology augmentation plans with employers, workers, and even technology vendors,” Tim Noonan told me. Noonan is Director at the International Trade Union Confederation, the world’s largest coalition of labor unions and a core member of the World Economic Forum’s Global AI Action Alliance.

Bell told me that in the United States, unions have been slow to incorporate workplace technology provisions into collective bargaining agreements, but in Germany, “works councils” are frequently pro-technology and help employers navigate workplace AI adoption.

However, some progress is being made in the United States. New resources from the University of California-Berkeley’s Labor Center aim to help unions incorporate workplace tech provisions into collective bargaining agreements, contracts that typically focus on working conditions like scheduling stability, pay, and safety.

In some cases, policymakers are helping elevate worker voices in these conversations. California’s Future of Work Commission included business and labor leaders and heard testimony from frontline workers to identify solutions to a variety of job quality woes, including helping workers prepare for technology-related change.

As for employer actions, PAI concluded that executives should ask themselves three questions in order to take the high road approach to adopting workplace AI:

  1. Are workers brought into the AI procurement and implementation process and given agency as experts of their own experience?
  2. Does the AI being introduced shift power further toward management, or toward workers?
  3. How are gains in productivity being distributed? Do they flow only to senior leaders and shareholders, or to workers as well?

Employers should engage workers at the start of the workplace AI adoption process through tactics like anonymous “frustration boxes” that allow workers to share problems they think technology could solve. From there, chief technology and human resources officers can identify tech and non-tech solutions to the problems.

Bell tells me there aren’t many examples of high-road employers, but the PAI team hopes to address that next. The nonprofit will build a set of commitments for companies that want to make sure the AI products they build or use benefit workers, or at the very least are “neutral in their impact.” In 2023, PAI will draft the commitments and incorporate feedback from workers, industry leaders, and the public; Forbes readers are encouraged to get involved.

Hopefully projects like these will power the adoption of workplace AI in a way that is a “win-win” for workers and employers.

