Response to the White House OSTP's Request for Information on Automated Worker Surveillance and Management
JUNE 18, 2023 | TEAM COWORKER
To Whom It May Concern:
We write to offer public comment on the request for information published on May 2, 2023 (Document 2023-09353). Coworker welcomes this public consultation by the White House Office of Science and Technology Policy (OSTP) on automated worker surveillance and management technologies. Coworker is a laboratory for workers to experiment with power-building strategies and win meaningful changes in the 21st-century economy.
For the past four years, we have been conducting participatory field and market research and analysis on how data-mining techniques innovated in the consumer realm have moved into the workplace[1] and job markets. For the past two years, we have been investigating and documenting the growing number of tech products and companies intersecting with every step of the labor process: hiring/recruitment, workplace safety and productivity, benefit provision, workforce development, and more. Dubbing this tech ecosystem “Little Tech,” we launched a public database[2] to bring attention to the rapidly growing, expansive, and unregulated marketplace of tech products collecting, aggregating, and analyzing sensitive data from workers. We have also been working alongside workers to understand how algorithmic payroll systems impact workers’ wages, through our work with Shipt workers (see: “Some Shipt workers report seeing lower pay under new effort-based model”).
The impact of algorithmic management tools and surveillance on workers has to be understood within broader labor and market realities. Specifically, the proliferation of algorithmic workforce management tools has to be understood in the context of declining productivity gains, workers’ weakening ability to organize and collectively bargain, the ongoing fissuring of workplaces and job markets, and regulatory precedent from the DOJ and FTC specifically calling out the role of HR professionals in maintaining a competitive labor market for workers, not only in hiring and recruitment but also in avoiding harmful conduct that stifles competition and can lead to decreased wages, less attractive benefits, or lost job opportunities.
We have found that while the suite of products in this employment tech marketplace serves a variety of HR business purposes, these products pose risks and harms for workers that extend beyond privacy into economic well-being and physical and mental health. We are also seeing that, with increased scrutiny of intrusive algorithmic management and workplace surveillance tools, vendors are beginning to rebrand in an attempt to fly under the radar. One vendor that has done this is ActivTrak, a well-known, highly intrusive productivity monitoring vendor that has been in the market for years, was among the first to offer keystroke logging, and now sells a more sophisticated set of tools. In the past two years, ActivTrak has become more vocal about taking privacy seriously (claims that merit scrutiny) and has launched a Productivity Lab of experts who analyze workplace trends and advise employers, a group regulators would do well to engage. It is therefore important to stay vigilant about this rapidly changing field of employment technologies.
In order to assist OSTP’s analysis of public and private uses of automated worker surveillance and management tools, below is an overview of the current and anticipated uses of these technologies in the workplace and job markets.
1 - Increased diversification and sophistication of HR and workforce management tools:
Through our research and conversations with workers, we have been tracking the evolution of HR-focused and workforce management tools over the past five years. We have found that during this time the suite of tools has expanded to include a wide variety of new vendors, among them employee listening tools, identity verification and background screening tools, and labor data brokers and intelligence vendors. This evolving suite of tools and solutions is also being aided by expanded technical capabilities such as facial recognition, emotion AI, conversational AI, computer vision and video analytics, and wearable devices.
- Employee listening tool, Prodoscore: This vendor is primarily a workplace productivity monitoring tool, but it has a Social Network feature that allows employers to "visualize how people are connected, how they communicate, and what influence they have on each other," which could be used to detect organizing in the workplace. More here: https://www.prodoscore.com/social-network/ .
- Employee listening tool, Infeedo.AI: The company describes its solutions as "continuous listening at scale." It reports rapid growth over the past two years, although its customer list is not publicly visible.
- Employee listening tool, Oracle: Oracle has a new employee listening tool and is not very transparent about what data it collects; more investigation and research is needed. See: “Oracle’s new platform latest sign of growth in employee listening”.
- Workplace productivity tool that utilizes facial recognition, Clever Control: A highly intrusive workplace productivity monitoring vendor that has been in the market for a long time and also utilizes facial recognition, as one of its customer stories shows. It is not very transparent about which employers use it, but it appears to cater to both the public and private sectors.
- Employee listening tool, Aware: Collects sentiment and productivity data on workers and frames it as empathetic employee listening. It is used in many Fortune 100 and Fortune 200 companies but is not transparent about who those customers are.
2 - Widespread collection of workers’ data inside and outside the workplace:
While earlier products collected passive data such as time logs, keystrokes, and websites visited, algorithmic management tools now collect a growing number of data points on workers, including physical movements, facial and audio data, sensitive physiological and biometric data (gestures, sentiment/mood, stress levels, cognitive functioning), and medical/health information such as body temperature, respiratory rate, and heart rate[3].
Vendors in this category include:
- Invisible AI: uses cameras and algorithms to track workers’ body movements as they work through assembly processes.
- VoxelAI: used in retail, warehouses, and manufacturing. It is a highly intrusive product that utilizes "computer vision and AI to enable security cameras to automatically identify hazards and high-risk activities in real-time, keeping people safe."
- Wearable tech, Modjoul: Founded in 2016 and based in Greenville, South Carolina, Modjoul is developing wearable safety technology that enables real-time, personalized alerts and recommendations aimed at reducing injuries, most notably musculoskeletal issues.
We are also finding that some vendors are scraping public data on workers (social media information, press releases, Google search results, etc.) and packaging it as labor intelligence data that can be bought and integrated into workforce management tools.
Vendors acquiring and integrating workers’ public data into workforce management tools include:
- Physiological biometrics data collection, WorkHuman: This vendor calls itself the world’s fastest-growing integrated Social Recognition® and Continuous Performance Management platform for building a positive workplace culture. However, its tool collects data from workers to predict their motivations, behaviors, and sentiment. You can read about its MoodTracker and other analytics here: https://www.workhuman.com/workhuman-iq/.
- Employment data broker, Equifax/Appriss Insights (acquired by Equifax): Prior to its acquisition in 2021, Appriss Insights administered the nation's most comprehensive source of person-based incarceration, justice, and risk intelligence data. After its acquisition by Equifax (see its press statement), it was integrated into the Equifax Total Verify platform, which, among other solutions, includes a workforce management solution for “Workplace Safety” screening and employment verification.
- Employment data broker, CLARO: Claro describes itself as a “global labor market intelligence platform” that collects and aggregates billions of data points to benchmark worker attrition risk and worker engagement. It appears to be affiliated with the Human Data Interaction Project at MIT, but beyond that, its process for aggregating employment data from public records, and how that data is modeled into predictive tools for employers, is unclear.
Finally, the switch to hybrid and remote work has increased demand for management tools that give employers discreet visibility into employee activities and the ability to surveil remote or hybrid workers after working hours. Two particularly problematic vendors we want to identify are:
- Teramind: This is one of the more intrusive algorithmic workplace management vendors we have seen; it also collects biometric data as part of its monitoring and is used to monitor remote work.
- Teleperformance TP Observer: Provides an AI-enabled webcam that can be installed on remote workers’ computers; it recognizes their faces, tags their location, and scans for “breaches” of rules at random points during a shift. Such breaches include an “unknown person” detected at the desk via the facial recognition software, “missing from desk,” “detecting an idle user,” and “unauthorized mobile phone usage.” Other products, such as Teramind, collect audio recordings from workers (without their knowledge), among other employee activity data points, in order to support workplace investigations.
3 - Widespread data collection is being used to increase employers’ intelligence capabilities to (1) conduct risk modeling and predictive analytics, (2) disaggregate job duties and assign them economic value, and (3) design AI systems that work alongside workers to make them more productive and “effective”:
These black box predictive and risk modeling systems are being used to measure everything from workers’ productivity to their mood, sentiment, and cultural fit, and to run targeted analyses that identify workers at risk of unionizing or going “rogue.” We have also found that these black boxes can be customized by employers to target whichever problematic behaviors they are most concerned about (e.g., tardiness, productivity, workplace organizing, workplace violence), and there is little transparency about which behaviors these systems can target, predict, and rank workers for, or how that analysis is made (a hypothetical sketch following the vendor list below illustrates the kind of weighted scoring involved).
Vendors that process workers’ data through black box risk modeling and predictive analytics are:
- Risk Modeling/Insider Threat Detection, Verensics: Its Human Resource Solution includes a “Visual Risk Index” to help employers weed out potential new employees who fall within their “areas of concern” at the screening stage. Its proprietary algorithm appears to create a unique profile for each candidate, with multiple data points analyzed in real time. It is unclear whether public records data is being used in the modeling.
- Risk Modeling/Insider Threat Detection, Forcepoint Behavioral Analytics (acquired by Francisco Partners): This vendor has been in the field of employment risk detection tools for over 10 years and is an established player. Its Insider Risk Detection tool collects extensive behavioral data on workers. A data fact sheet on this solution is available on its site: https://www.forcepoint.com/product/fit.
- Risk Modeling/Insider Threat Detection, Veriato Cerebral (acquired by Awareness Technologies): Veriato Cerebral is a user behavior analytics and insider threat management solution powered by machine learning algorithms. It monitors employee chats, emails, web browsing, and file transfers and uses other data to develop a Risk Score profile for each worker that is updated daily. We are not sure what other data may be integrated into its Risk Profile of workers. More here on the proprietary Risk Profile: https://www.veriato.com/products/veriato-cerebral-insider-threat-detection-software.
- Fraud detection, Pondera Solutions: Acquired by Thomson Reuters in 2020, its product is now offered as the Thomson Reuters Fraud Detect solution and is used to detect unemployment fraud. More information is needed regarding its machine learning models and data collection.
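To make the opacity concern described above more concrete, below is a minimal, purely hypothetical sketch (in Python) of the kind of weighted scoring an employer-configurable “risk score” system could perform. The signal names, weights, and scoring logic are our own illustrative assumptions and do not describe any specific vendor’s product.

```python
# Purely illustrative sketch: not any vendor's actual method.
# Shows how an opaque, employer-configurable "risk score" could be
# assembled from weighted behavioral signals.

# Hypothetical employer-chosen "areas of concern" and their weights
BEHAVIOR_WEIGHTS = {
    "tardiness": 0.2,
    "low_productivity": 0.3,
    "after_hours_logins": 0.1,
    "organizing_keyword_mentions": 0.4,  # e.g., flagged chat/email terms
}

def risk_score(signals):
    """Collapse normalized (0-1) behavior signals into a single opaque score."""
    return sum(BEHAVIOR_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

# Example: a worker flagged mostly for organizing-related chat activity
worker_signals = {"tardiness": 0.1, "organizing_keyword_mentions": 0.9}
print(round(risk_score(worker_signals), 2))  # 0.38
```

Because the signals and weights are chosen by the vendor or employer and are rarely disclosed, workers have no way to know why their score changed or to contest it.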
With the increased focus on generative AI, and the ability of these tools to fragment workers’ tasks, track the time spent per task, and estimate each task’s economic value to employers, employers may begin disaggregating job tasks and assigning arbitrary cost values to particular job duties without workers’ knowledge or awareness, which could lead to wage instability. This is a trend we are watching closely; the simple arithmetic sketch below illustrates the concern.
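As a purely hypothetical arithmetic sketch of this concern, consider an hour of work that is split into employer-priced tasks; the task names, minutes, and dollar values below are invented for illustration only.

```python
# Hypothetical arithmetic only: task names, minutes, and dollar values are invented.
# Illustrates how disaggregating an hour of work into employer-priced tasks
# can pay less than a flat hourly wage for the same work.

flat_hourly_wage = 18.00  # assumed former flat hourly rate

# Employer-assigned per-task values, which workers may never see
tasks = [
    {"name": "pick items", "minutes": 25, "assigned_value": 4.50},
    {"name": "pack order", "minutes": 15, "assigned_value": 2.25},
    {"name": "deliver",    "minutes": 20, "assigned_value": 5.00},
]

total_minutes = sum(t["minutes"] for t in tasks)          # 60 minutes of work
task_based_pay = sum(t["assigned_value"] for t in tasks)  # $11.75

print(f"Flat pay for {total_minutes} minutes of work: ${flat_hourly_wage:.2f}")
print(f"Task-priced pay for the same work: ${task_based_pay:.2f}")
```

Because the per-task values are set unilaterally and can change at any time, the same hour of work can be paid very differently from week to week.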
Finally, we are seeing workforce management tools that train workers to work alongside AI systems in order to make them more productive and “effective” at their jobs. Three vendors we have been monitoring, used in call center work and home health care, are:
- Chorus: The company claims that, "backed by 14 technology patents that leverage proprietary machine-learning, Chorus is the fastest growing Conversation Intelligence product in existence”.
- Cogito: Claims to be "used by 5 of the Fortune 25 brands across diverse industries including healthcare payers; property, casualty, and life insurers; telecom and cable providers." It utilizes what it calls Emotion AI and Conversational AI to support call center workers.
- Biointellisense: This vendor is being used in the home health aide industry, as we have heard directly from a worker. Its tagline is "Patient-centered, data driven care built for scale." It is unclear how the company includes the voices and concerns of home health care providers in the development of its data-driven platform.
Now that we have outlined the marketplace of algorithmic management products, below we discuss key areas where OSTP’s leadership could provide much-needed protections and redress for workers.
- Engage vendors in order to increase transparency over data collection and processing: The marketplace includes established corporate actors such as Experian and Oracle, long-standing workplace productivity vendors such as ActivTrak and Teramind, and a variety of emerging AI startups. As a result, there is competition and gatekeeping between established and emerging vendors in this space, which contributes to the lack of transparency. OSTP is well-placed to convene this industry in order to better understand the functionality of these products and the potential risks and harms to workers. This effort should be done jointly with other key labor regulatory agencies.
- Support more research into how these tools impact federally protected groups, especially risk modeling/scoring and predictive analytics and the training of AI systems that are being integrated into workforce management tools. Previous research has documented how AI-powered hiring and recruitment products can lead to discrimination against protected classes, but less attention has been paid to how risk modeling tools can be used to target these workers in ways that lead to various forms of exploitation and intimidation. This information will be essential not only for increasing regulatory investigations into potential abuses of workers’ biometric data, but also for supporting FTC rulemaking in this area, the growing number of state-level complaints and class action suits (especially in Illinois, where the Biometric Information Privacy Act (BIPA) is the most comprehensive biometric legislation in the country), and new regulations and laws emerging in other states.
- OSTP should encourage the use of algorithmic impact assessments as an industry standard for these products. The field of algorithmic impact assessments is rapidly expanding and may provide vendors with practical tools for understanding the impact their products have on workers. Additional guidance is needed to help employers conduct better due diligence when purchasing and using these products, and OSTP can issue guidance to ensure that both vendors and employers are obligated to conduct third-party evaluations of their use of these technologies.
Coworker welcomes OSTP’s leadership in helping to better understand the rapidly evolving field of algorithmic management and surveillance tools.
Thank you for the opportunity to provide these comments.
Wilneida Negrón, PhD
Director of Research and Policy
Coworker.org
[1] Sam Adler-Bell and Michelle Miller, “The Datafication of Employment: How Surveillance and Capitalism Are Shaping Workers’ Futures Without Their Knowledge,” The Century Foundation, December 19, 2018.
[2] “Bossware and Employment Tech Database.”
[3] See Scorecard by Fight for the Future highlighting which top retailers are employing facial recognition technologies in the workplace: Ban Facial Recognition In Stores.