The Privacy Insider Podcast
Privacy, Power, and the Algorithmic Workplace with Matthew Scherer of the Center for Democracy & Technology
Concerns about AI, workplace surveillance, and digital privacy are redefining how companies manage people. As automation scales and monitoring expands, the question is no longer just what to measure but why. The urgency lies in the tradeoff between efficiency and human dignity, and now is the moment to set guardrails that protect workers and support responsible innovation.
Matt Scherer, Senior Policy Counsel for Workers' Rights and Technology of the Center for Democracy & Technology, explains how algorithmic management and data-driven decisions are changing hiring, evaluation, and termination. Matt connects legal reality to technical practice and shows where policy clarity matters most.
Episode Highlights:
- 00:00 Introduction.
- 02:55 A non-linear path can lead to meaningful work at the intersection of law and technology.
- 06:40 Technical fluency strengthens policy analysis and improves collaboration with practitioners.
- 09:11 Civil society organizations advance digital rights through research and advocacy.
- 13:09 The growth of flexible work arrangements affects access to worker protections.
- 29:05 Algorithmic management changes how organizations monitor and evaluate people.
- 34:17 Human judgment remains essential for fair and responsible employment decisions.
- 42:47 Attempts to quantify intrinsic qualities raise privacy and ethical concerns.
- 45:28 Policy and regulation lag behind rapid advances in workplace technology.
[00:00:00] Matthew: The more you are able to automate the monitoring of employees, the more you are able to automate the collection of information that allows you to track their performance, the fewer managers you need.
[00:00:12]
[00:00:35] Arlo: Hello and welcome to the Privacy Insider Podcast. My name is Arlo Gilbert. I'm the founder of Osano, a leading data privacy platform, and today I'm your host. Today we're joined by Matt Scherer, Senior Policy Counsel at the Center for Democracy and Technology, where he leads efforts to protect workers' rights in the age of AI.
[00:00:57] With a background in employment law and a passion for ethical technology, Matt focuses on how algorithms and data impact hiring, workplace surveillance, and labor equity. Matt is a former prosecutor, judicial clerk, and legal advisor to tech firms, and a published thought leader on AI regulation and discrimination law.
[00:01:18] Matt's work helps ensure that innovation respects human dignity. Matt, welcome to the show.
[00:01:25] Matthew: It's great to be here.
[00:01:27] Arlo: Fantastic. Well, we're so excited to have you here. You've got a really exciting background, especially given the latest news out of Washington. Lots of exciting things are happening around employee rights and employment, especially if you work for the federal government. But before we dive into all of the incredible work you're doing and talk a little bit about AI and the intersection of worker rights and technology, it would be amazing if you could share a little bit about yourself. What do you do at CDT, and tell us a little bit about how you got there.
[00:02:00] Matthew: Yeah, so in my job right now at CDT, I lead the Workers' Rights Project. My full job title is a mouthful: it's Senior Policy Counsel for Workers' Rights and Technology. It's both a research and an advocacy role. The three topics that are kind of within my ambit at CDT are, number one, the use of electronic monitoring and other surveillance and data collection techniques on workers in the workplace.
[00:02:29] Number two, the use of AI in employment decisions. And number three, trying to look for ways that workers can use technology and data to empower themselves. And just by virtue of being in those research and advocacy spaces over time, I've started to do some work on consumer privacy and automated decision making as well.
[00:02:55] And as far as how I got here, that is a long and winding road. It's interesting, looking back: I never really made long-term plans for my career. I feel like I almost stumbled into the role that I'm in now. The first five years of my legal career, I was in the public sector. I did three judicial clerkships and spent a couple of years as a prosecutor at the county level in Michigan.
[00:03:22] And after I finished my last judicial clerkship, I moved out to the West Coast. I'm based in Portland, Oregon, and I needed to decide what I wanted to do with the rest of my life. There weren't career clerkships available at the time, as much as I loved being a judicial clerk.
[00:03:39] And I decided on employment law as the thing that would pay the bills. Then, separately, I started to get into artificial intelligence. Initially it was the way that, honestly, a lot of people on the periphery of AI policy think about it. I wasn't so much worried about workers' rights and consumer rights.
[00:04:05] I was worried that AI would become self-aware and The Terminator or The Matrix would come true. And I wrote a paper on regulating artificial intelligence systems that turned out to be very timely. It got published right before AlphaGo, as the reinforcement learning revolution got underway.
[00:04:30] And so I ended up getting invited to a conference, and that got me invites to other conferences. I started having this hobby, almost, where I was writing and speaking on AI. But it really was a hobby; it was completely disconnected from my day job as an employment lawyer. Eventually, at one of those conferences, I ran into a partner (they're called shareholders) at Littler Mendelson, which is the largest management-side employment law firm in the world.
[00:05:02] And they ended up offering me a chance to do AI advice work and data analytics work at Littler. So I did that for a few years, but eventually, for reasons that I won't get into, I decided that I wanted to do something more socially impactful than advising clients who are mainly, you know, large and medium-sized corporations.
[00:05:32] And the opportunity for the role that I'm in now at CDT happened to come up right as I was starting to have those thoughts, and it seemed like a perfect fit. That was about five years ago, so here I am. It's interesting; like I said, I didn't plan it that way. I'm a musical theater nerd, and there's this song called "The Saga of Jenny" from a musical that nobody's ever heard of called Lady in the Dark.
[00:05:58] And basically the theme of the song is that if you do make plans for your life and career, you'll always end up getting disappointed. Maybe I just subconsciously took that too literally, and I ended up taking things as they came. That's how I ended up where I am now. But it's worked out all right.
[00:06:15] Arlo: It certainly has. So tell me: as a lawyer and a self-professed theater nerd, how does one cross the chasm from legal scholarship into having a deep enough understanding of the technology to be able to inform policy advice and workers' rights? I mean, that's a pretty big bridge.
[00:06:38] And I'm curious, how did you get there?
[00:06:40] Matthew: I taught myself a lot about the technical side of AI. That happened because, as I was going to those conferences, I was talking to people who did understand the technology, and I felt like a fool showing up, like you just said, opining on a technology that I only had a vague understanding of.
[00:07:01] So I actually went back to basics. I retaught myself calculus and statistics. I taught myself the basics of computer science and programming, and graduated to some of the essential mathematics and theory behind AI and computer science. I actually taught myself multivariable calculus and linear algebra so that I could understand what gradient descent algorithms are doing mathematically, and what different types of AI architectures are and what they do.
[00:07:39] And ultimately I got to the point where, while I couldn't program an AI system (the most complicated thing I ever programmed in my life was a version of hangman in Python), I at least understood what was going on well enough that I could show up to those conferences.
[00:08:01] And eventually, when I was at Littler, I advised clients with confidence that I knew enough about the technology to engage with the engineers and the people who did have a formal technical background on this stuff. But the other interesting...
[00:08:16] Arlo: That buys you a lot of credibility in those conversations, I imagine.
[00:08:20] Matthew: Yes and no, because eventually, if you tell the client things that they don't want to hear, whatever respect and credibility you might have had at the beginning seems to wane sometimes. But honestly, the biggest thing that I got out of that is that the more I learned about the technology, the more it was demystified.
[00:08:42] The less concerned I became about those Terminator and Matrix scenarios, and the more concerned I became about the privacy, social, and broader legal and ethical implications of AI.
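For readers curious about the math Matthew mentions learning: gradient descent just nudges parameters downhill along the gradient of a loss function. A minimal illustrative sketch in Python — the quadratic loss and learning rate here are invented for illustration, not anything from the episode:

```python
# Minimal gradient descent on f(x, y) = (x - 3)^2 + (y + 1)^2,
# whose minimum sits at (3, -1). Loss and step size are illustrative.

def grad(x, y):
    # Analytic gradient of f: (2(x - 3), 2(y + 1))
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0   # arbitrary starting point
lr = 0.1          # learning rate (step size)
for _ in range(100):
    gx, gy = grad(x, y)
    x -= lr * gx  # step opposite the gradient
    y -= lr * gy

print(round(x, 3), round(y, 3))  # converges toward (3, -1)
```

The same loop, with the analytic gradient replaced by automatic differentiation over millions of parameters, is what trains neural networks.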
[00:09:01] Arlo: Wow, that's a really great origin story. You know, self-taught theater nerd learns calculus, goes to change the world. So tell us a little bit about the Center for Democracy and Technology, just so we all understand what the charter there is and what the group does as a whole.
[00:09:21] Matthew: So, CDT has been around in some form for more than three decades now. It's one of the oldest tech policy nonprofits out there. It started as a splinter group from the Electronic Frontier Foundation, which is probably the oldest tech policy nonprofit. CDT's tagline is that we fight to advance civil rights and civil liberties in the digital age.
[00:09:49] And I think that's a pretty accurate description of what we do. We have teams that do research and teams that do policy advocacy. We don't do much, if any, lobbying, because we're a 501(c)(3), which limits our ability to make lobbying a key component of what we do. So instead, most of our advocacy is in the form of reports, recommendations, and hosting webinars and seminars so that people in the policy community understand the civil rights and civil liberties implications of the technologies.
[00:10:30] Arlo: Interesting. And this sounds like a very noble cause. I'm a big fan of the EFF. I think we had Cindy on here a couple episodes ago, and I think she just announced that she's retiring from the EFF. But they always had a lot more of a focus, it felt like, on a lot of the cryptography-type issues.
[00:10:51] And I know that's where they originated, whereas it sounds like you're more broadly focused on civil liberties and technology at CDT. Is that a pretty fair contrast?
[00:11:00] Matthew: Yeah, I think that's right. And you know, I
love EFF as an organization too, and I think it's one of those situations where whoever decided that the divorce that led to CDT and EFF needed to happen was probably right, because they each fill an important niche in the policy space.
[00:11:21] And one thing that I've always been really impressed with about EFF is that they make very cool tools that are available to the general public. I'm going to do a plug for the Privacy Badger extension on Firefox and Chrome that EFF makes.
Arlo: I'm a fan. I'm a user.
[00:11:40] Matthew: Yes. So, you know, I think it's a very interesting space in general, the tech policy space, because for a long time, most tech policy nonprofits were completely dependent on the tech industry's donations to drive them forward.
[00:11:56] But over time, civil society and the tech industry have started to experience, shall we say, some tension with each other, particularly as the tech industry has taken on a much more central role in our economy and, in a lot of ways, in our daily lives. Which is, of course, why there is enough interest in the space for there to be a Privacy Insider podcast.
[00:12:20] So,
[00:12:22] Arlo: That's very diplomatic, the way that you described that, and probably the perfect segue into talking about how tech is impacting worker rights. Because worker rights is a pretty broad concept, right? I've got rights to take a break every couple hours if I'm working a shift as a waiter. I've got rights that are completely unrelated to physical work. But your focus is really on that intersection of AI and technology. What is it right now that workers need protection from, and why do they need the services of the CDT? What's happening in the world, or I should say, in America, for workers that you're particularly focused on?
[00:13:07] Matthew: I mean, it's hard to know where to begin. I'll actually start with what you just said about workers' rights, like the right to take a break. One of the biggest things that has been an outflow of the tech industry and advancements in automation with respect to workers has been the rise of the platform economy, or the gig economy, whatever you want to call it.
[00:13:31] And in that space, pretty much all of the workers are, at least legally, treated as independent contractors, which means that they don't have most of the rights that employees at a company have: the right to take breaks, the right to be paid a minimum wage, the right to leave, and, perhaps most importantly, the right to organize unions.
[00:13:55] And I think that is, in and of itself, one of the major issues that we are confronting in terms of how technology, the labor market, and workers' rights are intersecting. Beyond that, I think that the things that have enabled the rise of the platform economy are increasingly being seen in traditional employment relationships as well.
[00:14:25] It's much easier to manage workers through the medium of a machine than through the traditional managerial-employee relationship. You can think about it this way: when you introduce automation at an individual contributor level, to use the terminology of human resources,
[00:14:50] the idea is that it increases the worker's productivity. That worker will be able to accomplish more tasks in less time with the assistance of automation. The same idea applies to managing workers. The more you are able to automate the monitoring of employees, the more you are able to automate the collection of information that allows you to track their performance, the fewer managers you need.
[00:15:16] And I always say, I don't mean to idealize the relationship between human managers and employees. Obviously, there wouldn't be a need for labor unions anywhere (and I think there is a need) if manager-employee relationships were always hunky-dory. But at least when you have that human connection... you can think about it in the same way as when call centers started to get automated.
[00:15:42] If you have a problem and it's hard to get through to a human, or so much of the process has been automated that employees are left hanging for guidance and assistance, or, when they have a problem with a performance evaluation, they are one of 50 people the manager is dealing with instead of one of five,
[00:16:02] that's a major problem. It hurts the employee's productivity in the long run, and it hurts the company. And the same causes lead to a lot of privacy concerns, because in order to monitor workers in that way, you need to monitor them continuously, or as close to continuously as you can, and you need to collect a lot of information about what they're doing all the time. That ends up creating, as you can imagine, a lot of privacy concerns for workers.
[00:16:36] Arlo: Yeah, that's pretty terrifying. And, you know, five years ago I think we'd be having a pretty different discussion about monitoring employees. It's always been one of the topics in the world of data privacy that people have been concerned about, whether it's keystroke monitoring or location monitoring or 1099 rights.
You know, that data was being collected for a long time about people and how they worked. But like you said, the manager with 50 direct reports does not have the time to go through all of the location history for every one of their employees.
But as the tools for analyzing data and discovering insights have improved, that collection becomes much more dangerous. So maybe it would be interesting: you talk a lot about the intersection of AI and workers' rights. When you think about these big treasure troves of data about your behavior, how you work, and your activities, and then you layer in artificial intelligence, which can consume vast amounts of data and quickly discover insights and make recommendations, that really changes the calculus.
The collection was always questionable to start with, but now one person with 50 employees really can analyze it all. So what's happening in the world of AI and technology as it relates to workers' rights? I'm really curious how that's changed.
[00:18:12] Matthew: I mean, man, there's so much to unpack there. A lot of the time, back in the late 2010s, employers were collecting a lot of information on their workers just because they could, not so much because they had a good plan for what to do with it or how to generate value from it.
This wasn't too long ago in the grand scheme of things, but it seems like it. Back then, the buzzwords were not artificial intelligence; they were big data. And so the idea started to really take hold in a lot of companies that we need to collect data on our employees, because that will allow us to analyze them better and gain insights that will help improve productivity, efficiency, et cetera.
But as I frequently discovered: one of my roles, I mentioned, was doing legal data analytics work, and one of the more common ways I would get looped in is that litigation would arise. The company would get sued for, let's say, misclassifying employees as independent contractors when they were actually employees.
[00:19:23] And because they'd been classified as independent contractors, there wasn't any clocking-in and clocking-out information. There wasn't any sort of hard, objective record of when the employee started their shift and ended their shift, because that's not the way that independent contractors at a lot of companies are compensated.
So what is it that you have to do in order to figure out what hours those employees, or independent contractors, worked, in order to calculate what wages they should have been paid? Well, the answer was: you look at all this other information that the employer collects on the worker.
But what was interesting to me is that I knew that, but frequently I would go to a client at the beginning of the case and ask: do you have information on badge swipes for people coming in and out of the building? Do you have information on their web browsing activity? Have you monitored when they were sending their emails?
Have you collected this information? And a lot of the time, I would ask my first point of contact at the employer these questions, and they would say, "I don't know." Then they would go back to their IT people, and the answer was, "Why, yes, we do collect all that information, but nobody's ever asked for it before."
So to me, it was just like...
[00:20:48] Arlo: Just archiving. I mean, basically, we're saving this for some...
[00:20:52] Matthew: Yeah. Well, it was data harvesting, essentially, but it was data harvesting without a function or predefined purpose. And sometimes, like in those cases, the employer probably wishes that they hadn't done it, because I would come back to them and say: oh, by the way, based on all of the work activity that we're seeing from the data you collected, these workers are owed a lot of money if they're misclassified as independent contractors.
You should have been treating them as employees, because this information shows a lot of them were working more than 40 hours per week, so you need to pay them time and a half for the time that they worked over that.
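The back-pay arithmetic Matthew describes is simple once weekly hours have been reconstructed from the employer's own activity data. A sketch with invented numbers (the threshold and multiplier reflect the standard U.S. overtime rule he references):

```python
# Sketch of the overtime exposure Matthew describes: once activity
# data (badge swipes, email timestamps) reconstructs weekly hours,
# anything over 40 is owed at time and a half. Numbers are invented.

OVERTIME_THRESHOLD = 40.0
OVERTIME_MULTIPLIER = 1.5

def weekly_pay_owed(hours_worked, hourly_rate):
    """Regular pay plus time-and-a-half for hours over the threshold."""
    regular = min(hours_worked, OVERTIME_THRESHOLD) * hourly_rate
    extra_hours = max(hours_worked - OVERTIME_THRESHOLD, 0)
    overtime = extra_hours * hourly_rate * OVERTIME_MULTIPLIER
    return regular + overtime

# A misclassified "contractor" who actually worked 50 hours at $20/hr:
print(weekly_pay_owed(50, 20))  # 40*20 + 10*20*1.5 = 1100.0
```

Multiply that gap across a workforce and several years of retained data, and the litigation exposure he describes becomes clear.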
[00:21:32] Arlo: All that collection bit them in the ass, huh?
[00:21:34] Matthew: That's exactly right. And again, that kind of haphazard collection of data, in the sense of doing it without really thinking through what value you are unlocking by doing so,
[00:21:52] was a disturbing trend in corporate operations that troubled me more and more as time went on, as did the rise of automated decision making. And the last thing that I'll say, though, is, you kind of alluded to this: eventually, when AI came along, employers started to figure out what they could do with all that extra information.
[00:22:24] But one of my great theories of life, kind of idiosyncratic, that I've come around to the more I've been around this stuff, is that there are diminishing returns on that, as on everything. Yes, you can collect lots of information about workers, but usually only up to the real basics: monitoring what an employee is doing, what the output of their work product is, and what its quality is.
And usually, in a lot of jobs, those are not things that you can actually get at in a real way through data collection and harvesting, because so much of what makes someone a good employee is not measurable; it's subjective, or at least at this point it's subjective and can't be measured. And you get diminishing returns,
[00:23:10] therefore, on the more data you collect on them. You're not actually getting more information that you can really use to gain insights into how good a worker somebody is. You end up, therefore, in this situation where the more information you collect, the less valuable each additional piece of information is, but the greater the possibility that you'll end up collecting information that makes the worker feel intruded upon, or that their private life is being threatened by the fact that it's being collected.
[00:23:41] Arlo: Yeah, you know, in analytics it's almost always true: if you give somebody a spreadsheet with a thousand columns, what you're gonna find is that there are probably three or four of those columns that are actually useful predictors for the outcomes you're trying to predict. So it sounds like we have employers collecting massive quantities of data, getting very little value from the majority of it, while creating risk and cultural issues for their company.
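Arlo's point, that only a handful of collected columns actually predict anything, can be demonstrated with a quick correlation screen. Everything below is synthetic; the column names are made up and stand in for whatever an employer might log:

```python
# Toy illustration: among many collected columns, only a few
# correlate with the outcome of interest. Data is synthetic.
import random

random.seed(0)
n = 200
outcome = [random.gauss(0, 1) for _ in range(n)]

columns = {}
columns["signal_a"] = [y + random.gauss(0, 0.3) for y in outcome]      # informative
columns["signal_b"] = [2 * y + random.gauss(0, 0.5) for y in outcome]  # informative
for i in range(8):  # the rest is pure noise
    columns[f"noise_{i}"] = [random.gauss(0, 1) for _ in range(n)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

ranked = sorted(columns, key=lambda c: abs(pearson(columns[c], outcome)),
                reverse=True)
print(ranked[:2])  # the two informative columns float to the top
```

The eight noise columns carry real privacy cost for the people being measured while adding essentially nothing to the prediction, which is the diminishing-returns point Matthew makes above.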
You know, one thing you mentioned earlier was 1099 workers, and I'm curious to drill down on two of these areas. The first one: I'd love to understand, when you talk about automated decision making, what does that mean in practice? Is this just resume filtering, or does it go much deeper, into performance management and making hiring, and even firing, decisions?
And then the second piece of that question, simply because you mentioned 1099s: are there laws right now that govern this, and is there any kind of difference between the rights of 1099 workers and those who are employed full time?
[00:24:51] Matthew: Let me take the second question first, actually, on the 1099 workers. I mean, almost by definition, 1099 workers, or independent contractors, or whatever terminology you use, have fewer rights than traditional employees, at least in the workplace.
I mentioned examples of that: they don't have the right to leave, they don't have the right to organize unions; minimum wage, overtime, all of that doesn't apply to independent contractors. Although there are some exceptions now arising in the policy space, where some states, like California, are saying you have to let independent contractors, at least in the ride-hailing industry, have some right to organize.
But by and large, there are a lot fewer rights. One of the things that's interesting, though, is that there are virtually no privacy rights for employees at the outset. And there's an interesting historical reason for that, namely that the default rules, if you want to call them that, the rules that govern the employee-employer relationship in the absence of a legislature passing a law saying employees have these rights, actually come from the ancient English law of the relationship between masters and servants.
[00:26:20] And so you can actually think of the default rule for an employee in the workplace as being that they have the exact same right to privacy as a servant living under the stairs, in a cupboard, in an ancient English manor house. Which is to say that they have no right to privacy at all.
[00:26:37] The interesting thing, though, is that we now have in this country a bunch of states that have passed consumer privacy laws. And for purposes of the relationship between a platform and the platform worker, that relationship is treated as a consumer relationship.
And so, therefore, kind of ironically, there is this one space where, arguably, at least in a couple of states like Maryland and California, which have halfway decent privacy laws, the independent contractor has a greater right to privacy in some ways than the traditional employee.
[00:27:21] Arlo: Well, that is fascinating. And I gotta tell you, I'm in the privacy world and I didn't know that.
[00:27:27] Matthew: Yeah, well, it's not something that we talk about or think about. But if you are ever enough of a geek about the history of this stuff, like me, there's a book called Belated Feudalism that is about how the default rules of workers' rights, or the lack thereof, developed in the United States.
And it was almost a conscious choice: courts decided that the best analogy for how to treat employees' rights with respect to employers, and vice versa, was this master-servant relationship. So until legislatures start passing laws on employee privacy changing that, we're stuck with this common law whose roots literally lie in the very exploitative relationship that used to exist between masters and servants.
[00:28:22] Arlo: Wow. Well, so tell us about automated decision making, because that's a pretty broad term, right? As managers, as employers, we make decisions all the time, and often with automated information. Did my sales team hit their quota? I'm looking at an automated report that tells me the results of the sales efforts for the quarter. But it sounds like it probably means a lot more than that when you're talking about legislation and privacy rights.
[00:28:54] How do you encompass that concept of automated decision making?
[00:28:59] Matthew: I think there are a couple of layers to that. I use the term algorithmic management when I talk about how current employees, active employees, are managed or monitored by automated systems in the workplace. And as that description implies, there are two steps to that.
There's the monitoring and collection of data on workers, which can be done automatically. But then there's also the question: do you not just automatically collect it, but use an automated system to make some sort of decision based on that information, or to make some sort of recommendation that influences a human's decision based on that information?
And that, I think, is the key step change that the rise of AI has made in terms of the active employment relationship. Because you mentioned, is it just resume screeners? At the beginning, the first big use case for AI and automation in employment decisions was hiring decisions.
You get an AI system that will help you sift through resumes, or that will automate the interview step of the hiring process. That was the first use case. But now you are seeing an increasing number of companies using automated systems to make decisions with respect to active employees, as opposed to just prospective employees.
[00:30:38] And you see it in everything from automated draft performance reviews up to termination. Now, with the rise of generative AI, there are companies out there saying, hey, we will take information that our system collects as inputs and automatically create a first draft of a performance review for this employee.
[00:31:01] And one of the first companies that used automated systems in this complete employee-lifecycle sort of way was Amazon, where it came to light during litigation that they were automatically monitoring, using these handheld scanners in warehouses, how quickly workers were working and how much downtime they had.
[00:31:24] And at a certain point, if they moved too slowly, and that happened often enough, they would be automatically fired by an algorithm. So it really has become a complete lifecycle thing. And I definitely think there's a feedback loop that gets created, where you feel the desire to automate away as much HR and middle management as you can.
If you think about what happens when a private equity firm or hedge fund purchases a company, what are the first things they do? They cut costs, and usually the first targets are administrative costs and layers of reporting, which in the workplace means HR and middle managers. Those are the first jobs they look to get rid of in order to create supposed efficiencies. But to do that successfully, you need to be able to accomplish the same roles using fewer people. And that creates this: well, we need more automated management in order to downsize these companies and create efficiencies. But in order to automate it, we need to collect as much information about our workers' activities, productivity, et cetera, as we can.
[00:32:43] Because without that information, you can't automate those tasks away. So there's this feedback loop where the more you rely on AI, the more you need to harvest employees' data. And once you've harvested that data, you can't help but feel the urge to collect even more, hopefully to increase the accuracy of the automated decisions that you're making and increasingly relying on.
[00:33:06] Arlo: So let me ask you, and I'm looking for your personal opinion here. It feels like what you've just described really runs the gamut, right? We talk about resume filtering, and that's a pretty legitimate use case. We've had job openings at our business where we open the job and two days later we have 2,000 applicants.
[00:33:27] And we either have to hire somebody full-time to go through all of those, or we can run it through some keyword filters. You know, just auto-reject anybody who's not from the United States, because we can't hire outside the United States. Right? So, fairly simple uses that seem
[00:33:45] pretty chill and laid back. It's effectively like a better search filter. But then you're talking about this Amazon example, and that feels pretty terrifying, right? Every moment, every movement is tracked, and there's this instant automated termination. Those feel like two big ends of the spectrum.
[00:34:03] Where is the balance? I'm just curious for your personal opinion here.
[00:34:08] Matthew: Yeah, I mean, I would argue that there are definitely concerns even with automated resume screening. Most notably, we have not yet reached a point where people's resumes are an accurate enough reflection of their actual relevant experience and capabilities that you can rely on the resume alone to tell you whether they have the requisite experience.
[00:34:33] What I've actually always recommended, and this seems like such a 1990s or 2000s level of technology that nobody seems to want, because nobody thinks of it as the future: what I think would be a great use for resume deciphering and scanning would be, instead of generating some sort of score based on an algorithm's assessment of a resume, to have the algorithm pull information from the resume and fill out what's essentially a draft of a drop-down menu form for the applicant, so that they can verify, like, okay:
[00:35:10] Is this how many relevant years of experience you have in this field? Is that an accurate statement?
[00:35:16] Arlo: Oh, that's not a bad idea, right? Take this disorganized, unstructured document, have the applicant participate in confirming and validating the information in a structured manner, so that the employer can then reliably filter on things like years of experience.
[00:35:33] Matthew: That's exactly right. And then maybe at that point the algorithm can at least generate a score based on whatever you don't put into the drop-down menu. But there's no reason to, in my view; the number of actual hard requirements for most jobs is not that many.
[00:35:53] It's usually: you need to have this many years of experience, and/or this minimum level of education, and, for most jobs, do you live within a close enough distance from the office or work site to make it there reliably in person, whatever the minimum number of days a week is, et cetera.
[00:36:17] And it's much better to just have the applicant tell you that information directly in some way than to have an algorithm effectively guess whether or not it's true based on their resume. So I just wanted to briefly say that even with resume screening, maybe we'll get to the point where AI doesn't make mistakes and doesn't have any issue parsing resumes,
[00:36:40] and where, when it does make mistakes, it's not structurally biased in favor of whatever demographic group dominates the training data. But we're not there yet. So I think you need to be thoughtful, regardless of what decision is being made, about whether you are actually creating a greater risk of misfires than you would if you were only to automate some smaller subset of the things you think you're capable of automating.
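The verify-then-filter workflow Matt describes can be made concrete. Below is a minimal, hypothetical Python sketch, not any vendor's actual product: every name in it (`StructuredProfile`, `extract_draft`, `applicant_confirms`, `meets_hard_requirements`) and every threshold is invented for illustration. The parser only drafts structured fields, the applicant confirms or corrects them, and screening then filters on the few confirmed hard requirements rather than scoring the whole resume.

```python
from dataclasses import dataclass

@dataclass
class StructuredProfile:
    """Fields pulled from a resume, pending applicant confirmation."""
    years_experience: int
    education_level: str   # e.g. "high_school", "bachelors", "masters"
    distance_miles: float  # distance from the work site

def extract_draft(resume_text: str) -> StructuredProfile:
    """Stand-in for the parsing step. In a real system an ML model or
    rule-based extractor would populate these fields; here a crude
    heuristic counts resume lines containing "20" (dated entries)
    just to produce a draft for the applicant to review."""
    guessed_years = sum(1 for line in resume_text.splitlines() if "20" in line)
    return StructuredProfile(guessed_years, "bachelors", 10.0)

def applicant_confirms(draft: StructuredProfile, corrections: dict) -> StructuredProfile:
    """The applicant reviews the draft and corrects any field the parser
    got wrong -- the step that removes the algorithm's guesswork."""
    return StructuredProfile(
        corrections.get("years_experience", draft.years_experience),
        corrections.get("education_level", draft.education_level),
        corrections.get("distance_miles", draft.distance_miles),
    )

def meets_hard_requirements(p: StructuredProfile) -> bool:
    """Filter only on the few verifiable hard requirements,
    rather than an opaque whole-resume score."""
    return (p.years_experience >= 3
            and p.education_level in {"bachelors", "masters"}
            and p.distance_miles <= 50.0)

draft = extract_draft("2019-2024: Software engineer\n2016-2019: Analyst")
confirmed = applicant_confirms(draft, {"years_experience": 8})
print(meets_hard_requirements(confirmed))  # prints True
```

The design choice mirrors Matt's point: the algorithm never guesses its way to a decision, it only proposes values for a human to verify, and the final filter is limited to requirements the applicant has confirmed.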
[00:37:16] Because again, there's a real kind of zeitgeist in the employment world right now, as there is throughout corporate America and society as a whole, around AI, and the impression is that AI is capable of doing so much of what humans can do. Quite frankly, and I'm biased because I'm the son of a career HR person, I think we don't give nearly enough credit to what a tough job it is for human HR managers to figure out
[00:37:54] which workers are the best candidates for current jobs, and even more so, once they're on the job, to figure out who is actually pulling their weight and who isn't. The reality is, and this is something a lot of people don't properly reckon with, we don't have enough information about what matters in assessing most employees, certainly their capabilities, but even their actual on-the-job performance, to automate that task away and get rid of that aspect of human judgment, which I think is still an essential component of employee management.
[00:38:39] And if you'll let me, I'll give one metaphor that seems to strike home when I share it. I'm a big basketball fan, and unfortunately my favorite team is the Atlanta Hawks, which has the longest championship drought in the NBA.
[00:38:55] Arlo: That's like being a Cubs fan for a long time.
[00:38:58] Matthew: It is. The last championship the Hawks won was when my mother was two years old. But my point is actually about the NBA draft. Think about how much of an absolute dream it would be if recruiters at the average company had as much information about their prospective hires as NBA general managers have about the players in a draft. You have so many statistics from highly relevant, directly relevant past work experience, namely being a college, or occasionally high school or international, basketball player.
[00:39:39] You have all of these statistics, you have tape of them actually performing the job, you have information about their relevant physical characteristics, like their speed and their agility, all of this stuff. But despite having all of that information, if you do a ten-year retrospective on any NBA draft, you can almost always say that most of the draft picks were wrong, in the sense that the GM would've been better off picking somebody else.
[00:40:09] It's not because the GMs suck at their jobs. In fact, I actually think it's a miracle that they get the top picks right, or close to right, as often as they do, given how much of what actually matters in basketball can't be captured in any statistic or measurement. You can't measure somebody's effort, or their ability to grow, their potential, their hustle, their ability to translate their skills as a defender once they have to defend much better players,
[00:40:46] all of that stuff. Those are things you can't accurately measure, and it's really hysterical if you look at, say, John Hollinger's predictions on who the best draft pick will be in each draft, based purely on statistics. They're usually way off, and the GMs clearly improve on the bare automated decision making.
[00:41:10] But my point is: if using statistics and seemingly objective data to accurately predict who will be a good worker is that hard in a sphere where you have more information than a typical corporate recruiter or manager could ever dream of having on an employee, then how much harder is it to measure how good somebody is as a consultant, or an accountant, or even a factory worker,
[00:41:40] where the vast majority of what actually matters in most jobs cannot be measured? And if it can't be measured, an AI system cannot make use of it.
[00:41:49] Arlo: There is a certain amount of randomness. And, continuing the sports analogies, we can look at Heisman winners, right? How many of those Heisman winners actually have really successful NFL careers? And the answer is: not as many as you would think.
[00:42:08] I mean, it's sub-50% that go on to have really impressive professional careers. A couple of things I'm taking away from this, one of which is that corporate America gets a hammer and everything becomes a nail, right? I can collect data, I can use AI, I should definitely do it at every possible opportunity. And I do think there's a lot to what you're saying about things like: you can't measure heart, you can't measure passion, you can't measure somebody's desire to learn and improve. Those are definitely very soft skills that AI, at least for now, seems unable to handle.
[00:42:45] Matthew: And real quick on that: do we even want them to be able to measure those things? How creepy would the level of data collection about somebody need to be in order to measure those intrinsic characteristics, like effort and heart? You would probably need a level of monitoring and information harvesting that people would not be comfortable with, if you were to even attempt to measure those things.
[00:43:11] So even if it were theoretically possible, I think that's the other thing: it might be a situation where, in order to get the level of information you'd need about the things an employer currently can't quantify, you would need to do things that would have employees quitting en masse.
[00:43:34] Arlo: We'd like blood tests and DNA samples, and we're gonna do hormone levels throughout the day to figure out when your optimum working time is.
[00:43:44] Matthew: Exactly. And the funny thing is, there are actually schools of thought out there, thankfully mainly just academic right now, where people are saying that that's kind of where we're going, and that it's the inevitable end point: you just lose your privacy, and we measure everything so that we can optimize our economy and labor market to the greatest degree possible.
[00:44:13] But do we really wanna live in that world? That's the question that you can't answer with that.
[00:44:18] Arlo: That says something: you've got two people here who are definitely not Luddites who think that's creepy. And I do wonder whether, in the future, we find that people get called Luddites because they don't want every moment of their day and every heartbeat measured for their employer's consumption.
[00:44:39] Well, let me turn the questions around. We've talked a lot about ethics and about practices. We've talked a little bit about 1099 versus W-2 workers. Tell us a little bit about what's happening in Washington, and perhaps even at the state level. We have seen some state laws; you mentioned that Maryland and California have good privacy regulations that do help provide some governance around some of these things. Will there be federal regulation? I know there was AI regulation that was sort of, kind of passed, and it feels like it was a little bit toothless. Tell us about the current state of regulation around workers' rights, AI, and data collection.
[00:45:27] Matthew: Well, I think under the Biden administration there seemed to be a little bit of movement towards, and recognition of, the problem of employee privacy and the concerns around AI making employment decisions. The Trump administration has, shall we say, not shown the same concern. And I think it's very difficult for me to believe that there's going to be significant legislation in the coming months and years, at the federal level, on automated decision making in the workplace or on data collection for workers.
[00:46:08] It would take, I think, a pretty radical change in the political climate for that to happen. Because, to be maximally cynical, right now frankly both parties are so hyped up on the promise and potential of AI and automation that there just does not seem to be a
[00:46:34] huge appetite among the bulk of policymakers in either party to really make a push on this. And certainly, I don't think there's any chance of it happening under the Trump administration. But I'll be honest, I don't think the substantive laws in place right now would've been that different
[00:46:55] if there had been a Kamala Harris administration. I think that will hold until, as I would put it, the fever breaks around AI. You have people who believe that we are literally on the cusp of artificial general intelligence or artificial superintelligence, and it's beyond the scope of this podcast for me to explain why I think that's insane.
[00:47:17] But as long as a significant chunk of Washington, DC believes that, they are not going to be as aggressive as I think they should be in reining in the increasing use of automated systems to manage workers. There is a widespread belief in Washington, DC right now, it seems, that the more important thing is to not slow down AI innovation,
[00:47:46] and unfortunately, I think it's the workers who are catching strays a lot of the time as a result.
[00:47:51] Arlo: Do you remember the VPPA, the video privacy regulation that came out a long time ago? It's kind of a tangential law that most people don't pay attention to, but it effectively says that you can't share somebody's video viewing history with somebody else.
[00:48:21] And my recollection is that this was driven because of Blockbuster Video: it turned out they were selling your viewing history to direct marketers. What I thought was fascinating about this one is that the final straw that really got the federal government to focus on it was that all of Congress, and judges, and even POTUS, were renting movies from Blockbuster, and suddenly they were like, wait a second, that could impact me.
[00:48:55] And I really don't want people to know what I've been renting at the video store. Now, we can assume it may have just meant that they were watching, you know, bad foreign films with subtitles and didn't want the misrepresentation. But I think we all know that's not what they were actually worried about.
[00:49:12] And I've similarly wondered, with things like automated decision making and automated monitoring, whether it may not become a top priority for legislatures until legislators are impacted and they're the ones being measured, right? That might be the thing that has to happen.
[00:49:29] Matthew: I think that's a great point. I actually was not aware of that law, I'm embarrassed to say, but I think that's a great insight, and exactly right. The reality is that if you look at the relative spending of lobbyists across our economy, it's just a fact that policymakers hear way more from corporate lobbyists, from industry groups, and from industry-aligned think tanks than they do from unions or workers' rights organizations.
[00:50:05] And on top of that, members of Congress are managers; they are not managed. They are bosses, and they don't personally experience what an ordinary worker experiences in the workplace. It's not something they viscerally experience, so they don't feel as much urgency about it.
[00:50:35] And kudos to the handful of policymakers in both parties for whom that is not true. We are, believe it or not, starting to see an increasing number of people, interestingly on what you would almost describe as both extremes of the political spectrum, who are more concerned about the tech industry, about privacy issues, and about workers' rights.
[00:51:01] You've got people on what are loosely called the populist left and the populist right who are both more concerned about that. And I think that comes from the fact that those are the people in Congress who are least likely to have come from a business background, or a background where they were a boss rather than a worker.
[00:51:25] So I do think that's a great point, and that a lot of it comes down to the fact that members of Congress, and legislators in most states, fundamentally have a much easier time thinking about things from the perspective of somebody in the C-suite than from that of somebody on a factory or warehouse floor.
[00:51:48] Arlo: That was fascinating. Thank you, Matt. As we wind down the show, I'd love to understand something. A lot of our listeners are involved in regulatory work, and many of them are on the corporate side, trying to implement and comply with these regulations.
[00:52:04] But we're all human, and as much as we like to pretend that we have all the right answers, we still sometimes give our Social Security numbers away to watch a cat video. I'm curious, from your point of view: is there any best practice that you recommend to others where maybe you're not following that best practice yourself?
[00:52:23] Matthew: I'm definitely better with my privacy practices on my laptop than I am on my phone, I'll say that. I think much more carefully when I'm using my laptop: okay, make sure I'm connected to a VPN to prevent my information from being so easily tracked, use privacy-friendly browsers, limit the scripts that are running, all of those things.
[00:52:47] I've got a great setup on my laptop for all of that, but I just haven't been as diligent about making sure my privacy hygiene is as good on my phone.
[00:52:59] Arlo: All right, so better phone hygiene is the takeaway. I think we're all guilty of that; everything is set up to be so easy to say yes to on the phone, and it's definitely harder there. Well, Matt, this has been amazing having you here. I've learned a lot, and it certainly has gotten me thinking about some things I hadn't thought much about before. Before we go, is there anything you'd like the audience to go take a look at? I know you've done some really interesting work around, you know, bosses, and some papers you've written. Maybe tell us where we can find those.
[00:53:33] Matthew: For sure. There are two things I'll mention. One is the first report I wrote during my time at CDT. It's called Warning: Bossware May Be Hazardous to Your Health, and as the title implies, it's about the health and safety risks associated with employers' increasing use of electronic surveillance and algorithmic management.
[00:53:55] Just Google that; there are not multiple reports out there called Warning: Bossware May Be Hazardous to Your Health. And then I'll just plug my bio page on the CDT website, which has my most recent report as well as the others. The most recent one was about bringing transparency and accountability to algorithmic decision systems.
[00:54:14] The URL to go look at that would be cdt.org/staff/matt-scherer, S-C-H-E-R-E-R. People always forget the C.
[00:54:27] Arlo: I made the same mistake when I first started searching your name. There are a lot of people with that name without the C.
[00:54:36] Matthew: Well, interestingly, I did some genealogical research, and my own ancestors struggled with the C,
[00:54:42] as it turns out, when they first came over to the US. So people shouldn't feel too bad about it. But to avoid a 404 error when you try to find my profile, make sure you put the C in.
[00:54:54] Arlo: All right, folks: Warning: Bossware May Be Hazardous to Your Health. And then head over to the CDT site and search for Matt Scherer if you'd like to learn more. Matt, thank you for joining us today. It's been a real pleasure.
[00:55:06] Matthew: Thank you Arlo. This was great.
Meet the host
Arlo Gilbert is the host of The Privacy Insider Podcast, CIO and cofounder of Osano, and author of The Privacy Insider Book. A native of Austin, Texas, he has been building software companies for more than twenty-five years in categories including telecom, payments, procurement, and compliance.