S4 E11: AI & the Law


Podcast December 13, 2023

This week, we aim to explore the legal implications of AI in human resources. Our guest is employment lawyer Bryn Goodman, partner at Fox Rothschild, and she brings our focus to recent regulations and what HR teams need to know as our industries enter the AI era.

Pete and Bryn dive into the key laws and guidance around AI that HR teams need to follow. The EEOC, FTC, and other federal agencies released a joint statement putting companies on notice about their AI use. States like New York, California, and Illinois have also passed laws regulating the use of automated decision-making tools in hiring and employment. This episode provides a look at the emerging legal landscape for AI in HR. Pete and Bryn outline the compliance steps HR teams should take today to integrate AI ethically and minimize risks. Their discussion makes it clear that while AI tools offer opportunities, they require diligent governance within HR.


Links & Notes

Transcript:

Pete Wright:
Welcome to Human Solutions: Simplifying HR for People who Love HR. From AIM HR Solutions on TruStory FM, I’m Pete Wright. Last week, we opened the conversation on AI’s role in human resources. What can you use it for? How can you explore AI responsibly? How is AI poised to impact your workforce? Well, it turns out it’s a big conversation and we have more to cover. So our fantastic guest, employment lawyer Bryn Goodman, partner at Fox Rothschild, is back with me this week to talk about AI and the law in the AI era. Bryn, welcome back. Thanks for doing this.

Bryn Goodman:
Thank you for having me. Happy to be here and happy to cover some of the legal issues that we missed last week.

Pete Wright:
Yeah, I think where we were, we talked a lot about the tools. What are the tools? How are we seeing them used? Fantastic. Terry was here to talk about how she’s seeing it used in recruiting, and we didn’t get to some of the thornier issues that people should look out for as they start exploring these tools. So where would you like to start? What makes the most sense in terms of thinking about AI and the relationship with the law from your perspective?

Bryn Goodman:
Well, I think we need to start at the federal level in the United States because that’s going to impact everybody who’s listening, no matter what state you’re located in. For those who haven’t heard yet, the EEOC launched an AI and algorithmic fairness initiative in 2021, and there was also a joint enforcement statement put out by the EEOC along with other agencies: the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, and the Federal Trade Commission. I think every HR professional here knows the EEOC stands for the Equal Employment Opportunity Commission, but for those of us who aren’t HR professionals, that’s what I’m referring to when I say EEOC.

Pete Wright:
Okay. What does their joint statement mean for us?

Bryn Goodman:
Right. So the EEOC, along with these other agencies, issued this joint statement and basically said that each agency is going to hold any entity it governs responsible for its use of AI, meaning employers and companies as a whole need to understand what’s going on and need to understand how to control the AI in a way that allows them to comply with the laws. And we had been speaking with you about how that’s very difficult, right? Because sometimes the people who create the AI don’t even know how it’s operating, so figuring out how it’s operating and stopping it from violating the laws is hard. I think the answer is, until we can get some transparency from the organizations that are creating, selling, and using these AI products, we need to be very careful about the ways we use them so we don’t violate the existing laws.

Pete Wright:
I’m going to ask a question, and I want you to know I’m fully ready for you to say, “I don’t know.” It seems like what we’re asking here is with whom the responsibility for using these tools lies. When a tool unintentionally violates a standard set out by the EEOC, is it the vendor that made or embedded an AI they don’t clearly understand, or is it the company that was using the tool?

Bryn Goodman:
I think what the agencies are saying is, if you are a company and you use these tools, you’re going to be responsible, and we’re putting you on notice of that. If the tool has a hallucination or does something unexpected, that’s going to be your responsibility. So the Consumer Financial Protection Bureau is saying that federal consumer financial laws are going to apply regardless of the technology being used, how it’s being used, or who’s using it. So if a credit decision is made about someone, that credit decision still needs to comply with the requirements under the CFPB.

Pete Wright:
Okay.

Bryn Goodman:
So if you don’t know how that credit decision is being made, it’s going to be very hard for you to defend it, and saying “I used AI” is not a defense. That’s not going to be a blanket defense of, “Of course it was lawful, because humans were not making the decision, so there can be no discriminatory intent.” There are different ways to violate those laws, the EEOC’s laws, the Consumer Financial Protection Bureau’s laws. “This credit decision was fair.” Well, if you can’t tell me how it was made, you can’t say it was fair.

Pete Wright:
In fact, the decision to use the tool is actually what we’re judging here. Right?

Bryn Goodman:
Right.

Pete Wright:
Okay.

Bryn Goodman:
So the point is that companies, as consumers of these AI products, need to demand transparency. This joint enforcement statement is basically saying, we’re holding you responsible for the use of these tools. So instead of regulating the AI companies directly, the agencies are going to force the consumer of the AI product to demand that there be transparency of some kind, to ensure that the tools are not being improperly used.

Pete Wright:
That makes sense. It seems like it fits in line with how we sort of do things in technology right now.

Bryn Goodman:
Yeah, it’s really going to be challenging I think to sort of … For the company that’s using AI or developing AI, it’s going to be challenging for them to sort of say, “We are transparent. Here’s how we do this,” without giving up the secret sauce.

Pete Wright:
Yeah. Are you watching what’s going on in Europe with the European statements and-

Bryn Goodman:
Yeah, Europe is aggressively meeting this head on, whereas the US is taking more of a backseat. I am not following it super closely, but our privacy folks are certainly on top of that, and we work with them on ensuring our foreign companies are in compliance with those requirements as well. But certainly, Europe is taking a different approach.

Pete Wright:
It does make me wonder a little bit how the European regulators’ approach may end up unintentionally affecting companies operating in the US, because these are global companies, right?

Bryn Goodman:
Mm-hmm.

Pete Wright:
OpenAI’s AI, ChatGPT, is working in Europe just as it is here. If they have to change it to satisfy some European regulation, that implies it will be changed elsewhere, too.

Bryn Goodman:
Right. And a lot of websites have more disclosures because of GDPR.

Pete Wright:
Of course. Yeah.

Bryn Goodman:
Because of the laws in Europe. So I think there will be a similar ripple effect with respect to AI, as you saw with the privacy laws. But the other thing is that, on the federal level, other than the joint enforcement statement and the initiative that the EEOC launched, there isn’t a lot of law being passed with respect to these issues. We do see states regulating specific AI issues, though, especially with respect to automated employment decision-making tools. We see that in New York City, but we also have seen other AI laws passed around the country. Maryland passed a law in 2020 to prohibit facial recognition without consent. In Illinois, there’s a requirement that you provide notice and obtain consent with respect to recording, keeping, and deleting video interviews. So you have to consent to that.
And then other jurisdictions, California, New Jersey, New York, Vermont, they have other laws. There’s a law pending in D.C. that would prohibit algorithmic decision-making in employment offers. And that raises the question: what is algorithmic decision-making? So then we get into, what AI am I using that might need to be disclosed or be subject to one of these laws? Because the recent law in New York City actually requires notice on the website or on the job posting that you’re using these automated decision-making tools for reviewing resumes. And whatever tool you’re using, there has to be an audit performed on that tool annually by a third party, not by the vendor. In this case, under New York City’s new law, you cannot rely on the vendor’s word. You actually have to have a third party auditing the AI that you’re using if you’re going to use it for anything related to employment decisions. And you have to post a notice of that so that people can see it, and it has to be done annually, which is interesting.

Pete Wright:
That seemed like a loaded interesting, Bryn.

Bryn Goodman:
It’s interesting because there’s no indication of who should be doing the audits, what the audits need to … I mean, there’s some general guidelines about what the audits need to contain, but who’s authorized to do the audits? Is there a certified auditor? It’s not like an accountant, right? This is all brand new, all new terrain.

Pete Wright:
All of these laws come from the intent to protect, in this case, potential employees, right? In some way, shape, or form, we’re trying to protect potential employees from what? And what is somebody’s recourse? Let’s say I’m looking for a job in New York. I go in and sit for an interview and I don’t get the job, but I know, because they’ve told me and I’ve given consent, that AI would be used in the hiring decision. So what, then?

Bryn Goodman:
Well, a lot of this was spurred by the issue with Amazon back in 2018. If you recall, Amazon had AI reviewing its resumes specifically for software engineer positions, and the AI was teaching itself what kind of resume Amazon wants. It was automatically weeding out resumes that mentioned, for instance, a women’s volleyball team, because historically, men had been chosen for these software engineer positions. The AI taught itself to weed that out. That’s what these laws are intended to prevent: an unintended disparate impact on a particular group of people because AI has been used. At a place like Amazon, where you’re getting thousands of resumes, it’s very inefficient for an individual to go through and review every single one. But if you’re going to have some kind of automated system looking at these things, it needs to be checked constantly to make sure it didn’t teach itself to do something that’s unlawful.

Pete Wright:
Right. Emergent behavior is a nasty business when you don’t know what the recipe is in the soup. Okay. All right, so what’s next? Are there other laws around the states that are picking up interest, things that might interest us in Massachusetts specifically?

Bryn Goodman:
Well, the California Civil Rights Council has proposed something. Massachusetts, I don’t have anything for you specifically right now, unless-

Pete Wright:
Don’t worry, Massachusetts, it won’t be long, I’m sure.

Bryn Goodman:
But the California Civil Rights Council has a rulemaking that’s very similar to New York’s. There’s a fundamental difference, though: there’s no bias audit required. So that’s good news for California employers, which is usually the opposite. Usually, California has the more onerous procedure. In this case, it’s New York City that’s winning in terms of the regulations being promulgated. And the EEOC, they’re a rulemaking body, but they can’t pass laws. So there’s EEOC guidance that has broader requirements, and it indicates that tools that make or inform decisions about whether to hire, promote, terminate, or take other actions are covered under Title VII.
So the New York City law that imposes this audit requirement applies to any tool that substantially assists or replaces human decision-making. So you’re looking at auditing a tool that’s more robust, in a way, whereas the EEOC is saying, “I don’t care if it substantially assisted or replaced. If it makes or informs any decision,” which is basically any tool you could possibly imagine, it’s very broad, “it is going to be covered by Title VII.” If you’re using AI, you’re responsible for it. So the EEOC is making very clear that it’s anything you use, whereas New York City is imposing this bias audit requirement on a tool that you rely upon in a greater fashion, one you’re using to replace yourself.

Pete Wright:
Okay.

Bryn Goodman:
So basically, you don’t necessarily need the bias audit if there’s a human involved who is reviewing those decisions. If you’re using AI to do some kind of sorting for you, and then ultimately you have a human checking it, that’s not necessarily going to require the bias audit. But again, the law in New York City and this guidance, none of it’s been litigated. None of it’s been tested. I think the thing you want to walk away with from all of these changes is that notice and consent are two huge factors in any of these laws. So provide notice that you’re using the tool, and some of them require actual consent. Provide notice, provide information, understand what the tool is doing, and think before you purchase. You need to be an informed consumer. In any industry, it’s best that you be an informed consumer; in this particular area, it’s even more important.

Pete Wright:
So it sounds like New York is leading the way in terms of specifying or sort of laying out what regulation should look like. Is that a fair statement?

Bryn Goodman:
Right. New York and California. California also has a robust privacy law that’s going to dovetail with this, which New York does not have. So New York and California have been enacting these things. But as I mentioned, New Jersey and Vermont are passing similar laws, and there will be more to follow. And we have Illinois and Maryland. So everyone should just be keeping their eyes open.

Pete Wright:
We have our HR pros listening to this who are in, let’s say, Kansas.

Bryn Goodman:
Yep.

Pete Wright:
Right. What do you recommend in terms of baseline behavior to make sure you’re ready for regulation that comes down the path eventually?

Bryn Goodman:
I think the key thing is understanding how you are managing your employees’ use of AI, and also understanding whether the HR tools you’re using contain AI. That means specifically asking those payroll companies, EEOs, job posting services, and third-party agencies that you use to solicit for jobs: what AI do you use? What do I need to know about? Are there decisions being made on your end through an automated or artificial intelligence system of any kind? And if so, I want to know more about that, and I want to know what you’re doing to comply.
You don’t necessarily need to know all the right questions to ask, but you need to ask the question to see if there’s an answer you’re satisfied with. And if the answer is “I don’t know,” then that’s obviously a sign you might want to consider a different partner. You don’t have to be the expert on this, that’s not the HR professional’s job, but you have to know enough to ask the right questions. And then it’s important, with respect to your employees, that you understand what access they have to ChatGPT or any of the other free AI tools and how they’re using them in the workplace. So just like we have our social media policies, we want to have an AI policy for employees’ use and set the parameters around that, because those are the two areas in which HR professionals need to get ahead of the game.

Pete Wright:
I would just add to that, because this is such a brand new space, everything that Bryn is saying is right on the money in terms of asking your partners what their use of AI is, especially because not all of them have updated their tools to be completely clear about what parts are AI and what are not. So even if you aren’t sure that AI is being used in some of the tools and applications you’re using, ask the questions. Please ask the questions, because it’s very new. It’s so, so new. And in some cases, it may not be super clear.

Bryn Goodman:
It follows the trend of the financial industry having to disclose how decisions are made, having to be transparent about processes. We’re going to see more compliance requirements as the use of these tools expands. But while we’re waiting for that, make sure that you are an informed consumer.

Pete Wright:
It’s also really fun. Let’s just say that.

Bryn Goodman:
Oh, AI is, yes.

Pete Wright:
It’s so fun.

Bryn Goodman:
AI is very fun. Yeah.

Pete Wright:
Just be careful.

Bryn Goodman:
AI is very fun. So yeah, the way that I use it: never, never input confidential information, never use it as a replacement, but use it to ask a question, almost like Google, and see what it spits back. It’s kind of like having a dialogue. I was in a CLE with a colleague and she made a great point that even if you say, “Draft a letter” or “draft a policy,” and you put it into AI and it spits back something that’s terrible, let’s say. It’s like first-year level or entry-level sort of bad.

Pete Wright:
That’s what we talked about last week. It’s like C-level work.

Bryn Goodman:
Right. But you use it as a baseline.

Pete Wright:
Yeah. Yeah. That’s a perfect way to think about it. Bryn, this is great. I so appreciate you coming back and really illuminating the regulation that is coming, the laws that are clearly impacting our use of AI in the HR business. We sure appreciate you doing that.
And thank you, everybody, for downloading and listening to this show. As we lean into the holidays here, we’re going to go quiet for a little bit, but don’t unsubscribe because we’re coming back for our next season right after the new year. We can’t wait to bring more fantastic HR topics. I will come back early in the new year with a preview of the kinds of things we’re going to be talking about. So don’t unsubscribe. Have a fantastic holiday season, wherever that takes you. And to Bryn Goodman, thank you so, so much for your time.

Bryn Goodman:
Pete, thank you so much for having me. This was lovely.

Pete Wright:
As always, you can find links and notes about this show at aimhrsolutions.com. You can listen to the show right there on the website, or subscribe to the show on Apple Podcasts, Spotify, or anywhere else fine podcasts are served. On behalf of Bryn Goodman, I’m Pete Wright. We’ll see you next year right here on Human Solutions: Simplifying HR for People Who Love HR.
