S4 E10: AI & the Future of HR


Podcast December 5, 2023

One year ago, as we recorded this conversation, a chatbot named ChatGPT debuted and shook our technological assumptions. Developed by OpenAI, the artificial intelligence tool ignited debate on the future of work, creativity, and what it means to be human. But the story of AI stretches far beyond any single innovation.

As companies race to harness AI, what legal realities must they confront? How do these powerful tools reshape the employer-employee relationship? And in automating tasks once reserved for humans, what deeper human values are at stake?

Today, we explore this unfolding narrative, guided by employment lawyer Bryn Goodman, partner at Fox Rothschild, and our own Terry Cook. Together, they’re going to help us separate AI fact from fiction with insights gleaned from businesses on the frontlines of adoption. Make no mistake: there are human impacts—both promising and perilous—of entrusting technology with more of our working lives. However uncertain the path ahead, one thing is clear: the tools changing work are here. How leaders engage them now will shape the future. So let’s rethink some assumptions, weigh risks, and see if we can move thoughtfully into the AI era.


Links & Notes

Transcript:

Pete Wright:
Welcome to Human Solutions, simplifying HR for people who love HR. From AIM HR Solutions on True Story FM, I’m Pete Wright. One year ago today, as we sit down to record this conversation, a chatbot named ChatGPT debuted and shook our technological assumptions. Developed by OpenAI, the artificial intelligence tool ignited debate on the future of work, creativity, and what it means to be human, but the story of AI stretches far beyond any single innovation. As companies race to harness AI, what legal realities must they confront? How do these powerful tools reshape the employer-employee relationship? And in automating tasks once reserved for humans, what deeper human values are at stake?
Today we explore this unfolding narrative guided by employment lawyer Bryn Goodman, partner at Fox Rothschild, and our own Terry Cook. Together they’re going to help us separate AI fact from fiction with insights gleaned from businesses on the frontiers of adoption. Make no mistake, there are human impacts, both promising and perilous, of entrusting technology with more of our working lives. However uncertain the path ahead, one thing is clear: the tools changing work are here. How leaders engage them now will shape the future, so let’s rethink some assumptions, weigh risks, and see if we can move thoughtfully into the AI era. Bryn Goodman, Terry Cook, welcome to the show. Bryn, it’s great to see you and have you here to teach us.

Bryn Goodman:
Thanks Pete, thank you for having me. I don’t know if we can live up to that wonderful introduction you just gave, but we will try.

Pete Wright:
You will have no problem, I’m sure. So we get these quotes, these high-minded quotes. Satya Nadella says that the golden age of AI is here, and it’s “good for humanity.” Sundar Pichai over at Google says, “I’ve always thought of AI as the most profound technology humanity is working on, more profound than fire or electricity, or anything that we’ve done in the past.” Are they overstating, understating, or properly stating? It’s a multiple choice.

Bryn Goodman:
I think that I’m going to go with the lawyer option, which is none of the above. What AI can do is improve efficiency, save costs, avoid problems. There are a lot of great things that are going to come from AI, and it’s exciting for sure. There are also obviously a lot of unknowns, and we’re living in a world of unknown unknowns, so I don’t think that we can properly contemplate the level of impact this is going to have. And so what we’re doing, especially in the workplace, is trying to anticipate as much as we can how this is going to impact our workers and impact the business models that we have, because it’ll impact both sides of that. And to anticipate what are going to be the legal challenges. What are legislators going to make rules about? How can we anticipate those already and put them into action so that we’re not caught flat-footed? But I don’t think it’s necessarily an overstatement.

Pete Wright:
I’m curious as we start, let’s start personally, right? We’re talking about AI; you both have presented on AI. How do you use these tools day to day? I mean, today is ChatGPT’s birthday. Did you do anything AI-related to celebrate? What tools are you using right now? Do you have a suite that you count on yet?

Bryn Goodman:
I do use ChatGPT at work. Our firm has taken the position that you can use these tools as long as you take a training and you understand what they can be used for. So if I use them at work or when I use them at work, because I said that I do, I don’t put any client information into ChatGPT, I don’t put anything in there that I would not want to be spread on the internet at some point.
I use it more like a starting point sometimes, or a Google search or something like that. And I understand also there’s only limited information in there, and there are problems with it. The technology can have what they call hallucinations, and we’ve seen lawyers get into trouble writing… Actually submitting briefs to court and being sanctioned for the briefs having made-up case names and made-up case law. So obviously I wouldn’t use it to draft a brief. I might use it to say, draft a letter saying this or that. And I would use that as a first draft. It’s almost like taking a first-year associate lawyer who’s just learning, and having them draft something for you. So then you have something in writing in front of you that you can work from, but I would never use it for a finished product, and again, be very careful about confidential information. How about you, Terry?

Terry Cook:
No, I think the same. I think with the way we use it as starting points, so maybe we think about it in some marketing aspects or as you said, letters, but really from our standpoint, looking at maybe job posts and things like that related to recruiting. But again, I’ve already learned a lot from Bryn about the dangers of it, so we have never used any confidential information to put into there because then it is out in the universe for people to access. So we really just use it for the starting point. We don’t rely on it being completely accurate at this point until we verify, so.

Bryn Goodman:
Right. I think it gives you quick information. It sifts through a lot of stuff quickly. That’s great, that’s the efficiency piece of it. First draft, we already mentioned. I think you have to understand the limitations, that the information in there is limited to, I think, up to 2020 for ChatGPT. That’s all they have in there.

Pete Wright:
Yeah, 2021.

Bryn Goodman:
Anything after that is not in there, so you have to understand that you’re going to be very limited on that point. But understand it’s a free resource, so take advantage of that free resource. I think we are going to see lawyers who use it to their advantage and lawyers who don’t use it to their advantage being separated. And then you also have to be very careful, because if you rely on it too heavily, you can get in trouble. One other thing I do is I’ll say, give me a terms and connectors search for this case, see what it comes up with, and then I’ll put that into my legal research database, Westlaw or Lexis. So that’s an interesting way to use it. So again, I like to use it to supplement. It’s like getting your own personal assistant’s opinion on something, but sparingly.

Pete Wright:
I think it has domains of expertise that are fascinating to me. And one of the things I’ve heard is that the law is an area it’s actually quite adept at. With the exception, of course, of the highly publicized stories of attorneys who didn’t know better, back when it wasn’t widely known that these tools hallucinate and make up case law, and who used it in actual court. I mean, it’s hard. It’s not even Schadenfreude at this point to laugh at that kind of experience, because it is now obvious, but you have to relate to the pain of being that attorney who didn’t know any better and was using that material. And I think that is a foundational emotional understanding of how we have to learn to use these tools.

Bryn Goodman:
You’re making a great point; as I mentioned at the top, the unknown unknowns. They weren’t thinking about the fact that this could give information that’s inaccurate or could make up information, and that’s the thing about AI. It learns from the data that’s in there. So it learns from its input, it’s only as good as its input, and they didn’t think about that. That being said, even if I’m having someone do research for me, typically best practice is to check the cases in the actual legal databases, Westlaw, Lexis, or Bloomberg, and read the cases that you’re citing, at least make sure they exist, so that… There were numerous cases that didn’t exist.
But that’s certainly a cautionary tale, and certainly something that while AI may feel, as I said, my human personal assistant next to me, it is not a human being, it is not a person that’s actually thinking in that way, it’s learning from the information that’s input. And that was the trouble with Amazon in 2018, the story came out where they had been using an AI tool to screen resumes, and were screening out resumes that had things like women’s volleyball team, because the history of hiring for software engineers or technical positions had been men. So the resumes with women’s volleyball team were not ones that they perceived as going to get past that initial phase.
So they were screening out in a discriminatory fashion, which gets into my area of expertise, which is employment law and what impact AI will have on employment law. Already the EEOC has come out and made a statement, and other federal agencies have made a joint statement with the EEOC. A lot of states have passed laws to address these issues because again, the information that we input can result in a disparate impact on certain groups that are protected, right? And that’s considered discrimination. You can have disparate treatment of someone, which has discriminatory intent, or you can have disparate impact, which is what AI can potentially result in.

Pete Wright:
Terry, do you have a comment on that before I jump into a different question?

Terry Cook:
No, I mean, I was just going to say kind of what Bryn said already: people have to remember that it is only as good as what the input is. I’ve recently taken a few sessions around diversity and recruiting and onboarding, and that was one of their cautionary tales as well. As Bryn said, the system tries to learn from you what type of person you’re looking for, so it may inadvertently, as Bryn mentioned, be taking people out that you actually want to stay in as a company. So it’s just one of those things: we have to realize that it’s not perfect, and we have to be cautious about how we use it.

Pete Wright:
Forgive me this, I don’t think this was in your slides, so if you don’t have an answer, I’ll cut it. One of the things that I have been learning about, that I feel is so perilous about these particular tools, is the relative opacity of the training process. When we talk about training, we’re talking about a large language model, a transformer model; ChatGPT is a large language model, a transformer. What we do is feed a lot of data into it, and then it all gets munged up together. So we, humanity, feed the things we’ve created into these tools, and we’re not entirely sure, as I understand it, where the ingredients go once they’re in the giant pot, right?

Bryn Goodman:
Right.

Pete Wright:
It’s all a big stew. So when we feed it stuff like you both are talking about, when we feed it our biases, when we feed it the trends that, we hire male engineers and not women volleyball players, right?

Bryn Goodman:
Mm-hmm.

Pete Wright:
The sense to me is that as of now, we don’t have an answer to how to change that. Is my understanding right?

Bryn Goodman:
Right, it’s sort of impossible to penetrate that black box problem, which is that the details of those internal processes are not going to be visible. The decision-making is sort of imperceptible when you use AI. And so privacy law, that area of law, is looking at notice and disclosure requirements. It’s more advanced in this area in Europe, and this is a dialogue I had with one of my colleagues whose expertise is privacy: if you’re going to use a vendor that’s using AI, especially in the employment context, these laws are going to say that whether you use the AI or you make the decision, you’re going to be responsible for that decision. So you’re going to be responsible if it violates one of the existing laws. So what you need to be asking these vendors is not, what is the formula that you’re using?
They don’t need to be giving intellectual property to their consumers. You do need to know: can you tell me what are the main categories of information that you are considering, and how do you consider that? Not the secret sauce per se, but kind of an outline so that it’s a little more transparent. The problem is that I think some of these developers don’t even understand what’s happening themselves, but you should be asking if they can provide some level of transparency, because you are going to be required to answer for that if something goes wrong. You’re going to be responsible to ensure that you’re not violating one of the already existing laws, right? Title VII, protecting against discrimination based on certain protected categories; the Americans with Disabilities Act; age discrimination laws; genetic information discrimination laws. There are a number of federal statutes that you need to consider, and make sure that whatever you’re using the tool for, the result doesn’t have a disparate impact on a certain group.
And so you’re going to have to keep humans involved in the process somehow to double check that, or you’re going to have to very thoroughly vet and ask for transparency from the vendor. And if the vendor can’t give you those answers or they’re not even thinking or looking into those answers, then you should be a little bit skeptical about working with them. The other thing is you want to make sure that you maintain control of your data. So if you are inputting data into one of these products, you don’t lose control of that. And those contracts will often give the vendor the right to take control of that data. So you want to make sure that you’re not giving personal information or protected information by just signing up for use of this modeling system.

Pete Wright:
I think this is a really sensitive area, because I know right now, and at least in the last year since ChatGPT blew up, we’re seeing a lot of companies that have great data protection in and of themselves, but they’re wrapping these large language models, these AI models, into the backend of the tool, and they have a new set of fine print that says, hey, if you click this button that turns on the AI model, you should know all our previous data protection is out the window, because we have to send your data to this third-party server to actually look at the data. Now you’re under their license model. Is that something you’ve run into? I’ve seen a number of complaints about tools that were previously trusted that are suddenly not, and in such a way that looks shady.

Bryn Goodman:
It’s not something I can speak to specifically for any vendors. It’s just in general, being aware of privacy and data protection: when you’re using these automated tools and you’re feeding in information, you need to have a lawyer who understands the loopholes and the way the language can be manipulated or can be very broad. Take a look and just cross out the lines that might make you lose control of that data. With a lot of HRIS systems, that’s a key thing anyway, right?

Pete Wright:
Yes.

Bryn Goodman:
Even apart from privacy and security, just in terms of being able to transfer your employee HRIS system to a new system if you’re unhappy with the service, sometimes there are issues getting back the information, getting back the… And there are penalties and costs associated with taking back the payroll data, the I-9s, the personal information. With everything that you put into an HRIS system, you want to make sure that you thought, when you signed that contract, how am I getting this back if this doesn’t work out? Kind of like a prenuptial agreement.
You want to make sure that you can sort of separate yourself cleanly in the worst case scenario. While you hope that this HRIS system is the key to all your problems, in the event that it’s not and something goes wrong, you want to be able to… You want to maintain, you never want to give up control of your employee’s information or any other information that you’re intaking, consumer information or anything like that, that you could then be responsible for.

Pete Wright:
Let’s dig into some of the HR specifics, and I want to see if I can trigger a mood of optimism for this part of the conversation, right?

Bryn Goodman:
Sure, yes.

Pete Wright:
I’m really interested from both of you in what you are excited about, in how these tools, this sort of suite of tools that we’re seeing, can unlock something new for us in HR.

Terry Cook:
Yeah, so I think there is a lot of reason to be optimistic. I mean, there’s a lot of ways, as Bryn mentioned, for efficiency. So there’s a lot of time-saving opportunities in HR, between recruiting and writing policies, or writing job descriptions and job postings. There’s a lot of ways that AI can help us so that we’re not reinventing the wheel, so at least we have a great starting point that we can utilize in order to move forward quickly. So I think all of those pieces make AI very useful. I know from a marketing standpoint, a lot of people have mentioned that. Even some job posting companies, it’s been interesting to me, as you start putting a job title in, it will come up with a job posting sample and say, “Generated from AI, here’s what we think you might be looking for.”
So it’s helpful. It’s helpful because it saves you time from having to start over. So I think there’s a lot of ways that it can be helpful for sure. It’s just, I think the other piece that human resources professionals like to hear, and Bryn already touched on it, is that you still need them. You still need people. You need HR to be involved, you need to have review, because AI might generate, for example, a policy, but if you’re located in a specific state that has rules around that topic, AI may not have picked up on those specific rules that are applicable to your state. So the person in HR would be able to use what’s written but be able to tweak it and make sure that it’s compliant for the state. So I think again, definitely time-saving, definitely creativity that some people might not have, AI can offer that to them. So I think those are some of the positive ways for sure that AI has been helping HR.

Pete Wright:
Well, and I think Terry, you’re in a position, just the position that you are in gives you an opportunity to sort of weigh in on a question that is all the rage right now in AI media, which is, is AI coming for my job?

Terry Cook:
And I don’t think it is. I think you’re right, that is the fear. I’ve actually heard it talking to people, to other HR professionals that I encounter. They’re like, “Geez, is this going to replace me someday?” And I think the answer is no. I think you still need the people. You need the human in human resources because you need to be able to confirm information, and you need to be able to relate to people. I mean, a computer is not going to be able to relate like a human being would be able to relate to your employees. So employee relations is a huge part of what we do in human resources, and you do need that aspect of that person involved in that process.

Bryn Goodman:
There’s an interesting point; one of my colleagues and I were talking about this issue sort of generally. There are certain things AI is very bad at, and one thing that it is known to be not good at doing is recognizing human emotion. So there have been attempts to create tests for whether or not someone’s lying. Insurance companies would love to use that, to have someone record a video and see if the AI could figure out whether they’re lying or not, or to test how someone feels about a certain product. HR needs to be present to basically tailor that policy to the actual workplace. And not just the specific laws of every state, which certainly I wouldn’t trust, especially not at this stage, and I don’t know how quickly we’ll progress to that. I wouldn’t trust AI to tailor a policy to a specific locality.
More than that, there’s the industry, the business, and that particular workplace. I mean, you have a number of personalities. Even in a remote workplace you have personalities that interact with one another, and you need to understand, how is this policy going to land? How is my vacation policy? I know what the law is, I know what I’m required to do, but how have we been doing it in the past? What is the practice? How do people feel about that practice? AI cannot do that for you. And chatbots are not that good at making people feel heard, which is a lot of what HR does: mitigate risk and liability by ensuring that people feel heard, seen, comfortable at work, safe at work, valued at work. AI can’t do any of that.

Terry Cook:
Yeah, that’s an interesting statement, and I think that’s a really good point. And I think in today’s job market and today’s employee population, people are looking even more to feel heard, and they’re looking more into the culture of the company, and the way people will talk with them, receive what they’re saying, they want to be able to make a difference in their company. So I think to what Bryn’s saying, it’s a really good point that in order to really feel heard, it may take more than the robot that’s saying, “Thank you, Pete, for talking to me.” I mean, it’s really just being able to have more of a conversation there.
But again, I don’t want to take away from how positive it can be, because there are definitely aspects where it’s really helpful in our profession and HR, in recruiting and policies and job postings, the job descriptions. Even performance reviews; I’ve heard people say they want to use it to start a performance review. To Bryn’s point, you’re probably not going to have ChatGPT do your whole performance review, because it doesn’t understand what you’re looking for as a company from your employees, and you don’t want to put in a lot of private information. But it’s a great starting point. These are still time savers.

Bryn Goodman:
You could use it to say, I wrote an anonymized paragraph, I wrote a couple sentences. Can you make this language softer? Can you make this language more friendly? And so you can use it to give you an idea of what you want to do, but you have to know what you want to do based on the real-life inputs. And if you’re not using your brain to do that, then AI is not going to be that useful.
It’s going to be super useful in freeing us up, lawyers, HR professionals, everyone in these types of professions basically, freeing us up from having to draft the rote. This is the policy that has the legalese, the letter of the law. Now think about, how can we stay within the letter of the law, but make a policy that actually works for our company, works for our people, draft a job posting that has personality that reflects the culture of the company. I want to know, give me the points that I need to include in the job posting that cover all my bases to make sure I’m giving the right disclaimers, notices, blah, blah. AI can do that for me. Great. Now I can be freed up to spend more time on the things that will make me a better HR professional, a better lawyer, whatever it is.
I can have more time for investigations. If I need to have witness interviews, I can have more time for trainings, and the basic trainings I can leave to the AI. I can ask AI to draft a presentation for me, and then I can spend a lot of time really digging deep into the details that make it… So it can make your job easier, and you can be better by using these tools, you just can’t rely on them. And yes, if you’re the kind of individual who doesn’t really feel invested and wants to just get it done and doesn’t really want to think about anything else, then yeah, maybe your job could be replaced. But I think that it’ll level everyone up, and that’s what people are excited about.

Pete Wright:
I think you just said something really important. The bottom line is, right now, let’s just say you ask AI to write you a deposition, and say it has all the data; it’s going to give you, at best, C work, right? That’s kind of where we’re living right now. It takes the human to take it to the next level. But maybe everything that we’re saying should have the word “yet” added at the end of it, right? Things are changing very, very quickly. So what is your position on the people who say, no, I don’t trust it, I don’t want to get into it, I just want to do my job? What do you tell people about their position on AI? What should they be doing next?

Bryn Goodman:
You can decide not to use AI. It’s coming no matter what. So if you choose not to use it, you’ll eventually be limited in how you progress in the marketplace, in the world. And rather than reject it, why not try to use it in the ways that you feel comfortable using it, just so you can understand it, and so that if there is legislation, if there are people running on platforms or there are policies out there, or you’re in a state that has referendums where individuals can write their own laws and pass them, you’re able to actually engage in that dialogue and understand what the risks are so that you can help set the parameters. Because there is going to be regulation around this. It’s not robust yet, but there will be regulation. It’s taken us years to regulate social media; we’re finally kind of trying to regulate social media.
Thank goodness for our future children, because a lot of children from the first age of social media were hurt by the lack of regulation. And so, we’re going to get regulation. It probably will lag behind what we’re dealing with, but in order for you as an individual to understand what the risks are, understand how it could be beneficial, because it’s going to come no matter what, and understand where you want to see the regulation, you should try and use it.
And also, again, there’s going to be a divide between HR professionals who use AI and those who don’t. And yeah, you don’t want to go so far as those lawyers who were the guinea pigs, basically flying blind and then sanctioned for it; you don’t want to go to that level. But you should, in a very cautious, informed way, try to utilize these tools, because they can be helpful. That doesn’t mean adopt the newest AI HRIS system, but it does mean thinking about, how can this help me, like Terry said, draft a policy? Great. Let’s just see what it feels like, instead of looking at a blank page, to look at AI’s version of this. Question every sentence, but use it.

Pete Wright:
Terry, are you seeing a trend yet, if any, or signals that hiring managers are starting to count on some level of savvy with AI tools as a part of the job posting?

Terry Cook:
I have not seen that in any job posting requirements yet that I’ve worked on with clients. It doesn’t mean it’s not out there, but it definitely has not come up as part of the list on the job postings that we’ve worked on. I can see what you’re saying though, Pete. I think it makes sense. I think that may actually start even more so in the marketing realm, because it makes sense that especially in marketing, that they’re able to understand it, utilize it, and be able to find the best ways to use it for the company, for the good.
But yeah, I think that you’re right. I think it’s, and Bryn mentioned social media, that was the first thing I thought of when you asked that original question. There were a lot of people that wanted to completely reject social media and not find ways to be a part of it, and not find ways to utilize it, even at home or at work.
And as Bryn was mentioning, they get left behind, because that is what people are using, and you do need to. So to Bryn’s point with AI, you can take baby steps, as she said. You can find areas of AI that you want to at least try to understand and see how it helps. You don’t have to jump all in, but if you start at least a little bit at a time thinking, well, how can this help me, you’ll see that it can help. And you do have to verify, but at the same time, as Bryn also mentioned, it does save you a lot of time in so many different pieces and areas that it’s crazy not to at least try to utilize it in your world.

Pete Wright:
Well, and it is everywhere, as we’ve said. And we’ve talked a lot about OpenAI’s ChatGPT, but there are a lot of different tools, and I wonder… I just want to run down some of the other large language models, because you’re going to see them, right? ChatGPT is from OpenAI, and OpenAI has a particularly interesting business structure. You’ve probably heard a lot of news about OpenAI if you’re in this space at all. It’s messy, but hopefully getting better. Anthropic is another company; they make the language model Claude. Claude has a lot of great benefits, and I’ll just tell you my use case. So in my home life, I write books. And I wrote a novel two, three years ago that I had not read in a while. I didn’t finish it because I couldn’t figure out how to finish it. I’ve kind of been thinking about it.
You know what’s interesting about Claude? It can take in the largest amount of text at once, so I could feed it my entire book. It also does not use my conversations to train itself like ChatGPT does. So anything I submit to my Claude account is private to me, and I was able to interview my book, my own book. I was able to say, hey, who are the principal characters? What is their story arc? What threads are still open in this thing? And it was able to interpret. I say that in air quotes; interpret is a loaded word, but it was able to give me an interpretation of the book and let me know where to start again to finish it. And I just finished it last night. It is an extraordinary set of tools.
So Anthropic’s Claude is fantastic. OpenAI has DALL·E, which is an image-generating model. You’re going to hear a lot about image generation. Midjourney is another fantastic tool for image generation, incredibly controversial in terms of copyright. I don’t know, Bryn, if you want to weigh in on any of what we’re talking about related to copyright, but there are some questionable things that, it feels to me, simply have to be litigated in order to get these answers.

Bryn Goodman:
Right, legislated and then litigated. Yeah, I think we’re going to try to shoehorn these things into the existing laws, but you can’t. We have this issue with website accessibility in the employment law context: the law on accessibility of public accommodations protects individuals who have any kind of disability, to allow them access to public accommodations, goods and services provided by retailers, the public library, the movie theater, whatever it might be.
And then there have been just a slew of lawsuits with respect to accessibility of websites. Now, that law and its regulations were drafted far before there were ever websites. And so we’re trying, apples to oranges, to say, oh, well, a website is a public accommodation. But is it, and what are the requirements? And there’s no definitive regulation; there’s guidance, there are the Web Content Accessibility Guidelines, but nothing that’s actually controlling. And so we have a slew of lawsuits as a result, because it wasn’t legislated. So I’m hoping that we see governments, local, state, federal, and international legislators, make laws to attempt to put guardrails up around this so that people don’t get hurt. Because that’s what’s going to have to happen, like you said. Lawsuits or harm, injury, and then as a backlash, ultimately some kind of regulation.

Pete Wright:
Well, it’s all happening right now. You’ll also hear about Google’s PaLM. Bard is Google’s publicly branded tool, but the underlying language model is called PaLM. Facebook has Llama, or Meta, I should say; Meta has Llama. So all of these tools are out there right now, and they all have a different set of policies around how they use your data, and it’s changing very, very quickly. So read the fine print. But I think, closing words: don’t be scared. Is that fair? Or be very, very afraid? Maybe it’s just that we should all be very afraid.

Bryn Goodman:
No, cautiously optimistic.

Terry Cook:
That’s a good one.

Bryn Goodman:
And go forward and check out AI tools, but make sure that you do think about a policy on employees’ use of AI so that you don’t get in trouble. As an employer, you want to be careful about how you use AI tools with job applicants or for storing employee information. You also want to make sure that individual employees aren’t using AI tools without any guidance about it. So either require trainings, or block those websites so employees don’t have access while at work if you don’t want them accessed, or if you don’t feel like there’s enough information yet. But certainly, as an HR professional, you need to flag for the entire leadership team that we have to address these things, because they’re coming. So whether it’s a ban on use at work or whether it’s education, you need to deal with it.

Pete Wright:
Deal with it. Terry, you heard it here.

Terry Cook:
I know.

Pete Wright:
You have to deal. We have to deal with it.

Terry Cook:
She’s right. Policy and training, always important.

Pete Wright:
Well, thank you both very, very much. It’s obviously a tip of the iceberg conversation. So much is changing so quickly. But deeply appreciate you both hanging out and talking about this stuff on the show today. Terry, do we have any resources we want to point people to that AIM has put together yet? I know it’s very new.

Terry Cook:
It’s very new. We’re putting together some more materials around education, but not yet.

Bryn Goodman:
If you’re in HR and you haven’t yet, I would recommend checking out the joint statement and the EEOC guidance on AI, because it’s going to be at least helpful in some fashion.

Pete Wright:
We’ll put the link in the show notes. Thanks everybody. Thank you so much for downloading and listening to the show, we appreciate your time and your attention. On behalf of Bryn Goodman and Terry Cook, I’m Pete Wright. We’ll be here next week, or maybe not, maybe it’ll be our AI replacements. We’ll never know. It’ll be a C-level podcast next week right here on Human Solutions, simplifying HR for people who love HR.
