What Is AI Governance? Frameworks for Risk, Compliance, and Innovation
AI is transforming the workplace — but without the right guardrails, the risks can outweigh the rewards. In this episode, learn how smart governance frameworks help companies manage AI responsibly, avoid bias and lawsuits, and drive innovation faster.
Summary
In this episode of Advancing Talent Acquisition, Jenna Hinrichsen welcomes Guru Sethupathy, CEO of FairNow, to unpack what AI governance really means and why it’s more important than ever for organizations of all sizes. Guru breaks down his five pillars of AI governance, from risk assessment and human oversight to compliance and continuous monitoring, and explains how these frameworks protect companies from bias, regulatory pitfalls, and costly lawsuits like the Workday class action. They also explore the risks of deepfake job applicants, the debate over federal vs. state AI regulation, and why smart governance doesn’t slow innovation – it accelerates it.
Episode 12
Jenna Hinrichsen
Welcome to the Advanced RPO podcast, Advancing Talent Acquisition. Our guest today is Guru Sethupathy. He’s the CEO of FairNow, an AI governance company. Welcome, Guru. Thank you for joining us. Will you tell us a little bit about your background before we get started?
Guru Sethupathy
Yeah, happy to, thank you. Thanks again for having me. I’ve been in the data analytics, AI, and technology space for over 25 years. So, just a little bit of the origin story: I was a bit of a chess player in high school. And I remember the moment in high school when Deep Blue, the IBM chess AI system, beat Garry Kasparov in a chess match. It was the first artificial intelligence system to beat a reigning world champion in chess. I remember that was such a big deal, and that actually got me interested in AI, to be honest. So in my undergraduate, I studied computer science at Stanford, and I specialized and went deeper on artificial intelligence at that time. And AI at that time was quite different than AI today. But I point that out to say AI has been around for quite a while, and the version of AI that you see today has been in development for a while. But there are other versions of it, and we can maybe talk about this later: machine learning models, statistical models, expert systems. These have been around for decades, right? All that to say, that’s how I got into AI. In the first part of my career, I was an academic, a professor of economics, studying how technology systems impact the workforce, which is probably pretty relevant today to how AI is impacting the workforce. And then the last dozen years I spent at McKinsey and Capital One, on the consulting side and then as an executive at Capital One, really building and leading teams and building AI technologies and solutions in regulated domains. The domains I know well are financial services and HR, which is what we’ll be talking about today. But again, how data analytics, technology, and AI have revolutionized business over the last 20 years has been quite a journey, and I’ve been really lucky and happy to play a role and be part of that journey.
Jenna Hinrichsen
Yeah, awesome. I think just over the last five years even, it’s gone so much further than you could imagine. So our topic today is AI governance. We all hear AI, AI, it’s everywhere, it’s in the news, it’s all over the place. I think most people have an understanding of what AI is. But what I wanted to have you give us a better understanding of today is: what is AI governance, and how does it apply to all the AI stuff that you’re hearing about on the news and at work and everything that’s relevant in today’s world?
Guru Sethupathy
And surprisingly, look, I know obviously AI is a hot topic, but even AI governance is becoming more and more known and searched. At least people are aware of the term. Now we’re going to dive into what it actually means, because I think a lot of people don’t know. But the topic, for instance, just on Google, searches for it have increased by 4x or 5x just in the last year. So people are searching for this term, they’re inquiring. You’re starting to see more jobs and roles on LinkedIn where companies are hiring specifically for these roles: head of AI governance, director of AI governance, et cetera. I do want to put that out there. But that being said, we are so in the early stages of all of this, whether it be AI adoption or AI governance. And this is going to be the journey of the next 10 years, just like data was the journey of the last 15 years, or the cloud was the journey of the last 15 years, right? We’re embarking on this new journey. So what is AI governance? Well, it starts with governance. What is governance? Governance is a set of frameworks, policies, and procedures that one implements to essentially have oversight of and manage the risk of a system.
That’s kind of a generic term.
And so you can apply that to a whole variety of things. You can have governance of your organization and your employees, you can have governance over driving and automobiles, you can have governance of guns, right? Any kind of important thing, you have governance. You can have governance over data and cyber. So governance, again, is just the frameworks, policies, and procedures that you implement to manage risk. So in this context, for AI governance, I like to say, hey, what are the components? And look, some of the details are going to vary by industry and by company. But I like to break it down into five things.
The first thing from an AI governance standpoint is: do you even know what AI you have in your organization? It’s fascinating. When I talk to so many leaders and prospects, I ask, hey, how many applications do you have? What applications are you using? Who’s using them? How are they being used? No one knows, no one’s tracking it. So step one is just know what you have. Inventory it: know what applications you have, know if they’re coming from vendors, know how they’re being used, all of that. You’ve got to know what you have. So that’s number one.
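To make this first pillar concrete, here is a minimal sketch, in Python, of what one entry in such an inventory might capture. Every field name and value here is an illustrative assumption, not a prescribed or FairNow schema:

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI inventory. All fields are illustrative."""
    name: str                # what the tool is
    vendor: str              # who supplies it ("in-house" if built internally)
    business_owner: str      # the accountable person or team
    use_case: str            # what decisions or work it supports
    users: list = field(default_factory=list)  # teams using it
    automated_decisions: bool = False  # does it act without a human?
    last_reviewed: str = ""  # date of the last governance review

inventory = [
    AISystemRecord(
        name="Resume screening tool",
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        business_owner="Head of Talent Acquisition",
        use_case="Rank inbound applicants for recruiter review",
        users=["Recruiting"],
        automated_decisions=True,
        last_reviewed="2025-01-15",
    ),
]
print(len(inventory), "AI system(s) on record")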
Number two is to do a kind of continuous risk assessment, right? Here’s the thing: AI governance is very much about risk triaging, because you don’t need to do heavy governance on every single AI system. In fact, I’m going to go out on a limb a bit and say the vast majority of the AI that you’re going to be using in your companies is actually going to be low risk. And you don’t need to do a whole lot. I mean, know it’s there, know it’s being used, but you don’t have to do a ton of governance around it. And this is where you can save yourself a lot of time. In fact, I call it smart governance, because you can reduce the amount of work by simply triaging and understanding what risks you have, and spend the majority of your time on the medium- to high-risk stuff.
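As a sketch of that triage step, assuming a simple three-tier scheme (the tiers and the criteria below are assumptions for illustration; real criteria vary by industry and regulation):

def risk_tier(affects_people: bool, automated_decision: bool,
              sensitive_data: bool) -> str:
    """Toy triage rule: most systems land in the low tier and need
    only lightweight tracking; governance effort concentrates on
    the medium and high tiers."""
    if affects_people and automated_decision:
        return "high"    # e.g., automated candidate filtering
    if affects_people or sensitive_data:
        return "medium"  # e.g., drafting feedback a human edits
    return "low"         # e.g., an internal meeting summarizer

# A low-risk tool gets inventoried but little else:
print(risk_tier(affects_people=False, automated_decision=False,
                sensitive_data=False))  # -> low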
The third piece is humans in the loop. For a long time to come, for years and years and years to come, humans are going to have to play an important role in AI governance. So what does that mean? What are the roles? What are the responsibilities? Ultimately, humans are going to be accountable for AI decision making. Humans need to review and ascertain whether an AI system is ready to be deployed or not. If something goes wrong, who gets blamed, or who’s held to account? How do you think about these various roles and responsibilities? Each company can determine what I call their own accountability framework, but it’s going to be a variety of roles across different groups: legal, risk and compliance, HR potentially, the engineering and data science teams, right? Different groups will have different roles. So how do you assign and establish those roles and responsibilities? That’s the third piece.
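One way to picture an accountability framework in code: a hypothetical sign-off gate where deployment is blocked until named humans in required roles have approved the system. The roles and names are invented for illustration:

from dataclasses import dataclass

@dataclass
class Approval:
    reviewer: str  # a named, accountable human
    role: str      # e.g., "Legal", "HR", "Data Science"
    approved: bool

def ready_to_deploy(approvals, required_roles):
    """The system ships only when every required role has a named
    reviewer who has approved it."""
    approved_roles = {a.role for a in approvals if a.approved}
    return required_roles <= approved_roles

approvals = [Approval("J. Smith", "Legal", True),
             Approval("A. Patel", "Data Science", True)]
# False: HR has not signed off yet, so deployment stays blocked.
print(ready_to_deploy(approvals, {"Legal", "Data Science", "HR"}))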
The fourth piece is compliance. And there’s going to be internal compliance and external compliance. Internal compliance is: what are your own internal policies when it comes to AI? How transparent do you want to be around AI? For instance, are you going to let every candidate know that they are interacting with AI? Some of this is a regulatory expectation, but some of it is not. It’s up to you. How do you want to establish your guardrails? Certain companies are uncomfortable with certain types of AI systems.
Like certain companies are just not okay with video interviewing. Other companies are. Those are your internal values and guardrails that can only be determined by you. And so you need to set those policies, you need to establish those things, and then make sure that they’re being enforced. So that’s internal compliance. Then there’s external compliance. And I talk about this: there are over 30 laws now globally around AI regulation.
And that is going to continue to grow. So you need to make sure you’re compliant with that. And there’s nothing new about this. We’ve had to do compliance for labor laws, and labor laws can vary by state and region and country. You have data governance laws, data privacy laws that you have to deal with, right? So from that standpoint, external compliance is nothing new. It just could be more complicated in some ways.
And then the last piece, the fifth leg of this, is just testing and monitoring. Whether you are building your own AI systems or getting them from vendors, you need to periodically test these systems to make sure they’re working as intended. There are three legs of the testing: you want to test for performance, you want to test for bias, and you want to test for data, data meaning data privacy, data security, data leakage, any of those things. So just to quickly summarize again, there are five pillars of good AI governance. The devil’s in the details, and each company is going to fill these in according to their own values and thoughts. But it’s: number one, inventory your applications. Number two, assess the risk. Number three, establish clear roles and responsibilities and accountability. Number four, compliance, both internal policies and external regulations. And number five, testing and monitoring. So hopefully that gives people a good starting point on what AI governance is.
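Pulling that fifth pillar into a skeleton: one monitoring cycle that exercises all three legs. The thresholds and inputs below are placeholders for illustration, not recommended values:

def periodic_review(performance_score, bias_ratio, data_leaks_found):
    """Flags raised by one monitoring cycle across the three legs.
    Thresholds are placeholders; real ones depend on the system
    and its risk tier."""
    flags = []
    if performance_score < 0.80:  # leg 1: performance
        flags.append("performance below target")
    if bias_ratio < 0.80:         # leg 2: bias (four-fifths heuristic, shown later)
        flags.append("possible adverse impact")
    if data_leaks_found > 0:      # leg 3: data privacy and security
        flags.append("data leakage detected")
    return flags

print(periodic_review(0.91, 0.72, 0))
# -> ['possible adverse impact']: escalate to the accountable humans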
Jenna Hinrichsen
Yes, and I think your point about different industries: it’s going to be completely different in different industries. You mentioned finance and HR. If you’re a financial company, the governance is going to look different than it does for, say, a manufacturing company. So there’s not going to be a one-size-fits-all governance model. So really customizing, as you said, to get to the core of your internal values and beliefs and how the organization works, and then complying with the external laws. It’s going to look different for everybody.
Guru Sethupathy
Absolutely. And look, even things like, as a quick example I can share with you, bias, right? How do you identify bias? That’s going to look different by industry, because the EEOC might have different standards than the standards in lending and banking and finance, right? So different industries are going to have different definitions, different standards, different expectations. And then your own company, like you said, is going to fill in the details in different ways. Some companies want to be more transparent, some want to be less transparent, and some have other types of trade-offs that they have to manage too. And so, yes, while this is a great framework and a great starting point, the details have to be filled in by organizations.
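To make one such standard concrete: in US employment contexts, a common screening heuristic is the EEOC’s four-fifths rule, under which a selection rate for one group below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal computation, with made-up numbers:

def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(rates):
    """Four-fifths rule: lowest group selection rate divided by the
    highest. A ratio below 0.8 is a common red flag, not a verdict."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}
print(adverse_impact_ratio(rates))  # 0.6: below 0.8, so investigate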
Jenna Hinrichsen
Right. And humans in the loop, I love that. It’s such an important piece of it. And I believe firmly that humans are never going to go away from these things. There’s always going to be a place for humans, but it’s finding what works best with the technology and then really using the technology as, I like to say, the co-pilot. So the human is driving, but you’ve got a co-pilot, and that’s your AI, or whatever systems or tools you’re using. You’re never letting it run on autopilot, where it’s just making its own decisions and doing its own things, because that’s going to set you up to fail. Okay. Well, let’s talk a little bit about some current events, because we’ve had some things in the news recently about lawsuits. I think one that people in the recruiting industry may be familiar with would be the Workday class action lawsuit.
Guru Sethupathy
Before we do that, can I hit on one more central point? It’s something you and I talked about even in our pre-chat: a lot of people, when you talk about AI governance, think, oh my gosh, it’s more governance, this is going to slow down innovation, all of that. And this is actually going to segue into the next part of our conversation. But I do want to hit on this point, and I think this is very important.
Jenna Hinrichsen
Please do!
Guru Sethupathy
Absolutely, if you do onerous governance, it’s a pain and it’s going to slow down innovation, absolutely. But here’s the interesting thing: smart governance actually speeds up innovation and adoption. Okay, let me give you an example that really draws this out. If I told you that tomorrow, stop signs, traffic lights, traffic cops, all of that was going away. No more need for driver’s license testing for people to be able to drive cars. Car insurance was going away. Speed limits were going away. And the sensors on your cars were going away. Would you feel more comfortable or less comfortable hopping in your car and driving?
Jenna Hinrichsen
Less comfortable.
Guru Sethupathy
Look, my point being, governance matters when you’re talking about a very powerful technology that has huge upside but also quite large downsides. And the automobile is a great example because everyone understands the car, right? We all drive cars, we get how the whole ecosystem operates and how it functions. We get the value from vehicles, but we also get the downsides and the regulations around them. We understand all of that. And all of us want all of the things I mentioned to stick around. We want the traffic lights. We want the stop signs. We might complain about them sometimes, but if all that went away, the regulations, the speed limits, the technologies, we would be scared to drive. It would be an absolute disaster. And so similarly here. I’ve experienced this firsthand. When I was at Capital One, I came in with a very ambitious agenda around analytics and machine learning and AI in the HR ecosystem, and I just felt a lot of pushback. And it took me a while to realize that the pushback was actually fair. It’s because people don’t understand this technology and they are wary of it. And so it was only when I started to increase transparency that I brought them in to how we worked.
I established a bit of a governance program, one that laddered up to the enterprise governance program but that also established some new guidelines in HR. Once we did that, it was actually incredible how much buy-in we got and how much faster we were able to move.
Jenna Hinrichsen
Right, because people fear what they don’t know, and so it’s easier to just say, no, no, we can’t do that.
Guru Sethupathy
But if you are willing to work with folks, increase transparency, and put processes and procedures in place, once you let people know, this is how we do things, this is how we test things, this is how we review things, we have these guardrails here, et cetera, and even bring them into the loop and say, hey, do you want to be a reviewer? Do you want to review our process? Then you actually build that transparency and that trust, and you go a lot faster. Now, of course, you can go too far and, again, slow things down. But if you find a middle ground where you build the right level of governance and transparency, you can go a lot faster. And I share this example a lot because I’ve lived it, and I can tell you, governance does not have to be an inhibitor. It doesn’t have to be a trade-off between governance and innovation. You can actually go faster.
Jenna Hinrichsen
It’s like anything else in life: it seems overwhelming. You don’t know the amount of work, so you fear what it could take. You don’t know all the steps, you don’t know how to do it or how to organize it, and so it’s easier to walk away from it. But I think it’s like anything else in life: you put the work in upfront and it pays off in the long run. It’s like the old saying, you get out of it what you put into it. And not sugarcoating it, it is going to take some time upfront, but the dividends are going to pay off in the long run. Okay, good. Thank you for adding that. I think that’s great and really important for people to understand, because it is such an overwhelming topic for most people.
Let’s talk a little bit about current events. The Workday class action lawsuit has been all over the news, so let’s talk about that a little bit. It’s about AI bias, and my understanding is it’s age discrimination. Tell us your thoughts on that, and just give us kind of an overview of how this happened and where this will likely go.
Guru Sethupathy
Yeah, a couple of things I’ll say. So what’s interesting here is that the Workday technology being adjudicated is actually not an AI system. My guess is it was a very simple, rule-based filtering mechanism for candidates, right?
I’m not 100% sure, but I’m quite certain that it was not AI-based, not artificial-intelligence-based. But what this boils down to, in the regulatory landscape, is this concept of AEDTs, automated employment decision tools. It was done in an automated fashion, and again, automation doesn’t necessarily mean AI. This is where I think a lot of the confusion comes from. There can be rules-based automation; we’ve had that kind of stuff for a long time, right? But in this case, it was automatically filtering candidates. And it appears, at least, I don’t know the details of the evidence, I don’t think anyone does, but it appears at least that candidates felt like they were being discriminated against. So point number one that I want to make is that this lawsuit is actually not about AI, but it is about AEDTs. So if you have any kind of technology that’s making automated decisions, you need to have governance around it. And number two, obviously, you need to have governance around AI systems. The next point I want to make is, look, this is clearly moving forward. And at the end of the day, whether Workday wins or not, it has created a lot of pain for them, right? Whether it’s legal fees, lost revenue, brand reputation, all of these kinds of things. And so I think this is an example for companies to learn from. Why go through this? Set up governance. The cost of setting up governance is way less than the pain that Workday is enduring at this point, right?
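To see how a purely rules-based AEDT can still discriminate, consider this hypothetical filter (invented for illustration; it is not Workday’s actual logic). A graduation-year cutoff is an obvious proxy for age:

def passes_filter(candidate):
    """Hypothetical rules-based screen. No machine learning involved,
    yet the graduation-year cutoff quietly acts as an age proxy."""
    return (
        candidate.get("years_experience", 0) >= 3
        and candidate.get("graduation_year", 0) >= 2010  # age proxy
    )

# A candidate with 25 years of experience is silently rejected:
print(passes_filter({"years_experience": 25, "graduation_year": 1994}))  # False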
Jenna Hinrichsen
Yes. And I think other companies are afraid. They’re seeing it and they’re like, oh my gosh, are we next?
Guru Sethupathy
Exactly. So this is what I tell people: hey, with governance, again, you don’t have to do something so complicated. There’s smart governance. And this is one of the things we help with. While we’re a technology company, we also provide consulting to share, hey, look, this is what good governance looks like, this is what smart governance looks like. You don’t have to go overboard. But if you do these basics, you’re going to get really high ROI in protecting yourself. So that’s the second piece: start implementing smart governance. The third is, I do think it’s going to be a lot easier to sue over an AI system or an AI technology than it is to sue individual decision makers. And the reason for that is, if you think in the TA context, if you have individual recruiters or hiring managers making decisions in a federated fashion, it’s hard to establish systemic bias. But with an AI system, it’s easier to say that particular system has bias.
So I do anticipate lawsuits will be frequent. I’ve even heard, while the Workday case has gotten a lot of publicity, that the number of lawsuits in the TA space is increasing in general over time. And so that is something companies need to be aware of. Another point I want to raise here: while Workday is being sued, at the end of the day, companies themselves, employers themselves, are also on the hook.
Okay, so if you’re using technologies, if you’re using AI, and you’re using them in a way that is inappropriate, and for whatever reason there’s bias happening, you’re on the hook. You can’t just blame the vendor. The vendor has culpability too, and at the end of the day, it’s something for the employer and the vendor to figure out, how that shared culpability works. But ultimately, the employer is going to be on the hook. And so this is something vendors should care about, because if they can’t build trust in the ecosystem, then no employer is going to buy their products; it affects their ability to go win deals. And employers should care about this because ultimately they’re going to be held responsible from a lawsuit standpoint as the ultimate users. So both.
And this is why we even have a document where we say: employers, ask questions of your vendor. Be tough. At the end of the day, this is your responsibility. You’re going to be held responsible and culpable. So do the pre-work in the RFP process and the sales cycle to ask tough questions. We even have a list of questions that we recommend, and so on and so forth. But do that work upfront, and do it on an ongoing basis too.
It’s interesting. I see a lot of companies who are jumping into AI, Jenna, without doing any governance. And that feels very risky to me. Then I see other companies who are just terrified of AI and not using it at all. I often say the number one risk a business can take today is to not investigate, invest in, and implement AI. Because if you’re not doing that, your competitors are just going to pass you by.
Jenna Hinrichsen
You’ll be antiquated in five years. If you last that long.
Guru Sethupathy
Yes, implementing AI in your organization is going to take time. It’s going to take years. There’s a lot of complexity around it. So if you’re not already investigating it now, you’re just going to be more and more behind the eight ball. So that to me is a huge risk. The second biggest risk you can take is to not govern the AI that you are implementing. So those are the two biggest risks that I see out there. Companies should be investing in AI and they should be governing AI. And those are the companies that are going to win.
Jenna Hinrichsen
Yep, agreed. This is a good overview of that. Let’s talk a little bit about another current event, which is the proposed 10-year ban on state laws regulating AI. Tell us what this is all about.
Guru Sethupathy
Yeah. So this ban is not official, right? It has not gone into effect, and I actually think the odds of it going into effect are pretty low. But let me talk through what’s happened here. The House of Representatives just recently passed a budget bill. That’s just part of their job: what’s the budget for the next fiscal year? And they’ve passed a budget bill. Now, this budget bill is hundreds of pages long.
And one of the things they’ve inserted in there is a clause, a section, banning states from passing AI regulations for 10 years. And this is on the back of a lot of local and state laws that have been passed or are being considered. We already know New York City has a local law, and Colorado, Illinois, California, and Maryland all have laws around AI.
And there are dozens of other states in various stages. Even Texas just recently passed something in their House and Senate that’s going to the governor’s desk. So look, many states are passing laws. And what’s happening is the House, concerned ostensibly about slowing innovation, is saying, hey, no more until we get federal guidance. Now, I’ll share why I think this is unlikely to happen, and also why I think it may not be a great idea. So first, I think this is unlikely to happen for a couple of reasons.
Apparently several House members approved this bill and didn’t even realize this was in there, and now they’re coming out and speaking against it. So that’s part one. Part two is that it’s now sitting in the Senate, and the Senate just does not love this bill. In fact, there’s bipartisan rejection of this particular ban, this moratorium: obviously many Democrats, and even some Republicans. By the way, and I’m going to nerd out a little bit here, I won’t linger on this point too long, but the way our budget process works is that if you do the budget through reconciliation, you cannot put non-budgetary items in the budget bill.
Well, guess what? This AI moratorium has nothing to do with the budget. And according to Senate rules, you can’t have something like this in the bill and have it pass the Senate. Now let’s see what they say and whether the Senate parliamentarian strikes it down, but it feels unlikely that this is going to be allowed. And then there are even constitutional challenges to this.
The way our system works, states should be allowed to do things. Almost by default, the states have their rights, and you need to make an argument for why you are enforcing something at the federal level as opposed to the state level. And no such argument was made. So if anyone wants to dispute this, it can go up to the Supreme Court, and I’d be surprised if the Supreme Court said, hey, this is allowed, right?
So there are many, many hurdles for this to actually get implemented, and that’s why I think it’s unlikely it will be. I also think it’s not a great idea, for the following reason. If they had said, hey, look, we don’t love all the state laws, this is confusing people, and instead, here’s federal guidance that simplifies things, hey, I might have been supportive of that. But all they said was, hey, no state laws, and you’ll just have to wait for our federal guidance, which may never come. Which means nothing. And I think nothing is not a great idea either, because everyone’s worried about this stuff.
Jenna Hinrichsen
Agreed, yeah. I think this is such an interesting topic, because there are concerns on both sides: letting the states have free rein when they don’t necessarily know what the best thing is to do, while the federal government, as you said, is not offering solutions right now for how to best manage it. And so it’s kind of a free-for-all. We’re saying either you do nothing, or you do everything based on how we say you should do it. So yeah, it’ll be interesting to watch this one play out.
Guru Sethupathy
One of the biggest arguments for how we do policymaking in this country, for states’ rights, is that the states are laboratories of experimentation. We talk about innovation in the tech sense, but innovation in the policy sense is also important. And one of the beauties of having it at the state level is that states get to experiment. Different states can try different things and learn from each other. They can say, hey, look, that particular law over in that state slowed down innovation, let’s not do that, let’s do it a different way. Or, hey, that state did it really well, let’s copy that. There’s a term for this: the states are the laboratories of democracy, because you’re able to experiment, learn, and test. So why would we think federal guidance would be the right type of guidance? In fact, I think states learned from the New York City local law, which I don’t think was a great law. States learned from that and said, hey, let’s not do it that way, let’s do it a different way. And people often argue this point naturally in other circumstances; it’s just funny that they’ve changed their minds in this particular case. But I actually think allowing states to experiment with different policies is a good idea, because we don’t know. We don’t know what’s going to be the right balance and what’s going to strike the right chord. Letting states run with it, and seeing which ones strike that balance well and which ones don’t, I think is actually a good idea.
Jenna Hinrichsen
I agree. I think that makes a lot of sense. And I think it’s overwhelming, so people look for other people or experts to make that decision. But as you’re saying, we’re still learning, and that’s part of the process. We’re going to have to go through some of this to figure out what works and what doesn’t. One more topic, about something I read recently that I wanted to get your thoughts on. In the recruiting and HR space, there’s long been a practice of, I guess we call them fake resumes, where firms would create resumes to entice an organization to talk to them, saying, I have this candidate for you, to start a conversation, but maybe that candidate doesn’t really exist. So this has been around for a long time, but AI is changing that game a little bit. I read an article recently about fake job applicants being deepfaked into existence and companies being manipulated into hiring them. Once they’re hired, these fake candidates, if you will, can steal sensitive information. So how does this fit into this whole AI governance conversation? And how can companies, from your perspective, prevent that from happening?
Guru Sethupathy
Yeah, great question. I actually read that article too. Just for the audience, what we’re referring to is the ability to create deepfake videos that simulate a real person and use that to pass an interview. The person running things behind the scenes is the one who actually gets hired, so to speak, and is then able to steal information or commit fraud or whatever. Look, this is just an extension of many types of fraud we’ve seen before in the context of hiring. Jenna, you and I were talking about this beforehand. Going a little further back, even during the COVID days, before AI was being used this way, because everything was remote you had a lot of fake candidates. Off-screen, remote work. There was a lot of reporting about fake candidates, or people who were employed by 10 different companies because they were working remotely. Or, in earlier times, people lying, making up completely fabricated resumes, for instance. Or recruiting firms putting forth fake candidates to then be able to send you other types of candidates. These are extensions of ploys that we’ve seen for decades in the TA space. And so in some sense, I think TA has thought about these problems before. How do you handle the situation? How do you prevent it? How do you fix it after it happens? There are verification tools and technologies out there to verify people’s identities; there’s a bunch of things you can do there. At the same time, it’s not obvious to me that this is just an AI-specific problem; it’s a broader problem of fake candidates. But the AI component definitely falls into AI governance, because now there are AI technologies being used, whether it’s your own tools and technologies that are interacting with those candidates. One of the fascinating things is that companies are using AI interviewers. So are you going to have an AI interviewer interviewing an AI candidate?
But this is why this is AI governance, right? Because at the end of the day, somewhere you need a human in the loop. And so, closing the loop back to where our conversation started, companies need to establish frameworks, policies, and procedures. And one of those questions is: where is the human in this process? What are the guideposts, the checkpoints, to verify things, to evaluate things, and, if something goes wrong, to rectify things? So again, this is a great example. Bias and other things are good examples, but this is another example of why you need policies, procedures, frameworks, and humans in the loop to govern these systems.
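One way to picture those checkpoints: a hiring pipeline where certain stages cannot be passed without a human verification step. The stage names and the rule itself are illustrative assumptions:

def advance_candidate(stage, human_verified_identity):
    """Hypothetical pipeline rule: AI can screen early stages, but no
    candidate reaches the gated stages without a human having
    verified their identity (e.g., a live conversation)."""
    gated_stages = {"final_interview", "offer"}
    if stage in gated_stages and not human_verified_identity:
        return "hold: human identity verification required"
    return "advance to " + stage

print(advance_candidate("screening", False))  # AI alone is fine here
print(advance_candidate("offer", False))      # blocked until verified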
Jenna Hinrichsen
Yeah, you can’t just set it on autopilot. Innovation is key, and making your organization and your employees more efficient is key, but that doesn’t mean being extreme and saying, let’s let technology do everything. The company has to decide what pieces are going to be run by humans and what pieces are going to be run by technology.
Guru Sethupathy
I actually saw this article. That point is so interesting. I think it’s Moderna, I could be wrong, but I think Moderna just established a new role where they combined the CHRO role and the CTO role. And this new role, this person, is going to be responsible for figuring out exactly the thing you just mentioned: what jobs and tasks are going to be done by AI, and what jobs and tasks are going to be done by humans. And that’s why this role is a combined CTO-CHRO role. Isn’t that fascinating?
Jenna Hinrichsen
It’s interesting, because I feel there’s almost a need for two of those people to collaborate, or to at least be dotted-line accountable to it, because one of them does not know the other’s area at the level of expertise that the expert does.
Guru Sethupathy
Well, in this particular case, I think they’re imagining that this person will, right? They would have had years of experience both as a CTO and as a CHRO. In fact, I think the person who took on the role, and I could be wrong on this, so don’t hold me to it, was most recently a CHRO, though I think they also had a technology background. So you’re right. Maybe you can find those unicorns who have both experiences, or maybe you do this as a combination of the two roles with a dotted line. That’s right.
Jenna Hinrichsen
Yeah, I had not heard that; that is really interesting. But it makes some sense, so I’d be interested to watch that play out and see how it works. Okay, well, I have one last question for you. I like to end every podcast asking the same question, so it does not have to be about this topic. Looking back on your career and the things that have really stood out and had an impact on you, if you could give one piece of career advice to people today, what would be your number one career tip?
Guru Sethupathy
This will be a combination of relevance to the AI world, but also trends that are happening more broadly. So the first thing I’ll say is, regardless of what job function you’re in, learn about AI and how AI can help in your function. Because in almost any job, especially white-collar jobs, AI can make you more productive.
So why wouldn’t you take advantage of that, learn how it can be helpful, and use it to make yourself more productive? I think there’s no downside to that whatsoever. Along with that, I do think this field of AI governance is going to be one of the more rapidly growing fields in the future. Just think about it this way: 10 or 20 years ago, I don’t think there was a role called CISO, chief information security officer. Now almost every company has a CISO, and CISOs have large organizations. Think about all the roles that have been created in the CISO organization, from data analysts to cyber experts, all that kind of stuff. And in fact, one of the questions people often ask is, what jobs is AI going to replace? But they don’t ask, what jobs is AI going to create?
Jenna Hinrichsen
Oh my gosh, that is what I’m always talking about when people bring this up. It creates so many jobs. Yes, it changes things, but it opens up doors that we never had access to before. So I love that. Yeah.
Guru Sethupathy
Exactly. And so I’ll finish on that point. Data analyst was not a job 30 years ago; there are millions of data analysts today. CISO and cyber were not jobs 30 years ago; today there are so many jobs in that space. Similarly, AI governance is going to be a huge field. And the reason it’s going to be a huge field is because, at the end of the day,
you’re going to need humans to guide AI systems. You’re going to need humans to monitor AI systems to some degree. You’re going to need humans to infuse context and infuse the values of the company into these systems. You’re going to need humans to govern them from a roles and compliance and risk and accountability standpoint, all the things we talked about. So the guidance, management, and governance of AI systems is going to be a hugely, rapidly growing area.
And there are going to be jobs created in this area. So again, if I were to give advice, I would say two things. One, think about how AI can improve the work that you do today and make you more productive. And two, start learning about AI and how these systems work, because in the future, there are going to be a lot of jobs in the field of AI governance and management.
Jenna Hinrichsen
Yeah, I think there’s opportunity for everybody. It’s impacting everybody’s job in some way. So to think that your job or your line of business is not gonna be impacted by it is scary. So be thinking about that. No, I like this. I think this is really helpful for people. And just start learning about it on your own, right? You’re not being forced to change how you do things today, but the more you know about it, the better it’s gonna help you manage your career going forward. So that’s a great tip. I love that.
This has been awesome. Thank you so much. I’m going to have to have you back again, because there are so many other topics I’d like to cover with you, Guru, but this has been amazing. And I think our audience is going to be able to learn so much from this. While the governance piece is not as new to you, I think it is newer to a lot of people. And so I’m really excited for this episode to come out and for people to learn your list of five key areas to focus on in terms of governance and really take some good takeaways from this conversation.
So thank you again for joining us and for sharing your insights and for our audience, make sure you subscribe to the podcast. And if there’s a topic that you’re interested in us covering that we haven’t covered yet, please put it in the comments and we will pick that one up. And that is it for today. So thank you again, Guru.
Guru Sethupathy
Thank you very much for having me. I enjoyed this conversation. And look, there’s a huge amount of education that needs to happen. This is also new to so many people. And so I appreciate any opportunity to partner with someone like you who has that audience base. Together, we can reach out and hopefully educate and inform people. So again, thank you for partnering with me on this.
If anyone out there has any further follow-up questions, feel free to come to www.fairnow.ai and shoot us a note and we’re happy to help you along the way.
Jenna Hinrichsen
Absolutely. Yes, and we will have you back, because there are so many topics. I was making notes while we were talking, so I have so many things I want to cover with you. Thank you so much. Have a great rest of the day.
Guru Sethupathy
Thanks so much.
About our experts

Jenna Hinrichsen
Jenna develops sourcing strategies for diverse positions across wide geographic areas, leveraging research, networking, and database mining to build a robust, diverse candidate pipeline. As a recruitment leader, she guides direction, forecasting, and decision-making, manages third-party relationships, and supports sales efforts. With a background as a staffing consultant, Jenna combines her expertise in recruitment metrics and delivery processes with a passion for learning about industries and organizations to address complex hiring challenges effectively.

Guru Sethupathy
Guru Sethupathy is the Founder and CEO of FairNow, an AI governance company focused on helping organizations manage AI risk responsibly. With more than 25 years of experience in data analytics, technology, and AI, Guru’s background spans academia, consulting, and industry leadership roles. Before founding FairNow, he held senior executive positions at Capital One, where he built and scaled AI and data teams in highly regulated environments, and worked at McKinsey & Company advising Fortune 500 companies on advanced analytics. He started his career as a professor of economics, researching the impact of technology on the workforce. Guru holds a B.S. in Computer Science from Stanford University and is passionate about bridging the gap between AI innovation and practical, human-centered governance.
