Using AI in the Due Diligence Industry - Transcript
Learn how AI can improve due diligence activities in your third-party risk management program.
Elliot Berman: Hi, I'm Elliot Berman from AML RightSource, and I want to welcome you to this episode of Third Party Risk Perspectives, and I'm here with Chris Sindik, who's going to introduce himself.
Chris Sindik: Hi, Elliot, it's Chris Sindik and I'm the director of third party risk and due diligence at Blue Umbrella. So I get involved with a lot of our projects as it relates to due diligence, third party risk management, reports, technology, database, and everything in between. Pleasure to be speaking with you today.
Elliot Berman: Thanks, Chris. You and I are doing a number of these on different topics, and today I wanted to talk with you a little bit about use cases for artificial intelligence in the due diligence space. There's obviously a mad dash to examine whether there's a use case for AI in just about everything.
As I told you earlier, I've joked about the fact that I'm waiting to buy an AI-empowered toaster. But until that's available, I'm curious if you can give us some insight as to what's driving the deployment of AI tools in the due diligence industry.
Chris Sindik: Yeah, it's a great place to start, too. And I would say that we're really in an AI boom right now. We've heard about all the different tools that are being made available to the general public and obviously to companies as well. So I think, just by virtue of it being part of the societal zeitgeist right now, it's really starting to get into the area of compliance and third party risk management.
And when it comes to due diligence specifically, I think one of the areas where people are looking to utilize it (Blue Umbrella is on the vendor side of things; we write due diligence reports) is really as another tool in our toolbox. That's really how I describe it.
So I think that we're not quite ready in every circumstance to turn the keys to the program over to AI, have it just go off and run reports and write research, and then, at the end of the day, take what it produces as pure fact. I don't think that we're there yet. But by virtue of having AI available to us to do some of the more repetitive, maybe tedious types of tasks, we can reduce our costs and hopefully reduce the amount of time spent on reports.
In today's day and age, not just in the due diligence industry, but everywhere, everyone wants it faster, better, and cheaper. So employing technology to help in various aspects, such as looking at articles, summarizing them, drafting reports in an early format, and finding different results that are out there, is something that needs to be explored.
And not only are we looking at it on the vendor side, providing information to our clients. But a really big part of it came in March, when the DOJ announced that they're going to be looking more at how technology is used in compliance programs, and that it can possibly be a factor to consider if there's ever an enforcement action or investigation. So I think that announcement really put it on the map for a lot of companies.
Elliot Berman: You touched on this a little bit but where specifically in the due diligence process do you see AI tools being deployed today?
Chris Sindik: It's a good question. And I think one of the areas where people can get better when we're talking about AI is really thinking about it in practical terms.
You may have a very large data set, and we're going to have AI look at that, and it'll draw some conclusions. Okay, what does that actually mean? Where do I start? How do I go about it? Really, in terms of how it fits in the due diligence process, it depends on the company's risk appetite, their budget, their constraints, and what they have to work with.
There's always that notion in our industry that people are having to do more with less. Less head count, less budget, et cetera. So it can be that AI is an enhancement of an existing process. For example, a lot of companies will do watch list or database screening. So with that, you'd put in the name, Chris Sindik, and you'll get some results. And from there, you have to evaluate them.
Now, if the name of the company or the name of the individual is a very common one, Jack Smith, whatever it might be, you might end up with thousands of possible hits that you would need to look at. And if you lay AI over that process to do some of the filtering, maybe you take out the hits that aren't relevant to you.
For example, the person sitting in the chair may not be as concerned about environmental enforcement actions. Okay, at that point, we'll just take those out, and AI can do that, or it'll get rid of false positives, or whatever it might be.
That's one part where it can fit in. And again, if a company is going beyond just the database screenings and searches that they would do, they can have it be a part of the research process, using AI to find pertinent issues very quickly. Putting in the company name and asking it to find any issues related to bribery or human rights or fraud or money laundering, whatever it might be, can be done in seconds.
As we know, AI can give a quick little summary: the who, what, when, where, why, et cetera. So it can be another tool to get that information a little bit more quickly, a little bit more succinctly. I think that's where it can fit.
And there are a lot of other use cases for AI, certainly with training, where it can be tailored to fit the individual learner, with large data sets, et cetera. So I think there are a lot of ways for it to fit into the process.
Elliot Berman: And when we think about the overall process as compared to AI as you just described, where do you see human analysts and their judgment fitting into the process?
Chris Sindik: As I said earlier, I don't think that a lot of compliance programs, and those that are managing third party risks, are entirely comfortable with the idea of relying entirely on AI results. It's a part of the process. It's a part of the research that's being done.
It's open source research most of the time, and certainly there are proprietary databases, techniques, et cetera, that will be used. I think about that question, too, as: where don't they come into the process?
And I think it might be things like the information gathering side of it and summarization. But even within that summarization, you can leave out details that would be relevant, depending on the prompting that goes into AI. You may leave out the names of individuals, the names of related companies, specific details, other parties that were involved, appeals overturning a case, whatever it might be. Those nuances are sometimes lost on AI, though certainly it's getting better every single day, too.
So I think it takes someone who has that contextual knowledge of a market, of a country, to understand the findings that are there. Sometimes there can be a criminal action, or accusations, or an investigation of a company, but how big of a deal is it? Is this something that happens all the time in that particular market or jurisdiction?
Is it politically motivated? Sometimes. Is it legitimate? And I think that's something where individuals who are familiar with this industry and with the research they're doing can give that additional context and analysis to really help connect the dots.
It's something that we can use to gather information, but the role of human analysts and their judgment is to review what's being done by the robot, if you will, to make sure it's fit for purpose and it hasn't been corrupted somehow along the way. We've all seen those stories of AI generating something very bizarre that doesn't fit, so it's that reality check.
Elliot Berman: Given the current state of AI, which is rapid change, variable ranges of adoption, and concerns about whether it is a black box, which it probably is, and whether we can see inside, or whether we're okay if we can't and we'll just see what comes out. How do you talk with clients about the deployment of AI tools?
Chris Sindik: I think it's about starting out small, if you will. To say, okay, we're going to set up this massive program, AI is going to be doing a lot of different things for us, it's going to be sweeping across a lot of different systems. If you go into it whole hog right away, there might be some lessons learned in that process.
Although I'm sure some companies are choosing to take that approach, I think maybe starting small. Getting familiar with it, getting familiar with the tools, making sure that it is fit for purpose. That the users understand it, that they realize how it fits into their daily life, the day-to-day comings and goings of the program, and then the overall goals, too.
I think starting small. I use the example of possibly having AI look at database hits, or using it as another means of research. That's a good way to start out, where the stakes can be a little bit smaller, and possibly double checking that along the way. Hey, has this company ever had legal action against them? Okay, AI says no, let's go ahead and check the courts. Oops, there actually is one. And it turns out that it was because the company used their former name and no one had any idea about that, or it was the parent company that was named in the lawsuit, but they were the ones doing the wrongdoing.
So I think it's starting out small and figuring out where the holes are, if any. And I think sometimes there are, because AI is only smart enough to do what we tell it, sometimes. So it's really that trust, but verify approach, making sure that you're satisfied with the results.
And I think, too, as is the case with so many things in our industry, it's to take a risk based approach. If something is low risk, maybe you're okay with relying more on AI. But if it's that super high or that extreme risk, okay, let's maybe have some others in the driver's seat as opposed to AI. Do some of that more critical, analytical thinking ourselves. And use some of the techniques that people are more familiar with and trust.
And we'll get there with AI. I know with this discussion I've been, hey, it's spooky out there, don't trust it. But I think that it is getting a lot better. And frankly, everyone's getting their arms around it to understand what it can and can't do. And it's that can't do that can get us into trouble a little bit more often, so I think we want to focus on that a little bit more and bring it up to the standards that we want to see.
Elliot Berman: Chris, this has been really helpful to me in gaining a better understanding of the kinds of things that we've been talking about. Is there any one last comment you'd like to make?
Chris Sindik: As I said before, I don't want to sound like I'm anti-AI. I'm actually very much excited for what it can do for us on the vendor side and also for clients. I think that it can be a watershed moment for people to see changes to their program, to bring it maybe not yet to the AI standard, but beyond the pre-AI standard, and see if they can get there one day.
With the number of topics everyone's expected to cover, and budgets maybe not keeping up with those demands, technology and AI can help to fill those gaps. And I think there are appropriate ways to use it right now, today, that make a lot of sense. And then there are those other ones we want to be cautiously optimistic about.
Elliot Berman: That's great. So Chris, thanks again for another great conversation. I'm looking forward to our next conversation and I will talk with you soon.
Chris Sindik: Hey, sounds great.