Today I'm joined by Yogesh Mudgal. Yogesh currently works for Citi, where he is a director in operational risk management. In 2019, Yogesh formed an informal group of like-minded professionals called AIRS, AI Risk and Security. AIRS seeks to promote, educate, and advance AI and machine learning governance for the financial services industry by focusing on risk identification, categorization, and mitigation. Welcome, Yogesh.

Thank you, Mary. Thank you for having me.

I'm so happy to have you here. To ground us in your perspective, can you tell us about your career journey and your role at Citi?

Starting with Citi, I head the emerging tech, engineering, and architecture group in operational risk at Citi. Most of my career has been devoted to risk management and information security. For the last couple of years, I've been more focused on identifying and managing risks with emerging technologies in general and AI in particular. To that end, as you said earlier, in 2019 I founded AIRS, which stands for AI Risk and Security, a group of like-minded professionals. Essentially, we are trying to see whether the industry can come together on a common view of identifying AI risks and of ensuring that AI is implemented safely and securely.

AI can be viewed as a risk to firms, and those risks are often framed as reputational risk, operational risk, or regulatory risk. Could you describe these risks, and in particular which ones are most concerning in your experience?

In my opinion, risk should be managed for any technology or set of technologies, including AI. The risks related to AI for an institution depend on various variables, including how AI is implemented, the strength of existing controls, the institution's risk profile, and its risk appetite. As described in the paper, which we probably haven't mentioned yet, AIRS published a paper through Wharton, and in it we describe various risk categories for AI. Some of the categories I will highlight here are data-related risks, AI attacks, testing and trust, and compliance. Data-related risks include subcategories like learning limitations and, of course, data quality; to summarize, an AI system is generally only as effective as the data used to train it and the scenarios considered while training it. On AI attacks, there has been industry research on attacks against ML models, covering data privacy attacks, data poisoning, adversarial inputs, and model extraction. Then there is testing and trust, which is probably the most talked-about topic in the AI field and includes bias, explainability, and things of that nature. Finally, there is compliance, which covers internal policies as well as regulatory requirements. Those are the broad categories into which we can summarize AI risks.

Within those risk areas at AIRS, does any one of them sit at a higher level, or does it depend on the particular industry or organization and their willingness to control the risk? I'm curious how that plays out: is it the people, the company, or the industry that decides which of these risks are prioritized?
Yes, you're absolutely right. It really depends on where AI is implemented, how it is implemented, and the risk appetite of the institution. We cannot say one size fits all, or in this case, that one risk is always the top risk; it really depends on the implementation. For example, if somebody has an AI model sitting out on a cloud, the risk of AI and ML attacks probably becomes more heightened depending on how exposed that model is. For a model used internally within the organization, those attacks are probably less likely, but compliance risk may be more prevalent. So it really depends on the use case, and there is no single risk that I would say is higher or lower.

That leads into peeling back the onion a little more, because you've talked about the data that's used, but we also hear about the algorithm. More and more we hear about algorithmic transparency in the media and at conferences, similar to what you mentioned earlier. What does that actually mean? Sometimes it's called technical transparency, which would be something like revealing source code, versus calibrated transparency. Could you give a little context on those terms and, more broadly, on how algorithms are expected to become more transparent? As a leader, how do you address that, and how might consumers or investors look at it?

First of all, I agree there are a lot of terms floating around, and transparency is certainly one of them. There are increased conversations and discussions regarding transparency and explainability. I think it's important to consider who the audience is, because depending on the stakeholder, the level of transparency, or the need for it, may differ. For example, the transparency required by an internal auditor differs from that required by a regulator, a developer, or the end user. The analogy I always give is flying in a plane. I bet my life on it, but I have no idea how the engine works, and I don't know how good or bad the pilot is. The important factor there is trust: I trust that the system is working, that the plane is running well, and that I'll be taken to my destination. The common thread here is trust. Setting aside the hype around AI, there is some genuine fear of, or harm related to, AI that has not been fully understood, so I think it's important that we build trust in AI systems. The important factor for transparency, or calibrated transparency, in my opinion, is that trust. If a stakeholder, on a need-to-know basis, understands the whys, the hows, and the trade-offs of using the AI system, whether that stakeholder is an internal auditor, a regulator, or the end user, then the appropriate transparency can be granted and trust in that AI system can be built.

We've talked about the data, the transparency, and the algorithms, and that leads to the possibility that people increasingly ask to look at the source code, or ask for it to be revealed. Are there problems with revealing the source code? Could you talk a little about that, because that might be something people are wondering how to manage if it's coming down the pike.
Whether we're talking about revealing source code or, in extreme cases, a compromise of source code, and whether it's an AI system or a non-AI system, I think most of the risks remain the same. Intellectual property is obviously at the top of the list, and there are other considerations: if you reveal the source code, are you revealing the methodology used to build the system or to arrive at a decision, and depending on the stakeholder, do you want to reveal that or not? Could the source code be used to implant or embed a backdoor into the system? There are various risks involved when we talk about revealing source code. In some cases it might have to be done because a law or a legal situation demands it, but generally speaking, source code is not revealed that easily, should not be revealed, and companies don't do it.

It might be something where companies need to better educate consumers, clients, and customers about why revealing the source code is harmful: it is not in their best interest, because it exposes them to more risk, including some of the risks you talked about earlier. It goes back to your analogy about flying on the plane. You purchased a ticket because you know there are experts running the plane, and a certain amount of trust has been established between you and the airline. You expect them to keep things running, and you don't need to know exactly how it's being run, but when there is a very important risk, they are going to let you know. Well, thank you so very much for all that information and for talking to me today. I appreciate it. Thank you, Yogesh.

Thank you, Mary. Thanks for having me.