Beyond AI to XAI… The Questions AI Must Answer

1.
Some time ago, the matches between AlphaGo and Lee Sedol 9-dan were broadcast live. Whenever AlphaGo placed a stone the commentators had to explain it, but when the move was unlike anything they had seen before, they often could not interpret it. In some cases they only grasped the meaning of an early move after the game had passed its midpoint. Anyone watching the game naturally asks:

“Why did it play this move?”

Professional players usually review and explain their games after they are over. Compared with that, another question arises:

“Could AlphaGo commentate on its own game?”

I thought AlphaGo could not. But looking at “In 2016, this is how AlphaGo saw Lee Sedol’s moves - 1”, AlphaGo was in fact giving commentary in its own way. It never put anything into words, but at the moment of each decision it held an anticipated flow of stones, a reference diagram, and this was later made public. The stone marked 1 in the figure above is the move AlphaGo actually played, together with the sequence it reportedly anticipated when choosing it.


Coming back to the original question: AlphaGo does commentate on the game, in its own way. It is not the kind of commentary we are used to, of course…

Let us ask another question, one we hear often in society.

“By what criteria does the Daum news service’s algorithm make its recommendations?”
“By what criteria do the three delivery platforms, Baemin, Coupang Eats, and Yogiyo, assign jobs to delivery workers?”

On these questions, experts have occasionally investigated by auditing the design and source code, but stakeholders and consumers have no way to verify things for themselves. Yet another question is possible.

“By what criteria does a robo-advisor select stocks, and for what reasons does it reselect them?”

Artificial intelligence learns from vast amounts of data and makes decisions based on what it has learned. At first, society paid attention only to the decisions; as long as nothing became a social problem, no one cared about the process itself. But as AI becomes ubiquitous and its influence grows, the interest broadens.

“On what grounds did it make this decision?”

Between people, this is an obvious question. We have now reached the stage of asking that obvious question of artificial intelligence, or more precisely, of the individuals and organizations that provide AI services. This is explainable AI (Explainable AI, XAI). XAI is a hot topic in finance as well: the Financial Services Commission has already presented explainable AI as a standard in its guide on developing and using financial AI, which I have introduced before.

2.
The Harvard Business Review article “When — and Why — You Should Explain How Your AI Works” shows when and why explainable AI is needed.

What Makes an Explanation Good?

A good explanation should be intelligible to its intended audience, and it should be useful, in the sense that it helps that audience achieve their goals. When it comes to explainable AI, there are a variety of stakeholders that might need to understand how an AI made a decision: regulators, end-users, data scientists, executives charged with protecting the organization’s brand, and impacted consumers, to name a few. All of these groups have different skill sets, knowledge, and goals — an average citizen wouldn’t likely understand a report intended for data scientists.

So, what counts as a good explanation depends on which stakeholders it’s aimed at. Different audiences often require different explanations.

For instance, a consumer turned down by a bank for a mortgage would likely want to understand why they were denied so they can make changes in their lives in order to get a better decision next time. A doctor would want to understand why the prediction about the patient’s illness was generated so they can determine whether the AI notices a pattern they do not or if the AI might be mistaken. Executives would want explanations that put them in a position to understand the ethical and reputational risks associated with the AI so they can create appropriate risk mitigation strategies or decide to make changes to their go to market strategy.

Tailoring an explanation to the audience and case at hand is easier said than done, however. It typically involves hard tradeoffs between accuracy and explainability. In general, reducing the complexity of the patterns an AI identifies makes it easier to understand how it produces the outputs it does. But, all else being equal, turning down the complexity can also mean turning down the accuracy — and thus the utility — of the AI. While data scientists have tools that offer insights into how different variables may be shaping outputs, these only offer a best guess as to what’s going on inside the model, and are generally too technical for consumers, citizens, regulators, and executives to use them in making decisions.

Organizations should resolve this tension, or at least address it, in their approach to AI, including in their policies, design, and development of models they design in-house or procure from third-party vendors. To do this, they should pay close attention to when explainability is a need to have versus a nice to have versus completely unnecessary.
When We Need Explainability

Attempting to explain how an AI creates its outputs takes time and resources; it isn’t free. This means it’s worthwhile to assess whether explainable outputs are needed in the first place for any particular use case. For instance, image recognition AI may be used to help clients tag photos of their dogs when they upload their photos to the cloud. In that case, accuracy may matter a great deal, but exactly how the model does it may not matter so much. Or take an AI that predicts when the shipment of screws will arrive at the toy factory; there may be no great need for explainability there. More generally, a good rule of thumb is that explainability is probably not a need-to-have when low risk predictions are made about entities that aren’t people. (There are exceptions, however, as when optimizing routes for the subway leads to giving greater access to that resource to some subpopulations than others).

The corollary is that explainability may matter a great deal, especially when the outputs directly bear on how people are treated. There are at least four kinds of cases to consider in this regard.
When regulatory compliance calls for it.

Someone denied a loan or a mortgage deserves an explanation as to why they were denied. Not only do they deserve that explanation as a matter of respect — simply saying “no” to an applicant and then ignoring requests for an explanation is disrespectful — but it’s also required by regulations. Financial services companies, which already require explanations for their non-AI models, will plausibly have to extend that requirement to AI models, as current and pending regulations, particularly out of the European Union, indicate.
When explainability is important so that end users can see how best to use the tool.

We don’t need to know how the engine of a car works in order to drive it. But in some cases, knowing how a model works is imperative for its effective use. For instance, an AI that flags potential cases of fraud may be used by a fraud detection agent. If they do not know why the AI flagged the transaction, they won’t know where to begin their investigation, resulting in a highly inefficient process. On the other hand, if the AI not only flags transactions as warranting further investigation but also comes with an explanation as to why the transaction was flagged, then the agent can do their work more efficiently and effectively.
When explainability could improve the system.

In some cases, data scientists can improve the accuracy of their models against relevant benchmarks by making tweaks to how it’s trained or how it operates without having a deep understanding of how it works. This is the case with image recognition AI, for example. In other cases, knowing how the system works can help in debugging AI software and making other kinds of improvements. In those cases, devoting resources to explainability can be essential for the long-term business value of the model.
When explainability can help assess fairness.

Explainability comes, broadly, in two forms: global and local. Local explanations articulate why this particular input led to this particular output, for instance, why this particular person was denied a job interview. Global explanations articulate more generally how the model transforms inputs to outputs. Put differently, they articulate the rules of the model or the rules of the game. For example, people who have this kind of medical history with these kinds of blood test results get this kind of diagnosis.

In a wide variety of cases, we need to ask whether the outputs are fair: should this person really have been denied an interview or did we unfairly assess the candidate? Even more importantly, when we’re asking someone to play by the rules of the hiring/mortgage lending/ad-receiving game, we need to assess whether the rules of the game are fair, reasonable, and generally ethically acceptable. Explanations, especially of the global variety, are thus important when we want or need to ethically assess the rules of the game; explanations enable us to see whether the rules are justified.
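The local/global distinction in the quoted passage, and the variable-attribution tools it alludes to, can be made concrete with a little code. The sketch below is my own illustration rather than anything from the HBR article; it assumes scikit-learn and a synthetic dataset standing in for whatever black-box model a lender might actually use. A shallow surrogate tree approximates the model globally (“the rules of the game”), a linear fit on perturbations around one input approximates it locally (“why this particular decision”), and permutation importance gives the rough variable-level view mentioned above.

```python
# Sketch only: global vs. local explanations for a black-box model.
# Assumes scikit-learn; the dataset and model are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
names = [f"x{i}" for i in range(X.shape[1])]

# The "black box" whose outputs we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global explanation: a shallow surrogate tree trained to mimic the black box,
# giving an approximate picture of the rules of the game.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=names))

# Local explanation: why did *this* input get *this* output?
# A linear fit on perturbations around one point (the LIME-style idea).
x0 = X[0]
perturbed = x0 + np.random.default_rng(0).normal(scale=0.5, size=(500, X.shape[1]))
local = LinearRegression().fit(perturbed, black_box.predict_proba(perturbed)[:, 1])
print(dict(zip(names, local.coef_.round(3))))

# Variable-level view: how much does each feature shape the outputs overall?
imp = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
print(dict(zip(names, imp.importances_mean.round(3))))
```

None of these outputs is an explanation a consumer or regulator could read directly; they are the raw material an organization still has to translate for each audience, which is exactly the point the HBR piece makes.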

In a 2019 interview, the head of AI at JP Morgan gave the following reasons why XAI is needed.

Knowledge at Wharton: How is AI reshaping the financial services industry?

Apoorv Saxena: AI is impacting every industry. AI is making a substantial, wide-ranging impact because it is used with data, and every industry is increasingly becoming data-driven. Companies across every industry are looking to gather and use more data. They want to better understand who their customers are, how they interact with them, the services they provide, and how they can improve those services and experiences. Every activity is becoming data-driven.

AI is allowing companies like Google, Facebook and Amazon to achieve hyper-scale. You can get personalized news feeds in real-time. A grocery store or a bookstore like Amazon can serve hundreds of millions of users globally. That is possible when you inject AI into every piece of your business process. Now, transfer this to AI and finance. The future of AI in finance is a bank that can serve billions of people and provide personalized services.


Knowledge at Wharton: What are some of the opportunities and challenges in implementing this vision?


Saxena: The opportunity is that AI will let banks provide services in much more personalized, highly scalable and customized ways. The challenges include the ability to explain your AI – what we call “AI explainability.” When AI is used, the regulatory environment requires banks to justify or rationalize decisions. JPMorgan is trying to be the leader in applying “explainability” to financial markets. Another challenge is to ensure confidentiality, since a lot of the data in finance is personal information or highly confidential.


Knowledge at Wharton: If you look at financial institutions, technology companies and telecom companies — which are all broadly involved in mobile money and offering financial services to a massive number of customers — who do you think is best positioned to win in AI and why?

Saxena: The essence of finance and banking – banking in particular – is trust. User trust is key. The person on the other side wants to trust you with their most valuable assets, and with their most valuable information. And they want you to manage these assets in a way that is compliant with regulations.

The second factor is customer service. Customers are looking for you to provide the best service possible, in a manner that conforms to the trust. If you break it down to fundamentals, finance is a service built around trust and regulation.

Anybody who can replicate that model of trust, regulatory compliance and client service is well-positioned to be a player in this space. It does require very deep domain knowledge. There are some areas of banking, like payments, which involve highly skilled operations, but which are not deep-domain. Many other financial services [require] extremely detailed and very deep domain understanding. For example, how do you manage M&As? How do you create complex securities? These are non-trivial and highly domain-specific, and there will be space for banks to continue to provide those services, given their expertise, existing client relationships and thorough understanding of the complex environment.
From “What’s Behind JPMorgan Chase’s Big Bet on Artificial Intelligence?”

To that end, JP Morgan’s AI Research group published a paper titled “Seven challenges for harmonizing explainability requirements”. It surveys the state of XAI research and identifies seven challenges.

Challenge 1. The intent and scope of explanations matter
Challenge 2. The type of data and type of model matter
Challenge 3. The human factors around intent and scope of explanations matter.
Challenge 4. There is no consensus around evaluating the correctness of explanations.
Challenge 5. XAI techniques in general lack robustness and have strong basis dependence
Challenge 6. Feature importance explanation methods can be manipulated
Challenge 7. Too detailed an explanation can compromise a proprietary model that was intended to be kept confidential.
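Challenges 5 and 6 are easy to see even in a toy setting: when two features are nearly duplicates, equally accurate models can assign the “importance” to either one, so a feature-importance explanation is not a unique, tamper-proof fact about the data. The snippet below is my own illustration, not taken from the JP Morgan paper, and assumes scikit-learn.

```python
# Sketch: with correlated features, equally accurate models can tell
# very different feature-importance stories (cf. Challenges 5 and 6).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)             # near-duplicate of x1
y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([x1, x2])

# L1 regularization tends to put all the weight on one duplicate;
# L2 spreads the weight across both. Accuracy is essentially identical.
m_l1 = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
m_l2 = LogisticRegression(penalty="l2").fit(X, y)

print("accuracy:", round(m_l1.score(X, y), 3), round(m_l2.score(X, y), 3))
print("L1 coefficients:", m_l1.coef_[0].round(2))
print("L2 coefficients:", m_l2.coef_[0].round(2))
```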


Finance is a regulated industry, and consumer protection occupies a central place in that regulation. The more AI-based services proliferate, the greater the demand for XAI becomes, and regulation around XAI keeps growing with it. This is not a problem of the future; it is a task to start preparing for today.

Finally, I close with “A Practitioner’s Guide to Machine Learning”. Most writing on machine learning demands fairly specialized knowledge, but this book can be read without it. “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable” is a guide devoted specifically to explainable AI. Give them a try.
