Explainable AI produces transparent, easily understandable models. Using a series of if-then statements, Rulex automatically produces self-explanatory logic for all its decisions. Rulex rulesets make it possible to explain a decision directly to the customer, or to give customer service agents the ability to look up the reason behind a decision.
Why is eXplainable AI more transparent than a black box?
The problem with conventional AI is very simple: it’s unexplainable. Conventional AI relies on machine learning algorithms, such as neural networks, that share one key feature: they produce “black box” predictive models, meaning mathematical functions that cannot be understood by people, not even mathematicians.
f(x) = 0.293 tanh(0.337 x1 - 0.329 x2 + 0.251 x3 - 0.288 x4 - 0.297 x5 + 0.436 x6 + 0.166 x7 - 0.184 x8 + 0.219 x9 + 0.483 x10 - 0.222 x11 + 0.173 x12 + 0.012 x13 + 0.352 x14 + 0.259 x15 + 0.176 x16 + 0.345 x17 + 0.314 x18 + 0.177 x19 - 0.329 x20 - 0.3) - 1.934 tanh(-0.233 x0 + 0.174 x1 - 0.252 x2 - 0.501 x3 - 0.125 x4 + 0.311 x5 - 0.573 x6 - 0.299 x7 + 1.123 x8 + 0.318 x9 - 1.169 x10 + 0.105 x11 - 0.429 x12 - 0.075 x13 + 0.143 x14 + 0.146 x15 - 0.531 x16 + 0.077 x17 - 0.133 x18 + 0.122 x19)
Rulex’s unique, proprietary machine learning algorithms work differently. Rulex creates predictive models in the form of first-order conditional logic rules that can be immediately understood and used by everybody. Here is an example of a Rulex clear-box predictive model.
IF customer_province in {A, B, C, D} AND damage_class in {1} AND number of days between policy start and date of accident <= 371 THEN Fraud = Yes
IF customer_province in {E, B, C, F} AND customer age > 48 AND number of days between date of accident and complaint > 1 THEN Fraud = Yes
IF customer_province in {G, H, I, J, K, L, M, N, B, O, P, Q, R, S} AND number of days between policy start and date of accident > 371 THEN Fraud = No
IF number of days between date of accident and policy end <= 2 THEN Fraud = No
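To make the clear-box idea concrete, here is a minimal Python sketch of how a ruleset like this could be encoded and applied. This is not Rulex’s actual API; the dictionary keys (customer_province, days_policy_start_to_accident, and so on) are hypothetical names chosen for the example.

```python
# Minimal sketch, not Rulex's actual API: each rule pairs a plain-Python
# condition with the outcome it predicts, mirroring the IF-THEN rules above.
# All dictionary keys are hypothetical names chosen for this illustration.

RULES = [
    (lambda c: c["customer_province"] in {"A", "B", "C", "D"}
               and c["damage_class"] == 1
               and c["days_policy_start_to_accident"] <= 371, "Yes"),
    (lambda c: c["customer_province"] in {"E", "B", "C", "F"}
               and c["customer_age"] > 48
               and c["days_accident_to_complaint"] > 1, "Yes"),
    (lambda c: c["customer_province"] in set("GHIJKLMNBOPQRS")
               and c["days_policy_start_to_accident"] > 371, "No"),
    (lambda c: c["days_accident_to_policy_end"] <= 2, "No"),
]

def fired_rules(claim):
    """Return (rule index, Fraud = Yes/No) for every rule that applies,
    so the reason behind a decision can be read off directly."""
    return [(i, f"Fraud = {out}") for i, (cond, out) in enumerate(RULES)
            if cond(claim)]
```

Calling fired_rules on a claim returns exactly the conditions that drove the decision, which is what makes the model explainable to a customer or an auditor.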
How does eXplainable AI work?
Rulex’s core machine learning algorithm, the Logic Learning Machine (LLM), works in an entirely different way from conventional AI. Rather than producing a mathematical function, it produces conditional logic rules that predict the best decision choice, in plain language that is immediately clear to process professionals. Rulex rules make every prediction fully self-explanatory.
And unlike decision trees and other algorithms that produce rules, Rulex rules are stateless and overlapping, meaning one rule can cover many cases, and many rules can cover a single case. This allows for fewer, simpler rules and provides broader coverage at the same time.
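This many-to-many relationship can be illustrated by continuing the hypothetical RULES and fired_rules sketch above: a single claim can fire several rules at once, and a single rule typically covers many claims in a batch.

```python
# Sketch of overlap, continuing the hypothetical RULES / fired_rules sketch
# above: one claim can fire several rules, and one rule covers many claims.

claims = [
    {"customer_province": "B", "damage_class": 1, "customer_age": 52,
     "days_policy_start_to_accident": 120, "days_accident_to_complaint": 5,
     "days_accident_to_policy_end": 300},
    # ... more historical claims would go here ...
]

for claim in claims:
    # Both rule 0 and rule 1 fire for the claim above: overlapping coverage.
    print("rules fired:", fired_rules(claim))

# Conversely, count how many claims each single rule covers.
per_rule = [sum(1 for c in claims if cond(c)) for cond, _ in RULES]
print("claims covered per rule:", per_rule)
```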
Rulex calculates the coverage and accuracy of each rule, making it easy to select the most effective decision rules. Also, proven heuristic human rules can be added to the predictive model, allowing a seamless blend of human and artificial intelligence. Human rules are also rated for coverage and accuracy, allowing Rulex to easily evaluate the quality of the decision rules in use and reduce false positives.
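These two ratings can be sketched with the usual ruleset definitions (the standard definitions are assumed here, and the label key "fraud" is hypothetical; Rulex’s exact formulas may differ): coverage is the share of examples a rule fires on, and accuracy is the share of those examples where the rule’s prediction matches the actual label.

```python
# Standard ruleset metrics, assumed here (Rulex's exact definitions may
# differ): coverage = share of examples a rule fires on, accuracy = share
# of fired examples whose actual label matches the rule's prediction.

def rule_stats(condition, predicted, labeled_claims):
    fired = [c for c in labeled_claims if condition(c)]
    if not fired:
        return 0.0, 0.0
    coverage = len(fired) / len(labeled_claims)
    accuracy = sum(c["fraud"] == predicted for c in fired) / len(fired)
    return coverage, accuracy

# The same scoring applies unchanged to hand-written "human" rules, which
# is what lets learned and human rules be ranked on the same footing.
```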
While Rulex is one of the most innovative tools for building truly explainable AI, it is not the only one. If you are curious to learn more, you can read 8 explainable AI frameworks for transparency in AI.
Where is eXplainable AI important?
Explainable AI is important to any business because it conveys trust and competence. It is particularly relevant in sectors and applications where decisions can have a strong impact on people, such as granting a loan or making a medical prognosis. Under many international privacy regulations, such as the GDPR, artificial intelligence cannot replace human decisions but only support them.
Privacy is one of the areas in which eXplainable AI plays a particularly important role. Here are some tips for creating and maintaining artificial intelligence processes that respect privacy principles.
5 Tips for Privacy Compliance
- Identify processes in your business that use profiling and automated decisions.
- Inventory the machine learning models currently used.
- Assess your existing models. Are they interpretable? Can you demonstrate to an auditor that they do not discriminate?
- Assess your current machine learning techniques. Do they produce interpretable rules?
- Develop a strategy for meeting compliance requirements at each stage of the machine learning workflow.