Our products are used to:
- reduce the uncertainty of Machine Learning models,
- prioritize enterprise data efforts,
- support experts in the ML loop,
- improve the quality of ML models, especially in multi-class settings with complex ontologies,
- reduce data footprint and compress ML models for use in Internet of Things applications,
- improve the gaming experience through more challenging and realistic AI in games,
- create intelligent advisory systems from pre-compiled building blocks.
[EN] Development of the BrightBox system – an explainable-AI-class tool for improving the interpretability and predictability of learning methods and for diagnosing the correctness of learned AI/ML models.
[PL] Opracowanie systemu BrightBox - narzędzia klasy explainable AI służącego do poprawy interpretowalności i przewidywalności działania metod uczących oraz diagnostyki poprawności działania wyuczonych modeli AI/ML.
Application number: MAZOWSZE/0198/19
Value of the project: 8 548 180,00 zł
Co-financing: 6 093 476,00 zł
Beneficiary: QED Software Sp. z o. o.
Project duration: 2020-02-01 - 2022-07-31
Project realised as part of the „Ścieżka dla Mazowsza” competition
BrightBox – an explainable-AI-class tool for improving the interpretability and predictability of learning methods and for diagnosing the correctness of learned AI/ML models. Its purpose will be to support:
1) analysts and data science specialists creating AI / ML models,
2) field experts validating AI / ML models,
3) persons responsible for monitoring the operation of AI / ML models,
4) end users employing AI / ML models in their work.
The software will provide:
- explanations of decisions made by existing (learned) AI/ML models, indicating the reasons for a particular decision and explaining the sources of uncertainty in decision making (i.e., the risk that the model makes a mistake),
- ‘what-if’ analyses that indicate possible optimizations of process-control parameters monitored by AI/ML models, including ways to refine the input data so that the risk of error is minimized,
- periodic or continuous diagnostics of errors made by learned AI/ML models, together with an indication of the most probable causes of those errors,
- more interpretable methods of training AI/ML models, including methods that are more robust to noise in the input data and optimized against error measures that meet the requirements of field experts.
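To make the ‘what-if’ idea concrete, here is a minimal sketch of such an analysis for a black-box model exposed only through a probability-returning prediction function. All names (`predict_proba`, `risk`, `what_if_scan`) and the toy logistic model are illustrative assumptions, not part of the BrightBox API: the point is simply that one input parameter is varied over a grid and the setting that minimizes the model's uncertainty (a proxy for the risk of error) is reported.

```python
# Hypothetical 'what-if' scan over one input feature of a black-box model.
# The model below is a stand-in; a real tool would wrap an existing AI/ML model.
import numpy as np

def predict_proba(x):
    # Stand-in black-box binary classifier: a fixed logistic model.
    w = np.array([1.5, -2.0, 0.5])
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return np.array([1.0 - p, p])

def risk(x):
    # Uncertainty proxy: how far the top-class probability is from 1.0.
    return 1.0 - np.max(predict_proba(x))

def what_if_scan(x, feature, grid):
    # Vary one feature over a grid; record candidate value,
    # predicted class, and the model's uncertainty at that point.
    results = []
    for v in grid:
        x_mod = x.copy()
        x_mod[feature] = v
        results.append((v, int(np.argmax(predict_proba(x_mod))), risk(x_mod)))
    return results

x0 = np.array([0.2, 0.1, 0.3])
scan = what_if_scan(x0, feature=1, grid=np.linspace(-1.0, 1.0, 5))
best_value, best_class, best_risk = min(scan, key=lambda r: r[2])
```

The scan returns, for each candidate parameter value, the decision the model would make and how uncertain it would be, so an operator can see which adjustment of the monitored parameter drives the risk of error down.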
The software will be useful in those practical application areas of AI/ML methods where the ability to understand and explain their operation is required by law (or where such requirements are planned under new regulations). It should be emphasized, however, that in many areas (such as cybersecurity, risk monitoring in industrial processes, and telemedicine) improved transparency and interpretability of AI/ML models is highly desirable regardless of any legal regulations.