A common solution for creating complex systems is to resort to a component-based architecture. Though it makes the design easier, it also increases the complexity of pinpointing which components are responsible for a system failure. Two fields of computer science have goals close to that of responsibility assignment. Diagnosis aims at determining the components of the system which, when assumed to be functioning abnormally, explain the system failure. In a network context, fault localisation can be defined as the process of deducing the exact source of a failure from a set of observed errors; in a program context, it can be defined as the software-debugging task of identifying the set of statements in a program that cause the program to fail.
Though close to assigning responsibilities, diagnosis and fault localisation do not answer the question ``who is responsible for the system failure?''. A previous work provides a trace-based approach that answers this question with causality analysis. The idea is to use a counter-factual approach (``what would have happened, had this component not been faulty?'') to assess whether or not a set of components is responsible for a system failure. Given a failing system trace, the counter-factuals are built by first removing the faults of the suspected components from the faulty trace, and then extending the resulting pruned trace using the expected behaviour of the components. This approach differs from the main trend for causality in computer science, which generally relies on the framework introduced by Halpern and Pearl.
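The counter-factual construction described above can be sketched as follows, under simplifying assumptions that are not part of the thesis: a trace is a list of (component, event) pairs, a predicate marks faulty events, and each suspected component's expected behaviour is given as a function from the trace prefix to the event it should have produced. All names are illustrative.

```python
def build_counterfactual(trace, suspects, is_faulty, expected):
    """Build a counter-factual trace: remove the faults of the
    suspected components, then extend the pruned trace using the
    components' expected behaviour."""
    counterfactual = []
    for component, event in trace:
        if component in suspects and is_faulty(component, event):
            # Replace the faulty event by the one the component's
            # expected (non-faulty) behaviour would have produced.
            event = expected[component](counterfactual)
        counterfactual.append((component, event))
    return counterfactual

# Toy example: component "b" should echo the last value produced by "a".
trace = [("a", 1), ("b", 99), ("a", 2), ("b", 2)]   # ("b", 99) is a fault
expected = {"b": lambda prefix: prefix[-1][1]}      # echo the previous event
is_faulty = lambda c, e: (c, e) == ("b", 99)
print(build_counterfactual(trace, {"b"}, is_faulty, expected))
# → [('a', 1), ('b', 1), ('a', 2), ('b', 2)]
```

If the failure disappears in the counter-factual trace, the suspected set is deemed responsible; otherwise it is not, which is the counter-factual test the approach relies on.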
My PhD thesis expands this framework in two ways. The first is to extend the causality analysis framework from a black-box component setting (components for which we know the expected behaviour, usually the ones diagnosis considers) to a mixed framework with both black-box and white-box components (components for which we know the actual behaviour, usually the ones considered in fault localisation). Using a game, one is able to assess responsibility for systems composed of both black-box and white-box components. This approach generalises the causality analysis approach for the black-box components, and works similarly to fault localisation techniques for the white-box components. It relies on techniques close to those used in controller synthesis, and can thus generate fixes for the observed bug.
The second main axis is to study the impact of the amount of available information on the construction of the counter-factuals. More information, such as the memory state of the components or a fault model for the components, can be used to build more refined counter-factuals. Those counter-factuals are closer to what would have happened if the components had been fixed, thus yielding a more accurate responsibility assignment. The other end of the spectrum is what happens with less information (missing variables in the trace, or missing portions of the trace). Some general results are given, as well as a way of performing causality analysis on a partial trace that yields, under certain assumptions, the same results as the analysis performed on the full trace.
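One sufficient condition of the kind alluded to above can be illustrated with a toy sketch (all names and the fault-repair model are hypothetical simplifications): if the suspected components' repaired behaviour does not depend on the hidden components, then repairing the projected (partial) trace commutes with projecting the repaired full trace.

```python
def project(trace, observed):
    """Keep only the events of the observed components."""
    return [(c, e) for c, e in trace if c in observed]

def repair(trace, suspects, fix):
    """Minimal counter-factual: replace each suspected component's
    events by its fixed (expected) value."""
    return [(c, fix[c] if c in suspects else e) for c, e in trace]

# The suspects' fixed behaviour involves no hidden component, so the
# analysis on the partial trace agrees with the full-trace analysis.
trace = [("a", 0), ("hidden", 7), ("b", 5), ("a", 1)]
observed, suspects, fix = {"a", "b"}, {"b"}, {"b": 3}
assert repair(project(trace, observed), suspects, fix) == \
       project(repair(trace, suspects, fix), observed)
```

This commutation is what makes the partial-trace analysis safe: both orders of projection and repair lead to the same counter-factual, so responsibility assignment is unchanged.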