Tuesday, December 3, 2024, 11:45, 4A125

Gabriel Damay

Dynamic Decision Trees and Community-based Graph Embeddings: towards Interpretable Machine Learning

Machine learning is the field of computer science concerned with building models and solutions from data, without specifying explicitly the set of instructions internal to these models and solutions. This field has achieved great results but is now under scrutiny because, among other concerns, its models are hard to understand or audit. Interpretable Machine Learning addresses these concerns by building models that are inherently interpretable. This thesis contributes to Interpretable Machine Learning in two ways.

First, we study decision trees. They are a very popular family of machine learning methods for classification problems, and they are interpretable by design. However, real-world data is often dynamic, and few algorithms can maintain a decision tree when data can be both inserted into and deleted from the training set. We propose a new algorithm called FuDyADT to solve this problem.
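
The dynamic setting can be made concrete with a toy sketch. The following is not FuDyADT (whose tree maintenance and guarantees are more involved) but a minimal decision stump over categorical features that stays consistent with the current training set under both insertions and deletions, by maintaining label counts per split and re-selecting the best split on demand.

```python
# Toy illustration of the dynamic setting (NOT the FuDyADT algorithm):
# a decision stump kept up to date under insertions and deletions by
# maintaining sufficient statistics (label counts per feature value).
from collections import defaultdict

class DynamicStump:
    def __init__(self, n_features):
        self.n_features = n_features
        # counts[f][v][y] = number of examples with feature f == v and label y
        self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(n_features)]
        self.total = defaultdict(int)  # label -> count

    def insert(self, x, y):
        self._update(x, y, +1)

    def delete(self, x, y):
        self._update(x, y, -1)

    def _update(self, x, y, delta):
        self.total[y] += delta
        for f, v in enumerate(x):
            self.counts[f][v][y] += delta

    def _gini(self, label_counts):
        n = sum(label_counts.values())
        if n == 0:
            return 0.0
        return 1.0 - sum((c / n) ** 2 for c in label_counts.values())

    def best_split(self):
        # Choose the feature whose split minimizes the weighted Gini impurity.
        n = sum(self.total.values())
        if n == 0:
            return 0
        best_f, best_score = None, float("inf")
        for f in range(self.n_features):
            score = sum(
                (sum(lc.values()) / n) * self._gini(lc)
                for lc in self.counts[f].values()
            )
            if score < best_score:
                best_f, best_score = f, score
        return best_f

    def predict(self, x):
        f = self.best_split()
        lc = self.counts[f][x[f]]
        return max(lc, key=lc.get) if lc else max(self.total, key=self.total.get)

stump = DynamicStump(n_features=2)
stump.insert((0, 1), "spam")
stump.insert((1, 0), "ham")
stump.insert((0, 0), "ham")
stump.delete((0, 0), "ham")   # the statistics stay consistent after a deletion
print(stump.best_split(), stump.predict((0, 1)))
```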

Second, when data are represented as graphs, a very common machine learning technique called “embedding” consists in projecting them into a vector space. However, this kind of method is usually not interpretable. We propose a new embedding algorithm called PaRFaITe, based on the factorization of the Personalized PageRank matrix. This algorithm is designed to produce interpretable results.
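
As a minimal sketch of the underlying idea (not the PaRFaITe algorithm itself), one can build the personalized PageRank matrix of a toy two-community graph and factorize it with non-negative matrix factorization; the restart probability and the choice of NMF below are assumptions made for illustration.

```python
# Build the personalized PageRank (PPR) matrix of a small graph and factorize
# it with non-negative matrix factorization, so that each embedding dimension
# stays non-negative and can be inspected against the graph's communities.
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

alpha = 0.15  # restart probability (an assumption for this sketch)
k = 2         # embedding dimension

# Two cliques joined by a single edge: a toy graph with two obvious communities.
G = nx.barbell_graph(5, 0)
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)          # random-walk transition matrix

# Row i of PPR is the personalized PageRank vector with restart at node i:
# PPR = alpha * (I - (1 - alpha) * P)^(-1)
n = len(G)
PPR = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * P)

# Non-negative factorization PPR ~ W @ H; rows of W are node embeddings.
model = NMF(n_components=k, init="nndsvd", max_iter=1000)
W = model.fit_transform(PPR)

# Nodes of the same clique should load on the same dimension.
print(np.round(W, 2))
```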

We study both algorithms theoretically and experimentally. We show that FuDyADT is at least comparable to state-of-the-art algorithms in the usual setting, while also being able to handle unusual settings such as deletions of data. PaRFaITe, on the other hand, produces embedding dimensions that align with the communities of the graph, making the embedding interpretable.

Tuesday, November 12, 2024, 11:45, 4A125

Cyril Chhun

Methodology and Meta-Evaluation Benchmark for Automatic Story Generation

Storytelling is a central component of human culture. Multiple approaches have been proposed to explore computational storytelling, despite the inherent challenges posed by the tasks of generating stories and assessing their quality. In this thesis, we design a meta-evaluation methodology and benchmark for Automatic Story Generation (ASG). First, we lay the groundwork for conducting our meta-evaluation: we describe our chosen setting, provide definitions for the ASG and Automatic Story Evaluation (ASE) tasks, and propose an original set of six criteria for story evaluation. Then, we introduce HANNA, our corpus of Human ANnotated NArratives, which contains 1,056 stories annotated w.r.t. our six criteria, and show that those criteria allow for a standardized human evaluation. We use Large Language Models (LLMs) to augment HANNA with 480 new stories and 150k+ rating annotations. We observe that LLMs obtain better grades than humans, as rated by selected LLMs. After that, we perform our meta-evaluation benchmark on HANNA. We mainly observe that specific measures for ASE are needed, and that commonly used measures (e.g. BLEU) are sub-optimal. We then show our analysis of LLM performance at ASE: we find that LLMs are currently the best proxy for human evaluation of ASG and that, in our specific setting, providing detailed guidelines does not improve correlations between LLM and human ratings. Those results prompt us to study whether the performance displayed by LLMs at ASE and ASG can be explained through different factors. We perform a three-part study on LLM-generated explanations, and an analysis of the impact of pretraining data on LLM performance. Notably, we find that LLMs struggle to explain their answers with substantiated claims. Finally, we outline three main research perspectives: designing specific ASE measures, further investigating LLM performance at ASG and ASE, and assessing and mitigating the impact of LLMs on society.
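
To illustrate the kind of computation behind such a meta-evaluation (this is not the HANNA pipeline itself, and the scores below are made up), one correlates automatic or LLM ratings with human ratings of the same stories:

```python
# Rank-correlate hypothetical per-story scores on one criterion (1-5 scale)
# given by humans and by an LLM-based evaluator.
from scipy.stats import kendalltau, spearmanr

human_scores = [4, 2, 5, 3, 1, 4, 2]
llm_scores   = [5, 2, 4, 3, 2, 4, 1]

tau, tau_p = kendalltau(human_scores, llm_scores)
rho, rho_p = spearmanr(human_scores, llm_scores)
print(f"Kendall tau = {tau:.2f} (p={tau_p:.3f}), Spearman rho = {rho:.2f} (p={rho_p:.3f})")
```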

References:

Of Human Criteria and Automatic Metrics: A Benchmark of the Evaluation of Story Generation (COLING 2022)

Do Language Models Enjoy Their Own Stories? Prompting Large Language Models for Automatic Story Evaluation (TACL 2024)

Tuesday, October 29, 2024, 11:45, 4A125

Simon Coumes

Qiana: A First-Order Formalism to Quantify over Contexts and Formulas

Qiana is a logic framework for reasoning about formulas that are true only in specific contexts. In Qiana, it is possible to quantify over both formulas and contexts to express, e.g., that “everyone knows everything Alice says”. Qiana also permits paraconsistent logics within contexts, so that contexts can contain contradictions. Furthermore, Qiana is based on first-order logic and is finitely axiomatizable, so that Qiana theories are compatible with pre-existing first-order logic theorem provers.
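
For illustration only (this is not Qiana's actual syntax or axiomatization), the example statement can be rendered with quantification over both formulas and agents roughly as follows, where φ ranges over formulas represented as terms and x over agents:

```latex
% Illustrative rendering only; Qiana's concrete syntax and encoding of formulas differ.
\forall \varphi\, \forall x\; \big(\mathrm{says}(\mathrm{Alice}, \varphi) \rightarrow \mathrm{knows}(x, \varphi)\big)
```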

Tuesday, October 15, 2024, 11:45, 4A301

Yael Amsterdamer & Daniel Deutch

Query-Guided Data Cleaning (Yael Amsterdamer)

We take an active approach to the cleaning of uncertain databases by proposing a set of tools to guide the cleaning process. We start with a database whose tuple correctness is uncertain, and with some means of resolving this uncertainty, e.g., crowdsourcing, experts, a trained ML model, or external sources. Guided by a query that defines which part of the data is of importance, our goal is to select tuples whose cleaning would effectively resolve uncertainty in the query results. In other words, we develop a query-guided process for the resolution of uncertain data. Our approach combines techniques from different fields, including the use of provenance information to capture the propagation of errors to query results, and Boolean interactive evaluation to decide which input tuples to clean based on their role in output derivation or their effect on uncertainty.
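
As a toy illustration of the idea (not the speaker's system), one can attach a Boolean provenance formula over uncertain input tuples to each query answer and clean first the tuple with the highest influence on that formula; the formula and the uniform priors below are made up.

```python
# Query-guided cleaning with Boolean provenance, in miniature: the answer's
# provenance is (t1 AND t2) OR t3, and we clean first the input tuple whose
# truth value is most likely to flip the answer.
from itertools import product

variables = ["t1", "t2", "t3"]
def provenance(a):                      # a: dict tuple-name -> bool
    return (a["t1"] and a["t2"]) or a["t3"]

def influence(var):
    # Probability (under uniform priors) that flipping `var` flips the answer.
    assignments = list(product([False, True], repeat=len(variables)))
    flips = 0
    for values in assignments:
        a = dict(zip(variables, values))
        b = dict(a, **{var: not a[var]})
        flips += provenance(a) != provenance(b)
    return flips / len(assignments)

print({v: influence(v) for v in variables})            # t3 has the highest influence
print("clean first:", max(variables, key=influence))   # -> t3
```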

Yael Amsterdamer is a Professor at the Department of Computer Science, Bar-Ilan University, and the head of the Data Management Lab. She received her Ph.D. in Computer Science from Tel-Aviv University, and has been a visiting scholar at the University of Pennsylvania (Philadelphia, PA) and jointly at Télécom Paris and INRIA (Paris, France). Her research is in the field of interactive data management, spanning topics such as crowd-powered data management, interactive summarization, and data cleaning. Her research has been supported by multiple competitive grants, including Israeli Science Foundation (ISF) personal grants, an Israeli Ministry of Science (MOST) grant, and a personal grant from the BIU Center for Research in Applied Cryptography and Cyber Security.

Explanations in Data Science (Daniel Deutch)

Data Science involves complex processing over large-scale data for decision support, and much of this processing is done by black boxes such as data cleaning modules, database management systems, and machine learning modules. Decision support should be transparent, but the combination of complex computation and large-scale data yields many challenges in this respect. Interpretability has been extensively studied in both the data management and the machine learning communities, but the problem is far from being solved. I will present a holistic approach to the problem that is based on two facets, namely counterfactual explanations and attribution-based explanations. I will demonstrate the conceptual and computational challenges, as well as some main results we have achieved in this context.

Daniel Deutch is a Full Professor in the Computer Science Department of Tel Aviv University. He received his Ph.D. in Computer Science from Tel Aviv University and was a postdoctoral fellow at the University of Pennsylvania and INRIA France. His research focuses on advanced database applications and web data management, studying both theoretical and practical aspects of issues such as data provenance, analysis of web applications and data, and dealing with data uncertainty. Daniel’s research has been disseminated through papers in the top conferences and journals on data and web data management (VLDB, SIGMOD/PODS, VLDBJ, TODS, etc.). He has won a number of research awards, including the VLDB best paper award, the Krill Prize (awarded by the Wolf Foundation), and the Yahoo! Early Career Award. His research has been supported by multiple competitive grants, including a European Research Council (ERC) Personal Research Grant and grants from the Israeli Science Foundation (ISF, twice), the US-Israel Binational Science Foundation (BSF), the Broadcom Foundation, the Israeli Ministry of Science (MOST), the Blavatnik Interdisciplinary Cyber Research Institute (ICRC), Intuit, and Intel.

Tuesday, October 8, 2024, 11:45, 4A125

Rajaa El Hamdani & Yiwen Peng

Refining Wikidata Taxonomy using Large Language Models (Yiwen Peng)

Due to its collaborative nature, Wikidata is known to have a complex taxonomy, with recurrent issues like the ambiguity between instances and classes, the inaccuracy of some taxonomic paths, the presence of cycles, and a high level of redundancy across classes. Manual efforts to clean up this taxonomy are time-consuming and prone to errors or subjective decisions. We present WiKC, a new version of the Wikidata taxonomy, cleaned automatically using a combination of Large Language Models (LLMs) and graph mining techniques. Operations on the taxonomy, such as cutting links or merging classes, are performed with the help of zero-shot prompting on an open-source LLM. The quality of the refined taxonomy is evaluated from both intrinsic and extrinsic perspectives, the latter on a task of entity typing, showing the practical interest of WiKC.
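
As a hedged sketch of what zero-shot prompting for a single taxonomy-cleaning decision might look like (the actual WiKC prompts differ, and call_llm is a hypothetical placeholder for an open-source LLM client):

```python
# Zero-shot prompt for one candidate subclass-of link in the taxonomy.
def build_prompt(subclass, superclass):
    return (
        "You are cleaning the Wikidata taxonomy.\n"
        f"Consider the candidate link: '{subclass}' subclass-of '{superclass}'.\n"
        "Answer with exactly one word:\n"
        "KEEP  - if the link is a correct subclass relation,\n"
        "CUT   - if the link is wrong or mixes an instance with a class,\n"
        "MERGE - if the two classes are redundant and should be merged."
    )

def call_llm(prompt: str) -> str:
    # Placeholder: plug in an actual open-source LLM client here.
    return "KEEP"

prompt = build_prompt("racing bicycle", "vehicle")
print(prompt)
print("decision:", call_llm(prompt))
```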

The Factuality of Large Language Models in the Legal Domain (Rajaa El Hamdani)

This paper investigates the factuality of large language models (LLMs) as knowledge bases in the legal domain, in a realistic usage scenario: we allow for acceptable variations in the answer, and let the model abstain from answering when uncertain. First, we design a dataset of diverse factual questions about case law and legislation. We then use the dataset to evaluate several LLMs under different evaluation methods, including exact, alias, and fuzzy matching. Our results show that the performance improves significantly under the alias and fuzzy matching methods. Further, we explore the impact of abstaining and in-context examples, finding that both strategies enhance precision. Finally, we demonstrate that additional pretraining on legal documents, as seen with SaulLM, further improves factual precision from 63% to 81%.
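
The three matching regimes can be illustrated with a short sketch; the threshold and the alias table below are assumptions, not the paper's.

```python
# Exact, alias, and fuzzy matching between a model answer and a gold answer.
from difflib import SequenceMatcher

def exact_match(pred, gold):
    return pred.strip().lower() == gold.strip().lower()

def alias_match(pred, gold, aliases):
    # `aliases` maps a gold answer to its accepted variants.
    candidates = {gold, *aliases.get(gold, set())}
    return any(exact_match(pred, c) for c in candidates)

def fuzzy_match(pred, gold, threshold=0.8):  # threshold is an assumption
    return SequenceMatcher(None, pred.lower(), gold.lower()).ratio() >= threshold

aliases = {"Court of Justice of the European Union": {"CJEU", "European Court of Justice"}}
gold = "Court of Justice of the European Union"
print(exact_match("the CJEU", gold))                     # False
print(alias_match("CJEU", gold, aliases))                # True
print(fuzzy_match("court of justice of the eu", gold))   # True
```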

Tuesday, September 24, 2024, 11:45, 4A125

Ambroise Odonnat

Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias

Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, softmax prediction probabilities are often used as a confidence measure, although they are known to be overconfident, even for wrong predictions. This phenomenon is particularly intensified in the presence of sample selection bias, i.e., when data labeling is subject to some constraints. To address this issue, we propose a novel confidence measure, called T-similarity, built upon the prediction diversity of an ensemble of linear classifiers. We provide a theoretical analysis of our approach by studying stationary points and describing the relationship between the diversity of the individual members and their performance. We empirically demonstrate the benefit of our confidence measure for three different pseudo-labeling policies on classification datasets of various data modalities.
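
A hedged sketch of a diversity-based confidence in the spirit of the talk (the paper's exact T-similarity definition may differ): average the pairwise agreement between the softmax outputs of the ensemble members.

```python
# Ensemble heads that agree yield high confidence; heads that disagree yield low.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_confidence(logits):          # logits: (n_heads, n_classes)
    probs = softmax(logits)
    m = len(probs)
    pairwise = [probs[i] @ probs[j] for i in range(m) for j in range(m) if i != j]
    return float(np.mean(pairwise))

agreeing    = np.array([[4.0, 0.0, 0.0], [3.5, 0.2, 0.1], [4.2, 0.3, 0.0]])
disagreeing = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]])
print(ensemble_confidence(agreeing))     # close to 1
print(ensemble_confidence(disagreeing))  # close to 0
```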

Tuesday, September 10, 2024, 11:45, 4A125

Samuel Reyd & Jean-Louis Dessalles

CIRCE: a Scalable Methodology for Causal Explanations in Cyber-Physical Systems (Samuel Reyd)

Cyber-physical systems (CPS) are increasingly complex and harder for human users to understand. Integrating explainability methods within their design is a key challenge for their acceptability and management. We consider that causal explanations can provide suitable answers to address this issue. Most approaches to causal explanations, however, rely on global system models, often built offline, which implies heavy computations, delays, and interpretability issues when answering questions at runtime. We propose CIRCE: a scalable method for Contextual, Interpretable and Reactive Causal Explanations in CPS. It is an abduction method that determines the cause of a fact questioned by users at runtime. Its originality lies in finding a cause, instead of an entire causal graph, to explain CPS behavior, and in employing a classic local Explainable AI (XAI) technique, LIME, to approximate this cause. We validate our method via several simulations of smart home scenarios. Results indicate that CIRCE can provide relevant answers to diverse questions and scales well with the number of variables. Our approach may improve the efficiency and relevance of causality-based explanations for CPS and contribute to bridging the gap between CPS explainability and classic XAI techniques.
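
As a hedged sketch of the LIME step only (not the CIRCE implementation), one can ask LIME's local surrogate which variables locally drive a questioned observation in a made-up smart-home model; the variable names and the black-box function are assumptions.

```python
# Explain a questioned CPS observation ("why is it so warm right now?") with
# LIME's local surrogate over a toy smart-home temperature model.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["heater_power", "window_open", "outside_temp"]  # hypothetical variables
X = np.column_stack([
    rng.uniform(0, 1, 500),        # heater power (0-1)
    rng.integers(0, 2, 500),       # window open (0/1)
    rng.uniform(-5, 25, 500),      # outside temperature (Celsius)
])

def indoor_temp(x):
    # Toy CPS "black box": the heater warms, an open window cools.
    return 18 + 10 * x[:, 0] - 6 * x[:, 1] + 0.2 * x[:, 2]

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
questioned = np.array([0.9, 0.0, 20.0])
exp = explainer.explain_instance(questioned, indoor_temp, num_features=3)
print(exp.as_list())   # local feature weights, e.g. heater_power dominating
```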

Simplicity bias in human-generated data (Jean-Louis Dessalles)

Texts available on the Web have been generated by human minds. We observe that simple patterns are over-represented: abcdef is more frequent than arfbxg and 1000 appears more often than 1282. We suggest that word frequency patterns can be predicted by cognitive models based on complexity minimization. Conversely, the observation of word frequencies offers an opportunity to infer particular cognitive mechanisms involved in their generation.
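
One way to make this claim precise (an illustrative formalization, not necessarily the speaker's exact model) is via algorithmic probability, where the frequency of a form decays exponentially with its description complexity:

```latex
% Illustrative formalization of the simplicity bias: the observed frequency
% f(x) of a form x decays with its description complexity C(x),
f(x) \;\propto\; 2^{-C(x)}
% so "abcdef" (a short "consecutive letters" description) outnumbers a
% random-looking string like "arfbxg", and the round number 1000 outnumbers 1282.
```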

Tuesday, July 9, 2024, 11:45, 4A125

Peter Fratrič

Mining behavior from a legal simulation environment: where we are and what lies ahead

This talk presents a methodological framework for the use of simulation-based methods to investigate questions of non-compliance in a legal context. Its aim is to generate observed or previously unobserved instances of non-compliance and use them to improve compliance and trust in a given socio-economic infrastructure. The framework consists of three components: a law formalization process resulting in a normative system implemented as an agent-based model, a profit-driven agent generating instances of non-compliance, and a norm extraction process transforming the generated behavior into a formal model. Early research results from a practical implementation of this methodology are illustrated on a multinational tax avoidance case. Towards the end, we focus on open issues related to behavior clustering and data/process mining.

Tuesday, July 2, 2024, 12:15, 4A301

Chadi Helwe

PhD defense practice talk

This thesis focuses on evaluating and improving the reasoning abilities of Smaller Language Models (SLMs) and Large Language Models (LLMs). It explores SLMs’ performance on complex tasks and their limitations with simpler ones. This thesis introduces LogiTorch, a Python library that facilitates the training of models on various reasoning tasks with minimal coding. It also presents TINA, a negated data augmentation technique that improves SLMs’ robustness to negation in textual entailment tasks. Further, this thesis explores LLMs’ capabilities through MAFALDA, a new benchmark for identifying and classifying reasoning fallacies, proposing a new annotation scheme and an evaluation metric that accounts for subjectivity in reasoning. The findings indicate that humans outperform SLMs and LLMs in this reasoning task. We propose several research directions that merit further investigation, such as exploring neuro-symbolic AI and improving the reasoning abilities of low-resource LLMs.

Tuesday, June 18, 2024, 11:45, 4A125

Shady Elbassuoni

Data Centric Fake News Detection During Armed Conflicts

Armed conflicts continue to be a major global issue, causing widespread human suffering, displacement, and economic instability. Fake news can further fuel armed conflicts by manipulating public perception, inciting violence, and undermining efforts towards resolution. In this talk, I will argue why a one-size-fits-all approach for fake news detection is not adequate during armed conflicts. I will then present a data-centric approach for fake news detection, focusing on the Syrian civil war as a case study. The approach utilizes a knowledge graph of conflict casualties to construct a fake news dataset, and then employs meta-learning to automatically detect fake news. I will present experimental results that demonstrate the effectiveness of this approach compared to various baselines, and will conclude with a few potential avenues for future research.