Home

The research interests of the DIG team span knowledge graphs, language models, foundation models, learning over tabular data, graph mining, and stream mining. The team develops methods for representing, integrating, and reasoning over complex, dynamic data to enable interpretable and trustworthy AI. Applications range from general-purpose AI to domain-specific areas such as healthcare and law. More specifically, the DIG team’s research activity covers the following topics:
 
  • Knowledge Graphs
  • Machine/Deep Learning
  • Language/Foundation Models
  • Natural Language Processing
  • Logic and Algorithms
  • Graph Mining
  • Stream Mining
  • Application Domains: Healthcare, Law

Industry Collaborations

Research Projects

Research

Algorithms:

Knowledge Graphs:

Software:

  • scikit-network: a Python package for the analysis of large graphs (clustering, embedding, classification, ranking).
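A minimal usage sketch, assuming a recent scikit-network release (exact method names such as fit_predict have changed across versions): Louvain clustering of a small built-in toy graph.

```python
# Sketch: cluster the Karate Club toy graph with Louvain using scikit-network.
# Method names (fit_predict) assumed from recent scikit-network releases.
from sknetwork.data import karate_club
from sknetwork.clustering import Louvain

adjacency = karate_club()                  # sparse adjacency matrix of the toy graph
labels = Louvain().fit_predict(adjacency)  # one cluster label per node
print(labels)
```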


  • We investigate how to do machine learning in real time, contributing to new open source tools:

    • River: a Python library for online machine learning (see the sketch after this list)
    • MOA: Massive Online Analysis, a framework for mining data streams (in Java)
    • Apache SAMOA: Scalable Advanced Massive Online Analysis, an open-source framework for data stream mining on the Hadoop ecosystem
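A minimal sketch of the online-learning style these tools support, shown here with River and assuming a recent release of its public API (learn_one, predict_one, and the Phishing toy stream):

```python
# Sketch: prequential (test-then-train) evaluation of an incremental classifier with River.
# API names assumed from recent River releases.
from river import datasets, linear_model, metrics, preprocessing

# Compose an online standard scaler with an incrementally trained logistic regression.
model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

for x, y in datasets.Phishing():       # built-in stream of (feature dict, label) pairs
    y_pred = model.predict_one(x)      # predict before learning on this example
    metric.update(y, y_pred)           # update the running accuracy
    model.learn_one(x, y)              # then update the model with this single example

print(metric)                          # running accuracy over the stream
```

The model sees each example once and is updated in place, which is the defining constraint of learning over data streams.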

Language and Relevance

Computer science is not just about computers. In this area of research, we investigate how humans reason, and what this implies for machines.

  • Simplicity theory seeks to explain the relevance of situations or events to human minds (see the sketch after this list).
  • Relevance in natural language: the goal is to reverse-engineer, from our understanding of human performance, methods for producing meaningful and relevant speech.
  • We apply game theory and social simulation to explore the conditions under which providing valuable (i.e., relevant) information is a profitable strategy. Read this paper.
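To make the first point concrete, here is the core quantity of Simplicity Theory as stated in Dessalles' published work (a sketch; notation assumed):

```latex
% Unexpectedness in Simplicity Theory: a situation is unexpected (and hence
% cognitively relevant) when it is simpler to describe than to generate.
U(s) \;=\; C_w(s) \;-\; C(s)
% U(s)   : unexpectedness of situation s
% C_w(s) : generation complexity, the complexity of the simplest way the world could produce s
% C(s)   : description complexity, the length of the shortest description of s for the observer
```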

Team

Talel Abdessalem, Mehwish Alam, Albert Bifet, Thomas Bonald, Jean-Louis Dessalles, Yanzhu Guo, Nils Holzenberger, Louis Jachiet, Van-Tam Nguyen, Nikola Simidjievski, Fabian Suchanek

Faculty

Research engineer

Post-docs

  • Fajrian Yunus

PhD candidates

  • François Amat (Fabian Suchanek)
  • Tom Calamai (Oana Balalau, Fabian Suchanek)
  • Simon Coumes (Fabian Suchanek)
  • Francois Crespin (Fabian Suchanek, Nils Holzenberger)
  • Tuan-Kiet Doan (Nikola Simidjievski, Van-Tam Nguyen)
  • Pierre Epron (Mehwish Alam, Adrien Coulet)
  • Lorenzo Guerra (Van-Tam Nguyen)
  • Samy Haffoudhi (Fabian Suchanek, Nils Holzenberger)
  • Rajaa El Hamdani (Thomas Bonald)
  • Bérénice Jaulmes (Mehwish Alam, Fabian Suchanek)
  • Zhu Liao (Van-Tam Nguyen)
  • Tom Maye Lasserre (Nikola Simidjievski)
  • Rémi Nahon (Pavlo Mazharovsky, Van-Tam Nguyen)
  • Hung Nguyen (Mehwish Alam)
  • Le Trung Nguyen (Enzo Tartaglione, Van-Tam Nguyen)
  • Van Chien Nguyen (Van-Tam Nguyen)
  • Zakari Ait Ouazzou (Talel Abdessalem, Albert Bifet)
  • Yiwen Peng (Thomas Bonald)
  • Roman Plaud (Thomas Bonald, Matthieu Labeau)
  • Ael Quelennec (Enzo Tartaglione, Van-Tam Nguyen)
  • Samuel Reyd (Ada Diaconescu, Jean-Louis Dessalles)
  • Rachida Saroui (Pavlo Mazharovsky, Van-Tam Nguyen)
  • Azzedine Ait Said
  • Ali Tarhini (Van-Tam Nguyen)
  • Trung-Hieu Tran (Van-Tam Nguyen, Nikola Simidjievski)
  • Long-Tuan Vo (Mehwish Alam, Pavlo Mazharovsky)
  • Clément Wang (Thomas Bonald)
  • Yinghao Wang (Enzo Tartaglione, Van-Tam Nguyen)
  • Viktoriya Zhukova (Thomas Bonald)

PhD track students

  • Daniela Cojocaru (Nikola Simidjievski)
  • Marc Farah (Nikola Simidjievski)
  • Avrile Floro (Nils Holzenberger)
  • Zeinab Ghamlouch (Mehwish Alam)
  • Quoc-Dat Tran (Nikola Simidjievski)
  • Thanh Hai Tran (Van-Tam Nguyen)
  • Thanh Nam Tran (Van-Tam Nguyen)
  • Hai Thien Long Vu (Van-Tam Nguyen)

Alumni

News

Our article “It’s All About the Confidence: An Unsupervised Approach for Multilingual Historical Entity Linking using Large Language Models” by Cristian Santini, Marieke van Erp, and Mehwish Alam has been accepted at EACL 2026.

We are hiring two PhD students and one Postdoc to work on language models and knowledge graphs!

Yiwen Peng, Thomas Bonald, and Fabian Suchanek received the best paper award at ISWC 2025 for their paper “FLORA: Unsupervised Knowledge Graph Alignment by Fuzzy Logic”.

Two demo articles, “Enriching Taxonomies using Large Language Models” and “T-REX: Table Refute or Entail Explainer”, by Zeinab Ghamlouch, Tim Luka Horstmann, Baptiste Geisenberger, and Mehwish Alam were presented at ECAI 2025 and ECML/PKDD 2025.

Bérénice Jaulmes attended the 36th European Summer School in Logic, Language, and Information (ESSLLI 2025) in Bochum, Germany.

Nils Holzenberger attended the International Seminars on the New Institutional Economics 2025.