Home

The research interests of the DIG team span knowledge graphs, language models, foundation models, learning over tabular data, graph mining, and stream mining. The team develops methods for representing, integrating, and reasoning over complex, dynamic data to enable interpretable and trustworthy AI. Applications range from general-purpose AI to domain-specific areas such as healthcare and law. More specifically, the DIG team’s research activity covers the following topics:
 
  • Knowledge Graphs
  • Machine/Deep Learning
  • Language/Foundation Models
  • Natural Language Processing
  • Logic and Algorithms
  • Graph mining
  • Stream mining
  • Application domains: healthcare and law

Industry Collaborations

Research Projects

Research

Algorithms:

Knowledge Graphs:

Software:

  • scikit-network: a Python package for the analysis of large graphs (clustering, embedding, classification, ranking).

  • We investigate how to do machine learning in real time, contributing to new open source tools:

    • River: a Python library for online machine learning
    • MOA: Massive Online Analysis, a Java framework for mining data streams
    • Apache SAMOA: Scalable Advanced Massive Online Analysis, an open-source framework for data stream mining on the Hadoop ecosystem

Language and Relevance

Computer science is not just about computers. In this area of research, we investigate how humans reason, and what this implies for machines.

  • Simplicity theory seeks to explain the relevance of situations or events to human minds.
  • Relevance in natural language: the aim is to reverse-engineer our understanding of human performance into methods for producing meaningful, relevant speech.
  • We apply game theory and social simulation to explore the conditions under which providing valuable (i.e., relevant) information is a profitable strategy.

Team

Talel Abdessalem Mehwish Alam Albert Bifet Thomas Bonald Jean-Louis Dessalles
Nils Holzenberger Louis Jachiet Van-Tam Nguyen Nikola Simidjievski Fabian Suchanek

Faculty

Research engineer

Post-docs

  • Fajrian Yunus

PhD candidates

  • Azzedine Ait Said
  • François Amat
  • Tom Calamai
  • Simon Coumes
  • Tuan-Kiet Doan
  • Pierre Epron
  • Lorenzo Guerra
  • Samy Haffoudhi
  • Rajaa El Hamdani
  • Bérénice Jaulmes
  • Zhu Liao
  • Tom Maye Lasserre
  • Rémi Nahon
  • Hung Nguyen
  • Le Trung Nguyen
  • Van Chien Nguyen
  • Zakari Ait Ouazzou
  • Yiwen Peng
  • Roman Plaud
  • Ael Quelennec
  • Rachida Saroui
  • Samuel Reyd
  • Ali Tarhini
  • Trung-Hieu Tran
  • Long-Tuan Vo
  • Clément Wang
  • Yinghao Wang

PhD track students

  • Zeinab Ghamlouch
  • Avrile Floro
  • Marc Farah
  • Daniela Cojocaru
  • Quoc-Dat Tran
  • Hai Thien Long Vu
  • Thanh Hai Tran
  • Thanh Nam Tran

Alumni

News

We are hiring two PhD students and one Postdoc to work on language models and knowledge graphs!

Best paper award at ISWC 2025

Yiwen Peng, Thomas Bonald and Fabian Suchanek received the best paper award at ISWC 2025 for their paper on FLORA: Unsupervised Knowledge Graph Alignment by Fuzzy Logic.

Thursday, December 11, 2025, 12:15, 4A301

Simon Coumes: Contextual knowledge representation for neurosymbolic Artificial Intelligence reasoning

The field of Knowledge Representation and Reasoning is concerned with the representation of information about reality in a form that is both human-readable and machine-processable. It has been a part of artificial intelligence since its inception, and has produced many important formalisms and systems. One …

Tuesday, December 9, 2025, 11:45, 1D19

Le-Minh Nguyen (Japan Advanced Institute of Science and Technology): SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation

Inference with modern Large Language Models (LLMs) is both computationally intensive and time-consuming. While speculative decoding has emerged as a promising solution, existing approaches face key limitations. Training-based methods require the development of a …