- Knowledge Graphs
- Machine/Deep Learning
- Language/Foundational Models
- Natural Language Processing
- Logic and Algorithms
- Graph mining
- Data streams
- Application Domains: Healthcare, Law
Télécom Paris

Industry Collaborations

Research Projects
YAGO: YAGO is a large ontology constructed from WordNet, Wikipedia, and other sources. We develop YAGO together with the Database department of the Max Planck Institute for Informatics in Germany.
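At its core, an ontology like YAGO organizes knowledge as subject-predicate-object triples, with a class taxonomy that supports transitive reasoning. Below is a toy in-memory sketch of this idea; the entity and class names are illustrative, not actual YAGO identifiers.

```python
# Toy triple store illustrating how an ontology organizes facts as
# subject-predicate-object triples with a transitive class hierarchy.
# Names below are illustrative, not actual YAGO identifiers.

from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()
        self.subclass_of = defaultdict(set)  # class -> direct superclasses

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))
        if pred == "subclassOf":
            self.subclass_of[subj].add(obj)

    def superclasses(self, cls):
        """All transitive superclasses of a class (depth-first walk)."""
        seen, stack = set(), [cls]
        while stack:
            for sup in self.subclass_of[stack.pop()]:
                if sup not in seen:
                    seen.add(sup)
                    stack.append(sup)
        return seen

store = TripleStore()
store.add("Einstein", "type", "Physicist")
store.add("Physicist", "subclassOf", "Scientist")
store.add("Scientist", "subclassOf", "Person")

print(store.superclasses("Physicist"))  # {'Scientist', 'Person'}
```

A real system adds much more (entity disambiguation, typed literals, consistency checks), but the triple-plus-taxonomy structure is the common foundation.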
scikit-network: scikit-network is a Python package for the analysis of large graphs (clustering, embedding, classification, ranking).
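To give a flavor of the "ranking" analysis such a package provides, here is a minimal PageRank power iteration written with plain NumPy. This is a sketch of the underlying algorithm, not the scikit-network API itself.

```python
# Minimal PageRank by power iteration, using only NumPy.
# Illustrates the kind of graph ranking scikit-network implements
# (the library itself exposes higher-level classes).

import numpy as np

def pagerank(adjacency, damping=0.85, n_iter=100):
    """Power iteration on a dense adjacency matrix."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    transition = np.where(out_degree[:, None] > 0,
                          adjacency / np.maximum(out_degree[:, None], 1),
                          1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        scores = (1 - damping) / n + damping * transition.T @ scores
    return scores / scores.sum()

# Small directed graph: edges 0->2, 1->2, 2->0.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
scores = pagerank(adj)
print(scores.argmax())  # → 2 (the node cited by both others)
```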
We investigate how to do machine learning in real time, contributing to new open-source tools.
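The defining constraint of real-time (stream) learning is that each example is seen once and then discarded. As a toy illustration of such one-pass updates, here is an online mean and variance via Welford's algorithm; this is a generic sketch, not code from any of the tools mentioned above.

```python
# One-pass (streaming) mean and variance via Welford's algorithm:
# each sample updates the statistics incrementally and is then discarded,
# the basic pattern behind real-time machine learning on data streams.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 6.0, 8.0]:
    stats.update(x)

print(stats.mean, stats.variance)  # 5.0 5.0
```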
Computer science is not just about computers. In this area of research, we investigate how humans reason, and what this implies for machines.
| Talel Abdessalem | Mehwish Alam | Albert Bifet | Thomas Bonald | Jean-Louis Dessalles |
| Nils Holzenberger | Louis Jachiet | Van-Tam Nguyen | Nikola Simidjievski | Fabian Suchanek |
We are hiring two PhD students and one Postdoc to work on language models and knowledge graphs!
Yiwen Peng, Thomas Bonald and Fabian Suchanek received the best paper award at ISWC 2025 for their paper "FLORA: Unsupervised Knowledge Graph Alignment by Fuzzy Logic".
Simon Coumes: Contextual knowledge representation for neurosymbolic Artificial Intelligence reasoning. The field of Knowledge Representation and Reasoning is concerned with the representation of information about reality in a form that is both human-readable and machine-processable. It has been a part of artificial intelligence since its inception, and has produced many important formalisms and systems. One …
(Thursday, December 11, 2025, 12:15, room 4A301)
Le-Minh Nguyen (Japan Advanced Institute of Science and Technology): SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation. Inference with modern Large Language Models (LLMs) is both computationally intensive and time-consuming. While speculative decoding has emerged as a promising solution, existing approaches face key limitations. Training-based methods require the development of a …
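For readers unfamiliar with speculative decoding, the generic idea (not the SPECTRA method itself) is that a cheap "draft" model proposes several tokens at once and an expensive "target" model verifies them, keeping the longest agreeing prefix. The toy sketch below uses deterministic integer-token "models" to show the accept/reject loop; a real system verifies all drafted positions in a single batched forward pass.

```python
# Toy sketch of the generic speculative-decoding loop (not SPECTRA):
# a cheap draft model proposes k tokens, the target model verifies them,
# and generation keeps the longest prefix on which both models agree.

def speculative_step(prefix, draft_model, target_model, k=4):
    """Propose k tokens with the draft model, accept the verified prefix."""
    draft, ctx = [], list(prefix)
    for _ in range(k):
        tok = draft_model(ctx)
        draft.append(tok)
        ctx.append(tok)
    # Verify each proposed position (batched in one pass in a real system).
    accepted, ctx = [], list(prefix)
    for tok in draft:
        if target_model(ctx) == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            # First disagreement: keep the target model's own token instead.
            accepted.append(target_model(ctx))
            break
    return accepted

# Toy deterministic "models" over integer tokens.
target = lambda ctx: (ctx[-1] + 1) % 10                      # counts upward
draft = lambda ctx: (ctx[-1] + 1) % 10 if ctx[-1] < 3 else 0  # diverges at 3

out = speculative_step([1], draft, target, k=4)
print(out)  # [2, 3, 4]: two draft tokens accepted, then the target's fix
```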