Tuesday, September 19, 2023, 11:45, 4A301

Julien Lie-Panis

Models of Reputation-Based Cooperation: Bridging the Gap between Reciprocity and Signaling

Human cooperation is often understood through the lens of reciprocity. In classic models, cooperation is sustained because it is reciprocal: individuals who bear costs to help others can expect to be helped in return. Another framework is honest signaling theory. On this approach, cooperation can be sustained when helpers reveal information about themselves, which in turn affects receivers' behavior. Here, we bridge the gap between these two approaches in order to better characterize human cooperation. We show how integrating them can help explain the variability of human cooperation, its extent, and its limits.
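For intuition, the textbook version of the reciprocity argument fits in one inequality (a standard illustration, not a result from the thesis). In the repeated donation game, helping costs the helper c and gives the recipient a benefit b > c, and each interaction is followed by another with probability δ. Under conditional reciprocity, defecting yields b once and nothing thereafter, while mutual cooperation yields b − c in every round, so cooperation is stable when

    % Future reciprocated benefits must outweigh the one-shot
    % temptation to defect:
    \[
      \frac{b - c}{1 - \delta} \;\ge\; b
      \quad\Longleftrightarrow\quad
      \delta \;\ge\; \frac{c}{b} .
    \]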

In chapter 1, we introduce evolutionary game theory and its application to human behavior.

In chapter 2, we show that cooperation with strangers can be understood as a signal of time preferences. In equilibrium, patient individuals cooperate more often, and individuals who reveal a stronger preference for the future inspire more trust. We show how our model can help explain the variability of cooperation and trust.
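For a schematic sense of how cooperation can signal time preferences (an illustrative sketch with assumed parameters c, r, δ_H, δ_L, not the chapter's actual model): suppose helping costs c now, a reputation for trustworthiness pays r later, and patient and impatient individuals discount that future payoff by δ_H and δ_L < δ_H respectively. Cooperation then separates the two types, and thus honestly reveals patience, whenever

    \[
      \delta_H \, r \;\ge\; c \;>\; \delta_L \, r ,
    \]

since only patient individuals find the future reward worth the present cost, so observers can rationally extend more trust to those who cooperate.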

In chapter 3, we turn to the psychology of revenge. Revenge is often understood in terms of enforcing cooperation, or equivalently, deterring transgressions: vengeful individuals pay costs, which may be offset by the benefit of a vengeful reputation. Yet, revenge does not always seem designed for optimal deterrence. Our model reconciles the deterrent function of revenge with its apparent quirks, such as our propensity to overreact to minuscule transgressions, and to forgive dangerous behavior based on a lucky positive outcome.

In chapter 4, we turn to dysfunctional forms of cooperation and signaling. We posit that outrage can sometimes act as a second-order signal, demonstrating investment in another, first-order signal. We then show how outrage can lead to dishonest displays of commitment, and escalating costs.

In chapter 5, we extend the model of chapter 2 to include institutions. Institutions are often invoked as solutions to hard cooperation problems: they stabilize cooperation in contexts where reputation is insufficient. Yet institutions are at the mercy of the very problem they are designed to solve: people must devote time and resources to creating new rules and to compensating institutional operatives. We show that institutions for hard cooperation problems can nonetheless emerge, as long as they rest on an easy cooperation problem. Our model shows how designing efficient institutions can allow humans to extend the scale of cooperation.

Finally, in chapter 6, we discuss the merits of mathematical modeling in the social sciences.

scikit-network

A new version of scikit-network is available!

This release includes:

  • accelerated code for massive graphs
  • visualization in SVG format
  • soft clustering
  • soft classification
  • fast embedding
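
A minimal sketch of a typical workflow, assuming the current scikit-network API (names such as fit_transform and svg_graph may differ slightly between versions):

    # Cluster the karate club graph with Louvain and export an SVG drawing.
    from sknetwork.data import karate_club
    from sknetwork.clustering import Louvain
    from sknetwork.visualization import svg_graph

    graph = karate_club(metadata=True)
    adjacency = graph.adjacency  # sparse adjacency matrix of the graph
    position = graph.position    # 2D layout used for drawing

    # Hard cluster assignment: one label per node.
    labels = Louvain().fit_transform(adjacency)

    # Render the graph, colored by cluster, as an SVG string.
    image = svg_graph(adjacency, position, labels=labels)
    with open('karate_club.svg', 'w') as f:
        f.write(image)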

Open position on Explainable AI

Télécom Paris offers a full-time academic position as Maître de Conférences in the area of Artificial Intelligence, in particular on techniques for making the results or decisions of AI systems explainable, starting in September 2020.

More details here.

Open Associate Professor position in Scalable Artificial Intelligence in Paris

The DIG team is opening an Associate Professor position in Scalable Artificial Intelligence at LTCI, Télécom ParisTech in Paris.

More information: here

University: Télécom ParisTech, https://telecom-paristech.fr/
Location: Palaiseau, near Paris, France
Position: Associate Professor (“Maître de conférences”), tenured permanent position
Application deadline: Friday, March 15, 2019
Starting date: September 2019
Team: Data Intelligence and Graphs (DIG, https://dig.telecom-paristech.fr/)


New book “Des intelligences très artificielles” by Jean-Louis Dessalles

"AI" makes the headlines more and more often. The mysterious algorithms in our computers are world champions of chess and Go; they will drive our cars, translate automatically into any language, even imitate our ways of reasoning. Alas, they do not even know that they are intelligent.

To put it more plainly, they know nothing. All that computers equipped with the latest AI techniques can display is an intelligence that understands nothing: reflex without reflection. Some of our cognitive mechanisms, patiently refined by biological evolution, such as the search for simplicity and for structure in phenomena, remain beyond the reach of machines, which are forced to approximate our ways of reasoning ever more closely without ever truly reproducing them.

The fantasy of the all-knowing machine thus has a bright future ahead of it, even as the progress of AI raises, ever more acutely, the nagging question of whether genuine intelligence can be produced by silicon circuits.

Jean-Louis Dessalles is a professor and researcher at Télécom ParisTech. He uses artificial intelligence to take apart the mechanisms of human intelligence, in particular language and reasoning.

Book Website

Workshop on Graph Learning

A workshop on Graph Learning will be held at LINCS on May 14, 2018:

https://www.lincs.fr/workshop-on-graph-learning/

The objective of this workshop is to bring together people from industry and academia to present and discuss the most recent graph-based learning techniques, from both theoretical and practical perspectives.

The workshop will cover the following aspects:

  • Graph clustering
  • Topic detection
  • Recommender systems
  • Graph-based classification
  • Link prediction
  • Graph alignment
  • Social networks
  • Dynamic graphs
  • Graph signal processing

Speakers
Oana Balalau (Max-Planck Institute)
Alexis Benichoux (Deezer)
Pierre Borgnat (ENS Lyon)
Stephan Clémençon (Telecom ParisTech)
Vincent Cohen-Addad (CNRS / UPMC)
Matthias Grossglauser (EPFL)
Alexandre Hollocou (Inria)
Hervé Jegou (Facebook)
Renaud Lambiotte (University of Oxford)
Matthieu Latapy (CNRS / UPMC)
Dimitrios Milioris (Nokia Bell Labs)
Eric Siboni (Shift Technologies)
Michal Valko (Inria)

Organizers

Thomas Bonald (Telecom ParisTech)
Marc Lelarge (Inria)
Laurent Massoulié (Inria / Microsoft)