I am interested in understanding what makes human language unique and how we can endow artificial models with such a communication protocol. To this end, I use deep learning techniques, building on the emergent communication field to compare the languages developed by deep learners with human languages. More recently, I have been focusing on deep learners’ inductive biases and on how to inject the “right” biases into these task-agnostic models.
Prior to my Ph.D., I received a B.S. and a master’s degree in Applied Mathematics from Supélec, and a Data Science master’s degree from École Polytechnique. You can find more information in my CV.
Rahma Chaabouni, Eugeny Kharitonov, Emmanuel Dupoux, Marco Baroni. Communicating artificial neural networks develop efficient color-naming systems. PNAS (to appear).
Eugeny Kharitonov*, Rahma Chaabouni*. What they do when in doubt: a study of inductive biases in seq2seq learners. ICLR 2021. pdf
Rahma Chaabouni*, Eugeny Kharitonov*, Diane Bouchacourt, Emmanuel Dupoux, Marco Baroni. Compositionality and generalization in emergent languages. ACL 2020. pdf
Mathieu Rita, Rahma Chaabouni, Emmanuel Dupoux. Lazy and Impatient neural agents learn to communicate efficiently. CoNLL 2020. pdf
Eugeny Kharitonov, Rahma Chaabouni, Diane Bouchacourt, Marco Baroni. Information Minimization In Emergent Languages. ICML 2020. pdf
Rahma Chaabouni, Eugeny Kharitonov, Emmanuel Dupoux, Marco Baroni. Anti-efficient encoding in emergent communication. NeurIPS 2019. pdf
Eugeny Kharitonov, Rahma Chaabouni, Diane Bouchacourt, Marco Baroni. EGG: a toolkit for research on Emergence of lanGuage in Games. EMNLP 2019 (Demo). pdf
Rahma Chaabouni, Eugeny Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, Marco Baroni. Word-order biases in deep-agent emergent communication. ACL 2019. pdf
For an up-to-date list of publications, please have a look at my Google Scholar.