 
NLP for Human-Machine Dialogue
Presentation
Spoken Language Processing aims at achieving a robust analysis (syntax, semantics, pragmatics) of spontaneous spoken utterances. It has to cope with two main problems:
- Automatic speech recognition errors strongly corrupt speech transcriptions,
- Speech disfluencies (repairs, self-corrections, hesitations...) break the syntactic structure of spoken language.
I am investigating robust parsing methods (incremental shallow parsing based on POS tagging and chunking) in order to overcome these difficulties. This research takes place in the framework of human-machine spoken dialogue and of human conversational speech (broadcast speech).
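As a purely illustrative sketch (this is not the ROMUS or LOGUS code, and the toy tagset and utterance are invented for the example), the short Python fragment below shows the kind of shallow, POS-based chunking such an approach relies on: fillers are skipped and runs of nominal tags are grouped into noun-phrase chunks, so that a repaired sequence yields two competing chunks instead of a parse failure.

from typing import List, Tuple

# Toy, pre-tagged disfluent utterance (a real pipeline would start from ASR output
# and a POS tagger): "I want a a flight uh a cheap flight to Paris"
tagged: List[Tuple[str, str]] = [
    ("I", "PRON"), ("want", "VERB"), ("a", "DET"), ("a", "DET"), ("flight", "NOUN"),
    ("uh", "FILLER"), ("a", "DET"), ("cheap", "ADJ"), ("flight", "NOUN"),
    ("to", "PREP"), ("Paris", "PROPN"),
]

NOMINAL = {"DET", "ADJ", "NOUN", "PROPN"}

def chunk(tagged: List[Tuple[str, str]]) -> List[Tuple[str, List[str]]]:
    """Greedy left-to-right chunking: skip fillers, group nominal runs into NP chunks,
    leave the remaining words as single-word chunks."""
    chunks, i = [], 0
    while i < len(tagged):
        word, pos = tagged[i]
        if pos == "FILLER":            # hesitations are simply ignored
            i += 1
        elif pos in NOMINAL:
            j = i
            while j < len(tagged) and tagged[j][1] in NOMINAL:
                j += 1
            chunks.append(("NP", [w for w, _ in tagged[i:j]]))
            i = j
        else:
            chunks.append((pos, [word]))
            i += 1
    return chunks

print(chunk(tagged))
# The repair "a a flight uh a cheap flight" comes out as two NP chunks, which a later
# understanding step can interpret as a self-correction rather than a parsing error.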

Detection of emotions - I am currently extending this research to the question of the emotion carried by spoken utterances. Most research conducted in this area concerns a prosodic characterization of emotion. In a complementary way, I investigate the detection of emotions from the analysis of the propositional content of spoken utterances. The basic idea is that emotion is compositional: while simple words carry an emotional valency established by psycholinguistic norms, verbs and adjectives act as predicates that combine the emotional valencies of their arguments into a resulting global emotion. As a result, the computation of the overall emotion carried by an utterance depends on the semantic structure of the speech turn as well as on lexical emotional norms.
For the time being, we have implemented an emotion detector (EmoLogus) which is based on the Logus speech understanding system. EmoLogus is able to characterise the emotional valency (positive, negative, neutral) and intensity (weak, high) carried by isolated speech turns. We are now working on a more contextual detection of emotion. This work is done in collaboration with the European University of Brittany (Jeanne Villaneau and Dominique Duhaut) and the Montpellier 3 University (Arielle Syssau-Vaccarella). Agata Savary, from the LI, is also involved in this work.
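To make the compositional principle concrete, here is a minimal Python sketch; the lexical valencies and predicate rules below are invented for the example and do not come from the EmoLogus resources, and the predicate-argument pairs only crudely mimic the semantic structures produced by the LOGUS understanding step.

# Hypothetical lexical valencies in the style of psycholinguistic norms:
# -1 negative, 0 neutral, +1 positive.
LEXICAL_NORM = {"child": +1, "toy": +1, "dog": +1, "wolf": -1}

# Predicates combine the valencies of their arguments into a global valency.
PREDICATES = {
    "lose": lambda args: -max(args, default=0),   # losing something positive is negative
    "find": lambda args:  max(args, default=0),   # finding something positive is positive
}

def utterance_emotion(predicate: str, arguments: list) -> int:
    """Global valency of a predicate-argument structure for one speech turn."""
    valencies = [LEXICAL_NORM.get(word, 0) for word in arguments]
    return PREDICATES[predicate](valencies)

# "The child lost her toy": every content word is positive, yet the utterance is negative.
print(utterance_emotion("lose", ["child", "toy"]))   # -1
print(utterance_emotion("find", ["dog"]))            # +1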
Works and projects
- Spoken language understanding for dedicated human-machine dialogue - I am developing robust parsing methods which aim at addressing tasks that are more complex than standard ATIS-like applications. Furthermore, these approaches describe speech disfluencies more precisely than standard pattern-based approaches (see for instance the work of Shriberg or Heeman).
- ROMUS speech understanding system (finite state automata for robust chunking): PhD of Jerome Goulian (2002)

- LOGUS speech understanding system (logical approach based on categorial grammars): PhD of Jeanne Villaneau (2003)
- EPAC project (2007-2010) - Extension of our previous work to general conversational speech: chunking and named entity detection in broadcast speech.
- EMOTIROB project (2007-2010) - Speech understanding and detection of emotions for a companion robot to be used by children in hospitals (PhD of Marc Le Tallec under my supervision and that of Jeanne Villaneau and Dominique Duhaut).
Some publications
- Marc LE TALLEC, Jeanne VILLANEAU, Jean-Yves ANTOINE, Dominique DUHAUT (2011) Affective interaction with a companion robot for vulnerable children: a linguistically based model for emotion detection. Proc. LTC'2011, Language Technology Conference, Poznan, Poland, pp. 445-450. [HAL-00664618]
- Marc LE TALLEC, Jeanne VILLANEAU, Jean-Yves ANTOINE, Agata SAVARY, Arielle SYSSAU-VACCARELLA (2010) EmoLogus - a compositional model of emotion detection based on the propositional content of spoken utterances. Proc. 13th International Conference on Text, Speech and Dialogue, TSD'2010, Brno, Czech Republic, Sept. 2010. In LNCS/LNAI 6231, Springer, ISBN: 978-3-642-15759-2. [HAL-00536786]
- Yannick ESTÈVE, Thierry BAZILLON, Jean-Yves ANTOINE, Frédéric BÉCHET, Jérôme FARINAS (2010) The EPAC corpus: manual and automatic annotations of conversational speech in French broadcast news. Proc. 9th European Conference on Language Resources and Evaluation, LREC'2010, Valletta, Malta, May 2010.
- Jeanne VILLANEAU, Jean-Yves ANTOINE (2009) Deeper spoken language understanding for man-machine dialogue on broader application domains: a logical alternative to concept spotting. Proc. Workshop on the Semantic Representation of Spoken Language, SRSL'2009, EACL'2009, Athens, Greece, April 2009, pp. 50-57. [ACM Portal]
- Jean-Yves ANTOINE, Abdenour MOKRANE, Nathalie FRIBURGER (2008) Automatic rich annotation of large corpus of conversational transcribed speech. Proc. 8th European Conference on Language Resources and Evaluation, LREC'2008, Marrakesh, Morocco (to appear). [LREC_2008-172] [HAL-00484046]
- Jeanne VILLANEAU, Jean-Yves ANTOINE (2004) Categorial grammars used to partial parsing of spoken language. Proc. Categorial Grammars'2004, Montpellier, France.
- Jerome GOULIAN, Jean-Yves ANTOINE, Franck POIRIER (2003) How NLP techniques can improve speech understanding. ROMUS: a robust chunk based message understanding system using link grammars. Proc. Eurospeech'2003, Geneva, Switzerland, pp. 2773-2776.
- Jeanne VILLANEAU, Jean-Yves ANTOINE (2001) Combining syntax and pragmatic knowledge for the understanding of spontaneous spoken sentences. Proc. 4th Conference on Logical Aspects of Computational Linguistics, LACL'2001, Le Croisic, France. In P. de Groote, G. Morrill, C. Retore (Eds.), LNAI 2099, Springer Verlag, pp. 279-295.