Dialogue About Dialogue: Talking Linguistics, Dialogue Systems and Conversational AI with Prof Ginzburg
Wluper’s Research Scientist Ye Tian was recently joined by Jonathan Ginzburg, Vladislav Maraev and Chiara Mazzocconi to discuss Linguistics, Dialogue Systems and Conversational AI. Take a look at their conversation below to learn more about the history and shifts in the industry, what’s exciting about the field and what is still lacking.
Ye: Jonathan, you have a background in mathematics and then you did a PhD in linguistics. What made you transition from mathematics to linguistics?
Jonathan: That's a good question. Last weekend I was at a winter school for a bunch of Math students. I gave a talk on interaction, so I talked about negation, and how type-theoretic negation can be useful for talking about laughter and about head-shaking, etc. I was arguing for the need for negation. After my talk, I went upstairs to get a coffee. There were two students who didn't see that I was there. One of them says to the other, "Are you into linguistics?" The other guy says, "No, not at all." It was quite interesting being around these Math people because, you know, Math is, in the end, a domain quite different from the sort of empirical debates. It's got a huge long history, right? And it's much more... should I say solid inside? I'm not sure solid is the right word. But, you know, there are always discussions like, "Do you know this guy's proof of that thing? No, I only know the other guy's proof of that thing," etc. It took me back to those days. It's nice to see these things that you can just develop on the blackboard, without memorising them; you just follow from the little formal system and everything.
My point is, I had done Math and I liked some aspects of it. I was reasonably good at algebra; other aspects, more calculus-type stuff, I was less interested in and less good at. And I was interested in logic and set theory and things like that. At the time, I'd taken some really advanced logic courses, and the logic they did there was very abstract, in the sense that it no longer held any conceptual interest for me. It was really top stuff, the cutting edge of modern logic, used for all sorts of mathematical purposes, to prove things about various mathematical disciplines, but I didn't find it that intriguing. Whereas I had read some stuff about linguistics and some about syntax, and later I had a chat with this guy at UCL, Neil Smith. He said, I think when I was inquiring about coming to study for my Masters, "I think you might find semantics more interesting than syntax." So I then found out about situation semantics, which at the time was very big. I had read about it, and it was sort of some strange set theory and stuff like that, which I found intriguing. I'd also done some stuff with languages in the army and so on, so I thought it could be an interesting combo. I went without basically knowing anything. I mean, I had read Syntactic Structures and some stuff on that, and I wrote a little paper, which was... basic, trying to come up with a theory of what all the possible responses are that you could give to an utterance. Like, "I wish I knew where the stapler was." But, you know, I knew nothing, and somehow I got accepted into these programmes.
Dialogue gives a much richer perspective of language than more traditional linguistics.
Ye: Interesting. So, to expand on that and segue into the second question: what are the differences between what traditional linguistics is interested in and the particular interest in interaction, dialogue, conversation and communication more generally? It's kind of a vague question. We can either talk about the difference between traditional linguistics topics and the more focused linguistics of dialogue and interaction, or about why you are particularly interested in the interactive part rather than what one might think an introduction to traditional linguistics would focus on.
Jonathan: Again, I got into dialogue to some extent by accident. I had been accepted to this postdoc, and on the first day of my postdoc I was at lunch with Robin Cooper. I had come up with an idea of studying some very abstruse kind of stuff with functional readings of... I don't know what exactly, and he said, "Why don't you work on dialogue with us?" So that's kind of how it happened. I think dialogue is of course a much richer perspective on language than you get with the more traditional things, and it ties into so many things. Modern semantics is kind of a more sophisticated version of semiotics. This was big in the 50s and 60s, and I guess it still may exist in some places. I think dialogue is just a much more general notion of communication, of meaning making, than you get with things like text and the traditional sort of thing that was studied for millennia. I guess these days it's sort of this whole multimedia thing. It seems to me that people are also evolving to some extent, with all these extra media that they have, you know. When they have a conversation, they're also holding their mobile phone at the same time and doing all sorts of things. They're not just speaking. I mean, of course, spoken dialogue is, to some extent, not the whole story, right? You have to integrate it with other stuff, which, for the moment, I haven't done anything with, but you have to be aware of it.
Vlad: So a question for you... was the search driven by interest or by a search for truth, or is it the same thing?
Jonathan: I think I was really interested in trying to come up with this thing with all the possible answers you could give to questions. Nowadays it's hard to say... One is sort of pushed by the sea one is in and trying not to drown, right, so you kind of just keep on doing what you can do and trying to survive. There's not much money involved, so at some point you get kind of constrained, so you can't just... I mean, there are other things that one would like to do too, but one doesn't.
Ye: What is semiotics? Is that more about symbols?
Jonathan: It's the study of signs, I think, more or less; something like the meaning conveyed by the right sorts of signals. It was big in the past. This guy was run over at some point... no, not by a bus... by, I think, a van carrying laundry. He is a very famous guy; he wrote this book S/Z... Roland Barthes. There's an interesting novel by Laurent Binet about Barthes' death and a whole bunch of people in Paris and so on. It's called The Seventh Function of Language. So with signs, for instance, the clothes you wear convey certain signs. A beard... I was cycling today and saw a guy with a particular kind of beard and I thought, "Ah, what is he trying to convey by that?" Different kinds of beards convey different sorts of things. So all of these things are signs, right? That goes beyond just language.
When you see somebody you get stuff from them and not just from their speech, right? Theatre in this respect is a domain where you can do all of this... theatre or maybe, even more, opera. Because in opera you have all of the stuff that you have in the theatre, more or less, though it's of course somewhat stylised, because everything happens so much more slowly in opera. But yeah, opera has everything. It has all the lights, the clothes, the text, the music, and theatre is a little bit more restricted in some respects, so you don't get as moved as you would with opera, right? Opera can really bang you in the head, whereas theatre you can enjoy, but not in the same kind of overwhelming way.
Ye: Do you think there's a natural truth to things like theories of communication, models of communication?
Jonathan: One would hope so. But it's much more difficult because it's so much more complicated than just physics. And physics is so much more abstract and more stylised in some respects. Of course, that's only some kind of physics but yeah, sure. It's really much more difficult to come up with general rules.
Ye: Okay, so let's say you have a system that is chaotic, in the mathematical sense. There could still be underlying rules, but it's a system where you can't predict the outcome. Because what is physics? In physics you're trying to find laws about nature, almost excluding human intentions. And somehow everything that involves human intentions becomes very hard to predict.
Jonathan: This is only if you want certain predictions, and of course that's an illusion, right? So I suppose you can, if you're happy to settle for certain high levels of probability.
Straightforward pattern matching can be very effective.
Ye: Yeah and I suppose that is why the technologies are working a little bit with statistical models. So if we get to the next question, in the field of dialogue systems, what are the historical major shifts and periods and are there any major changes? I think most people who are probably interested in dialogue systems and conversational AI, kind of jump in from a very recent period of time. So could you talk about the history of dialogue systems, or the major strands and ideas and so on?
Jonathan: I think Eliza is still terrifically important, right? These days when you send an email, when you use Gmail, it gives you some suggestions for reactions, which are actually not bad. Obviously, they're just based on some very simple patterns, I'm guessing, but it's quite astonishing how disconcerting they can be. I have somebody who I email and I usually start my emails to this person with hi, and their initial and I was about to respond to this person, clicked reply and it showed me this initial starting thing that I use and I thought, "Hey, what’s this? My privacy has been violated!" Of course, it is true that my privacy has been violated, email does that. Which means we should leave Gmail as soon as we can. But you don't use Gmail, right Vlad?
Vlad: For my personal email I don't use Gmail. Do you think it's really privacy or it's more like agents? Because it's like you felt that way not when they were reading your emails, which they've been doing for like 20 years or so, but only when they reacted?
Jonathan: Yeah, it's kind of revealing itself, that it's there. But it's just very straightforward pattern matching, and that can be so effective, right? Just by seeing these little things, these little suggestions, which I don't think are very sophisticated. But there was just a haha element: an email said something like, "Don't get your hopes up," and then Gmail suggested to me "Haha" or something like "Thanks. Haha." I think Vlad must have seen this. So there's that kind of stuff, and Siri does that kind of stuff too, right? More or less? I mean, nothing much more clever.
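[Editor's aside: for readers curious what "very straightforward pattern matching" looks like in practice, here is a minimal Eliza-style responder sketched in Python. The rules and replies are invented for illustration; they are not taken from Eliza, Gmail or Siri.]

```python
import re

# Ordered (pattern, reply template) rules; first match wins.
# These example rules are hypothetical, made up for this sketch.
RULES = [
    (r"\bI wish I knew where (.+) was\b", "Have you looked for {0} recently?"),
    (r"\bI (?:am|feel) (.+)", "Why do you say you are {0}?"),
    (r"\bdon't get your hopes up\b", "Haha, thanks."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or a generic fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, flags=re.IGNORECASE)
        if match:
            # Slot the captured text back into the reply, Eliza-style.
            return template.format(*match.groups())
    return "Tell me more."
```

No dialogue state, no understanding: each turn is matched in isolation, which is why such systems feel uncannily apt one moment and fall apart the next.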
Ye: Do we have free will or are we just, you know, a statistical pattern?
Vlad: Maybe. We're not being like, well, creative anymore, we just agree with what it's saying. But it's not too bad. Right. Accept it. Carry on with it.
Jonathan: Yeah, I mean, the issue is that somehow we have those kinds of systems, which are the only ones that are kind of creative. And then we have the ones that are much more sophisticated, but they can do so much less, right? So I don't know if there are any... I imagine some of these ICT systems... I don't know what the latest stuff coming out of ICT is, for instance.
Ye: What's ICT?
Jonathan: The Institute for Creative Technologies in LA, where they produce a number of quite sophisticated systems, at some level. I mean, I've only seen one of them, so I don't really know what the capabilities of the systems are. But they used to have this embodied conversational agent who was a doctor at Médecins Sans Frontières in Iraq, and the US Army had to negotiate with him. They had this scenario where you need to move your base, and there were a number of emotional settings that they would change to make him either aggressive or calm or whatever. So I guess this is pretty scripted. And then they also had this system called SimSensei, I think it was called, that was doing some sort of psychological assessment. I think the analyst was Asian, an Asian woman. And it was a system that was supposed to be used with Army veterans, so that they could assess Army veterans for various psychological issues. I only saw a very, very brief demo, but it was very sophisticated in terms of the face stuff, the speech recognition, the generation, stuff like that, you know, the dialogical capabilities. I guess there's the usual stuff with the fact that you're sort of constrained by the sophistication of your NLU.
Ye: So roughly what period of time was that?
Vlad: Maybe five years ago.
Ye: So there's ASR improvement, which is quite big, and I suppose there might be some general NLU improvements because of language models?
Jonathan: That remains to be seen, right, how much this is actually going to improve the NLU.
Ye: So let's talk about the trend of end-to-end dialogue systems. Now there's spoken language understanding: speech to slots and intents, speech to a direct semantic representation, instead of going through the words bit. So maybe there will be, at some point, end-to-end dialogue systems. What are your thoughts on that trend versus a more modular way of building dialogue systems, the traditional way, where you have a dialogue policy and NLU and so on? Does end-to-end have more promise? I hear from some people, for example from a robotics reinforcement learning point of view, who look at dialogue systems by comparison with robotics machine learning. Traditionally they have several components and they try to maximise each component. But they found that, because of error compounding, even if each component is at 90%, the whole system is still unstable, so an end-to-end system works better. From that background they say dialogue is going to be the same, and end-to-end is going to be better than component-wise modular systems.
Jonathan: I don't have any informed thoughts about that. I guess it's an empirical question. Some people are very suspicious of whether really deep learning can ultimately deliver in this domain. I'm sure deep learning can deliver certain things, but can it really deliver spontaneous, unrestricted interactions? I mean, it's not clear that other systems or other approaches can either. I can't give a good argument to say that it's not going to work out or that it's intrinsically impossible. You can't get any proof of that. Given that there's so many resources and so much is being poured into it, you can assume that some progress is going to happen, but who knows. I think Vlad is a better person to give an opinion.
Vlad: No, I don't think so. I was thinking about goals or system goals. Where do you take them from? Does it have some goals or intentions or something that they want to do? Maybe that's a question. If you want to do something that you have already done before, then it can come from data, but I think it's the same with humans. We don't only operate on input and create some output, we have something that drives us somehow.
We need to start looking at the emotional and affectionate side of interaction.
Ye: Some behaviours that resemble curiosity and creativity appear in these exploration, reinforcement learning settings, where you have agents and robots that explore, either with language or with the environment, and their goal is somehow built in: to explore and keep learning new things. In some of these exploration-type tasks they might come up with... I remember once we went to Paris 6, to the robotics lab. There was a very big robot whose goal was to throw a ball in a particular direction or to hit something, but they didn't train the robot to throw its arm in a particular way. They just left the robot to try to solve it. And it would throw in ways that are very unbiological, that we wouldn't have thought of, because it just stumbled upon that as a solution. That bears some resemblance to creativity. What does it mean for a dialogue system or Conversational AI to be spontaneous and explorative, rather than a closed system?
Jonathan: My guess is that in a way there's this issue: a dialogue system, unless it has its own goals, is very much a slave, right? So it's a little bit like Lucky in Waiting for Godot. He's kind of held by a belt or something, just walking around, being pushed and pulled in various directions. So, as long as it's just a slave, with no will of its own and no longer-term goals, then obviously it's not going to be very spontaneous. I mean, it's just reactive, right, if it doesn't have its own tasks, things to do. I think that's a big issue.
Ye: Do you see any significant patterns and periods and shifts?
Vlad: Like technological shifts or something like marketing shifts?
Ye: It could be related to dialogue systems, personal assistants, and Conversational AI. It could be any of these shifts, like technology or perception or scientific.
Vlad: I mean, if you look at the whole landscape, not just the research landscape, speech recognition seems to be the main shift.
Ye: So speech recognition has had the main shifts, which made these systems more usable, and more people are using them?
Vlad: Yes, it's a bit strange in a way, but I think that's what really happened, why people started using them: it adds an aspect to the whole thing, but with practically no improvement underneath.
Jonathan: I guess that's why Siri is useful. I mean, I don't use Siri, and I have a feeling even my kids stopped using Siri. It was kind of a game that was interesting for a while, but at some point it's like we've seen it, right? It was useful because it did understand, in the sense of being able to get the basic thing you were telling it, like "Check the weather for me", but at some point, if it's that or just typing it in, it's not such a big deal. It's kind of entertaining, a bit like playing with the dog, but the dog is a bit more interesting than Siri.
Ye: Yeah, so that's an interesting point, right? That a dog is more interesting than a conversational agent, which pretends to be intelligent, and can potentially access a large knowledge base. So why is that? Why do humans find it more rewarding to interact with the dog than with Siri?
Jonathan: Well, I guess there's an obvious answer, which is the dog has some sort of emotional kind of life, right? You have empathy with the dog. Whereas the conversational agent... I mean, I have this robot FurHat. Once you start interacting with him or her it does make a difference in the sense that he does have a face. I mean, above all, he's a bit scary, he does have a certain kind of presence. But I suppose these robotic things are going to make some difference.
Ye: Is the scariness like the uncanny valley or something else?
Jonathan: There's no rational thing about it. It's just somehow done so well in some respects that it's kind of an Eliza effect where you attribute so much more to it than is actually there. I think that's kind of a fairly crucial thing, the embodiment of the emotional life and caring a little bit about this thing somehow. And the dog obviously cares for you, right? But, I guess, equally if you have a tortoise, you can't really have much of a conversation with it, but still the tortoise does have its own life. You're curious to say, "Oh, hello, tortoise, how are you doing today?" That kind of thing. Conversational agents just don't have that yet.
Ye: It's very interesting this comparison, because you're basically having a one sided dialogue with the tortoise but you would still have it and you would still feel something from that.
Jonathan: It's the same thing with a baby, right? I mean, with a three-month-old baby you're projecting so much more onto it. Obviously, we have no idea what's really there, right? And it's very unlikely we ever will. You have people like Samoans who actually have a different perspective on children; they don't really attribute much to them until they're about three. So in certain civilisations we do this over-attribution, but it's obviously not intrinsic to being human.
Ye: Is this specific to certain societies? Or is it human nature, where somehow emotion is what drives and motivates you, whereas anything that's purely information exchange doesn't? Why do we happily carry out this non-interactive dialogue with so many things? People talk to plants and feel something; they talk to tortoises, to babies. They feel much more fulfilled than when they're talking to a robot, which actually does respond.
Jonathan: I think it depends. This robot has a face, and that makes a huge difference, and a voice, which also makes a difference. I don't know what the difference is between how you react to an embodied thing, a robot, in an embodied conversation, and to a non-robotic embodied conversational agent.
Ye: I remember, at one of the SemDial or SIGDial conferences, the Heriot-Watt lab in Edinburgh had a colour-learning robot that learnt colours, shapes and words, and they got this robotic arm to pick up stuff and put it somewhere. It was learning the meaning of these words through interaction. They found that when the robotic arm had a voice (I don't know whether it had a face), people were harsher with it, because they had higher expectations of its intelligence and capability. When it no longer had a voice, people interacted with it more like they would with a dog. They actually became more affectionate and forgiving of the system, and found certain actions to have some cuteness. On the other hand, some robots, like some of those Japanese ones, look so eerie: eerily close to human while not being human. I wonder whether having a face makes people want to interact with it more or less. It seems maybe one of the things that isn't getting enough focus is the emotional and affectionate side of interaction.
Jonathan: Obviously, there's a lot of research on that. I'm sure that the Alexa people are doing stuff on that. Whether they're going the right way around it, I don't know. But I'm sure that there's a bunch of stuff associated with it.
Ye: What do you think, Vlad, about emotion in Conversational AI or interaction technology?
Vlad: I agree that it's important but in this respect that we discussed, why don't robots get the same empathy as pets? I think maybe it has something to do with their social role or some kind of a social structure? In most cases they are subservient to us so maybe that doesn't induce any sort of respect for them. Also if they pretend to be or pretend to act like humans, maybe this feels dangerous to us or we feel scared of that in a way. I'm not sure, but it's about society in a way. Maybe they can still use language without being human.
Ye: Yeah, it's almost like people would feel more sympathetic to a very robotic voice rather than a voice that pretends to be human.
Jonathan: Like an accent, in the sense that I'm sure it could be effective in its own way. So it's not clear that naturalness, or imitation, is always the best thing.
Ye: I wonder whether there are different scenarios with different people. It's related, because the normal thinking is that you should have a dialogue system that looks and sounds as human-like as possible. But that might not be a good thing, because then you have human-like expectations. Whereas if it's a system that is clearly robotic, you will help it more: you will adapt, rather than expecting the system to adapt, because it can't.
Vlad: I think it's about being human-like in terms of behaviour. Behaviour is very general, maybe, but it covers how it does stuff: what its intentions are, how it looks and so on. I think this thing about being similar to humans in terms of behaviour is important because, as with dogs, they become more usable, since for us communicating with language and speech is a very natural way to communicate. With all forms of language, it would be good if we didn't need to adapt as much as we do now. For instance, most systems are non-incremental, and this is not good at all.
Dialogue is not always information seeking or resolving issues. There's more to it.
Ye: Let's talk about the next question: what is currently lacking in research and in industry? I think incrementality is definitely one of them. I haven't seen much real commercial application of incremental dialogue systems. Have you?
Jonathan: I just know about David Schlangen's stuff but there's probably others.
Ye: Why do you think incrementality is not looked into? Is it just hard or is it that people don't realise that it's a feature of natural dialogue? Or something else?
Vlad: My guess would be that it's because, if you use something like states, there are incredibly many more states once you incrementalise things.
Ye: Yeah, that's true. We could get away with having fewer states, but then you just have a statistical distribution, moment by moment.
Vlad: No, no, I mean, because you need to react, right? Somehow. Or not react. So traditionally that's done by, maybe, a shortcut.
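[Editor's aside: to make the incrementality point concrete, here is a toy sketch of an incremental slot-filler that revises its dialogue state word by word, so the system could in principle react mid-utterance. The vocabulary and slot names are invented for illustration; this is not how any production system mentioned here works.]

```python
# Hypothetical mini-domain: flight booking with two slots.
CITIES = {"london", "paris", "berlin"}
DAYS = {"monday", "tuesday", "friday"}

def incremental_parse(words):
    """Yield a snapshot of the (partial) dialogue state after each word."""
    state = {"destination": None, "day": None}
    for word in words:
        token = word.lower().strip(".,!?")
        if token in CITIES:
            state["destination"] = token
        elif token in DAYS:
            state["day"] = token
        yield dict(state)  # snapshot: the state as of this word

states = list(incremental_parse("Book me a flight to Paris on Friday".split()))
# After the word "Paris" the destination slot is already filled,
# well before the user's turn is over.
```

A non-incremental system would compute only the final state, after the full utterance; the incremental version exposes every intermediate state, which is exactly where the state explosion Vlad mentions comes from.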
Ye: What else is lacking?
Jonathan: The question is what's there, I guess.
Ye: Yeah. What's there?
Jonathan: We know from Vlad there's ASR. For instance, there's Staffan's type of system, which is actually not unsophisticated and can do a whole bunch of, I guess, basic information-seeking dialogue, with a certain amount of repairing. I don't know how much more advanced other systems are beyond that.
Vlad: Yeah. Speaking of this, here and there maybe they're trying to tackle more, but you're right: it's information-seeking dialogue. It's not other genres, like the education genre, well, to some extent it is now. But that's not the only thing dialogue is, right? It's not only booking a flight ticket; it's not always information seeking or resolving issues. There's more to it.
Ye: So how fundamentally different are information seeking or goal oriented dialogues versus open ended conversations? Maybe they are totally different things and need completely different systems?
Vlad: I was thinking about goals again, like that's their goal. I think maybe chat dialogue would also have some social goals or like micro goals or things like that. Or maybe also have a goal of curiosity or some sort of exploration goal.
Jonathan: We have a paper we recently resubmitted about conversational types and their structure. There we are looking pretty much at a taxonomy of conversation types. I think it's quite a useful exercise, and it's not uninteresting for people who are interested in dialogue systems to try to get some sort of uniform characterisation of these different conversational types. We ended up with the conclusion that basically the main thing we could talk about was the subject matter of the conversation. That's the main thing we could really say anything interesting about, and how to specify it. It looks like the simpler conversation types are the ones that have a small number of fixed issues that have to be discussed. And that covers a lot of things, like doctor's appointments and buying bread. Then there are other kinds of activities where you can't specify the small number of goals in advance; you have to give some kind of description, and that seems to rely on some notion of question classification.
In a religious institution, I guess religious services are rather scripted, but you might have a speech by the clergyman, and he speaks about certain kinds of religious issues. So they have some notion of talking about religious issues. In a parliament, again, it's a discussion. It's not open-ended, but it's a discussion of certain issues that are somehow political issues, and you don't know how many, but it's within that sort of range. Then you have what the BNC calls Sports Live, which is commentary on an event. There the basic description seems to be that you periodically say what's happening now; that's essentially what the discussion is going to be. So, in that respect, for a wide range of activities you can get a story about what's going to happen in terms of subject matter. You can to some extent predict it, except in the case of chat.
Chat is really a case where there's some predictability at the beginning, but not much. There's a greeting interaction at the beginning and at the end. In the middle, who knows? It's just going to be stuff about whatever interests the conversationalists at the moment. So, in that respect, it does constrain things to a certain extent; it does give you some idea. That's just on the level of subject matter, right? Then you have to add lots of other variables: what kinds of other actions exist? A classroom, for instance, is different in the sense that there's a teacher. The subject matter is not fully predictable in the classroom, because the class can talk about anything: the teacher asks certain questions about the domain the class is about, but beyond that there are no real restrictions. And there are issues about control, the fact that the teacher controls things in terms of the turn-taking. That, to some extent, is one way of looking at things: trying to come up with a general theory of the variations across activities, and attempting a comprehensive characterisation of them. That's a strategy, I suppose, and at least it gives you some idea of the variability across domains.
Ye: That's certainly something that's lacking, in the sense that almost nobody's studying this, certainly not in industry. If you think about it, you can still evaluate whether a conversation is good or bad. What does it mean for the UK Parliament to have a good conversation, or a bad one? I don't know. Maybe part of it is how much agreement or new information has been brought about.
Jonathan: Some agreement is useful, but again, too much is probably also not very interesting. I don't know if the analogy is good, so I'm just throwing it in the air, but I did this work on multimodal interaction, on the issue of gaze: how much of the time do you look at each other? And there's a range, between about 60% of the time and 92 or 93%, right? If you go below 60, the other person is going to complain and ask, "Are you still with me?" If you go above 92%, it's like, "Why are you staring at me all the time?" So the analogy I'm trying to put forward is that a good conversation is one where a fair amount of agreement is reached. If it's too much, then it's just... I don't actually remember his name... Xi Jinping? Great Leader. Then it's a conversation with a great leader, right? That's the level at around 95%. But under 60% it's kind of like, "You say your thing; I say my thing." Nothing really comes of it. The numbers might be wrong, but it's at least some sort of analogy.
Ye: This might be one dimension that distinguishes different types of dialogue, because with hotel or airline booking you want 100% agreement; anything less is not good. So that's one feature, whereas in a more open-ended discussion the dial on this agreement metre is set slightly differently, up to the point where it becomes an argument instead of a conversation, although an argument is a type of conversation too. It'd be funny to build a robot to argue with you.
Jonathan: There's all these stories about a lack of creativity, that creativity requires some sort of dissonance, in a sense.
Ye: Incongruity, perhaps? Like wittiness is a good indication of some kind of creativity.
Vlad: That's sort of a dissonance I would say.
Ye: There are so many things that are never touched upon by the industry as a whole and very, very little by research. It feels very far off. But they're very interesting topics. Okay, we can wrap it up there. Thank you very much for this conversation.
Jonathan: Thank you for an interesting chat.
About the guests
Jonathan Ginzburg is Professor of Linguistics at Université Paris Cité (formerly Paris 7). He has held appointments at the Hebrew University of Jerusalem and King’s College London. He is one of the founders and currently an associate editor of the journal Dialogue and Discourse. His research interests include semantics, dialogue, language acquisition, musical meaning, and brain structure. He is the author of Interrogative Investigations (CSLI Publications, 2001, with Ivan A. Sag) and The Interactive Stance: Meaning for Conversation (Oxford University Press, 2012), along with around 100 articles.
Vladislav Maraev received a Diploma in Communication Networks in 2009 from the St. Petersburg State University of Telecommunications. The main focus of his industrial experience has been on dialogue systems: since 2011 he has worked with spoken dialogue systems as a developer, system analyst, project manager and researcher. He completed an MA programme in Cognitive Science at the University of Lisbon in 2017. He is now working towards his PhD on the role of laughter in conversation. Vladislav also has wide experience in Natural Language Processing and Deep Learning, with his MA thesis and several publications on the subject.
Chiara Mazzocconi is a Postdoc at the Institute of Language, Communication and the Brain (Aix-Marseille University). She obtained a PhD in Linguistics from Université de Paris. She also holds an MSc in Neuroscience, Language and Communication from University College London. Her background is in Speech and Language Pathology (Università La Sapienza, Roma). Her main interests are pragmatics of dialogue, its development and impairments in clinical populations.
Book a live demo to learn more about us and our Conversational AI technology.