Our group will examine the impact of language on artificial intelligence. More specifically, we aim to discover which aspects of language in computer-mediated communication (CMC) make a conversation most closely resemble human interaction. We will research a range of artificial intelligence sources, from Turing-test software to AOL Instant Messenger bots, to determine which method of communication is most feasible for our project requirements. Additionally, our group is interested in creating its own AI program with limited conversation abilities. This program would let us customize responses and thereby control which linguistic cues we present during lab testing.
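A bot with "limited conversation abilities" could be as simple as an ELIZA-style pattern matcher. The sketch below is purely illustrative, not our actual design; the rules and responses are placeholder assumptions.

```python
import re

# Minimal ELIZA-style sketch: canned responses triggered by keyword
# patterns, with a generic fallback. All rules here are illustrative.
RULES = [
    (re.compile(r"\bI feel\b", re.I), "Why do you feel that way?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
    (re.compile(r"\?\s*$"), "What do you think?"),
]
FALLBACK = "Tell me more."

def reply(message: str) -> str:
    """Return the response for the first matching rule, else the fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

Because every response is hand-written, the experimenter has full control over the wording, which is exactly what makes it easy to add or remove specific linguistic cues later.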
Our research aims to measure the effect of CMC-conceived signals on the believability of computer-mediated human interaction. We believe that participants in computer-mediated communication have grown accustomed to distinct medium-based signals that they expect to receive when interacting online. Using rudimentary artificial intelligence, as described above, our subjects would interact with a computer “bot” via synchronous chat. By controlling the presence of these signals, we will attempt to better replicate a human interaction. Variables available for manipulation include CMC-conceived acronyms, emoticons, and linguistic characteristics such as Clark’s track 2 signals and adjacency pairs.
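One way to control the presence of these signals is to layer them onto a base reply as independent toggles. This is a hypothetical sketch, assuming a small acronym map and emoticon list; the actual cue inventory is still to be decided.

```python
import random
from typing import Optional

# Hypothetical cue inventories for illustration only.
ACRONYM_MAP = {"by the way": "btw", "to be honest": "tbh", "laughing": "lol"}
EMOTICONS = [":)", ":-P", ";-)"]

def apply_signals(text: str, use_acronyms: bool, use_emoticons: bool,
                  rng: Optional[random.Random] = None) -> str:
    """Add CMC-style cues to a base reply, each class switchable per condition."""
    rng = rng or random.Random()
    if use_acronyms:
        for phrase, acronym in ACRONYM_MAP.items():
            text = text.replace(phrase, acronym)
    if use_emoticons:
        text = f"{text} {rng.choice(EMOTICONS)}"
    return text
```

Keeping each cue class behind its own flag means a single base script can produce every experimental condition, including the cue-free control.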
We hope to recruit subjects into a laboratory setting, where they will interact with a pre-programmed computer bot about a specific subject (to be determined). We will adjust the presence and/or frequency of CMC-conceived signals between sessions; in some interactions, all conventional CMC signals will be absent. After the session, each subject will fill out a questionnaire rating his or her experience. Details such as technical limitations, control variables, and the number of variables to test still need to be finalized.
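Varying the presence of each signal class between sessions amounts to enumerating on/off conditions and ordering them per participant. The sketch below assumes three signal classes and a full factorial design; both are illustrative placeholders, since the number of variables is not yet finalized.

```python
import itertools
import random

# Assumed signal classes; the real set is still to be determined.
SIGNALS = ["acronyms", "emoticons", "track2_cues"]

def conditions():
    """Every on/off combination of signals, including the all-absent control."""
    return [dict(zip(SIGNALS, bits))
            for bits in itertools.product([False, True], repeat=len(SIGNALS))]

def session_order(participant_seed: int):
    """Shuffle conditions per participant to spread order effects."""
    order = conditions()
    random.Random(participant_seed).shuffle(order)
    return order
```

Seeding the shuffle with a participant identifier keeps each subject's session order reproducible for later analysis.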
Erik Skantze
Leo Baghdassarian
Brendan Gilbert
Natalia Sturtz-Verastegui
5 comments:
Trying to pass the Turing test, eh? That's definitely ambitious, but interesting.
It might be hard to code up a conversation bot completely from scratch (though since you took on this project I'm guessing one of you has experience with NLP?). Still, definitely interesting. I once read through the transcripts from one of the Turing Test years to see what the contestants were like, and most of them were fairly obviously machines; the one that fooled at least one human into thinking it was genuine, though, did so by conversing about how it thought that the robots at the competition would never be able to beat humans like "themselves," endearing itself to the judge.
Still, I'm not sure if merely including emoticons in a chatterbot's repertoire will be enough to fool a human, but I'm certainly interested in seeing what you guys come up with.
yeah, it will be quite the endeavor to actually build a bot that will successfully fool a human... we know this ;-) However, what we ultimately want to see is whether using online conventions (like emoticons, abbreviations, acronyms) or characteristics of discourse (like track 2 signals) will simply increase the "believability" factor.
BTW, this wasn't typed by a human. I am a robot :-P
^__^
This sounds like a really interesting topic - I look forward to hearing more about it once things become finalized! Will your participants be aware that they may be talking to a bot? If you tell them they have a 50/50 chance of either talking to a bot or another person, at the end they can easily select 'who' they think they are talking to and rate how sure they are of their selection. You can then easily relate the numbers to the presence (or not) of signals in the conversation. Your selection of variables is wide open... you could look at any number of things in the post-test questionnaire, which is great because you can tailor it to your group's interests.
This really sounds like an interesting topic, the only thing I would be concerned with is becoming too technical - if you 'build' your own chatter bot, your results and final product will have to describe any and all algorithms you use to demonstrate that the tests are indeed controlled and do not vary to an extreme per test. Otherwise, your results may be without foundation.
Just to restate what I said in class, I'd really suggest having subjects go into the experiment knowing they're talking to a bot, as though you're putting them through a user test. First, this sets the subject on a false lead, making them think that your project is something other than it is. Second, this will make them far more sympathetic to your bot (and to you), making their believability ratings higher and more diverse.