This winter and spring, we are identifying interesting gestures and annotating them for machine learning projects, as part of Prof. Steen's research into multimodal communication.
February 19, 2016
Just to give you all a better idea of how the Elan Workflow will proceed, so you all will be able to see the bigger picture in regard to what this project is attempting to accomplish, here is a list of the 10 integral steps (most of which you will engage with at one point or another) that go into this media project:
1) Finding Co-Speech Gesture Candidates
Co-Speech simply means any body movements (head, arms, hands, etc.) and facial expressions (including eye movements) that accompany speech. In this project, we are specifically targeting the gestures people use while speaking, so even though body language does have great significance in and of itself, the gestures you all find have to be connected to a certain fragment of speech.
2) Generate a List of Words That Accompany the Gesture
In this step, the most important thing to keep in mind is to find patterns in these videos where you see a particular gesture repeated with a specific phrase. These patterns allow us to give certain co-speech gestures a more generalized meaning.
This “list of words” can refer to one of the following:
Phrase: An example of a phrase would be, “I wish I knew,” which is the phrase we coded for in the previous task. This same phrase can also be looked up in different tenses such as the past, present, or future. Under the past tense, this phrase would change to, “I wish I had known,” or simply, “I wish I’d known.”
Colloquial Expression: These are informal words, phrases, or slang we use in everyday speech that don't have to be grammatically correct. They can vary from region to region (like the Northern California "Hella" or the Southern "Y'all" or "How do you do?"). They can also be words like "Gonna," "Ain't," or "Wanna."
Specific Grammatical Construction: The same phrase can be said in different grammatical constructions; for example, "Mary said to go home" can become "Home is where Mary said to go."
Popular Exclamations: Examples of these would be—“Oh my gosh!” “What!” “Hey!”
3) Locate Instances in Edge Search Engine:
This is when you will look through the Edge Search Engine by typing in the phrase you have chosen, and then go through each video to see whether the gesture you are coding for appears with the selected phrase. You all also performed this step while looking for "I wish I knew" instances that contained the Positive Ignorance gesture.
4) Create hyperlinks for these instances:
Another step you have all performed in previously assigned tasks. It involves going to the "text" category next to the video, copying the name of the video, and adding a time stamp that indicates the beginning of the gesture.
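Assembling a timestamped link of this kind is easy to do programmatically. Here is a minimal sketch; the video name and the comma-separated link format are illustrative assumptions, since the exact link convention is defined by the project:

```python
def make_link(video_name, hours, minutes, seconds):
    """Build a hypothetical video reference with an hh:mm:ss start timestamp."""
    timestamp = f"{hours:02d}:{minutes:02d}:{seconds:02d}"
    return f"{video_name},{timestamp}"

# Example: a gesture that begins 12 minutes 37 seconds into a news recording.
print(make_link("2016-02-01_CNN_Newsroom", 0, 12, 37))
```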
5) Generate a “Batch Job” list of these links:
This is a step none of you have performed as of yet. It involves gathering all the hyperlinks you have submitted and running them through a computer terminal, where a sequence of commands (such as removing a symbol from the links or replacing a period with a different punctuation mark) can be applied to all the hyperlinks at once. This creates uniformity among the links so that they can be easily read by a computer program.
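The batch cleanup described above amounts to applying the same string edits to every link. This sketch shows the idea; the specific substitution rules here are made-up examples, not the actual commands used in the project:

```python
def normalize_link(link):
    """Apply the same cleanup rules to every submitted link (rules are illustrative)."""
    link = link.strip()            # drop stray leading/trailing whitespace
    link = link.replace(" ", "")   # remove accidental spaces inside the link
    link = link.replace(";", ",")  # standardize the separator before the timestamp
    return link

raw_links = [
    "2016-02-01_CNN_Newsroom ;00:12:37",
    "2016-02-03_MSNBC_News,00:04:10",
]
clean_links = [normalize_link(link) for link in raw_links]
```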
6) Run the “Batch Job” to create the clips:
Once the commands have been performed, you will see a uniform set of links that can now be run through a computer program to create clips.
7) Download the Clips
Professor Steen has recently performed this step with the links you all have sent, using a script on the terminal that adds 2 minutes before the start of the gesture and 3 minutes after it to create five-minute video clips. As of now, he has gathered 115 clips of five minutes each.
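The windowing described above (2 minutes before the gesture, 3 minutes after) is simple timestamp arithmetic. The following is a sketch of the idea, not the actual script Professor Steen used:

```python
def clip_window(start_str, before_s=120, after_s=180):
    """Given an hh:mm:ss gesture timestamp, return (start, end) of a 5-minute clip."""
    h, m, s = (int(x) for x in start_str.split(":"))
    t = h * 3600 + m * 60 + s
    begin = max(0, t - before_s)  # clamp so the clip never starts before the video does
    end = t + after_s

    def fmt(v):
        return f"{v // 3600:02d}:{v % 3600 // 60:02d}:{v % 60:02d}"

    return fmt(begin), fmt(end)

# A gesture at 00:12:37 yields a clip running from 00:10:37 to 00:15:37.
print(clip_window("00:12:37"))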
8) Annotate the Clips in Elan (w/Preset Template)
This is a step you should all be familiar with, since it was the first task that was assigned. It involves opening the video clips in the ELAN program and annotating the gestures by setting parameters for when each gesture starts and finishes. Although we now have the video clips created from the hyperlinks you all sent in, we do not yet have the preset template that lets you click on the grid category and select a specific speaker to annotate for. As soon as the preset template is ready, you will be able to annotate the links you sent.
9) Export Annotations with Python Converter (Project Database) (not ready yet)
This is a step none of you will be performing, but it is a huge part of this media project. A student is currently creating the Python converter, which will let us input the data you have collected so the machine can analyze it.
10) Use machine learning to find more instances (not part of the job)
Again, this is not a step any of you will be performing, but it is where all your hard work comes together. The computer program will automatically find more instances of the gestures you have annotated, detecting patterns quickly and letting us see how prevalent each co-speech gesture is in the news database, so we can draw conclusions about the significance of these repeated occurrences.
Hopefully, you all now have a better understanding of the media project and the connection between the different steps we have assigned.
February 19, 2016
Find 50 interesting gestures on the Edge Search Engine, as you did in the previous task, and send them to me. This will engage you in Steps 1-4 of the workflow process above. Please explain and comment on each gesture, and write down the expression that typically goes with it. Create your hyperlinks in the exact same manner as before, adding timestamps (hh:mm:ss). Group your links by category, such as Gesture Name, Phrase, or Type of Movement (Head, Eyes, Expression), so the links stay organized.
Take your time and make insightful contributions.
We estimate it will take you about 10 work hours to complete this task. Hopefully, you can complete the assignment by Wednesday or Thursday of this upcoming week.
Deadline: March 3, 2016 by 4:00pm
Approximate Time Needed: 10 hours
Lilia Oseguera, Research Assistant
March 2, 2016
For Assignment #5, you will be asked to annotate clips from the "I wish I knew" gestures that you sent in for Task #3.
I will be assigning you all a partner so that you can confer and help each other while creating your own annotations. Each of you will be annotating five clips. You and your partner will be annotating the same clips, and may use each other as a reference point in order to submit more objectively analyzed annotations.
This is individual work, but again, you can use your partner to confer on points that might seem ambiguous or simply when you need help on the task.
Once you have annotated your clips, send them to me and I will be able to provide you all with further instruction.
A list of step-by-step instructions is available here: