
Red Hen Lab - GSoC 2020 Ideas

Applicants: Submit your final proposal NOW directly to Google. See https://summerofcode.withgoogle.com/get-started/ and https://summerofcode.withgoogle.com/student-signup/ . Deadline: 31 March 2020, 18:00 UTC. You can replace your final application as often as you like before the deadline, but get it into the system NOW. Red Hen Lab cannot consider an application that has not been submitted directly to Google's system.

Red Hen Google Summer of Code 2020
redhenlab@gmail.com

See Guidelines for Red Hen Developers and Guidelines for Red Hen Mentors


How to Apply

To write your pre-proposal, please use our Latex Template and follow the instructions. Study that template. Think carefully about its requirements.  

Red Hen Lab will consider any mature pre-proposal related to the study of multimodal communication.  A mature pre-proposal is one that uses the Latex template and completes all of its sections in a thorough and detailed manner.  Red Hen lists a few project ideas below; more are listed in the Barnyard of Potential Possible Projects.  But you are not limited to these lists.  Do not write to ask whether you may propose something not on these lists. The answer is, of course, yes! We look forward to your mature and detailed pre-proposals.

Once you have a mature and detailed idea, you may contact the mentors, express your idea, and get some initial feedback and direction for finishing your pre-proposal. Then use the Latex Template to create your pre-proposal, print it to pdf, and send it to a mentor and to redhenlab@gmail.com

The ability to generate a meaningful pre-proposal is a requirement for joining the team; if you require more hand-holding to get going, Red Hen Lab is probably not the right organization for you this year. Red Hen wants to work with you at a high level, and this requires initiative on your part and the ability to orient yourself in a complex environment. It is important that you read the guidelines for the project ideas and have a general idea of the project before writing your pre-proposal. 

When Red Hen receives your pre-proposal, Red Hen will assess it and attempt to locate a suitable mentor; if Red Hen succeeds, she will get back to you and provide feedback to allow you to develop a fully-fledged proposal to submit to GSoC 2020. Note that your final proposal must be submitted directly to Google, not to redhenlab@gmail.com

Red Hen is excited to be working with skilled students on advanced projects and looks forward to your pre-proposals.

Please study the following before attempting to apply:

Red Hen Lab is an international cooperative of major researchers in multimodal communication, with mentors spread around the globe. Together, the Red Hen cooperative has crafted this Ideas page, which offers some information about the Red Hen dataset of multimodal communication (see some sample data here and here) and a long list of tasks.

To succeed in your collaboration with Red Hen, the first step is to orient yourself carefully in the relevant material. The Red Hen Lab website that you are currently visiting is voluminous.  Please explore it carefully. There are many extensive introductions and tutorials on aspects of Red Hen research. Make sure you have at least an overarching concept of our mission, the nature of our research, our data, and the range of the example tasks Red Hen has provided to guide your imagination. Having contemplated the Red Hen research program on multimodal communication, come up with a task that is suitable for Red Hen and that you might like to embrace or propose. Many illustrative tasks are sketched below. Orient in this landscape, and decide where you want to go.

The second step is to formulate a pre-proposal sketch of 1-3 pages that outlines your project idea. In your pre-proposal, you should spell out in detail what kind of data you need for your input and the broad steps of your process through the summer, including the basic tools you propose to use. Give careful consideration to your input requirements; in some cases, Red Hen will be able to provide annotations for the feature you need, but in other cases successful applicants will craft their own metadata, or work with us to recruit help to generate it. Please use the Latex template to write your pre-proposal, and send us the PDF.

Red Hen emphasizes: although she has programs and processes—see, e.g., her Τέχνη Public Site, Red Hen Lab's Learning Environment—through which she tutors high-school and college students, Red Hen Google Summer of Code does not operate at that level.  Red Hen GSoC seeks mature students who can think about the entire arc of a project: how to get data, how to make datasets, how to create code that produces an advance in the analysis of multimodal communication, how to put that code into production in a Red Hen pipeline.  Red Hen is looking for the 1% of students who can think through the arc of a project that produces something that does not yet exist. Red Hen does not hand-hold through the process, but she can supply elite and superb mentoring that consists of occasional recommendations and guidance to the dedicated and innovative student.

Requirements for Commitment

Google requires students to be dedicated full-time to the project during Google Summer of Code and to state such a commitment. Attending courses or holding other jobs or onerous appointments during the period is a violation of Google policy. Red Hen relies on you to apply only if you can make this full commitment. If your conditions change after you have applied, Red Hen relies on you to withdraw immediately from Google Summer of Code. If you violate policy, you will not be paid. If you violate policy, or if you are selected and then withdraw after selections have been announced, you will have deprived another worthy applicant of being selected. Such eliminated slots cannot be recovered or reassigned.

In all but exceptional cases, recognized as such in advance, your project must be put into production by the end of Google Summer of Code or you will not be passed or paid. Putting your project into production means scripting (typically in bash) an automated process for reading input files from Red Hen's data repository, submitting jobs to the CWRU HPC using the Slurm workload manager, running your code, and finally formatting the output to match Red Hen's Data Format. Consider these requirements as opportunities for developing all-round skills and for being proud of having written code that is not only merged but in regular production!
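As a rough illustration of what "putting into production" involves, here is a minimal Python sketch that generates the text of a Slurm job script for one input file. Everything in it is hypothetical (the container name pipeline.sif, the path /app/tag.py, the resource limits); real Red Hen pipelines typically script this step directly in bash.

```python
# Illustrative only: build the text of a Slurm batch script that runs one
# input file through a (hypothetical) Singularity container.

def make_sbatch(input_file, output_dir, image="pipeline.sif"):
    """Return a Slurm job script as a string; names and limits are examples."""
    return f"""#!/bin/bash
#SBATCH --job-name=redhen-{input_file}
#SBATCH --time=02:00:00
#SBATCH --mem=8G
module load singularity
singularity exec {image} python3 /app/tag.py {input_file} > {output_dir}/{input_file}.out
"""

script = make_sbatch("2019-01-01_CNN.mp4", "/tmp/out")
```

On the cluster, each generated script would be handed to sbatch, and the resulting output files would then be reformatted to match Red Hen's Data Format.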

Requirements for Production

Note that your project must be implemented inside a Singularity container (see instructions). This makes it portable between Red Hen's high-performance computing clusters. Red Hen has no interest in toy, proof-of-concept systems that run on your laptop or in your user account on a server. Red Hen is dedicated exclusively to pipelines and applications that run on servers anywhere and are portable. Please study Guidelines for Red Hen Developers, and master the section on building Singularity containers. You are required to maintain a github account and a blog. 

In almost all cases, you will do your work on CWRU HPC, although of course you might first develop code on your device and then transfer it to CWRU HPC. On CWRU HPC, do not try to sudo; do not try to install software.  Check for installed software on CWRU HPC using the command
module
e.g.,
module spider singularity
module load gcc
module load python
On CWRU HPC, do not install software into your user account; instead, if it is not already installed on CWRU HPC, install it inside a Singularity container so that it is portable.  Red Hen expects that Singularity will be used in 95% of cases.  Why Singularity? Here are 4 answers; note especially #2 and #4:
What is so special about Singularity?
While Singularity is a container solution (like many others), Singularity differs in its primary design goals and architecture:
    1. Reproducible software stacks: These must be easily verifiable via checksum or cryptographic signature in such a manner that does not change formats (e.g. splatting a tarball out to disk). By default Singularity uses a container image file which can be checksummed, signed, and thus easily verified and/or validated.
    2. Mobility of compute: Singularity must be able to transfer (and store) containers in a manner that works with standard data mobility tools (rsync, scp, gridftp, http, NFS, etc..) and maintain software and data controls compliance (e.g. HIPAA, nuclear, export, classified, etc..)
    3. Compatibility with complicated architectures: The runtime must be immediately compatible with existing HPC, scientific, compute farm and even enterprise architectures, any of which may be running legacy kernel versions (including RHEL6 vintage systems) which do not support advanced namespace features (e.g. the user namespace)
    4. Security model: Unlike many other container systems designed to support trusted users running trusted containers, we must support the opposite model of untrusted users running untrusted containers. This changes the security paradigm considerably and increases the breadth of use cases we can support.
A few further tips for rare, outlier cases:
  1. In rare cases, if you feel that some software should be installed by CWRU HPC rather than inside your Singularity container, write to us with an argument and an explanation, and we will consider it.
  2. In rare cases, if you feel that Red Hen should install some software to be shared on gallina but not otherwise available to the CWRU HPC community, explain what you have in mind, and we will consider it.
Remember to study the blogs of other students for tips, and document on your own blogs anything you think would help other students.

Background Information

Red Hen Lab participated in Google Summer of Code in 2015, 2016, 2017, 2018, and 2019, working with brilliant students and expert mentors from all over the world. Each year, Red Hen has mentored students in developing and deploying cutting-edge techniques of multimodal data mining, search, and visualization, with an emphasis on automatic speech recognition, tagging for natural language, co-speech gesture, paralinguistic elements, facial detection and recognition, and a great variety of behavioral forms used in human communication. With significant contributions from Google Summer of Code students from all over the world, Red Hen has constructed tagging pipelines for text, audio, and video elements. These pipelines are undergoing continuous development, improvement, and extension. Red Hens have excellent access to high-performance computing clusters at UCLA, Case Western Reserve University, and FAU Erlangen; for massive jobs Red Hen Lab has an open invitation to apply for time on NSF's XSEDE network.

Red Hen's largest dataset is the NewsScape Library of International Television News, a collection of more than 600,000 television news programs, initiated by UCLA's Department of Communication, developed in collaboration with Red Hens from around the world, and curated by the UCLA Library, with processing pipelines at UCLA, Case Western Reserve University, and FAU Erlangen in Germany.  Red Hen develops and tests tools on this dataset that can be used on a great variety of data—texts, photographs, audio and audiovisual recordings. Red Hen also acquires big data of many kinds in addition to television news, such as photographs of Medieval art, and is open to the acquisition of data needed for particular projects. Red Hen creates tools that are useful for generating a semantic understanding of big data collections of multimodal data, opening them up for scientific study, search, and visualization. See Overview of Research for a description of Red Hen datasets.

In 2015, Red Hen's principal focus was audio analysis; see the Google Summer of Code 2015 Ideas page. Red Hen students created a modular series of audio signal processing tools, including forced alignment, speaker diarization, gender detection, and speaker recognition (see the 2015 reports, the extended 2015 collaborations, and the github repository). This audio pipeline is currently running on Case Western Reserve University's high-performance computing cluster, which gives Red Hen the computational power to process the hundreds of thousands of recordings in the Red Hen dataset. With the help of GSoC students and a host of other participants, the organization continues to enhance and extend the functionality of this pipeline. Red Hen is always open to new proposals for high-level audio analysis.

In 2016, Red Hen's principal focus was deep learning techniques in computer vision; see the Google Summer of Code 2016 Ideas page and Red Hen Lab page on the Google Summer of Code 2016 site. Talented Red Hen students, assisted by Red Hen mentors, developed an integrated workflow for locating, characterizing, and identifying elements of co-speech gestures, including facial expressions, in Red Hen's massive datasets, this time examining not only television news but also ancient statues; see the Red Hen Reports from Google Summer of Code 2016 and code repository. This computer vision pipeline is also deployed on CWRU's HPC in Cleveland, Ohio, and was demonstrated at Red Hen's 2017 International Conference on Multimodal Communication. Red Hen is planning a number of future conferences and training institutes. Red Hen GSoC students from previous years typically continue to work with Red Hen to improve the speed, accuracy, and scope of these modules, including recent advances in pose estimation.

In 2017, Red Hen invited proposals from students for components for a unified multimodal processing pipeline, whose purpose is to extract information about human communicative behavior from text, audio, and video. Students developed audio signal analysis tools, extended the Deep Speech project with Audio-Visual Speech Recognition, engineered a large-scale speaker recognition system, made progress on laughter detection, and developed Multimodal Emotion Detection in videos. Focusing on text input, students developed techniques for show segmentation, neural network models for studying news framing, and controversy and sentiment detection and analysis tools (see Google Summer of Code 2017 Reports). Rapid development in convolutional and recurrent neural networks is opening up the field of multimodal analysis to a slew of new communicative phenomena, and Red Hen is in the vanguard.

In 2018, Red Hen GSoC students created Chinese and Arabic ASR (speech-to-text) pipelines, a fabulous rapid annotator, a multi-language translation system, and multiple computer vision projects. The Chinese pipeline was implemented as a Singularity container on the Case HPC, built with a recipe on Singularity Hub, and put into production ingesting daily news recordings from our new Center for Cognitive Science at Hunan Normal University in Hunan Province in China, directed by Red Hen Lab Co-Director Mark Turner. It represents the model Red Hen expects projects in 2020 to follow.

In 2019, Red Hen Lab GSoC students made significant contributions to speech-to-text and OCR for Arabic, Bengali, Chinese, German, Hindi, Russian, and Urdu. We built a new global recording monitoring system, developed a show-splitting system for ingesting digitized news shows, and made significant improvements to the Rapid Annotator. For an overview with links to the code repositories, see Red Hen Lab's GSoC 2019 Projects.

This year, the organization is adopting a tighter focus on a small number of tasks; see details below. 

In large part thanks to Google Summer of Code, Red Hen Lab has been able to create a global open-source community devoted to computational approaches to parsing, understanding, and modeling human multimodal communication. With continued support from Google, Red Hen will continue to bring top students from around the world into the open-source community.



What kind of Red Hen are you?

More About Red Hen

 

About us and the project


Our mentors

 
Shruti Gullapuram, UMass Amherst (https://www.linkedin.com/in/shruti-gullapuram/)
Vaibhav Gupta, IIIT Hyderabad
Inés Olza, University of Navarra (https://sites.google.com/site/inesolza/home)
Weixin Lee, Beihang University (http://www.cs.ucla.edu/~lwx/)
http://www.jsjoo.com
Mark Turner, CWRU (http://markturner.org)
Peter Broadwell, Stanford (http://www.linkedin.com/in/PeterMBroadwell)
Soumya Ray, CWRU (http://engineering.case.edu/profiles/sxr358)
Jakob Suchan, University of Bremen
Anna Wilson (Pleshakova), Oxford
University of Murcia (http://www.um.es/lincoing/jv/index.htm)
Cristóbal Pagán Cánovas, University of Murcia (https://sites.google.com/site/cristobalpagancanovas/)
Heiko Schuldt, University of Basel
Abhinav Shukla, Imperial College London
Zhaoqing Xu, Beihang University
University of Basel
Peter Uhrig, FAU Erlangen (https://www.anglistik.phil.fau.de/staff/uhrig/)
Grace Kim, UCLA
Federal University of Juiz de Fora
José Fonseca, Polytechnic Higher Education Institute of Guarda
Ahmed Ismail, Cairo University & DataPlus
Francis Steen (https://www.linkedin.com/in/ffsteen)
Jan Gorisch, Leibniz-Institut für Deutsche Sprache
NSIT, Delhi University
Robert Ochshorn, Reduct Video
Leonardo Impett, EPFL & Bibliotheca Hertziana

The profiles of mentors not included in the portrait gallery are linked to their name below.

Guidelines for project ideas

To write your pre-proposal, please use our Latex Template and follow the instructions. 

Red Hen Lab will consider any mature pre-proposal related to the study of multimodal communication.  A mature pre-proposal is one that uses the Latex template and completes all of its sections in a thorough and detailed manner.  We list a few project ideas below; more are listed in the Barnyard of Potential Possible Projects.  But you are not limited to these lists.  Do not write to ask whether you may propose something not on this list: the answer is, of course, yes!
Your project should be in the general area of multimodal communication, whether it involves tagging, parsing, analyzing, searching, or visualizing. Red Hen is particularly interested in proposals that make a contribution to integrative cross-modal feature detection tasks. These are tasks that exploit two or even three different modalities, such as text and audio or audio and video, to achieve higher-level semantic interpretations or greater accuracy. You could work on one or more of these modalities. Red Hen invites you to develop your own proposals in this broad and exciting field.

Red Hen studies all aspects of human multimodal communication, such as the relation between verbal constructions and facial expressions, gestures, and auditory expressions. Examples of concrete proposals are listed below, but Red Hen wants to hear your ideas! What do you want to do? What is possible? You might focus on a very specific type of gesture, or facial expression, or sound pattern, or linguistic construction; you might train a classifier using machine learning, and use that classifier to identify the population of this feature in a large dataset. Red Hen aims to annotate her entire dataset, so your application should include methods of locating as well as characterizing the feature or behavior you are targeting. Contact Red Hen for access to existing lists of features and sample clips. Red Hen will work with you to generate the training set you need, but note that your project proposal might need to include time for developing the training set.

Red Hen develops a multi-level set of tools as part of an integrated research workflow, and invites proposals at all levels. Red Hen is excited to be working with the Media Ecology Project to extend the Semantic Annotation Tool, making it more precise in tracking moving objects. The "Red Hen Rapid Annotator" is also ready for improvements. Red Hen is open to proposals that focus on a particular communicative behavior, examining a range of communicative strategies utilized within that particular topic. See for instance the ideas "Tools for Transformation" and "Multimodal rhetoric of climate change". Several new deep learning projects are on the menu, from "Hindi ASR" to "Gesture Detection and Recognition". On the search engine front, Red Hen also has several candidates: the "Development of a Query Interface for Parsed Data" to "Multimodal CQPweb". Red Hen welcomes visualization proposals; see for instance the "Semantic Art from Big Data" idea below.

Red Hen is now capturing television in China, Egypt, and India, and is happy to provide shared datasets and joint mentoring with our partners CCExtractor, who provide the vital tools for text extraction in several television standards and for on-screen text detection and extraction.
When you plan your proposal, bear in mind that your project should result in a production pipeline. For Red Hen, that means it finds its place within the integrated research workflow. The application will typically be required to be located within a Singularity module that is installed on Red Hen's high-performance computing clusters, fully tested, with clear instructions, and fully deployed to process a massive dataset. The architecture of your project should be designed so that it is clear and understandable for coders who come after you, and fully documented, so that you and others can continue to make incremental improvements. Your module should be accompanied by a Python application programming interface (API) that specifies the input and output, to facilitate the development of a unified multimodal processing pipeline for extracting information from text, audio, and video. Red Hen prefers projects that use C/C++ and Python and run on Linux. For some of the ideas listed, but by no means all, it is useful to have prior experience with deep learning tools.
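As a sketch of the kind of small Python API meant here, a module might expose one entry point with a declared input/output contract. All field names below are invented for illustration; they are not Red Hen's actual Data Format.

```python
# Hypothetical module API: one entry point, explicit input and output types.
from typing import List, Dict


def analyze(video_path: str, start: float, end: float) -> List[Dict]:
    """Analyze one time span of one video and return a list of annotations.

    Each annotation carries a time span and a tag, loosely mirroring the
    line-oriented layout of Red Hen's metadata files (illustrative only).
    """
    # A real module would run its model here; this stub returns a fixed tag.
    return [{"start": start, "end": end, "tag": "EXAMPLE", "source": video_path}]


annotations = analyze("2019-01-01_CNN.mp4", 12.0, 14.5)
```

A pipeline orchestrator can then call every module through the same signature, which is what makes the unified multimodal pipeline feasible.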

Your project should be scaled to the appropriate level of ambition, so that at the end of the summer you have a working product. Be realistic and honest with yourself about what you think you will be able to accomplish in the course of the summer. Provide a detailed list of the steps you believe are needed, the tools you propose to use, and a weekly schedule of milestones. Choose a task you care about, in an area where you want to grow. The most important thing is that you are passionate about what you are going to work on with us. Red Hen looks forward to welcoming you to the team!

Ideas for Projects

To write your pre-proposal, please use our Latex Template and follow the instructions. 
Red Hen strongly emphasizes that a student should not browse the following ideas without first having read the text above them on this page. Red Hen remains interested in proposals for any of the activities listed throughout this website (http://redhenlab.org). 
Red Hen is uninterested in a preproposal that merely picks out one of the following ideas and expresses an interest.  Red Hen looks instead for an intellectual engagement with the project of developing open-source code that will be put into production in our working pipelines to further the data science of multimodal communication.  What is your full idea? Why is it worthy? Why are you interested in it? What is the arc of its execution? What data will you acquire, and where? How will you succeed? 
Please read the instructions on how to apply carefully before applying for any project. Failure to follow the application guidelines will result in your (pre-)proposal not being considered for GSoC 2020.

1. Gesture detection and recognition in news videos

Mentored by Mahnaz Parian <mahnaz.amiriparian@unibas.ch> and Heiko Schuldt's team

Red Hen invites proposals to build a gesture detection and recognition pipeline. For gesture detection, a good starting point is OpenPose, and a useful extension is hand keypoint detection. Our dataset is around 600,000 hours of television news recordings in multiple languages, so the challenge is to obtain good recall rates with this particular content.

For the GSoC gesture project, Red Hen has the following goals:

  • Build a system inside a Singularity container for deployment on high-performance computing clusters (see instructions)
  • Reliably detect the presence or absence of hand gestures 
  • Recognize and label a subset of the detected hand gestures  
  • Process and annotate Red Hen's news video dataset
A good command of Python and deep learning libraries (TensorFlow/Caffe/Keras) is necessary. Please see here for more information regarding proposals.
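To make the "presence or absence" goal concrete, here is a toy sketch that assumes keypoints have already been extracted by a pose estimator such as OpenPose, which reports a confidence score per keypoint. The data layout and thresholds are invented for illustration.

```python
# Toy presence/absence decision over (x, y, confidence) hand keypoints.
# OpenPose-style hand output has 21 keypoints per hand; thresholds are made up.

def hands_present(hand_keypoints, min_conf=0.3, min_points=8):
    """hand_keypoints: list of (x, y, confidence) triples for one hand.
    Report the hand as present if enough keypoints are confidently detected."""
    confident = [kp for kp in hand_keypoints if kp[2] >= min_conf]
    return len(confident) >= min_points


frame = [(100 + i, 200 + i, 0.9) for i in range(21)]   # strong detection
empty = [(0.0, 0.0, 0.05) for _ in range(21)]          # background noise
```

A production detector would of course learn this decision rather than hard-code it, but the interface (keypoints in, presence flag out) is the piece that the pipeline needs.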

2. Red Hen Rapid Annotator

Mentored by Peter Uhrig and Vaibhav Gupta

This task is aimed at extending the Red Hen Rapid Annotator, which was re-implemented from scratch as a Python/Flask application during GSoC 2018 and improved in GSoC 2019. There are still open bugs and feature requests, and beyond those Red Hen would like to integrate it further with other pieces of software, such as CQPweb and (possibly) Google Docs.
Please familiarize yourself with the project and play around with it.
A good command of Python and HTML5/Javascript are necessary for this project.

3. System Integration of Existing Tools Into a New Multimodal Pipeline


Red Hen is integrating multiple separate processing pipelines into a single new multimodal pipeline. Orchestrating the processing of hundreds of thousands of videos on a high-performance computing cluster along multiple dimensions is a challenging design task. The winning design for this task will be flexible, but at the same time make efficient use of CPU cycles and file accesses, so that it can scale. Pipelines to be integrated include: 
  1. Shot detection
  2. Commercial detection
  3. Speaker recognition
  4. Frame annotation (for English)
  5. Text and Story segmentation
  6. Sentiment Analysis
  7. Emotion detection
  8. Gesture detection
This infrastructure task requires familiarity with Linux, bash scripting, and a range of programming languages such as Java, Python, and Perl, used in the different modules. 
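One minimal way to picture the orchestration problem is a chain of stage functions that each enrich a per-video metadata record. The stage names below follow the list above; their bodies are stubs, and a real design would dispatch each stage as a cluster job rather than call it in-process.

```python
# Sketch: each pipeline stage takes and enriches a metadata dict for one video.

def shot_detection(meta):       meta["shots"] = ["0:00-0:45"]; return meta
def commercial_detection(meta): meta["commercials"] = [];      return meta
def speaker_recognition(meta):  meta["speakers"] = ["anchor"]; return meta

PIPELINE = [shot_detection, commercial_detection, speaker_recognition]

def process(video_id):
    meta = {"id": video_id}
    for stage in PIPELINE:
        meta = stage(meta)   # a real system would batch these on Slurm
    return meta

result = process("2019-01-01_CNN")
```

The design question the task poses is exactly what this sketch glosses over: how to schedule the stages across hundreds of thousands of videos without wasting CPU cycles or file accesses.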

4. Integration of Gesture Retrieval into vitrivr

Mentored by Mahnaz Parian <mahnaz.amiriparian@unibas.ch> and Luca Rossetto.

Gestures are a common component of daily communication, where they can carry some of the weight of spoken language. Query by gesture can be used in different contexts to search for gestures that accompanied the spoken words. This project is done in collaboration with vitrivr, a multimodal retrieval system, on the basis of the NewsScape video collections and their semantic annotations.
For this project, we have the following goals:
  • Integrate the gesture feature extraction into Cineast 
  • Incorporate the existing annotations of NewsScape to enhance the feature extraction
  • Adjust the vitrivr UI to accommodate necessary filters and query modes 
  • Test the setup on the NewsScape dataset.
A good command of Python and deep learning libraries (TensorFlow/Caffe/Keras) plus a very good knowledge of Java and TypeScript is necessary. 

5. CQPweb plugins (and plugin structure)

Red Hen uses open-source software called CQPweb to facilitate linguistic research. However, CQPweb is not yet fully equipped to handle audio and video data, so it needs modifications for our purposes. Your task is to create plugins for HTML5 video playback, audio analysis using the EMU-webApp, and better query options (e.g. the ability to search by sounds using IPA symbols). Where CQPweb's plugin structure cannot cater for our needs, you will submit merge requests to the CQPweb codebase. Proficiency in PHP, JavaScript, and HTML is required.
Mentors: Peter Uhrig, Javier Valenzuela, and others

6. CQP and CQPweb date ranges

This project requires an understanding of C/C++ as well as PHP and MySQL. Your task is to implement a date range query feature (e.g. "Give me all the instances of x uttered between 5 September 2016 and 3 March 2018") in both CQP (the backend of CWB) and the CQPweb frontend. Please familiarize yourself with the codebase of CWB before applying for this project!
Mentors: Peter Uhrig, Javier Valenzuela, and others
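The date-range logic itself is simple and can be prototyped outside CQP; the real work of this task is building it into CQP's C code and the CQPweb PHP frontend. A hypothetical Python sketch, assuming filenames that begin with an ISO date in the style of Red Hen's archive:

```python
# Illustrative date-range filter over archive-style filenames.
from datetime import date

def in_range(filename, start, end):
    """Keep files whose leading YYYY-MM-DD stamp falls inside [start, end]."""
    stamp = date.fromisoformat(filename[:10])
    return start <= stamp <= end

files = ["2016-09-05_0100_US_CNN_Newsroom.txt",
         "2018-03-03_2300_US_MSNBC_News.txt",
         "2019-01-01_0000_US_FOX_News.txt"]
hits = [f for f in files if in_range(f, date(2016, 9, 5), date(2018, 3, 3))]
```

Inside CQP the range would instead be evaluated against a date attribute of the corpus, but the inclusive-bounds semantics shown here ("between 5 September 2016 and 3 March 2018") is the behavior users expect.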

7. Image and audio clustering

You will design and evaluate a system that clusters images (screenshots from media broadcasts) and/or audio snippets and then re-orders them accordingly in the Red Hen Rapid Annotator (see task 2). This includes figuring out the best features, algorithms, and thresholds to use for the clustering. While we will start with traditional clustering methods, the task can evolve in the machine learning direction if time permits.
Mentors: Peter Uhrig, Francis Steen, and others
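As a toy sketch of the clustering step, here is a tiny hand-rolled k-means over made-up 2-D feature vectors. Real features might be color histograms or audio embeddings, and the method finally chosen need not be k-means at all; this only illustrates the shape of the task.

```python
# Toy k-means: group feature vectors so similar screenshots land together.
import random

def kmeans(points, k=2, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Two obvious groups: "dark" vs "bright" screenshots.
points = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.95, 0.85)]
clusters = kmeans(points)
```

The Rapid Annotator would then present one cluster at a time, so an annotator sees runs of similar items instead of a random shuffle.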

8. Development of a Query Interface for Parsed Data

Mentored by Peter Uhrig's team

This infrastructure task is to create a new and improved version of a graphical user interface for graph-based search on dependency-annotated data. The new version should have all functionality provided by the prototype plus a set of new features. The back-end is already in place. 
Develop current functionality:
  • add nodes to the query graph 
  • offer choice of dependency relation, PoS/word class based on the configuration in the database (the database is already there) 
  • allow for use of a hierarchy of dependencies (if supported by the grammatical model) 
  • allow for word/lemma search 
  • allow one node to be a "collo-item" (i.e. collocate or collexeme in a collostructional analysis) 
  • color nodes based on a finite list of colors 
  • paginate results 
  • export xls of collo-items 
  • create a JSON object that represents the query to pass it on to the back-end 
Develop new functionality:
  • allow for removal of nodes 
  • allow for query graphs that are not trees 
  • allow for specification of the order of the elements 
  • pagination of search results should be possible even if several browser windows or tabs are open. 
  • configurable export to csv for use with R 
  • compatibility with all major Web Browsers (IE, Firefox, Chrome, Safari) [currently, IE is not supported] 
  • parse of example sentence can be used as the basis of a query ("query by example") 
Steps:
  1. Visit http://www.treebank.info and play around with the interface (user: gsoc2018, password: redhen) [taz is a German corpus, the other two are English]
  2. In consultation with Red Hen, decide on a suitable JavaScript Framework, such as reactJS paired with jQuery
  3. Think about the HTML representation. Red Hen probably prefers HTML5/CSS3, but it is unclear whether its requirements can be met without major work on <canvas>, or whether sensible widgets are possible without digging into the <canvas> tag.
Contact Peter Uhrig <peter.uhrig@fau.de> to discuss details or to ask for clarification on any point.
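The JSON query object mentioned in the feature list might look roughly like the following. Every field name here is invented for illustration; the actual schema is defined by the existing treebank.info back-end.

```python
# Hypothetical query-graph object: nodes, typed edges, and paging info,
# serialized to JSON for the front-end/back-end handoff.
import json

query = {
    "nodes": [
        {"id": 1, "word": None, "pos": "VERB", "collo": False},
        {"id": 2, "word": None, "pos": "NOUN", "collo": True},   # the collo-item
    ],
    "edges": [
        {"head": 1, "dep": 2, "relation": "dobj"},
    ],
    "corpus": "taz",
    "page": 1,
}

payload = json.dumps(query)     # what the front-end would POST
restored = json.loads(payload)  # what the back-end would parse
```

Allowing non-tree query graphs (one of the new features) then simply means permitting edge sets that are not constrained to a single parent per node.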

9. AI Recognizers of Frame Blends, especially in conversations about the future

Mentored by Mark Turner's team. 
Contact turner@case.edu to discuss details or ask for clarification.
Human beings are remarkable for their ability to imagine futures that are significantly different from anything they can remember or anything they have learned, or futures whose difference from our present isn't just a change in the value of a parameter on a linear scale (e.g., if you have wealth of x this year, you can imagine having wealth of 1.1x next year; you have no experience of 1.1x, but you know x and you know the scale, so the difference is just a scalar adjustment). These imagined futures can have to do with new journeys, new experiences, global finance, international relations, cultural evolution, new discoveries, health threats never before experienced, trade arrangements, new medical techniques and tools, new computational systems (e.g. quantum computing), and on and on.  The cognitive science of how people communicate when they are collaborating on imagining a novel future is very complicated.  At present, human beings have to locate data about such communication manually.  What AI could be developed to tag conversations for the kinds of imagining that people do when they are communicating about the future? Such tagging could drastically reduce the amount of work a human being needs to do to locate data of various sorts.

We are imagining an interactive system, in which human beings tag some data, an AI system trains on that data to try to imitate the human tagger, the human team corrects false positives and false negatives, and then the AI system retrains. But there are many different ways to go about developing such an AI. What ideas do you have for such an AI? What tagging system could you establish?
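The interactive loop described above can be made concrete with a trivial stand-in "model" (keyword matching); a real system would replace train and predict with an actual learner, but the tag-train-correct-retrain cycle has this shape.

```python
# Schematic human-in-the-loop cycle; the "model" is a toy keyword set.

def train(examples):
    """examples: list of (sentence, label). Learn words seen in positives."""
    cue_words = set()
    for sentence, label in examples:
        if label == "future-blend":
            cue_words.update(sentence.lower().split())
    return cue_words

def predict(model, sentence):
    return "future-blend" if model & set(sentence.lower().split()) else "other"

labeled = [("the new silk road will reshape trade", "future-blend"),
           ("the markets closed lower today", "other")]
model = train(labeled)

# A human reviews predictions, adds a correction, and the model is retrained:
correction = ("rome answered the plague with new institutions", "future-blend")
labeled.append(correction)
model = train(labeled)
```

Each pass through the loop should shrink the share of false positives and false negatives the human team has to fix by hand.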

We would begin with looking at situations where human beings imagine a future by blending the present with something quite incompatible—from history, or perhaps from a story or a novel or a film.  For example, in the current moment, the world is talking about trade, and China has proposed the Belt and Road Initiative (BRI) which imagines our present world but blended with some ideas of the ancient Silk Road, to produce a new idea of a future for the world of international trade, one that bears almost no literal resemblance to the original era of the Silk Road.  Another example might blend our present condition in facing the COVID-19 virus with the ancient story of what happened to Rome, and how Rome responded, in dealing with the plague.

One thought might be to run multiple semantic annotation tools, such as PathLSTM, Semafor, and USAS, and try to identify instances where their analyses disagree because one sees frame A and the second sees frame B.
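That idea can be sketched as follows, with two fake annotator outputs standing in for tools like PathLSTM or Semafor; the spans and frame labels are invented for illustration.

```python
# Flag text spans where two semantic annotators assign different frames.

def find_mismatches(annotations_a, annotations_b):
    """Each input maps a text span to a frame label; return the spans
    where the two taggers chose different frames."""
    return {span: (annotations_a[span], annotations_b[span])
            for span in annotations_a
            if span in annotations_b and annotations_a[span] != annotations_b[span]}

a = {"belt and road": "Travel_path", "trade": "Commerce"}
b = {"belt and road": "Clothing",    "trade": "Commerce"}
conflicts = find_mismatches(a, b)
```

The mismatching spans are exactly the candidates worth showing a human annotator, since a frame blend may be what confused the two taggers.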

This is an imaginative and highly challenging project, which would need to include some thinking about how we imagine our futures and design of a preliminary tagging system that could be scaled up later.  Red Hen already has a frame tagging system for English that exploits FrameNet; for details, see Tagging for Conceptual Frames. Red Hen Lab works closely with Framenet Brasil, another Google Summer of Code 2020 organization, and is eager to involve other languages in her tagging of conceptual frames. The present project would need to create at least an elementary, preliminary tagging system and actually install it in a Red Hen pipeline at Case Western Reserve University. Red Hen is looking not for a toy system or a proof of concept, but rather a tagging system that would go into production on well over 5 billion words of data and almost 500,000 hours of audiovisual recordings.  Study http://redhenlab.org to familiarize yourself with the Red Hen holdings and existing tools before submitting a pre-proposal.