Schedule


8:30-8:40 Opening remarks Workshop Chairs
8:40-9:00 Poster Spotlight (1 min madness)
9:00-9:45 Invited Talk: Collaboration in Situated Language Communication Joyce Chai
9:45-10:30 Invited Talk: Talking to the BORG: Task-based Natural Language Dialogues with Multiple Mind-Sharing Robots Matthias Scheutz
10:30-11:00 Coffee Break
11:00-11:45 Invited Talk: The Blocks World Redux Martha Palmer
11:45-12:30 Invited Talk: Learning Models of Language, Action and Perception for Human-Robot Collaboration Stefanie Tellex
12:30-2:00 Lunch
2:00-2:45 Invited Talk: Habitat: A Platform for Embodied AI Research Dhruv Batra
2:45-3:30 Invited Talk: Learning Grounded Language Through Non-expert Interaction Cynthia Matuszek
3:30-4:00 Coffee Break
4:00-4:45 Invited Talk: A Review of Work on Natural Language Navigation Instructions Raymond Mooney
4:45-5:15 Best Paper Oral Presentations
SpatialNet: A Declarative Resource for Spatial Relations
Morgan Ulinski, Bob Coyne and Julia Hirschberg
Multi-modal Discriminative Model for Vision-and-Language Navigation
Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge and Eugene Ie
5:15-6:00 Poster Session



Overview and Call For Papers

SpLU-RoboNLP 2019 is a combined workshop on spatial language understanding (SpLU) and grounded communication for robotics (RoboNLP). It addresses both the linguistic and theoretical aspects of spatial language and its applications, with a particular focus on robotics. The workshop aims to bring together members of the NLP, robotics, vision, and related communities to initiate discussions across fields dealing with spatial language and other modalities. The desired outcome is the identification of both shared and unique challenges, problems, and future directions across these fields and their application domains.

While language can encode highly complex, relational structures of objects, spatial relations between them, and patterns of motion through space, the community has only scratched the surface of how to encode and reason about spatial semantics. Yet spatial language is crucial to robotics, navigation, natural language understanding, translation, and more. Standardizing tasks is challenging because we lack formal, domain-independent meaning representations, and spatial semantics requires an interplay between language, perception, and (often) interaction.

Following the exciting recent progress in visual language grounding, the embodied, task-oriented aspect of language grounding is an important and timely research direction. To realize the long-term goal of robots that we can converse with in our homes, offices, hospitals, and warehouses, it is essential that we develop new techniques for linking language to action in the real world, in which spatial language understanding plays a central role. Can we give instructions to robotic agents to assist with navigation and manipulation tasks in remote settings? Can we talk to robots about the surrounding visual world, and help them interactively learn the language needed to finish a task? We hope to learn about (and begin to answer) these questions as we delve deeper into spatial language understanding and grounding language for robotics.

The major topics covered in the workshop include:

  1. Spatial Language Meaning Representation (Continuous, Symbolic)
  2. Spatial Language Learning and Reasoning
  3. Multi-modal Spatial Understanding
  4. Instruction Following (Real or Simulated)
  5. Grounded or Embodied Tasks
  6. Datasets and Evaluation Metrics
A longer list of topics, including but not limited to:
  • Spatial meaning representations, continuous representations, ontologies, annotation schemes, linguistic corpora
  • Spatial information extraction from natural language
  • Spatial information extraction in robotics, multi-modal environments, navigational instructions
  • Text mining for spatial information in GIS systems, geographical knowledge graphs
  • Spatial question answering, spatial information for visual question answering
  • Quantitative and qualitative reasoning with spatial information
  • Spatial reasoning based on natural language or multi-modal information (vision and language)
  • Extraction of spatial common sense knowledge
  • Visualization of spatial language in 2-D and 3-D
  • Spatial natural language generation
  • Spatial language grounding, including the following:
    • Aligning and Translating Language to Situated Actions
    • Simulated and Real World Situations
    • Instructions for Navigation
    • Instructions for Articulation
    • Instructions for Manipulation
    • Skill Learning via Interactive Dialogue
    • Language Learning via Grounded Dialogue
    • Language Generation for Embodied Tasks
    • Grounded Knowledge Representations
    • Mapping Language and World
    • Grounded Reinforcement Learning
    • Language-based Game Playing for Grounding
    • Structured and Deep Learning Models for Embodied Language
    • New Datasets for Embodied Language
    • Better Evaluation Metrics for Embodied Language

Camera-ready details

Archival-track camera-ready papers should be prepared using the NAACL style: up to 9 pages excluding references for long papers, or up to 5 pages excluding references for short papers. Please submit via Softconf by April 8.

Non-archival-track camera-ready papers should be uploaded online (e.g., to arXiv), and links to those camera-ready copies sent to the organizing committee.

Submission details

We encourage contributions of technical papers (NAACL style, 8 pages excluding references) and shorter papers presenting position statements, previously unpublished work, or demos (NAACL style, 4 pages maximum). NAACL style files are available on the conference website. Please submit via Softconf.

Non-archival option: NAACL workshops are traditionally archival. To allow dual submission of work to SpLU-RoboNLP and other conferences or journals, we also offer a non-archival track. Space permitting, authors of these submissions will still present their work at the workshop, and the papers will be hosted on the workshop website, but they will not be included in the official proceedings. Please submit through Softconf and indicate that the paper is a cross submission at the bottom of the form.

Best Papers

We will present multiple best paper awards.

ACL Anti-Harassment Policy

SpLU-RoboNLP 2019 adheres to the ACL Anti-Harassment Policy.


Organizers and PC


  • James F. Allen (University of Rochester, IHMC)
  • Jacob Andreas (Semantic Machines/MIT)
  • Jason Baldridge (Google)
  • Mohit Bansal (UNC Chapel Hill)
  • Archna Bhatia, co-chair (IHMC)
  • Yonatan Bisk, co-chair (University of Washington)
  • Asli Celikyilmaz (Microsoft Research)
  • Bonnie J. Dorr (IHMC)
  • Parisa Kordjamshidi, chair (Tulane University, IHMC)
  • Matthew Marge (Army Research Lab)
  • Jesse Thomason, co-chair (University of Washington)

Contact Organizing Committee:

Program Committee

  • Malihe Alikhani (Rutgers University)
  • Yoav Artzi (Cornell University)
  • Jacob Arkin (University of Rochester)
  • John A. Bateman (Universität Bremen)
  • Mehul Bhatt (Örebro University)
  • Jonathan Berant (Tel-Aviv University)
  • Raffaella Bernardi (University of Trento)
  • Steven Bethard (University of Arizona)
  • Johan Bos (University of Groningen)
  • Volkan Cirik (CMU)
  • Guillem Collell (KU Leuven)
  • Joyce Chai (Michigan State University)
  • Angel Chang (Stanford University)
  • Simon Dobnik (CLASP and FLOV, University of Gothenburg, Sweden)
  • Ekaterina Egorova (University of Zurich)
  • Zoe Falomir (Universität Bremen)
  • Daniel Fried (UCSF)
  • Lucian Galescu (IHMC)
  • Felix Gervits (Tufts)
  • Hannaneh Hajishirzi (University of Washington)
  • Casey Kennington (Boise State University)
  • Jayant Krishnamurthy (Semantic Machines)
  • Stephanie Lukin (Army Research Laboratory)
  • Chris Mavrogiannis (Cornell)
  • Dipendra Misra (Cornell University)
  • Marie-Francine Moens (KU Leuven)
  • Ray Mooney (University of Texas)
  • Mari Broman Olsen (Microsoft)
  • Martijn van Otterlo (Tilburg University, The Netherlands)
  • Aishwarya Padmakumar (UT Austin)
  • Natalie Parde (University of Illinois Chicago)
  • Ian Perera (IHMC)
  • James Pustejovsky (Brandeis University)
  • Preeti Ramaraj (University of Michigan)
  • Siva Reddy (Stanford)
  • Kirk Roberts (The University of Texas)
  • Anna Rohrbach (UC Berkeley)
  • Marcus Rohrbach (FAIR)
  • Manolis Savva (Princeton University)
  • Jivko Sinapov (Tufts)
  • Kristin Stock (Massey University of New Zealand)
  • Alane Suhr (Cornell)
  • Clare Voss (ARL)
  • Xin Wang (University of California Santa Barbara)
  • Shiqi Zhang (SUNY Binghamton)
  • Victor Zhong (University of Washington)

Past Workshops

  • SpLU 2018
  • Robo-NLP 2017