Australasian Language Technology Workshop 2005

Tutorials: Friday 9 December

A tutorial day will be held on Friday 9 December. The following presenters will each give a 90-minute tutorial:

Dialogue, Dialogue Models, and Dialogue Management Systems

Lawrence Cavedon, NICTA Victoria Laboratory, University of Melbourne

Dialogue is a complex process, involving many aspects of language-based communication (both verbal and non-verbal), interactivity, and pragmatic reasoning. The study of dialogue and dialogue systems cuts across the fields of linguistics, language technology, artificial intelligence, psychology, and sociology (and more!). This tutorial discusses the process of dialogue interaction between two or more participants, and introduces salient phenomena, issues, and challenges that arise in modelling dialogue and building powerful dialogue systems. We will begin by covering the types of dialogue models typically used by toolkits for building commercial dialogue applications, and discuss strategies used to make such systems perform more robustly. We then move on to more semantically rich models of dialogue, and to architectures and systems designed for rich dialogue-based interaction in complex settings. Time permitting, we may also address issues such as multimodal interaction and the use of ontologies in dialogue system engineering. The outcomes of this tutorial will be an appreciation of some of the exciting issues in designing complex dialogue systems, and an overview of current research areas in the field.
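The commercial-toolkit dialogue models the abstract mentions are typically frame-based (slot-filling): the system prompts for unfilled slots in a task frame until it can act. The sketch below is a minimal, hypothetical illustration of that idea; the slot names, prompts and API are invented for this example, not taken from any particular toolkit.

```python
# A minimal frame-based (slot-filling) dialogue manager sketch.
# Slot names and prompts are hypothetical, for illustration only.
class SlotFillingDialogue:
    def __init__(self, slots):
        self.frame = {slot: None for slot in slots}
        self.prompts = {slot: f"What {slot} would you like?" for slot in slots}

    def next_prompt(self):
        """Ask about the first unfilled slot, or confirm once the frame is complete."""
        for slot, value in self.frame.items():
            if value is None:
                return self.prompts[slot]
        return f"Confirming: {self.frame}"

    def fill(self, slot, value):
        # A real system would obtain (slot, value) pairs from a language-
        # understanding component; here we fill slots directly.
        self.frame[slot] = value

dm = SlotFillingDialogue(["destination", "date"])
print(dm.next_prompt())           # asks for the destination
dm.fill("destination", "Sydney")
print(dm.next_prompt())           # asks for the date
dm.fill("date", "9 December")
print(dm.next_prompt())           # confirms the completed frame
```

Much of the robustness engineering the tutorial discusses amounts to handling what this sketch ignores: misrecognised values, over-answering (filling several slots in one utterance), and user corrections.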


Lawrence Cavedon is a Senior Researcher at National ICT Australia, in the Victoria Research Laboratory, and also holds a lecturing position in the School of Computer Science and IT at RMIT University. Until July 2005 he was a senior research engineer at Stanford University's Center for the Study of Language and Information (CSLI), where he helped develop a multi-application dialogue management system. This system is currently being applied in a number of projects, including control of in-car devices, tracking multi-party multi-modal meetings, and intelligent tutoring. Lawrence has also held positions in industry R&D labs. As well as dialogue, Lawrence's research interests include collaboration in multi-agent systems and formal models of reasoning. He holds a PhD in Cognitive Science from the University of Edinburgh.

Conditional Random Fields

Trevor Cohn, Department of Computer Science, University of Melbourne

Conditional Random Fields (CRFs) are exciting new log-linear models that allow probabilistic modelling and inference over complex labelling decisions given an observation. The framework has been applied in natural language processing, genetics and computer vision, often producing state-of-the-art results. This tutorial will describe CRFs and how they relate to generative models such as the hidden Markov model and dynamic Bayesian networks. CRFs will also be compared to maximum entropy models and maximum entropy Markov models. The process of estimating the parameters of a CRF and decoding (finding the maximum a posteriori labelling) will be described, both for the traditional chain-structured graphs used in sequence labelling tasks and for more complicated, loopy structures. A prior distribution is often required in order to regularise the model (reduce the degree of over-fitting); some of these regularisation methods will be covered. Finally, the course will describe the tractability issues inherent in CRFs, and some techniques used to address them.
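For the chain-structured case, the decoding step the abstract describes (finding the maximum a posteriori labelling) is the Viterbi algorithm over per-position and transition scores. The sketch below illustrates that dynamic programme on a toy two-label problem; the labels and all score values are invented for illustration, not learned CRF parameters.

```python
import numpy as np

# Toy linear-chain decoding via Viterbi: find the highest-scoring
# label sequence given per-position (emission) and transition scores.
# Labels and scores are hypothetical, for illustration only.
labels = ["NOUN", "VERB"]

def viterbi(emission, transition):
    """emission: (T, L) per-position label scores; transition: (L, L) scores.
    Returns the best label sequence as a list of label indices."""
    T, L = emission.shape
    score = np.zeros((T, L))
    back = np.zeros((T, L), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        # For each current label, take the best previous label
        # plus the transition and emission scores.
        cand = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    # Trace back the best path from the final position.
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

emission = np.array([[2.0, 0.5], [0.3, 1.5], [1.8, 0.2]])
transition = np.array([[0.1, 1.0], [1.2, 0.1]])
best = viterbi(emission, transition)
print([labels[i] for i in best])  # -> ['NOUN', 'VERB', 'NOUN']
```

In a real CRF these scores would be weighted sums of feature functions over the observation, and training would fit the weights by maximising the conditional log-likelihood; the loopy structures the tutorial covers need approximate inference instead of this exact recursion.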


Trevor Cohn is a PhD student at the University of Melbourne, studying Conditional Random Fields for Natural Language Processing under Steven Bird and Miles Osborne. He graduated from the University of Melbourne in 2000 with a BEng (Software, Hons) and a BComm, then worked in industry as a software engineer for three years before returning to academia. He has recently returned from a year-long visit to the University of Edinburgh.

Tree Adjoining Grammar and Related Approaches

Mark Dras, Department of Computing, Macquarie University

A significant focus of mid-twentieth-century linguistics was the investigation of constrained formalisms for representing language, with an associated question: what can these constraints tell us about the nature of language? The notion of constrained formalisms was taken up strongly by the field of computer science in the development of programming languages, but became less central in linguistics. In computational linguistics, however, the trend continued with several different formalisms: Tree Adjoining Grammar, Generalized Phrase Structure Grammar, and Combinatory Categorial Grammar, among others. This approach allowed the complexity of language processing -- both computational and human -- to be considered from a perspective different from that of other approaches. This tutorial will focus on Tree Adjoining Grammar (TAG), an approach that, broadly speaking, takes trees assigned to individual words as the core element of language representation -- rather than linear rules, systems, or pairs of structures -- along with a restricted definition of tree combination. The tutorial will look at several aspects: the definition of TAG; some formal properties; the XTAG project, a primary goal of which has been to build a large-scale grammar of English, along with work on other languages; parsers and other computational machinery for TAG, along with issues of complexity of processing; supertagging and 'almost parsing', the assigning of structured tags to lexical items; the Synchronous TAG formalism, and its use as a representation for pairing language and action, in the context of human-like computational agents; and the relationship of TAG to other related formalisms.
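TAG's restricted definition of tree combination comes down to two operations: substitution (plugging an initial tree into a marked leaf) and adjunction (splicing an auxiliary tree into an internal node, with the displaced subtree moving to the auxiliary tree's foot node). The sketch below illustrates both on a tiny invented grammar fragment, with trees encoded as (label, children) tuples; the encoding and example trees are this sketch's own conventions, not XTAG's.

```python
# Sketch of TAG substitution and adjunction on (label, children) tuples.
# Leaf words are plain strings; "NP!" marks a substitution site and
# "VP*" a foot node. The grammar fragment is illustrative only.

def substitute(tree, site, initial):
    """Replace each leaf labelled `site` with the initial tree `initial`."""
    if tree == site:
        return initial
    if isinstance(tree, str):
        return tree
    label, children = tree
    return (label, [substitute(c, site, initial) for c in children])

def adjoin(tree, node, aux, foot):
    """Adjoin auxiliary tree `aux` at the first node labelled `node`:
    the subtree rooted there moves to the foot node of `aux`."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    if label == node:
        return substitute(aux, foot, (label, children))
    return (label, [adjoin(c, node, aux, foot) for c in children])

# An elementary tree anchored by "sleeps", an initial tree for "John",
# and an auxiliary tree anchored by "soundly".
sleeps = ("S", ["NP!", ("VP", [("V", ["sleeps"])])])
john = ("NP", [("N", ["John"])])
soundly = ("VP", ["VP*", ("Adv", ["soundly"])])

derived = adjoin(substitute(sleeps, "NP!", john), "VP", soundly, "VP*")
print(derived)  # tree for "John sleeps soundly", with the VP wrapped
```

Adjunction is what gives TAG its extra power over context-free grammar: the recursive splicing lets it capture cross-serial dependencies while remaining efficiently parsable.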


Mark Dras is a lecturer in the Department of Computing, Macquarie University. He did his PhD in computational linguistics, on characterising paraphrase as an optimisation problem and looking at related representational questions, at the Microsoft Research Institute and Macquarie University, completing in 1999. That was followed by a postdoctoral fellowship at the University of Pennsylvania in the Institute for Research in Cognitive Science, with Professor Aravind Joshi. His research interests can be broadly characterised as language transformations: paraphrase, machine translation and formal representations for describing them and their properties.

Automatic Summarisation: Picking out the best bits!

Nicola Stokes, NICTA Victoria Laboratory, University of Melbourne

The aim of text summarisation is to identify the most important information in a source document, and present the user with a coherent, abridged version containing its core message. This tutorial will review the state of the art in automatic text summarisation. In particular, the various statistical and symbolic/linguistic approaches to single-document, multi-document and question-focused summarisation will be explored. The principal challenges and remaining problems in summarisation research will also be discussed. One of these challenges concerns the development of meaningful evaluation metrics for measuring summary quality, i.e. summary coverage and readability. This tutorial will review some of the automatic and manual summarisation evaluation methodologies proposed by DUC Workshop participants that aim to address the problem of variability in human quality judgements.
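The simplest statistical approach the tutorial covers is extractive: score each sentence, here by the document frequency of its content words (in the spirit of Luhn's classic method), and return the top-scoring sentences in document order. The stopword list and sample text below are illustrative placeholders.

```python
import re
from collections import Counter

# A minimal frequency-based extractive summariser sketch: sentences whose
# content words are frequent in the document score highly. Stopword list
# and example text are hypothetical, for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "and", "in", "it", "its"}

def content_words(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def summarise(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(content_words(text))
    def score(sentence):
        toks = content_words(sentence)
        return sum(freq[w] for w in toks) / max(len(toks), 1)
    # Keep the n best sentences, preserving their original document order.
    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

text = ("Summarisation systems try to pick out the important sentences in a document. "
        "Important sentences contain frequent content words. "
        "The weather was pleasant.")
print(summarise(text, n_sentences=1))
```

Extracts like this illustrate why the evaluation problem the abstract raises is hard: a summary can have good coverage by this kind of measure and still read incoherently, which is why DUC evaluations assess readability separately.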


Nicola Stokes is a NICTA Research Fellow based at the NICTA Victoria Laboratory at the University of Melbourne. She is also a member of the UniMelb Language Technology group, and is currently working on the NICTA Interactive Information Discovery and Delivery (I2D2) project with Steven Bird and Tim Baldwin (among others). Prior to joining NICTA, Nicola was a Postdoctoral Researcher with the Text Summarisation group at the Department of Computer Science, University College Dublin (UCD). During her PhD studies at UCD, she investigated the application of lexical cohesion analysis in a number of IR and NLP tasks in the broadcast news and newswire domain, including breaking news story detection, news story segmentation and news story gisting. Her current research interests include Multi-document Summarisation, Question Answering, Textual Entailment and Paraphrase Identification.

For any comments or questions about these pages please contact the ALTA Workshop 2005 organisers.
Timothy Baldwin (University of Melbourne -- co-chair)
James Curran (University of Sydney -- local organiser)
Menno van Zaanen (Macquarie University -- co-chair)

Copyright 2003 ALTA. Last updated: Wed Nov 30 12:18:44 EST 2005