18th International Conference on MultiMedia Modeling, Klagenfurt, Austria, January 4-6, 2012

Special Sessions CfP

MMM2012 will also feature special sessions. In addition to the main conference, authors are therefore invited to submit papers to the following four special sessions. The submission deadline and page limit are the same as for regular papers; submission details can be found here.

For any question, please contact the Special Session Co-chairs:

  • Marco Bertini, Università di Firenze, Italy, bertini(at)
  • Mathias Lux, Klagenfurt University, Austria, mlux(at)


1. Interactive and Immersive Entertainment and Communication

Rene Kaiser1, Dr. Pablo Cesar2, Dr. Oliver Schreer3, Dr. Graham Thomas4
1Joanneum Research, DIGITAL – Institute for Information and Communication Technologies, Graz, Austria
2CWI – Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
3Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, Berlin, Germany
4BBC R&D, South Lab, Centre House, London, United Kingdom

In recent years, patterns of media consumption have been changing rapidly, but video production technology has not kept up. Video material is now viewed on screens ranging in size from an IMAX cinema, through large domestic projection and flat-panel displays, down to mobile phones. While different screen sizes require different editing of the content (e.g., shot framing, editing speed), it is not commercially feasible to produce separate content for each target platform. Another significant change in media consumption habits is the level of interactivity that consumers increasingly expect. Traditional broadcasting is lagging behind the expectations of a generation that grew up with the interactivity of the Web and computer games. Multi-source, multi-sensor and omnidirectional audiovisual capture systems, which have been proposed in recent years, offer the potential for novel types of immersive entertainment. The challenge for multimedia systems is to enable more interactive and realistic experiences for the consumer by providing intuitive means of navigating scenes, selecting content and interacting with other users and media assets, and by making interactions more natural. These technologies are applicable both to entertainment applications and to communication between groups of people. While there are application scenarios in which users want full interaction possibilities, there is a risk of overwhelming users who are not digital natives, or who seek a lean-back experience, with the wealth of available content. Thus, (semi-)automatic production tools that guide the user and provide options rather than unlimited freedom, while keeping the production process efficient and cost-effective, are urgently needed. This session invites contributions describing state-of-the-art multimedia systems that enable immersive and interactive user experiences in entertainment and communication scenarios.
The presented technologies make use of multiple high-resolution sensors and provide innovative means for user interaction. Ideally, the technologies are also applicable to application areas other than entertainment and communication.


2. Multimedia Preservation: How to ensure multimedia access over time?

Prof. Dr. Mario Döller1, Prof. Dr. Seamus Ross2, Walter Allasia, PhD3, Florian Stegmaier1
1University of Passau, Germany
2University of Toronto, Canada
3EURIX Group, Italy

Multimedia data is vital to many domains. Examples range from medical and healthcare records such as CT scans, through security applications coping with large volumes of geo-data gathered by satellites, to the social sector with its large collections of photos and videos. The ability to effectively manage multimedia content and to preserve it over time has become a necessity for businesses and the general public alike. But what does digital preservation mean? The American Library Association (ALA) has characterized it as follows: “Digital preservation combines policies, strategies and actions that ensure access to digital content over time.” Although this initiative is mainly driven by libraries, interoperable access to digital data and its protection against loss caused by, e.g., technology changes affects everyone. This is especially true in light of recent statistics illustrating the growth of multimedia data on the social web: the well-known online photo management and sharing application Flickr hosts 5 billion images and grows by more than 3,000 images per minute. How can this amount of data still be accessed in 50 years, despite technological changes? Are recent research achievements such as the Linked Open Data movement suitable for addressing current issues of multimedia preservation? Technologies, concepts and methodologies are needed to lower the barriers between systems and to guarantee interoperable access across different domains. In this context, the special session is planned to bring together researchers in the fields of interoperable multimedia access (e.g., metadata modelling, retrieval) and processing (e.g., transmission, coding) on the one hand, and experts in the area of cultural heritage (e.g., libraries, museums) on the other.
We expect innovative submissions addressing visionary concepts and ideas to improve the current situation in multimedia preservation.

Topics of interest include, but are not limited to:

  • Interoperable multimedia access.
  • Interoperable multimedia frameworks.
  • Interoperable multimedia storage and exchange formats.
  • Linked Open Data and digital libraries.
  • Access policies (digital rights).
  • Applications and methodologies for the preservation of multimedia items: content and metadata.
  • Cultural heritage and multimedia.


3. Multi-modal and Cross-modal Search

Dr. Petros Daras1, Antonio Camurri2, Anne Verroust-Blondet3, Thomas Steiner4
1Informatics and Telematics Institute, Centre for Research and Technology Hellas, Greece
2Casa Paganini - InfoMus research centre, DIST, University of Genoa, Italy
3INRIA Paris-Rocquencourt, France
4Google Germany GmbH, Germany

As the amount of multimedia content available over the Internet increases at an incredible pace, there is an emerging need for effective search across the various online media databases. To this end, much research has been conducted on methods for content-based multimedia retrieval based on the automatic extraction of low-level features from the content. Most of these methods use mono-modal queries to retrieve results of the same media type. Cross-modal retrieval, introduced in recent years, comprises methods that take as input a query in one modality to retrieve results in another. Moving beyond cross-modal retrieval, multi-modal retrieval allows users to enter multi-modal queries and retrieve multiple types of media simultaneously.
This special session of MMM2012 invites unpublished, original research relating to cross-modal and multi-modal retrieval. More specifically, the topics of interest include (but are not limited to):


  • Multi-modal and cross-modal indexing
  • Multi-modal query interfaces
  • Multi-modal interaction interfaces
  • Non-verbal social query interfaces
  • Multi-modal retrieval


4. Video Surveillance

Dr. Jun-Wei Hsieh1, Dr. Shyi-Chyi Cheng1, Dr. Duan-Yu Chen1, Hong-Yuan Mark Liao2, Prof. Phoebe Chen, PhD3, Zheng-Jun Zha4
1Department of Electrical & Electronic Engineering, Yuan-Ze University, Taiwan
2Academia Sinica, Taiwan
3Department of Computer Science and Computer Engineering, La Trobe University, Melbourne, Victoria 3086, Australia
4School of Computing, National University of Singapore

This special session brings together advanced research themes in video surveillance, including behavior analysis, unusual-event analysis, event modeling, action analysis and abandoned-object detection, as well as surveillance system design and development. The special session will provide an international forum for researchers and academics in the fields of pattern recognition (neural networks, machine learning, support vector machines, AdaBoost, etc.), computer vision, image processing, biometrics, and intelligent multimedia processing.