Research Focus

Technological evolution has brought computers into our daily lives: our cars, homes, and smartphones surround us with information. Ever-decreasing display sizes and the fact that mobile computers do not receive our full attention call for new concepts for interacting with these devices.

Automatic speech recognition and text-to-speech technology offer a promising alternative to substitute or augment the more conventional user interfaces we have grown accustomed to. Multi-touch technology is one of the most interesting graphical interaction concepts, gaining popularity with the newer generation of smartphones. The combination of multi-touch and voice recognition has the potential to dramatically speed up workflows.

However, the voice modality brings challenges, as experienced with more traditional deployments in telephony and desktop environments. These can nevertheless be met with profound knowledge of the design of voice user interfaces on the one hand, and of the combination of different modalities on the other. In addition, we are investigating how to support the development of multimodal applications in pervasive environments such as meetings, cars, and homes. Another important aspect of our research is the social dimension of interaction in home environments.

Current Projects:

dialog+
Dialog concepts for enabling a smart command & control voice interface in a home environment, combined with a semantic background information service that assists the user during discussions. The system should provide additional topic-related information without disturbing the user. The challenge is to explore design possibilities for a future smart assistant system at home.

User Interfaces for Brainstorming Meetings with Blind and Sighted Persons
The following research question is addressed in the proposed project: How can appropriate IT-based means improve the participation of the blind in workplace situations that require intense cooperation with the sighted? We plan scientific contributions in the areas of (a) novel interaction devices and (b) novel interaction techniques, combined with research in the area of (c) eAccessibility. Well-matching research labs with considerable experience had to be found for each of the areas (a)–(c); they turned out to be spread over all three countries participating in the ‘lead agency’ cooperation line: Switzerland (ETH Zurich), Germany (TU Darmstadt), and Austria (TU Linz).

Smart Vortex
The goal of Smart Vortex is to provide a technological infrastructure for real-time handling of massive product data streams.

Mundo Speech API
Development of a ubiquitous computing speech API that overcomes the limitations of embedded devices and supports multiple audio input and output devices, as well as multiple text-to-speech engines and speech recognizers with different capabilities in a given environment.
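Such capability-based selection among heterogeneous engines could be sketched roughly as follows. This is a minimal illustration, not the actual Mundo Speech API; all class and method names here are hypothetical:

```python
# Hypothetical sketch of a capability-based engine registry. A ubiquitous
# speech API along these lines would pick, per request, an engine whose
# declared capabilities cover what the application needs.

class Engine:
    """A speech engine (recognizer or synthesizer) with declared capabilities."""
    def __init__(self, name, kind, capabilities):
        self.name = name
        self.kind = kind                    # "asr" or "tts"
        self.capabilities = set(capabilities)

class EngineRegistry:
    """Selects a matching engine for a requested set of capabilities."""
    def __init__(self):
        self.engines = []

    def register(self, engine):
        self.engines.append(engine)

    def select(self, kind, required):
        required = set(required)
        candidates = [e for e in self.engines
                      if e.kind == kind and required <= e.capabilities]
        # Prefer the least capable engine that still fits, e.g. to keep
        # resource usage low on embedded devices.
        return min(candidates, key=lambda e: len(e.capabilities), default=None)

registry = EngineRegistry()
registry.register(Engine("embedded-asr", "asr", {"en-US"}))
registry.register(Engine("server-asr", "asr", {"en-US", "de-DE", "grammar"}))

engine = registry.select("asr", {"de-DE"})
print(engine.name)  # server-asr
```

The same pattern would apply to audio input/output devices: each device advertises its capabilities, and the middleware routes a request to whichever device in the environment can satisfy it.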

Past Projects:

CoStream
We advocate using live video streams not only over larger distances, but also in situ at smaller, closed events such as soccer matches or concerts. We present CoStream, a mobile live video sharing system, and describe its iterative design process.

PalmRC 
We propose to leverage the hand as an interactive surface for TV remote control, as the hand enables device-less and eyes-free TV remote interaction without any third-party mediator device.

Infostrom
The research project Infostrom focuses on technical support for the cooperation of multiple involved organisations in disaster management and for a coordinated cross-organisational recovery in the case of a large power outage.

Designing Social Television
Research on social interactive television has mostly focused on the creation of communication features. In this work, we show that, depending on the video content, social television has a greater potential to provide feelings of togetherness if real-life relationships are taken into account.

STAIRS
The Structured Audio Information Retrieval System (STAIRS) project targets environments where workers need access to information, but cannot use traditional hands-and-eyes devices, such as a PDA.

People

Group Leader

Dr. Dirk Schnelle-Walka, Post-Doctoral Researcher

Research Staff 

Stefan Radomski, Doctoral Researcher
Stephan Radeck-Arneth, Doctoral Researcher
Niloo Dezfuli, Doctoral Researcher
Sebastian Döweling, Doctoral Researcher

Talk and Touch News

27.02.2015

TK co-organizes workshop at EICS'15, the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems

The Telecooperation Lab co-organizes the “2nd Workshop on Engineering Interactive Systems with SCXML” in conjunction with EICS'15, the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems in Duisburg, Germany on... [more]

Category: General News

27.02.2015

TK co-organizes workshop at IUI, the International Conference on Intelligent User Interfaces

The Telecooperation Lab co-organizes the "4th Workshop on Interacting with Smart Objects” in conjunction with IUI'15, the International Conference on Intelligent User Interfaces, on March 29, 2015 in Atlanta, GA, USA.  ... [more]

Category: General News

13.10.2014

SINUS: Siri for the Desktop

With Siri, Apple has made us completely rethink our assessment of speech recognition systems. The possibility of accessing knowledge available on the Internet via speech has fascinated us from the start.... [more]

Category: General News

08.09.2014

Research project: Whom should I turn to?

Whom should I turn to? This question is at the center of a research project at the Telecooperation Lab of Technische Universität Darmstadt. SLANG Radio, a radio station that... [more]


13.06.2014

New Video about Multimodal Error Correction on Mobile Devices

In cooperation with devoteam, we worked on error correction strategies for speech input on mobile devices. The results of our work are shown in this video. [more]


05.03.2014

Accepted work at ICCHP 2014

Three submissions to the upcoming ICCHP 2014 have been accepted as full papers: Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons, Towards an Information State Update Model Approach... [more]


23.08.2013

Accepted work at GSCL workshop

Our paper about standardization efforts for multimodal dialog systems has been accepted at the GSCL workshop "Gesprochene Sprache und Sprachverarbeitung—Dialog und Dialogsysteme". [more]

