TY - GEN
T1 - Recording a complex, multi modal activity data set for context recognition
AU - Lukowicz, P.
AU - Pirkl, G.
AU - Bannach, D.
AU - Wagner, F.
AU - Calatroni, A.
AU - Förster, K.
AU - Holleczek, T.
AU - Rossi, M.
AU - Roggen, D.
AU - Troester, G.
AU - Doppler, J.
AU - Holzmann, C.
AU - Riener, A.
AU - Ferscha, A.
AU - Chavarriaga, R.
PY - 2010
Y1 - 2010
AB - Publicly available data sets are increasingly becoming an important research tool in context recognition. However, due to the diversity and complexity of the domain, it is difficult to provide standard recordings that cover the majority of possible applications and research questions. In this paper we describe a novel data set that combines a number of properties that, in this combination, are missing from existing data sets. These include complex, overlapping and hierarchically decomposable activities, a large number of repetitions, a significant number of different users and a highly multi-modal sensor setup. The set contains around 25 hours of data from 12 subjects. On the lowest level there are around 30,000 individually annotated actions (e.g. picking up a knife, opening a drawer). On the highest level (e.g. getting up, breakfast preparation) there are around 200 context instances. Overall, 72 sensors from 10 different modalities (different on-body motion sensors, different sound sources, two cameras, video, object usage, device power consumption and location) were recorded.
UR - http://www.scopus.com/inward/record.url?scp=85018097961&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85018097961
T3 - 23rd International Conference on Architecture of Computing Systems 2010, ARCS 2010 - Workshop Proceedings
SP - 161
EP - 166
BT - 23rd International Conference on Architecture of Computing Systems 2010, ARCS 2010 - Workshop Proceedings
A2 - Beigl, Michael
A2 - Cazorla-Almeida, Francisco J.
PB - VDE Verlag GmbH
T2 - 23rd International Conference on Architecture of Computing Systems 2010, ARCS 2010
Y2 - 22 February 2010 through 23 February 2010
ER -