Modern video-based head-mounted displays allow users to operate along Milgram’s entire reality-virtuality continuum. This opens up the field for novel cross-reality applications that distribute data analytics tasks along this continuum to combine the benefits of established 2D information visualisation in the real environment with immersive analytics. In this publication, we explore this potential by transforming 2D graph data from a planar, large-scale display in the real environment into a spherical layout in augmented reality 3D space, letting it appear as if the graph is moving out of the display. We focus on design aspects of this transformation that potentially help users to form a joint mental model of both visualisations and to continue their tasks seamlessly in augmented reality. For this purpose, we implemented a framework of transformation parameters that can be categorised as follows: transformation methods, node transformation order (groupings), and different ways of visual interconnection. Variants in each of these areas were investigated in three quantitative user studies in which users had to solve a simple cluster search task. We confirmed that a visual transformation from 2D to 3D helps users to continue their tasks in augmented reality with fewer interruptions, and that node transformation order should be adjusted to data and task context. We further identified that users can perform tasks more efficiently when a user-controlled transformation is used, while a constant transformation with fixed duration can contribute to lower error rates.
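To make the planar-to-spherical transformation concrete, the following is a minimal sketch of one plausible realisation: node positions on the 2D display are wrapped onto a sphere via an equirectangular-style mapping, and an interpolation parameter drives the animated transition. The mapping choice, function names, and linear interpolation are illustrative assumptions, not the paper's actual implementation.

```python
import math

def planar_to_sphere(x, y, width, height, radius=1.0):
    """Map a 2D display coordinate onto a sphere.

    Equirectangular-style wrap: the horizontal axis becomes longitude,
    the vertical axis becomes latitude. This is an illustrative choice;
    the paper's spherical layout may differ.
    """
    lon = (x / width) * 2.0 * math.pi - math.pi        # [-pi, pi]
    lat = (y / height) * math.pi - math.pi / 2.0       # [-pi/2, pi/2]
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

def interpolate(p2d, p3d, t):
    """Blend a node from its planar position (embedded at z = 0)
    towards its spherical target; t in [0, 1] drives the animation,
    e.g. advanced by a fixed-duration timer or by user input."""
    x0, y0, z0 = p2d[0], p2d[1], 0.0
    x1, y1, z1 = p3d
    return (x0 + t * (x1 - x0),
            y0 + t * (y1 - y0),
            z0 + t * (z1 - z0))
```

A user-controlled transformation, as studied in the paper, would map an input gesture to `t` directly, while a constant transformation would advance `t` linearly over a fixed duration.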

Journal: Frontiers in Virtual Reality
Publication status: Published - 2023


Title: Transforming graph data visualisations from 2D displays into augmented reality 3D space: A quantitative study