Random Research Projects

Teaching Tools

MultiPresenter: Presentation System for Large Display Spaces

We introduce MultiPresenter, a novel presentation system designed to work on very large display spaces (multiple displays or physically large high-resolution displays). MultiPresenter allows presenters to organize and present pre-made and dynamic presentations that take advantage of a very large display space accessed from a personal laptop. Presenters can use the extra space to provide long-term persistence of information to the audience. Our design deliberately separates content generation (authoring) from the presentation of content. We focus on supporting presentation flow and a variety of presentation styles, ranging from automated, scripted sequences of pre-made slides to highly dynamic, ad hoc, non-linear content. By providing smooth transitions between these styles, presenters can easily alter the flow of content during a presentation to adapt to an audience or to change emphasis in response to emerging interests. We describe our goals, rationale, and design process, provide a detailed description of the current version of the system, and discuss our experience using it throughout a one-semester first-year computer science course.

Lanir, Y., Booth, K. S. and Tang, A. (2008). MultiPresenter: A Presentation System for (Very) Large Display Spaces. In Proceedings of the 16th ACM International Conference on Multimedia (MULTIMEDIA 2008). (October 27 - November 1, 2008, Vancouver, Canada). ACM Press. (conference - Acceptance: 56/280 - 20%)


Slit-Tear Visualizations for Stationary Video

Video slicing — a variant of slit scanning in photography — extracts a scan line from a video frame and successively adds that line to a composite image over time. The composite image becomes a timeline, whose visual patterns reflect changes in a particular area of the video stream. We extend this idea of video slicing by allowing users to draw marks anywhere on the source video to capture areas of interest. These marks, which we call slit-tears, are used in place of a scan line, and the resulting composite timeline image provides a much richer visualization of the video data. Depending on how tears are placed, they can accentuate motion, small changes, directional movement, and relational patterns.

(Try the slit-tears demo -- sample code)
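The compositing step described above can be sketched in a few lines. Assuming the video arrives as a NumPy array of frames, a hypothetical `slit_tear_timeline` function (the name and array layout are illustrative, not from the demo code) samples the pixels under a user-drawn mark in each frame and stacks those samples over time:

```python
import numpy as np

def slit_tear_timeline(frames, tear_points):
    """Build a composite timeline image from a stationary video.

    frames:      array of shape (T, H, W, 3) -- the video frames
    tear_points: list of (row, col) pixel coordinates along a
                 user-drawn mark (the slit-tear)
    Returns an array of shape (len(tear_points), T, 3): column t holds
    the pixels under the tear at frame t, so horizontal streaks in the
    result correspond to changes over time at the marked area.
    """
    rows = np.array([p[0] for p in tear_points])
    cols = np.array([p[1] for p in tear_points])
    # Sample the pixels under the tear in every frame, then stack the
    # samples side by side to form the timeline image.
    columns = [frames[t][rows, cols] for t in range(frames.shape[0])]
    return np.stack(columns, axis=1)
```

A classic slit scan is the special case where the tear is a single vertical line of the frame; arbitrary marks generalize it to any region of interest.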

Tang, A., Greenberg, S., and Fels, S. (2009). Exploring Video Streams Using Slit-Tear Visualizations. In Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI 2009). (April 4-9, 2009, Boston, USA). ACM Press. pp: 3509-3510. (draft video; conference - best research video nominee)

Tang, A., Greenberg, S., and Fels, S. (2008). Exploring Video Streams Using Slit-Tear Visualizations. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2008). (May 28-30, Napoli, Italy). ACM Press. pp: 191-198. (conference - Acceptance: 32/117 - 27%)

Tang, A., Greenberg, S. and Fels, S. (2008). Exploring Video Streams Using Slit-Tear Visualizations: The Video. Research report 2008-897-10, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada T2N 1N4, May.

Social Visualization: IM Interaction

Instant messaging (IM) allows us to maintain relationships with our social network through messaging and status information. We present early iterations of visualizations of IM interactions that help to visually identify several different types of relationships, such as intimate socials, long-lost-friend, and asymmetric relationships. Our work is motivated by an interest in designing awareness systems that can help reflect or even affect our desired social relationships.

Tang, A. and Neustaedter, C. (2006). Visualizing Egocentric Relationships in Instant Messaging. ACM CHI 2006 Workshop on Social Visualization: Exploring Text, Audio and Video Interactions. Organized by Karahalios, K. and Viegas, F. (workshop)

Interaction Techniques

Shadow Reaching: (Very) Large Display Interaction

We introduce Shadow Reaching, an interaction technique that makes use of a perspective projection applied to a shadow representation of a user. The technique was designed to facilitate manipulation over large distances and enhance understanding in collaborative settings. We describe three prototype implementations that illustrate the technique, examining the advantages of using shadows as an interaction metaphor to support single users and groups of collaborating users. Using these prototypes as a design probe, we discuss how the three components of the technique (sensing, modeling, and rendering) can be accomplished with real (physical) or computed (virtual) shadows, and the benefits and drawbacks of each approach.
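The perspective projection at the heart of the technique can be sketched as a ray-plane intersection: the shadow of a tracked body point falls where the ray from the light source through that point meets the wall. The function name, coordinate convention (wall at z = 0, z measured as distance from the wall), and light placement below are illustrative assumptions, not the paper's implementation:

```python
def project_to_wall(light, body_point):
    """Perspective-project a tracked body point onto the wall plane z = 0.

    light, body_point: (x, y, z) tuples; the light must be farther from
    the wall than the body point (lz > pz > 0).
    Returns the (x, y) position of the shadow on the wall.
    """
    lx, ly, lz = light
    px, py, pz = body_point
    # Parameter t at which the ray L + t*(P - L) reaches z = 0.
    t = lz / (lz - pz)
    sx = lx + t * (px - lx)
    sy = ly + t * (py - ly)
    return (sx, sy)
```

Because t grows as the body point nears the light, moving the (real or virtual) light changes the projection's magnification, letting a user reach distant parts of a wall display with small physical movements.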

Shoemaker, G., Tang, A., and Booth, K. S. (2007). Shadow Reaching: A New Perspective on Interaction for Large Wall Displays. In Proceedings of the 20th ACM Symposium on User Interface Software and Technology (UIST 2007). (October 7-10, Newport, RI, USA). pp: 53-56. (conference - Acceptance: 24/129 - 19%)

C-Band: Visual Tagging for Interaction

This paper presents a new visual tag system, C-Band, which is based on a ring with a color pattern code; it offers several functionalities that cannot be achieved by existing visual tag systems: the tag may be any convex shape and may contain any figure. A prototype shows that a C-Band tag expressing up to 28 bytes of data can be effectively extracted from a 640×480 pixel image. These features suggest that C-Band is an effective method with which to build various phone camera-based applications involving both static physical objects (e.g. magazine ads) and dynamic digital objects (e.g. large public displays).
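The general idea of a color pattern code is to pack bits into a sequence of colored cells. The sketch below encodes bytes at two bits per cell using a four-color palette; this is a generic illustration of the concept, not the actual C-Band encoding or ring geometry, which are defined in the papers:

```python
def encode_color_ring(data, palette=("R", "G", "B", "Y")):
    """Encode a byte string as a sequence of colour cells, two bits per
    cell, using a four-colour palette -- a generic colour-pattern code
    in the spirit of (but not identical to) C-Band."""
    cells = []
    for byte in data:
        # Emit the byte's four 2-bit groups, most significant first.
        for shift in (6, 4, 2, 0):
            cells.append(palette[(byte >> shift) & 0b11])
    return cells
```

At this density, the 28 bytes mentioned above would occupy 112 color cells around the ring; the real system must additionally handle synchronization, error detection, and robust color classification under varying lighting.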

Miyaoku, K., Tang, A., and Fels, S. (2007). C-Band: A Ring Tag System Using a Color Pattern Code. Information Processing Society of Japan Journal, Vol. 48, No. 3, March, pp: 1361-1371. (journal, in Japanese)

Miyaoku, K., Tang, A., and Fels, S. (2007). C-Band: A Flexible Ring Tag System for Camera-Based User Interface. In Proceedings of HCI International 2007 (HCII 2007). (July 22-27, Beijing, China). Springer LNCS 4563. pp: 320-328. (conference - Acceptance: 34%)

Rotate 'n' Translate: Tabletop Interaction

Previous research has shown that the rotation and orientation of items play three major roles during collaboration: comprehension, coordination, and communication. Based on these roles of orientation and advice from kinesiology research, we have designed the Rotate'N Translate (RNT) interaction mechanism, which provides integrated control of rotation and translation using only a single touch point for input. We present an empirical evaluation comparing RNT to a common rotation mechanism that separates control of rotation and translation. Results of this study indicate that RNT is more efficient than the separate mechanism and better supports the comprehension, coordination, and communication roles of orientation.
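The integrated behavior can be approximated with a simple "trailing" model: the object's centre is dragged behind the finger at the distance fixed when the object was grabbed, like a sheet of paper pulled across a table, and the orientation turns with the finger-to-centre direction. This is a deliberately minimal sketch; the published RNT mechanism is richer (it uses a pseudo-physical metaphor of moving against an opposing current), and the function name is hypothetical:

```python
import math

def rnt_step(center, grab_dist, touch):
    """Advance one frame of a minimal trailing approximation of RNT.

    center    -- object centre (x, y) before this frame
    grab_dist -- centre-to-touch distance fixed at touch-down
    touch     -- current finger position (x, y)
    Returns the new centre and the orientation angle (radians).  A
    single touch point thus yields both translation and rotation.
    """
    dx, dy = center[0] - touch[0], center[1] - touch[1]
    d = math.atan2(dy, dx)            # direction from finger to centre
    # Pull the centre to the fixed grab distance behind the finger.
    new_center = (touch[0] + grab_dist * math.cos(d),
                  touch[1] + grab_dist * math.sin(d))
    return new_center, d
```

Called once per input event with the latest touch position, the centre follows the finger while the object swings around it, so grabbing near an edge and dragging produces the combined rotate-and-translate motion the paper evaluates.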

Kruger, R., Carpendale, M.S.T., Scott, S. D., and Tang, A. (2005). Fluid Orientation on a Tabletop Display: Integrating Rotation and Translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2005). (April 2-7, Portland, Oregon). pp: 601-610. ACM Press. (conference - Acceptance: 93/371 - 25%)

Workplace Work Practice

Email: Understanding How Users Monitor Incoming Email

We have only a limited understanding of how users continuously monitor and manage their incoming email flow. A series of day-long field observations uncovered three distinct strategies people use to handle their incoming email flow: glance, scan, and defer. Consequently, supporting email flow involves providing simplified views of the email inbox and mechanisms to support the revisitation of overflow messages.

Siu, N., Iverson, L., and Tang, A. (2006). Going with the Flow: Email Awareness and Task Management. In Proceedings of 2006 ACM Conference on Computer Supported Cooperative Work (CSCW 2006). (November 4-8, Banff, Alberta). pp: 441-450. ACM Press. (conference - Acceptance: 47/212 - 22%)

Physical User Interfaces

DartMail: Playful Physical Information Transfer

This video illustrates DartMail, a humorous account of how electronic 'handles' can be quickly created, attached to a physical medium, and exchanged between people. Its primary interface is a physical, RFID-tagged rubber dart. Exchange is accomplished in three rapid steps: association, where the RFID dart is linked to digital data; transfer, where a person hunts down his or her colleague and shoots the dart at them; and retrieval, where the receiver simply passes the dart over another RFID reader to open the associated information in the appropriate application. Serious applications of these tongue-in-cheek ideas are also shown.

Tang, A., Pattison, E. and Greenberg, S. (2005). DartMail: Digital Information Transfer through Physical Surrogates. (Google video) Video Proceedings of the European Conference on Computer Supported Cooperative Work (ECSCW 2005). (September 18-22, Paris, France). Video and 2-page summary, duration 4:39. (conference)


Haptics: Interval or Ordinal Data Perception

Visual information overload is a threat to the interpretation of displays presenting large data sets or complex application environments. To combat this problem, researchers have begun to explore how haptic feedback can be used as another means for information transmission. In this paper, we show that people can perceive and accurately process haptically rendered ordinal data while under cognitive workload. We evaluated three haptic models for rendering ordinal data with participants who were performing a taxing visual tracking task. The evaluation demonstrates that information rendered by these models is perceptually available even when users are visually busy. This preliminary research has promising implications for haptic augmentation of visual displays for information visualization.

Tang, A., McLachlan, P., Lowe, K., Saka, C. R. and MacLean, K. (2005). Perceiving Ordinal Data Haptically Under Workload. In Proceedings of the Seventh International Conference on Multimodal Interfaces (ICMI 2005). (October 4-6, Trento, Italy). pp: 317-324. ACM Press. (conference - Acceptance: 24/97 - 25%; Best Paper Award)


Haptics for Drivers

Drivers rely almost exclusively on visual information to maintain awareness of other drivers and the environment. When this visual attention is required by other tasks (e.g. other in-car interfaces), it is diverted from the driving task. We explore how we can convey some of this rich information to drivers so that their awareness of the driving context is augmented haptically.

Fels, S., Hausch, R., and Tang, A. (2006). Investigation of Haptic Feedback in the Driver Seat. In Proceedings of 9th International IEEE Conference on Intelligent Transportation Systems (ITSC 2006). (September 17-20, Toronto, Canada). pp: 584-589. (conference - Acceptance: 286/424 - 68%)

Affective Computing


Extreme Browsing

As an extreme case of browsing, the pornographic browsing experience has several unique UI characteristics: it requires simple, lightweight controls; usage needs to be discreet; users' mental and physical context needs to be respected; and common, repeated interactions need to be supported. While we identify design goals for user interfaces to better support browsing of pornographic images and movies, the same goals are applicable to other non-controversial browsing activities.

Agarawala, A., Tang, A. and Greenberg, S. (2006). Browsing Pornography: An Interface Design Perspective. ACM CHI 2006 Workshop on Sexual Interactions. Organized by Brewer, J., Kaye, J., Williams, A., and Wyche S. (workshop)

Detecting and Visualizing Arousal

Most previous approaches to building input devices that sense and interpret human affect from physiological measurements have produced interfaces that are cumbersome and carry setup and calibration overhead. Our goal was to create a minimal interface that could still interpret human affect. The results from this input are visualized to inform users about their own state. We describe our simple tangible interface, which requires no configuration and minimal explanation, and does not require specific actions from the user. This interface collects galvanic skin response (GSR) data and creates a visualization of this data.
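A typical first step in turning raw GSR samples into something visualizable is smoothing, so the display reflects slow arousal trends rather than sensor noise. The exponential moving average below is an illustrative sketch of that step, not the processing the original system used:

```python
def smooth_gsr(samples, alpha=0.2):
    """Exponentially smooth a list of raw GSR samples.

    alpha in (0, 1] controls responsiveness: small values track slow
    arousal trends, large values follow the raw signal closely.
    """
    smoothed = []
    level = samples[0]          # seed the filter with the first sample
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        smoothed.append(level)
    return smoothed
```

The smoothed series can then be mapped to visual variables (colour, size, motion) to produce the kind of ambient self-reflection display described above.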

Tang, A., Kratt, D., Carpendale, S. and Dunning, A. (2003). Sensing and Visualising Physiological Arousal. (Google video) Report 2003-727-30, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada T2N 1N4, July.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License