CMSC838L Reading Seminar on Research in Human-Computer Interaction

posted in: Graduate, Spring 2015

By nature, HCI has been interdisciplinary, incorporating fields such as Computer Science, Psychology, and Design. The future of HCI will be even more so. This 1-credit weekly reading seminar will cover recent papers that represent new directions and opportunities in the field while reflecting on current trends. We will target papers that not only integrate multiple sub-disciplines of Computer Science (e.g., machine learning, crowdsourcing) but also address problem spaces traditionally found in other fields (e.g., urban planning, environmental sustainability). This semester, we will focus on interesting applications of machine learning, crowdsourcing, and/or social computing around a central monthly theme. Students will be invited to suggest papers and themes after the first month of readings.

9/8/2015 Acoustruments

Gierad Laput, Eric Brockmeyer, Scott E. Hudson, and Chris Harrison. 2015. Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2161-2170. DOI=10.1145/2702123.2702414

This paper presents Acoustruments, a novel tangible interaction technique for mobile devices that routes ultrasonic signals from the device's speaker, through passive physical structures, to its microphone. Compared with traditional approaches, Acoustruments are low-cost, passive, customizable, accurate, and require no power. In addition, the authors identify an expansive vocabulary of design primitives. The paper evaluates the training requirements, accuracy, and noise robustness of Acoustruments, validating the technique's feasibility and performance. Finally, the authors build several example applications with the new method.
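The core sensing idea — a passive structure interposed in the speaker-to-microphone acoustic pathway changes the received signal, and that change is classified as a control state — can be illustrated with a toy sketch. This is my own minimal illustration, not the paper's implementation (the authors use richer features and an SVM classifier); the carrier frequency, threshold, and amplitude values here are all assumed for demonstration.

```python
import numpy as np

SAMPLE_RATE = 44_100   # typical phone audio sampling rate
TONE_FREQ = 20_000     # assumed near-ultrasonic carrier frequency

def make_tone(duration_s=0.05, amplitude=1.0):
    """Simulate the carrier tone arriving at the microphone;
    `amplitude` stands in for how much the passive structure
    attenuates the signal along the acoustic pathway."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * TONE_FREQ * t)

def received_amplitude(signal):
    """Estimate carrier strength at the microphone (RMS energy)."""
    return np.sqrt(np.mean(signal ** 2))

def classify_control_state(signal, threshold=0.35):
    """Toy classifier: a button pressed into the pathway blocks the
    carrier, so low received energy is read as 'pressed'."""
    return "pressed" if received_amplitude(signal) < threshold else "released"
```

For example, an unobstructed pathway (`make_tone(amplitude=1.0)`) classifies as `"released"`, while a heavily attenuated one (`make_tone(amplitude=0.1)`) classifies as `"pressed"`. Real Acoustruments primitives encode far more than a binary state, but the same energy-based intuition underlies them.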


This paper contributes to the HCI, tangible interactive computing, and fabrication communities as follows:

  1. Acoustruments provide rich, tangible functionality for handheld device interaction.
  2. The authors illustrate how 3D printing and design primitives can be combined to build rich physical controls for mobile devices.
  3. Acoustruments could also be produced with traditional fabrication techniques such as injection molding, milling, and machining.


The paper is very well organized, and the video and figures are of excellent quality. As a presentation of a novel interaction technique, it deserves a best paper award. Its only flaws:

  1. The code repository (especially the modified version of LibSVM) is missing, which hampers the reproducibility of the paper.
  2. The lack of a formal user study leaves open the question of how useful the potential applications are to end users.

Future Directions

  1. (Technical) Use signal modulation techniques, or devices with higher sampling rates, to deal with cross-talk scenarios.
  2. (User study) Conduct a formal user study and a more detailed exploration of end-user goals.

I agree with Matt that the related work section “reads like a list of things rather than a good synthesis.” This paper reads like a UIST paper (which I prefer) rather than a CHI paper. I also agree with Matt that a formal user study and a clear end goal are needed; however, the work itself could lead to a patent and inspire many HCI communities.


9/1/2015 Bootlegger

Guy Schofield, Tom Bartindale, and Peter Wright. 2015. Bootlegger: Turning Fans into Film Crew. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 767-776. DOI=10.1145/2702123.2702229

This paper presents Bootlegger, a novel system for creating live music films by recruiting volunteers from the audience to shoot footage from different viewpoints with their mobile phones. Bootlegger was deployed at several concerts, and the authors discuss lessons learned from the deployments.


This paper contributes to the HCI and crowdsourcing communities as follows:

  1. Bootlegger is the first system to direct audience members to shot locations and roles (e.g., close-up of the musician, wide shot of the audience), producing more aesthetically pleasing footage while maintaining maximum coverage of all subjects across all cameras.
  2. Bootlegger limits the duration of each shot to preserve aesthetic quality.
  3. Bootlegger streamlines the editing process by producing metadata for each recorded clip.
  4. The authors report lessons learned from five deployments at real live music concerts, e.g., whether the experience was enjoyable or stressful and whether the auto-director algorithm works.
  5. Bootlegger could inspire crowdsourced video production of other events such as football games, arts festivals, protests, and rallies.
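How per-clip metadata can streamline editing is easy to sketch: if every clip carries a shared timeline offset and a shot role, an editor (or an auto-director) can cut between cameras by query rather than by scrubbing raw footage. The sketch below is my own toy illustration of that idea, not Bootlegger's actual data model; the field names and the selection rule are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ClipMetadata:
    camera_id: str     # which volunteer's phone shot the clip
    shot_role: str     # e.g. "close-up of musician", "wide of audience"
    start_s: float     # offset from a shared concert clock, seconds
    end_s: float

def pick_clip(clips: List[ClipMetadata], t: float,
              preferred_role: str) -> Optional[ClipMetadata]:
    """Toy cut decision: among clips covering time t, prefer the
    requested shot role; otherwise fall back to any covering clip."""
    covering = [c for c in clips if c.start_s <= t < c.end_s]
    for c in covering:
        if c.shot_role == preferred_role:
            return c
    return covering[0] if covering else None
```

With clips tagged this way, assembling a rough cut reduces to a sequence of `pick_clip` calls over the concert timeline, which is the kind of work the metadata spares a human editor.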


The paper is well organized, with some small flaws:

  1. How does training work in Bootlegger? The paper only says, “after a short explanation by the development team…”
  2. The figures in this paper are low-resolution and look unprofessional.
  3. The system was deployed only a few times, and the number of volunteers (5–6) merely matches the number of view locations (5–6), so there is no backup if a volunteer fails to shoot usable video.

Future Directions

  1. (Technical) Video stabilization
  2. (Technical) Multi-view reconstruction / virtual reality
  3. (Application) Add narrative audio tracks & tags
  4. (User study) More detailed user study

