Mathieu Nancel
Inria Lille – Nord Europe, Mjolnir team

Bâtiment B, bureau B202
40, avenue Halley - Park Plaza
59650 Villeneuve d'Ascq, France

mathieu nancel net
Google Scholar [↗]
ACM [↗]



2016

[C10*] Next-Point Prediction Metrics for Perceived Spatial Errors
Mathieu Nancel, Daniel Vogel, Bruno De Araujo, Ricardo Jota, Géry Casiez
ACM UIST 2016 (13 pages)
ACM [↗] Video [↗] [ 21% (79 / 384) ]
Touch screens have a delay between user input and corresponding visual interface feedback, called input “latency” (or “lag”). Visual latency is more noticeable during continuous input actions like dragging, so methods to display feedback based on the most likely path for the next few input points have been described in research papers and patents. Designing these “next-point prediction” methods is challenging, and there have been no standard metrics to compare different approaches. We introduce metrics to quantify the probability of 7 spatial error “side-effects” caused by next-point prediction methods. Types of side-effects are derived using a thematic analysis of comments gathered in a 12-participant study covering drawing, dragging, and panning tasks using 5 state-of-the-art next-point predictors. Using experiment logs of actual and predicted input points, we develop quantitative metrics that correlate positively with the frequency of perceived side-effects. These metrics enable practitioners to compare next-point predictors using only input logs.
[C9] The Performance and Preference of Different Fingers and Chords for Pointing, Dragging, and Object Transformation
Alix Goguey, Mathieu Nancel, Daniel Vogel, Géry Casiez
ACM CHI 2016 (12 pages)
ACM [↗] Video [↗] [ 23% (565 / 2435) ]
The development of robust methods to identify which finger is causing each touch point, called “finger identification,” will open up a new input space where interaction designers can associate system actions to different fingers. However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord.” We present empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we develop design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification.

2015

[C8*] Gunslinger: Subtle Arms-down Mid-air Interaction
Mingyu Liu, Mathieu Nancel, Daniel Vogel
ACM UIST 2015 (9 pages)
ACM [↗] pdf [↗] Video [↗] [ 24% (70 / 297) ]
We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features novel ‘hand-cursor’ feedback to communicate recognized hand posture, command mode, and tracking quality, as well as a simple but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, fundamental pointing and clicking, and general usability.
[J2] Mid-air Pointing on Ultra-Walls
Mathieu Nancel, Emmanuel Pietriga, Olivier Chapuis, Michel Beaudouin-Lafon
ACM ToCHI 2015 (62 pages)
ACM [↗] pdf [↗]
Ultra-high-resolution wall-sized displays (“ultra-walls”) are effective for presenting large datasets, but their size and resolution make traditional pointing techniques inadequate for precision pointing. We study mid-air pointing techniques that can be combined with other, domain-specific interactions. We first explore the limits of existing single-mode remote pointing techniques and demonstrate theoretically that they do not support high-precision pointing on ultra-walls. We then explore solutions to improve mid-air pointing efficiency: a tunable acceleration function and a framework for dual-precision techniques, both with precise tuning guidelines. We designed novel pointing techniques following these guidelines, several of which outperform existing techniques in controlled experiments that involve pointing difficulties never tested prior to this work. We discuss the strengths and weaknesses of our techniques to help interaction designers choose the best technique according to the task and equipment at hand. Finally, we discuss the cognitive mechanisms that affect pointing performance with these techniques.
[C7*] Clutching Is Not (Necessarily) the Enemy
Mathieu Nancel, Daniel Vogel, Edward Lank
ACM CHI 2015 (4 pages)
ACM [↗] pdf [↗] Video [↗] [ 25% (379 / 1520) ]
Clutching is usually assumed to be triggered by a lack of physical space and detrimental to pointing performance. We conduct a controlled experiment using a laptop trackpad where the effect of clutching on pointing performance is dissociated from the effects of control-to-display transfer functions. Participants performed a series of target acquisition tasks using typical cursor acceleration functions with and without clutching. All pointing tasks were feasible without clutching, but clutch-less movements were harder to perform, caused more errors, required more preparation time, and were not faster than clutch-enabled movements.
[C6] Myopoint: Pointing and Clicking Using Forearm Mounted EMG and Inertial Motion Sensors
Faizan Haque, Mathieu Nancel, Daniel Vogel
ACM CHI 2015 (4 pages)
ACM [↗] pdf [↗] Video [↗] [ 25% (379 / 1520) ]
We describe a mid-air, barehand pointing and clicking interaction technique using electromyographic (EMG) and inertial measurement unit (IMU) input from a consumer armband device. The technique uses enhanced pointer feedback to convey state, a custom pointer acceleration function tuned for angular inertial motion, and correction and filtering techniques to minimize side-effects when combining EMG and IMU input. By replicating a previous large display study using a motion capture pointing technique, we show the EMG and IMU technique is only 430 to 790 ms slower and has acceptable error rates for targets greater than 48 mm. Our work demonstrates that consumer-level EMG and IMU sensing is practical for distant pointing and clicking on large displays.

2014

[C5*] Causality – A Conceptual Model of Interaction History
Mathieu Nancel, Andy Cockburn
ACM CHI 2014 (10 pages)
ACM [↗] pdf [↗] Video [↗] [ 23% (465 / 2043) , Honorable Mention Award: 5% ]
Simple history systems such as Undo and Redo permit retrieval of earlier or later interaction states, but advanced systems allow powerful capabilities to reuse or reapply combinations of commands, states, or data across interaction contexts. Whether simple or powerful, designing interaction history mechanisms is challenging. We begin by reviewing existing history systems and models, observing a lack of tools to assist designers and researchers in specifying, contemplating, combining, and communicating the behaviour of history systems. To resolve this problem, we present CAUSALITY, a conceptual model of interaction history that clarifies the possibilities for temporal interactions. The model includes components for the work artifact (such as the text and formatting of a Word document), the system context (such as the settings and parameters of the user interface), the linear timeline (the commands executed in real time), and the branching chronology (a structure of executed commands and their impact on the artifact and/or context, which may be navigable by the user). We then describe and exemplify how this model can be used to encapsulate existing user interfaces and reveal limitations in their behaviour, and we also show in a conceptual evaluation how the model stimulates the design of new and innovative opportunities for interacting in time.

2013

[C4] Body-centric Design Space for Multi-surface Interaction
Julie Wagner, Mathieu Nancel, Sean Gustafson, Stéphane Huot, Wendy Mackay
ACM CHI 2013 (10 pages)
ACM [↗] pdf [↗] Video [↗] [ 20% (392 / 1963) , Honorable Mention Award: 5% ]
We introduce BodyScape, a body-centric design space for both analyzing existing multi-surface interaction techniques and suggesting new ones. We examine the relationship between users and their environment, specifically how different body parts enhance or restrict movement in particular interaction techniques. We illustrate the use of BodyScape by comparing two free-hand techniques, on-body touch and mid-air pointing, separately and in combination. We found that touching the torso is faster than touching the lower legs, since the latter affects the user’s balance; individual techniques outperform compound ones; and touching the dominant arm is slower than other body parts because the user must compensate for the applied force. The latter is surprising, given that most recent on-body touch techniques focus on touching the dominant arm.
[C3*] High-Precision Pointing on Large Wall Displays using Small Handheld Devices
Mathieu Nancel, Olivier Chapuis, Emmanuel Pietriga, Xing-Dong Yang, Pourang Irani, Michel Beaudouin-Lafon
ACM CHI 2013 (10 pages)
ACM [↗] pdf [↗] Video [↗] [ 20% (392 / 1963) ]
Rich interaction with high-resolution wall displays is not limited to remotely pointing at targets. Other relevant forms of interaction include virtual navigation, text entry, and direct manipulation of control widgets. However, most techniques for remotely acquiring targets with high precision have studied remote pointing in isolation, focusing on pointing efficiency, and ignoring the need to support these other forms of interaction. We investigate high-precision pointing techniques capable of acquiring targets as small as 4 millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical tablet device's screen space available for task-specific widgets. We compare these techniques to state-of-the-art distant pointing techniques and show that two of our techniques, a purely relative one and one that uses head orientation, perform as well or better than the best pointing-only input techniques while using a fraction of the interaction resources.

2012

[D1*] Designing and Combining Interaction Techniques in Large Display Environments
Mathieu Nancel
Ph.D. Dissertation (236 pages)
Université Paris-Sud [↗] pdf [↗]
Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research about mid-air interactions in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating and command selection. However, these techniques are often designed and evaluated separately on specific platforms and for specific use-cases or operationalizations, which makes it hard to choose, compare and combine them. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output and how it affects usability and collaboration and (2) input channels and how to combine them in rich sets of mid-air interaction techniques. These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods, one for calibrating high-precision techniques with two levels of precision and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel that affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.
[J1] Multisurface Interaction in the WILD Room
Michel Beaudouin-Lafon, Olivier Chapuis, James Eagan, Tony Gjerlufsen, Stéphane Huot, Clemens Klokmose, Wendy Mackay, Mathieu Nancel, Emmanuel Pietriga, Clément Pillias, Romain Primet, Julie Wagner
IEEE Computer (12 pages)
IEEE [↗] pdf [↗]
The WILD room (wall-sized interaction with large datasets) serves as a testbed for exploring the next generation of interactive systems by distributing interaction across diverse computing devices, enabling multiple users to easily and seamlessly create, share, and manipulate digital content.

2011

[C2] Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks
Emmanuel Pietriga, Stéphane Huot, Mathieu Nancel, Romain Primet
ACM EICS 2011 (6 pages)
ACM [↗] pdf [↗] [ 22% (14 / 65) ]
Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays. We present jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms.
[TR3] Precision Pointing for Ultra-High-Resolution Wall Displays
Mathieu Nancel, Emmanuel Pietriga, Michel Beaudouin-Lafon
Inria Technical Report (24 pages)
Inria [↗] pdf [↗]
Ultra-high-resolution wall displays have proven useful for displaying large quantities of information, but lack appropriate interaction techniques to manipulate the data efficiently. We explore the limits of existing modeless remote pointing techniques, originally designed for lower resolution displays, and show that they do not support high-precision pointing on such walls. We then consider techniques that combine a coarse positioning mode to approach the target's area with a precise pointing mode for acquiring the target. We compare both new and existing techniques through a controlled experiment, and find that techniques combining ray casting with relative positioning or angular movements enable the selection of targets as small as 4 millimeters while standing 2 meters away from the display.
[C1*] Mid-air Pan-and-Zoom on Wall-sized Displays
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay
ACM CHI 2011 (10 pages)
ACM [↗] pdf [↗] Video [↗] [ 27% (410 / 1532) , Best Paper Award: 1% ]
Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to perform multiscale navigation in these environments. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. Significant effects were obtained for all three factors: bimanual interaction, linear gestures and a high level of guidance resulted in significantly improved performance. Moreover, the interaction effects among some of the dimensions suggest possible combinations for more complex, real-world tasks.

2010

[TR2] Push Menu: Extending Marking Menus for Pressure-Enabled Input Devices
Stéphane Huot, Mathieu Nancel, Michel Beaudouin-Lafon
Inria Technical Report (10 pages)
Inria [↗] pdf [↗]
Several approaches have been proposed to increase the breadth of standard Marking Menus over the 8 item limit, most of which have focused on the use of the standard 2D input space (x-y). We present Push Menu, an extension of Marking Menu that takes advantage of pressure input as a third input dimension to increase menu breadth. We present the results of a preliminary experiment that validates our design and shows that Push Menu users who are neither familiar with pen-based interfaces nor continuous pressure control can handle up to 20 items reliably. We also discuss the implications of these results for using Push Menu in user interfaces and for improving its design.
[P1] 131 millions de pixels qui font le mur
Michel Beaudouin-Lafon, Emmanuel Pietriga, Wendy Mackay, Stéphane Huot, Mathieu Nancel, Clément Pillias, Romain Primet
Plein Sud 2010 (8 pages)
Université Paris-Sud [↗] pdf [↗]
Imagine a wall of screens displaying high-definition images. Imagine that, with simple gestures, you could interact with it… This is not the movie “Minority Report”, but the realization of a unique human-computer interaction (HCI) project: the WILD platform, which makes it possible to interact with massive, complex datasets.

2009

[F1*] Un espace de conception fondé sur une analyse morphologique des techniques de menus
Mathieu Nancel, Stéphane Huot, Michel Beaudouin-Lafon
ACM IHM 2009 (10 pages)
ACM [↗] pdf [↗] [ 44% (51 / 117) ]
This paper presents a design space based on a morphological analysis of the mechanisms used to structure menus and select items. Its goal is to facilitate the exploration of new types of menus, in particular to increase their capacity without degrading their performance. The paper demonstrates the generative power of this design space through four new menu designs based on combinations of dimensions that had been little or never explored. For two of them, controlled experiments show performance comparable to menus from the literature.

ACM CHI'17 – Interaction Techniques
ACM CHI'16 – Interaction Techniques
ACM ITS'14 (Interactive Tabletops and Surfaces)
ACM AUIC'14 (Australasian User Interface Conference)
ACM CHI'14

Exceptional Reviewer distinction at ACM UIST 2014.

Exceptional Reviewer distinction at ACM UIST 2012.


HCI Journals

ACM ToCHI ACM Transactions on Computer-Human Interaction ( 2013, 2016 )
IJHCS International Journal of Human-Computer Studies ( 2013 – 2016 )

International Conferences in HCI

ACM CHI ACM SIGCHI Conference on Human Factors in Computing Systems ( 2008 – 2016 )
ACM UIST ACM Symposium on User Interface Software and Technology ( 2008, 2012 – 2016 )
ACM CSCW ACM Conference on Computer-Supported Cooperative Work and Social Computing ( 2014 )
Interact International Conference on Human-Computer Interaction ( 2015 )
ACM GI Graphics Interface ( 2015 )
ACM NordiCHI ACM Nordic Conference on Human-Computer Interaction ( 2012, 2014, 2016 )
ACM DIS ACM Conference on Designing Interactive Systems ( 2012, 2014, 2016 )
MobileHCI International Conference on Human-Computer Interaction with Mobile Devices and Services ( 2016 )

Research Project Grants

ANR Agence Nationale de la Recherche ( 2015 )

Book Chapters

Designing with the Mind in Mind (2nd Edition). Jeff Johnson ( 2014 )

French Conferences in HCI

ACM IHM Conférence Francophone sur l'Interaction Homme-Machine ( 2009, 2011, 2013 – 2015 )

Non-HCI Journals

IEEE TNSRE IEEE Transactions on Neural Systems and Rehabilitation Engineering ( 2013, 2014 )
Ergonomics The Official Journal of the Institute of Ergonomics and Human Factors ( 2015 )

Non-HCI Conferences

IEEE PacificVis IEEE Pacific Visualization Symposium ( 2012, 2013 )

2016: [C10*] UIST 2016 (Tokyo, Japan).
2015: [C8*] UIST 2015 (Charlotte, NC, USA).
2015: [C7*] CHI 2015 (Seoul, South Korea).
2014: [C5*] CHI 2014 (Toronto, Canada).
2013: NUS HCI Lab (National University of Singapore).
[C3*] CHI 2013 (Paris, France).
2012: (Orsay, France).
2011-2012: (Orsay, France).
2011: [C1*] CHI 2011 (Vancouver, Canada).
2009: [F1*] IHM'09 (Grenoble, France).


2011 - 2012
Univ. Paris-Sud XI L1
Polytech Paris-Sud L3
Polytech Paris-Sud L3
Master Informatique - Univ. Paris-Sud XI M1

2010 - 2011
L2 Info - Univ. Paris-Sud XI L2
Polytech Paris-Sud L3
Polytech Paris-Sud M2

2009 - 2010
IFIPS L3
TER (MIAGE & L3 Info) - Univ. Paris-Sud XI L3

2008 - 2009
Processing & Arduino Mastère Nouveaux Médias - ENSCI M2
IFIPS L3
TER (MIAGE & L3 Info) - Univ. Paris-Sud XI L3


2015 - 2016 Postdoctoral Fellow, User Interfaces Lab, Aalto University
2014 - 2015 Postdoctoral Fellow, HCI Lab, University of Waterloo
2013 Postdoctoral Fellow (History Management Systems), HCI Lab, University of Canterbury
2008 - 2012 Ph.D., Designing and Combining Interaction Techniques in Large Display Environments, Université Paris-Sud XI
2007 - 2008 Université Paris-Sud XI
2003 - 2008 IFIPS