I have been a permanent researcher in Human-Computer Interaction (HCI) in the Loki lab at Inria Lille – Nord Europe (France) since 2016.

My ongoing research focuses on the temporality of human-computer interactions, from users' physiological and cognitive capabilities to the way interactive systems are designed and built, and on how to make them better.

I am the principal investigator of the Causality project, funded by the French National Research Agency (ANR), in which I apply these principles to cursor control and command histories. I currently co-advise the PhD theses of Philippe Schmid and Alice Loizeau on these topics with Stéphane Huot.

My background is in software engineering, and my research toolbox borrows from experimental psychology, interactive systems engineering, and interaction design. During my PhD and subsequent postdocs I worked extensively on interacting at a distance with large displays, focusing on designing and evaluating interaction techniques for cursor control (e.g. [C3, J2, C6, C12]), command selection [C4, C8], and virtual navigation [C1], with an interest in exploring and leveraging new sensing technologies like motion tracking (Vicon [J2], Kinect [C4], Leap Motion [C8], ...) and electromyographic sensors [C6].
These projects progressively guided me toward more fundamental aspects of interactive systems design and implementation, exploring lingering questions that no longer concern a specific platform but every aspect of our experience of interactive systems.

In parallel, I contributed to the design of the first standard for the French keyboard layout (see [J3, R4, R5] and norme-azerty.fr for more information on the process), and I co-authored two high-school textbooks for a new nationwide introductory programming course [B1, B2].

Research


Publications

[J4] GUI Behaviors to Minimize Pointing-based Interaction Interferences.
A. Loizeau, S. Malacria, & M. Nancel (2024) In ACM ToCHI 24 [pdf] - BibTEX

Keywords: pointing, temporality, interferences, fine-grained input, methods, blind spots

Pointing-based interaction interferences are situations wherein GUI elements appear, disappear, or change shortly before being selected, and too late for the user to inhibit their movement. Their cause lies in the design of most GUIs, for which any user event on an interactive element unquestionably reflects the user's intention, even one millisecond after that element has changed. Previous work indicates that interferences can cause frustration and sometimes severe consequences. This paper investigates new default behaviors for GUI elements that aim to prevent interferences from occurring or to mitigate their consequences. We present a design space of the advantages and technical requirements of these behaviors, and demonstrate in a controlled study how simple rules can reduce the occurrence of so-called 'Pop-up-style' interferences, as well as user frustration. We then discuss their application to various forms of interaction interferences. We conclude by addressing the feasibility and trade-offs of implementing these behaviors in existing systems.

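The kind of default behavior studied in this paper can be illustrated with a short sketch. Everything below is a hypothetical simplification, not the behaviors evaluated in the paper: the class and constant names are invented, and the 200 ms window is arbitrary. The idea is that an element swallows activations arriving too soon after it last changed, since the user likely could not have reacted to the change.

```python
import time

# Illustrative value only, not a threshold from the paper.
SAFETY_WINDOW_S = 0.2

class GuardedElement:
    """Hypothetical GUI element that ignores clicks landing right after a change."""

    def __init__(self, action):
        self.action = action
        self.last_change = float("-inf")

    def notify_changed(self, now=None):
        # Call whenever the element appears, moves, or changes meaning.
        self.last_change = time.monotonic() if now is None else now

    def click(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_change < SAFETY_WINDOW_S:
            return False  # likely an interference: swallow the click
        self.action()
        return True
```

A real implementation would also need per-element tuning and a way to surface the swallowed click to the user, which is part of the design space the paper maps out.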
[B2] Numérique et Sciences Informatiques Tle Spécialité.
M. Beaudouin-Lafon, B. Groz, E. Waller, C. Pelsser, C. Chevalier, P. Marquet, X. Redon, M. Nancel, G. Grimaud, & T. Vantroys (2022) In NSI Tle [doi]
[C20] Relevance and Applicability of Hardware-independent Pointing Transfer Functions.
R. Hanada, D. Masson, G. Casiez, M. Nancel, & S. Malacria (2021) In ACM UIST 21 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools, methods

Pointing transfer functions remain predominantly expressed in pixels per input counts, which can generate different visual pointer behaviors with different input and output devices; we show in a first controlled experiment that even small hardware differences impact pointing performance with functions defined in this manner. We also demonstrate the applicability of "hardware-independent" transfer functions defined in physical units. We explore two methods to maintain hardware-independent pointer performance in operating systems that require hardware-dependent definitions: scaling them to the resolutions of the input and output devices, or selecting the OS acceleration setting that produces the closest visual behavior. In a second controlled experiment, we adapted a baseline function to different screen and mouse resolutions using both methods, and the resulting functions provided equivalent performance. Lastly, we provide a tool to calculate equivalent transfer functions between hardware setups, allowing users to match pointer behavior with different devices, and researchers to tune and replicate experiment conditions. Our work emphasizes, and hopefully facilitates, the idea that operating systems should have the capability to formulate pointing transfer functions in physical units, and to adjust them automatically to hardware setups.

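As a rough illustration of the first method above, a transfer function expressed in physical units can be rescaled for a given mouse and display using their resolutions. The gain function and the numbers below are invented for illustration; only the unit conversions reflect the idea of hardware independence.

```python
MM_PER_INCH = 25.4

def physical_gain(speed_mm_s):
    # Hypothetical hardware-independent function: gain (display mm per
    # mouse mm) as a function of physical mouse speed in mm/s.
    return min(1.0 + speed_mm_s / 100.0, 4.0)

def counts_to_pixels(counts, dt_s, mouse_cpi, display_ppi):
    # Convert raw counts to physical mouse displacement (mm).
    mouse_mm = counts / mouse_cpi * MM_PER_INCH
    speed = abs(mouse_mm) / dt_s
    # Apply the gain in physical units, then convert to display pixels.
    display_mm = physical_gain(speed) * mouse_mm
    return display_mm / MM_PER_INCH * display_ppi
```

The same `physical_gain` then produces the same on-screen motion in millimeters regardless of mouse CPI or display PPI, which is the property the paper argues operating systems should support.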
[J3] AZERTY amélioré: Computational Design on a National Scale.
A. Feit, M. Nancel, M. John, A. Karrenbauer, D. Weir, & A. Oulasvirta (2021) In CACM 21 [doi] - [pdf] - BibTEX

Keywords: typing, models, tools, methods

France is the first country in the world to adopt a keyboard standard informed by computational methods, improving the performance, ergonomics, and intuitiveness of the keyboard while enabling input of many more characters. We describe a human-centric approach developed jointly with stakeholders to utilize computational methods in the decision process not only to solve a well-defined problem but also to understand the design requirements, to inform subjective views, or to communicate the outcomes. To be more broadly useful, research must develop computational methods that can be used in a participatory and inclusive fashion respecting the different needs and roles of stakeholders.

[B1] Numérique et Sciences Informatiques 1re Spécialité.
B. Groz, E. Waller, M. Nancel, M. Beaudouin-Lafon, & O. Marce (2021) In NSI 1ère [doi]
[C19] Interaction Interferences: Implications of Last-Instant System State Changes.
P. Schmid, S. Malacria, A. Cockburn, & M. Nancel (2020) In ACM UIST 20 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, interferences, models, fine-grained input, methods, blind spots

We study interaction interferences, situations where an unexpected change occurs in an interface immediately before the user performs an action, causing the corresponding input to be misinterpreted by the system. For example, a user tries to select an item in a list, but the list is automatically updated immediately before the click, causing the wrong item to be selected. First, we formally define interaction interferences and discuss their causes from behavioral and system-design perspectives. Then, we report the results of a survey examining users’ perceptions of the frequency, frustration, and severity of interaction interferences. We also report a controlled experiment exploring the minimum time interval, before clicking, below which participants could not refrain from completing their action. Finally, we discuss our findings and their implications for system design, paving the way for future work.

[C18] Modeling and Reducing Spatial Jitter caused by Asynchronous Input and Output Rates.
A. Antoine, M. Nancel, E. Ge, J. Zheng, N. Zolghadr, & G. Casiez (2020) In ACM UIST 20 [doi] - [pdf] - BibTEX

Keywords: temporality, models, fine-grained input, tools, methods, blind spots

Jitter in interactive systems occurs when visual feedback is perceived as unstable or trembling even though the input signal is smooth or stationary. It can have multiple causes such as sensing noise, or feedback calculations introducing or exacerbating sensing imprecisions. Jitter can however occur even when each individual component of the pipeline works perfectly, as a result of the differences between the input frequency and the display refresh rate. This asynchronicity can introduce rapidly-shifting latencies between the rendered feedbacks and their display on screen, which can result in trembling cursors or viewports. This paper contributes a better understanding of this particular type of jitter. We first detail the problem from a mathematical standpoint, from which we develop a predictive model of jitter amplitude as a function of input and output frequencies, and a new metric to measure this spatial jitter. Using touch input data gathered in a study, we developed a simulator to validate this model and to assess the effects of different techniques and settings with any output frequency. The most promising approach, when the time of the next display refresh is known, is to estimate (via interpolation or extrapolation) the user’s position at a fixed time interval before that refresh. When input events occur at 125 Hz, as is common in touch screens, we show that an interval of 4 to 6 ms works well for a wide range of display refresh rates. This method effectively cancels most of the jitter introduced by input/output asynchronicity, while introducing minimal imprecision or latency.

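The most promising approach described above, estimating the input position at a fixed interval before a known refresh time, can be sketched as follows. This is a minimal illustration with invented names, using plain linear interpolation/extrapolation between the two latest input events:

```python
def estimate_at(events, t):
    # Linearly interpolate/extrapolate a position at time t from the two
    # most recent (timestamp, position) input events.
    (t0, p0), (t1, p1) = events[-2], events[-1]
    if t1 == t0:
        return p1
    return p1 + (p1 - p0) * (t - t1) / (t1 - t0)

def render_position(events, next_refresh_t, interval=0.005):
    # Estimate the position at a fixed interval before the next display
    # refresh (the paper suggests 4-6 ms works well for 125 Hz input).
    return estimate_at(events, next_refresh_t - interval)
```

Sampling at a fixed offset before each refresh removes the rapidly-shifting latency between input events and frame times that causes this type of jitter.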
[C17] Investigating the Necessity of Delay in Marking Menu Invocation.
J. Henderson, S. Malacria, M. Nancel, & E. Lank (2020) In ACM CHI 20 [doi] - [pdf] - BibTEX

Keywords: interaction techniques, temporality, menus, blind spots

Delayed display of menu items is a core design component of marking menus, arguably to prevent visual distraction and foster the use of mark mode. We investigate these assumptions, by contrasting the original marking menu design with immediately-displayed marking menus. In three controlled experiments, we fail to reveal obvious and systematic performance or usability advantages to using delay and mark mode. Only in very constrained settings—after significant training and only two items to learn—did traditional marking menus show a time improvement of about 260 ms. Otherwise, we found an overall decrease in performance with delay, whether participants exhibited practiced or unpracticed behaviour. Our final study failed to demonstrate that an immediately-displayed menu interface is more visually disrupting than a delayed menu. These findings inform the costs and benefits of incorporating delay in marking menus, and motivate guidelines for situations in which its use is desirable.

[C16] AutoGain: Gain Function Adaptation with Submovement Efficiency Optimization.
B. Lee, M. Nancel, S. Kim, & A. Oulasvirta (2020) In ACM CHI 20 [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools

A well-designed control-to-display gain function can improve pointing performance with indirect pointing devices like trackpads. However, the design of gain functions is challenging and mostly based on trial and error. AutoGain is an unobtrusive method to individualize a gain function for indirect pointing devices in contexts where cursor trajectories can be tracked. It gradually improves pointing efficiency by using a novel submovement-level tracking+optimization technique that minimizes aiming error (undershooting/overshooting) for each submovement. We first show that AutoGain can produce, from scratch, gain functions with performance comparable to commercial designs, in less than a half-hour of active use. Second, we demonstrate AutoGain’s applicability to emerging input devices (here, a Leap Motion controller) with no reference gain functions. Third, a one-month longitudinal study of normal computer use with AutoGain showed performance improvements from participants’ default functions.

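The submovement-level adaptation loop can be caricatured in a few lines. This is an invented simplification, not the paper's optimizer: after each submovement, the gain associated with that submovement's speed range is nudged down on overshoot and up on undershoot.

```python
def update_gain(gains, speed_bin, signed_error, lr=0.05):
    # gains: dict mapping a speed bin to its current gain value.
    # signed_error > 0 means the submovement overshot the target
    # (lower the gain); signed_error < 0 means it undershot (raise it).
    gains[speed_bin] = max(0.1, gains[speed_bin] - lr * signed_error)
    return gains
```

Repeated over many submovements, updates of this kind gradually shape a full gain function from the user's own aiming errors, which is the intuition behind AutoGain's tracking+optimization scheme.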
[C15] A Comparative Study of Pointing Techniques for Eyewear Using a Simulated Pedestrian Environment.
Q. Roy, C. Zakaria, S. Perrault, M. Nancel, W. Kim, A. Misra, & A. Cockburn (2019) In Interact 19 [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, temporality, methods

Eyewear displays allow users to interact with virtual content displayed over real-world vision, in active situations like standing and walking. Pointing techniques for eyewear displays have been proposed, but their social acceptability, efficiency, and situation awareness remain to be assessed. Using a novel street-walking simulator, we conducted an empirical study of target acquisition while standing and walking under different levels of street crowdedness. We evaluated three phone-based eyewear pointing techniques: indirect touch on a touchscreen, and two in-air techniques using relative device rotations around a forward and a downward axis. Direct touch on a phone, without eyewear, was used as a control condition. Results showed that indirect touch was the most efficient and socially acceptable technique, and that in-air pointing was inefficient when walking. Interestingly, the eyewear displays did not improve situation awareness compared to the control condition. We discuss implications for eyewear interaction design.

[S1] Interfaces utilisateurs – Dispositions de clavier bureautique français (NF Z 71-300).
In Norme AFNOR 19 [doi] - BibTEX

Keywords: typing

This document defines the layout of the 105-key computer keyboard in its office version, and of the 72-key compact version. It is particularly well suited to typing French in a multilingual context where other Latin-script languages are frequently used. [...] Designed for French users, it may nevertheless be of interest to other French-speaking countries. [...] The standardization work behind this document follows the 2015 edition of the report to Parliament on the use of the French language (Délégation générale à la langue française et aux langues de France, Ministry of Culture and Communication), and a subsequent publication entitled « Vers une norme française pour les claviers informatiques », both of which documented the difficulty of typing common French characters such as accented capitals or the ligature « œ ». [...] The main objectives that guided the drafting of this document are:
– to homogenize the stock of computer keyboards in France, and in particular to reduce disparities in character layout across hardware manufacturers and operating systems;
– to improve the ergonomics of the keyboard for typing French while remaining in continuity with existing layouts (the so-called "AZERTY" layouts, never standardized, established by usage), so as not to hinder users' adoption of the new layout;
– to support typing all the characters of the regional languages of France, whose list is available on the website of the Délégation générale à la langue française et aux langues de France;
– to support typing all the characters of the Latin-script languages of continental Europe, prioritizing the usual characters of the major European languages of communication such as German, Spanish, or Portuguese;
– to make new sets of characters and symbols more accessible for writing specialized or technical documents (e.g. Greek letters or mathematical symbols).

[C14] Next-Point Prediction for Direct Touch Using Finite-Time Derivative Estimation.
M. Nancel, S. Aranovskiy, R. Ushirobira, D. Efimov, S. Poulmane, N. Roussel, & G. Casiez (2018) In ACM UIST 18 [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, models, fine-grained input, tools

End-to-end latency in interactive systems is detrimental to performance and usability, and comes from a combination of hardware and software delays. While these delays are steadily addressed by hardware and software improvements, it is at a decelerating pace. In parallel, short-term input prediction has shown promising results in recent years, in both research and industry, as an addition to these efforts. We describe a new prediction algorithm for direct touch devices based on (i) a state-of-the-art finite-time derivative estimator, (ii) a smoothing mechanism based on input speed, and (iii) a post-filtering of the prediction in two steps. Using both a pre-existing dataset of touch input as benchmark, and subjective data from a new user study, we show that this new predictor outperforms the predictors currently available in the literature and industry, based on metrics that model user-defined negative side-effects caused by input prediction. In particular, we show that our predictor can predict up to 2 or 3 times further than existing techniques with minimal negative side-effects.

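For context, the simplest possible next-point predictor, constant-velocity extrapolation, looks like the sketch below (invented names; the paper's contribution is a far more robust finite-time derivative estimator combined with speed-based smoothing and post-filtering):

```python
def predict(points, horizon):
    # Naive constant-velocity baseline, NOT the paper's estimator:
    # extrapolate the latest (timestamp, x, y) velocity over the horizon.
    (t0, x0, y0), (t1, x1, y1) = points[-2], points[-1]
    dt = t1 - t0
    if dt <= 0:
        return (x1, y1)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)
```

Baselines like this amplify sensing noise at larger horizons, which is exactly the class of side-effects the paper's smoothing and post-filtering steps are designed to suppress.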
[C13] Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces.
J. Avery, S. Malacria, M. Nancel, G. Casiez, & E. Lank (2018) In ACM CHI 18 [Video] - [doi] - [pdf] - BibTEX

Keywords: navigation, interaction techniques

Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e. pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback, and directions for future research.

[C12] Pointing at a Distance with Everyday Smart Devices.
S. Siddhpuria, S. Malacria, M. Nancel, & E. Lank (2018) In ACM CHI 18 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays

Large displays are becoming commonplace at work, at home, or in public areas. However, interaction at a distance -- anything greater than arms-length -- remains cumbersome, restricts simultaneous use, and requires specific hardware augmentations of the display: touch layers, cameras, or dedicated input devices. Yet a rapidly increasing number of people carry smartphones and smartwatches, devices with rich input capabilities that can easily be used as input devices to control interactive systems. We contribute (1) the results of a survey on possession and use of smart devices, and (2) the results of a controlled experiment comparing seven distal pointing techniques on phone or watch, one- and two-handed, and using different input channels and mappings. Our results favor using a smartphone as a trackpad, but also explore performance tradeoffs that can inform the choice and design of distal pointing techniques for different contexts of use.

[R5] Historique et méthodologie de la nouvelle disposition de clavier AZERTY.
M. Nancel (2018) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: typing, methods

Working document used in drafting Annex H, "Historique et méthodologie" (history and methodology), of the AFNOR standard NF Z 71-300, "Dispositions de clavier bureautique français" (French office keyboard layouts).

[R4] Élaboration de la disposition AZERTY modernisée.
A. Feit, M. Nancel, D. Weir, G. Bailly, M. John, A. Karrenbauer, & A. Oulasvirta (2018) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: typing, methods

Working document used in drafting Annex F, "Élaboration de la disposition AZERTY modernisée" (development of the modernized AZERTY layout), of the AFNOR standard NF Z 71-300, "Dispositions de clavier bureautique français" (French office keyboard layouts).

[C11] Modeling User Performance on Curved Constrained Paths.
M. Nancel, & E. Lank (2017) In ACM CHI 17 [doi] - [pdf] - BibTEX

Keywords: pointing, models, methods

In 1997, Accot and Zhai presented seminal work analyzing the temporal cost and instantaneous speed profiles associated with movement along constrained paths. Their work posited and validated the steering law, which described the relationship between path constraint, path length and the temporal cost of path traversal using a computer input device (e.g. a mouse). In this paper, we argue that the steering law fails to correctly model constrained paths of varying, arbitrary curvature, propose a new form of the law that accommodates these curved paths, and empirically validate our model.

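For reference, the steering law as posited by Accot and Zhai, which the paper argues fails for paths of varying, arbitrary curvature:

```latex
% Steering law: time T to traverse a constrained path C whose allowed
% width at curvilinear abscissa s is W(s), with empirical constants a, b:
T = a + b \int_{C} \frac{\mathrm{d}s}{W(s)}
% For a straight tunnel of length A and constant width W this reduces to:
T = a + b \, \frac{A}{W}
```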
[C10] Next-Point Prediction Metrics for Perceived Spatial Errors.
M. Nancel, D. Vogel, B. De Araujo, R. Jota, & G. Casiez (2016) In ACM UIST 16 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, models, fine-grained input, methods, blind spots

Touch screens have a delay between user input and corresponding visual interface feedback, called input “latency” (or “lag”). Visual latency is more noticeable during continuous input actions like dragging, so methods to display feedback based on the most likely path for the next few input points have been described in research papers and patents. Designing these “next-point prediction” methods is challenging, and there have been no standard metrics to compare different approaches. We introduce metrics to quantify the probability of 7 spatial error “side-effects” caused by next-point prediction methods. Types of side-effects are derived using a thematic analysis of comments gathered in a study with 12 participants covering drawing, dragging, and panning tasks using 5 state-of-the-art next-point predictors. Using experiment logs of actual and predicted input points, we develop quantitative metrics that correlate positively with the frequency of perceived side-effects. These metrics enable practitioners to compare next-point predictors using only input logs.

[C9] The Performance and Preference of Different Fingers and Chords for Pointing, Dragging, and Object Transformation.
A. Goguey, M. Nancel, D. Vogel, & G. Casiez (2016) In ACM CHI 16 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, navigation

The development of robust methods to identify which finger is causing each touch point, called “finger identification,” will open up a new input space where interaction designers can associate system actions to different fingers. However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord.” We present empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we develop design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification.

[A1] AutoGain: Adapting Gain Functions by Optimizing Submovement Efficiency.
B. Lee, M. Nancel, & A. Oulasvirta (2016) Unpublished arXiv PrePrint [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools

A well-designed control-to-display (CD) gain function can improve pointing performance with an indirect pointing device such as a trackpad. However, the design of gain functions has been challenging and mostly based on trial and error. AutoGain is an unobtrusive method to obtain a gain function for an indirect pointing device in contexts where cursor trajectories can be tracked. It gradually improves pointing efficiency by using a novel submovement-level tracking+optimization technique. In a study, we show that AutoGain can produce gain functions with performance comparable to commercial designs in less than a half hour of active use. This is attributable to reductions in aiming error (undershooting/overshooting) for each submovement. Our second study shows that AutoGain can be used to obtain gain functions for emerging input devices (here, a Leap Motion controller) for which no good gain function may exist yet. Finally, we discuss deployment in a real interactive system.

[W1] Hands Up: Who Knows Something About Performance and Ergonomics of Mid-Air Hand Gestures.
A. Feit, & M. Nancel (2016) Unpublished ACM CHI Workshop 16 [pdf] - BibTEX

Keywords: interaction techniques, methods

Advances in markerless and un-instrumented hand tracking allow us to make full use of the hands' dexterity for interaction with computers. However, the biomechanics of hand movements remain to be thoroughly studied in HCI. The large number of degrees of freedom of the hand (25) presents us with a huge design space of possible gestures, which is hard to fully explore with traditional methods like elicitation studies or design heuristics. We propose an approach to develop a model of fatigue and stress of manual mid-air input, inspired by prior work on the ergonomics of arm movements and on the performance of multi-finger gestures. Along with our vision of the incoming challenges in mid-air interaction, we describe a design framework for mid-air input that, given such models, can be used to automatically evaluate any given gesture set, or propose an optimal gesture vocabulary for a given set of tasks.

[C8] Gunslinger: Subtle Arms-down Mid-air Interaction.
M. Liu, M. Nancel, & D. Vogel (2015) In ACM UIST 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, navigation, interaction techniques, large displays, menus, methods

We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features novel ‘hand-cursor’ feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, fundamental pointing and clicking, and general usability.

[C7] Clutching Is Not (Necessarily) the Enemy.
M. Nancel, D. Vogel, & E. Lank (2015) In ACM CHI 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, methods, blind spots

Clutching is usually assumed to be triggered by a lack of physical space and detrimental to pointing performance. We conduct a controlled experiment using a laptop trackpad where the effect of clutching on pointing performance is dissociated from the effects of control-to-display transfer functions. Participants performed a series of target acquisition tasks using typical cursor acceleration functions with and without clutching. All pointing tasks were feasible without clutching, but clutch-less movements were harder to perform, caused more errors, required more preparation time, and were not faster than clutch-enabled movements.

[C6] Myopoint: Pointing and Clicking Using Forearm Mounted EMG and Inertial Motion Sensors.
F. Haque, M. Nancel, & D. Vogel (2015) In ACM CHI 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays

We describe a mid-air, barehand pointing and clicking interaction technique using electromyographic (EMG) and inertial measurement unit (IMU) input from a consumer armband device. The technique uses enhanced pointer feedback to convey state, a custom pointer acceleration function tuned for angular inertial motion, and correction and filtering techniques to minimize side-effects when combining EMG and IMU input. By replicating a previous large display study using a motion capture pointing technique, we show the EMG and IMU technique is only 430 to 790 ms slower and has acceptable error rates for targets greater than 48 mm. Our work demonstrates that consumer-level EMG and IMU sensing is practical for distant pointing and clicking on large displays.

[J2] Mid-air Pointing on Ultra-Walls.
M. Nancel, E. Pietriga, O. Chapuis, & M. Beaudouin-Lafon (2015) In ACM ToCHI 15 [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Ultra-high-resolution wall-sized displays (“ultra-walls”) are effective for presenting large datasets, but their size and resolution make traditional pointing techniques inadequate for precision pointing. We study mid-air pointing techniques that can be combined with other, domain-specific interactions. We first explore the limits of existing single-mode remote pointing techniques and demonstrate theoretically that they do not support high-precision pointing on ultra-walls. We then explore solutions to improve mid-air pointing efficiency: a tunable acceleration function and a framework for dual-precision techniques, both with precise tuning guidelines. We designed novel pointing techniques following these guidelines, several of which outperform existing techniques in controlled experiments that involve pointing difficulties never tested prior to this work. We discuss the strengths and weaknesses of our techniques to help interaction designers choose the best technique according to the task and equipment at hand. Finally, we discuss the cognitive mechanisms that affect pointing performance with these techniques.

[C5] Causality – A Conceptual Model of Interaction History.
M. Nancel, & A. Cockburn (2014) In ACM CHI 14 [Video] - [doi] - [pdf] - BibTEX

Keywords: temporality, histories of commands, blind spots

Simple history systems such as Undo and Redo permit retrieval of earlier or later interaction states, but advanced systems allow powerful capabilities to reuse or reapply combinations of commands, states, or data across interaction contexts. Whether simple or powerful, designing interaction history mechanisms is challenging. We begin by reviewing existing history systems and models, observing a lack of tools to assist designers and researchers in specifying, contemplating, combining, and communicating the behaviour of history systems. To resolve this problem, we present CAUSALITY, a conceptual model of interaction history that clarifies the possibilities for temporal interactions. The model includes components for the work artifact (such as the text and formatting of a Word document), the system context (such as the settings and parameters of the user interface), the linear timeline (the commands executed in real time), and the branching chronology (a structure of executed commands and their impact on the artifact and/or context, which may be navigable by the user). We then describe and exemplify how this model can be used to encapsulate existing user interfaces and reveal limitations in their behaviour, and we also show in a conceptual evaluation how the model stimulates the design of new and innovative opportunities for interacting in time.

[C4] Body-centric Design Space for Multi-surface Interaction.
J. Wagner, M. Nancel, S. Gustafson, S. Huot, & W. Mackay (2013) In ACM CHI 13 [Video] - [doi] - [pdf] - BibTEX

Keywords: interaction techniques, large displays, menus

We introduce BodyScape, a body-centric design space for both analyzing existing multi-surface interaction techniques and suggesting new ones. We examine the relationship between users and their environment, specifically how different body parts enhance or restrict movement in particular interaction techniques. We illustrate the use of BodyScape by comparing two free-hand techniques, on-body touch and mid-air pointing, separately and in combination. We found that touching the torso is faster than touching the lower legs, since the latter affects the user's balance; individual techniques outperform compound ones; and touching the dominant arm is slower than other body parts because the user must compensate for the applied force. The latter is surprising, given that most recent on-body touch techniques focus on touching the dominant arm.

[C3] High-Precision Pointing on Large Wall Displays using Small Handheld Devices.
M. Nancel, O. Chapuis, E. Pietriga, X. Yang, P. Irani, & M. Beaudouin-Lafon (2013) In ACM CHI 13 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Rich interaction with high-resolution wall displays is not limited to remotely pointing at targets. Other relevant forms of interaction include virtual navigation, text entry, and direct manipulation of control widgets. However, most techniques for remotely acquiring targets with high precision have studied remote pointing in isolation, focusing on pointing efficiency, and ignoring the need to support these other forms of interaction. We investigate high-precision pointing techniques capable of acquiring targets as small as 4 millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical tablet device's screen space available for task-specific widgets. We compare these techniques to state-of-the-art distant pointing techniques and show that two of our techniques, a purely relative one and one that uses head orientation, perform as well or better than the best pointing-only input techniques while using a fraction of the interaction resources.

[D1] Designing and Combining Interaction Techniques in Large Display Environments.
M. Nancel (2012) Unpublished PhD [doi] - [pdf] - BibTEX

Keywords: pointing, menus, interaction techniques, large displays, models, navigation

Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research about mid-air interactions in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating and command selection. However, these techniques are often designed and evaluated separately on specific platforms and for specific use-cases or operationalizations, which makes it hard to choose, compare and combine them. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output and how it affects usability and collaboration and (2) input channels and how to combine them in rich sets of mid-air interaction techniques. These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods, one for calibrating high-precision techniques with two levels of precision and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel that affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.

[J1] Multisurface Interaction in the WILD Room.
M. Beaudouin-Lafon, O. Chapuis, J. Eagan, T. Gjerlufsen, S. Huot, C. Klokmose, W. Mackay, M. Nancel, E. Pietriga, C. Pillias, R. Primet, & J. Wagner (2012) In IEEE Computer [doi] - [pdf] - BibTEX

Keywords: pointing, navigation, interaction techniques, large displays, tools

The WILD room (wall-sized interaction with large datasets) serves as a testbed for exploring the next generation of interactive systems by distributing interaction across diverse computing devices, enabling multiple users to easily and seamlessly create, share, and manipulate digital content.

[R3] Precision Pointing for Ultra-High-Resolution Wall Displays.
M. Nancel, E. Pietriga, & M. Beaudouin-Lafon (2011) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Ultra-high-resolution wall displays have proven useful for displaying large quantities of information, but lack appropriate interaction techniques to manipulate the data efficiently. We explore the limits of existing modeless remote pointing techniques, originally designed for lower resolution displays, and show that they do not support high-precision pointing on such walls. We then consider techniques that combine a coarse positioning mode to approach the target's area with a precise pointing mode for acquiring the target. We compare both new and existing techniques through a controlled experiment, and find that techniques combining ray casting with relative positioning or angular movements enable the selection of targets as small as 4 millimeters while standing 2 meters away from the display.

[C2] Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks.
E. Pietriga, S. Huot, M. Nancel, & R. Primet (2011) In ACM EICS 11 [doi] - [pdf] - BibTEX

Keywords: large displays, tools

Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays. We present jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms.

[C1] Mid-air Pan-and-Zoom on Wall-sized Displays.
M. Nancel, J. Wagner, E. Pietriga, O. Chapuis, & W. Mackay (2011) In ACM CHI 11 [Video] - [doi] - [pdf] - BibTEX

Keywords: navigation, interaction techniques, large displays, methods

Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to perform multiscale navigation in these environments. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. Significant effects were obtained for all three factors: bimanual interaction, linear gestures and a high level of guidance resulted in significantly improved performance. Moreover, the interaction effects among some of the dimensions suggest possible combinations for more complex, real-world tasks.

[R2] Push Menu: Extending Marking Menus for Pressure-Enabled Input Devices.
S. Huot, M. Nancel, & M. Beaudouin-Lafon (2010) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: menus, interaction techniques

Several approaches have been proposed to increase the breadth of standard Marking Menus over the 8 item limit, most of which have focused on the use of the standard 2D input space (x-y). We present Push Menu, an extension of Marking Menu that takes advantage of pressure input as a third input dimension to increase menu breadth. We present the results of a preliminary experiment that validates our design and shows that Push Menu users who are neither familiar with pen-based interfaces nor continuous pressure control can handle up to 20 items reliably. We also discuss the implications of these results for using Push Menu in user interfaces and for improving its design.

[P1] 131 millions de pixels qui font le mur.
M. Beaudouin-Lafon, E. Pietriga, W. Mackay, S. Huot, M. Nancel, C. Pillias, & R. Primet (2010) In Plein Sud 10 [doi] - [pdf]

Keywords: interaction techniques, large displays, tools

Imagine a wall of screens displaying high-definition images. Imagine that, through simple gestures, you could interact with it… We are not in the film "Minority Report", but facing the realization of a unique human-computer interaction (HCI) project, the WILD platform, which makes it possible to interact with masses of complex data. (Abstract translated from French.)

[C0] Un espace de conception fondé sur une analyse morphologique des techniques de menus.
M. Nancel, S. Huot, & M. Beaudouin-Lafon (2009) In ACM IHM 9 [doi] - [pdf] - BibTEX

Keywords: interaction techniques, menus

This paper presents a design space based on a morphological analysis of the mechanisms for structuring menus and selecting items. Its goal is to facilitate the exploration of new types of menus, in particular to increase their capacity without degrading their performance. The paper demonstrates the generative power of this design space through four new menu designs, based on combinations of dimensions that had been little or not at all explored. For two of them, controlled experiments show that they offer performance comparable to menus from the literature. (Abstract translated from French.)

[R1] Extending Marking Menus With Integral Dimensions: Application to the Dartboard Menu.
M. Nancel, & M. Beaudouin-Lafon (2008) Unpublished Inria TechReport [pdf] - BibTEX

Keywords: menus, interaction techniques

Marking menus have many benefits, including fast selection time, low error rate and fast transition to expert mode, but these are mitigated by a practical limit of 8 items per menu. Adding hierarchical levels increases capacity, but at the expense of longer selection times and higher error rates. In this paper we introduce Extended Marking Menus, a variant of marking menus that increases their width without sacrificing performance. Extended marking menus organize the items in several rings or layers. Selection is achieved by simultaneous control of direction, as in traditional marking menus, and another dimension such as distance, speed or pressure. We examine the design space of these new menus and study the Distance Extended Marking Menu, or Dartboard Menu, in more detail. We report on two experiments, one to calibrate the sizes of the rings, the other showing that it performs faster than the Zone and Flower menus but is less accurate than the Zone menu.


Grants and awards

2018 – 2024: ANR JCJC Causality, "Integrating Temporality and Causality to the Design of Interactive Systems."
2020: Challenge HYVE with Géry Casiez, "Real-time Latency Measure and Compensation."
2019: Google Faculty Research Award with Géry Casiez, "Real-time Latency Measure and Compensation."
2015: NSERC Engage Grant with Daniel Vogel, "Touch Dragging Latency Compensation with High Frequency Input."

Invited talks and presentations

January 2024: Interview about our jitter model and solution [C18].
March 2020: Presented my ongoing research at University of Toronto, University of Waterloo, and Chatham Labs (Ontario).
April 2019: Round table at the French National Assembly, at the inauguration event of the new French keyboard standard. (Paris, France).
October 2018: Presented [C14*] at UIST 2018 (Berlin, Germany).
March 2017: Invited talk at the “30 Minutes de Sciences” seminar at Inria Lille – Nord Europe.
November 2016: Presented [C10*] at UIST 2016 (Tokyo, Japan).
November 2015: Presented [C8*] at UIST 2015 (Charlotte, NC, USA).
April 2015: Presented [C7*] at CHI 2015 (Seoul, South Korea).
April 2014: Presented [C5*] at CHI 2014 (Toronto, Canada).
April 2013: Invited talk at the NUS HCI Lab (National University of Singapore).
Presented [C3*] at CHI 2013 (Paris, France).
December 2012: Defended my Ph.D. thesis [D1*] (Orsay, France).
2011-2012: Several demos of the WILD platform (Orsay, France).
April 2011: Presented [C1*] at CHI 2011 (Vancouver, Canada).
October 2009: Presented [F1*] at IHM'09 (Grenoble, France).

Reviewing


Committees and juries

HCI Conferences

CHCCS GI'24 Program committee (Graphics Interface)
ACM UIST'23 Program committee (ACM Symposium on User Interface Software and Technology)
ACM CHI'23 Program committee – Interacting with Devices: Interaction Techniques & Modalities
ACM CHI'22 Program committee – Interacting with Devices: Interaction Techniques & Modalities
CHCCS GI'20 Program committee (Graphics Interface)
ACM CHI'17 Program committee – Interaction Techniques
ACM CHI'16 Program committee – Interaction Techniques
ACM ITS'14 Program committee (Interactive Tabletops and Surfaces)
ACM AUIC'14 Program committee (Australasian User Interface Conference)
ACM CHI'14 Jury for the Video Showcase

Committees and juries

Inria Lille – Nord Europe President of the Committee for Users of IT Resources at Inria Lille (2023 - )
Inria Lille – Nord Europe Technological Development Actions (ADT) committee (2018-2023)
ANR (panel member) Scientific Evaluation Panels member for the French funding agency for research (ANR) (2021, 2022)
Univ. Paris Saclay Examiner for the PhD thesis of Eugénie Brasier (2021)
CONEX-Plus Evaluation panel for postdoc grants (2019)
FWO Research Foundation Expert panel for PhD grants (2018)
ANR (expert) Expert reviewer for the French funding agency for research (ANR) (2015)

Research articles

[Timeline figure: reviewing activity per venue, 2010 – 2025. Legend: AC/member of the program committee (HCI); external reviewer (HCI); external reviewer (non-HCI).]

Conferences: ACM CHI, ACM UIST, ACM CSCW, Interact, IEEE ISMAR, HHAI, MobileHCI, ACM ISS, ACM ITS, ACM GI, ACM NordiCHI, ACM IHM, ACM SUI, ACM AUIC, ACM DIS, ACM ICMI, ACM SIGGRAPH, IEEE VIS, IEEE PacificVis.

Journals: ACM ToCHI, IJHCS, BIT, JMUI, IEEE TNSRE, Ergonomics.

2 Outstanding Reviews at ACM CHI 2024, Outstanding Review at ACM VIS 2023, Outstanding Review at ACM CHI 2022, Outstanding Review at ACM DIS 2022, 2 Outstanding Reviews at ACM CHI 2021, 2 Outstanding Reviews at ACM CHI 2020, 2 Outstanding Reviews at ACM CHI 2019, Exceptional Reviewer at ACM UIST 2014, Exceptional Reviewer at ACM UIST 2012.

Teaching

2020 – 2023
Controlled experiments and evaluation – Master RVA, Univ. Lille (M2)
2022
Numérique et Sciences Informatiques Tle Spécialité – high school textbook (Terminale)
2021
Numérique et Sciences Informatiques 1re Spécialité – high school textbook (1ère)
2018 – 2020
Controlled experiments and evaluation – Master IVI, Univ. Lille (M2)
2017
Information visualization – Univ. Lille (M1)
2011 – 2012
IT and Internet Certificate – Univ. Paris-Sud XI (L1)
Human-Computer Interaction – Polytech Paris-Sud (L3)
Databases – Polytech Paris-Sud (L3)
Human-Computer Interaction – Master Informatique, Univ. Paris-Sud XI (M1)
2010 – 2011
Software Development – L2 Info, Univ. Paris-Sud XI (L2)
Human-Computer Interaction – Polytech Paris-Sud (L3)
Master's degree internship coaching – Polytech Paris-Sud (M2)
2009 – 2010
Databases – IFIPS (L3)
Software engineering – TER (MIAGE & L3 Info), Univ. Paris-Sud XI (L3)
2008 – 2009
Processing & Arduino – Mastère Nouveaux Médias, ENSCI (M2)
Databases – IFIPS (L3)
Software engineering – TER (MIAGE & L3 Info), Univ. Paris-Sud XI (L3)

Education and career


2016 –
Permanent researcher – Équipe Loki at Inria Lille – Nord Europe (Lille, France)
2015 – 2016
Postdoctoral Fellow – User Interfaces Lab at Aalto University (Helsinki, Finland)
Main collaborators: Antti Oulasvirta, Anna Maria Feit
2014 – 2015
Postdoctoral Fellow – Human Computer Interaction Lab at University of Waterloo (Waterloo, Ontario, Canada)
Main collaborators: Daniel Vogel, Edward Lank
2013 – 2014
Postdoctoral Fellow – Human Computer Interaction and Multimedia Lab at University of Canterbury (Christchurch, New Zealand)
Main collaborator: Andy Cockburn
2008 – 2012
Ph.D. in Human-Computer Interaction – Équipe in|situ| at Université Paris-Sud XI (Orsay, France)
2007 – 2008
Master's Degree in Computer Science – Université Paris-Sud XI (Orsay, France)
2003 – 2008
Engineering Degree in Computer Science – IFIPS (Orsay, France)