I have been a Research Scientist (Chargé de Recherche) in Human-Computer Interaction (HCI) in the Loki project-team at the Inria Lille – Nord Europe research center since 2016.

My current research focuses on the temporality of human-computer interactions, from the user's physiological and cognitive capabilities to the way interactive systems are designed and built, and on how to improve them.

I am the principal investigator of the Causality project funded by the Agence Nationale de la Recherche (ANR), in which I apply these principles to pointing and to histories of commands. I supervise Philippe Schmid's and Alice Loizeau's PhD theses on these topics with Stéphane Huot.

I trained as a computer science engineer, and my "research toolbox" borrows from experimental psychology, interactive systems engineering, and interaction design. During my PhD and the postdocs that followed, I worked extensively on distant interaction with large displays, focusing on the design and evaluation of interaction techniques for cursor control (e.g. [C3, J2, C6, C12]), command selection [C4, C8], and virtual navigation [C1], with a specific interest in exploring and exploiting novel sensing technologies such as motion tracking (VICON [J2], Kinect [C4], LEAP [C8], ...) and electromyographic sensors [C6].
These projects progressively led me toward more fundamental aspects of the design and implementation of interactive systems, exploring and addressing persistent questions that no longer concern a single platform but every aspect of our experience with interactive systems.

I also recently contributed to the design of the first standard for the French keyboard layout (see [J3, R4, R5] and norme-azerty.fr for more information about this process), and co-authored two high-school textbooks for a new national introductory programming curriculum [B1, B2].

Research


Articles

[J4] GUI Behaviors to Minimize Pointing-based Interaction Interferences.
A. Loizeau, S. Malacria, & M. Nancel (2024) In ACM ToCHI 24 [pdf] - BibTEX

Keywords: pointing, temporality, interferences, fine-grained input, methods, blind spots

Pointing-based interaction interferences are situations wherein GUI elements appear, disappear, or change shortly before being selected, and too late for the user to inhibit their movement. Their cause lies in the design of most GUIs, for which any user event on an interactive element unquestionably reflects the user's intention, even one millisecond after that element has changed. Previous work indicates that interferences can cause frustration and sometimes severe consequences. This paper investigates new default behaviors for GUI elements that aim to prevent the occurrences of interferences or to mitigate their consequences. We present a design space of the advantages and technical requirements of these behaviors, and demonstrate in a controlled study how simple rules can reduce the occurrences of so-called 'Pop-up-style' interferences, and user frustration. We then discuss their application to various forms of interaction interferences. We conclude by addressing the feasibility and trade-offs of implementing these behaviors in existing systems.
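
As a concrete illustration, here is a minimal sketch of one rule in this spirit; it is not necessarily one of the behaviors evaluated in the paper, and the Element class, the routing function, and the 300 ms threshold are assumptions made for the example. Input events that land on an element that changed very recently are flagged so that the application can defer or discard them.

```python
import time

# Hypothetical grace period; the paper studies when users can no longer inhibit
# a movement, and this constant is only a placeholder for illustration.
GRACE_PERIOD_MS = 300

class Element:
    """Minimal stand-in for a GUI element that records its last visual change."""
    def __init__(self, name):
        self.name = name
        self.last_change = None  # timestamp of the last appear/move/relabel event

    def notify_changed(self):
        self.last_change = time.monotonic()

def route_click(element, now=None):
    """Return 'deliver' or 'defer' depending on how recently the target changed."""
    now = time.monotonic() if now is None else now
    recently_changed = (element.last_change is not None
                        and (now - element.last_change) * 1000 < GRACE_PERIOD_MS)
    # A click arriving right after the element changed is a likely interference,
    # so the application gets a chance to confirm instead of acting immediately.
    return "defer" if recently_changed else "deliver"

# Example: a pop-up button appears right before the user clicks.
popup = Element("popup-ok-button")
popup.notify_changed()
print(route_click(popup))                           # 'defer' (within the grace period)
print(route_click(popup, time.monotonic() + 1.0))   # 'deliver' one second later
```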

[B2] Numérique et Sciences Informatiques Tle Spécialité.
M. Beaudouin-Lafon, B. Groz, E. Waller, C. Pelsser, C. Chevalier, P. Marquet, X. Redon, M. Nancel, G. Grimaud, & T. Vantroys (2022) In NSI Tle [doi]
[C20] Relevance and Applicability of Hardware-independent Pointing Transfer Functions.
R. Hanada, D. Masson, G. Casiez, M. Nancel, & S. Malacria (2021) In ACM UIST 21 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools, methods

Pointing transfer functions remain predominantly expressed in pixels per input counts, which can generate different visual pointer behaviors with different input and output devices; we show in a first controlled experiment that even small hardware differences impact pointing performance with functions defined in this manner. We also demonstrate the applicability of "hardware-independent" transfer functions defined in physical units. We explore two methods to maintain hardware-independent pointer performance in operating systems that require hardware-dependent definitions: scaling them to the resolutions of the input and output devices, or selecting the OS acceleration setting that produces the closest visual behavior. In a second controlled experiment, we adapted a baseline function to different screen and mouse resolutions using both methods, and the resulting functions provided equivalent performance. Lastly, we provide a tool to calculate equivalent transfer functions between hardware setups, allowing users to match pointer behavior with different devices, and researchers to tune and replicate experiment conditions. Our work emphasizes, and hopefully facilitates, the idea that operating systems should have the capability to formulate pointing transfer functions in physical units, and to adjust them automatically to hardware setups.
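
As an illustration of the "scaling" method mentioned above, the sketch below rescales a gain function defined in pixels per count for one mouse/display pair so that the same hand movement produces the same physical pointer movement on another pair. The function names and example resolutions are assumptions made for this sketch, not the tool released with the paper.

```python
def make_equivalent_gain(gain_ref, cpi_ref, ppi_ref, cpi_new, ppi_new):
    """Build a pixel-per-count gain function for a new mouse/display pair that
    reproduces the physical (hand speed -> pointer speed) behavior of gain_ref.

    gain_ref: maps input speed (counts/s on the reference mouse) to a gain in
              pixels per count on the reference display.
    cpi_*:    mouse resolutions in counts per inch.
    ppi_*:    display resolutions in pixels per inch.
    """
    def gain_new(counts_per_s_new):
        hand_speed_ips = counts_per_s_new / cpi_new        # inches of hand motion per second
        counts_per_s_ref = hand_speed_ips * cpi_ref        # what the reference mouse would report
        physical_gain = gain_ref(counts_per_s_ref) * cpi_ref / ppi_ref  # pointer inches per hand inch
        return physical_gain * ppi_new / cpi_new           # back to pixels per count on the new setup
    return gain_new

def ref_gain(counts_per_s):
    return 2.0   # constant 2 px/count, defined for a 1000 CPI mouse and a 96 PPI display

# Transfer that behavior to a 1600 CPI mouse and a 160 PPI display.
new_gain = make_equivalent_gain(ref_gain, cpi_ref=1000, ppi_ref=96, cpi_new=1600, ppi_new=160)
print(new_gain(1600))   # ~2.08 px/count: same physical pointer speed for the same hand speed
```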

[J3] AZERTY amélioré: Computational Design on a National Scale.
A. Feit, M. Nancel, M. John, A. Karrenbauer, D. Weir, & A. Oulasvirta (2021) In CACM 21 [doi] - [pdf] - BibTEX

Keywords: typing, models, tools, methods

France is the first country in the world to adopt a keyboard standard informed by computational methods, improving the performance, ergonomics, and intuitiveness of the keyboard while enabling input of many more characters. We describe a human-centric approach developed jointly with stakeholders to utilize computational methods in the decision process not only to solve a well-defined problem but also to understand the design requirements, to inform subjective views, or to communicate the outcomes. To be more broadly useful, research must develop computational methods that can be used in a participatory and inclusive fashion respecting the different needs and roles of stakeholders.

[B1] Numérique et Sciences Informatiques 1re Spécialité.
B. Groz, E. Waller, M. Nancel, M. Beaudouin-Lafon, & O. Marce (2021) In NSI 1ère [doi]
[C19] Interaction Interferences: Implications of Last-Instant System State Changes.
P. Schmid, S. Malacria, A. Cockburn, & M. Nancel (2020) In ACM UIST 20 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, interferences, models, fine-grained input, methods, blind spots

We study interaction interferences, situations where an unexpected change occurs in an interface immediately before the user performs an action, causing the corresponding input to be misinterpreted by the system. For example, a user tries to select an item in a list, but the list is automatically updated immediately before the click, causing the wrong item to be selected. First, we formally define interaction interferences and discuss their causes from behavioral and system-design perspectives. Then, we report the results of a survey examining users’ perceptions of the frequency, frustration, and severity of interaction interferences. We also report a controlled experiment exploring the minimum time interval, before clicking, below which participants could not refrain from completing their action. Finally, we discuss our findings and their implications for system design, paving the way for future work.

[C18] Modeling and Reducing Spatial Jitter caused by Asynchronous Input and Output Rates.
A. Antoine, M. Nancel, E. Ge, J. Zheng, N. Zolghadr, & G. Casiez (2020) In ACM UIST 20 [doi] - [pdf] - BibTEX

Keywords: temporality, models, fine-grained input, tools, methods, blind spots

Jitter in interactive systems occurs when visual feedback is perceived as unstable or trembling even though the input signal is smooth or stationary. It can have multiple causes such as sensing noise, or feedback calculations introducing or exacerbating sensing imprecisions. Jitter can however occur even when each individual component of the pipeline works perfectly, as a result of the differences between the input frequency and the display refresh rate. This asynchronicity can introduce rapidly-shifting latencies between the rendered feedbacks and their display on screen, which can result in trembling cursors or viewports. This paper contributes a better understanding of this particular type of jitter. We first detail the problem from a mathematical standpoint, from which we develop a predictive model of jitter amplitude as a function of input and output frequencies, and a new metric to measure this spatial jitter. Using touch input data gathered in a study, we developed a simulator to validate this model and to assess the effects of different techniques and settings with any output frequency. The most promising approach, when the time of the next display refresh is known, is to estimate (via interpolation or extrapolation) the user’s position at a fixed time interval before that refresh. When input events occur at 125 Hz, as is common in touch screens, we show that an interval of 4 to 6 ms works well for a wide range of display refresh rates. This method effectively cancels most of the jitter introduced by input/output asynchronicity, while introducing minimal imprecision or latency.
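
A minimal sketch of the recommended strategy, under simplifying assumptions (1D positions, plain linear interpolation/extrapolation, a 5 ms interval): each frame displays the input position estimated at a fixed time before the refresh, rather than the latest raw event, so that the effective sampling of the input stream stays regular with respect to display time.

```python
def estimate_position(events, t):
    """Linearly interpolate (or extrapolate) a position at time t from a list of
    (timestamp, position) events sorted by timestamp, with distinct timestamps."""
    if len(events) == 1:
        return events[0][1]
    for (t0, p0), (t1, p1) in zip(events, events[1:]):
        if t <= t1:
            break   # first segment whose end is at or after t
    return p0 + (p1 - p0) * (t - t0) / (t1 - t0)

def displayed_positions(events, refresh_times, interval=0.005):
    """For each display refresh, render the position estimated `interval` seconds
    before that refresh instead of the latest available raw event."""
    return [estimate_position(events, t - interval) for t in refresh_times]

# Example: 125 Hz input, 60 Hz display, constant-speed motion at 100 units/s.
events = [(i / 125, 100.0 * i / 125) for i in range(30)]
refreshes = [i / 60 for i in range(1, 12)]
print(displayed_positions(events, refreshes))   # evenly spaced positions, i.e. no residual jitter
```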

[C17] Investigating the Necessity of Delay in Marking Menu Invocation.
J. Henderson, S. Malacria, M. Nancel, & E. Lank (2020) In ACM CHI 20 [doi] - [pdf] - BibTEX

Keywords: interaction techniques, temporality, menus, blind spots

Delayed display of menu items is a core design component of marking menus, arguably to prevent visual distraction and foster the use of mark mode. We investigate these assumptions, by contrasting the original marking menu design with immediately-displayed marking menus. In three controlled experiments, we fail to reveal obvious and systematic performance or usability advantages to using delay and mark mode. Only in very constrained settings—after significant training and only two items to learn—did traditional marking menus show a time improvement of about 260 ms. Otherwise, we found an overall decrease in performance with delay, whether participants exhibited practiced or unpracticed behaviour. Our final study failed to demonstrate that an immediately-displayed menu interface is more visually disrupting than a delayed menu. These findings inform the costs and benefits of incorporating delay in marking menus, and motivate guidelines for situations in which its use is desirable.

[C16] AutoGain: Gain Function Adaptation with Submovement Efficiency Optimization.
B. Lee, M. Nancel, S. Kim, & A. Oulasvirta (2020) In ACM CHI 20 [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools

A well-designed control-to-display gain function can improve pointing performance with indirect pointing devices like trackpads. However, the design of gain functions is challenging and mostly based on trial and error. AutoGain is an unobtrusive method to individualize a gain function for indirect pointing devices in contexts where cursor trajectories can be tracked. It gradually improves pointing efficiency by using a novel submovement-level tracking+optimization technique that minimizes aiming error (undershooting/overshooting) for each submovement. We first show that AutoGain can produce, from scratch, gain functions with performance comparable to commercial designs, in less than a half-hour of active use. Second, we demonstrate AutoGain’s applicability to emerging input devices (here, a Leap Motion controller) with no reference gain functions. Third, a one-month longitudinal study of normal computer use with AutoGain showed performance improvements from participants’ default functions.
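
The feedback loop at the heart of this approach can be illustrated with a deliberately simplified sketch. AutoGain adapts a full gain function over input speeds; the version below collapses that to a single scalar gain in 1D, and the function names and learning rate are assumptions, not the published algorithm.

```python
def relative_aim_error(start, end, target):
    """Signed aiming error of one submovement, as a fraction of the distance that
    remained to the target: > 0 means overshoot, < 0 means undershoot."""
    return ((end - start) - (target - start)) / abs(target - start)

def update_gain(gain, start, end, target, learning_rate=0.1):
    """Nudge the gain so the next comparable submovement lands closer to the target."""
    error = relative_aim_error(start, end, target)
    # Overshooting means the cursor moved too far for that hand movement: lower the
    # gain; undershooting raises it.
    return gain * (1.0 - learning_rate * error)

gain = 1.5
gain = update_gain(gain, start=0.0, end=12.0, target=10.0)   # 20% overshoot
print(gain)   # 1.47: slightly lower gain for similar submovements next time
```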

[C15] A Comparative Study of Pointing Techniques for Eyewear Using a Simulated Pedestrian Environment.
Q. Roy, C. Zakaria, S. Perrault, M. Nancel, W. Kim, A. Misra, & A. Cockburn (2019) In Interact 19 [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, temporality, methods

Eyewear displays allow users to interact with virtual content displayed over real-world vision, in active situations like standing and walking. Pointing techniques for eyewear displays have been proposed, but their social acceptability, efficiency, and situation awareness remain to be assessed. Using a novel street-walking simulator, we conducted an empirical study of target acquisition while standing and walking under different levels of street crowdedness. We evaluated three phone-based eyewear pointing techniques: indirect touch on a touchscreen, and two in-air techniques using relative device rotations around a forward and a downward axis. Direct touch on a phone, without eyewear, was used as a control condition. Results showed that indirect touch was the most efficient and socially acceptable technique, and that in-air pointing was inefficient when walking. Interestingly, the eyewear displays did not improve situation awareness compared to the control condition. We discuss implications for eyewear interaction design.

[S1] Interfaces utilisateurs – Dispositions de clavier bureautique français (NF Z 71-300).
In Norme AFNOR 19 [doi] - BibTEX

Keywords: typing

This document defines the layout of the French computer keyboard, with 105 keys in its office version and 72 keys in its compact version. It is particularly suited to typing French in a multilingual context where other Latin-script languages are frequently used. [...] Designed for French users, it may nevertheless be of interest to other French-speaking countries. [...] The standardization work behind this document follows the 2015 edition of the report to Parliament on the use of the French language (Délégation générale à la langue française et aux langues de France, Ministère de la Culture et de la Communication) and a publication entitled « Vers une norme française pour les claviers informatiques », both of which point out the difficulty of typing certain characters that are common in French, such as accented capitals or the « œ » ligature. [...] The main objectives that guided the drafting of this document are the following:
– to harmonize the stock of computer keyboards in France, and in particular to reduce disparities in character placement across hardware manufacturers and operating systems;
– to improve the ergonomics of the keyboard for typing French while remaining consistent with existing layouts (the so-called « AZERTY » keyboard layouts, not standardized, established by usage), so as not to hinder users' adoption of the new layout;
– to allow typing all the characters of the regional languages of France, the list of which appears on the website of the Délégation générale à la langue française et aux langues de France;
– to allow typing all the characters of the Latin-script languages present on the European continent, with priority given to the usual characters of the major languages of communication in Europe such as German, Spanish, and Portuguese;
– to make new character sets and symbols more accessible, which can be useful when writing specific or technical documents (for example Greek letters or mathematical symbols).

[C14] Next-Point Prediction for Direct Touch Using Finite-Time Derivative Estimation.
M. Nancel, S. Aranovskiy, R. Ushirobira, D. Efimov, S. Poulmane, N. Roussel, & G. Casiez (2018) In ACM UIST 18 [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, models, fine-grained input, tools

End-to-end latency in interactive systems is detrimental to performance and usability, and comes from a combination of hardware and software delays. While these delays are steadily addressed by hardware and software improvements, it is at a decelerating pace. In parallel, short-term input prediction has shown promising results in recent years, in both research and industry, as an addition to these efforts. We describe a new prediction algorithm for direct touch devices based on (i) a state-of-the-art finite-time derivative estimator, (ii) a smoothing mechanism based on input speed, and (iii) a post-filtering of the prediction in two steps. Using both a pre-existing dataset of touch input as benchmark, and subjective data from a new user study, we show that this new predictor outperforms the predictors currently available in the literature and industry, based on metrics that model user-defined negative side-effects caused by input prediction. In particular, we show that our predictor can predict up to 2 or 3 times further than existing techniques with minimal negative side-effects.
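
For readers unfamiliar with next-point prediction, the sketch below shows the simplest possible baseline: constant-velocity extrapolation from the last two touch samples. The paper's predictor replaces this naive derivative with a finite-time derivative estimator, speed-based smoothing, and a two-step post-filter; the names and the 16 ms horizon here are only illustrative.

```python
def predict_next_point(points, horizon=0.016):
    """Naive constant-velocity extrapolation of the next touch position.
    points: (t, x, y) samples, most recent last; horizon: how far ahead to predict, in seconds.
    This is a baseline illustration, not the estimator described in the paper."""
    (t0, x0, y0), (t1, x1, y1) = points[-2], points[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * horizon, y1 + vy * horizon

# Two samples 8 ms apart, moving at 250 and 375 units/s: predicted point 16 ms ahead.
print(predict_next_point([(0.000, 10.0, 20.0), (0.008, 12.0, 23.0)]))   # (16.0, 29.0)
```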

[C13] Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces.
J. Avery, S. Malacria, M. Nancel, G. Casiez, & E. Lank (2018) In ACM CHI 18 [Video] - [doi] - [pdf] - BibTEX

Keywords: navigation, interaction techniques

Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e. pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback, and directions for future research.

[C12] Pointing at a Distance with Everyday Smart Devices.
S. Siddhpuria, S. Malacria, M. Nancel, & E. Lank (2018) In ACM CHI 18 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays

Large displays are becoming commonplace at work, at home, or in public areas. However, interaction at a distance -- anything greater than arms-length -- remains cumbersome, restricts simultaneous use, and requires specific hardware augmentations of the display: touch layers, cameras, or dedicated input devices. Yet a rapidly increasing number of people carry smartphones and smartwatches, devices with rich input capabilities that can easily be used as input devices to control interactive systems. We contribute (1) the results of a survey on possession and use of smart devices, and (2) the results of a controlled experiment comparing seven distal pointing techniques on phone or watch, one- and two-handed, and using different input channels and mappings. Our results favor using a smartphone as a trackpad, but also explore performance tradeoffs that can inform the choice and design of distal pointing techniques for different contexts of use.

[R5] Historique et méthodologie de la nouvelle disposition de clavier AZERTY.
M. Nancel (2018) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: typing, methods

Working document used in the drafting of Annex H "Historique et méthodologie" of the AFNOR standard Z 71-300: "Dispositions de clavier bureautique français".

[R4] Élaboration de la disposition AZERTY modernisée.
A. Feit, M. Nancel, D. Weir, G. Bailly, M. John, A. Karrenbauer, & A. Oulasvirta (2018) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: typing, methods

Working document used in the drafting of Annex F "Élaboration de la disposition AZERTY modernisée" of the AFNOR standard Z 71-300: "Dispositions de clavier bureautique français".

[C11] Modeling User Performance on Curved Constrained Paths.
M. Nancel, & E. Lank (2017) In ACM CHI 17 [doi] - [pdf] - BibTEX

Keywords: pointing, models, methods

In 1997, Accot and Zhai presented seminal work analyzing the temporal cost and instantaneous speed profiles associated with movement along constrained paths. Their work posited and validated the steering law, which described the relationship between path constraint, path length and the temporal cost of path traversal using a computer input device (e.g. a mouse). In this paper, we argue that the steering law fails to correctly model constrained paths of varying, arbitrary curvature, propose a new form of the law that accommodates these curved paths, and empirically validate our model.
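
For reference, the baseline model discussed above is Accot and Zhai's steering law, which predicts the time $T$ needed to steer through a constrained path $C$ whose permitted width is $W(s)$ at curvilinear abscissa $s$:

$$ T = a + b \int_{C} \frac{ds}{W(s)} $$

where $a$ and $b$ are empirically fitted constants; for a straight path of length $A$ and constant width $W$ it reduces to $T = a + b\,A/W$. The paper's contribution is a revised form of this law that also accounts for the path's varying curvature.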

[C10] Next-Point Prediction Metrics for Perceived Spatial Errors.
M. Nancel, D. Vogel, B. De Araujo, R. Jota, & G. Casiez (2016) In ACM UIST 16 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, temporality, models, fine-grained input, methods, blind spots

Touch screens have a delay between user input and corresponding visual interface feedback, called input “latency” (or “lag”). Visual latency is more noticeable during continuous input actions like dragging, so methods to display feedback based on the most likely path for the next few input points have been described in research papers and patents. Designing these “next-point prediction” methods is challenging, and there have been no standard metrics to compare different approaches. We introduce metrics to quantify the probability of 7 spatial error “side-effects” caused by next-point prediction methods. Types of side-effects are derived using a thematic analysis of comments gathered in a 12-participant study covering drawing, dragging, and panning tasks using 5 state-of-the-art next-point predictors. Using experiment logs of actual and predicted input points, we develop quantitative metrics that correlate positively with the frequency of perceived side-effects. These metrics enable practitioners to compare next-point predictors using only input logs.

[C9] The Performance and Preference of Different Fingers and Chords for Pointing, Dragging, and Object Transformation.
A. Goguey, M. Nancel, D. Vogel, & G. Casiez (2016) In ACM CHI 16 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, navigation

The development of robust methods to identify which finger is causing each touch point, called “finger identification,” will open up a new input space where interaction designers can associate system actions to different fingers. However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord.” We present empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we develop design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification.

[A1] AutoGain: Adapting Gain Functions by Optimizing Submovement Efficiency.
B. Lee, M. Nancel, & A. Oulasvirta (2016) Unpublished arXiv preprint [doi] - [pdf] - BibTEX

Keywords: pointing, fine-grained input, tools

A well-designed control-to-display (CD) gain function can improve pointing performance with an indirect pointing device such as a trackpad. However, the design of gain functions has been challenging and mostly based on trial and error. AutoGain is an unobtrusive method to obtain a gain function for an indirect pointing device in contexts where cursor trajectories can be tracked. It gradually improves pointing efficiency by using a novel submovement-level tracking+optimization technique. In a study, we show that AutoGain can produce gain functions with performance comparable to commercial designs in less than a half hour of active use. This is attributable to reductions in aiming error (undershooting/overshooting) for each submovement. Our second study shows that AutoGain can be used to obtain gain functions for emerging input devices (here, a Leap Motion controller) for which no good gain function may exist yet. Finally, we discuss deployment in a real interactive system.

[W1] Hands Up: Who Knows Something About Performance and Ergonomics of Mid-Air Hand Gestures.
A. Feit, & M. Nancel (2016) Unpublished ACM CHI Workshop 16 [pdf] - BibTEX

Keywords: interaction techniques, methods

Advances in markerless and un-instrumented hand tracking allow us to make full use of the hands' dexterity for interaction with computers. However, the biomechanics of hand movements remain to be thoroughly studied in HCI. The large number of degrees of freedom of the hand (25) presents us with a huge design space of possible gestures, which is hard to fully explore with traditional methods like elicitation studies or design heuristics. We propose an approach to develop a model of fatigue and stress of manual mid-air input, inspired by prior work on the ergonomics of arm movements and on the performance of multi-finger gestures. Along with our vision of the incoming challenges in mid-air interaction, we describe a design framework for mid-air input that, given such models, can be used to automatically evaluate any given gesture set, or propose an optimal gesture vocabulary for a given set of tasks.

[C8] Gunslinger: Subtle Arms-down Mid-air Interaction.
M. Liu, M. Nancel, & D. Vogel (2015) In ACM UIST 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, navigation, interaction techniques, large displays, menus, methods

We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features novel ‘hand-cursor’ feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, fundamental pointing and clicking, and general usability.

[C7] Clutching Is Not (Necessarily) the Enemy.
M. Nancel, D. Vogel, & E. Lank (2015) In ACM CHI 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, methods, blind spots

Clutching is usually assumed to be triggered by a lack of physical space and detrimental to pointing performance. We conduct a controlled experiment using a laptop trackpad where the effect of clutching on pointing performance is dissociated from the effects of control-to-display transfer functions. Participants performed a series of target acquisition tasks using typical cursor acceleration functions with and without clutching. All pointing tasks were feasible without clutching, but clutch-less movements were harder to perform, caused more errors, required more preparation time, and were not faster than clutch-enabled movements.

[C6] Myopoint: Pointing and Clicking Using Forearm Mounted EMG and Inertial Motion Sensors.
F. Haque, M. Nancel, & D. Vogel (2015) In ACM CHI 15 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays

We describe a mid-air, barehand pointing and clicking interaction technique using electromyographic (EMG) and inertial measurement unit (IMU) input from a consumer armband device. The technique uses enhanced pointer feedback to convey state, a custom pointer acceleration function tuned for angular inertial motion, and correction and filtering techniques to minimize side-effects when combining EMG and IMU input. By replicating a previous large display study using a motion capture pointing technique, we show the EMG and IMU technique is only 430 to 790 ms slower and has acceptable error rates for targets greater than 48 mm. Our work demonstrates that consumer-level EMG and IMU sensing is practical for distant pointing and clicking on large displays.

[J2] Mid-air Pointing on Ultra-Walls.
M. Nancel, E. Pietriga, O. Chapuis, & M. Beaudouin-Lafon (2015) In ACM ToCHI 15 [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Ultra-high-resolution wall-sized displays (“ultra-walls”) are effective for presenting large datasets, but their size and resolution make traditional pointing techniques inadequate for precision pointing. We study mid-air pointing techniques that can be combined with other, domain-specific interactions. We first explore the limits of existing single-mode remote pointing techniques and demonstrate theoretically that they do not support high-precision pointing on ultra-walls. We then explore solutions to improve mid-air pointing efficiency: a tunable acceleration function and a framework for dual-precision techniques, both with precise tuning guidelines. We designed novel pointing techniques following these guidelines, several of which outperform existing techniques in controlled experiments that involve pointing difficulties never tested prior to this work. We discuss the strengths and weaknesses of our techniques to help interaction designers choose the best technique according to the task and equipment at hand. Finally, we discuss the cognitive mechanisms that affect pointing performance with these techniques.

[C5] Causality – A Conceptual Model of Interaction History.
M. Nancel, & A. Cockburn (2014) In ACM CHI 14 [Video] - [doi] - [pdf] - BibTEX

Keywords: temporality, histories of commands, blind spots

Simple history systems such as Undo and Redo permit retrieval of earlier or later interaction states, but advanced systems allow powerful capabilities to reuse or reapply combinations of commands, states, or data across interaction contexts. Whether simple or powerful, designing interaction history mechanisms is challenging. We begin by reviewing existing history systems and models, observing a lack of tools to assist designers and researchers in specifying, contemplating, combining, and communicating the behaviour of history systems. To resolve this problem, we present CAUSALITY, a conceptual model of interaction history that clarifies the possibilities for temporal interactions. The model includes components for the work artifact (such as the text and formatting of a Word document), the system context (such as the settings and parameters of the user interface), the linear timeline (the commands executed in real time), and the branching chronology (a structure of executed commands and their impact on the artifact and/or context, which may be navigable by the user). We then describe and exemplify how this model can be used to encapsulate existing user interfaces and reveal limitations in their behaviour, and we also show in a conceptual evaluation how the model stimulates the design of new and innovative opportunities for interacting in time.

[C4] Body-centric Design Space for Multi-surface Interaction.
J. Wagner, M. Nancel, S. Gustafson, S. Huot, & W. Mackay (2013) In ACM CHI 13 [Video] - [doi] - [pdf] - BibTEX

Keywords: interaction techniques, large displays, menus

We introduce BodyScape, a body-centric design space for both analyzing existing multi-surface interaction techniques and suggesting new ones. We examine the relationship between users and their environment, specifically how different body parts enhance or restrict movement in particular interaction techniques. We illustrate the use of BodyScape by comparing two free-hand techniques, on-body touch and mid-air pointing, separately and in combination. We found that touching the torso is faster than touching the lower legs, since the latter affects the user’s balance; individual techniques outperform compound ones; and touching the dominant arm is slower than other body parts because the user must compensate for the applied force. The latter is surprising, given that most recent on-body touch techniques focus on touching the dominant arm.

[C3] High-Precision Pointing on Large Wall Displays using Small Handheld Devices.
M. Nancel, O. Chapuis, E. Pietriga, X. Yang, P. Irani, & M. Beaudouin-Lafon (2013) In ACM CHI 13 [Video] - [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Rich interaction with high-resolution wall displays is not limited to remotely pointing at targets. Other relevant forms of interaction include virtual navigation, text entry, and direct manipulation of control widgets. However, most techniques for remotely acquiring targets with high precision have studied remote pointing in isolation, focusing on pointing efficiency, and ignoring the need to support these other forms of interaction. We investigate high-precision pointing techniques capable of acquiring targets as small as 4 millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical tablet device's screen space available for task-specific widgets. We compare these techniques to state-of-the-art distant pointing techniques and show that two of our techniques, a purely relative one and one that uses head orientation, perform as well or better than the best pointing-only input techniques while using a fraction of the interaction resources.

[D1] Designing and Combining Interaction Techniques in Large Display Environments.
M. Nancel (2012) PhD dissertation [doi] - [pdf] - BibTEX

Keywords: pointing, menus, interaction techniques, large displays, models, navigation

Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research about mid-air interactions in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating and command selection. However these techniques are often designed and evaluated separately on specific platforms and for specific use-cases or operationalizations, which makes it hard to choose, compare and combine them. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output and how it affects usability and collaboration and (2) input channels and how to combine them in rich sets of mid-air interaction techniques. These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods, one for calibrating high-precision techniques with two levels of precision and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel that affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.

[J1] Multisurface Interaction in the WILD Room.
M. Beaudouin-Lafon, O. Chapuis, J. Eagan, T. Gjerlufsen, S. Huot, C. Klokmose, W. Mackay, M. Nancel, E. Pietriga, C. Pillias, R. Primet, & J. Wagner (2012) In IEEE Computer [doi] - [pdf] - BibTEX

Keywords: pointing, navigation, interaction techniques, large displays, tools

The WILD room (wall-sized interaction with large datasets) serves as a testbed for exploring the next generation of interactive systems by distributing interaction across diverse computing devices, enabling multiple users to easily and seamlessly create, share, and manipulate digital content.

[R3] Precision Pointing for Ultra-High-Resolution Wall Displays.
M. Nancel, E. Pietriga, & M. Beaudouin-Lafon (2011) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: pointing, interaction techniques, large displays, models

Ultra-high-resolution wall displays have proven useful for displaying large quantities of information, but lack appropriate interaction techniques to manipulate the data efficiently. We explore the limits of existing modeless remote pointing techniques, originally designed for lower resolution displays, and show that they do not support high-precision pointing on such walls. We then consider techniques that combine a coarse positioning mode to approach the target's area with a precise pointing mode for acquiring the target. We compare both new and existing techniques through a controlled experiment, and find that techniques combining ray casting with relative positioning or angular movements enable the selection of targets as small as 4 millimeters while standing 2 meters away from the display.

[C2] Rapid Development of User Interfaces on Cluster-Driven Wall Displays with jBricks.
E. Pietriga, S. Huot, M. Nancel, & R. Primet (2011) In ACM EICS 11 [doi] - [pdf] - BibTEX

Keywords: large displays, tools

Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays. We present jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms.

[C1] Mid-air Pan-and-Zoom on Wall-sized Displays.
M. Nancel, J. Wagner, E. Pietriga, O. Chapuis, & W. Mackay (2011) In ACM CHI 11 [Video] - [doi] - [pdf] - BibTEX

Keywords: navigation, interaction techniques, large displays, methods

Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to perform multiscale navigation in these environments. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. Significant effects were obtained for all three factors: bimanual interaction, linear gestures and a high level of guidance resulted in significantly improved performance. Moreover, the interaction effects among some of the dimensions suggest possible combinations for more complex, real-world tasks.

[R2] Push Menu: Extending Marking Menus for Pressure-Enabled Input Devices.
S. Huot, M. Nancel, & M. Beaudouin-Lafon (2010) Unpublished Inria TechReport [doi] - [pdf] - BibTEX

Keywords: menus, interaction techniques

Several approaches have been proposed to increase the breadth of standard Marking Menus over the 8 item limit, most of which have focused on the use of the standard 2D input space (x-y). We present Push Menu, an extension of Marking Menu that takes advantage of pressure input as a third input dimension to increase menu breadth. We present the results of a preliminary experiment that validates our design and shows that Push Menu users who are neither familiar with pen-based interfaces nor continuous pressure control can handle up to 20 items reliably. We also discuss the implications of these results for using Push Menu in user interfaces and for improving its design.

[P1] 131 millions de pixels qui font le mur.
M. Beaudouin-Lafon, E. Pietriga, W. Mackay, S. Huot, M. Nancel, C. Pillias, & R. Primet (2010) In Plein Sud 10 [doi] - [pdf]

Keywords: interaction techniques, large displays, tools

Imagine a wall of screens displaying high-definition images. Imagine that, with simple gestures, you could interact with it… This is not the movie "Minority Report", but the realization of a unique human-computer interaction (HCI) project, the WILD platform, which makes it possible to interact with masses of complex data.

[C0] Un espace de conception fondé sur une analyse morphologique des techniques de menus.
M. Nancel, S. Huot, & M. Beaudouin-Lafon (2009) In ACM IHM 09 [doi] - [pdf] - BibTEX

Keywords: interaction techniques, menus

This paper presents a design space based on a morphological analysis of the mechanisms used to structure menus and select items. Its goal is to facilitate the exploration of new kinds of menus, in particular to increase their capacity without degrading their performance. The paper demonstrates the generative power of this design space through four new menu designs based on combinations of dimensions that had been little or not at all explored. For two of them, controlled experiments show that they perform comparably to menus from the literature.

[R1] Extending Marking Menus With Integral Dimensions: Application to the Dartboard Menu.
M. Nancel, & M. Beaudouin-Lafon (2008) Non publié Inria TechReport [pdf] - BibTEX
Esc:
M. Nancel, & M. Beaudouin-Lafon (2008)

Mot-clés : menus, interaction techniques

Marking menus have many benefits, including fast selection time, low error rate and fast transition to expert mode, but these are mitigated by a practical limit of 8 items per menu. Adding hierarchical levels increases capacity, but at the expense of longer selection times and higher error rates. In this paper we introduce Extended Marking Menus, a variant of marking menus that increases their width without sacrificing performance. Extended marking menus organize the items in several rings or layers. Selection is achieved by simultaneous control of direction, as in traditional marking menus, and another dimension such as distance, speed or pressure. We examine the design space of these new menus and study the Distance Extended Marking Menu, or Dartboard Menu, in more detail. We report on two experiments, one to calibrate the sizes of the rings, the other showing that it performs faster than the Zone and Flower menus but is less accurate than the Zone menu.

▲ Masquer le résumé

Funding

2018 – 2024: ANR JCJC Causality, "Integrating temporality and causality into the design of interactive systems."
2020: HYVE Challenge with Géry Casiez, "Real-time latency measurement and compensation."
2019: Google Faculty Research Award with Géry Casiez, "Real-time latency measurement and compensation."
2015: NSERC Engage Grant with Daniel Vogel, "Latency compensation on touch surfaces with high-frequency input."

Presentations and invited talks

January 2024: Interview about our model and solution for jitter [C18].
March 2020: Presentations of my research at the University of Toronto, the University of Waterloo, and Chatham Labs (Ontario).
April 2019: Round table at the Assemblée Nationale for the launch of the new French keyboard standard (Paris, France).
October 2018: Presentation of [C14*] at UIST 2018 (Berlin, Germany).
March 2017: Presentation at the "30 Minutes de Sciences" seminar at Inria Lille – Nord Europe.
November 2016: Presentation of [C10*] at UIST 2016 (Tokyo, Japan).
November 2015: Presentation of [C8*] at UIST 2015 (Charlotte, NC, USA).
April 2015: Presentation of [C7*] at CHI 2015 (Seoul, South Korea).
April 2014: Presentation of [C5*] at CHI 2014 (Toronto, Canada).
April 2013: Invited talk at the NUS HCI Lab (National University of Singapore).
Presentation of [C3*] at CHI 2013 (Paris, France).
December 2012: Defense of my PhD dissertation [D1*] (Orsay, France).
2011-2012: Several demos of the WILD platform (Orsay, France).
April 2011: Presentation of [C1*] at CHI 2011 (Vancouver, Canada).
October 2009: Presentation of [F1*] at IHM'09 (Grenoble, France).

Reviews


Committees and juries

HCI conferences

CHCCS GI'24 Program committee (Graphics Interface)
ACM UIST'23 Program committee (ACM Symposium on User Interface Software and Technology)
ACM CHI'23 Program committee – Interacting with Devices: Interaction Techniques & Modalities
ACM CHI'22 Program committee – Interacting with Devices: Interaction Techniques & Modalities
CHCCS GI'20 Program committee (Graphics Interface)
ACM CHI'17 Program committee – Interaction Techniques
ACM CHI'16 Program committee – Interaction Techniques
ACM ITS'14 Program committee (Interactive Tabletops and Surfaces)
ACM AUIC'14 Program committee (Australasian User Interface Conference)
ACM CHI'14 Video Showcase jury

Other juries and committees

Inria Lille – Nord Europe Chair of the Commission des Utilisateurs des Moyens Informatiques (CUMI) at Inria Lille (2023 - )
Inria Lille – Nord Europe Member of the Actions de Développement Technologique (ADT) committee (2018-2023)
ANR (committee) Member of a scientific evaluation committee for the Agence Nationale de la Recherche (2021, 2022)
Univ. Paris Saclay Examiner for Eugénie Brasier's PhD defense (2021)
CONEX-Plus Evaluation panel for the award of postdoctoral fellowships (2019)
FWO Research Foundation Expert panel for the award of doctoral fellowships (2018)
ANR (expert) Expert reviewer for the Agence Nationale de la Recherche (2015)

Research articles

Conferences (2010 – 2025): ACM CHI, ACM UIST, ACM CSCW, Interact, IEEE ISMAR, HHAI, MobileHCI, ACM ISS, ACM ITS, ACM GI, ACM NordiCHI, ACM IHM, ACM SUI, ACM AUIC, ACM DIS, ACM ICMI, ACM SIGGRAPH, IEEE VIS, IEEE PacificVis

Journals (2010 – 2025): ACM ToCHI, IJHCS, BIT, JMUI, IEEE TNSRE, Ergonomics

Roles: AC / program committee member (HCI), external reviewer (HCI), external reviewer (non-HCI)

2 Outstanding Reviews at ACM CHI 2024, Outstanding Review at ACM VIS 2023, Outstanding Review at ACM CHI 2022, 2 Outstanding Reviews at ACM DIS 2022, 2 Outstanding Reviews at ACM CHI 2021, 2 Outstanding Reviews at ACM CHI 2020, 2 Outstanding Reviews at ACM CHI 2019, Exceptional Reviewer at ACM UIST 2014, Exceptional Reviewer at ACM UIST 2012.

Teaching

2020 - 2023
Controlled experiments and evaluation Master RVA, Univ. Lille M2
2022
Numérique et Sciences Informatiques Tle Spécialité High-school textbook Terminale
2021
Numérique et Sciences Informatiques 1re Spécialité High-school textbook 1ère
2018 - 2020
Controlled experiments and evaluation Master IVI, Univ. Lille M2
2017
Information visualization Univ. Lille M1
2011 - 2012
Certificat Informatique et Internet (C2i) Univ. Paris-Sud XI L1
Human-Computer Interaction Polytech Paris-Sud L3
Databases Polytech Paris-Sud L3
Human-Computer Interaction Master Informatique - Univ. Paris-Sud XI M1
2010 - 2011
Software Development L2 Info - Univ. Paris-Sud XI L2
Human-Computer Interaction Polytech Paris-Sud L3
Master 2 internship supervision Polytech Paris-Sud M2
2009 - 2010
Databases IFIPS L3
Software Engineering TER (MIAGE & L3 Info) - Univ. Paris-Sud XI L3
2008 - 2009
Processing & Arduino Mastère Nouveaux Médias - ENSCI M2
Databases IFIPS L3
Software Engineering TER (MIAGE & L3 Info) - Univ. Paris-Sud XI L3

Education and career


2016 -
Research Scientist (Chargé de Recherche) – Loki team at Inria Lille – Nord Europe (Lille, France)
Main collaborators: Sylvain Malacria, Géry Casiez, Edward Lank
2015 - 2016
Postdoctoral researcher – User Interfaces Lab at Aalto University (Helsinki, Finland)
Main collaborators: Antti Oulasvirta, Anna Maria Feit
2014 - 2015
Postdoctoral researcher – Human Computer Interaction Lab at University of Waterloo (Waterloo, Ontario, Canada)
Main collaborators: Daniel Vogel, Edward Lank
2013 - 2014
Postdoctoral researcher – Human Computer Interaction and Multimedia Lab at University of Canterbury (Christchurch, New Zealand)
Main collaborator: Andy Cockburn
2008 - 2012
PhD in Human-Computer Interaction – in|situ| team at Université Paris-Sud XI (Orsay, France)
2007 - 2008
Research Master's (M2) in Computer Science – Université Paris-Sud XI (Orsay, France)
2003 - 2008
Engineering degree in Computer Science – IFIPS (Orsay, France)