Project Tasca: Enabling Touch and Contextual Interactions with a Pocket-based Textile Sensor

We present Project Tasca, a pocket-based textile sensor that detects user input and recognizes everyday objects that a user carries in the pockets of a pair of pants (e.g., keys, coins, electronic devices, or plastic items). By creating a new fabric-based sensor capable of detecting in-pocket touch and pressure, and recognizing metallic, non-metallic, and tagged objects inside the pocket, we enable a rich variety of subtle, eyes-free, and always-available input, as well as context-driven interactions in wearable scenarios. We developed our prototype by integrating four distinct sensing methods, namely inductive sensing, capacitive sensing, resistive sensing, and NFC, into a multi-layer fabric structure in the form factor of a jeans pocket. Through a ten-participant study, we evaluated the performance of our prototype across 11 common objects (including hands), 8 force gestures, and 30 NFC tag placements. Our prototype achieved 92.3% personal cross-validation accuracy for object recognition, 96.4% accuracy for gesture recognition, and 100% accuracy for detecting NFC tags at close distance. We conclude by demonstrating the interactions enabled by our pocket-based sensor in several applications.
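To make the fusion of the four sensing channels concrete, here is a minimal sketch of how per-channel readings could be combined into a single feature vector and scored with personal (per-participant) cross-validation. The feature layout, the choice of a random-forest classifier, and the helper names are our own assumptions for illustration, not Project Tasca's implementation.

```python
# Hypothetical sketch: combine readings from the inductive, capacitive, and
# resistive layers plus an NFC flag into one feature vector, then estimate
# per-user object-recognition accuracy with cross-validation.
# Feature layout and classifier are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def build_feature_vector(inductive, capacitive, resistive, nfc_tag_seen):
    """Summarize each analog channel with simple statistics; NFC is a binary cue."""
    feats = []
    for channel in (inductive, capacitive, resistive):
        channel = np.asarray(channel, dtype=float)
        feats.extend([channel.mean(), channel.std(), channel.min(), channel.max()])
    feats.append(1.0 if nfc_tag_seen else 0.0)
    return np.array(feats)

def personal_cv_accuracy(windows, labels, folds=10):
    """Cross-validate an object classifier on a single participant's own data."""
    X = np.stack([build_feature_vector(*w) for w in windows])
    y = np.asarray(labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=folds).mean()
```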
ElectroRing: Subtle Pinch and Touch Detection with a Ring

We present ElectroRing, a wearable ring-based input device that reliably detects both onset and release of a subtle finger pinch, and more generally, contact of the fingertip with the user’s skin. ElectroRing addresses a common problem in ubiquitous touch interfaces, where subtle touch gestures with little movement or force are not detected by a wearable camera or IMU. ElectroRing’s active electrical sensing approach provides a step-function-like change in the raw signal, for both touch and release events, which can be easily detected using only basic signal processing techniques. Notably, ElectroRing requires no second point of instrumentation, but only the ring itself, which sets it apart from existing electrical touch detection methods. We built three demo applications to highlight the effectiveness of our approach when combined with a simple IMU-based 2D tracking system.
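The step-function-like change in the raw signal suggests that a few lines of basic signal processing are enough to recover touch and release events. Below is a minimal sketch using hysteresis thresholding; the normalization and threshold values are illustrative assumptions, not ElectroRing's actual signal chain.

```python
# Minimal sketch: detect touch/release from a step-like sensor signal using
# hysteresis thresholding. Thresholds and normalization are assumptions.
import numpy as np

def detect_touch_events(signal, high=0.6, low=0.4):
    """Return a list of (sample_index, 'touch' | 'release') events."""
    x = np.asarray(signal, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)  # normalize to [0, 1]
    events, touching = [], False
    for i, v in enumerate(x):
        if not touching and v > high:
            touching = True
            events.append((i, "touch"))
        elif touching and v < low:
            touching = False
            events.append((i, "release"))
    return events
```

Using separate rise and fall thresholds (hysteresis) keeps a noisy sample near the boundary from producing spurious touch/release pairs.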
Auth+Track: Enabling Authentication Free Interaction on Smartphone by Continuous User Tracking

We propose Auth+Track, a novel authentication model that aims to reduce redundant authentication in everyday smartphone usage. By sparse authentication and continuous tracking of the user’s status, Auth+Track eliminates the “gap” authentication between fragmented sessions and enables “Authentication Free when User is Around”. To instantiate the Auth+Track model, we present PanoTrack, a prototype that integrates body and near field hand information for user tracking. We install a fisheye camera on the top of the phone to achieve a panoramic view that captures both the user’s body and on-screen hands. Based on the captured video stream, we develop an algorithm to extract 1) features for user tracking, including body keypoints and their temporal and spatial association as well as near field hand status, and 2) features for user identity assignment. The results of our user studies validate the feasibility of PanoTrack and demonstrate that Auth+Track not only improves authentication efficiency but also enhances the user experience with better usability.
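One way to read the “Authentication Free when User is Around” policy is as a small state machine that binds an unlocked session to the authenticated user's track and locks again once that track disappears. The sketch below is our own simplification with placeholder interfaces; it is not the PanoTrack code.

```python
# Hypothetical sketch of an "authentication free when user is around" policy
# as a state machine: authenticate once, then keep the session unlocked while
# the tracker keeps reporting the authenticated user's track near the phone.
# State names, the tracker interface, and the binding scheme are assumptions.
from enum import Enum, auto

class SessionState(Enum):
    LOCKED = auto()
    UNLOCKED = auto()

class AuthTrackPolicy:
    def __init__(self):
        self.state = SessionState.LOCKED
        self.owner_track_id = None

    def on_authenticated(self, track_id):
        """Explicit authentication (e.g., fingerprint) binds the session to a track."""
        self.owner_track_id = track_id
        self.state = SessionState.UNLOCKED

    def on_tracker_update(self, visible_track_ids):
        """Per-frame tracker update: lock once the owner's track disappears."""
        if self.state is SessionState.UNLOCKED and self.owner_track_id not in visible_track_ids:
            self.state = SessionState.LOCKED
            self.owner_track_id = None

    def needs_authentication(self):
        return self.state is SessionState.LOCKED
```

A deployed policy would presumably also tolerate brief tracking dropouts before locking, which this sketch omits.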
Speech recognition is unreliable in noisy places, compromises privacy and security when around strangers, and is inaccessible to people with speech disorders. Lip reading can mitigate many of these challenges, but existing silent speech recognizers for lip reading are error-prone. Developing new recognizers and acquiring new datasets is impractical for many, since it requires an enormous amount of time, effort, and other resources. To address these challenges, we first develop LipType, an optimized version of LipNet for improved speed and accuracy. We then develop an independent repair model that processes video input for poor lighting conditions, when applicable, and corrects potential errors in the output for increased accuracy. We then test this model with LipType and other speech and silent speech recognizers to demonstrate its effectiveness.
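Read as a pipeline, the description above amounts to: optionally enhance the video when lighting is poor, run the recognizer, then pass its transcript through the repair model. The sketch below uses placeholder interfaces (recognizer.predict, repair_model.enhance, repair_model.correct) that we invented for illustration; they are not the authors' API.

```python
# Illustrative sketch of the two-stage pipeline described above. The objects
# passed in are placeholders (any recognizer/repair model with these methods);
# frames are assumed to be grayscale NumPy arrays. Threshold is arbitrary.
import numpy as np

def transcribe_with_repair(video_frames, recognizer, repair_model, lighting_threshold=40.0):
    """Recognize silent speech from video, then correct the transcript."""
    mean_brightness = float(np.mean([frame.mean() for frame in video_frames]))
    if mean_brightness < lighting_threshold:
        # Enhance frames only under poor lighting ("when applicable").
        video_frames = [repair_model.enhance(frame) for frame in video_frames]
    raw_text = recognizer.predict(video_frames)   # e.g., a LipType/LipNet-style model
    return repair_model.correct(raw_text)         # independent error correction
```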