Natural User Interface
- 1 Readings
- 2 Reading Critiques
- 2.1 Vineet Raghu 14:03:25 11/17/2015
- 2.2 Xinyue Huang 17:49:05 11/19/2015
- 2.3 Matthew Barren 22:46:17 11/22/2015
- 2.4 Long Nguyen 23:01:33 11/22/2015
- 2.5 Zinan Zhang 23:21:09 11/22/2015
- 2.6 Darshan Balakrishna Shetty 1:08:19 11/23/2015
- 2.7 Lei Zhao 1:38:57 11/23/2015
- 2.8 Mingda Zhang 2:03:21 11/23/2015
- 2.9 Samanvoy Panati 2:10:59 11/23/2015
- 2.10 Chi Zhang 3:00:50 11/23/2015
- 2.11 Sudeepthi Manukonda 4:05:56 11/23/2015
- 2.12 Jesse Davis 5:12:04 11/23/2015
- 2.13 Adriano Maron 8:48:00 11/23/2015
- 2.14 Kent W. Nixon 8:56:25 11/23/2015
- 2.15 Mahbaneh Eshaghzadeh Torbati 8:57:07 11/23/2015
- 2.16 Ankita Mohapatra 13:15:07 11/23/2015
- Skinput: Appropriating the Body as an Input Surface, Harrison, C., Tan, D. Morris, D., In Proc of CHI 2010
- Distant freehand pointing and clicking on very large high resolution displays, Daniel Vogel, Ravin Balakrishnan, In Proc of UIST 2005
Vineet Raghu 14:03:25 11/17/2015
Skinput: Appropriating the Body as an Input Surface The authors in this article have developed a new input technology that uses the skin itself as the input surface, through an array of sensors that determine the location of vibrations as they propagate through the body following finger taps on one's arm or hand. This type of input allows users to perform eyes-free input, since users can sense locations on their own body without looking. The prototype itself is a simple armband, worn around the forearm, that detects vibrations along the skin of the arm using both internal and surface waves. The band can be placed in various locations along the arm, and these were tested in the subsequent experiment run by the authors. Their experimental procedure was a within-subjects design in which each subject went through five different armband setups in a randomized order. For each setup, the subject performed example finger taps to calibrate the system (i.e., to train a machine learning classifier). The subject then tested the armband in that setup, given tapping cues on a computer screen. The accuracy of the armband was fairly good, with an average classification accuracy of 87.6%. As expected, classification accuracy was higher in setups where the sensor was close to the point of contact for the finger taps. In addition, individuals with higher BMI had lower classification accuracies than smaller individuals, perhaps because fatty tissue degrades the acoustic signal more quickly. Finally, a supplemental experiment subjected individuals to the same tapping tasks in the more difficult condition of walking/jogging. Though the system did not produce many false positives, classification accuracy decreased fairly significantly, possibly due to the increased noise.
The authors claim that the quality of the training data contributed to the faulty classification, but it is unclear why they could not have included as many training points as they did for the standard condition. Overall, I found this paper to be a very interesting addition to possible input modalities for the future. The skin provides a great surface for on-the-go input recognition if the noise can be kept relatively under control. With fine tuning and perhaps a more powerful machine learning classifier, the Skinput prototype could be highly successful, especially if a specific armband location is decided upon and maintained. -------------------------------------------------------------------------------------------------------- Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays The authors in this paper examine the use of freehand pointing and clicking on large, high resolution displays from a distance. The rationale is that a physical device for interacting with these displays can be clumsy when up close, and such devices can be misplaced. Though the authors use a motion-tracking device to detect hand placement, they posit that computer vision will allow real-time tracking of a bare hand in the near future. In addition, the authors attempt to explore the design space of input techniques for these displays. First, the authors list five desirable characteristics of an input device for a high resolution display: accuracy, acquisition speed (mice don't move when released), pointing and selection speed, comfortable use, and smooth transitions between interaction distances. Next, the authors examined various solutions to the problems of determining whether a user clicked and how the user's hand should be oriented when pointing.
They examined two clicking techniques: AirTap, which resembles a mouse click, and ThumbTrigger, which involves moving the thumb toward the palm of the hand. The authors performed a mini experiment to evaluate these clicking techniques in a Fitts'-Law style, and they found no significant difference between the two. In their full experiment, the authors examined the impact of various pointing methods, including raycasting (the index finger casts a ray to the target), relative pointing, where the relative position of the hand determines where the cursor goes, and a hybrid approach that uses raycasting for recalibration. The major conclusion from this experiment was that raycasting was the fastest technique, especially for large targets, but its lack of fine control for small targets made it an impractical solution. There was no significant difference between the other two methods. This article has interesting applications to present-day technologies. Microsoft Kinect attempted to accomplish a goal similar to this paper's, since it needed a distant interface; however, it did not have to deal with close-up interaction, as users are required to stay within a certain range of the camera. For that setting, the relative technique with a simple dwell-to-click appeared to be very effective.
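The calibrate-then-classify pipeline described in the Skinput critique above (example taps train a per-user classifier, which then labels new taps) can be sketched roughly as follows. This is an illustration, not the authors' code: the synthetic feature vectors stand in for the amplitude/frequency features the armband's ten sensors would produce, and scikit-learn's `SVC` stands in for the paper's SVM implementation.

```python
# Illustrative sketch of Skinput-style calibration: train a classifier on
# example taps, then score it. Feature values are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
N_SENSORS = 10    # the prototype has two arrays of five sensing elements
LOCATIONS = 5     # e.g. the five-fingers condition
TAPS_PER_LOC = 30

# Each tap location gets a characteristic per-sensor response, plus noise
# (more noise would model walking/jogging and lower the accuracy).
centers = rng.normal(size=(LOCATIONS, N_SENSORS))
X = np.vstack([centers[k] + 0.3 * rng.normal(size=(TAPS_PER_LOC, N_SENSORS))
               for k in range(LOCATIONS)])
y = np.repeat(np.arange(LOCATIONS), TAPS_PER_LOC)

clf = SVC(kernel="rbf")   # the paper reports using an SVM classifier
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On clean synthetic data like this the accuracy is near perfect; the interesting findings in the paper are precisely about how real acoustic noise, sensor placement, and BMI pull the accuracy down.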
Xinyue Huang 17:49:05 11/19/2015
Skinput: Appropriating the Body as an Input Surface The paper presented Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, it resolves the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. The paper described the design of a novel, wearable sensor for bio-acoustic signal acquisition, and an analysis approach that enables the system to resolve the location of finger taps on the body. The authors also assess the robustness and limitations of the system through a user study, and explore the broader space of bio-acoustic input through prototype applications and additional experimentation. The paper introduced related work on always-available input, bio-sensing, and acoustic input. On the bio-acoustics behind Skinput: when a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. For sensing, to capture the rich variety of acoustic information described above, the authors evaluated many sensing technologies, including bone conduction microphones and conventional microphones. The armband prototype features two arrays of five sensing elements incorporated into an armband form factor. For processing, they employed a Mackie Onyx 1200F audio interface to digitally capture data from the ten sensors. For the experiment, they designed five experimental conditions, such as fingers, whole arm, and forearm, and described the design, setup, and procedure. The paper presented the results of the five-fingers, whole-arm, and forearm conditions separately and analyzed the effects of BMI.
They also ran supplemental experiments on walking and jogging, single-handed gestures, surface and object recognition, identification of finger tap type, and segmenting finger input, and gave some example interfaces and interactions. The results show that the system performs very well for a series of gestures, even when the body is in motion. They also presented initial results demonstrating other potential uses of the approach, which include single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects.
Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays The authors explored the design space of freehand pointing and clicking interaction with very large, high resolution displays from a distance. Three techniques for gestural pointing and two for clicking are developed and evaluated. They also presented subtle auditory and visual feedback techniques to compensate for the lack of kinesthetic feedback in freehand interaction and to promote learning and use of appropriate postures. The paper introduced design characteristics such as accuracy, acquisition speed, pointing and selection speed, comfortable use, and smooth transition between interaction distances. It also reviewed previous work such as hand-held indirect pointing devices, laser pointer-style devices, eye-tracking, body and hand tracking, direct hand pointing, virtual environments, and selection with the hand. For pointing and clicking using only the hand, the paper notes a classic problem: how to signal a selection or a clutch in the absence of any buttons. To prototype and explore different techniques, they used a motion tracking system to get accurate and fast position information for the hand. They created two clicking techniques, one using the index finger and the other using the thumb.
For example, the AirTap click technique is similar to how we move our index finger when clicking a mouse button or tapping a touch screen. There are two main challenges in designing the technique: first, there is no physical object to constrain the downward movement of the finger to a definite start or stop position; second, this style of finger movement is ambiguous and idiosyncratic. ThumbTrigger is a style where the thumb is moved in and out toward the index-finger side of the hand. Adjusting for the intended click point is needed because, when performing either gesture, the interconnected nature of the hand's physiology causes some involuntary finger and hand movement. The paper also introduced pointing techniques such as detecting and teaching hand postures, raycasting, relative pointing with clutching, and hybrid ray-to-relative pointing. After the experiment, they gave a recalibration frequency analysis, along with recalibration time and activation analyses.
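The first AirTap challenge mentioned above, that no physical object constrains the finger's travel, means thresholds on displacement and timing must stand in for a button's start and stop positions. A toy detector illustrates the idea; the thresholds, sampling rate, and synthetic trace here are invented for illustration and are not taken from the paper.

```python
# Toy AirTap-style detector over a stream of index fingertip heights (mm).
# Thresholds are assumptions for illustration, not the paper's values.
def detect_airtaps(heights, rest, dt=0.01, down_mm=15.0, max_click_s=0.3):
    """A click = a quick dip below the rest height followed by a return up."""
    clicks = []
    pressing = False
    press_start = 0
    for i, h in enumerate(heights):
        if not pressing:
            if rest - h > down_mm:             # finger moved down far enough
                pressing, press_start = True, i
        elif h >= rest - down_mm / 2:          # finger came back up
            if (i - press_start) * dt <= max_click_s:
                clicks.append(i)               # quick dip and return: a click
            pressing = False                   # slow dips are ignored

    return clicks

# Synthetic trace: rest at 100 mm with one quick 20 mm dip in the middle.
trace = ([100.0] * 30
         + [100.0 - 2 * k for k in range(10)]
         + [80.0 + 2 * k for k in range(10)]
         + [100.0] * 30)
clicks = detect_airtaps(trace, rest=100.0)
print(len(clicks))  # one detected click
```

A real system would also have to adapt `rest` per user and per posture, which is exactly the idiosyncrasy problem the paper raises.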
Matthew Barren 22:46:17 11/22/2015
Summary of Freehand Pointing and Clicking: Vogel and Balakrishnan explore target acquisition and clicking for large screen devices. Their analysis looks at two clicking techniques and three pointing techniques. The pointing and clicking operations that the authors explore seem to run into many common problems. Additionally, the study only considers their own mechanisms for accessing targets. If the paper wanted to show how well this approach performs compared to other devices, the authors should have examined other forms of pointing and clicking, such as touch and mouse interactions. Since the paper only considers their techniques, it does not demonstrate that this clicking mechanism is more effective than other point-and-click mechanisms. That being said, the methods for pointing and clicking are quite interesting. In particular, using a hand as a clicking mechanism allows target access to map to more natural gestures. Additionally, there could be more potential ways to click on or access a particular target. Considering their methods for clicking, it seems that accurately clicking on small targets would be extremely difficult. Consider the thumb tap: it would be difficult to hold your hand steady enough to click on the exact desired target. The greatest contribution of this paper is its exploration of new methods to point and click with hand gestures. In addition, it presents a series of operations that can be performed to click on targets. Summary of Appropriating the Body as an Input Surface: Harrison, Tan, and Morris examine using skin as a surface for transmitting input. They examine various methods, input quantities, and particular challenges related to the device. The Skinput method presents an interesting way to use an individual's body as a larger palette for a device. As the authors state, current bio-devices are limited in size, and achieving more features would require a larger device.
Thus, Skinput utilizes the user's skin as the device layout, and the user can then tap areas of their skin to interface with the device. Obviously this increases the potential area in which to place icons that activate the device. One interesting challenge is the variation of BMI and acoustic signal detection: depending on BMI, detection may be more or less accurate, because acoustic signals travel differently depending on the quantity of fat. It would also be interesting to see whether the device's effectiveness differs across gender and age. One drawback of Skinput is that it relies on the user having a projection device attached to their body, their arm must be in a particular orientation, and their skin needs to be exposed. The projection device and arm orientation could be a problem if the user must always keep their body in a particular position, which could cause fatigue over time. Additionally, having exposed skin is not always achievable and typically depends on the user's attire. Finally, the last drawback of this device is the projection onto an individual's skin: the quality of the projection is limited because the canvas is not optimized to display graphics.
Long Nguyen 23:01:33 11/22/2015
"Skinput: Appropriating the Body as an Input Surface": The paper presents a new sensor prototype whose input is the vibration produced by finger taps on human skin. I find the idea quite novel and well motivated: the authors made something as close to the human as the finger and skin serve as the input for a sensor, even though there is no real application/usage for this sensor yet. Using machine learning and data analysis to tune the sensor, the evaluation shows that this device works well for a series of gestures, even when the user is moving. However, the authors did not fully specify the limitations of this sensor, which were mentioned earlier in the introduction. ----------------------------------- "Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays": The second paper proposes a method to use in-air hand gestures, as a replacement for the mouse, to control a very large screen. Studying prior techniques such as hand-held pointing devices, laser pointer-style devices, eye tracking, direct hand pointing, and hand and body tracking in virtual environments, the authors collect these techniques' advantages and propose a new methodology to overcome the pointing task that a mouse cannot handle on a large screen at a distance. In the evaluation, results show that this new technique has almost the same error rate as a mouse, which is incredible, since it is more natural to humans. I think the main contribution of this paper is showing that in some specific circumstances a new technique can be more suitable than the seemingly perfect traditional one, which also motivates further research on pointing and clicking tasks.
Zinan Zhang 23:21:09 11/22/2015
Skinput------------ This paper mainly introduces a new way for humans to provide input, called Skinput. With this kind of new equipment, people can use a finger as the input device and their body as the surface for the finger to touch. In a word, the authors invent a new finger-and-on-body input system, which is portable and handy. As the experiments show, the new equipment performs well, and there is still a rich design space in Skinput. It sounds really cool: the imagery of science fiction films is becoming real. As we have seen in some movies, in the future people no longer need to bring a computer or laptop with them; they can see a virtual screen just in front of their eyes and input commands with simple taps on their arm with a finger, and everything is done. For example, when we want to watch a movie online, using this new technique is much better than using a laptop, tablet, or mobile phone. Using a laptop: you have to take the laptop with you wherever you go; it is not convenient enough, even though most laptops are getting lighter and aim to be extremely portable. Using a tablet: the same problem as a laptop; the size is much smaller, but it is still a burden compared to bringing nothing. Using a mobile phone: it is a device we can never leave behind, but its screen is too small for watching a movie, and doing so significantly drains the battery. Finally, using the new equipment: you do not need to take anything but a small device on your arm, a sensor. When you decide to watch a movie, just make some simple taps on your hand, and you can complete the operations of searching for the movie online, downloading it, and playing it.
====================================== Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays------------- This paper mainly focuses on a new way of manipulating a pointer on a huge screen from a distance with the hand. In the paper, the authors develop and evaluate three techniques for gestural pointing and two for clicking. From my point of view, this invention is necessary. To meet human needs, screens will get larger and larger, for two main reasons, I think: first, a larger screen gives a better experience for watching movies, writing code, and the like; second, a larger screen can show more content to more people. For example, during a meeting, many people watch the screen; if it is not big enough, people at the back of the room cannot see it clearly. So larger screens are inevitable. Then the problem arises: with a larger screen, people cannot operate on it easily, so the authors' new technique is necessary. With this equipment, people can manipulate the screen's content from a distance and handle the operations easily.
Darshan Balakrishna Shetty 1:08:19 11/23/2015
Skinput: Appropriating the Body as an Input Surface: Skinput is a novel prototype for using the human body as an input interface. It uses waves produced by finger taps on the hand or arm of the user, and through machine learning different commands are issued to a system. A prototype armband has been built that is equipped with sensors for monitoring vibrations on the skin. The authors had to perform a considerable amount of data analysis so that the system is finely tuned. After gathering samples of different finger taps from a number of people, they built an SVM to recognize the waves and vibrations produced by different taps. Even though this work appears novel and fascinating, I do not see a particular use for it in the near future. ------------------------------------------------------------------------------------------------------------------------------------ Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays: In this paper a novel approach for remote pointing on large displays is presented. The authors are motivated by the fact that displays are becoming bigger and the need for remote pointing arises. A prototype is built that enables the user to control a pointer using only their hands. Since kinesthetic feedback is absent in the proposed approach, the authors employed visual and audio methods for notifying the user. In addition, two clicking techniques have been implemented: AirTap and ThumbTrigger. The former treats a movement of the index finger as a click, whereas the latter tracks the thumb's position relative to the hand. In addition, three different pointing techniques are presented. First, raycasting, which builds on the natural pointing ability of human beings and moves the cursor to the point on the display at which the user's finger points. Next, relative pointing aims to make pointing easier and less tiring for the user. Finally, a hybrid method is also presented.
The authors conducted an extended user study to compare the different approaches on completion time, error rate, and recalibration. The results gathered from the study support their design.
Lei Zhao 1:38:57 11/23/2015
Reading Critique The paper Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays introduces techniques for pointing and clicking from a distance using only the human hand. It surveys previous work on distant pointing devices and analyzes the advantages and problems of each, then describes two clicking techniques and three pointing techniques and evaluates them experimentally. Skinput: Appropriating the Body as an Input Surface designs a sensor and successfully uses it to resolve the location of finger taps on the body, in order to use the body as an input surface. An experiment is then conducted to test the result. The conclusion is that the device has high accuracy... For the second paper: the design of the device is cool, although it builds on another device. The point of using skin as an input surface is, in my understanding, to avoid carrying devices and to be natural, yet it is not natural at all to wear a Skinput band on the arm while doing the tasks. Still, the device can be made smaller and smaller, and I like the way they use the arm not only as an input surface but also as an output surface. I have one question: could the subtle acoustic energy be affected by speech nearby, or by sounds inside the person? The first paper: from the references we can tell this paper was written no earlier than 2005. Since the Kinect for Xbox has already achieved freehand clicking and pointing from a distance, this paper must have been written before the Kinect's release; probably the Kinect was inspired by work like this. The previous-work section of this paper summarizes distant-control work up to 2005, including flying mice, touch pens, eye tracking, body and hand tracking, and "gloves", each of which has an undeniable problem, such as inaccuracy or the lack of a clicking state. The two clicking techniques introduced in this paper use the index finger and the thumb.
The three pointing techniques introduced in this paper are absolute-position finger raycasting, relative pointing with clutching, and a hybrid technique. Both clicking techniques use the fingers. Of the three pointing techniques, the third seems to be the best based on the comparison graphs.
Mingda Zhang 2:03:21 11/23/2015
Skinput: Appropriating the Body as an Input Surface This paper proposed a novel input approach for mobile and other devices, called Skinput. The concept itself is very interesting and, from my perspective, almost too ambitious to be true. The authors propose to use skin as the input surface and to collect data as biological acoustics. By analyzing mechanical vibrations and their propagation through the body, the authors are able to determine the location of finger taps and other operations. The authors explained their innovations not only in data collection but also in sensor modification. Honestly speaking, I am amazed that the authors could use the sensors to capture such low-frequency waves. From the user study, the authors showed that this method can reach high accuracy. This could be a breakthrough in human-computer interfaces, especially in the near future when ubiquitous computing becomes dominant in personal mobile computing. Actually, one of the major drawbacks of current mobile devices is exactly the limited throughput and bandwidth for inputting information to the computer. Although speech recognition has made great progress in the past decade, it is still not sufficient to fulfill the demands of modern human-computer interaction. Considering the paper was published in 2010, I am really curious about what has happened to this promising technology since then. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper discusses another interesting and highly useful technology for the future: how to use in-air hand gestures to perform mouse operations on a large screen. The authors proposed several different approaches and evaluated their performance for comparison. The thumb and index finger are frequently used in these methods. According to the authors, the tests showed few significant differences between the methods, so a combination of them could be a good option.
It is worth noting that in the prototype the authors used visual and auditory feedback mechanisms to compensate for the lack of a physical button. This is a very smart idea and could make a great difference in a real implementation. From my own perspective, I believe more and more research papers will appear on this topic, since large screens have become more and more popular in daily life. The mouse and keyboard have been popular for decades with normal-sized screens, but the new trend toward larger, higher resolution displays will undoubtedly lead to other preferred input methods.
Samanvoy Panati 2:10:59 11/23/2015
Critique 1: Distant freehand pointing and clicking on very large high resolution displays This paper presented techniques for freehand pointing and clicking on very large, high resolution displays from a distance. The authors are motivated by the fact that displays are becoming bigger and the need for remote pointing arises. A prototype is built that enables the user to control a pointer using only their hands. Two clicking techniques have been implemented: AirTap and ThumbTrigger. The former treats a movement of the index finger as a click, whereas the latter tracks the thumb's position relative to the hand. Three different pointing techniques are presented. First, raycasting, which builds on the pointing ability of human beings and moves the cursor to the point on the display at which the user's finger points. Next, relative pointing aims to make pointing easier and less tiring for the user. Finally, a hybrid method is also presented. An extended user study was conducted to compare the different approaches on completion time, error rate, and recalibration. Meaningful results are shown and the idea is justified. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Critique 2: Skinput: Appropriating the body as an input interface This paper introduced the Skinput method, which makes the human body an input surface. It uses waves produced by finger taps on the hand or arm of the user, and through machine learning different commands are issued to a system. A prototype armband has been built that is equipped with sensors for monitoring vibrations on the skin. The authors performed a considerable amount of data analysis so that the system is finely tuned.
After gathering samples of different finger taps from a number of people, they built an SVM to recognize the waves and vibrations produced by different taps. This approach is novel, and we may see many implementations of this technology in the future.
Chi Zhang 3:00:50 11/23/2015
Critiques on "Skinput: Appropriating the Body as an Input Surface" by Chi Zhang. This paper introduces Skinput, an input-capture technology that uses the acoustic transmission properties of human skin. When the user taps on their own body, sensors pick up the mechanical vibrations that result. Skinput uses bio-acoustic sensors to achieve this: the armband contains two sensor packages with five sensors each, tuned to different frequencies. Using these sensors, the approximate location of a tap on the body can be discerned. The experiment consisted of tapping the body at various points. The authors managed an accuracy rate of nearly 90% when classifying which finger had been tapped against the thumb, and flicking increased this to nearly 97%. It's a very interesting paper, and it expresses ideas from a very novel viewpoint. -------------------------------------------------------- Critiques on "Distant freehand pointing and clicking on very large high resolution displays" by Chi Zhang. This paper is about freehand point-and-click interaction with large displays. Three techniques are given for pointing and two for clicking. The pointing techniques were hand postures, raycasting, and relative pointing, while the clicking techniques were ThumbTrigger and AirTap. Using a Fitts'-Law-based study, the authors could determine the speed and accuracy of each technique, as well as its comfort and ease of use. This is very exciting research, and the authors are very innovative in expressing their ideas.
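For reference, a Fitts'-Law-based study of the kind mentioned above typically reports target difficulty and throughput. A generic sketch of those standard quantities follows (the common Shannon formulation; the paper's exact analysis may differ, and the example numbers are invented):

```python
# Standard Fitts'-law quantities (Shannon formulation), for illustration.
import math

def index_of_difficulty(distance, width):
    # ID = log2(D / W + 1), in bits: farther and smaller targets are harder
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    # bits per second achieved on one pointing trial
    return index_of_difficulty(distance, width) / movement_time_s

# e.g. a 512 px reach to a 32 px target, completed in 1.2 s
ID = index_of_difficulty(512, 32)
TP = throughput(512, 32, 1.2)
print(f"ID = {ID:.2f} bits, throughput = {TP:.2f} bits/s")
```

Comparing techniques by throughput rather than raw time is what lets such studies trade off the speed of raycasting against its higher error rate on small (high-ID) targets.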
Sudeepthi Manukonda 4:05:56 11/23/2015
“Skinput: Appropriating the body as an input surface” by Chris Harrison, Desney Tan, and Dan Morris is an interesting paper about a technology that turns the body into a touch-screen interface. Touch-screen gadgets have become popular because they are easy to use, fast to operate, and have a low error rate. If touch screens have so many advantages, why do we need another technology? Although significant and advantageous devices are in use, their small size leads to a limited interaction space: we cannot make buttons and screens larger without losing the primary benefit of the small size. Skinput allows users to simply tap their skin to control audio devices, play games, make phone calls, and navigate hierarchical browsing systems. Each body creates different kinds of acoustic variation depending on the characteristics of bone, muscles, and tendons. Bio-acoustics, Bluetooth, and a pico-projector are what constitute a functioning Skinput. A pico-projector is a small projector, typically used in gadgets. Bio-acoustics, on the other hand, refers to the fact that when a finger taps the skin, several distinct forms of acoustic energy are produced. Bio-acoustic sensing relies on two different types of propagation, namely transverse wave propagation, where waves travel along the skin surface, and longitudinal wave propagation, where waves cause internal skeletal structures to vibrate. One way of making the bio-acoustics work is by wearing a sensor armband. In the Bluetooth model, audio is captured from the sensors, converted to digital form, and sent to a PC via Bluetooth, where software matches sound signatures to specific skin locations. Skinput has many applications, such as mobile systems, gaming, music players, and simpler browsing systems.
——————————————————————————— “Distant Freehand Pointing and Clicking on a Very Large, High Resolution Display” talks about techniques used to point at objects on a large, high resolution display by hand. High resolution means that you can view and modify objects in greater detail, which allows accurate and clear pointing. This lets the user point by hand, and also move back to a distance and work with the details without losing accuracy and ease. The prototype was built on a huge display with 6000 by 2000 pixels of resolution. To track the motion of the hand, Vicon, a real-time motion capture system, is used. A clicking mechanism can be implemented using two fingers. As the response is otherwise ambiguous, images and sound form most of the feedback mechanism. In raycasting, the cursor appears where the finger's ray intersects the display screen. As our hand movements are jittery, this method uses a low-pass filter to reduce the effect; it is effective for large targets but ineffective for smaller ones. Next is the relative pointing technique, which uses a closed-fist clutch gesture; the cursor indicates when the clutch is active, and cursor movement is scaled by hand velocity. The ray-to-relative pointing technique uses the ray technique to recalibrate at the intersection point and the relative technique to reposition the cursor close to the target; relaxing the hand returns the cursor to the center.
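The low-pass filtering mentioned above for taming raycast jitter can be illustrated with a simple exponential smoother. This is a sketch only: the paper's actual filter and its parameters are not reproduced here, and `alpha` and the synthetic jitter below are assumptions.

```python
# Simple exponential low-pass filter for a jittery raycast cursor position.
import random

def smooth_cursor(raw_points, alpha=0.2):
    # alpha near 0: heavy smoothing (steadier but laggier cursor)
    # alpha near 1: light smoothing (responsive but jittery)
    smoothed = [raw_points[0]]
    for x, y in raw_points[1:]:
        px, py = smoothed[-1]
        smoothed.append((px + alpha * (x - px), py + alpha * (y - py)))
    return smoothed

# Jittery ray/display intersections around a true target at (1000, 500).
random.seed(1)
raw = [(1000 + random.uniform(-8, 8), 500 + random.uniform(-8, 8))
       for _ in range(50)]
out = smooth_cursor(raw)
print(out[-1])  # hovers near (1000, 500)
```

The trade-off is visible even in this toy: smoothing suppresses jitter at the cost of lag, which is one reason raycasting still struggles on small targets.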
Jesse Davis 5:12:04 11/23/2015
Skinput: Appropriating the Body as an Input Surface This paper was a very interesting read and a very inventive/creative way of incorporating human computer interaction biologically without any invasive hardware. This is an important study and should be an inspiration to other researchers/hardware designers that are looking for a way to incorporate the human body as an input device, with the mindset that the user should be able to interact with the computer anywhere and (almost) anytime. I found the accuracy readings that they achieved to be very astonishing, and the video that I found that went with this paper was even more insightful; it actually displayed a user playing/controlling Tetris with the use of finger taps. Additionally, they were also able to incorporate a video display that would serve as a menu for the user, laid out in a way that made it look as though the buttons were responding to the user's touches (but what was really happening was the bio-acoustic sensing; it was paired quite well, and it did indeed look like it was responding to the user hitting the button rather than listening for the readings). While this non-invasive bio-interaction model is a very important aspect of HCI to research, I also believe that invasive HCI research is something that should be noteworthy as well, although it is often a touchy subject in the public eye. In conclusion, I found this paper pretty insightful as far as meshing humans and computers for the sake of a more fluid human computer interaction setup.
Link to video: https://www.youtube.com/watch?v=g3XPUdW9Ryg Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper went into a lot of detail in its existing-methods section, and I felt that was very appropriate: systems similar to this have already been built, and the authors outline what related methods have done and why they didn't include or try specific approaches that wouldn't work or had already been tried and failed. I found the way that the user is able to interact with the large display to be very intuitive and surprisingly accurate; then again, with the glove and motion-detecting devices, I suppose it should be. A similar project that I did research on was SixthSense, and I believe it would have worked well in combination with this, with the display being generated by a projector worn by the user and the user being able to issue commands with gestures while wearing gloves similar to the one used in this project (as opposed to the colored gloves used in SixthSense, which can introduce a good bit of inaccuracy). Another interesting aspect of this project was ray casting, a term I don't recall coming across before this paper. In terms of speed, it did well versus the other methods, but in terms of error rate, it didn't do so hot, which both make sense. While it is intuitive and quick, it leaves room for error because the user is attempting pinpoint accuracy at a distance with the index finger. Excellent paper, well organized, and thought-provoking. http://www.pranavmistry.com/projects/sixthsense/
Adriano Maron 8:48:00 11/23/2015
Skinput: Appropriating the Body as an Input Surface: This paper describes a novel technique that uses the acoustic transmission generated by tapping the human body as user input. The authors also created an armband with multiple fine-tuned sensors to detect the vibrations propagated through the body. A user experiment was performed to collect data on tapping in three different arm conditions (fingers, whole arm and forearm). In their experiments, situations such as tapping while walking and jogging were evaluated in order to study how such external noise could influence input recognition. According to their studies, the accuracy of the system remained good under those situations. In my opinion, this kind of technology is interesting as a demonstration of what is possible, but I don't see how relevant it could be for the every-day user. First of all, it always requires exposed skin, which may not be possible in locations where the temperature is low. Second, what kind of equipment would one control through this type of input? Without any feedback or visual cues from the skin, such an approach relies heavily on user recall, and no recognition is provided. ================================================= Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays: This paper describes a freehand technique that allows users to control a pointer on a distant large screen using their hands. In their prototype, the authors use motion tracking based on reflective markers placed on the user's fingers, back of the palm and wrist. Their goal is to use gestures, similar to touchscreen taps, to acquire targets on a distant screen. The authors performed a study to evaluate three different techniques for user pointing: RayCasting pointing, Relative Pointing and RayToRelative (hybrid) pointing.
A within-participant factorial design was used in the experiments, and the independent variables were Technique, Target Distance and Target Size. The dependent variables were target acquisition time and error rate. It would be interesting to include a regular mouse as an input-device comparison, in order to analyze the performance difference of the proposed techniques.
Kent W. Nixon 8:56:25 11/23/2015
Skinput: Appropriating the Body as an Input Surface In this paper, the authors design a novel sensing armband which is able to detect the position of taps on a user's arm. The sensing is done using 10 acoustic sensors, with 5 positioned on either side of a user's arm. Through some initial testing, the authors determine the target frequencies they wish to sample to capture characteristic data related to taps, and appropriately weight the acoustic sensors' response curves. The authors then devise a brute-force machine-learning approach using a 186-feature SVM in order to classify input taps. Input taps are identified based on entering and exiting a specific intensity window within a certain amount of time. While accuracy varied, the authors are able to correctly determine the location of a tap from anywhere on the user's forearm all the way to their fingertips. They also investigate single-handed gestures, such as tapping a finger with the same hand's thumb. This was a very interesting paper as it discussed a lot of practical knowledge required to get such a system up and running. While I doubt such a system will ever see consumer use (the need to wear an armband limits its application), it is an extremely interesting proof-of-concept. It reminds me of a similar work I have read where the authors were able to detect single-handed gestures using a sensing plate attached to a smartwatch. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper discussed the design of a pointing and clicking system which was to be used at a distance from a large display surface. The authors create such a surface utilizing 18 projectors and specialized software to link them all together. To remove any barrier to interaction, the authors decide to simply use the human hand as the main form of interaction with the display, as using a dedicated device would complicate the transition from working at a distance to working closely on the display.
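The intensity-window segmentation described above (a candidate tap must rise above a threshold and fall back below it within a plausible time span) can be sketched as follows. The threshold and window lengths are illustrative guesses, not the authors' calibrated values, and in the real system each accepted segment would then be handed to the SVM for location classification:

```python
def detect_taps(intensity, rate, threshold=0.5, min_len=0.03, max_len=0.3):
    """Segment candidate taps from a smoothed intensity envelope.

    An event counts as a tap only if the envelope rises above `threshold`
    and drops back below it within [min_len, max_len] seconds; longer or
    shorter excursions are rejected as noise. Returns (start, end) sample
    index pairs for each accepted tap."""
    taps, start = [], None
    for i, v in enumerate(intensity):
        if start is None and v > threshold:
            start = i                      # envelope entered the window
        elif start is not None and v <= threshold:
            duration = (i - start) / rate  # envelope exited; check timing
            if min_len <= duration <= max_len:
                taps.append((start, i))
            start = None
    return taps
```

The duration check is what keeps sustained motion (e.g. the walking and jogging noise the paper tests against) from registering as a flood of false taps.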
The authors investigate a number of pointing, clicking, and dragging styles to discover which is the easiest and most efficient for users. They find that while the traditional ray casting technique is faster than the others, it is also more error prone. They instead conclude that the RayToRelative technique is a much better choice, as it greatly reduces errors compared to ray casting. At the same time, the authors discuss a significant amount of work that went into designing new types of audio and visual feedback for the user, replacing what was lost by not having a physical device such as a mouse as a pointer. My favorite topic in this category was how the authors animated the cursor to swing loosely from a point after being released by the user. While the techniques in this paper do not relate to any of my research, it was interesting to read about advanced research in the area of interaction at a distance. It sounded surprisingly similar to the interaction method utilized by the Kinect. Interestingly, I had found the hover-to-select method used for clicking with the Kinect to be frustrating and simplistic. If a clicking method similar to the authors' had instead been adopted, the interaction might have been much more intuitive.
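The hybrid behavior described above (ray casting for coarse, absolute placement; relative motion for fine positioning) can be sketched in a few lines. This is a minimal illustration in the spirit of the paper's RayToRelative technique; the gain value and method names are assumptions, not the authors' implementation:

```python
class RayToRelativePointer:
    """Hybrid pointing: on clutch, the cursor jumps to the ray-cast
    intersection with the display (coarse recalibration); while clutched,
    further hand motion moves it relatively with a gain factor (fine
    positioning). The gain value is an illustrative choice."""

    def __init__(self, gain=0.5):
        self.gain = gain
        self.cursor = (0.0, 0.0)
        self.last_hand = None

    def clutch(self, ray_hit_xy, hand_xy):
        """Recalibrate absolutely via the ray-cast hit point."""
        self.cursor = ray_hit_xy
        self.last_hand = hand_xy
        return self.cursor

    def move(self, hand_xy):
        """Apply scaled relative hand motion for precise adjustment."""
        dx = hand_xy[0] - self.last_hand[0]
        dy = hand_xy[1] - self.last_hand[1]
        self.cursor = (self.cursor[0] + self.gain * dx,
                       self.cursor[1] + self.gain * dy)
        self.last_hand = hand_xy
        return self.cursor
```

A gain below 1 is what buys the small-target precision that pure ray casting lacks, while the ray-cast jump avoids the repeated clutching that pure relative pointing needs to cross a very large display.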
Mahbaneh Eshaghzadeh Torbati 8:57:07 11/23/2015
Critique for Skinput: Appropriating the Body as an Input Surface This paper talked about a new input surface: human skin. The authors talked in detail about how to use vibration as input on the skin, and about their experimental results, which show the high accuracy of this method. This paper is great, in my opinion. The reason is that it presents a possible approach that can lead mobile computing to a new level. People may not need to carry a device with them in the future; users can get the benefit of mobile computing at any time. This approach is based on a large device worn by the user that collects vibrations. It does not look that advanced yet, but I think in the future it can be better. The principle behind why this system works is that when people tap different positions on the skin, the vibrations that are generated are different, so they can be used as an input signal. Treating the skin as a screen and projecting an image onto it gives users a touch surface on the skin. Also, based on their experimental results, this method turns out to have very high accuracy. I believe that it can be popular in the future. After reading this paper, I am impressed by this amazing input method, but I still think there are some limitations. Everything is good for some things and worse for others, so it is still a good idea anyway. The first weakness is that using the skin as a surface may not be a good idea when people want to use it outside in winter, since people do not want to expose their skin in cold weather. Maybe operation over clothing could help solve this problem. The other issue is the projection system: a wearable projection system may be needed to make the approach practical, but I think this can be solved soon. Critique for Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays In general, this paper talked about a method that uses in-air hand gestures to perform mouse operations on a very large screen.
The authors show a number of ways to simulate mouse controls, along with experiments on their performance. In this paper, two major achievements are introduced. One is in-air hand-gesture control that simulates mouse operation; the other is achieving this operation on a very large, high-resolution display. The most important achievement is the in-air hand-gesture control. The authors presented several different types of hand gestures for mouse operations. The thumb and index finger are used in these methods, which map from traditional mouse operations. All of the methods they invented are very usable, but there is no significant difference among them. All of them can be used for very small targets, so by combining them with a high-resolution display, users can achieve very accurate operation. Nowadays, screen size and resolution are increasing dramatically. The newest iMac has a 5K screen with a resolution similar to the big screen used in this project, and big screens are becoming popular in modern homes. If this method comes into common use, I think computer operation will jump to a new stage. For ordinary computer use, I think this method will be great; people will not need to spend a long time learning how to use it, because they are working in a natural way.
Ankita Mohapatra 13:15:07 11/23/2015
Skinput: Appropriating the Body as an Input Surface Chris Harrison, Desney Tan, Dan Morris This paper introduces a new input modality that uses the skin’s acoustic properties to allow direct input via the skin. Use of the skin as input is classified by the paper along with other work using “biological signals” for input. This research is novel in that the biological signal, bodily acoustics, can be consciously manipulated, as opposed to involuntary signals like heart rate. The authors presented a significant amount of related work for each of the advantages that they claimed for Skinput: it is always available, bio-sensing, and uses acoustic input. Identifying these properties required that the authors reference relevant research on human anatomy, which was a really interesting use of work from another field. Figure 3 was especially helpful for demonstrating the acoustic results of tapping your arm, as well as what properties their device is capable of capturing. Their experimental setup was particularly interesting because they began addressing the issue of training the system on users with vastly different body types. They recruited about an even number of men and women, had a range of ages, and had at least one normal and one obese subject. Their experiment was also extensive in that it covered a wide range of possibilities for sensing taps on the arm; they included taps to locations on the hand as well as across the whole arm, changed the location of the sensor to see what was feasible, and even had users walk or jog, getting surprisingly high accuracy in the end. The study was extremely thorough and the authors were careful to describe the results of their statistical tests responsibly. I would like to comment that the title of this paper is mildly disturbing, and could have been phrased in a more appealing way. 
------------------------------------------------------------------- Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays Daniel Vogel, Ravin Balakrishnan This paper claims to explore the design space of free hand point-and-click interactions, but does not give any definition for that space. A consequence of this is that the reader does not know how well the pointing and clicking techniques that they evaluated cover the scope of what is possible within the space of free-hand point-and-click techniques. That said, their exploration of related work on large display interactions was very extensive. One of the key problems identified by this paper regarding input to high resolution displays is that a fixed spatial relationship between the human and the display should not be assumed. The authors suggest that use of just the human hand will alleviate many of the problems that arise with dynamic distance interactions. One of their claims is simply that since the user does not need to hold a device to interact with the display, they cannot lose anything. Also, not having a device alleviates the problem of mapping the paradigm of distant interactions onto that of the touch-enabled surface. They begin their discussion of pointing gestures by citing research from social anthropology, an interesting use of work from another field. It was also interesting that the authors used a motion tracking system as an “enabling technology,” with the knowledge that accurate, marker-free hand motion tracking would be available soon. My immediate question from reading the clicking techniques section is why they didn't have a click gesture where the thumb and middle/index fingers meet. This provides bodily tactile feedback and is much less awkward than one-finger gestures. The features of their pointing gestures were not always clear to me. What did “clutching” accomplish?