Natural User Interface

From CS2610 Fall 2014

slides1 slides2



Reading Critiques

Wenchen Wang 14:46:36 11/19/2014

<Skinput: Appropriating the Body as an Input Surface> <Summary> This paper introduces Skinput, a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor. <Paper Review> The motivation of Skinput is that devices with significant computational power and capabilities can now be easily carried on our bodies. One option is to use an environmental surface as the input area; another is to use the body's own surface. Skinput takes the latter option. It explores the fingers, whole arm, and forearm as input areas, using a novel, non-invasive, wearable bio-acoustic sensor to sense finger taps. The bio-acoustic sensor is the key part of the system. When a finger taps the arm, acoustic energy is transmitted through the arm, and the bio-acoustic sensor is activated as the wave passes underneath it. I think the idea is very creative and novel, but Skinput still has some limitations. For example, when users use their arm as the input area, they have to expose their skin, which is inconvenient in winter. Another potential problem is that users have to wear the bio-acoustic sensor. However, I believe that when this technique is turned into a real product, these concerns may be handled nicely. <Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays> <Summary> This paper introduces a distant pointing and clicking technique for manipulating large, high-resolution displays. <Paper Review> The motivation of this paper is that we will soon have entire walls providing high-resolution visual output. These large, high-resolution displays call not only for close-up interaction but also for step-back manipulation. The desired characteristics of such a technique are accuracy, acquisition speed, pointing and selection speed, comfortable use, and smooth transitions between interaction distances.
They designed three pointing techniques: absolute-position finger ray casting, relative pointing with clutching, and a hybrid technique using ray casting for quick, coarse absolute pointing. Ray casting is a technique in which a ray extends from the tip of the finger and the cursor is positioned on the big screen where the ray intersects the screen. In relative pointing with clutching, when the user's hand is open, the user can do relative cursor control; when the user's hand is clutched, the cursor arrow swings to a dangling position. Hybrid RayToRelative pointing combines absolute-position ray casting and relative pointing with clutching: when users open their hand, they do relative cursor control, and when they point with a finger, they do absolute ray casting.
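The ray-casting mapping described above amounts to a ray-plane intersection. Below is a minimal sketch, assuming the display lies in the plane z = 0 and that hand tracking supplies a fingertip position and pointing direction; the function name and coordinate convention are illustrative, not from the paper:

```python
import numpy as np

def raycast_cursor(finger_tip, finger_dir, screen_z=0.0):
    """Project a pointing ray onto a display plane at z = screen_z.

    finger_tip: 3D fingertip position from the tracker.
    finger_dir: 3D pointing direction (need not be normalized).
    Returns the (x, y) cursor position where the ray hits the plane,
    or None if the user is pointing away from the display.
    """
    tip = np.asarray(finger_tip, dtype=float)
    d = np.asarray(finger_dir, dtype=float)
    if abs(d[2]) < 1e-9:          # ray parallel to the display plane
        return None
    t = (screen_z - tip[2]) / d[2]
    if t <= 0:                    # display is behind the pointing direction
        return None
    hit = tip + t * d
    return float(hit[0]), float(hit[1])

# Standing 2 m from the screen, pointing straight ahead lands at the origin:
print(raycast_cursor([0.0, 0.0, 2.0], [0.0, 0.0, -1.0]))  # (0.0, 0.0)
```

The jitter the hybrid technique compensates for comes from the fact that small angular errors in `finger_dir` are amplified by the distance `t` to the screen, which is why pure ray casting struggles with small targets.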

Qihang Chen 15:08:44 11/19/2014

The first paper, "Skinput: Appropriating the Body as an Input Surface", reports the results of tests on a new device the researchers have created that can interpret finger taps on a user's body as input by measuring acoustic vibrations within the body. The prototype device is worn on the arm and would give users a large input area available at all times. The researchers experimented with sensors detecting taps in a variety of different locations and observed promising results in all cases, with accuracy ratings for the prototype typically ranging between 80 and 90%. Trials were also performed while users were walking and jogging to judge the effect of background noise from other body motions. These trials also produced positive results, with even jogging producing only a little disruption. Finally, trials were conducted where users tapped different objects. It was found that the system could accurately distinguish between taps on different types of surfaces, for example tapping an LCD screen versus tapping another finger. In the second paper, "Distant Freehand Pointing and Clicking on Very Large High Resolution Displays", researchers propose methods of interacting with large displays from a distance using only the human hand. The researchers implemented their prototype by tracking sensors on the hand, but say this type of interaction should be possible without attached sensors in the future. The researchers evaluated three different techniques, Relative, RayToRelative, and RayCasting, and compared them to each other. RayCasting was found to have a much higher error rate, but the others were found to be effective. The researchers' system differs from other attempts to create similar systems by providing more feedback on the user's actions, while also warning users if a gesture they are making is ambiguous. These new techniques seem to have helped produce lower error rates when testing the new system.

Christopher Thomas 18:35:14 11/19/2014

2-3 Sentence Summary of Skinput: Appropriating the Body as an Input Surface: In this paper, the authors present a technology called Skinput, an input technique which allows the skin to be used as a finger input surface. The system is based on two principles of wave propagation from bio-acoustics, transverse wave propagation and longitudinal wave propagation, which are sensed by an array of highly tuned vibration sensors. The authors demonstrated considerably high classification accuracy. This paper was interesting from an HCI perspective because, in a sense, it makes the human body part of the computer. In other words, the approach exploits the conductance properties of the human body itself and leverages them into an input 'device.' By using the human body as the input device, the user doesn't have to use an external surface or device for input. In this way, the device "disappears," the ultimate goal for any HCI researcher. This technique opens up many application possibilities. Could, for instance, a cellphone in a pocket ultimately be able to detect the same type of bio-acoustic information without the need for any external sensors? When one considers this possibility, many new directions open up. When I first saw this paper, I thought this was a great idea in theory, but who would ever actually want to wear this huge sensor on their arm? Then one realizes that, with the advent of smart watches, this technology becomes remarkably realistic. Consider, for a moment, that the armband of the smartwatch contains the same bio-acoustic sensors. Now the smartwatch may be able to detect pressing and tapping of the hand. Could this approach be used instead of a touch screen to navigate a smartwatch? Perhaps it is lower power or more user friendly? Even more interestingly, this may provide a novel way for users to interact with a smart watch or smart armband without even looking at it.
One must remember that many touch screen devices use capacitive sensing; when users are wearing gloves in the winter, for instance, they are unable to use their device without taking off their gloves or using voice commands. For these users, changing the song playing to their headphones could become as easy as tapping a finger to their palm in a certain manner, even with gloves on. Thus, this type of sensing technology may help previous user interfaces (such as capacitive sensing screens) overcome many of their limitations. The authors also demonstrated the technology using a pico projector, projecting selections onto the user's hand which the user could then touch using just a finger. While this is interesting, I don't see the real novelty in this particular application (since the technology required an armband sensor and a pico projector). So, I believe the true innovation and utility of this approach is that existing interface approaches can be supplemented with it to improve their functionality under certain constrained use cases (such as the one I mentioned with the user wearing gloves). Since the paper demonstrated that bio-acoustic sensing is possible for this type of input, I wonder what other types of acoustic signals the system could gather (e.g., a growling stomach indicating hunger) which could help certain exercise or fitness systems, for example. 2-3 Sentence Summary of Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays: The authors present several techniques for interacting at a distance by using hand gestures. The gestures involve a clicking motion with the index finger and a movement of the thumb. The authors demonstrate the techniques experimentally and provide some example use cases.
The authors state in the paper that their work investigates pointing and clicking from a distance "using only the human hand." However, if one reads closely, the authors state (much deeper into the paper) that the users actually had "passive markers" attached to the hand so that their gestures could be seen. If I were reviewing this paper, I would tell the authors that they are overselling their case in the introduction, because they have not actually demonstrated that pointing and clicking can be done from a distance using only the hand; their approach required markers on the hand. While this distinction is subtle, it is important to mention. In terms of the HCI goals of the paper, this may not have been a big issue (thus, the gesture recognition task could even have been done via wizard-of-oz for all the participants knew), but I still disagree with the authors' claim early in the paper that they have demonstrated a pointing and clicking technique that functioned using only the user's hands. Aside from that, I liked how the paper was very clear about the expected hypotheses. For instance, the authors plainly stated that, "we expect the raycasting technique to be the fastest... but will have a higher error rate with small targets." I think it was interesting that the authors put their hypothesis plainly in the "goals" section of the experimental evaluation, which is something that should be done more often, in my opinion. Additionally, I found that the authors were very explicit with their experimental design. They explained very clearly that they were using a repeated measures within-participant factorial design. They also clearly identified the independent and dependent variables, and even provided an experimental design summary, which I thought was very useful for understanding what they were doing.
Though their techniques were well evaluated, I would have liked to see comparisons, or at least textual feedback from the users of the system, on alternatives, not just on these types of techniques. The authors had user feedback regarding these in-air pointing techniques, but it would have been interesting to offer users the choice of voice commands, for instance, or perhaps some other wireless pointing device, and then ask the users which they felt was the more useful and comfortable direction to move in. I think it was very important, however, that the authors also provided FEEDBACK to the users. One of the hallmarks of their technique was that it provided visual and auditory feedback for their clicking techniques to compensate for the lack of kinesthetic feedback present when using physical interfaces. I thought this was a very well-thought-out design decision, since we learned that providing feedback is the most important part of many HCI problems.

Mengsi Lou 21:23:18 11/19/2014

Skinput: Appropriating the Body as an Input Surface ----------This paper presents a technology, Skinput, that uses the skin as an input interface. The authors design a novel, wearable sensor for bio-acoustic signal acquisition. The aim of the project is to provide an always-available mobile input system, one that does not require a user to carry or pick up a device. ----------First the authors illustrate the background from bio-acoustics. When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves. There are two types of signals. One is transverse waves, generated by the displacement of the skin from a finger impact, which appear as ripples. The other is longitudinal waves, which travel through the soft tissues of the arm and excite the bone. The bone and the soft tissue respond differently to mechanical excitation, rotating and translating as rigid bodies. In sum, transverse waves move directly along the arm surface, while longitudinal waves move into and out of the bone through the soft tissues. ----------On the sensing side, the authors met some challenges. Commodity devices are typically engineered for capturing the human voice and filter out energy below the range of human speech, which means most sensors in this category are not especially sensitive to lower-frequency signals. They solved this by moving away from a single sensing element with a flat response curve to an array of highly tuned vibration sensors. The final armband prototype features two arrays of five sensing elements, incorporated into an armband form factor. They ran experiments for three conditions: finger locations, the whole arm, and the forearm. The results show that classification rates were high. ----------For me, the most appealing part is the supplemental experiments.
The authors also tested performance while a person is walking and jogging, because acoustically driven input techniques are often sensitive to environmental noise. This topic is highly relevant to our final project, designing effective vibration notifications for wrist devices: noise in the environment also strongly influences the efficiency of vibration notification. They designed the experiment for three input locations and tested each location's input performance while walking and jogging. The authors found that the chief cause of the accuracy decrease was the quality of the training data, although the noise generated by jogging almost certainly degraded the signal too. Thus, more rigorous collection of training data could yield even stronger results. //////////////////////////////////////////////// Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays ----------This paper discusses freehand pointing and clicking interaction with very large, high-resolution displays from a distance. ----------The authors created three pointing techniques and two clicking techniques. Here I mainly discuss the clicking techniques. The first is AirTap, which solves two main challenges. One is that there is no physical object to constrain the downward movement of the finger; the click-down recognition algorithm they designed therefore uses relative features of the downward finger motion. The second is the ambiguity of this style of finger movement; they adopted a simple calibration scheme that tuned the recognition parameters to a particular individual's clicking style, which narrowed the space of recognized clicks and reduced false positives in click recognition. The second technique is the thumb trigger: they implemented a thumb-trigger-style click where the thumb is moved in and out towards the index finger side of the hand. A third element is adjusting for the intended click point.
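The per-user calibration described for AirTap can be illustrated with a toy threshold scheme. This is a hypothetical sketch, not the paper's actual algorithm: it assumes the tracker reports a peak downward fingertip velocity for each candidate click, and derives a user-specific threshold from a few deliberate practice clicks recorded during calibration.

```python
def calibrate_click_threshold(sample_velocities, fraction=0.6):
    """Tune a per-user click threshold from a few practice clicks.

    sample_velocities: peak downward finger velocities recorded while
    the user performs deliberate AirTap-style clicks.  Setting the
    threshold to a fraction of the slowest deliberate click keeps
    ordinary hand tremor below it (illustrative scheme; the paper only
    says parameters are tuned to the individual's clicking style).
    """
    return fraction * min(sample_velocities)

def is_click(peak_downward_velocity, threshold):
    """Register a click only when the downward motion is fast enough."""
    return peak_downward_velocity >= threshold

# Calibrate on three practice clicks, then test two candidate motions:
th = calibrate_click_threshold([1.0, 1.2, 0.9])
print(is_click(0.8, th), is_click(0.3, th))  # True False
```

Tuning `fraction` per user is exactly the trade-off the critique mentions: a lower threshold catches lazier clicks but admits more false positives.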

Yanbing Xue 22:21:51 11/19/2014

The first paper is mainly about a system that leverages the acoustic properties of the body as a means of accepting input. The authors introduce a very interesting idea in this paper. Researchers have long explored the possibility of using computer vision to recognize input, or new ways of allowing users to manipulate a device to provide input; comparatively little has been done in the way of utilizing the body directly as the input medium. In particular, the authors resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. With an armband prototype of the wearable sensor, recognition accuracy was high, with an average of 87% over different conditions. The idea proposed in this paper is really amazing. Exploiting the human body surface as a natural input is quite new to me. One direction that I can think of is using a camera to record the location of a finger tap, similar to what we read about in metaDESK. However, the approach in this paper is totally different. The authors analyze the acoustic transmission when a finger taps the arm or hand to learn the location of the taps. This idea is made possible by the bio-acoustic properties of the human body. The authors thus propose their solution for always-available, robust input, based on sensing acoustic signals conducted through the skin. Specifically, this paper resolves location identification for finger taps on the flat skin surface, which is necessary and important for extending this input method into real mobile interaction applications. They use a fingerprinting approach: multiple sensors attached in different places identify the location of a tap based on supervised machine learning over signal features of segmented windows computed at run time. ========== The second paper is mainly about the feasibility of freehand pointing and clicking on very large, high-resolution displays from a distance.
The authors proposed three techniques for gestural pointing and two for clicking. The system uses motion tracking to identify the position of the hand and its digits. The design characteristics for the large, high-resolution display space are as follows: devices must be accurate and allow for reliable selection from afar and up close; acquisition speed should be fast due to the "on again, off again" nature of interaction; pointing and selection must be fast; the device must be comfortable to use; and there should be a smooth transition between interaction distances. The system uses a Vicon motion tracking system to get accurate and fast position information from the hand. Their experimentation produced interesting results for mean selection time and error rate, uncovering interesting relationships between the effectiveness of pointing and selection techniques and different target distances and sizes. Returning to the first paper, its results show an average accuracy across conditions of 87.6%, and the system performs very well for a series of gestures, even when the body is in motion. This is the highlight of that paper, I think, since the authors chose an appropriate part of the body to locate taps on, which is critical in sensing. Overall, they built an interesting sensor into an armband, which can detect and localize finger taps on the forearm and hand. Is this a kind of ubiquitous computing?

Wei Guo 23:04:28 11/19/2014

Reading Critique The paper Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays introduces a technique for pointing and clicking from a distance using only the human hand. The paper surveys previous work on distant pointing devices and analyzes the advantages and problems of each. It then describes two techniques for clicking and three techniques for pointing, and uses experiments to evaluate them. Skinput: Appropriating the Body as an Input Surface designs a sensor and successfully uses it to determine the location of finger taps on the body, in order to use the body as an input surface. An experiment is then run to test the result. The conclusion is that the device has high accuracy. For the second paper: the design of the device is cool, although it builds on another device. The point of using skin as an input surface is, in my understanding, to avoid carrying devices and to be natural, yet it is not natural at all to wear a Skinput armband while doing the tasks. Still, the device can be made smaller and smaller, and I like the way they use the arm not only as an input surface but also as an output surface. I have one question: is the subtle acoustic energy affected by speech nearby, or by sounds inside the person? For the first paper: from the references, we can tell that the paper was written after 2005. Since the Kinect for Xbox has already achieved freehand clicking and pointing from a distance, this must have been written before the Kinect's release; probably the Kinect was even inspired by work like this. The previous-work section of the paper summarizes the distant-control work up to 2005, including flying mice, touch pens, eye tracking, body and hand tracking, and "gloves"; each of them has an undeniable problem, such as inaccuracy or the lack of a clicking state. The two clicking techniques introduced in this paper use the index finger and the thumb.
The three pointing techniques introduced in this paper are absolute-position finger ray casting, relative pointing with clutching, and a hybrid technique. Both clicking techniques use fingers. Of the three pointing techniques, the third seems to be better, judging from the comparison graphs.

phuongpham 23:14:18 11/19/2014

Skinput: Appropriating the body as an input surface: the paper presents a new input surface using human skin. The authors have built a new sensor device instead of using off-the-shelf ones. Evaluation results show the device works well. As I remember, professor Jingtao has mentioned this work, and there was a demo showing an e-guitar being played using the device. I think the authors have spent time making the device more accurate so it could become a commercial product. It seems that the authors have not said much about the limitations of the work, as they "promised" in the introduction section. However, the project has many interesting open questions, such as predicting the tapped surface and identifying which finger tapped. At this moment, the approach focuses on the clicking (tapping) operation; I wonder if we can detect other operations such as dragging or drawing, which would open new opportunities for interaction. ***Distant freehand pointing and clicking on very large, high resolution displays: the authors have evaluated several freehand pointing and clicking techniques. The introduction is written in a style that presents the current problem, then offers some solutions and analyses the evaluation. I like that the design emphasizes system feedback: almost all the proposed techniques come with a feedback mechanism. The paper also mentions the proposed techniques' weaknesses as well as their strengths. I think the approach has potential, especially if the system can support multiple users. With a large display, it's easier to get many people involved, and we can identify as well as solve new challenges.

Nick Katsipoulakis 23:15:59 11/19/2014

Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays :: In this paper a novel approach for remote pointing on large displays is presented. The authors are motivated by the fact that displays are becoming bigger and the need for remote pointing arises. A prototype is built that enables users to control a pointer using only their hands. Since kinesthetic feedback is absent in the proposed approach, the authors employed visual and audio methods for notifying the user. In addition, two algorithms for clicking have been implemented: AirTap and ThumbTrigger. The former recognizes a click as a movement of the index finger, whereas the latter tracks the thumb's position relative to the hand. Three different pointing techniques are also presented. First, ray casting, which builds on the natural pointing ability of human beings and moves the cursor to the point on the display at which the user's finger is aimed. Next, relative pointing aims to make pointing easier and less tiring for the user. Finally, a hybrid method is also presented. The authors conducted an extended user study in order to compare the different approaches on completion time, error rate, and recalibration. Useful results are gathered from their study and their idea is validated. //-------------------------------END OF FIRST CRITIQUE ----------------------------/// Skinput: Appropriating the Body as an Input Surface :: Skinput is a novel prototype for using the human body as an input interface. It uses the waves produced by finger taps on the user's hand and arm, and through machine learning different commands are issued to a system. A prototype armband has been built that is enhanced with sensors for monitoring vibrations on the skin. The authors had to perform a considerable amount of data analysis so that the system is finely tuned.
After gathering samples of different finger taps from a number of people, they trained an SVM to recognize the waves and vibrations produced by different taps. Even though this work appears novel and fascinating, I do not see its particular use. In fact, I see it as a fancier way of using the sense of touch in order to operate a system.
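The supervised pipeline described here (gather labeled taps per user, then classify new taps) can be sketched with synthetic data. The paper trains an SVM on features from its sensor array; the snippet below substitutes a dependency-light nearest-centroid classifier and made-up feature vectors purely to illustrate the train-then-classify flow, so the feature model and function names are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_tap(location, n_sensors=10):
    """Synthetic stand-in for one tap's feature vector: acoustic
    energy peaks at the sensor nearest the tapped location, plus
    a little noise (real Skinput features come from an array of
    tuned vibration sensors)."""
    v = np.zeros(n_sensors)
    v[location] = 1.0
    return v + 0.05 * rng.standard_normal(n_sensors)

# Per-user training phase: collect labeled taps at each candidate location.
locations = [0, 3, 7]
X = np.array([fake_tap(loc) for loc in locations * 20])
y = np.array(locations * 20)

# Nearest-centroid classifier as a stand-in for the paper's SVM:
# classify a new tap by the closest class mean in feature space.
centroids = {loc: X[y == loc].mean(axis=0) for loc in locations}

def classify(tap):
    return min(centroids, key=lambda loc: np.linalg.norm(tap - centroids[loc]))

print(classify(fake_tap(3)))  # recovers location 3
```

Training per user, as the authors did, sidesteps anatomical variation between people at the cost of a short calibration session for each new wearer.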

yubo feng 23:25:58 11/19/2014

In this paper, "Skinput", the authors provide a new way to supply input: using the user's skin as a surface, and accessing the computer via a device worn on the user's arm. In particular, the authors resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. Signals are collected using a novel array of sensors worn as an armband. The approach provides an always-available, naturally portable, on-body finger input system. The authors assess the capabilities, accuracy, and limitations of their technique through a two-part, twenty-participant user study. To further illustrate the utility of the approach, the authors conclude with several proof-of-concept applications they developed. Results from their experiments show that the system performs very well for a series of gestures, even when the body is in motion. Additionally, the authors present initial results demonstrating other potential uses of the approach, which they hope to explore further in future work; these include single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. They conclude with descriptions of several prototype applications that demonstrate the rich design space they believe Skinput enables.

Longhao Li 23:31:02 11/19/2014

Critique for Skinput: Appropriating the Body as an Input Surface This paper talked about a new input surface: human skin. The author talked in detail about how to use vibration as input on skin, and about the experimental results, which show the high accuracy of this method. This paper is great, in my opinion. The reason is that it presents a possible approach that could take mobile computing to a new level. People may not need to carry a device with them in the future; users can get the benefit of mobile computing at any time. The approach is based on a device worn by the user that collects vibrations. It does not look that advanced yet, but I think in the future it can be better. The principle behind the system is that when people tap different positions on the skin, the vibrations generated are different, so they can be used as an input signal. Treating the skin as a screen and projecting an image onto it, users can have a skin touch surface. Based on the experimental results, this method turns out to have very high accuracy. I believe it can be popular in the future. After reading this paper, I am amazed by this input method, but I still think it has some limitations. Everything is good for something and worse for something else, so it is still a good idea anyway. The first weakness is that if skin is used as the surface, it may not work well outside in winter, since people don't want to expose their skin in cold weather; maybe operating through clothing could help solve this problem. The other is the projection system: a wearable projection system would be needed to make the approach practical, but I think that can be solved soon. Critique for Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays In general, this paper talked about methods that use in-air hand gestures to perform mouse operations on a very large screen.
The authors show a number of ways to simulate mouse controls, along with experiments on their performance. In this paper, two major achievements are introduced. One is in-air hand-gesture control that simulates mouse operations; the other is achieving these operations on a very large, high-resolution display. The most important achievement is the in-air hand-gesture control. The authors presented several different types of hand gestures for mouse operations, using the thumb and index finger; this is a mapping from traditional mouse operation. All of the methods they invented are very usable, but there is no significant difference among them. All of them can be used for very small targets, so by combining them with a high-resolution display, users can achieve very accurate operation. Nowadays, screen size and resolution are increasing dramatically. The newest iMac has a 5K screen with a resolution similar to the big screen used in this project, and big screens are becoming popular in the modern home. If this method comes into common use, I think computer operation will jump to a new stage. For ordinary use of a computer, I think this method will be great: people will not need to spend a long time learning it, because they will be working in a natural way.

Xiaoyu Ge 23:32:28 11/19/2014

Skinput: Appropriating the Body as an Input Surface This paper introduced the Skinput method, which turns the human body into an input surface. The authors built a wearable bio-acoustic sensing array into an armband to detect and localize finger taps on the forearm and hand. The finger-tap sensing is based on the acoustic energy generated when a finger taps the skin; both transverse wave propagation and longitudinal wave propagation are used for sensing. The authors built a prototype and ran a user experiment. The idea of using human skin as an input surface is very innovative; no such product yet exists in the market, and this innovation could let people use electronic devices without carrying a phone or tablet. However, there will be issues. As for the limitations of the skin surface, the area of the forearm varies greatly among different people, and the color of the skin itself may make projected words or numbers unreadable. And since the interface is projected onto the skin of the forearm, the person can't use both hands to touch the screen, since one hand is already being used as the input surface. Still, using an already-existing surface as an input surface is a really innovative thought. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper introduced a prototype of clicking and pointing technology for very large, high-resolution screens, since prices will decrease and such screens will be widely used. In order to manipulate detailed information on a large screen, the paper introduces a gesture-based interface. The paper introduced three different techniques for hand-position recalibration and gesture recognition: RayCasting, Relative Pointing with Clutching, and Hybrid RayToRelative Pointing. With the help of user interaction studies, the technology was evaluated on speed, accuracy, and ease of use.
It makes use of visual and auditory feedback to compensate for the lack of the kinesthetic feedback of clicking a physical button, and alerts the user about ambiguous gestures. The interface introduced in this paper is really innovative and useful. There is no similar product in the market nowadays, but the trend that more and more users will have access to larger and larger high-resolution screens is real. In that case, this product will be on trend, and since the experimental results are good, it could be utilized in other areas as well.

Bhavin Modi 23:40:44 11/19/2014

Reading Critique on Skinput: Appropriating the Body as an Input Surface The paper can be summarized as identifying new sources of input, in this case skin as an input channel. Experiments are conducted to measure the bio-acoustics and viability of such input. We move on to natural user interfaces: interfaces that utilize the human body for input and output. The authors plan to utilize our ability to know where we are touching our body without even seeing it, as we are physically aware of ourselves, combined with an input channel that is natural and ubiquitous. To find out the viability of such an endeavour, a lot of discussion goes into the setup: how can we detect finger taps on the skin and classify them accurately? We learn a lot about bio-acoustics and how vibrations are transmitted through skin, flesh, and bone; about the frequency range that must be detected; and about the building of the arm-worn prototype for sensing. Moving on, the different experimental setups detect finger taps by placing the arm sensor below and above the elbow, collecting data for the fingers, the whole arm, and the forearm. Using a brute-force machine learning setup (SVMs), the input is classified to test whether the location of the tap can be detected. The accuracy results are admirable, at around 87%. After establishing that this is indeed possible, we see some examples of live applications of Skinput: projecting data onto the human hand or arm with a pico projector, and using Skinput for menu selection, scrolling, and dialling on a keypad. This concept looks like it came straight out of a science fiction movie; though the paper is about exploring new natural input channels, the practical feasibility of such technology is questionable. First, projecting images onto the arm may be a problem, because people have hair on their arms.
Then we have weather conditions, so clothes and jackets will be covering the arms mostly, so some knowledge of implementing this elsewhere on the body is needed; the problems of using the bone near the ears have been discussed. Wearing a bulky sensor is another concern. This is some exciting technology and introduces us to a new domain of human computer interfaces. -------------------------------------------------------------------------------------------------------- Reading Critique on Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays The paper summarizes the existing gesture recognition techniques and tries to resolve some of their drawbacks, introducing the new Relative and RayToRelative methods that use inherent human movement tendencies. We begin with an introduction to the need for gesture recognition and its general idea of interacting with large displays from a distance. The existing techniques include hand-held pointing devices, laser devices (ray pointing), eye tracking, direct hand pointing, hand and body tracking in virtual environments, and selection with the hand. The authors learn from these techniques, utilize some of their advantages, and figure out ways to overcome existing disadvantages: jitter in ray casting and direct hand pointing; use of the body, which can lead to fatigue; devices that are not natural to use, can be misplaced, and create problems when close-up interaction is required; and the lack of kinesthetic feedback. The solutions proposed involve using sound and visual feedback to make up for the lack of kinesthetic feedback. A recalibration technique is also applied to account for jitter, and pointing is relative, corresponding to a particular area. After carrying out many experiments, the results were that distance and recalibration efficiency had an effect on performance, and that Relative pointing and RayToRelative performed about the same.
The index finger was used to point while clicking with the AirTap method described, and the thumb with the in-and-out ThumbTrigger motion. The authors have tried to use natural affordances to make the interaction intuitive and reduce the number of false positives, which arise when you use hover-time target selection in dense areas. The two papers today teach us two repeated lessons: we can always borrow from other fields to create exciting new technology (Skinput), and we can always learn from previous techniques and improve upon existing ideas. The ideas discussed here have no practical implementation yet, as mentioned by the authors, and are purely for research purposes until we figure out how to take care of the additional sensor requirements; the results, though not all significant, give us some idea of areas already explored that yielded no fruit.
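The SVM classification step described in the critique above can be sketched with a much simpler stand-in: a nearest-centroid classifier over per-tap feature vectors. The paper trains a Support Vector Machine on 186 acoustic features per tap; the classifier choice, the location labels, and the synthetic data below are illustrative assumptions, not the authors' implementation.

```python
# Toy stand-in for Skinput's tap-location classification: nearest-centroid
# over synthetic 186-dimensional feature vectors (the paper uses an SVM).
import math
import random

random.seed(0)
N_FEATURES = 186                      # features per tap, as in the paper
LOCATIONS = ["thumb", "index", "middle", "ring", "pinky"]

def make_tap(center, n=N_FEATURES):
    """Synthetic feature vector clustered around a per-location center."""
    return [random.gauss(center, 0.5) for _ in range(n)]

# Training: 20 synthetic taps per location, then one centroid per location.
training = {loc: [make_tap(i) for _ in range(20)] for i, loc in enumerate(LOCATIONS)}
centroids = {
    loc: [sum(col) / len(col) for col in zip(*taps)]
    for loc, taps in training.items()
}

def classify(tap):
    """Return the location whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda loc: math.dist(tap, centroids[loc]))

print(classify(make_tap(2)))  # prints "middle"
```

With 186 dimensions the clusters are well separated, which is loosely why the paper's real classifier can reach high accuracy from many weak acoustic features.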

Yingjie Tang 1:12:19 11/20/2014

“Skinput: Appropriating the Body as an Input Surface” is a paper which proposes a robust approach to the problem caused by the limitation of screen size. As we all know, the power of the smartphone is becoming stronger and stronger. However, the screen size becomes a bottleneck that keeps it from fully showing its functions. There exist some solutions to this problem: some researchers try to project the buttons onto tables, and it seemed to be a valid solution. However, this kind of solution is based on the assumption that there will always be a table or a smooth surface for the user to project onto. The researchers in this paper break this assumption and note that people will not always want to carry appropriate surfaces with them, and thus they propose that we can use the human body as an input device. During their study, they tried to find a valid transducer that can detect low-frequency vibration. This caused them a lot of trouble because existing transducers were engineered to provide a flat response rather than to pick up the low-frequency tap signals transmitted through the human body. Finally, they came to the idea that they could add more sensors and add weight to the cantilevers in order to adjust the resonant frequency. I think this is a valid solution to many existing problems: although we cannot achieve the performance with a single sensor or device, we can adopt many more sensors to collect more data. I remember there is a project which solves the problem that people cannot have eye-to-eye communication through video meetings; the researchers added more cameras and more monitors and thus provided the users with a face-to-face communication effect. This is the first paper that I have read on Natural User Interfaces, and I feel that in order to do a good job on natural user interfaces the researchers must have a solid background in biology. In this paper, the researchers have a clear understanding of the structure of human tissues.
The implementation itself is not hard, but we need to analyze the data carefully and analyze both the strengths and weaknesses of our research.—————————————————————————————————————— “Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays” is a paper which proposes three techniques to solve the problem of pointing at and clicking objects on large high-resolution displays from a distance. The first sentence of the article already frames where the problem comes from: since performance is getting better and better and prices cheaper and cheaper, very large displays will be widely used, and thus we need techniques to accurately point at and click objects. Their work borrows the idea from social science that people use seven hand gestures when pointing at real objects, so they came to the idea of using hand gestures to point and click. Personally, I think there exists a research opportunity to use a Support Vector Machine to learn people's gestures while they are talking and to make use of those gestures to dynamically adjust the display. There are also some things I learned from the experiment part. Our class project is to some extent the same as this article: we both proposed new approaches to solve a pointing problem. They use the recalibration frequency and the recalibration time as measured values; we didn't take that into account, so this could be added to our study. They also came up against the problem that the fingers move involuntarily and thus add noise to the selection. The way they solve this problem is to adopt relative movement of the fingers. This is useful; we ran into the same kind of problem in our class project, where absolute movement on the back panel caused errors since the user was totally occluded, and we adopted relative movement and just dragged the screen relatively.

Brandon Jennings 1:14:37 11/20/2014

The Skinput paper investigates the well-known concept of wearable computing. A major contribution this paper makes is the ability to sense the body over a large area, as opposed to simply the specific spot where the device is. This device is able to leverage the natural acoustic conduction properties of the human body to provide an input system. This reduces the conscious effort the user needs to provide input to the system. Using the intrinsic behavior of the human body is more efficient and is general enough to be applicable in many implementations. Another important development is the ability to interact with technology via your body. The interactive projector armband can pave the way for interactive embedded bio devices. Projections can be made onto the skin and a person would be able to make selections by touching their body. The space of applications for wall displays is endless. Because of this, a thriving area in human computer interaction is interfacing with such displays. The ideal interface would be a freehand pointing interface. This allows a user to control the display from anywhere in a room and be interactive with both the device and the audience. Such a feature could also be expanded to allow for multiple users and for users to exchange control, such as a multi-person presentation where control of the presentation can change hands.

Xiyao Yin 8:12:37 11/20/2014

‘Skinput: Appropriating the Body as an Input Surface’ presents work on Skinput, a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor. Facing the difficulty of making buttons and screens larger without losing the primary benefit of small size, the authors consider alternative approaches that enhance interactions with small mobile systems. This paper describes the design of a novel, wearable sensor for bio-acoustic signal acquisition and an analysis approach that enables the authors' system to resolve the location of finger taps on the body. The experiment is good in that it has an almost even number of male and female participants. Another important point in this experiment is the three types of location sets evaluated in the study (fingers, whole arm, forearm); these sets contain different parts of the arm and hand and the relationships between them. Supplemental experiments are also efficient in this study. Results from the experiments have shown that the authors' system performs very well for a series of gestures, even when the body is in motion. ‘Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays’ explores the design space with analysis of three techniques for gestural pointing and two for clicking. This paper investigates potential techniques for pointing and clicking from a distance using only the human hand. It designed three pointing techniques: absolute position finger ray casting, relative pointing with clutching, and a hybrid technique using ray casting for quick absolute coarse pointing combined with relative pointing when more precision is desired. Results show that although RayCasting was faster in tasks, its high error rates prevent it from being a practical technique; the Relative and RayToRelative techniques performed almost the same, with low error rates.
One good point of this paper is that it uses visual and auditory feedback in its clicking techniques to compensate for the lack of kinesthetic feedback typically present when clicking a physical button. This could also be a hint for the evaluation in our project.

Jose Michael Joseph 8:39:14 11/20/2014

Skinput: Appropriating the Body as an Input Surface This paper is about using the skin as an input device for acoustic transmission. The authors argue that we cannot make buttons and screens larger without losing the primary benefit of small size. Thus they use a medium for input that is always present with us and hasn't been used before – our skin. This is a beneficial means of input because there is roughly two square meters of surface area, and we can easily point to any location on our skin without having to think a lot about where it is, which means that locations on our skin are mapped heavily in our brain. The authors state that when a location on the skin is tapped it produces transverse and longitudinal waves. The intensity of both these waves depends on a number of factors, one of which is whether the region that was tapped contained mainly soft tissue or bone. The authors finally decided on an armband prototype. It was placed on the upper arm so that acoustic information from the fleshy bicep, in addition to the firmer area of the underside of the arm, could be collected. Each location thus provides significantly different acoustic coverage, and this can be used to differentiate the input location. The authors then conducted experiments on a number of participants. The average accuracy across conditions was 87.6%. The five-finger condition averaged 87.7%, and segmentation was perfect. The participants then used the whole-arm approach, which gave an accuracy of 95.5%; this dropped to 88.3% when the sensor was moved above the elbow. The eyes-free version yielded 85%, which is also quite impressive. The accuracy for the forearm conditions was 81.5%. The study also found that those with high BMI produced the lowest average accuracies. This opens the application up to trouble, as its performance varies significantly from individual to individual. Unless a standardization of responses is possible, it would be hard to find a market for such a product.

Jose Michael Joseph 8:39:58 11/20/2014

Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper talks about using very large high resolution displays for freehand pointing and clicking. It uses three techniques for gestural pointing and two techniques for clicking. The authors first state that the common mouse has the luxury of assuming that the user will always be close to the screen. This is not the case for the system they are building, in which the user will have to manipulate the screen while standing far away. Thus the pointing method must be able to move with the user and instantly perform spatial selections. The authors thus state that the design characteristics for pointing and selecting devices for very large high resolution screens are accuracy, acquisition speed, pointing and selection speed, comfortable use and smooth transition between interaction distances. The authors then discuss the various previous works that have already been carried out in this field and state the results from these works that are beneficial for the authors' own study. Hence they used the research done before them as a base for their own research. This is an excellent technique, as we have repeatedly seen that researchers must stand on the shoulders of other researchers. The authors then contemplate the various different techniques that can be used for pointing and finally state that the hand seems the optimal one. They also discuss a method by which a laser ray emitted from the user's hand could be used to make selections. The drawback with this method is that it is inaccurate for smaller objects due to the hand's jitter. The authors created two clicking techniques, one for the index finger and the other for the thumb. The system supports single clicks, double clicks and drags. They also use visual and auditory feedback to replace the lost kinesthetic feedback.
To give visual feedback they show a short animated progression of a medium-sized square shrinking and disappearing at the region of clicking. Such things are implemented for every action. The AirTap method had two significant challenges. The first was that there was no physical object to constrain and thus define the motion; the algorithm that they used had to work this out by understanding the relative acceleration and velocity of the finger. The second challenge was the ambiguity of gestures, as there are situations in which natural movement is comparable to the gestures required for clicking. The authors used a calibration scheme to calibrate the system for each individual user, and thus this problem was resolved. But that again could be a problem, since each user would require some calibration, and this information would either have to be stored in the system or the user would have to calibrate the system each time. If it were to be stored in the system, then for a large number of users such information would take up considerable space and make the product less viable. The authors also used ThumbTrigger for further gesture identification. The authors tested RayCasting and found that although it was faster than clutching it had high error rates and thus could not be a practical technique. The authors also state that the current selection method could possibly use another input modality, such as eye gaze, to further refine selection times. This is an interesting suggestion, as such an implementation could indeed raise the accuracy and also be more natural to the user.
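The first AirTap challenge described above, detecting a click from finger motion alone, can be sketched as a toy detector that looks for a fast downward stroke followed by an upward return using per-frame velocity thresholds. The threshold values, units, and input trace are made up for illustration; the paper's actual detector is more sophisticated (it also reasons about acceleration and per-user calibration).

```python
# Toy AirTap-style click detector: find down-then-up finger strokes in a
# sequence of finger-tip heights sampled once per frame.

TAP_DOWN_VEL = -0.8   # assumed downward-velocity threshold (units/frame)
TAP_UP_VEL = 0.8      # assumed upward-velocity threshold

def detect_airtaps(heights):
    """Return frame indices where a down-then-up finger motion completes."""
    taps, going_down = [], False
    for i in range(1, len(heights)):
        vel = heights[i] - heights[i - 1]      # per-frame vertical velocity
        if vel <= TAP_DOWN_VEL:
            going_down = True                  # fast downward stroke seen
        elif going_down and vel >= TAP_UP_VEL:
            taps.append(i)                     # upward return completes tap
            going_down = False
    return taps

# Finger height over time: hover, first tap (down-up), hover, second tap.
trace = [5.0, 5.0, 4.0, 3.0, 4.1, 5.0, 5.0, 4.0, 5.0, 5.0]
print(detect_airtaps(trace))  # prints [4, 8]
```

The ambiguity problem the critique mentions shows up directly here: any natural gesture containing a fast down-up motion would also fire, which is why the authors needed per-user calibration.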

changsheng liu 8:56:48 11/20/2014

Skinput: Appropriating the Body as an Input Surface This paper introduces Skinput, an input capture technology that uses the acoustic transmission properties of human skin. The user wears a device on their arm that contains a variety of sensors. When the user taps on their own body, these sensors can pick up the mechanical vibrations that occur. The authors present this concept as a mobile device so that the user can efficiently input data on the go. The argument is that because the human body contains roughly two square meters of surface area that provides feedback in the form of proprioception, the body can be used as an eyes-free input device. Skinput uses bio-acoustic sensors to achieve this input. The armband contains two sensor packages with 5 sensors each. The sensors are tuned to different frequencies. Using these different sensors, the approximate location of a tap on the body can be discerned. The authors utilize a machine learning approach to sample classification, with 186 features making up each sample. A Support Vector Machine is built based on the training data. The experiment consisted of tapping the body at various points. The authors managed an accuracy rate of nearly 90% when attempting to classify which finger had been tapped against the thumb. Flicking increased this to nearly 97%. This is an exciting technology, especially when paired with mobile devices. The small screen area of a mobile device necessitates a similarly small area of input. If the full body could be utilized, not only could the area of input be increased dramatically, but there would be built-in feedback based on the sense of proprioception. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays This paper investigates freehand point-and-click interaction with large displays. While the mouse and keyboard are very convenient while sitting down, they are not as easily used when standing and using a large screen.
If one wishes to point to the screen while talking, they must get up and point, then sit back down to continue moving the cursor. If a better pointing method could be devised, then very large high resolution displays could replace whiteboards for group demonstrations. The authors present three techniques to solve the problem of pointing, and two for clicking. The pointing techniques were hand postures, raycasting, and relative pointing while the clicking techniques were thumbtrigger and airtap. The authors tested various combinations of pointing and clicking techniques in order to figure out which had the most synergy. Using a Fitts' Law based study, the authors could determine the speed and accuracy of each technique, as well as the comfort and ease-of-use. The authors found that some techniques were fast but had high error rates, while others were slower but had more acceptable error rates. The Relative and RayToRelative techniques performed similarly well, and also had low error rates. Therefore these are two that should be studied more. I believe that this is a practical and worthy area of research. The potential to use large, high resolution surfaces as a replacement for a whiteboard is very exciting.
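The Fitts' Law study mentioned above compares techniques by speed and accuracy. A quick sketch of the underlying model: a pointing task's index of difficulty (ID) in bits, and the predicted movement time MT = a + b * ID. The regression constants a and b below are invented for illustration; the paper fits its own values per technique.

```python
# Fitts' Law sketch: harder targets (far away, small) have higher ID,
# and movement time grows linearly with ID.
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(distance / width + 1), in bits."""
    return math.log2(distance / width + 1)

def predicted_time(distance, width, a=0.2, b=0.15):
    """Movement time in seconds for hypothetical regression constants a, b."""
    return a + b * index_of_difficulty(distance, width)

# A far, small target is "harder" (more bits) than a near, large one.
print(round(index_of_difficulty(1024, 32), 2))  # prints 5.04
print(round(index_of_difficulty(128, 64), 2))   # prints 1.58
```

Comparing fitted a and b values across techniques is what lets a study like this say one technique is faster than another independently of the particular targets used.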

Eric Gratta 8:57:08 11/20/2014

Skinput: Appropriating the Body as an Input Surface Chris Harrison, Desney Tan, Dan Morris This paper introduces a new input modality that uses the skin’s acoustic properties to allow direct input via the skin. Use of the skin as input is classified by the paper along with other work using “biological signals” for input. This research is novel in that the biological signal, bodily acoustics, can be consciously manipulated, as opposed to involuntary signals like heart rate. The authors presented a significant amount of related work for each of the advantages that they claimed for Skinput: it is always available, bio-sensing, and uses acoustic input. Identifying these properties required that the authors reference relevant research on human anatomy, which was a really interesting use of work from another field. Figure 3 was especially helpful for demonstrating the acoustic results of tapping your arm, as well as what properties their device is capable of capturing. Their experimental setup was particularly interesting because they began addressing the issue of training the system on users with vastly different body types. They recruited about an even number of men and women, had a range of ages, and had at least one normal and one obese subject. Their experiment was also extensive in that it covered a wide range of possibilities for sensing taps on the arm; they included taps to locations on the hand as well as across the whole arm, changed the location of the sensor to see what was feasible, and even had users walk or jog, getting surprisingly high accuracy in the end. The study was extremely thorough and the authors were careful to describe the results of their statistical tests responsibly. I would like to comment that the title of this paper is mildly disturbing, and could have been phrased in a more appealing way. 
------------------------------------------------------------------- Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays Daniel Vogel, Ravin Balakrishnan This paper claims to explore the design space of free hand point-and-click interactions, but does not give any definition for that space. A consequence of this is that the reader does not know how well the pointing and clicking techniques that they evaluated cover the scope of what is possible within the space of free-hand point-and-click techniques. That said, their exploration of related work on large display interactions was very extensive. One of the key problems identified by this paper regarding input to high resolution displays is that a fixed spatial relationship between the human and the display should not be assumed. The authors suggest that use of just the human hand will alleviate many of the problems that arise with dynamic distance interactions. One of their claims is simply that since the user does not need to hold a device to interact with the display, they cannot lose anything. Also, not having a device alleviates the problem of mapping the paradigm of the distant interactions with the touch-enabled surface. They begin their discussion of pointing gestures by citing research from social anthropology, interesting use of work from another field. It was also interesting that the authors used a motion tracking system as an “enabling technology,” with the knowledge that accurate, marker-free hand motion tracking would be available soon. My immediate question from reading the clicking techniques section is why they didn’t have a click gesture where the thumb and middle/index fingers meet? This provides bodily tactile feedback and is much less awkward than one-finger gestures. The features of their pointing gestures were not always clear to me. What did “clutching” accomplish?

Qiao Zhang 9:12:26 11/20/2014

Skinput: Appropriating the Body as an Input Surface In this paper, the authors resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. They use an armband prototype to capture the resonances propagated through the human body. To resolve ambiguous opening and closing actions, two thresholds are set to filter out unintentional events. They use an SVM model to analyze the input waveforms; 186 features are fed into the model, and the output is the location of the taps. The accuracy can be as high as 87.6%. Their approach utilizes the two square meters of external surface area and proprioception: the ability to accurately interact without visual cognitive effort. The technique proposed in this paper of using the human body as an extended input is very interesting to me. It provides an always available, naturally portable, on-body finger input system which looks very high-tech and sci-fi. One limitation, though, is that this technique still requires extra equipment carried by the user: the armband is required, but a projector might be eliminated in some applications. However, I still think it would work better with a projector to give visual feedback to the user and save the user from remembering the mapping between input locations and functions. The high accuracy of this input method is very impressive, especially its performance during walking and jogging. Such high accuracy could potentially be adapted to some special scenarios such as military use. A potential shortcoming of this technique is discussed in the paper: results show that high BMI is correlated with decreased accuracies. But as a research prototype, this technique is highly impressive in every aspect. ====================================== Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays In this paper, the authors present an in-air gesture system that supports pointing and clicking.
To compensate for the lack of kinesthetic feedback, it also gives intuitive visual and auditory feedback. This approach offers direct manipulation on very large displays and a natural way to interact with them beyond the traditional mouse approach. In summary, this research has several contributions: 1) It motivates the need for facile pointing and clicking techniques for interacting with large displays from a distance. 2) It identifies desirable characteristics for such techniques. 3) It develops and evaluates new pointing and clutching techniques that leverage the simplicity and inherent human ability to point with a hand. 4) It uses visual and auditory feedback rather than kinesthetic feedback. 5) Results show that the error rate is on the same level as typical mice. This research prototype has a high chance of being adapted to large displays, as they become cheaper and more popular in the future.
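The two-threshold filtering of unintentional events described in the critique above can be sketched as a simple band-pass rule on tap intensity: a candidate event is kept only if its peak amplitude falls between a lower threshold (too weak = noise) and an upper one (too strong = accidental knock). The threshold values and the input data are illustrative assumptions, not the paper's actual parameters.

```python
# Toy two-threshold segmentation of candidate tap events by peak amplitude.

LOWER = 0.2   # below this: ambient noise, ignore (assumed value)
UPPER = 3.0   # above this: accidental impact, ignore (assumed value)

def segment_taps(peaks):
    """Keep only peak amplitudes that look like intentional finger taps."""
    return [p for p in peaks if LOWER <= p <= UPPER]

# Stream of detected peaks: noise, a real tap, a hard knock, two real taps.
print(segment_taps([0.05, 1.1, 4.2, 0.9, 2.5]))  # prints [1.1, 0.9, 2.5]
```

Everything that survives this cheap filter would then be handed to the expensive classifier, which is the usual division of labor in a segmentation-then-classification pipeline.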

Vivek Punjabi 9:43:08 11/20/2014

Skinput: Appropriating the Body as an Input Surface: The authors have presented a new technology, Skinput, to appropriate the human body for acoustic transmission, allowing the skin to be used as an input device. They have built an armband which is a wearable, bio-acoustic sensing array. They have employed a new audio interface to digitally capture data from the ten sensors, and applied segmentation and machine learning algorithms which help them solve the problem of finding the location of finger taps on the body. They then conducted a two-part, 20-participant user study giving an average accuracy of 87.6%. Apart from the several input locations tested, the experiment was also conducted while walking/jogging, with gestures and object recognition. Finally, they present three proof-of-concept projected interfaces to illustrate the utility of coupling projection and finger input on the body. It provides a very innovative solution for using the body as an input surface and resolving the location of finger taps on the body. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays: The authors have developed techniques for gestural pointing and clicking to carry out freehand pointing and clicking interaction on very large high resolution displays from a long distance. For these devices, the characteristics required are accuracy, acquisition speed, pointing and selection speed, and comfortable use. They tried many different gestures and postures for carrying out the clicking techniques and found AirTap and ThumbTrigger to be the best. The pointing techniques developed are: absolute position finger ray casting, relative pointing with clutching, and a hybrid technique using ray casting for quick absolute coarse pointing combined with relative pointing.
The pilot evaluation and experimental evaluation were conducted using a Vicon motion tracking system, and found that the Relative method was most desirable to the users, with RayToRelative close behind. RayCasting was faster but had many errors. This paper motivates the need for facile pointing and clicking techniques, especially with large displays at long distances. Also, visual and auditory feedback work well for the clicking techniques. Finally, hand-based pointing techniques can provide usability similar to devices like mice.
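The relative pointing with clutching mentioned in the critiques above can be sketched as follows: hand motion moves the cursor only while a clutch is engaged, like lifting a mouse off the desk to reposition it. The gain value and the one-dimensional event format are assumptions for illustration, not the paper's implementation.

```python
# Toy relative pointing with clutching along one axis.

GAIN = 2.0  # hypothetical control-display gain

def track(cursor, events):
    """events: (dx, clutch_engaged) pairs; returns final cursor position."""
    for dx, engaged in events:
        if engaged:
            cursor += GAIN * dx   # hand motion maps to cursor motion
        # when disengaged, the hand repositions without moving the cursor
    return cursor

# Move right 10 units, clutch out and return the hand, move right 10 more:
# the cursor keeps advancing even though the hand ends where it started.
print(track(0.0, [(10, True), (-10, False), (10, True)]))  # prints 40.0
```

This is why clutching avoids the fatigue and reach limits of absolute ray casting: a small, comfortable hand workspace can cover an arbitrarily large display, at the cost of extra clutching time, which is exactly the trade-off the study measures.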