Mobile Interfaces - 1

From CS2610 Fall 2014
Reading Critiques

yubo feng 21:11:31 10/11/2014

Both papers are about mobile interface design and implementation. The first paper was published in 2000, when a personal mobile phone was still something of a luxury, yet some of today's most common concepts and technologies already appear in it; for example, the landscape display mode is now widely used on every personal mobile phone. In this paper, the authors used sensors to detect users' motions: a proximity range sensor to detect the distance between the user and the phone, and tilt sensors to detect the angle at which the user holds the phone. In this way, the user's motion can be inferred and used to trigger applications on the phone. The authors built a voice memo feature this way: the phone keeps tracking the user's motion, and only when certain conditions occur does voice memo recording begin. Another application the authors implemented is the landscape display: only when a certain angle is detected by the sensors does the screen display mode change. Both concepts are commonly used nowadays. The second paper is about using the mobile camera as a capture sensor to detect the user's motion and trigger functionality. The second paper builds on the ideas of the first, and is in a sense an implementation of them, I think.
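The tilt-to-landscape switching described above can be sketched as a small decision rule. This is a hypothetical illustration: the angle thresholds and the hysteresis (a wider angle to enter landscape than to leave it, so hand jitter does not flip the screen back and forth) are my assumptions, not values from the paper.

```python
def orientation_from_tilt(current, left_right_deg, enter_deg=35, exit_deg=25):
    """Map a left/right tilt angle (degrees) to a display orientation.

    Hysteresis: a larger angle is required to enter landscape (enter_deg)
    than to return to portrait (exit_deg), suppressing jitter.
    """
    if current == "portrait":
        if left_right_deg > enter_deg:
            return "landscape-right"
        if left_right_deg < -enter_deg:
            return "landscape-left"
        return "portrait"
    # Currently in a landscape mode: fall back to portrait only near level.
    if abs(left_right_deg) < exit_deg:
        return "portrait"
    return current
```

A device polling its tilt sensor would call this on each reading and reformat the display whenever the returned mode changes.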

nro5 (Nathan Ong) 16:25:51 10/13/2014

Review of “Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study” by Jingtao Wang, Shumin Zhai, and John Canny

The paper presents a system named “TinyMotion,” software developed for mobile phones that uses the camera to test the feasibility of camera-based motion detection for menu selection, game playing, and handwriting recognition. Even though the camera is not as high quality as those on many current smartphones, the paper shows that the motion detection and recognition software the authors developed performs accurately even in somewhat extreme conditions. While this paper shows great promise in extending the usefulness of the camera beyond recording, users will not consider a camera to be anything other than a recording device. In addition, users are unlikely to use a camera for functions that require movement, because it will either cause others to believe the camera is recording them or cause the user to look very strange while tilting the camera to make selections or play games. I recall a Nintendo game that used the camera to provide augmented reality, where animals would fly toward the player, who had to point at each animal and shoot a laser to prevent it from colliding with him. It looked very odd; some people made comments about it, both to the player and behind his back, and eventually he stopped playing the game (or at least stopped playing it in public). While there is a feeling of novelty in using a camera for other things, it feels like any products that come from augmenting the usefulness of a camera will be dead ends until people are willing to allow cameras to be constantly recording.
Along the same vein, I was also surprised that this paper mentioned nothing about the privacy issues that may arise from the use of a camera, but that may not have been an issue back in 2006. In hindsight, it does not seem surprising that using a camera to track cursor movement works well. Optical mice use practically the same concept, using a low-quality camera to track movement; using another low-quality camera for the same end suggests the results will be similar. However, I wish the paper had mentioned the sensitivity of the camera-to-cursor movement ratio, since it is not clear from the paper how quickly someone could acquire a target (e.g., one inch of movement to ten pixels on the screen). If this information were given, it would be easier to get a sense of the magnitude of the tilt movements needed to move a cursor an approximate distance on the screen.

Review of “Sensing Techniques for Mobile Interaction” by Ken Hinckley, Jeff Pierce, Mike Sinclair, and Eric Horvitz

The authors present a device, and software for it, that uses two-dimensional accelerometer readings to detect the orientation of the phone. The software then uses the accelerometer readings to provide multiple services, such as memo recording, portrait/landscape orientation detection, powering up the device, and scrolling. It is interesting that only one of these ideas is in use today, namely the automatic detection of portrait or landscape orientation. As the authors comment about their features in the paper, the biggest reason the other features may not be seen in mobile smartphones today could be the lack of easy discoverability. Motions that are natural for one event (e.g., holding a phone for a call) may occlude other functionalities that require the same gesture (e.g., the voice memo recording feature presented in the paper).
It may be easier for users to have a one-to-one gesture-to-feature mapping, since a gesture would then, without confusion, lead to a single service. However, the importance may lie not in how the gestures were used, but in using contextual data to help a device assist a user. Adding an accelerometer can provide many more features, four of which were presented in the paper. Notice that without the accelerometer, none of the suggested features can work, since the device has no data from which to determine, for example, its portrait or landscape orientation. While a desktop computer has no use for an accelerometer (unless a user is hoping to detect earthquakes?), accelerometers can enhance the utility and convenience of mobile devices. Even though a computer may not use certain sensors, there probably exist many other sensors that could make the desktop experience better. Overall, this paper is a good argument for using context-aware sensors to improve mobile devices, and possibly other ubiquitous computing devices as well.
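The sensitivity question raised in this critique is essentially about gain, i.e., the control-display ratio between sensed camera motion and cursor travel. A minimal sketch of such a mapping; the gain value and the 176x208 screen size are illustrative assumptions, not numbers reported by the paper.

```python
def move_cursor(cursor, motion_px, gain=2.5, screen=(176, 208)):
    """Apply a sensed camera motion vector to a cursor position.

    cursor:    current (x, y) position in screen pixels.
    motion_px: (dx, dy) motion estimate in sensed camera pixels.
    gain:      screen pixels of cursor travel per sensed pixel of motion
               (the control-display ratio); clamped to the screen bounds.
    """
    x = min(max(cursor[0] + gain * motion_px[0], 0), screen[0] - 1)
    y = min(max(cursor[1] + gain * motion_px[1], 0), screen[1] - 1)
    return (x, y)
```

Reporting the gain alongside a study would let readers translate "one inch of hand movement" into an expected on-screen distance.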

Qiao Zhang 23:13:10 10/14/2014

Sensing Techniques for Mobile Interaction

In this paper, the authors describe using sensing techniques to assist mobile interaction, including voice memo detection, display orientation detection, tilt scrolling, and auto power-up. The paper was published 14 years ago and served as a pioneer; some of its ideas are now prevalent and have become standard. The authors first describe the difference between the mobile environment and the traditional desktop environment: mobile computing differs from the mouse-keyboard-desktop scheme and can use context information to provide better user experiences. One interesting issue is the intimacy of such mobile devices. Some users are comfortable with a device recording their daily conversations; others may not be. It is an interesting question whether mobile devices should use an explicit voice or physical instruction for such activities. This paper uses several combined conditions to initiate a voice memo recording, while today's applications such as Google Now and Apple Siri use an explicit voice instruction. If an explicit instruction is used, it may interrupt the conversation; if not, it is hard to determine the starting point of an action. Maybe using extra sensors, as in this paper, would solve this problem. I personally never use screen auto-rotation on my mobile phone, because most of the time I find myself holding the device in portrait mode. Sometimes I use the device lying in bed on my side, in which case enabling auto-rotation makes the screen rotate in the wrong direction. I believe this issue can be addressed by using the front camera. False positives are a major concern to me regarding this functionality. I do like the power management function: I wish my phone could automatically light up when I pick it up from the desk, and I believe that would not be hard to do if a touch sensor were attached to the back. The exploration of conditions is also very interesting.
Because each sensor has different states, using different sensors results in many possible combinations, and properly associating conditions with actions is quite interesting to me. Ideas in this paper have become the norm in today's devices; however, it is still inspirational with regard to utilizing different sensing techniques. Maybe in the future, mobile devices will come with pressure sensors, infrared sensors, and eye-movement sensors, enabling many new interactions that were not possible before.

==========================================

Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study

The paper presents TinyMotion, software that utilizes the built-in back camera for innovative user input. TinyMotion can detect horizontal and vertical movements, rotational movements, and tilt movements without needing extra sensors such as accelerometers or gyro sensors. Nor does it require special scenes or backgrounds, unlike many controlled experiments. The good thing about using the camera as an input device is that optical information is analog, compared with buttons as digital input sources. One possible extension of the work would be to find a mapping between analog input and continuous manipulation on mobile devices, rather than translating analog input into binary manipulation or vice versa. The authors present different applications of the idea. First, they present the Mobile Gesture application, which allows users to input both Western and Eastern characters using the camera. Second, they present Vision TiltText, which to me is quite an exciting idea, although I think it would be better implemented with an accelerometer (which may not have been available on older phones at the time). The work was done before the prevalence of accelerometers, and it successfully predicted the trend of built-in sensors on mobile devices. They conduct two user studies to prove the usability of the approach, one informal and one formal.
The formal one shows that the movement time accords with Fitts' law. One lesson learned from this paper is that a quantified user study is more persuasive than qualitative experiments. As the instructor said in a previous talk, there is no universal standard for mobile phone interaction yet. Ten years ago, most mobile phones came with keypads; nowadays, with the advent of touch screens, there are hardly any mobile phones with physical keypads. The norm has not yet been formed, and there are many opportunities to explore the design space and find a better interaction scheme for mobile devices.
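The combined-condition trigger for voice memo recording discussed in this critique can be sketched as a conjunction of sensor predicates. This is a hypothetical illustration: the angle range, proximity cutoff, and the one-second debounce are my assumptions for demonstration, not the parameters Hinckley et al. used.

```python
def should_record(holding, tilt_toward_mouth_deg, proximity_cm,
                  held_duration_s, min_hold_s=1.0):
    """Start voice memo recording only while ALL sensor conditions hold.

    holding:              touch sensor says the device is in the hand.
    tilt_toward_mouth_deg: tilt sensor angle, positive toward the mouth.
    proximity_cm:          IR proximity sensor reading.
    held_duration_s:       how long the pose has been held (debounce
                           against accidental, momentary triggers).
    """
    return bool(holding
                and 20 <= tilt_toward_mouth_deg <= 80  # held like a handset
                and proximity_cm < 8                   # close to the face
                and held_duration_s >= min_hold_s)
```

Requiring every condition at once is what keeps the false-positive rate down, which is exactly the concern raised above about implicit triggers.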

phuongpham 17:16:40 10/15/2014

Sensing Techniques for Mobile Interaction: this paper explored how using multiple sensors in a mobile device can benefit users in the area of context awareness. The paper demonstrates one way to do research: explore new design points, build a prototype, uncover the new challenges of those points, and propose common-sense solutions. This paper was published in 2000, but the authors' vision remains valid now that more and more sensors are integrated into mobile devices; e.g., the Amazon Fire Phone has 6 cameras. Some of their challenges came from the limits of the technology of the time, such as the scrolling problem. Nowadays, with large touch screens, users can scroll using finger gestures, which give better feedback and a better feeling of control. I really like the experiment analysis, where the authors confirm the advantages of their approach while also pointing out challenges that give later researchers more topics to pursue. As Professor Wang has mentioned before, new devices with new interaction models will give new opportunities compared to the old methods developed for old models. ***Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study: compared to the above paper, this paper provides an in-depth study of a particular sensor, i.e., the phone camera. The general idea is the same: with a new data source, we have new opportunities to do things better. However, this paper addressed multiple opportunities for using the phone camera in creative ways. The evaluation was done using standard models, e.g., movement time. As we now know, the technique has since been developed further and integrated into more functions, e.g., heart rate monitoring. With this new functionality, a phone can do more things with its integrated camera, and there are more opportunities still to be explored. Another interesting approach may be pushing camera gestures to the limit to see new challenges as well as new opportunities for the technique.

Brandon Jennings 22:57:32 10/15/2014

Sensing Techniques

This paper is about contextual sensing on mobile devices to make the devices more interactive with the user. It investigates issues where implemented techniques might conflict with each other, discussing the challenges and alternatives and suggesting new points in the design process. One thing I disagree with in the paper is using tilt to scroll through content on mobile devices. For one, there is more control when one uses a finger. Such a feature also means you have to be conscious of how the device is being held: most people do not hold their phone rigidly, and their hands naturally tilt one way or another. There is also the issue of people who look at their screens at different angles, or who move the phone a certain way to look away from it. Once the angle for scrolling is set, there can be inadvertent scrolling. I also did not like the subject pool used in the experiments: there were not enough people, and the demographic was not varied enough. What I did appreciate about the paper was the constructive discussion of solutions to problems. The authors did not present their work as the solution to known issues; they sometimes criticized their own methods and offered suggestions. This paper also promotes the idea of using very simple devices and techniques to enhance the overall interactive experience. The authors recognized that their techniques invited unwanted responses and reactions. There is definitely more exploring to do in this area, but I would like to see more practical applications and a wider range of users being tested.

Camera Based Motion Sensing

The second paper proposes using the camera to track hand movements, allowing for gesture recognition applications such as handwriting and texting. Though the phone used was not a smartphone, a discussion of the potential security and privacy flaws would have been useful.
There are many papers on using features like the camera and microphone to provide new interactive capabilities; however, such features come with a certain level of risk. Again, I would have liked to see a larger testing pool. One thing this paper contributes is solidifying the notion of taking already existing built-in functions of devices and enhancing them or extending their capabilities: no new sensors are required, and small code modifications can increase capabilities. Although TiltText was marginally faster, it had higher error rates than MultiTap. I think the standard text entry system is efficient enough at the individual user level, but the techniques presented in this paper might be useful in larger applications. Much development is needed before these techniques become mainstream, but there is much potential in this type of interaction.

changsheng liu 23:07:26 10/15/2014

<Sensing Techniques for Mobile Interaction>, as the title implies, presents some sensing techniques for mobile devices. The paper shows that the use of low-cost sensors can significantly increase the usability of mobile apps. Smartphones are so popular today, and their sensors have a wide range of applications; many of the interactions in the paper are present in today's phones. For example, scrolling the display using tilt is implemented in the newly released Amazon smartphone. It's very convenient for lazy people who don't want to use their fingers to scroll the screen. However, some features are still missing from today's mobile devices, such as touch-and-hold to power on and gesture scrolling. Considering the period in which this paper was published, I think it is very creative and original. What I worry about is false positives: if the phone cannot handle the sensors precisely and be fully context-sensing, it might act incorrectly and deteriorate the smoothness of interaction. <Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study> introduces TinyMotion to detect movement and tilting of mobile devices. TinyMotion detects the movement of a phone by analyzing image sequences captured by its camera, and it can detect a variety of movements: horizontal, vertical, rotational, and tilt. The idea is very creative, and the experimental part of the paper is carefully designed, which I find very persuasive. I wonder whether the power consumption can be ignored when the camera is kept active. Moreover, the algorithm that processes the captured images is computationally intensive, consuming CPU and memory resources. A good alternative for achieving the same objective is to use accelerometers, which are much cheaper and more energy efficient.

zhong zhuang 23:14:55 10/15/2014

This paper foresees the future of the smartphone, and it did a successful job: the features it discusses are now commonly used in today's smartphones. Now, when the phone is close to our ear, Siri comes up; when we rotate the phone, the view adjusts automatically, just as the authors of this paper demonstrated. But several features are still not implemented in today's smartphones. The first is automatic power-up. In the paper, the authors propose this feature as follows: when the phone is facing the user and the user has been looking at it for 0.5 seconds, the phone powers up automatically. This seems like a good feature, but it is not implemented in today's phones; I think the reason is that we cannot yet track eye gaze accurately enough to determine whether the user is looking at the phone. This paper illustrates the implementation of its features in detail, including what hardware is used, how to deploy that hardware, and how to implement the software. This experience is useful for student readers like us. The paper also explains how the user test was conducted: who should be chosen to test the product, how to design the experiment, how to collect data, and how to analyze data. This is also important for us as students. In sum, I think this paper is inspiring.

Xiaoyu Ge 23:22:50 10/15/2014

Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study. The authors introduced a software approach called TinyMotion, which can be used to detect mobile users' hand movements. After the built-in camera captures image sequences, TinyMotion is responsible for analyzing the image data. The paper also presents analytical evaluations from both formal and informal user studies. As a result, TinyMotion functions successfully and can be used to capture and recognize real-time handwriting, and the technology has been used for game and interface development. The interaction technology the authors introduced is quite useful and was ahead of its time. PlayStation Move has used similar camera motion detection technology to recognize human movement, and games based on it perform well. Camera phones nowadays can even perform 3D motion and depth sensing, allowing a device to make over a quarter million 3D measurements per second and update its position and orientation in real time; the technologies they use have some similarities to those the authors used. Sensing Techniques for Mobile Interaction: the authors utilized three sensors, a touch sensor, a proximity sensor, and a tilt sensor, combined with image recognition and block correlation for motion estimation, and as a result improved the accuracy of its performance. There are four main features introduced in this paper: voice memo detection, a display orientation feature, a tilt scrolling feature, and a power management feature. The authors' intention was to make the Cassiopeia more responsive to people's natural gestures, and they even ran experiments to determine the needs of users of this mobile device. According to the authors, the voice memo detection feature was welcomed by users; however, since other smartphones were introduced and people no longer use the Cassiopeia, this functionality is no longer in use, and better solutions for voice memos have been developed since.
As for the orientation detection feature, it is very useful, and the same feature has become widely used on almost all kinds of smartphones now. As for the scrolling feature, it does not seem useful on touch-screen devices, since mobile-oriented websites and applications can now scroll the page themselves to suit fingers of different widths, and a scroll bar is really not a good fit for touch-based mobile devices. This paper introduced useful features, but most of them can only be used to improve the Cassiopeia device itself, since it is not really a technological breakthrough and cannot be counted as a great innovation.

Wenchen Wang 0:08:25 10/16/2014

<Sensing Techniques for Mobile Interaction> <Summary> The paper introduces three kinds of mobile phone sensors and develops some user applications that apply them. <Paper Review> There are three kinds of sensors: a touch sensor, a tilt sensor, and a proximity sensor. The tilt sensor detects the tilt of the device relative to the constant acceleration of gravity. The proximity sensor detects the presence of nearby objects without any physical contact. Using these sensors, the authors come up with small but interesting user applications related to human-computer interaction. For example, portrait/landscape display mode detection applies the tilt sensor to detect the user's gesture of holding the device and automatically reformats the display to suit the current viewing orientation. Another application uses touch sensors for automatic power-up: when the user picks up and looks at the device, it powers up, but it will not turn on inside a briefcase, even if it is being shaken around. Applications can also combine multiple kinds of sensors. The voice memo recording application applies tilt sensors and proximity sensors: the device records the user's voice when the user is holding the device, the device is tilted toward the user's mouth, and the user is in close proximity. This paper was written 14 years ago, and some of its user applications have since been popularized, such as portrait/landscape display mode detection; through touch sensing, the screen itself has become touchable. The paper is therefore very good guidance for HCI researchers in the mobile application field. <Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study> <Summary> This paper introduces TinyMotion, a mobile approach to detecting the user's hand movement without additional sensors. More importantly, it also proposes a quantitative performance evaluation method.
<Paper Review> TinyMotion can recognize the user's movement by analyzing image sequences captured by the mobile camera in various environments. The detection process is as follows. First they convert the color image to a grayscale image, then apply grid sampling to store a reduced version of the image. From the macro-blocks obtained by grid sampling, they calculate motion vectors for relative motion estimation, and finally perform absolute motion estimation in post-processing. In this way they obtain the movement of the user's hand. For the evaluation, they test TinyMotion in environments of different light intensity. They also wrote four applications and three games to test movement detection accuracy on different tasks, such as target acquisition/pointing, menu selection, and text input. In the end, they quantified the results and analyzed them. I think it is a very good approach to apply an existing built-in mobile sensor to detect user gestures. User input can come from channels other than the keyboard; voice recognition such as Siri, whose input comes from the phone's microphone, is another good example like TinyMotion. Exploring different channels of user input is a very important aspect of HCI.
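The grayscale, grid-sampling, and block-matching steps summarized above can be sketched in a few lines. This is only a toy illustration of the general block-matching idea: the block size, search range, and sum-of-absolute-differences (SAD) criterion are standard choices I am assuming, and the paper's exact correlation details may differ.

```python
def estimate_motion(prev, curr, search=2):
    """Return the (dx, dy) shift that best aligns curr with prev.

    prev, curr: 2-D lists of grayscale samples (the grid-sampled frames).
    Exhaustively tries shifts in [-search, search] on both axes and keeps
    the one with the smallest mean SAD, as in block-matching motion
    estimation; the winning shift is the frame-to-frame motion vector.
    """
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = n = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:  # stay inside the frame
                        sad += abs(curr[sy][sx] - prev[y][x])
                        n += 1
            if n and sad / n < best_sad:
                best_sad, best = sad / n, (dx, dy)
    return best
```

Accumulating these per-frame vectors over time gives the absolute motion estimate the review mentions as the post-processing step.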

zhong zhuang 0:35:00 10/16/2014

In this paper, the authors propose a camera-based context recognition method called TinyMotion. Basically, it uses the built-in camera of a cell phone to track the motion of the phone, including moving, shaking, tilting, and so on. I think this is a very creative idea because it requires no additional sensors or other hardware. The authors designed some applications to test TinyMotion: Motion Menu, Map Viewer, gesture-based text input, and some games. These applications cut across multiple domains of daily cell phone usage, so from my point of view the test is very convincing. The authors did not report any results about power usage, even though the method keeps the camera enabled all the time, albeit in preview mode. Even if this method imposes some power consumption, I still think it is a very interesting research direction. As cell phone screens keep expanding, one-handed finger touch is facing serious drawbacks; we really need to come up with new input alternatives, and using the built-in camera is a good starting point.

Bhavin Modi 0:35:38 10/16/2014

Reading Critique on Sensing Technologies for Mobile Interaction

The paper investigates new methods of interaction with PIMs using sensors embedded in those devices, namely touch, tilt, and proximity sensors. The main idea is to change the way we interact with mobile devices and make it simpler and more convenient. The interaction should reduce the need for visual focus; basically, it should not require the user's constant attention, and should come naturally as the user carries on with other tasks. The sensors used in this paper are an accelerometer, infrared sensors for proximity, a two-axis tilt sensor for changing viewing modes, and touch for automatic power-on. The authors believe that the combination of such sensors working together is the future of ubiquitous devices. They were right: this paper, written in the year 2000, has found many practical applications today. Proximity sensors are used in all touch devices to lock the touch screen when talking on the phone, and changing viewing modes is now integral to all devices. Unlike traditional GUIs, the drawbacks of false sensing have also been taken into account, and the way to overcome them is to find the right combination of gestures and threshold values, derived from practical testing. The same sensors can be used to detect whether a person is walking, a use not described in the paper but present today in the iPhone's Health app. The aim is to create devices that can adapt to the environment the user is in and feel more natural to use. Better user testing would better justify the use of such technology: adequate feedback and experimental values from a larger audience were required. Usage scenarios would better illustrate the practical uses of the devices and the situations where they are more useful than traditional technology.
The only thing is that users need to get comfortable with such technology through usage, because the features have their benefits, with the option to disable them as one wishes. Tilt scrolling had its drawbacks because it requires a lot of control, but it is still a feature some devices provide. The technology lately introduced instead is scrolling based on eye movement, present in Samsung phones; other such features include air gestures in Samsung mobiles and tracking eye movement to see which portion of the screen you are focusing on, by Amazon.

--------------------------------------------------------------------------------------------------------

Reading Critique on Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study

The paper explores the use of mobile phone cameras as input for gesture recognition, tilting, and movement. This is done through software called TinyMotion, written in C++ on BREW for the Motorola v710. TinyMotion takes one of the most ubiquitous devices present today, the personal cell phone, and uses camera phones in an innovative way to suggest new modes of interaction. This opens up a new design space to explore for personal devices, leveraging existing capabilities. TinyMotion, together with the other paper for today on sensors for mobile interaction, explores existing and new ways of making mobile interaction more fun and fluid; this encompasses the main idea behind the paper. Similarly, one could think in the same design space to create new methods of interaction, maybe not on the phone per se, but through other accessories one uses with the cell phone, say the back cover that almost everyone uses. The main topics of discussion about TinyMotion are the use of the camera for gesture recognition, character recognition for input, camera-based games, TiltText, and Motion Menu.
The input performance has been shown to follow Fitts' law closely, but is much slower than a mouse, attributable to the low frame rate of 12 frames per second used in this case. Though the paper does not conduct an extensive formal user study, the informal study with statistics is helpful for understanding the challenges faced. Each usage scenario is also clearly depicted, helping us understand how the system works, including the process used for capturing images, their processing, and storage. Challenges arise when different environments of usage are considered, as mentioned, and also if the user accidentally covers the camera or there is some other obstruction. Though the idea is novel, it is inconvenient in some cases; TiltText, for example, is not a better way to write text. The capacity to build on this research is abundant, as seen in lecture with the example of camera-based motion sensing of a finger to control and use applications.
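For reference, the Fitts' law relation mentioned above predicts movement time from target distance D and width W as MT = a + b * log2(D/W + 1) (the Shannon formulation). A small sketch; the intercept a and slope b below are made-up illustrative constants, where a study like this paper's would fit them by regression on measured pointing trials.

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.3):
    """Predicted movement time in seconds for a pointing task.

    distance: amplitude D of the movement to the target.
    width:    target width W along the axis of motion.
    a, b:     device-specific intercept and slope (illustrative values;
              fit from data for a real device).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty
```

The slope b captures how quickly times grow with difficulty, which is why a low-frame-rate input channel like the 12 fps camera shows up as a much steeper slope than a mouse.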

Nick Katsipoulakis 1:49:40 10/16/2014

Sensing Techniques for Mobile Interaction: In this paper, new ways of interacting with a mobile device are presented. The authors enhance a mobile device by adding sensors and implement applications that take advantage of the sensor readings to improve user experience. The prototype device used in this work features two accelerometers, an IR receiver/transmitter, and a couple of touch sensors. The authors succeeded in parsing readings from those sensors to identify the context in which the device is operating. To identify ways of leveraging sensor readings, they performed an exploration of the design space and identified concepts that had either been tried before or were completely novel. The first application developed was voice memo recording, triggered by the position of the phone and its proximity to the user. This application allowed users to perform visually and cognitively demanding tasks while recording a message, with feedback provided through specific sounds produced by the phone. Another application was auto-detection of phone orientation: users experienced changes in the orientation of the phone's display based on its position, and several users found this feature really helpful and natural, preferring it over conventional ways of altering display orientation. Furthermore, the paper presents tilt-sensitive scrolling, which enables scrolling of documents through the positioning of the phone. Finally, a phone feature presented is power management, which, based on the phone's position, orientation, proximity of the user, user input, and duration of input, can power the phone's display on or off.
///----------------------------------END OF FIRST CRITIQUE--------------------------/// Camera Phone based Motion Sensing: Interaction techniques, applications and performance study : This paper makes use of a common feature of mobile phones to create a new input channel for users. TinyMotion innovates in terms of mobile device interfaces and user interaction because it leverages an embedded phone sensor to create input. The authors make a plausible observation: cameras on mobile phones are common, and they should be usable as natural interfaces. TinyMotion borrows ideas from computer vision, and through careful software engineering its creators succeed in implementing it on mobile devices in a lightweight fashion. In order to test TinyMotion's feasibility and success rate, the authors perform two rounds of tests (informal and formal) with different users. On the one hand, informal testing involves simple benchmarks of motion detection in different environments and initial impressions from random users. On the other hand, formal testing examines TinyMotion's features with a large number of test subjects. The users have the opportunity to go through a training session before the evaluation commences. Testing involves tasks of varying difficulty, ranging from simple ones, such as target acquisition, to more complex ones, such as playing games using TinyMotion. The evaluation results demonstrate TinyMotion's success, and its pointing performance follows Fitts' law. Overall, the work presented in this paper shows a novel idea for making use of mobile phones' sensors and a nicely engineered interface for interacting with mobile devices.

Eric Gratta 2:39:25 10/16/2014

Sensing Techniques for Mobile Interaction (2000) Ken Hinckley, Jeff Pierce, Mike Sinclair, Eric Horvitz This paper seems to employ the “push to the limit” technique on handheld, mobile computing devices by exploring the utility of adding a number of new sensors to such devices and demonstrating possible sensing techniques for them. The combination of all of these sensors working simultaneously is referred to as “sensor fusion.” The authors identify one of the key unique characteristics of mobile devices in relation to human-computer interaction: there may be many contexts of interaction in mobile computing, as opposed to desktop computing, where the context is assumed to stay consistent because the user is in the same position. This change in context should be identified and leveraged for an enhanced experience. The paper’s focus on this issue of being “context-aware” falls into the same category as the paper we read from the same year, “Charting Past, Present, and Future Research in Ubiquitous Computing.” A significant length of text is dedicated to describing the sensor setup on their augmented PDA/PIM, as well as the limitations of the sensors. Their tilt-recognizing sensor was a 2D accelerometer, meaning they could not detect whether the device was upside down or right side up, or the “cardinal direction” of the phone (described as rotation about the axis parallel to gravity). This makes me wonder what research has been done since the development of cheap microelectronic gyroscopes, which are able to measure all of the lacking input dimensions and exist in modern smartphones. The paper does a good job of providing an honest assessment of all of the features that the authors attempted to incorporate into the handheld devices via sensor fusion. Many of the features they explored now exist on modern smartphones, and this paper’s discussion of the challenges of sensor fusion probably contributed to their development.
Interestingly, many of those same features tend to be “gimmicky” and not polished enough to be useful in all of their intended use cases, meaning either that there is more work to be done or that no number of sensors will be able to usefully capture the complexity of certain aspects of the real world. -------------------------------------------------------------------- Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study (2006) Jingtao Wang, Shumin Zhai, John Canny In a similar fashion to the other paper, this paper seems to be pushing to the limit the use of the camera as a motion-detecting sensor, exploring various interaction techniques that make use of the direction of motion according to camera image analysis (one might also notice that this paper cites the previous paper). In contrast to the previous paper, the things being explored are interaction techniques rather than sensing techniques; the camera is used to enhance the expressiveness of user interactions rather than to detect context. A unique advantage of this paper’s approach is that it does not require sensor fusion, but makes use of one sensor that was already built into the phones that were current at the time. The authors describe related work to give context to the problem they’re addressing and to make it clear why the paper is a worthwhile contribution to HCI research. Specifically, by evaluating existing methods for motion detection, assessing their flaws, and then proposing a solution to those flaws, the reader is sure that a problem is being solved. In this paper, the TinyMotion algorithm creates an alternative to existing computer vision techniques – on the basis that they make flawed assumptions about the relationship between consecutive frames – by using grid samples to estimate motion. The authors conducted both an informal user study for preliminary feedback and then a formal study.
The formal study attempted to use Fitts’ Law as a baseline to determine that using TinyMotion as a pointing mechanism is viable. Measured data is clearly displayed in charts and with detailed descriptions of statistical analysis. This paper may have suffered from the issue where the problem being solved became obsolete quickly. I say this because the TinyMotion features essentially capture the functionality of a microelectronic gyroscope, which you now find in smartphones, and so the image-based calculations of TinyMotion would be unnecessary. This issue was even addressed in the concluding discussions, where TinyMotion was compared to accelerometers, with little argument for why TinyMotion was any better than accelerometers. Overall, I was confused by the goals of this research if its only unique quality was being a “software-only” (although still camera-dependent) approach to motion detection.

Yingjie Tang 4:41:06 10/16/2014

The paper “Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study” introduces TinyMotion, a software approach to detecting a mobile phone user’s hand movement in real time using the phone's built-in camera. The paper presents an informal evaluation followed by a formal evaluation with 17 participants. In this way, the paper follows the principle of iterative design, which helps designers understand the problem deeply. I feel quite excited by the idea of taking motion detection as input for text entry or for playing a mobile game. The TinyMotion algorithm consists of four steps, which I think are quite relevant to pattern recognition; the main idea of the algorithm is to compare movement from frame to frame. Although the algorithm is not complicated, it is good enough for TinyMotion, as shown by the experimental statistics. Since this work was done before 2007, the cell phone hardware at that time was not as good as now. I think if the work were conducted now, the results would be quite different, because 1) we can put much more RAM in a cell phone, supplying more space for the character dictionary, and 2) camera frame rates are now much higher, more than that of a mouse, so the information transmission rate would be much more than 0.9 bits/sec. Regarding the target acquisition part, I am quite confused about the intuition behind the warm-up session: since, as the authors mention, they are not interested in the learning curve, is the warm-up session necessary? Also, the participants entered only 8 sentences with each input method, while another study had participants enter at least 320 sentences. The authors' reason is that they did not need to measure the participants' learning curve, but does that really work with only 8 sentences? In the evaluation results, Fitts' Law was used as a benchmark for the applications on TinyMotion.
There is much interesting and useful feedback from the participants in the study, such as the suggestion to consider input patterns at the sentence level rather than only word by word. ————————————————————————————————— The paper “Sensing Techniques for Mobile Interaction” is about sensing techniques motivated by unique aspects of human-computer interaction. The device also uses the touch and tilt sensors to prevent accidental power-off. The work presents interactive sensing techniques; creating smarter interfaces by giving computers sensory apparatus to perceive the world is not a new idea, but there are few examples of interactive sensing techniques. By implementing specific examples, it explores some new points in the design space, uncovers many design and implementation issues, and reveals some preliminary user reactions as well as specific usability problems. I think this idea is good, since some existing devices provide functionality similar to the automatic power-up through other approaches. This approach provides similar functionality without requiring the user to open or close anything, and it allows easy one-handed operation. Exactly what we can or cannot accomplish with these techniques is not obvious. However, we must recognize that sensing techniques cannot offer a panacea for UI on mobile devices, and careful design and tasteful selection of features will always be required. Only some of the actions that mobile devices support lend themselves to solutions through sensing techniques; other tasks may be too complex or too ambiguous.

Yanbing Xue 6:44:45 10/16/2014

The paper "Camera Phone Based Motion Sensing" presents TinyMotion, a pure software approach for detecting a mobile user’s hand movement by analyzing image sequences. The applicability of TinyMotion was demonstrated with a variety of applications, including mobile gesture, Vision TiltText, motion menu, an image/map viewer, and some games. The idea of using image sequences to detect movement is very interesting, especially when the phone does not have an accelerometer. The authors not only turned the idea of camera-based motion sensing into real applications, but also conducted a comprehensive user study, and I appreciate this analysis. There are many games that use the position of the phone relative to a level position to move the game along; the popular game Temple Run is an example. The interaction is based on turning the screen and moving it up and down to jump and turn. There is no touch input in the actual game play; it is replaced with input from the user that may feel natural (like driving a car). With more phones and mobile devices having cameras and accelerometers, it is easy to see why there is much research into how efficient such input could be and what could be accomplished through it. I know my phone uses the GPS location and the position of the phone to detect if I am texting while driving; it then proceeds to remind me it is not safe and gives me an alternate menu with large buttons for calling and listening to voice mail. ========== The paper "Sensing Technologies for Mobile Interaction" discusses the group’s experiments with sensor fusion in the space of mobile devices. They implement means to record memos by holding the phone to the face, scrolling by tilting the phone, redrawing the screen on orientation change, and automatically powering on the phone. They did this by defining different states based on sensor readings.
For instance, by getting a specific set of readings from the accelerometers they were able to determine if the phone was in a pocket, being held, or lying on a surface. They also incorporated IR sensors to detect if the device was being held up to the user’s face. The major issues they faced, as have many others who pursue automation problems, are false positives and false negatives. They argued that some of these problems were caused by overloading the sensors too much, but the real problem is poorly defined states. The sensors can detect the holding and tilting gesture of the user and automatically start the voice recording; users just naturally pick up the device, hold it, and speak to it, without needing to search the application menu or press any buttons. It's easy to use. However, interactive sensing increases opportunities for poor design as well as providing many benefits. We need experiments to quantify user performance with these techniques, and we need longitudinal studies to determine whether users find sensing techniques "cool" at first but later become annoyed by false positives and negatives.
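The state definitions criticized above can be sketched as a simple threshold classifier over fused readings. The states, features, and thresholds below are illustrative inventions, not the paper's, but they show both the idea and why poorly chosen thresholds produce false positives:

```python
def classify_state(tilt_x, tilt_y, motion_var, touched):
    """Toy classifier mapping sensor readings to discrete device
    states, mirroring the paper's idea of state detection.

    tilt_x, tilt_y: tilt angles in degrees from the accelerometer
    motion_var:     recent variance of accelerometer readings
    touched:        whether the touch sensors report a grip

    All thresholds are invented for illustration.
    """
    # Nearly level, motionless, and untouched: resting on a surface.
    if not touched and motion_var < 0.01 and abs(tilt_x) < 5 and abs(tilt_y) < 5:
        return "flat on surface"
    # Gripped and fairly still: held in the hand.
    if touched and motion_var < 0.5:
        return "held"
    # Otherwise: jostling without a grip, e.g. in a pocket while walking.
    return "in pocket / moving"
```

A reading that sits right at a threshold flips the state back and forth, which is exactly the false-positive problem the review describes; real implementations add hysteresis or time-windowing on top of this.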

Qihang Chen 7:01:16 10/16/2014

The paper "Sensing Techniques for Mobile Interaction" describes sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. To enable background interaction using passively sensed gestures and activity, the paper presents the hardware configuration and sensors, including touch sensors, a tilt sensor, and a proximity sensor. The software architecture follows, covering interactive sensing techniques such as voice memo detection, portrait/landscape display mode detection, and tilt scrolling. In detail, taking the experiments in turn: voice memo detection and enabling come first, and in tilt scrolling the scrolling velocity of only one axis at a time is greater than zero, which is perhaps the most effective approach since it requires less dexterity and avoids losing users in two-dimensional space. Further, the paper focuses on power management in the last part, where it discusses using sensors to avoid false positives when powering devices on or off. In addition, the authors perform user studies to evaluate the features, present the users' opinions, and honestly show some potential pitfalls. The most important contribution made by this paper is that it does pioneering work on mobile interaction and provides outstanding prototypes, especially when we examine the techniques widely used in today's smartphones like the iPhone and Galaxy S III. In fact, the application of sensors to improve the mobile interaction experience is still meaningful for today's mobile research. At the same time, the prototypes, including the hardware configurations and software, are of use for deeper research. +++++++++++++++++++++++++++++++++++++++++++++++++++ Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study --- This paper introduces TinyMotion. TinyMotion uses the camera of a cell phone to detect horizontal, vertical, rotational, and tilt movements.
This system was developed before phones came with accelerometers. TinyMotion has the advantage over similar systems that it does not need a special background to track motion. It achieves this by using motion compensation techniques similar to the ones used in lossy video codecs. It uses a camera resolution of 176x112 at 12 fps. TinyMotion can be used to input gestures for characters. It allows for TiltText, which allows a user to select which character from a number pad they wish to use by tilting the phone. The phone tilt can also be used for video games. I found it interesting that the movement studies found that Fitts' law applied to this input method. I also found it interesting that it allowed enough precision to input Chinese characters. I was also surprised that such relatively slow hardware could perform good recognition. I keep forgetting that desktop hardware of a comparable speed (such as the Pentium) could do this years ago. I am impressed with the accuracy of the system. I am curious how it would compare to accelerometers. I wonder if the information from this technique could be combined with accelerometer data to make rotation more accurate without the use of gyroscopes.
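The codec-style motion compensation mentioned above boils down to block matching: find the shift between consecutive frames that minimizes the sum of absolute differences (SAD), using integers only. The sketch below illustrates that family of technique on tiny grayscale grids; it is my own simplified illustration, not the paper's code:

```python
def estimate_motion(prev, curr, search=3):
    """Estimate a global (dx, dy) shift between two grid-sampled
    grayscale frames (lists of lists of ints) by exhaustively
    minimizing the sum of absolute differences (SAD) over a small
    search window. Integer-only, like the codec-style motion
    compensation the review describes. A simplified sketch.
    """
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Compare only the region where both frames overlap
            # under this candidate shift.
            sad = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(prev[y][x] - curr[y + dy][x + dx])
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```

On a real 176x112 frame an exhaustive search like this is what the grid-sampling step makes affordable: matching a heavily subsampled grid instead of every pixel keeps the per-frame cost within a 12 fps budget.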

SenhuaChang 7:03:52 10/16/2014

Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study The authors give a software approach, called TinyMotion, for detecting a mobile phone user’s hand movement in real time. The TinyMotion algorithm has four parts: color space conversion, grid sampling, motion estimation, and post processing. The authors use both informal and formal evaluation to show the usage of this algorithm and give some mathematical analysis of the results. I think the most important part of this paper is the evaluation. First, unlike other papers, the authors give both an informal and a formal evaluation: the informal one generally tests usability and gives the reader an intuitive idea, while the formal one has a deep analysis of each task. The authors also use the gold-standard “Target Acquisition/Pointing” test to measure the performance of the TinyMotion algorithm, which is more convincing. I think another application of TinyMotion is configuring settings on the cell phone: people could use up and down movements to select menu items and left and right movements to change a setting's value, say left for decreasing and right for increasing. Another interesting part is the discussion about battery life. At first glance, this may appear irrelevant to the paper; however, on a second read, we find it is an important consideration for this algorithm. If we want to build real-life applications on the TinyMotion algorithm, battery life is a necessary concern; it determines whether we can make it practical or not. <-----------------> Sensing Techniques for Mobile Interaction This paper addressed a very popular topic -- providing context-sensitive interfaces that are responsive to the user and the environment by taking advantage of various kinds of inexpensive but very capable sensors, such as accelerometers, touch sensors, and proximity sensors.
As Buxton has observed, much technological complexity results from forcing the user to explicitly maintain the context of interaction. The authors made a very good attempt to integrate many useful features into our daily portable devices, features which have since proved very successful and are widely adopted by most manufacturers nowadays. For example, portrait/landscape display mode detection has become a standard feature of portable products designed in recent years. Behind this outstanding idea, I think what made the authors successful is that they saw the opportunities brought to HCI design by the combination of the popularity of ubiquitous computing and the versatility of emerging sensors with different capabilities. A good point of this paper is that it gives a very good research direction, one that led to the prosperity of mobile devices, especially the smart ones.
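The portrait/landscape detection praised above reduces, in its modern form, to mapping the gravity vector's in-screen-plane components to one of four orientations. A sketch of that mapping follows; the axis conventions and 45-degree boundaries are my assumptions, not taken from the paper:

```python
import math

def display_orientation(ax, ay):
    """Map the accelerometer's components in the screen plane to one
    of four display orientations, the way modern descendants of the
    paper's technique do it.

    ax: gravity component along the screen's x axis
    ay: gravity component along the screen's y axis (positive when
        the device is upright) -- an assumed convention.
    """
    angle = math.degrees(math.atan2(ax, ay)) % 360
    if angle < 45 or angle >= 315:
        return "portrait"
    if angle < 135:
        return "landscape-left"
    if angle < 225:
        return "portrait-upside-down"
    return "landscape-right"
```

Note that when the device lies flat, both components are near zero and the angle is meaningless, which is one reason real implementations keep the previous orientation in that case.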

Xiyao Yin 7:29:52 10/16/2014

‘Sensing Techniques for Mobile Interaction’ uses a set of sensors and studies the status readings obtained from them. In this reading, I found different types of sensors and their various data in the figures. Besides, I looked for more information about tilt sensors on the Internet. A tilt sensor typically measures tilting in two axes of a reference plane; in contrast, full motion sensing would use at least three axes and often additional sensors. One way to measure tilt angle with reference to the earth's ground plane is to use an accelerometer; typical applications can be found in industry and in game controllers. As a result, for future interfaces, contextual awareness via sensing may be increasingly relied upon by mobile devices. Although simple, cheap, and relatively dumb sensors may play an important role in the future of mobile interaction, careful design and tasteful selection of features will always be necessary. What impressed me most is that we should still not ignore the opportunities for poor design that interactive sensing techniques create; we should seriously consider the consequences of our work at all times. ‘Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study’ proposes a method called TinyMotion that measures cell phone movements in real time by analyzing images captured by the built-in camera. In the following part, the authors show the five significant characteristics TinyMotion has. Careful research seems especially valuable in this paper: to learn in which environments and against which backgrounds TinyMotion works properly, the authors collected data from many typical environments. After a large user study, the authors collected data and analyzed and evaluated it in terms of the TinyMotion algorithm. By comparing different aspects, the authors finally arrive at the five main characteristics of TinyMotion.
Also, it is shown that a “clutch” that can engage and disengage motion sensing from screen action could be a good direction for future study. In my opinion, the approach shown in this paper should be a good example for my future research.

yeq1 7:35:22 10/16/2014

Yechen Qiao Review for 10/16/2014 Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study In this paper, the authors presented TinyMotion, an interaction method that utilizes the user’s hand movements on camera-based phones. The paper briefly describes the algorithm and the features of four sample applications (motion menu, Vision TiltText, viewer, and mobile gesture). In the formal user evaluation, the authors had done regression testing against Fitts’ law and compared movement time (MT) with the then-current baseline, multi-tap. They also completed user trials on more complex tasks such as gaming and motion gestures. They found that TiltText has significantly lower MT when the difficulty is high; the speed of TiltText is higher, but with more errors. The authors also found that motion gestures are novel but potentially not a good way to enter text as-is. The users also seem to have different conceptual models of how movements should be handled in games. While at first glance it seems that an accelerometer can do all the tasks TinyMotion can, and it is much easier to use an accelerometer these days due to the stable libraries included in many of today’s smartphone OSes, I would argue the approach is still novel and very relevant to today’s mobile interactions. The first reason is target acquisition and selection on smartphones: it remains one of the more difficult tasks when compared to desktop computers with a mouse. TinyMotion supports target acquisition in a similar way to the IBM (I wondered why they were interested in this until I thought about it…) ThinkPad’s pointing stick. Now that we have high-resolution cameras and algorithms that can also detect pressure, maybe we can fully emulate the pointing stick with a camera and allow even faster and more accurate target selections?
Pointing sticks were successful for two main reasons: 1) a sensitive pressure sensor allows variable pointer movement speed, and 2) the user can use the stick without ever having to leave the keyboard area. Having addressed the first part above, I think the second part can also be addressed if the camera is positioned correctly. Perhaps an additional button on the edge of the screen could help confirm the selection as well; the user’s fingers and thumbs would not need to leave the keypad area while pressing the selection button. Why would we want to do that today, when we have smartphones that support touch? Two main reasons: 1) even though touch is good at selections over long distances, it is still bad at selecting objects whose selection area is too small, due to the fat finger problem; 2) some people still cannot use smartphones – and it’s not because they can’t afford to buy one. Satellite phones and phones that have to work in less ideal conditions cannot have all these new features, for various reasons. It would be interesting if we could have one low-cost sensor support multiple functionalities, and thus this is a potential point of revisit today. Sensing Techniques for Mobile Interaction In this paper, the authors attempted to allow users more mobility by proposing the use of sensors on mobile devices to infer the intentions of the users. Specifically, the authors focused on prompts for voice memos and on power management of the mobile devices. Different sensors are implemented to support different context variables. The authors described the context variables by providing a taxonomy of usable and derivable contexts from these sensors. An informal user study was performed to explore the potential usefulness (via anecdotes and surveys) and usability problems (e.g. false positives) of the methods. I’m pretty impressed with some of the things they put here.
For example: I was not aware that they had studied portrait/landscape view mode detection with tilt sensors well before the iPhone was released. The voice prompt is also a novel idea, and in some cases it is potentially a preferred method: users may feel such technology is less intrusive (not sure yet; Dr. Lee and I have to study this very soon…) than having a device listening to what you say all the time (waiting for “Okay Google”, for example). However, this prompt is less useful than the current alternative if users want hands-free use. It may be interesting to see whether we can find a compromise between these two approaches to make users feel more comfortable using such systems. (Potential future work for me?)

Jose Michael Joseph 8:38:40 10/16/2014

Camera Phone based Motion Sensing: Interaction Techniques, Applications and Performance Study This paper talks about using the camera on a camera phone to gather data about the relative motion of the device and using that information to guide the applications present on the phone. The application described in this paper is called TinyMotion. It uses both image differencing and correlation of blocks for motion estimation. It has four major steps: color space conversion, grid sampling, motion estimation, and post processing. The notable thing about these operations is that they are realized using integer-only operations, which means the computation is limited and does not consume much processing power or battery. One of the main drawbacks, in my view, of this application is that it continuously relies on data from the camera to make motion estimates. The battery consumed by the camera, and the effort of recharging it over time, would be far greater than the cost of actually adding an accelerometer to the device, which casts doubt on the feasibility of such an application. Continuous use of this application would reduce battery life by more than 50%, too significant a drain to consider a wider deployment of this project. A sub-application implemented to show the feasibility of TinyMotion was TiltText, which involves tilting the phone in a particular direction to type a particular letter. Given that this was before the widespread use of QWERTY keyboards on cellular devices, the method of input still feels like more of a project than something that can be applied at large scale. The traditional method of hitting a key multiple times to produce a letter would still achieve better times than this application, and the use of a predictive text scheme such as T9 would definitely render such an input method inefficient.
While testing the application, it was also noticed that participants felt it was more difficult to acquire targets along vertical distances. This could be because the human wrist is much better suited to moving horizontally than vertically, especially in the position in which one holds a phone. These are subtle design considerations that generally have the ability to make or break an application. In conclusion, although I find the application interesting, I do not see much feasibility for it in terms of large-scale implementation.
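The integer-only constraint highlighted in this review is worth making concrete. The first two pipeline steps, color space conversion and grid sampling, can both be done with shifts and slicing alone. The luma weights below are a common fixed-point approximation I chose for illustration, not necessarily the paper's exact formula:

```python
def to_luma(r, g, b):
    """Integer-only approximate luma: Y ~ (2R + 5G + B) / 8, computed
    with a shift instead of a divide. The weights (0.25, 0.625, 0.125)
    roughly track the usual luma coefficients; this is an illustrative
    fixed-point trick, not the paper's exact conversion.
    """
    return (2 * r + 5 * g + b) >> 3

def grid_sample(frame, step=4):
    """Keep every step-th pixel in both directions, shrinking the
    work for motion estimation by a factor of step * step.
    """
    return [row[::step] for row in frame[::step]]
```

Tricks like these are why the pipeline stays cheap enough for a phone CPU with no floating-point hardware: every per-pixel operation is an integer multiply, add, or shift.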

Jose Michael Joseph 8:39:21 10/16/2014

Sensing techniques for Mobile Interaction: This paper explores the various ways a generic mobile phone can be equipped with extra sensors to create a device that is more flexible in its input mechanism and is thus a more versatile machine. Some of the features implemented are changing the phone from portrait to landscape mode by tilting the screen, using tilting to scroll the page, and recording a memo by just speaking close to the phone. The sensors added to the phone are a two-axis linear accelerometer, capacitive touch sensors, and an infrared proximity range sensor. One problem with this setup is that it is unable to determine whether the user is holding the device with the display right side up or upside down. The authors state that this can quickly be fixed by adding a simple gravity-activated switch; the effects of adding this switch on power consumption, and any subsequent performance changes, are not discussed, presumably because they were not in the scope of this paper. The proximity sensor reached its maximum response at 5-7 cm from the sensor. This was a problem because if an object was brought any closer there would be no change in the reading, as the maximum had already been reached. The authors also pointed out that the data from this sensor would be noisy beyond 25 cm. No solution to this problem was suggested; the authors simply implemented a workaround by calibrating the device around the sensor’s limitations. A feature I was personally very impressed by was the voice memo feature, where one simply has to talk close to the phone to record a voice memo. I intuitively felt that such a feature would be very useful and would require less conscious action. But after looking at the statistics gleaned from the experiments, I was surprised to find that the traditional button approach and the intuitive new approach provided the same efficiency.
I believe a major reason for this might have been the feedback produced by the phone. I strongly feel that if, instead of a sharp beep, the phone buzzed to indicate that it is recording, the efficiency of the application would have been much better, because auditory feedback requires more conscious attention than haptic feedback. Another problem the authors encountered, which I myself have encountered numerous times on low-end Android phones, is the screen rotating to landscape mode when you put the phone down. I thought it was interesting that the authors had found a solution to this using a FIFO queue. In conclusion, I feel that this is a very interesting paper which could have interesting implications for technology if its limitations are dealt with properly.
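The FIFO-queue fix mentioned above can be sketched as a debouncing filter: keep a short queue of recent orientation readings and switch the display only when they all agree, so a brief tilt while setting the phone down cannot flip the screen. The queue length and state names below are my illustrative choices, not the paper's:

```python
from collections import deque

class OrientationFilter:
    """Debounce display-orientation changes with a short FIFO of
    recent tilt-derived readings: the display switches only when the
    last n readings agree. A sketch of the FIFO idea the paper uses;
    n and the reading labels are illustrative.
    """
    def __init__(self, n=5):
        self.recent = deque(maxlen=n)   # FIFO of recent readings
        self.current = "portrait"       # assumed initial orientation

    def update(self, reading):
        self.recent.append(reading)
        # Switch only on a full queue of unanimous readings.
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            self.current = self.recent[0]
        return self.current
```

A single spurious reading, such as the tilt spike produced by laying the phone flat, breaks unanimity and leaves the display alone, which is exactly the false-positive the reviewers complain about on phones without this filtering.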

Christopher Thomas 8:48:55 10/16/2014

READINGS FOR OCTOBER 16 (NOT OCTOBER 14TH LIKE IT SAYS, SINCE CLASS WAS CANCELLED OCTOBER 14) 2-3 Sentence Summary of Sensing Techniques for Mobile Interaction – In this paper, the authors modify a mobile device by adding sensors, such as an IR sensor, touch sensor, tilt sensor, etc. They use the new sensors to extend the functionality of the interface, detecting things like being viewed from a different angle. It was very interesting reading this paper, because in many ways what was done with the cell phone was similar to our class project. Obviously, the paper was more complicated than the trivial things we were doing in our project, but still, the motivation was the same. The idea here was detecting the user’s context and using that information in some intelligent way to improve the user’s experience. For instance, the tilt sensor could be used to detect when a user was looking at the display, holding the phone at their side, walking, etc., based on differences in the waveform. In our class project, we did something very similar, except our goal was to augment a computer mouse. We extended the mouse with photocells to detect whether or not the user’s hand was on the mouse, and added vibration motors and force sensors to detect how hard the user was clicking. By detecting how hard the user was clicking, we were able to enhance the user’s experience by eliminating the need for double clicking and simply requiring the user to click a little harder. Similarly, we used the user’s context information to determine whether or not to vibrate the mouse (i.e. don’t vibrate it when a notification comes in if their hand isn’t on the mouse). While the projects are obviously different, and the end goals are different as well, reading through the authors' engineering of their device reminded me strongly of the design challenges of our own device. We had to consider how this new information could be used in a helpful way.
The common thread of the paper, though, is that detecting context is critical for making changes and adjustments to the interface. Adjusting the display orientation, knowing how long the user has held the device, detecting ambient light, etc., are all different ways of knowing about the context the user and device are currently in. Many of these things are direct parallels to what we were doing with the mouse (e.g., detecting ambient light). Mobile devices, though, introduce both more difficulties and more opportunities. For instance, on a mobile device, one can rotate the display when the phone is at a different orientation. The authors discussed axis control, tilt scrolling, and portrait vs. landscape modes at some length. Even though I use mobile devices every day, I never really thought about how difficult implementing one of those things must actually have been. Reading the discussion about axis/orientation issues and those complexities gave me a new appreciation of how some things that seem obvious may not be so easy to do in practice. Even state-of-the-art devices usually allow only portrait vs. landscape and do not allow some in-between state at an angle. This is even more of a challenge on a tabletop computer like the Microsoft Surface, where one needs to deal with orientations at all angles, not just two. Finally, I want to point out that a lot of what the authors did was first motivated by observing people using devices and taking notes, for instance observing people holding the phone and then pressing the power button. This allowed the authors to make changes to the interface, saving the users a button press. Similarly, we dealt with the same kind of design issues by observing our own use of the mouse and designing the locations of the sensors and vibration motors around those observations.
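One reason tilt-driven orientation switching is harder than it looks is that a naive angle threshold makes the display flicker when the phone is held near the switching angle, so implementations typically use a dead band. The sketch below illustrates the idea with invented threshold values; it is not the mechanism from the paper.

```python
class OrientationFilter:
    """Debounced portrait/landscape switching from tilt angles.

    The thresholds are hypothetical; the point is the dead band:
    the mode changes only once the tilt passes well beyond the
    boundary, so small hand tremors near it do not flip the display.
    """

    ENTER_LANDSCAPE = 60  # degrees of left/right tilt to enter landscape
    EXIT_LANDSCAPE = 30   # tilt must fall back below this to leave it

    def __init__(self):
        self.mode = "portrait"

    def update(self, tilt_deg):
        """Feed one tilt-sensor sample; return the current display mode."""
        if self.mode == "portrait" and abs(tilt_deg) > self.ENTER_LANDSCAPE:
            self.mode = "landscape"
        elif self.mode == "landscape" and abs(tilt_deg) < self.EXIT_LANDSCAPE:
            self.mode = "portrait"
        return self.mode
```

With this filter, a reading of 45 degrees keeps whatever mode the display is already in, which is exactly the "in-between state" problem the critique mentions.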
2-3 Sentence Summary of Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study – The authors introduce TinyMotion, a technique that uses the phone's camera to detect motion on a primitive cell phone. The technique can detect whether the phone is moving and in which direction. Several use cases are considered, including text entry, and a user evaluation is performed. When I began reading this paper, I first thought it was useless. Why would we need to detect motion this way, especially since most phones have gyroscopes, accelerometers, etc.? However, once I started reading, I quickly realized that this was on an old phone, not a touch screen but an old flip phone, which made much more sense. I actually had this phone many years ago, so I remember how slow it was. Thus, being able to detect motion in real time using only integer arithmetic, and handling it on such a primitive piece of hardware, is truly an accomplishment in its own right. I also found it interesting that the authors were not naïve about their work and tested the system in many different contexts, including low-light environments, to see how poor illumination affected the performance. If I remember this phone correctly, the camera was quite bad, so getting it to work at night was also impressive. I really liked the concept of TiltText, an application based on TinyMotion. One thing that was awful about these old phones is that to enter anything, you either had to hit keys many times (e.g., three presses to get 'C') or hope that the prediction algorithm worked. TiltText allows the user to just hold the key and make a gesture with the phone to input the letter. Thus, it allows fast one-handed text entry without having to press a button multiple times. The biggest challenge here is getting users to learn the gestures necessary to use the interface properly.
Extending the ability to detect motion on such a primitive platform gave it functionality that typically wasn't seen for several more years, until phones came standard with accelerometers, gyroscopes, etc. One thing I would like to point out, which I think is interesting, is that users can sometimes discover things the designers didn't even realize. For instance, the designers assumed that people would move the whole phone to make a gesture. However, one of the users evaluating the TinyMotion system discovered that instead of moving the phone, they could just move their other finger in front of the camera and "trick" the system into thinking it was being moved. The technology can therefore be used in ways the authors didn't originally envision, which I think is interesting. In fact, that same realization gave rise to other papers by Dr. Wang and his students in which people make gestures by moving their fingers over the camera rather than moving the whole device. It was also interesting to see how Fitts' law was applied here for target acquisition, by measuring the performance of selecting items using TinyMotion. I would not have thought it possible to use Fitts' law for something like that, but it is in fact possible to get empirical measurements of the effectiveness of the technique. I also loved this idea because it extended the phone's capabilities in so many ways. Before this, there was no way to "draw" on this phone. After TinyMotion, users could draw gestures by moving the phone around, and most of the characters shown looked quite good. Thus, the introduction of this input channel greatly extended the functionality afforded to the user, and extended what application designers can take advantage of on this platform, even though TinyMotion itself was a software project with no hardware modifications necessary.
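To make the Fitts' law evaluation concrete: a target-acquisition study of this kind computes an index of difficulty for each distance/width condition and fits mean movement time against it. The formulas below are the standard Shannon formulation and an ordinary least-squares fit; the trial numbers are invented placeholders, not the paper's data.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

# Invented (target distance, target width, mean movement time in ms)
# conditions standing in for real study measurements.
trials = [(64, 16, 900), (128, 16, 1200), (256, 16, 1500)]

# Fit MT = a + b * ID by least squares to recover the usual
# Fitts' law regression constants a (intercept) and b (slope).
ids = [index_of_difficulty(d, w) for d, w, _ in trials]
times = [t for _, _, t in trials]
n = len(trials)
mean_id = sum(ids) / n
mean_t = sum(times) / n
b = sum((i - mean_id) * (t - mean_t) for i, t in zip(ids, times)) \
    / sum((i - mean_id) ** 2 for i in ids)
a = mean_t - b * mean_id
```

The slope b (ms per bit) is what lets one input technique be compared empirically against another, which is how the paper can put TinyMotion pointing on the same footing as other devices.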
Thus, the paper demonstrates that adding new hardware is sometimes unnecessary: taking advantage of the existing hardware in an intelligent way can extend a device's functionality as if additional sensors were present, and user-level applications can then simply use the API. This is something we dealt with directly in our homework 2 project.

Vivek Punjabi 9:13:19 10/16/2014

Sensing Techniques for Mobile Interaction: The paper introduces some new techniques for human-computer interaction with mobile/handheld devices. The authors have tried to use some basic sensors to support common daily activities, for example a touch sensor on the whole device body to check whether the user is holding it, and a proximity sensor to check for objects in the vicinity. They have produced a prototype device equipped with these sensors to test the ideas. They have also added some software functionality, such as voice memo detection and display mode detection. They have performed various experiments to check for issues in many respects and then tried to overcome the problems whenever feasible. One of the common problems is power management. They conclude by emphasizing the value of the most basic sensors, along with some future directions for improving the system. The approach of using basic sensors to carry out common activities seems plausible and works in many cases. But, in my opinion, this makes things rather vague and too basic. Instead, we could try to find some interesting input devices that can be used for multiple purposes, such as using smart fiber optics to create the device body instead of just adding touch sensors all over the device. However, this paper still provides motivation to focus on solving common problems with the same or new sensors. Camera Phone Based Motion Sensing: Interaction Techniques, Applications and Performance Study: The paper introduces new software, TinyMotion, for creating applications that can detect a mobile phone user's hand movements in real time by analyzing image sequences captured by the camera. The paper emphasizes the role of mobile phones and camera-based applications in today's world. It then gives the algorithm and implementation used for the TinyMotion software, which has four major steps: color space conversion, grid sampling, motion estimation, and post processing.
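The four steps listed above can be sketched in miniature. This is not the actual TinyMotion source, just a simplified Python illustration of the first three stages; the luminance coefficients, grid step, and search range are chosen for illustration, and the post-processing stage (turning motion vectors into gestures) is omitted. The integer-only arithmetic mirrors the constraint that the target handsets had no floating-point hardware.

```python
def to_gray(frame_rgb):
    """Color space conversion: integer approximation of luminance.

    The coefficients 77/150/29 sum to 256, so the shift by 8
    keeps everything in integer arithmetic.
    """
    return [[(77 * r + 150 * g + 29 * b) >> 8 for (r, g, b) in row]
            for row in frame_rgb]

def grid_sample(gray, step=4):
    """Grid sampling: keep every step-th pixel to cut computation."""
    return [row[::step] for row in gray[::step]]

def estimate_motion(prev, curr, max_shift=2):
    """Motion estimation: exhaustive block matching over small shifts.

    Returns the (dx, dy) minimizing the sum of absolute differences
    between the shifted previous frame and the current frame.
    """
    h, w = len(curr), len(curr[0])
    best, best_sad = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            for y in range(max_shift, h - max_shift):
                for x in range(max_shift, w - max_shift):
                    sad += abs(curr[y][x] - prev[y + dy][x + dx])
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```

Even this toy version shows why grid sampling matters: block matching is the expensive step, and its cost drops quadratically with the sampling step.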
There are various input methods for entering text, such as Mobile Gesture and Vision TiltText. The authors then benchmarked the application informally and formally and presented the evaluation results in terms of Fitts' law, menu selection, and text input. They have also considered more complex applications, such as handwriting recognition in different languages. The paper then discusses the common problems, comparisons, and future directions, and finally concludes by giving some applications and uses of TinyMotion. The work uses computer vision, pattern recognition, and language processing, which makes the application a bit more complex, but it seems very useful and helpful, provided it is accurate and precise. It's good that the software is made open source and freely available, so we can modify it as per our needs. Though it introduces some new methods of input, it looks easy to get used to, which makes it more practical. The paper thus provides a good platform to start from in this direction, where we can try modifications to see whether another algorithm works better or more efficiently.