User Interface Toolkits and Adaptive Interfaces
- Past, Present, and Future of User Interface Software Tools, Brad Myers, Scott E. Hudson, Randy Pausch, ACM Transactions on Computer-Human Interaction, March 2000, pp. 3 - 28.
- Principles of Mixed-Initiative User Interfaces, Eric Horvitz, CHI 1999: ACM Conference on Human Factors in Computing Systems, pp. 159-166
- The Proximity Toolkit is a catalyst for developing applications that make use of spatial information and relations between objects in space.
- Direct manipulation vs. interface agents, Ben Shneiderman, Pattie Maes, ACM interactions, Volume 4 Issue 6, Nov./Dec. 1997 (original debate happened in CHI 1997)
Eric Gratta 21:40:33 9/23/2014
Past, Present, and Future of User Interface Software Tools (2000) Brad Myers, Scott E. Hudson, Randy Pausch This was a very thorough survey paper of the various means by which designers, developers, and to some extent end-users contribute to the creation of user interfaces. Some new and useful concepts used to describe these interface software tools included learning “thresholds” and “ceilings,” as well as the “moving target” problem and “ubiquitous computing.” The learning threshold refers to how easy it is to learn to use a program, and the ceiling describes the limit of what is possible to achieve by learning it. Ideally, an interface-building (or any other) program should provide a low threshold to using the application as well as offer a high ceiling of possibilities for what can be created (or done). The moving target problem refers to when a tool is developed to accomplish a task that is already obsolete or fading by the time the tool is available or mature. The term “ubiquitous computing,” which I have definitely heard of before, seems to have been displaced in favor of “the Internet of Things” as a way of describing the trend toward a multiplicity of small, interconnected devices that communicate with each other, sharing data via the Internet infrastructure. The paper identified many issues with the then-modern (2000) interface software tools that “worked,” many of which seem to have been addressed in the 14 years since, which gives the impression that the authors of this paper were very insightful, or at least aggregated the results of others’ work very well. Just as one example of the paper’s insight, event-based languages have adapted to work equally well for recognition-based interfaces, a challenge the paper predicted these languages would have to face.
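The event-language paradigm discussed above can be sketched minimally. The following Python sketch is illustrative only (the names and phases are invented for this example, not any real toolkit's API): handlers subscribe to input phases and receive the event's parameters.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    phase: str   # "began", "moved", or "ended"
    x: float
    y: float

class EventDispatcher:
    """Minimal event-language core: route each event to the handlers
    registered for its phase."""
    def __init__(self):
        self.handlers = {}

    def on(self, phase, handler):
        self.handlers.setdefault(phase, []).append(handler)

    def dispatch(self, event):
        for handler in self.handlers.get(event.phase, []):
            handler(event)

log = []
d = EventDispatcher()
d.on("began", lambda e: log.append(("start", e.x, e.y)))
d.on("ended", lambda e: log.append(("stop", e.x, e.y)))
d.dispatch(TouchEvent("began", 10.0, 20.0))
d.dispatch(TouchEvent("moved", 15.0, 25.0))  # no handler registered; silently ignored
d.dispatch(TouchEvent("ended", 15.0, 25.0))
```

Real frameworks layer much more on top (gesture recognizers, sub-event phases such as deceleration), but the subscribe-and-dispatch core is the same.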
For example, Apple’s Cocoa Touch framework uses events to indicate both the beginning and end of a touch, in some cases even indicating sub-events such as deceleration and acceleration, and it provides the application developer with access to the parameters of the touch input for even more customizable interactivity. As for the approaches that had not caught on by the time this was published, those methods remain unpopular or nonexistent. The main takeaway from this paper was that the arrival of ubiquitous computing brings with it a great diversity of computing devices and thus a great diversity of interfaces, creating a need for low-level consistency across those devices and some means of generalizing interface programming such that developers’ time is not devoted to mastering a plethora of programming techniques and interface creation paradigms. The best example of a direct attempt to address this problem is the Google Android platform, which is hardware-independent and open source. It seems that Google’s insight about this issue has contributed significantly to its success as a mobile (and other device) platform. -------------------------------------------------------------------- Principles of Mixed-Initiative User Interfaces (1999) Eric Horvitz This paper addresses an issue different from yet similar to the one I brought up in response to the first paper we read, on Direct Manipulation Interfaces. In that paper, the author diametrically opposed two systems – namely, the “conversation” metaphor and the “model-world” metaphor interfaces – when in fact, new direct manipulation interfaces could still incorporate elements of the fading dialog-based systems, and leverage some advantages in doing so.
This paper addresses a similarly false perception (in 1999) that studying the use of intelligent automation by user agents and enhancing direct manipulation of objects are diametrical tasks, when in fact researching the combination of the two may offer distinct advantages over further developing either technique independently. It uses an application called “LookOut,” which enhances Microsoft Outlook by adding a user agent that predicts when a user wishes to schedule a calendar event based on the contents of a received e-mail. Indicative of the influence of this paper is the prevalence of predictive tools in mail applications today, which augment e-mails with hypertext links to scheduling actions. It seems that the social agent concept, which earlier Windows operating systems were well known for supporting, died out, and predictive help is now generally limited to non-interfering visual cues. The elimination of the social agent may have been due to its tendency to interrupt the user, but perhaps also because using just text eliminates the complexity involved in predicting when the system should act; in the case of text cues, the system always acts if the goal is probable based on the collected evidence (scheduling-related text). It was interesting that context, in this case where the user is in the application and what they are doing, was an important feature in their predictive model for deciding to act. Depending on the level of confidence about the user’s current goal (based on the context and prior learning), the system might ask the user before acting rather than acting independently. This is because the confidence level affects the system’s prediction about the utility of acting in that context. Through machine learning techniques, the interface could also learn to prompt users at times calculated to be optimal. While novel at the time, it may be because the social agent paradigm had little utility itself that it is no longer used today!
That said, there may be other occasions where learning users’ habits in a temporal manner may lead to helpful system actions that do not interfere with the user's workflow, such as data prefetching.
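The act-versus-ask behavior described above can be sketched as an expected-utility comparison. The utility constants below are invented for illustration; they are not values from Horvitz's paper:

```python
def choose_action(p_goal, u_act_goal=1.0, u_act_no_goal=-2.0,
                  u_ask_goal=0.5, u_ask_no_goal=-0.1):
    """Pick the option with the highest expected utility, given the
    inferred probability that the user actually has the goal.
    The utility constants are illustrative assumptions."""
    eu = {
        "act": p_goal * u_act_goal + (1 - p_goal) * u_act_no_goal,
        "ask": p_goal * u_ask_goal + (1 - p_goal) * u_ask_no_goal,
        "nothing": 0.0,
    }
    return max(eu, key=eu.get)

# High confidence: act. Middling: ask first. Low: stay out of the way.
```

With these illustrative numbers the agent does nothing below about p = 0.17, asks for confirmation in the middle band, and acts on its own above about p = 0.79; Horvitz derives exactly this kind of three-region policy from the utilities.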
Bhavin Modi 21:46:01 9/24/2014
Reading Critique on Past, Present and Future of User Interface Software Tools The paper is about trends in user interface research. With a brief history and future possibilities, it describes how interface software tools should be built and what to keep in mind. The main idea is that innovation in tools trails innovation in user interfaces. User interfaces were standardized when this paper was written, and the authors are right on the mark when talking about future changes. These changes were attributed to ubiquitous computing, end-user customization, gesture and speech recognition, and 3D applications. The problems mentioned are tools trailing behind innovation and the moving target problem, which is inevitable because design paradigms change, and with them the requirements. The thing to keep in mind again is that today’s technologies began as research in the ’80s and ’90s. Tools can be developed keeping that in mind, along with future trends. We now move on to the aspects of the tools created for efficient user interface design. The goal to achieve in this respect is a low threshold (learning curve) together with a high ceiling (increased functionality). This is difficult to achieve, as more functionality involves more complex programming and knowledge, which is unavoidable. The drawbacks of many implementations meant to increase ease are shown: unpredictability, not giving extended control to experienced programmers, and again the moving target dilemma. Researchers have explored many areas of work, including using heuristics to automate the process and using imperative programming languages that deviate from the declarative style; the paper also covers the growing importance of scripting languages and why the WWW is popular. Basically, users want control, sometimes over every aspect, but at the same time do not want to work more; well, who wouldn’t want this? The importance of the paper lies in analysing the different approaches taken by researchers.
Though all may not have succeeded, many will now gain importance with changes in computing paradigms. The age of standardization is fading with ubiquitous computing taking over; we now move to smartphones and watches and explore speech and gesture recognition. The various avenues of research are depicted, which helps you understand the trends and where you might want to fit in. As mentioned many times, everything is good for something and worse for something else; we just need to figure out what that something is. The point to remember amidst all of this is that we are creatures of habit: though new interface designs are inevitably needed, one should not forget that what we have today has been built through decades of research and is one of the best approaches. Integration is the key, rather than discarding the old as obsolete. --------------------------------------------------------------------------------------------------------- Reading Critique on Principles of Mixed-Initiative User Interface Taking the best of both worlds, direct manipulation interfaces and intelligent agents are combined to create a mixed-initiative user interface; simply put, user interface automation. Research has so far treated intelligent agents and direct manipulation interfaces as two separate entities. The author creates a new application called LookOut to merge these different threads. The interface has a lot of practical uses and convenience. It complements the existing interface and learns from the user's activities. This just feels like common sense: since we make interfaces for the user (it is called a user interface, after all), why not have the system develop as per user preferences? Applications today achieve these goals too, like Apple's Siri, which shows you nearby places to eat on a map when you are hungry. But these too are activated by the user and are not automated; since people are unpredictable, it is a challenge to implement for all possible scenarios.
Moving on to the design aspects, one thing that could be done is to invoke the agent by voice recognition, as that would be as fast as automation and solves most of the problems mentioned: interference with existing tasks, the user's point of focus, and unnecessary invocation, among others. The paper again implores us to explore new avenues of research, in this case intelligent agents. Using artificial intelligence, we can greatly improve the scope of user interfaces. One possible idea is to create self-redesigning interfaces that learn and adapt according to usage, converging on the interface that best fits the user. Examples include automatically putting the most frequently used icons in easily reachable locations, creating one's own gestures for controlling functions, or a keyboard that automatically reorganizes itself to your preference (the software layout, of course, not the hardware). So there is scope for innovation here. The application developed has a lot of value in terms of ease of use; its workings have been clearly specified, with the problems and advantages clearly mentioned. Basically, the unpredictability of human nature is depicted as a huge factor, and Bayesian network probabilities and Markov decision processes, taking pointers from AI and machine learning, are offered as a solution. Learning from and using techniques from other fields is also a good area to look into.
Yanbing Xue 21:54:52 9/24/2014
The first paper is mainly about the past and the future of developing user interface software tools. It also introduces a lot of existing examples to show how past research has dealt with the problems. First, a history of the development of software tools for user interfaces is discussed. We can see many of those were very popular in the past and some of them are still widely used today: Visual Basic, MFC. Beyond programming languages and libraries, there were other, even more fundamental parts of software tools, including event languages, scripting languages, object-oriented programming, and window managers. Due to new types of devices, such as cell phones and tablets, the authors accurately predict that new interface design tools will be needed, and that analyzing what has worked well for design tools in the past will help guide the creation of these new tools. New tools for developing user interfaces in the future are predicted to be different from currently existing tools. They will likely not be based around events, but rather maintain rich information about the user, device, and application state. There are some important working features of UIs from the past. One is overlapping windows, due to their effectiveness and intuitiveness for humans. The other is the event language. Actually, I did not think of this as a paradigm before, though I keep using it to build every component of a UI and control each respectively. However, the future direction is not the event language but providing a rich context of information about the user, the devices, and the application's state. The primary difference is that the input is uncertain: the recognizer can make errors interpreting the input. Therefore, interfaces must contain feedback facilities to allow the user to monitor and correct the interpretation. -------------- The second paper is mainly about principles to use when adding automation to user interfaces. LookOut is described as an example of this technology.
With the development of human-computer interaction, there are two promising directions for designers to take with their interfaces: direct manipulation and interfaces with intelligent agents. First, the paper reviewed key challenges and opportunities for building mixed-initiative user interfaces, which enable users and intelligent agents to collaborate efficiently. Then a set of principles was provided to direct the design of mixed-initiative user interfaces, addressing systematic problems with the use of agents that may often have to guess about a user's needs. Next, the author focused on methods for managing the uncertainties that agents may have about users' goals and focus of attention. The basic thrust of the principles, in terms of the areas on which the author later focuses, appears to be taking action when the user can obviously benefit from such action, refraining from action when it is obviously inappropriate, and correctly evaluating situations to avoid either taking unwanted action or not taking action when it is desirable. While the ideas and techniques presented here are interesting and intuitive, it is very difficult to determine whether they may be useful in practice because there is essentially no experimentation reported in this paper. The author discusses the capabilities of the Microsoft LookOut tool, but fails to report on how users either liked or were able to manipulate it. It is not easy to come up with modeling in HCI, I think, so this paper conveys a theoretical overview.
Nick Katsipoulakis 21:57:06 9/24/2014
Past, Present and Future of User Interface Software Tools: In this article, the authors present their ideas on UI development tools and how those tools are going to be affected by research progress in computer science. This work concentrates on toolkits and development kits for interface design in computer systems. Some of those succeeded in providing tools for usable interface creation and others did not deliver on their promises. Among those tools, there exist some that had a great impact on software engineering in general (i.e. object-oriented programming, scripting languages, etc.). In my view, the “meat” of this publication is the authors’ thoughts on future prospects and visions for UI design. It is mentioned that computers are becoming a commodity, part of many digital objects. Also, technological advances provide designers with new I/O paths for devices and larger audiences that will use a product. Furthermore, distributed computing dictates the need for synchronization, sharing of information, and attention to security issues in applications. Systems will be able to recognize gestures, and 3D is going to be available on many devices. Finally, the authors address a number of points that new UI tools will need to address in order to provide a better user experience. /// ------------------------------- END OF FIRST CRITIQUE -------------------------------// Principles of Mixed-Initiative User Interfaces: The author of this paper addresses a debate, which existed at the time of writing, between UIs concentrating on direct manipulation and automated interfaces. The goal of the author is to examine the aforementioned approaches to designing a user interface from the ground up. In the beginning, a number of important principles for a mixed-initiative UI are provided. These concentrate on the definition of design aspects that a designer needs to consider during the development process.
LookOut is also presented, which is a system prototype for aiding users with calendar organizing and scheduling. This prototype is designed to work in different modes according to the user’s need for help. LookOut can gather information from e-mails and from the user’s voice tone and words. Then it feeds this information to classification algorithms and is able to make decisions for the user. To improve LookOut's ability to make decisions, its designers developed an attention-tracking ability. Through interactions and behavioral patterns, LookOut is able to identify users’ needs with a high success rate. Overall, LookOut is the result of mixing direct manipulation software with automation.
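The e-mail analysis step can be imagined as a probabilistic text classifier. The toy sketch below (keyword odds updating, with made-up words and numbers; not LookOut's actual classifier) shows the flavor of turning message text into a goal probability:

```python
# Toy estimate of P(e-mail is scheduling-related) from keyword evidence.
# The keyword list, prior, and boost factor are invented for illustration.
SCHEDULING_WORDS = {"meet", "meeting", "tomorrow", "schedule", "lunch", "pm"}

def p_scheduling(text, prior=0.3, boost=2.0):
    """Each keyword hit multiplies the odds of the scheduling goal by `boost`."""
    hits = sum(1 for w in text.lower().split() if w.strip(".,!?") in SCHEDULING_WORDS)
    odds = prior / (1 - prior) * boost ** hits
    return odds / (1 + odds)
```

A message like "Can we meet tomorrow at 3 pm?" scores three hits and comes out near 0.77, while keyword-free text stays at the 0.3 prior; a real system would feed such a probability into the decision about whether to offer scheduling help.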
phuongpham 22:55:06 9/24/2014
Past, present and future of user interface software tools: I really enjoyed reading this paper. It shows two things about research that our professor has mentioned in class: new technology brings new opportunities, and toolkits are necessary for all new technologies. I have not read many papers in the HCI area before. However, reading this paper gives me the feeling that HCI has merged into almost every computer-related task today, from OSes to programming languages. I found it easy to understand what the authors discuss because I have used most of the things mentioned in the paper, which is different from other research. The authors offer insights about the pros and cons of current technologies. In order to be successful, user interface tools have to be closer to their users, i.e., reduce the gulfs. All the reasons for success and failure given in the paper are very interesting. The authors are also correct about the future challenges; we are facing the same challenges at the moment, i.e., nearly 15 years after the paper was published. However, it is really hard to control the moving target, and I don't think some "failed" projects really failed. Moreover, as we have seen before, even if they are not successful now, future technologies may apply these projects' results. ***Principles of Mixed-initiative user interface: this paper proposed principles for a mixed-initiative user interface and implemented LookOut, a mail client with a mixed-initiative user interface. The authors have pointed out that a system will become more helpful if it is able to care enough about its user. By incorporating automated components which can "understand" the user's goals, the application can be very helpful. Actually, I have observed the calendar auto-scan function in the Gmail web app. However, current technology has not gone as far as the paper intended, even though the authors have a clear set of desired principles and implemented them in a real application.
Current technologies have not reached the level of actually knowing the user's goals. The task becomes more complicated if users share the same account, and there are environmental factors which can affect a user's mood. I think the mixed-initiative approach could be applied in a restricted domain, in which we could eliminate many environmental factors and understand the user's reactions very well. Still, the paper offers a takeaway point: computer systems can always serve humans better if system designers care more about humans.
nro5 (Nathan Ong) 23:54:55 9/24/2014
Review of "Principles of Mixed-Initiative User Interfaces" by Eric Horvitz The paper describes a system that aids users in scheduling by reading e-mails and providing an easy way for users to take the data from an e-mail and place it on their calendar. In addition, the author developed an algorithm for timing when the program should perform its helpful task and when it should not, depending on a user's preference. I was lucky enough to be given a demonstration of this software during my time at Carnegie Mellon University, as well as to see the conference presentation that the author gave; to be able to return to it and read the paper is quite an experience. At the time I remember sitting there in awe of what computer technology could do, and how it could even act as an assistant that knew when it would be okay to do something for the user and when it would not. To be able to model something that complex made me erroneously and naively believe that all aspects of life could be reduced to some kind of easy formula derived from observation. I think the biggest contribution of the paper was the fact that a user's preference can be predicted through a three-part piecewise-defined function. To be able to determine when a user will want an assistant to automatically place information in the calendar is extremely useful, and for the times that the program predicts correctly, the user is pleasantly surprised and his or her day is made. At least, that was the impression that the author gave when he presented the results of his user study. At the time, user prediction was considered but was never a mature field of study; this paper seemed to make headway towards making more accurate predictions through historical actions. It also reaffirmed the importance of user predictability in the field of HCI, since a user experience can be greatly enhanced with it, but a minor error can make users annoyed.
Review of "Past, Present and Future of User Interface Software Tools" by Brad Myers, Scott E. Hudson, and Randy Pausch This paper is essentially a historical survey of the state of user interfaces and of the underlying software that aids designers and programmers in creating them, including approaches that did not take root. In addition, the authors present their hypotheses for what user interface software will focus on and what shape it will take in the future. I felt the most interesting portion was the predictions of what future UI software will look like. Almost all of the authors' predictions were correct; they had the insight to derive what the future would look like. I always assumed that predicting the future was impossible because people were not very predictable (this was after realizing that people's behavior cannot be captured by simple functions). To have that kind of insight into what a field of study will look like in the future is truly amazing, and I hope to gain a fraction of that kind of knowledge. The one prediction they got wrong was also quite interesting. They believed that there would be problems with "Non-overlapping layout or rectangular and opaque interactive components," giving the example of two translucent windows with buttons and how a user would be able to click one or the other. Currently, it seems that there is an aversion to working in translucent conditions, mostly because of a focus issue: if a user wants to multitask with physical objects, the objects he or she uses tend not to be translucent, so they are placed side by side. The same goes for current GUIs; users have the option of placing windows side by side or in any layout at all, but much like on a physical desktop, there is no room for transparency. I did not take the time to analyze the software that they cited, but I suspect that there are still very few situations where researchers believe transparency would be useful.
changsheng liu 0:49:07 9/25/2014
The first paper, “Past, Present and Future of User Interface Software Tools,” surveys interface design tools in terms of features and functions. It talks about the successes and failures of past user interface tools. The authors mainly focus on these aspects: threshold and ceiling, path of least resistance, predictability, and moving targets. The threshold is the learning curve for the tool: if the tool is not intuitive and takes users a long time to learn, the tool will fail. The ceiling means the maximum functionality the tool can provide to the user. What is more interesting is the moving target: it means that the software solves a need which has already become obsolete. The authors predict that there would be an increase in recognition-based user interfaces, like speech and camera-based vision systems. This is quite correct given current technology, such as Siri on the Apple iPhone and camera-based gesture recognition on the Xbox. In the second paper, “Principles of Mixed-Initiative User Interfaces,” the author described some challenges for building mixed-initiative user interfaces. The paper presents LookOut, which enables users to collaborate with the computer efficiently. The author also gave twelve critical factors for designing an effective mixed-initiative user interface. I think some factors are interesting, such as considering uncertainty about a user’s goals and considering the status of a user’s attention in the timing of services. Also, this paper has shown a new and interesting application of machine learning methods in user interface design.
Longhao Li 0:51:25 9/25/2014
Critique for Past, Present and Future of User Interface Software Tools This paper mainly talked about the different tools that people have developed for user interface software throughout history. The authors also point out some problems that should be kept in mind in the future. To my understanding, this paper is important since it not only covers the history of the development of software tools but also points out the problems that developers need to care about going forward. When introducing the tools that already existed, the authors analyzed whether each is easy to learn and how much functionality it can bring to the user. From this analysis we can get a good idea of whether a tool is the most suitable one for a certain task, so that developers can easily make appropriate software for clients. The tips about the future development of user interface software tools also draw our attention to reconsidering the tools that have already been developed, to see if they need to be modified in the future. The paper was published in 2000. After nearly 15 years of development of user interface software tools, we can see some of the issues the authors mentioned showing up today. The authors say that the average level of users’ computer skills is changing. Since developers need to make sure that most people can use an interface easily, they need to think about the interface for average people. If the average skill level changes, the interface will no longer be appropriate, so constantly changing the interface will be an important part of development. Nowadays, lots of elderly people are starting to use smartphones. Since older smartphone interfaces were designed for skilled users, they may not be suitable for elderly people, who have less experience using smartphones, so the interface should be changed. Apple did a good job on that.
Early versions of iOS were not so easy to use for people who need a big font or who are not good at typing, since at that time their users did not care much about this. Apple saw this change and added a lot of features to help people who have difficulty seeing the words on the small screen. Apple also brought in a voice-recognition interface to let people directly transfer speech into sentences. I believe that in the future, the average skill level of users will keep changing; focusing on that change will be very important for every interface designer and developer. Critique for Principles of Mixed-Initiative User Interfaces Basically, this paper talked about how to design an interface that can learn from the user’s behavior to enhance itself. The author used a related project as an example to show how self-learning can enhance an interface. This is a great paper, based on my understanding. Since interface developers need to design an interface suitable for most users, a static design may not be the optimal solution; it is better to make a self-learning, self-modifying interface that can suit different kinds of people. But achieving something like this is hard. From this paper, developers can learn the factors they may need to consider when developing a “smart” interface. I also believe that the example in the paper gives people the inspiration to think about how to make a self-learning interface. Nowadays, people are trying to achieve things like this. Self-learning ability has been added to input methods: they learn people’s typing habits so that they can predict what people want to type from just a few letters of input. Apple uses people’s location history to predict where home is, so that it can suggest how long it will take to get home at a given time.
I think in the future more smart interfaces will appear on the market to improve how people operate computers.
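The typing-habit learning mentioned above can be illustrated with a toy sketch; a real input method uses far richer models, and everything here (class name, data) is invented for illustration:

```python
from collections import Counter

class PrefixPredictor:
    """Toy word completion: count how often each word is typed, then
    complete a prefix with the most frequently seen match."""
    def __init__(self):
        self.counts = Counter()

    def observe(self, word):
        self.counts[word.lower()] += 1

    def predict(self, prefix):
        matches = [(n, w) for w, n in self.counts.items() if w.startswith(prefix)]
        return max(matches)[1] if matches else None

p = PrefixPredictor()
for word in ["hello", "hello", "help", "world"]:
    p.observe(word)
# After observing the user, "he" completes to the more frequent "hello".
```

The design choice is the same one the critique describes: the interface adapts to the individual user's history rather than shipping one static behavior for everyone.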
firstname.lastname@example.org 1:01:24 9/25/2014
Past, Present and Future of User Interface Software Tools This paper analyzes the successes and failures of past user interface tools and introduces themes by which to evaluate them: whether a tool helped where it was needed, how difficult it is to learn to use, how much can be done using the system, the path of least resistance, predictability, and moving targets. The paper also introduces future perspectives on interface development in mainly three ways. First, ubiquitous computing will require user interfaces designed to be used on digital devices of different scales. Recognition-based user interfaces will let the system automatically recognize the different needs of the user. And three-dimensional technologies will provide a better visual experience for users. As for future tools, end-user programming, customization, and scripting allow users to specify the interface themselves to meet personal expectations. Moreover, the authors present issues with current interfaces: for example, they demand higher skills from users, the non-overlapping, rectangular, opaque interactive components inherited as conventions from early interface development limit the feasibility of today's technologies, and the interfaces are lacking in support for evaluation, and so on. The evaluation themes are good criteria for identifying good user interface design, and the future interface design methods introduced in this paper have already been used in industry these days. A lot of companies have already made use of the mobile-first web interface design method, and recognition-based web authentication also exists for user interface design. As for three-dimensional widgets, the newly released Fire Phone from Amazon has a feature called Dynamic Perspective, which lets the user see widgets in 3D: it captures the location of the user's eyes and changes the appearance of the widgets accordingly.
The future trends the author identified were right, and the evaluation themes and limitations remain very useful for future interface design. Principles of Mixed-Initiative User Interfaces In this paper, a mixed-initiative approach is introduced to let automated services and users collaborate efficiently to achieve the user's goals, and several critical factors are also illustrated. The paper uses the LookOut application as an example to introduce the difficult challenges of, and solutions to, the systematic problem of uncertainty in using automated agents. It also calculates the costs and benefits of taking autonomous actions in different situations. The principles of mixed-initiative interaction are very useful as evaluation principles for enhancing the coupling of automated services with direct manipulation, and they benefit the improvement of human-computer interface design.
Brandon Jennings 1:24:23 9/25/2014
Past Present and Future of User Interface Software Tools This paper serves as a survey of user interface tools. It points out the pitfalls and successes of interface tools to gain a better understanding of where improvements can be made. The organization of this paper I found to be extremely useful. It starts by defining the trends in evaluating tools. These serve as a metric by which to measure the efficiency of user interface tools. Then it discusses different techniques that have worked to serve as a base for future work, such as window managers and interactive graphical tools. It goes on to explain approaches that have proven themselves to be promising but for one reason or another have not become a standard. The analysis of the different methods and techniques can be used as a reference for future work. This is very much related to the current industry because as advanced as our understanding of user interfaces is, there are still companies that develop products that fail or receive negative reviews because the interface is terrible. When researching it is important to look back to what has worked and has not worked so that you can make adjustments and prevent repeating mistakes. Analysis papers like these make it easier to establish correlations and highlight the major areas of the field. Principles of Mixed-Initiative User Interfaces This paper investigates methods that combine human-computer interaction techniques that involve direct manipulation and automated services. The paper presents fundamental grounding for enhancements in human-computer interaction. This paper presents standards by which to measure the effectiveness of the automated-direct manipulation package. It describes a relatively lengthy list of important considerations when designing such systems. I appreciate the demonstration of using LookOut on Microsoft Outlook to provide some sort of solid concrete example. 
This paper is important because it provides a notion of automating direct manipulation programs to enhance the user's efficiency. The ability of a system to make decisions for the user based on preferences is an approach that is becoming more prevalent, especially in phones and computers. Many devices can now use a person's preferences and habits to automatically predict what the user might want to do. This technique of combining automation and direct manipulation will prove to be extremely effective for productivity.
Mengsi Lou 1:36:18 9/25/2014
Past, Present and Future of User Interface Software Tools -----------This paper covers the past, present, and future of the field of HCI; we can learn useful experience, and also lessons, from the past. The year 1999 was a turning point for HCI's diversity, meaning the field would turn from a small scale toward various topics, including computerized devices and recognition-based user interfaces. -----------In introducing the past of HCI, I am excited to see that many familiar and important developments and designs became so successful that they contribute much to our life now, such as window managers and toolkits, scripting languages, hypertext, and object-oriented programming. We also need to look at the failures of previous work, including user interface management systems, formal-language-based tools, and model-based and automatic techniques. Then the author discusses the future of HCI. As we can see, many of his ideas have been realized today. For example, computers have become a commodity, and along came the hot topic of ubiquitous computing, which contains three aspects in detail: varying input and output capabilities, tools to rapidly prototype devices, and tools for coordinating multiple, distributed communicating devices. The next part is about recognition-based user interfaces, three-dimensional technologies, and end-user programming. ////////////////////////////////// Principles of Mixed-Initiative User Interfaces ------------This paper takes place against the background of the debate between 'developing new metaphors that enhance users' abilities to directly manipulate objects' and 'directing effort toward developing interface agents that provide automation'. The author proposes that it is better to balance these two aims, which is the concept of mixed-initiative user interfaces, and the paper focuses on the principles of this combined pattern.
There are twelve principles, and some of them are supported by experiments with the LookOut project. ------------I would like to discuss one of these principles: developing significant value-added automation. The interesting example is that LookOut can parse the text of a message and identify a date and time that may be associated with an event implied by the sender. It will then invoke the calendar and fill in the event details for the user. That is a typical example of focusing on the automation side of an interface, because it reduces interaction by detecting the event and taking notes automatically.
Yingjie Tang 1:43:31 9/25/2014
The first article, “Past, Present and Future of User Interface Software Tools,” is a paper with a large amount of information. I was astonished by the notion that a computer language is a kind of user interface software tool. The paper tells us that, given the dominance of the windows metaphor over the past decades, tools have matured, and that in the future user interface design will change radically because of the rise of ubiquitous computing and recognition-based user interfaces, which use different input and output hardware. I cannot agree more with the author. Take the Course MIRROR project as an example: the developers must build at least two versions of the program, one for the Android platform and one deployed on the World Wide Web, and even more duplicated work arises if most users are on the iOS platform. This causes repetitive workload and greatly reduces the programmer's efficiency. The best solution for user interface tools would be a standard under which we develop only one version of a program that can run on different platforms. As ubiquitous computing develops, different platforms will arise, and it is really important for tools to make that change. The threshold and ceiling concepts for evaluating tools are quite useful: the lower the threshold, the more easily a user can adopt the tool. Efficiency is very important nowadays; if I had to learn a new programming language before I could pass over a tool's threshold and take advantage of it, I would probably give up and search for a more convenient way to finish the job.————————————————————————————————————————- The second paper, “Principles of Mixed-Initiative User Interfaces,” introduces a notion that bridges the debate over whether it is better to develop interface “agents” or to directly manipulate interfaces to access information and invoke services.
The software LookOut was developed to take advantage of both sides. The mixed-initiative interface of LookOut can help users automatically set up calendar entries from the indications in their messages; it takes people's history of choices into account to help users make a calendar. I think this semi-agent is a very good idea, since a system joined by human beings will be much more intelligent than a pure machine. With some training data sets a computer can learn much of a human's intuition, but it may sometimes make mistakes; with a small amount of human participation, the risk of falling into a mistake is greatly reduced. Besides LookOut, many applications nowadays implement mixed-initiative user interfaces. Siri, for example, can process human speech into a command, and if it is not sure about the information given by the user, it asks for more speech. The topic in today's group meeting, “Clinical Document Reviewing Assistant,” also implements the mixed-initiative notion: the system gives its judgment of a certain diagnosis to the doctor as a suggestion and also requires feedback from the doctor about the real judgment and certain keywords. I think the principle of mixed-initiative user interfaces will be the mainstream of future software design.
zhong zhuang 3:04:30 9/25/2014
This paper is about a whole new area of human-computer interaction: automated services. So far in the class we have been focusing on direct manipulation; this paper opens a new topic: instead of being manipulated by the user, how can the interface actively interact with the user, guess what the user is thinking, and take appropriate action automatically? To achieve this goal, the design space is mostly different from what we have learned so far; it is largely about machine learning technology. The paper illustrates its idea through a sample project, LookOut, which analyzes a user's email and tries to make a schedule for the user. This technology is widely applied in today's mainstream products, like smartphones. First, designers should consider how to model uncertainty about the user: if a date and time appear in an email, should the agent take action? To address this problem, the paper discusses various machine learning techniques such as naive Bayes and SVMs. After computing the likelihood of a goal, the agent invokes different actions based on that likelihood: if the likelihood is too low, the system remains silent and waits for the user to invoke it; if it is medium, the system pops up a dialog box or uses speech to ask the user; finally, if the goal is very likely, the system just makes the schedule and asks the user to confirm. I think this is a reasonable approach, but the paper did not report results, such as how often the system correctly guesses the user's goal and how often it correctly takes an action. Besides these machine learning approaches, some direct manipulation design principles must also be taken into consideration. For example, how big should the virtual agent be? Will it block an important working area of the user? When should the agent use speech to interact with the user? Will that be uncomfortable for the user? This could be a further study of this topic.
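The tiered policy described above (stay silent, ask, or act depending on the inferred likelihood) can be sketched as a simple threshold rule. The function name and threshold values here are illustrative assumptions, not taken from the paper; Horvitz derives such thresholds from expected utilities rather than fixing them by hand.

```python
def choose_action(p_goal, ask_threshold=0.4, act_threshold=0.9):
    """Pick an agent behavior from the inferred probability that the
    user's email implies a scheduling goal.

    Below ask_threshold the agent stays silent; between the two
    thresholds it engages the user via a dialog; above act_threshold
    it schedules automatically and asks for confirmation afterwards.
    (Threshold values are illustrative, not from the paper.)
    """
    if p_goal < ask_threshold:
        return "remain silent"
    elif p_goal < act_threshold:
        return "ask the user"
    else:
        return "act and confirm"
```

For instance, an email scored at 0.95 would be scheduled automatically, while one scored at 0.6 would trigger a clarifying dialog.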
SenhuaChang 3:49:36 9/25/2014
Past, Present, and Future of User Interface Software Tools This paper summarizes themes for evaluating tools drawn from history, discusses the popular techniques of the time, and makes some predictions about future trends. The themes mentioned by the authors are very important; most user interface failures can be traced to violations of these rules, which are still a creed for designers. The future section of the paper amazed me, because most of its ideas have been realized in our daily life. It says "computing is appearing in more and more devices around the home and office." Nowadays the term "SOHO" (small office / home office) has been popular for several years, and people have many flexible choices for their own work. Applications such as Google Docs and Google Drive realize the idea of cloud computing, which resembles the paper's topic of coordinating multiple, distributed communicating devices. A very interesting and impressive paper. <0000000000000000000000000000000000> Principles of Mixed-Initiative User Interfaces This paper starts from a list of principles we need to pay attention to when we want to combine machine intelligence (automated services) with direct manipulation. The authors then use the LookOut system, which manages schedules and meetings, as an example to illustrate these principles. Help offered from the machine side is not always welcome, especially when it mispredicts the user's intention. I like the idea that we can get around some hard problems in automation by incorporating direct manipulation (interaction with users): letting users clear up the uncertainty through dialogue. In this way we can leverage both human and machine intelligence to accomplish the task more effectively. Another lesson to learn: always allow the user to undo what the system automatically did for them.
zhong zhuang 4:05:32 9/25/2014
This paper is not about any particular technology or theory; it is about the highest level of user interface design. It looks back at UI design history and tells us why designs succeeded and why they failed. It summarizes the mainstream technologies of their time, points out which problems had been solved well and which were still bothering people, then predicts the future from that vantage point and warns peers about the changes that deserve attention. From today's point of view, its predictions are very accurate. So what does that imply? It means we should do the same thing: look at today's technology and try to guess what will happen in the next decade. Why is this important? The authors introduce a term called the moving target, and I think it is a very appropriate term. When we design user interfaces, we usually focus on technical details; we are always looking down at the road but rarely looking ahead to see where the road leads. Back in the authors' time, the desktop metaphor was so strong that all tools were built around it. Designers assumed people would face a computer monitor about 20 inches wide and use a mouse and keyboard to input information, so they invented drop-down menus, scroll bars, overlapping windows, and so on, and these designs achieved huge success based on the traditional desktop metaphor. But will the computer always look like that? The authors were skeptical, and it turns out their concern was right: the mouse and keyboard are not suitable for smartphones, microcomputers on your refrigerator, or wearable devices. Those who did not foresee this trend were trapped by the moving target problem. Are we facing the same problem? I think the answer is yes. Consider the DigitalDesk paper we read last class: in the future there may be no screen at all, since every surface could be a screen, and there may be no input device either.
People will interact with computers using only their hands and voice. So when we decide to start learning a new language or a new tool, we should always think about the moving target problem.
Christopher Thomas 5:30:55 9/25/2014
2-3 Sentence Summary of Past, Present and Future of User Interface Software Tools: This paper explores trends in past user interface design tools, explaining why certain design tools were successful and why others failed, even when they may have had good ideas. The authors argue that the best tools of the future will have a low threshold but a high ceiling, meaning that inexperienced users can pick up and use the user interface design tools without a lot of learning, yet can still accomplish many things with the toolkit. Finally, the authors explain what they think the future of computing will be and present some opportunities for improvement. This paper was written in 1999, before mobile phones and ubiquitous computing were realized. Still, we can see that the authors hit the nail on the head. One need look no further than the Android SDK or Apple SDK to see how user interface design has become standardized through the use of toolkits. Look-and-feel standards for various platforms are now commonplace, if not required. The authors in some sense predicted this development ten years before its realization, from their observation that traditional UIs don't work well on many different platforms. The authors not only discussed graphical user interfaces, however; they also mentioned speech- and gesture-based recognition interfaces. Almost every smartphone today supports speech commands and some sort of gesture control mechanism. Many phones even allow users to provide text input simply by drawing the characters on the screen. The authors suggested that in the future the OS should provide some handling for this so each application didn't have to do it. This is exactly what we see with the Android keyboard, for example. The keyboard in Android allows users to choose their input method; users can switch seamlessly between gesture input, speech input, "Swype" input, and traditional keyboard input.
Applications can support voice commands, text-to-speech, and gesture recognition simply by taking advantage of existing Android features and allowing the system to handle the difficult parts (recognition, etc.). We even see these interfaces being standardized, with the Google Voice SDK, as the authors predicted. While many of the authors' predictions have come to fruition, this does not mean there is no room for improvement. One research area I feel has been underexplored, and found interesting, is the notion of constraint-based or specification-based UI design, where the programmer inputs a very simple specification of the interface and lets the system handle the positioning of elements, placement, window management, look and feel, and so on. Given the ever-increasing number of platforms, phones of all sizes, displays of all sizes, and input media of all types, it may be time to start considering these options again. It was mentioned that many programmers didn't like the unpredictability of these tools. However, in an age where programmers can make no assumptions about the capabilities of their users' devices, unpredictability may have to be embraced and used to the user interface designer's advantage. Imagine a framework where the design would change the type of menu presented to the user if the user's device only supported pen-based input, or present a traditional menu on a desktop PC with a mouse. Thus the interface could adapt dynamically to the user's situation and device; for instance, a different interface could be provided if the user was driving. This could happen across every application that used the basic toolkit, providing a unified experience for the user. I think this is a great research direction and something mobile software companies should explore more.
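The adaptive framework imagined above could be sketched as a mapping from a device's capability profile to a presentation choice. Everything here is hypothetical: the device fields, the widget names, and the specific mappings are my own illustration of the idea, not any real toolkit's API.

```python
def pick_menu_style(device):
    """Map a hypothetical device capability profile to a menu style.

    The designer specifies *that* a menu exists; the framework decides
    *how* to render it from the input method and user context.
    """
    if device.get("driving"):
        return "voice menu"          # hands and eyes are busy
    if device.get("input") == "pen":
        return "pie menu"            # radial menus suit stylus strokes
    if device.get("input") == "touch" and device.get("screen_inches", 0) < 7:
        return "bottom sheet"        # reachable on a small touch screen
    return "drop-down menu"          # desktop default with a mouse
```

A toolkit built this way would let the same application specification yield a pen-friendly menu on a tablet and a traditional menu on a desktop, without per-device code.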
2-3 Sentence Summary of Principles of Mixed-Initiative User Interfaces: A discussion of mixed-initiative user interfaces is given, i.e. interfaces that combine user inputs with those made by an intelligent autonomous agent. The authors present how mixed-initiative interfaces may be used, but more importantly provide a framework for deciding whether an intelligent agent should make some decision or provide some assistance, based on a threshold parameter. This paper was also written in 1999. I remember using something similar in the late 90s called BonziBUDDY, which was a great example of a mixed-initiative user agent. The agent would periodically check for e-mail, read the weather when it changed, periodically tell jokes, etc., and the user could interact with it through voice commands. Microsoft Clippy was another example. Unfortunately, many users found these kinds of agents annoying precisely because they tried to offer help too much and became clumsy, making simple jobs more complicated. In this paper, the authors presented a technology that is actually commonly used in every smartphone made today. Nowadays, when you receive an e-mail on a smartphone, dates and times in the e-mail become underlined; if you wish to schedule an appointment, you need only tap on the underlined text to have the appointment automatically populated in the calendar. In my opinion, this is the best approach on the mobile platform, where screen real estate is tight. It also provides a very fast way to schedule an appointment with minimal overhead. So I can see the core contribution of this paper in modern cell phone interfaces. In this paper, a large problem was whether or not to take some action. In the case of cell phones, the initiative is almost entirely on the user: if any dates are found, they are underlined.
In the paper's case, the system tries to intelligently decide whether scheduling an appointment is the user's goal, a task which is far more difficult, and then takes its own initiative based on that decision. We can see here that because the system is taking initiative, balancing how often that initiative is taken is critical to prevent the user from being annoyed. I think one of the reasons this technology didn't catch on more was that balancing the system's initiative with the user's wants was very difficult, and many users simply became dismayed by it, which ultimately led to Clippy being removed from MS Office. Though some mixed-initiative technologies still exist (hints in programs, for instance), many of the technologies I have used now take a more passive approach. Improvements in user modeling and in balancing the system's initiative against the user's needs and the interface's invasiveness will be critical for future mixed-initiative technologies.
Xiyao Yin 7:28:59 9/25/2014
‘Past, Present and Future of User Interface Software Tools’ briefly discusses the history of successes and failures of user interface software tool research to provide a context for future developments. It seems that hypertext is deeply related to the World Wide Web, so I searched for information on this topic on the Internet. Hypertext is text displayed on a computer display or other electronic device with references (hyperlinks) to other text which the reader can immediately access, or where text can be revealed progressively at multiple levels of detail (also called StretchText). Hypertext pages are interconnected by hyperlinks, typically activated by a mouse click, keypress sequence, or touch of the screen. Apart from text, hypertext is sometimes used to describe tables, images, and other presentational content forms with hyperlinks. Hypertext is the underlying concept defining the structure of the World Wide Web, with pages often written in the Hypertext Markup Language (HTML); it enables easy-to-use, flexible connection and sharing of information over the Internet. Another significant thing about this paper is that it reviews several great ideas that did not in the end succeed; the lessons of these past themes are particularly important now. By reviewing different ideas from the past, it is clear that research on user interface software tools has had an enormous impact on the process of software development. What's more, this paper offers different ideas about future prospects and visions; I can see different ideas crossing, and sparks from the collision of ideas. It is really interesting. ‘Principles of Mixed-Initiative User Interfaces’ presents principles for building mixed-initiative user interfaces that enable users and intelligent agents to collaborate efficiently. The principles address systematic problems with the use of agents that may often have to guess about a user's needs.
The paper then focuses on methods for managing these uncertainties, with examples drawn from the LookOut system, which investigates issues with overlaying automated scheduling services on Microsoft Outlook, a largely direct-manipulation-based messaging and scheduling system. The most interesting method used in this paper is dividing the outcomes of actions and goals into four types. Building on these four types, the author uses graphical analysis to show the expected utility of action versus inaction, which makes it easier for readers to understand the meaning of the different equations. The figures also directly display the change between taking no action and taking action.
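The four outcomes and the action-versus-inaction comparison can be made concrete. The sketch below uses my own notation and made-up utility values (not the paper's figures): each choice's expected utility is a probability-weighted average over "goal present" and "goal absent," and the crossover probability is where the two lines in such a graph intersect, above which acting is preferred.

```python
def expected_utility(p_goal, u_goal, u_no_goal):
    """Expected utility of a choice given P(goal): a weighted average
    of its utility when the goal is present vs. absent."""
    return p_goal * u_goal + (1 - p_goal) * u_no_goal

def crossover_probability(u_act_goal, u_act_no_goal,
                          u_pass_goal, u_pass_no_goal):
    """P(goal) at which acting and doing nothing have equal expected
    utility; acting is preferred above this threshold."""
    return (u_pass_no_goal - u_act_no_goal) / (
        (u_act_goal - u_pass_goal) + (u_pass_no_goal - u_act_no_goal)
    )

# Illustrative utilities for the four outcome types (not from the
# paper): acting on a real goal is best (1.0), acting needlessly is
# costly (0.2), ignoring a real goal is costly (0.3), and doing
# nothing when there is no goal is fine (1.0).
p_star = crossover_probability(1.0, 0.2, 0.3, 1.0)
```

With these numbers the agent should act whenever its inferred probability of the user's goal exceeds `p_star` (about 0.53 here); changing the cost of a needless action shifts the threshold accordingly.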
Wenchen Wang 8:43:44 9/25/2014
<Principles of Mixed-Initiative User Interfaces ><Summary: > This paper proposes twelve principles for mixed-initiative UI, which integrates direct manipulation with work on interface agents. The authors then highlight the methods and design principles with the LookOut system. < Paper Review > There is a debate between developing new metaphors and tools that enhance users' abilities to directly manipulate objects, and developing interface agents that provide automation. The author combines the two methods into the mixed-initiative user interface by proposing principles and giving an example project that implements them. These principles focus on inferring a user's goals and attention, considering the costs and benefits of automated action and the timing of action, and allowing a user to invoke or terminate the automated service. In other words, users can cooperate with intelligent automated agents to operate direct-manipulation UIs efficiently. I am very interested in some of the principles. One of them is maintaining a working memory of recent interactions. Recent HCI products, such as Amazon and Google Chrome, do this: the Amazon website records a user's browsing history and provides product recommendations through machine learning algorithms. <Past, Present, and Future of User Interface Software Tools><Summary>: The paper summarizes themes from past user interfaces and proposes some future prospects and visions. <Paper Review: > This paper applies a basic but important principle: we need to learn lessons from the past and improve future work based on those lessons. The themes of the historical perspective are the parts of the user interface, threshold and ceiling, path of least resistance, predictability, and moving targets. One of the most interesting future prospects for me is ubiquitous computing.
The amount of data will grow larger and larger, since we have various kinds of digital devices, so we need to compute over that data to help us work more efficiently. One way is to allow our digital devices to communicate with each other. For example, Apple has iCloud, which allows the iPhone, iPad, and Mac to communicate; Apple TV is also a good example, showing the videos or pictures stored on our phones and computers on the TV. These are good attempts at connecting our digital devices together.
yeq1 8:49:19 9/25/2014
Yechen Qiao Review for 9/25/2014 Past, Present, and Future of User Interface Software Tools In this paper, the authors surveyed past UI toolkits and summarized some of the techniques that worked, some that did not, and why. The authors also gave some predictions about future toolkits and the challenges they might face. Overall, I think the authors' judgment is good and the paper is still very relevant today. I think the authors were correct in saying that UI toolkits offering a good path of least resistance attract developers and become more popular; that abstractions such as events, which provide natural mappings to aspects of human-computer interaction, are good abstractions; and that UI toolkits supporting fast prototyping and the design process would no doubt be popular. Today, I still use Visual Basic to make fast UI prototypes to get a measure of how a UI component performs without having to develop heavy code, as long as the UI uses the WIMP paradigm. In many ways, the authors were also correct in pointing out the shortfalls of the systems that did not work. With UIMSs, for example, it is difficult to specify affordances; formal-language-based tools are difficult to work with, have a high learning curve, and generate slow iterative design cycles (slow prototyping). I also think automatic generation of UIs will not work, because it tries to produce a closed solution to an open problem by taking users out of the equation. No static model is sufficient to capture the individual preferences of all users in the world; to evaluate against end users we would have to do exactly that, and the only way to do it is to run formal user studies. I think constraints, in particular, may warrant further research. Constraints have many uses: they allow interface designers to make device-independent designs, and allow the interface to preserve ratios that are pleasing to end users.
The first point is especially important because: 1) when making a device-specific interface, both the hardware and the software may be constantly changing; for example, the designer may not know for certain the screen's resolution, the screen size, or the brightness and contrast levels of a particular display when a company is designing it for the first time; 2) when making software targeting a platform rather than a device (e.g., Android), it may be impossible to know the properties of the output hardware. I am impressed that the authors had already considered this at that time. Rapid prototyping of devices is also starting to become a reality: 3D printing lets device casings be produced much faster than traditional prototype assembly and lets designers judge the look of a device without going through slow and expensive hardware prototyping. Principles of Mixed-Initiative User Interfaces In this paper, the author presented the "adaptive interface" paradigm, which takes into account the user's input, the system's input, the user's feedback, and the system's feedback; these parameters can be used to make online decisions about whether to perform an action and which action to perform. This approach, when implemented correctly, can save the user time in operating the system and allow the user to pay less attention to the interaction and stay more focused on the task. While I think the research is novel, there is still much to be done before it can become a commercial reality. Over the years, both Microsoft and Apple have attempted to use portions of this research with some success. For example, I remember being really excited when I got Office 97 and saw a puppy jump out of Microsoft Word and occasionally offer helpful hints on how to do certain tasks.
If I remember correctly, it took my input into consideration, and when I was deliberating and had performed too many "undo actions" (such as hitting backspace or resetting text formats), the puppy would come back and give me a tip (which could be used to perform actions directly through the puppy's dialog menu). Many times the tips were very helpful, and this is exactly how I learned to operate Office. Sometimes the puppy could also be quite annoying: it obstructed the portion of the text I wanted to work on, jumped in when I didn't want to be bothered, or offered unhelpful tips. This is probably why it is no longer in today's Office suite. (I still miss the puppy sometimes, though…) Apple has also adopted one limited scenario specified explicitly in this paper: when a mail message specifies a particular time, I can touch it and make an appointment in my Calendar. To some degree, Microsoft has used this research more than Apple (who rather blatantly copied from this paper), but Apple seems to have had more success with the technology than Microsoft (who still doesn't have this in their Outlook or Mail app for Windows 8). I think this is because Microsoft was a bit too ambitious with this technology in the commercial sector. Had they not given up on this idea and continued to pursue this direction, I think they would have found many useful small pieces of this research and adopted them in their products with more success.
Jose Michael Joseph 8:52:52 9/25/2014
Past, Present and Future of User Interface Software Tools This paper discusses the need for various user interface design tools. It states that as the field grows rapidly toward ubiquitous computing, where devices of various shapes and sizes each have their own unique display characteristics, tools play an even more prominent role: they drastically reduce the work needed to manually code all the user interface functionality while maintaining quality. The first tool explained is the toolkit produced in the early days of Xerox, which was eventually used by Mac and Windows. These toolkits let developers spend less time writing code for how windows overlap and interact with each other, and thus saved a lot of time. One of the great tools that propelled interest in human-computer interaction is the World Wide Web. In my view, this paper does not give enough attention to the impact the Web has made on interfaces. It is the Internet that exposed people to large amounts of software, spreading awareness of what is possible to achieve. This raised people's expectations, they naturally came to demand more, and research in human-computer interaction got a boost. The paper then discusses the ever-increasing growth of ubiquitous computing, which has led to a stage where developers not only have to model the user interface based on the software but also on the devices themselves, since each device has its own characteristic properties. Another new field of human-computer interaction is 3D space. Currently most HCI takes place in 2D space, but with the advent of multiple technologies that can perceive the depth of an image, there is going to be a tremendous shift in HCI toward 3D technologies. This will require more tools, as manually coding such features will be even more exhausting and time-consuming.
There are many other issues discussed in the paper, but what strikes me most is the radical way HCI has developed over the years. From window overlapping three decades ago to three-dimensional technologies today, HCI is definitely a field that continues to push the boundaries. But such growth has a cost: developers cannot be bogged down by having to manually code every single aspect of it. This further increases the demand for and necessity of tools to create interfaces.-------------------Principles of Mixed-Initiative User Interfaces This paper discusses mixed-initiative user interfaces, which are essentially user interfaces combined with intelligent agents. It uses the LookOut system as an example to highlight the features. Such a product can display information as well as partially process it, giving suggestions to the user about various actions and potentially helping the user notice things they may have accidentally overlooked. Such a system has great potential, as intelligent agents currently enjoy widespread interest and everyone wants agents that can guide them and check when things are going wrong. LookOut can read emails and understand the dates they refer to, can operate in multiple modalities, and helps the user by prompting them about things they might have overlooked. These are all great features, but they come at the cost of intrusiveness and complexity. Intrusiveness because the user might not want to be disturbed, and sometimes even a single prompt can throw the user off and destroy their concentration (e.g., updates about a basketball game while they are trying to study). Complexity because this additional computation requires additional processing power. Thus, while making the trade-off, one has to be aware of the cost of this functionality. Also, the system might mistake the user's movements for responses and take actions such as closing windows.
This is a huge drawback, as it could be really frustrating and is detrimental to the user's continued interest in the system. Thus, even though the paper discusses something quite valuable and innovative, it does not take into account any “triggering sequence” (e.g., “OK Google” to activate Google's voice commands), and this could lead to the system taking incorrect inputs from the user and performing actions the user did not request. Another drawback is that even though the paper is reviewing mixed-initiative user interfaces, it only reviews one product, LookOut. In the famous words that rule HCI, “Everything is best for something and worst for something else”; only after we consider various other models and products that involve mixed-initiative user interfaces can we fully conclude the strengths, limitations, and principles of the concept. Hence this paper was lacking in comparison, as it did not compare its own product with any other product based on a mixed-initiative user interface.
Qiao Zhang 8:56:43 9/25/2014
Past, Present, and Future of User Interface Software Tools This is a very good paper. It surprises me that it was written 15 years ago, yet the insights it provides still suit today's world. The authors predict that user interfaces are about to break out of the “desktop” box and that there will be increasing diversity of user interfaces on an increasing diversity of input and output devices. In this paper, the authors first discuss the successes and failures of past user interface tools. This provides context for future development. They then analyze different characteristics of the user interface toolkits: what aspects of the user interface they addressed, their threshold and ceiling, what path of least resistance they offer, the predictability they provide, and finally whether they addressed a target that became irrelevant. Finally, they discuss the requirements for the underlying operating system to support these tools. As discussed in the paper, the conventional screen-keyboard-mouse model provides consistency but is challenged nowadays by a plurality of new devices. In different contexts, we need different tools to build different interfaces. A number of successful toolkits are then discussed. It is interesting to see the interpretation of window manager toolkits. According to the authors, stacked windows not only save limited display space but also save the user's cognitive resources, which is very true. However, as a computer science major, I find multiple displays help me a lot because I do not need to context-switch as often. Event languages are also popular nowadays. They are almost the standard for designing user interfaces, separating the appearance of a system from its behavior. Today's webpages all use this approach, listening for an event and performing corresponding actions. However, as the authors say, the event-based paradigm may not apply to recognition-based UIs.
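The event-language pattern described above, where behavior is bound to named events rather than woven into the interface's structure, can be illustrated with a minimal dispatcher; the event and handler names here are invented for illustration.

```python
# Minimal sketch of the event-based pattern: appearance is separated from
# behavior by registering handlers against named events. Names are illustrative.
class EventDispatcher:
    def __init__(self):
        self.handlers = {}  # event name -> list of callbacks

    def on(self, event, handler):
        """Attach a behavior to an event."""
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        """Deliver an event to every registered handler."""
        for handler in self.handlers.get(event, []):
            handler(*args)

ui = EventDispatcher()
log = []
ui.on("button.click", lambda: log.append("saved"))
ui.fire("button.click")  # log now contains "saved"
```

This is essentially the same shape as a webpage's event listeners: the markup defines appearance, while handlers attached to events define behavior. It also hints at why recognition-based input is awkward here, since a gesture or utterance does not arrive as one crisp, discrete event.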
The interactive graphical tools offer a low threshold and a low ceiling. I used FrontPage in elementary school to build webpages, and I realized how difficult it was to build a website in a fine-grained way. Component systems are like today's Java Swing packages, which provide abstraction and save human labor. Scripting languages, hypertext, and OOP are also discussed in the paper. The paper also discusses some toolkits that were not very successful, due to the moving-target problem (User Interface Management Systems, formal-language-based tools) or to unpredictability, high thresholds, and low ceilings (constraints, model-based and automatic techniques). The paper predicts that in the future the moving-target problem will again be an important issue. Future prospects are also discussed, including commodity hardware (higher-performance personal devices), ubiquitous computing (varying devices), recognition-based UIs, and end-user customization. Some concerns are well addressed by today's designs, like cloud storage for data consistency across multiple devices and drag-and-drop manipulation for easy UI customization. Principles of Mixed-Initiative User Interfaces There are two main directions in HCI. One group of researchers has expressed enthusiasm for the development and application of new kinds of automated services; the other believes that exploring new kinds of metaphors and conventions might be better. The term mixed-initiative indicates combining human intelligence and machine “intelligence”. The main goal of this approach is to create an adaptive interface that is not deterministic once initialized. The authors first present a set of principles for designing mixed-initiative user interfaces that address systematic problems with the use of agents that may often have to guess about a user's needs. Critical factors for the effective integration of automated services with direct-manipulation interfaces are proposed, including what to consider and how to do it.
Then they focus on methods for managing the uncertainties that agents may have about users' goals and focus of attention. The authors use LookOut as an example to elucidate difficult challenges and promising opportunities for improving HCI through the combination of reasoning machinery and direct manipulation. The LookOut project serves as a testbed for mixed-initiative UI. This system reminds me of today's Gmail, which integrates with Google Calendar to parse my email and help me schedule events. This is a promising direction for combining artificial intelligence with human-computer interaction. I really hope that, in the near future, there will be more and more automatically adaptive UIs that suit a user's different needs.
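The uncertainty management described above comes down to a choice among acting, asking, and staying quiet based on the inferred probability of the user's goal. A simplified sketch of such a threshold policy follows; the threshold values are illustrative assumptions, not the calibrated utilities from the paper.

```python
# Simplified sketch of a mixed-initiative threshold policy: given the inferred
# probability that the user holds a goal (e.g. wants to schedule a meeting),
# the agent acts autonomously, opens a dialog, or does nothing. The threshold
# values are illustrative assumptions.
def agent_decision(p_goal, ask_threshold=0.4, act_threshold=0.8):
    """Choose a response based on the inferred goal probability."""
    if p_goal >= act_threshold:
        return "act"        # automation is very likely to help; act autonomously
    if p_goal >= ask_threshold:
        return "dialog"     # uncertain; engage the user in a brief dialog
    return "no_action"      # probably no goal; do not interrupt

agent_decision(0.9)   # confident enough to act
agent_decision(0.5)   # ambiguous, so ask first
agent_decision(0.1)   # stay out of the way
```

The middle band is the interesting part: it is what keeps an agent from either nagging the user about every low-probability guess or silently taking actions the user never wanted, the two failure modes the paper's principles are designed to avoid.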
Vivek Punjabi 9:03:11 9/25/2014
Past, Present and Future of User Interface Software Tools: The paper discusses the successes and failures of past user interface tools and thereby motivates research on future tools. The authors start with the need for user interface tools and their advancements, followed by the themes that have been used to evaluate tools in the past, such as the parts of the user interface addressed, threshold and ceiling, path of least resistance, predictability, and moving targets. The authors then discuss past successes such as interactive graphical tools, event languages, and component systems, followed by promising approaches that failed on one theme or another, among them formal-language-based tools, constraints, and automatic techniques. Then predictions and observations for future interface tools are given, which is the most interesting topic. It covers many technological advancements along with in-depth analysis. Some of the important ones are ubiquitous computing, three-dimensional technologies, and recognition-based user interfaces. A couple of examples make things clearer. The paper also addresses some general issues that hinder the development of these future tools, such as the skill level of users, the interactive setting, support for evaluation, etc. Finally, the authors conclude by showing the impact of and need for these future tools in user interface development and give some basic requirements for them. The paper gives a detailed analysis of past as well as future user interface tools and their issues; however, some of the tools seem outdated. The authors' predictions have been realized, as most of them are in use these days, which suggests the paper may have motivated many researchers. It gives some basic criteria for developing user interface tools that can help us even now to create advanced tools, and so provides good motivation for us.
Principles of Mixed-Initiative User Interfaces: The paper reviews the mixed-initiative approach to developing user interfaces, which couples automated services with direct-manipulation interfaces. The author gives a short introduction to each of these methods and their combination. It then lists key problems and critical factors for integrating the two techniques, among them developing significant value-added automation, uncertainty about user goals, the cost of poor guesses, efficient agent-user collaboration, and maintaining a working memory of recent interactions. The paper then provides a testbed for this mixed-initiative interface: the LookOut project by Microsoft, a largely direct-manipulation-based messaging and scheduling system. The author gives a detailed analysis of LookOut's services, its decision-making and interaction modalities, and its failure-handling techniques. The paper then focuses on managing inferences about user goals and handling the uncertainties. It reviews taking autonomous actions in different situations, with LookOut as the testbed. The author thus gives the key principles and issues that arise in efficient collaboration between users and intelligent agents. However, the two techniques are not introduced in much depth, which makes the collaboration between them a bit difficult to understand; the individual techniques require more description. The author could also have taken a few more short examples besides LookOut to help readers understand the collaboration better. Finally, the paper could have focused on the data analysis techniques that seem essential in this context, as they form the backend of this collaboration.