Qualitative Evaluations

From CS1635 Spring 2014
slides

Readings

Reading Critiques

Sara Provost 14:31:08 2/11/2014

This week’s reading is about the different ways of evaluating a design without users. The first method the reading discusses is the cognitive walkthrough, a formalized way of imagining people’s thoughts and actions when they use an interface for the first time. To do this, one must have an interface, select one of the tasks that the design should support, and then try to tell a believable story about each action the user has to take to complete the task. To make the story believable, the person walking through the design must rely on the user’s background knowledge and on the feedback the design gives. If a believable story cannot be told, then the interface has a problem. The second type of analysis a person can do is an action analysis, which forces the evaluator to closely examine the sequence of actions a user will take. Action analysis has two fundamental phases: the first is to decide what steps the user has to complete to perform tasks with the interface, and the second is to analyze those steps while looking for problems. If done closely enough, this kind of analysis can give a somewhat accurate measurement of the time it will take a user to complete an action using the interface. The final type of analysis that does not require users is a heuristic analysis, in which the analyzer compares the interface against the guidelines of what it needs to accomplish and the general guidelines of good interfaces. These guidelines are as follows: simple and natural dialog, use words from the users’ world, minimize user memory load, consistency, ample feedback, clearly marked exits, shortcuts, good error messages, and error prevention. The heuristic approach has the benefit of not being task oriented, which means that more of the interface may be covered and cross-task interactions are examined.
I think that the content of this reading will be helpful with future class projects. During the projects I cannot always find users to test my interfaces, but with the methods described I can be more effective in my self-evaluation.

Steven Bauer 15:32:17 2/11/2014

Thursday's reading is Evaluating the Design Without Users. We have talked extensively in this class about the importance of getting feedback from users, but this article is about what to do when that isn't possible, because a user's time isn't free or unlimited. The article offers three approaches to evaluating an interface without users: cognitive walkthroughs, action analysis, and heuristic evaluation. Cognitive walkthroughs are a way of imagining users' thoughts, actions, and feelings upon seeing an interface for the first time. You select tasks that your design supports and tell a "story" about the actions taken by the user to complete each task. If you cannot do this, then you know that there is a problem with your design. In action analysis you look at the sequence of actions that a user performs to complete a task. You can either look at it formally, using a high level of detail, or use the "back of the envelope" method, which is simpler but will still reveal large-scale problems. In both cases you first need to decide what physical and mental steps a user will perform to complete a task, and second you need to analyze these steps and look for problems in them. Heuristic analysis uses a list of general heuristics to evaluate a design. The nine heuristics are: simple and natural dialog, speak the user's language, minimize user memory load, be consistent, provide feedback, provide clearly marked exits, provide shortcuts, good error messages, and prevent errors. By using this procedure with several evaluators, you can find more problems than a single evaluator could identify. Even if we use all of these techniques, we can't catch all problems.
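The point that several evaluators together find more problems than any one alone can be sketched with a small example. Everything below (the evaluator findings and problem descriptions) is invented for illustration; only the nine heuristic names come from the reading.

```python
# Sketch: heuristic evaluation with several evaluators.
# Each evaluator independently checks the interface against the nine
# heuristics and records the problems they find; the union of their
# findings is larger than any single evaluator's list.

NIELSEN_MOLICH_HEURISTICS = [
    "simple and natural dialog",
    "speak the user's language",
    "minimize user memory load",
    "be consistent",
    "provide feedback",
    "provide clearly marked exits",
    "provide shortcuts",
    "good error messages",
    "prevent errors",
]

# Problems found by three independent (fictional) evaluators, each
# recorded as (violated heuristic, description).
findings = [
    {("provide feedback", "no confirmation after saving"),
     ("be consistent", "OK/Cancel order flips between dialogs")},
    {("provide feedback", "no confirmation after saving"),
     ("good error messages", "error 37 is meaningless to users")},
    {("prevent errors", "delete has no undo"),
     ("be consistent", "OK/Cancel order flips between dialogs")},
]

combined = set().union(*findings)            # merge all distinct problems
best_single = max(len(f) for f in findings)  # best any one evaluator did

print(f"best single evaluator found {best_single} problems")
print(f"combined, the evaluators found {len(combined)} problems")
```

The overlap between evaluators is why the combined list grows more slowly than the number of evaluators, which is the usual argument for using a handful of evaluators rather than one or dozens.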

Xiaoxuan Chen 1:36:26 2/12/2014

This article mainly discussed how to evaluate the design when no users are present. It described three approaches to evaluating an interface: 1) cognitive walkthrough - a task-oriented technique that fits especially well in the context of task-centered design, 2) action analysis, which allows a designer to predict the time that an expert user would need to perform a task, and which forces the designer to take a detailed look at the interface, and 3) heuristic evaluation - a kind of check-list approach that catches a wide variety of problems but requires several evaluators who have some knowledge of usability problems. It uses the "Chooser" interface from an early version of the Mac OS as an example. This interface lets the user select printers and printer options. The evaluation task is turning on background printing. A cognitive walkthrough is a formalized way of imagining people's thoughts and actions when they use an interface for the first time. For a small piece of interface, we can do a walkthrough "in our head"; for a larger interface, a group of people is helpful. Things you need before a walkthrough are: (1) a description or a prototype of the interface. It doesn't have to be complete, but it should be fairly detailed. Things like exactly what words are in a menu can make a big difference. (2) a task description. The task should usually be one of the representative tasks you're using for task-centered design, or some piece of that task. (3) a complete, written list of the actions needed to complete the task with the interface. (4) an idea of who the users will be and what kind of experience they'll bring to the job. This is an understanding you should have developed through your task and user analysis. Things to look for during the walkthrough: 1) Will users be trying to produce whatever effect the action has? 2) Will users see the control (button, menu, switch, etc.) for the action?
3) Once users find the control, will they recognize that it produces the effect they want? 4) After the action is taken, will users understand the feedback they get, so they can go on to the next action with confidence? With the results, we can fix our interface. Action analysis is an evaluation procedure that forces you to take a close look at the sequence of actions a user has to perform to complete a task with an interface. There are two flavors: "formal" action analysis, often called "keystroke-level analysis", which evaluates in extreme detail, and the "back of the envelope" approach - it won't provide detailed predictions of task time and interface learnability, but it can reveal large-scale problems that might otherwise get lost in the forest of details that a designer is faced with. There are two fundamental phases: 1) decide what physical and mental steps a user will perform to complete one or more tasks with the interface, 2) analyze those steps, looking for problems. Some questions for back-of-the-envelope action analysis after listing the actions are: Can a simple task be done with a simple action sequence? Can frequent tasks be done quickly? How many facts and steps does the user have to learn? Is everything in the documentation? Last but not least, heuristics, also called guidelines, are general principles or rules of thumb that can guide design decisions. Nielsen and Molich's nine heuristics include: a) Simple and natural dialog - Simple means no irrelevant or rarely used information. Natural means an order that matches the task. b) Speak the user's language - Use words and concepts from the user's world. Don't use system-specific engineering terms. c) Minimize user memory load - Don't make the user remember things from one action to the next. Leave information on the screen until it's not needed.
d) Be consistent - Users should be able to learn an action sequence in one part of the system and apply it again to get similar results in other places. e) Provide feedback - Let users know what effect their actions have on the system. f) Provide clearly marked exits - If users get into part of the system that doesn't interest them, they should always be able to get out quickly without damaging anything. g) Provide shortcuts - Shortcuts can help experienced users avoid lengthy dialogs and informational messages that they don't need. h) Good error messages - Good error messages let the user know what the problem is and how to correct it. i) Prevent errors - Whenever you write an error message you should also ask, can this error be avoided? These techniques really helped me to understand how analysis of a design can be done without users. I learned that, in general, the cognitive walkthrough will give the best understanding of the problems it uncovers, and it is best to be thinking through the interface in walkthrough terms as we develop it. When a substantial part of the interface is complete, heuristic analysis is a good way to catch additional problems. Back-of-the-envelope action analysis makes a good "sanity check" as the interface becomes more complex, and it's also useful early in system design to help decide whether a system's features will pay back the effort of learning and using them. Formal action analysis is probably only appropriate for very special cases. All in all, we still need to combine this with user testing.
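The four walkthrough questions above can be applied mechanically, one action at a time. Here is a minimal sketch of recording the answers for a single step; the interface, the action, and the yes/no answers are all invented for illustration.

```python
# Sketch: checking one action of a task against the four
# cognitive-walkthrough questions from the reading.

WALKTHROUGH_QUESTIONS = [
    "Will users be trying to produce whatever effect the action has?",
    "Will users see the control for the action?",
    "Once users find the control, will they recognize that it "
    "produces the effect they want?",
    "After the action is taken, will users understand the feedback they get?",
]

def review_action(action, answers):
    """Return the questions whose 'no' answers mark problems with this action.

    `answers` is a list of four booleans, one per walkthrough question.
    """
    assert len(answers) == len(WALKTHROUGH_QUESTIONS)
    return [q for q, ok in zip(WALKTHROUGH_QUESTIONS, answers) if not ok]

# Walking through one step of a made-up task ("turn on background printing"):
# the evaluator judged that users won't see the control (question 2),
# because it is buried in a secondary dialog.
problems = review_action(
    "click the Background Printing 'On' button",
    [True, False, True, True],
)
for q in problems:
    print("PROBLEM:", q)
```

Any question answered "no" means the believable story breaks down at that step, and that step of the interface needs a fix before moving on.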

David Grayson 11:13:06 2/12/2014

The author of “Task-Centered User Interface Design: 4. Evaluating the Design Without Users” mentions two reasons for testing without users (in combination with user-based testing). Testing without users allows developers to identify small, annoying bugs that can be fixed before user testing. This improves user testing because the limited time users have can be better utilized, and the user will have a more favorable outlook on the developer. It also catches bugs and problems that users will not catch in user-based testing. The author uses the Mac printer selection software (part of the Mac OS) to explain three types of testing: cognitive walkthrough, action analysis, and heuristic evaluation. For the cognitive walkthrough the developer must put him/herself in the mind of the user and imagine what the user may think and do when using the interface in question for the first time. This seems like a difficult thing for developers to do, because they have a level of expertise that the user does not, and they must set that expertise aside in order to truthfully see the interface the way a user sees it, without any training in how to use it. Action analysis involves evaluating the time it takes a user to complete actions and sequences of actions in either a “formal” or “back of the envelope” approach. This type of analysis helps ensure actions are simple enough for users and do not require more time than necessary. Finally, heuristic analysis is applying a checklist of guidelines to an interface in order to evaluate the design and remove/improve any pieces that fit into known problems with interfaces.

Derrick Ward 11:14:56 2/12/2014

Today’s reading is about the process one undertakes when evaluating an interface design without any users to test it with. The author talks about three approaches to successfully evaluating an interface design. The first approach is to host sessions for “cognitive walkthroughs”. In this approach, a team imagines the thoughts and actions of users when they interact with the interface design for the first time. First the team produces a description of the interface or provides a low-fi prototype of it. Then a member of the team tries to tell a “believable” story about how a given task a user may have is accomplished with the interface. Some of the questions the team members are to keep in mind are “After the action is taken, will users understand the feedback they got?” and “Once the users find the control, will they recognize that it produces the effect they want?” I personally try to implement this approach whether I am designing a physical hardware device or writing a simple terminal program. Before a group brings in users to get feedback on designs, they can save the users time by implementing this step. Also, as mentioned in the chapter, users will view you as more professional when they do not see trivial flaws in your designs. The second approach the author describes is Action Analysis. This approach is very detailed and brings the interface design into a more quantitative light. In this process the group members try to predict the amount of time it takes a user to complete tasks in their interface design. Group members analyze every physical movement of a user, down to the seconds it takes to enter one keystroke on a standard keyboard. The author also mentions a similar approach called “Back-of-the-Envelope” action analysis. This approach differs in that it does not produce a set of detailed predictions; this is done to have the group members focus on a broader picture.
Also, a group that takes this approach will not focus on the time it takes users to do things like click a button, just the time it takes to complete the user's intended overall tasks. In my opinion, I favor the Action Analysis approach over the “Back-of-the-Envelope” approach. The third approach the author discusses is Heuristic Analysis. This approach is instrumental for any design team because it helps designers stay on task with how they want to communicate physically and mentally with a user. In this approach a set of general principles is created to guide the design process and help with design decisions. The author also provides nine heuristics from Jakob Nielsen and Rolf Molich as examples of such guidelines. Just to highlight a few that stuck with me, “Be consistent”, “Provide feedback”, and “Good error messages” are the ones that I find crucial to a good interface design. Out of all three, technically four, approaches I favor the Action Analysis approach because I am a quantitative guy. One would agree, though, that in any project one's group should exercise all approaches unless there is a tight deadline.

Brian Kelly 11:49:59 2/12/2014

Today's reading provided a lot of common-sense suggestions on how to properly design and evaluate an interface with the client in mind. As the designers of an interface, most aspects of our design make sense (at least to us). The truth is, the customer probably won't find the design as intuitive as you do. The author suggests several ways you can avoid having to continually change your design by doing a few things beforehand. The cognitive walkthrough sounds like a good starting point: by pretending to be in the users' shoes, you can hopefully evaluate the ease of use of your prototype. Action analysis takes a closer look at the finer details of the interaction with the design. This is important as well: even if the task seems simple to the user, if the actions to perform it are difficult, they are not likely to use it. The heuristic analysis guidelines that the author mentioned seemed rather intuitive, but I'm sure they are worth a reminder.

Cody Giardinello 12:58:09 2/12/2014

In this week’s readings, the author explores the idea of evaluating an interface in the absence of users. There are three approaches to doing so: cognitive walkthrough, action analysis, and heuristic evaluation. All of these approaches are designed to imitate how a user may interact with a given interface. These details are especially important in the design process because they dictate whether the user experience will be good or bad. The examples are shown through what is called the “Chooser”, part of an early version of the Apple Macintosh OS. In the section about cognitive walkthroughs, the emphasis lies in the walkthrough's ability to show shortcomings in the current specification of the interface. A walkthrough can be completed by the designer or by a third party. Either way, a walkthrough should yield results that can then be improved upon. Action analysis is the next section described. There are two types of action analysis – “keystroke-level analysis” and “back of the envelope”. The first emphasizes details while the latter is more concerned with large-scale issues. The author goes on to show a table of average times for users to complete interface interactions. This reminded me of the previous reading on the human mind as a processor. Finally, the last approach to interface designing and testing is heuristic analysis. The author explains that heuristic analysis is evaluation against the guidelines laid in place for an interface. An example could be something like a window or a button. The author goes on about Nielsen and Molich, who identified nine general heuristics in a table. These range from being consistent and providing feedback to preventing errors and using simple and natural dialog. All of these outline how an interface should behave from a heuristic standpoint.

Hao Zhang 15:01:15 2/12/2014

When we design the UI of software, we may not be able to have users help us design it. Firstly, users’ time is not always available, and they have no duty to help us. Secondly, different users have different thoughts about the app, so we cannot cover them all. It is efficient to design by ourselves and then let users test it. This chapter teaches us several steps for designing an interface without users, including research sources, dealing with subject matter experts, increasing domain knowledge, and building modular UI frameworks. It also shows us some methods that let us pretend to be a user and walk through our design, answering questions such as: who should do a walkthrough, and when; what's needed before you can do a walkthrough; what you should look for during the walkthrough; and what to do with the results of the walkthrough. Those are good questions to help us optimize our designs. The most important thing is that we should put ourselves in users’ positions to design and test the app. That means we should think about what functions or pictures they want, why they like this app, and how they will want to use it. What is more, don't close off any “roads”; innovation is important!

James Devine 16:57:16 2/12/2014

This reading is about testing interfaces without users. The three methods discussed are cognitive walkthroughs, action analysis, and heuristic analysis. Cognitive walkthroughs consist of trying to tell a believable story about how a person uses an interface. If a story cannot be made believable, a problem has been discovered and changes should be made. The action analysis method can be performed in two separate ways. Formal action analysis is characterized by the extreme detail of the evaluation. The “back of the envelope” approach is done by listing the actions as you would explain them to a user, and then thinking about them. Heuristic analysis applies general principles that guide design decisions. It has several evaluators identify problems with the interface, then combines their results to come to a conclusion.

Michael Mai 17:25:57 2/12/2014

This article discusses the benefits of evaluating designs without a user. Although users are definitely an important part of a design process, sometimes it's good to do the evaluation without them to uncover bugs that users may not always catch. Doing both kinds of evaluations (with and without users) improves the design by a lot. The author discusses three approaches to evaluating an interface without users. The first is the cognitive walkthrough. This is a way of imagining what people are thinking and doing when they first start using an interface. The example he shows is a walkthrough of turning on background printing for a printer interface. The results from this walkthrough can be used to fix the interface. The second approach is action analysis. It is an evaluation procedure that forces you to look more closely at the sequence of actions a user has to perform to do a task. The author shows an example in a table listing a few actions with computers and the times it takes to do them. A good place for these action analyses is the parts of the interface that users will access repeatedly as part of many tasks. It's good to get results for these frequently used parts of the interface. Lastly, the third approach is heuristic analysis. Heuristics are guidelines, and when there is a bad interface, heuristics are usually proposed in response. The other two approaches were task-oriented whereas this method is not. It's important to use task-free evaluations also, to catch problems that the other methods miss. The author then lists a few heuristics from Nielsen and Molich. Overall, this article described some important techniques for interface design. It is important to use these techniques in designing and debugging an interface.

Zach Liss 17:47:20 2/12/2014

In past readings we've read about how a user should be involved in the interface design process. This reading helps us evaluate an evolving design when there are no users present to talk to. This is important because users' time is almost never a free or unlimited resource. In the reading there were three approaches to evaluating an interface without users. The first is the cognitive walkthrough, a formal way of imagining people's thoughts. The second is action analysis, an evaluation procedure that forces you to take a close look at the sequence of actions a user has to perform to complete a task with an interface. I think this is key to generating an efficient interface. It is important to be able to complete any number of actions in the simplest way possible. The third and final method is heuristic analysis, which uses general principles that can guide design decisions. I think it would be very helpful if our group went through Nielsen and Molich's Nine Heuristics while fine-tuning our UI. I'm glad that I had the chance to read this because I feel that it is important that our group does some more UI analysis without the aid of users. I think this will help our app's interface become much stronger.

Brett Lilley 19:10:47 2/12/2014

The reading "Evaluating the Design Without Users" discussed three different ways to evaluate interface design for applications without the users being present, as suggested by the title. These three methods are Cognitive Walkthroughs, Action Analysis, and Heuristic Analysis. The purpose of these methods is to analyze the interface design and determine flaws/problems with a sample interface without relying on user testing. Each method is also better at determining/discovering certain types of problems. As the article pointed out, although there can potentially be thousands of users, only a handful test the app, and therefore there are going to be unforeseen problems/bugs. Combined with other forms of user testing, these three methods can lead to an overall much better interface design for an app, as opposed to an app that doesn't go through such a rigorous design and testing process.

Guoyang Huang (Guh6) 19:20:29 2/12/2014

This reading taught me about the ways in which a design can be evaluated without actual users. I learned that the cognitive walkthrough uses an imaginary person and performs actions based on that imaginary person. Second, I learned that a walkthrough can be done informally in my head when creating interactions and designs. Prior to the walkthrough, the four things needed are a prototype of the interface, a task description, a written list of actions, and an idea of the users. During the walkthrough, one should gather the results and actions for the users. One should always re-design the interface based on the walkthrough. The second concept I learned is action analysis, which includes keystroke-level analysis and the back-of-the-envelope approach. Keystroke-level analysis is similar to the reading we did last Thursday that explained the cognitive, motor, and perceptual concepts. The second approach is easier but does not involve as much detail. There was also an interesting table that gives information about the time to complete tasks. I also learned that formal, keystroke-level analysis has many details that need to be thought through, whereas back-of-the-envelope is easier, with estimation rather than meticulous calculation. The third concept I learned is heuristic analysis, or creating a design based on guidelines. The list mentioned is Nielsen and Molich's. This allows designers to find mistakes earlier because the design can be heuristically checked for major problems.

Melissa Thompson 20:09:09 2/12/2014

Lewis and Rieman bring up good points in this chapter. Users will not always be available to test and provide feedback, so the designers need to be able to evaluate the design without them. They discuss three ways to do this: cognitive walkthrough, action analysis, and heuristic evaluation. The first, cognitive walkthrough, is a way to imagine how users walk through designs by telling a believable story and focusing on the interface, a task, and actions needed to complete it. Designs should have target users, so we can describe application use as if we are one of them, making sure to back up 'their' actions with motivations from their knowledge and skills. This can reveal issues that arise when a user thinks a certain way. Perhaps labeling isn't done very well and a user can't find what they are looking for, or maybe they cannot see that their action actually had an effect. Either way, they end up confused on how to accomplish the task because the interface was not easy to use without training. According to the authors, brainstorming these walkthroughs ahead of time is helpful to clear up these problems before user testing an application. The more complicated an interface, the more people should weigh in on the walkthrough. Ultimately the goal is to make changes to an interface so that users know when they need to complete an action, are able to find and recognize the capabilities of the tool used to do it, and can tell that their action had an effect. The second way to evaluate a design is through action analysis, or studying the sequence of physical and mental actions done by users to complete tasks. This includes determining these steps and analyzing them. Action analysis can help designers see when a task is too complicated, or the controls are too flexible. A table of common times it takes to complete specific movements can be used to calculate an estimated time that a task will take. This is known as formal action analysis. 
A less formal approach, dubbed back-of-the-envelope action analysis, takes a more general approach by focusing less on specific times and more on the environment and the task as a whole. The third way to evaluate designs without users is with heuristic analysis, or using guidelines to identify good and bad design details. There is no standard list of guidelines for this, and too few or too many guidelines can cloud design decisions. The authors put together a list of the heuristics that they think are the most important to think about when analyzing a design. These are: using simple and natural dialog, using the target user's language, minimizing user memory load, consistency, providing feedback, exits, and shortcuts, displaying descriptive error messages when needed, and preventing errors as much as possible. Following these guidelines can help prevent users from getting confused and making mistakes, and -- just like with the above methods -- it is always helpful to get a group of people to evaluate a design instead of just a single person. Lewis and Rieman suggest doing cognitive walkthroughs first to catch and fix design flaws, followed by heuristic analysis to refine the design, and finally action analysis to double-check the resulting interface. After that, user testing can begin!
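The formal, timing-table side of action analysis can be sketched with a tiny keystroke-level estimate. The operator times below are the commonly cited keystroke-level-model averages (roughly the kind of values in the chapter's table), and the task breakdown is invented for illustration, not taken from the reading.

```python
# Sketch: a keystroke-level time estimate in the spirit of formal
# action analysis. Operator times are commonly cited averages in
# seconds; the task sequence below is hypothetical.

OPERATOR_TIMES = {
    "K": 0.28,  # press one key or mouse button (average typist)
    "P": 1.10,  # point at a target on screen with the mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def estimate_seconds(operators):
    """Predicted expert task time for a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: save a file through a menu.
#   M (decide to save), H (hand to mouse), P+K (point at and click the
#   File menu), P+K (point at and click the Save item)
save_via_menu = ["M", "H", "P", "K", "P", "K"]
print(f"predicted time: {estimate_seconds(save_via_menu):.2f} s")
```

Summing a handful of average operator times like this is exactly how a designer can compare two candidate action sequences (say, menu versus keyboard shortcut) before any user ever touches the interface.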

Longhao Li 21:14:22 2/12/2014

This article talked about how to evaluate a design without users. This method of evaluation is very useful, since it may not be possible to do a lot of user evaluation of software. The author starts the article with an example about the “Chooser” function in the Apple Macintosh operating system, which is used to let the user select printers and printer options. The entire article revolves around this example. There are three approaches to non-user evaluation: cognitive walkthroughs, action analysis, and heuristic analysis. The cognitive walkthrough simulates the target user's operations. If every operation has a valid reason in the story of "this is the right operation I want", then the interface design is good; if not, the interface design has a problem. This method is good for a small piece of interface design. The author also talked about how to prepare for it, what kinds of results the developers need to care about, and how to deal with those results. The second approach, action analysis, analyzes the sequence of actions that the user performs. There are two different ways to do it: formal and back-of-the-envelope. The formal one cares about the time used for operations, like the time to enter one keystroke on a standard keyboard or the time to respond to a brief light. The back-of-the-envelope one doesn't; it cares about the operations in the big picture. Then there is the third approach, heuristic analysis. This approach requires that analysts have a fair amount of user interface knowledge, so that they can anticipate what may cause problems and make sure the evaluation of the interface catches most of the problems that users may encounter. To my understanding, this approach helps make sure that the interface design is good enough to publish as a beta version. In general, the author gives the reader three great approaches to non-user evaluation. They give developers ways to enhance their interface designs.

Robert McDermot 21:29:56 2/12/2014

This week's reading dealt with three different methods for evaluating an application's interface without having users present. Each method helps to identify certain problems with the design of a user interface, but no one method is able to catch all problems. Even when combining all of the methods together, it still may not be possible to catch all of your design issues. Certain techniques may make sense at different times during the design process. While a back-of-the-envelope check may be a good "sanity check", you may wish to use something like an interface walkthrough as your design becomes more complex. The article makes the point at the end that in order to be successful, you will need to combine any of the discussed techniques with user testing to get the best product.

MJ (Mary Letera) 22:06:52 2/12/2014

This reading covers three methods for evaluating an interface without users: cognitive walkthroughs, action analysis, and heuristic analysis. Each has its own uses and all three can be used together, though they will still not uncover every problem. Because of this, user testing is still needed. Cognitive walkthroughs are useful to do during actual development, as they can help to plan features. Heuristic analysis is useful when development is partially done, sort of like in an agile testing/development scenario to verify what is being done as it is being done. Action analysis can be useful at various stages, such as early on in a project's lifecycle to determine the value of features and later as a means of evaluating an interface once it has become more complex.

Ariana Farshchi 22:07:32 2/12/2014

This week's reading, Evaluating the Design Without Users, talks about evaluating an evolving design when no users are present. The reading describes three approaches to evaluating an interface when users are absent: the first is the cognitive walkthrough, the second is action analysis, and the third is heuristic evaluation. The chapter goes into detail about each approach. The first, the cognitive walkthrough, is a task-oriented technique that fits especially well in the context of task-centered design. When you do a walkthrough, you select one of the tasks that the design is intended to support. Then you try to tell a believable story about each action a user has to take to do the task. If you can't tell a believable story about an action, then you've located a problem with the interface. The second approach is action analysis, which allows a designer to predict the time that an expert user would need to perform a task, and which forces the designer to take a detailed look at the interface. The chapter distinguishes between two flavors of action analysis: keystroke-level analysis and back-of-the-envelope analysis. Action analysis, whether keystroke-level or back-of-the-envelope, has two fundamental phases: the first is to decide what physical and mental steps a user will perform to complete one or more tasks with the interface; the second is to analyze those steps, looking for problems. The third approach is heuristic evaluation, a kind of check-list approach that catches a wide variety of problems but requires several evaluators who have some knowledge of usability problems. These three methods each uncover different problems, and even the combined result of all three techniques does not catch all possible problems, because no individual analysis is perfect.

MJ McLaughlin 22:31:16 2/12/2014

This chapter of “Task-Centered User Interface Design” provides an in-depth discussion of both the value of and the process behind evaluating a design without users. The value lies in the fact that users’ time is a very limited resource, and also that a good evaluation can actually find bugs and problems better than a test with a few users, especially in a program designed to be used by thousands of users, if not more. Three approaches to such design evaluation are the cognitive walkthrough, action analysis, and heuristic evaluation. These different kinds of evaluations are shown in action, applied to the task of turning on background printing in the Apple Macintosh Chooser, an interface designed to let users select printers and printer options. In a cognitive walkthrough, the goal is to imagine users’ thought and action processes as they use an interface for the first time. As you imagine yourself as a typical user of your design, you try to tell a believable story about the actions the user has to take to complete their goal task. Based on the user’s knowledge and the design and feedback of the interface, you try to motivate the user’s interactions with your design and tell a believable story of their actions in pursuit of their goal. If you can’t tell a believable story behind their actions, then there is a problem with your interface. By questioning whether each action is believable from a user’s point of view, we can uncover problems with the interface that may not be obvious to a designer, such as inadequate feedback, unlabeled buttons, dangerous assumptions, and so on. We can also make sure that everything written in the specifications is implemented and that further discussion with users is pursued and put to good use, which helps ensure that users are able both to use a design at first glance and to get even more comfortable with it as time goes by.
In the second approach, action analysis, designers decide what physical and mental steps a user has to perform to complete goal tasks with their interface. Then, the designers analyze those steps and look for problems, such as tasks taking too many steps, too much time, or too much knowledge to complete. Things the system is supposed to do but can't can also be uncovered, and this stepping through of actions can help with making sure documentation covers all of the required procedures as well. One type of action analysis, known as formal action analysis, breaks task completion down all the way to the keystroke level and, using average values for completion of various sub-tasks, can predict the time it will take an experienced user to accomplish a goal with your design to within about 20% error. Breaking tasks down into hundreds (sometimes more) of sub-tasks and analyzing each and every one can be very time consuming, and the payoff may only be worth it for very important and frequently performed actions a user needs to be able to complete with an interface. A somewhat easier-to-perform version of action analysis, known as back-of-the-envelope action analysis, will not give designers as much precision, but can still be quite valuable. In this approach, instead of breaking actions down into minute micro-movements, designers think of tasks as a series of “natural” actions, such as those you’d explain to a user trying to use your design. This type of analysis can be very useful in determining, for example, whether new ideas for features are worth adding to the design, or whether they just add unwanted complexity to the usage process. One last approach, known as heuristic analysis, is very valuable in that it is a task-free evaluation method, which helps make up for the disadvantages of the previous task-focused approaches, including limited coverage and stand-alone task evaluation.
Nielsen and Molich have come up with nine heuristics that are effective at helping different evaluators find different problems with an interface. By making sure that their design follows heuristics such as “be consistent” and “provide feedback”, a few evaluators can identify most major problems with an interface: problems that will confuse, inconvenience, and slow down users and cause them to make errors. They can then fix these problems and provide an even better experience for the user. And by using all three of these approaches in our interface design process, hopefully we can offer a better experience for the user as well.

Zhanjie Zhang 23:03:16 2/12/2014

As designers, we must be able to evaluate an evolving design when no users are present. We must be aware that users' time is limited and be courteous of that. We should also evaluate the design ourselves so that we can catch problems that a test with only a few users may not reveal. Thus, it is important to be able to evaluate a design when users are not available to provide feedback. One way to do this is the cognitive walkthrough, a way to imagine people's thoughts when seeing an interface for the first time. By trying to tell a believable story we can locate problems with an interface, or confirm their absence. Walkthroughs focus most clearly on problems that users will have when they first use an interface, without training. To do a walkthrough, we need a description or a prototype of the interface, a task description, a list of the actions needed to complete the task with the interface, and an idea of who the users will be and what kind of experience they'll bring to the job. We can also use action analysis, a procedure that forces you to take a close look at the sequence of actions a user has to perform to complete a task with an interface. Another approach is the back-of-the-envelope version, which foregoes detailed predictions in an effort to get a quick look at the big picture. In conclusion, the cognitive walkthrough provides the best understanding of the problems it uncovers, heuristic analysis is good for catching a wide range of problems, and back-of-the-envelope analysis is a good sanity check.

Matthew O' Hanlon 23:32:59 2/12/2014

This reading provided descriptions of three types of interface design evaluations: the cognitive walkthrough, action analysis, and heuristic evaluation. Each of the approaches looks at different aspects of the design, and each is useful for different reasons. The cognitive walkthrough is a task-oriented technique which attempts to evaluate task-centered designs. Action analysis has two approaches: one which tries to predict the time an expert user would need to complete a task, and another that simply considers a sequence of steps to complete a task, checking for redundancy and consistency. Heuristic evaluation is more of a checklist approach; it requires several experienced evaluators to perform a thorough and accurate analysis. The cognitive walkthrough tries to tell a believable story about each step a user takes to complete a task. It is usually the case that if you can't tell a believable story, then you have a problem with the interface. A cognitive walkthrough should question your assumptions about what the user thinks, identify controls that might not be obvious to users, find possible difficulties with labels, note inadequate feedback, and uncover missing details in the specification. A passable story will help users become more advanced by transitioning them smoothly from action to action. Walkthroughs should normally be performed by designers who work closely with real users. This is necessary for the designers to create a mental picture of users in their actual environment. Usually evaluators need a detailed description or prototype of the interface. The task descriptions should be representative of the task-centered design for the prototype. This is why it’s important for the designers to know who the users are and their relative experience on the job.
Some of the common mistakes that inexperienced evaluators make are not knowing how to perform the tasks themselves, or not considering that they have to generalize for users. Some evaluators stumble through the actions and then evaluate them. Instead, they must start with the correct list of individual actions, and if a user might have trouble with an action, they should focus not on what happens next but rather on the fact that the problem needs to be fixed. Evaluators also have to think of classes of users rather than one unique user when considering the problems users face. Some of the things evaluators should look for during a walkthrough are the effect an action has, the flow from one action to the next, and whether the user will recognize the action. An evaluator has to ask whether the action a user is trying to perform will produce the desired effect. Users aren't normally thinking the way designers think. An evaluator must consider whether a user will see the control for the desired action. Will they notice that it exists? Once the control is found, will the user recognize it as producing the effect they want? After the action is taken, will the user understand the feedback well enough to move on to the next action confidently? Once the walkthrough is completed, every attempt should be made to fix the interface. Designers should make controls more obvious, use labels that users will recognize, and provide better feedback. Hopefully this will eliminate any glaring mistakes the designers made before bringing the prototype to users for testing. The action analysis approach is an evaluation procedure that forces you to take a close look at the sequence of actions a user performs to complete a task. There are two types: formal, which is known as keystroke-level analysis, and informal, which is called back-of-the-envelope analysis. Each procedure has two phases.
The formal method makes you decide what physical and mental steps a user will perform to complete a task, and then analyze those steps looking for problems. The formal method aims to make accurate predictions about specific tasks; its underlying numbers come from testing many users and averaging the time it takes to complete each low-level action. The procedure for making the task list is like programming: you identify the routines, then the subroutines and subtasks, and finally the individual steps needed to complete a task. The result is a hierarchical description of the task as an action sequence. There are problems with this approach, though. A full-blown analysis is a daunting task. Evaluators can probably find clusters of actions that repeat, but the effort is non-trivial. Also, different analysts might come up with different results based on different hierarchies. The approach does have its uses, though: it really should be reserved for very large payoffs, normally for segments of the interface that users access repeatedly as part of many tasks. The informal approach forgoes detailed predictions and looks at the big picture. It also has two phases, which are to list the actions and then to think about them. You don't need to break tasks down into a hierarchy; just list the natural sequence. Evaluators should ask themselves simple questions about the interface. Can a simple task be done with a simple action sequence? Can frequent tasks be done quickly? How many tasks does the user have to learn? Is everything in the documentation? They should remember that each natural action takes on the order of two or three seconds to complete. Staying at this high level means you're more likely to keep the whole task in mind. You can also compare the time to complete a task the mundane way versus with the computer. The informal approach is useful in deciding whether or not to add features: it takes time for a user to decide which way to do something, so the idea is to remove redundancy.
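The back-of-the-envelope rule of thumb mentioned above (each "natural" action takes a couple of seconds) lends itself to a quick sketch. The three-seconds-per-action constant and the sample action list below are assumptions for illustration, not figures from the reading.

```python
# Back-of-the-envelope action analysis: count the "natural" actions a task
# requires and multiply by a rough per-action time. The 3-second constant
# is an assumed rule-of-thumb value; the step list is hypothetical.
SECONDS_PER_ACTION = 3

def rough_estimate(actions):
    """Return a ballpark completion time (seconds) for a list of actions."""
    return len(actions) * SECONDS_PER_ACTION

# Hypothetical task: turning on background printing via a chooser dialog.
steps = [
    "open the Chooser",
    "select the printer icon",
    "click the Background Printing 'on' button",
    "close the Chooser",
]
print(rough_estimate(steps))  # 12 (seconds)
```

The value of staying this coarse is exactly what the critique describes: you keep the whole task in view and can quickly compare a proposed feature's action count against the alternative.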
Heuristic analysis gives general principles or rules of thumb that can guide design decisions. The first two approaches differ from the heuristic approach in that they are task oriented; the heuristic approach is a task-free technique. With task-oriented evaluations there is generally never time to evaluate every task a user would perform, and they also have trouble identifying cross-task interactions. Task-free analyses, on the other hand, can catch problems that task-oriented approaches miss. The heuristic method has several evaluators use the nine heuristics to identify problems in the design. Each does their analysis individually, and then the problems identified by the evaluators are combined into a single list. The nine heuristics are: simple and natural dialog, speak the user's language, minimize user memory load, be consistent, provide feedback, provide clearly marked exits, provide shortcuts, good error messages, and prevent errors. The heuristic method has been shown to find around 75% of the serious design problems when experienced evaluators are involved in the process. Inexperienced evaluators will usually find at least 50% of the problems, though more of them are needed. In sum, each of the approaches has its use, and evaluators should choose the method based on the needs of the project and the prototype. The cognitive walkthrough can help find problems in a task from the user’s perspective, action analysis can find inefficiencies in the steps taken to complete a task, and the heuristic approach can find problems with cross-task interactions within the system.
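The claim that a handful of evaluators catches most problems is often modeled (in Nielsen's later work on heuristic evaluation, not in this reading) as found(n) = 1 - (1 - λ)^n, where λ is the chance that a single evaluator spots any given problem. The λ value below is an illustrative assumption.

```python
# Proportion of usability problems found by n independent evaluators,
# using the commonly cited model found(n) = 1 - (1 - lam)^n.
# lam = 0.31 is an assumed illustrative value, not from the reading.
def proportion_found(n, lam=0.31):
    """Expected fraction of problems found by n evaluators."""
    return 1 - (1 - lam) ** n

for n in (1, 3, 5):
    print(n, round(proportion_found(n), 2))
```

Under these assumptions the curve flattens quickly, which matches the critiques' point that a few evaluators suffice and that inexperienced evaluators (lower λ) simply need more people to reach the same coverage.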

Buck Young 2:26:43 2/13/2014

I found the nine heuristics the most interesting part of today's reading. Recently, I had run across these heuristics in another article about user interface design. Simple and natural dialog and speaking the user's language seem like no-brainers; however, as a Computer Science student, it is easy to get caught up in jargon. Complicated language is unintuitive for the user and only causes more problems. Consistency is important across the board. It is interesting to see how Android and iPhone have evolved their own consistency patterns (similar, too, to the desktop operating systems). Clearly marked exits are always important for applications. No one wants to find themselves at a dead end, unsure of what to do. Quick exits are also important in this age of mobile applications, where a user may decide to quit an application at any time. Good error messages go hand in hand with preventing errors. Just remember to speak the user's language and provide them with a way to solve their problem.

Zach Sadler 3:16:46 2/13/2014

The description of the Mac's chooser is very strange. Why draw elaborate ASCII art when you could just take screenshots? The pictures were very well-done but still subject to some formatting issues in the way the user opens the text document. I really don't get why you wouldn't use images. The idea of a walkthrough is interesting, and I think something we all do implicitly. Saying that we should consciously and explicitly write out a walkthrough is a bit of a stretch for me, but I could see the usefulness in conveying the ideas to others. Once it got to the part about detailed technical analysis of user actions, the article lost me. It seems a little too superfluous for my taste. I guess I'm just not super hardcore about perfecting user interfaces.

Brian Kacin 7:10:32 2/13/2014

This reading section is about evaluating your design without the help of users. Getting users to review your design is almost never free, and their time is limited. Also, you do not want people testing a product that still has unfixed trivial bugs in it; that shows a lack of professionalism. And the users tested will likely not catch every single problem, so as a designer you must know everything that should happen with your program and its functionality. The first step in evaluating your program is a cognitive walkthrough. This step is just imagining the thoughts people will have and the actions they will take while using your product for the first time. Understanding what people will think about is a great way to approach your design. A simple walkthrough can uncover all types of problems that a general pool of test users won't reveal. This step focuses on first-time users of the interface and on thinking about how they will go step by step through a task in your program. The designer of the program should do the walkthrough as they are designing. After completing the walkthrough several times, obviously the right thing to do is evaluate the problems and fix the interface. The next step is action analysis, which takes a closer look at the sequence of actions a user has to perform to complete a task. This step has two phases for evaluation: one, decide the physical and mental steps the user will have to perform to finish tasks; two, analyze those steps and look for problems. When creating a big program with many tasks, this is a very daunting process and takes up a lot of time. So this step is not the easiest; you must pick which tasks to evaluate at that level or your time will be consumed considerably. The last step is heuristic analysis, which applies general rules that guide design decisions. As bad interfaces became a recognized problem, researchers proposed heuristics for interface design that cover many parts of the spectrum.
A few examples of these are simple and natural dialog, speaking the user's language, being consistent, giving good error messages, and preventing errors. Looking at the three methods for evaluating the design without users, each can uncover different problems to fix. Even after all these methods are completed, they must be combined with user testing for further evaluation, but by completing all of them you are on the right track to finishing your design!

Ryan Ulanowicz 7:42:51 2/13/2014

This section is about testing software without users. This is important because the users available during testing don't have all the needs of every user, nor do they have the same point of view as someone who will eventually become experienced with the software. One thing that you can do instead is a cognitive walkthrough. In this, you pick a task that a user might do and tell a believable story as they go about doing that task. This kind of exercise can help to illustrate a lack of feedback in the design. If it's an individual, they must obviously do it themselves; but if it is being done by a group, it helps to get designers, programmers, and everyone else together to do the testing as one hivemind. It is best to do it without managers, because their presence tends to make it more of a show. You need a fairly detailed description of the interface, a task list, and a list of the actions needed to do those tasks. Finally, you need a list of typical users and their needs. You need to ask yourself: do users know what the buttons are going to do? Do they understand the feedback that's being given? Once you have this feedback, it's time to go back and fix the interface to make it even better. Action analysis, sometimes called keystroke-level analysis, lets us measure how long it takes to execute tasks at the keyboard and compare that against the time a successful task should take. This allows a more numerically driven approach to fixing the interface. We should ask: can a simple task be done with a simple key combination? The answer should be yes. Heuristic analysis uses guidelines that can steer the design process. Here are Nielsen and Molich's nine heuristics. Simple and natural dialog - Simple means no irrelevant or rarely used information. Natural means an order that matches the task. Speak the user's language - Use words and concepts from the user's world. Don't use system-specific engineering terms.
Minimize user memory load - Don't make the user remember things from one action to the next. Leave information on the screen until it's not needed. Be consistent - Users should be able to learn an action sequence in one part of the system and apply it again to get similar results in other places. Provide feedback - Let users know what effect their actions have on the system. Provide clearly marked exits - If users get into part of the system that doesn't interest them, they should always be able to get out quickly without damaging anything. Provide shortcuts - Shortcuts can help experienced users avoid lengthy dialogs and informational messages that they don't need. Good error messages - Good error messages let the user know what the problem is and how to correct it. Prevent errors - Whenever you write an error message you should also ask, can this error be avoided?
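The nine heuristics listed above can serve as a simple checklist structure, and the merging of each evaluator's independent findings into a single list (which the method prescribes) is straightforward to express. The evaluator findings below are hypothetical examples, not from the reading.

```python
# Heuristic evaluation bookkeeping: each evaluator independently records
# (heuristic, problem) pairs; the session merges the individual lists
# into one deduplicated master list. Example findings are hypothetical.
NIELSEN_MOLICH_HEURISTICS = [
    "Simple and natural dialog",
    "Speak the user's language",
    "Minimize user memory load",
    "Be consistent",
    "Provide feedback",
    "Provide clearly marked exits",
    "Provide shortcuts",
    "Good error messages",
    "Prevent errors",
]

def merge_findings(*evaluator_findings):
    """Combine per-evaluator (heuristic, problem) pairs, dropping duplicates."""
    merged = []
    for findings in evaluator_findings:
        for item in findings:
            if item not in merged:
                merged.append(item)
    return merged

evaluator_a = [("Provide feedback", "No confirmation after saving")]
evaluator_b = [("Provide feedback", "No confirmation after saving"),
               ("Be consistent", "Two different icons mean 'delete'")]
print(len(merge_findings(evaluator_a, evaluator_b)))  # 2 distinct problems
```

Doing the analyses individually and merging afterwards is the point: different evaluators catch different problems, so the merged list covers more than any single pass.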

Pedro Alvillar 8:02:02 2/13/2014

There exist three methods of evaluating an interface whenever users aren’t available for testing and feedback. The first method is called the cognitive walkthrough, which refers to envisioning that you are a user and thinking about how you, as a user, would react to seeing the particular interface for the first time. While the designers should already be doing this as they design each part of the interface, it is also important to get a group of people from the company together to do a more detailed walkthrough. Before you perform a walkthrough you must have four things defined: what task you are trying to accomplish, what kind of user will be trying to accomplish it, the interface that will be used, and what the correct steps are for completing the task. Upon completion of the walkthrough you should make fixes to the interface as necessary to make completing the task easier for the user. The second method is called action analysis, and it refers to taking a careful look at the exact steps needed to complete an action on your interface, including how long it will take the user to complete the task. Action analysis can be either formal or back-of-the-envelope, meaning it can either be really detailed or look at large-scale problems. Action analysis consists of two phases: deciding what steps a user will execute to complete a task using the interface, and then analyzing those steps. The third method is called heuristic analysis, and it refers to using general principles and rules of thumb to guide the design decisions of your interface in order to avoid developing a bad interface.

Alex Stiegel 8:40:24 2/13/2014

Walkthroughs are an interesting concept I haven't heard of before. It's kind of what people do a bit naturally but much less formally. So in general when you are designing something you want to think about how it will look. In this case you want to try to make yourself the user and then design the interface. After you finish your walkthrough you want to fix your design based on what you have learned. More importantly you want to do analysis on the whole thing. Action analysis has the trouble of not being consistent across different people. So while useful in a specific sense it doesn't necessarily provide good results every time. A less formal version of action analysis would be better.

Bret Gourdie 8:59:55 2/13/2014

Today's reading was an interesting reversal of sorts: that of evaluating a design without users. One way of doing so is a cognitive walkthrough. This is done by imagining people's steps through the program, so some creativity must be employed here. Telling a believable story here is key! Another method is action analysis. There are two kinds: formal keystroke-level analysis, which predicts a skilled user's times, and the rougher back-of-the-envelope version. Both are valuable, but appropriate at different times in development. Finally, there's heuristic analysis, more of a general-principle approach. With any method of evaluating a design, it is important to select the most appropriate one, even though the choice may not always be clear!

Max Campolo 9:09:10 2/13/2014

This reading was about evaluating a design without users. You can't always have users available to evaluate your design, so it's important to be able to evaluate it yourself. It can also help you catch problems in the design that users won't necessarily catch. The first part of the approach is doing a cognitive walkthrough. In a walkthrough, you try to explain why a user would select each of the items they are supposed to engage with in the interface. It is important to look for the interactions that would take place and to identify problems. Other methods for evaluating a design include action analysis and heuristic analysis. Action analysis forces you to take a close look at the actions users will perform. Heuristic analysis applies general guidelines for interface design. Combining all of these design evaluations will be effective in evaluating your interface without users and catching problems that might exist.