- 1 Readings
- 2 Reading Critiques
- 2.1 Vivien Chang 16:33:48 2/9/2017
- 2.2 Jason Tucker 21:15:13 2/9/2017
- 2.3 Jonathan Hanobik 22:05:04 2/9/2017
- 2.4 Timothy Smith 23:50:23 2/9/2017
- 2.5 Jason Ly 19:16:33 2/12/2017
- 2.6 Kyle Plump 14:53:38 2/13/2017
- 2.7 Chad Pongratz 16:04:57 2/13/2017
- 2.8 Ariella Hanna 18:48:30 2/13/2017
- 2.9 Emily Hanna 22:17:23 2/13/2017
- 2.10 Gabriel Larson 22:19:01 2/13/2017
- 2.11 Nick Miller 22:23:01 2/13/2017
- 2.12 Daniel Kindler 22:37:50 2/13/2017
- 2.13 Brett Schuck 23:56:45 2/13/2017
- 2.14 Louis Seefeld 1:07:19 2/14/2017
- 2.15 John Ha 1:16:42 2/14/2017
- 2.16 Kenneth Woodruff 2:17:48 2/14/2017
- 2.17 Michael Smith 2:30:02 2/14/2017
- 2.18 Zhenya Lindsay 8:31:43 2/14/2017
- 2.19 Colin Schultz 8:42:00 2/14/2017
- 2.20 Anthony Tummillo 8:54:19 2/14/2017
- Evaluating the Design Without Users Clayton Lewis and John Rieman, Task-Centered User Interface Design Chap 4
Vivien Chang 16:33:48 2/9/2017
As designers, we need to evaluate a design even when no users are present, for several reasons. First of all, users' time is almost never free or unlimited. Another benefit of evaluating a design without users is that we can catch problems that an evaluation with only a few users might miss. This reading describes three ways to evaluate an interface in the absence of users: cognitive walkthrough, action analysis, and heuristic evaluation. Of these, the cognitive walkthrough was the most interesting to me, since it involves creating a story about the user's actions, which is something I've never thought about when creating an application.
Jason Tucker 21:15:13 2/9/2017
This reading was about evaluating a design in the absence of users. It goes over different techniques to overcome this unique, yet common problem. The techniques, such as cognitive walkthroughs and action analysis, are absolutely necessary to do even when you do have test users. These techniques are among the first steps to good design.
Jonathan Hanobik 22:05:04 2/9/2017
Today's reading was really interesting. It talked about how to test a given interface without any user input. At first, I have to admit that I was skeptical, as applications usually evolve based on the input of the user. However, I quickly learned that the three suggested approaches would be great ways to reconfigure a given interface. I'll say that, after reading this article for the first time, I'm not 100% confident in my ability to deploy any of the described techniques. Yet just knowing that they exist, and that they can outline and detect problems in an interface or set of procedures, is comforting. Perhaps the greatest takeaway for me is the set of nine distinct heuristics. These quick rules of thumb are easy to remember and remind the programmer how the interface should be designed. Overall, I think that these techniques will come in handy upon review!
Timothy Smith 23:50:23 2/9/2017
This was an interesting read. Sometimes, users are not available all the time to test out a product or prototype, so there are three techniques that can be used: cognitive walkthrough, action analysis, and heuristic evaluation. The cognitive walkthrough component is a way to imagine people’s thoughts and actions when they use an interface for the first time. Action analysis forces you to look at the sequence of actions a user must perform to complete a task with an interface. Heuristics are the general principles that can aid in making design decisions. Aside from learning of those three methods for evaluating an interface without users, it was also interesting to learn of what each method specifically uncovers and what you can learn from them.
Jason Ly 19:16:33 2/12/2017
The reading discusses three approaches to evaluating an interface in the absence of users: cognitive walkthroughs, action analysis, and heuristic evaluation. In cognitive walkthroughs, you try to imagine how the user would interact with an interface for the first time. Action analysis focuses on the sequence of individual actions that the user would take to achieve a task. Heuristic analysis consists of a list of qualities that an interface should have. I thought the most useful approach to evaluating an interface in the absence of users was heuristic analysis. Heuristic analysis provides a simple-to-understand checklist that focuses on giving the user an easy-to-use and easy-to-understand interface. It's difficult to anticipate everything a user might try to do that would cause an interface to perform undesired actions, because everyone has different amounts of experience. Action analysis is difficult to accomplish accurately, given that everyone types at different speeds and has varying cognitive capabilities. I thought this reading was very useful for our group project because we can't always find participants to test out our paper prototypes or interface. It took a significant amount of time for our group to find three willing interviewees for the second group assignment. Also, students, staff, and visitors have their own work or tasks to accomplish, and testing our interface with participants would require finding someone with spare time or someone willing to give up what they were doing. By reducing our need to search for participants, we can focus on providing the necessary features to our users.
Kyle Plump 14:53:38 2/13/2017
I think that today's reading is very applicable to our group project. It would be helpful if we ran through the different types of interface evaluations. It's interesting to try to think in small steps like the user, making no assumptions about their knowledge (e.g. we cannot assume that the user knows/should have to turn the copier on). Each of these evaluation techniques seems to have its pros/cons, but I think if you combine all three evaluations, you could get a very solid, mostly defect-free interface that feels intuitive to the user.
Chad Pongratz 16:04:57 2/13/2017
The reading begins by explaining a few reasons that it's beneficial to evaluate a design of an application without the need for input from users. One of these reasons is that a good evaluation can catch problems that an evaluation with only a few users may not. Lewis and Rieman go on to give some detailed information about the Chooser, what it is and how it relates to the system. The structured layout of the reading made it easy to absorb the information, as the authors went on to break it down into the sections of cognitive walkthroughs, action analysis, and heuristic analysis. Each of these methods for evaluating an interface without users revealed its own problems. The cognitive walkthrough, for example, identified problems with finding the controls and suggested that they be moved to the Print dialog box. Even the combined result of the three methods wasn't able to find all the issues in this reading. Overall, I thought this reading was informative. It was also beneficial seeing as we are moving on in our group projects to developing our user interfaces, and this knowledge of the different methods for evaluating user interfaces without user input will be essential in developing our application to the best of our abilities.
Ariella Hanna 18:48:30 2/13/2017
I thought this reading was very valuable to the interface design process. Cognitive walkthroughs, action analysis, and heuristic analysis are all helpful methods that can find different problems with an interface. Action analysis does seem like it might be too technical to be helpful in smaller situations, but it can help you determine the value of implementing a change in the interface design. Cognitive walkthroughs seem to be the most helpful of the methods because they're the only one of the three that will be specific to your app.
Emily Hanna 22:17:23 2/13/2017
This reading was about evaluating the interface design without the input of the user. This is beneficial for a few reasons: doing evaluations both with and without the user increases the number of evaluations, and the user only needs to see a design when it is near completion, as having them evaluate a multitude of designs can make them feel as though you're wasting their time. One style of evaluation sans user is a cognitive walkthrough. This is essentially pretending to be the user and attempting to walk through the interface or a prototype the way a user might, attempting to model their thoughts as you go. Second is the action analysis approach, which involves looking at the sequence of actions the user will take to do a task. This can further be broken into keystroke-level analysis, which is so detailed it can predict the time each step will take, and the back-of-the-envelope approach, which isn't nearly as detailed. The third method is heuristic analysis, which is not a task-oriented technique the way the first two options were. Instead, this involves a set of heuristics that evaluators will use to identify issues with the interface. It is important to use both task-oriented and non-task-oriented approaches, as each can catch problems the other might not.
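The keystroke-level analysis described above can be sketched in a few lines of code. This is a minimal illustration, not taken from the chapter: the operator times below are the commonly cited averages for the keystroke-level model (the chapter's exact numbers may differ), and the sample action sequence is hypothetical.

```python
# Rough keystroke-level estimate of task time: each primitive action
# (operator) gets a fixed average duration, and a task's predicted
# time is the sum over its action sequence.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # move a hand between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(actions):
    """Sum operator times for a sequence string like 'MHPK'."""
    return sum(OPERATOR_TIMES[a] for a in actions)

# Hypothetical task: think, reach for the mouse, point at a menu,
# click, think again, point at OK, click.
print(round(predict_time("MHPKMPK"), 2))  # prints 5.86
```

Comparing predicted totals like this for two candidate interfaces is what lets you judge a design change's value before any user ever sees it.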
Gabriel Larson 22:19:01 2/13/2017
The web page link was broken after I put in my credentials. I have a video to prove it, in case I'm the only one who had this problem. Email me.
Nick Miller 22:23:01 2/13/2017
Today's reading focuses first on the idea of a walkthrough. This is when you focus most clearly on problems that users will have when they first use an interface, without training. This is then used to fix the interface based on the problems that are found. Then the focus moves on to action analysis, which, any way it is done, takes two steps. Action analysis is the evaluation procedure where you look at the particular sequence of actions a user has to perform to complete a task with an interface. This is important because it can save you time, but more importantly it ensures that the interface won't fail.
Daniel Kindler 22:37:50 2/13/2017
I found this reading to be extremely informative in many respects. It gave me better insight into the human psyche, which I now find to be very important in software development. Different representation models play a huge role in a user's experience, and can have an effect on perspective, memory, and other crucial topics.
Brett Schuck 23:56:45 2/13/2017
In this reading, the author discusses multiple techniques used to analyze the effectiveness of our design choices and determine problems in said design. I found this reading more interesting than our last, and more easily digestible. I found the Heuristic Analysis section particularly interesting, as it laid out some very basic rules of thumb which can go a long way in designing an effective and intuitive interface. On a side note, I also found the example used to demonstrate "back of the envelope" action analysis interesting because it directly related to another field of interest for me: photography.
Louis Seefeld 1:07:19 2/14/2017
Seems like a good continuation of information based on what we learned about human processing. There was a lot to take in for a single reading. Taking away the person and allowing quantitative data to stand in could definitely be a heuristic used for further development in this course.
John Ha 1:16:42 2/14/2017
This week's reading focused on the interface design of the app without necessarily having users to test it. It emphasizes developing and improving the interface, because there are constantly improvements to be found. So the reading stresses understanding how the users will interact with the app: whether or not they find the buttons they need to find, and how fast they find them. We, as developers, then have to analyze the time each option takes over an extended period and see what needs to be improved and made simpler. Toward the end, the reading moves away from the task-oriented focus of analysis to the heuristic methods, a task-free evaluation methodology that should be used alongside the cognitive walkthrough and action analysis, which are what the first two-thirds of the reading cover.
Kenneth Woodruff 2:17:48 2/14/2017
I found this reading to be very interesting because it talks a lot about putting yourself in the user's shoes and imagining things from their point of view. Rather than designing what makes sense to you and then asking for users' opinions, possibly wasting their equally valuable time, it explains how you should look at it from their perspective and walk through the interface, trying to simulate what their thoughts and actions would be. The reading suggests gathering a list of actions the user would want to perform and attempting to perform them while imagining the thought processes of the user, to make sure that your interface is easy to navigate and makes sense. It also mentions that this process should lead to revisions or improvements, which I find interesting because what may seem like a useful and intuitive interface to the developer may be a complex and confusing interface to a user, so really the interface should accommodate the end user. It shows that there is a lot of work that can be done on an interface before the user even gets a chance to see or test it.
Michael Smith 2:30:02 2/14/2017
I definitely incorporated some of the walkthrough methods described in the reading when I interviewed for Group Project 2. Feedback from interviews is a crucial part of the prototyping stage. After conducting my own interview, I saw how the layout and functionality of the app could be improved in the future.
Zhenya Lindsay 8:31:43 2/14/2017
Evaluating a UI is a complicated task, and it seems like a full evaluation of an application's design is sometimes pretty much impossible. I think in real life it is usually the combination of an experienced designer and contributions from as many members of the team as possible that resolves this issue. Also, re-evaluating the interface at different stages of the application is of great help. Having guidelines, as in heuristic analysis, can help a lot, as can the cognitive walkthrough and back-of-the-envelope analysis.
Colin Schultz 8:42:00 2/14/2017
I think it is a good point that you should also evaluate your design without users. The article says it is a good idea because maybe 100 people will use the app, but only 12 people will beta test it, and these 12 people might not find any problems. I think this is more dangerous than having nobody test it, because in this scenario the testers have a false sense of security, which can come back to bite them in the butt.
Anthony Tummillo 8:54:19 2/14/2017
I was surprised by how many issues can be uncovered without the need for any users at all. It was really helpful to learn about these three methods (cognitive walkthrough, action analysis, and heuristic analysis) individually and have a focus on what type of problem each is best at uncovering. I also really liked that at the end the author provides suggestions for when in the design process each method should be used. Overall this chapter provided a lot of clarity to the concept of identifying problems with an interface and offered some very helpful models on how to do it best (both task-oriented and task-free).