- 1 Readings
- 2 Reading Critiques
- 2.1 Zihao Zhao 18:43:08 10/18/2015
- 2.2 Vineet Raghu 19:32:42 10/18/2015
- 2.3 Ameya Daphalapurkar 17:00:50 10/20/2015
- 2.4 Kent W. Nixon 18:17:03 10/20/2015
- 2.5 Manali Shimpi 18:30:16 10/20/2015
- 2.6 Samanvoy Panati 23:02:39 10/20/2015
- 2.7 Matthew Barren 23:12:26 10/20/2015
- 2.8 Zinan Zhang 23:13:12 10/20/2015
- 2.9 Priyanka Walke 0:06:20 10/21/2015
- 2.10 Sudeepthi Manukonda 1:15:54 10/21/2015
- 2.11 Chi Zhang 2:43:42 10/21/2015
- 2.12 Xinyue Huang 4:04:56 10/21/2015
- 2.13 Jesse Davis 5:05:44 10/21/2015
- 2.14 Shijia Liu 7:43:41 10/21/2015
- 2.15 Ankita Mohapatra 8:15:55 10/21/2015
- 2.16 Mingda Zhang 8:20:15 10/21/2015
- 2.17 Darshan Balakrishna Shetty 8:47:29 10/21/2015
- 2.18 Adriano Maron 8:54:14 10/21/2015
- 2.19 Mahbaneh Eshaghzadeh Torbati 8:58:24 10/21/2015
- Methodology Matters: Doing Research in the behavioral and social sciences, Joseph E. McGrath, in Readings in Human-Computer Interaction: Toward the Year 2000, pp. 152 - 169.
- Evaluating User Interface Systems Research, Olsen, D.R., Proc of UIST 2007, pp. 251-258.
Zihao Zhao 18:43:08 10/18/2015
When I first saw the title "Methodology Matters: Doing Research in the Behavioral and Social Sciences," I was quite confused about why Prof. Wang put this paper in this chapter, because it is an article about social science and seemed to have nothing to do with human-computer interaction. However, once I finished the first six pages, I realized that the methods for measuring, comparing and validating findings in the social sciences are applicable to evaluating the wide variety of user interface systems. I remember Prof. Wang once said that the bottleneck of modern computing is no longer the computer; it is the human. In that light, user interfaces nowadays are no longer just the keyboard and mouse; they include a wide range of inputs and outputs. From robotic gloves to computational helmets, from single-person interfaces to multi-person interfaces, the evaluation of user interfaces faces a big challenge. When we evaluate traditional user interfaces, we normally make three assumptions: "walk up and use," standardized tasks, and small-scale experiments (these perspectives come from the prior article). With modern user interfaces, however, all three assumptions become invalid, so borrowing validation methods from the social sciences is viable. Another reason is that humans play a much more important role in modern user interfaces. When comparing user interfaces, the base rate really matters, and sometimes we have no way to obtain certain user feedback. For example, are users willing to use a fingerprint to unlock their smartphone, or do they prefer other methods? Only a few people in the world have had contact with this technology, and a questionnaire answered by the few people who own an iPhone 5S or a Galaxy S5 cannot represent the six billion people in the world. The correlational question can also be adopted in the evaluation of user interface systems.
Consider the same fingerprint problem: we can set X as the time a user takes to unlock the phone and Y as the utilization of the smartphone. The password approach takes more time than the fingerprint approach, which may discourage users from picking up the phone for simple tasks. Randomization can also be adopted in user interface evaluation: since humans play an important role in modern interfaces, the number of factors increases tremendously, so it is really important to randomly assign the trivial factors.------------------------------------------------ The article "Evaluating User Interface Systems Research" gives a variety of alternative standards by which complex systems can be compared and evaluated. After reading this paper, I realize that the challenges of evaluating a user interface system are real, and that computer scientists have come up with some practical solutions to them. Before discussing those challenges, the paper first addresses some principles of designing a successful UI system. What impressed me most is Bill Buxton's principle that good interfaces should have low skill barriers: user interfaces should be easy to use for people who are not computer scientists, such as artists and designers. I could not agree more. Even for those of us majoring in computer science, a sophisticated user interface brings a much heavier workload than one with a low skill barrier. For GUI design I prefer C# to Java: it saves me a lot of time because all I have to do is drag widgets onto the design board, rather than learning the interface in code. The author also raises great concern about the fatal flaw fallacy, the tendency to carefully examine all of the possible ways in which a validation of a small interactive technique or behavior might be wrong, and to reject the work for any single flaw.
Before I started doing research, I always thought that research should be strictly correct and a serious matter. Yes, science itself demands correctness, but if we always test a new technique from every aspect, we will prevent some novel ideas from blooming. The hot topic of "mobile gestures" draws a lot of attention these days, and countless scientists are working on it. If every scientist pushes gesture-recognition accuracy forward a little bit, the aggregate result will definitely be wonderful.
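The correlational question sketched above (X = time to unlock, Y = smartphone utilization) can be made concrete with a minimal sketch. The numbers below are entirely invented for illustration; the only claim is how a Pearson correlation between the two variables would be computed.

```python
import random

# Hypothetical data: X = seconds to unlock the phone, Y = uses per day.
# These values are invented purely to illustrate the correlational question.
random.seed(0)
unlock_time = [random.uniform(0.5, 4.0) for _ in range(50)]
# Assume (for illustration only) that slower unlocking discourages use.
uses_per_day = [60 - 8 * t + random.gauss(0, 5) for t in unlock_time]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(unlock_time, uses_per_day)
print(f"r = {r:.2f}")  # strongly negative here: slower unlock, fewer uses
```

A strong negative r on real data would support the hypothesis that unlock time suppresses utilization, though, as McGrath stresses, correlation alone cannot establish the direction of causation.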
Vineet Raghu 19:32:42 10/18/2015
Methodology Matters: Doing Research in the Behavioral and Social Sciences This article discusses how research is conducted in the behavioral and social sciences, and in particular what types of methodology are available and which type to use in a particular experiment. The first section of the paper goes over how each method has its own advantages and limitations in a particular experimental context, and how a mixture of methods may be the best bet for overcoming the limitations of any single method. Next, the article delves into different research strategies and discusses how no one strategy can maximize the desirable goals of precision, generalizability, and realism all at the same time. The author provides four "quadrants" of design strategies and discusses individually how each one has difficulty maximizing some criteria. For example, a field study has difficulty providing precise results, but it is very natural and generalizable (as the study is done in a natural context). Afterwards, statistical validation techniques are presented, giving researchers a reliable way to try to determine cause and effect from an experiment. First, the author discusses correlation and randomization, and how to control for effects from confounding variables in an experiment. Then he discusses significance, and how probability values can help us determine how reliable our results are (the probability that our observed differences occurred due to chance alone). Finally, the chapter gives different ways to measure variables from experimental participants, such as self-reports, observations, and archival records, along with the strengths and weaknesses of these measures. Overall, the major takeaway from this chapter is that each component of an experimental design has many potential choices, and each choice has its own set of benefits and drawbacks that depend not only upon the choice itself, but also on the experimental context.
To produce high-quality experimental results, experiments need to be redone and validated using a variety of methodologies. -------------------------------------------------------------------------------------------------------- Evaluating User Interface Systems Research This paper presents new methodologies for evaluating user interface systems beyond traditional desktop systems. It was written at a time when desktop systems were the norm, despite the fact that mobile devices were becoming more and more useful. First, the author discusses reasons that UI systems development is valuable, including scalability, lower skill barriers for users, and common infrastructure (such as a pen/stylus acting as a mouse). Next, he explains why usability studies are difficult for UI systems. The first reason is that it is exceedingly difficult to compare the usability of a novel system with that of a previously developed system, because users are already familiar with the previous one. In addition, many usability tests are too costly to provide statistically significant results. The most significant contribution the author provides in terms of evaluation methods is the formalism of the Situations, Tasks, and Users (STU) context. Each of the tenets of UI evaluation he mentions (novelty, importance, generality, flexibility, etc.) is given in terms of this STU context. For example, for the generality tenet, the author states that a new UI design is general if it can be applied to several populations Ui with several tasks Ti. Since this is very difficult to measure directly, he adds that a UI is general if it can be applied across various STU contexts that were previously unsolved. Overall, the paper presents some very good points about evaluating UI designs, though it is difficult to evaluate the paper itself with present knowledge, as it was written before any sort of mobile revolution began.
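The meaning of a p-value described above (the probability that the observed differences occurred due to chance alone) can be illustrated with a small permutation test. The task times below are invented purely for illustration; only the procedure, pooling the two groups and re-splitting them at random, reflects the idea in the chapter.

```python
import random

# Toy illustration of a p-value: how often does a chance split of the
# pooled data produce a difference at least as large as the observed one?
random.seed(1)
group_a = [12.1, 11.4, 13.0, 12.7, 11.9, 12.5]  # e.g. task times, condition A
group_b = [13.2, 13.8, 12.9, 14.1, 13.5, 13.0]  # condition B (invented data)

observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

pooled = group_a + group_b
n_a = len(group_a)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)  # a random relabeling of the participants
    diff = abs(sum(pooled[:n_a]) / n_a
               - sum(pooled[n_a:]) / (len(pooled) - n_a))
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value means that chance alone rarely reproduces a gap this large, which is exactly the sense of "reliability" the chapter attaches to significance testing.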
Ameya Daphalapurkar 17:00:50 10/20/2015
The paper titled ‘Evaluating User Interface Systems Research’ explores the problems with evaluating systems work, since simple usability testing is not applicable everywhere. Architectural invention is now on the decline due to the dominance of existing operating systems, a lack of toolkit design skills, and the unavailability of appropriate evaluation criteria. The paper basically helps in understanding techniques for evaluating interfaces so that constant progress can be made. User interface systems add a lot of value by reducing development viscosity and offering the least resistance to good solutions, as explained by the example of what Apple did in styling its manuals rather than setting a standard widget set; they also lower the skill barrier, so that anyone can participate rather than only users with specific skills. The paper also covers evaluation errors, such as the usability trap, which rests on the ‘walk up and use’ assumption that anyone should be able to work hands-on with the system without training; the standardized task assumption; and the scale of the problem. The fatal flaw fallacy and legacy code are also important. Conclusively, an attempt can be made to simplify usability systems and map controlled structures, thus renewing out-of-favor ideas. ******************************* The paper titled ‘METHODOLOGY MATTERS: DOING RESEARCH IN THE BEHAVIORAL and SOCIAL SCIENCES’ explains the ways of doing research in the behavioral sciences. The basic features involved in research are ideas, procedures and techniques, and content. The author maps them onto the domains named substantive, conceptual and methodological. The substantive domain involves the states and actions of human systems. The methodological domain deals with the ways in which a researcher can treat particular features.
Features can also be manipulated through various modes of treatment: giving instructions, imposing constraints, selecting materials, giving feedback, and using experimental confederates. The paper also defines methods and talks about the various intricate research methods, which can be summed up as enabling but also limiting, and valuable but limited, so that multiple methods should be selected in combination. Beyond the methods, the paper also goes on to elaborate the various strategies involved. The comparison techniques are summed up as assessing associations and differences, randomization, and true experiments. Conclusively, all results depend on the methods used, no method maximizes all desirable features, and studies should be interpreted in relation to other evidence.
Kent W. Nixon 18:17:03 10/20/2015
Methodology Matters: Doing Research in the behavioral and social sciences This reading was a book chapter excerpt dealing with research in areas related to human behavior, with CHI being one of those areas. The chapter is essentially a very long review of a number of topics previously talked about in class, such as methods, types of trials/tests (the wheel figure shown previously in class), selecting and assigning values to variables, assessing statistical significance, internal and external validity, and so on. A particularly interesting section was dedicated to the accuracy of different forms of measures, but it essentially echoed the "everything is best at something and worst at something else" mantra that has already been covered in class. While this isn't directly related to any of my research, I suppose the content would be useful in planning any future user studies. That being said, the vast majority of this reading was very dry and a review of things already discussed. It was also presented in a very "fluffy" way, without a lot of quantitative data to back up many of the statements. Evaluating User Interface Systems Research This paper, written in 2007, claims that much of the user interface research occurring at the time was being restricted by the stagnation of the desktop platform, as well as by the goal of creating experiments that were easy to publish rather than genuinely impactful. The author states that there are three main problems facing such research: the usability trap, the fatal flaw fallacy, and the importance of legacy code. He suggests that these problems can be approached by evaluating the effectiveness of contributions more properly. For example, does the STU of the work apply to a large number of people and/or situations? Is the work generalizable? Does it scale well? For me, this work essentially provided a list of the various metrics that could be used to measure success in fields related to HCI.
My research is generally lower-level hardware stuff, so the only metric we use is power. It is interesting to see what type of metrics are possible outside of that.
Manali Shimpi 18:30:16 10/20/2015
METHODOLOGY MATTERS: DOING RESEARCH IN THE BEHAVIORAL and SOCIAL SCIENCES: The paper talks about the tactics, strategies and operations of research in the social and behavioral sciences. The research process involves three sets, which are content, ideas and techniques. The paper explains three domains: the substantive domain, the conceptual domain and the methodological domain. The substantive domain consists of phenomena and patterns of those phenomena, such as the states and actions of human systems. The conceptual domain involves the properties of the states and actions of human systems that are the focus of study. In the methodological domain, methods are referred to as modes of treatment, which include measuring some feature and techniques for manipulating some feature. Modes of treatment of variables also include a set of techniques for controlling the impact of various extraneous features of the situation, including techniques for experimental control and statistical control. Comparison techniques are methods that allow the researcher to assess relations among the values of two or more features of the human system under study. Even though all methods are valuable, all have weaknesses or limitations, and it is possible to offset the different weaknesses of various methods by using multiple methods. The techniques for manipulating variables are selection, direct intervention and induction. The conclusion of the paper is that strategies, designs, and methods together constitute a powerful technology for gaining information about phenomena and the relations among them. EVALUATING USER INTERFACE SYSTEMS RESEARCH: The paper addresses the question of how we should evaluate new user interface systems so that true progress is made. The author first states the importance of research in user interfaces by noting that many people live and work across many platforms and interact with many people, yet our UI systems architectures support none of this.
He explains how misapplied evaluation can damage systems research, covering the usability trap, the fatal flaw fallacy and legacy code. To reduce the viscosity of a solution, the paper explains three techniques: flexibility, expressive leverage and expressive match. The effectiveness of tools increases when they support combinations of more basic building blocks. If a study answers the question of whether progress has been made, then work on the user interface system can keep going.
Samanvoy Panati 23:02:39 10/20/2015
Critique 1: Methodology Matters: Doing Research in the Behavioral and Social Sciences This chapter illustrates different research methods and their characteristics. The fundamental principles of the research process are presented, followed by a design-space categorization of research methods. Research involves bringing together three things: content, ideas and techniques. Three different domains are illustrated: the substantive domain, from which we draw content that is worthy of study; the conceptual domain, from which we draw ideas that give meaning to our results; and the methodological domain, from which we draw techniques that are useful for conducting the research. The author mentions three desirable features when gathering evidence for a study: generalizability over the relevant populations, precision of measurement, and realism, the concern being that a contrived research setting may not translate convincingly into a real-world conclusion. The author makes concrete points on randomness and statistical validity in experimentation. This paper illustrates many research techniques and categorizes them such that each might be best for something and worst for something else. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Critique 2: Evaluating User Interface Systems Research This paper deals with several approaches for comparing user interface toolkits with one another. Assumptions made about systems in the past may not be valid in the future, because emerging technologies may overcome many of the problems that limited those systems. The author discusses the reasons for research on UI toolkits.
UI systems architecture is important because it reduces development viscosity, offers the least resistance to good solutions, lowers the skill barrier and provides power in common infrastructure. The author makes a very good point when he mentions that many experimentalists limit themselves and their research to areas that will do well under most evaluation strategies; related to this is the "fatal flaw fallacy," the reviewers' tendency to reject systems work for any single flaw. The concept of STU (Situations, Tasks and Users) is very interesting. The author states that systems should be evaluated based on the goals of the users in terms of acceptance, generality and importance. The paper makes some very insightful points, but in places it needs better organization. The author also states the shortcomings of UI toolkit evaluation and how these errors can be avoided in the future.
Matthew Barren 23:12:26 10/20/2015
Summary of Methodology Matters: Doing Research in the Behavioral and Social Sciences: Joseph McGrath writes about the components of performing successful research in the behavioral and social science fields. In doing so, he touches on the major components of research such as design, the major domains, determining experimentation, and randomization. McGrath clearly lays out the parts of research. One particularly interesting component is choosing the appropriate quadrant in which to conduct a study. What is being researched, and what kind of results are sought, heavily factor into the choice of experiment. For example, if an individual is studying birds, then in order to get results representative of a population the researcher will most likely conduct a field study or field experiment. In other circumstances a researcher may be seeking user feedback, and sometimes the quantity of interest is hard to quantify, such as an opinion or an emotion; in this case, the researcher would turn to a judgment study or sample survey. McGrath also examines randomization and the idea that no sample is automatically a random distribution. Researchers have ways of identifying and combating non-randomness. One such method is to take a large sample and split it into smaller subpopulations. The researcher then tests the subpopulations for significant results, and additionally compares the results across the random subpopulations to see whether the means and standard deviations are similar. This method helps with identifying potential biases in a population. Another interesting point examined by McGrath is construct validity. This form of validity looks at the concept and logic of the study to examine the relationships explored; it measures how well the elements and relations work together logically. It seems to be a weak form of validity because it is more or less face-value validity: it examines the logical idea of the study, not its actual relation to the real world.
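The split-sample check described above can be sketched in a few lines. The sample, the number of subpopulations, and the sanity threshold below are all invented for illustration; the point is only the mechanic of randomly partitioning one large sample and comparing the subpopulation statistics.

```python
import random
import statistics

# Draw one large (synthetic) sample, randomly partition it into
# subpopulations, and compare their means and standard deviations.
random.seed(42)
sample = [random.gauss(100, 15) for _ in range(1000)]

random.shuffle(sample)          # random allocation into subpopulations
k = 4
subpops = [sample[i::k] for i in range(k)]

for i, sub in enumerate(subpops):
    print(f"subpop {i}: mean={statistics.mean(sub):.1f}, "
          f"sd={statistics.stdev(sub):.1f}")

# If the allocation was truly random, the subpopulation means should sit
# close together; a large spread would hint at a biased split.
means = [statistics.mean(s) for s in subpops]
spread = max(means) - min(means)
print(f"spread of means: {spread:.2f}")
```

On real data, a spread far larger than the sampling error of the subgroup means would be the warning sign of bias that McGrath describes.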
Summary of Evaluating User Interface Systems Research: Dan Olsen describes a set of categories and features that can be used to evaluate interface systems research. He also identifies general points that make interface research effective, such as importance and generality. Evaluating interface systems research is key for anyone looking to examine a new HCI platform. Before investing time in developing a new interface system, researchers can think about the importance and uniqueness of the problem, the generality of the system, empowering new design participants, and power in combination. Generality is a key component that is often forgotten. It raises the question: is it better to design something specific, or something portable to many different applications? Clearly, most researchers would like their design or tools to be general, because that extends the impacted areas and gives the research a wider reach. If a set of tools becomes more general, it can serve many different applications by leveraging the flexibility of the tool for a given situation. Another interesting idea is power in combination. Designing tools and interfaces that work in combination with smaller building blocks is often key for design. The tool created may utilize other smaller components that allow it to operate more effectively, or it may be composed of many small solutions that can be used in different ways to produce varying results. One way of looking at power in combination is object-oriented programming: this modular form of programming allows for greater code reuse with far fewer lines. Programs can utilize different code segments simultaneously, and the code is not duplicated, which makes the overall tool more efficient.
Zinan Zhang 23:13:12 10/20/2015
1. For Evaluating User Interface Systems Research--------- This paper mainly talks about how to evaluate user interface systems. In the paper, the author gives some new criteria for evaluating new user interface systems in order to truly empower the development of UI. I used to think that creating new criteria was just a waste of time. However, after reading the paper, I think it is really necessary now. Without new criteria, people will be satisfied with the current user interface systems forever. The result would be no progress in this field, and that is not what humans expect. Among all the new criteria given by the author, the most important, I think, is "problem not previously solved," because the ultimate purpose of presenting a new criterion is not to impede the development of UI. For example, the move from wired to wireless mice was a great development. The original mouse had to send information to the computer through an electric wire. As technology developed, it became inconvenient to use a laptop with a wired mouse: the wire is not easy to carry around and is sometimes not long enough. As a result, people invented the wireless mouse, solving the convenience problem. Obviously, interactive equipment has developed greatly. ----------------------------------------------------------------- 2. For Methodology Matters----------- This paper mainly talks about how to do research in the behavioral and social sciences, and the author gives some criteria to help the reader do research in a proper way. Actually, doing research is not easy, and running an experiment is even harder. People have to set different variables and constrain some conditions when doing an experiment. For example, in order to prove that plants die without water, we have to use the same water to test different kinds of plants.
Using just one kind of plant cannot prove that most plants cannot live without water, and at the same time the water has to be the same: you cannot give sea water to one plant and normal water to another. As we all know, doing experiments is just one part of doing research, so we can see how important it is to do research in a correct way.
Priyanka Walke 0:06:20 10/21/2015
Reading Critique on Methodology Matters: Doing Research in the Behavioural and Social Sciences This paper is about the various research methods available in the field of behavioural and social sciences, as well as their advantages and disadvantages. It also provides a comprehensive study of many research techniques, along with the things that need to be kept in mind while collecting statistical data. Research in the behavioural and social sciences involves a combination of content, ideas and techniques. Here content exemplifies the behaviour that you want to study and that is worthy of your attention (the substantive domain); ideas exemplify the attitudes that give meaning to our results (the conceptual domain); and techniques exemplify the practical procedures for assessment (the methodological domain), which is the area of interest here. A variety of research strategies are discussed with respect to three important criteria, namely generalizability, precision and realism. The paper states that the three criteria cannot be maximized at the same time, as maximizing one of them reduces either one or both of the others. Figure 2, 'The strategy circumplex', clearly depicts this problem, giving an overview of the four quadrants, i.e. the techniques for research. These techniques are then the point of discussion for the rest of the paper; they involve field, experimental, respondent and theoretical strategies, each of which has two sub-parts. None of the strategies alone is enough to study the required behaviour, as they all have their individual strengths and weaknesses. Therefore it is necessary to figure out a fruitful combination of these strategies in order to carry out an evaluation that provides valid and unbiased results. When taking the comparison techniques into consideration, the correlational variables for an experiment must be known for sure, along with the base rates.
The forms of validity for such correlations include internal, construct and external validity, along with the threats to validity. In order to maintain the generalizability of the experiment, and to account for unknown variable correlations that may confound our results, it is necessary to use randomness in both the selection and the allocation process. Finally, all the types of measures are taken into consideration and explored. To summarize, this paper provides a lot of content in terms of the choice of research techniques and where they fit into the design space. Reading Critique on Evaluating User Interface Systems Research This paper deals with different approaches for evaluating the complex user interface systems of the future that move away from traditional GUI systems. Even though the suggested approaches are not new, they are not favoured currently because the windowing systems in use today are highly stable. There was continuous research in UI development up until the arrival of the GUI window-based systems brought about by Windows, Linux and Macintosh. These moved on from the command-line interface, making computers much more natural for non-programming users. The window, mouse and keyboard trio is widely used today and has led to a variety of innovations. These have become the landmark interfaces of today's world, and a strong force will be required to move ahead to a newer level of interfaces, as people have become extremely comfortable with them. UI systems architecture is needed because it enables iterative development, offers the least resistance to good solutions, lowers the skill barrier and provides common infrastructure. The paper mentions innovations in new UI techniques that are evolving from the current ones into something far better.
These kinds of systems are complex, and hence the existing techniques are definitely not enough to evaluate them. Thus we can summarize that the two readings for today both create awareness and a level of understanding about the variety of research methodologies and evaluation techniques.
Sudeepthi Manukonda 1:15:54 10/21/2015
Methodology Matters: Doing Research in the Behavioural and Social Sciences by Joseph E. McGrath is an interesting paper that talks about the systematic use of a set of theoretical and empirical tools. Research in the social sciences is used to increase understanding of phenomena or events, like the states and actions of human systems or the products of those systems. Doing research involves bringing together a set of three things: content of interest, ideas that give meaning to the content, and techniques to study the ideas and content. Different domains facilitate this, such as the substantive domain and the methodological domain. In the substantive domain, units and elements are called phenomena, and the relations among them are called patterns. The conceptual domain talks about the elements of interest, the properties of states and actions. The methodological domain includes modes of treatment, the different ways in which a researcher can deal with a particular feature of the system she is researching. This paper also speaks about research strategies. Evidence in the social sciences always involves somebody doing something in some situation, so there are always three questions that can be asked: who, what, and when and where. Who refers to the actors, what refers to the behaviour, and the last refers to the context. Experimental strategies can be explained in four quadrants: quadrant one covers the respondent strategies, quadrant two the theoretical strategies, quadrant three the field studies and quadrant four the experimental strategies. ————————————————— Evaluating User Interface Systems Research by Dan Olsen talks about how we should evaluate new user interfaces so that true progress can be made. He first puts forth some questions and tries to answer them through the course of the paper. He says that UI research is a force for change, and that value is added by UI systems architecture.
Research on UI systems reduces development viscosity, offers the least resistance to good solutions, and lowers skill barriers. There are evaluation errors like the usability trap, the fatal flaw fallacy, and the legacy code requirement. Evaluating the effectiveness of a system is very important for checking its performance, and tools are employed for this purpose. This opens a path to new ideas, innovations, and modifications, and empowers new design participants. It also acts as power in combination, that is, inductive combination, simplifying interconnection, and ease of combination.
Chi Zhang 2:43:42 10/21/2015
Critique of “Methodology Matters: Doing Research in the behavioral and social sciences” by Chi Zhang. This paper is an introduction to the tools used for psychology research. As discussed in the paper, doing research involves three domains: the substantive domain, the conceptual domain, and the methodological domain. I learned from the paper that any set of results is limited because results depend on methods; that tradeoffs and dilemmas are involved; and that each study must be interpreted in relation to other evidence bearing on the same questions. This is a very good introductory paper: it introduces many methodological views on doing psychology research and gives very insightful comments on them. ---------------------------------------------------------------- Critique of “Evaluating User Interface Systems Research” by Chi Zhang. This paper mainly talks about the methods we can use to evaluate new user interfaces. In it, the author presents the problems with evaluating systems work and a set of criteria for evaluating new UI systems. As the paper notes, many usability experiments are built on three key assumptions: walk up and use, the standardized task assumption, and the scale of the problem. Legacy code can also be a barrier to new systems research, since most UI research still uses it. This is a very good paper: it explains how to choose methods for evaluating a new user interface, and it gives us a very good perspective for deeply understanding the process of user interface evaluation.
Xinyue Huang 4:04:56 10/21/2015
Evaluating User Interface Systems Research The paper explored the problems with evaluating systems work and presented a set of criteria for evaluating new UI systems work. The paper gave several reasons why UI systems research is needed. The first is the forces for change. The second is the value added by UI systems architecture, which includes reducing development viscosity, offering the least resistance to good solutions, lowering skill barriers, providing power in common infrastructure, and enabling scale. Evaluation errors include the usability trap, the fatal flaw fallacy, and legacy code. There are several common measures of usability, such as time to complete a standard task and time to reach a certain level of proficiency; the usability trap built on these measures has minimized the amount of good systems research in the CHI community. The fatal flaw fallacy arises because, while it is good practice to carefully examine all of the possible ways in which a technique or its validation might be in error, no research system can survive an evaluation focused only on what it does not do. If a toolkit can run legacy applications while providing some new advance, that is a good thing; if a new architecture necessitates rewriting applications, that is just the price of progress. The legacy code requirement is a barrier to progress. Evaluating the effectiveness of systems and tools involves STU (situations, tasks, and users), importance, solving a problem not previously solved, and generality. Reducing solution viscosity also includes aspects like flexibility, expressive leverage, and expressive match. Besides these, evaluation also includes empowering new design participants and power in combination, which covers inductive combination, simplifying interconnection, and ease of combination. A UI tool is flexible if it is possible to make rapid design changes that can then be evaluated by users. Expressive leverage is where a designer can accomplish more by expressing less. The dominant cost of any design process is the making, expression, and evaluation of choices.
Expressive match is an estimate of how close the means for expressing design choices are to the problem being solved. In the final part of the paper, the author also cautions that we should avoid the trap of only creating what a usability test can measure, and we must likewise avoid the trap of requiring new systems to meet all of the evaluations at once. Methodology matters: doing research in the behavioral and social sciences The paper explained that doing research simply means the systematic use of some set of theoretical and empirical tools to try to increase our understanding of some set of phenomena or events. In the social and behavioral sciences, the phenomena of interest involve the states and actions of human systems, of individuals, groups, organizations, and larger social entities, and the by-products of those actions. The chapter first introduced some basic features of the research process: some content that is of interest, some ideas that give meaning to that content, and some techniques or procedures by means of which those ideas and content can be studied. These sets of things can be referred to more formally as three distinct but interrelated domains: the substantive domain, the conceptual domain, and the methodological domain. In the substantive domain, units or elements are called phenomena. For the social and behavioral sciences, the elements of interest in the conceptual domain are properties of the states and actions of those human systems that are the focus of study, properties of “actors behaving toward objects in context”. In the methodological domain, the elements are methods. Social psychologists have tried to manipulate features of the systems they study by a number of techniques, such as giving instructions, imposing constraints, giving feedback, and using experimental confederates. There are also opportunities and limitations for research methods. For example, methods enable but also limit evidence; all methods are valuable, but all have weaknesses or limitations; we can offset the different weaknesses of various methods by using multiple methods; and we can choose such multiple methods so that they have patterned diversity. The paper also introduced research strategies for choosing a setting for a study, which include the field strategies, the experimental strategies, the respondent strategies, and the theoretical strategies.
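The usability measures mentioned in the critique above (time to complete a standard task, time to reach proficiency) boil down to simple summary statistics over participant timings. Below is a minimal sketch of such a comparison; the tool names and all timing values are invented purely for illustration:

```python
# Hypothetical task-completion times (seconds) for ten participants
# using an existing tool vs. a new prototype. The data is made up;
# a real study would also need a significance test, not just means.
from statistics import mean, stdev

old_tool = [48.2, 51.7, 45.9, 60.3, 49.8, 55.1, 47.4, 52.6, 58.0, 50.5]
new_tool = [41.5, 44.0, 39.8, 47.2, 43.1, 45.6, 40.9, 42.7, 46.3, 44.8]

def summarize(times):
    """Return the mean and standard deviation of a list of task times."""
    return mean(times), stdev(times)

old_mean, old_sd = summarize(old_tool)
new_mean, new_sd = summarize(new_tool)
print(f"old tool: {old_mean:.1f}s (sd {old_sd:.1f})")
print(f"new tool: {new_mean:.1f}s (sd {new_sd:.1f})")
```

This is exactly the kind of number the "usability trap" warns about: a lower mean says little unless the walk-up-and-use, standardized-task, and small-scale assumptions actually hold for the system being tested.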
Jesse Davis 5:05:44 10/21/2015
Methodology Matters: Doing Research in the Behavioral and Social Sciences This excerpt is a very in-depth reading that explains methodology in relation to research techniques. It focuses on the importance of research methods, their limitations, and how we go about using the knowledge gained from them. The beginning of the paper elaborates on 3 basic yet vital features for any research process: content (the substantive domain, i.e. worthy content), ideas (the conceptual domain, i.e. ideas that hopefully give meaning to the results), and techniques (methodologies that are useful when conducting the research). While the first two are important, the excerpt focuses on the techniques and methodologies, because there are so many different ways to work with the first two concepts that it helps to have baseline methods to work with and build upon. Their summary of methods and research techniques explains the idea that research techniques provide knowledge along with the boundaries of that knowledge; it is at the bottom of page 4 (the a, b, c, d list). The excerpt goes on to give some research strategies by dividing the types of research into a pie chart, grouping sections, and then categorizing them by generalizability, precision, and realism (the types are: laboratory experiment, judgment study, sample survey, formal theory, computer simulation, field study, field experiment, and experimental simulation). Tips are given on when to use which strategy, and then some techniques are analyzed by their associations and differences. Near the end, the importance of validity in experimentation is stressed, and measurement techniques as well as variable manipulation techniques are defined. This excerpt contains a wealth of information with regard to how my partner and I will be conducting our experiments for our final project, and it would be a good read for anyone interested in doing research.
Evaluating User Interface Systems Research This paper is an overview and analysis of how user interface systems research could be evaluated. It discusses the barriers that exist with current evaluation models and the importance of developing a new way to evaluate UIs, because of the large number of emerging technologies (phones, PDAs, etc. at the time of writing; copious amounts more at the time of reading). In addition to user interface evaluation, the paper also notes the importance of the tools used for UI design, and how expressive those tools are in comparison with how expressive the end-design UI should be. One of the most interesting parts of this paper, in my opinion, was the topic of empowering new design participants, in which the writer stresses the importance of identifying target populations for whom the existing tools are not appropriate and/or could be dramatically improved upon. This goes back to what we mentioned earlier in the class, with users wanting to use a setup they are comfortable with, even though it might not be the best solution. All in all, a good paper; it could be organized slightly differently, but it was still easy to read.
Shijia Liu 7:43:41 10/21/2015
Methodology Matters: Doing Research In The Behavioral And Social Sciences: First, this paper talked about several significant characteristics of the procedure of doing research; then the author discussed a variety of strategies that should be used under certain circumstances or conditions. Furthermore, it discussed appropriate aspects of study design and different forms of validity. In addition, it talked about how to control variables, and the strengths and weaknesses of several styles of measures. At last, the author shows us some existing techniques for manipulating variables.================ Evaluating User Interface Systems Research: At the beginning of this paper, the author shows us the main reason for doing this research on UI systems: we truly need new kinds of UI systems, since the old models can no longer accommodate current hardware. Then it tells us the importance of avoiding evaluation errors such as the usability trap. At the same level, the trap of requiring new systems to meet every evaluation criterion is also important and should be kept in mind. Furthermore, the author wants us to know the appropriate direction for evaluating the effectiveness of systems and tools.
Ankita Mohapatra 8:15:55 10/21/2015
Evaluating User Interface Systems Research When evaluating complex systems, simple usability testing is not adequate. In this paper, a set of criteria for evaluating new UI systems work is presented, and problems with evaluating systems work are explored. There are three main problems regarding the decline in new systems ideas, among which the author addresses the question of "How should we evaluate new user interface systems so that true progress is being made?". Simple metrics can produce simplistic progress that is not necessarily meaningful, hence the author brings up several alternative standards by which complex systems can be compared and evaluated. The reason to study UI systems work is that some assumptions, such as the need to save every byte of memory, no longer hold. Trying to fit newer input technologies into old models results in information loss, which is undesirable. UI systems can bring a lot of good things into development: they can (1) reduce development viscosity, (2) provide the least resistance to good solutions, (3) lower skill barriers, (4) provide power in common infrastructure, and (5) enable scale. Some evaluation methods are misapplied, damaging the field. The author discusses three kinds: (1) the usability trap, (2) the fatal flaw fallacy, and (3) legacy code. For (1) the usability trap, researchers should not assume all potential users have minimal training. Neither should they make the standardized task assumption, which treats a task as inherently low in variability among users with different expertise. The third faulty assumption is that the scale of the problem is relatively low. When testing the usability of interactive tools and architectures, the population should be equally ignorant of the new and the old systems. The standardized task and scale-of-the-problem assumptions also affect the testing of UI toolkits. For (2) the fatal flaw fallacy, a system should not be rejected merely because some fatal flaw can be found.
No research system will ever pass an evaluation that focuses on "what does it not do". For (3) legacy code, old architectures should not be barriers to new systems. A dozen evaluation metrics are given in the paper. Some are quite similar to previous ideas, e.g. "Expressive Leverage" is similar to "Expressiveness"/"Gulf of Execution", and "Expressive Match" is similar to "Effectiveness"/"Gulf of Evaluation". Some metrics are quite important but often overlooked, such as "Simplifying Interconnection" and "Ease of Combination". As researchers, we need to keep such fallacies and evaluation metrics in mind when we are developing UI toolkits. =================================== Methodology Matters: Doing Research in the Behavioral and Social Sciences A distinct difference between HCI research and other computer science fields is that HCI studies not only the computer but also the human subjects. It involves certain parts of social and behavioral science, hence this book chapter is quite important for HCI researchers. This chapter presents some of the tools with which researchers in the social and behavioral sciences go about "doing" research, and talks about strategy, tactics, and operations issues, as well as inherent limits and potential strengths. Content, ideas, and techniques are always involved in the behavioral and social sciences. More formally, they form three distinct domains: the substantive, conceptual, and methodological domains. The substantive domain consists of phenomena, the states and actions of human systems and their patterns. The conceptual domain consists of properties of states and actions, such as "attitude", "cohesiveness", etc. The methodological domain consists of methods, which include techniques for measuring, manipulating, and controlling the impact of some feature. Methods as tools have their own opportunities and limitations. To summarize, methods enable but also limit evidence. All methods are valuable but come with weaknesses/limitations.
You can offset the different weaknesses of various methods by using multiple methods, and you can choose such multiple methods so that they have patterned diversity ("best for something, worse for something else"). Who, what, and where (formally: actor, behavior, and context) are the three facets researchers care about. When gathering a batch of research evidence, maximizing generalizability, precision, and realism is desirable. One interesting thing about this part is that the author uses a diagram to demonstrate the strategy circumplex, which gives a clear overview of how the different criteria are balanced. The author then explains each quadrant in detail. Each strategy has certain inherent weaknesses, although each also has certain potential strengths. Since all strategies are flawed in different ways, gaining knowledge with confidence requires that more than one strategy, carefully selected so that they complement each other in their strengths and weaknesses, be used in relation to any given problem. The author also talks about statistical inference in this chapter. In most cases, it requires the cases in the study to be a "random sample" of the population to which the results apply. If the samples are not random, one cannot run statistical inference on them, because the results will not be valid. Biased sampling methods such as convenience sampling do not truly reflect the value of the study. There are several validities of findings: internal validity, construct validity, and external validity. The author also suggests potential measures and manipulation techniques such as self-reports, observations, archival records, and trace measures, with their strengths and weaknesses discussed accordingly. To manipulate variables, techniques such as selection, direct intervention, and inductions can be applied to control the variables in experiments. All in all, we need to keep these points in mind: (1) Results depend on methods, and all methods have limitations; hence, any set of results is limited. (2) It is not possible to maximize all desirable features of method in any one study; tradeoffs and dilemmas are involved. (3) Each study must be interpreted in relation to other evidence bearing on the same questions.
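The random-sampling point made above can be shown with a toy example. This is only a sketch with invented population values: a convenience sample (the first few easily reached cases) can wildly misestimate a population mean, while a random sample at least has a defensible basis for statistical inference.

```python
# Toy illustration of convenience vs. random sampling.
# The "population" is invented: a small cluster of high scores
# happens to sit at the front (the people easiest to reach).
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

population = [90, 92, 88, 95, 91] + [40 + i % 10 for i in range(95)]

convenience = population[:5]               # whoever was easiest to reach
random_sample = random.sample(population, 20)  # simple random sample

print("population mean:   ", round(mean(population), 2))
print("convenience mean:  ", round(mean(convenience), 2))
print("random-sample mean:", round(mean(random_sample), 2))
```

The convenience sample here is off by more than 40 points, which is exactly why McGrath (and standard statistics) says inference from a non-random sample is not valid.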
Mingda Zhang 8:20:15 10/21/2015
Methodology Matters: Doing Research in the Behavioral and Social Sciences In this paper, basic ideas for conducting psychological and similar research are carefully reviewed. The author provides a number of methods and discusses the differences between them, and also proposes the three critical components of behavioral and social science research, namely the substantive domain, the conceptual domain, and the methodological domain. Emphasis falls on the last of these, the methods. Personally speaking, I could not agree more with the author that an inherently flawed experiment leads nowhere except to mistakes. Therefore, attention must be paid when designing the experiment; no amount of data analysis can make up for a badly designed experiment. However, as we always mention in class, everything is good for something and bad for something else. Although no perfect method exists as a silver bullet for every situation, researchers can always choose multiple methods to make up for the flaws. Three criteria are important for researchers, namely generalizability, precision, and realism. However, these criteria sometimes contradict each other, so trade-offs become necessary. This paper emphasizes the significance of choosing validated experimental methods, and sometimes multiple experiments may be needed. Evaluating User Interface Systems Research In this paper, the author introduces a new approach for evaluating large and complex user interfaces. According to the author, traditional UI papers were not evaluated correctly. Specifically, previous authors mostly compared the task completion time of old systems with their new systems, and from that comparison simply claimed the advantages of their new design. According to the author of this paper, complex UI systems are not suited to inexperienced newcomers and are not designed as easy-to-start tools.
Therefore, recruiting participants for usability tests is a trickier task. Also, the more complex a system is, the more time-consuming user testing becomes, which leads to more expensive evaluation. Therefore, the author created a criterion called STU, for situations, tasks, and users, and uses these ideas to evaluate a UI system. Specifically, situations stand for where and how often the system would be used, tasks represent the importance of the tasks involved, and users stand for the importance and size of the user population and related questions. Another important concept proposed by the author is reducing solution viscosity; a good tool can help designers quickly eliminate some of the possibilities in the design space. From my perspective, this paper could be regarded as an important complement to the ideas in the previous paper: an objective and rational evaluation can in turn help validate the methods used in the experiment.
Darshan Balakrishna Shetty 8:47:29 10/21/2015
Methodology matters: Doing research in the behavioral and social sciences: The author provides us with a healthy discussion of the research study process and introduces a design space of 8 different research strategies. The first step is to understand the research problem. After that, we consider which type of research question we are dealing with by comparing it against the 3 types of research questions. Many measurements, as well as ways to manipulate variables, are also mentioned. From the article, it is really hard to conduct a "true experiment" where all factors are considered. However, the article provides some suggestions for making a study more objective and credible, from conducting the experiment to interpreting the results, so that we do not come to an invalid conclusion about the research questions at hand. --------------------------------------------------------------------------------------------- Evaluating user interface systems research: The paper raises plenty of questions to address new requirements arising from new technology, especially touch-based interaction. The author points out that usability testing has flaws and is not the only way to evaluate an interactive UI system. The interesting thing about the paper is that the author not only shows us how to evaluate a UI system, but also how to conduct a UI system study and how to write a paper about a new UI system. All the main points mentioned in this paper are questions to be addressed when writing a paper or conducting a study about UI systems. Moreover, many points can be generalized to other research areas. I found the STU (Situations, Tasks and Users) criteria very interesting; these points need to be addressed in almost every paper. On the other hand, there are some shortcomings to mention. The author raises an interesting question about the need for a new evaluation system for new, off-the-desktop machines.
However, he does not answer this question directly or completely in the paper, and all the arguments mentioned are also true for a desktop or an old pointer-based GUI system. Last but not least, regarding the Importance section, the author may want to argue why we need a new evaluation system for new technology: what has not been done correctly with the current evaluation system, and what will we gain if we have new interaction systems based on new technologies? Otherwise, "people will not discard a familiar tool and its associated expertise for a 1% improvement".
Adriano Maron 8:54:14 10/21/2015
Methodology Matters: Doing Research in the behavioral and social sciences: This chapter discusses the limits and strengths of various research techniques, providing 8 different research strategies to be chosen based on the domain of the research in question. Despite being explained in the context of the behavioral sciences, the insights about research methodology can be applied in other domains, such as HCI. The main takeaway from this paper is that the methodology used in evaluation and testing will influence and limit the results. Therefore, a careful analysis of the problem in question is necessary in order to identify the method that will provide the most accurate evaluation of the most relevant aspects of the problem. ===================================================== Evaluating User Interface Systems Research: This paper discusses several characteristics that could be used for the evaluation and comparison of user interfaces. The author first presents some arguments about why usability-based evaluation is imprecise and not suitable for most realistic cases. Next, several characteristics of systems and tools are cited as possible evaluation points for a UI system. Among them, "STU" and "Problem not previously solved" are the most compelling and the easiest to evaluate. For STU, the domain of the system is very specific, so testing can be performed in a limited set of contexts, reducing costs and the likelihood of confounding variables. As for "Problem not previously solved", the claim is strong if the target population and task are relevant. This paper provides some interesting insights about evaluation; however, it does not discuss any use case where they could be applied.
Mahbaneh Eshaghzadeh Torbati 8:58:24 10/21/2015
Critique of Evaluating User Interface System Research This paper basically talks about how to evaluate complex systems and new UI systems. The paper also points out some evaluation traps that people may fall into. Evaluating complex systems is hard, and so is evaluating new UI systems. As the author notes, computing is changing due to technological development: new input methods have appeared, and existing technology has improved. These changes motivate improving the way systems are evaluated. The author first points out three errors that people make when doing evaluation: the usability trap, the fatal flaw fallacy, and legacy code. The usability trap is about misapplying usability evaluation to systems. Looking for fatal flaws can help detect errors, but for complex systems no research system would ever pass such scrutiny, which means this kind of evaluation would prevent some good systems from being accepted. The legacy code standard is not suitable for brand new systems, since it places limitations on innovation. Then the author talks about how to evaluate effectiveness, introducing STU: situations, tasks, and users. By using this model, we can determine the importance, generality, etc. of a system, which is important for evaluation. I think this is a great paper, because it points out one aspect that we need to think about when facing the changes in computing, which is evaluation. It is important for the development of computer science: the evaluation result can determine whether a new idea gets adopted to make computer technology more advanced. The method the paper introduces contributes a lot to the evolution of system evaluation, so I think it is an important paper for the development of computing.
----------------------------------------------------Critique of Methodology Matters: Doing Research in the Behavioral and Social Science This paper talks about the methodologies for doing research in behavioral and social science in the order in which research is conducted. It tells us the specific knowledge that needs to be understood at each step of doing research. This is an important paper because it teaches readers in detail what they can do at the different stages of doing research in behavioral and social science. In general, people need to choose what they want to research: content comes from the substantive domain, ideas from the conceptual domain, and techniques from the methodological domain. People also need to think about which strategies to use, like the setting for the study, which involves many different categories of strategy, such as field study, field experiment, and formal theory. This paper also covers the points to be careful about when conducting studies and experiments, like how to do sampling, how to make sure the results reflect reality, and how to use the data to draw conclusions. All of this guidance is very useful for researchers; new researchers like us, new PhD students, can benefit a lot from it. From my own experience doing research, I do think these study techniques are important. While running a user study for a research project, I learned a lot about how to design and conduct user studies. Controlling the variables is important: designers need to control the variables carefully to make sure the experiment produces strong results that can support the hypothesis. But in my understanding, even a well-designed user study can lead to some useless results. Researchers need to be patient and not be afraid of failures, trusting that a day of success is waiting for them.