User Evaluation – An Overview
As I stated in a previous post, I have been trying to organize a user evaluation session for my paper prototype, with mixed results. Before I move on to the results of the session, however, I will discuss what exactly I wrote up and did to prepare the necessary materials for it. I advise anyone with a project or thesis subject similar to mine to do such an evaluation, because the amount of information on design flaws that can be gleaned from it is quite valuable. Even getting just two or three people to follow through all the steps of the application can give you interesting insight into how people would use the application and which parts need revision (or which parts are missing, as I discovered).
The first step is to create your paper prototype. As shown in previous blog posts, I used Photoshop to create the template interface. I decided on this mainly because of Photoshop’s ease of use and its layer-based interface, but also because my drawing skills are embarrassingly sub-par. Drawing skills aren’t necessary, however: paper prototypes can be as simple as you wish, as long as they get the point across and give a feel for how your application will work in the end. The end result of my Photoshop efforts can be viewed in this blog post. I proceeded to cut out each element, create several generic versions of it, and then print them on regular A4 paper so they could be overlaid and otherwise manipulated.
The second step was to create some kind of “story” that walks users through the various elements of the application. My first approach was to have entirely separate scenarios, each of which demonstrates one or more functionalities. My mentors advised, however, that it was better to add more context and to rewrite the scenarios as a single storyline rather than a set of atomic events. They also advised me to include detailed information on what the user is supposed to be doing: the vaguer the “setting” of the evaluation, the more difficulty test users will have getting into the spirit of things. The following is a summary of the scenario I eventually came up with (after explaining the general purpose of the application, of course):
- You are a student at the department of Computer Science and are currently following a course on the basics of computer systems. Over the course of this subject you learn various basic principles of common programming elements, such as arrays (collections of data) and the various ways these arrays can be manipulated. A common operation in computer science is comparing the contents of two arrays, which can be optimized in various ways. You decide to test out the knowledge you have gained and set out to craft a “brute force” solution to the problem (something along the lines of the sketch after this list) and post it on the application I am designing. However, before you proceed, you want to get some additional inspiration and begin to sift through the posts in search of those that match your current field of interest and are particularly valuable.
- At this point in the user evaluation, the test user is tasked with learning how the interface of the front page operates. He or she will have to use the search function in the menu bar to narrow the results down to favorite tags and keywords, and then sort them according to score and/or the expertise of the user who posted them. This part also urges the user to think about why they would want to check out what other people have been doing and how the interface displays the information related to a given solution. Incentives are also given to urge the user to follow the authors of the posts they study, as well as to favorite and/or comment on them.
- The next step is to actually post your own solution. After developing your result, you decide to create a post and publish it on the application.
- At this point, users will learn how to post a solution and which elements need to be filled in for it to be a complete post that will attract positive feedback.
- Quite quickly, you receive notifications of comments and even a challenge, and naturally you wish to check out what has been said about your endeavor.
- Here, the users are introduced to the notifications that alert them when certain events have occurred. Users are urged to find out where and how to view these notifications, how to react to them, how to reply to comments, and how to accept or refuse challenges. For the sake of the test, users are ‘forced’ to accept the challenge, but this would not be the case in normal use.
- After accepting the challenge that was issued, you decide to create a new iteration or ‘version’ of your previous solution that incorporates the feedback you received and the way the challenge has urged you to rethink your earlier work. The fruits of your labor yield a faster and more robust result, and you post the new iteration to the application.
- As can be expected, the next element in the evaluation gives users a chance to create a new iteration and post their new solution. Users will have to go through the interface to find out how to accomplish this; after they post the new iteration, their followers and the person who challenged them are notified. Users also need to figure out how to link a post to one or more challenges.
- Inspired by another user’s profile, which displays various challenge tracks they have completed, you decide to create one yourself. After doing so, you publish it and see whether it garners any positive reactions from other users.
- The last major part of the evaluation leads users through the process of creating and publishing a Challenge Track and shows how this may affect the application as a whole.
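To make the opening step of the scenario a little more concrete, the kind of “brute force” solution it has in mind could look something like the sketch below. This is purely illustrative (the function name and the exact problem framing are my own assumptions); during the evaluation itself, everything of course stays on paper.

```python
def common_elements(a, b):
    """Brute-force comparison of the contents of two arrays.

    For every element of `a`, scan all of `b` to see whether it occurs
    there. This runs in O(len(a) * len(b)) time, which is exactly the
    kind of naive baseline a later, challenged iteration could improve
    on (for instance by sorting first or by using a set).
    """
    result = []
    for x in a:
        for y in b:
            if x == y:
                result.append(x)
                break
    return result


# Example: the overlap between two small arrays.
print(common_elements([1, 2, 3, 4], [3, 4, 5]))  # [3, 4]
```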
The third step, after everyone has completed the scenario, is to get both formal and more informal feedback on the application. A formal evaluation can be done in any number of ways, but my mentors advised me to use an SUS questionnaire (otherwise known as the “System Usability Scale”). SUS is a simple set of 10 mostly generic questions that can offer some insight into the usability of any application. It is easy to find information on this system online, and I will speak more about it when I interpret the results. The informal method I chose was to prepare a few questions in advance and then follow up on the fly with questions that go into the subjects and issues brought up by the test users in greater detail. The questions I chose as starting points were:
- How is the user interface experienced, in general? Is everything readily and easily accessible? Are there any immediate obstacles?
- How are the game elements experienced at first glance? Do they serve their purpose, even in an early stage? Which are pointless? Which simply need refinement?
- If you were following a course that had, at the very least, a significant section on computer algorithms, would you use this application to better understand the material, as well as to broaden your knowledge beyond it?
- Is the process of challenges, rating, and creating your own content enjoyable, even in the short term?
- Are user-generated achievements relevant and something you would be willing/happy to show on your profile, or are they meaningless?
- If given a choice, what platform would you use this application on the most?
- Would you, if possible, share any results, achievements, or expertise gained while using the application on other social sites such as LinkedIn or Facebook?
As can be seen, the questions are rather basic, and some of them, especially the ones related to game elements, are difficult to answer because such things only reveal themselves through extended use and testing of the application. If the elements are subtle, their beneficial effects won’t be visible after a single, 10-minute test run of a paper prototype. Regardless, the questions beg to be asked and could very well give some valuable insight.
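Coming back to the formal part for a moment: the SUS scores themselves are computed with a standard formula, which I will apply when interpreting the results. A minimal sketch of that standard scoring (nothing specific to my own questionnaire) looks like this:

```python
def sus_score(responses):
    """Compute a standard SUS score from ten responses on a 1-5 scale.

    Odd-numbered items (the 1st, 3rd, ...) are positively worded and
    contribute (response - 1); even-numbered items are negatively worded
    and contribute (5 - response). The sum is multiplied by 2.5, which
    maps the result onto a 0-100 scale.
    """
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5


# Example: a fairly positive set of answers yields a score of 85.0.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))
```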
This is how I chose to go about a first user evaluation. Performing these things successfully requires a very different mindset from programming, and I am not used to it at all. I have gotten some useful feedback, but it was, sadly, mostly limited to the user interface and usability, and not so much related to any gamification elements. This was to be expected and was also the main goal, so that I do not lose time later on smaller details such as a missing button or hard-to-reach functionality. My next blog post will discuss the results I have gathered so far.