Engaging Students through Gamification - A Different Application of Game Design Concepts

Master Thesis on Gamification applied to Studies

The Application – A First Screenshot

Posted by kevindelval on February 18, 2013
Posted in: Gamification, Thesis. Tagged: thesis12. 1 Comment

The Application

After needing more time than expected to look into the technology necessary for the web application, development is finally getting on track. ASP.NET, HTML, CSS and JavaScript are all technologies with which I have little to no experience, so I needed to educate myself in them and get used to them before I could properly continue the thesis application. Regardless, I have created the Front Page with placeholder information, using .ascx custom user controls that I can reuse on other web pages. I’ll be posting a screenshot below that shows what the application currently looks like. Naturally, much of the style is missing, given that I am currently using basic HTML borders, background colors and so on. Once the application has been developed and I have time remaining, I will attempt to make it look significantly more professional; I have set things up so that I only need to insert images and remove any border-related instructions from the appropriate CSS style classes.

On a side note, working with ASP.NET so far has been an enjoyable experience. It resembles Windows Forms in many ways, which is a bit of a double-edged sword. The reasons for this are beyond the scope of this blog, however, so I won’t go into too much detail. Right now, I’m building the site to be rather modular. Most controls are custom controls with properties that can be accessed through server-side C# code. Events are present to handle the expected behavior, and injecting information (user name, title, achievements, posts, etc.) from the database is as easy as setting the HTML properties through the ones I created in the C# code. The biggest snag is one’s dependency on sticking to ASP.NET controls once you start. Because the C# code can only really interact properly with ASP.NET controls, making any important control an ASP.NET one will force you to start changing other, simpler and faster HTML controls into ASP.NET ones as well (the ViewState makes them generally larger and slower than their HTML counterparts). Because optimization is not a main concern here, however, I can temporarily set this aside in favor of usability and the advantages the Visual Studio Web Designer offers.
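To give a concrete idea of the pattern described above, here is a minimal sketch of what the code-behind of such a reusable user control might look like. The names (UserConsole, NameLabel, TitleLabel) are illustrative assumptions rather than the actual thesis code, and the matching .ascx markup is assumed to contain two asp:Label controls with those IDs.

```csharp
// Hypothetical code-behind for a reusable user control (UserConsole.ascx).
// NameLabel and TitleLabel are assumed to be asp:Label controls declared
// in the control's markup / designer file.
using System;
using System.Web.UI;

public partial class UserConsole : UserControl
{
    // Public properties let the containing page inject database values
    // without knowing anything about the control's internal markup.
    public string UserName
    {
        get { return NameLabel.Text; }
        set { NameLabel.Text = value; }
    }

    public string UserTitle
    {
        get { return TitleLabel.Text; }
        set { TitleLabel.Text = value; }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Event handlers for the control's expected behavior would go here.
    }
}
```

A containing page would then simply set these properties after reading the user’s record from the database, which is exactly the “set HTML properties through the C# code” approach described above.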

Either way, in the next few days I will upload more screenshots, and a user evaluation is forthcoming, which will allow the application to undergo a more proper test run. I learned a lot from the paper prototype, but actually seeing the application on a computer screen and having the tester use a mouse and keyboard is very different and more telling of the end product than any printed paper can ever be.

The Front Page

Screenshot


Planning – What will the First Iteration look like?

Posted by kevindelval on February 14, 2013
Posted in: Gamification, Thesis. Tagged: thesis12. 6 Comments

Features

My application will be constructed in several iterations; two at the very least. Each iteration will naturally build on the previous one, as well as fix any problems that may have surfaced throughout the various user evaluations I will conduct after each iteration.

The following are the goals that should be met by the first iteration. They mainly cover basic functionality and UI controls.

  • The application’s UI has been fully designed, with placeholders where need be.
  • UI elements that must be operational are the basic profile functionality, the main menu buttons related to functionality to be implemented in this iteration, the list of solutions on the server, and the profile page view.
  • Users must be able to browse the various solutions, as well as filter them based on tags and search based on string matching (a rough sketch of how this could look follows this list).
  • Users must be able to sort the list of available solutions according to basic criteria (A-Z, newest first, …)
  • Users must be able to create new solutions, as well as new iterations to an earlier solution.
  • Users must be able to comment on and rate posts.
  • Users must be able to receive notifications related to these basic functionalities.
  • Users must be able to issue challenges.
  • The application must be able to detect when challenges have been met.
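As referenced in the browsing and filtering items above, here is a rough sketch of how the tag filter, string search and basic sorting could be expressed with LINQ. The Solution class and its members are hypothetical placeholders for whatever the real data model ends up being.

```csharp
// Sketch of filtering and sorting the list of solutions; names are assumptions.
using System;
using System.Collections.Generic;
using System.Linq;

public class Solution
{
    public string Title { get; set; }
    public DateTime PostedOn { get; set; }
    public List<string> Tags { get; set; }
}

public static class SolutionBrowser
{
    public static IEnumerable<Solution> Filter(
        IEnumerable<Solution> solutions, string searchText, string tag)
    {
        var result = solutions;

        // Filter on a selected tag, if any.
        if (!string.IsNullOrEmpty(tag))
            result = result.Where(s => s.Tags.Contains(tag));

        // Simple string matching on the title.
        if (!string.IsNullOrEmpty(searchText))
            result = result.Where(s =>
                s.Title.IndexOf(searchText, StringComparison.OrdinalIgnoreCase) >= 0);

        // Basic sorting criterion from the list above: newest first.
        return result.OrderByDescending(s => s.PostedOn);
    }
}
```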

The following functionality will remain tentative:

  • Users must be able to view their profile page.
  • Users must be able to filter and search lists using more available options than those listed above.
  • The application must be able to communicate with persistent storage (SQL) rather than relying on local files as placeholders (a minimal sketch follows this list).
  • Users must be able to register, with the filled-in information stored persistently in the database (instead of the placeholder login method, which does not check whether the given credentials are actually valid).
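For the tentative persistent-storage item above, here is a minimal sketch of what talking to SQL Server through ADO.NET could look like. The table, column and connection-string names are assumptions for illustration only; the real schema is not decided in this post.

```csharp
// Hypothetical registration storage using a parameterized SQL insert.
using System.Configuration;
using System.Data.SqlClient;

public static class UserRepository
{
    public static void Register(string userName, string passwordHash)
    {
        // "ThesisDb" is an assumed connection-string name from Web.config.
        string connectionString =
            ConfigurationManager.ConnectionStrings["ThesisDb"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO Users (UserName, PasswordHash) VALUES (@name, @hash)",
            connection))
        {
            // Parameters avoid SQL injection instead of concatenating strings.
            command.Parameters.AddWithValue("@name", userName);
            command.Parameters.AddWithValue("@hash", passwordHash);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```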

Planning

So, what does the planning look like then? I based it on a week-by-week progression, keeping some time left over in the last week in case of delays or urgent, unforeseen matters.

The following is the plan for how development of the first iteration will proceed:

  • 11/02/2013 – 17/02/2013: UI in ASP WebForms has been created for all relevant screens, placeholders have been created wherever necessary. Users can login (fake login), and look at placeholder solutions and users (statically loaded in).
  • 18/02/2013 – 24/02/2013: Users can create and post solutions, filling out all the necessary information. Tags can be added and the list of available solutions can be filtered as described above.
  • 25/02/2013 – 03/03/2013: Users can issue, accept and refuse challenges. Users can place comments and rate solutions. The application can detect whether or not a challenge has been completed successfully.
  • 04/03/2013 – 10/03/2013: Users receive notifications of all relevant changes related to the above functionalities. Furthermore, the SQL server has been set up and a database is available for persistent storage of implemented functionality.

So, as one can see, there’s a lot of work to be done, but I am already making good progress with the first week. Once the functionality for the first week has been completed, I will be posting the results on the blog. The main difficulty I foresee is my lack of experience with any type of web application. I’ve been going over HTML, CSS, JavaScript and ASP.NET in great detail and have learned most of what I will be needing for the UI. The main benefit is that I can use my experience with C# in ASP.NET, and a lot of the fiddling with CSS can be streamlined using the WebForms Designer (much like Windows Forms, really). Regardless, updates will be posted soon.

User Evaluation – The SUS Results

Posted by kevindelval on January 22, 2013
Posted in: Game Design, Thesis. Tagged: thesis12. 1 Comment

User Evaluation – System Usability Scale

In my last post I stated that I would attempt to gather more SUS forms before interpreting the results. Due to exams, I haven’t gathered as many as I would have liked, but I do have enough to deduce some interesting results. In this post I will discuss the results and findings and see what I can do to correct where I’ve gone wrong or emphasize the points that seem to have been received well.

Most of the students I have been able to add to the list of testers are a few computer science people I know from abroad. Skype offered some tools to do these paper prototype sessions despite the distance. With this in mind, most of the results reflect the wishes and preferences of individuals with experience in computer science (apart from the history and drama students). Since the application is related to computer science only as an example, I naturally won’t leave out the valuable feedback from those two students. Before going over the results, I will explain how the results of the SUS questionnaire should be interpreted and turned into a single number that is indicative of the application.

As I stipulated in a previous post, the SUS consists of 10 questions that the users can give a rating of 1 to 5 on, 1 being a ‘strongly disagree’ and 5 being a ‘strongly agree’. The questions are very general and aimed entirely at the usability, practicality and ease of use of the application being tested. Once the results have been gathered, those numbers need to be converted into a single number that will say something about the usability of the system. This is done in the following way:

  • For odd items: Subtract one from the result.
  • For even items: Subtract the result from 5.
  • Now the values are between 0 and 4
  • Add up the converted responses and multiply the sum by 2.5. Now the range of possible values lies between 0 and 100 (a small code sketch of this computation follows below).
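As mentioned in the last step, here is a small sketch of that conversion, just to make the arithmetic concrete; it assumes the ten responses are passed in questionnaire order.

```csharp
// Sketch of the SUS conversion described above: odd items contribute
// (response - 1), even items contribute (5 - response), and the sum is
// multiplied by 2.5 to land in the 0-100 range.
public static class Sus
{
    public static double Score(int[] responses) // ten responses, each 1-5
    {
        int sum = 0;
        for (int i = 0; i < responses.Length; i++)
        {
            bool oddItem = (i + 1) % 2 == 1;   // items are numbered 1-10
            sum += oddItem ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;
    }
}
```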

It is important to note, however, that these numbers are not percentages. A result of 70 does not correspond to 70%, even though it is 70% of the maximum value. The benchmark average across studies is 68, which means an application scoring 68 is average; anything higher is above average. A result of 80+ therefore represents an application that is far above average and has few problems that need tackling. Now that we know this, what was the resulting score for my application so far?

The Results

For each user separately:

  1. 75
  2. 75
  3. 77.5
  4. 65
  5. 72.5
  6. 77.5
  7. 80
  8. 75

Which gives an average score of: 74.68

All in all, this is a good result. It signifies that most of the people who used it found it both useful and easily accessible. However, this wouldn’t be a proper analysis if I didn’t assess some things that clearly showed in the results. Also note that this score is still fairly close to the average of 68 and not close enough to the 80.3 mark (which is generally interpreted as the point where people start recommending an application to other users).

Overall, the question that received the lowest marks was whether or not the functions of the system were well integrated. This is along the same lines as the feedback I had addressed previously. It seems that some functionality, primarily the Challenge Tracks, feels out of place or is simply not integrated well enough. I had also not adapted the paper prototype at all after the first results, so as not to pollute later results with improvements that could have been made already. So, one of the other main concerns was how to make a new iteration, which was far too cumbersome and not easy to access. This led to lower scores on questions 8 and 9. The former asks whether or not the system was cumbersome, while the latter asks whether the tester felt confident using the system. The confusion caused by the misplacement of some of the UI elements, as well as the overly complicated layout at times, caused many of the testers to feel less confident and made the application feel sluggish. These matters will definitely need to be addressed in one of the next iterations of the application. A gamified application that feels sluggish and cumbersome will fall flat, as the game elements will be unable to do what they are supposed to if attached to such a crippled application.

So, as I stated before, the results are positive, but it did reveal a few issues that need urgent attention. The main surprise to me came in the form of the answer to the very first question. Some of the users said that they would indeed consider using the system frequently, most notably the two that weren’t Computer Science students. This leads me to believe, tentatively, that the application could serve its purpose in motivating students and seeing actual use.

In the next post I will discuss implementation decisions I have made and will be making and, after my exams have concluded, I will finally begin with implementing the application’s first iteration.

User Evaluation – The First Results

Posted by kevindelval on December 19, 2012
Posted in: Gamification, Thesis. Tagged: thesis12. 3 Comments

User Evaluation – How to Proceed

Over the past two weeks I have been able to get a few user evaluation sessions completed, though I’ve sadly had some bad luck organising them in the university itself. However, I have been able to get feedback from several new and old computer science students, a history student and even a drama student. This gives me a wider array of feedback that hopefully grants more insight into whether this application has wider uses than just computer science algorithms, which is merely an example application of it. I will continue to attempt to get more feedback over the coming few weeks, but holidays and exams will naturally make this more difficult to set up and execute. Because of this, I also do not have enough filled-out SUS forms to really make useful graphs, so, unless I no longer have the time for more evaluations, I will wait to post those results until later.

Following is a list of various problems that cropped up through the informal methods of evaluation; the scenario and informal questions. To each of these bullets I will attach the solution I have devised for the problem or various possibilities that I am still weighing against one another.

Problems with the UI

  1. The lack of an EDIT button: This arose both in the evaluation by my mentors and the evaluation with nearly every user. When prompted to post a new iteration, the first instinct of most of the users was to try and edit their existing solution with an improved iteration.
    Solution: The solution is, naturally, quite simple: A button to quickly edit posts has been added to the interface.
  2. Creating a new iteration is too abstract: Together with the missing edit button, it was also pointed out numerous times that creating a new iteration is too difficult and not intuitive at all. The normal procedure is to create a new post and use an Iteration drop-down: selecting the post of which the new one is an iteration increments the iteration number and posts it. In hindsight, the whole process is also faulty from a programming standpoint and far too complicated for what is actually just a different form of a “reply”.
    Solution: The solution I have chosen adds another button to the interface on the page of a post that allows a user to quickly generate a new iteration. Much of the information of the original post, such as the description and tags, is copied automatically into the new post, so the user need only add new text and information (a rough sketch of this follows this list).
  3. Notifications in an awkward spot: The history student in particular pointed out the awkward position of the notifications bubble. While it does flash red and attract attention in this way, the lower left corner of the screen isn’t really the place for an application element that is supposed to notify a user. I had initially chosen this location because it was empty space and was rarely, if ever, filled by anything significant.
    Solution: The solution is twofold. First of all, notifications will be changed into a clickable button instead of that bubble, and the button will be located in the upper right corner, which is also often space that isn’t used. Because a lot of information, such as the user console and the post headers, is located nearby, the user’s eyes will be drawn to the notification tab much more often.
  4. Searching shouldn’t float: A complaint of many of the users is that some of the menu functions don’t belong in a floating tab that collapses when the user clicks anywhere else on the screen. The feedback is rooted in the fact that it’s common to accidentally click next to a button, and if the advanced search functions, which might take some time to fill out properly, collapse whenever this happens, redundant and annoying repeat work will be required of the user. This complaint could extend to other functionality in the future, creating the need for a solution.
    Solution: Possible solutions are many, but I chose to go with the simplest and most common one: moving the advanced search function to a different screen. When the user wants to do anything more complicated than looking for posts containing one or more words, they are taken to a different screen of the application that does not invalidate their work by collapsing. It may have seemed like a minor complaint, but an application must do everything in its power to ‘not’ be frustrating.
  5. User summaries aren’t useful: This was an issue with a few of the users. A user’s smaller profile, displayed near posts, comments or in the top left corner of the screen, shows various achievements and expertise and, when clicked, expands to show a summary of that information in greater detail in the form of a list. This was deemed not useful because more experienced users commonly have a lot of achievements and would likely also have a lot of tags for which they have at least bronze expertise. The list would get needlessly big and thus fail to summarize anything.
    Solution: The suggested solution was to make clicking it simply lead to the person’s profile, but this is redundant given that clicking the person’s name already does this. I decided to simply alter the way the summary is laid out. When giving a summary of tags, only the most prominent tags and their expertise are displayed, leaving out very broad ones such as “Computer Science” simply because this tag would be present in just about all posts by an author specialized in that field of study. Achievements are a bit trickier and I am still mulling over ways to make them more organized and displayed efficiently, so the user can glean information from them at a glance.
  6. Lack of formatting options: This was a common issue among most users, either because programming code as flat text isn’t particularly pleasant to read or because formatting the problem domain can lead to a better understanding of the material. While basic formatting options will be available, none of them will be related to code because the application isn’t specifically targeting computer science. Advanced formatting options will not be included, simply due to time constraints and because they are not integral to seeing whether or not the application does what it is supposed to.
    Solution: There is none; adding more of these options will not aid the application in reaching its goals beyond basic usability, and time constraints won’t allow me to put so much work into text manipulation.
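As referenced in point 2 above, here is a rough sketch of what the simplified “new iteration” button could do behind the scenes. The Post class and its fields are hypothetical.

```csharp
// Sketch of the "new iteration" behavior: copy the original post's description
// and tags into a fresh post and increment the iteration number.
using System.Collections.Generic;

public class Post
{
    public string Title { get; set; }
    public string Description { get; set; }
    public List<string> Tags { get; set; }
    public int Iteration { get; set; }
}

public static class IterationHelper
{
    public static Post CreateNextIteration(Post original)
    {
        return new Post
        {
            Title = original.Title,
            Description = original.Description,      // pre-filled, user then edits it
            Tags = new List<string>(original.Tags),   // copied so edits don't leak back
            Iteration = original.Iteration + 1
        };
    }
}
```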

Conceptual Problems

  1. “Friending” makes no sense: This one was only pointed out by my mentors during the user evaluation I did with them, but I found it a very valid point nonetheless. I was intent on providing an interface for friends and the like, but such requests make little sense because there is no real shielding of information from non-friends and, if there were, it would defeat the purpose of the application, as it must share solutions equally among all users. They suggested a Twitter-like follow mechanism instead, which changes little in the interface but makes much more sense conceptually. A notification is given to anyone who is followed, and following is just a matter of clicking a button for the interested user.
  2. Will Challenge Tracks be used? Once again, a problem pointed out by my mentors, but not the other users. Why would someone create a challenge track for himself when it serves no other purpose than granting achievements?
    Solution: The solution lies in the blog posts I have made before about user-generated content and meaningful achievement. Many people have an urge to share, and creating something yourself and subsequently sharing it with other users, most notably followers and people of similar interests, would yield a beneficial effect; that alone would urge people to use the feature. Furthermore, it can serve as a more long-term, advanced challenge that promotes greater learning through the same iteration functionality that is supported by the application. Each step is clearly defined and a reward is given along the way. One thing I will add to the application’s explanation is that a teacher can use challenge tracks to help explain the material in an iterative fashion, preventing a complicated information dump and preferring to take the approach of letting students figure things out for themselves, with the milestones and achievement “hints” as help.
  3. Useful on its own? A concern expressed by several test users was that, especially for certain fields of study, the application may not be useful on its own. People will not be inclined to use it without a first push in its direction. Unless the user is tremendously interested in the material, students will not necessarily see this application and think, right away, “this can help me”.
    Solution: This concern is an important one, as it may indicate a fundamental flaw in my application. While the general consensus so far is that it would be useful and students would use it if the motivational aspects work in the long run, the very real problem exists that it would be overlooked in favor of simply studying the textbook on the subject matter and nothing else. This indicates that an initial push towards the application must be made so the student can experience it and properly make out whether or not it is something that can help them. This can be achieved by tying the application to one or more subjects at the university in question. Professors would post some challenge tracks or point out the application to students so they know of its presence and are more inclined to try it out. It also indicates the typical cold-start problem: in the beginning, the application will be empty and it will take some time for a good repository of solutions, tags and problems to fill the database and make the application truly useful. While the above solution seems like something that could work, further study will be needed to point out whether this fundamental flaw is truly an issue or just a minor concern.

Next I will go over most of the informal questions I asked each of the users and summarize the general feedback and answers I received from them and how I will use that feedback:

  1. How is the user interface experienced, in general? Is everything readily and easily accessible? Are there any immediate impairments?
    The user interface was, in general, positively received. Most of the elements, apart from those discussed above, were visible, centralized and easy to access. This made the application intuitive, quick and easy to use.
  2. How are the game elements experienced at first glance? Do they serve their purpose, even in an early stage? Which are pointless? Which simply need refinement?
    The reactions were largely inconclusive here. In general, the elements were well received and they would urge users on to be involved with the application. Often, posts are made and published but then forgotten and not much else happens with the interplay between the users. But the challenges and other game elements urge the user to not forget about his or her posts, or those of others. However, no in-depth feedback could be given because the game elements did not really do what they were supposed to in such a short test. Only long term or fully implemented tests can show whether they need refinement, change or removal.
  3. If you were following a course that had, at the very least, a significant section on computer algorithms, would you use this application to better understand the material as well as broaden your knowledge beyond the material?
    Reactions to this were hard to come by. The non-computer science test users could not really imagine how the application would work because of their lack of knowledge on the subject, meaning the question itself was unfortunately formulated on my part. But two of the computer science students, one of whom majors in video game design, had to admit it could have made their life a lot easier, as their current occupations often have them altering existing algorithms to suit their needs, and this was a difficult transition at first due to them having mostly book knowledge and not practical, flexible application know-how. This is a good result, not at all conclusive, but it does hint at how the application may be capable of doing what it needs to do.
  4. Is the process of challenges, rating and creating your own content to be considered enjoyable, even though this is on the short term?
    As with the second question, most users could not give an answer to this question. The interface surrounding these elements was fluid and easy to use, but whether it was enjoyable could not be gleaned from the short evaluation scenario.
  5. Are user-generated achievements relevant and something you would be willing/happy to show on your profile, or are they meaningless?
    Some of the users stated that they would indeed show achievements generated by challenge tracks they would hand-pick on their profiles. The main reason for this is that they would often be achievements they could be proud of, actual proof of achievement. Others, however, did not see why these elements were even present in the application and were largely uninterested in them. This does show that it is indeed important to include a variety of game elements, as people’s tastes differ widely. Those users could perhaps enjoy other elements more, and it is my job to try and include something for everyone (within the realm of the possible, of course).
  6. If given a choice, what platform would you use this application on the most?
    As I expected, all users chose a PC as the target platform due to the text-heavy core of the application. While a mobile version to view new posts would be useful, typing up a full solution on a mobile platform that isn’t at least as sophisticated as a touchpad would be impractical and sometimes impossible. Given these unanimous answers, I will develop it for desktops and laptops.
  7. Would you use any results, achievements or expertise gained while using the application on other online social sites, if possible, such as LinkedIn or Facebook?
    This idea came from a suggestion of one of my mentors and got me thinking about how important it is for students to show their skills to the industry. One of the best ways of doing so is to take advantage of the pervasive presence of social networking. A lot of employers check applicants’ Facebook and other social sites to see what sort of person they are and how their skills relate to what is demanded by the job. If expertise and relevant achievements, with links to actual, tangible and reviewable solutions, are present on professional sites such as LinkedIn, it could prove the break a student needs to get a certain job. All but one user agreed that this would be a useful addition to the application.

These are all the results I have compiled so far. Naturally, I still lack proper feedback from people in my specific branch of computer science, which would prove most valuable for the example I am going with. But I do have people from very varying backgrounds, each of whom pointed out issues others had not, thereby giving me food for thought. All in all, it was very educational and I can finally move on to implementation and getting a first iteration up and running that will show, digitally, how the program will work.

User Evaluation – SUS and Paper Prototyping

Posted by kevindelval on December 19, 2012
Posted in: Gamification, Thesis. Tagged: thesis12. Leave a comment

User Evaluation – An Overview

As I stated in a previous post, I have been trying to organize a user evaluation session for my paper prototype, with mixed results. However, before I continue on to the results of the session, I will discuss what exactly I wrote up and did to get the necessary materials for it. I do advise anyone with a project or thesis subject similar to mine to do such an evaluation, because the amount of information on design flaws that can be gleaned from it is quite valuable. Even just getting two or three people to follow through all the steps of the application can grant you an interesting insight into how people would use the application and what parts need revision (or what parts are missing, as I discovered).

The first step is to create your paper prototype. As shown in previous blog posts, I used Photoshop to create the template interface. I decided this mainly because of the ease of use of Photoshop and its layer-based interface, but also because my drawing skills are embarrassingly sub-par. However, drawing skills aren’t necessary; paper prototypes can be as simple as you wish, as long as they get the point across and feel like your application will in the end. The end result of my Photoshop experience can be viewed in this blog post. I proceeded to cut out each element, create several generic versions of each and subsequently print them on regular A4 paper so they can be overlaid and otherwise manipulated.

The second step was to create some kind of “story” that users walk through and guides them through the various elements of the application. My first approach was to have entirely separate scenarios, each of which displays one or more functionalities. But my mentors advised that it was better to add more context to it, as well as rewrite the scenarios so there is a single storyline, not just atomic events. They also advised to make sure to include detailed information on what the user is supposed to be doing. The more vague the “setting” of the evaluation, the more difficulty test users will have with getting into the spirit of things. The following is a summary of the scenario I eventually came up with (after explaining the general purpose of the application, of course):

  • You are a student at the department of Computer Science and are currently following a course on the basics of computer systems. Over the course of this subject you learn various basic principles of common programming elements, such as arrays (collections of data), and you learn of the various ways these arrays can be manipulated. A common operation in computer science is the comparison of the contents of two arrays, which can be optimized in various ways. You decide to test out the knowledge you have gained and set out to craft a “brute force” solution to the problem and post it on the application I am designing. However, before you proceed, you want to get some additional inspiration and begin to sift through the posts in search of those that match your current field of interest and are particularly valuable.
  • At this point in the user evaluation, the test user is tasked with learning how the interface of the front page operates. He or she will have to use the search function in the menu bar to prune the possibilities to match favorite tags and keywords and then sort the results according to score and/or the expertise of the user who posted it. This part also urges the user to think about why they would want to check out what other people have been doing and how the interface displays the information related to a given solution. Incentives are also given to urge the user to follow the authors of the posts they study, as well as favorite and/or comment on them.
  • The next step is to actually post your own solution. After developing your result, you decide to create a post and send it out onto the application.
  • At this point, users will learn how to post a solution and what elements are required to fill out all the necessary information for it to be a complete post that will get positive feedback.
  • Quite quickly, you receive a notification of comments and even a challenge and naturally wish to check out what has been said about your endeavor.
  • Here, the users are introduced to the notifications that give users warnings when certain events have come to pass. Users are urged to find out where and how to look at these notifications, how to react to them, how to reply to comments and how to accept or refuse challenges. For the sake of the test, users are naturally ‘forced’ to accept the challenge but this would otherwise not be the case.
  • After accepting the challenge that was issued, you decide to create a new iteration or ‘version’ of your previous solution that will use whatever feedback you received or how the challenge has urged you to rethink your earlier work. The fruits of your labor yield a faster and more robust result and you post the new iteration to the application.
  • As can be expected, the next element in the evaluation gives users a chance to create a new iteration and post their new solution. Users will have to go through the interface to find how to accomplish this and after posting this new iteration, any followers and the person who challenged them will be notified. Users also need to figure out how to link a post to one or more challenges.
  • Inspired by another user’s profile that displays various challenge tracks he has completed, you decide to create one yourself. After doing so you publish this and see if it garners any positive reactions from other users.
  • The last major part of the evaluation leads users through the process of creating and publishing a Challenge Track and shows how this may affect the application as a whole.

The third step, after everyone has completed the scenario, is to get both formal and more informal feedback on the application. A formal evaluation can be done in any number of ways, but my mentors advised me to use a SUS questionnaire (otherwise known as the “System Usability Scale”). SUS is a simple set of 10 mostly generic questions that can offer some insight into the usability of any application. It is easy to find information on this system online and I will speak more about it when I interpret the results. The informal method I chose was to simply prepare a few questions and subsequently follow through with questions on the fly that go into subjects and issues brought up by the test users in greater detail. The questions I chose to use as starting points were:

  • How is the user interface experienced, in general? Is everything readily and easily accessible? Are there any immediate impairments?
  • How are the game elements experienced at first glance? Do they serve their purpose, even in an early stage? Which are pointless? Which simply need refinement?
  • If you were following a course that had, at the very least, a significant section on computer algorithms, would you use this application to better understand the material as well as broaden your knowledge beyond the material?
  • Is the process of challenges, rating and creating your own content to be considered enjoyable, even though this is on the short term?
  • Are user-generated achievements relevant and something you would be willing/happy to show on your profile, or are they meaningless?
  • If given a choice, what platform would you use this application on the most?
  • Would you use any results, achievements or expertise gained while using the application on other online social sites, if possible, such as LinkedIn or Facebook?

As can be seen, the questions are rather basic and some of them, especially the game element-related questions, are difficult to answer because such things only reveal themselves through extended use and testing of the application. If the elements are subtle, their beneficial effects won’t be visible after a single, 10 minute test-run of a paper prototype. But regardless, the question begs to be asked and could very well give some valuable insight.

In Conclusion

This is how I chose to go about a first user evaluation. A very different mindset is required of a programmer to perform these things successfully, and I am not used to it at all. I have gotten some useful feedback, but it was sadly mostly limited to the user interface and usability and not so much related to any gamification elements. This was to be expected and was also the main goal, so that I do not lose time later on smaller details such as the lack of a certain button or easy-to-reach functionality. My next blog post will discuss the results I have gathered so far.

Paper Prototype – A Few Collages

Posted by kevindelval on December 1, 2012
Posted in: Case Study, Gamification, Thesis. Tagged: thesis12. 5 Comments

The Prototype of the Application – Useful Photoshop

As I stated in a previous entry, I will now attempt to explain the user interface of my application through several screens composed with the elements of the paper prototype I constructed in Photoshop. I will not be going through every small detail, but I will outline the most important elements and their function.

The Front Page

Front Page
Picture 1: Front Page Prototype

This screen is effectively a prototype of the front page of the application, with the user menu opened. I will go over each element in turn:

  • User Console: The user console is the panel located in the upper left corner. It displays a short summary of your personal information, or nothing if you are not currently logged in as a registered user. It shows the user’s name, title, a selection of achievements and how much expertise the person has. Clicking the name or avatar will redirect you to your own profile, while clicking the achievements or expertise will give a small pop-up window (not included here for brevity’s sake) with a scrollbar that allows you to cycle through all your achievements or the tags in which you have attained a certain expertise. This may not be useful for your own profile, but the same functionality is provided on the smaller user consoles next to the posts you see on the front page.
  • Main Menu: The main menu is normally collapsed simply as the menu button and only shows the options on the screen when clicked. These options contain all the basic functionalities of the application such as making a new post, issuing a challenge or searching the posts for specific topics. Advanced Search is also available which will allow users to specify additional criteria beyond simply a certain string to match with a post name.
  • Posts: The two posts located on the screen contain some basic information and clicking the post name will take you to the main view of it. Various information is displayed next to it such as the writer’s user console, as well as its current score and how much expertise it has garnered the writer. A small keyword also shows in the upper right corner of each post detailing whether the post is new, an update, an edit or a new iteration. Only two posts are given as examples, but naturally more can be displayed on the screen and skimmed over using a scrollbar on the side.
  • Notifications: A notification bubble is located in the lower left corner of the screen, which is colored red whenever there are new notifications. This is effectively a listener that is updated whenever something occurs that would be of interest to the user. That can be anything from a comment on a post of theirs, a challenge that was issued to them, a new post by someone on their friends list, and so on. Notifications can be studied in a more traditional list through the main menu. Notifications are also minimized until the user clicks the bubble, after which they show up as shown on the screen. All new notifications will always be shown on screen, but the short recap list is padded with old notifications only up to five entries, to prevent the screen from being flooded with old information (a small sketch of this recap rule follows this list). To view older notifications, a user will have to go through the main menu.
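As mentioned in the notifications item, here is a small sketch of the recap rule, purely for illustration: every unread notification is shown, and older ones are only used to pad the list up to five entries. The Notification class is hypothetical.

```csharp
// Sketch of the notification recap rule described above.
using System;
using System.Collections.Generic;
using System.Linq;

public class Notification
{
    public string Message { get; set; }
    public bool IsNew { get; set; }
}

public static class NotificationPanel
{
    public static List<Notification> BuildRecap(IEnumerable<Notification> all)
    {
        // All new notifications are always included.
        var fresh = all.Where(n => n.IsNew).ToList();

        // Old ones only pad the recap up to a total of five entries.
        int padding = Math.Max(0, 5 - fresh.Count);
        var old = all.Where(n => !n.IsNew).Take(padding);

        return fresh.Concat(old).ToList();
    }
}
```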

That is basically all there is to say on the front page. It is simple and straight-forward and serves solely as a gathering point for the posts. Notifications, the user console and the menu button are all present across the entire application.

The Post View

This screen displays a post view, with comments and challenges uncollapsed so they can be viewed. Posts are mainly organised in three tabs, with general information (such as post title and any attachments) above it. It is important to note that the user’s console is replaced by a console giving the information of the user who posted the entry being viewed. Again, I will discuss the major elements in turn:

  • Description Tab: This tab contains the general description of the problem and perhaps the methodology the user followed in their solution. It is plain text, but can be formatted to be made more enjoyable to read.
  • Solution Tab: The solution tab contains a number of sub-tabs depending on the user who wrote the solution to the given problem. Tabs are added and cycled through using the buttons on the right. These tabs are optional but advised for lengthy solutions (especially computer programs) that would benefit from some organisation. The contents are, once again, simple text with optional formatting and potentially the ability to mark blocks of text as “code” to allow more structured formatting pertaining to a certain programming language.
  • Output Results Tab: This tab contains a description of the input the user gave their solution and what the output was. This is a very broad definition, as the tab will contain wildly varying information depending on the field of study involved and the techniques used. Computer algorithms, for instance, will demand (at least if the user wants to properly inform others of what exactly they did) that users supply information on their own computer, the language used, and so on.
  • Post Sidebar: To the left of the tabs is various information, such as the number of times the post was viewed and favorited, its score and any expertise it has garnered. Giving a score is as simple as clicking the bar corresponding to the ranking you wish to give the post. Other elements in this sidebar are buttons to perform various actions such as posting a comment or issuing a challenge.
  • Comments & Challenges: Comments and challenges are listed below the post itself. By default, these are collapsed, but they can be viewed as a list, complete with scrollbar, if the user presses the respective buttons. Comments work in much the same way as posts on the front page in that they display user information and the score the comment has received. Options pertaining to each type can be found underneath the actual comment or challenge in question (e.g. whether to accept or refuse the challenge, to reply to the comment, etc.)

Once more, I tried keeping things simple and straightforward. Most things can be accomplished with a single button press, and related elements are grouped together. I found the inclusion of the tabs necessary in order to keep the text flow on the screen more limited. Without those tabs there would have been at least three large blocks of text, without even considering the fact that the solution itself may be far larger than this example shows, with many tabs.

Creating a Challenge Track

The last screen from the prototype that is worth showing in greater detail without being redundant is the screen that comes up when creating a Challenge Track.

Challenge Track
Picture 3: Create Challenge Track Prototype

  • Challenge Track Information: All the general information for a challenge track is located at the top of the screen, including the buttons to publish, save as a draft or publish it privately (essentially using the track and its achievements for yourself only and not allowing others to attempt it). Filling out this information simply involves typing in the text boxes.
  • The Track: The track underneath displays all the milestones you have placed, in order. Users will have to achieve these in the same order as they are defined. Adding challenges to the track can be done with the ‘plus’ button and they can be clicked and dragged to take a spot before or after any other milestones you drag them past.
  • Milestone Information: Clicking one of the milestones will enlarge its icon and allow you to enter its name and the details of achieving it (which cannot be too long, for brevity’s sake). Clicking the icon again allows the user to upload an image to represent the milestone, but a default image will naturally be supplied. The ‘minus’ button will remove a milestone only if there are at least three in the track; in other words, a challenge track must have a minimum of two milestones (a small sketch of this rule follows this list).
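As mentioned above, here is a minimal sketch of the milestone rules, with hypothetical types, just to pin down the add/remove behavior: milestones can always be added, but removal is only allowed while more than two remain.

```csharp
// Sketch of challenge-track milestone handling; class names are assumptions.
using System.Collections.Generic;

public class Milestone
{
    public string Name { get; set; }
    public string Details { get; set; }
}

public class ChallengeTrack
{
    private readonly List<Milestone> milestones = new List<Milestone>();

    public void Add(Milestone milestone, int position)
    {
        // Dragging a milestone to a new spot translates to an insert position.
        milestones.Insert(position, milestone);
    }

    public bool Remove(Milestone milestone)
    {
        // The 'minus' button only works when at least three milestones exist,
        // so a track always keeps a minimum of two.
        if (milestones.Count < 3)
            return false;
        return milestones.Remove(milestone);
    }
}
```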

Once more, a simple and hopefully (user evaluation will confirm or deny this) intuitive interface that should allow for users to create a track quickly and easily.

Last Words

My final words on the prototype are that I hope they are sufficient to give people a good idea of what the application would look and feel like. Only next week’s evaluation will be able to give a glimpse of whether or not it succeeds at this. Some screens such as the user profile have been left out because I did not see the point in describing how things were simply listed, but I may include them if they were deemed particularly successful or poorly designed in my next post, after the evaluation. I hope this has given some insight into what I have been doing and I hope this alone will already give me some useful feedback.

Planning

Posted by kevindelval on December 1, 2012
Posted in: Thesis. Tagged: thesis12. 2 Comments

Planning – The Rest of the Year

As per the request of one of my mentors, my next entry will detail my planning for the rest of the year. Naturally, this planning is a bit vague here and there because I do not know the details of what I will be doing at that time or how the application will have evolved. It is also fluid in the sense that certain weeks may switch places or certain activities may take longer than foreseen, and others less time. Regardless, this is, for now, how I see the rest of the year. Also keep in mind that my workload is far less during my course’s second semester, giving me a lot more time to invest in this.

The Planning

First Semester
26/11/2012 – 2/12/2012 : Finish the paper prototype and prepare the user evaluation.
03/12/2012 – 09/12/2012 : Do the user evaluation and start adapting the interface to reflect this feedback.
10/12/2012 – 16/12/2012 : Start researching implementation possibilities. Write extensive blog posts on user evaluation and feedback, as well as changes made based on that feedback.
17/12/2012 – 30/12/2012 : Start developing the first iteration of the application.
31/12/2012 – 03/02/2013 : Rewrite the first chapters of the thesis, add more content to them to finish the literature study and attempt to finish off all content not directly related to the application. (exam period, so much less time to work)

Second Semester
04/02/2013 – 10/02/2013 : Resume work on the first iteration of the application.
11/02/2013 – 18/02/2013 : Resume work on the first iteration and update thesis drafts based on feedback on the blog and from mentors.
19/02/2013 – 03/03/2013 : Finish the first iteration of the application, which should have the most important user interface elements up and running. (menu, user console, posts, comments and front page)
04/03/2013 – 24/03/2013 : After another user evaluation, begin the second iteration of the application, which will implement given feedback and basic datastore/persistence functionalities.
25/03/2013 – 31/03/2013 : Prepare the second presentation and continue work on the second iteration.
01/04/2013 – 26/05/2013 : Finish the second iteration, which should include all important parts of the application (all gamification elements, all UI functionalities pertaining to engagement and so on). Attempt one last user evaluation of the progress made. Complete a possible third iteration based on user feedback.
27/05/2013 – 07/06/2013 : Finish the final draft of the master thesis text.
Rest of June before final presentation : Prepare the presentation and iron out any of the last details of the application. (perhaps some last bugfixes and so on)

So, much of the implementation will happen in the second semester of this school year, evidently. Given the fact that projects will be nearly non-existent in my second semester, I will have more time to spend on this than I do now. I’ve also made sure I have at least a month per iteration, often more, to give myself some room to breathe in case parts of the planning turn out to be unrealistic in some way.
