I asked 8 questions in total for the video prototype testing session. 5 of them are quantitative questions with concrete scores to measure the aspects I focused on, while the remaining 3 are qualitative questions intended to gather in-depth suggestions for the video prototype and the whole concept. The questions are listed below.
1. Can you understand the basic rules of the game? (1-5) ----------------- Multiple Choice
2. Do I communicate the design rationale clearly enough? (1-5) ----------------- Multiple Choice
3. Do you think it's an interesting game? (1-5) ----------------- Multiple Choice
4. Why is it, or why is it not, interesting? ---------------- Text Question
5. Can you foresee any problems with the game? (e.g. rules, interactions, etc.) ---------------- Text Question
6. Do you have any suggestions for refining the game to address the potential problems that concern you? ---------------- Text Question
7. What do you think of the quality of the video? (1-5) ----------------- Multiple Choice
8. What do you think of the quality of the audio? (1-5) ----------------- Multiple Choice
The type of study we did in class was A/B testing combined with a survey, but unfortunately I missed the A/B testing part. What I actually did, therefore, was an online survey: I invited the class to watch my video and answer the questions.
The purpose of question 1 was to measure the clarity of the game rules. It is a straightforward way to learn whether the basic rules were communicated well and understood by the audience. From the feedback, I get a general idea of whether the game rules were clear or not.
The purpose of question 2 was to measure the clarity of the design rationale behind the concept, which made it very similar to the first question.
The 3rd question aimed to find out whether it was a fun game. It was one of the most important questions in this survey because it could significantly affect future iterations. I used a multiple-choice scale to get an average score for the interestingness of the game, so that I would have a rough idea of its current fun level. However, I didn't define 'fun' in the question; it should have been broken down into several more specific questions that cover all aspects of what 'fun' means.
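The averaging step described above can be sketched with a short script. The response values here are hypothetical, purely for illustration:

```python
# Minimal sketch of summarizing 1-5 Likert responses.
# The `responses` list is hypothetical, not real survey data.
from statistics import mean
from collections import Counter

# Hypothetical answers to "Do you think it's an interesting game? (1-5)"
responses = [4, 3, 5, 4, 2, 4, 3]

avg = mean(responses)              # rough overall "fun" level
distribution = Counter(responses)  # how many respondents chose each score

print(f"average: {avg:.2f}")                  # average: 3.57
print(dict(sorted(distribution.items())))     # {2: 1, 3: 2, 4: 3, 5: 1}
```

Even this tiny summary shows why a bare average hides information: the distribution tells you whether opinions cluster or split.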
The 7th and the last questions were asked to get rough scores for the quality of the video and audio of the prototype. I wrote these two questions hastily and forgot to put a text comment option under each of them. What I found was that scores alone, without comments, were nearly useless for helping me improve my video-making skills; they could only serve as encouragement or discouragement.
All five quantitative questions shared the same problem: the options were too vague. I could have defined or expanded the options to make them clearer and more specific. For instance, for the 1st question, it might have been better to replace the 1-5 scores with five descriptive levels: 'completely confused', 'vague', 'understand part of it', 'understand most of the rules', 'completely understand'.
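The labeled scale proposed above amounts to a simple mapping from numeric scores to descriptions. A minimal sketch (the labels are the ones suggested in the text; the function name is my own):

```python
# Map 1-5 clarity scores to the descriptive levels proposed in the text.
RULE_CLARITY_LABELS = {
    1: "completely confused",
    2: "vague",
    3: "understand part of it",
    4: "understand most of the rules",
    5: "completely understand",
}

def describe(score: int) -> str:
    """Return the descriptive label for a 1-5 rule-clarity score."""
    return RULE_CLARITY_LABELS[score]

print(describe(4))  # understand most of the rules
```

Presenting the labels to respondents, rather than bare numbers, makes each point on the scale mean the same thing to everyone.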
In terms of the three qualitative questions, question 4 was tied to question 3 in order to get the underlying reason why the game was or was not interesting. As I mentioned above, suggestions for improving the interestingness of the game were the key feedback I wanted from the survey, and in fact I did get some useful suggestions from this question.
The 5th and 6th questions form a pair, which aimed to find out and address the potential problems of the game. It was like inviting the interviewees to brainstorm for me, helping me discover current design defects as well as possible future challenges.
Overall, since the sample (the number of interviewees) was not large, the simple, concise quantitative questions were not as useful as the qualitative ones. I therefore plan to include more qualitative questions in the next survey to get more actionable feedback.