Exploring the role of user participation within contemporary web development.

This report outlines my participation in two research experiments and their application to the field of Human-Computer Interaction and contemporary web design. The first experiment investigates the effects of music on an individual’s focus, while the second examines the usability of various eReader devices. Both experiments provide valuable insights into the field of HCI, emphasising the need for web developers to closely consider the relationship between individuals and their interfaces. The report concludes by highlighting the significance of user-participation experiments within web development, and their role in creating personalised, user-centric applications.

Introduction

The field of Human-Computer Interaction (HCI) has experienced significant growth in recent years – particularly in the context of online user experiences. While technical correctness is still vital to the success of an online application, it is becoming increasingly clear that facilitating an emotional, interpersonal relationship between a user and a web app is equally important to creating a fulfilling digital experience (Nielsen, 1994). This report delves into two experiments, each encompassing a modern-day approach to HCI: one examining the influence of music on participants’ focus, and the other assessing user experiences with various eReader device interfaces. By partaking in and analysing these experiments, we can gain a unique insight into the role of present-day HCI, and the necessity for contemporary web developers to understand the distinctive ways their applications can connect with users.

Experiment One – “Can Binaural Beats Increase Your Focus? Exploring the Effects of Music in Participants’ Conscious and Brain Activity Responses” (Rahman, Gedeon, Caldwell, & Jones, 2021)

Experiment Summary and Objectives
The first experiment I participated in aimed to investigate the effects of musical stimuli on an individual’s focus. It accomplished this by playing a variety of pieces of music and recording participants’ focus (both quantitatively and qualitatively) as they watched a series of short videos.

At the beginning of the experiment, an EEG machine was fitted to my head and a short control test was run. Following this, I was asked a variety of baseline demographic questions, including my musical preferences, before diving into the experiment proper. Six pieces of music were played, one at a time, over six different videos. Initially, I was wary of the music being played and was unsure whether six pieces were enough. With such a diverse range of genres available, I wondered whether Classical, Pop and Binaural tones provided enough variance to support such broad claims about the effectiveness of music.

The videos themselves presented individuals displaying different emotions, some of which were acted, and others genuine. Our task was to judge whether the emotion being displayed was ‘genuine’ or ‘acted’ – and report back to the examiner. Finally, I was asked to make a brief comment on how the music affected my focus on the clips. This was easy enough; however, I would have preferred something more definitive – such as a rating system or an assortment of options to select from.

The results of the study suggested that binaural beats (which are often thought to increase an individual’s focus) tended to hinder one’s ability to focus. Conversely, music with a more sombre tone was found to improve focus within the group. The experiment concludes that specific styles of musical stimuli may be somewhat beneficial to improving concentration and focus, though notes that this could vary drastically depending on individual preferences.

Critical Analysis of the Experiment
On the whole, I believe this study did a great job of contributing to the discussion regarding music’s impact on focus; however, there is still room for further experimentation. The aim of the experiment was met, and the results helped answer the question being asked. The study itself was conducted well, and its use of an EEG machine to monitor physiological and emotional responses added a layer of quantitative depth that could not be reached using only a subjective, comment-based approach. Moreover, opting to have participants analyse videos showcasing human emotion, rather than comprehending text or solving maths problems, helped lower the barrier to entry and ensure all participants were on an even playing field.

This being said, the study is not without its flaws. As noted by the authors, the small sample size could affect the generalisability and reliability of the findings. Furthermore, having to provide short-answer responses rather than choosing from a definitive set of options meant that the study relied on participants being able to articulate their feelings concisely and accurately. Finally, though I believe analysing emotion within the videos is a much better approach than something algorithmic, the study may face issues regarding participants’ emotional maturity and their ability to analyse the situations presented.

Application and Key Takeaways Regarding HCI and UX
Regardless of the validity of the experiment, it is exceedingly clear that auditory stimuli have a mental influence (both positive and negative) on a user’s ability to focus. This aligns with Preece, Sharp, and Rogers (2019), who suggest that ‘web developers must understand how their (designs) elicit specific kinds of emotional responses in users.’ In terms of Human-Computer Interaction, it is clear that understanding the relationship between the developed stimuli and the emotional response of a user is crucial to creating an exemplary user experience. Moreover, the study does a remarkable job of highlighting the importance of user participation and UI/UX experimentation, providing valuable insight into how subjective experiences and emotional understandings can vary from user to user.

Experiment Two – “Popular eReaders” (Gedeon & Rampaul, 2015)

Experiment Summary and Objectives
Experiment two revolved around eReaders and the usability of their interfaces. The study placed us into pairs, asking one of us to unbox and operate the reader whilst the other acted as a scribe, recording our experience with the device. Following this, the roles were reversed and a new device was introduced. The devices in question included the Barnes & Noble Nook, the Kobo Touch, the Amazon Kindle and the Sony PRS-600.

The experiment took a scenario-based approach, suggesting that each device was an ‘unexpected gift, yet to be opened’, and focused on a variety of interface-based usability tasks. These included opening the device, navigating through content, and changing font sizes. For each task, we were asked to fill out a survey assessing our experience. This form utilised a Likert scale, presenting the options ‘Very Bad’, ‘Bad’, ‘OK’, ‘Good’, and ‘Very Good’. I much preferred this approach to the short-form verbal responses of experiment one, and found it far easier to respond to. From here, our results were compiled and analysed – with the study concluding that the Nook was the poorest-performing device, and the rest being somewhat similar to one another.
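To ground this in a web-development context, the sketch below shows one (purely illustrative) way such a Likert instrument could be modelled in TypeScript. The option labels are taken from the study’s survey, but every type and function name here is my own assumption, not part of the study’s materials.

```typescript
// Illustrative model of the study's five-point Likert scale.
// Labels match the survey; all identifiers are hypothetical.
const LIKERT_OPTIONS = ["Very Bad", "Bad", "OK", "Good", "Very Good"] as const;
type LikertOption = (typeof LIKERT_OPTIONS)[number];

// One survey entry: a participant's rating of a single task on a single device.
interface TaskResponse {
  device: string;       // e.g. "Kobo Touch"
  task: string;         // e.g. "changing font sizes"
  rating: LikertOption;
}

// Map a label to a numeric score (1-5) so responses can be aggregated later.
function toScore(rating: LikertOption): number {
  return LIKERT_OPTIONS.indexOf(rating) + 1;
}

// Example usage:
const response: TaskResponse = {
  device: "Barnes & Noble Nook",
  task: "opening the device",
  rating: "Bad",
};
console.log(toScore(response.rating)); // -> 2
```

Constraining ratings to a fixed set like this mirrors what made the Likert approach pleasant as a participant: there is no ambiguity about what a valid answer looks like.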

Critical Analysis of the Experiment
Though the results of this experiment are not incredibly conclusive, I believe that the assessment, much akin to the first, sets up a great foundation for further research within the field. If anything, by understanding the limitations of this study we can refine our own research methods and the ways we as developers can better understand the relationship between our users and interfaces.

On the whole, the experiment was run well. A sizeable selection of devices was chosen (and the reasoning behind the choices was clear), and the Likert-scale survey yielded less ambiguity in participant responses. Furthermore, the scenario-based approach added a sense of ecological validity to the study and helped substantiate the claims made by the participants. Finally, by assigning tasks related to the interface, rather than the reading functionality, the study was able to reduce potential biases and ensure the responses concerned only the UI of the device.

However, there are some key reasons the experiment may not have worked as well as the researchers hoped. Notably (and as mentioned by the authors), the sample comprised only twelve participants. Additionally, these participants were all from roughly the same age range and undertaking the same (technology-focused) university course – potentially influencing the findings. Should the experiment be run again in the future, I would argue that less technologically savvy individuals (or those outside the discipline) should be included in the mix.

Application and Key Takeaways Regarding HCI and UX
The most notable takeaway from this experiment lies in the importance of creating a fulfilling UI and UX, and the ways in which one can efficiently evaluate and empathise with one’s users. Though all devices within the study ‘worked’, it is evident that users were quick to judge and emotionally relate to each option’s interface, and to identify a clear outlier. The study also underscores the value of involving users within the development process, emphasising the need to continually evaluate and assess the relationship between your user and the interface being created. Zaphiris and Kurniawan (2006) reinforce this, arguing that ‘it is essential to include users in the process of website design, as users will do what they will – not necessarily what designers want or expect them to do. User behaviour can be anticipated up to a point, but not predicted in detail.’ On the whole, this experiment does a great job of outlining how different users connect differently with an interface, and how, by understanding this relationship, we as developers can improve overall user satisfaction and engagement.

Comparison of the Two Experiments

Overall, I believe both experiments went well. I enjoyed partaking in both, and each had a clear-cut goal in mind (and was structured accordingly). The experiments were run professionally and formally, and each provides an excellent foundation for its respective field of study. Regardless of which study you look at, both can (and will) serve as a valuable point of reference in the future.

The key difference, for me, lay in the content of each experiment. The first tended to be more engaging, which may simply be due to my interest in the topic. It felt more fluid and less idealised than the second, making it easier to respond to – especially regarding the freeform text sections. Notably, however, the scenario-based approach within the second made me feel more invested in the study, which may have influenced my responses. Finally, the use of the Likert scale within the second also made responses easier, as it provided set options to choose from rather than a short-answer comment section.

Relevance of User-Participation Experiments to Web Design and Development

As touched upon, user-participation experiments are essential within the field of web design and development. Modern web development requires a personalised, psychological approach to understanding one’s audience – rather than simply technical expertise in the field.

The studies outlined provide valuable insights into conducting effective user-participation experiments and the various methods through which this can be achieved – whether in the form of techniques for gathering quantifiable data (e.g. Likert scales or EEG readings), or ways of handling subjective verbal responses. Even elements such as accounting for sample-size biases, sorting and tabulating responses, or something as simple as making the user feel comfortable are all excellent examples of what needs to be considered when designing user-participation experiments. Furthermore, these experiments truly highlight the necessity of user participation within the HCI design and development process, and the variety of ways in which users can positively influence the usability of the final product (Stone, Jarrett, & Woodroffe, 2015).
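As a small, hypothetical illustration of that ‘sorting and tabulating’ step, a developer running a similar study might aggregate Likert scores per interface as follows – a minimal TypeScript sketch, assuming ratings have already been converted to 1–5 scores as in the earlier example:

```typescript
// Illustrative sketch: tabulating Likert responses into a mean score per device.
// Assumes each rating was mapped to a numeric score (1 = "Very Bad" ... 5 = "Very Good").
interface ScoredResponse {
  device: string;
  score: number; // 1-5
}

function meanScoreByDevice(responses: ScoredResponse[]): Map<string, number> {
  // Accumulate a running sum and count per device.
  const totals = new Map<string, { sum: number; count: number }>();
  for (const { device, score } of responses) {
    const entry = totals.get(device) ?? { sum: 0, count: 0 };
    entry.sum += score;
    entry.count += 1;
    totals.set(device, entry);
  }
  // Average each device's scores so any clear outlier stands out.
  const means = new Map<string, number>();
  for (const [device, { sum, count }] of totals) {
    means.set(device, sum / count);
  }
  return means;
}
```

Even a summary this simple would surface an outlier like the Nook – the broader point being that lightweight tooling, not just formal statistics, can make user-participation data actionable for a web team.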

Regardless of their potential flaws, the outlined experiments not only provide a concrete foundation of results for their respective fields, but can also act as a template for conducting user-participation experiments – both being exemplary learning opportunities for web developers looking to undertake similar studies.

Conclusion

It is clear that Human-Computer Interaction, and its implementation within modern systems, has been pivotal to the field of contemporary web development. By examining and participating in the experiments outlined, we gain valuable insights into HCI, the ways it affects our users, and the methods through which we can apply it in our own work – whether by developing a personalised and engaging UI, or by integrating user-participation experiments within the development process. Modern-day web developers have a responsibility to foster and facilitate these techniques; without doing so, one cannot create a meaningful and engaging contemporary web experience.

References

Gedeon, T. D., & Rampaul, U. (2015). Popular eReaders (Computer Science Technical Report No. CSTR-2015-14). Research School of Computer Science, Australian National University.

Nielsen, J. (1994). Usability engineering. Boston, MA: AP Professional.

Preece, J., Sharp, H., & Rogers, Y. (2019). Interaction Design: Beyond Human-Computer Interaction (5th ed., pp. 130-132). John Wiley & Sons.

Rahman, J. S., Gedeon, T., Caldwell, S., & Jones, R. L. (2021, May). Can Binaural Beats Increase Your Focus? Exploring the Effects of Music in Participants’ Conscious and Brain Activity Responses. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-6).

Stone, D., Jarrett, C., & Woodroffe, M. (2015). User Interface Design and Evaluation. (pp. 13-15).

Zaphiris, P., & Kurniawan, S. (2006). Human-Computer Interaction Research in Web Design and Evaluation. (pp. 4-10).