This study draws inspiration from provocative design methods to gain knowledge about sound preferences for the designed sounds of future vehicles. A particular population subset triggered the study: people with hearing impairments. To that end, we developed a public installation that stages a hypothetical futuristic city square. It includes three electric vehicles whose sounds can be designed by the visitor. The interface allows the user to interact and play with a number of provided sonic textures within a real-time web application, thus “playing” the design. This opens a design space of three distinct sounds that are mixed into an overall soundscape presented in a multichannel immersive environment. The paper describes the design processes involved.
Engaging students in and beyond course activities has been a working practice at both the KTH Sound and Music Computing group and the KMH Royal College of Music for many years. This paper collects experiences of involving students in research conducted within the two institutions. We describe how students attending our courses are given the opportunity to become involved in our research activities, and we argue that their involvement both contributes to developing new research and benefits the students in the short and long term. Among the assignments, activities, and tasks we offer in our education programs are pilot experiments, prototype development, public exhibitions, performing, composing, data collection, analysis challenges, and bachelor and master thesis projects that lead to academic publications.
In this paper, we report the experiences of students and teachers in a master course in Musical Communication and Music Technology at KTH Royal Institute of Technology. The students were exposed to vocal sketching [1], a novel sound design method, both as their course material and for the examination. The results in terms of learning outcome and course experience were more than convincing, while the results in terms of validating the efficacy of the method were meagre. As part of our research, we designed an experiment in which the students first interviewed preschool children, who were asked to describe a fantasy musical instrument, and then built the described instruments. The course schedule included lectures on vocal sketching, sound synthesis, sound quality, new musical instruments, parameter mapping, and music programming. The project work and idea were presented during the first lecture, eight weeks before meeting the children. The interviews took place in a workshop at the Swedish Museum of Performing Arts, which hosted an exhibition of new musical instruments. Student/child pairs visited the exhibition in order to 1) familiarize themselves with one another, 2) establish communication, and 3) get a common point of reference in terms of the exhibited instruments. After this process, the pairs completed an interview session inspired by [2]. The parents and teacher could join in if desired. The students had two weeks to build the instruments and present them at the museum. The purpose was not to evaluate the instruments, but to explore the vocal sketching method. The design and building phase was a prototyping task with which the students were comfortable. All design decisions needed to be set in relation to the course literature.
All the presented projects followed a scenario- and context-inspired design approach [3] in which a target solution needed to be established quickly, grounded in a basic understanding of the agent (the child), their goals, and their presumed actions [4], and in which the child mainly acted as informant [5]. While all the children could vocal sketch, few actually did so in the interview. Despite this, the finished instruments matched the children's expectations, and the course work satisfied the intended learning outcomes. As a research outcome, we suggest that future studies should include training in vocal sketching techniques to produce suitable sounds. As for the pedagogical outcome, the high quality of the works and the unusually positive course evaluations compared to previous years convince us that the unproven research method was appropriate as course material. We hypothesize that the bidirectional challenge of the research, in which students know that the method is experimental, further boosts student motivation.
A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.
Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms, and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on an evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible to everyone, regardless of age or ability, we invited children, teens, and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.
Sonification using audio parameter mapping involves both aesthetic and technical challenges and requires interdisciplinary skills at a high level to produce a successful result. With the aim of lowering the barrier for students entering the field of sonification, we developed and presented WebAudioXML at SMC2020. Since then, more than 40 student projects have demonstrated that the technology is highly beneficial for non-programmers learning how to create interactive web audio applications. With this study, we present new features for WebAudioXML that make advanced audio parameter mapping, data interpolation, and value conversion more accessible and easier to use. Three student projects served as the basis for the syntax definition, and through an annotated portfolio and video-recorded interviews with experts from the sound and music computing community, we present important insights from the project. The participants contributed critical feedback and questions that helped us better understand the strengths and weaknesses of the proposed syntax. We conclude that the technology is robust and useful, and we present new ideas that emerged from this study.
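The parameter mapping described above typically chains a range scaling, an interpolation, and a unit conversion. A minimal sketch in plain JavaScript (the function names are our own illustration, not the actual WebAudioXML syntax):

```javascript
// Linearly map a data value from an input range to an output range.
function mapRange(value, inMin, inMax, outMin, outMax) {
  const t = (value - inMin) / (inMax - inMin); // normalized 0..1
  return outMin + t * (outMax - outMin);
}

// Convert a MIDI note number to frequency in Hz (standard equal-temperament formula).
function midiToHz(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Example: map a hypothetical sensor reading (0..1023) to a pitch between
// MIDI notes 48 and 84, then convert it to Hz.
const midiNote = mapRange(512, 0, 1023, 48, 84);
const freq = midiToHz(midiNote);
```

In a Web Audio context, the resulting value would typically be assigned to an AudioParam such as an oscillator's frequency.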
Web Audio has great potential for interactive audio content: its open standard and easy integration with other web-based tools make it particularly interesting. Earlier studies identified obstacles for students in materializing creative ideas through programming; focus shifted from artistic ambition to solving technical issues. This study builds upon 20 years of experience from teaching sound and music computing and evaluates how Web Audio contributes to the learning experience. Data was collected from different student projects through analysis of source code, reflective texts, group discussions, and online self-evaluation forms. The results indicate that Web Audio serves well as a learning platform and that an XML abstraction of the API helped the students stay focused on the artistic output. It is also concluded that an online tool can reduce the time for getting started with Web Audio to less than one hour. Although many obstacles have been successfully removed, the authors argue that there is still great potential for new online tools targeting audio application development, in which accessibility and sharing features contribute to an even better learning experience.
Web technologies in general, and the Web Audio API in particular, have great potential as a learning platform for developing interactive sound and music applications. Earlier studies at the Royal College of Music in Stockholm have led to a wide range of student projects but have also indicated that there is a high threshold for novice programmers to understand and use the Web Audio API. We developed the WebAudioXML coding environment to solve this problem and added a statistics module to analyze student works. The current study is the first presentation and evaluation of the technology. Three classes of students, with technical and artistic backgrounds respectively, participated through online courses by building interactive, sound-based applications. We analysed the source code and self-reflective reports from the projects to understand the impact WebAudioXML has on creativity and the learning process. The results indicate that WebAudioXML can be a useful platform for teaching and learning how to build online audio applications. The platform makes the mapping between user interactions and audio parameters accessible to novice programmers and supports artists in successfully realizing their design ideas. We show that templates can be a great help for students getting started, but also a limitation when expanding ideas beyond the presented scope.
Creating an effective sonification is a challenging task that requires expert-level skills and knowledge in several disciplines. This study contributes the WebAudioXML Sonification Toolkit (WAST), which aims at reaching new groups who have not yet considered themselves part of the ICAD community. We designed, built, and evaluated the toolkit by analysing ten student projects using it, and we conclude that WAST met our expectations, led students to take a deep approach to learning, and successfully contributed to reaching the learning outcomes. The results indicate that WAST is easy to use, highly accessible, and flexible, and offers possibilities to share a sonification in any device's web browser simply through a web link, without installation. We also suggest that a sonification toolkit would become an even more creative environment with virtual instruments and mixing features typically found in Digital Audio Workstations.
We present WebAudioXML as a suggested candidate for establishing a standard for describing Web Audio configurations. The aim is to lower the barrier for artistic creators working with web audio applications, as well as to provide a modular system that can be integrated into larger applications. WebAudioXML provides means for making interactive music without having to learn a programming language like JavaScript, and it consists of an XML syntax specification and a parser. The framework has been developed with, and tested by, audio experts and lecturers from music production and Sound and Music Computing. Workshop participants report that WebAudioXML has potential for keeping focus on the creative process instead of on web development. We argue that an XML standard for Web Audio configurations would be beneficial for modular and collaborative development, and we therefore recommend a wider discussion on the topic. With this discussion, we aim to promote the artistic side of making interactive audio applications.
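As a rough illustration of the XML-syntax-plus-parser idea, the toy sketch below turns a flat, XML-like string into a list of node descriptors. The element names and the regex-based parsing are purely illustrative assumptions and do not reflect the actual WebAudioXML specification:

```javascript
// Toy parser illustrating a declarative audio-graph markup.
// Element names are illustrative only; a real parser would use a
// proper XML library rather than regular expressions.
function parseToyAudioXML(xml) {
  const nodes = [];
  const elementRe = /<(\w+)([^/>]*)\/>/g;
  let match;
  while ((match = elementRe.exec(xml)) !== null) {
    const [, type, attrText] = match;
    const attrs = {};
    // Collect key="value" attribute pairs for this element.
    for (const [, key, val] of attrText.matchAll(/(\w+)="([^"]*)"/g)) {
      attrs[key] = val;
    }
    nodes.push({ type, attrs });
  }
  return nodes;
}

const graph = parseToyAudioXML(
  '<oscillator frequency="220"/><gain value="0.5"/>'
);
```

From such descriptors, a real implementation would instantiate and connect the corresponding Web Audio nodes (e.g., OscillatorNode and GainNode).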
The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing, and there is an increased interest in promoting diversity and inclusion in music-making. We have designed a novel system, built into previously tested ADMIs, that aims at involving assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study, the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without errors that impacted the music negatively, even though the assistants mentioned experiencing a split in attention between different tasks and a feeling of insecurity about their musical contribution. We discuss the results in terms of how we perceive them as drivers of or barriers to reaching our overarching goal: organizing a joint concert that brings together students from the SEN school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.