DOI: https://doi.org/10.33008/IJCMR.2021.40 | Issue 7 | Oct 2021

Izabela Derda, Tom Feustel & Zoi Popoli (Erasmus University Rotterdam)


Abstract

Technology-driven design can support the creation of storytelling experiences that offer innovative ways of transmitting knowledge and information. Against the backdrop of criticism from museum and art gallery visitors, who demand more participatory approaches to exhibition design, new technologies have also emerged as tools for making art relevant again, and for making museums and galleries hybrid places where the virtual and digital aspects of stories can be combined with corresponding physical artefacts. Observing people's reactions and behaviours in those hybrid environments requires an examination of the ways in which they engage with space in the process of meaning-making, changing and adapting the space to suit their own ends. Theoretical literature has thoroughly parsed the concept of space, but in the context of mixed-reality experience design, it has become hazy again. In this article, we explore practitioners’ views on the ontological issues of defining experience space and discuss its in-betweenness, inseparability and unrealness.

Introduction

Historically, museums and art galleries were defined by their role as institutions of knowledge and cultural heritage (Lake-Hammond and Waite, 2010). The objects on display and the space available determined the layout of each exhibition, with exhibition design framed solely as a means to please the curator and content experts (Lake-Hammond and Waite, 2010). However, audience expectations have challenged the conventional, passive, static-object display to evolve into audience-centred experiences (Barnes and McPherson, 2019; Dal Falco and Vassos, 2017; Lake-Hammond and Waite, 2010; Muller and Edmonds, 2006; Mygind, Hällman, and Bentsen, 2015). New forms of technology-backed exhibitions transform museums into hybrid places where the digital narrative intertwines with artefacts, and visitors actively interpret and co-create what they see (Dal Falco and Vassos, 2017). The availability of ubiquitous computing in the form of smartphones and tablets opened new avenues for augmented reality and immersive technologies, and introduced location-based applications (Martin et al., 2011). GPS tracking enabled the display of digital media to follow audience members as they move through the physical exhibition space (Cheng and Tsai, 2013; Dunleavy and Dede, 2013). This led to new design trends, such as tech-supported gamification and experience hybridization, which incorporate the physical surroundings into the interactive design.


The turn toward expressing and exposing art through digital means, in the form of digital images and virtual worlds, corresponds with another 20th-century notion that rejected the idea that art is purely material. Rejecting colours and canvases, artists turned towards the world of immaterial expression (Grammatikopoulou, 2010), experimenting with art as an ‘idea’ or ‘feeling’ that objects create (LeWitt, 1969). However, seeing digital or mixed-reality experiences as immaterial presents multiple challenges in terms of researching experience-related behaviours and social transactions (de Castell, Jenson, and Taylor, 2014), and defining subjects, agents, and especially ‘spaces’ of experience. It is important to address those issues with critical attention, as the problem of ‘labelling’ can lead to the misinterpretation of visitors’ actions in hybrid environments where they are encouraged to engage in place-making by changing and adapting the space.


In our research, we often meet creators to discuss their work and creative processes. In these conversations, we came to the conclusion that, even among practitioners working daily on mixed-reality projects, there are problems with defining basic concepts and describing the space in which art experiences occur. Therefore, we interviewed 18 creators involved in the creation of mixed-reality experiences in order to understand their perception of the ontological problem of defining the experience space of art exhibitions and installations. In this paper, we engage with the fast-changing field of exhibition design and shed light on what we have identified as key issues of mixed-reality experience: in-betweenness, inseparability, and (un)realness. In our work we do not aim to offer new concept grids or methodologies for assessing space, but rather to highlight the matters that seem most challenging and elusive in our existing understanding of experience spaces.


What is space?

A space can be defined as the essential geographic, three-dimensional environment that serves as the basis for all existence, comprising both living beings and lifeless objects (Harrison and Dourish, 1996). Space hosts actions and interactions among both its living and inanimate inhabitants (Gaver, 1992). Hence, space is the three-dimensional world in which we live and act.


Media space integrates audio-visual and communication technologies to create a virtual environment that can be accessed by different parties and is often used to collaborate remotely (Dourish, 2006). Media space mimics properties of space, especially relational orientation and presence awareness, as those properties enable interaction. While a media space itself takes place within a virtual environment, there remains some ‘realness’ to the virtual space, which relates to the real-life actions that are required to access it, such as sitting in an office and using a computer or smartphone. Moreover, the intellectual work of processing, navigating, and interpreting the space takes place in a real brain. The media space can, therefore, be seen as a hybrid of real and virtual elements (Harrison and Dourish, 1996).


Augmented Reality (AR) technology makes use of the three-dimensional environment by overlaying virtual elements onto the real space (Azuma, 1997; Carmigniani and Furht, 2011). Manovich (2006) introduced the concept of augmented space, which he established as physical space that is enriched by locally personalised and dynamically changing multimedia content. Augmented space does not only refer to augmented reality, but is instead an umbrella term for all physical spaces that are loaded with information. The augmentation can arise from different technologies, such as multimedia screens, animated building fronts, signs, smartphones, computers, and augmented reality, but it can also be seen as data spaces (Manovich, 2006). Augmentation therefore adds more information and makes space multi-dimensional. Augmentation complicates the relationship between space and users, since content overlaid onto the physical space can be changed at any time, thereby changing the appearance of the space (Spohrer, 1999). While both media and augmented spaces are hybrid, they differ: media spaces take place virtually, while augmented spaces occur in the physical world.


However, space should not be mistaken for place, which is understood as a socially produced space. Lefebvre (1991) suggests that the social production of place is based on spatial practices, representations of space, and representational space. Spatial practices describe the observations of oneself and others in space. Observing people's practices in space reveals how they engage in place-making by changing and adapting the space for their own ends. Humphreys and Liao (2011) share this notion and add that the social production of space occurs through communication about and through place. The former implies that people talk about a particular space and thereby create a value-laden version of that space - a place. The latter is more indirect, and describes a place-making process that gives meaning to space through the people who stay there. By contrast, representations of space are not self-realised but imposed by others.


In the context of experience design, it can be said that augmenting technologies support differentiation between space and place and provide opportunities to alter the social construction of space, which implies changing the place (Graham et al., 2012). Even though the process of place-making does not belong to the scope of this paper, making a differentiation between space and place is key for further discussion.


Where the experience occurs

Before we move to an exploration of the issues around defining a ‘space’ of experience, it is important to acknowledge that space can be used to organise content according to its purpose. This implies that the role of space could be to structure digital content similarly to how physical space organises our everyday lives. Therefore, to determine the role of the physical space for mixed-reality experiences, we have to differentiate between three main types of augmentation. The first is fixed-to-screen, which can be observed in the vastly popular face and environment filters on social media platforms, such as Instagram and Snapchat. The physical space in which a user acts is irrelevant as it does not reshape or directly influence the experience.


The second type is omni-spatial and can be deployed in various spaces that fulfil its spatial requirements, without losing its purpose. An example would be an AR visualisation of a 3D model. This would work on any flat surface, such as a table or floor, which can be tracked by the AR device. Here, space only acts as a stage to host the content, but does not add to the experience. Some of the interviewed creators suggested that this kind of experience does not offer the full potential of immersive or augmenting technologies, as the physical space does not have a real function without a meaningful interaction between the virtual and the ‘real.’


The third kind of experience is location specific and achieves its full effect only when deployed at the dedicated space for which it was designed. In this case, space not only acts as a stage, but actively contributes to the outcome of the experience and gives it purpose.

As the first type of experience does not need space to be effective, we focus on the omni-spatial and location-specific kinds of experiences.


Our approach

Our research took a qualitative approach with 18 in-depth, semi-structured interviews with practitioners involved in the creation of mixed-reality experiences. Residing in Western Europe or the United States of America, these creators work across the globe in a variety of roles, ranging from developers to designers and AR artists. Accessibility and expertise dictated the sample selection; all the interviewees are recognised and, in most cases, highly awarded experts in the field, and lived in geographic proximity to the research team and their extended personal network.


The original idea behind the interviews was to explore both the defining features of immersive installations used in art and museum experiences and the creative processes behind them. However, it quickly became apparent that there is no common language in the field and the ontological issue of defining experience space came up naturally in our discussions. Further interviews were structured to include matching questions on this problem. Some of the interviewees chose to explore the topic using examples of their own and other creators’ work, which we share here in support of our discussion.


This approach has allowed us to explore the topic within broader contexts of creative processes, provide a more in-depth view of installation design, and identify three key issues related to the ontology of space in mixed-reality experiences, namely in-betweenness, inseparability, and unrealness.


There is nothing there

In-betweenness

‘The problem [with AR-based mixed-reality art experience] is that you cannot easily define the [art] object. The exposed artefact alone makes little-to-no sense to a visitor; the app will not work without the object; and what you see happening on the screen on your mobile is not really on the screen. It’s there. It’s in-between.’ (Interviewee 14).


Augmented art experiences are often achieved by overlaying digital content onto physical space and objects with AR technology. The displayed information can take multiple forms of data visualization and appear as a holographic projection in the visitors’ fields of view. Location-based augmentation integrates their subjective perception of reality with the digital overlay and creates an impression of virtual and real worlds merging into one. However, it poses the issue that the actual content of the installation is ‘hung’ in-between the worlds of real and virtual.


An example of this kind of in-betweenness (as suggested by two of the interviewees) is the augmented reality exhibition, Mirages and Miracles by Adrien M and Claire B. In this series of installations, visitors can observe motionless, inorganic objects, like stone, which – when viewed through a tablet computer – ‘come to life.’


Mirages and Miracles trailer, © Adrien M and Claire B


‘The tablet becomes the window to these previously undiscovered realities’ (ars.electronica.art), unlocking the world ‘in-between’ where virtual objects and people appear, move, and talk.


Even though existing definitions of the experience space can to some extent be adapted to the principles of mixed-reality design by defining augmentation as an overlay (as mentioned before), they do not really grasp the problem of ‘in-betweenness,’ which appears in cases like ‘Mirages and Miracles.’ Likewise, augmented experiences challenge the idea of presence defined as ‘a spatial relationship to the world and its objects. (…) Something that is ‘present’ is supposed to be tangible for human hands, which implies that, conversely, it can have an immediate impact on human bodies’ (Gumbrecht 2004, xiii). This definition of presence implies that if we are not able to grab something with our hands, it is not ‘present’ in the space we occupy. Does it mean the object is present ‘elsewhere’? If so, where? Or is it not present at all?


The lack of terminology to describe in-betweenness or to assess presence in the design and experience of mixed-reality exhibitions was mentioned by most of the interviewees. As they explained, the industry has not yet developed labels consistent enough to establish a common language of mixed-reality design. This issue has also been acknowledged in scholarly writing, which struggles with consistent terminology on the topic (de Castell, Jenson, and Taylor, 2014; Yung and Khoo-Lattimore, 2019). Likewise, the creators mentioned the need for stronger differentiation within concept design grids among the different types of experiences that can be created with immersive technologies. This, in their view, could help with developing a specific terminology for issues like in-betweenness. As Interviewee 8 explained regarding the concept of immersion, the mechanism and experience of immersion differ greatly if we compare AR, VR, and immersive projection: ‘This term should somehow play with the fact that [immersion in AR] is physical. You're more engaged with the [physical] world because you get additional content and additional features (…) [We need to] make it clear to people that this is kind of like one of the core differences between augmented reality and virtual reality.’ And Interviewee 5 agreed: ‘[AR experience] feels seamless to reality, it's even more immersive than just having a lot of visuals, so we shouldn’t agree to just putting all eggs into one basket.’ Many argued that the key difference here is that the physical context is of high importance for the visualisations and overlays in location-specific experiences. However, such an approach can be considered limiting, as it restricts immersion to physical experience, overlooking other types of immersion, such as spatial or narrative immersion.
Moreover, as we explore further, physicality can play an important role in VR experiences as well; therefore, physical immersiveness cannot be considered AR-exclusive, which makes the issue even more complex.


Interviewee 3 suggested that ‘augmented reality is more about these virtual worlds interacting with the physical world.’ In practical terms, this implies that: ‘The essence [of AR location-based experiences] is that it's not just a layer on top of something. It starts becoming something meaningful when it actually takes place in and changes things in these physical realities’ (Interviewee 10). The notion of in-betweenness, therefore, implies that the success of experience execution relies on the interplay of the virtual and the real, while assigning the space an important (if not a key) role in their interaction.


Inseparability

The problem of identifying spaces of experience is not exclusively relevant to AR applications, but can also apply to other kinds of mixed-reality art experiences. Paula Strunden, a multisensory mixed-reality designer and one of our interviewees [1], created a location-based VR experience, Micro-Utopia: The Imaginary Potential of Home. The installation ‘proposes a shared, immersive, and interactive version of a home, where space is born from the finely-tuned sensorial interplay between the body and virtual/physical objects connected to the Internet of Things’ (micro-utopia.org). In our conversation, Strunden highlighted that not enough attention is given to phenomenological accounts of space (both in the context of art production and consumption), meaning, how we interact with and perceive the space, as well as how visitors will experience it. ‘Our spatial perception does not only consist of what we see through our eyes, but we perceive it very strongly by a sound that the spaces emits, the smell it has, by the tactilities …, like materials, texture, your very own locomotion … So, I think this more phenomenological understanding has a lot to do with embodiment and the presence of your body within that space. And understanding your environment through your body.’ [2] This notion is visible in Strunden’s works. In Micro-Utopia, a visitor explores designed physical spaces with a VR headset on, but can move within both physical and virtual spaces to explore objects placed there. Only the virtual space can be seen, while the physical space can be experienced with all the other senses. This makes the experience of space multi-layered: physical and virtual realities become inseparable in providing the full sense of the installation, and the experience space(s) cannot be investigated in isolation from one another.


Micro-Utopia trailer © Paula Strunden


Axonometric drawing of tactile objects encapsulating virtual spaces upon being touched by the 'inhabitant', Micro-Utopia: The Imaginary Potential of Home, Paula Strunden, 2018.


The experience challenges visitors’ perceptions of realness and again questions the idea of ‘presence.’ In this case, the visitor is able to grasp the physical objects; however, the objects experienced through touch are not identical to those experienced through vision.


The inseparability of an experience, which occurs in parallel in both physical and virtual worlds, challenges definitions of three-dimensional space and augmented space (which lacks physical spatiality) by bonding them into one multidimensional, meaning-laden place through the visitor’s simultaneous interaction with both. At the same time, inseparability provokes the question of how such parallel spaces should be labelled, explored, and assessed in both design and research.


(Un)realness

‘Being able to add these digital elements kind of creates a layer of magic on top of the physical world’ (Interviewee 6)


In mixed-reality experiences, the goal is often to create an impression that the digital objects belong to the ‘real’ and are naturally placed in a physical space of experience. However, it is often the case that the ‘magical layer,’ as mentioned by one of the interviewees, is perceived as unreal and in strong opposition to the physical world. This poses challenges for creating and perceiving new experience spaces that merge the virtual and the real (Interviewees 3, 8, 12 and 14).


In popular discourse, the realness of mixed-reality experiences is often linked to believability, which is challenged by public opinion, as explained by our interviewees: ‘Sometimes you have those people who are just disappointed with the experience. They come with really high expectations about (what they imagine as) realness of the experience, and then they realise that the experience didn’t place them, let’s say, in Haiti, and they didn’t actually leave a gallery for a moment, then well, they really challenge the idea of the installation’ (Interviewee 14). The interviewees linked the problem of mismatched expectations about realness with what they considered to be a common misconception about mixed-reality and VR experiences. People expect high levels of immersiveness from mixed-reality experiences, which they confuse with the virtual worlds offered by VR. As one of the interviewees (3) explained: 'It's not about forgetting the reality, it's more about forgetting that this is a projection … It is a lot about believing in the content that you're being exposed to and trusting it and acting upon the laws of that content and accepting as part of reality.’ Therefore, the goal of the creators is not necessarily to make a visitor forget about the ‘outside’ world (as is desirable with VR-driven immersion), but to achieve the correct placement in the context of the offered experience, which will make it more ‘reality-like.’ As another interviewee explained:


If you are in your own living room and you use the app that allows [you] to place a lion just there, you really need to make sure to include the space context. You cannot scale the lion to the space, you want to see a real-life-size animal standing there between your sofa and a table. We could say this is the first step towards believability. Then you probably would like to add more sensual experience, so the lion could roar, when he sees you. And finally, if you can make this lion jump in on your table, then you probably can make the user really invested in the experience. Will they be immersed? Probably. Will they forget about their spatial context of being in their own home? They shouldn’t. The same goes for [art] exhibitions – you want to embrace the space and make it matter. (Interviewee 14)

This can be linked to the concept of immersion, which is not so much bound to digital realism, but instead describes a human state of mind that arises from being engrossed in a particular activity, and can be understood as a gradual cognitive state of engagement building toward the stages of flow and presence (Jennett et al., 2008). This implies that if an AR application helps fulfil a particular task, a user might fully immerse themselves in the AR to complete the task as efficiently as possible; this refers to the concept of challenge-based immersion (Gámez, 2009).


In this sense, it can be said that augmenting technologies aim to merge spaces rather than push for deep engrossment in the experience space (Interviewee 3). This opposes the virtual, enclosed notion of immersion, and implies that the visitor maintains awareness of the real world during the experience, remaining constantly aware of the proximal happenings while participating. Evident here is the link to Milgram and Kishino’s (1994) reality-virtuality continuum, which arranges different immersive technologies according to their degree of immersion in a synthetic virtual environment. AR is located toward the real-environment end of the spectrum, as it features no or low immersion in a fully synthetic, digital environment while the user remains immersed in the real, physical world.


Moreover, the majority of our interviewees linked the issue of believability with the inseparability of experience and space, pointing to multisensuality, a design principle that is often overlooked in the design process. In the discussion, most of the practitioners highlighted the contextual and phenomenological use of space as crucial for the creation of believable experiences, but some (Interviewees 3, 8 and 13) discussed the need to offer the opportunity for spatial and social interaction within the experience. As explained by one of the experts: 'AR really makes the world adjust around you. And that's the contextual aspect where [the object] targets you, but it also has to satisfy everyone else, because everyone sees the same thing' (Interviewee 3). The findings suggest that the ability of AR to enable shared experiences plays an integral role in generating believability and driving immersion. Different users of the same AR experience witness the same digital content, but from different perspectives and through separate AR devices. The emerging social consistency increases visitors’ belief in the realness of the augmented content: if several users witness the same augmented content, they are more inclined to forget that the content is virtual and to accept it as believable and real (Interviewee 3).


In research practice, the problem lies in the enduring distinction between ‘real’ and ‘virtual,’ and in how to approach the ‘unreal’ as an object of inquiry. If researchers understand augmented spaces (and, in consequence, augmented experiences) through this distinction between real physical space and not-so-real digital overlays, they may treat the immaterial differently, ‘by supposing that virtual worlds and the subjects and communities acting within them are ontologically less than “real”’ (de Castell, Jenson and Taylor, 2014: 8). Therefore, they may feel licensed to behave or participate differently than they would in ethnographic research contexts in the 'real world’ (de Castell, Jenson and Taylor, 2014). Following this logic, visitor engagement and interaction would be assessed differently than they would be for experiences taking place solely in a physical environment. There is, therefore, a danger of distorting anthropological observations of people’s reactions and behaviours in those hybrid environments, which de Castell, Jenson and Taylor (2014) argue could undermine research integrity at its foundations, as the ontological ambiguity subjects scholars to allegations of intrusive or misleading scientific methods, as well as accusations of “disguised observation” (Erikson 1967, as cited by de Castell, Jenson and Taylor, 2014).


Final thoughts

In this article, we have explored three challenges in defining the experience space in mixed-reality art experiences based on the accounts of our interviewees. In our mapping, we have identified the issues of (1) in-betweenness, (2) inseparability, and (3) (un)realness. The concepts are strongly correlated with each other and can be used to describe the problem of elusiveness of space in mixed-reality environments. In-betweenness can be defined as the impression that the content of the installation is ‘hung’ in-between the real and virtual worlds, leading to the perception of the (un)realness of the experience. (Un)realness can be understood as seeing digital overlay in strong opposition to the physical world, onto which it is projected, and therefore, perceiving it as less authentic than the ‘real’ space. This, in turn, can lead to the perceived lack of believability and truthfulness of the entire experience. Both concepts link to the idea of the inseparability of the physical and virtual realities within the phenomenological experience of the installation. Inseparability describes the merging of virtual and physical worlds through multisensory experience, leading to a meaningful interdependency of the virtual content and the physical space.


It can be concluded that the relation between augmenting technologies and their immediate spatial context is highly interdependent and has profound implications for the purpose of augmentation, the user experience, and its ability to mediate the sense of place. While the literature that aims at defining AR technology (Azuma, 1997; Carmigniani and Furht, 2011; Milgram and Kishino, 1994) generally neglects AR’s ability to negotiate the reciprocal flow of data between media and user in the process of experience space creation, this study points out its importance, in alignment with Manovich’s (2006) assumption about augmented space. Strikingly, Azuma’s (1997) fundamental definition of AR addresses the importance of including all human sensual stimuli in experience creation, but this is often overlooked in conceptualisations of AR and other augmenting technologies, as well as in numerous handbooks for designing with AR and VR, which leaves young designers ‘wandering in the fog of experiments’ (Interviewee 13). Therefore, this research stresses the importance of multisensuality in describing the spatial aspects of augmented experiences and calls for rethinking the conceptual grid, which would allow designers and researchers alike to move more freely in the area of experience design, and help us to understand visitors’ behaviours in the spaces of experience, as well as the interdependencies between visitors, spaces, and content.


Acknowledgements

This work was supported by the Netherlands Organisation for Scientific Research under Grant KI.18.044.


References

  • Azuma, Ronald T. “A Survey of Augmented Reality.” Presence: Teleoperators and Virtual Environments 6, no. 4 (1997): 355–85. https://doi.org/10.1162/pres.1997.6.4.355.

  • Barnes, Pamela, and Gayle McPherson. “Co‐Creating, Co‐Producing and Connecting: Museum Practice Today.” Curator: The Museum Journal 62, no. 2 (2019): 257–67. https://doi.org/10.1111/cura.12309.

  • Carmigniani, Julie, and Borko Furht. “Augmented Reality: An Overview.” In Handbook of Augmented Reality, 3–46. New York, NY: Springer, 2011.

  • Castell, Suzanne De, Nicholas Taylor, Jennifer Jenson, and Mark Weiler. “Theoretical and Methodological Challenges (and Opportunities) in Virtual Worlds Research.” Proceedings of the International Conference on the Foundations of Digital Games - FDG '12, 2012. https://doi.org/10.1145/2282338.2282366.

  • Cheng, Kun-Hung, and Chin-Chung Tsai. “Affordances of Augmented Reality in Science Learning: Suggestions for Future Research.” Journal of Science Education and Technology 22, no. 4 (2013): 449–62. https://doi.org/10.1007/s10956-012-9405-9.

  • Dourish, Paul. “Re-Space-Ing Place.” Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work - CSCW '06, 2006. https://doi.org/10.1145/1180875.1180921.

  • Dunleavy, Matt, and Chris Dede. “Augmented Reality Teaching and Learning.” Essay. In Handbook of Research on Educational Communications and Technology, 735–45. New York, NY: Springer, 2014.

  • Falco, Federica Dal, and Stavros Vassos. “Museum Experience Design: A Modern Storytelling Methodology.” The Design Journal 20, no. sup1 (2017). https://doi.org/10.1080/14606925.2017.1352900.

  • Gámez, Calvillo. “On the Core Elements of the Experience of Playing Video Games.” Doctoral dissertation, University College London, 2009. Retrieved from UCL Discovery.

  • Gaver, William W. “The Affordances of Media Spaces for Collaboration.” Proceedings of the 1992 ACM conference on Computer-supported cooperative work - CSCW '92, 1992. https://doi.org/10.1145/143457.371596.

  • Graham, Mark, Matthew Zook, and Andrew Boulton. “Augmented Reality in Urban Places: Contested Content and the Duplicity of Code.” Transactions of the Institute of British Geographers 38, no. 3 (2012): 464–79. https://doi.org/10.1111/j.1475-5661.2012.00539.x.

  • Grammatikopoulou, Christina. “Stepping Towards the Immaterial: Digital Technology Revolutionizing Art.” Papers Presented at the Conference in Tartu, 2010.

  • Gumbrecht, Hans Ulrich. Production of Presence: What Meaning Cannot Convey. Stanford, CA: Stanford Univ. Press, 2007.

  • Harrison, Steve, and Paul Dourish. “Re-Place-Ing Space.” Proceedings of the 1996 ACM conference on Computer supported cooperative work - CSCW '96, 1996. https://doi.org/10.1145/240080.240193.

  • Jennett, Charlene, Anna L. Cox, Paul Cairns, Samira Dhoparee, Andrew Epps, Tim Tijs, and Alison Walton. “Measuring and Defining the Experience of Immersion in Games.” International Journal of Human-Computer Studies 66, no. 9 (2008): 641–61. https://doi.org/10.1016/j.ijhcs.2008.04.004.

  • Lake-Hammond, Alice, and Noel Waite. “Exhibition Design: Bridging the Knowledge Gap.” The Design Journal 13, no. 1 (2010): 77–98. https://doi.org/10.2752/146069210x12580336766400.

  • Lefebvre, Henri. The Production of Space. Translated by Donald Nicholson-Smith. Oxford: Blackwell, 1991.

  • LeWitt, Sol. “Sentences on Conceptual Art.” 0-9, New York, 1969. Print.

  • Manovich, Lev. “The Poetics of Augmented Space.” Mediatecture, 2010, 304–18. https://doi.org/10.1007/978-3-7091-0300-5_26.

  • Martin, Sergio, Gabriel Diaz, Elio Sancristobal, Rosario Gil, Manuel Castro, and Juan Peire. “New Technology Trends in Education: Seven Years of Forecasts and Convergence.” Computers & Education 57, no. 3 (2011): 1893–1906. https://doi.org/10.1016/j.compedu.2011.04.003.

  • Milgram, Paul, and Fumio Kishino. “A Taxonomy of Mixed Reality Visual Displays.” IEICE Transactions on Information Systems E77-D, no. 12 (1994): 1321–29.

  • Muller, Lizzie, and Ernest Edmonds. “Living Laboratories: Making and Curating Interactive Art.” ACM SIGGRAPH 2006 Art gallery on - SIGGRAPH '06, 2006. https://doi.org/10.1145/1178977.1179120.

  • Mygind, Lærke, Anne Kahr Hällman, and Peter Bentsen. “Bridging Gaps between Intentions and Realities: a Review of Participatory Exhibition Development in Museums.” Museum Management and Curatorship 30, no. 2 (2015): 117–37. https://doi.org/10.1080/09647775.2015.1022903.

  • Spohrer, J. C. “Information in Places.” IBM Systems Journal 38, no. 4 (1999): 602–28. https://doi.org/10.1147/sj.384.0602.

  • Yung, Ryan, and Catheryn Khoo-Lattimore. “New Realities: a Systematic Literature Review on Virtual Reality and Augmented Reality in Tourism Research.” Current Issues in Tourism 22, no. 17 (2017): 2056–81. https://doi.org/10.1080/13683500.2017.1417359.

Notes

[1] The interviewee consented to her identity being disclosed in the context of this discussion of her work.

[2] The phenomenological approach to space was also highlighted by other interviewees (2, 8 and 12).

DOI: https://doi.org/10.33008/IJCMR.2021.41 | Issue 7 | Oct 2021

Steve Whitford (University of Portsmouth)


Abstract

This article focuses on the somewhat neglected (at least within scholarly circles) area of location-based sound recording, drawing much-needed critical attention to the intricacies and skills involved in location sound recording within realist filmmaking – both scripted and unscripted. Through my own practice-as-research, I aim to reimagine an ontological definition of location sound recording by proposing that a reinvigoration of the ‘realist’ genre can be achieved by connecting the storytelling skills of recording for single camera with the new opportunities afforded by immersive audio technologies – ambisonics here being a vital part of that development process. I demonstrate how use of such immersive audio technologies offers new creative opportunities for realist makers and audiences, based on the unique experience of geographical place and physical event that immersive audio delivers.

Introduction

‘The potential of the ambisonics mic is limitless and we’re only just starting to see what content producers can really achieve with it now.’

– Rode, Australia, in an interview with the author, July 2019.


The art of location-based sound recording has been a neglected area of academic research. I seek to address this by drawing critical attention to the intricacies and skills involved in location sound recording within ‘realist’ filmmaking – both scripted and unscripted. In this article, itself something of a short methodological reflection on the opportunities and challenges presented by the practice of immersive location sound recording, I show how this art continues to be central to the creative process of production, in driving the narrative and shaping the text’s influence within the profilmic space. I hypothesise that the realist sound recordist’s role has an authorial voice and a creative agency. I use this article as the beginnings of a reimagined ontological re-definition for the practice of location sound recording by proposing that a reinvigoration of the realist genre – unscripted, in particular – can be achieved by connecting the storytelling skills in recording for single camera with the new opportunities afforded by the emerging technologies of immersive field sound recording. I argue that deploying an ambisonic-centred location sound recording method, fused with the existing art of recording actuality sound, will offer new creative opportunities for realist makers and audiences, now presented with an exciting ability to experience a sense of the geographical place and physical event that immersive audio delivers.


The sound recordist in academia

Scholarly study of sound in film has so far focused primarily on music and post-production sound design in fiction narrative cinema (Weis and Belton, 1985; Altman, 1992; Beck, 2008; Sonnenschein, 2001). The function of sound in documentaries has been a relatively under-researched area in academia, as well as being largely overlooked by film critics and often lacking the recognition it deserves within the industry. As veteran documentary-maker Roger Graef recently observed in an interview with the author: ‘Ah, sound – the Cinderella part of documentary filmmaking’ (2020).


Indeed, it is often the director who is credited as the sole author of a film, and if critical discussion recognises the role of the crew at all, it is usually around cinematography, but rarely sound. Outside of the realms of music and post-production, the role of the sound recordist in realist film production, and its part in shaping the authorial voice and creative agency of the film text, has rarely been studied [1]. In part this is a result of the historical dominance of auteur theory, which tended to ascribe authorship to the individual vision of the film’s director. Yet even when the collective authorship of the filmic text is recognised by scholars, the role of the sound recordist remains ambiguous. For example, in arguing for a collective approach to authorship, Paul Sellors has commented:


Is the sound recordist a member of a film’s collective authorship? This is not so simple to determine. Some sound recordists will count as authors under a notion of collective filmic authorship while others will not. It will depend on the recordist's contribution to the filmic utterance... we need to understand this person’s role in producing not just the material film, but also its utterance (2007: 269).

Illustrating this authorial ambiguity, Sellors further observes that ‘Auteurists have tried to explain a film’s coherency by overvaluing the authorial control and artistic aptitude of an individual’ (Sellors, 2007: 268). Gaut, as quoted by Sellors, argues that authorship should in fact be ‘multiply classified: by actors, cameramen, editors, composers, and so on’ (ibid, 267). As Sellors summarises this perspective: ‘Gaut, instead, looks at the function of a collective to get from individual contributions to a completed text’ (ibid, 268).


Continuing in this direction, this article draws critical attention to the intricacies and skills involved in the art of the location sound recordist, seeking to render visible its ‘utterance’, to borrow Sellors’ term, and to show how this art continues to be central to the creative process of production in driving the narrative and shaping the text’s influence. I will focus specifically on the role of sound recording within the inter-related sub-genres of realist filmmaking: social realism (scripted) and observational documentary (ObsDoc), where, as I will seek to show, sound carries a significant indexical value for the film text’s assertion about its relationship to reality. Although there are fundamental differences between Social Realist fiction films (scripted, using actors) and Observational Documentaries (unscripted, using social ‘actors’), the two genres, and their often-hybridised forms, share a similar approach to the depiction of the pro-filmic event – scripted or unscripted.


Understanding realist filmmaking

In both cases, the aspiration is to use filmic devices to create for the viewer a sense of ‘being there’ [2] and to minimise, if not eliminate, the inherent mediation of a reality created by the camera and sound recordist. Kuhn and Westwell define the profilmic space as ‘The space created within the film frame as opposed to the space of the real world’ (2012), or the world the lens sees. The ‘authenticity’ of the sound that is recorded in the pro-filmic event, or what I would term ‘The Truth of Sound’, is an essential component in achieving that aspiration of creating the sense of ‘being there’.


Fred Wiseman, one of America’s most prominent directors of documentary who, along with contemporaries the Maysles brothers, Don Pennebaker and Richard Leacock, helped establish the American Direct Cinema tradition of the 1960s, observed in an interview with David Winn for The National Academy of Television Arts and Sciences that:


‘Observational cinema somehow seems to suggest that you just turn the camera on and let things happen in front of you, when in fact all aspects of movies are the result of thousands of choices’ (emmyonline.org, 2014).

It is these choices that define the observational documentary genre, but Wiseman’s comments also highlight the tension between the ambition of ObsDoc filmmakers to minimise the mediation of reality – aspiring to present to viewers a sense of ‘being there’ – and Wiseman’s own view that ‘the notion that cinema is the truth, or that anything is the truth is preposterous… Everything is subjective, and everything represents a choice.’ (ibid. 2014).


Those thousands of unscripted choices within the ObsDoc production process pose questions around the authorial voice, too, not only because of the inherent tension Wiseman identifies between a film grammar that seemingly presents to the viewer ‘reality’ unfolding ‘as it is’ and the constructive nature of filmmaking, but also around the particular production context of the genre. Observational documentary does not just exist in spectatorship; crucially, it also exists in the actual: in the physical event-space. This involves literally sharing ‘slices of life’ with protagonists and being part of unscripted events, thus requiring a ‘reactive’ approach relying in many ways on the relationships forged within the making process: inter-protagonist; inter-spatial and inter-makers.


I consider the inter-makers’ choreography as extending beyond the pro-filmic space to a specific ‘Action-space’, demonstrating how the camera operator and the sound recordist perform collaborative yet individualised and autonomous roles. The makers’ choreography requires an equivalence of cross-craft empathy in facilitating the creation of each maker’s own independent narrative, opting to privilege accordingly – both contributing to the ‘audio-visual scenography’, which Chion defines as ‘everything in the conjunction of sounds and images that concerns the constructing of a fantasmatic diegetic “scenic space”’ (2009: 469), with meaning deriving from ‘live’ juxtaposition – or some of the ‘thousands of choices’ that Wiseman identified.


Sound’s indexical relation to the authenticity of realist filmmaking

The sound recordist’s agency centres on choices made in the event, along with specific pre-emptive selections of audio equipment, chosen and deployed explicitly to gather ‘audio signs’ that contribute to ‘meaning’ and to questions around the film’s text and its reception. Paul Sellors identifies this authorial contribution as ‘utterance’ (ibid, 268), which he defines further as ‘collective authorship through theories of collective intentions’ (ibid, 268). Perhaps, in a Venn diagram of ‘thinking’ (analysis) and ‘hands-on’ (technical) elements, the ObsDoc location sound recordist’s utterance sits in the overlap.


That utterance clearly affirms the importance of sound’s indexical relation to the authenticity of the realist text, whether scripted or unscripted. Realist director Ken Loach, for example, observed that ‘the sound is true when it reflects the real experience of being in a location. … if the sound is not true, then the whole authenticity of the film is undermined’ (Author interview, 2020). Similarly, Loach-collaborator (editor) Jonathan Morrison reflected that ‘the authentic sound that we get from [the recordist] is all important …The Realism of the Sound. …what we make is social realism, so the sound has to be real’ (Author interview, 2020). Loach-collaborator (sound recordist) Ray Beckett also refers to his approach to recording sound as ‘direct sound’, as ‘capturing the moment in front of the camera’ (Author interview, 2020), which is identical whether he works on documentary or fiction social realist film. It is an approach which is akin to what Robinson described as a commitment to ‘jishizhuyi’ [translated as ‘record-ism’], a kind of ‘on-the-spot realism’ [translated as ‘document-ism’] (Luku, 2002: 1-30). Robinson elaborates: ‘In the context of documentary practice, this entails the realisation of a spontaneous and unscripted quality that is a fundamental and defining characteristic distinguishing [jishizhuyi]’ (ibid., 1).


Technological filmmaking advancements have been an historic enabler of content innovation, transforming how makers have utilised, developed and deployed those innovations to explore new opportunities in developing genre-specific film languages. For example: the change from 35mm to 16mm film cameras; separate sound (Nagra); timecode; zoom lenses; radio mics, and so on. As Leacock observed after shooting documentary on 35mm film cameras:


This experience gave me a goal with clearly defined standards. I needed a camera that I could hand hold, that would run on battery power; that was silent, you can't film a symphony orchestra rehearsing with a noisy camera; a recorder as portable as the camera, battery powered, with no cable connecting it to the camera, that would give us quality sound; synchronous, not just with one camera but with all cameras. What we call in physics, a general solution (Leacock, 1993).

Indeed, as Barnouw, cited in Robinson, recognises of Fred Wiseman: ‘This tradition emerged in the wake of specific technological developments – most obviously the disaggregation of camera, microphones and tape recorder, enabling synchronised sound shoots for the first time’ (Robinson, 1993: 11).


It sometimes goes unnoticed that, as well as editing and de facto directing, Fred Wiseman was, and still is, the sound recordist on his films. This technological enabling process continues to evolve storytelling possibilities and choices, and to open new markets, requiring a continued evolution of the ‘soft skills’ underpinning the recordist’s utterance. The investigation of the role and creative agency of the sound recordist becomes even more complex, yet more relevant, in the currently transforming landscape of film production, with the recent emergence of consumer-accessible VR and 360-degree immersive technologies and their vibrant, cross-platform experimentation. The aspiration for immersion, interactivity and viscerality – in other words, creating a sense of ‘being there’ – is central to these new technologies, and they afford the potential to enhance this experience for viewers in ways that previous technologies could not. Many filmmakers are experimenting with these new technologies within the documentary form, stemming from a similar aspiration to that of the ObsDoc genre – to put the viewer ‘within’ the film space, or to create a sense that ‘…there’s no separation between the audience watching the film and the events in the film’ (Wiseman in Atkins, 1976: 43). But these new experiments, as before, focus predominantly on the visual, relying largely on 360-degree cameras and XR visual designs to create a sense of visceral immersion.


It is therefore key that researchers consider the contribution and effect of an immersive location sound recording methodology on the prevailing classic single-camera, ‘2D’ filming methodology, and understand how this might contribute to the reinvigoration or reimagining of the realist/ObsDoc genre in an age of immersive media. The guiding hypothesis is that this positioning widens our understanding of how visceral immersion can be achieved, and specifically that ambisonic audio contributes to it.


So, what is ambisonics?

Robjohns in ‘Sound On Sound’ explains that: ‘Ambisonics was conceived in the late 1960s as a complete recording and reproduction system capable of recreating accurate three-dimensional sound stages...’ (Robjohns, 2001). Ambisonics records and reproduces 360-degree immersive sound from a single microphone source, giving four channels of audio recorded in the field. In post-production, software converts these channels so that it is possible to recreate the effect of any conventional microphone polar pattern, pointed in any direction within the 360-degree audio soundscape. Furthermore, as ambisonics is ‘speaker agnostic’, a mix can then be transcoded to any transmission/consumption format, from mono to full 360-degree immersive stereo with height information (see Binaural later). An imperfect analogy for ambisonics is that of an ‘audio lens’ which can be zoomed, focused, panned and tilted to fine-tune the overall sound pick-up after the event: software can steer a ‘virtual hyper-cardioid mic’ towards a sound source, meaning that some audio ‘focus’ decisions can be deferred to post. Google has recently adopted ambisonics as the audio format of choice for VR (virtual reality), and audio companies are now marketing ambisonic-capable location microphones and recorders.


In terms of defining a field recording methodology, ambisonic microphone movement around ‘action’ is not the classic reactive mono shotgun ‘point at action/speech’ mode: the microphone can be positioned to allow action to take place around it, with software later steering a ‘virtual shotgun mic’ towards a sound source. Crucially, in an interview with the author in 2019, John Leonard, a long-standing practitioner of location ambisonic recording for international theatre sound design, advises: ‘...if you’re a distance away from the person talking, you can zoom-in [in post] … but it’s like having a hyper cardioid pattern that’s too far away…’. Microphone placement and choreography within the pro-filmic event space are, therefore, still fundamental: new technology, with established skills. In the same way that other technical innovations have shaped developing film languages by adding choice, so too has ambisonics.


This 360-degree audio ‘action’ recording provides an improved sense of space and place, bringing the location sound to bear – perfect for the visceral and authentic aspiration to put the ObsDocs viewer ‘there’. So how might the prevailing ‘2D’ shooting methodologies change for ambisonic-centred location sound recording, foregrounding ‘extreme naturalness’ or ‘being there’, within the two main filmmaking scenarios?


The first is 'separate sound': individual camera and sound operators, with a classic single camera narrative methodology, and typified by a ‘multi mono’ approach: i.e., radio mics, shotgun mic, placement mics – all augmented by a series of location-specific ambisonic atmosphere/place recordings. An ambisonic approach can utilise the ambisonic microphone as the main ‘action’ microphone, augmented with mono sources, such as radio mics.


The second is 'sound on camera': a single-operator methodology. This approach utilises an on-camera mono microphone which effectively ‘looks’ wherever the lens is pointing. To pick up ‘on mic’ sound, the camera has to point at the source; otherwise the sound is ‘off mic’. The alternative is to use radio microphones, but with a resulting increase in complexity for the single operator. With ambisonic recording, the camera is liberated from needing to ‘aim’ at the sound source and can concentrate on shooting for the lens, thereby facilitating a more fluid camera response, no longer dependent on the inherent restrictions of the ‘single operator’ methodology.


With both 'separate sound' and 'sound on camera' methodologies, profound aesthetic and practical questions arise, all of which impact on opportunities to examine and enhance the development of the form. Although the location audio can be embellished at the post-production stage, what remains crucial is the bridge between viewer and event space: being able to experience, through one of the senses, an un-mediated ‘reality’. As Chesler summarised of Wiseman’s field sound recording strategies:


Ambient sound, typically picked up through an omnidirectional microphone, captures the whole of a sonic environment without privileging a specific sound source in a scene. These ambiences defy logics of listening practice as all sounds within a space are captured within a 360-degree area.

Leonard comments on his methodological approach to recording in this format and makes a crucial observation: ‘Ambisonics gives me surround which is what I want, but it doesn't give me surround in such a way that it's distracting, which is also what I want… what it does have is extreme naturalness.’ Loach, too, observes that ‘…if it’s about truth, and truth in the sense of authenticity, then you have to observe the natural rules of sound – of the experience of being there in terms of the sound… (Author interview, 2020).


Towards an ambisonic-centred location sound recording methodology

So, then, how might realist/scripted and realist/ObsDoc filming methodologies change for ambisonic-centred location sound recording, in ways that foreground the truth of ‘extreme naturalness’ and the authenticity of ‘being there’? Under such a mode of recording, makers, protagonists and viewers are all placed in a common sound space – but might this be too visceral a viewing experience for some? In the profilmic space, the audio’s equivalent of the camera’s ‘lens’ is the microphone pick-up pattern. 360-degree location audio can facilitate audience engagement in a newly-defined profilmic space paradigm – not just what is visually in front of, but crucially, now around the lens. Might a shifting of received priorities require a commensurate academic re-definition of the term ‘profilmic’? Or indeed, a new additional classification – for example, the ‘extra-profilmic event-space’ now describing the new recorded 360-degrees situation-specific world?


For instance, for what Chion categorises as the audio-viewer, location 360-audio brings choice to aurally focus on sound elements happening outside of the ‘profilmic event’, and then to be able to select and interpret from their own ‘point of audition’ (ibid. 2009: 485), that being the spatial position from where we hear a sound. How, then, do the storytellers deal with an audience’s ability to process audio information (sub)consciously from out of vision and from the world which Schaeffer defines as the ‘acousmatic’ (1966: 91): for example, choosing their own points of audition according to distance; clarity; dynamic; trajectory; movement; power, and so on. As such, one might argue that realist/ObsDoc storytelling space has now become authentically immersive, effectively contributing to the audio-visual scenography of what Chion identifies as an ‘in-the-wings effect’, with sound being located in ‘“absolute offscreen” space … to create the impression that the screen has a contiguous space’ (ibid. 478).


Camera ‘coverage’ is therefore profoundly impacted. Would the immersive location audio principle within the ‘single person, single camera’ acquisition set-up benefit from a re-discovering of the ‘fixed, prime lens’ aesthetic, meaning that camera movement itself does the ‘zooming’ and not the lens? As Loach observes, zoom lenses in realist operations distort the vision-to-audio ‘perspective of the sound, so the wide shot doesn’t have closeup sound on it. ... In general, it just devalues the truth of the sound because if you’re a long way away from someone, you don’t hear what they are saying’ (Author interview, 2020).


Perhaps, then, a new visual methodology is required, one that is analogous to the audio’s ‘natural’ 360-degree coverage, and where the audio ‘frame’ now matches the visual frame. Such a methodology would serve to promote a ‘naturalising’ perspective, and thereby contribute to the visceral experience of the immersive audio-visual content within the pro-filmic event space. Chion identifies the audio-visual scenography as being further broadened still ‘through the use of entrances to, and exits from, the auditory field’ (ibid. 2009: 469).


But how might the immersive location audio principle then affect film narrative language? For example, a person enters through a door but is not in shot. In a 2-dimensional audio world, the door sound would intrude as it would appear unexplained ‘on top’ of the diegetic audio, but in a 3-dimensional immersive audio world, the audio-viewer (sub)consciously ‘auditions’ the sound of the door and rationalises accordingly, placing the sound within a natural, experiential ‘world mix’. The camera, now no longer needing to explain this out-of-vision sound with a cut-away of the door or a reframe, is liberated and can act as a purely pictorial storyteller.


Relating to this point, Weis and Belton observe that ‘what the soundtrack seeks to duplicate is the sound of the image, not that of the world’ (Johnson, 1985: 4). They describe a post-production response normalising the ‘natural’ as a ‘construct’, based around a typical ‘scripted’ filmmaking methodology. But, again, in realist filmmaking, questions immediately arise around authenticity: ‘The idea of an actor recreating a performance in a studio, a dead studio, is completely against capturing the truth of the moment … There’s heightened sound effects … they’re there to get an effect. But you get the effect at the expense of truth’ (Loach, author interview, 2020). In effect, ambisonics already centralises ‘extreme naturalness’, and so perhaps the Weis and Belton quote above should be re-worked into an academic re-framing which moves away from sound as construct towards sound as truth: ‘What the soundtrack seeks to duplicate is the sound of the world, not that of the image’.


Both of these filmmaking scenarios open up a set of questions around agency, authorship, and performance. Who is now performing the filming – the camera? The sound recordist? The director? Or even a new role? Does a ‘fusion’ ethic better fit an emerging model around new audiences, new platforms, and new methods of consumption? Does this more liberated methodology now require a new response in terms of skillsets? Should the terms ‘camera person’, ‘sound recordist’ and ‘director’ now be merged and re-titled, perhaps as ‘content acquisition artist’, or ‘maker’, or something else?


Equally, what is the effect here on the agency of the sound recordist, who is now consciously assessing, augmenting and recording a 360-degree environment, and thus telling their ‘story’ in a new, developing language? With the coincidental liberation of the camera as described, would a resulting shift in hierarchical-based assumptions of authorship/agency then provide a definitive response to Sellors’ earlier cited question: ‘Is the sound recordist a member of a film’s collective authorship’?


In any case, the ‘new role’ – be it content acquisition artist, maker or otherwise – foregrounds sound storytelling skills but now with a required empathy with the 360-sound world of the extra-profilmic event-space. This would include an added understanding of what is and is not achievable on location and in post-production, an ability to understand how ‘post-mic-steering’ will work, as well as having new visual storytelling skills – now filming for audio, perhaps? Conceptually speaking, should this approach be best described and understood as ‘sound on camera’ or ‘camera on sound’, or simply as ‘audio visual’?


Although Michel Chion was writing about post-constructed soundscapes, location-recorded ambisonics similarly furthers the audio-viewer’s ‘choice’ principle and adds to the visceral nature of the audio-visual scenography that he describes. This article has aimed to consider the potential creative role of ambisonic-centred location sound recording within the realist filmmaking genre, and to reflect on how this emerging sound technology might work to centralise multi-agency and multi-authorial arts, while still aspiring to immerse the audio-viewer in a position closer to the reality that is being observed. If such a position were to be achieved, then there would be ‘no separation between the audience watching the film and the events in the film’ (Wiseman in Atkins, 1976: 43). This would altogether continue the evolution of an audience’s potential ability to consume realist film, but now in a multi-platform, multi-screen, immersive world. Crucially, it facilitates the rediscovery of realist/ObsDoc single ‘sound/camera’ storytelling skills. As I have argued, ambisonics can contribute towards the reinvigoration of a whole new realist/ObsDoc format for makers, now no longer tied to the framing of -hour specials on terrestrial television with their speculative and high cost-bases, but instead a reimagined version of, maybe, Instagram-length micro-docs. Might such social media micro-realist docs be consumed by audio-viewers on their mobile devices, while, let’s say, travelling on the proverbial (and actual) Clapham Omnibus, now fully immersed within a 360-degree audio film space, and now truly experiencing ‘no separation between the audience watching the film and the events in the film’ (ibid. 1976: 43)?


References

  • Altman, R. (Ed.). (1992). Sound theory, sound practice. Routledge, New York.

  • Atkins, T. (Ed.). (1976). Frederick Wiseman. New York: Monarch.

  • Beck, J. (Ed.). (2008). Lowering the boom: critical studies in film sound. University of Illinois Press.

  • Beckett, R. (2020). Online interview with the author. 17 Aug 2020.

  • Chesler, G. (2012). ‘Truth in the mix: Frederick Wiseman’s construction of the observational microphone’. In: Frederick Wiseman, Kino des Sozialen (Ed. Eva Hohenberger). Vorwerk 8: 139-155.

  • Chion, M. (2009). Film, a Sound Art. (Gorbman, C., Trans.). Columbia University Press.

  • Gaut, B. (1997). Film Authorship and Collaboration. In R. Allen, & M. Smith (Eds.), Film Theory and Philosophy. (pp. 149-172). Clarendon Press.

  • Graef, R. (2020). Online interview with author: 22 Sep, 2020.

  • Kuhn, A & Westwell, G. (2012). A Dictionary of Film Studies. Oxford University Press.

  • Leacock, R. (1993). A Revolution in Documentary Film Making as Seen by a Participant.

  • Loach, K. (2020). Online interview with the author. 19 Aug 2020.

  • Morrison, J. (2020). Online interview with the author. 31 Aug 2020.

  • Robinson, L. (2007). Contingency and Event in China’s New Documentary Film Movement. Nottingham EPrints.

  • Robjohns, H. (2001). Surround Sound Explained: Part 3. Ambisonics. SOS Publications Group.

  • Schaeffer, P. (1966). Traité des objets musicaux. SEUIL.

  • Sellors, P.C. (2007). ‘Collective Authorship in Film’, The Journal of Aesthetics and Art Criticism 65(3). Available at: www.jstor.org/stable/4622239.

  • Sonnenschein, D. (2001). Sound design: the expressive power of music, voice, and sound effects in cinema. Michael Wiese Productions.

  • Weis, E & Belton, J. (Eds.). (1985). Film Sound: Theory and Practice. Columbia University Press.

  • Winn, D. (2014). 31st Annual News And Documentary Emmy Awards Lifetime Achievement Honoree: Frederick Wiseman. The National Academy Of Television Arts And Sciences. Available at: http://emmyonline.org/news/news_31st_interview_wiseman.html

Notes

[1] An exception is Chesler’s illuminating analysis of the work of sound in Fred Wiseman’s observational documentaries. See Chesler, G. (2012). ‘Truth in the mix: Frederick Wiseman’s construction of the observational microphone’. In E. Hohenberger (Ed.), Frederick Wiseman, Kino des Sozialen (pp. 139-155). Vorwerk 8.

[2] A defining term used by the veteran documentary filmmaker Richard Leacock. See ‘A Search for the Feeling of Being There’, Memoirs of Richard Leacock (1997). Available at: https://mf.media.mit.edu/courses/2006/mas845/readings/files/RLFeeling%20of%20Being%20There.pdf

DOI: https://doi.org/10.33008/IJCMR.2021.42 | Issue 7 | Oct 2021

Kerryn Wise (De Montfort University)


Abstract

Dis_place is a mixed reality performance that takes audiences on a journey using a range of virtual reality (VR) technologies, immersive sound, and live dance performance. Through close analysis of my practice-as-research project, this article presents reflections on the developing creative strategies and approaches to making VR-based mixed reality performance. It traces the creative process in the making of the work, combining links to the VR artwork, video footage of the live performance, and images from the project. This is combined with my observations and analysis of audience feedback. Through this analysis, the writing assesses the affordances of using VR technologies within immersive performance practices, addressing some of the technological, practical, choreographic, and conceptual concerns. It concludes that these technologies have huge potential for offering audiences new embodied encounters that can shift perspectives and produce transformational, intimate, emotive, and unsettling experiences. Dis_place VR should be viewed on a head-mounted display (HMD). It can be accessed through itch.io here and Viveport here.

Video documentation of Dis_place VR


Edited documentation of Dis_place Live


Introduction

Dis_place is a mixed reality performance [1] that takes audiences on a journey using a range of virtual reality (VR) technologies, immersive sound, and live performance. Through close analysis of my practice-as-research project, this article presents reflections on the evolving creative strategies and approaches to making VR-based mixed reality performance. As a choreographer interested in making intimate works using new technologies, I aimed to create a performance encounter offering new perspectives and experiences to the spectator through multi-sensory bodily engagement. Dis_place draws on established practices from immersive and one-to-one performance, dance, film, and digital artworks. This writing traces the creative process in the making of the work, combining links to video footage and images from the project with my observations and analysis of audience feedback. Through this analysis, the article assesses the affordances of using these technologies within immersive performance practices. Furthermore, using VR technologies within live performance is a new area of investigation within the field, and this examination aims to identify the techniques used and to unravel some of the technological, practical, choreographic, and conceptual concerns. Due to limitations of space, I have summarised the artistic concepts underlying the work.


Within this article, I use several terms that need explanation. I use the term ‘360-degree video’ (3DV) to refer to a spherical video file generated by a single omnidirectional camera, or by multiple cameras capturing a view in every direction at the same time. When these images are stitched together using a software programme, they create a panoramic video that can be viewed in any direction in 360 degrees, from the perspective from which it was filmed. This captured footage can then be viewed via a head-mounted display (HMD) or on devices through platforms such as Facebook or YouTube (Simcoe, 2018: 120). I use the term ‘Virtual Reality’ (VR) to refer to the immersive volumetric video work, which allows the viewer to move around a room-scale VR experience [2] and see the work from multiple angles; this differs from seated or standing-only VR experiences. I use the terms ‘participant’ and ‘spectator’ interchangeably rather than ‘viewer’, as these roles depend on the context, and they highlight the more active role that the audience member plays in each element of the project.


Context

Dis_place is a one-to-one site-based immersive performance and VR experience which I created with Creative Technologist Ben Neal, funded by Arts Council England and The Questlab Network, with a range of local industry partners [3]. The first live performance took place in October 2019 at the People’s Hall in Nottingham [4], a decaying Georgian building with a rich history. Audiences were invited to experience a performance journey that moved across virtual and physical space through a combination of 3DV, immersive audio, and live dance performance. Dis_place highlights the memories and histories of the building’s inhabitants and former uses, revealed in the beautiful, decaying architecture, creating a unique, multi-sensory immersive experience.


The accompanying VR piece captured the People’s Hall space and dancers using the Microsoft Kinect sensor [5] with Depthkit software, using Unity to recreate a virtual representation of the space. This was presented at the Broadway Gallery opposite the People’s Hall building. This practice repurposes the Microsoft Kinect to capture dancers volumetrically [6] and places them in a virtually captured real historic site. The work guides the viewer through internal features of the space, which are explored and highlighted by the digitised dancing performers. The interactive nature of VR allows the viewer to be part of a physical and performative response to the building, expressed through movement, and to become an integral part of this new reality. Furthermore, the digital techniques enable new choreography that would not otherwise be possible.


The key aims of Dis_place were to assess whether it was possible to integrate the 3DV into the overall immersive journey, using the virtual space to offer a different perspective and type of intimacy between the audience participant and the virtual and live performers. The practice aimed to explore how participants negotiate the layering of these intimate interactions across digital and visceral encounters, seeking to discover if a tactile closeness can be sensed through and across digital and physical immersive space. The project also acted as a prototype to test out new strategies and techniques when making this type of mixed reality performance work that uses new VR technologies.


The main development of this project was a two-week research and development phase that took place at the disused People’s Hall building in Nottingham in October 2019. The outcomes were prototypes for a one-to-one mixed reality performance and a VR dance work, both responding to the site. My collaborative team, which comprised a choreographer, four dancers, a creative technologist, and lighting, sound, and costume designers, responded to the architecture and history of the People’s Hall to generate a site-based performance experience. Audience participants moved through the building experiencing 3DV technologies, immersive and live sound, with real performers who engaged with audiences through the mixed reality experience. This created a multi-sensory immersive experience which, accompanied by the VR piece, tested my central research questions through professional practice.


Throughout this research, approximately forty-five participants experienced the one-to-one performance and thirty experienced the VR, resulting in twenty-five audience responses to the one-to-one performance and four in response to the VR piece. I gathered audience data via questionnaires, although some chose to submit audio recordings and freeform written responses. Some questions were adapted from Measuring Presence in Virtual Environments: A Presence Questionnaire (Witmer and Singer, 1998). The questionnaires were extensive and related to the overall experience, artistic content, engagement of the senses, and connection to the dancers, as well as posing questions about the integration of the 3DV into the overall performance. This qualitative data has been examined using thematic analysis to understand audience responses to this type of mixed-reality performance environment and the associated questions it raises regarding choreographic approaches, director and spectator agency, and intimacy.


Research Process

I began the research process in March 2019, with several site visits, spending time with the unique atmosphere of the site, identifying distinctive physical features, and researching the history of the building. The creative team explored and responded to the building through a series of agile tests and explorations to allow the site to inform the content of the performance. Alongside this, we had to negotiate the extensive technical elements of creating this type of mixed reality performance, considering factors such as methods for combining live and pre-recorded sound through a wireless Bluetooth network, whether it would be a guided or free-roaming experience with multiple or single audience members, and the placement of the 3DV and its integration within the overall immersive experience. This was in addition to all the usual production elements. The technical considerations for this work were immense and, in such a short development period, had to be balanced with the artistic development.


Figure 1: The People’s Hall building frontage and Broadway Gallery opposite.

Image credit: Kerryn Wise and Julian Hughes.


Initially, I had planned to combine the volumetrically captured VR elements within the live performance; however, the limitations of the People’s Hall space, which was without mains power, hindered this. Therefore, the live performance used 3DV [7], and the VR became an interrelated element which was shown in the Broadway Gallery opposite the People’s Hall. On reflection, this separation allowed me to compare the potentials of both technologies, focusing on the intentions within my practice and leading to some useful insights. In Figure 2, I have identified and compared several key features of both 3DV and volumetric VR that are important within my work [8]. The horizontal axis identifies a feature and the vertical axis suggests how much scope each technology offers; the higher the number, the more potential the technology has [9].


Figure 2: Comparison chart, 2020.


Dis_place: Volumetric room-scale VR

The first stage of the project involved exploring the potentials of volumetric video, a technology that expands the boundaries of 3DV further and enables some of the affordances of computer-generated VR. Volumetric capture extends audience agency in 3DV virtual space, by allowing spectators the ability to move freely within a room-scale virtual environment (VE) and view the digitised dancers from multiple perspectives. Our explorations involved ‘hacking’ the normative use of the Microsoft Kinect in conjunction with Depthkit to volumetrically capture the dancers, cost-effectively, and to place them within a virtual space created using the Unity software programme.


Figure 3: stills showing how the Kinect camera captures depth using infrared, 2019.


Our explorations initially tested how the camera captures the moving body. At the time of developing this work, Depthkit only supported the use of one Microsoft Kinect [10], which meant that the capture zone was quite small, and the dancers’ hands and feet kept getting clipped outside the capture area. Through this experimentation with space, we noted that if you were to isolate and capture a specific body part, it had to be dismembered at the correct point on the body to avoid looking macabre when placed in the virtual space. For example, a whole torso or full arm seems natural; however, if the body part is cut at the elbow or mid-thigh, it seems unnatural. Although the effect we eventually worked with had a fractured aesthetic, moving away from photorealism [11], we did not want to move towards the horror genre, or for the spectators to experience the repulsion associated with the uncanny valley [12].
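The clipping behaviour described above can be illustrated with a minimal sketch. This is not Depthkit’s actual pipeline, and the coordinates and volume bounds are hypothetical, but it shows the underlying geometric fact: points falling outside the sensor’s usable capture volume are simply discarded, which is why a hand or foot extending past the edge of the small capture zone disappears from the render.

```python
# Minimal illustration of capture-volume clipping (hypothetical bounds,
# not Depthkit's real implementation): depth points outside the sensor's
# usable volume are discarded, so limbs crossing the boundary vanish.

def clip_to_capture_volume(points, bounds):
    """Keep only the 3D points inside an axis-aligned capture volume."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    return [
        (x, y, z) for (x, y, z) in points
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax
    ]

# Illustrative capture volume in metres (x: width, y: height, z: depth).
bounds = ((-1.0, 1.0), (0.0, 2.0), (0.5, 3.0))
body = [(0.0, 1.0, 1.5), (0.2, 1.6, 1.4)]   # torso points, inside the volume
hand = [(1.4, 1.6, 1.4)]                     # a hand reaching past the x edge
visible = clip_to_capture_volume(body + hand, bounds)
# only the torso points survive; the outstretched hand is clipped away
```

The same logic explains the observation about where a body part is ‘cut’: the clip plane falls wherever the limb crosses the volume boundary, with no regard for anatomically natural joints.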


Returning to the process, we experimented with different types of movement, including combining single and multiple dancers in the capture zone and experimenting with full-body movement and isolating specific body parts. We found that the smaller gestural sequences, involving intricate actions with the hands and arms whilst standing, worked best to capture the whole body. Furthermore, we experimented with how the dancers entered and exited the capture space as this would impact how they entered the virtual space, which influences how the spectator may interpret and engage with these virtual dancers. It was difficult to capture two dancers moving in contact together, or layered spatially, as the ‘shadowing’ that one person throws against another (caused by blocking the infrared sensor) causes a hole to appear in them as the sensor can no longer reach them. We realised that any layering to give the effect of the dancers moving together in contact would have to be created in the virtual space.


Paul and Levy’s term ‘glitch’ refers to ‘images and objects that have been tampered with…and these images can be created by adjusting or manipulating the normal physical or virtual composition of the machine or software itself, or by using machines or digital tools in methods different from their normative modalities’ (2015: 31). I began to make aesthetic decisions guided partly by how the technology was capturing and rendering the dancers’ bodies, alongside how these visual choices could influence the viewer’s potential engagement with the virtual dancers. Several visual style choices can be used in Depthkit, from very pixelated graphics to high fidelity photorealistic resolution. I decided to capitalise on the fragmented, glitchy quality, which balanced semi-realism with fractured edges. I felt that this style choice complemented the visual qualities of the decaying building we would be working in for the final capture, enhancing the eerie qualities within the work.


Figure 4: still of Dis_place VR, showing fractured edges of performers and space, 2019.


Furthermore, I intended to balance the slightly eerie quality this style creates whilst avoiding the dancers becoming too pixelated and therefore alien to the viewer. My aim was for the spectator to be able to find some connection to the dancers and their narratives told through the movement, and to cultivate a sense of intimacy between dancer and spectator within the virtual realm. I hypothesised that this intimacy could be enhanced by the spectator’s ability to move towards, through, and close to the dancers spatially; however, I was equally aware that these digital representations could also appear inhuman and thus provoke only a limited sense of connection.


For the final VR work, the choreography was developed in response to the site’s heritage and suggests to the viewer the unknown histories of the space. The viewer is guided through internal features of the captured space, which are highlighted by the digitised performers. The interactive nature of this type of room-scale VR allows the viewer to be part of a physical and performative response to the building, expressed through movement, and to become an integral part of this new reality. To begin, we separately captured each wall of a room in the People’s Hall building and the choreographic phrases using the Kinect sensor with Depthkit, utilising the learning developed from stage one. Using Unity, the creative technologist recreated a virtual representation of the room, and we then began the process of choreographing the captured performers within the VE. Choreographing in virtual space is an innovative area of research, where the composition potentials are significantly expanded.


Figure 5: still from Unity work, 2020.


As we began this process, we were increasingly aware of how many choices were available using this method; we originally had over a hundred clips, which we reduced to thirty-five. These featured three dancers and could be placed anywhere within the virtual space; clips could also be layered and duplicated. In a process similar to traditional video editing, we worked on a timeline to a five-minute composed soundtrack that had been developed in response to the building. We began by giving the spectator time to acclimatise to this new virtual space, offering a chance to explore their surroundings. After this acclimatisation, one dancer appears; the rationale being that I did not want the spectator to be shocked by the sudden appearance of multiple dancers all around them. I decided to create the effect of the dancers appearing from inside the walls, which adds to the eerie atmosphere. This aimed to enhance the sense that the dancers belonged to, or were part of, the building and its history.


Figure 6: still from HMD VR view, Dis_place VR, 2019.


In designing the choreography within this three-dimensional virtual space, I initially placed the dancers on the periphery, gradually introducing the different dancers successively. At times, the dancers’ presences overlapped in the space so that they looked like they were dancing in contact, or the dancers were placed in different parts of the room, making the spectator choose which to watch. As the piece progressed, I gradually brought the dancers closer to the centre of the room, which is the location that the spectator initially finds themselves in. We wanted the dancers to progressively get nearer to the spectator, building a more intimate relationship and allowing them a closer look. However, this relies on the spectator staying relatively still and central, which is contrary to the affordance of this technology: the spectator’s ability to move around the room-scale environment. Interestingly, it became apparent that the spectator’s agency to move reduces my agency as a director to control the developing choreographic vision [13]. This artistic control had appealed to me in my previous work with 3DV and live performance, Exposure (2017) [14], where I played with the viewer’s lack of agency. In 3DV, the viewer can look around; however, they cannot move in the virtual space and thus can be caught in a particular viewpoint or relationship with a performer, and this can become a powerful device for a director.


In the VR work, whilst audiences reflected on the benefit of being able to move freely and ‘look at the movement from different angles’, there were differing responses to how connected audiences felt to the dancers. One respondent suggested that ‘the “glitchiness” and disappearing into walls made it clear that I was looking at digital representation so did not directly feel connected to them’. However, other responses suggested that ‘the feeling of connection grew throughout – probably as I got closer to the dancers’. Another highlighted that ‘the three-dimensionality of the dancers made them “real” but still removed from my immediate senses’. Together, these responses offer a range of perspectives on the reading and interpretation of the digital bodies within this work. They highlight several factors, including the effect of aesthetic choices in the capturing, and the positioning and closeness of the dancer to the spectator in the choreographic spatial design, elements I am exploring further as the practice develops. Overall, there was a distinctly lower level of connection experienced between the audience and digital performers in the VR work than in the live performance, which combined digital technologies with live action.


Dis_place: one to one live performance

Figure 7: still from Dis_place live. Image credit: Julian Hughes, 2019.


The live performance took one spectator at a time on a journey through a physical building where they encountered live performers whom they followed, and as part of this journey, they were invited to experience 3DV via a HMD. This was used to offer the participant new perspectives on the choreography and as a shifting point in their relationship to the performers in the work. As in much immersive theatre work, there is a performer-guide role, which acts as an anchor for the participant, managing their journey and easing the negotiation of the practicalities of the technology. In this work, participants also wear a pair of bone-conducting headphones [15], which allows them to hear the pre-recorded sound as well as the live performers and the amplified parts of the building, furthering the elements of mixed reality. They are given these at the start as part of the onboarding [16], to make sure they are comfortable and working. One of the issues with using HMD-based VR in live performance is how the participant puts on and takes off the headset without it disrupting the overall sense of immersion [17]. The best method I have found is to have the guide place it on and take it off for the participant as part of the performance. In response to the question ‘How distracting did you find putting on / taking off the headset?’, participants noted that ‘I thought it was fairly seamless’ and ‘it worked well with the guide’. Having a performer guide to negotiate this whilst keeping the chosen atmosphere and extending the narrative can work effectively.


Returning to my initial research question, which aimed to understand how 3DV technologies can be integrated within live works, there was a question posed within the feedback which asked, ‘How did the 360 VR experience fit within the overall live performance?’. Spectators responded that ‘it enabled more things to happen that would not have been able to happen in reality’; another indicated that it ‘added to the overall impact significantly at being transported through time and space’. Another said that ‘it transformed the space, but the textures of the film and the real smells continued with the movement and sound made me feel like I was inside the walls/fabric of the building’. We decided that the headphones would not be removed for the 3DV element, which aided the continuity for the spectator, as the 3DV visuals were timed with the soundtrack that the participant was already listening to throughout the performance. Another respondent noted that ‘it felt like a natural extension – continuing the images, connections and expanding the perspectives possible’, which is what we had set out to achieve, as well as offering the participant new viewpoints and a unique engagement with the live and virtual performers. Overall, the feedback was positive concerning this; however, two respondents noted that ‘the introduction to it seemed disconnected’, and that ‘the stopping to put the headset on broke the experience a little, however, the gentle guidance kept the performance continuity’. Largely, I think that this was well-managed, although it is always going to be a transitional point and needs to be considered carefully [18].


Figure 8: still from Dis_place live. Image credit: Julian Hughes, 2019.


In the development of this work, I was interested in framing certain views and perspectives of both the building and the dancers interacting with it. I often use framing devices from traditional film and photography to capture images that I display to the audience. This work was no different, and although the spectators can look anywhere, I was carefully curating both the journey they take through the building and the ‘framed’ images they see on that journey. I concluded quite early that the route needed to be set and would be for one audience member at a time, rather than a free-roam experience, which resulted in limited spectator agency within this structure. However, this meant I could control the timing, pace, imagery, negotiation of the technology, and order of scenes. Drawing on techniques used in immersive theatre practices, subtle lighting cues, sound, and the performer's actions were used to guide the participant's attention both in the physical and 3DV space, and the guide was on hand, should participants get disoriented.

Engaging the senses and finding intimacy across physical and virtual space

In recent years, several performance works have been produced which combine live performance with VR technologies, including ZU-UK’s Goodnight, Sleep Tight (2017), Curious Directive’s Frogman (2017), and Draw Me Close by Jordan Tannahill (2019). In his recent article, Harry Wilson considers the effects of bringing VR technologies to live performance, stating that:


…their specific modes of engagement and the ways of seeing, feeling and being that they produce are the unique result of the meeting point between virtual reality technologies and live performance practices. Furthermore, the specific forms of embodied spectatorship afforded at the intersection of VR and live performance can facilitate the movement between the actual and the virtual, intimacy and distance, immersion and making strange: producing very real and potent effects (2020: 115)

Wilson’s reflections mirror my experiences of making works that combine intimate and immersive performance with VR technologies. The practice developed for Dis_place and its analysis support Wilson’s claim that this work can produce new, embodied experiences for audiences.


Concepts of embodiment were investigated in the questionnaire to understand how the participants’ senses were engaged within Dis_place, the notion being that when spectators have their senses heightened, this can lead to more embodied experiences. Our findings noted, as expected, that participants were engaged by the sound and visual qualities, with many also mentioning the smells encountered in the building. One noted that their senses were ‘sharper’, with another noting that ‘I had a heightened awareness of space, noise, touch, movement’. Josephine Machon’s extensive writing about immersive theatre practices and her concept of (Syn)aesthetics evidence that this type of multi-sensory engagement is common in immersive theatrical practices (2009; 2013). However, several respondents mentioned how their sense of their body and physicality was significantly intensified by the 3DV experience. The shifts afforded by the unique 3DV perspectives, and the immersive quality of this type of virtual space, added to a fuller range of sensations for the spectator. A participant commented that ‘all the senses were combined with digital as well as real-life experiences which were disorienting at times but brought so much more to the experience of the building’.


The one-to-one immersive performance had the overarching aim of taking participants on a multi-sensory journey that gently shifted perceptions of reality and questioned notions of real and virtual, past and present, offering spectators new ways to experience the building and their embodied presence within it. Furthermore, the work aimed to establish a connection between the participant, the performers, and the spaces they shared across the journey, which encompassed both physical and digital environments, questioning how, in a post-digital era, audiences connect with both digital and live performers. I intended to test whether it was possible to use the virtual space to offer a different perspective and to act as a transformational space that could shift the relationship between the spectator and the performer, which could be both comforting and unnerving. Scholar Sarah Whatley states that ‘virtual environments do not imitate live performance but visualisations can awaken the senses through an awareness of orientation, dislocation or displacement’ (Whatley, 2012: 277). Whatley also asserts that ‘immersive viewing environments, provide the viewers of virtual bodies an intense and transformative kinaesthetic experience, quite different from what is produced in a “live” encounter with real dancing bodies’ (2012: 266). I propose that if we combine both virtual and live performers within an immersive performance experience, new transformative embodied experiences can be created.


Dis_place was an ambitious project, and this analysis acts as a starting point for addressing some of these complex questions. In terms of experiencing connection, participants were asked to score from one to four how connected they felt with the dancers and to comment on their experiences. As expected, there was a slightly stronger connection between the spectator and the live performers, with most respondents scoring a four. However, there was also a significant number of responses scoring four for feelings of connection with the virtual performers, with most scoring three or above. Interestingly, many had given the same score for the connection with both virtual and live performers, suggesting that the outdated concept of real and virtual as binary opposites is no longer useful [19]. One participant noted that it was ‘Interesting that no particular difference’ when comparing the scores. My interpretation is that this is due to the timing and placement of the 3DV within the experience, the ongoing soundtrack, and primarily the fact that the virtual performers were the same as the live dancers, which created a sense of authenticity and continuity that aided these feelings of connection.


The Shifting Gaze of the Performer and Role of the Audience Participant

The role of the audience participant can be broadly separated into pre- and post-3DV phases within this work. Before the 3DV experience, the participant is an unseen witness, disregarded by the live performers, who are preoccupied with their actions. Participants follow these performers, who lead them through the space, highlighting key features, yet the performers rarely directly face or acknowledge the participant’s presence. During the 3DV experience, which takes place around halfway through the journey, the role of the participant shifts as the performers gradually begin to acknowledge their presence. The use of the digital performers’ gaze changes as they begin to look directly at the viewer through the camera’s lens. This use of the performers’ gaze is built up gradually through the rooms visited virtually in the 3DV space [20]. The dancers begin in front of you as you look out from a coal-filled fireplace; this first perspective offers you a more traditional framed viewpoint. At the end of this scene, one of the dancers notices you for the first time; their gaze is curious, yet distracted. One participant stated that:


In the final sequence one of the women seems to notice me (not ‘me’, of course; the camera) and moves closer, peers closer in a way that would again be uncomfortable in the real space but is okay again here because the screen, this hyper real VR screen is different from a screen; but also the woman is becoming enormous as she peers closer in a very weird unsettling way…

In the ensuing scene, the dancers are on either side of you in a narrow corridor with paint peeling off the walls, which looks almost as if it has been submerged underwater. This proximity forces you to choose who to focus on, as the performers begin to look directly at you, subtly encouraging you to watch them; participants noted the seductive quality of this shifting gaze.


The final 3DV scene begins with a view through a doorway into a cement washroom, aged with dirt. You are transported forward, virtually entering this room, and find yourself very close to the performer; she gazes directly at you as she dances and, as she leaves, beckons you to follow her. She is leading you back into the physical space, and when you remove the headset, she is waiting for you to continue the journey together as companions. She hands you a developing polaroid taken of you whilst you were away in the virtual space. This captures and marks your presence in the building, using old technology to witness new technology, yet it also provokes a sense of shifting power relations. A participant noted, ‘What picture did they take while I was “away”, lost in the VR space, exposed and vulnerable to the other people in the room?’.


Figure 9: Polaroid image taken during the performance. Image credit: Julian Hughes, 2019.


This is a pivotal moment in the performance: the participant/performer relationship has shifted as the participant’s role changes from passive witness to active participant. The participant is now openly acknowledged through the performers’ gaze and use of touch; their presence has been captured by the polaroid image, which marks the point of this shifting dynamic between participant and performer. Following this, the participant is taken by the hand, physically led towards the next part of the journey by the performer, and encouraged to feel the texture of the building’s Georgian cornicing as the two descend the large, grand staircase together. One participant commented:


She takes my hand and guides it to the wall…She guides me to feel the texture of the plasterwork; to experience what she was just experiencing. She guides me down the stairs. I’m still holding the polaroid. It still hasn’t developed. This feels as though we’re coming towards an ending.


Conclusion

This article has traced elements of the technological, practical, choreographic, and conceptual journey of Dis_place as a live performance, VR work, and research project that broadly aimed to understand the potential of using 3DV and VR technologies within live performance. I have offered creative strategies, reflective commentary, and audience feedback to highlight some of the affordances and issues of integrating these new technologies into live work. My analysis has led to the creation of a chart comparing the affordances of 3DV and volumetric VR most important within my practice, focusing on features including audience connection, choreographic potential, and agency, and suggesting that 3DV offers the director a degree of control that is less possible in free-roam VR experiences.


This highlights that different strategies need to be employed to guide the audience in VR, methods which can be fruitfully drawn from immersive theatre practices. It also reveals the extended choreographic potential of volumetric VR. Furthermore, I have explored the relationship between the spectator and the performers in the works, finding that in the live performance spectators felt similar levels of connection with both the live and digital performers, and noting the lack of separation between real and virtual representations. I also noted that the participant’s sense of their body could be significantly intensified by adding 3DV scenes to the wider immersive experience. Overall, this research highlights the huge potential that these technologies have for offering audiences new embodied encounters that can shift perspectives and produce exciting, intimate, emotive, and unsettling experiences.


References

  • Auslander, Philip. 2008. Liveness: Performance in a Mediatized Culture. 2nd ed. Abingdon: Routledge.

  • Benford, Steve, and Gabriella Giannachi. 2011. Performing Mixed Reality. London: MIT Press.

  • Broadhurst, Susan, and Josephine Machon. 2011. Performance and Technology: Practices of Virtual Embodiment and Interactivity. Basingstoke: Palgrave Macmillan.

  • Curious Directive. 2017. Frogman.

  • Dixon, Steve. 2007. Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. London: MIT Press.

  • Giannachi, Gabriella. 2004. Virtual Theatres: An Introduction. London: Routledge.

  • Giannachi, Gabriella, and Nick Kaye. 2011. Performing Presence: Between the Live and the Simulated. Manchester: Manchester University Press.

  • Machon, Josephine. 2009. (Syn)Aesthetics: Redefining Visceral Performance. Basingstoke: Palgrave Macmillan.

  • Machon, Josephine. 2013. Immersive Theatres: Intimacy and Immediacy in Contemporary Performance. Basingstoke: Palgrave Macmillan.

  • Maravala, Persis Jadé, and Jorge Lopes Ramos. 2017. Good Night Sleep Tight.

  • Paul, Christiane, and Malcolm Levy. 2015. “Genealogies of the New Aesthetic.” In Postdigital Aesthetics: Art, Computation and Design, edited by David M. Berry and Michael Dieter, 27–43. Basingstoke: Palgrave Macmillan.

  • Rouse, Margaret. 2016. “What Is Room-Scale VR (Room-Scale Virtual Reality)? - Definition from WhatIs.Com.” WhatIs.Com. 2016. https://whatis.techtarget.com/definition/room-scale-VR-room-scale-virtual-reality.

  • Simcoe, Peter. 2018. 360 Video Handbook. [s.n.].

  • Slater, Mel, and Sylvia Wilbur. 1997. “A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments.” Presence: Teleoperators and Virtual Environments 6 (6): 603–16.

  • Sparks, Matt. 2019. “Metafocus: Avoiding the Uncanny Valley in VR & Serious Games.” Learning Solutions Magazine. December 26, 2019. https://learningsolutionsmag.com/articles/metafocus-avoiding-the-uncanny-valley-in-vr-serious-games.

  • Tannahill, Jordan. 2019. Draw Me Close.

  • Whatley, Sarah. 2012. “The Poetics of Motion Capture and Visualisation Techniques: The Differences between Watching Real and Virtual Dancing Bodies.” Kinesthetic Empathy in Creative and Cultural Practices. https://pureportal.coventry.ac.uk/en/publications/the-poetics-of-motion-capture-and-visualisation-techniques-the-di-2.

  • Wilson, Harry Robert. 2020. “New Ways of Seeing, Feeling, Being: Intimate Encounters in Virtual Reality Performance.” International Journal of Performance Arts and Digital Media 16 (2): 114–33.

  • Wise, Kerryn. 2017. Exposure. Dance and 360 Video.

  • Witmer, Bob G., and Michael J. Singer. 1998. “Measuring Presence in Virtual Environments: A Presence Questionnaire.” Presence: Teleoperators and Virtual Environments 7 (3): 225–40.

Notes

[1] Benford and Giannachi outline the term as ‘intended to express both their mixing of the real and virtual as well as their combination of live performance and interactivity’ (Benford and Giannachi 2011, 1). Benford and Giannachi provide a detailed exploration of definitions of mixed reality performance in their book, Performing Mixed Reality (2011).

[2] Margaret Rouse provides a clear summary of room-scale VR, stating that ‘Room-scale VR…is the use of a clear space to allow movement for someone using a VR application such as virtual reality gaming. Being able to physically move within the space helps to replicate real-world movement for the user and make the virtual environment seem more real. The term room-scale distinguishes that type of setup from the self-contained environment of a VR room and from seated or standing VR, in which the user remains stationary’ (2016).

[3] Industry partners included Dance4, the National Dance Agency based in Nottingham; NearNow, Broadway Media Centre’s studio for arts and technology; and Nottingham City Council’s Heritage Team.

[4] The People’s Hall is a dilapidated Georgian building with a rich heritage. It was built in 1750 and, through the Heritage Lottery Fund, is due for renovation in 2020.

[5] The Microsoft Kinect was first designed as a motion sensor camera used in conjunction with the Xbox 360 computer console. Depthkit is a creative software tool developed to provide XR developers with access to low-cost volumetric capture using the Microsoft Kinect depth camera. Unity is a cross-platform game engine used to develop video games; it is one of the primary engines for developing VR content, alongside Unreal Engine.

[6] Volumetric video capture is a process in which multiple cameras capture the volume of an object or performer from different angles; these captures are then combined to create a realistic 3D digital video asset that can be placed within a range of VR and augmented reality (AR) virtual environments. See Dimensions Studio for professional volumetric capture: https://www.dimensionstudio.co/solutions/volumetric-video

[7] 3DV can easily be shown on a wireless VR device, such as the Oculus Go, making it a more usable device for site-based and outdoor work without access to mains power.

[8] Each feature will be discussed further in the subsequent sections.

[9] This chart is focused specifically on the intentions within my own work and is not a general comparison of the potentials of 3DV and VR.

[10] In summer 2019, Depthkit released software to support the Microsoft Azure Kinect, with plans to support multi-camera capture from autumn 2020.

[11] Photorealism in this context refers to the rendering of computer graphics to create highly realistic imagery.

[12] The Uncanny Valley is a concept first developed by Masahiro Mori in relation to robots, which has since been developed to describe animated versions of humans used in games and film. As the visuals become increasingly realistic, any slight defect could result in feelings of repulsion in the viewer and a lack of emotional connection with the animation (Sparks 2019).

[13] There is much work being done in the field to develop narrative storytelling methods in virtual space, using sound, action, and lighting cues to guide audience attention. For this research phase we did not have enough time to explore these techniques fully within the VR work, although we are now working on using triggers that are activated by participants’ actions. However, we did use these techniques to guide audience attention within the live performance.

[14] Information about Exposure can be found at kerrynwise.co.uk/exposure

[15] Bone-conducting headphones sit on the outside of the ear and sound is transmitted through vibrations on the head and jaw bones, creating the effect of hearing the recorded sound and any live sound from external sources combined.

[16] Onboarding is a term which is increasingly being used to describe how participants are prepared to enter virtual spaces.

[17] I am using the term immersion here in the theatrical sense, to encompass the overall performance experience, rather than as a term to describe the 3DV virtual space. See Mel Slater and Sylvia Wilbur’s article ‘A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments’ for a specific discussion of presence and immersion in purely virtual environments (1997, 606).

[18] It also raises an important question about what the 3DV offers to the overall experience. There is not the space to discuss this fully in this article, however, it is crucial to understand why the chosen technology is used. We are still in the infancy of this mediums development and thus drawing from Mark Cogniglio’s discussion in his article Materials vs Content in Digitally Mediated Performance, he states that early experimentation with new technologies often relates to the technological potentials before the work can become content driven (Cogniglio, M. in Broadhurst and Machon 2011, 78–84). As we move towards more affordable and accessible technologies, artistic content driven work will begin to emerge more.

[19] There has been much scholarly discussion of the terms ‘real’ and ‘virtual’, with current thinking moving away from considering these as binary opposites and towards a more integrated use of the terms. For a detailed discussion see (Dixon 2007; Giannachi 2004; Giannachi and Kaye 2011). Also see Philip Auslander’s text Liveness: Performance in a mediatized culture for further discussion of live and mediated performance (2008).

[20] The rooms visited in the 3DV virtual environment are also spaces within the People’s Hall building not used within the live experience.

An interdisciplinary, peer-reviewed and open access academic journal devoted to pushing forward the approaches to and possibilities for publishing creative media-based research. 
