“ISSv2 and OpenISS Distributed System for Real-time Interaction for Performing Arts” by Mokhov, Li, Lai, Singh, Shen, et al. …

  • ©Serguei A. Mokhov, Deschanel Li, Haotao Lai, Jashanjot Singh, Yiran Shen, Jonathan Llewellyn, Miao Song, and Sudhir P. Mudur

Entry Number: 16

Title:

    ISSv2 and OpenISS Distributed System for Real-time Interaction for Performing Arts

Presenter(s)/Author(s):

    Serguei A. Mokhov, Deschanel Li, Haotao Lai, Jashanjot Singh, Yiran Shen, Jonathan Llewellyn, Miao Song, and Sudhir P. Mudur

Abstract:


    ABSTRACT

    Illimitable Space System v2 (ISSv2) is a configurable toolbox that provides multimodal interaction and serves as a platform for artists to enhance their performances through the use of depth and colour data from a 3D capture device. Its newest iteration was presented as part of ChineseCHI 2018.

    This latest iteration of ISSv2 is powered by an open-source core named OpenISS. The core allows the ISSv2 platform to run as a distributed system: video and depth capture are performed on a computer acting as a server, while a client component displays the captured video with the applied effects in a web browser. This has the added benefit of allowing the artist to broadcast their performance live and opens the way for audience interaction. There are two primary motivations behind creating an open-source core for the ISS. First, open-source technology allows more people to participate in the development process and to understand how the technology works, while spreading maintenance responsibilities across the respective parties. Second, having a core allows parts of the system to be swapped out without having to modify the whole system at once, which is particularly relevant with respect to capture devices.

    INTRODUCTION

    Real Production Experience. Further development of ISSv2 as a platform necessitated the logical separation of its components, which prompted the creation of the OpenISS core. This allowed for significant extensions to the functionality of the original system. The OpenISS core makes it possible to connect any 3D capture device supported by the libfreenect library to the system. It also allows ISSv2 to process and apply the effects on a computer acting as a server and to broadcast the resultant video and effects over the internet. This is achieved via the APIs implemented by OpenISS. As a demonstration of the new server and web client functionality, ISSv2 was shown at ChineseCHI 2018 using a Mac laptop as a server and a Kinect as a capture device. The audience was able to view the live performance and the applied effects simultaneously on their personal devices using the web client. A summary of the ISSv2-based requirements, related work, and performances can be found in [Mokhov et al. 2016, 2018; Singh et al. 2018].
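    As an illustration only, the sketch below shows how a depth stream can be pulled from a libfreenect-supported device such as the Kinect. It is not the actual OpenISS adapter code; the device index, depth mode, and callback body are assumptions chosen for brevity.

        #include <libfreenect.h>
        #include <cstdint>
        #include <cstdio>

        // Illustrative depth-capture loop against the libfreenect C API.
        // OpenISS wraps device access behind its own adapter layer rather
        // than calling libfreenect directly like this.
        static void on_depth(freenect_device* dev, void* depth, uint32_t timestamp) {
            // 'depth' points to the raw depth buffer for the selected mode;
            // an effect pipeline would hand it off for processing/broadcast here.
            std::printf("depth frame at %u\n", timestamp);
        }

        int main() {
            freenect_context* ctx = nullptr;
            if (freenect_init(&ctx, nullptr) < 0) return 1;

            freenect_device* dev = nullptr;
            if (freenect_open_device(ctx, &dev, 0) < 0) {   // first attached sensor
                freenect_shutdown(ctx);
                return 1;
            }

            freenect_set_depth_callback(dev, on_depth);
            freenect_set_depth_mode(dev, freenect_find_depth_mode(
                FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
            freenect_start_depth(dev);

            // Pump USB events; each completed frame triggers on_depth().
            for (int i = 0; i < 300 && freenect_process_events(ctx) >= 0; ++i) { }

            freenect_stop_depth(dev);
            freenect_close_device(dev);
            freenect_shutdown(ctx);
            return 0;
        }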

    ChineseCHI. ChineseCHI is designed as a forum connecting the international HCI community to the booming Chinese computing industry. It served as a good venue to further test the concept of ISSv2 with OpenISS. With the audience’s participation, it was possible to demonstrate the new capabilities granted by the use of the OpenISS core. This also allowed us to receive feedback and suggestions on what features could be added.

    Continuous Integration Methodology. The logical separation of the various components of the system prompted us to apply a continuous integration methodology while developing the ISS’s extended functionality. This allowed us to deploy releases frequently, which in turn gave us the ability to adapt the ISS to changing requirements.

    The rapid release cycle of this methodology also allowed us to introduce new features and test them quickly; further development of features would have been difficult without it.

    OpenISS and ISSv2. The ISS provides an opportunity to create a fully interactive, enhanced performance space with broadcast capabilities. Thanks to the OpenISS core, it is easier to add effects to the ISSv2 repertoire. The core also gives the system the ability to connect to more types of depth cameras, including more recent ones, and to communicate via an API. This API makes it possible to capture a performance remotely and broadcast it live with the applied effects.

    The current iteration of OpenISS (https://github.com/OpenISS/OpenISS) runs its server component on a Mac laptop, with the broadcast made accessible via REST and SOAP calls at a URL. The capture device used for video and depth was a Kinect (versions 1 and 2), and the main display used HDMI output, specifically with a projector; device support has since expanded to other RGBD and RGB cameras, e.g., the Intel RealSense.
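    To give a feel for the client side, the following minimal sketch fetches a single frame from the server over HTTP using libcurl. The host, port, and endpoint path are hypothetical placeholders rather than documented OpenISS routes; the actual REST and SOAP service URLs are those advertised by the running OpenISS server.

        #include <curl/curl.h>
        #include <fstream>
        #include <string>

        // Append each chunk of the HTTP response body to a std::string buffer.
        static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
            auto* buf = static_cast<std::string*>(userdata);
            buf->append(ptr, size * nmemb);
            return size * nmemb;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL* curl = curl_easy_init();
            if (!curl) return 1;

            std::string body;
            // Hypothetical endpoint for a colour frame; substitute the URL
            // published by the OpenISS server deployment.
            curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/openiss/frames/color");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

            CURLcode rc = curl_easy_perform(curl);
            if (rc == CURLE_OK) {
                std::ofstream("frame.jpg", std::ios::binary) << body;   // save the frame
            }

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return rc == CURLE_OK ? 0 : 1;
        }

    The same request can be issued from a web browser or a command-line HTTP client against the advertised URL.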

    EXPOSITION

    At ChineseCHI 2018, ISSv2 was successfully demonstrated live, with a working URL provided to view the presentation remotely, thanks to the OpenISS core. The presentation was set up without calibrating the system, showing OpenISS’s capabilities as presentation and entertainment software with a relatively simple setup. The first segment of the presentation was a lecture explaining how ISSv2 works and showing the system’s ability to provide still images and to seamlessly transition between its different effects in real time. This also served to show how the data from the capture device was used to produce the effects. The second half of the presentation was a dance performance created to demonstrate the original intended artistic use case of ISSv2 and how its effects can be used in a creative context.

    CONCLUSIONS AND FUTURE WORK  

    ISSv2 with the OpenISS core has shown that the platform’s theoretical capabilities are achievable and highly extensible. The ISS can currently use multiple 3D capture devices; however, so far it has only been used with Microsoft’s Kinect RGB-D cameras. Future development will partly focus on transitioning to more recent and better-supported 3D capture devices. The reasons for this are twofold: first, the Kinect is being phased out as a Microsoft product and is no longer supported, and second, newer cameras provide better data due to higher resolutions and frame rates. Other work will include improvements to backend functionality, including more advanced capabilities for accompanying a physical performance with approaches such as activity recognition and machine learning. As of this writing, we are not aware of readily available software packages that combine 3D data capture with distributed systems capable of using consumer-grade hardware and open-source software.

    Ongoing and future research areas. Thanks to developments in computer vision, several features have been partially integrated into the ISS and are being tested, among them gesture recognition, face tracking, and background extraction (green screening). Other extensions to the ISS currently being researched are AI-driven features such as person re-identification, real-time stylization, and action recognition. We are currently exploring the Magenta framework, which also offers support for sound generation. These new functions would allow for more modes of interaction with the ISS and for greater expressive freedom. On the backend, further improvements to OpenISS’s API are being worked on to allow it to function seamlessly with any type of camera. To further push processing power, Microsoft’s new Project Kinect for Azure could be a good candidate for integration with OpenISS, as it provides a framework and infrastructure for processing depth and colour data in the cloud.
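    As a rough illustration of the background-extraction direction, the snippet below runs OpenCV’s MOG2 background subtractor over an ordinary RGB stream to keep only the performer’s pixels. This is a generic stand-in technique chosen for illustration, not necessarily the segmentation approach being integrated into the ISS, which can also exploit depth data.

        #include <opencv2/opencv.hpp>

        int main() {
            cv::VideoCapture cap(0);                 // any attached RGB camera
            if (!cap.isOpened()) return 1;

            // MOG2 maintains a per-pixel Gaussian-mixture model of the background.
            auto subtractor = cv::createBackgroundSubtractorMOG2(500, 16.0, false);

            cv::Mat frame, mask, foreground;
            while (cap.read(frame)) {
                subtractor->apply(frame, mask);       // 0 = background, 255 = foreground
                foreground = cv::Mat::zeros(frame.size(), frame.type());
                frame.copyTo(foreground, mask);       // composite the performer over black
                cv::imshow("foreground", foreground);
                if (cv::waitKey(1) == 27) break;      // Esc quits
            }
            return 0;
        }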

    Multi-camera mode. Another specific feature being pursued is the use of multiple sensors. During the development process it was discovered that the depth cameras can be used in combination with each other, so we are preparing to use this mode with multiple performers. This would tie in with the person re-identification features using machine learning that we are also completing in OpenISS.
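    A minimal enumeration sketch, assuming libfreenect-attached Kinect sensors, showing that multiple devices can be opened side by side by index. How the resulting streams are fused and assigned to performers is beyond the scope of the snippet.

        #include <libfreenect.h>
        #include <cstdio>
        #include <vector>

        int main() {
            freenect_context* ctx = nullptr;
            if (freenect_init(&ctx, nullptr) < 0) return 1;

            // Count the attached sensors and open each one by its index.
            int count = freenect_num_devices(ctx);
            std::printf("found %d device(s)\n", count);

            std::vector<freenect_device*> devices;
            for (int i = 0; i < count; ++i) {
                freenect_device* dev = nullptr;
                if (freenect_open_device(ctx, &dev, i) == 0)
                    devices.push_back(dev);   // each device can carry its own callbacks/streams
            }

            for (freenect_device* dev : devices)
                freenect_close_device(dev);
            freenect_shutdown(ctx);
            return 0;
        }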

References:


    • Serguei A. Mokhov, Miao Song, Satish Chilkaka, Zinia Das, Jie Zhang, Jonathan Llewellyn, and Sudhir P. Mudur. 2016. Agile Forward-Reverse Requirements Elicitation as a Creative Design Process: A Case Study of Illimitable Space System v2. Journal of Integrated Design and Process Science 20, 3 (Sept. 2016), 3–37. https://doi.org/10.3233/jid-2016-0026
    • Serguei A. Mokhov, Miao Song, Sudhir P. Mudur, and Peter Grogono. 2018. Hands-on: Rapid Interactive Application Prototyping for Media Arts and Stage Performance and Beyond. In SIGGRAPH Asia 2018 Courses (SA ’18). ACM, New York, NY, USA, Article 9, 32 pages. https://doi.org/10.1145/3277644.3277760
    • Jashanjot Singh, Haotao Lai, Konstantinos Psimoulis, Paul Palmieri, Inna Atanasova, Yasmine Chiter, Amirali Shirkhodaiekashani, and Serguei A. Mokhov. 2018. OpenISS Depth Camera As a Near-realtime Broadcast Service for Performing Arts and Beyond. In SIGGRAPH Asia 2018 Posters (SA ’18). ACM, New York, NY, USA, 25:1–25:2. https://doi.org/10.1145/3283289.3283293
