“Transforming the Commonplace through Machine Perception: Light Field Synthesis and Audio Feature Extraction in the Rover Project” by Twomey and McCrea

Title:

    Transforming the Commonplace through Machine Perception: Light Field Synthesis and Audio Feature Extraction in the Rover Project

Presenter(s)/Author(s):

    Twomey and McCrea

Abstract:


    Rover is a mechatronic imaging device inserted into quotidian space, transforming the sights and sounds of the everyday through its peculiar modes of machine perception. Using computational light field photography and machine listening, it creates a kind of cinema following the logic of dreams: suspended but mobile, familiar yet infinitely variable in detail. Rover draws on diverse traditions of robotic exploration, landscape and still-life depiction, and audio field recording to create a hybrid form between photography and cinema. This paper describes the mechatronic, machine perception, and audio-visual synthesis techniques developed for the piece.
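
    The paper itself details these techniques; as a rough illustration of the light field synthesis named in the title, the sketch below implements shift-and-average synthetic-aperture refocusing in the spirit of refs. [1], [3], and [6]. It is a minimal sketch under assumed conventions, not the authors' implementation: the images mapping, the (s, t) grid offsets, and the alpha focal parameter are all illustrative.

        import numpy as np
        from scipy.ndimage import shift

        def refocus(images, alpha):
            """Shift-and-average synthetic-aperture refocusing.

            `images` maps integer camera-grid offsets (s, t), measured from
            the array center, to equally sized grayscale frames (2-D arrays).
            Each frame is translated in proportion to its offset; `alpha`
            selects the synthetic focal plane. Points whose parallax matches
            alpha align across frames and stay sharp; the rest blur out.
            """
            acc = np.zeros_like(next(iter(images.values())), dtype=np.float64)
            for (s, t), img in images.items():
                acc += shift(img.astype(np.float64), (alpha * s, alpha * t),
                             order=1, mode="nearest")
            return acc / len(images)

    Sweeping alpha moves the synthetic focal plane through the scene, one way to obtain the "suspended but mobile" quality the abstract describes. On the machine-listening side, the piece builds on SCMIR [17] in SuperCollider; the Python lines below are only an analogue (librosa standing in for SCMIR, with a placeholder file name) showing the kind of per-frame features such a pipeline extracts.

        import librosa

        # Placeholder file name; feature parameters are library defaults.
        y, sr = librosa.load("field_recording.wav", sr=None, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)             # timbral envelope
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)      # brightness
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time") # event times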

References:


    1. V. Vaish, “Synthetic Aperture Imaging Using Dense Camera Arrays,” PhD thesis, Stanford University (2007).

    2. “Integral Photography,” Scientific American, 165 (1911).

    3. M. Levoy and P. Hanrahan, “Light field rendering,” Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH (1996).

    4. The Stanford Multi-Camera Array, <https://graphics.stanford.edu/projects/array/>.

    5. “Light Field Gantry,” <http://lightfield.stanford.edu/acq.html>.

    6. R. Ng, “Light Field Photography with a Hand-Held Plenoptic Camera,” Stanford University Computer Science Tech Report CSTR 2005-02 (2005).

    7. Lytro Illum, <https://www.lytro.com/imaging>.

    8. W.G. Sebald, The Rings of Saturn (New York: New Directions, 1995).

    9. R. Barthes, Camera Lucida (New York: Hill & Wang, 1980).

    10. A. Tarkovsky, Instant Light: Tarkovsky Polaroids (London: Thames & Hudson, 2006).

    11. G. Richter, Gerhard Richter: Landscapes (Ostfildern: Cantz Verlag, 1998) pp. 84–87, 97–99.

    12. GRBL, an open-source, embedded, high-performance g-code-parser and CNC milling controller written in optimized C, <https://github.com/grbl/grbl>.

    13. Python / OpenCV Camera Calibration Example, <http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html>.

    14. C. Wu, “VisualSFM: A Visual Structure from Motion System” (2011), <http://ccwu.me/vsfm/>.

    15. C. Wu, “SiftGPU: A GPU implementation of Scale Invariant Feature Transform (SIFT)” (2007), <http://cs.unc.edu/~ccwu/siftgpu>.

    16. C. Wu, et al., “Multicore Bundle Adjustment,” Proceedings of IEEE CVPR, pp. 3057–3064 (2011).

    17. N. Collins, SuperCollider Music Information Retrieval Library (SCMIR), <https://composerprogrammer.com/code.html>.

    18. Black Box 2.0 Festival (6 May–7 June 2015), <http://www.aktionsart.org/allprojects/2015/5/6/black-box-2>.

    19. Supported by an Amazon Web Services Cloud Credits for Research Grant, awarded December 2016.

    20. CoreXY Cartesian Motion Platform, <http://corexy.com/theory.html>.

