Volca: Building a volumetric camera

Volca is a current project centred on investigating new kinds of camera for new kinds of imaging. It uses volumetric imaging from a Kinect 3D sensor, driven by OpenKinect and wrapped in the C++ openFrameworks toolkit.

Currently the system creates a live 3D mesh with fore and back clipping, allowing elements to be isolated by depth. It can paint the mesh from the inbuilt RGB camera, record and play back painted meshes, and display the mesh in 3D. The OpenCV computer vision library is incorporated, giving simple blob and object detection and image-processing routines applied to both the RGB and depth data.
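The fore-and-back clipping idea can be sketched in plain C++ as follows. This is an illustrative sketch only, not the actual Volca source; the function name and layout are hypothetical. Kinect depth frames arrive as 16-bit millimetre values, so isolating a subject by depth is a per-pixel range test:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch (not the Volca source): keep only depth samples
// between a near and a far clipping plane, so the mesh builder can
// isolate a subject by depth. Kinect depth arrives in millimetres.
std::vector<uint16_t> clipDepth(const std::vector<uint16_t>& depthMM,
                                uint16_t nearClipMM, uint16_t farClipMM) {
    std::vector<uint16_t> out(depthMM.size(), 0); // 0 = emit no geometry here
    for (std::size_t i = 0; i < depthMM.size(); ++i) {
        uint16_t d = depthMM[i];
        if (d >= nearClipMM && d <= farClipMM) {
            out[i] = d; // inside the clip volume: keep for meshing/painting
        }
    }
    return out;
}
```

Pixels outside the clip volume are zeroed, which a mesh builder can treat as "no vertex", leaving only the depth slice of interest.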

‘Girl in red sweater’ rendered with Delaunay triangulation

In progress:

  • Battery-power modification of the Kinect to allow portability, particularly for street photography.
  • Inclusion of an additional DSLR (Canon 5D Mk II) above the Kinect, allowing high-resolution RGB frames to be painted onto the mesh.
  • Addition of functionality to record both long-exposure and timelapse imaging.
  • Improved CV functions for face detection and flow detection.
  • Incorporation of image-recognition functions to ‘learn’ to label objects.
  • EXIF metadata with GPS tagging, timestamping, capture settings etc.
  • AR-style data overlay scraped from web sources.
  • Wifi and radio-spectrum signal data sensing.

Construction

The camera is partially built out of the ofxKinect libraries from openFrameworks (http://openframeworks.cc/documentation/ofxKinect/ofxKinect/), using the Kinect mesh recorder demo from Pelayo Mendez (forked source code is here: https://github.com/danbz/ofxKinectMeshRecorder).
The work on DepthKit, an RGBD (RGB and Depth) video project using a Kinect and a linked DSLR, has been useful reading. That approach is geared more toward 3D video/motion capture for immersive video experiences; the work is complex and is now being commercialised. However, I have a fork of the original open-source material here: https://github.com/danbz/ofxRGBDepth.

Early Volumetric Camera recording tests with unpainted point cloud in mp4.

3d printed parts for volumetric camera mount.


The designs are available open source at Thingiverse.
I ordered the printing through 3dhub.com and it was printed in black PETG by Simon at Demon3d in Bristol.

The parts are assembled with stainless steel nuts and bolts and have a Manfrotto-style camera quick-release mount on top for ease of use.

Underneath the unit has a standard mount to connect to tripods, gimbals and other photographic equipment.

The mount unit will be extended to house the hacked Kinect battery pack and a new portable mini PC (possibly a Raspberry Pi).

3d printed parts for volumetric camera mount with Canon EOS-M mirrorless camera for RGB image collection

Tests with multiple rendering playback styles, freeze-frame and frame scrubbing enabled.

The newest tests use OpenCV routines for erosion, dilation and Gaussian blur, applied to either the RGB image feed or the depth data to change the live mesh. They also feature new recording settings for single-shot recording and playback, with EXIF metadata recording and loading.
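To illustrate what a morphology pass does to the depth feed, here is a minimal sketch of 3x3 greyscale erosion of the kind OpenCV's cv::erode performs (a hypothetical standalone re-implementation, not the Volca source). Applied to the depth image, erosion shrinks bright regions and so visibly eats away at the edges of the live mesh; dilation is the same loop with std::max:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch (not the Volca source): 3x3 greyscale erosion,
// the morphology operation cv::erode applies with a 3x3 kernel.
// Each output pixel becomes the minimum of its 3x3 neighbourhood.
std::vector<uint8_t> erode3x3(const std::vector<uint8_t>& img, int w, int h) {
    std::vector<uint8_t> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            uint8_t m = 255;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = std::clamp(x + dx, 0, w - 1); // replicate border
                    int ny = std::clamp(y + dy, 0, h - 1);
                    m = std::min(m, img[ny * w + nx]);
                }
            }
            out[y * w + x] = m;
        }
    }
    return out;
}
```

Running this repeatedly exaggerates the effect, which is roughly what iterating the OpenCV routine over the live depth buffer does to the mesh.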

Test of openFrameworks Kinect RGBD Volca volume camera software with simple OpenCV routines, from Daniel Buzzo on Vimeo.

Shooting tests in open countryside relied on livestock being cooperative and staying within 3m, due to the poor IR performance in daylight.

After a number of excursions into the great outdoors with the camera rig, it is painfully apparent how poorly the infrared Kinect sensors perform in the presence of any kind of daylight.

Portable rig with USB button interface for external control, with CPU, batteries etc carried in shoulder bag.

The first tests with the shoulder-bag-carried portable rig felt uncannily like my early video experiments using the classic Sony Portapak portable video camera: a hand-held (but rather bulky) camera tethered to an unfeasibly heavy leatherette shoulder bag encasing a metal-bodied VHS recorder and battery pack.

After revisiting sensor research, I ran a number of tests on synthesising depth-image data from natural-light stereo image pairs, generating difference (disparity) maps using the OpenCV computer vision libraries. Porting the code from Processing to openFrameworks C++ was straightforward and yielded reasonable results, though it reinforced the importance of calibration for achieving accurate output.
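The core idea behind those stereo tests can be sketched as follows (hypothetical names, not the ported code): for each pixel on a scanline, find the horizontal shift (disparity) that best matches a patch from the left image in the right image, then recover metric depth from depth = focal length x baseline / disparity:

```cpp
#include <cstdlib>
#include <limits>
#include <vector>

// Illustrative sketch (not the ported code): block matching along one
// scanline using sum of absolute differences (SAD). Returns the
// disparity (pixel shift) that best matches the left patch at x.
int bestDisparity(const std::vector<int>& leftRow,
                  const std::vector<int>& rightRow,
                  int x, int window, int maxDisp) {
    int best = 0;
    long bestCost = std::numeric_limits<long>::max();
    for (int d = 0; d <= maxDisp; ++d) {
        long cost = 0;
        for (int k = -window; k <= window; ++k) {
            int xl = x + k;
            int xr = x + k - d; // a right-camera feature sits d pixels left
            if (xl < 0 || xr < 0 ||
                xl >= (int)leftRow.size() || xr >= (int)rightRow.size()) {
                cost += 255; // penalise out-of-bounds samples
                continue;
            }
            cost += std::abs(leftRow[xl] - rightRow[xr]);
        }
        if (cost < bestCost) { bestCost = cost; best = d; }
    }
    return best;
}

// Convert disparity (pixels) to metric depth, given focal length in
// pixels and the stereo baseline in metres.
double disparityToDepth(double focalPx, double baselineM, int disparityPx) {
    return disparityPx > 0 ? focalPx * baselineM / disparityPx : 0.0;
}
```

The depth formula makes the calibration point above concrete: the result is only as good as the focal-length and baseline values, which is exactly what stereo calibration estimates.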

Generating difference and depth maps via OpenCV from a pair of horizontally separated images

The project is moving on, with refactored code and a number of different iterations of how the code and the data-management process should be handled. The development of the Volca project is coming full circle, back to its origins as a philosophical question: what is it to see, and what if we can see otherwise? The seeds of the project are probably in the short paper ‘Collaborating with Intelligent Machines’, presented at the ACM SIGCHI conference in Seoul, Korea in 2016.

Initial depth testing with ZED stereo camera on Jetson TK1 board running modified L4T Ubuntu Linux 14

The code runs in C++ using openFrameworks (openframeworks.cc), a framework/library geared toward art, design and interaction. After experiments in portability, running the code on Mac Mini computers powered by inverters and batteries in rucksacks, and porting to Raspberry Pi, I recently acquired an nVidia Jetson TK1 board and a Stereolabs ZED natural-light depth camera.

The initial tests running the Jetson board are positive, and the ZED camera, whilst so far appearing less accurate than either the structured-light Kinect 1414 or the time-of-flight Kinect 2 infrared depth cameras, has the advantage of working in sunlight, i.e. outside.

Successful initial deployment of nVidia Jetson TK1 board, with soon-to-be-deposed Mac Mini sulking underneath.

The ZED camera gives interesting results in variable light conditions compared to infrared depth cameras such as the Kinect; here you can see the undulation of the far reaches of a depth-generated mesh in low visible light.

Latitude and longitude, satellite position, lock, strength, altitude and time from external GPS receiver

Additional development is going on in parallel to the hardware and core software design. Using a GPS antenna (found in a shoe box, left over from an unfinished 2011 project to make a self-aware, voice-controlled in-car music system), I have code running to give the Volca hardware location awareness. The lat/long XML/EXIF data is already coded into the core software, and integration will come in the next phase.
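Consumer GPS receivers of this kind typically stream NMEA sentences over serial, with latitude and longitude encoded as degrees-and-minutes (ddmm.mmmm). A minimal sketch of the conversion to the decimal degrees used in XML/EXIF metadata might look like this (hypothetical helper, not the Volca source):

```cpp
#include <cstdlib>
#include <string>

// Illustrative sketch (not the Volca source): convert an NMEA
// coordinate field (ddmm.mmmm for latitude, dddmm.mmmm for longitude)
// plus its hemisphere letter into signed decimal degrees, the form
// used for EXIF/XML GPS tagging.
double nmeaToDecimalDegrees(const std::string& field, const std::string& hemi) {
    double raw = std::atof(field.c_str()); // e.g. "5130.0000" = 51 deg 30.0 min
    int degrees = (int)(raw / 100.0);      // degrees are the digits before mm.mmmm
    double minutes = raw - degrees * 100.0;
    double dec = degrees + minutes / 60.0;
    return (hemi == "S" || hemi == "W") ? -dec : dec; // south/west are negative
}
```

Each $GPGGA sentence carries these fields alongside fix quality, satellite count, and altitude, which matches the data listed in the caption above.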

Matched with the Jetson GPU board, the intention is to have a daylight-capable, battery-powered, portable depth camera.

3d cat rendered as point cloud
Of minor development significance, but an interesting real-world test: a variety of portraits of the cat that owns our house, Linus, here seen posing as an unpainted 3D point cloud on the staircase.

Volca Volumetric camera project. VR photography portfolio from Daniel Buzzo on Vimeo.

Premiered in Montreal, March 2018, at the SIGCHI conference and exhibition.

Read the whole paper here: http://eprints.uwe.ac.uk/34674/

Camera software written in C++ openFrameworks.
Portfolio constructed in Unity3D for HTC Vive headset.

With the grateful assistance of Alexander Birke at https://www.outofboundsgames.com
See the current source code here on GitHub: https://github.com/danbz/volume-camera