Volca is a current project centred on investigating new kinds of camera for new kinds of imaging. It uses volumetric imaging from a Kinect 3D sensor, driven by OpenKinect wrapped in C++ openFrameworks.
Currently the system creates a live 3D mesh with fore and back clipping, allowing elements to be isolated by depth; paints the mesh from the inbuilt RGB camera; records and plays back painted meshes; and allows 3D viewing of the mesh. The OpenCV computer vision library is incorporated, providing simple blob and object detection and image-processing routines applied to both RGB and depth data.
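As a rough illustration of the clipping-and-painting step, a depth-clipped point mesh painted from the RGB feed can be rebuilt each frame along these lines (a minimal sketch using the ofxKinect addon; the function name, clip thresholds and sampling stride are illustrative, not the actual Volca code):

```cpp
#include "ofxKinect.h"

// Minimal sketch: rebuild a depth-clipped point mesh each frame,
// painted from the Kinect's RGB camera via ofxKinect.
// Assumes 'kinect' has already been init()ed and open()ed in setup().
ofMesh buildClippedMesh(ofxKinect &kinect, float nearClip, float farClip) {
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_POINTS);
    const int w = 640, h = 480; // Kinect v1 depth/RGB resolution
    const int step = 2;         // sample every 2nd pixel to keep the mesh light
    for (int y = 0; y < h; y += step) {
        for (int x = 0; x < w; x += step) {
            float dist = kinect.getDistanceAt(x, y); // distance at this pixel (mm in current ofxKinect)
            if (dist > nearClip && dist < farClip) { // fore/back clipping
                mesh.addVertex(kinect.getWorldCoordinateAt(x, y));
                mesh.addColor(kinect.getColorAt(x, y)); // paint from the RGB feed
            }
        }
    }
    return mesh;
}
```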
In progress:
- Battery-power modification of the Kinect to allow portability, particularly for street photography.
- Inclusion of an additional DSLR (Canon 5D MkII) above the Kinect, to allow high-resolution RGB frames to be painted onto the mesh.
- Addition of functionality to record both long-exposure and timelapse imaging.
- Improved CV functions for face detection and flow detection.
- Incorporation of image-recognition functions to ‘learn’ to label objects.
- EXIF metadata with GPS tagging, timestamping, capture settings, etc.
- AR-style data overlay scraped from web sources.
- WiFi and radio-spectrum signal-data sensing.
The system is partially built out of the ofxKinect addon from openFrameworks (http://openframeworks.cc/documentation/ofxKinect/ofxKinect/) and uses the Kinect mesh recorder demo from Pelayo Mendez (forked source code is here: https://github.com/danbz/ofxKinectMeshRecorder).
The work on DepthKit, an RGBD (RGB and depth) video project using a Kinect and a linked DSLR, has been useful reading. Its approach is geared more toward 3D video/motion capture for immersive video experiences; the work is complex and now being commercialised. However, I have a fork of the original open-source material here: https://github.com/danbz/ofxRGBDepth.
Early volumetric camera recording tests with an unpainted point cloud, in MP4.
The parts are assembled with stainless steel nuts and bolts and have a Manfrotto-style quick-release camera mount on top for ease of use.
Underneath, the unit has a standard mount to connect to tripods, gimbals and other photographic equipment.
The mount unit will be extended to house the hacked Kinect battery pack and a new portable mini PC (possibly a Raspberry Pi).
3D-printed parts for the volumetric camera mount, with a Canon EOS-M mirrorless camera for RGB image collection.
Tests with multiple rendering playback styles, with freeze-frame and frame scrubbing enabled.
The newest tests apply OpenCV routines for erosion, dilation and Gaussian blur to either the RGB image feed or the depth data, changing the live mesh. They also feature new recording settings for single-shot recording and playback, with EXIF metadata recording and loading.
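For reference, the underlying calls are along these lines (a minimal sketch in raw OpenCV C++; the kernel and blur sizes are illustrative values, and in the app the frames come from the Kinect RGB and depth feeds):

```cpp
#include <opencv2/imgproc.hpp>

// Apply erosion, dilation and Gaussian blur to a frame (RGB or depth)
// before it is used to rebuild the live mesh.
void filterFrame(cv::Mat &frame) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::erode(frame, frame, kernel);  // shrinks bright regions, removing speckle noise
    cv::dilate(frame, frame, kernel); // grows bright regions, filling small holes
    cv::GaussianBlur(frame, frame, cv::Size(5, 5), 0); // smooths remaining noise
}
```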
After a number of excursions into the great outdoors with the camera rig, it is painfully apparent how poorly the infrared Kinect sensors perform in the presence of any kind of daylight.
The first tests with the shoulder-bag-carried portable rig felt uncannily like my early video experiments using the classic Sony Portapak VHS portable video camera: a hand-held (but rather bulky) camera tethered to an unfeasibly heavy leatherette shoulder bag encasing a metal-bodied VHS recorder and battery pack.
After revisiting sensor research, I ran a number of tests on synthesising depth-image data from natural-light stereo image pairs via the generation of disparity (difference) maps using the OpenCV computer vision libraries. Porting the code from Processing to openFrameworks C++ was straightforward and yielded reasonable results, though it reinforced the importance of calibration in achieving accurate results.
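The core of such a test looks roughly like this (a sketch using OpenCV's block-matching stereo; the disparity and window parameters are illustrative, and as noted above the pair should really be rectified with calibrated camera parameters first):

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>

// Synthesise a depth-like disparity map from a natural-light stereo pair.
// Assumes the pair is already rectified; parameter values are illustrative.
cv::Mat disparityFromPair(const cv::Mat &left, const cv::Mat &right) {
    cv::Mat leftGray, rightGray, disparity16, disparity8;
    cv::cvtColor(left, leftGray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(right, rightGray, cv::COLOR_BGR2GRAY);
    // Block matcher: 64 disparity levels, 21-pixel matching window.
    cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 21);
    matcher->compute(leftGray, rightGray, disparity16); // 16-bit output, scaled by 16
    disparity16.convertTo(disparity8, CV_8U, 255.0 / (64.0 * 16.0)); // normalise for display
    return disparity8;
}
```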
The project is moving on with refactored code and a number of different iterations of how the code, and the data-management process, should be handled. The development of the Volca project is coming full circle, back to its origins as a philosophical question: what is it to see, and what if we can see other? The seeds of the project are probably in the short paper ‘Collaborating with Intelligent Machines’, presented at the ACM SIGCHI conference in Seoul, Korea in 2016.
The code runs in C++ using openFrameworks (openframeworks.cc), a framework/library geared toward art/design and interaction. After experiments in portability, running the code on Mac Mini computers powered by inverters and batteries in rucksacks, and porting to Raspberry Pi, I recently acquired an nVidia Jetson TK1 board and a Stereolabs ZED natural-light depth camera.
The initial tests running the Jetson board are positive, and the ZED camera, whilst so far appearing less accurate than either the structured-light Kinect 1414 or the time-of-flight Kinect2 infrared depth cameras, has the advantage of working in sunlight, i.e. outside.
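Reading depth out of the ZED is straightforward with the Stereolabs SDK (a minimal sketch, written against the 2.x-era SDK as shipped for Jetson boards; enum spellings differ between SDK versions, so check the local headers):

```cpp
#include <sl/Camera.hpp>

// Minimal sketch: open the ZED and read one depth frame.
int main() {
    sl::Camera zed;
    sl::InitParameters params;
    params.depth_mode = sl::DEPTH_MODE_PERFORMANCE; // lighter depth mode for the Jetson
    if (zed.open(params) != sl::SUCCESS) return 1;

    sl::Mat depth;
    if (zed.grab() == sl::SUCCESS) {
        zed.retrieveMeasure(depth, sl::MEASURE_DEPTH); // 32-bit float depth map
        float d = 0;
        depth.getValue(320, 240, &d); // depth at the image centre, in SDK units
    }
    zed.close();
    return 0;
}
```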
The ZED camera gives interesting results in variable light conditions in comparison to infrared depth cameras such as the Kinect; here you can see the undulation of the far reaches of a depth-generated mesh in low visible light.
Additional development is going on in parallel to the hardware and core software pattern design. Using a GPS antenna (found in a shoebox from an unfinished 2011 project to make a self-aware, voice-controlled in-car music system), I have code running to give the Volca hardware location awareness. The lat/long XML/EXIF data is already coded into the core software, and integration will come in the next phase.
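As a sketch of what that location-awareness code does (assuming the antenna module streams standard NMEA sentences over serial, as most hobbyist GPS units do; the helper names here are hypothetical, and only the $GPGGA fix sentence is handled):

```cpp
#include <cmath>
#include <sstream>
#include <string>
#include <vector>

// Convert an NMEA ddmm.mmmm field plus hemisphere letter to decimal degrees.
double nmeaToDecimal(const std::string &value, const std::string &hemi) {
    double raw = std::stod(value);
    double degrees = std::floor(raw / 100.0);
    double minutes = raw - degrees * 100.0;
    double dec = degrees + minutes / 60.0;
    return (hemi == "S" || hemi == "W") ? -dec : dec;
}

// Parse latitude/longitude out of a $GPGGA fix sentence, e.g.
// $GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47
bool parseGGA(const std::string &sentence, double &lat, double &lon) {
    if (sentence.rfind("$GPGGA", 0) != 0) return false;
    std::vector<std::string> fields;
    std::stringstream ss(sentence);
    std::string field;
    while (std::getline(ss, field, ',')) fields.push_back(field);
    if (fields.size() < 6 || fields[2].empty() || fields[4].empty()) return false;
    lat = nmeaToDecimal(fields[2], fields[3]);
    lon = nmeaToDecimal(fields[4], fields[5]);
    return true;
}
```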
Matched with the Jetson GPU board, the intention is to have a daylight-capable, battery-powered, portable depth camera.
See the current source code here on GitHub: https://github.com/danbz/volume-camera