
softVNS 2 Real Time Video Processing and Tracking Software
Reviewed by Margaret Schedel

softVNS 2.0 is listed for US$ 350. Current versions are softVNS 2.19d for Mac OS X and softVNS 2.17 for Macintosh OS 9. Contact David Rokeby; electronic mail drokeby@sympatico.ca; Web homepage.mac.com/davidrokeby/softVNS.html.

David Rokeby developed the Very Nervous System in 1986 as an installation that enabled a computer to trigger sound and music in response to the movement of human bodies. His computer was not fast enough to analyze streaming video data so he created a rudimentary camera out of 64 light sensors and a plastic lens. Using the Very Nervous System with its dedicated external digitizers to capture, convert, and extract motion information from live video, he has created installations for over a decade, adding hardware and software modules as needed. Since 1993 Mr. Rokeby has sold his technology as several generations of upgradable proprietary hardware. In 1999 he reworked the system to run under the Max programming environment and dubbed it SoftVNS; in July 2002 he released an updated and expanded version of this software: softVNS 2.

The integration of softVNS with Max is seamless: the software resides within the Max folder and is authorized by SoftVideo.key, a file that the developer sends the user via electronic mail and that also resides within the Max folder. softVNS objects are denoted by a "v." prefix, similar to Jitter's "jit." nomenclature. Speaking of Jitter, Mr. Rokeby has written objects that translate from softVNS streams to Jitter matrices and back again. softVNS overlaps with Jitter in video playback and processing, but Jitter is designed for general data processing, including OpenGL geometry, audio, physical models, state maps, generative systems, text, or any other type of data that can be described in a matrix of values. softVNS is designed purely for processing video and contains a very useful set of objects for real-time video tracking, including presence and motion tracking, color tracking, head tracking, and object following.

The tracking algorithms in softVNS 2 are unbelievable. I'm sure given enough time I could build similar patches in Jitter, but Mr. Rokeby has had 20 years of experience tracking movement and it shows. The key technique in classic softVNS motion tracking, implemented in v.motion, is frame differencing: the current frame is subtracted from the previous one, and the differences are generally caused by movement in the image. There are many other tracking algorithms: v.edges shows the edges of movement; v.heads tracks multiple objects at once; v.track follows a specified small object; v.bounds draws a rectangle around the borders of the object and gives its center (see Figure 1); and v.centroid gives the center of gravity of an object. It is fascinating to compare the center of an object as measured by its boundaries with its center of gravity. Together with objects that massage the incoming video stream to make it easier to track, these following objects are the strongest reason to purchase softVNS 2 if you already own Jitter.
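The frame-differencing idea behind v.motion, and the contrast between a bounding-box center (as v.bounds reports) and a center of gravity (as v.centroid reports), can be sketched in a few lines of NumPy. This is an illustrative approximation of the general technique, not Mr. Rokeby's implementation; the function names are hypothetical.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=30):
    # Frame differencing: pixels whose brightness changed by more than
    # `threshold` between consecutive frames are treated as motion.
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold

def bounding_box_center(mask):
    # Center of the rectangle bounding all moving pixels
    # (roughly what v.bounds gives).
    ys, xs = np.nonzero(mask)
    return ((ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0)

def centroid(mask):
    # Center of gravity of the moving pixels (roughly what v.centroid gives).
    ys, xs = np.nonzero(mask)
    return (ys.mean(), xs.mean())

# A small off-center blob plus one stray moving pixel makes the two
# "centers" disagree, as described above.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200   # 2x2 moving blob
curr[6, 6] = 200       # one stray moving pixel
mask = motion_mask(prev, curr)
print(bounding_box_center(mask))  # (3.5, 3.5): pulled toward the stray pixel
print(centroid(mask))             # (2.4, 2.4): stays near the dense blob
```

The stray pixel drags the bounding-box center well away from the blob, while the center of gravity barely moves, which is exactly why comparing the two measures is interesting for tracking bodies.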

The other half of softVNS allows video playback and manipulation. In his own installations, Mr. Rokeby has always had video and sound reacting to body movement, and he developed softVNS before Jitter and Nato were available. Working with video artist Charlie Woodman, I created a patch that tracks the movement of a dancer who controls over 60 parameters of the sound and 10 parameters of the video. Previously, I had created video patches for Mr. Woodman with Jitter, but he was more than happy with the video "effects" in softVNS 2. Although we didn't have to use the v.jit object, it works perfectly.

Getting started with softVNS 2 is pretty easy; there is no tutorial per se, but a patch called softVNS_2_Overview introduces the softVNS 2 objects and allows easy access to example/help patches for each object. The objects are divided by type, including sources/capture/display, spatial transformation, and tracking/analysis. I found it much more useful to simply work...
