Recently I have been researching ways of tracking and visualising pedestrians' use of space. The two technologies I have been looking at are volumetric reconstruction and model-based figure tracking. The two methods take very different approaches. Volumetric reconstruction is, in one respect, completely dumb: it does not try to understand what it is looking at, and instead uses the mathematics of projection to reconstruct space purely from what it can see. In contrast, model-based figure tracking attempts to model the behaviour and properties of a person in order to understand how they are moving through space. Instead of just giving raw use of space, it gives you an idea of what the bodies are actually doing, and how.
Volumetric reconstruction uses position-calibrated cameras to capture a silhouette from each camera angle, then uses these silhouettes to reconstruct the space being occupied. The process is the reverse of the Left, Top, and Front view windows in a 3D modelling program: the 3D program creates those views from what it knows about the 3D model, whereas here we effectively recreate the model from the views. Like the three view windows, this process needs several distinct perspectives to recreate a scene properly. Its main weakness is in handling occlusion, as it has no way of knowing about any gaps behind what it can see.
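The core idea can be sketched in a few lines. This is a minimal toy version with two orthographic "cameras" (a top-down view and a front view) rather than real calibrated projections: a voxel is kept only if it falls inside the silhouette seen by every camera, which is exactly the intersection-of-views idea described above. All shapes and sizes here are made up for illustration.

```python
import numpy as np

def carve(top_sil, front_sil):
    """Shape-from-silhouette with two orthographic views.

    top_sil:   (X, Y) boolean mask seen from above.
    front_sil: (X, Z) boolean mask seen from the front.
    Returns an (X, Y, Z) occupancy grid: voxel (x, y, z) survives
    only if both silhouettes agree it could be occupied.
    """
    return top_sil[:, :, None] & front_sil[:, None, :]

# Toy scene: a 2x2 footprint seen from above, full height from the front.
top = np.zeros((4, 4), dtype=bool)
top[1:3, 1:3] = True
front = np.zeros((4, 3), dtype=bool)
front[1:3, :] = True

hull = carve(top, front)
print(hull.sum())  # 2x2 footprint x 3 voxels of height = 12 voxels
```

With real perspective cameras the projection of each voxel is computed from the calibration matrices instead of a simple axis-aligned broadcast, but the carving logic is the same.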
The above and below images are from a 3D simulation of a test system using only two cameras. Each camera's raw view is processed to find the difference between the constant background and the changing pixels of the pedestrian; the images on the right are the results of this processing.
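That background-differencing step can be sketched very simply: compare each frame against a stored image of the empty scene, and mark pixels that differ by more than a threshold as foreground. The threshold value and the toy 8x8 "scene" below are illustrative assumptions, not the actual test-system settings.

```python
import numpy as np

def silhouette(frame, background, threshold=30):
    """Mark pixels that differ from the static background by more
    than `threshold` grey levels as foreground (the silhouette)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a flat grey background with a darker "pedestrian" patch.
background = np.full((8, 8), 128, dtype=np.uint8)
frame = background.copy()
frame[2:6, 3:5] = 40            # the moving figure

mask = silhouette(frame, background)
print(mask.sum())               # 4x2 patch -> 8 foreground pixels
```

A real system would add noise suppression (blurring, morphological clean-up) and a slowly adapting background model to cope with lighting changes, but this is the essence of the right-hand images.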
The reconstruction works quite well provided there is no occlusion, i.e. people walking in front of each other. In fact, as we are dealing only with pedestrians, we could probably get away with just one camera: the top-down view. This is because we can assume a reasonable height for the pedestrians. The top-down view is also extremely helpful because it features very little occlusion. Think of a bird's-eye view of people walking around, and you'll agree it's not very often that people jump up or fly in front of each other!
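To see why an assumed height makes one camera enough: with a downward-looking pinhole camera, a blob in the image maps straight to a ground position once you fix the plane it sits on. A hypothetical sketch, with made-up camera parameters (mounting height, focal length, principal point) purely for illustration:

```python
def blob_to_ground(px, py, cam_height_m=6.0, person_height_m=1.7,
                   focal_px=800.0, cx=320.0, cy=240.0):
    """Back-project a head-top pixel (px, py) from a downward-looking
    pinhole camera onto the horizontal plane at the assumed head height.
    All parameters are illustrative assumptions."""
    depth = cam_height_m - person_height_m   # camera-to-head-plane distance
    x = (px - cx) * depth / focal_px
    y = (py - cy) * depth / focal_px
    return x, y

# A blob 160 pixels right of the image centre lands roughly 0.86 m
# from the point directly below the camera.
ground = blob_to_ground(480, 240)
```

The assumed 1.7 m height introduces a small position error for unusually tall or short people, but for visualising use of space that error is tiny compared with the size of a footprint on the floor.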
As you can see, the results are not perfect, but they are very easily achieved and visually interesting. The process can easily be run in real time, resulting in animated blobs moving about the open space.
Today I have been looking at volumetric reconstruction, which I am more familiar with, having worked with it while implementing my 3D scanner. Hopefully I will get a chance to look at model-based tracking in more depth (Reading University, where I studied, has a strong computer vision group doing this sort of thing). If you are super-keen, you can check out a paper in the field: “Robust Pedestrian Tracking Using a Model-Based Approach”.