Sunday, February 28, 2010

Comparison Between IOimage and VideoIQ

http://spotonsecurity.com/2010/01/08/comparison-testing-is-hard-to-find/

Posted by Doug Marman under VideoIQ, 1/8/10


"I will try to be objective about the results, but of course that’s not as easy as it sounds.

The demos weren’t long. We each had about 10 minutes to show our products working. Not exactly an in-depth test. Both systems had to detect someone walking from behind the curtains across the room. With our system they also tested “Object Missing” and with IOimage they tested “Object Left Behind”.

Both products worked fairly well. Some of the differences were minor but interesting.

IOimage needed to calibrate their system before the event. They used two people and it took them about 30 minutes. However, since it was indoors, they could cut their process short. In general, it followed the process shown in their calibration videos, but these guys were clearly experienced and short on time, so they moved fast. One person sat at the PC and marked the head and feet of the other person at four different places in the room. They drew a line on the floor marking a predetermined distance (10′ in this case). They only calibrated one axis, not two, and they only calibrated the one area that was going to be tested, not the whole room.
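To make that procedure concrete, here is a minimal sketch of the geometry such a calibration implies: fit pixel height as a function of foot position from the four head/feet marks, and anchor real-world scale with the 10′ line. All numbers are invented, and this is not IOimage's actual algorithm, just the standard single-axis model their steps suggest.

```python
import numpy as np

# (foot_row_px, person_height_px) read off the four head/feet marks.
samples = [(420, 180), (360, 150), (300, 122), (250, 100)]

y = np.array([s[0] for s in samples], dtype=float)
h = np.array([s[1] for s in samples], dtype=float)
a, b = np.polyfit(y, h, 1)          # pixel height ~ a * foot_row + b

# The 10' line drawn on the floor anchors pixel-to-world scale there.
line_px, line_ft = 260.0, 10.0
px_per_ft_at_line = line_px / line_ft

def expected_person_height_px(foot_row):
    """Predicted pixel height of a standing person with feet at foot_row."""
    return a * foot_row + b

print(expected_person_height_px(330))   # sanity check between the marks
```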

Our system doesn’t need calibration, as I’ve mentioned before. So, we had plenty of time. Picture the Maytag repair guy with his feet up, while waiting.

However, what we found interesting was the way IOimage positioned the whole process of calibration and tuning. They claimed that calibration is what makes detection better, and that this is why they could detect someone crawling. However, as I’ve said before, this is wrong. While calibration clearly makes their system work better, in our case the calibration runs automatically. So it is really a question of manual calibration versus automatic calibration. As I’ve pointed out, there are lots of big disadvantages to manual calibration:

http://spotonsecurity.com/2009/11/05/the-tuning-and-calibration-controversy/
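For contrast, here is a hedged sketch of what automatic calibration can mean in practice: the system accumulates its own person detections over time and fits the same scale model, with no installer in the scene. This is the general pattern, not VideoIQ's actual implementation.

```python
import numpy as np

class AutoScaleModel:
    """Learns the pixel-height-vs-row model from the system's own detections."""

    def __init__(self, min_samples=50):
        self.samples = []                # (foot_row_px, height_px) pairs
        self.min_samples = min_samples
        self.coeffs = None

    def observe(self, bbox):
        """Feed every confident person detection; bbox = (x, y, w, h)."""
        x, y, w, h = bbox
        self.samples.append((y + h, h))  # the bbox bottom is the foot row
        if len(self.samples) >= self.min_samples:
            rows = np.array([s[0] for s in self.samples], dtype=float)
            heights = np.array([s[1] for s in self.samples], dtype=float)
            self.coeffs = np.polyfit(rows, heights, 1)

    def expected_height(self, foot_row):
        """None until enough samples have been seen; then the model estimate."""
        return None if self.coeffs is None else np.polyval(self.coeffs, foot_row)
```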

The issue about detecting crawling people is bogus. Perhaps they don’t realize it. They might think that calibration has something to do with this, but it doesn’t. It is simply that we have not yet developed an object classification type for people crawling. Quite frankly, we haven’t had any demand for it yet. The fact that we can still detect people crawling is easily demonstrated by setting our detection to “suspicious objects”, which readily captures people who crawl.

On the other hand, IOimage worked hard to make sure that they were getting a full view of the people in the area they wanted to detect, and they were especially concerned about moving chairs or tables out of the way. We didn’t have to worry about that, because we have an object classification type for head-and-shoulders detection of people. That’s pretty important indoors, because chairs and desks are quite common. IOimage apparently doesn’t have that object classification type, which is why they needed to see the whole person. In other words, this has nothing to do with calibration; it is all about the types of objects the system has been trained to detect.
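As an illustration of why a head-and-shoulders class matters indoors, here is a stand-in using OpenCV's stock upper-body cascade, which keeps firing even when desks and chairs hide the legs. It is not either vendor's classifier, just the same idea in miniature.

```python
import cv2

# OpenCV ships an upper-body Haar cascade alongside its other models.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")

def detect_upper_bodies(frame):
    """Returns (x, y, w, h) boxes even when the lower body is occluded."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
```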

You can read my previous post about the downside of manual calibration. But here are some other questions: How do you calibrate boats? Does someone have to walk out on the water? If you are setting up a system to watch the tarmac at an airport, do you have to shut the runway down while you walk out there to calibrate it? What about hazmat sites? Do you really need to send people in there to calibrate the camera? Besides all the other problems with manual calibration that I mentioned in my previous post, it isn’t always practical.

The real test comes down to how well the systems work. That’s what really matters. Unfortunately, this wasn’t a stringent test. We wished it could have been outdoors with trees blowing in the background, to make it a tougher comparison. In general, both systems seemed to work well. We spotted some minor false detects on their system when the curtains moved, high up in the air where calibration should have ruled detection out, but for some reason didn’t.
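That curtain false alarm is exactly what a calibration-derived size gate should catch: a "person" whose feet would be high up a wall, or whose size is wrong for its image row, gets rejected. A sketch, with made-up coefficients in the form of the calibration examples above:

```python
import numpy as np

coeffs = (-0.53, 402.0)          # hypothetical fit: height = a * foot_row + b
horizon_row = 120                # rows above this cannot contain feet

def plausible_person(bbox, tol=0.5):
    """Gate a detection against the calibrated size model."""
    x, y, w, h = bbox
    foot_row = y + h
    if foot_row < horizon_row:   # e.g. movement high up in the curtains
        return False
    expected = np.polyval(coeffs, foot_row)
    return abs(h - expected) <= tol * expected
```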

Another noticeable difference was that they were only streaming the video from their camera, while our camera captures and records the event. With their system, it appears to be harder to review something you just missed. In our system, you get a video clip that you can play back whenever you are ready. You don’t have to be there watching. IOimage would have to add an NVR to their system to get storage playback.
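The pre-alarm clip capability comes down to a rolling buffer: keep the last N seconds of frames so that when a rule fires, the saved clip already includes what led up to the event. A minimal sketch, with the frame source and clip writer left as placeholders:

```python
from collections import deque

FPS = 15
PRE_ALARM_SECONDS = 10

# The ring always holds the most recent PRE_ALARM_SECONDS of video.
ring = deque(maxlen=FPS * PRE_ALARM_SECONDS)

def on_frame(frame):
    """Called for every frame; old frames fall off the back automatically."""
    ring.append(frame)

def on_alarm(save_clip):
    """Snapshot the pre-alarm frames the instant a rule fires."""
    clip = list(ring)            # frames from *before* the event
    save_clip(clip)              # post-alarm frames can be appended after
```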

What IOimage added instead, which we don’t have, is mouse trails of where a person has been in the scene. This helps when it isn’t easy to see where someone has been: you can just look at their previous trails. That works. I would rather watch the actual pre-alarm video to see what they were actually doing, but both systems offer something that works.
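Trails are simple to build on top of any tracker: keep the last N centroids per object ID and draw them as a polyline. A small sketch, assuming a tracker that yields (object_id, centroid) pairs; nothing here is IOimage's code.

```python
from collections import defaultdict, deque

import cv2
import numpy as np

TRAIL_LEN = 64
trails = defaultdict(lambda: deque(maxlen=TRAIL_LEN))

def update_and_draw(frame, tracked):
    """tracked: iterable of (object_id, (cx, cy)) from some tracker."""
    for obj_id, centroid in tracked:
        trails[obj_id].append(centroid)
        pts = np.array(trails[obj_id], dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(frame, [pts], isClosed=False,
                      color=(0, 255, 0), thickness=2)
    return frame
```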

Apparently, from what we could see, IOimage can adjust sensitivity, but it seems to change it for the whole camera. We can set sensitivity individually for each detection rule. So our missing-object rule can be very sensitive, if we want, while people detection does not need to be.
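Per-rule sensitivity is just a threshold stored with each rule rather than one camera-wide knob. A toy sketch with invented rule names and values:

```python
rules = {
    "missing_object": {"class": "static_object", "sensitivity": 0.9},
    "person_in_room": {"class": "person",        "sensitivity": 0.6},
}

def fires(rule_name, detection_score):
    """Higher sensitivity means a lower score is enough to trigger."""
    return detection_score >= 1.0 - rules[rule_name]["sensitivity"]

print(fires("missing_object", 0.2))   # True: that rule is very sensitive
print(fires("person_in_room", 0.2))   # False: this one needs 0.4 or more
```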

One of the best things about their product is their web interface. Very well done. An integrator can do all of their calibration through the web, and it worked smoothly. If you have to calibrate, that’s a big plus.

However, they had to switch their camera from people detection to object left behind during the demo. Apparently it doesn’t detect both at the same time. Ours does, and in fact can run quite a few different types of detections with different types of objects, in different regions of interest, all at the same time on the same camera.
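Running several detections at once is, at its core, evaluating every rule against every frame, each rule with its own object class and region of interest. A sketch with illustrative classes, ROIs, and detection format:

```python
RULES = [
    {"name": "person_entry",   "cls": "person",      "roi": (0, 200, 640, 280)},
    {"name": "object_left",    "cls": "new_static",  "roi": (300, 300, 200, 150)},
    {"name": "object_missing", "cls": "gone_static", "roi": (50, 350, 150, 100)},
]

def in_roi(bbox, roi):
    """True if the bbox center falls inside the rule's region of interest."""
    x, y, w, h = bbox
    rx, ry, rw, rh = roi
    cx, cy = x + w / 2, y + h / 2
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def evaluate(detections):
    """detections: list of (cls, bbox). Every rule is checked on every frame."""
    return [r["name"] for r in RULES
            for cls, bbox in detections
            if cls == r["cls"] and in_roi(bbox, r["roi"])]
```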

Also, once our rules are set up, our system keeps detecting even while you are changing or adding new rules. We were surprised to see that the IOimage system stops detecting while you are setting up rules. It’s not a big issue, but being able to detect a number of different things at the same time matters more, and that’s very useful.
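Keeping detection live while rules are edited is a standard pattern: the detection loop reads an immutable snapshot of the rule set on each frame, and the editor swaps in a new set atomically. A sketch of that pattern, not either vendor's code:

```python
import threading

_rules_lock = threading.Lock()
_rules = ({"name": "person_entry"},)      # current snapshot: a tuple

def detection_loop_step(frame, evaluate):
    """Runs every frame; never sees a half-edited rule set."""
    rules = _rules                        # grab this frame's snapshot
    return [r["name"] for r in rules if evaluate(r, frame)]

def apply_rule_edit(new_rules):
    """Swap the whole rule set at once; detection never pauses."""
    global _rules
    with _rules_lock:                     # serialize concurrent editors
        _rules = tuple(new_rules)
```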

It wasn’t much of a detailed competitive test, as I said. Both systems use high-quality video analytics, not merely advanced video motion detection. And the results from both were pretty good.

And of course, this would be more objective if someone else were reporting it. However, it is so hard to find any comparison testing that I thought I would share it, anyway."
