Runs multiple computer vision algorithms on different video
sources, all in one platform at the same time.
Tracks processing demands in real time and scales server usage accordingly.
Uses off-the-shelf, inexpensive smart cameras.
Our algorithms anonymise personal data on the camera, allowing
it to be transferred to the cloud for processing.
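One common way to anonymise on the device is to pixelate the frame (or a detected region) so only coarse blocks, with no identifiable detail, ever leave the camera. The sketch below illustrates this idea only; the block size and the frame representation (nested lists of grey values) are assumptions, not the platform's actual method.

```python
# Illustrative anonymisation sketch: collapse each block of pixels to its
# mean value, so fine detail (faces, clothing) is destroyed before upload.
# Frame representation (lists of grey values) and block size are assumed.

def pixelate(frame, block=4):
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            mean = sum(frame[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

# A tiny 8x8 test frame with a smooth gradient of grey values.
frame = [[x + y for x in range(8)] for y in range(8)]
print(pixelate(frame, block=4)[0])  # [3, 3, 3, 3, 7, 7, 7, 7]
```

The coarse output still carries enough structure for counting-style algorithms while discarding personally identifiable detail.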
Camera management platform.
Setup, configuration and management of cameras by the customer.
Computer vision platform.
Recognise the correct type of object in view.
e.g. a person, airplane or lorry.
Follow the path of each detected object across the camera view.
Identify an object performing a key event, such as crossing a
count line, taking off or stopping.
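As a concrete illustration of the "crossing a count line" event, the sketch below flags a crossing when a tracked object's position moves from one side of the line to the other (a sign change of the 2D cross product). This is a hypothetical example, not the platform's actual detection code.

```python
# Illustrative sketch: detect when a tracked object's path crosses a count
# line defined by two endpoints. A crossing is a sign change of the 2D
# cross product between the line direction and the object's position.

def side(line_a, line_b, point):
    """Signed area test: which side of the line the point lies on."""
    ax, ay = line_a
    bx, by = line_b
    px, py = point
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed(line_a, line_b, prev_pos, curr_pos):
    """True if the object moved from one side of the count line to the other."""
    return side(line_a, line_b, prev_pos) * side(line_a, line_b, curr_pos) < 0

# Example: a person walking downward across a horizontal count line at y=100.
line = ((0, 100), (640, 100))
print(crossed(*line, (320, 90), (320, 110)))  # True: crossed the line
print(crossed(*line, (320, 90), (330, 95)))   # False: stayed on one side
```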
Distributed state store
Event metadata cache shared across the server network in real time. The platform autoscales based on demand.
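The sketch below models the role of the state store with a thread-safe in-memory cache of per-camera event metadata. It is a stand-in only: in a real deployment this would be a networked store shared across servers, and the class and method names here are invented for the example.

```python
import threading
import time

# Illustrative stand-in for the distributed state store: a thread-safe
# cache of per-camera event metadata that can be aggregated in real time.

class EventMetadataCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # camera_id -> list of (timestamp, event_type)

    def record(self, camera_id, event_type, timestamp=None):
        with self._lock:
            self._events.setdefault(camera_id, []).append(
                (timestamp if timestamp is not None else time.time(), event_type))

    def count_since(self, cutoff, event_type=None):
        """Aggregate event counts across all cameras since a cutoff time."""
        with self._lock:
            return sum(
                1
                for events in self._events.values()
                for ts, etype in events
                if ts >= cutoff and (event_type is None or etype == event_type))

cache = EventMetadataCache()
cache.record("cam-1", "entry", timestamp=100.0)
cache.record("cam-2", "entry", timestamp=150.0)
cache.record("cam-2", "exit", timestamp=160.0)
print(cache.count_since(120.0, "entry"))  # 1
print(cache.count_since(0.0))             # 3
```

Because all servers read from the same cache, figures such as "people in the building right now" can be computed across every camera stream at once.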
Event detection can trigger any kind of output.
This technology is used to tell (for instance):
There are 630 people in the building right now.
48 people have exited the building in the last hour.
221 people have walked past the building today.
Email, SMS, WhatsApp, etc.
API trigger of another system.
Charting / trends and data comparison.
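The outputs above can be pictured as a small event router: a detection event comes in, and every registered output channel (messaging, API trigger, charting) receives it. The sketch below is hypothetical; the `Event` shape and channel stand-ins are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch: fan a detection event out to multiple output channels.

@dataclass
class Event:
    camera_id: str
    kind: str       # e.g. "count_line_crossed"
    payload: dict

class OutputRouter:
    def __init__(self):
        self._handlers = []

    def on_event(self, handler):
        """Register an output channel; usable as a decorator."""
        self._handlers.append(handler)
        return handler

    def dispatch(self, event):
        for handler in self._handlers:
            handler(event)

router = OutputRouter()
sent = []

@router.on_event
def send_sms(event):   # stand-in for an SMS/WhatsApp integration
    sent.append(f"SMS: {event.kind} on {event.camera_id}")

@router.on_event
def call_api(event):   # stand-in for an HTTP trigger of another system
    sent.append(f"POST /events {event.payload}")

router.dispatch(Event("cam-1", "count_line_crossed", {"direction": "in"}))
print(sent)  # both channels received the event
```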
It's a revolutionary new approach.
Our platform is radically different from existing solutions.
Using low-cost, off-the-shelf cameras
Ultra-scalable distributed cloud system
Multiple algorithms on one platform
Real time aggregation of multiple camera streams
Self install for rapid up/down scaling
Use on any object (not just people)
Existing people-counting systems use 3D sensors. These
devices have become established by being accurate in
real-world situations with varied lighting conditions,
installation heights and angles, and environmental factors
such as reflective floors.
Processing of video from these sensors is performed on the
device as a linear stream. Real-time aggregation of people-count
data is not possible, limiting the range of application.
While the sensing algorithms are highly accurate, each
sensor is limited to running just one algorithm, again
limiting the diversity of application.
They are also restricted to counting people.
Our algorithms anonymise personal data on the camera,
allowing it to be transferred to the cloud for processing.
They have been trained to accurately count pedestrians,
giving customers a self-configurable, cloud-based computer
vision system. For the purposes of footfall and occupancy
analytics our system is as accurate as competitors'
technology, with the advantage of being lower cost and more scalable.
Importantly, our technology is in no way limited to just
people counting. The underlying infrastructure of real-time
multi-threaded cloud processing, metadata caching, multiple
algorithms and rapid scalability can be applied to myriad
other physical events. The algorithms are designed to be
re-trained to count airplane landings, boxes on a conveyor
line, hard hats on a building site or whatever type of
visible 'thing' a business might want to report on.
The limitations of existing technologies:
High energy consumption.
Our technology delivers the same counting accuracy with the
additional benefits of:
Can be trained to count almost any type of physical object.
Using off-the-shelf (inexpensive) cameras.
Self installation by customers.
Efficient resource allocation with distributed cloud processing.
Real time data reporting across multiple camera streams.
We are building:
A distributed system.
That has high detection accuracy.
With anonymised video.
Maintaining per-video metadata.
Processing video chunks in real time.
That works at scale.
Allowing the use of simple, off-the-shelf cameras offers
huge potential benefits over existing state-of-the-art
technologies with dedicated cameras. Cost reductions and
ease of installation make customer take-up much lower risk.
However, using less 'powerful' cameras means very limited
processing can be done on the device. The simple solution
is to upload and process each camera stream individually,
but this has a number of limitations and in particular is
an inefficient use of server resources
(less than 30% efficiency).
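A toy cost model shows why per-stream processing wastes server capacity: if every inference call carries a fixed overhead, processing N streams individually pays that overhead N times, while batching frames from many streams into one call pays it once. All numbers below are invented for illustration, not measurements of our system.

```python
# Toy cost model (all figures assumed for illustration): each inference
# call pays a fixed overhead plus a per-frame cost.

FIXED_OVERHEAD_MS = 20.0   # dispatch/model-invocation cost per call (assumed)
PER_FRAME_MS = 5.0         # marginal cost per frame (assumed)

def per_stream_cost(n_streams):
    """Each stream processed in its own call: overhead paid n times."""
    return n_streams * (FIXED_OVERHEAD_MS + PER_FRAME_MS)

def batched_cost(n_streams):
    """One frame from each stream batched into a single call."""
    return FIXED_OVERHEAD_MS + n_streams * PER_FRAME_MS

n = 16
print(per_stream_cost(n))                     # 400.0 ms
print(batched_cost(n))                        # 100.0 ms
print(batched_cost(n) / per_stream_cost(n))   # 0.25 -> 4x better utilisation
```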
The challenges in distributing video for processing are:
Maintaining a shared cache across all cameras and videos.
Being robust against delays, gaps and detection inaccuracies in the incoming video streams.
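One way to picture the robustness requirement: chunks from a camera may arrive late or out of order, so an aggregator has to buffer them, release them in sequence, and give up on chunks that never arrive. The sketch below is a hypothetical mechanism, not the platform's actual implementation; the class name and gap-tolerance policy are assumptions.

```python
import heapq

# Hypothetical sketch: buffer out-of-order video-chunk metadata and release
# it in sequence order, skipping chunks once a gap tolerance is exceeded.

class ChunkReorderer:
    def __init__(self, gap_tolerance=3):
        self._heap = []          # min-heap of (sequence_number, metadata)
        self._next_seq = 0
        self._gap_tolerance = gap_tolerance

    def push(self, seq, metadata):
        heapq.heappush(self._heap, (seq, metadata))

    def drain(self):
        """Yield buffered chunks in order; treat a chunk as lost if the
        buffer has run too far past it."""
        released = []
        while self._heap:
            seq, meta = self._heap[0]
            if seq == self._next_seq:
                heapq.heappop(self._heap)
                released.append((seq, meta))
                self._next_seq += 1
            elif seq - self._next_seq >= self._gap_tolerance:
                self._next_seq = seq   # give up on the missing chunk(s)
            else:
                break                  # wait for the missing chunk
        return released

r = ChunkReorderer(gap_tolerance=3)
r.push(1, "b"); r.push(0, "a")         # chunk 0 arrives after chunk 1
print(r.drain())   # [(0, 'a'), (1, 'b')] -> reordered correctly
r.push(5, "f")                         # chunks 2-4 delayed or lost
print(r.drain())   # [(5, 'f')] -> gap exceeded tolerance, 2-4 skipped
```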