Bristol-based Kudan develops an AR engine for 3D computer vision applications in drones and the Internet of Things without the need for specialist sensors. The technology works with any 2D camera sensor and is platform- and camera-independent, so it can be easily ported to embedded modules.
“Our strategy is not to be dependent on anything,” said Tomo Ohno, founder and managing director. “We don’t want to do something that’s dependent on iOS or Android.”
The AR engine, called KudanCV, does not require specific processor hardware such as graphics processing units (GPUs), and can run on any processor with an ARM Cortex-M0+ core upwards, says John Williams, chief technology officer at Kudan. “We are not limited to any particular form of computer vision,” he said. “We handle detection and tracking, 3D depth perception, finding and tracking pre-recognised images with an unlimited number of points.” KudanCV also handles most operations internally without having to go to the operating system or the cloud.
This can be used for applications such as simultaneous localisation and mapping (SLAM) to provide position data when navigation systems such as GPS don’t work, for example in drones or robot vacuum cleaners.
This also opens the technology up to ARM-based embedded modules. “Jigsaw is one of our partners in Japan and it’s a data management company for IoT, compressing data and encrypting robust connections between devices,” said Ohno. “They acquired Mobicom in Japan that are an embedded software specialist to create the IoT on a chip.” Mobicom works with companies such as Softbank, Altair Semiconductor and Oki to produce LTE, Wi-Fi and other connected modules for IoT applications.
“Our strategy is to be somewhere in the stack for everything,” he said. “It’s a Jig-Saw business, and we become part of their hardware, but we will license the technology to anyone who wants to buy it.”