Want to build an AI app but don't know where to start with training? Take a Lobe off your mind with this low-code tool

Microsoft program outputs ready-to-roll model from image input set

Microsoft has built a desktop app called Lobe that can be used to train image-recognition models without having to write a single line of code. Getting that model into an application, though, will require some programming.

Crafting AI apps in the real world requires developers to understand not only inference but also training, and it can all seem overwhelming. Before your model can make decisions from arbitrary input data, it has to be trained: that involves collecting, organizing, and processing the data that will teach your model, running the training process, testing the result, and so on.

Lobe tries to take all this training and testing faff away: it doesn't require any technical know-how, and is free to use. The app, available for Windows and macOS, uses transfer learning to train off-the-shelf ResNet-50 V2 and MobileNetV2 image-recognition models on images supplied and labeled by the user. When the trained model is shown subsequent images, it can make a good guess at what the label should be.
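Lobe handles that process itself, but for a sense of what transfer learning involves, here is a minimal sketch in Python using TensorFlow's Keras API: a pretrained MobileNetV2 base is frozen and a fresh classification head is trained on your labeled images. The class count, image size, and training settings below are illustrative assumptions, not Lobe's actual internals.

```python
# A minimal transfer-learning sketch, illustrative only; Lobe's real
# pipeline is not public. Assumes TensorFlow 2.x is installed.
import tensorflow as tf

NUM_CLASSES = 4  # e.g. vulture, hawk, eagle, falcon

# Start from MobileNetV2 pretrained on ImageNet, minus its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

# Bolt a small, trainable classification head onto the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be your labeled images, e.g. loaded with
# tf.keras.utils.image_dataset_from_directory("birds/", image_size=(224, 224))
# model.fit(train_ds, epochs=5)
```

Because only the small head is trained from scratch, a useful model can come out of a few hundred labeled images rather than the millions the base network originally needed.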

For example, if you feed Lobe a set of images of birds of prey, with their correct labels, it will produce a model capable of, hopefully, identifying subsequent snaps of vultures, hawks, eagles, and falcons, provided its training included those creatures. To integrate this exported model into whatever software you're building, be it a desktop, mobile, or web app, you need to do some programming to plumb it in.

As another example, the quickest way to get playing with Lobe is to bring in images from your webcam, say of you drinking some water and not drinking, and train the model to detect either case.


To integrate your trained, exported model into a supported software framework, follow Microsoft's instructions. In short, the trained model can be converted into formats that can be loaded by Apple's CoreML app framework as well as TensorFlow and TensorFlow Lite. That means, for instance, you can write an iOS app, or use one of the given samples, and have it tell CoreML to load your model, allowing your software to make decisions from the model's outputs. On Android, you'd use TensorFlow Lite.
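As a concrete example of that plumbing, here is a minimal Python sketch that loads a TensorFlow Lite file exported from Lobe and classifies a single image. The file name, image name, and label list are assumptions for illustration, and a quantized export would expect uint8 input rather than floats.

```python
# Load a Lobe-exported TensorFlow Lite model and classify one image.
# File names and labels here are hypothetical.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="lobe_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the image to the shape the model expects and scale to [0, 1].
_, height, width, _ = inp["shape"]
image = Image.open("mystery_bird.jpg").convert("RGB").resize((width, height))
pixels = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(inp["index"], pixels)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

labels = ["vulture", "hawk", "eagle", "falcon"]  # whatever you trained on
print(labels[int(np.argmax(scores))])
```

The CoreML route on iOS is the same idea: bundle the converted model with your app, feed it pixels, and read back a label with a confidence score.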

This can all be used on end devices, such as people's phones or PCs or equipment at the network edge. All inference runs on those devices, meaning you don't have to pipe people's pictures off to the cloud for processing: it's done on-device using your Lobe-built model. Lobe doesn't reach out to any Microsoft services, we're told, and can be used without an internet connection if you're that paranoid.

“We really want to empower more people to leverage machine learning and try it for the first time,” said Jake Cohen, Lobe senior program manager.

“We want them to be able to use it in ways that they either could not before or didn’t realize they could before.”

Some suggested applications include helping beekeepers spot pests, such as mites, wasps, and hornets, entering their hives, using a motion-detecting camera, a Raspberry Pi, and a Lobe-trained app. For now, Lobe can only carry out simple image-recognition tasks, though it is hoped Microsoft will expand its capabilities in the future to cover object detection and data classification.
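A rough sketch of that beekeeping setup, assuming the model has been exported to TensorFlow Lite and OpenCV is reading the camera, might look like the following; the hive_watch.tflite file, label list, and confidence threshold are all invented for illustration.

```python
# Hypothetical hive monitor: classify camera frames on a Raspberry Pi
# with a Lobe-trained model and flag anything that isn't a bee.
import time
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="hive_watch.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

labels = ["bee", "mite", "wasp", "hornet"]  # assumed training labels
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # OpenCV delivers BGR; convert to RGB, resize, and scale to [0, 1].
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    pixels = np.expand_dims(rgb.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], pixels)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    label = labels[int(np.argmax(scores))]
    if label != "bee" and scores.max() > 0.8:
        print(f"Possible {label} at the hive entrance")
    time.sleep(1)  # one frame a second is plenty for a hive door

camera.release()
```

And because inference happens entirely on the Pi, none of the hive footage ever leaves the device. ®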
