Object Detection


With an Object Detection model, you can identify objects of interest in a still image or in each frame of live video. Each prediction returns a set of detected objects, each with a class label, a bounding box, and a confidence score.
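The shape of each result can be sketched with a small, hypothetical container. The `Detection` type and `filter_detections` helper below are illustrative only, not the SDK's actual types; they show the label/box/confidence triple each prediction carries and a typical confidence filter applied on top:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """Illustrative stand-in for one detected object."""
    label: str
    # (x_min, y_min, x_max, y_max), normalized to [0, 1]
    box: Tuple[float, float, float, float]
    confidence: float

def filter_detections(detections: List[Detection],
                      threshold: float = 0.5) -> List[Detection]:
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d.confidence >= threshold]

predictions = [
    Detection("person", (0.1, 0.2, 0.4, 0.9), 0.92),
    Detection("dog", (0.5, 0.6, 0.8, 0.95), 0.31),
]
print([d.label for d in filter_detections(predictions)])  # ['person']
```

In practice you would tune the threshold per use case: a higher threshold trades recall for fewer false positives.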

If you just need to know the contents of an image – not the location of the objects – consider using Image Labeling instead.

Training Custom Models for Object Detection

You can train a custom model that is compatible with the Object Detection API by following the quickstart guide: Use Fritz AI Studio to Train a Custom Model.

Technical Specifications

Architecture: SSDLite + MobileNet V2 variant
Format(s): Core ML (iOS), TensorFlow Lite (Android)
Model Size: ~17 MB
Input: 300x300-pixel image
Output: Offsets for >2,000 candidate bounding boxes; a class label and confidence score for each box
Benchmarks: 18 FPS on iPhone X, 8 FPS on Pixel 2
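Note that the raw model output is not final boxes: a single-shot detector emits offsets relative to a fixed set of anchor (default) boxes, which are decoded into coordinates in a post-processing step. A minimal sketch of that standard decode, assuming the TensorFlow Object Detection API's common box-coder scale factors of 10 for centers and 5 for sizes (check your model's actual configuration):

```python
import math

def decode_box(offsets, anchor, scale_xy=10.0, scale_wh=5.0):
    """Decode one set of SSD offsets (ty, tx, th, tw) against an
    anchor given as (cy, cx, h, w), all in normalized coordinates.
    Returns corner coordinates (y_min, x_min, y_max, x_max)."""
    ty, tx, th, tw = offsets
    acy, acx, ah, aw = anchor
    # Shift the anchor center by the predicted offsets.
    cy = ty / scale_xy * ah + acy
    cx = tx / scale_xy * aw + acx
    # Scale the anchor size by the exponentiated offsets.
    h = math.exp(th / scale_wh) * ah
    w = math.exp(tw / scale_wh) * aw
    return (cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2)

# Zero offsets recover the anchor itself.
print(decode_box((0.0, 0.0, 0.0, 0.0), (0.5, 0.5, 0.2, 0.2)))
```

On Android the TensorFlow Lite SSD post-processing op typically performs this decode (plus non-maximum suppression) inside the model; on iOS the SDK handles it when the output layers follow the naming convention in the checklist below.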

Custom Model Compatibility Checklist

If you have a custom model that was trained outside of Fritz AI, follow this checklist to make sure it will be compatible with the Object Detection API.

  1. Your model must be a single-shot multibox detector with boxes matching the default configuration found here.
  2. Your model must be in the TensorFlow Lite (.tflite) or Core ML (.mlmodel) formats.
  3. iOS Only The input layer must be named Preprocessor/sub:0, and the two output layers must be named concat:0 (boxPredictions) and concat_1:0 (classPredictions).
  4. Android Only The single input layer (Preprocessor/sub) and the four output layers (outputLocations, outputClasses, outputScores, numDetections) should be specified in the TensorFlow Lite conversion tool.
  5. The input should have the following dimensions: 1x300x300x3 (batch_size x height x width x num_channels). Height and width are configurable.
  6. iOS Only The output should have the following dimensions: 4 (box points) x num_anchor_boxes x 1 for boxPredictions and num_classes x 1 for classPredictions.
  7. Android Only The output should have the following dimensions: 1 x num_anchor_boxes x 4 (box points) for outputLocations, num_classes x 1 for outputClasses and outputScores, and 1 for numDetections.
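The shape requirements in steps 5-7 above can be tabulated in a small sketch, which is handy when inspecting a converted model's tensors. The `expected_shapes` helper and the anchor/class counts passed to it are hypothetical; substitute the values from your own model:

```python
def expected_shapes(num_anchor_boxes, num_classes, height=300, width=300):
    """Summarize the tensor shapes the compatibility checklist requires,
    keyed by platform. Values mirror steps 5-7 of the checklist."""
    return {
        # batch_size x height x width x num_channels
        "input": (1, height, width, 3),
        "ios": {
            "boxPredictions": (4, num_anchor_boxes, 1),
            "classPredictions": (num_classes, 1),
        },
        "android": {
            "outputLocations": (1, num_anchor_boxes, 4),
            "outputClasses": (num_classes, 1),
            "outputScores": (num_classes, 1),
            "numDetections": (1,),
        },
    }

# Example with placeholder counts (not the shipped model's exact values).
shapes = expected_shapes(num_anchor_boxes=1917, num_classes=91)
print(shapes["input"])
print(shapes["android"]["outputLocations"])
```

Comparing these against the shapes reported by your conversion tool (or by inspecting the .tflite/.mlmodel file) is a quick way to catch a mismatched export before deploying.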