B.1 · Sensing & Vision

AI vision classification.

YOLO-based vision classification software that captures and classifies the material a consumer deposits (glass, PET, or metal) while it moves along the conveyor. Trained on 100,000+ real field images; classification completes in around 100 ms. Runs on the manufacturer's existing processing platform with low resource consumption.

~100ms
Conveyor classification
100k+
Field-data training set
YOLOv8
Lightweight runtime
Overview

The "eyes" of the manufacturer's machine.

Our software recognizes the material a consumer deposits into the manufacturer's RVM while it is moving on the conveyor. The classification result (glass / PET / metal plus a confidence score) is delivered to the manufacturer's software, which makes the accept/reject decision and routes the item to the correct compactor.

Built on Ultralytics YOLOv8 (PyTorch). Despite the broad training set, the optimized model remains compact and performs real-time classification with low resource consumption.

Pipeline

Real-time classification in three stages.

01 · Capture

Visual data acquisition

Image acquisition with an industrial camera supported by transillumination (backlighting). Reliably distinguishes transparent glass from colored plastic; resilient to shadows and lighting variability.

  • Controlled illumination cabinet
  • High-speed image capture
  • Tolerant of dust and stains
02 · Inference
~100ms · lightweight

YOLOv8 inference

Our model runs on Ultralytics YOLOv8 (PyTorch) and classifies a moving bottle on the conveyor in approximately 100 ms. Despite the broad training set, the optimized model remains compact and consumes minimal resources on the manufacturer's existing vision platform. No cloud dependency.

  • Edge inference: no network latency, operates offline
  • 3 classes: glass / PET / metal + confidence score
  • Low CPU / GPU / memory footprint
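The decision step described above can be sketched in a few lines. This is an illustrative example only: the class names match the document, but the function names, score normalization, and the 0.80 reject threshold are assumptions, not the product's actual API.

```python
# Hypothetical post-inference decision step: the model emits per-class
# scores; the top class and its confidence are handed to the RVM
# software, which routes the item or rejects it.

CLASSES = ("glass", "PET", "metal")
REJECT_THRESHOLD = 0.80  # assumed value for illustration


def classify(scores: list[float]) -> dict:
    """Map raw per-class scores to a class + confidence payload."""
    total = sum(scores)
    probs = [s / total for s in scores]
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return {"class": CLASSES[best], "confidence": round(probs[best], 3)}


def route(result: dict) -> str:
    """Accept and route to the matching compactor, or reject."""
    if result["confidence"] < REJECT_THRESHOLD:
        return "reject"
    return f"compactor:{result['class']}"
```

A low-confidence result falls through to "reject", which is what lets the same payload drive both the accept/reject and the compactor-routing decisions.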
03 · Learning

Continuous model updates

Our model has been trained on 100,000+ real field images, and the training set keeps growing. New versions are distributed to manufacturers over the air (OTA); no field visit required.

  • Semi-automated labeling with human-in-the-loop
  • Monthly model improvements
  • Manufacturer-specific fleet rollout
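A per-fleet OTA check like the one described above might look as follows. This is a minimal sketch under assumed names (`needs_update`, the published-versions mapping); the actual update protocol is covered by the NDA datasheet.

```python
# Illustrative per-fleet OTA version check: each machine compares its
# installed model version against the version published for its
# manufacturer's fleet and pulls the new model only if it is newer.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '1.5.2' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))


def needs_update(installed: str, published: dict, fleet: str) -> bool:
    """True if the fleet's published model is newer than the installed one."""
    target = published.get(fleet)
    if target is None:
        return False  # no model published for this fleet yet
    return parse_version(target) > parse_version(installed)
```

Keying the published versions by fleet is what allows manufacturer-specific rollout: two fleets can safely run different model versions at the same time.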
Capabilities

Classification proven under real field conditions.

  • 01

    Edge inference — no cloud

    All classification happens on the device. The machine keeps running even if the network goes down; consumer data never leaves the device.

  • 02

    Trained on 100k+ field images

    Our model is trained not on lab images but on 100,000+ images of materials actually dropped into RVMs by real consumers. The training set keeps growing with each release.

  • 03

    Robust to damaged / dirty material

    High recognition rate on items with peeled labels, partial deformation, or light contamination; reliable classification of transparent containers via transillumination.

  • 04

    OTA model deployment

    New model versions are distributed remotely to the manufacturer's fleet. No field visit, USB update, or restart required.

Typical application scenarios

Which RVM model / which need is it suitable for?

Mixed material

RVMs accepting glass + PET + metal

For machines that handle all three material types simultaneously, our software makes the decision that routes each item to the correct compactor.

Verification layer

Barcode + AI dual verification

Cross-validates the 360° barcode reader's result against the visual classification, blocking erroneous or fraudulent acceptances.
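The dual-verification idea can be sketched as a simple agreement check. The function and field names, and the 0.85 confidence threshold, are illustrative assumptions, not the documented interface.

```python
# Hedged sketch of barcode + AI dual verification: accept only when the
# material implied by the barcode lookup agrees with the vision class
# at sufficient confidence.

MIN_CONFIDENCE = 0.85  # assumed threshold for illustration


def cross_validate(barcode_material: str, vision: dict) -> bool:
    """Accept only if barcode and vision agree and confidence is high."""
    return (
        vision["confidence"] >= MIN_CONFIDENCE
        and vision["class"] == barcode_material
    )
```

A mismatch (e.g., a PET barcode glued onto a glass bottle) fails the check even at high confidence, which is the fraud case this layer exists to catch.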

Quality control

Damaged / foreign object detection

Recognizes objects that should not be accepted (e.g., old/shattered bottles, foreign waste) and protects the machine.

Market alignment

Localized model variants for EU DRS markets

Custom model variants tuned to local packaging diversity in Germany, the Netherlands, the UK, Ireland, and Bulgaria.

Integration

Delivered as software — runs on the manufacturer's existing processor.

Runtime

Built on Ultralytics YOLOv8 (PyTorch) and integrated into the manufacturer's existing vision processing platform; its low resource consumption enables real-time classification.

Software

Delivered to the manufacturer as licensed software (model + inference runtime). Standard call interface: send an image, receive class + confidence score + timestamp. No cloud dependency.
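The "send an image, receive class + confidence + timestamp" call could produce a response along these lines. The payload field names are assumptions inferred from the description above; the real message protocol is detailed in the NDA datasheet.

```python
# Minimal sketch of the response side of the call interface: serialize
# the classification result for the manufacturer's software.
import json
from datetime import datetime, timezone


def build_response(cls: str, confidence: float) -> str:
    """Serialize a classification result as a JSON message."""
    payload = {
        "class": cls,                       # "glass" | "PET" | "metal"
        "confidence": round(confidence, 3),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

Keeping the response a flat, self-describing JSON object makes it easy for the manufacturer's software to consume without sharing any code with the inference runtime.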

Lifecycle

New model versions are distributed OTA on a per-fleet basis. The manufacturer rolls our updates out to its machines; no field visit required.

After NDA

Performance data and integration document.

Recognition accuracy rates, inference times, model versioning, OTA architecture, message protocol details and resource consumption profile are shared with NDA-signed manufacturers via a product-specific datasheet.

Get in touch

Request a quote for AI vision classification.

Describe your need, and our technical team will start the NDA process and share a product-specific datasheet and integration document with you.

Tell us your component need