EloquentTinyML library for Arduino
An Arduino library to make TensorFlow Lite for Microcontrollers neural networks more accessible and easy to use

Installation
EloquentTinyML is available from the Arduino IDE Library Manager.
Be sure to install version 2.4 or above. Alternatively, you can clone the repo from GitHub:
git clone https://github.com/eloquentarduino/EloquentTinyML.git
Classification example
Using the EloquentTinyML library is straightforward: you instantiate the network, load a model exported from TensorFlow in Python, and call predictClass() to classify your input.
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow.h"
#include "iris_model.h"
#define IN 4
#define OUT 3
#define ARENA 1024
Eloquent::TinyML::TensorFlow::TensorFlow<IN, OUT, ARENA> tf;
void setup() {
Serial.begin(115200);
tf.begin(iris_model);
}
void loop() {
float input[] = {5.1, 3.5, 1.4, 0.2};
Serial.print("Predicted class: ");
Serial.println(tf.predictClass(input));
delay(1000);
}
In this sketch, iris_model.h contains the TensorFlow Lite model exported from Python.
Don't know how to export a TensorFlow neural network to a C file? Read the tinymlgen documentation.
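With tinymlgen, the export boils down to a single call. Here is a minimal sketch, assuming you have a trained tf.keras model and tinymlgen installed (check the tinymlgen documentation for the authoritative API):

```python
# Sketch: export a trained Keras model to a C header with tinymlgen.
# `model` is assumed to be a trained tf.keras model you built elsewhere.
from tinymlgen import port

c_code = port(model, variable_name='iris_model', pretty_print=True)

# save next to your sketch as iris_model.h
with open('iris_model.h', 'w') as f:
    f.write(c_code)
```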
You can access the raw prediction scores by calling getScoreAt().
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow.h"
#include "iris_model.h"
#define IN 4
#define OUT 3
#define ARENA 1024
Eloquent::TinyML::TensorFlow::TensorFlow<IN, OUT, ARENA> tf;
void setup() {
Serial.begin(115200);
tf.begin(iris_model);
}
void loop() {
float input[] = {5.1, 3.5, 1.4, 0.2};
tf.predictClass(input);
Serial.print("Setosa score: ");
Serial.println(tf.getScoreAt(0));
Serial.print("Versicolor score: ");
Serial.println(tf.getScoreAt(1));
Serial.print("Virginica score: ");
Serial.println(tf.getScoreAt(2));
delay(1000);
}
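Under the hood, predictClass() simply returns the index of the class with the highest score. The logic can be sketched in plain C++ (a sketch of the expected behavior, not the library's actual implementation):

```cpp
#include <cstddef>

// Return the index of the highest score, mirroring what predictClass()
// is described to do. Sketch only: the library's internals may differ.
size_t argmaxClass(const float *scores, size_t numClasses) {
    size_t best = 0;
    for (size_t i = 1; i < numClasses; i++) {
        if (scores[i] > scores[best]) {
            best = i;
        }
    }
    return best;
}
```

For example, with scores {0.02, 0.90, 0.08} the function returns 1 (Versicolor).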

Regression example
Performing regression is as easy as classification: you call predict() instead of predictClass().
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow.h"
#include "sine_model.h"
#define IN 1
#define OUT 1
#define ARENA 2*1024
Eloquent::TinyML::TensorFlow::TensorFlow<IN, OUT, ARENA> tf;
void setup() {
Serial.begin(115200);
tf.begin(sine_model);
}
void loop() {
float x = 3.14 * random(100) / 100;
float input[1] = { x };
float y_true = sin(x);
float y_pred = tf.predict(input);
Serial.print("sin(");
Serial.print(x);
Serial.print(") = ");
Serial.print(y_true);
Serial.print("\t predicted: ");
Serial.println(y_pred);
delay(1000);
}
Advanced use
The above examples are great to get started when you just want to experiment and try out a few different network configurations.
When deploying to production, you need to dig deeper.
Load custom ops
The Eloquent::TinyML::TensorFlow::TensorFlow class is a convenient shortcut to instantiate a TensorFlow Lite neural network with all available operations loaded.
When running huge models or on low-resource budgets, however, every single byte of RAM counts and you don't want to waste space by loading unused operations.
In these cases, you want to use the Eloquent::TinyML::TensorFlow::MutableTensorFlow class instead, and manually load the operations your network requires before calling begin().
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow.h"
#include "iris_model.h"
#define IN 4
#define OUT 3
#define ARENA 1024
Eloquent::TinyML::TensorFlow::MutableTensorFlow<IN, OUT, ARENA> tf;
void setup() {
Serial.begin(115200);
// let's pretend you trained a CNN
tf.addConv2D();
tf.addDepthwiseConv2D();
tf.addAveragePool2D();
tf.begin(iris_model);
}
void loop() {
float input[] = {5.1, 3.5, 1.4, 0.2};
Serial.print("Predicted class: ");
Serial.println(tf.predictClass(input));
delay(1000);
}
Exception handling
When deploying to production you should always check if your model loaded correctly before asking for predictions.
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow.h"
#include "iris_model.h"
#define IN 4
#define OUT 3
#define ARENA 1024
using namespace Eloquent::TinyML::TensorFlow;
TensorFlow<IN, OUT, ARENA> tf;
void setup() {
Serial.begin(115200);
tf.begin(iris_model);
if (!tf.isOk()) {
// handle model failure
Serial.print("Model failed to load: ");
Serial.println(tf.getErrorMessage());
switch (tf.getError()) {
case TensorFlowError::VERSION_MISMATCH:
// Python version differs from C++ version
break;
case TensorFlowError::CANNOT_ALLOCATE_TENSORS:
// either ARENA size is not enough to allocate the operations
// or, if you used MutableTensorFlow, you didn't add all the
// required operations
break;
}
while (true) delay(1000);
}
}
void loop() {
float input[] = {5.1, 3.5, 1.4, 0.2};
uint8_t classIdx = tf.predictClass(input);
// check if inference was fine
if (!tf.isOk()) {
// handle inference failure
Serial.print("Model failed to make inference: ");
Serial.println(tf.getErrorMessage());
switch (tf.getError()) {
case TensorFlowError::INVOKE_ERROR:
// the interpreter returned an error
break;
}
}
}
Custom models
EloquentTinyML comes with some pre-loaded custom models to perform common tasks.
Person detection
Person detection on microcontrollers leverages state-of-the-art neural network models trained with TensorFlow.
EloquentTinyML makes it easy to use those models.
#include "EloquentTinyML.h"
#include "eloquent_tinyml/tensorflow/person_detection.h"
const uint16_t imageWidth = 320;
const uint16_t imageHeight = 240;
Eloquent::TinyML::TensorFlow::PersonDetection detector;
void setup() {
Serial.begin(115200);
delay(5000);
// this will depend on the specific camera you use
// it has to configure the camera so that frames can be read
initCamera();
// person score ranges from 0 (100% sure no person is detected) to 255 (100% sure person is detected)
// setting a higher threshold reduces the chances of false positives
detector.setDetectionAbsoluteThreshold(190);
detector.begin();
// abort if an error occurred
if (!detector.isOk()) {
Serial.print("Setup error: ");
Serial.println(detector.getErrorMessage());
while (true) delay(1000);
}
}
void loop() {
// this is also camera-specific
// It must return a raw frame
uint8_t *frame = captureFrame();
bool isPersonInFrame = detector.detectPerson(frame);
if (!detector.isOk()) {
Serial.print("Loop error: ");
Serial.println(detector.getErrorMessage());
delay(10000);
return;
}
Serial.println(isPersonInFrame ? "Person detected" : "No person detected");
delay(1000);
}
Sensitivity setting
You can tweak the sensitivity of the detection to match your project's requirements.
The output of the model is made of 2 scores:
- person_score, from 0 to 255
- not_person_score, from 0 to 255
These 2 scores are independent of each other and don't sum to 255.
By default, the detector reports a person every time person_score is greater than not_person_score, even if only by 1.
This default behavior may lead to a high false positive rate, so you may want to adjust the decision function to mitigate this problem.
There are 3 methods you can use.
// detect a person only if person_score > not_person_score AND
// person_score >= 190
// the higher this value, the lower the false positive rate
// if set too high, you may miss true positives!
detector.setDetectionAbsoluteThreshold(190);
// detect a person only if person_score > not_person_score AND
// person_score >= not_person_score + 50
// the higher this value, the lower the false positive rate
// if set too high, you may miss true positives!
detector.setDetectionDifferenceThreshold(50);
// detect a person only if person_score > not_person_score AND
// person_score >= not_person_score * 1.4
// the higher this value, the lower the false positive rate
// if set too high, you may miss true positives!
detector.setDetectionRelativeThreshold(1.4);
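The three decision rules can be sketched in plain C++ to make the comparison concrete (hypothetical helper names; this is not the library source):

```cpp
#include <cstdint>

// Sketch of the three detection rules described above.
// person / notPerson are the raw 0-255 scores.

// absolute threshold: person must also reach a fixed score
bool detectAbsolute(uint8_t person, uint8_t notPerson, uint8_t threshold) {
    return person > notPerson && person >= threshold;
}

// difference threshold: person must beat notPerson by a fixed margin
bool detectDifference(uint8_t person, uint8_t notPerson, uint8_t minDiff) {
    return person > notPerson && person >= notPerson + minDiff;
}

// relative threshold: person must beat notPerson by a fixed ratio
bool detectRelative(uint8_t person, uint8_t notPerson, float ratio) {
    return person > notPerson && person >= notPerson * ratio;
}
```

With person_score = 200 and not_person_score = 100, all three rules above (190, 50, 1.4) report a person; with 150 vs 140, none of them does.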
Camera configuration
The TensorFlow::PersonDetection class is camera-agnostic: it works with any camera model that can output raw image frames (e.g. cameras from OmniVision and Himax).
You only need to write an adapter for your specific camera, in the form of 2 functions:
- initCamera() will set up and configure the camera
- captureFrame() will return the captured frame
Each camera has its own instructions to perform these tasks, but in general you can leverage the board's built-in code to do so.
Here are a couple of examples: one for the Arduino Portenta Vision Shield and one for the ESP32 camera.
#include "camera.h"
CameraClass cam;
uint8_t frame[320*240];
/**
* Configure camera
*/
void initCamera() {
cam.begin(CAMERA_R320x240, 30);
}
/**
* Capture frame from Vision shield
*/
uint8_t* captureFrame() {
cam.grab(frame);
return frame;
}
// https://github.com/espressif/arduino-esp32/blob/master/libraries/ESP32/examples/Camera/CameraWebServer/camera_pins.h
#define CAMERA_MODEL_AI_THINKER
#include "esp_camera.h"
#include "camera_pins.h"
/**
* Configure camera
*/
void initCamera() {
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sscb_sda = SIOD_GPIO_NUM;
config.pin_sscb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_GRAYSCALE;
config.frame_size = FRAMESIZE_QVGA;
config.fb_count = 1;
esp_camera_init(&config);
sensor_t *sensor = esp_camera_sensor_get();
sensor->set_framesize(sensor, FRAMESIZE_QVGA);
}
/**
 * Capture frame from the ESP32 camera
 */
uint8_t* captureFrame() {
    // NOTE: in a long-running sketch you should release the frame buffer
    // with esp_camera_fb_return() once you're done with it
    return esp_camera_fb_get()->buf;
}