Google's runtime library for executing trained TensorFlow models on microcontrollers. Originally released as TensorFlow Lite Micro (TFLM); rebranded LiteRT for Microcontrollers in 2024. Provides a kernel set optimized for Cortex-M (via CMSIS-NN), Tensilica HiFi, ARC, and RISC-V targets, with vendor backends for Arm Ethos-U, NXP eIQ, and ST X-CUBE-AI.
Pilot for MCU-resident inference (audio classification, IMU gesture recognition, vibration anomaly detection); benchmark with int8 quantization and operator fusion enabled; pair with Edge Impulse for the surrounding data-collection and training workflow.
Pairs with #11 Edge ML Lifecycle Management practice and the Edge Impulse tool. Alternative runtimes include ONNX Runtime, microTVM, and vendor stacks (NXP eIQ, ST X-CUBE-AI) — TFLM/LiteRT remains the broadest by community size.