Sensor data is typically preprocessed with DSP in TinyML applications. As engineers deploy neural networks on ever smaller processors, it becomes necessary to tune DSP algorithms to fit within RAM or real-time processing constraints. But not all steps in a DSP pipeline are created equal! Knowing where to look for sections to slim down can mean the difference between giving up a few percent of accuracy and ending up with a model that's no longer usable.
This presentation will show experimentation with DSP parameter choices (number of cepstral coefficients, spectrogram frame size, etc.) for an example keyword spotting classifier, and analyze the RAM, latency, and accuracy impacts of various scenarios. Attendees will leave with ideas on where to find elusive kilobytes of RAM and milliseconds of latency next time they need to optimize a DSP pipeline.
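To make the tradeoff concrete, here is a minimal sketch of the kind of back-of-the-envelope RAM accounting the talk describes. The function name, parameter values, and byte sizes below are illustrative assumptions, not figures from the presentation: they estimate the memory needed to hold an MFCC feature matrix for a one-second keyword window under a "full" and a "slimmed" configuration.

```python
# Hypothetical sketch: estimate the feature-matrix RAM footprint of an
# MFCC front-end for keyword spotting under different parameter choices.
# All parameter values are illustrative assumptions, not from the talk.

def mfcc_feature_bytes(audio_ms, frame_ms, stride_ms, num_cepstral,
                       bytes_per_value=4):
    """Bytes needed to hold the full feature matrix fed to the classifier."""
    # Number of analysis frames that fit in the audio window.
    num_frames = 1 + (audio_ms - frame_ms) // stride_ms
    return num_frames * num_cepstral * bytes_per_value

# A 1 s keyword window: compare a "full" and a "slimmed" configuration.
full = mfcc_feature_bytes(1000, frame_ms=32, stride_ms=16, num_cepstral=13)
slim = mfcc_feature_bytes(1000, frame_ms=64, stride_ms=32, num_cepstral=10)
print(full, slim)  # larger frames + fewer coefficients shrink the matrix
```

Note that the same knobs also move latency (fewer, larger frames mean fewer FFTs per window) and accuracy (coarser time resolution, fewer coefficients), which is exactly why the three must be analyzed together.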