MLRelated.com

Become A DSP Tuning Master and Build More Efficient Neural Networks

Alex Elium

Sensor data is typically preprocessed with DSP in TinyML applications. As engineers deploy neural networks on ever-smaller processors, it is becoming necessary to tune DSP algorithms to fit within RAM or real-time processing constraints. But not all steps in a DSP pipeline are created equal! Knowing where to slim down can mean the difference between giving up a few percent of accuracy and ending up with a model that's no longer usable.

This presentation will show experimentation with DSP parameter choices (number of cepstral coefficients, spectrogram frame size, etc.) for an example keyword spotting classifier, and analyze the RAM, latency, and accuracy impacts of various scenarios. Attendees will leave with ideas on where to find elusive kilobytes of RAM and milliseconds of latency the next time they need to optimize a DSP pipeline.
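As a rough illustration of the arithmetic behind such tradeoffs (all parameter values below are assumptions for the example, not figures from the talk), the RAM held by a spectrogram-style feature matrix scales directly with frame count and feature count:

```python
# Back-of-the-envelope RAM estimate for a spectrogram/MFCC front end.
# Parameter values are illustrative assumptions, not figures from the talk.

def spectrogram_ram_bytes(window_ms, stride_ms, clip_ms, n_features,
                          bytes_per_value=4):
    """RAM needed to hold one clip's feature matrix (float32 by default)."""
    n_frames = 1 + (clip_ms - window_ms) // stride_ms
    return n_frames * n_features * bytes_per_value

# A 1 s clip, 32 ms window, 16 ms stride:
baseline = spectrogram_ram_bytes(32, 16, 1000, 40)  # 40 mel bins -> 9760 bytes
slimmer = spectrogram_ram_bytes(32, 16, 1000, 13)   # 13 cepstra  -> 3172 bytes
```

Dropping from 40 filterbank energies to 13 cepstral coefficients shrinks the feature buffer roughly threefold in this sketch, which is exactly the kind of lever the talk is about, traded against classifier accuracy.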


Learning How to Deliver AI Solutions in Days, Not Months

Jenny Plunkett

Please visit the following URL and read about a few things you should consider doing to prepare and take full advantage of the workshop:

https://bit.ly/2TqZ6CA

Visual AI solutions combined with powerful sound diagnostics for real-time decision-making are hallmarks of the Sony Spresense with Edge Impulse's embedded ML technology. Together, Edge Impulse and Sony bring a unique combination of solid computing performance and serious power efficiency that is ideal for edge computing applications.

Join our hands-on workshop to learn how to build future-proof solutions with smart sensor analysis: collecting raw data, filtering and processing images and signals, gaining insight into that data with signal processing and machine learning, and deploying your ML models ready for scale and industrial production.

  • Learn how embedded ML gives real-time insights into complex sensor streams
  • Build your first embedded ML model in real time
  • Gain insight into the types of problems ML solves, then build better products
  • Learn how to take your ideas to production and scale through complete MLOps

Workshop details:

  • A 90-minute workshop
  • Beginner/Intermediate skill level
  • Hands-On, Instructor-Led, Live
  • A recording will be shared post-event
  • A personalized Certificate of Accomplishment from Edge Impulse
  • Purchase your Sony Spresense kit from Adafruit today, or check here for more buying options.


Tiny Machine Vision: behind the scenes

Lorenzo Rizzello

Tiny devices, like the ones suitable for low-power IoT applications, are now capable of extracting meaningful data from images of the surrounding environment.

Machine vision algorithms, even Deep Learning-powered ones, need only a few hundred kilobytes of ROM and RAM to run. But what optimizations are involved in executing on such constrained hardware? What is possible, and how does it really work?
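One of the key optimizations behind those small footprints is quantization: storing weights and activations as 8-bit integers instead of 32-bit floats. Here is a minimal sketch of a generic per-tensor affine scheme, written for illustration and not matching any particular framework's exact recipe:

```python
import numpy as np

# Generic per-tensor affine int8 quantization (illustrative, not the exact
# scheme used by any specific deployment framework).

def quantize_int8(x):
    """Map a float32 tensor to int8 with a scale and zero point."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0           # guard against constant tensors
    zero_point = int(round(-128 - lo / scale))  # lo maps to -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(100).astype(np.float32)
q, s, zp = quantize_int8(w)
err = float(np.max(np.abs(dequantize(q, s, zp) - w)))
# Storage shrinks 4x (int8 vs float32); err stays within about one step size.
```

The 4x storage saving, plus integer arithmetic being much cheaper than floating point on Cortex-M parts, is what brings these models into the few-hundred-kilobyte range the abstract mentions.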

In this session, we will focus on the capabilities that are available for Cortex-M microcontrollers, starting from the user-friendly environment provided by Edge Impulse to train and deploy Machine Learning models to the OpenMV Cam H7+.

We will guide attendees through the process using a straightforward example that illuminates the inner workings, so they can get a grasp of the underlying technologies and frameworks. Attendees will walk away understanding the basic principles and able to apply them not just to Cortex-M devices but beyond.


The Past, Present, and Future of Embedded Machine Learning

Pete Warden

Pete Warden, from Google's TensorFlow Lite Micro project, will be talking about how machine learning on embedded devices began, and where it's heading. ML has been deployed to microcontrollers and DSPs for many years, but until recently it has been a niche solution for very particular problems. As deep learning has revolutionized the analysis of messy sensor data from cameras, microphones, and accelerometers, it has begun to spread across many more applications. He will discuss how voice interfaces are leading the charge for ML on low-power, cheap devices, and what other uses are coming. He'll also look into the future of embedded machine learning to try to predict how hardware, software, and applications will evolve over the next few years.


Object Classification Techniques using the OpenMV Cam H7

Lorenzo Rizzello

Machine Learning for embedded systems has recently started to make sense: compared to cloud-based solutions, on-device inference reduces latency and cost, and minimizes power consumption. Thanks to Google's TFLite Micro and its optimized ARM CMSIS-NN kernels, on-device inference now also means microcontrollers such as ARM Cortex-M processors.

In this session, we will examine machine vision examples running on the small and power-efficient OpenMV H7 camera. Attendees will learn what it takes to train models with popular desktop Machine Learning frameworks and deploy them to a microcontroller. We will take a hands-on approach, using the OpenMV camera to run the inference and detect objects placed in front of the camera.
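To give a feel for what an optimized int8 kernel like those in CMSIS-NN actually computes, here is a deliberately simplified model of one fully-connected layer: multiply int8 values, accumulate in int32 so nothing overflows, then rescale back to int8. (Real kernels use fixed-point multipliers, per-channel scales, zero points, and SIMD instructions; none of that is shown, and all numbers below are made up for illustration.)

```python
import numpy as np

# Simplified sketch of an int8 fully-connected layer, in the spirit of
# CMSIS-NN kernels: int8 operands, int32 accumulation, requantized output.
# Symmetric quantization (no zero points) is assumed for simplicity.

def int8_dense(x_q, w_q, bias_q, in_scale, w_scale, out_scale):
    """One dense layer: accumulate in int32, then requantize to int8."""
    acc = w_q.astype(np.int32) @ x_q.astype(np.int32) + bias_q  # no overflow
    real = acc * (in_scale * w_scale)                           # real-valued result
    return np.clip(np.round(real / out_scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
x_q = rng.integers(-128, 128, size=8, dtype=np.int8)       # quantized input
w_q = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)  # quantized weights
bias_q = np.zeros(4, dtype=np.int32)
y_q = int8_dense(x_q, w_q, bias_q, in_scale=0.02, w_scale=0.01, out_scale=0.2)
```

The point of the int32 accumulator is that a dot product of int8 values can far exceed the int8 range; only the final result is squeezed back down, which is what keeps every buffer in the network one byte per value.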


Causal Bootstrapping

Max Little

To draw scientifically meaningful conclusions and make reliable statistical signal processing inferences about quantitative phenomena, signal processing must take cause and effect into consideration, either implicitly or explicitly. This is particularly challenging when the relevant measurements are not obtained from controlled experimental (interventional) settings, so that cause and effect can be obscured by spurious, indirect influences.

Modern predictive techniques from machine learning are capable of capturing high-dimensional, complex, nonlinear relationships between variables while relying on few parametric or probabilistic modelling assumptions. However, because these techniques are associational, when applied to observational data they are prone to picking up spurious influences, making their predictions unreliable. Techniques from causal inference, such as probabilistic causal diagrams and do-calculus, provide powerful, nonparametric tools for drawing causal inferences from such observational data. These techniques are often incompatible with modern, nonparametric machine learning algorithms, however, since they typically require explicit probabilistic models.

In this talk I'll describe causal bootstrapping, a new set of techniques we have developed for augmenting classical nonparametric bootstrap resampling with information about the causal relationships between variables. This makes it possible to resample observational data such that, if an interventional relationship is identifiable from that data, new data representing that relationship can be simulated from the original observational data. In this way, we can use modern statistical machine learning and signal processing algorithms, unaltered, to make statistically powerful yet causally robust inferences.
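As a toy illustration of the idea (a sketch of the general principle, not the full machinery from the talk), consider a single discrete confounder Z satisfying the back-door criterion. Resampling the observational data with inverse-probability weights 1/p(x|z) yields bootstrap samples that mimic draws from the interventional distribution; the variable names and data-generating setup below are all assumptions made for the example:

```python
import numpy as np

# Toy back-door causal bootstrap with one discrete confounder Z.
# Resampling weights w_i = 1 / p(x_i | z_i) (estimated from the data)
# make the resamples approximate draws under do(x). This is an
# illustration of the idea only, not the talk's full algorithm.

def causal_bootstrap_indices(x, z, n_samples, rng):
    """Resample indices with weights inversely proportional to p(x_i | z_i)."""
    w = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        same_z = z == z[i]
        w[i] = 1.0 / np.mean(x[same_z] == x[i])  # inverse estimated p(x_i | z_i)
    return rng.choice(len(x), size=n_samples, p=w / w.sum())

# Synthetic confounded data: Z causes both X and Y; X causes Y with effect 2.
rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, size=n)
x = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)
y = 2.0 * x + 3.0 * z + 0.1 * rng.standard_normal(n)

idx = causal_bootstrap_indices(x, z, 20000, rng)
x_do, y_do = x[idx], y[idx]  # approximately distributed as under do(x)

naive = y[x == 1].mean() - y[x == 0].mean()              # confounded: ~3.8
causal = y_do[x_do == 1].mean() - y_do[x_do == 0].mean()  # ~2.0, the true effect
```

Any off-the-shelf estimator can then be run unaltered on the resampled data, which is the appeal of the approach the abstract describes.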


Get Started with TinyML

Jan Jongboom

TinyML is opening up incredible new applications for sensors on embedded devices, from predictive maintenance to health applications using vibration, audio, biosignals, and much more! Yet 99% of sensor data is discarded today due to power, cost, or bandwidth constraints.

This webinar explains why ML is useful for extracting meaningful information from that data, shows how this works in practice from signal processing to neural networks, and walks the audience through hands-on examples of gesture and audio recognition using Edge Impulse.

What you will learn:

  • What is TinyML and why does it matter for real-time sensors on the edge
  • Understanding of the applications and types of sensors that benefit from ML
  • What kinds of problems ML can solve and the role of signal processing
  • Hands-on demonstration of the entire process: sensor data capture, feature extraction, model training, testing and deployment to any device
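The full process in the last bullet can be sketched end to end on toy data. The synthetic "gestures", the hand-picked features, and the nearest-centroid classifier below are all illustrative stand-ins chosen for brevity, not Edge Impulse's actual pipeline:

```python
import numpy as np

# Toy end-to-end TinyML pipeline: capture -> feature extraction ->
# training -> testing. Everything here (signals, features, classifier)
# is an illustrative stand-in, not a real product pipeline.

rng = np.random.default_rng(1)

def capture(freq, n=64):
    """Fake one-axis accelerometer window: a noisy sinusoid at `freq` Hz."""
    t = np.arange(n) / 64.0
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(n)

def features(sig):
    """Tiny DSP front end: RMS energy plus dominant FFT bin."""
    spectrum = np.abs(np.fft.rfft(sig))
    return np.array([np.sqrt(np.mean(sig ** 2)), np.argmax(spectrum[1:]) + 1])

# "Wave" gestures oscillate fast, "tilt" gestures slowly.
train = [(features(capture(8)), "wave") for _ in range(20)] + \
        [(features(capture(2)), "tilt") for _ in range(20)]

# Training: one centroid per class in feature space.
centroids = {label: np.mean([f for f, l in train if l == label], axis=0)
             for label in ("wave", "tilt")}

def classify(sig):
    """Testing/deployment: nearest centroid wins."""
    f = features(sig)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

prediction = classify(capture(8))  # a fast oscillation -> "wave"
```

The same shape of pipeline, with real sensor capture, richer DSP blocks, and a neural network in place of the centroids, is what the hands-on demonstration walks through.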