
ADAC Scientific Seminar: Identifying and Addressing TinyML Bottlenecks at the Edge 

Presenter: Bruno Lovison Franco

Abstract:
Machine Learning (ML) has experienced substantial growth, expanding into numerous use cases. In IoT networks, ML algorithms enable autonomous and intelligent behaviors. While ML models traditionally run on servers, migrating them to the edge of the network offers several benefits, including reduced latency, improved security, and power efficiency. TinyML is the research field that aims to bring ML models to edge devices using software techniques such as precision reduction and compression. Despite TinyML's software optimizations, edge nodes must rely on low-complexity, and consequently less accurate, algorithms because of their limited computing capabilities. Thus, integrating accurate ML algorithms at the edge remains a challenge. To address this issue, we explore inference bottlenecks on an edge-representative, FPGA-based platform. We compare three neural network architectures on several metrics, including inference time. Our study reveals an uneven distribution of inference time across the layers of the models. Our platform will allow us to study hardware-based acceleration of TinyML inference, addressing the identified layer bottlenecks.
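The precision reduction the abstract mentions can be illustrated with a minimal sketch: symmetric 8-bit quantization of a weight vector. The helper names here are hypothetical, and real TinyML toolchains (e.g. TensorFlow Lite for Microcontrollers) use more elaborate per-channel and asymmetric schemes; this only shows the core idea of trading precision for memory.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.93, -0.61]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each quantized value fits in one byte instead of four, at the cost
# of a reconstruction error of at most scale/2 per weight.
```

This kind of software-only optimization shrinks the model roughly 4x, but it does not change where the inference time is spent across layers, which is the bottleneck the seminar examines.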

Date: July 19, 2023, from 2 to 4 pm (seminar room, LIRMM*)

