(Nov '21) The Knowles AISonic™ Hardware Grant

Wevolver, in partnership with Knowles, is providing hardware grants to support engineers developing cutting-edge audio products. 

We are giving away five AISonic™ IA8201 Raspberry Pi Development Kits to help you test, experiment with, and create audio and machine-learning features for applications such as IoT, audio, and ear-worn devices.

To apply for the grant, submit this brief form describing how you will use the Knowles Development Kit. The five applicants with the most innovative responses will receive a kit to build and implement their idea. The winning projects will be profiled at CES 2022 in Las Vegas, alongside other Knowles demos. The winning projects, along with other outstanding submissions, will also receive exclusive Wevolver and Knowles swag.

All submissions will be reviewed by a panel of senior Knowles engineers and industry experts.

Project submissions will be evaluated on the following: 

  • Innovation – How innovative or unique is your idea?
  • Product fit – How easily can this idea be turned into a product, solution, or service?
  • Scalability – How easily can your product/solution/service idea be scaled for the broad market or for applications with high volumes?

Image: the Raspberry Pi with adapter (left), the IA8201 processor board (middle), and the mic-array board (right).

About the AISonic™ IA8201 Raspberry Pi Development Kit

The recently launched Knowles AISonic™ IA8201 Raspberry Pi Development Kit is an all-in-one package that brings voice, audio edge processing, and machine learning (ML) listening capabilities to devices and systems for a range of new applications. Product designers and engineers now have a single tool to streamline the design, development, and testing of technology that pushes the boundaries of voice and audio integration in their respective industries.

The new kit is built around the Knowles AISonic™ IA8201 Audio Edge Processor OpenDSP, which delivers ultra-low-power, high-performance processing for a wide range of audio needs. The audio edge processor combines two Tensilica-based, audio-centric DSP cores: one for high-power compute and AI/ML applications, and the other for very low-power, always-on processing of sensor inputs. The IA8201 includes 1 MB of on-chip RAM, enough for high-bandwidth processing of advanced, always-on, contextually aware ML use cases and for holding multiple algorithms simultaneously for an optimal user experience.

Built on the Knowles open DSP platform, the kit includes on-board audio algorithms and AI/ML libraries. Far-field audio applications can be built using the available ultra-low-power voice wake, beamforming, custom keyword, and background noise elimination algorithms from Knowles algorithm partners such as Amazon Alexa, Sensory, Retune, and Alango, opening up design possibilities and providing the freedom needed to support a wide range of voice and audio customization. The kit also features a TensorFlow Lite Micro SDK for fast prototyping and product development of AI/ML applications. The SDK allows models developed in full cloud TensorFlow frameworks to be ported to an embedded edge platform with limited compute and lower power consumption, for example, AI inference engines for anomaly detection in industrial and commercial verticals, as sketched below.
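To make that porting step concrete, here is a minimal, generic sketch of the usual TensorFlow Lite workflow: train a small audio model in standard TensorFlow, then convert it to a quantized flatbuffer suitable for a low-power edge target. This is illustrative only and not Knowles-specific code; the model architecture, the random representative dataset, and the keyword_model.tflite filename are assumptions, and deploying the resulting file to the IA8201 would go through the kit's own SDK and tooling.

```python
# Hypothetical example: convert a tiny keyword-spotting model to a
# quantized TensorFlow Lite flatbuffer for an embedded edge target.
import numpy as np
import tensorflow as tf

# Toy model: 49x40 log-mel spectrogram in, 4 keyword classes out.
# A real model would be trained on labelled audio; training is omitted here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Representative data drives post-training int8 quantization, which shrinks
# the model and lets it run on integer-only, low-power DSP/MCU hardware.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 49, 40, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# The flatbuffer is what gets deployed to the edge device via the vendor SDK.
with open("keyword_model.tflite", "wb") as f:
    f.write(tflite_model)
```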

With options for either two or three pre-integrated Knowles Everest microphones based on product design needs, the kit includes two microphone array boards to help select the appropriate algorithm configurations for the end application. The built-in microphone arrays that support the audio and voice capabilities of the IA8201 DSP give OEMs a high-quality, high-performance, all-in-one development solution from a single supplier. Developer support is available through the Knowles Solutions Portal for the configuration tools, firmware, and algorithms that come standard with the kit, allowing for complete prototyping, design, and debugging. Read more about the kit here.

Application areas are only limited by your imagination but include:

  • Voice user interface systems for consumer devices, smart home devices, appliances, and other IoT devices
  • Voice communications devices with advanced audio features and machine-learning enabled context awareness
  • Acoustic event recognition for consumer, industrial (condition monitoring etc.), and medical devices (diagnostics, etc.)

The development kit can be used to prototype smart home devices from speakers to appliances.

Key dates

  • Grant opens – October 26
  • Grant closes – December 3, midnight Pacific time
  • Grant recipients announced – December 10
  • Grant article and Meet the Jury