The world’s largest embedded systems conference, Embedded World, has just concluded in Nuremberg, Germany, and just like last year, it did not disappoint. The conference brings together vendors from all over the world, including companies involved in semiconductors, tools, software, components, and much more. It also draws engineers from around the globe for three days of presentations that range from operating system fundamentals to advanced topics such as security and artificial intelligence. After attending this year’s conference, there are three trends I noticed that I believe you will find interesting.
Trend #1 – Security is important, but we aren’t quite there yet
All over the conference, I noticed companies promoting security solutions. Security is critical to the IoT, which is a major industry driver at the moment. The problem is that companies know they need security, but they don’t know what that means in practice. There are security technologies at the chip level, such as Arm TrustZone, but as I talked with engineers and vendors, they didn’t seem to really know what to do with them, and in quite a few cases the technologies were so new that they will not be available to the mass market until the end of the year.
I did come across several solutions that were interesting. For example, Secure Thingz has integrated a low-volume solution for provisioning secure devices into IAR Embedded Workbench. A security profile and certificates could be set up just by entering a few pieces of information and checking the boxes that applied. Another interesting solution I saw was the PSoC 64 family of parts from Cypress. These are dual-core processors pairing an Arm Cortex®-M0+ with a Cortex-M4. The M0+ acts as a security processor while the M4 acts as an application processor, similar to the “non-secure” world in a TrustZone implementation.
Trend #2 – Artificial Intelligence is coming
Artificial intelligence was everywhere at the conference, but it revealed itself through two primary application use cases:
- Object detection
- Audio processing
One could hardly move through the conference without seeing a camera in a booth with a display that was identifying objects as they passed by. I saw examples such as face detection, person detection, handbags, pizza, soda, and so on.
Specific solutions were everywhere, but as a microcontroller guru I gravitated toward solutions that could more or less run on a high-end microcontroller. I found several in the booths of NXP and STMicroelectronics. The first solution I looked at was from STMicroelectronics, which used their new AI plugin for the STM32CubeMX toolchain. The example performed object recognition using a series of neural networks. What I really liked about the solution was that it appeared easy to configure from within STM32CubeMX, and it provided useful information such as memory usage and run-time statistics like the time required to run the network. This is obviously important for understanding what size and class of processor would be needed for an edge AI application.
At the NXP booth, there were several AI examples that I found very interesting. The first was an i.MX RT106A with Amazon Alexa on board. I’ve always had an interest in home automation applications, and this solution seemed particularly interesting for edge devices and for me to play with in personal projects. A second solution I found at the NXP booth was a new project named Coral. The Coral board not only had a high-end processor on board but also a Google TPU! The object-recognition network they were running executed in nearly real time, averaging approximately 5 milliseconds to detect an object in an image. I also found it interesting that the development board was in a Raspberry Pi form factor that could be used for prototyping and development, while the CPU was in a module form that could be placed on a carrier board for production.
Trend #3 – Heterogeneous computing
Heterogeneous computing is when there is more than one type of processing core on a single chip. There are several advantages to this, such as dedicating one core to video processing, another to real-time processing, and so on. These types of systems have been common in the real-time space for a while, but they are now starting to find their way into microcontroller-based systems.
One problem developers will face is how to coordinate communications between cores that could be running bare-metal, an RTOS, or even a combination of the two. An interesting open-source solution I heard about is called OpenAMP. “The OpenAMP provides an open source framework that allows operating systems to interact within a broad range of complex homogeneous and heterogeneous architectures and allows asymmetric multiprocessing applications to leverage parallelism offered by the multicore configuration.” (https://www.multicore-association.org/workgroup/oamp.php).
Embedded World revealed quite a few interesting trends in the embedded systems industry, but as with many new technologies and tool releases, conferences often show us what we can expect to be using a year or two from now. They show off the latest technologies, but as I found, most solutions are at least six months away from reaching the mass market. While this can be a bit disappointing for developers who want to tinker, it is a useful way to identify what we need to start learning today so that our designs will be more effective a year from now. It can also show where our businesses may need to adapt in order to keep up with competitors, and unveil new opportunities that didn’t exist last week.