RISC-V Summit: Silicon Labs Q&A on IoT Hardware

Silicon Labs' chief technology officer shares his thoughts on what RISC-V technology means for IoT hardware and why the industry needs more renaissance people.

Brian Buntz

December 12, 2019


The electronics field has a good deal in common with the pharmaceutical industry. Both spend massive amounts of cash on research and development and yet are struggling to maintain former innovation benchmarks. 

In 2003, the average cost of developing a marketable drug was $802 million, not adjusting for inflation. Now, the figure is around $2.6 billion. Complicating matters for the pharmaceutical industry are global efforts to curtail drug costs. 

While the costs of hardware development tend to be less exorbitant than R&D in pharma, there is a similar sense of uncertainty stemming from high costs and low returns. 

One strategy to drive efficiency is the use of multiple cores. The multicore system-on-a-chip trend took off in roughly 2005, the same year AMD debuted its first dual-core central processing unit. In the 14 years since, core counts have steadily climbed in both CPUs and SoCs.

For SoCs, the use of multiple cores in embedded designs offers a range of benefits. It can help drive hardware consolidation and trim the bill of materials. It can also bolster energy efficiency and potentially enable higher capacity and performance. In the long run, the growing popularity of RISC-V could help drive down the cost of multicore SoCs.


In terms of IoT hardware, the multicore trend means that many of the billions of SoCs used in an array of device types will leverage specialized cores to manage everything from subsystem supervision to neural network tasks.

One open question, however, is how the rising popularity of the RISC-V architecture might affect the multicore trend for IoT devices. At the RISC-V Summit in San Jose, we had the chance to speak with Alessandro Piovaccari, chief technology officer of Silicon Labs, about precisely that theme. In the following interview, Piovaccari shared his thoughts on everything from the trend toward multicore and heterogeneous SoC designs for IoT applications to the need for “Renaissance” hardware engineers. The responses have been edited for brevity.

How do you think the RISC-V architecture will influence the multicore trend? And how do you see it affecting IoT hardware?

Piovaccari: You see more cores used in a single device. For the main core where you run the application, Silicon Labs is going to stay with Arm because the ecosystem for developers is massive. And RISC-V is not ready for that yet. 

Let’s say you’re doing something more complex than a Raspberry Pi; there, using RISC-V is a significant economic advantage. In IoT, it is not really that big of an advantage if you only have one core. The fee that you pay to Arm is totally manageable. But if you have more than one core, then you end up paying multiple times.

It’s not that our chip with RISC-V is necessarily going to cost less. It’s the fact that we can add a lot of [functionality with] RISC-V without paying more. If I need to use six cores from Arm, it is going to be very expensive. If I use one Arm core and five from RISC-V, then the price will be good.
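
His per-core point can be expressed as a toy royalty model. The sketch below is illustrative only: the royalty figure is a made-up placeholder, since real Arm licensing terms are confidential and vary by deal; what matters is how the fee scales with core count.

```python
# Toy per-chip royalty model for the six-Arm-cores vs.
# one-Arm-plus-five-RISC-V comparison. The royalty figure
# is a placeholder, not an actual licensing term.

ARM_ROYALTY_PER_CORE = 0.10    # assumed dollars per core per chip
RISCV_ROYALTY_PER_CORE = 0.0   # the RISC-V ISA itself is royalty-free

def per_chip_royalty(arm_cores: int, riscv_cores: int) -> float:
    """ISA royalty per chip under the assumed terms above."""
    return (arm_cores * ARM_ROYALTY_PER_CORE
            + riscv_cores * RISCV_ROYALTY_PER_CORE)

print(per_chip_royalty(6, 0))  # all Arm: the fee multiplies with cores
print(per_chip_royalty(1, 5))  # one Arm app core plus five RISC-V cores
```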

How will the open-source nature of RISC-V change hardware engineering? 

Piovaccari: Traditionally, Arm processors come as a black box, and you would use the processor as it is. When you start working with a RISC-V core, you have full access to the source code. You can modify what you want. And you can take only what you need, and adapt it. Of course, you still pay for the development work, but it gives you a lot of freedom to make a system that is much more flexible, upgradeable and optimized for power, lifetime, etc. 

What’s your take on the assessment that Moore’s Law is stagnating or dying? 

Piovaccari: When [Gordon] Moore talked about Moore’s Law, there were two components. One component of Moore’s Law is the fact that geometry gets smaller every couple of years. But there is also the fact that we can make each component better. That’s what he calls ‘device and circuit cleverness.’

We’ve been spoiled with Moore’s Law because the geometric breakthroughs were so [substantial.] There wasn’t much point in looking at the circuit. But we still have a lot of smartness we can bring to the design of the circuit. So, there is still a lot of work we can do with Moore’s Law to make it better.

Our devices are still in a phase where we can take advantage of Moore’s Law. So, as we proceed, we want to put more and more functionality in the chip. One of the things that is going to happen is that neural networks are going to come to the chip.

If you look at the power consumption on the chip, radio transmission is the biggest culprit. If you have a battery-powered, voice-enabled device with a microphone that continuously streams audio to the cloud, the battery will drain in a matter of days. But why can’t I recognize the voice directly on the device? We can use Moore’s Law to cram a lot of functionality onto the chip in this device. We can do actual neural network processing on the chip. You will see that in the next five years — a lot of local devices will have AI capability. To do that, you’ll have to build neural networks. And these neural networks need optimized cores. RISC-V is a very good element for optimizing neural networks in these very small devices — with lower power.
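
The ‘matter of days’ claim is easy to sanity-check. The back-of-envelope sketch below compares always-on cloud streaming with local keyword spotting; every battery and power figure is an illustrative assumption, not a measured Silicon Labs number.

```python
# Back-of-envelope battery-life comparison: streaming audio to the
# cloud versus running keyword spotting locally. All figures are
# illustrative assumptions, not measured values.

BATTERY_MWH = 1000.0   # assumed small cell, roughly 1 Wh of energy

STREAMING_MW = 50.0    # assumed average draw with the radio always on
LOCAL_NN_MW = 2.0      # assumed draw for on-chip keyword spotting

def battery_life_days(avg_power_mw: float) -> float:
    """Battery energy divided by average draw, converted to days."""
    return BATTERY_MWH / avg_power_mw / 24.0

print(f"Cloud streaming: {battery_life_days(STREAMING_MW):.1f} days")
print(f"Local inference: {battery_life_days(LOCAL_NN_MW):.1f} days")
# Under these assumptions: ~0.8 days streaming vs. ~20.8 days local,
# which matches the 'drains in a matter of days' intuition.
```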

Can you provide another example of an application where an edge computing approach can save a significant amount of energy? 

Piovaccari: Imagine a display that turns on when you gaze at it: if you look at the device, the switch wakes it up. If you want to do that with a neural network running in the cloud, it means that the device is transmitting to the cloud continuously. That’s going to use a lot of power. But if you do the recognition of the image locally, the processing is much more efficient. The data you transmit is only something like: ‘Hey, somebody is looking.’ That might only be three times a day. Neural networks are actually very power efficient at recognizing an image if you keep the data and the computing local. The only way we can bring battery-operated image recognition or voice recognition to an end node is to use neural networks locally instead of the cloud.
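
The pattern Piovaccari describes can be sketched as an event-driven loop: raw frames stay on the device, inference runs locally, and only a tiny event message goes over the radio. The sensor, model and radio interfaces below are hypothetical placeholders, not a real device API.

```python
# Edge pattern sketch: run the neural network locally and transmit
# only a tiny event, never the raw frames. Every interface here is a
# hypothetical placeholder, not a real device API.

import random

def camera_capture():
    """Stand-in for a low-power image-sensor read; frame stays on-device."""
    return object()

def gaze_detected(frame) -> bool:
    """Stand-in for on-chip neural-network inference (milliwatt scale)."""
    return random.random() < 0.001   # pretend a gaze event is rare

def radio_send(event: dict):
    """Stand-in for a low-rate radio uplink; only a few bytes leave."""
    print("uplink:", event)

for _ in range(10_000):              # duty-cycled sensing loop
    frame = camera_capture()         # processed locally, never transmitted
    if gaze_detected(frame):
        radio_send({"event": "somebody is looking"})
# On real hardware the loop would sleep between frames; the sleep is
# omitted here so the demo finishes quickly.
```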

How do you see the role of hardware engineers changing as the industry’s relationship to Moore’s Law evolves?

Piovaccari: With Moore’s Law, there were a lot of companies using a building-block approach. To create a system was kind of like playing with Legos. Some people worked on the processor. Others worked on IP. Some put the chip together. And other people put the PCB together. It was very layered. And we were all happy. Until we realized two things: Moore’s Law is compressing and we’re using too much power. So this approach is not scalable anymore. 

It was similar to when the first Industrial Revolution started. At the beginning of that, you didn’t realize the pollution it created. About 50 years later, you realize people might have died from that pollution, so you need to go in a different direction. 

With power consumption, we didn’t really care until a few years ago. In the past, you needed a 500-watt power supply for your PC. Now, [modern laptops have] more computing power, and they don’t even get warm.

Now, if you want to optimize electronics, you can’t just use a building-block approach. You need to start by thinking about the problem you are trying to solve. When you build the system, you need to consider all the levels. Optimization and customization with RISC-V fall into that category.

In the past, I was getting a chip, I would slap it in, and if it worked, I was fine. But now, I might need to cut power consumption by 50% with everything else being equal. When that happens, I really have to look at the source code and the processor to see which pieces consume the most power. Can I modify it? If I change it, maybe it’s very good for my particular application but not very good for another application. The people who modify the core and the people who understand the application need to be the same person.

In the past, hyper-specialization [was the norm]. I don’t mean this the wrong way, but we have enough people who are hyper-specialized. We don’t have enough ‘Renaissance’ people. In the 1960s, hardware engineers tended to be Renaissance people. Now, many of the people who would be candidates to become Renaissance hardware engineers want to do AI. They don’t want to come to electronics. And in electronics, we are left with specialized people. They are very, very good at doing one thing, but they only want to learn how to do that one thing.

What do you think it will take to build a world with 1 trillion connected devices? That seems to be the new benchmark I keep hearing.

Piovaccari: To build this generation of 1 trillion devices, we need to think about power consumption. If you really had 1 trillion devices and [a significant portion of those used] batteries with the same power consumption we have today, we might have to triple the production of batteries in the world. We already have 5–10 billion batteries in the world. Imagine if we needed another 20 billion batteries per year. That would result in a lot of pollution.
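
The arithmetic behind that worry is straightforward to reproduce. In the sketch below, the battery-powered share and replacement rate are assumptions chosen to show how quickly a trillion devices reach 20 billion batteries a year; they are not figures from the interview.

```python
# Rough battery arithmetic for a 1-trillion-device world. The share
# and replacement-rate figures are assumptions, not interview data.

TOTAL_DEVICES = 1_000_000_000_000   # the 1 trillion device target
BATTERY_SHARE = 0.20                # assumed: 1 in 5 devices on batteries
SWAPS_PER_YEAR = 0.10               # assumed: one battery swap per decade

batteries_per_year = TOTAL_DEVICES * BATTERY_SHARE * SWAPS_PER_YEAR
print(f"{batteries_per_year / 1e9:.0f} billion batteries per year")
# -> 20 billion per year even with decade-long battery life, which is
# why power consumption has to be solved before IoT scales up.
```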

So we need to try to solve this problem before IoT really becomes big. The Renaissance engineer, the person who can think about the full problem, thinks: ‘Hey, I’m designing something for this part of an application. And when I use this core, it has to have this functionality.’ You cannot just have people who only know their layer. You need to be multidisciplinary.  

Interestingly enough, I just read a book titled “Range: Why Generalists Triumph in a Specialized World” by David Epstein. The thought is that having people build a range of knowledge can bring you into a better career than being super-specialized. I really suggest reading it. It’s a great source of inspiration. 

About the Author

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.
