SemIsrael Blogs

Please take a break to read SemIsrael blogs.

Different countries have different languages, and China in particular has different dialects. Even so, there are countries where engineers write in English, though in most of them engineering is documented in the local language. Nevertheless, the language of marketing in some of these countries is English. In Japan, China and Taiwan, technical specifications are written in Japanese/Mandarin, while in South Korea and India they are written in English.

Before we get started here, let me assure you it’s not what it sounds like. I’m not talking about the end of coverage as though it’s something we’ll stop using. The end in this case is the home stretch of any non-trivial ASIC or FPGA development effort – which is almost all of them nowadays – where coverage collection, analysis and reporting consume a team on its way to RTL signoff.

There is probably not a single embedded system that is built without open source software or 3rd party silicon IP, or that is not manufactured far from the design and distribution centers that make and sell these systems. Those who want to secure the design and delivery chain have no standard to address this. This has left development teams to struggle for ways to mitigate and address security risks when third party IP and associated components are integrated into today’s modern embedded systems.

Advice on how to compare inferencing alternatives and the characteristics of an optimal inferencing engine.

In the last six months, we’ve seen an influx of specialized processors and IP to handle neural inferencing in AI applications at the edge and in the data center. Customers have been racing to evaluate these neural inferencing options, only to find out that it’s extremely confusing and no one really knows how to measure them. Some vendors talk about TOPS and TOPS/Watt without specifying models, batch sizes or process/voltage/temperature conditions. Others use the ResNet-50 benchmark, which is a much simpler model than most people need so its value in evaluating inference options is questionable.

There are almost a dozen vendors promoting inferencing IP but none of them gives even a ResNet-50 benchmark.

Typically, the only figures they state are TOPS (Tera-Operations/Second) and TOPS/Watt.

Let’s discuss why these two indicators of performance and power efficiency are almost useless by themselves.
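To make the point concrete, here is a toy sketch (with hypothetical numbers, not from any vendor datasheet) of why peak TOPS alone does not predict real inference throughput: what a model actually achieves depends on how well it keeps the MAC array busy, and that utilization varies with model architecture, batch size and memory bandwidth.

```python
def effective_throughput(peak_tops: float, utilization: float,
                         ops_per_inference: float) -> float:
    """Inferences per second actually achieved for a given model."""
    usable_tops = peak_tops * utilization          # TOPS really delivered
    return (usable_tops * 1e12) / ops_per_inference

# Two hypothetical accelerators with identical 100-TOPS datasheets:
# one sustains 60% utilization on a model, the other only 15%.
resnet50_ops = 7.7e9   # roughly 7.7 GOPs per ResNet-50 inference

chip_a = effective_throughput(peak_tops=100, utilization=0.60,
                              ops_per_inference=resnet50_ops)
chip_b = effective_throughput(peak_tops=100, utilization=0.15,
                              ops_per_inference=resnet50_ops)

print(f"Chip A: {chip_a:.0f} inferences/s")
print(f"Chip B: {chip_b:.0f} inferences/s")
```

Both chips quote the same TOPS, yet one delivers four times the throughput of the other on the same model, which is exactly the information the datasheet number hides.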

Abstract - With the growing number of mixed-signal SoC designs, and accordingly growing mixed-signal verification needs, UVM, a proven verification methodology for complex digital SoCs, has emerged as a solution. However, many mixed-signal UVM verification approaches exist, with no standardized method of connecting a UVM environment to a mixed-signal design. For these reasons, efficient mixed-signal design verification is becoming challenging and opens space for innovative verification solutions. This paper shows different ways to connect a UVM environment to a mixed-signal design when Verilog-AMS models are used.

Read the full blog here

High-performance SerDes represents critical enabling technology for advanced ASICs. This star IP block finds application in many networking and switching designs as well as other high-performance applications. So, when a new high-performance SerDes block hits the streets, it’s real news. eSilicon has been enjoying the spotlight on such an event. We recently announced silicon validation of our 7nm, 56G long-reach SerDes. We were happy to report in that announcement: “lab measurements confirm that the design is meeting or exceeding the target performance, power and functionality.” Anyone who has plugged a new and complex chip into a test fixture for the first time knows what this feels like.

Click here to read the full blog

This moment has been a long time coming. The technology behind speech recognition has been in development for over half a century, going through several periods of intense promise — and disappointment. So what changed to make ASR viable in commercial applications? And what exactly could these systems accomplish, long before any of us had heard of Siri?
The story of speech recognition is as much about the application of different approaches as the development of raw technology, though the two are inextricably linked. Over a period of decades, researchers would conceive of myriad ways to dissect language: by sounds, by structure — and with statistics.

In part 1 and part 2 of my blog, we looked at the capabilities that the EDA tools provide in the area of supply network analysis, as well as the different methods of power shut-off control. In this third blog, we will look at inter-domain switch control and the role it plays in further taming the inrush currents.

Inter-Domain Switching Control

We must also consider the inter-domain switching control methods during our supply network analysis. Here each method provides a way to control the simultaneous switching of multiple logic domains.

  • No control – logic domains may all transition at the same time. This can cause higher peak inrush currents and may require longer switch control delays, additional decoupling capacitance or larger physical power rails.
  • Sequential domain list – reduces the simultaneous transitions to one by following a fixed sequence for the domain transitions. This limits the inrush current to a single domain at a time, but increases total transition time.
  • First-level grouping based on state, followed by a sequential domain list – reduces the simultaneous transitions to one by processing the domains in a series of groups defined by the operating-point state and then, within each group, following a fixed sequence for the domain transitions. This allows the transition order to be modified per operating-point state. The inrush current is again limited to a single domain at a time, at the cost of increased total transition time.
  • Cost model per supply – reduces the simultaneous transitions to a smaller number based on a cost model per domain and a limit per supply. Domains with a small cost may transition if the supply remains under its limit; domains with a large cost may have to wait until other transitions complete. This reduces the total time to transition a series of domains while still maintaining an acceptable inrush current.
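The trade-off between the first two control methods above can be sketched numerically. The following is a minimal toy model (not from the original blog): each domain draws a simple triangular inrush current profile when it powers up, and we compare the peak rail current for "no control" versus a "sequential domain list".

```python
def inrush_profile(peak: float, duration: int) -> list:
    """Triangular current ramp-up/ramp-down for one domain (amps per step)."""
    half = duration // 2
    up = [peak * (i + 1) / half for i in range(half)]
    return up + up[::-1]

def total_current(start_times, profiles):
    """Sum per-domain currents on a shared supply over time."""
    end = max(s + len(p) for s, p in zip(start_times, profiles))
    rail = [0.0] * end
    for start, prof in zip(start_times, profiles):
        for i, amps in enumerate(prof):
            rail[start + i] += amps
    return rail

# Four identical domains, each peaking at 2 A over 10 time steps.
domains = [inrush_profile(peak=2.0, duration=10) for _ in range(4)]

# No control: all four domains switch on at t=0 simultaneously.
no_ctrl = total_current([0, 0, 0, 0], domains)

# Sequential list: each domain waits for the previous one to finish.
sequential = total_current([0, 10, 20, 30], domains)

print(f"peak, no control : {max(no_ctrl):.1f} A")   # 8.0 A
print(f"peak, sequential : {max(sequential):.1f} A")  # 2.0 A
print(f"transition time  : {len(sequential) // len(no_ctrl)}x longer")
```

The sequential list cuts the peak from 8 A to 2 A, but the full power-up takes four times as long, which is precisely the tension the grouping and cost-model methods try to relax.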

Read the full blog here

In the past few years, automatic speech recognition (ASR) has become common practice, with billions of voice-enabled products and services. A wide variety of ASR technologies exists, each suited for different use cases. Undeniably though, the holy grail of ASR is natural language processing (NLP), which lets users speak freely, as if they were talking to another person. A simple example is that you can say “Set a reminder for 9AM the day after tomorrow” to any of the leading virtual assistants like Alexa, Google Assistant, Siri or Cortana, and they would understand the intent. There is no specific order or magic word that you have to say. You could also say “remind me on Wednesday at 9 in the morning” or “set a reminder on May 16th at 9 AM” and get the same result. The bottom line in NLP is extracting the meaning, regardless of the phrasing.
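The core idea of "extracting the meaning, regardless of the phrasing" can be illustrated with a deliberately simple rule-based sketch. This is a toy stand-in, not how any commercial assistant actually works: real NLP systems use trained models, but the input/output contract is the same — many utterances map to one normalized intent with slots.

```python
import re

def parse_reminder(utterance: str) -> dict:
    """Extract a 'set_reminder' intent plus day/time slots, if present."""
    text = utterance.lower()
    intent = "set_reminder" if re.search(r"\b(remind|reminder)\b", text) else "unknown"

    day = None
    m = re.search(r"\b(day after tomorrow|tomorrow|monday|tuesday|wednesday|"
                  r"thursday|friday|saturday|sunday)\b", text)
    if m:
        day = m.group(1)

    time = None
    m = re.search(r"\b(\d{1,2})\s*(am|pm|in the morning)\b", text)
    if m:
        hour = int(m.group(1))
        time = f"{hour}:00 PM" if m.group(2) == "pm" else f"{hour}:00 AM"

    return {"intent": intent, "day": day, "time": time}

# Different phrasings, same extracted meaning:
print(parse_reminder("Remind me on Wednesday at 9 in the morning"))
print(parse_reminder("Set a reminder for Wednesday at 9AM"))
```

Both utterances come back with the same intent and slots, which is the behavior the blog describes — no specific order or magic word required.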

Read the full article on Embedded Computing Design.
