SemIsrael Blogs
Greg Ehmann

Greg Ehmann is EPU Architect at Sonics.

In part 1 and part 2 of my blog, we looked at the capabilities that EDA tools provide in the area of supply network analysis, as well as the different methods of power shut off control. In this third blog, we will look at inter-domain switch control and the role it plays in further taming inrush currents.

Inter-Domain Switching Control

We must also consider the inter-domain switching control methods during our supply network analysis. Each of the following methods provides a way to control the simultaneous switching of multiple logic domains.

  • No control – logic domains may all transition at the same time. This can cause higher peak inrush currents and may require longer switch control delays, additional decoupling capacitance, or larger physical power rails.
  • Sequential domain list – reduces the simultaneous transitions to one by following a fixed sequence for the domain transitions. This limits the inrush current to a single domain at a time, but increases total transition time.
  • First-level grouping based on state, followed by a sequential domain list – reduces the simultaneous transitions to one by processing the domains in a series of groups defined by the operating point state and then, within each group, following a fixed sequence for the domain transitions. This allows the transition order to be modified per operating point state. The inrush current is again limited to a single domain at a time, but total transition time increases.
  • Cost model per supply – reduces the simultaneous transitions to a smaller number based on a cost model per domain and a limit per supply. Domains with a small cost may transition if the supply remains under its limit. Domains with a large cost may have to wait until other transitions complete. This reduces the total time to transition a series of domains while still maintaining an acceptable inrush current.
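To make the last method concrete, here is a minimal sketch of a cost-model-per-supply scheduler. The domain names, costs, and the greedy packing policy are my own illustrative assumptions, not taken from any particular tool; real controllers track supply current dynamically rather than precomputing slots.

```python
# Hypothetical sketch: pack domain power-up transitions into time slots so
# that the combined inrush "cost" on a shared supply never exceeds a limit.

def schedule_transitions(domains, supply_limit):
    """domains: list of (name, cost) pairs, cost modeling peak inrush draw.
    supply_limit: maximum combined cost allowed on the supply at once.
    Returns a list of slots; domains in the same slot transition together."""
    pending = sorted(domains, key=lambda d: d[1], reverse=True)
    slots = []
    while pending:
        slot, used, remaining = [], 0, []
        for name, cost in pending:
            if used + cost <= supply_limit:
                slot.append(name)      # small enough to switch now
                used += cost
            else:
                remaining.append((name, cost))  # too costly now; wait
        if not slot:  # a single domain whose cost alone exceeds the limit
            slot.append(remaining[0][0])
            remaining = remaining[1:]
        slots.append(slot)
        pending = remaining
    return slots

# Four domains sharing one supply with a peak budget of 10 units:
print(schedule_transitions(
    [("cpu", 8), ("gpu", 6), ("dsp", 3), ("io", 2)], supply_limit=10))
# → [['cpu', 'io'], ['gpu', 'dsp']]
```

Note how the low-cost "io" domain rides along with "cpu" in the first slot, while the larger "gpu" and "dsp" domains wait — fewer total slots than a strict sequential list, yet the supply budget is respected in every slot.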


In part 1 of my blog, we looked at the capabilities that the EDA tools provide in the area of supply network analysis. Now, we look at the different methods of power shut off control in the supply network.

Power Shut Off Control

The various methods used to control power shut off switches each provide a way to control inrush currents and a way to indicate when a stable supply level has been achieved. Generally speaking, they approach the problem of minimizing peak inrush current by starting with a large power switch resistance (since I = ΔV/R, a larger R means a smaller current for a given voltage difference) and then gradually reducing the effective resistance as the voltage difference across the power switch decreases.
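The effect of staging the switch resistance can be seen in a rough numeric sketch. The capacitance, resistances, and voltage thresholds below are illustrative assumptions of my own, not values from any real design; the point is only that weak-then-strong switch enabling caps the peak I = ΔV/R.

```python
# Hypothetical sketch: charge a powered-down domain's decoupling capacitance
# toward VDD through a power switch whose effective resistance is stepped
# down as the voltage gap closes (weak switch segments first, strong last).

def simulate_rampup(vdd=1.0, c=1e-9, dt=1e-9, steps=None):
    """steps: list of (gap_threshold, resistance) pairs; the first entry
    whose threshold the remaining voltage gap meets is the active segment.
    Returns (peak_current_in_amps, cycles_to_reach_95_percent_of_vdd)."""
    if steps is None:
        # gap >= 0.5 V: weak 1 kΩ; gap >= 0.2 V: 100 Ω; else strong 10 Ω
        steps = [(0.5, 1000.0), (0.2, 100.0), (0.0, 10.0)]
    v, peak_i, cycles = 0.0, 0.0, 0
    while v < 0.95 * vdd:
        gap = vdd - v
        r = next(res for thr, res in steps if gap >= thr)
        i = gap / r                # inrush current through the switch
        peak_i = max(peak_i, i)
        v += i * dt / c            # dV = I*dt/C on the domain's decap
        cycles += 1
    return peak_i, cycles

staged_peak, _ = simulate_rampup()
single_peak, _ = simulate_rampup(steps=[(0.0, 10.0)])  # strong switch only
print(staged_peak < single_peak)  # → True
```

With these assumed values the staged ramp trades a longer power-up time for roughly a 5× lower peak current than closing the strong switch immediately — the same trade-off the control methods above manage.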

One often overlooked network on a chip is the power supply network. While not as glamorous as the communication network, it still provides an essential function and therefore, requires careful design and analysis.

The EDA companies have invested in the development of tools to perform design and analysis of many of the aspects of the power supply network to make sure we get it right. These aspects include the following.

Unified Power Format (UPF) is an ever-evolving standard. It started as a technical committee within the Accellera organization in 2006, which produced the first revision of the UPF specification, UPF 1.0, in 2007. Soon after the UPF 1.0 release, the group reformed under the IEEE as IEEE 1801, with a major goal of merging in a competing standard, the Common Power Format (CPF). IEEE 1801 has since released three new versions of the UPF specification over the past ten years: IEEE 1801-2009 (UPF 2.0), IEEE 1801-2013 (UPF 2.1), and IEEE 1801-2015 (UPF 3.0).

There have been many blogs and articles written on power management utilizing dynamic voltage and frequency scaling (DVFS), a method by which a discrete voltage and frequency pair is chosen from a predetermined list based on an input requirement. For an example, read Don Dingee’s blog entitled “DVFS is Dead, Long Live Holistic DVFS.” Choosing this input requirement is where it all starts.

The most common input requirement is the required performance. Where do we find the required performance? Well, the OS surely knows what tasks are running, so it can estimate the required throughput. This throughput directly relates to the frequency and power state of one or more CPU cores. DVFS lets the OS reduce the frequency to match the desired throughput, while reducing the voltage to the minimum level that supports safe operation at that frequency. But how do we choose that voltage?
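The selection step described above can be sketched in a few lines. The frequency/voltage table and the governor policy here are illustrative assumptions of my own, not values from any real part: each frequency in the predetermined list is paired with the minimum voltage characterized as safe for it, and the OS simply picks the lowest entry that meets the throughput requirement.

```python
# Hypothetical DVFS operating-point table: (frequency_MHz, voltage_V) pairs,
# lowest first, each voltage being the assumed minimum safe level for that
# frequency. Real tables come from silicon characterization.
OPP_TABLE = [(400, 0.70), (800, 0.80), (1200, 0.90), (1600, 1.05)]

def pick_operating_point(required_mhz):
    """Return the lowest (frequency, voltage) pair that still meets the
    required throughput; fall back to the fastest point if none does."""
    for freq, volt in OPP_TABLE:
        if freq >= required_mhz:
            return (freq, volt)
    return OPP_TABLE[-1]

print(pick_operating_point(900))   # → (1200, 0.90)
print(pick_operating_point(2000))  # → (1600, 1.05), best we can do
```

Because frequency and voltage are locked together in the table, choosing the frequency answers the voltage question too — which is exactly why building that table (the safe minimum voltage per frequency) is where the real work lies.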