Synopsys Blog

By Gordon Cooper, Staff Product Marketing Manager, Synopsys Solutions Group

From smart speakers and digital cameras to edge servers and hyperscale data centers, the applications that rely on deep-learning neural networks to deliver actionable insights run the gamut. Inside each of these systems are robust AI SoCs that bring them to life—SoCs that rely on powerful embedded processor IP to run the compute-intensive algorithms.

By Priyank Shukla and John Swanson, Staff Product Marketing Managers, Synopsys Solutions Group, and Anika Malhotra, Senior Product Marketing Manager, Synopsys Verification Group

It’s all about bandwidth these days – fueling hyperscale data centers that support high-performance and cloud computing applications. It’s what enables you to stream a movie on your smart TV while your roommate plays an online game with friends located in different parts of the country. It’s what makes big data analytics run swiftly and allows artificial intelligence (AI) algorithms to perform their magic and provide valuable insights for everyday gadgets and beyond.
As the data connectivity backbone of the internet, the Ethernet protocol is answering the call for increased bandwidth by supporting speeds of 200G, 400G, and, now, 800G. Before long, 1.6T will not be out of the question. Going hand in hand with higher bandwidth is the need for efficient data connectivity over longer distances.

By Kenneth Larsen, Product Marketing Director, Synopsys Digital Design Group

The adoption of 3DIC architectures, while not new, is enjoying a surge in popularity as product developers look to their inherent advantages in performance, cost, and the ability to combine heterogeneous technologies and nodes into a single package. As designers struggle to scale past the complexity and density limitations of traditional flat IC architectures, 3D integration offers an opportunity to continue delivering functional diversity and performance improvements while meeting form-factor constraints and cost targets.

By Neel Desai, Sr. Product Marketing Manager, Digital Design Group, and Michael Posner, Sr. Marketing Director, IP

Chiplets are fast becoming the answer to cost-effectively delivering the high transistor counts at smaller geometries demanded by burgeoning applications like artificial intelligence (AI), cloud and edge computing, high-performance computing (HPC), and 5G infrastructure. By combining multiple individual silicon dies into one package, chiplets provide another way to extend Moore’s Law while enabling product modularity and optimization of process node selection based on function. However, meeting power, performance, and area (PPA) targets for chiplets, as well as for larger, faster, and more complex SoCs, continues to be a race as designers strive to achieve increasingly stringent time-to-market goals.

By Manuel Mota, Sr. Staff Product Marketing Manager, Die-to-Die IP

High-performance computing (HPC) is a hot topic these days, and for good reason. Consider the can containing your favorite soda—countless hours of simulation and engineering work on HPC systems have gone into designing streamlined cans that minimize aluminum waste. Indeed, the benefits of HPC are far-reaching, from mining cryptocurrencies to drug testing, genome sequencing, creating lighter planes, researching space, running artificial intelligence (AI) workloads, and modeling climate change.

By Stelios Diamantidis, Sr. Director, Synopsys AI Solutions, Office of the COO

Artificial intelligence (AI) is touching so many aspects of our everyday lives, from consumer devices to broader applications like drug discovery, climate change modeling, and self-driving cars. One of the advantages that AI brings is its ability to derive actionable insights quickly from massive amounts of data. And that advantage is also providing a productivity boost for chip design.

By Raja Tabet, Sr. VP of Engineering, and Anand Thiruvengadam, Product Marketing Director, Custom Design and Physical Verification Group

In our data-driven world, applications like high-performance computing (HPC) and artificial intelligence (AI) are taking center stage, delivering intelligence and insights that are transforming our lives. However, the growing complexity of HPC and AI designs is driving the need for much more complex semiconductor devices. Increasingly, multiple components and technologies are coming together in hyper-convergent designs to meet the bandwidth, performance, and power demands of these compute-intensive applications. To achieve power, performance, and area (PPA) targets, such complex chips need to be analyzed as a single system—an approach that’s difficult to support with traditionally disparate tools. In this post, originally published in the “From Silicon to Software” blog, we’ll examine the trend of IC hyper-convergence and explain why the traditional, disaggregated approach to circuit simulation is no longer sufficient.