The Designer's Guide Community
Million Device Simulation
rf-design (Reiner Franke)
Senior Member, Germany
Million Device Simulation
Jul 24th, 2005, 4:38am

Thank you, Ken, for the good overview in the presentation "rfic05-handout.pdf". I fully agree with all the points: the techniques developed for RF and mixed-signal design are becoming less useful for full verification and design analysis of communication devices that have already been taped out. I notice that under schedule pressure, verification becomes more and more incomplete and fragmented. Furthermore, the missing results do not help new generations of designers gain insight into critical design decisions. For example, the first verification to be skipped is usually a full-chip inter-block noise analysis with all devices running and a complete package model incorporated. So the value of the tools for verification, and the design experience they would provide, vanish. I also agree that specialized tool techniques are not the golden way, considering the usage knowledge available within the design teams.

What problems does classic SPICE encounter for the next projects with more than 1e6 devices?

1. CPU clock speed has frozen at about 4 GHz
2. Random memory access latency stays at around 50 ns
3. There is no error-free hierarchical model tree traversal for verification

That means there will be no further progress for classic SPICE-based circuit simulation!

Why does this happen? There was a long-standing rule in the IC industry that more powerful systems also generate more powerful tools to verify and construct themselves. It is interesting to compare, for example, the real simulation run time with the simulated physical time; I call that ratio SimTime/RealTime. One could argue that if the CPU speed freezes, SimTime also freezes. But the issue is that the systems get bigger, so the ratio increases. The practical situation is that either the simulation time or the wafer production time starts to dominate the product development cycle. The introduction of multicore CPUs is bad news for circuit simulation, because it is the first sign of the "semantic gap" between the circuit simulation implementation and the actual physical operation of the silicon. This semantic gap becomes more important as the size of future systems grows.
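To make the ratio concrete, here is a rough back-of-envelope sketch in Python. All constants (the cost exponent and the throughput factor) are assumptions chosen purely for illustration, not measurements of any particular simulator:

```python
# Back-of-envelope sketch of the SimTime/RealTime ratio (all numbers assumed).
# Assumption: per simulated second, a classic single-CPU SPICE-like solver needs
# roughly k * N**alpha seconds of wall-clock time for device evaluation and
# sparse-matrix work, with fixed single-CPU throughput.

def sim_to_real_ratio(n_devices, alpha=1.2, k=2e-4):
    """Estimated wall-clock seconds per simulated second (illustrative only)."""
    return k * n_devices**alpha

for n in (1e4, 1e5, 1e6, 1e7):
    print(f"{int(n):>9} devices -> SimTime/RealTime ~ {sim_to_real_ratio(n):,.0f}")
```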

What is the "semantic gap"?

It is the 35-year-old common practice of solving differential equation systems on single-CPU machines. Remember that the first applications of SPICE were TTL gates. That basic procedure has remained unchanged to this day, and it shows up as an increasing SimTime/RealTime ratio. It also breaks the very old rule that each generation of silicon helps accelerate its own construction and verification cycle.

Is there a reason for this ratio to exist?

No! If you use the silicon itself as the simulator, you get a ratio of one: that is a very direct mapping of the differential equation system onto silicon. Mapping millions of differential equations onto a single CPU and advancing all equations at the same update rate is what I call the semantic gap, and it is the reason the ratio exists. Of course, the flexibility of having a single system execute the differential equation simulation for many different designs is a requirement for design: it should be programmable, so that a system can be verified by simulation instead of being physically built. But there is no built-in reason for the ratio to increase for bigger systems!

So I see the evolution of circuit simulation most likely coming from breaking this "semantic gap". The major task is to develop methods for executing multirate, sparse differential equation systems on parallel processors. These parallel processors could be special-purpose and adapted to the problem, because there is also a "semantic gap" in the memory hierarchy of current processors, which leads to reduced performance and higher energy usage. These tasks are not likely to be undertaken by the EDA industry. I would argue that nuclear weapon verification, chemical reaction modelling, or molecular dynamics simulation contribute more to silicon simulation, and their results could be used as a byproduct.
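As a minimal sketch of the multirate idea (not any production algorithm; the time constants, step sizes, and partitioning are assumptions), consider two coupled first-order equations in which the fast node takes many small steps while the slow node takes one large step per synchronization interval. In a real simulator, each partition could run on its own processor and exchange values only at the coarse-step boundaries:

```python
# Minimal multirate-integration sketch (illustrative only; not a production algorithm).
# Two coupled first-order ODEs: a fast node x (ns time constant) and a slow node y
# (us time constant). The slow node advances with a coarse step H; the fast node takes
# many fine steps h within each coarse step while y is held fixed over that interval.

TAU_X = 1e-9          # fast time constant [s] (assumed)
TAU_Y = 1e-6          # slow time constant [s] (assumed)

def dx(x, y):         # fast equation: x relaxes toward y
    return (y - x) / TAU_X

def dy(y):            # slow equation: y relaxes toward a 1 V source
    return (1.0 - y) / TAU_Y

x = y = t = 0.0
H = 1e-8              # coarse step for the slow partition [s]
m = 100               # fine steps per coarse step for the fast partition
h = H / m

for _ in range(500):                  # simulate 5 us
    y_frozen = y
    for _ in range(m):                # fast partition: many small forward-Euler steps
        x += h * dx(x, y_frozen)
    y += H * dy(y)                    # slow partition: one large forward-Euler step
    t += H

print(f"t = {t * 1e6:.1f} us, x = {x:.4f} V, y = {y:.4f} V")
```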

What is your thinking?

By the way, I think transient noise is a good method for integrating noise-focused analysis techniques into the new generation of circuit simulation methods.
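As a minimal sketch of what I mean by transient noise (assumed element values and an Euler-Maruyama-style noise injection, not any particular simulator's noise model), the resistor's thermal noise can be sampled as a Gaussian current each time step and added to the nodal equation of a simple RC circuit:

```python
# Minimal transient-noise sketch (assumed element values; not a production noise model).
# The resistor's thermal-noise current has spectral density 4kT/R; per time step it is
# sampled as a Gaussian with rms sqrt(4kT/R / dt) (Euler-Maruyama style) and added to
# the nodal equation of an RC circuit driven by a 1 V source, integrated with forward Euler.

import math
import random

K_B = 1.380649e-23    # Boltzmann constant [J/K]
T   = 300.0           # temperature [K]
R   = 10e3            # resistor [ohm] (assumed)
C   = 1e-12           # capacitor [F] (assumed)
dt  = 1e-12           # time step [s]
v   = 0.0             # capacitor node voltage, 1 V source behind R

sigma_i = math.sqrt(4 * K_B * T / R / dt)   # rms of the sampled noise current per step

for _ in range(50000):                      # 50 ns of transient (5 RC time constants)
    i_n = random.gauss(0.0, sigma_i)        # thermal-noise current sample
    v += ((1.0 - v) / R + i_n) * dt / C     # KCL at the node: charging current + noise into C

print(f"final node voltage ~ {v:.4f} V (near 1 V, plus kT/C-scale noise)")
```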

Ken Kundert
Global Moderator, Silicon Valley
Re: Million Device Simulation
Reply #1 - Jul 27th, 2005, 11:33am

The main premise of the paper RF Simulation Challenges (which can be found at www.designers-guide.org/Perspective/) is that the simulation challenge is getting harder in many different dimensions at once:
  • circuits are becoming larger
  • circuits are becoming more "algorithmic", meaning that they are tending to implement algorithms with complex behavior (such as delta-sigma converters, fractional-N synthesis, etc.), with the result that many more cycles need to be simulated
  • circuits exhibit many more modes, all of which must be exercised by simulation
  • circuits must be verified over increasing numbers of corner cases, or perhaps using Monte Carlo
  • circuits are becoming increasingly diverse, meaning that special purpose algorithms that exploit particular characteristics of circuits to provide faster simulation, such as is the case with RF simulators, become increasingly ineffective
  • with designs becoming more complex and design teams becoming larger, it is increasingly important to move the verification task earlier in the design process to catch and correct errors sooner, thus verification must occur before all circuits are designed and working at transistor level
In parallel to these trends are the ones you mention: that the performance gains of individual processors are leveling off, that computer makers will increasingly move to multiprocessor systems, and that circuit simulators have in the past been relatively unsuccessful at taking full advantage of multiprocessors.

My belief is that simulation companies will begin to recognize the trend towards multiprocessors and slowly start to offer products that take advantage of them. The low-hanging fruit will be to run multiple independent simulations simultaneously. You can do that today when running corners and Monte Carlo, though current products are rather clunky. I hope that they also recognize the issue with multiple modes and multiple measurements and start allowing those to be easily run in parallel.
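As a rough sketch of that low-hanging fruit (the corner names and the run_corner body are placeholders; a real flow would launch the actual simulator and parse its results), independent corner or Monte Carlo runs can simply be farmed out across processes:

```python
# Sketch of running independent corner simulations in parallel (the run_corner body
# is a placeholder; a real flow would invoke the actual simulator for each corner).
from multiprocessing import Pool

CORNERS = ["tt_25C", "ff_-40C", "ss_125C", "fs_25C", "sf_25C"]   # hypothetical corner names

def run_corner(corner):
    # Placeholder for "netlist + corner -> measurements"; e.g. a subprocess call to
    # the simulator and a parse of its measurement file would go here.
    return corner, {"gain_dB": 42.0, "phase_margin_deg": 60.0}    # dummy result

if __name__ == "__main__":
    with Pool(processes=4) as pool:               # one worker per available CPU
        for corner, meas in pool.map(run_corner, CORNERS):
            print(corner, meas)
```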

Simulators that actually distribute a single simulation to multiple processors will eventually emerge. Expect them to emerge in the specialty simulator areas first, where parallelism is easier to exploit. So I would expect parallel RF and timing simulators to become available before parallel SPICE simulators.

I also expect RF simulators to try to adapt to the increasing diversity of circuits, with limited success. Timing simulators will also try to take on more analog, mixed-signal, and RF circuits. They will have more success, but will never replace the SPICE and RF simulators, if for no other reason than that AC and noise analyses remain important.

None of this will allow the effective speed of simulators to keep up with the increasing need for verification. So I expect design groups will turn more and more to top-down verification, a methodology that allows simulation speed to increase dramatically through the use of high-level modeling (see Principles of Top-Down Mixed-Signal Design at http://www.designers-guide.org/Design/). This does require that designers be more deliberate in planning their verification, but it has the important side benefit that it produces an accurate high-level model of the mixed-signal portion of the design, which can be shared with the digital designers who must use the mixed-signal design.
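To illustrate where the high-level-modeling speedup comes from, here is a minimal behavioral-model sketch (the modulator structure and parameters are assumed, purely for illustration, and this is not code from the paper): a first-order delta-sigma modulator, one of the "algorithmic" blocks mentioned above, reduces to a single difference equation per clock cycle, which is orders of magnitude cheaper than solving the transistor-level equations:

```python
# Minimal behavioral (high-level) model sketch: an ideal first-order delta-sigma
# modulator. One difference equation per clock cycle replaces the transistor-level
# solve, which is where the top-down speedup comes from. Parameters are assumed.
import math

def dsm_first_order(samples):
    """Ideal first-order delta-sigma modulator: integrate the error, quantize to +/-1."""
    integ, bits = 0.0, []
    for u in samples:
        v = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer (fed back to the input)
        integ += u - v                       # integrator accumulates the error
        bits.append(v)
    return bits

fs, f_in, n = 1.0e6, 1.0e3, 4096             # sample rate, input tone, number of cycles
u = [0.5 * math.sin(2 * math.pi * f_in * k / fs) for k in range(n)]
bits = dsm_first_order(u)
print("mean of bitstream:", sum(bits) / n)   # tracks the (near-zero) input average
```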

-Ken