I'm a little sceptical, but mainly about the levels they're talking about, not about the principle itself. It generally comes down to whether the gains outweigh the costs, though.
Ultrasim can already do some fairly sensible partitioning, even on analog blocks which have coupling at the partition boundaries (the main benefit of partitioning in that case comes from the differing signal rates in each partition). However, the boundaries still need to be synchronised effectively. For loosely coupled partitions (i.e. where you have a signal flow between the partitions), you can treat the communication between the partitions as an event.
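To illustrate what I mean by treating boundary communication as events, here's a toy sketch (my own construction, not Ultrasim's actual scheme): a fast partition posts timestamped value changes at the boundary, and the slow partition only re-solves when one of those events arrives, rather than at every fine-grained timestep.

```python
import heapq

def cosimulate(events_a, horizon):
    """Toy event-driven handoff between two loosely coupled partitions.

    events_a: (time, value) boundary changes produced by the fast
    partition A. The slow partition B is only re-evaluated when an
    event actually arrives, which is where the partitioning win
    comes from when signal rates differ.
    """
    queue = list(events_a)
    heapq.heapify(queue)          # process boundary events in time order
    b_state = 0.0
    b_evals = 0
    log = []
    while queue:
        t, v = heapq.heappop(queue)
        if t > horizon:
            break
        b_state = 0.5 * b_state + v   # stand-in for B's real solve step
        b_evals += 1
        log.append((t, round(b_state, 3)))
    return b_evals, log

# A's boundary signal changes only 3 times over the horizon, so B
# solves 3 times instead of once per fine-grained timestep.
evals, log = cosimulate([(0.1, 1.0), (0.4, 0.0), (0.7, 1.0)], 1.0)
```

The catch, of course, is that this only works cleanly when the coupling really is one-way signal flow; with bidirectional analog coupling you're back to synchronising the boundaries properly.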
The point is that there is bound to be a reasonable amount of communication needed between the blocks, so I doubt many circuits could benefit from the kind of parallelisation levels (100 machines) they're talking about without the IPC costs becoming too high.
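A back-of-the-envelope model shows why (the 0.1% figure is my own assumption, purely for illustration): if the communication cost grows with the number of partition boundaries, there's a machine count beyond which adding machines makes things slower, not faster.

```python
def speedup(n_machines, comm_fraction):
    """Ideal speedup would be n_machines, but here a per-boundary IPC
    cost (assumed to grow linearly with the number of partitions)
    eats into it. comm_fraction is the cost of one boundary as a
    fraction of total single-machine runtime."""
    work_per_machine = 1.0 / n_machines
    comm_cost = comm_fraction * n_machines   # IPC grows with partitions
    return 1.0 / (work_per_machine + comm_cost)

# With just 0.1% of runtime spent per boundary, 100 machines ends up
# no better than 10, and well behind ~30:
for n in (10, 30, 100):
    print(n, round(speedup(n, 0.001), 1))
```

Under this (admittedly crude) model the optimum sits around 30 machines, and 100 machines gives the same speedup as 10 — which is why I'm doubtful about their headline numbers.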
This comment also looked a bit strange:
Quote:The initial products will support leading Spice and fast Spice simulators, Thakar said. They will support distributed processing only, not multithreading or symmetric multiprocessing on multiple-CPU machines. That results in too much overhead, he said.
He's saying that it's better to have the interprocess communication going over a slow network than to do it in shared memory? That's a very peculiar position to take (and the opposite of usual experience).
Also, the fact that this is done simply by using existing simulators is interesting, but I'll be amazed if they've found a way to truly decouple blocks at the netlist level somehow. I hadn't noticed that claim until I read some notes on their website more closely just now.
I'm prepared to be amazed though ;)
Andrew.