The Designer's Guide Community Forum
https://designers-guide.org/forum/YaBB.pl
Other CAD Tools >> Entry Tools >> Virtuoso Slow Down after Large Memory exhaustion
https://designers-guide.org/forum/YaBB.pl?num=1513431975

Message started by cheap_salary on Dec 16th, 2017, 5:46am

Title: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Dec 16th, 2017, 5:46am

I run very complex and very heavy post-processing with ViVA in Cadence Virtuoso.
However, I always encounter a "Swap Activity Warning".

Once I encounter the "Swap Activity Warning", Virtuoso slows down drastically and never recovers its normal operation speed, even if I close ViVA.

I have to restart Virtuoso to recover normal operation speed.

I think this is because Virtuoso cannot release the allocated memory.

Is there any method to recover normal operation speed without restarting Virtuoso?

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Dec 21st, 2017, 7:34am

There has been some work on this recently - but in general releasing memory back to the operating system is challenging in most applications.

Please contact Cadence Customer Support for this as then a Cadence AE can see if there's anything that can be done to help (not sure which subversion of the IC tools you're using, for example).

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Dec 24th, 2017, 7:33am

My Subversion is:

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Dec 26th, 2017, 2:39am

OK - you're pretty recent (there's only one later subversion currently). I think you should talk to customer support, as I mentioned earlier - that way Cadence can get a better idea of what you're doing to see what is going on and to see if it can be improved.

Thanks,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Feb 6th, 2018, 4:32am

When I do large post-processing with Cadence SKILL in IC5, I don't encounter any memory exhaustion, and the post-processing completes.

However, when I use Cadence SKILL in IC6, a large amount of memory is consumed and never released, and the post-processing cannot complete.

Here I don't use graph drawing, i.e. I run "ocean -nograph".

My conclusion is that memory usage blows up when I use Cadence SKILL in IC6.

I think this must be a critical defect of Cadence SKILL in IC6.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 8th, 2018, 12:59pm

That's a rather simplistic conclusion. It's not very likely to be SKILL itself that is the problem, but the objects being created not being garbage collected frequently enough.

Various types of objects are organised so that garbage collection is triggered when you run out of those objects; the system tries to do some housekeeping before allocating any more. It could be that your code is hanging onto objects which are no longer needed, thus preventing them from being recycled; or, if this is waveform data, it could be that it is being cached to improve performance. You still read waveform data if you're doing simulation post-processing and not plotting anything; the caching occurs to improve performance if you access the same waveform more than once - we don't want to keep re-reading the same data from disk.
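To illustrate the point about lingering references (my own sketch, not from the original post; the variable names are hypothetical): a waveform object can only be recycled once nothing refers to it any more.

```skill
; hypothetical sketch: a waveform held in a variable cannot be recycled
wave = v("vout")       ; wave now references the waveform object
peak = ymax(wave)      ; use the waveform
wave = nil             ; drop the reference so the collector can reclaim it
gc()                   ; optionally force a garbage collection right away
```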

This is precisely why we (Cadence) need to see what you're doing - it's likely to be dependent on the nature of what you're doing rather than being something as simple as this being a "critical defect of Cadence Skill in IC6". Otherwise it would affect everyone, all the time. Which it doesn't.

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Feb 9th, 2018, 3:07am


Andrew Beckett wrote on Feb 8th, 2018, 12:59pm:
That's a rather simplistic conclusion.
I don't think so, since I use the same SKILL code and the same PSF data for both IC5 and IC6.


Andrew Beckett wrote on Feb 8th, 2018, 12:59pm:
It could be that your code is hanging onto objects which are no longer needed,
thus preventing them from being recycled;
or, if this is waveform data, it could be that it is being cached to improve performance.
I use only four work variables: "Vrec_out_Raw", "Ain0_dBV", "yyy", and "y".
So these workspaces should be reused.


Andrew Beckett wrote on Feb 8th, 2018, 12:59pm:
You still read waveform data if you're doing simulation post-processing
I run the simulation in a completely separate session, where I generate only the PSF data.

My SKILL script does pure post-processing, without simulation.


Andrew Beckett wrote on Feb 8th, 2018, 12:59pm:
it's likely to be dependent on the nature of what you're doing
rather than being something as simple as this being a "critical defect of Cadence Skill in IC6".
I don't think so.


Andrew Beckett wrote on Feb 8th, 2018, 12:59pm:
Otherwise it would affect everyone, all the time. Which it doesn't.
Simply, no one does heavy post-processing with SKILL code.


Code:
; fp is assumed to be opened before this point; the original post omits
; that line, so the file name below is only illustrative.
fp = outfile("./post_processing.csv" "w")

Vrec_out_x = 1.5

for(i, 1, 150
 sprintf(psf_dir, "results_dir_%d/pss", i)
 openResults(psf_dir)

 selectResult("pss_fd")
 Vrec_out_Raw = real( harmonic( v("vout"), 0 ) )
 Ain0_dBV = cross(Vrec_out_Raw, Vrec_out_x, 1, 'rising)
 fprintf(fp, "%g", Ain0_dBV)

 selectResult("pss_td")
 for(k, 1, 200
   sprintf(yyy, "aho_%d:in", k)
   y = value( i(yyy), "Ain_dBV" Ain0_dBV )
   fprintf( fp, ",%g", average( abs(y) )/1m )

   sprintf(yyy, "boke_%d.Vb", k)
   y = value( v(yyy), "Ain_dBV" Ain0_dBV )
   fprintf( fp, ",%g", ymax(y) )
 ); for k

 when( dbGetDatabaseType() == "OpenAccess", closeResults(psf_dir) )
 fprintf(fp, "\n")
 drain(fp)
); for i

close(fp)

exit()

I can complete this code with SKILL in IC5, but I cannot with SKILL in IC6.

If the loop is small, e.g. "for(k, 1, 10" instead of "for(k, 1, 200", I can complete it even with SKILL in IC6.

BTW, "closeResults(psf_dir)" is available in OCEAN in IC6, but it does not help at all.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 18th, 2018, 1:34am


Quote:
Simply, no one does heavy post-processing with SKILL code.


Of course, nobody has written any SKILL code in the last 12 years (since IC6.1.0 was released) which does any heavy processing. What a ridiculous statement.

I'm afraid you're incorrect about this being a critical defect in SKILL. What it's likely to be (as I said before) is something that is not freeing a cached waveform object. This is not necessarily related to the variables you're using in your script. It's likely to be related to the fact that you are reading 150+150*200*2 signals (60150) and there's also another temporary waveform created in the inner loop (150*200) for the abs function.

Yes, of course these should be garbage collected and the memory re-used, but there is a tradeoff to be had with respect to signals read in from disk - you don't want to have to keep reading the same data off disk over and over again, which happens surprisingly often. This is why you should report this to Cadence with the data and your script so that R&D can take a look and see if any of the recent improvements help - or if something else needs to be done to prevent memory being exhausted. Quite likely there is.

However, as I said before - this is not a critical defect in SKILL itself. You're conflating correlation and causality. It's due to a change in how waveform objects (in ViVA) are handled in Virtuoso between IC5 and IC6 - there are lots of other benefits of this change (performance in particular) but with all implementations there are tradeoffs, and clearly in this case there is the opportunity (if you report it) to improve that tradeoff in the specific situation you're facing.

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Feb 18th, 2018, 1:44am


Andrew Beckett wrote on Feb 18th, 2018, 1:34am:
What it's likely to be (as I said before) is something that is not freeing a cached waveform object.
Right.


Andrew Beckett wrote on Feb 18th, 2018, 1:34am:
It's likely to be related to the fact that you are reading 150+150*200*2 signals (60150)
and there's also another temporary waveform created in the inner loop (150*200) for the abs function.
Right.

However, "closeResults(psf_dir)" should release memory.

"closeResults(psf_dir)" was introduced in IC6 and does not exist in IC5, but it has no effect at all.

My conclusion is that SKILL in IC6 consumes far more memory than IC5 and never releases it.
http://www.designers-guide.org/Forum/YaBB.pl?num=1513431975/0#0
I have to restart Virtuoso to recover normal operation speed.

So we have to keep each SKILL session very small in IC6.

However, the amount of processing one session can then handle is too small.

Due to this limitation, I assume that ADE-XL, ADE-GXL and ADE-Explorer use multiple SKILL sessions.
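The "small sessions" idea can be sketched as follows (my own illustration, not from the thread): run each chunk of the outer loop in a fresh "ocean -nograph" process, selecting the chunk with an environment variable, so that the operating system reclaims all memory when each process exits. The variable name CHUNK and the chunk size of 10 are hypothetical:

```skill
; chunk.ocn -- run as: CHUNK=3 ocean -nograph -restore chunk.ocn
; Each process handles 10 outer-loop iterations and then exits,
; so all memory is returned to the OS between chunks.
chunk = atoi(getShellEnvVar("CHUNK"))
for(i, 10*(chunk-1)+1, 10*chunk
 sprintf(psf_dir, "results_dir_%d/pss", i)
 openResults(psf_dir)
 ; ... per-iteration post-processing as in the script above ...
 closeResults(psf_dir)
); for i
exit()
```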

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 18th, 2018, 1:54pm

Again, you're drawing conclusions based on incorrect assumptions.

  • The purpose of closeSession is to free up the ADE session (and release the license); the goal is not necessarily to free memory (although this might be a side benefit of a recent improvement in the latest versions of IC617 - not sure if it helps your case though, hence the request to go to customer support). Might be worth checking with IC617 ISR17 first, although I can't guarantee that will help solve your problem.
  • ADE XL, Explorer and Assembler do not make any assumptions about using "multiple sessions of skill". They actually use several ICRPs (the background process used in ADE XL, Assembler and Explorer) in order to process in parallel (parameterisation, netlisting, simulation control and post-processing). It's not done for memory reasons.
  • It certainly does re-use memory for waveform objects in many cases given the kind of processing that many ADE users do, but clearly that's not happening properly in your specific case.


Have you actually contacted Cadence customer support over this issue?

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Feb 18th, 2018, 4:52pm


Andrew Beckett wrote on Feb 18th, 2018, 1:54pm:
The purpose of closeSession is to free up the ADE session
Wrong.

I use closeResults(), which was introduced in IC6.


Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 19th, 2018, 3:11am


Quote:
Wrong.

I use closeResults() which are introduced in IC6.

My mistake. I misread closeResults as ocnCloseSession (which shows the dangers of responding while on vacation: I wasn't checking carefully enough).

I've done a bit of experimentation (now that I'm back at work) and I don't think closeResults() or ocnResetResults() are properly returning the memory to the pool (note, the memory may not be freed back to the OS, but at least Virtuoso should be able to re-use it for other waveform objects).

It's not clear from reading through the original request for closeResults() whether it was implemented with freeing memory in mind, although that seems to me a very good idea. It was mostly about closing any objects (and in particular file handles) so that psf directories could be deleted on disk.

So as I suggested, you should contact Cadence customer support about this (closeResults() should free memory used by any waveform objects which are no longer referenced via variables). I can raise it internally, but it's likely to get prioritised if coming from a real customer.

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 19th, 2018, 5:00am

A workaround seems to be to force a garbage collection after the call to closeResults() by calling gc() .
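Applied to the script earlier in the thread, this workaround is a one-line addition at the end of the outer loop (a sketch; only the line marked "new" is added):

```skill
 ; ... end of the outer-loop body from the script above ...
 when( dbGetDatabaseType() == "OpenAccess", closeResults(psf_dir) )
 gc()                 ; new: force a garbage collection after closing results
 fprintf(fp, "\n")
 drain(fp)
); for i
```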

I had a discussion with R&D today (I filed an internal CCR 1881844 to suggest improving closeResults()). So it would be worth mentioning that if you log a case with customer support.

I'd be interested in hearing if calling gc() in your code improves matters.

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 19th, 2018, 10:33pm

It would also help to know how large the time domain data is in terms of the number of points. I suspect there must be a lot of points as otherwise 60,000 waveforms shouldn't be enough for excessive memory usage (I also would expect that number of waveforms to trigger garbage collection, probably). I'd like to try to reproduce some data with the characteristics of what you're seeing (logging a case with Cadence Customer Support would be best, but knowing this is better than nothing).

So after the line:


Code:
 selectResult("pss_td")


can you add:


Code:
 printf("Number of points: %d\n" drVectorLength(drGetWaveformXVec(v("boke_1.Vb"))))


That said, I have just realised (from looking at your script) that there's presumably a sweep around the pss in your setup because of the value() statements. So instead of above, I might need:


Code:
 printf("Sweep values: %L\n" sweepValues())
 sprintf(yyy, "boke_1.Vb")
 y = value( v(yyy), "Ain_dBV" Ain0_dBV )
 printf("Number of points: %d\n" drVectorLength(drGetWaveformXVec(y)))


Thanks,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Feb 20th, 2018, 2:41am


Andrew Beckett wrote on Feb 19th, 2018, 10:33pm:
It would also help to know how large the time domain data is in terms of the number of points.
The same problem occurs even if I only use "pss_fd".
Actually, this thread started with post-processing of "pss_fd" only.


Andrew Beckett wrote on Feb 19th, 2018, 5:00am:
I'd be interested in hearing if calling gc() in your code improves matters.
I have to get post-processing results, so I am using IC5 OCEAN for now, since I cannot get results at all with IC6 OCEAN.

I will confirm "gc()" later.



Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 20th, 2018, 3:28am

Thanks.

When you have a moment, I'd really appreciate knowing some more details on the characteristics of your data size (as I requested) as well as whether gc() helps, as this will help understand why automatic garbage collection is not working in your case. Of course, the real life design work comes first!

Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Feb 23rd, 2018, 6:27am

A bit further information. I decided to try to reproduce a similar scenario to yours (with a bit of guess work on the sizes of the data that would be enough to show the problem). I picked about 1000 points in the time-domain results, and 20 points in the sweeps around the pss.

With this, I see the problem you're facing. Also, in this case gc() does help a bit, but not hugely (it reduces the memory footprint to 75%-85% of the memory footprint without the gc() calls). I do see evidence of garbage collection being performed during the script run - but the memory continued to grow (up to about 12Gbytes in my example - the amount would depend on the number of sweep points and time points). I was using a script that was a modified version of yours.

Anyway, I've passed this onto R&D - we need to identify what in the ViVA result management is not re-using memory, and this example is probably good enough to do it. Simply calling gc() as part of closeResults() won't solve the problem - it really helped in my other example which had a relatively small number of very large signals - but clearly doesn't help so much in the situation where you have a lot of medium-sized signals.

It would still make sense to log a case with Cadence customer support and have them file a duplicate of CCR 1881844 so that it has the weight of a customer request behind it.

Kind Regards,

Andrew.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by cheap_salary on Apr 9th, 2018, 6:53am


Andrew Beckett wrote on Feb 19th, 2018, 5:00am:
I'd be interested in hearing if calling gc() in your code improves matters.
gc() is not helpful at all.


Code:
# Memory report: using         301 MB, process size 1,405 MB at UTC 2018.04.09 00:50:43.056
# Memory report: using         830 MB, process size 1,933 MB at UTC 2018.04.09 00:50:50.018
# Memory report: using       1,368 MB, process size 2,472 MB at UTC 2018.04.09 00:50:50.465
# Memory report: Maximum memory size now 47,998 MB at UTC 2018.04.09 00:50:51.508

\w *WARNING* Low Memory: Less than 1903 megabytes of system memory remain available to this program (3.9456% of a maximum of 47.1 gigabytes).
\w *WARNING* Low Memory: No further low memory warnings will be output.

# Available memory:          3,971 MB at UTC 2018.04.09 06:55:25.431

# Available memory:          2,785 MB at UTC 2018.04.09 07:08:12.687
# Memory report: using      47,612 MB, process size 48,715 MB at UTC 2018.04.09 07:08:12.863
# test ***warning***: memory usage seems to be dangerously high.
# Memory report: using      47,612 MB, process size 48,716 MB at UTC 2018.04.09 07:09:57.267
# test ***warning***: memory usage seems to be dangerously high.

\w *WARNING* Swap Activity: Excessive swap activity has been detected, which can cause decreased performance.  The application will continue to run, but if excessive swapping continues
\w *WARNING* Swap Activity: due to the memory requirements of all the programs currently running on this system, the performance of this program may continue to be negatively affected.
# Memory report: using      48,070 MB, process size 49,174 MB at UTC 2018.04.09 07:25:37.283

# test ***warning***: memory usage seems to be dangerously high.
# Available memory:          1,909 MB at UTC 2018.04.09 07:34:01.053
# Memory report: using      48,315 MB, process size 49,418 MB at UTC 2018.04.09 07:34:01.053
# test ***warning***: memory usage seems to be dangerously high.
# Memory report: using      48,322 MB, process size 49,425 MB at UTC 2018.04.09 07:34:02.100
# test ***warning***: memory usage seems to be dangerously high.
# Memory report: using      48,331 MB, process size 49,434 MB at UTC 2018.04.09 07:34:06.139
# test ***warning***: memory usage seems to be dangerously high.
Ocean died at this point.

I have to use SKILL in IC5.

Title: Re: Virtuoso Slow Down after Large Memory exhaustion
Post by Andrew Beckett on Apr 17th, 2018, 12:46am

Thanks for confirming - that doesn't surprise me given that my attempt to replicate what I guessed was your structure from what you had described led to a similar conclusion; I found that repeatedly calling gc() during the loops slowed the memory growth a bit, but not massively.

It still would be best if you report this to Cadence customer support, referencing the CCR I mentioned above. Having to use a long-unsupported version of the tools is not a sustainable solution. Then we can work out how best to optimise the ViVA memory usage so that it doesn't exhaust the memory (as I said before, this is a tradeoff between avoiding unnecessary re-reading of results over and over again and consuming memory, but that trade-off needs to be adjusted based on what you and I are seeing here).

Kind Regards,

Andrew.

The Designer's Guide Community Forum » Powered by YaBB 2.2.2!