The compilation flow builds a lot of dependency information into the compiled libraries, and if you try to layer your own dependency management on top of that, you will break the flow. Parsing the files is generally very fast, so although re-parsing everything has some cost, it means you never hit broken dependencies or incorrect partial recompilations, which are easy to cause if you manage dependencies yourself. Things like compilation-unit scopes, macros, etc. are very easy to get wrong if you're not careful!

If you're hoping to improve your elaboration speed, there are a couple of things to try, depending on where the CPU time is spent. Elaboration happens in two stages. First, the design topology is computed, parameter values are derived, and so on. This has to happen every time you change any code, because the tool cannot predict the impact and dependencies caused by a code change, e.g. if you happen to have hierarchical references into or out of something that changed.

The second stage of elaboration generates optimised simulation code for the design. This part can be slow, depending on your design, but it can easily be sped up by adding the "-mccodegen" switch, which runs code generation in parallel.

The design topology analysis can't be parallelised, but there is a mechanism that lets you partition the design into primary and secondary snapshots, where the primary is the part that doesn't change often (e.g. the DUT) and the secondary is the part that changes a lot (e.g. the TB). In this mode, you can avoid re-elaborating the primary part, saving a lot of time. Please check the tool documentation for "MSIE" or "Multi-Snapshot Incremental Elaboration" to see the use cases and corresponding switches.
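As a rough sketch of the parallel code-generation part: assuming an Xcelium-style `xrun` flow (the command name and file-list arguments here are assumptions, not from the original; only "-mccodegen" comes from the tool itself), the invocation might look something like this:

```shell
# Elaborate with multi-core code generation enabled via -mccodegen.
# "xrun" and the -f file lists are placeholders -- substitute your own
# compile/elaborate command line; only the -mccodegen switch is the point here.
xrun -mccodegen -f design.f -f tb.f
```

How much this helps depends on the design: the topology-analysis stage is still serial, so the speedup applies only to the code-generation stage.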