Channel: Jason Andrews Blog

Sunday Brunch Video for 31st May 2020

www.youtube.com/watch
Made in "Paris" (camera Carey Guo)
Monday: Memorial Day
Tuesday: Simon Butler's Fireside Chat with Jim Hogan
Wednesday: Automotive Ethernet
Thursday: 5G: Connecting All the Things
Friday: First US Manned Launch Since 2011...Not Yet
Featured Post: Virtuoso Meets Maxwell: How to Route a Package in Virtuoso

Schematic plot of little large circuit

Hello, I am trying to print my circuit schematic view. The circuit is not that big; it is something like a fully differential amplifier with a biasing circuit. I tried to print the schematic using three methods, but none of them gives me a result clear enough that I can use the picture in formal writing or a presentation:
Method 1: using export image
Method 2: using export HTML from ADE XL
Method 3: using the "print" command
Does Cadence suggest using third-party software for plotting? Thank you

RE: Schematic plot of little large circuit

Maybe I can show you part of the circuit. It is still not clear; I can't read the transistors' information from the image.

RE: How to add solder mask to teardrops ?

The IPC association recommends adding teardrops to pads with drills to minimize bare-board quality failures if the actual drill hole ends up too close to the edge of the pad (breakout). Teardrops in copper are also needed to add flex-circuit reliability. If there isn't a drill hole to accommodate, I don't know of a universal benefit for teardrops on SMD pads, except for RF connection tapers. Soldermask does several things. One is to keep solder from escaping or flowing away from the connection. Molten solder gets drawn into holes via capillary action, or it spreads as far as it can because of gravity, with the risk of a dry solder joint. Vias adjacent to SMD pads need to be covered by mask.

RE: axlSelect() does not select 'fillet'

A fillet is a shape. Below is the top part of a 'show element' on a fillet:

LISTING: 1 element(s)
class ETCH
subclass BOTTOM
part of net name: CVL_RX_L0_P
Connected vias: 1
Shape is solid filled
Shape is fillet
Area: 0.000099 (sq in)

Improving Tests Efficiency Using Coverage Callback

When you go to the store, you walk until you get there, stop, get your groceries, and go back home. You do not start circling around the block for a few rounds. You do not say, "if I walk around the block really fast, I can save time". It is clear that if you avoid circling the block in the first place, you will save even more time. Why don't we adopt the same rationale in the verification process? Instead of thinking only about how to run faster, let's see whether we can run less. If, for example, a test executes 10 million transactions in 10 hours, then instead of (or in addition to) getting these transactions executed faster, let's try to understand whether we really need to run so many transactions. Perhaps the first 1 million got us to our goal.

You might say that with verification, unlike the shopping trip, there is no big "Grocery" sign telling you that you have arrived at your destination. Well, there is no such sign, but there are ways to know that you have reached your goal. What is the goal? With the Coverage Driven Verification methodology, we define the goal with coverage. Querying the coverage model during the run might reveal exactly such a "you reached your goal" sign. In a recent blog, Specman Coverage Callback, we introduced the new Coverage Callback API, which allows querying the coverage model whenever a coverage group is sampled. That is, you can have full visibility into where you stand with respect to your coverage goal during the test. In this blog, we give more details about the various ways you can use this runtime coverage information to improve test efficiency. The full code of the examples shown below is on GitHub, next to other e utilities and examples.

Analyze coverage progress throughout the test

The coverage report we view at the end of the run gives us information about the coverage group grades. It does not tell us how efficient we were in our journey: did we stop when we reached the goal, or did we continue circling around it? Using the Coverage Callback API, you can get this information during the test. This means we know the relative contribution of different segments of the test to the coverage. A past blog, analyze-your-coverage-with-python-plot, showed how such information can be used to produce plots showing the coverage progress. The plot below was created from a test in which the coverage of data items (blue line) reached almost its maximum around the first third of the test. We can say that in terms of data coverage, the test was very efficient in the first 200 cycles, and not very efficient after that. That blog used the old Coverage Scan API. Using the new Coverage Callback API, we can get the same reports, but with no performance penalty. Here is the code doing this:

struct cover_cb_send_to_plot like cover_sampling_callback {
    do_callback() is only {
        if is_currently_per_type() {
            var cur_grade : real = get_current_group_grade();
            // Pass the name and grade to the plotter
            sys.send_to_graph(get_current_cover_group().get_name(), cur_grade);
        };
    };
};

Sending the data to the plot app adds a nice touch, but it is not a must. You can also write to a text file.
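As a small illustration of that idea (this sketch is ours, not part of the utility on GitHub; the struct name cover_cb_log_to_file, the file name progress.txt, and the use of sys.time as the timestamp are assumptions), a callback that appends the group name, time, and grade to a text file could look like this:

struct cover_cb_log_to_file like cover_sampling_callback {
    do_callback() is only {
        if is_currently_per_type() {
            // Append one record per sample: group name, sample time, and grade
            var log_file : file = files.open("progress.txt", "a", "coverage progress log");
            files.write(log_file, append(get_current_cover_group().get_name(),
                                         " time: ", sys.time,
                                         " grade: ", get_current_group_grade(), "\n"));
            files.close(log_file);
        };
    };
};

Opening and closing the file on every sample keeps the sketch self-contained; a real utility would probably keep the file open for the whole run.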
Here is, for example, a table created during one test, showing the coverage progress of three coverage groups. Note that each grade is recorded when the relevant group is sampled, so the samples are not all taken at the same time.

in_data_cov
  time:   5 | 111 | 123 | 450 | 838 | 1062 | 1069 | 7924 | 53735 | 55560 | 66194 | 1018680
  grade: 14 |  29 |  37 |  38 |  38 |   38 |   38 |   38 |    41 |    41 |    43 |      44

power_cov
  time:   0 |  0 | 89 | 30038 | 132124
  grade: 37 | 37 | 37 |    37 |     38

fsm
  time:   0 | 0 | 0 | 0 | 0 | 8 | 14 | 45 | 939 | 20479 | 31278 | 67907
  grade:  5 | 5 | 5 | 5 | 5 | 5 | 14 | 14 |  16 |    16 |    40 |    40

After analyzing the reports, and based on what you learn from the behavior of past tests, you can decide how to continue. You might decide, for example, to improve the tests that seem to run "full gas in neutral" by changing the constraints.

Take runtime decisions based on coverage progress

In the previous example, we talked about analyzing the coverage progress after the test (or the full regression) ends. But you can also take decisions at runtime. One such decision is to stop the test once you estimate that it no longer adds anything to the coverage; "if it didn't reach any new area in the last two hours, most likely it will not get any better if it continues". The following code implements a cover_sampling_callback struct that compares the current coverage grade to the previous grade. If there is no change in the grade for more than X samples, it emits an event. The user of this utility can decide what to do upon this event, for example, to gracefully stop the run (a minimal sketch of this appears at the end of this post). Another decision might be to change something in the generation, hoping that it will exercise areas that were not exercised before. The following is a snippet of the utility, which can be downloaded from cover_cb_report_progress.

struct cb_notify_no_progress like cover_sampling_callback {
    do_callback() is only {
        var cr_name := get_current_cover_group().get_name();
        var group_info := items_of_interest.first(.group_name == cr_name);
        if group_info != NULL {
            // Count consecutive samples in which the grade did not change
            if group_info.last_grade == get_current_group_grade() {
                group_info.samples_with_no_change += 1;
            } else {
                group_info.last_grade = get_current_group_grade();
                group_info.samples_with_no_change = 0;
            };
            if group_info.samples_with_no_change > group_info.max_samples_wo_progress {
                message(LOW, "The cover group ", cr_name,
                        " grade was not changed in the last ",
                        group_info.samples_with_no_change, " samples");
                // Notify of no progress
                emit no_cover_progress;
            };
        };
    };
};

// User code: when there is no progress, change the weight of
// illegal transactions.
// Note that the legal_weight field is static, so it can be written
// from one place, such as from the env unit. There is no need to access
// a specific instance of a transaction.
<'
struct transaction {
    legal : bool;
    // By default, most transactions are legal
    static legal_weight : uint = 90;
    keep soft legal == select {
        value(legal_weight)       : TRUE;
        100 - value(legal_weight) : FALSE;
    };
};

extend env {
    on cb.no_cover_progress {
        // Modify the static field. From now on, all transactions
        // will have a 50% probability of being illegal
        transaction::legal_weight = 50;
    };
};
'>

We hope these examples intrigue your imagination. Stay tuned – the next blog will show some more ideas for improving test efficiency using the Coverage Callback.
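As mentioned above, one possible reaction to the no_cover_progress event is to gracefully stop the run. A minimal sketch of that (our illustration; it assumes the env instantiates the utility in a field named cb, as the user code above implies, and it uses the predefined stop_run() routine):

extend env {
    on cb.no_cover_progress {
        // Stop the test gracefully once coverage stops progressing
        message(LOW, "No coverage progress - stopping the run");
        stop_run();
    };
};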

RE: How to open old version state in the current version IC61.

This works for me - if I use Setup->Load State when my simulator is "ams", I am able to pick "spectreVerilog" as the simulator to load a state from, whether I am loading a state from a directory or from a cellView. I'm using IC617 ISR18, which you mentioned in your other post that you were using, together with INCISIVE152. That said, I did have a problem with loading the state from a directory when using ADE Explorer - the "What to Load" ended up blank for everything. It was OK loading a state cellView, and loading the state into ADE L first also worked. Which ADE are you using? ADE L, XL, Explorer, or Assembler? It didn't sound from your description as if that was what you were talking about, though. Regards, Andrew

RE: how to determine the subthreshold process parameter?

Hi, I am using Cadence Virtuoso ADE L to simulate my circuit and I want to figure out the subthreshold factor. Can you help me with how to do so?

RE: how to determine the subthreshold process parameter?

[quote userid="477428" url="~/cadence_technology_forums/f/custom-ic-design/35311/how-to-determine-the-subthreshold-process-paramter/1367643"]Can you help me out how to do so?[/quote] Not unless you both read the thread here (especially the last sentence in my reply to which you posted your question) and can provide a lot more information. What is this "subthreshold factor" you refer to? Which simulator and device models are you using? Which version of the tools are you using? Please provide a reference for any equation you might mention.

RE: how to determine the subthreshold process parameter?

And I am using the GPDK090 kit.

The Five Waves: AI, 5G, Cars, Clouds, IoT

In Cadence's recent earnings call, Lip-Bu Tan, our CEO, talked about the five waves that are hitting us simultaneously. Here's what he said:

Yes. John, it's a good question. First of all, I'm excited about this industry, because it's very unusual to have five major waves happening at the same time. You have the AI machine learning wave and you have 5G is starting to deploy and then you have the hyperscale guy, the really massively scaled infrastructure. And then we have autonomous driving and then the whole digital transformation of the industry group. And then as I mentioned earlier, clearly some of this big system company and a service provider, they are quietly building up the silicon capability. They're also reaching out to us to really expand beyond that to the system analysis space. And so I think we are excited about the opportunity in front of us.

Normally in the semiconductor industry, there is one major trend driving the business. In the very beginning, a lot came from the military. Then, when computers proliferated into businesses, it was building the components used to create computers. This was still the era before the microprocessor, when a chip would only hold a few gates. Many computers were built with TTL logic, which maxed out at something like the 4-bit ALU, the 74181. See my post Carry: Electronics for more details on that.

Once the PC took off, the semiconductor industry rode that wave until the early 2000s. Then cellphones became a big driver, especially after 2007 when the smartphone explosion began. Even though PCs are no longer on the growth path they once were, it is still a 250M unit market, and there is a lot of silicon inside one. Smartphones are an order of magnitude more, at over 1.5B units per year.

I know it is obvious, but Cadence gets most heavily involved with companies during (and even before) the design phase. It would be wrong to say we don't care about production volumes; we want our customers to be successful. But we don't (mostly) directly participate in production volumes. A foundry, on the other hand, is the other way round—they don't care about much else. If a modern foundry is anything like an ASIC company used to be, it doesn't make any money on designs that don't enter production or that have much lower production volumes than planned (and priced). What this means is that Cadence cares about a market a year or more before it is in high-volume production.

Lip-Bu called out five markets. Let's take a look at the first one.

AI and Machine Learning

AI and machine learning (and various other names) is in a mode that is hard to describe. In one sense, it is experiencing fast growth, and in another sense, it is too early. The sense in which it is experiencing growth is in design starts. There are literally hundreds, if not thousands, of fabless AI chip startups, and dozens of programs in established semiconductor companies to create AI chips or embed AI technology in other parts of the product line. The sense in which it is too soon is that most of these chips are still being designed and are not in volume production. Right now, the big winner in terms of production silicon is NVIDIA, whose GPUs are heavily used in some servers to do AI training.

The growth of AI projects is driven by a couple of things, I think. First, neural network technology started to "work" over the last decade. Although neural networks have been known about and researched since the 1950s, nobody could work out how to program them.
Then Yann LeCun, Geoff Hinton, and Yoshua Bengio worked out how to use gradient descent to tune the neural network weights. They won the Turing Award in 2019 for this work, which you can read about in my not-cleverly-titled Geoff Hinton, Yann LeCun, and Yoshua Bengio Win 2019 Turing Award. Another big consideration was that this training approach requires huge amounts of compute power, which was simply unavailable until recently. The combination of cloud data centers with GPU accelerators changed that and made training large neural networks feasible.

As AI algorithms moved out into edge devices such as smartphones and smart speakers, there was a desire to do more of the inference at the edge. The power and delay inherent in uploading all the raw data to the cloud were a problem, as were privacy issues. But inference at the edge requires orders of magnitude less power than a data center server or big GPU. In turn, that has led to a proliferation of edge inference chip projects.

It reminds me of the mid-1980s, when I was at VLSI Technology and we had a couple of dozen companies working with us to design ASICs to create PCs with special capabilities—and each one of those dozens of companies had a business plan to be 20% of the PC market. I think pretty much every one of them failed, since it turned out that people didn't want a PC with a lot of differentiation; they wanted a boring standard PC that would run MS-DOS and, later, Windows. Most differentiation came in two areas other than the electronics: portability (Compaq) and business model (Dell in particular). When integration levels reached the point that everything except the memory and the processor in a PC could be put on a chip, it turned out to be VLSI Technology ourselves who were successful in the PC chipset market (along with a couple of competitors). Indeed, at one point, Intel was OEMing our chipset in a bundle with their processor. Then, as we knew would happen eventually, Intel took over that market for itself.

I suspect that a similar market dynamic may play out in deep learning, since the cloud data center providers are also building their own in-house AI chips. For example, Google's TPU (see my post Inside Google's TPU) and Amazon/AWS's Nitro (see my post The AWS Nitro Project). So in the same way as there turned out to be a very limited market for mobile processors, there may turn out to be a limited market for AI processors: the system companies will optimize their own solutions.

Next

Later this week, the other four waves in part II, with 5G, automotive, cloud data centers, and IoT/industrial. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.

RE: how to determine the subthreshold process parameter?

Cadence (R) Virtuoso (R) Spectre (R) Circuit Simulator Version 13.1.1.117.isr8 64bit -- 19 Jun 2014

Questions about Multithreading in Ocean/Spectre

Hi, I'm used to running Ocean scripts from the Linux command line, and these days I am trying to use multithreading in Spectre/Ocean. I found the following code in an old topic and added it to my Ocean script:

option( ?categ 'turboOpts 'numThreads "4" 'mtOption "Manual" 'uniMode "APS" )

The (transient) simulation speed seems to be much faster, but the result is not what I expected compared to the original result; some signals even just disappeared. The elapsed time also seems to be wrong, because the elapsed times for netlists of different complexity are nearly the same (about 8 seconds). I also noticed the output log says "Multithreading is disabled due to the size of the design being too small", so I would expect this to be exactly the same as the simulation without the options above, but it was not. I checked the generated input.scs files; there is no difference between the multithreading and non-multithreading cases. Then I checked the runSimulation files in these two cases:

spectre input.scs +escchars +log ../result1/psf/spectre.out -format psfxl -raw ../result1/psf +aps +mt=4 +lqtimeout 900 -maxw 5 -maxn 5

spectre input.scs +escchars +log ../result1/psf/spectre.out +inter=mpsc +mpssession=spectre0_30362_26 -format psfxl -raw ../result1/psf +lqtimeout 900 -maxw 5 -maxn 5

It seems that "+aps +mt=4" affected the simulation; it worked well when I deleted these two options. I also found replies in a previous topic such as "Using APS rather than turning on the multithreading in spectre", so did I use APS in the wrong way?

Spectre version: 15.1.0.284
Ocean version: 6.1.7-64b

Thanks in advance, Freud

RE: how to determine the subthreshold process parameter?

You still didn't give a reference for those equations - just an image. In what text are they published? They don't look that familiar to me, and certainly don't match the bsim3v3 model equations used in GPDK090. Presumably the reference [13] gives some more detail on this sub-threshold slope factor? Unfortunately most of the relevant text books I have on this are stranded in the office, so I'm having to go on memory, a bit of googling, and looking in the Spectre Circuit Simulator Components and Device Models Reference manual. Andrew.

RE: Questions about Multithreading in Ocean/Spectre

Hi Freud, It would be better if you could contact customer support. To be honest, such a short simulation may not show much improvement - it's possible that some of the steps may be limited by the reading of the netlist and models, checking out licenses and so on - and the core simulation is such a small part that it makes very little difference. The OCEAN settings you've chosen have been correctly set up - the +aps +mt=4 is telling it to run in APS mode with four threads, but as the design size is too small (check the circuit inventory section of the log file - you're going to need at least 1000 devices or so to support multi-threading with four threads), the multi-threading is disabled, as otherwise it can actually get in the way of the simulation for small circuits. It is possible with APS that some simulation nodes are eliminated from the design (it combines parallel identical devices, removes zero-volt sources, removes dangling resistors and a few other optimisations), which can mean that signals are no longer present (normally if you explicitly save those signals, they get preserved though). Without seeing the case though, it's very hard to know precisely what's gone on in your circuit. Regards, Andrew.

RE: Auto Click ButtonBox using SKILL

Rather hard to know without seeing your code - I'm having to guess based on what's most likely. My assumption is that NRFInit() displays a blocking form. Because of that, the code that follows NRFInit() doesn't get called until the form is OK'd, and so it doesn't call the hiRegTimer until after the original form is OK'd. Normally you need to enqueue the commands (e.g. with hiRegTimer or hiEnqueueCmd) before you display the form - otherwise it won't get enqueued. I've no idea what these cte forms are, so I can't help with any specifics. Oh, and by the way - please don't post a question, delete it later, and then post a similar question. It's really annoying, as often I spend time working on a response to the original question, go to respond, and find it's been deleted. Far better to post the question and then post a follow-up saying that you'd figured it out (ideally with how you figured it out). Andrew.

RE: how to determine the subthreshold process parameter?

The reference is Wang, A., Calhoun, B. H., & Chandrakasan, A. P. (2006). Sub-threshold design for ultra low-power systems. New York: Springer. https://link.springer.com/content/pdf/10.1007%2F978-0-387-34501-7.pdf

RE: How to add solder mask to teardrops ?

Hi, Generate your Gerber data for the layer you need first. Then import it back in and place it on the Board Geometry layer. See if that works; I think it should. (Click Load File) to position the artwork over your board outline so you get a 1:1 match. All the best.

RE: Using pre-defined constants in ADE-L (Analog expressions)

Vivek, This is something that broke in the change from "socket" netlisters such as cdsSpice, spectreS, and hspiceS to "direct" netlisters such as spectre, hspiceD, and so on. These AEL constants were no longer dealt with during netlisting, and were just passed through to the simulator. In spectre, the constants are named differently (see "spectre -h constants") - M_K, M_SQRT2, etc. I found some discussion of this in an old CCR 231925 in which the documentation was supposed to have been updated to reflect the fact that this was happening, but that seems to have vanished from the documentation too. My view is that the spectre netlister should map boltzmann to M_K, pi to M_PI, sqrt2 to M_SQRT2, etc. I would suggest you contact customer support and ask for this to happen (please reference this thread, and ask the application engineer to put me on copy for any Cadence Change Request (CCR) they file). In the meantime you might be able to work around it for spectre by using M_K, M_SQRT2 in the expression, but this may then have issues when using different simulators - e.g. AMS. Note, I didn't check the behaviour - it's possible that ADE may mistakenly see these as global variables. Andrew.

RE: how to determine the subthreshold process parameter?

Given that key authors in chapters of that book (I'm not familiar with it, and don't have access to it) are Christian Enz and Eric Vittoz, and it also talks about the EKV model (these two are the E and V in EKV), I would have expected that maybe this equation shows up in the documentation for the EKV or EKV3 model in Spectre. It doesn't. So I don't think I can help you - you'd be trying to fit one equation against another - and given that gpdk090 uses bsim3v3 (which is a totally different approach to compact MOS modelling than EKV), it's going to be difficult. Andrew.

