Channel: Jason Andrews Blog

RE: updating symbols

1. From the old symbol in your OLB, save the symbol under a different name and edit the pin assignments and footprint definition (don't forget to save on exit).
2. Export the old footprint.
3. Modify the footprint and save/create the .psm in the footprint \symbols folder, using the name from step 1.
4. Do a netlist with "board update" checked, using a revisioned name so you can keep the "old" version in case you forgot something.

HOT CHIPS: The AWS Nitro Project

In 2015, Amazon acquired the Israeli company Annapurna Labs. Since Annapurna was in stealth mode, doing something to do with Arm processors, nobody really knew why. At the time, press reports called them "secretive chip maker Annapurna". Last year, at CDNLive Israel, the CEO of Annapurna, Hrvoye "Billi" Bilic, gave one of the keynotes (see my post CDNLive Israel 2018) and revealed a few details, but it was still unclear why it was so critical to Amazon. Since then, AWS has revealed some details at their user conferences. But the first deep dive I have seen was at HOT CHIPS. At HOT CHIPS in August, AWS's Anthony Liguori gave a lot of details on The Nitro Project—Next-Generation AWS Infrastructure.

He started with some statistics. Every other piece of AWS infrastructure is built on top of EC2. There are over 60 availability zones (datacenters or groups of datacenters), many of which have over 100,000 servers. There are millions of servers worldwide. They launched Nitro in November 2017, although some of the groundwork started back in 2013. All new launches in EC2 since 2017 are built on Nitro. As Anthony put it, "Nitro is the thing that powers everything we do."

AWS had originally built their cloud up on commodity hardware, then later added some Annapurna chips. But it was time to think big. As Anthony put it: after ten years of Amazon Elastic Compute Cloud (EC2), if we applied all our learnings, what would a hypervisor look like?

He went on to give a tutorial on virtualization, which I will skip. If you need more background, see my post How Does Virtualization Work? One challenge is that Intel's pre-2004 processors didn't meet all the Popek and Goldberg requirements for virtualization. One example is that you can read privileged registers in user mode without the hypervisor gaining control. When EC2 launched, they used the Xen hypervisor, which does what Anthony called "paravirtualization": rewriting the guest operating system to make direct hypercalls.

The early days of Nitro were before they started working with Annapurna Labs. An EC2 instance in January 2013, pre-Nitro, looked like the diagram below, without the chip in the dotted-orange box in the lower right. This image, and all the others in this post, are from Anthony's presentation. Later that year, in what Anthony called "early Nitro", they added Nitro chips to enhance the networking, which is the chip in the lower right. This boosted their bandwidth from 100K packets per second to 1 million packets per second. There was also a big reduction in tail latency. Often, networking performance doesn't depend on the average latency but on the worst latency, known as tail latency. It is only a rough analogy, but if you put a truck on the freeway going at 25mph, it doesn't really matter what the average speed is; that truck is going to cause congestion.

After that, they started working with Annapurna Labs, and used the 32-bit Arm A15 processor. They continued to bump the performance of on-the-wire networking. However, they also introduced storage virtualization and a remote block device. Up to and including C3, they had to hold back about 20% of cores for the device models. In C4, in the above picture, dating from January 2015, they reduced that to 10%, immediately making more cores available to customers. Then, as Anthony said: "We liked Annapurna so much that we acquired them shortly after we launched C4. We started to work with them to build truly custom silicon." The next step was I3, a couple of years later, in February 2017.
This used the next generation of Annapurna technology, built after they became part of AWS. It is based on the Arm A57. One very important feature in the cloud is encryption. This new chip allowed them to do encryption of remote block storage at line rate. They also changed the system to remove the restriction to a single card; they can use four separate Nitro cards. These controllers allow them to do PCI passthrough, provide normalized performance even with drives from different vendors, handle encryption, and manage the underlying drives. This is the platform they used to introduce ENA (Elastic Network Adapter).

So at this point they have offloaded local storage, remote storage, and networking. So the question was "what next?" The next step was to move the data plane and the control plane onto the Nitro cards, using the Arm cores inside. This gave the C5 in November 2017. But then there is not much left for the Xen hypervisor to do, so AWS introduced the Nitro hypervisor, which is really only responsible for dividing up memory and cores among the guests. As you can see from the above diagram, most of the functionality has migrated into the Nitro chips, apart from the server processor itself.

Nitro consists of three parts, two hardware and one software:
- Nitro cards (of which there are four types):
  - Elastic Network Adapter (ENA) PCIe controller, Virtual Private Cloud (VPC) data plane
  - NVMe (non-volatile memory express) PCIe controller, transparent encryption
  - NVMe PCIe controller, Elastic Block Storage (EBS) data plane
  - System control, root of trust
- The Nitro security chip, integrated into the motherboard, which protects hardware resources and the hardware root of trust.
- The Nitro lightweight hypervisor, which provides bare-metal-type performance since it is only doing memory and CPU allocation.

Anthony pointed out that all through his talk he had said Intel, to keep things simple. But they have AMD instances too, and they have also launched their own Arm SoC, called Graviton, for Arm-based servers. The focus of his talk was not so much the processor, but rather what the rest of the system is: "We can support any processor that supports PCI."

One big advantage of this approach is reduced jitter. True real-time systems are very hard to get right with a hypervisor. The above graph shows i3.metal (red), which has only a tiny uptick right at the end where something takes a few extra cycles. The yellow line is one of the last Xen-based systems. The dotted green line is the service level agreement (SLA) that packets will be handled in 150µs, but that yellow line going up at the right shows that some packets are requiring milliseconds. The C5 (blue) instance looks almost the same as bare metal, just a little higher due to interrupt delivery delay in a virtualized system, but with no uptick at the right end at all.

Not every customer can bring everything they have into the public cloud, so AWS announced Outposts, where the same Nitro hardware is offered to customers to run in their own datacenters or back offices. It uses the same underlying hardware and software as AWS does themselves, and is then accessed through the standard AWS API and console.

In the Q&A, Anthony was asked about side-channel attacks (such as Spectre and Meltdown). He said that they never allow two instances to occupy the same core simultaneously. A lot of the side-channel issues in the last couple of years were due to L1-cache sharing, which AWS has never done.
He said they have also done a lot of mitigation for things like Rowhammer, which doesn't really work against the ECC memory that they use. "We are on top of the latest research for things like this."

He was asked if they do anything for GPU virtualization. He said that as of today they only support dedicated GPUs. "It is a hard problem since GPU interconnects are so fast and no GPUs are designed to be multi-tenant. At last year's re:Invent we announced Inferentia." Inferentia is Amazon/Annapurna's neural network accelerator chip (see image).

Another question was on congestion management. Anthony said that they always want to make sure that congestion is not a problem for customers who need high performance. "Our network is big and not oversubscribed, so it's not a problem at the high end." For the lower end, when there are a lot of instances on a single machine, they take a more statistical approach with a lot of limits (like how many packets each instance can send, and so on).

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.

RE: how to use ADE_L to plot waveform after command line simulation

Hi Lokesh, I suggest you start by reading the Guidelines for the Custom IC Design Forum, which ask you not to post questions on the end of old threads (plus various other guidelines). In general, the simplest way is to set things up in ADE L, then do Session->Save OCEAN Script. With the resulting OCEAN script you can (say) put a line at the end saying exit() and then run: ocean -restore oceanScript.ocn from the command line. You might want to change the design() line to design("libName" "cellName" "schematic") so that it takes care of netlisting too. For more information, consult the OCEAN Reference manual (in the Help menu in Virtuoso, type "OCEAN" in the search box and it will take you to various hits, including the OCEAN Reference manual). Regards, Andrew.
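
For reference, a saved OCEAN script of the kind Session->Save OCEAN Script produces, trimmed to a minimal sketch, might look something like the lines below. The library, cell, and net names, and the transient stop time, are placeholders for illustration only:

simulator( 'spectre )
design( "myLib" "myCell" "schematic" )   ; netlists the cellview as part of the run
analysis( 'tran ?stop "1u" )
run()
selectResult( 'tran )
plot( v("/out") )                        ; plot the waveform of interest
exit()                                   ; so the batch session terminates

Running ocean -restore oceanScript.ocn then netlists, simulates, and plots without opening ADE L interactively.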

RE: DESIGN ENTRY CIS RENUMBERING?

The version I have does not seem to have an Advanced button under Annotate. That said, I have all versions at my disposal, I just need to know which one. This is such an obvious need: if I have one section of the design for Bluetooth and one section for WiFi, I make all Bluetooth parts x500 and all WiFi parts x600; then when I lay out the PCB I know exactly how to group parts. Renumbering parts on a single-page basis vs. the entire design is quite intuitive. In fact, one could just create a new schematic, move a page of the design to the new schematic, renumber, then move it back, IF NUMBERING COULD START AT A SPECIFIC NUMBER. As for "last_ref_des+1", it was not really that; rather, it was just the next available number in that series. For example, if you have R100, R101, R102, R104, R105... and I copy R100 and paste it, it becomes R103. It "fills the holes". This worked great up until about 2012, then it SUDDENLY quit working!

Editing of jedec type

I have to design a PCB for a 32-pin QFN package with 5mm x 5mm dimensions, but I was not able to find any JEDEC type for this. I have used QUAD50M32WG700, but it doesn't have a center pad. I am fully confused, please help me out as it is urgent. 1.) How can we edit a library that is available online when it is not working? 2.) How can we make a JEDEC type for our layout? 3.) Is there any tutorial for this? I am not able to find a tutorial for editing a JEDEC type in Part Developer or for editing an online library.

How to Extract segment length info from Constraint Information

Hello, I'm new to SKILL. Is there a function to extract the info in the blue circles (pin/via locations and cline segment length and layer) in SKILL? Thanks

RE: Laser Via - Buried Via

We're using 17.2 along with the high-speed/miniaturization option. I'm hoping there's a way to flag it with a DRC rather than any other option, but it sounds like there may not be a way to do that. Thanks for responding, redwire and masamasa.

RE: DESIGN ENTRY CIS RENUMBERING?

Ahh... the way Cadence approaches the idea of groups of parts such as you describe is through the "ROOM" property, which has been around *forever*. Allegro honors that. You can use it to place only the parts of a certain ROOM, then move the ROOM, etc., or just renumber the ROOM. It's a great feature for solving the issue you're trying to address by using ref designators instead. I personally prefer to have my designators sequenced on the board so I can find my parts easily, but to each his own. As for the Advanced button, in OrCAD that feature is certainly in 17.2, but I'm not sure what hotfix it might require; I am using the latest hotfix. It does not exist in 16.6, though. And as I'm sure you're aware, once you switch a design to 17.2 you're stuck; there is no moving back to 16. The ref des feature you're looking for is under the miscellaneous settings (Auto Reference>Automatic->Design Level). When "design level" is turned off it picks ref_des+1; otherwise it will use the "gap" like you're looking for.

RE: Laser Via - Buried Via

Have you asked your VAR? Sometimes their apps guys know these tricks which escape even us power users. I would ask that if you do solve it, please post the solution for the rest of us. Happy hunting.

RE: DESIGN ENTRY CIS RENUMBERING?

Thanks, I will review the ROOM property to see if it will help. Again, I want parts in that "ROOM" to be numbered starting at my discretion, not at "1". As for the "design level" option, I just tried it and it does NOT solve the issue. I copied "C305" and pasted it, and now I get "C1", which is not the gap but COMPLETELY at the other end of the spectrum. I expected to get "C322", as it was the next open number in that RANGE. This is how the tool USED to work; I'm not sure why it is now broken...

RE: DESIGN ENTRY CIS RENUMBERING?

Renumbering from Allegro will allow you to filter any components with a specific ROOM property, and then you can give rules as to how it renumbers. The poorly documented "fst_ref_des" variable will direct Allegro to start its numbering with whatever you set that variable to (100, 200, etc.). Regarding the schematic, I also find that it picks the "first available" ref des -- not sure how to get it to select the range you mention.
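
If it helps, "fst_ref_des" appears to behave like other Allegro environment variables, so a minimal sketch (assuming it is accepted in your env file or the Allegro command window, and using 500 purely as an example starting value) would be:

set fst_ref_des 500

With that set, the renumber should start its numbering at 500 for the components you have filtered by ROOM.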

RE: updating symbols

Thank you for your reply. I can try that method. Thank you again.

cadence technology forum

Hello all, when I search on Google, I notice there is another Cadence technology forum under https://communitystg.cadence.com. I cannot get access to it. Do you know how to get access to that forum?

SKILL function to create a 'negative' of a layer?

Hi, I have a layout of a power MOSFET chip. It has a single metal layer. To perform some checks on the layout design, I want to create a 'negative' of the metal layer. Is there a way to do it using a SKILL function? Thanks.

RE: DESIGN ENTRY CIS RENUMBERING?

Again, the range assignment USED TO WORK. I am unclear why it suddenly quit working.

RE: SKILL function to create a 'negative' of a layer?

You could use dbLayerAndNot, or use Tools->Layer Generation in the layout editor to do the same thing. You can't just do a NOT because you need an outer bound for the inverse layer you're producing, but you could create a rectangle with the bBox of the cellView on a temporary layer, and then do the AND NOT relative to that (i.e. with that layer as the first layer). Andrew.
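
To make that concrete, here is a minimal sketch of the dbLayerAndNot approach. The metal layer name "METAL1" and the scratch layers "y0" and "y1" are placeholders for whatever layers exist in your technology file:

cv = geGetEditCellView()
; boundary rectangle covering the whole cellview, drawn on a scratch layer
bound = dbCreateRect( cv list( "y0" "drawing" ) cv~>bBox )
; collect the shapes on the metal layer that should be inverted
metalShapes = setof( sh cv~>shapes sh~>layerName == "METAL1" )
; negative = boundary AND NOT metal, written to a second scratch layer
dbLayerAndNot( cv list( "y1" "drawing" ) list( bound ) metalShapes )

The derived shapes land on "y1"; you can delete the temporary boundary rectangle afterwards if you no longer need it.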

Manpower Consultancy- Emagine Solutions

Emagine is one of the largest marketplaces and provides the best recruitment consulting services in India. Benefits for employers: as the best placement recruitment marketplace, we ensure we provide the best curated CVs. Our agency's Manpower Consultancy provides a platform through which recruiters are able to reach top candidates. We offer various important features for both recruiters and employers. We are among the best marketplaces in India, as we offer personalized recruitment plans, and you can easily track every stage of the recruitment process. We always ensure a full range of services. With the help of Manpower Job Placement Consultants, you can gather information about feedback and verified testimonials. We focus on delivering work on time and at sensible costs. Visit here to know more: https://www.emagine.co.in/

How to use ?use in cdfCreateParam

I want to rule out one of the choices in cdfCreateParam. I suppose ?use is what is needed, but how do I use it?

RE: How to use ?use in cdfCreateParam

By "rule out" do you mean "not display"? If so, the ?display callback is probably the best choice - then specify ?display "nil". The ?use setting isn't really used consistently, so I'm not entirely certain whether it's useful or not... Regards, Andrew.

RE: Calibre xACT3D Extraction Speed Problem

Hi Andrew, my question is that the generation of the PEX netlist is very fast, but the generation of the Calibre view (schematic) for post-layout simulation is very slow, and that job is highly related to Cadence Virtuoso. The odd thing is that we can't set up the CPUs to speed up the generation. How can we resolve this issue? Thanks. Cheers, Chi Fung