Palladium Cloud is one part of the Cadence Cloud offering. As you would guess from the name, it allows design groups to make use of Palladium emulation without purchasing a Palladium emulator. Since a Palladium emulator is a hardware product, albeit one with a large software component too, it can't simply run in a cloud datacenter from one of the big providers like Amazon AWS or Microsoft Azure. Instead, Cadence has created Palladium datacenters provisioned with a sufficiently large number of Palladium Z1 emulator systems. I sat down with Bennett Le, Mr. Palladium Cloud, to get a little more color.

Why Purchase Your Own Emulator?

The value proposition for purchasing an actual emulator system requires that usage over a multi-year period will be sufficient to justify the capital investment. In fact, it is not just the purchase price of the emulator itself, but also providing an environment for it. A Palladium Z1 is a standard-sized rack and can fit in a typical computer room, where it will occupy two tiles. However, it requires more power and more cooling than is typically available. The largest Palladium Z1 systems, with four racks per unit, are water-cooled. Smaller ones can be air-cooled, replacing one of the racks with the cooling system. This is less efficient than water cooling, but some installations simply cannot be provisioned with water cooling.

So purchasing and maintaining a Palladium emulator system requires both a significant capital investment and the creation of a complex environment to run it on a day-to-day basis. But once it is set up, it is fire-and-forget from an IT point of view, and all the day-to-day operation is handled virtually over secure network connections.

Cadence's biggest emulation customer is NVIDIA. They do so much emulation that they have accumulated many Palladium emulators of various generations and made extensive use of them over the years.
In this video, a gaming blogger visits NVIDIA, and Narendra Konda, the head of the emulation lab, walks him through how they use their Palladium systems. The bottom line for them is that by emulating the entire system for 10 months, they can bring it up in literally just a few hours once they get silicon. The value of that is obvious, but you have to be operating at the scale of NVIDIA to justify an emulation farm the size of NVIDIA's.

https://youtu.be/650yVg9smfI

Palladium Cloud

Most design groups are not NVIDIA and don't have that type of budget. However, the value proposition is a bit different for Palladium Cloud compared to other offerings in the Cadence Cloud Portfolio. Palladium Cloud has been quietly running since Q4 of 2016. But its roots go back even further. When I was looking for material for my post on why our first attempt at EDA in the cloud didn't really go anywhere in the early 2000s (see Remember Virtual CAD? DesignSphere Access? What an ASP Was?), I came across an article on embedded.com. It featured an interview with George Zafiropoulos, who was then the VP of marketing at what we seemed then to call Quickturn-a-Cadence-Company. We had a service called QuickCycles, where people could lease that generation of emulators and thus avoid what the article calls "the $1M to $10M acquisition cost." Interestingly, George said that over 80% of people who rented verification services ended up purchasing an emulator within six months. I suspect we are going in the other direction now, and a lot of people with their own datacenters and emulators will move to the cloud.

A single rack of Palladium can run designs from 4M to 576M gates. There are 4 racks in a unit if it is water-cooled, 3 if it is air-cooled. Up to 4 units can be ganged together to take the capacity up to a maximum of 9.2B gates (that's 16 racks inside the 4 units, 16 × 576M ≈ 9.2B).

There are many use modes for Palladium. See the above chart for details.
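The capacity figures above reduce to simple arithmetic. Here is a minimal back-of-the-envelope sketch; the constants come from the numbers quoted in this post, and the variable names are mine for illustration, not part of any Cadence tooling:

```python
# Back-of-the-envelope Palladium Z1 capacity arithmetic.
# Constants are the figures quoted in the article; names are illustrative only.
GATES_PER_RACK = 576_000_000    # maximum gate capacity of a single rack
RACKS_PER_UNIT_WATER = 4        # water-cooled unit holds 4 racks
RACKS_PER_UNIT_AIR = 3          # air-cooled: one rack slot holds the cooling system
MAX_UNITS = 4                   # up to 4 units can be ganged together

max_racks = MAX_UNITS * RACKS_PER_UNIT_WATER   # 16 racks in a maximal water-cooled config
max_gates = max_racks * GATES_PER_RACK         # 9,216,000,000 gates

print(f"{max_racks} racks -> {max_gates / 1e9:.2f}B gates")  # prints "16 racks -> 9.22B gates"
```

This is why the maximum is usually rounded to "9.2B gates": 16 × 576M is 9.216B exactly.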
It is probably worth emphasizing that the modes involving SpeedBridge can be used in Palladium Cloud. SpeedBridge provides interfaces between the emulated SoC and the outside world, via PCIe, Ethernet, and so on. You can have Cadence install a PC (for example) in the datacenter, connected to Palladium via SpeedBridge. By connecting to both Palladium and the PC, you can run tests that model software on the PC interfacing with the SoC in the emulator.

I won't run through all 22 use modes summarized in the above diagram. I think the best way to look at them is to regard Palladium as having two main uses: verification of the hardware, and software development before the SoC is available. A modern system has a large software load, typically with multiple processors and their operating systems. Nobody is going to tape out an SoC without first booting the operating system on the chip, and only emulation (or FPGA prototyping) is fast enough to run the billions of cycles that an OS like Linux or Android requires to bring up. System Design Enablement (SDE) is about more than the chip; it encompasses the software and, increasingly, the complex packaging of die manufactured in different processes (such as RF and DRAM).

Application software that runs high up in the stack can be developed on a PC, since it doesn't depend on the underlying architecture of the SoC. But lower-level software that interacts with the chip cannot simply be compiled and run on a PC; it needs Palladium to emulate the chip so the software can be tested and debugged. Waiting until the chip is available has two problems. The obvious one is that it serializes two activities that would be much better done in parallel. But, more subtly, the hardware cannot be signed off from a verification point of view until a real software load is running correctly. It is quite possible that bugs in the hardware only get exposed when the software doesn't run correctly.
I've personally run into problems with Tektronix graphics terminals and with DEC synchronous communication hardware, in both cases on devices that were already shipping, not betas (or earlier). I managed to work around the problems, but both were hardware bugs that had escaped into the field.

Basically, Palladium Cloud is "Palladium without the hassle."

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.