By Richard Goering on October 8, 2012

Phil Bishop has come into his new role – Vice President and General Manager of System Level Design at Cadence – at an exciting time. After years of slow growth, technologies such as high-level synthesis and virtual prototyping are seeing adoption and showing results in more and more production environments. As a 28-year veteran of the EDA and semiconductor industries, and former CEO of an early pioneer in C-based synthesis, Bishop is uniquely qualified to help drive the adoption of system-level design.

In this interview Bishop discusses his background and shares his views on the emergence of system-level design, the move to transaction-level modeling (TLM), who’s using system-level technologies and how, what challenges remain, and what’s coming up in the future.

Q: Phil, what’s your job responsibility at Cadence?

A: I’m part of the SSG [System and Software Group] team led by Nimish Modi, and my responsibility is to encourage as many customers as possible to move to TLM design and verification. The products that I have are C-to-Silicon Compiler, which helps customers synthesize SystemC into RTL, and VSP [Virtual System Platform], which allows customers to virtualize their overall system with TLMs and do early software development and early system verification.

Q: Where were you before coming to Cadence?

A: I have a long history – I’ve been in the semiconductor business for 28 years. Before Cadence I was at Magma Design Automation. I was there for 2 ½ years, and most recently I was running all their marketing. Before Magma I ran a startup called Pyxis that was in the IC routing business and was sold to Mentor. Before Pyxis I was CEO of Celoxica for 5 years. We were one of the early pioneers of C-based synthesis techniques. Before that I was at Mentor for 12 years and managed worldwide services. I also did IC design at both Motorola and Boeing.

Q: What brought you to Cadence?

A: I was always very intrigued by Cadence. I felt Cadence had a strong product portfolio and I was impressed with the market approaches Cadence had taken over the years. A couple of things attracted me to Cadence now. One is that I saw an upward trend in Cadence’s fortunes under the leadership of Lip-Bu [Tan]. I had met him before, and I was excited to join a team led by Lip-Bu.

Also, I’ve had a lot of familiarity with the technologies I’m working on for Nimish. I had the opportunity to get involved with these technologies now, with corporate backing from Cadence, rather than being in a VC-backed scenario. It’s an exciting opportunity for me. I can reflect back on my history and take some lessons learned from the past and apply those to C-to-Silicon and VSP now.

Q: People have been talking about system-level design for many years, and the initial adoption was slow. Is this the moment for system-level design, and if so why?

A: We’ve been tracking production projects for C-to-Silicon. This year we’ll have more than 60 projects using C-based synthesis. Along with that, with our VSP technology, we’re seeing more and more people who want to do some early software development. RTL is still an important part of design flows, but being able to move up a level in abstraction is really important.

With C-to-Silicon, we have RTL Compiler underneath. There’s an RTL synthesis tool within C-to-Silicon that allows you to make high-level, TLM-based micro-architecture tradeoffs, and do detailed timing as part of a full SoC flow. That’s one key thing that’s allowing us to go to a higher level of abstraction. Another thing that’s happening is the prevalence of software as a differentiator. SoCs are becoming more of a platform for software, and there’s a growing focus on providing more software efficiency and software-driven verification. As a result, VSP is a key enabler.

It’s been a slow march but it’s an inevitable march. The levels of complexity and the importance of software are such that you have to move up a level to get your complex SoCs designed and verified. We’re seeing some really good traction with these technologies now, and system level growth and adoption is an industry-wide trend.

Q: What’s driving the move to TLM more – verification or implementation?

A: It’s tending more towards verification as the key driver. We usually start working with customers on the verification side – my group provides the SystemC portion of Incisive [simulation], so we engage with customers to help them improve their verification methodologies. Then they start to ask, is there a way to get to RTL from here? That’s where C-to-Silicon comes in. Similarly, when they’re modeling something at a high level, they’d like to assemble these models into an executable platform for the software guys. That’s where VSP becomes a critical enabler.

Q: What types of design applications are using transaction-level modeling these days?

A: On the verification side, we see a wide range of applications, mostly ARM-based projects. We’ve seen wireless applications, mobile devices, industrial control, networking, encryption/decryption and automotive, to name a few. On the design side we see a lot of graphics and image processing applications. What’s interesting to us is that while the initial focus was on datapath-oriented designs, we’re seeing more and more people with both advanced datapath and control using TLMs for both design and verification. Complexity is driving people to try new methodologies.

Q: Most of the original adoption of high-level synthesis was in Japan. Is that still the case or is it moving to other geographies?

A: Japan was the original leader because they’re consumer product oriented with incredibly tough design schedules. But now the new area of growth is the U.S. Major accounts in the U.S. are growing for us at a good clip. If you look at the 60-plus production projects I mentioned, probably half or so have moved to the U.S. That’s a major change.

Q: What are the obstacles or challenges to the adoption of TLM-based design and verification?

A: One obstacle we sometimes see is that customers want to do automatic equivalence checking at high levels of abstraction, even though it can severely affect the style and optimization capabilities of their designs. Cadence is focused on extremely fast dynamic verification to allow designers to get the most out of our high level design optimization algorithms. Another key challenge is building up all the TLM models that represent the IP blocks in a complex system. We have some powerful technology inside VSP that can automatically generate most of the TLM models for an IP block using RDL [Register Description Language] and IP-XACT representations.
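The idea behind generating register models from a machine-readable description can be pictured with a short sketch. The following plain C++ is not Cadence’s RDL/IP-XACT flow – the names and register map are hypothetical – but it shows the core concept: once the register description is data rather than code, a single generic behavioral model can serve any IP block.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// One row of a machine-readable register description. In a real flow,
// rows like this would be generated from RDL or IP-XACT, not hand-written.
struct RegSpec {
    std::string name;
    uint32_t offset;
    uint32_t reset;   // value after reset
    uint32_t wmask;   // which bits software may write
};

// A generic behavioral register block driven entirely by the table,
// so the model itself needs no per-IP hand coding.
class RegBlock {
    std::map<uint32_t, uint32_t> regs_;
    std::map<uint32_t, uint32_t> masks_;
public:
    explicit RegBlock(const std::vector<RegSpec>& specs) {
        for (const auto& s : specs) {
            regs_[s.offset] = s.reset;
            masks_[s.offset] = s.wmask;
        }
    }
    uint32_t read(uint32_t offset) const {
        auto it = regs_.find(offset);
        return it == regs_.end() ? 0 : it->second;  // unmapped reads as 0
    }
    void write(uint32_t offset, uint32_t data) {
        auto it = regs_.find(offset);
        if (it == regs_.end()) return;              // ignore unmapped writes
        uint32_t m = masks_.at(offset);
        it->second = (it->second & ~m) | (data & m);
    }
};

// Hypothetical register map for a small IP block.
RegBlock make_demo_block() {
    return RegBlock({
        {"CTRL",   0x00, 0x0, 0x0000000F},  // only low 4 bits writable
        {"STATUS", 0x04, 0x1, 0x00000000},  // read-only
        {"DATA",   0x08, 0x0, 0xFFFFFFFF},
    });
}
```

Because read-only fields and write masks live in the table, behaviors like ignoring writes to STATUS fall out automatically rather than being coded per register.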

I think the next major challenge is connecting VSP platforms to RTL and emulation for faster verification. At Cadence we have developed transactors that allow you to link VSP to Incisive and Palladium.

Q: The TLM models used for software development in a virtual platform today are not detailed enough for high-level synthesis. Can this gap be bridged?

A: Loosely timed [LT] TLM models are really for fast simulation, but you can refine them further and make them synthesizable. What we often see in SoC designs is that some blocks are primarily for verification, but other blocks are intended for synthesis, so the detail is there. You can mix and match the different realms. There are clever ways in which you can mix levels of abstraction where most of the design is running in fast LT models but one or more blocks are synthesizable with all the needed details, even though it’s still a SystemC TLM.
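The mixing Bishop describes can be sketched in plain C++ (this is a conceptual illustration, not actual SystemC; the interface and class names are hypothetical): two models of the same block implement one transaction interface, one loosely timed for speed and one annotated with cycle detail, so a platform can swap between them without changing the rest of the system.

```cpp
#include <cstdint>

// One transaction-level interface shared by both abstraction levels.
// Each transport call moves a whole word and accumulates model time.
struct MemIf {
    virtual uint32_t read(uint32_t addr, uint64_t& cycles) = 0;
    virtual void write(uint32_t addr, uint32_t data, uint64_t& cycles) = 0;
    virtual ~MemIf() = default;
};

// Loosely timed model: functionally correct, zero-delay transactions.
// This is the fast model a software team would run against.
class FastMem : public MemIf {
    uint32_t mem_[256] = {};
public:
    uint32_t read(uint32_t addr, uint64_t& cycles) override {
        (void)cycles;                // LT model: no timing detail
        return mem_[addr % 256];
    }
    void write(uint32_t addr, uint32_t data, uint64_t& cycles) override {
        (void)cycles;
        mem_[addr % 256] = data;
    }
};

// Refined model: same behavior, annotated with per-access latency,
// standing in for a block detailed enough for synthesis.
class TimedMem : public MemIf {
    uint32_t mem_[256] = {};
public:
    uint32_t read(uint32_t addr, uint64_t& cycles) override {
        cycles += 3;                 // assumed 3-cycle read latency
        return mem_[addr % 256];
    }
    void write(uint32_t addr, uint32_t data, uint64_t& cycles) override {
        cycles += 1;                 // assumed 1-cycle write latency
        mem_[addr % 256] = data;
    }
};

// The platform sees only MemIf, so abstraction levels mix freely.
// Returns accumulated cycles; functional behavior is identical either way.
uint64_t run_traffic(MemIf& m) {
    uint64_t cycles = 0;
    for (uint32_t a = 0; a < 8; ++a) m.write(a, a * a, cycles);
    for (uint32_t a = 0; a < 8; ++a) (void)m.read(a, cycles);
    return cycles;
}
```

The design point is that refinement happens behind a stable interface: the platform code in `run_traffic` never changes as a block moves from the LT realm toward synthesizable detail.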

Q: VSP was announced in May 2011. How is it being used today?

A: VSP is coming along at a pretty nice clip now. We really see three use models. The first involves a pure TLM-based platform for early software development. We’ve seen good-sized customers demanding this as a means to provide an executable reference design to their software teams.

The second use model is a mostly TLM platform that is linked to RTL simulation through transactors. The RTL stays within the simulated realm with Incisive. For example, we have a strong partnership with Xilinx that has resulted in an executable platform for the Zynq-7000, and we are working with customers to emulate the RTL piece that would go into the FPGA fabric for that architecture. We see lots of customers who want to mix RTL with a virtual platform.

The third use case involves VSP with transactors linked to Palladium. Here you could essentially simulate and verify the drivers for a complete wireless system, for instance, with the CPU running fully inside VSP as a fast model and the GPU running on the Palladium box.
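The transactor concept underlying these links can be sketched in plain C++ (hypothetical names – this is not the Cadence transactor API): the transactor turns one TLM-style call into the cycle-by-cycle pin activity a pin-level model expects, then collects the result back into a single transaction.

```cpp
#include <cstdint>

// Pin-level view: the signals a simple bus slave samples each clock cycle.
struct BusPins {
    bool valid = false;
    uint32_t addr = 0;
};

// A toy pin-accurate slave that needs two cycles to answer a read:
// one cycle to latch the address, one to drive the data back.
class PinSlave {
    uint32_t latched_ = 0;
    bool pending_ = false;
public:
    // Advances one clock; returns true when 'data' is valid.
    // Dummy payload: the slave answers with addr * 2.
    bool clock(const BusPins& in, uint32_t& data) {
        if (pending_) { data = latched_ * 2; pending_ = false; return true; }
        if (in.valid) { latched_ = in.addr; pending_ = true; }
        return false;
    }
};

// The transactor: converts one TLM-style read() into per-cycle
// pin wiggling, hiding the clocking from the transaction-level caller.
class Transactor {
    PinSlave& slave_;
public:
    explicit Transactor(PinSlave& s) : slave_(s) {}
    uint32_t read(uint32_t addr) {
        uint32_t data = 0;
        BusPins req{true, addr};
        slave_.clock(req, data);           // cycle 1: present the address
        BusPins idle{};
        while (!slave_.clock(idle, data))  // later cycles: wait for data
            ;
        return data;
    }
};
```

In the real flows described above, the slave side would be RTL running in Incisive or on a Palladium box rather than a C++ class, but the division of labor is the same: transactions on one side, clocked signals on the other.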

Q: What R&D directions are you pursuing these days?

A: In TLM analysis and verification, we’re seeing more and more the importance of verification IP, and high-level synthesizable interface and math IP, so we are adding that to the portfolio. On the TLM design side, we’re studying how physical effects and power optimization can be represented in the overall TLM flow. With VSP, we’re looking at the software development side, including powerful native software debugging and analysis.

We’re really keen to move the ball quickly in all of these areas. I think there’s a lot of opportunity there.

Richard Goering