
* "Consistency in the roadmap and in what we can offer is really a focus for us."
* "People don't realize how broad our gaming franchise is. They think about it narrowly as discrete graphics. But we think about it as console, cloud, notebooks, and desktop CPUs and GPUs."
* "Getting our GPU software on things like TensorFlow and Caffe to perform very well on our GPU architecture is really important and will really advance our opportunity there."
* "We have shown TensorFlow up and running well on our GPU platforms, we have done some really good work with the open source community, and we are now able to upstream some of our software with TensorFlow, and contribute those optimizations we develop."
* "7-nanometer is, I think, very impactful for us. It is the first time, arguably ever, that we are, let's call it, at the leading edge of process technology."

Lisa Su took her turn on the stage at the Consumer Electronics Show for the first time this year. She emphasized the company's progress in bringing to market follow-on versions of successful parts, including "Ryzen III," the third generation of the company's desktop CPU; "Radeon VII," the newest version of the company's graphics processing unit (GPU); and "Epyc II," the second generation of AMD's server processor.

Su sat down to talk one on one with CMLviz during the conference. The underlying theme of both the keynote and her interview was that AMD is no longer the one-hit wonder it was for decades before she arrived: it now follows its successes with sequels in a disciplined manner. As Su observes, "One of the knocks on AMD in the past has been that we had good products, but we hadn't necessarily followed that up."

"Consistency in the roadmap and in what we can offer is really a focus for us," she says.

Aside from consistency, Su emphasized that the company is a maker of high-performance, general-purpose CPUs and GPUs.
That is important to Su because a lot of discussion of chips these days tends to focus on specialized parts for individual domains, such as neural-network "inference." Su believes the path to market share and increasing profits will come from consistently delivering high-performing parts that can handle a broad range of workloads, including some of the specialized tasks but also very popular mainstream applications such as virtualization and cloud computing.

Su's style in interviews is generally to emphasize progress and partnerships. She does not stoop to criticizing or trash-talking competitors, as evidenced by the very circumspect responses she offered to negative comments by Nvidia CEO Jensen Huang about her keynote. Rather, Su sticks to emphasizing the positive, and turns the discussion promptly back to what is working and the road ahead.

Congratulations, you did a very good job with the keynote, especially for a first time.

Thank you. It's an honor for us. We wanted to make sure people understood our focus on products, and on really taking computing to the next level; that was the whole theme of the presentation. We covered a lot of ground in different markets, whether mobile or high-end desktops, whether for content creation or gaming or the cloud.

You made a point of really emphasizing the gaming community. You said several times, "We love gamers." Did you feel you needed to make a special effort in their direction?

Yeah, for gamers, I think especially for this show. In recent events, we've focused on other things; at our November event, for example, we were talking more about the data center. And so there were questions out there about us, like where is the gaming and content business going.

People don't realize how broad our gaming franchise is. They think about it narrowly as discrete graphics. But we think about it as console, cloud, notebooks, and desktop CPUs and GPUs.
It's been a while since we have brought all of that together for people, and CES was the right place and time to do that.

What we were trying to get across was that these gaming platforms are much more connected than you might think. You heard from Phil Spencer [Microsoft's executive vice president for gaming] about their vision of connecting consoles through the cloud, and about their content community. On the discrete-graphics level, we talked about developer partnerships with Ubisoft and Capcom, and about how you get cloud content, as with our work with Google.

So how long does that movement take to play out, something like cloud gaming?

It will become very clear in a short amount of time. It won't take ten years to get that uptake; it might not even take five years. Every major company in this space is thinking about how to get more users to experience content across multiple platforms. And the desire to go there is so broad that there will be a lot of progress in this area.

The desire is on the consumer side as well, to have access to a much broader array of content. You don't have to buy a several-thousand-dollar PC gaming rig to play the best games anymore. I think some of the rise in gaming recently has been somewhat surprising to people because you see a broader demographic than you did traditionally. You see that with eSports, for example, where there are so many people watching. With [Google's] Project Stream, we are bringing triple-A games to a Web browser. If the product is good enough, consumers will want access to that breadth across platforms.

I don't know if you heard anything about Jensen Huang's meeting with a small group of reporters, which took place right after your keynote. The first question he was asked was what he thought of your keynote.
He said something along the lines of, "Wow, underwhelming, huh?" He was specifically arguing that Radeon VII, which you unveiled during the keynote, doesn't do a lot of things that Nvidia's "GeForce RTX 2060" card [announced the same week] does, such as ray tracing. Do you have any response to that comment?

We are pretty pleased with where Radeon VII is. The comment, or the question, we often got - even when we were talking about the 7-nanometer data center chip in the quarter - was, when is the next-gen card for gaming going to be out? I heard that a lot. We have been working on this for a long time now.

I think he [Huang] probably doesn't even have a Radeon VII card! Look, we are absolutely committed to competing at the high end. We have a broad set of ambitions in gaming. And we will let the reviewers of Radeon VII speak.

Anything to say about his implication that Radeon is not exploiting technical advances in rendering technology the way GeForce is?

As far as rendering, I would not say we are not thinking about the next architectural changes. We certainly are. Ray tracing is an important capability. We also believe strongly in an ecosystem that makes many things possible.

Look, a capability is only as good as what an end user sees: is the user experience measurably different? And that's about working very closely with developers and with system guys to bring hardware and software together.

New architectural advances are definitely important, and software is important as well. That's the way we think about both graphics and CPUs; that's the way advances usually come. It is true that most of these things take some time to come together. Vega, although it launched in 2017, has gotten better and better with time. As developers spent more time with the architecture, they were able to unlock its capability. And we are doing quite a bit of concurrent development ourselves.
I know that there is a lot of interest in our "Navi" architecture; it's our next-generation graphics architecture.

Let's talk about software, because Nvidia has a big lead there. Huang in his press event made a crack about AMD GPU talent going over to Intel [Intel's VP for visual computing is Raja Koduri, who formerly ran the graphics effort at AMD under Su]. Huang said something like, "Intel's graphics team is basically AMD's graphics team," and he joked, "there is a law of conservation of graphics teams in the industry."

I think we have tremendous graphics talent. For example, David Wang [senior vice president of AMD's Radeon Technologies Group], who has been in the industry since the beginning. We have great talent across hardware and software. We feel very good about the graphics talent we have.

Another bit of news in yesterday's keynote was "Epyc II," your second-generation server chip. Tell us about your goals for that.

Our mid-term goal has been, and remains, to get to double-digit market share. I would say our second generation looks really, really good. We are putting the final optimizations in place right now and we expect it to be available in the middle of the year. We view it as a good vehicle to get to that double-digit market share goal. The response from the cloud guys has been very strong. We've worked with Amazon [AWS] and with [Microsoft] Azure, and we've continued to expand those partnerships, and the number of partnerships, as more cloud vendors come to view Epyc as a good solution for what they're trying to do.

If you put aside the quarterly stuff - because we are not that large at this point for that to be a significant issue - getting designed into a significant part of the cloud infrastructure is important.
We are not looking to be a niche; we are looking to be a mainstream player in the very large, general-purpose data center market, call it a $12 billion to $15 billion market for CPUs. We are very underrepresented in that market today. So let's ask, what are the key workloads that we do very well on? We do very well on data analytics-type things, and very well on Web serving-type workloads, for example. These are workloads that can benefit from a large number of cores, and from the performance capabilities that we have.

As you go into the machine learning-type workloads, I think there are opportunities for us in training. The thought process there is, getting our GPU software on things like TensorFlow and Caffe to perform very well on our GPU architecture is really important and will really advance our opportunity there. So there is a set of CPU workloads we are really strong in, and then there is a set of GPU workloads, and we can really specialize in some of those workloads.

What about machine learning inference? [That's the part of AI where a trained neural network answers questions and makes predictions.] Is inference an opportunity for you, say, inference in the cloud?

Inference is a very broad term, so I don't actually think of it as a specific workload. There are different ways to do inference. It depends on whether you're referring to the data center, or looking at the edge of the network.

In training, we have made really nice progress with Radeon and with our open compute platform [ROCm] software. We have shown TensorFlow up and running well on our GPU platforms, we have done some really good work with the open source community, and we are now able to upstream some of our software with TensorFlow, and contribute those optimizations we develop. We have seen the number of downloads for that really increase.
Some of those optimizations have since been integrated into the Linux kernel and into the package manager, so you automatically have access to all of that. But it is absolutely a multi-year effort. The more people who get familiar with Radeon, the more opportunities you have, like Google for cloud gaming, with a set of optimizations just for that. And there is more than just machine learning in cloud workloads. There are workloads like virtualization.

Let's talk about the approach to AI, about AI chips. There are all these specialized AI chips out there. What's your thinking about how AMD positions itself as an AI chip vendor?

I would say there are two approaches. You can be very specific, as with adding "tensor cores," or like some of the AI startups that have very specific use cases for inference. That's not our strategy. Our strategy is that we will have on our GPUs all the various [numeric] precisions that are necessary for AI, with the idea that these are general-purpose compute capabilities.

Now, we may put some special accelerations on them, and there will be a number of those, but our focus is the overall compute needs of our customers. In addition to floating-point capabilities, you will see some of those accelerators on your general-purpose device. If you only do 8-bit-precision compute, sure, there will be something that is better for it; but if you want to do multiple types of workloads, then that is the market we are going after. My view is that these things will co-exist.

The idea that specialized accelerators will take over is unlikely, but the idea that GPUs will take over is also unlikely. We believe in the idea of heterogeneous processing across a high-bandwidth, interconnected infrastructure. Things will be specialized at the system level, with a mix-and-match approach. I think what happens is, as algorithms mature, your ability to do fixed-function accelerators is higher.
And so, if you believe we are done learning about machine learning, yes, you could say everything can go to accelerators. But we are still in the process of learning about the technology. Some things can be offloaded, but not everything.

I go back to the idea that you have to look at the market, and at what we are building for. There are so many different things, Web applications and database applications and specific large-scale simulations, and a general-purpose processor is helpful for all those areas. Now, we know very well, from being in the custom-processor business for years now, what the size of the market needs to be for special-purpose processing to make sense. We believe for many applications, it's at the software level, not just the hardware level. Just think about how long it takes to bring up new architectures. You are going to have places where those special-purpose processors are going to be very valuable, but not everywhere.

Any thoughts about the RISC-V instruction-set movement? Some people argue RISC-V is how these specialized chips can be both specialized to a task and also more general-purpose.

RISC-V is a nice idea, and certainly in some lower-volume applications it can be worthwhile.

The RISC-V folks are convinced it's the future of instruction-set architectures.

Well, that's certainly a viewpoint...

Let's talk about manufacturing. With your manufacturing partners, you have soundly moved ahead of Intel in process technology with the newer 7-nanometer parts. I think many people are astounded this has taken place, both that Intel lost the lead and that it should be AMD pulling ahead like this. It has really never happened before in the competition between the two.

With manufacturing, you never start with the idea that the competition has issues. We were planning our roadmap based on where we thought our industry was going, and we knew that 2019 would be the year of 7-nanometer. That said, 7-nanometer is, I think, very impactful for us.
It is the first time, arguably ever, that we are, let's call it, at the leading edge of process technology. Now, the underlying design, the architecture, is the most important piece, but to be on a par or better in process is certainly helpful in power and performance, and just from the standpoint of giving confidence to our customers.

The key is to stay on a very predictable roadmap for our processors. One of the knocks on AMD in the past has been that we had good products, but we hadn't necessarily followed that up. We launched our first-generation Epyc in 2017, the second generation is here in 2019, and now we are working very hard on our third generation, and our fourth. Consistency in the roadmap and in what we can offer is really a focus for us.