Research Report, $AMD: Modularity Wins: Computation, Chiplets, and the New Era of Semiconductor Design.
Introduction:
At its most abstract level, AMD should not be viewed merely as a “chip company.” Rather, AMD is a computational engine - an entity fundamentally concerned with the manipulation and transformation of information through physical processes. This is, in a sense, a reflection of the human mind itself: we intake information, process it through cognitive structures, and produce meaningful output - decisions, actions, or innovations. In AMD’s case, silicon replaces synapses, and electrons replace neurons. The outcome is the same: structured information transformed into useful action or change in the world.
Technology, at its essence, is the harnessing of physical laws to process information - often through the flow of electrons - and convert it into outcomes that serve human purpose. This is what AMD builds. The company’s role in the computational universe is not simply the manufacture of semiconductors, but rather the enabling of computation itself. And in a world increasingly dominated by information processing - through AI, data analytics, edge computing, and more - this role is foundational.
Computation, in its purest form, is the manipulation of symbols or data through defined rules and mechanisms. AMD’s products, whether CPUs, GPUs, FPGAs, or emerging AI accelerators, are all instruments in this broader computational framework. As such, the company’s Total Addressable Market (TAM) is, in principle, unlimited—bounded not by a specific device category, but by the very demand for computation across industries, applications, and technological paradigms.
The future of the computational market is, by nature, uncertain. New architectures, paradigms, and use cases will emerge. Therefore, the company best positioned to lead is not necessarily the one with the biggest market share today, but the one with the greatest agility. Success will hinge on iteration speed, agility in architecture and product design, and the capacity to pivot toward new opportunities. In this regard, AMD’s chiplet architecture, heterogeneous computing model, and culture under Lisa Su provide it with a structural advantage.
To invest in AMD, then, is not merely to back a chip manufacturer - but to bet on a flexible, foundational player in the evolving architecture of computation itself.
Moore’s Law:
To begin, it's important to understand Moore’s Law - a foundational principle that provides essential context for AMD’s chiplet architecture and highlights the company’s emerging role as a key player in the evolving semiconductor landscape.
Moore’s Law is a prediction about the number of transistors on an integrated circuit: it suggests that the transistor count will double every two years as the transistors themselves get smaller. Thus far, this has held. Yet as chips move beyond the 5nm scale, complexity is increasing exponentially. The industry also faces what is called the lithographic reticle limit, whereby production of cutting-edge chips is becoming exponentially more expensive, and in some respects physically impossible.
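The doubling cadence can be sketched in a few lines. This is a back-of-the-envelope illustration: the 1971 Intel 4004 baseline of roughly 2,300 transistors is a well-known historical figure, and the strict two-year doubling is an idealisation.

```python
def transistors(year, base_year=1971, base_count=2300):
    """Idealised Moore's Law: transistor count doubles every two years,
    starting from the Intel 4004's ~2,300 transistors in 1971."""
    return base_count * 2 ** ((year - base_year) / 2)

print(f"{transistors(1973):,.0f}")  # one doubling later: 4,600
print(f"{transistors(2021):,.0f}")  # ~77 billion, the right order of magnitude
```

Fifty years of doubling lands in the tens of billions of transistors, which is broadly where today’s largest chips sit, and it is exactly this curve that the reticle limit now threatens.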
This dynamic has been reshaping the industry in silence for decades, yet the broader market is only now starting to recognise it. Lisa Su, the current CEO, recognised it back in 2014 with the inception of the then-untested and “speculative” chiplet structure, which now sits at the backbone of the Zen architecture and is seeing widespread adoption across the industry. Or is at least starting to.
Moore’s original paper predicted this too:
“It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected. The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.”
The central aim of the semiconductor industry is to deliver more computing power for less cost and less energy consumption. The industry has historically achieved this with monolithic chips, cramming ever more computing power into a single die. AMD, however, has been pursuing the chiplet route for the last decade, which initially seemed like a trivial idea, especially to companies such as Intel.
As we approach the lithographic limit, monolithic chips look increasingly like a dead end, and the industry will thus likely accelerate its adoption of the chiplet structure.
Su: “You’re right, it was not the obvious strategy. However, we had a vision. I had a vision. The vision was: We’re not the cheaper guy either, right? That’s not a long-term, sustainable business strategy. You’re always going to be number two. So, we had to say, “What do we think we could be best at?” In our industry, there are also inflection points that are there, and what I saw was an opportunity. If you take a look at the five-year arc, not the two-year arc, Moore’s law was changing.”
Chiplets Versus Monolithic Design:
Thus, we have two different views on how to make chips:
Monolithic chip design: many components are crammed into a single large die.
Chiplet: multiple smaller chiplets are integrated to function as a singular unit.
To outline a brief comparison between chiplets and monolithic design structures:
Monolithic Chip Design
A single, large die integrates all components (CPU, GPU, memory, etc.).
Benefits:
High-speed, low-latency communication between components.
Simpler power and thermal management.
Historically the standard for high-performance computing.
Drawbacks:
Yield issues: Larger dies are more prone to manufacturing defects, reducing production efficiency.
Scalability challenges: As transistor density increases, designing and fabricating larger chips becomes complex and expensive.
Thermal constraints: Managing heat across a large die can be difficult.
Chiplet-Based Architecture
A processor is built using multiple smaller dies (chiplets) that communicate via a high-speed interconnect.
Benefits:
Higher yield: Smaller dies have lower defect rates, improving production efficiency.
Scalability: Different chiplets can be developed and upgraded independently.
Cost efficiency: Fabricating smaller chiplets is cheaper than manufacturing a single large die.
Customization: Chipmakers can mix and match chiplets for different applications.
Drawbacks:
Interconnect latency: Communication between chiplets introduces some latency.
Complexity in integration: Requires advanced packaging and interconnect technologies (e.g., AMD’s Infinity Fabric, Intel’s Foveros).
Software optimization: Applications may need optimization to leverage chiplet-based architectures efficiently.
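The yield argument in the lists above can be made concrete with the standard Poisson die-yield model, Y = e^(−A·D₀). The defect density and die sizes below are illustrative assumptions, not published figures:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.001  # assumed defect density: 0.1 defects per cm^2

mono_yield = poisson_yield(800, D0)     # one large 800 mm^2 monolithic die
chiplet_yield = poisson_yield(200, D0)  # one 200 mm^2 chiplet

# A defective chiplet is discarded alone; a defect in the monolithic die
# scraps all 800 mm^2. Relative silicon cost per *good* mm^2:
print(f"monolithic yield {mono_yield:.1%}, chiplet yield {chiplet_yield:.1%}")
print(f"cost per good mm^2: {1/mono_yield:.2f}x vs {1/chiplet_yield:.2f}x")
```

With these assumed numbers the monolithic die yields about 45% while each chiplet yields about 82%, so good silicon ends up roughly 45% cheaper per mm², broadly consistent with the discount this report discusses below.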
As chip process nodes become more and more advanced, R&D and production costs continue to rise and yields decline: physical bottlenecks are dragging down the pace of Moore’s Law. Chiplets, comparable to Lego bricks, are becoming a common choice for AMD, Intel, and other companies that wish to continue Moore’s Law.
In the past, multiple IP cores were integrated and packaged into a single chip; the chiplet method, by contrast, can combine chiplets designed and packaged by different companies to build a more economical and efficient chip system.
This new design method not only greatly simplifies the complexity of chip design, but also effectively reduces design and production costs. Omdia, a well-known market research organization, predicts that the global market for chiplets will expand to US$5.8 billion in 2024, a 9-fold increase from US$645 million in 2018. In the long run, the chiplet market is expected to grow to US$57 billion by 2035.
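Those Omdia figures imply steep growth rates; a quick sanity check on the quoted numbers (in billions of US dollars):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

print(f"2018-2024: {5.8 / 0.645:.1f}x total, {cagr(0.645, 5.8, 6):.0%} per year")
print(f"2024-2035: {cagr(5.8, 57, 11):.0%} per year")
```

The 2018 to 2024 forecast indeed works out to roughly 9x, an implied ~44% annual growth rate, slowing to an implied ~23% annually out to 2035.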
The lithographic reticle limit is this: at smaller and smaller sizes, the physical bottlenecks of chipmaking become harder and harder to overcome. Especially in recent years, as nodes have advanced to 10nm, 7nm, and now 5nm, the problem is no longer just a physical obstacle. The further nodes evolve, the higher the cost of miniaturisation, and fewer design companies can bear the economic burden.
According to public reports, the design cost at the 28nm node is about US$50 million; at the 5nm node, total design cost has soared to more than US$500 million, equivalent to more than 3.5 billion yuan.
Keeping Moore’s Law alive is ultimately about economics. If R&D and production costs cannot be reduced, they become a terrible financial burden for chip giants and start-ups alike.
Fortunately, whenever Moore’s Law has seemed to approach its end, scientists and engineers have come up with breakthrough technologies that push the apparent end further into the distance.
The modular design based on chiplets is one of the most critical ideas to solve the cost problem.
The Three Major Values Of Chiplets:
Chip makers buy ready-made parts (soft or hard IP) from different companies, add their own specialised parts to make a complete chip, and then have the chip fabricated.
The benefits of a chiplet-based chip over a monolithic one are evident:
a) firstly, the development of chiplets is faster.
b) secondly, the development cost of chiplets is lower.
This is because a chiplet-based chip is composed of different modules: designers can choose the most advanced technology for a specific part of the design and more mature, inexpensive technology for other parts, thereby saving on overall costs.
For example, AMD’s second-generation EPYC server processor (codenamed “Rome”) uses a chiplet design, combining more advanced CPU modules manufactured on TSMC’s 7nm process with a more mature GlobalFoundries 12/14nm I/O module. 7nm meets the demand for high computing power, while 12/14nm reduces manufacturing costs.
The benefit is that the chip area fabricated on the 7nm process is greatly reduced, and the use of more mature process modules helps improve overall yield. On the whole, the more CPU cores, the more obvious the cost advantage of the chiplet combination.
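The economics of that mixed-node split can be illustrated with toy numbers. The per-mm² costs and die areas below are assumptions for illustration, not actual TSMC or GlobalFoundries pricing:

```python
# Hypothetical silicon costs per mm^2 (assumed, not real foundry prices)
COST_7NM = 0.10   # $/mm^2, leading-edge node
COST_12NM = 0.04  # $/mm^2, mature node

compute_area, io_area = 300, 400  # mm^2, an illustrative die-area split

all_leading_edge = (compute_area + io_area) * COST_7NM
mixed_nodes = compute_area * COST_7NM + io_area * COST_12NM

saving = 1 - mixed_nodes / all_leading_edge
print(f"${all_leading_edge:.0f} vs ${mixed_nodes:.0f} ({saving:.0%} saved)")
```

Under these assumptions, fabricating only the compute dies on the leading-edge node cuts silicon cost by about a third, before even counting the yield benefit of smaller dies.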
c) finally, chiplets can flexibly meet different functional requirements.
On the one hand, the chiplet solution has good scalability. For example, after constructing a basic die, only one die may be used for laptops, two for desktop computers, and four for servers.
On the other hand, chiplets enable heterogeneous processors, combining different processing elements such as GPUs, AI accelerators, and others in any number, providing richer acceleration options for various application needs.
In essence, chiplets can yield almost the same performance as the top monolithic chips, but at a roughly 40% discount: if something goes wrong in production, you do not have to throw the whole chip away, just one or a few chiplets. Yields are thus much higher, and so, across the board, is organizational agility. This has translated into AMD giving Intel a really hard time in the CPU space, and into the company reaching new highs, quarter after quarter.
Vitally, the corporate DNA and culture that go into producing chiplets are vastly distinct from those of monolithic manufacturing. This, in my view, shall become increasingly important. The chip market is changing rapidly, and so too is the nature of AI. Companies that can adapt and act with agility are thus vastly better positioned in this dynamic market.
I argue that NVIDIA and Intel are now moving towards chiplets, albeit NVIDIA seems to be doing so tentatively. Yet AMD has been at this for a decade. The countless business processes, cultural elements, and collective wisdom AMD has developed during this period will serve it well going forward. This is another example of the innovator’s dilemma: along comes a non-incremental innovation that gradually turns the industry on its head, with incumbents initially dismissive in the early stages, only to pivot once the main contender is marching ahead.
Interestingly, when AMD initially pursued their chiplet structure in 2014, many individuals thought that this was a terrible idea. In fact, Lisa Su has explained that – at the time – chiplets were “highly speculative” and many engineers internally were frankly sceptical about the use cases and viability of chiplets. This was because nobody had actually tried this chiplet structure beforehand.
This is especially interesting: namely, the countless correct bets that Lisa Su has made as CEO, many of which have either come to fruition or shall do so over the following decade. What makes this even more stunning is that Su made these bets while the company was on the verge of bankruptcy. Before she took over, the company had a stock price of $4 per share, $2.5B in debt, and little-to-no viable product roadmap ahead.
As if that were not enough, Su, while turning the company around, betting on the novel chiplet structure, and financially restructuring the business, also managed to disrupt Intel’s CPU business, gaining considerable market share on the back of that early bet on the then-novel chiplet structure.
Culture:
Su, frankly, has achieved the impossible. This is one of – if not the most – stunning corporate turnarounds I have ever seen.
“She embarked on a radical refocusing of AMD, including a big bet on a revolutionary (read: then-experimental and untried) way of putting together processors from smaller “chiplets.” Prior to this approach, you made more powerful microprocessors by—short version here—making bigger and bigger microprocessors, a model that had become unsustainable. Su and her team decided to go for a more modular—and hopefully scalable—way of achieving the same goals by packaging chiplets together. AMD dubbed it their “Zen architecture.”
“The old AMD would’ve said, ‘Well, what’s Intel doing? Let me make sure that I’m doing what they’re doing,’” Su says. “This AMD said, ‘Let me do what I think is the right thing, and let’s bet on ourselves.’ If you look today, by the way, all of our competition is doing what we’re doing, which is chiplets.”
This shows that AMD was previously obsessed with its competitors, not its customers. Under Su, this focus changed: the company concentrated on building quality product roadmaps over a multi-decade horizon, irrespective of whether the market of the moment understood or agreed with their philosophy.
“What I saw was that we had a lot of the pieces, they just weren’t quite put together correctly. We were always trying to be somebody else. We were, “How do I compete in this area or that area?”
There are countless things to say about this. But what interests me most in this story is the corporate culture that I believe has taken root in the company as a result of this mission and turnaround. Listening to Lisa Su, it sounds like the company has an “underdog mentality”, one that shall yield significant results in the future. The importance of an iconoclastic corporate culture cannot be overstated. Just as the personality of an individual determines his fate, so it is with companies.
For instance, in her interview with TIME, Su offered various fascinating insights into the culture at AMD.
“So, part of the culture that I wanted to build was clarity, focus, ambitious, but also, we’ve got to move fast. We have to be agile. That’s our recipe for success.”
Su also explains how she motivated employees, who frankly were utterly demoralised when she took over as CEO. Su explains:
“You want to be ambitious. You want to build the best. You don’t want to build the second best. Getting in front of 8,000 engineers and saying, “Hey, our strategy is to be the second-best,” that’s hard. You say, “Look, we’re going to be ambitious. We’re going to make some bold bets. There are some things we’re not going to do.” One of the questions that my board asked me quite clearly was, “Look, maybe you should be in the low-power business. Build chips for mobile phones.” Everybody wanted a new tablet or a new phone”
I said, “No, no, no, we’re not those types of people. That’s not fundamentally what we’re going to be good at. We’re going to be good at building the biggest computers.”
This indicates a corporate culture obsessed with hard challenges; an underdog mentality ingrained in the company’s DNA.
Vitally, the semiconductor industry is unique insofar as it runs on prolonged roadmaps, unlike many other industries. For instance, the chiplet architecture that AMD bet on in 2014 is only now truly coming to fruition, despite its earlier utilisation in the CPU market. Therefore, management ought to have a very well-laid-out product roadmap.
Yet, the issue here is that not all bets will be correct: I like this excerpt from the TIME interview, which gives us a really fascinating insight into the corporate culture.
The magic in our business is you have to have the right amount of risk-reward, because you’re not always going to be right. There are some decisions where, boy, you know, that didn’t quite turn out the way I wanted, but you have to make enough right. Particularly on these larger decisions, we spend a lot of time debating among ourselves and then also ensuring that whatever we build has to be leadership.
But what I also like to do is to talk to the people who are doing the work. I like to believe that, as a leader, one of the things that is most important is always knowing and getting a broad spectrum of input. As much as I love my staff, they’re only one input. Talking directly to the engineers, talking directly to customers, sometimes talking with people in the labs. I love going into the labs and just saying, “Hey, how’s it going?” Because you don’t always hear everything sitting in your office, and that’s been very helpful for me.
What stands out to me is the structure of the company. Ideally, I always look for flat corporate hierarchies, meaning that the flow of information between employees and senior management is not halted by the layers of bureaucracy which inevitably build up in large organisations.
Su is on the ground: she speaks DIRECTLY to engineers, and engineers can speak DIRECTLY to her. This keeps information flowing and mitigates stagnation, a common occurrence in large organisations. It tells me that the company has little time for internal politicking, prioritising results and information instead.
Further, Su also explains the importance of constant feedback:
I learn a lot from reading. I read a lot about what people say about us. I don’t always like what they say. Sometimes I like it, sometimes I don’t, but it gives me a perspective of how people are viewing our products.
She willingly leans into criticism and seemingly keeps an adequate finger on the pulse of her critics. This is vital for ensuring feedback reaches the CEO, thus closing the loop on errors.
Su continues:
What we’ve done well is we do have a learning culture as a company. We instituted something called the Next 5 Percent, and I remember how this started. I was actually in Europe doing a town hall, and it was probably after a not-so-good quarter, and somebody asked me, “What do you want the company to be thinking about?” I said, “Look, what I want the company to be thinking about is, no matter what we do, we have to learn from it. There is a next 5 percent. Every situation that we’re in, we can make it at least 5 percent better. If we think about that, it helps us drive that learning culture.”
This idea of the Next 5 Percent, for me, was, “Let’s make sure we’re doing lessons learned after every major project.” No matter how good it was, it can always be better.
That’s an important aspect of our culture that I believe we’ve gotten ingrained in the culture.
This is a culture obsessed with learning and improvement. It is hard to overstate how vital this is. There is a massive difference between saying this and actually doing it, and it seems that this culture is genuinely ingrained into the fabric of the company.
Another instance of this desire for learning and constant iteration is present here:
We’re probably running 150 pilots across the company. In any given function there’s, “How do you use AI to accelerate software development? How do you use AI to accelerate test plans? How do you use AI to help you track quality issues?” Every one of these has a different proof of concept, and so it does require tone from the top, which is change management.
Lisa Su is currently running some 150 pilots across the organisation, trying to determine the best course of action to take. This kind of iteration is sometimes referred to as an “innovation stack”, a term commonly associated with Spotify, which continually iterated on its singular product as a means of outpacing large incumbent competitors. I see the same dynamic here: constant iteration and constant feedback loops, accelerating the business in ways that are difficult to quantify.
Another nugget that gives us insight into AMD’s corporate culture:
CEO: Leaders wanted to hold onto their best people. Nobody wanted their best person to go somewhere else. What had really helped me at IBM is that every two years I did a different job, so I got a lot of experience that way. So here, we implemented a process where we would review talent with my staff basically every month. The purpose of the talent review was, “Hey, who are our top people? Who’s potentially ready for another role? What are the other roles that are really important roles?”
CEO: My thought process is, “Look, put your best people in your toughest and most important problems, and if they succeed, then wonderful. They’ve learned a lot, and we’ve now grown a new leader.” That’s how I felt I had those opportunities. That’s a fairly active process for us.
This is fascinating: at the “atomic” level, a company is merely a collection of individuals gathered towards a set goal. Yet the way those individuals are organised, through culture and structure, directs the outcome of the operation.
Thus, an obsession with talent, not merely its acquisition but also its retention, is vital for a successful company. One story about talent that I have been obsessed with comes from CEO Alex Karp of Palantir Technologies. Rumour has it that Palantir didn’t hire a single individual for a whole year. Why? Not because they didn’t want to, but because, despite recruiting from the top schools, the company simply couldn’t find anyone who reached the competency levels they desired.
A similar tale is present at AMD. Lisa Su analyses the best talent and ensures that it is recognised, rewarded, and deployed effectively. The best talent is placed in the most complex situations, as a means to test them and to ensure they are always learning. This is vital for retention: competent people want to work on difficult problems.
Lisa continues:
CEO: We do—in every organization. In every one of our major organizations, we run an organizational HR map, and we say, “Hey, these are the important roles. These are important people. Let’s make sure that the important people are in the important roles.” And then we’re very active on.
Moreover:
CEO: When someone is actually good enough to actually tell me when I’ve screwed up, that’s when you learn, and so that’s what I try to practice. I say “try to practice” because nobody really wants feedback. Feedback is always kind of painful.
Once again, in this latter passage, we see an emphasis on feedback cycles: closing the loop associated with error.
In summary, AMD’s culture consists of extraordinary properties, including:
Management actively ensuring that everyone in the company feels connected to the mission & feels as if they have a real opportunity to make an impact in the industry.
Employees are motivated and have plenty of freedom.
Su uses the term “extreme communication”: employees run towards problems, facilitating the transparent flow of information.
We can see that AMD under Su has overcome a series of daunting challenges, all of which have now made chiplets a reality. The whole picture speaks of world-class management and culture.
While Intel and Nvidia are also excellent companies, what especially stands out about AMD is its product roadmap. Especially pertinent to the chip industry are successful “bets”: product roadmaps that often take decades to come to fruition. We can see this with chiplets, yet this is only one side of the coin.
Thus, successful chip companies must make the right technical bets.
What we have seen thus far is the result of the roadmap that Su (and others who take less credit for it) put in place years back. It denotes an ability to allocate capital well, and this is perhaps the most important attribute of the company. Pursuing the chiplet route was an excellent, contrarian decision. Now we have the same capital allocators, with far more resources at their disposal, leading the way around Moore’s limit.
Su expresses this:
The way I think about it is, AMD has been investing in AI for many, many years. This is not a new thing. We didn’t just decide 18 months ago we were going to invest in AI. It was, the strategy was actually very clear that our first job was general purpose computing, and that’s what we did with the Zen architecture.
My comment is, these technology arcs are usually not one, two or three years, they’re usually five to 10 years, if not more. This AI opportunity is perhaps the largest inflection that I’ve seen in my career.
We’re here, we went all-in on a very novel technology, the Zen architecture and what we call chiplets. It hadn’t been done in our industry, and I remember spending time with the technical team when they came to me with this recommendation, and I’m, like, “Okay. Why do we think this is going to work? It hasn’t been done before. There’s a reason it hasn’t been done before. Why do we think it’s going to work?”
Especially notable in the chip industry is the necessity of making long-term bets, most of which do not come to fruition within just a few years, but rather over an arc of at least five years, perhaps even longer.
Therefore, a successful product roadmap is absolutely vital if AMD is to truly succeed.
Xilinx & Pensando:
Semiconductors are no longer merely about CPUs and GPUs. Rather, they are about moving electrons around and generating insights. Artificial intelligence shall become pervasive across our world.
As Su has said in the past, the future of the chip market is not a singular chip that covers all workloads and use cases. Rather, different types of computing shall be required for the novel jobs and workloads that are demanded. Thus, she explained, you need CPUs, GPUs, FPGAs, and other forms of chips, which shall serve this diverse range of future workloads.
CEO: Then, we also believe that you need different types of computing for all the jobs or workloads that there are in the world, and so you need CPUs, GPUs, FPGAs. That was our acquisition of Xilinx. I’ve always believed that you need all of these components, and our investments in this area have really accelerated as AI has become such a bigger platform. My conversation about AI is: We’re still in the early innings. There is no one size fits all when it comes to computing. Depending on what you’re trying to do, you’re going to need different technology.
Vitally, there is no one-size-fits-all model for chips. Depending upon what you are trying to do, you are going to need different technologies.
The issue with the market today is that everyone believes the market is merely concerned with CPUs and GPUs. This is because computation is not thought of at a fundamental level. Computation, at the most fundamental level, is merely the manipulation of information through physical processes.
We will enter a world whereby more and more technologies are connected to the internet. This is commonly touted as “Edge AI”. For CPUs and GPUs to drive incremental productivity, they must sit in a highly connected and self-optimising environment.
In the following decades, the economy will shift towards collecting data from endpoints and processing it (training AI) to then yield insights (AI inference). This will mean that our economy shall move from being terrible at making predictions to being great at them.
The Xilinx and Pensando acquisitions set AMD up very well for this future.
Pensando:
Starting with Pensando: the company specialises in making data centres stateful, which is a key part of AMD’s future strategy. A stateful data centre creates an environment in which systems can self-optimise, helping industries run Industry 4.0 infrastructures more efficiently.
Data centres have evolved over the years. Before 2010, most data traffic moved in a north-south direction—requests would come into the data center, and responses would be sent back. These data centers ran on bare metal, meaning everything was processed on physical servers.
Then came Gen 3 data centers. With virtualization, applications were broken into microservices—different parts of an application (like logic, databases, and storage) were spread across multiple virtual machines. This created east-west traffic, where data moved between different components inside the data center. While this improved flexibility, it also made managing data centers much more complex.
A stateful data center eliminates this complexity. It allows data to flow seamlessly within the system without putting extra strain on CPUs or GPUs. As a result, managing data centers becomes much easier, even for companies that want to scale up to hyperscale operations. Additionally, as stateful data centers process north-south and east-west workloads, they generate valuable data. This data can be fed into machine learning (ML) algorithms, helping them automate tasks like cybersecurity (XDR), analytics, and microsegmentation.
With Pensando’s stateful technology, AMD can now create an environment where its computing units operate efficiently for the next decade or more. This ensures that their customers have the best possible infrastructure for future computing needs.
TLDR: a stateful data centre continuously remembers and tracks operations, allowing it to self-optimise, reduce complexity, and efficiently manage data flow without overloading CPUs or GPUs.
This is facilitated by Pensando’s hardware, the DPU (data processing unit), but it is mostly enabled by Pensando’s stateful software services.
Pensando: “The software can orchestrate a discrete set of switches as a single entity.”
Forrest Norrod, Senior VP at AMD: “But seriously, with Pensando, it is not just the hardware. Around 90 percent of the engineers at Pensando are software engineers. I don’t think people realize that. I mean, they’ve got a complete, hardened enterprise and cloud stack that covers everything.”
Thus, the future of the data center is the stateful data center.
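To make the "remembers and tracks operations" idea concrete, here is a minimal sketch of the kind of per-connection flow tracking a DPU performs. All names here are hypothetical illustrations, not Pensando's actual API; the point is that once a flow's first packet has been classified, subsequent packets are handled from remembered state rather than burdening a host CPU each time.

```python
# Toy model of DPU-style stateful flow tracking (illustrative only).
class FlowTable:
    """Tracks live connections and per-flow packet counts."""

    def __init__(self):
        self.flows = {}  # (src, dst, port) -> packets seen so far

    def handle(self, src, dst, port):
        key = (src, dst, port)
        if key not in self.flows:
            # First packet of a new flow: this is where the expensive
            # policy decision happens. A stateless design would repeat
            # this work for every single packet.
            self.flows[key] = 0
        self.flows[key] += 1  # later packets just update remembered state
        return self.flows[key]

table = FlowTable()
table.handle("10.0.0.1", "10.0.0.2", 443)
table.handle("10.0.0.1", "10.0.0.2", 443)
print(len(table.flows))                           # -> 1 (one tracked flow)
print(table.handle("10.0.0.1", "10.0.0.2", 443))  # -> 3 (third packet of it)
```

The accumulated flow state is also exactly the telemetry that can feed the ML-driven security and analytics services described above.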
Xilinx:
Xilinx, another of AMD’s acquisitions, builds FPGAs (field-programmable gate arrays), which are related to ASICs (application-specific integrated circuits). ASICs are chips customized for a specific use; an FPGA, by contrast, can be configured into virtually any ASIC-like design with a little code, rather than an entire manufacturing run.
ASICs, and by extension FPGAs, can be far more efficient than GPUs at running inference on AI models. Other computing units can therefore be paired with FPGAs to equip them with inference capabilities.
As noted by Su, a diverse range of chip use cases will characterize the industry’s future. Through FPGAs, chips can morph accordingly at marginal cost as you move from working with one model to another. Only FPGAs can do that, and Xilinx is the clear leader in the field.
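The mechanism behind this morphing ability is worth a brief sketch. Real FPGAs are grids of lookup tables (LUTs): each k-input LUT stores a 2^k-entry truth table, so loading a new table rewires the logic in place, with no new silicon. The toy class below is an illustration of that principle, not any vendor's toolchain.

```python
# Minimal sketch of why an FPGA can "become" any circuit (illustrative only).
class LUT:
    """A k-input lookup table, the basic logic element of an FPGA."""

    def __init__(self, truth_table):
        # truth_table[i] is the output when the input bits, read as a
        # binary number, equal i. A k-input LUT needs 2**k entries.
        self.truth_table = list(truth_table)

    def __call__(self, *inputs):
        index = 0
        for bit in inputs:  # pack the input bits into a table index
            index = (index << 1) | bit
        return self.truth_table[index]

    def reprogram(self, truth_table):
        # "Reconfiguring" the chip is just loading a new truth table.
        self.truth_table = list(truth_table)

# Configure a 2-input LUT as an AND gate: output 1 only for inputs (1, 1).
gate = LUT([0, 0, 0, 1])
print(gate(1, 1))  # -> 1
print(gate(1, 0))  # -> 0

# Reprogram the same hardware into an XOR gate, with no refabrication.
gate.reprogram([0, 1, 1, 0])
print(gate(1, 0))  # -> 1
print(gate(1, 1))  # -> 0
```

The same physical element implements two different circuits purely through a configuration change, which is why moving an FPGA from one AI model to another costs code, not a manufacturing run.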
As AI becomes part of the economy, one of its most vital components is inference: using trained models to make predictions or decisions. For inference to be useful, it must be fast and cost-effective. This is where FPGAs come into the picture.
Currently, GPUs are widely used for AI tasks, but FPGAs are set to take over, especially in Generation 4 data centers. Unlike GPUs, FPGAs can be reprogrammed on the fly, making them more efficient for AI-specific tasks. They can handle two types of inference:
a) Endogenous Inference: improving the components inside the data centre – like CPUs, GPUs and other hardware – helping them get smarter over time.
b) Exogenous Inference: processing data from outside the data centre, such as analysing applications running on the system.
Imagine a CPU in a data center that learns and improves based on the inferences made by an FPGA attached to it. This means less work for GPUs and a more balanced, efficient AI system.
With these advancements, AMD is no longer just a company that makes CPUs and GPUs. Instead, it is building a smarter environment where all these components work together seamlessly. By focusing on AI inference at scale, AMD is positioning itself as a major player in the future of AI computing, which will have huge business implications going forward.
In this sense, AMD’s experience in connecting different computing units in the form of chiplets will be a vital facilitator, because chiplets enable products to be configured in a highly tailored manner. In fact, AMD has announced that it will “infuse its CPU portfolio with Xilinx's FPGA-powered AI inference engine, with the first products slated to arrive in 2023”. Monolithic design is rapidly becoming antiquated in the semiconductor industry.
Context is vital to understanding the importance of these technologies:
a) Firstly, we shall see an explosion in unstructured data over the following decades.
b) Secondly, the datacentre’s intelligence will extend to the edge.
In effect, many objects will be connected to the internet and running inferences of their own, working through varying types and forms of data.
Summary: AMD as a Foundational Engine of the Computational Age
I. Reframing AMD: From Chipmaker to Computational Infrastructure
AMD is not simply a semiconductor company—it is a computational engine at the heart of modern information processing. Its mission is to transform physical processes into structured outcomes, much like the human brain processes inputs into thought and action. In this way, AMD is building the machinery of the digital age: enabling computation, not just fabricating silicon. In a world increasingly shaped by artificial intelligence, data analytics, edge computing, and automation, the demand for computation is limitless. AMD is uniquely positioned to serve this demand across diverse platforms and workloads.
II. Moore’s Law and the Emergence of Chiplets
The original formulation of Moore’s Law predicted an exponential increase in transistor density, but as we approach sub-5nm fabrication, we encounter physical and economic constraints, including the lithographic reticle limit on maximum die size. AMD saw this coming. While competitors continued to pursue monolithic chip design, AMD, under Lisa Su’s leadership, pioneered the use of chiplet architecture. This modular approach, once dismissed as speculative, is now becoming the industry standard.
Moore himself foresaw the shift: “It may prove more economical to build large systems out of smaller functions.” AMD’s bet on chiplets, starting in 2014, was bold and visionary. It has enabled them to overcome the scalability, yield, and cost challenges associated with monolithic chips. In fact, AMD’s decision to adopt chiplets laid the foundation for its Zen architecture and its current dominance in the CPU space.
III. The Strategic Advantages of Chiplet Architecture
Chiplets represent a modular, scalable, and cost-efficient model for building processors. By separating components into smaller dies, AMD improves yields, reduces manufacturing complexity, and can mix advanced and mature fabrication processes (e.g., 7nm for CPUs, 12nm for I/O) to optimize for cost and performance. This architectural agility has allowed AMD to produce chips that rival or exceed the performance of monolithic competitors—at lower cost and with higher flexibility.
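The yield claim above can be made tangible with a back-of-the-envelope calculation using the classic Poisson defect model, in which the probability that a die has zero defects is exp(-area × defect density). The defect density below is an assumed, illustrative figure, not AMD or foundry data.

```python
# Sketch of the chiplet yield argument under a Poisson defect model.
from math import exp

DEFECT_DENSITY = 0.2  # defects per cm^2 (assumed for illustration)

def die_yield(area_cm2, defect_density=DEFECT_DENSITY):
    """Fraction of dies with zero defects: exp(-area * defect density)."""
    return exp(-area_cm2 * defect_density)

# One monolithic 8 cm^2 die vs. 2 cm^2 chiplets covering the same area.
monolithic = die_yield(8.0)
chiplet = die_yield(2.0)

print(f"monolithic yield:  {monolithic:.1%}")  # ~20% of big dies are good
print(f"per-chiplet yield: {chiplet:.1%}")     # ~67% of small dies are good
```

With these assumed numbers, roughly 80% of wafer area is scrapped in the monolithic case, but a defect in the chiplet case kills only the small die containing it, so far less silicon is wasted per defect. A full product still needs several good chiplets, but defective small dies are cheap to discard and can often be binned into lower-tier products, which is where the cost advantage comes from.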
Furthermore, chiplets introduce a heterogeneous computing paradigm. They enable AMD to combine CPUs, GPUs, AI accelerators, and FPGAs in tailored configurations suited to specific workloads, especially as AI workloads diversify across inference and training tasks. This flexibility will be crucial as computing expands beyond general-purpose use into highly specialized domains.
IV. Culture as a Competitive Advantage
AMD’s success cannot be understood without recognizing the cultural transformation led by CEO Lisa Su. Taking over a nearly bankrupt company in 2014, Su reshaped AMD with an underdog mentality, high-agency thinking, and bold decision-making. She rejected the safe route—copying Intel—and instead bet the company on novel technologies like chiplets.
AMD’s culture now prioritizes clarity, ambition, agility, and speed. Engineers are encouraged to take risks, iterate constantly, and learn from failure. The “Next 5 Percent” initiative, Su’s flat communication structure, and her direct engagement with technical teams reflect a high-feedback, high-autonomy environment—rare for a company of AMD’s scale. This culture has been the bedrock of its strategic foresight and rapid execution.
V. Roadmap Discipline and the Power of Technological Bets
What makes AMD particularly impressive is its ability to allocate capital into long-duration technological bets. Chiplets were just one example. Today, AMD’s product roadmap extends into the heterogeneous future of AI and edge computing, a future defined not by a single chip type, but by flexible computational frameworks.
In the chip industry, product roadmaps often require 5–10 years to materialize. This means companies must have the foresight and conviction to make accurate long-term bets. AMD, again under Su’s guidance, has repeatedly demonstrated this foresight—placing calculated bets on chiplets, then acquiring Xilinx and Pensando to complement and extend its heterogeneous computing strategy.
VI. Xilinx and Pensando: Building the AI-Ready Future
AMD’s acquisition of Xilinx provides it with market-leading FPGA capabilities—programmable silicon that allows chips to adapt to specific workloads, especially in AI inference. FPGAs can outperform GPUs in certain scenarios, particularly when inference speed and flexibility are critical. This is crucial for Generation 4 data centers and edge AI applications, where speed and energy efficiency are paramount.
Pensando, on the other hand, enables AMD to build stateful data centers—systems that can self-optimize, reducing CPU/GPU load and enabling infrastructure-wide intelligence. With Pensando’s DPUs and software stack, AMD can deliver hyper-efficient, scalable environments for AI workloads, industry 4.0 infrastructure, and cloud services. Together, these acquisitions transform AMD from a component provider into a platform builder.
VII. Strategic Implications and Long-Term Outlook
What AMD is doing is rare: it is building the foundational stack for the next era of computing. As AI permeates every layer of the economy, from cloud inference to industrial automation, computation becomes not just a product but an ecosystem. AMD, with its chiplet architecture, heterogeneous computing vision, agile culture, and strong roadmap, is positioned to lead.
This isn’t a short-term trade. It’s a structural bet on the most competent capital allocators in the industry, backed by a philosophy of agility, innovation, and technical excellence.
Conclusion: A Conviction Investment in the Future of Computation
To invest in AMD is to invest in the infrastructure of intelligence itself—not just in chips, but in a company that understands how to turn physics into information, and information into insight. AMD is a symbol of the deep, underappreciated transformation of technology markets: away from monolithic rigidity and toward modular, intelligent, and adaptive architectures.
The company has already proven it can execute on long-term visions. Now, with the resources, partnerships, and culture to extend its dominance, AMD is not just a semiconductor company; it is an intelligent computation company, with a multi-decade advantage ahead.