3D InCites Podcast

From Electrons to Photons: ASE's Vision for Sustainable AI

Francoise von Trapp Season 5 Episode 14


The race toward more powerful AI carries a hidden cost that's becoming impossible to ignore: skyrocketing energy consumption. Did you know AI is projected to devour 10% of global electricity by 2030? This staggering figure has even forced tech giants to delay their sustainability goals.

Enter ASE's Executive Vice President Yin Chang, who reveals how the world's largest semiconductor packaging company is tackling this challenge head-on. The solution lies in revolutionary approaches to power delivery and data transmission. By integrating voltage regulators directly into substrates, power can be delivered mere millimeters from compute chips, drastically reducing energy loss. Even more promising is the shift from electrons to photons for data transmission, which slashes power consumption by an impressive 6x, from roughly 30 picojoules per bit down to less than 5.

At the heart of these innovations is ASE's VIPack platform, a comprehensive toolbox that empowers system architects to create maximally efficient AI systems. Moving compute components closer together minimizes power requirements, while co-packaged optics enable the crucial electron-to-photon conversion for longer-distance communication. These technologies aren't just theoretical: industry leaders like NVIDIA and AMD are already implementing them, with significant efficiency improvements expected within five years.

The conversation extends beyond data centers to the future of AI at the edge. As foundry processes advance toward smaller nodes, the voltage requirements decrease, making AI more viable for battery-powered devices. Chang envisions a near future where personal devices run limited AI models locally, offering enhanced privacy by processing sensitive data without cloud dependencies.

Discover how advanced packaging is becoming the unsung hero in balancing our appetite for AI innovation with the planet's energy limitations. Follow ASE Global on LinkedIn or visit aseglobal.com to learn more about their pioneering work in sustainable semiconductor solutions.




Support the show

Become a sustaining member!

Like what you hear? Follow us on LinkedIn and Twitter

Interested in reaching a qualified audience of microelectronics industry decision-makers? Invest in host-read advertisements, and promote your company in upcoming episodes. Contact Françoise von Trapp to learn more.

Interested in becoming a sponsor of the 3D InCites Podcast? Check out our 2024 Media Kit. Learn more about the 3D InCites Community and how you can become more involved.

Francoise von Trapp:

All around us, digital transformation is changing the way people live, work, play and communicate. As a leading provider of semiconductor packaging and test services, ASE plays a significant role in the development of the world's most innovative electronics. Their technologies enable customers to create cutting-edge products that deliver superior performance, power, speed and connectivity. Learn more at aseglobal.com. Hi there, I'm Francoise von Trapp, and this is the 3D InCites Podcast. Hi everyone. You know, one of the things that keeps me up at night is how the rapid growth of AI technology is impacting our energy grid. In fact, did you know that by 2030, AI is expected to consume 10% of the world's electricity? Now, luckily for us, the advanced packaging community is really hot on the case of solving these challenges, and here to talk to me about this is ASE's Executive Vice President, Yin Chang. Welcome to the podcast, Yin.

Yin Chang, ASE :

Oh, thank you.

Francoise von Trapp:

So you know, ASE is the world's largest outsourced semiconductor assembly and test service provider, otherwise known as an OSAT, correct? So if anybody can solve this, you guys can.

Yin Chang, ASE :

Well, we definitely are part of the community trying to understand how to improve the overall power efficiency, and there are multiple ways to improve power efficiency. But the simplest way is trying to move as close to the silicon as possible. So one way is to create these voltage regulator modules that we put directly into the substrate, so we can deliver power as close to the compute SoC as we can. That will improve the overall power efficiency, so we can get more compute with the least amount of power possible, and that will help us resolve some of the AI power hungriness. Second is moving from electrons to photons, which gets a lot of emphasis these days in our space. So ASE is working hard to put co-packaged optics directly on the package, and by simply doing that, compared with today's pluggable silicon photonics modules, we're able to reduce the overall power consumption by 6x, from 30 picojoules per bit down to less than 5 picojoules per bit. So hopefully through that we can slow down the power requirement as AI compute continues to increase.
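
To put those picojoule numbers in context, here is a quick back-of-the-envelope calculation in Python. The 51.2 Tb/s aggregate bandwidth is an assumed, illustrative figure for a modern switch, not one quoted in the episode; the 30 and 5 pJ/bit values are the ones Chang cites.

    # Power = energy per bit x bits per second (1 pJ = 1e-12 J).
    PLUGGABLE_PJ_PER_BIT = 30.0   # today's pluggable silicon photonics modules
    CPO_PJ_PER_BIT = 5.0          # co-packaged optics, per the episode
    BANDWIDTH_BPS = 51.2e12       # assumed aggregate bandwidth, illustration only

    def optics_power_watts(pj_per_bit: float, bits_per_s: float) -> float:
        return pj_per_bit * 1e-12 * bits_per_s

    p_plug = optics_power_watts(PLUGGABLE_PJ_PER_BIT, BANDWIDTH_BPS)
    p_cpo = optics_power_watts(CPO_PJ_PER_BIT, BANDWIDTH_BPS)
    print(f"Pluggable: {p_plug:.0f} W, CPO: {p_cpo:.0f} W, "
          f"{p_plug / p_cpo:.0f}x reduction")
    # Pluggable: 1536 W, CPO: 256 W, 6x reduction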

Francoise von Trapp:

Okay, so let's back up a little bit and talk about what's causing this. I mean, we know we've had a rapid, rapid adoption of AI technology. Did we expect that it would have the impact on the power grid that it's having?

Yin Chang, ASE :

No, I think what the past few years taught us is that, as the algorithms for AI models continue to evolve and accelerate, the compute required to train, post-train, and do inference for those models far exceeds our ability to innovate compute. So what we end up doing is just putting in more and more compute, and as you put in more and more compute, that draws even more and more power. Those are the challenges that we face, because the only thing we know how to do today is build faster processors and put more processors inside the data center, but in essence, that basically creates the power problem that you described earlier. So it's a bit unanticipated, because the acceleration of AI was much faster than we all expected.

Francoise von Trapp:

Okay, so we've seen that companies like Apple and, I think, Meta have pushed out their net-zero goals. They were very focused on sustainability, but because AI is so energy-intensive, they've actually pushed out those goals. And I know that ASE is very, very focused on its ESG and sustainability efforts. So how are you working to help these companies meet their goals by developing the technologies that will address this power hunger?

Yin Chang, ASE :

I think ASE is very committed to ESG. The environment is very important to ASE as a company, so as a company we continue to reduce our carbon footprint. But to help the companies that you mentioned, whether it's Apple, Alphabet, or Meta, one of the things we are trying to understand is what their optimum power requirement and power efficiency is. And, you know, not every model is the same. Not every model requires the same amount of heavy compute. So how do we help them design, from a packaging standpoint, to help their silicon, help their system?

Yin Chang, ASE :

I think one of the things that advanced packaging has done recently is highlight the importance of, or the assistance we can provide to, overall system efficiency. So the system architect and the silicon architect, and now something called the package architect, are working together and saying: you know, maybe I can use photons here; maybe I can use a different power delivery system here in the package. And with the combination of it, I can reduce overall power consumption. I can maybe solve some of the thermal issues that come with high power, and then collectively I don't need liquid cooling, maybe I can do air cooling, so I can reduce the power drain of the overall system: not just the electricity that goes to drive the compute silicon, but the overall power required to drive the whole data system.

Francoise von Trapp:

Right, because it's not just about the silicon.

Yin Chang, ASE :

Yes.

Francoise von Trapp:

It is all of it.

Yin Chang, ASE :

Yes, because the more power you put in, the harder it gets. You need to figure out a way to cool it, and if you need chilled refrigerant, that's even harder. So everything just multiplies in terms of power consumption. What ASE can do on the advanced packaging side is try to reduce the power at the source, and by reducing the power at the source, the subsequent power adders needed to compensate for that additional power can be reduced.

Francoise von Trapp:

Okay, so ASE has a whole VIPack platform.

Yin Chang, ASE :

Yes.

Francoise von Trapp:

That has different elements to support different steps or the different types of advanced packaging. How is that enabling this heterogeneous integration era that you're talking about?

Yin Chang, ASE :

VIPack is a platform that we introduced in late 2022. What it does is really provide a comprehensive toolbox for a system architect or a silicon architect to look at the best way to arrange the compute and the memory requirements. So it enables chiplets, but that's not all it enables; it also enables the conversion from electrons to photons. We are trying to offer the community the ability to be creative, to comprehend what they need and then construct it to a point where they reach the maximum compute they're aiming for, they reach the power that they need, and they also reach the maximum efficiency that they can get. So VIPack is a platform, but it's more than just a platform. It's really this comprehensive toolset that people can use to create their next AI breakthrough.

Francoise von Trapp:

Okay, so you've been talking about electrons and photons and how those can be adjusted or used to improve the power efficiency of an AI package. And I don't know if we call them AI packages. I mean, I know we have AI chips, but then they also become like subsystems, right?

Yin Chang, ASE :

Or packages.

Francoise von Trapp:

So, for instance, when you're using a chiplet architecture, how would you use that to improve power efficiency for the AI chips?

Yin Chang, ASE :

I think for us, you know, the power is really a function of the compute distance.

Francoise von Trapp:

Okay.

Yin Chang, ASE :

So, as you get closer and closer together, there's less power required to drive it, less current required to drive it. The 3D and 2.5D advanced packaging solutions that we offer as part of the VIPack platform give us the minimum distance required for the compute to communicate with a memory die, or communicate with the IO controller. So all this ability to reduce that compute distance is what the VIPack toolbox provides. And in addition to that, we are now able to offer co-packaged optics that allow some of the electrons to be converted to photons. So not only do I gain bandwidth, but I reduce the power that's required to drive that distance. So that is the goal for us: by adding not just one solution but multiple pieces, we can achieve the maximum power efficiency.

Francoise von Trapp:

So where do you use the electrons and where do you use the photons?

Yin Chang, ASE :

So the electrons are typically used within the compute matrix, right? You have an SoC die on top of the IO driver, on top of a memory controller; on top of that, maybe, people are looking at putting 3D memory directly onto the compute structure. All those things are connected with through-silicon vias and copper pillars. Those are all electrons. Now, the minute you need to communicate with the outside, maybe another GPU or another CPU, that is probably best done with photons, because instead of looking at nanometers or microns, now you may be looking at millimeters, and millimeters to centimeters. Those are the distances that draw more power and generate more heat.

Francoise von Trapp:

That's a larger pipeline.

Yin Chang, ASE :

Yes, and it's also a longer pipeline.
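
To illustrate the tradeoff Chang describes, here is a toy model of when photons beat electrons for a link: electrical energy per bit grows with distance, while optics pays a roughly fixed conversion cost. Every constant below is an invented, illustrative value, since real link energies depend on the SerDes, process, and channel.

    # Toy break-even model: electrons for short reach, photons for long reach.
    ELECTRICAL_BASE_PJ = 1.0     # assumed driver/receiver overhead, pJ/bit
    ELECTRICAL_PJ_PER_MM = 0.5   # assumed growth of channel loss with length
    OPTICAL_FLAT_PJ = 5.0        # assumed EIC/PIC + laser cost, distance-independent

    def electrical_pj(distance_mm: float) -> float:
        return ELECTRICAL_BASE_PJ + ELECTRICAL_PJ_PER_MM * distance_mm

    for d_mm in (0.1, 1.0, 10.0, 100.0):
        e = electrical_pj(d_mm)
        winner = "electrons" if e < OPTICAL_FLAT_PJ else "photons"
        print(f"{d_mm:6.1f} mm: {e:5.1f} pJ/bit electrical vs "
              f"{OPTICAL_FLAT_PJ:.1f} optical -> {winner}")
    # Sub-millimeter (within the compute matrix): electrons win.
    # Millimeters to centimeters (package to package): photons win.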

Francoise von Trapp:

Okay, and so how do you convert electrons to photons without it getting too technical?

Yin Chang, ASE :

Well, silicon photonics is really a combination of silicon dies. There is an electronic IC and there's a photonic IC. That EIC and PIC combination basically does the conversion from electrons to photons.

Francoise von Trapp:

Okay.

Yin Chang, ASE :

And then, once converted to photons, the signal is transmitted through a laser diode and sent through a fiber optic.

Francoise von Trapp:

Okay, right, and then does it get converted back to electrons when it gets to the next one?

Yin Chang, ASE :

Yes.

Francoise von Trapp:

It's almost like a relay.

Yin Chang, ASE :

It's like a relay. So basically we shine light down through a pipe, and then, once it reaches the destination, it gets converted back to electrical signals. The electrical signals get processed at that site, and the result can be sent to the next compute die in the chain. So if you look at the NVIDIA NVL72, they connect 72 die together, and the communication between all those silicons is through photons, not electrons.

Francoise von Trapp:

And so the photon transmission, is it faster, and does it generate less heat, than the electrons?

Yin Chang, ASE :

Yes, it requires less power. It's less power-hungry, or more power-efficient. And as far as heat is concerned, there's less heat going through fiber optics. In terms of what the overall system will require: the more optical solutions you use, the wider the pipe, the larger the bandwidth, right? But to achieve the power efficiency, you've got to get close to the die, right? So that's where the CPO comes in.

Francoise von Trapp:

So are there other areas besides co-packaged optics that you are working toward that will also help reduce the power needs of AI?

Yin Chang, ASE :

Well, I think we're working on these integrated voltage regulator modules, where a lot of the first-stage and second-stage regulators are put very close to the silicon itself. So instead of stepping down from 48 volts to 12 volts to 8 volts to 1 volt to 0.8 volts across long distances, we're trying to put all those conversions as close to the silicon as we can, and by doing that I avoid a lot of the power lost in all these long-distance conversions; the more power I lose, the more power I need to put in to recover it. So that's one of the ways for us to create more power efficiency through the overall package system, and it's another way that we're hoping to reduce the power requirement for a given level of compute. Although what we're seeing in the marketplace is people saying, well, if you can give me more efficiency, what I want is more compute, right?
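
A rough sketch of why collapsing that conversion chain helps: end-to-end efficiency is the product of the stage efficiencies, so every extra long-distance stage compounds the loss. The per-stage efficiencies below are assumed round numbers, not ASE figures.

    # Cascaded voltage conversion: total efficiency = product of stages.
    def chain_efficiency(stages):
        eff = 1.0
        for stage_eff in stages:
            eff *= stage_eff
        return eff

    # The chain from the episode: 48 V -> 12 V -> 8 V -> 1 V -> 0.8 V,
    # modeled as four stages at an assumed 90% each.
    long_chain = [0.90, 0.90, 0.90, 0.90]
    # Fewer stages, integrated near the silicon (assumed 93% each).
    short_chain = [0.93, 0.93]

    print(f"Board-level chain:  {chain_efficiency(long_chain):.1%}")   # 65.6%
    print(f"In-substrate chain: {chain_efficiency(short_chain):.1%}")  # 86.5%
    # For 1 kW delivered to the silicon, that is roughly 524 W of
    # conversion loss versus 156 W.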

Yin Chang, ASE :

So this insatiable requirement for compute acceleration is one of the causes for the problem that you described earlier.

Francoise von Trapp:

Okay. And one of the other areas I've heard is still a challenge is thermal issues.

Yin Chang, ASE :

Basically, heat relates to power loss. I think thermal is a function of the power that you put in.

Yin Chang, ASE :

You know, because the more electrons you put through any kind of structure, the more you end up getting some heat, unless there's zero resistance. I think for us, from an advanced packaging point of view, the idea is really to maximize the power efficiency, so that hopefully the amount of current we need for a given compute can be more modest, right? Then the amount of heat that it generates subsequently would not be quite as high.
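
The physics behind that is the resistive-loss relation P = I²R: for the same delivered power, halving the current cuts the heat in the delivery path by four. A minimal illustration, with an assumed 1 milliohm path resistance:

    # Resistive loss in the power-delivery path: P_loss = I^2 * R.
    R_PATH_OHMS = 0.001   # assumed path resistance, for illustration

    def i2r_loss_watts(current_amps: float) -> float:
        return current_amps ** 2 * R_PATH_OHMS

    # Delivering 1000 W at different rail voltages.
    for volts in (1.0, 12.0, 48.0):
        amps = 1000.0 / volts
        print(f"{volts:4.0f} V rail: {amps:7.1f} A -> "
              f"{i2r_loss_watts(amps):7.1f} W lost")
    # 1 V needs 1000 A and loses 1000 W in this path; 48 V needs ~21 A and
    # loses ~0.4 W. Hence: convert down to low voltage as close to the
    # silicon as possible, so the high-current run is as short as it can be.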

Francoise von Trapp:

Okay, so how close are we to solving these challenges?

Yin Chang, ASE :

I think for the power regulators, we are within the next few years. And for CPO, at the leading edge there are already announcements from NVIDIA and AMD that they are actively looking to implement it in their next-generation data center structures. So I think within the next five years you will see a dramatic increase in efficiency in the overall system, the way we want it. From the package point of view, there's going to be more communication in photons, and the power delivery will be much more efficient as we deliver power directly into the silicon.

Francoise von Trapp:

Okay. You know, I think back to, I don't know, 10 or 15 years ago, before we even knew what AI was. You know, everybody talked about AI, but we never expected it to be as big as it is already. And the main driver of all of this technology was our smartphones. So at that point everything was small form factor, smaller chips, and now we're looking at these larger die sizes. How does that impact, from ASE's perspective, the decision of what to develop? How do you know what's coming next, and how do you leverage all of your knowledge to address all of these different markets?

Yin Chang, ASE :

I think for us, high-performance computing and AI have driven ASE to accelerate our innovation. So VIPack is an example of such innovation: creating, like we mentioned earlier, as many tools as we can to help the HPC and AI silicon architects or system architects solve some of their problems. And what we found is that some of the tools we create may be able to trickle down to more consumer-based products, and some of the application processors you mentioned earlier may benefit from the development that we're currently doing on the AI side. So the package may not grow forever, but some of the tools we can extract and scale for the large consumer market. The cell phone could easily be one of them.

Yin Chang, ASE :

We're already looking at some of the CPU designs for laptops that can leverage some of the tools that we use, namely fan-out solutions. So I think the things that we're doing for the most advanced packaging requirements are able to trickle down to affect our daily lives. Maybe it even extends to the robotics that people talk about, right? All those things require higher compute, maybe not to an AI data-center standard, but those technologies can easily migrate or transfer to more day-to-day consumer products.

Francoise von Trapp:

And that's the cycle we see, right? Things always start where it's more expensive and high-performance, because the return on investment is there, in high-performance computing. It's not at the consumer level.

Yin Chang, ASE :

Right.

Francoise von Trapp:

But you know, I was just thinking, as you were talking: we're talking about these big AI die, right? What needs to happen to get the power of that kind of compute reduced and shrunk down, and we've seen it happen with other things, to work in a smartphone down the road?

Yin Chang, ASE :

People are already working on shrinking the overall silicon. So you know, as we go from 2 nanometers to 1.8 nanometers and below, the amount of voltage required to drive those circuits can be reduced, right? So that can help in terms of the sort of battery operation that we need for cell phones. But all those things still require a lot of data, a lot of bits. So how do you connect the memory to those compute devices? Some of the advanced packaging solutions can help solve those problems. So I think it's really a combination of the advances in foundry silicon design with advanced packaging to solve the transfer from today's high-power, high-current solutions down to maybe low-power, micro-current solutions.
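
The voltage point maps onto the standard dynamic-power relation for CMOS, P ≈ C·V²·f: power scales with the square of the supply voltage, so even a modest drop at a smaller node pays off quadratically. A small sketch, with capacitance and frequency held fixed and the voltages assumed for illustration:

    # Dynamic CMOS power scales roughly as C * V^2 * f.
    def relative_dynamic_power(vdd: float, vdd_ref: float = 0.8) -> float:
        return (vdd / vdd_ref) ** 2

    for vdd in (0.8, 0.7, 0.6, 0.5):   # assumed supply voltages
        print(f"Vdd = {vdd:.1f} V -> "
              f"{relative_dynamic_power(vdd):.0%} of baseline power")
    # 0.8 V -> 100%, 0.7 V -> 77%, 0.6 V -> 56%, 0.5 V -> 39%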

Francoise von Trapp:

So, for instance, if we're using our phone now, you can do an AI search, right? Like you go on Google. But that's not on the phone; that AI function is in the data center. We don't really have AI functions yet in our handsets, do we?

Yin Chang, ASE :

Not in the way people think of ChatGPT, for example, right. But I think people are envisioning a subset of the AI functions residing on your phone, able to access your calendar, access your email, access what's on the phone itself, and do a limited amount of processing. So those would be potential AI agents that help your daily life. You don't need to ask very complex questions, but it helps your daily life: it will know or anticipate what your daily activities may be and then recommend certain things or help you prepare certain things. So all those things are not as data-intensive, but they require your own data.

Francoise von Trapp:

So then it's also more encapsulated, because you're not really pulling data from everywhere, you're just working with what you have locally.

Yin Chang, ASE :

It becomes private.

Francoise von Trapp:

So, okay, my last question, because I'm getting tangential now, but I just keep thinking of these things. We hear a lot about edge AI, AI at the edge. So when we're talking about edge, we're talking about our personal devices, right, for the most part? How far away are we from that? I know there are AI-enabled PCs now, right, and there's AI software. Where does that fit into the whole AI picture that we're looking at?

Yin Chang, ASE :

I think they are closer than we think.

Yin Chang, ASE :

I think there are already people who are downloading Llama, Meta's open-source model, directly onto their laptop and running certain versions of Llama to answer questions, using a limited amount of data from outside, or maybe just what's on their own hard drive.

Yin Chang, ASE :

So I think running AI functions on your laptop is not that far away. I think the question is how useful it is for an individual person. It's always better to run with as large a data set as possible, so running off the cloud makes more sense, right? But if you want private AI, then you want to run it off your devices, because that's your data. You want the results to be private to yourself and the actions it takes to be only for you. So I think the next step for AI, from my personal view, will be agentic AI, where it's reasoning about what you may want, deciding based on your preferences, and then taking action to execute those preferences.
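
For a concrete picture of what Chang describes, this is roughly what private, local inference looks like today. A sketch assuming the open-source llama-cpp-python bindings and a quantized Llama model file already downloaded to disk; neither the library nor the file name comes from the episode.

    # Local, private inference: the weights and the prompt never leave the machine.
    # Assumes: pip install llama-cpp-python, plus a GGUF model file on disk.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,
    )

    # No cloud round-trip: the laptop's own CPU/GPU does the work.
    out = llm("Q: Summarize my day in one sentence. A:",
              max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])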

Francoise von Trapp:

So some of the major markets right now that are benefiting from AI are things like the medical market and, you know, robotics and industrial. Is there a market where you think AI should not be used?

Yin Chang, ASE :

I think AI, like anything, can be used in moderation. As far as medical research and looking for new proteins, I think AI will be extremely useful. In terms of some of the morality questions, about what AI should do with genomes and so on, I might not be the best person to ask. But I think AI can be used in almost everything in your life, though probably in certain moderation.

Francoise von Trapp:

And some areas it's ready for it and some areas it's not.

Yin Chang, ASE :

Yeah, I think, you know, I'm not sure an individual will relinquish all their decision power to agentic AI and say, I'm just going to let AI decide my day, and it will plan it all out for me. So I think there's still a certain amount of self-empowerment that people want to have over their lives, but AI can definitely be a very helpful assistant to those lives.

Francoise von Trapp:

So now, where should people go to learn more about ASE and VIPack?

Yin Chang, ASE :

I think they should go to aseglobal.com, and please search for ASE Global on LinkedIn.

Francoise von Trapp:

Okay, great. Thanks so much. There's lots more to come, so tune in next time to the 3D InCites Podcast. The 3D InCites Podcast is a production of 3D InCites LLC.