I'd recommend that anyone interested in Confidential Computing read the work of Rodrigo Branco (@BSDaemon) to understand why it's mostly a failure and a PR stunt from cloud providers: it gives the illusion that the customer stays in control, while the hardware capabilities CC is built upon are insecure (and, most of the time, can't be fixed by a firmware or microcode update).
For example, a direct link to his keynote slides from the ESA 3S conference last year (PDF): https://indico.esa.int/event/528/attachments/5988/10212/Keyn...
The slides were an interesting read, I'd enjoy seeing the talk if it was recorded.
The slides stop at 2023, though, and we're in the back half of 2025 now - has anything changed significantly in the past couple of years? (I genuinely don't know)
Nope. Newer hardware, newer exploits.
Even if you were to trust secure boot and that there are no CPU bugs in the isolation, you're still running on someone else's hardware.
Neither the CPU nor secure boot has any reliable way to tell whether the hardware was modded to allow bus snooping, or whether a fake crash keeps the memory on a refresh loop.
Don't put things in the cloud if your threat model doesn't allow you to trust the cloud provider, or whoever has the power to compel your cloud provider to do things.
Could this be solved with some sort of TPM-like secure attestation that can prove you’re running on the CPU you think you are, plus encrypted memory to defeat external memory reads?
For it to work, the whole CPU would pretty much need to be a secure enclave. It puts very different requirements on the hardware than affordable high performance computing does.
Even then, many secure enclaves have been compromised by people with enough time and motivation.
That's exactly what confidential VMs are.
Timely considering the current (yet another) chip act. Presumably government mandated surveillance silicon would also require confidential compute capability.
https://www.atlanticcouncil.org/blogs/geotech-cues/how-the-c...
Funny, some people never consider that burning goodwill with populations directly opens a competitive advantage for competitors. =3
Years ago, I saw a demo for a confidential gaming VM, with the idea that games could ship with a whole VM instead of an anti-cheat engine. Most of the tech was around doing it performantly. I wonder why it was never productized.
Isn’t that more or less what modern Xbox is doing?
https://en.wikipedia.org/wiki/Xbox_system_software#System
I'd imagine cost is a big factor. You have to contend with a lot of bad drivers on GPUs, right? (This isn't my arena, just speculating here.)
My understanding is that some modern game DRM does use an approach like that. See https://connorjaydunn.github.io/blog/posts/denuvo-analysis/
Denuvo's VM is similar to Java's virtual machine, in that it executes bytecode specifically written for it within an application's process. I believe the parent post was referring to something closer to a Hyper-V virtual machine: an entire virtual computer.
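To make the distinction concrete, a process-level VM is essentially just an interpreter for its own opcode format running inside the host process. A toy sketch (nothing to do with Denuvo's actual bytecode, purely illustrative):

    # Toy process-level "VM": a tiny stack machine interpreting its own
    # opcodes inside the host process, as opposed to a whole virtual
    # computer like Hyper-V would provide.
    def run(bytecode):
        stack = []
        for op, arg in bytecode:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
        return stack

    run([("PUSH", 2), ("PUSH", 40), ("ADD", None), ("PRINT", None)])  # prints 42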
Apple has done a good job on the implementation and documentation for their confidential computing (https://security.apple.com/documentation/private-cloud-compu...) but of course it's Apple only. There are a few folks working on a non-Apple version of this, e.g. https://confident.security/ and others (disclaimer: I helped work on a very early version of this).
Read the Apple docs - they are very well written and accessible for the average HN reader.
Both Google Cloud and AWS support confidential computing: https://cloud.google.com/security/products/confidential-comp... https://aws.amazon.com/confidential-computing/
Confidential computing is the straw many people grasp at to overcome GDPR headaches in Europe. I know medical researchers in particular who hope to get access to scalable infrastructure this way, because they can tick it off as the only additional TOM on the processor side. As mentioned in the OP's comments, though, it is more a promise than a reality at the moment, with very little actual benefit in terms of reducing relevant attack vectors.
Yeah, much like the "sovereign cloud" stuff from Amazon, where they pretend that setting up an independent advisory board with no real power is somehow a fix for the CLOUD Act.
It only fools people who want to be fooled, or who genuinely have no idea.
Someone willing to price this out?
I find the article a difficult read for someone not versed in “confidential computing”. It felt written for insiders and/or people smarter than me.
However, I feel that "confidential computing" is some kind of story to justify something that's not possible: keeping data 'secure' while running code on hardware maintained by others.
Any kind of encryption means that there is a secret somewhere, and if you have control over the stack below the VM (hypervisor/hardware), you'll be able to read that secret and defeat the encryption.
Maybe I'm missing something, though I believe that if the data is critical enough, you need 100% control over the hardware.
Now go buy an Oxide rack (no I didn’t invest in them)
The unique selling point here is that you don't need to trust the hypervisor or operator, as the separation and per-VM encryption is managed by the CPU itself.
The CPU itself can attest that it is running your code and that your dedicated slice of memory is encrypted using a key inaccessible to the hypervisor. Provided you still trust AMD/Intel to not put backdoors into their hardware, this allows you to run your code while the physical machine is in possession of a less-trusted party.
It's of course still not going to be enough for the truly paranoid, but I think it provides a neat solution for companies with security needs which can't be met via regular cloud hosting.
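To make the memory-encryption part concrete, here's a deliberately crude stdlib-only toy (the real engines in SEV/TDX use dedicated AES hardware in the memory path; this only illustrates the trust relationship): the "CPU" holds a per-VM key the "hypervisor" never sees, so dumping guest pages yields only ciphertext.

    import hashlib, os

    # Toy model of per-VM memory encryption (NOT how SEV/TDX actually
    # work): a keystream derived from a per-VM key stands in for the
    # hardware AES engine.

    def keystream(key, n):
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def xor(data, ks):
        return bytes(a ^ b for a, b in zip(data, ks))

    vm_key = os.urandom(32)              # held inside the "CPU" only
    guest_page = b"customer secret: 4242"
    physical_page = xor(guest_page, keystream(vm_key, len(guest_page)))

    print(physical_page)                 # what a snooping hypervisor sees
    print(xor(physical_page, keystream(vm_key, len(physical_page))))  # what the guest sees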
The difference between a backdoor and a bug is just intention.
AMD and Intel have both certainly had a bunch of serious security-relevant bugs, like Spectre.
Hasn't that been exploited several times?
Exploited in the wild? Difficult to say, but there have been numerous vulnerabilities reported in the underlying technologies used for confidential computing (Intel SGX, AMD SEV, Intel TDX, for example), and quite a good amount of external research and publications on the topic.
The threat model for these technologies can also sometimes be sketchy (lack of side-channel protection for Intel SGX, lack of integrity verification for AMD SEV, for example).
I don't believe so? I have no doubt that there have been vulnerabilities, but the technology is quite new and barely used in practice, so I would be surprised if there have been significant exploits already - let alone ones applicable in the wild rather than a lab.
The technology is only new because the many previous attempts were such obvious failures that they never went anywhere. The history of "confidential computing" is littered with half-baked attempts at hypervisors going back to the early 2000s, with older attempts from the mainframe days completely forgotten.
How can I believe the software is running on the CPU and not with a shim in between that exfiltrates data?
The code running this validation itself runs on hardware I may not trust.
It doesn’t make any sense to me to trust this.
The CPU attests what it booted, and you verify that attestation on a device you trust. If someone boots a shim instead then the attestation will be different and verification will fail, and you refuse to give it data.
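A minimal sketch of the verifier side, with an invented report format (real reports from SEV-SNP or TDX are signed by a key rooted in the CPU vendor, and checking that certificate chain is what makes the measurement trustworthy):

    import hashlib, hmac, os

    def measure(image_bytes):
        # The "measurement" is a hash of exactly what the CPU booted.
        return hashlib.sha384(image_bytes).digest()

    def verify_and_release(report_measurement, expected_image, secret):
        # Constant-time compare; booting a shim changes the hash, the
        # attestation no longer matches, and we refuse to hand over data.
        if hmac.compare_digest(report_measurement, measure(expected_image)):
            return secret
        return None

    # Demo: a tampered image yields a different measurement.
    good = b"my-vm-image"
    evil = b"my-vm-image-with-shim"
    key = os.urandom(32)
    assert verify_and_release(measure(good), good, key) == key
    assert verify_and_release(measure(evil), good, key) is None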
That creates a technical complexity I still don't trust, because I don't see how you can trust that data isn't exfiltrated just because the boot image is correct.
If you control the hardware yourself, you're simply trusting it blindly.
I saw what I thought was a nice talk introducing the topic at FOSDEM a couple of years ago: https://archive.fosdem.org/2024/schedule/event/fosdem-2024-1...
Even when running on bare metal, I think the concept of measurements and attestations that attempt to prove the machine hasn't been tampered with is valuable, unless perhaps you also have direct physical control (e.g. it's in a server room in your own building).
Looking forward to public clouds maturing their support for Nvidia's confidential computing extensions, as that seems like one of the bigger gaps remaining.
I don't believe in the validity of the idea of 'confidential computing' on a fundamental level.
Yes, there are degrees of risk, and you can pretend that the risks of third parties running hardware for you are so reduced/mitigated by 'confidential computing' that it's 'secure enough'.
I understand things can be a trade-off. Yet I still feel 'confidential computing' is an elaborate justification that decision makers can point to, to keep the status quo and even do more things in the cloud.
I'm a relative layman in this area, but from my understanding there fundamentally has to be some trust somewhere. Confidential computing aims both to distribute that trust (splitting responsibility between the hardware manufacturer and the cloud provider - though I'm aware that already sounds like a losing proposition when the cloud provider is also the hardware manufacturer) and to provide a way to verify that the trust is intact.
Ultimately it's harder to get multiple independent parties to collude than a single entity, and for many threat models that's enough.
Whether today's solutions are particularly good at delivering this, I don't know (slides linked in another comment suggest not so good), but I'm glad people are dedicating effort to trying to figure it out
Well, there have been some advances in the space of homomorphic encryption, which I find pretty cool: it's encryption that lets you compute on the data without needing the secret. Sadly, the operations that are possible are limited and quite performance-intensive.
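As a concrete example, here's a toy Paillier cryptosystem, the classic additively homomorphic scheme (demo primes are tiny and utterly insecure): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add numbers it can't read.

    import math, random

    # Toy Paillier (additively homomorphic), insecure demo parameters.
    p, q = 293, 433
    n = p * q                       # public modulus
    n2 = n * n
    g = n + 1                       # standard generator choice
    lam = math.lcm(p - 1, q - 1)    # private key

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 17, 25
    # Multiplying ciphertexts adds the hidden plaintexts:
    assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b  # 42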
Maybe this will check a box in some OpenStack cluster, but it won't work for me personally. For anything sensitive I use physical servers. Once I am on a VM of a physical server that is not mine, my data is their data. It is just turtles all the way down, and there will always be a way to obtain the data. What's more, this is required for lawful intercept: authorities today expect providers to be able to live copy/clone a VM. There will always be a back door, and when authorities can access the back door, so can the providers and malicious actors. Even more unpopular: to me, encryption is just mathematical obfuscation, a.k.a. magic math, and the devil is in the implementation details - remember WEP and DVD encryption? Just like with cell phones, there will always be some simple "debugging" toggle that can bypass it.
Why do you trust your physical servers? Do you believe it is impossible for a backdoor to exist in the CPU's Management Engine? Do you inspect the contents of every single network packet entering and exiting? Do you have a way of blocking or inspecting all electromagnetic radiation?
Confidential computing is trying to solve the very problem you are worried about. It is a way of providing compute as a service without the customer having to blindly trust the compute provider. It moves the line from "the host can do anything it wants" to "we're screwed if they are collaborating with Intel to bake a custom backdoor into their CPUs".
To me that sounds like a very reasonable goal. Go much beyond that, and the only plausible attacker is going to be the kind of people who'll simply drag you to a black site and apply the big wrench until you start divulging encryption keys.
A physical server can use all the same mechanisms a VM in a cloud can use (worst case, put your stuff in a single "confidential" VM), but it can also rely on physical control of the machine. And there is no longer a third-party cloud operator in a pre-privileged position to exploit VMM or CPU vulnerabilities.
It is essentially by definition more secure than a VM anywhere.
I wouldn't "fully" trust it without going on-prem though. But trust isn't binary either; container < VM < hosted machine < on-prem machine. That's all there is to this.
>[you already trust all these things, why do you think adding even more things you must trust makes it less trustworthy?]
is a kinda insane argument even at a surface level
Unfortunately, if someone really wants into modern equipment, it is rather trivial, as modern clouds often just use cost-optimized consumer-grade CPUs/GPUs with sometimes-minor conveniences like more ECC RAM and backplane management options.
In many ways, incident detection and automated-recovery is more important than casting your servers in concrete.
Emulated VMs can create read-only signed backing images, and thus can revert/monitor state. RancherVM is actually pretty useful when you dig into the architecture.
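For the curious, a sketch of that backing-image pattern using qcow2 overlays, assuming qemu-img is on PATH (file names are illustrative, and "signed" is simplified here to a digest check):

    import hashlib, pathlib, subprocess

    BASE = pathlib.Path("base.qcow2")

    def make_base(size="1G"):
        # Create the base image once, then treat it as read-only.
        subprocess.run(["qemu-img", "create", "-f", "qcow2", str(BASE), size],
                       check=True)
        return hashlib.sha256(BASE.read_bytes()).hexdigest()  # record this

    def fresh_overlay(name="overlay.qcow2"):
        # Copy-on-write overlay: writes land here, the base is untouched,
        # so "reverting" the VM is just deleting/recreating the overlay.
        subprocess.run(["qemu-img", "create", "-f", "qcow2",
                        "-b", str(BASE), "-F", "qcow2", name], check=True)
        return name

    digest = make_base()
    overlay = fresh_overlay()
    # Before each boot, re-hash the base and compare to detect tampering.
    assert hashlib.sha256(BASE.read_bytes()).hexdigest() == digest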
Best policy is to waste as much of the irrational adversary's time and money as possible, and to interleave tantalizing payloads of costly project failures. Adversaries eventually realize the lame prize is just not worth the effort, or they steal things that will ultimately cost them later. =3