I'm all for it -- it's hard to overstate the extent to which LetsEncrypt has improved the WebPKI situation. Although the effective single-vendor situation isn't great, the "this is just something you only do via an automated API" approach is absolutely the right one. And certificate lifetimes measured in days work just fine with that.
The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times (or get it wrong, like renewing the cert, but continuing to use the old one until something is manually restarted), and the countless IT departments that still don't support any kind of API for their internal domains...
It's not single-vendor. The ACME protocol is also supported by the likes of GlobalSign, Sectigo, and Digicert.
You've got to remember that the reduction to a 45-day duration is industry-wide - driven by the browsers. Any CA not offering automated renewal (which in practice means ACME) is going to lose a lot of customers over the next few years.
Effectively single-vendor. I'm not aware of any ACME-compatible CAs that don't have pernicious limits on their free plans (and if there are, I'd love to hear!), and here in the EU we've even recently lost a rather big player...
Google Trust Services: https://pki.goog/
I'm glad there are free alternatives to Let's Encrypt, but a major PKI provider also being by far the largest browser vendor feels like a disaster waiting to happen. The check on PKI providers doing the right thing is browsers including them (or not) in their trust stores. Having both sides of that equation significantly controlled by the same entity fundamentally breaks the trust model of WebPKI. I'm sure Google has the best of intentions, but I don't see how that's in any way a workable approach to PKI security.
"multiple vendors, but only one of them is nice enough to give the product away for free" is not "effectively single-vendor".
The other CAs aren't prohibitively priced for anyone who has a business need for lots of certificates, in case Let's Encrypt disappears or goes rogue.
> other CAs aren't prohibitively priced
Not if you look at the per-cert pricing, but if you factor in the cost of "dealing with incompetent sales" and "convincing accounting to keep the contract going", they absolutely are.
When I was working with DigiCert a decade ago, it was expensive, but they also had knowledgeable support, and with a wildcard cert they would issue all sorts of 'custom duplicates' by request that were super handy. No incompetent sales, but you certainly do need to make sure accounting will pay.
Unfortunately DigiCert has gone way downhill.
We pulled all of our business after they failed to renew a cert with 30d(!!!) notice and got themselves stuck in a loop of useless org re-validations.
They were completely unresponsive and wasted dozens of hours of our time trying to rectify the situation before we pulled the plug and switched everything to ACME. I still can’t believe we wasted so much time and money with them.
It's not only the other party's sellers; you have to work with your own company's buyers too. Stuff that costs no money and needs no contracts moves faster than stuff that must be negotiated, agreed upon, and paid for.
Doesn't ZeroSSL do this? acme.sh has been using it as the default for the last few years. As I understand it, it basically offers the same as Let's Encrypt.
https://zerossl.com/pricing/ suggests a 3-cert limit on the free tier, as well as a huge influx of expected spam...
The "ACME Certificates" are free and unlimited. The 3 free "ZeroSSL Certificates" are old-fashioned manual certs: this is a strictly more generous offering than Let's Encrypt!
> If you have a server or other device that requires automatic issuance of certificates and supports the ACME protocol, you can use our free 90-day ACME certificates on all plans.
I think that refers to something else (manual non-ACME certificates)? Many other pages say it's unlimited. E.g. https://zerossl.com/documentation/acme
I believe that is 3 hosts, not total certs.
Zerossl is integrated with Caddy by default and there’s no indication from Caddy that you would only be able to renew the cert twice before needing to cough up some money.
> The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times
Yeah, no one's rewriting a bunch of software to support automating a specific, internet-facing, sometimes-reliable CA.
Yes it's ACME, a standard, you say. A standard protocol with nonstop changing profile requirements at LE's whim. Who's going to keep updating the software every 3 months to keep up? When the WebPKI sneezes in a different direction and changes its mind yet again. Because 45 will become 30 will become 7, and they won't stop till the lifetime is 6 hours.
"Enterprise" products are more often than not using internal PKI so it's a waste.
I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
In the circles I run in, automatic certificate renewal has not caused a single problem over 7 years of use, and whatever time was spent on setting it up has paid for itself many times over, both in saving effort on renewals and in putting out fires when (not if) someone forgets to renew a certificate. You just have to be careful picking your automation — I haven't been impressed with certbot, for example.
Also, everything is using https now. Living in a low-income country, certificates were too expensive to use them where they weren't absolutely required, but not anymore. This is orthogonal to automation, I'm just pointing out that LE is not as demonic as you make it out to be.
I'm afraid enterprise users are on their own, probably approximately no-one else is interested in going back to the old ways of doing it. (Maybe embedded.)
Forcing automation would be fine if the default software package (certbot) was any good but from my experience certbot is simply not fit for purpose. Certbot doesn't support the industry standard PKCS#12 format, which makes it extremely brittle for anyone using a Java based webserver. Instead it uses the non-standard PEM format which requires conversion before usage. That conversion step breaks all the time and requires manual intervention. It's ridiculous.
PEM is very standard. Calling `openssl pkcs12` also should not be hard; IDK about certbot, but there is a hook for acmetool (which I use) that does just that for you: https://github.com/dlitz/acmetool-pkcs12-hooks
PEM is standardized in RFC 7468, from 2015 [1]. PEM has been an industry standard for a decade.
[1] https://datatracker.ietf.org/doc/html/rfc7468
I hear ya. I’m also not fond of certbot and other existing clients.
The best solution I’ve found so far was to implement a custom cert manager using the formidable acmez library.
At this point PEM is more standard and prevalent than PKCS#12.
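(Editor's aside: the conversion step itself is only a few lines if you'd rather not shell out to openssl -- a minimal sketch in Python, assuming the `cryptography` package; file names and the export password are placeholders:)

```python
# Minimal sketch: bundle an ACME client's PEM output into a PKCS#12 keystore.
# Assumes the `cryptography` package; paths and password are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    load_pem_private_key,
    pkcs12,
)

key = load_pem_private_key(open("privkey.pem", "rb").read(), password=None)
chain = x509.load_pem_x509_certificates(open("fullchain.pem", "rb").read())

keystore = pkcs12.serialize_key_and_certificates(
    name=b"example.com",
    key=key,
    cert=chain[0],   # leaf certificate
    cas=chain[1:],   # intermediates, if any
    encryption_algorithm=BestAvailableEncryption(b"changeit"),
)
open("keystore.p12", "wb").write(keystore)
```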
> I would like to see the metrics
Well, I could regale you with my anecdotes on how all my 'grab a LetsEncrypt cert on Docker deploy and whenever remaining lifetime goes below 75%' services have not had a single TLS-related outage in at least a year, and how we just had a multi-day meltdown caused by a load-bearing cert that everyone forgot about expiring, but I doubt you'll be interested.
I'm not here to take away your right to set up a few more 5-year+ internal CAs (or, hey, how about an entire self-signed hierarchy? can't be that hard...), and neither is LetsEncrypt. But on the Big Bad Internet (where Google is the leading cause of More Security, and LetsEncrypt is merely the leading vendor catering to that), things have moved on slightly.
> I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
I would also like to see those metrics, because I strongly suspect the cost comparison is dramatically in favor of automation, especially when you consider the improved security posture you get from short-lived certificates. My personal experience with Let's Encrypt has been that the only hassle is in systems that don't implement automation, so I get the worst of both worlds: having to effectively manually renew short-lived certificates every 90 days.
CRL based revocation is largely a failure, both because implementation is a problem and because knowing when to revoke a certificate is hard. So the only way to achieve the goal of security in WebPKI that's resilient to private key compromise is short-lived certificates. And the only way to implement those is with automated renewal.
The change in validity does not in any way alter the protocol itself. As mentioned in the linked blog post: if you've already got automated cert renewal, it'll almost certainly require zero change.
After all, the logical approach is to schedule a daily cron job for your renewal process, which only contacts the server when the current cert has less than X days of validity remaining (see the sketch below). Scheduling a one-off cron job every, say, 60 days would be extremely error-prone!
With regards to future reductions: the point is to force companies to automate renewal, as they have proven unable to renew in time during incidents due to dozens of layers of bureaucracy. 45 days means one renewal a month, which is probably sufficient for that. Anything below 7 days isn't likely to happen, because at that point the industry considers certificate revocation unnecessary.
Internal PKI is not relevant here. Anyone with weird internal cert use shouldn't be using public certs anyways, so moving those people to certs backed by self-signed internal CAs is a win. They are free to use whatever validity they want with those.
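(On the daily-cron point above, the check itself is tiny -- a sketch in Python with the `cryptography` package; the threshold, cert path, and renew command are all placeholder assumptions, not prescriptions:)

```python
# Sketch of a daily cron job: renew only when the live cert is close to expiry.
import subprocess
from datetime import datetime, timedelta, timezone

from cryptography import x509

THRESHOLD = timedelta(days=15)  # e.g. renew a 45-day cert in its final third

cert = x509.load_pem_x509_certificate(open("fullchain.pem", "rb").read())
# not_valid_after_utc requires cryptography >= 42; older versions use not_valid_after
remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)

if remaining < THRESHOLD:
    # Hand off to whatever ACME client is actually in use.
    subprocess.run(["acme.sh", "--renew", "-d", "example.com"], check=True)
```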
> the point is to force companies to automate renewal
Cool. I'm a small-time webmaster with a couple of hobby sites with no more than a handful of visitors. Why do I need to set up automation to renew certs every 45 days, too?
For the same reasons as forcing companies to do it.
1. Revocation is a clusterfuck. Microsoft is currently failing to revoke tens of thousands of defective certificates for over seven months (the Baseline Requirements state that they should have been revoked within five days). Entrust was distrusted over repeated failures to revoke. Many TLS clients don't even bother to check revocation lists, or if they do they do it in a "fail-open" manner where inability to load the list does not prevent access which largely defeats the purpose. Short certificate lifetimes make revocation less important as both defective and compromised certificates age out rapidly, but without automation any meaningful reduction in lifetime directly translates to an increase in required effort which makes it a balancing act. With automation, however, reducing lifetimes is effectively free.
2. Automation is mostly a one-time cost; manual renewal is a repeating cost. I started using LE to add HTTPS to my personal site when it first became available to the public in 2016, and then started using it for my work systems when our GoDaddy certs came up for renewal a bit less than a year later. Since then, out of roughly 50 systems pulling around 70 certs, I've had to "babysit" two of them once each, both because they were using wildcard certs (which I was a very early adopter of) and something about how I had set them up was technically wrong but worked in the initial version. Compare this to the "old world", where every couple of years I had to engage vendor support on both our server platforms and our CA, because enough time had passed that things had changed on both sides and doing things manually required learning new things. Mandating short lifetimes is good for everyone. This is part of why LE has used a short lifetime since the beginning: to strongly encourage people not to try to do it without automation.
3. It's super easy. Lots of platforms have ACME support built in. For those that don't, packages like acme.sh are damn close to universal. If your system is able to expose a web server on port 80 it's basically trivial. If it's not, it's slightly harder. There's just not a good reason not to do it other than stubborn insistence. "Handcrafted TLS Certificates" just don't matter.
I'm talking about managing two certificates so I can share a static site with a handful of friends. Each one takes about 10 minutes a year to update.
Adding automation means I have to set up a process that I have to check up on at least once every 6.5 weeks to make sure it's still working.
> I'm talking about managing two certificates so I can share a static site with a handful of friends. Each one takes about 10 minutes a year to update.
The personal site I started with is one certificate for a static site that I use for basically the same thing. It took me 10 minutes to set up in 2016 and I haven't thought about it for a second since then. It just works.
> Adding automation means I have to set up a process that I have to check up on at least once every 6.5 weeks to make sure it's still working.
Assuming you're using a common automation package and not rolling your own, it should be included. I personally use acme.sh, which can be configured to use email, XMPP, or HTTP(S) requests with prebuilt templates for most popular webhooks, as well as supporting fully custom notification scripts. I get an email every time it attempts a renewal that tells me whether it succeeded or failed. Again: one-time setup, easy; I did it once literally almost a decade ago and haven't had to think about it since. As I pointed out in my previous post, I did once have two of my systems fail to renew; I was notified, and I fixed it within a few minutes of seeing the emails.
Now that I'm actually thinking about the topic: these days, for my work systems, I have a platform that monitors for periodic updates and alerts me if they don't come in, so I should probably reconfigure my notifications to use that instead of email and clean up my team's inboxes a bit by no longer receiving a couple dozen "everything's OK" mails every couple of months (or soon, every couple of weeks).
I'm a small-time webmaster and I haven't "set up" any automation - for my shared-hosting sites, the host has it built in; and for my self-hosted sites, the web server has it built in.
The problem is that this breaks down if you don't want to leak any obscure subdomains you might be using via CT-logs – shared hosting rarely supports DNS-based certificate renewals for wildcard certificates, and even less so for domains hosted by an external registrar.
(Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
> (Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
That's exactly what the new DNS-PERSIST-01 challenge is for, being able to authorize a specific system or set of systems to request certs for a given FQDN and optionally subdomains without having to give that system direct control over your DNS as the existing DNS-01 challenge requires.
> I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
I have multiple systems. Legacy that I've inherited, and modern that are able to automatically update their certs.
The automated updates require almost zero maintenance, there was a week a couple of years ago when I had to switch some configuration for a new root cert.
The legacy system requires manual installation and multiple weeks of my time every year because of the number of environments, and since generation of the certs requires me to file a request for someone to manually create it, they invariably typo something and it has to be redone everywhere.
So multiple engineers, over multiple weeks?
Manual process at a guess is £50k pa, while the automated is close to an annual £0?
All the enterprise software needs to do is create an API for configuring the certificate in the product. Then they can integrate Certbot or one of the many other ACME solutions, or let the customer do it (they are enterprises after all).
I'll happily take failed automation with errors I can fix over being totally screwed when we find out the guy who did it every year left the company this year and nobody else knows anything about it.
> A standard protocol with nonstop changing profile requirements at LE's whim. Who's going to keep updating the software every 3 months to keep up?
It really doesn't change that often. And whether this is a "breaking" change is something that's debatable (you shouldn't hard-code the cert lifetime, but I suspect many programs do, so de-facto it may or may not be a breaking change).
If you look at the Go implementation for example (https://github.com/golang/crypto/commits/master/acme), then there haven't been any major changes since it was first written in 2018: just bugfixes and basic maintenance tasks. Maybe I missed a commit but it certainly doesn't require 3-month update cycles. You can probably take that 2018 code and it'll work fine today. And since it doesn't hard-code the 90 days it will continue to work in 2028 when this change is made. So it's more "you need to keep updating the software every decade to keep up".
Big news for both the lazy homelab admin that can set a TXT once and ultimately be more secure without spraying DNS Zone Edit tokens all over their infra AND for the poor enterprise folks that have to open a ticket and wait 3 weeks for a DNS record.
It will help that side of the process (although, as a sibling has noted, you can CNAME your way into a better-controlled update service), but the challenge of automating cert changes for various non-HTTP services, including various virtual or physical boxes with funky admin interfaces, remains. I don't expect that vendors will do much about that, and it will end up on admins' plates, as usual. There will be much grumbling, but fewer solutions.
There are quite a few solutions. For very funky systems, you can use a cert tied to a private CA, so you control the cert lifetimes. Or place them behind a reverse proxy that is easier to control.
desec.io allows you to create (through the api) tightly-scoped tokens that can only update the "_acme-challenge.subdomain.example.com" domain needed for DNS-01 challenges.
I switched to them from cloudflare dns for that specific functionality and it works great.
Very good question. On e.g. AWS one could probably do something like that with a custom Lambda…? Still, would be very convenient if there was some IAM rule for that.
A perhaps non-obvious option is to CNAME (or NS) the `_acme-challenge` record from your main zone to another zone that you can control better and that can't affect production traffic the way the main zone could. `acme-dns` is a neat little tool for exactly this: it exposes an HTTPS API that your ACME client uses to publish challenge tokens, and a DNS server to respond to DNS-01 challenges from your CA.
Yeah, I have all my `_acme-challenge` records as their own zone so that BIND can auto-increment the serial number without going through the pain of locking/unlocking the entire domain and hoping you don't end up with stale data that stops syncing.
That said, I like that the current system proves that you have control of the domain at the time of renewal, and I'm not sure how setting a one-off token would achieve the same.
This replaces an anonymous token with a LetsEncrypt account identifier in DNS. As long as accounts are not 1:1 to humans, that seems fine. But I hope they keep the other challenges.
I really would have felt better with a random token tied to the account, rather than the account number itself. The CA side can of course decide to implement it either way, but all the examples are about the account ID.
Thank god. The only remaining failure mode I’ve seen with LE certs recently is an API key used to manipulate DNS records for the DNS-01 challenge via some provider (Cloudflare etc.) expiring or being disabled during improper user offboarding.
Since we're on the topic of certificates, my app (1M+ logins per day) uses certificate pinning with a cert that lasts for one year, because otherwise it would be a nightmare to roll the cert multiple times in production. But what would be the "modern" way to do smart and automated certificate pinning, now that short-lived certs are becoming the trend?
Don't. Don't pin to public certificates. You're binding your app to third-party infrastructure beyond your control. Things change, and often.
Note that pinning to a root or intermediate seems 'sensible' - but it isn't. Roots are going to start changing every couple of years.
Issuing/intermediate CAs will be down to 6 months, and may even need to be randomised so when you request a new cert, there's no guarantee it'll be from the same CA as before.
This. Have you thought about what happens when your CA needs to revoke your certificate because of some issue? Can you even realistically re-pin before it's revoked (hours to days)?
The certificates will expire, but (as far as I'm aware), you're still allowed to use the same private key for multiple certificates, so as long as you pin to the public key instead of to the certificate itself, you should be fine.
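(Concretely, "pin the public key" means pinning a hash of the SubjectPublicKeyInfo, which survives reissuance as long as the key is reused -- a sketch in Python, assuming the `cryptography` package:)

```python
# Sketch: compute an HPKP-style pin (SHA-256 over the SubjectPublicKeyInfo).
# The pin stays stable across renewals as long as the same private key is reused.
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
spki = cert.public_key().public_bytes(
    Encoding.DER, PublicFormat.SubjectPublicKeyInfo
)
print("pin-sha256:", base64.b64encode(hashlib.sha256(spki).digest()).decode())
```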
The real modern way to do certificate pinning is to not do certificate pinning at all, but I'm sure that you've already heard this countless times before. An alternative option would be to run your own private CA, generate a new public/private keypair every 45 days, and generate certificates with that public key using both your private CA and Let's Encrypt, and then pin your private CA instead of the leaf certificates.
> The certificates will expire, but (as far as I'm aware), you're still allowed to use the same private key for multiple certificates, so as long as you pin to the public key instead of to the certificate itself, you should be fine.
It's allowed, but the intent of short cert expiration is to also have short private-key lifetimes, so that point-in-time key compromises have time-limited forgery potential. If the same key is used for a long period, someone who got the key once can download your public certificate and use that with the key they have.
> The real modern way to do certificate pinning is to not do certificate pinning at all, but I'm sure that you've already heard this countless times before. An alternative option would be to run your own private CA, generate a new public/private keypair every 45 days, and generate certificates with that public key using both your private CA and Let's Encrypt, and then pin your private CA instead of the leaf certificates.
The tricky thing here is you need to serve your private-CA-signed cert to pinned clients and a WebPKI cert to browser clients. If you want cert pinning for browsers (which, afaik, has very limited availability), you should pick at least two CAs that you're pretty sure will continue to issue you certs and won't be delisted; bonus if you're also confident they won't issue certs for your domains without your explicit consent.
I would also recommend two private CAs, if you're doing private CAs. Store them physically separate, so if one is compromised or becomes unavailable, you can use the other.
I still think 'don't pin' is the best advice, but absolutely it should never be done to public CAs. I agree with your point about different endpoints, but maybe one endpoint for pinned apps, separate to your browser-based sites/endpoints.
I think the suggestion of pinning the public key and keeping the same private key across certs is the best option. But if you don't want that, perhaps this is a (high complexity, high fragility) alternative:
- Make sure your app checks that enough trusted embedded Signed Certificate Timestamps are present in the certificate (web browsers and the iOS and Android frameworks already do this by default).
- Disallow your app from trusting certificates issued more recently than N hours ago. This might be hard to do.
- Set up monitoring of the certificate transparency logs to verify that no bad actor has obtained a certificate, and make sure you are always able to revoke one within N hours (see the sketch after this list).
- Make sure you always have fresh keys with certificates in cold storage older than N hours, because you can't immediately use newly obtained certificates
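(A sketch of that monitoring step, polling crt.sh's JSON interface -- the domain, issuer filter, and the `requests` package are assumptions; a real monitor would also de-duplicate results and page someone:)

```python
# Sketch: list certs logged to CT for a domain and flag unexpected issuers.
import requests

EXPECTED_ISSUER = "Let's Encrypt"  # whichever CA(s) you actually use

entries = requests.get(
    "https://crt.sh/",
    params={"q": "example.com", "output": "json"},
    timeout=60,
).json()

for entry in entries:
    if EXPECTED_ISSUER not in entry["issuer_name"]:
        print("unexpected cert:", entry["common_name"],
              "valid from", entry["not_before"],
              "issuer:", entry["issuer_name"])
```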
Pinning the intermediate CA should work. Alternatively, calculate the cost of updating the cert pinning mechanism if it's custom and compare it to paid, 1 year certificates (though those will go away eventually too).
On the other hand, if you're using an app specific server, there's no need for you to use public certificates. A self-generated one with a five or ten year validity will pin just as nicely. That breaks if you need web browsers or third parties to talk to the same API, of course.
Please don't suggest pinning a publicly-trusted intermediate. The CA may change which intermediate they're using at any time for any reason with no warning, and then the app which pinned that intermediate is hosed.
It depends what intermediate you pin, but the CA can also choose to change the root certificate they use at any time like Let's Encrypt did in 2024 when the CA that signed their cross signed certificate stood to expire. Plus, depending on where you get your certificates from, the reseller certificate may already be an intermediate rather than its own root.
You should probably pin the highest certificate in the chain that's going to stay current for as long as possible. Or, if the goal is just "I don't want people snooping around in my app's traffic" rather than "I want to protect against a rogue CA being used to hijack my customers' traffic", reuse the private key in the CSR and pin that, it'll get the job done.
You can prepare CSRs with new public keys years in advance. It'll take some certbot/ACME scripting to use them instead of autogenerating new ones on the fly, but that way you can pin your future certificates. Add pins as you prepare new CSRs and drop them as the certificates expire, and depending on the size of the list you choose, you should be good for months or years without app updates.
Plus, if you do any key pinning, you'd probably do well to also pin a backup public key you haven't used in case your CA/infra collapses and you quickly need to redo your HTTPS setup.
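(A sketch of that CSR-ahead-of-time idea in Python with the `cryptography` package; the key type, names, and file paths are illustrative only:)

```python
# Sketch: generate a future key now (so its SPKI pin can ship in the app),
# plus a CSR for it that an ACME client can submit when the time comes.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())  # compute and pin its SPKI hash now

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False
    )
    .sign(key, hashes.SHA256())
)

open("future.csr", "wb").write(csr.public_bytes(serialization.Encoding.PEM))
open("future.key", "wb").write(key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
))
```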
Forgetting the obvious security advantage for the moment, I've actually found it convenient that the lifetimes are rather short. I'm not disciplined when it comes to setting up my homelab projects, so in the past I'd sometimes just LE it and then worry about it when renewal failed. My family is the only consumer, so who cares.
But then they set some shorter lifetime, and I was forced to set up automation, and now I've got a way of doing it and it's pretty easy. So now I either `cloudflared` or make sure certbot is doing its thing.
Perhaps if they'd made that more inconvenient I would have started using Caddy or Traefik instead of my good old trusty nginx knowledge.
> Acceptable behavior includes renewing certificates at approximately two thirds of the way through the current certificate’s lifetime.
So with today's 90-day certs, you can start renewing with 30d of lifetime remaining. You probably want to retry once or twice before alerting, so let's say 28d between alert and expiry.
That seems somewhat reasonable. But is basically the lower margin of what I consider so. I feel like I should be able to walk away from a system for a month with no urgent maintenance needed. 28d is really cutting it close. I think the previous 60d was generous but that is probably a good thing.
I really hope they don't try to make it shorter than this. Because I really don't want to worry about certificate expiry during a vacation.
Alternatively, they could make the acceptable behaviour much more generous: for example, issue 32d certificates but make it acceptable to start renewing them after 24h. I don't really care how often my automation renews them; what matters is the time frame between being alerted due to renewal failure and expiry.
“I really hope they don’t try to make it shorter than this. Because I really don’t want to worry about certificate expiry during a vacation.”
You might want to consider force-renewing all your certs a few days before your vacation. Then you can go away for over 40 days. (Unless something else breaks…)
Might not be a bad idea if it's within their rate limit rules, but I'd really rather not have to take a manual action before leaving a system alone for a while, and then worry about whether I actually managed to force-renew every single cert.
If you forget a cert then you’re no worse off than the case where the automation fails during the vacation.
You could also run a simple program that checks each site and tells you the remaining lifetime of the cert in use, to verify that you didn't miss any cert (see the sketch below).
It all depends on the scale of your operations, of course.
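(Such a checker fits in a dozen lines of standard-library Python -- the host list is a placeholder:)

```python
# Sketch: print how many days each site's currently served cert has left.
import socket
import ssl
import time

HOSTS = ["example.com", "www.example.com"]  # placeholder list

ctx = ssl.create_default_context()
for host in HOSTS:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    days_left = (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400
    print(f"{host}: {days_left:.1f} days remaining")
```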
I’m maintaining a server with Let’s Encrypt certs for a B2B integration platform. Some partner systems still can’t just pin the CA and instead require a manual certificate update on their side. So every 90 days we do the same email ping-pong to get them to install the new cert — and now that window is getting cut in half.
Hopefully their software stack will be able to automate this by 2028.
Cert lifetimes are such a burden. I wanted to provide pre-configured server examples for my WebRTC project, something that was download-and-run without any prior knowledge (an important point), which users could access from their LAN, e.g. to test the examples from their phones (not via the useless localhost exemption that exists for secure contexts), and for which a self-signed cert embedded in the examples was fine. New users could run them, and new concepts (such as security and certificate management in production apps) could be learned at an appropriate time.
Until web browsers started to believe that no, that was too much of a convenience, so now long-expiration certs get rejected. What's the proposed solution from the "industry"? To run a whole automation pipeline just to update a file in each example folder every few months? Bonkers. These should be static examples, with no reason to update them any more often than every few years, at most.
If I understand it right they bundle a publicly trusted shared cert and want to allow their users to get running on vanilla devices without having to provide a domain.
A certificate is a binding of a cryptographic key, along with an attestation of control of a DNS record(s) at a point in time. DNS changes frequently. The attestation needs to be refreshed much more frequently to ensure accuracy.
How do people here deal with distributed websites? I’m currently issuing one certificate on my machine and then Ansible-ing it into all the servers. I could issue one certificate for each server, but then at any given time I’d have dozens of certs for the same domain, and all must be valid. That doesn’t sound right either.
Organizations with many frontends/loadbalancers all serving the same site tend to adopt one of four solutions:
- Have one node with its own ACME account. It controls key generation and certificate renewal, and then the new key+cert is copied to all nodes that need it. Some people don't like this solution because it means you're copying private keys around your infrastructure.
- Have one node with its own ACME account. The other nodes generate their own TLS keys, then provide a CSR to the central node and ask it to do the ACME renewal flow on their behalf. This means you're never copying keys around, but it means that central node needs to (essentially) be an ACME server of its own, which is a more complex process to run.
- Have one ACME account, but copy its account key to every node. Have each node be in charge of its own renewal, all using that shared account. This again requires copying private keys around (though this time its the ACME key and not the TLS key).
- Give every node its own ACME account, and have each node be in charge of its own renewal.
The last solution is arguably the easiest. None of the nodes have to care about any of the others. However, it might run into rate limits; for example, LE limits the number of new account registrations per IPv6 range, so if you spin up a bunch of nodes all at once, some of them might fail to register their new accounts. And if your organization is large enough, it might run into some of LE's other rate limits, like the raw certificates-per-domain limit. Any of the above solutions would run into that rate limit at the same time, but rate limit overrides are most easily granted on a per-account basis, so having all the nodes share one account is useful in that regard.
Another factor in the decision-making process is what challenge you're using. If you're using a DNS-based challenge, then any of these solutions work equally well (though you may prefer to use one of the centralized solutions so that your DNS API keys don't have to live on every individual node). If you're using an HTTP-based challenge, you might be required to use a centralized solution, if you can't control which of your frontends receives the HTTP request for the challenge token.
Anyway, all of that is a long-winded way to say "there's no particularly wrong or right answer". What you're doing right now makes sense for your scale, IMO.
By "distributed websites" you mean multiple webservers for one FQDN? Usually TLS termination would happen higher up the stack than on the webservers themselves (reverse proxy, L7 load balancer, etc) and the cert(s) would live there. But if your infrastructure isn't that complicated then yes, the happy path is have each webserver independently handle its own certificate (but note your issuance rate limits, 5 certs per week for the exact same hostname[1]).
In my case it's multiple servers handling the same FQDN. They are load-balanced via DNS, or use DNS anycast in some situations. In any case, my servers are the ones terminating TLS.
“One quantitative benefit is that the maximum lifetime of certificates sets a bound on the size of certificate revocation lists. John Schanck has done heroic work on CRLite at Mozilla to compress CRLs, and the reduction from 398 days to 47 days further shrinks them by a factor of more than 8. For Let’s Encrypt the current limit is 90, so a more modest but still useful factor of 2.”
One would hope they're also increasing rate limits along with this, but there's no indication of that yet.
> Up to 50 certificates can be issued per registered domain (or IPv4 address, or IPv6 /64 range) every 7 days. This is a global limit, and all new order requests, regardless of which account submits them, count towards this limit.
This is hard to deal with when you have a large number of subdomains and you'd rather (as per their recommendations) not issue SAN certificates with multiple subdomains on them.
We are working on further improvements to our rate limits, including adding more automation to how we adjust them. We're not ready to announce that yet.
We wanted to get this post out as soon as we'd decided on a timeline so everyone's on the same page here.
Certificates that look like renewals -- for the same set of names, from the same account -- are exempt from rate limits. This means that renewing (for example) every 30 days instead of every 60 days will not cost any rate limit tokens or require any rate limit overrides.
As someone who works at a company that has to manage millions of SSL certificates for IoT devices in extremely terrible network situations, I dread this.
One of the biggest issues is handling renewals at scale, and I hate it. Another increasing frustration is that DNS-based challenges are not quick.
Are these IoT devices expected to be accessible via a regular web browser from the public Internet? Does each of them represent a separate domain that needs a separate certificate, which it must not share with other similar devices?
I would strongly suggest that these certs have no reason to be from a public CA and thus you can (and should) move them to a private CA where these rules don't apply.
For those who want to solve the problem by throwing money at it, one can probably buy a solution for this. I'm thinking of stuff like AWS IoT Core; I would guess there are other vendors in that space too.
I'm sure this is for good reasons, but as someone that maintains a lot of ssl certificates, I'm not in love with this change. Sometimes things break with cert renewal, and it sometimes takes a chunk of time to detect and then sit down to properly fix those issues. This shortens the amount of time I will have to deal with that if it ever comes up (which is more often than you would expect), and increases the chances of running into rate limit issues too.
Without getting into specific stuff I've run into, automated stuff just, breaks.
This is a living organism with moving parts and a time limit - you update nginx with a change that breaks .well-known by accident, or upgrade to a new version of Ubuntu and suddenly some dependency isn't loading correctly, or that UUID generator you depended on to generate the name for the challenge doesn't get loaded, or certbot becomes obsolete because of some API change and you can't upgrade to the latest because the OS is older and you installed it from the package manager.
You eventually see it in your exception monitoring or when an ssl monitor detects the cert is about to expire. Then you have to drop that other urgent thing you needed to get done, come in and debug it, fix it, and re-issue all the certs at the rate limit allowed. That's assuming you have that monitoring - most sites probably don't.
If you detect that issue with 1/3 of the cert left, you will now have 15 days to figure that out instead of 30. If you can't finish it in time, or you don't learn about it in time, the site(s) hard fail on every web browser that visits and you've effectively got a full site outage until you repair it.
So you discover it's because of certbot not working with a new API change, and you can't upgrade with the package manager. Now you need to figure out how to compile it from source, but it doesn't like the python that is currently installed and now you need to install that from source, but that version of python breaks your python web app so you have to figure out how to migrate your app to that version of python before you can do that, and the programmer that can do that is on a week long whitewater rafting trip in Idaho.
Aside from all that, what happens if a hacker manages to wreck the Let's Encrypt infra so badly they need 2 weeks to get it back online? The Internet Archive was offline for weeks after a DDoS attack. The Cloudflare outage took one site of mine down for less than 10 minutes; it's not hard to imagine a much worse outage for the web here.
AKA the real world, a place where you have older appliances, legacy servers, contractual constraints and better things to do than watch a nasty yearly ritual become a nasty monthly ritual.
I need to make sure SSL is working in a bunch of very heterogeneous stuff, but I'm not in a position to replace it and/or pick an authority with better automation. I just suck it up and dread it when a "cert day" looms closer.
Sometimes these kind of decisions seem to come from bodies that think the Internet exists solely for doing the thing they do.
Happens to me with the QA people at our org. They behave as if anything happens just for the purpose of having them measure it, creating a Heisenberg situation where their incessant narrow-minded meddling makes actually doing anything nearly impossible.
The same happens with manual processes done once a year - you just aren't aware of it until renewal.
Consider the inevitable need for immediate renewal due to an incident. Would you rather have this renewal happen via a fast, automated and well-tested process, or a silently broken slow and manual one?
The manual process was annoying but it wasn't complicated.
You knew exactly when it was going to fail and you could put it on your calendar to schedule the work, which consisted of an email validation process and running a command to issue the certificate request from your generated key.
The only moving part was the issued certificate, which you copied and pasted over before reloading the server. There are a lot fewer things to go wrong in this process, which at one point I could do once every two years, than in a really complicated automated background task that has to happen within 15 days.
I love short duration automated free certs, but I think we really need to have a conversation about how short we can make them before we make it so humans no longer have the time required to fix problems anymore.
There are also alternatives to Cloudflare and AWS; that didn't stop their outages from taking down pretty much the entire internet. I'm not sure what your point is: pretty much everybody is using Let's Encrypt, and it will very much be a huge outage event for the web if something were to go seriously wrong with it.
One key difference: a cert is a “pickled” thing; it's stored and kept until it is successfully renewed. So if you attempt to renew at day 30 and LE is down, you still have about two weeks to retrieve a new cert. Hopefully LE will get back on their feet within that time. Otherwise you have Google, ZeroSSL, etc. where you can fetch a replacement cert.
This is too short and the justification provided is flimsy at best.
I predict that normal people will begin to get comfortable with ignoring SSL errors, even more than they already are. Perhaps we will see the proliferation of HTTPS-stripping proxies too.
I’m imagining that xkcd meme about internet infrastructure and one of the thin blocks holding the whole thing up being LE.
Is there any good argument for short lifetimes? The only argument I know of is that short lifetimes are supposedly better in case the key gets compromised, but I disagree. If the key can be compromised once it can be compromised again when it renews; the underlying cause of compromise doesn’t go away. NIST stopped recommending forced password rotation for this reason, it’s pseudosecurity.
> It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.
How do those affect automated processes though? If the automation were to fail somehow during a freeze window, then surely that would be a case of fixing a system and thus not covered by the freeze window.
> Not to mention the undue bloat of CT logs.
I'm not sure what you mean by "CT logs", but I assume it's something to do with the certificate renewal automation. I can't see that you'd be creating GBs of logs that would be difficult to handle. Even a home-based selfhosted system would easily cope with certificate logs from running it hourly.
"CT Logs" are Certificate Transparency Logs, which are cryptographically provable append-only data structures hosted by trusted operators. Every certificate issued is publicly logged in two or more CT Logs, so that browsers can ensure that CAs aren't lying about what certs they have or have not issued.
Reducing the lifetime of certificates increases the number of certificates that have to be issued, and therefore the number of certs that are logged to CT. This increases the cost to CT operators, which is unfortunate since the set of operators is currently very small.
However, a number of recent improvements (like static-ct-api and the upcoming Merkle Tree Certs) are making great strides in reducing the cost of operating a CT log, so we think that the ecosystem will be able to keep up with reductions in cert lifetime.
Translation: Like any large bureaucracy, the certificate industry sees its own growth as a moral virtue, and no limits to the burdens which it should be free to impose on the rest of society.
"This change is being made along with the rest of the industry, as required by the CA/Browser Forum Baseline Requirements, which set the technical requirements that we must follow."
I don't follow. Why? Why not an hour? An SSL failure is a very effective way to shut down a site.
"you should verify that your automation is compatible with certificates that have shorter validity periods.
To ensure your ACME client renews on time, we recommend using ACME Renewal Information (ARI). ARI is a feature we’ve introduced to help clients know when they need to renew their certificates. Consult your ACME client’s documentation on how to enable ARI, as it differs from client to client. If you are a client developer, check out this integration guide."
Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
Do they track and publish the sites they take down?
They've been slowly moving the time lower and lower. It will go lower than 45 days in the future, but the reason why we don't go immediately to 1 hour is that it would be too much of a shock.
>So every small site that took the LE bait needs expensive help to stay online.
It's all automated. They don't need help to stay online.
>Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
I agree with the terminology "bait", because the defaults advocated by letsencrypt are horrible. Look at this guide [0].
They strongly push you towards the HTTP-01 challenge, which is the one that requires the most infrastructure (an HTTP webserver plus certbot) and is the hardest to set up. The best challenge type in that list is TLS-ALPN-01, which they dissuade you from: "This challenge is not suitable for most people."
And yet when you look at the ACME client for JVM frameworks like Micronaut [1], the default is TLS and it's the simplest to set up (no DNS access or external webserver needed). Crazy.
> the defaults advocated by letsencrypt are horrible
You’re completely misinterpreting the linked document. See what it says at the start:
> Most of the time, this validation is handled automatically by your ACME client, but if you need to make some more complex configuration decisions, it’s useful to know more about them. If you’re unsure, go with your client’s defaults or with HTTP-01.
This is absolutely the correct advice. For Micronaut, this will guide you to using TLS-ALPN-01, which is better than HTTP-01 if the software supports it. But for a user who doesn’t know what’s what, HTTP-01 is both the easiest and the most reliable, because, as they say, “It works with off-the-shelf web servers.” Typical web servers which don’t know about ACME themselves can be told “serve the contents of such-and-such a directory at /.well-known/acme-challenge/” which is enough to facilitate HTTP-01 through another client; but they don’t give you the TLS handshake control required to facilitate TLS-ALPN-01.
According to TFA LE already offers a "shortlived" profile that issues 6-day certs if you want to stress test your automation, or just gain the security advantages of rapid certificate turnover immediately.
The goal is to move to short lived certs to make the fragile system of revocation lists and public certificate logs unnecessary.
OTR still has static identities, with DH used to ratchet the ephemeral keys. The comparison would be more like Signal ditching Safety Numbers and Registration Lock for hourly SMS verification of new independent keys with no successor signing.
There's a fundamental divide in what certificates mean: modern CAs view WebPKI as a fancy vantage-point check -- cryptographic session tickets that attest to the actual root of trust, usually DNS. Short-lived certs (down to 10 minutes in Sigstore, 6 days trialed by LetsEncrypt) make perfect sense to them.
But DNS challenges are perfectly forgeable by whoever controls the DNS. This reduces authentication to "the CA says so" for 99% of users not running a private CA alongside the public one.
Transparency logs become impenetrable to human review, and even if you do monitor your log (most don't), you need a credible out-of-band identity to raise the alarm if compromised. The entire system becomes a heavier Convergence/DANE-like vantage point check, assuming log operators actually reverify the DNS challenges (I don't think one-time LetsEncrypt challenges are deterministic).
I think certificates should represent long-term cryptographic identity, unforgeable by your CA and registrar after issuance. The CA could issue a one-time attestation that my private root cert belongs to my domain, and when it changes, alert to the change of ownership.
I'm all for it -- it's hard to understate the extent to which LetsEncrypt has improved the WebPKI situation. Although the effective single-vendor situation isn't great, the "this is just something you only do via an automated API" approach is absolutely the right one. And certificate lifetimes measured in days work just fine with that.
The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times (or get it wrong, like renewing the cert, but continuing to use the old one until something is manually restarted), and the countless IT departments that still don't support any kind of API for their internal domains...
It's not single-vendor. The ACME protocol is also supported by the likes of GlobalSign, Sectigo, and Digicert.
You've got to remember that the reduction to a 45-day duration is industry-wide - driven by the browsers. Any CA not offering automated renewal (which in practice means ACME) is going to lose a lot of customers over the next few years.
Effectively single-vendor. I'm not aware of any ACME-compatible CAs that don't have pernicious limits on their free plans (and if there are, I'd love to hear!), and here in the EU we've even recently lost a rather big player...
Google Trust Services: https://pki.goog/
I'm glad there are free alternatives to Let's Encrypt, but a major PKI provider also being by far the largest browser provider feels like a disaster waiting to happen. The check on PKI providers doing the right thing is browsers including them (or not) in their trust stores. Having both sides of that equation being significantly controlled by the same entity fundamentally breaks the trust model of WebPKI. I'm sure Google has the best of intentions, but I don't see how that's in any way a workable approach to PKI security.
"multiple vendors, but only one of them is nice enough to give the product away for free" is not "effectively single-vendor".
The other CAs aren't prohibitively priced for anyone who has a business need for lots of certificates, in case Let's Encrypt disappears or goes rogue.
> other CAs aren't prohibitively priced
Not if you look at the per-cert pricing, but if you factor in the cost of "dealing with incompetent sales" and "convincing accounting to keep the contract going", they absolutely are.
When I was working with Digicert a decade ago, it was expensive, but they also had knowledgable support and with a wildcard cert, they would issue all sorts of 'custom duplicates' by request that were super handy. No incompetent sales, but certainly you do need to make sure accounting will pay.
Unfortunately DigiCert has gone way downhill.
We pulled all of our business after they failed to renew a cert with 30d(!!!) notice and got themselves stuck in a loop of useless org re-validations.
They were completely unresponsive and wasted dozens of hours of our time trying to rectify the situation before we pulled the plug and switched everything to ACME. I still can’t believe we wasted so much time and money with them.
It's not only the sellers of the other party. You have to work with the buyers of your company too. Stuff that costs no money and needs no contracts move faster than stuff that must be negotiated, agreed upon, paid for.
Doesn't ZeroSSL do this? acme.sh has been using it as the default for the last few years. As I understand it, it basically offers the same as Let's Encrypt.
https://zerossl.com/pricing/ suggests a 3-cert limit on the free tier, as well as a huge influx of expected spam...
The "ACME Certificates" are free and unlimited. The 3 free "ZeroSSL Certificates" are old-fashioned manual certs: this is a strictly more generous offering than Let's Encrypt!
> If you have a server or other device that requires automatic issuance of certificates and supports the ACME protocol, you can use our free 90-day ACME certificates on all plans.
I think that refers to something else (manual non-acme certificates)? Many other pages says it's unlimited. E.g. https://zerossl.com/documentation/acme
I believe that is 3 hosts not total certs.
Zerossl is integrated with Caddy by default and there’s no indication from Caddy that you would only be able to renew the cert twice before needing to cough up some money.
> The only things that continue to amaze me are the number of (mostly "enterprise") software products that simply won't get with the times
Yeah, no one's rewriting a bunch of software to support automating a specific, internet-facing, sometimes-reliable CA.
Yes it's ACME, a standard you say. A standard protocol with nonstop changing profile requirements at LE's whim. Who's going to keep updating the software every 3 months to keep up? When the WebPKI sneeze in a different direction and change their minds yet again. Because 45 will become 30 will become 7 and they won't stop till the lifetime is 6 hours.
"Enterprise" products are more often than not using internal PKI so it's a waste.
I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
In circles I'm running in, automatic certificate renewal has not caused a single problem over 7 years of using it, and whatever time was spent on setting it up, has paid many times over, both in saving effort on renewal, and in putting out fires when (not if) someone forgets to renew a certificate. You just have to be careful picking your automation — I haven't been impressed with certbot, for example.
Also, everything is using https now. Living in a low-income country, certificates were too expensive to use them where they weren't absolutely required, but not anymore. This is orthogonal to automation, I'm just pointing out that LE is not as demonic as you make it out to be.
I'm afraid enterprise users are on their own, probably approximately no-one else is interested in going back to the old ways of doing it. (Maybe embedded.)
Forcing automation would be fine if the default software package (certbot) was any good but from my experience certbot is simply not fit for purpose. Certbot doesn't support the industry standard PKCS#12 format, which makes it extremely brittle for anyone using a Java based webserver. Instead it uses the non-standard PEM format which requires conversion before usage. That conversion step breaks all the time and requires manual intervention. It's ridiculous.
PEM is very standard. Calling `openssl pkcs12` also should not be hard; IDK about certbot, but there is a hook for acmetool (which I use) that does just that for you: https://github.com/dlitz/acmetool-pkcs12-hooks
PEM is standardized in RFC 7468, from 2015 [1]. PEM has been an industry standard for a decade.
[1]https://datatracker.ietf.org/doc/html/rfc7468
I hear ya. I’m also not fond of certbot and other existing clients.
The best solution I’ve found so far was to implement a custom cert manager using the formidable acmez library.
at this point PEM is more standard and prevalent than pkcs#12
> I would like to see the metrics
Well, I could regale you with my anecdotes on how all my 'grab a LetsEncrypt cert on Docker deploy and whenever remaining lifetime goes below 75%' services have not had a single TLS-related outage in at least a year, and how we just had a multi-day meltdown caused by a load-bearing cert that everyone forgot about expiring, but I doubt you'll be interested.
I'm not here to take away your right to set up a few more 5-year+ internal CAs (or, hey, how about an entire self-signed hierarchy? can't be that hard...), and neither is LetsEncrypt. But on the Big Bad Internet (where Google is the leading cause of More Security, and LetsEncrypt is merely the leading vendor catering to that), things have moved on slightly.
> I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
I would also like to see those metrics, because I strongly suspect the cost is dramatically in favor of automation, especially when you consider the improved security posture you get from short lived certificates. My personal experience with Let's Encrypt has been that the only hassle is in systems that don't implement automation, so I get the worst of both worlds having to effectively manually renew short lived certificates every 90 days.
CRL based revocation is largely a failure, both because implementation is a problem and because knowing when to revoke a certificate is hard. So the only way to achieve the goal of security in WebPKI that's resilient to private key compromise is short-lived certificates. And the only way to implement those is with automated renewal.
The change in validity does not in any way alter the protocol itself. As mentioned in the linked blog post: if you've already got automated cert renewal, it'll almost certainly require zero change.
After all, the logical approach is to schedule a daily cron job for your renewal process, which only contacts the server when the current cert has less than X days of validity remaining. Scheduling a one-off cron job every, say, 60 days would be extremely error-prone!
With regards to future reductions: the point is to force companies to automate renewal, as they have proven unable to renew in time during incidents due to dozens of layers of bureaucracy. 45 days means one renewal a month, which is probably sufficient for that. Anything below 7 days isn't likely to happen, because at that point the industry considers certificate revocation unnecessary.
Internal PKI is not relevant here. Anyone with weird internal cert use shouldn't be using public certs anyways, so moving those people to certs backed by self-signed internal CAs is a win. They are free to use whatever validity they want with those.
> the point is to force companies to automate renewal
Cool. I'm a small-time webmaster with a couple of hobby sites with no more than a handful of visitors. Why do I need to set up automation to renew certs every 45 days, too?
For the same reasons as forcing companies to do it.
1. Revocation is a clusterfuck. Microsoft is currently failing to revoke tens of thousands of defective certificates for over seven months (the Baseline Requirements state that they should have been revoked within five days). Entrust was distrusted over repeated failures to revoke. Many TLS clients don't even bother to check revocation lists, or if they do, they do it in a "fail-open" manner where inability to load the list does not prevent access, which largely defeats the purpose. Short certificate lifetimes make revocation less important as both defective and compromised certificates age out rapidly, but without automation any meaningful reduction in lifetime directly translates to an increase in required effort, which makes it a balancing act. With automation, however, reducing lifetimes is effectively free.
2. Automation is mostly a one-time cost. Manual renewal is a repeating cost. I started using LE to add HTTPS to my personal site when it first became available to the public in 2016 and then started using it for my work systems when our GoDaddy certs came up for renewal a bit less than a year later. Since then out of roughly 50 systems pulling around 70 certs I've had to "babysit" two of them once each, both because they were using wildcard certs which I was a very early adopter of and something about how I had set them up was technically wrong but worked in the initial version. Compare this to the "old world" where every couple of years I had to generally engage vendor support on both our server platforms and our CA because it had been long enough that things had changed on both sides so doing things manually required learning new things. Mandating short lifetimes is good for everyone. This is part of why LE has used a short lifetime since the beginning, to strongly encourage people to not try to do it without automation.
3. It's super easy. Lots of platforms have ACME support built in. For those that don't, packages like acme.sh are damn close to universal. If your system is able to expose a web server on port 80 it's basically trivial. If it's not, it's slightly harder. There's just not a good reason not to do it other than stubborn insistence. "Handcrafted TLS Certificates" just don't matter.
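To put numbers on "basically trivial", first issuance and ongoing renewal with acme.sh is roughly the following; the domain, webroot, and file paths are placeholders:

    # Issue via HTTP-01; acme.sh installs its own daily cron job
    # for renewals when it is first installed.
    acme.sh --issue -d example.com -w /var/www/html

    # Copy the cert where the web server expects it and register
    # a reload command to run after every successful renewal.
    acme.sh --install-cert -d example.com \
        --key-file /etc/nginx/ssl/example.com.key \
        --fullchain-file /etc/nginx/ssl/example.com.pem \
        --reloadcmd "systemctl reload nginx"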
I'm talking about managing two certificates so I can share a static site with a handful of friends. Each one takes about 10 minutes a year to update.
Adding automation means I have to set up a process that I have to check up on at least once every 6.5 weeks to make sure it's still working.
> I'm talking about managing two certificates so I can share a static site with a handful of friends. Each one takes about 10 minutes a year to update.
The personal site I started with is one certificate for a static site that I use for basically the same thing. It took me 10 minutes to set up in 2016 and I haven't thought about it for a second since then. It just works.
> Adding automation means I have to set up a process that I have to check up on at least once every 6.5 weeks to make sure it's still working.
Assuming you're using a common automation package and not rolling your own it should be included. I personally use acme.sh which can be configured to use email, XMPP, or HTTP(S) requests with prebuilt templates for most popular webhooks, as well as supporting fully custom notification scripts. I get an email every time it attempts a renewal that tells me if it succeeded or failed. Again one-time setup, easy, did it once literally almost a decade ago and haven't had to think about it since. As I pointed out in my previous post I did once have two of my systems fail to renew, I was notified, and I fixed it within a few minutes of seeing the emails.
Let's Encrypt also used to send their own emails if a cert was expiring but they stopped doing that this year for a variety of reasons: https://letsencrypt.org/2025/01/22/ending-expiration-emails
Now that I'm actually thinking about the topic: these days my work systems have a platform that monitors for periodic updates and alerts me if they don't come in, so I should probably reconfigure my notifications to use that instead of email and clean up my team's inboxes a bit by no longer receiving a couple dozen "everything's OK" mails every couple of months (or soon, every couple of weeks).
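For anyone wanting the same setup: the acme.sh notification config is a one-time command along these lines (hook and variable names as I recall them from acme.sh's notify documentation, so verify against your version):

    # Send a mail report after each renewal attempt; level 2
    # covers both successful renewals and errors.
    export MAIL_FROM="acme@example.com"
    export MAIL_TO="ops@example.com"
    acme.sh --set-notify --notify-hook mail --notify-level 2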
I'm a small-time webmaster and I haven't "set up" any automation: for my shared-hosting sites, the host has it built in; and for my self-hosted sites, the web server has it built in.
The problem is that this breaks down if you don't want to leak any obscure subdomains you might be using via CT-logs – shared hosting rarely supports DNS-based certificate renewals for wildcard certificates, and even less so for domains hosted by an external registrar.
(Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
> (Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
That's exactly what the new DNS-PERSIST-01 challenge is for, being able to authorize a specific system or set of systems to request certs for a given FQDN and optionally subdomains without having to give that system direct control over your DNS as the existing DNS-01 challenge requires.
Use Caddy and it will do it all for you automatically https://caddyserver.com/docs/automatic-https
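For reference, the entire Caddy config for an HTTPS site can be as small as this; the domain and path are placeholders, and certificates are obtained and renewed automatically just because a domain is named:

    # Caddyfile: no cert paths, no cron job, no ACME client to
    # configure -- HTTPS is implied by the site address.
    example.com {
        root * /srv/www
        file_server
    }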
You just have to set up your web server, that you need anyway to serve your hobby sites.
Is the certificate you use on your website any different to that on google.com? Does/could a browser know this and act differently?
> I would like to see the metrics on how much time and resources are wasted babysitting all this automation vs. going in and updating a certificate manually once a year and not having to worry the automation will fail in a week.
I have multiple systems. Legacy that I've inherited, and modern that are able to automatically update their certs.
The automated updates require almost zero maintenance, there was a week a couple of years ago when I had to switch some configuration for a new root cert.
The legacy system requires manual installation and multiple weeks of my time every year because of the number of environments, and since generation of the certs requires me to file a request for someone to manually create it, they invariably typo something and it has to be redone everywhere.
So multiple engineers, over multiple weeks?
Manual process at a guess is £50k pa, while the automated is close to an annual £0?
All the enterprise software needs to do is create an API for configuring the certificate in the product. Then they can integrate Certbot or one of the many other ACME solutions, or let the customer do it (they are enterprises after all).
I'll happily take failed automation with errors I can fix over being totally screwed when we find out the guy who did it every year left the company this year and nobody else knows anything about it.
> A standard protocol with nonstop changing profile requirements at LE's whim. Who's going to keep updating the software every 3 months to keep up?
It really doesn't change that often. And whether this is a "breaking" change is something that's debatable (you shouldn't hard-code the cert lifetime, but I suspect many programs do, so de-facto it may or may not be a breaking change).
If you look at the Go implementation for example (https://github.com/golang/crypto/commits/master/acme), then there haven't been any major changes since it was first written in 2018: just bugfixes and basic maintenance tasks. Maybe I missed a commit but it certainly doesn't require 3-month update cycles. You can probably take that 2018 code and it'll work fine today. And since it doesn't hard-code the 90 days it will continue to work in 2028 when this change is made. So it's more "you need to keep updating the software every decade to keep up".
There's a slew of RFC documents that cover these related protocols so imagine that now means "requests for compliance".
> The key advantage of [DNS-PERSIST-01] is that the DNS TXT entry used to demonstrate control does not have to change every renewal.
> We expect DNS-PERSIST-01 to be available in 2026
Very exciting!
https://datatracker.ietf.org/doc/html/draft-sheurich-acme-dn...
Big news for both the lazy homelab admin that can set a TXT once and ultimately be more secure without spraying DNS Zone Edit tokens all over their infra AND for the poor enterprise folks that have to open a ticket and wait 3 weeks for a DNS record.
It will help that side of the process (although, as a sibling has noted, you can CNAME your way into a better-controlled update service), but the challenge of automating cert changes for various non-HTTP services, including various virtual or physical boxes with funky admin interfaces, remains. I don't expect that vendors will do much about that, and it will end up on admins' plates, as usual. There will be much grumbling, but fewer solutions.
There are quite a few solutions. For very funky systems, you can use a cert tied to a private CA; then you control the cert lifetimes. Or place them behind a reverse proxy that is easier to control.
Why don't providers offer DNS API keys restricted to TXT records?
https://dns.he.net/ does. Each record can have its own secret. You can also use this for things like A records to do dynamic DNS.
desec.io allows you to create (through the api) tightly-scoped tokens that can only update the "_acme-challenge.subdomain.example.com" domain needed for DNS-01 challenges.
I switched to them from cloudflare dns for that specific functionality and it works great.
Very good question. On e.g. AWS one could probably do something like that with a custom Lambda…? Still, would be very convenient if there was some IAM rule for that.
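For what it's worth, I believe Route 53 did eventually gain condition keys for exactly this, so a policy along these lines should scope a key to a single TXT record (the condition key names are from memory; verify against current AWS docs before relying on them):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "route53:ChangeResourceRecordSets",
        "Resource": "arn:aws:route53:::hostedzone/Z123EXAMPLE",
        "Condition": {
          "ForAllValues:StringEquals": {
            "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
            "route53:ChangeResourceRecordSetsNormalizedRecordNames": ["_acme-challenge.example.com"]
          }
        }
      }]
    }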
Very true. I have been in both roles.
A perhaps non-obvious option is to CNAME (or NS) the `_acme-challenge` record from your main zone to another zone you can control better and that can't affect production traffic the way the main zone could. `acme-dns` is a neat little tool for exactly this: it has an HTTPS API for your ACME client to request a cert from, and a DNS server to respond to DNS-01 challenges from your provider.
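Concretely, the delegation is one static record in the production zone; everything dynamic then lives in the acme-dns zone (all names below are placeholders):

    ; Set once in the main zone and never touched again. DNS-01
    ; lookups for example.com follow the CNAME into the acme-dns
    ; zone, which the ACME client updates through its HTTPS API.
    _acme-challenge.example.com. IN CNAME d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.acme-dns.example.org.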
Yeah, I have all my _acme-challenge records as their own zone so that BIND can auto-increment the serial number without going through the pain of locking/unlocking the entire domain and hoping you don't end up with stale data that stops syncing.
That said, I like that the current system proves that you have control of the domain at the time of renewal, and I'm not sure how setting a one-off token would achieve the same.
Absolutely, this works well. Where it's approved :)
Yep, being able to “redirect” the “challenge record” is very handy in some cases. Did a writeup here with some examples of how it can be arranged: https://hsm.tunnel53.net/article/dns-for-acme-challenges/
This replaces an anonymous token with a LetsEncrypt account identifier in DNS. As long as accounts are not 1:1 to humans, that seems fine. But I hope they keep the other challenges.
I really would have felt better with a random token that was tied to the account, rather than the account number itself. The CA side can of course decide to implement it either way, but all the examples are about the account ID.
That seems worth suggesting to the acme working group mailing list, if it hasn't already been discussed there.
I don't expect we'll ever remove the other validation methods, and certainly have no plans to do so.
There are pros and cons of various approaches.
Accounts are many-to-one to email addresses. Each of my servers has an individual account attached to the same email address.
Thank god. The only remaining failure mode I've seen with LE certs recently is an API key used to manipulate DNS records for the DNS-01 challenge via some provider (Cloudflare etc.) expiring or being disabled during improper user offboarding.
The year is 2055, certificate lifetimes are measured in picoseconds. The Authority is still not pleased with your automation.
Since we're on the topic of certificates, my app (1M+ logins per day) uses certificate pinning with a cert that lasts for one year, because otherwise it would be a nightmare to roll the cert multiple times in production. But what would be the "modern" way to do smart and automated certificate pinning, now that short-lived certs are becoming the trend?
Don't. Don't pin to public certificates. You're binding your app to third-party infrastructure beyond your control. Things change, and often. Note that pinning to a root or intermediate seems 'sensible' - but it isn't. Roots are going to start changing every couple of years. Issuing/intermediate CAs will be down to 6 months, and may even need to be randomised so when you request a new cert, there's no guarantee it'll be from the same CA as before.
Don't pin to certs you don't control.
This. Have you thought about what happens when your CA needs to revoke your certificate because of some issue? Can you even realistically re-pin before it's revoked (hours to days)?
The certificates will expire, but (as far as I'm aware), you're still allowed to use the same private key for multiple certificates, so as long as you pin to the public key instead of to the certificate itself, you should be fine.
The real modern way to do certificate pinning is to not do certificate pinning at all, but I'm sure that you've already heard this countless times before. An alternative option would be to run your own private CA, generate a new public/private keypair every 45 days, and generate certificates with that public key using both your private CA and Let's Encrypt, and then pin your private CA instead of the leaf certificates.
> The certificates will expire, but (as far as I'm aware), you're still allowed to use the same private key for multiple certificates, so as long as you pin to the public key instead of to the certificate itself, you should be fine.
It's allowed, but the intent of short cert expiration is to also have short private key lifetimes, so that point-in-time key compromises have time-limited forgery potential. If the same key is used for a long period, someone who got the key once can download your public certificate and use that with the key they have.
> The real modern way to do certificate pinning is to not do certificate pinning at all, but I'm sure that you've already heard this countless times before. An alternative option would be to run your own private CA, generate a new public/private keypair every 45 days, and generate certificates with that public key using both your private CA and Let's Encrypt, and then pin your private CA instead of the leaf certificates.
The tricky thing here is you need to serve your private-CA-signed cert to pinned clients and a WebPKI cert to browser clients. If you want cert pinning for browsers (which, afaik, has very limited availability), you should pick at least two CAs that you are pretty sure will continue to issue you certs and won't be delisted; bonus if you're also confident they won't issue certs for your domains without your explicit consent.
I would also recommend two private CAs, if you're doing private CAs. Store them physically separate, so if one is compromised or becomes unavailable, you can use the other.
I still think 'don't pin' is the best advice, but absolutely it should never be done to public CAs. I agree with your point about different endpoints, but maybe one endpoint for pinned apps, separate to your browser-based sites/endpoints.
I think the suggestion of pinning the public key and keeping the same private key across certs is the best option. But if you don't want that, perhaps this is a (high complexity, high fragility) alternative:
- Make sure your app checks that enough trusted embedded Signed Certificate Timestamps are present in the certificate (web browsers and the iOS and Android frameworks already do this by default).
- Disallow your app to trust certificates that are more recently requested than N hours. This might be hard to do.
- Set up monitoring to the certificate transparency logs to verify that no bad actor has obtained a certificate (and make sure you are always able to revoke them within N hours).
- Make sure you always have fresh keys with certificates in cold storage older than N hours, because you can't immediately use newly obtained certificates.
Pinning the intermediate CA should work. Alternatively, calculate the cost of updating the cert pinning mechanism if it's custom and compare it to paid, 1 year certificates (though those will go away eventually too).
On the other hand, if you're using an app specific server, there's no need for you to use public certificates. A self-generated one with a five or ten year validity will pin just as nicely. That breaks if you need web browsers or third parties to talk to the same API, of course.
Please don't suggest pinning a publicly-trusted intermediate. The CA may change which intermediate they're using at any time for any reason with no warning, and then the app which pinned that intermediate is hosed.
It depends what intermediate you pin, but the CA can also choose to change the root certificate they use at any time, like Let's Encrypt did in 2024 when the CA that signed their cross-signed certificate was about to expire. Plus, depending on where you get your certificates from, the reseller certificate may already be an intermediate rather than its own root.
You should probably pin the highest certificate in the chain that's going to stay current for as long as possible. Or, if the goal is just "I don't want people snooping around in my app's traffic" rather than "I want to protect against a rogue CA being used to hijack my customers' traffic", reuse the private key in the CSR and pin that; it'll get the job done.
It'll be tough when ICAs rotate every 5/6 months and may even randomise.
You can prepare CSRs with new public keys years in advance. It'll take some certbot/ACME scripting to use them instead of autogenerating new ones on the fly, but that way you can pin your future certificates. Add pins as you prepare new CSRs and drop them as the certificates expire, and depending on the size of the list you choose you should be good for months or years without app updates.
Plus, if you do any key pinning, you'd probably do well to also pin a backup public key you haven't used in case your CA/infra collapses and you quickly need to redo your HTTPS setup.
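A sketch of that pre-generated-CSR flow with certbot, under the assumption that you're happy driving it from a script (all paths and names are placeholders; `certbot certonly --csr` writes the certificate wherever you point it):

    # Generate next period's key and CSR ahead of time, so its
    # SPKI hash can be pinned in the app before it is ever used.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout future.key -out future.csr -subj "/CN=example.com"

    # Later, renew against the pre-pinned CSR instead of letting
    # the client autogenerate a fresh key.
    certbot certonly --csr future.csr --webroot -w /var/www/html \
        --cert-path future.crt --fullchain-path future-chain.pem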
You can, but it's still dangerous. You don't have control over whether those certs are revoked or keys blocklisted.
It’s best to simply not use public certs for pinning, if you really must do it.
Pin the cert authority instead?
This, and lock the account ID via CAA record.
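That's RFC 8657; the record ends up looking roughly like this (the account URL is a placeholder for your own):

    ; Only Let's Encrypt may issue for this name, and only via
    ; the one named ACME account (RFC 8657 accounturi binding).
    example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345678"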
Your app could download the new certificate from an endpoint which returns the new certificate signed with the old one that you currently pin?
Pin the public key (SPKI) or CA.
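Computing the SPKI pin from an existing cert is a standard openssl pipeline, e.g.:

    # Extract the public key, DER-encode it, hash it, and base64
    # the digest -- the resulting string is what the app pins.
    openssl x509 -in cert.pem -pubkey -noout \
        | openssl pkey -pubin -outform der \
        | openssl dgst -sha256 -binary \
        | openssl base64

Because the pin covers only the key, the same value stays valid across renewals as long as the private key is reused.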
Forgetting the obvious security advantage for the moment, I've found this to actually be convenient that the lifetimes are rather short. I'm not disciplined when it comes to setting up my homelab projects, so in the past sometimes I'd just LE it and then worry about it when renewal failed. My family is the only consumer so who cares.
But then they set some shorter lifetime, and I was forced to set up automation and now I've gotten a way of doing it and it's pretty easy to do. So now I either `cloudflared` or make sure certbot is doing its thing.
Perhaps if they'd made that more inconvenient I would have started using Caddy or Traefik instead of my good old trusty nginx knowledge.
> Acceptable behavior includes renewing certificates at approximately two thirds of the way through the current certificate’s lifetime.
So you can start renewing with 30d of lifetime remaining. You probably want to retry once or twice before alerting, so let's say 28d between alert and expiry.
That seems somewhat reasonable, but it's basically at the lower margin of what I'd consider so. I feel like I should be able to walk away from a system for a month with no urgent maintenance needed, and 28d is really cutting it close. I think the previous 60d was generous, but that is probably a good thing.
I really hope they don't try to make it shorter than this. Because I really don't want to worry about certificate expiry during a vacation.
Alternatively, they could make the acceptable renewal window much larger: for example, 32d certificates where it's acceptable to start renewing after 24h. I don't really care how often my automation renews them; what matters is the time frame between being alerted of a renewal failure and expiry.
“I really hope they don’t try to make it shorter than this. Because I really don’t want to worry about certificate expiry during a vacation.”
You might want to consider force-renewing all your certs a few days before your vacation. Then you can go away for over 40 days. (Unless something else breaks…)
Might not be a bad idea if it's within their rate-limit rules, but I'd really rather not have to take a manual action before leaving a system alone for a while, and then worry about whether I actually managed to force-renew every single cert.
If you forget a cert then you’re no worse off than the case where the automation fails during the vacation.
You could also run a simple program that checks each site and tells you the remaining lifetime of the cert used, to verify that you didn’t miss any cert.
It all depends on the scale of your operations, of course.
I’m maintaining a server with Let’s Encrypt certs for a B2B integration platform. Some partner systems still can’t just pin the CA and instead require a manual certificate update on their side. So every 90 days we do the same email ping-pong to get them to install the new cert — and now that window is getting cut in half.
Hopefully their software stack will be able to automate this by 2028.
CAs are gonna start rotating more frequently soon, and you may even see randomisation. Pinning to public certs is a real no-no.
Cert lifetimes are such a burden. I wanted to provide pre-configured server examples of my WebRTC project, something that was download-and-run without any prior knowledge (an important point), which users could access from their LAN, e.g. to test the examples from their phones (not via the useless localhost exemption that exists for secure contexts), for which a self-signed cert embedded in the examples was fine. New users could run them, and new concepts (such as security and certificate management in production apps) could be learned at an appropriate time.
Until web browsers started to believe that no, that was too much of a convenience, so now long-expiration certs get rejected. What's the proposed solution from the "industry"? To run a whole automation pipeline just to update a file in each example folder every few months? Bonkers. These should be static examples; there's no reason to have to update them any more often than every few years, at most.
Wouldn't it be better to bundle a script that generates a cert instead of the cert itself?
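Something along these lines could ship in each example folder in place of a baked-in cert; a minimal sketch assuming a reasonably recent OpenSSL (all names are placeholders):

    # gen-cert.sh -- mint a fresh self-signed cert on first run,
    # sidestepping browser limits on certificate lifetime.
    openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
        -keyout key.pem -out cert.pem -days 365 -nodes \
        -subj "/CN=webrtc-example.local"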
If I understand it right they bundle a publicly trusted shared cert and want to allow their users to get running on vanilla devices without having to provide a domain.
A certificate is a binding of a cryptographic key, along with an attestation of control of a DNS record(s) at a point in time. DNS changes frequently. The attestation needs to be refreshed much more frequently to ensure accuracy.
I think next we should set the maximum lifetime of public CAs to double this value. Then we can also protect against failures there.
How do people here deal with distributed websites? I’m currently issuing one certificate on my machine and then Ansible-ing it into all the servers. I could issue one certificate for each server, but then at any given time I’d have dozens of certs for the same domain, and all must be valid. That doesn’t sound right either.
Organizations with many frontends/loadbalancers all serving the same site tend to adopt one of four solutions:
- Have one node with its own ACME account. It controls key generation and certificate renewal, and then the new key+cert is copied to all nodes that need it (see the sketch after this list). Some people don't like this solution because it means you're copying private keys around your infrastructure.
- Have one node with its own ACME account. The other nodes generate their own TLS keys, then provide a CSR to the central node and ask it to do the ACME renewal flow on their behalf. This means you're never copying keys around, but it means that central node needs to (essentially) be an ACME server of its own, which is a more complex process to run.
- Have one ACME account, but copy its account key to every node. Have each node be in charge of its own renewal, all using that shared account. This again requires copying private keys around (though this time it's the ACME key and not the TLS key).
- Give every node its own ACME account, and have each node be in charge of its own renewal.
The last solution is arguably the easiest. None of the nodes have to care about any of the others. However, it might run into rate limits; for example, LE limits the number of new account registrations per IPv6 range, so if you spin up a bunch of nodes all at once, some of them might fail to register their new accounts. And if your organization is large enough, it might run into some of LE's other rate limits, like the raw certificates-per-domain limit. Any of the above solutions would run into that rate limit at the same time, but rate limit overrides are most easily granted on a per-account basis, so having all the nodes share one account is useful in that regard.
Another factor in the decision-making process is what challenge you're using. If you're using a DNS-based challenge, then any of these solutions work equally well (though you may prefer to use one of the centralized solutions so that your DNS API keys don't have to live on every individual node). If you're using an HTTP-based challenge, you might be required to use a centralized solution, if you can't control which of your frontends receives the HTTP request for the challenge token.
Anyway, all of that is a long-winded way to say "there's no particularly wrong or right answer". What you're doing right now makes sense for your scale, IMO.
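As one concrete sketch of the first pattern above: a certbot deploy hook on the renewing node can push the fresh key+cert out to the other frontends. Hostnames, paths, and the reload command are placeholders; RENEWED_LINEAGE is the environment variable certbot sets for deploy hooks:

    #!/bin/sh
    # /etc/letsencrypt/renewal-hooks/deploy/distribute.sh
    # Runs after each successful renewal; -L dereferences the
    # symlinks in the live/ directory so real files are copied.
    for host in web1 web2 web3; do
        rsync -aL "$RENEWED_LINEAGE/" "$host:/etc/ssl/site/"
        ssh "$host" "systemctl reload nginx"
    done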
Thank you for writing it!
> What you're doing right now makes sense for your scale, IMO
Absolutely. I use DNS validation, and I’m fine running it manually every quarter, but I’m sure I’ll be quite annoyed to have to do it every month.
By "distributed websites" you mean multiple webservers for one FQDN? Usually TLS termination would happen higher up the stack than on the webservers themselves (reverse proxy, L7 load balancer, etc) and the cert(s) would live there. But if your infrastructure isn't that complicated then yes, the happy path is have each webserver independently handle its own certificate (but note your issuance rate limits, 5 certs per week for the exact same hostname[1]).
[1] https://letsencrypt.org/docs/rate-limits/#new-certificates-p...
In my case it's multiple servers handling the same FQDN. They are load balanced via DNS, or use DNS anycast in some situations. In any case, my servers are the ones terminating TLS.
The relevant section of the CA/Browser Forum requirements that resulted in this change is here: https://cabforum.org/working-groups/server/baseline-requirem...
That's the decision. Do you know the reasoning?
The primary reason: Revocation doesn’t work for webscale (OCSP is now obsolete). So instead, shorter cert lifetimes.
PS. Saw this insightful comment over on Lobsters:
“One quantitative benefit is that the maximum lifetime of certificates sets a bound on the size of certificate revocation lists. John Schanck has done heroic work on CRLite at Mozilla to compress CRLs, and the reduction from 398 days to 47 days further shrinks them by a factor of more than 8. For Let’s Encrypt the current limit is 90, so a more modest but still useful factor of 2.”
https://lobste.rs/s/r2bamx/decreasing_certificate_lifetimes_...
One would hope they're also increasing rate limits along with this, but there's no indication of that yet.
> Up to 50 certificates can be issued per registered domain (or IPv4 address, or IPv6 /64 range) every 7 days. This is a global limit, and all new order requests, regardless of which account submits them, count towards this limit.
This is hard to deal with when you have a large number of subdomains and you'd rather (as per their recommendations) not issue SAN certificates with multiple subdomains on them.
Note that renewing certificates is generally exempt from rate limits: https://letsencrypt.org/docs/rate-limits/#limit-exemptions-f...
We are working on further improvements to our rate limits, including adding more automation to how we adjust them. We're not ready to announce that yet.
We wanted to get this post out as soon as we'd decided on a timeline so everyone's on the same page here.
Certificates that look like renewals -- for the same set of names, from the same account -- are exempt from rate limits. This means that renewing (for example) every 30 days instead of every 60 days will not cost any rate limit tokens or require any rate limit overrides.
As someone who works at a company that has to manage millions of SSL certificates for IoT devices in extremely terrible network situations, I dread this.
One of the biggest issues is handling renewals at scale, and I hate it. Another increasing frustration is that challenges via DNS are not quick.
Are these IoT devices expected to be accessible via a regular Web browser from the public Internet? Does each of them represent a separate domain that needs a separate certificate, which it must not share with other similar devices?
I would strongly suggest that these certs have no reason to be from a public CA and thus you can (and should) move them to a private CA where these rules don't apply.
For those who want to solve the problem by throwing money at it, one can probably buy a solution for this. I'm thinking of stuff like AWS IoT Core; I would guess there are other vendors in that space too.
I'm sure this is for good reasons, but as someone that maintains a lot of ssl certificates, I'm not in love with this change. Sometimes things break with cert renewal, and it sometimes takes a chunk of time to detect and then sit down to properly fix those issues. This shortens the amount of time I will have to deal with that if it ever comes up (which is more often than you would expect), and increases the chances of running into rate limit issues too.
What kind of issues do you usually face?
Without getting into specific stuff I've run into, automated stuff just, breaks.
This is a living organism with moving parts and a time limit - you update nginx with a change that breaks .well-known by accident, or upgrade to a new version of Ubuntu and suddenly some dependency isn't loading correctly, or that UUID generator you depended on to generate the name for the challenge doesn't get loaded, or certbot becomes obsolete because of some API change and you can't upgrade to the latest because the OS is older and you installed it from the package manager.
You eventually see it in your exception monitoring or when an ssl monitor detects the cert is about to expire. Then you have to drop that other urgent thing you needed to get done, come in and debug it, fix it, and re-issue all the certs at the rate limit allowed. That's assuming you have that monitoring - most sites probably don't.
If you detect that issue with 1/3 of the cert left, you will now have 15 days to figure that out instead of 30. If you can't finish it in time, or you don't learn about it in time, the site(s) hard fail on every web browser that visits and you've effectively got a full site outage until you repair it.
So you discover it's because of certbot not working with a new API change, and you can't upgrade with the package manager. Now you need to figure out how to compile it from source, but it doesn't like the python that is currently installed and now you need to install that from source, but that version of python breaks your python web app so you have to figure out how to migrate your app to that version of python before you can do that, and the programmer that can do that is on a week long whitewater rafting trip in Idaho.
Aside from all that, what happens if a hacker manages to wreck the let's encrypt infra so badly they need 2 weeks to get it back online? The internet archive was offline for weeks after a ddos attack. The cloudflare outage took one site of mine down for less than 10 minutes, it's not hard to imagine a much worse outage for the web here.
AKA the real world: a place where you have older appliances, legacy servers, contractual constraints, and better things to do than watch a nasty yearly ritual become a nasty monthly ritual. I need to make sure SSL is working in a bunch of very heterogeneous stuff, but I'm not in a position to replace it and/or pick an authority with better automation. I just suck it up and dread when a "cert day" looms closer.
Sometimes these kind of decisions seem to come from bodies that think the Internet exists solely for doing the thing they do.
Happens to me with the QA people at our org. They behave as if anything happens just for the purpose of having them measure it, creating a Heisenberg situation where their incessant narrow-minded meddling makes actually doing anything nearly impossible.
The same happens with manual processes done once a year - you just aren't aware of it until renewal.
Consider the inevitable need for immediate renewal due to an incident. Would you rather have this renewal happen via a fast, automated and well-tested process, or a silently broken slow and manual one?
The manual process was annoying but it wasn't complicated.
You knew exactly when it was going to fail and you could put it on your calendar to schedule the work, which consisted of an email validation process and running a command to issue the certificate request from your generated key.
The only moving part was the issued certificate, which you copied and pasted over before reloading the server. There are a lot fewer things to go wrong in that process, which at one point I could do once every two years, than in a really complicated automated background task that has to happen within 15 days.
I love short duration automated free certs, but I think we really need to have a conversation about how short we can make them before we make it so humans no longer have the time required to fix problems anymore.
“Aside from all that, what happens if a hacker manages to wreck the let’s encrypt infra so badly they need 2 weeks to get it back online?”
There are other CAs that offer certs via ACME. For example, Google Trust Services.
There are also alternatives to Cloudflare and AWS; that didn't stop their outages from taking down pretty much the entire internet. I'm not sure what your point is: pretty much everybody is using Let's Encrypt, and it will very much be a huge outage event for the web if something were to go seriously wrong with it.
One key difference: a cert is a "pickled" thing; it's stored and kept until it is successfully renewed. So if you attempt to renew at day 30 and LE is down, you still have about two weeks to retrieve a new cert. Hopefully LE will get back on their feet within that time; otherwise you have Google, ZeroSSL, etc., where you can fetch a replacement cert.
Forced changes for one.
This is too short and the justification provided is flimsy at best.
I predict that normal people will begin to get comfortable with ignoring SSL errors, even more than they already are. Perhaps we will see the proliferation of HTTPS-stripping proxies too.
I’m imagining that xkcd meme about internet infrastructure and one of the thin blocks holding the whole thing up being LE.
Is there any good argument for short lifetimes? The only argument I know of is that short lifetimes are supposedly better in case the key gets compromised, but I disagree. If the key can be compromised once it can be compromised again when it renews; the underlying cause of compromise doesn’t go away. NIST stopped recommending forced password rotation for this reason, it’s pseudosecurity.
> DNS-PERSIST-01
Won't that introduce new security problems? Seems like a step back.
fwiw, I host a free little service that can check certs once daily and email you if they're about to expire - https://ismycertexpired.com/check?domain=ismycertexpired.com
I understand all of the benefits with regards to compromise and pushing automation, but I really hope they don't push the maximum lower.
It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.
Not to mention the undue bloat of CT logs.
> It is already getting dangerously close to the duration of holiday freeze windows, compliance/audit enforced windows, etc.
How do those affect automated processes though? If the automation were to fail somehow during a freeze window, then surely that would be a case of fixing a system and thus not covered by the freeze window.
> Not to mention the undue bloat of CT logs.
I'm not sure what you mean by "CT logs", but I assume it's something to do with the certificate renewal automation. I can't see that you'd be creating GBs of logs that would be difficult to handle. Even a home-based selfhosted system would easily cope with certificate logs from running it hourly.
"CT Logs" are Certificate Transparency Logs, which are cryptographically provable append-only data structures hosted by trusted operators. Every certificate issued is publicly logged in two or more CT Logs, so that browsers can ensure that CAs aren't lying about what certs they have or have not issued.
Reducing the lifetime of certificates increases the number of certificates that have to be issued, and therefore the number of certs that are logged to CT. This increases the cost to CT operators, which is unfortunate since the set of operators is currently very small.
However, a number of recent improvements (like static-ct-api and the upcoming Merkle Tree Certs) are making great strides in reducing the cost of operating a CT log, so we think that the ecosystem will be able to keep up with reductions in cert lifetime.
One way to protest against this would be to run our cert revoke&update script every hour.
Not trying to diss on Letsencrypt, but I'm open to suggestions on paid cert providers.
Google Trust Services https://pki.goog/
> Decreasing Certificate Lifetimes to 45 Days
45 days? So long? Who needs such long-lived certificates? A couple of milliseconds should be enough. /s
Translation: Like any large bureaucracy, the certificate industry sees its own growth as a moral virtue, and no limits to the burdens which it should be free to impose on the rest of society.
This is all bullshit...
"This change is being made along with the rest of the industry, as required by the CA/Browser Forum Baseline Requirements, which set the technical requirements that we must follow."
I don't follow. Why? Why not an hour? An SSL failure is a very effective way to shut down a site.
"you should verify that your automation is compatible with certificates that have shorter validity periods.
To ensure your ACME client renews on time, we recommend using ACME Renewal Information (ARI). ARI is a feature we’ve introduced to help clients know when they need to renew their certificates. Consult your ACME client’s documentation on how to enable ARI, as it differs from client to client. If you are a client developer, check out this integration guide."
Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
Do they track and publish the sites they take down?
LE bait. Wow.
To your actual content, unless you did something weird and special snowflake like, everything will just keep working with this.
They've been slowly moving the time lower and lower. It will go lower than 45 days in the future, but the reason why we don't go immediately to 1 hour is that it would be too much of a shock.
>So every small site that took the LE bait needs expensive help to stay online.
It's all automated. They don't need help to stay online.
Re: too much shock, how so?
I'd say two big reasons: 1) A lot of people/enterprises/companies/systems are not ready. They're simply not automated or even close to it.
2) Clock skew.
Nope. I renew my LE certs manually. I take my HTTP server down, run certbot, and pull HTTP back online.
>Oh that sounds wonderful. So every small site that took the LE bait needs expensive help to stay online.
I agree with the terminology "bait", because the defaults advocated by letsencrypt are horrible. Look at this guide [0].
They strongly push you towards the HTTP-01 challenge, which is the one that requires the most infrastructure (an HTTP webserver plus certbot) and is the hardest to set up. The best challenge type in that list is TLS-ALPN-01, which they dissuade you from using! "This challenge is not suitable for most people."
And yet when you look at the ACME client for JVM frameworks like Micronaut [1], the default is TLS and it's the simplest to set up (no DNS access or external webserver needed). Crazy.
[0] https://letsencrypt.org/docs/challenge-types/
[1] https://micronaut-projects.github.io/micronaut-acme/5.5.0/gu...
> the defaults advocated by letsencrypt are horrible
You’re completely misinterpreting the linked document. See what it says at the start:
> Most of the time, this validation is handled automatically by your ACME client, but if you need to make some more complex configuration decisions, it’s useful to know more about them. If you’re unsure, go with your client’s defaults or with HTTP-01.
This is absolutely the correct advice. For Micronaut, this will guide you to using TLS-ALPN-01, which is better than HTTP-01 if the software supports it. But for a user who doesn’t know what’s what, HTTP-01 is both the easiest and the most reliable, because, as they say, “It works with off-the-shelf web servers.” Typical web servers which don’t know about ACME themselves can be told “serve the contents of such-and-such a directory at /.well-known/acme-challenge/” which is enough to facilitate HTTP-01 through another client; but they don’t give you the TLS handshake control required to facilitate TLS-ALPN-01.
Why not 7 days, then?
According to TFA LE already offers a "shortlived" profile that issues 6-day certs if you want to stress test your automation, or just gain the security advantages of rapid certificate turnover immediately.
The goal is to move to short lived certs to make the fragile system of revocation lists and public certificate logs unnecessary.
It's headed there.
7 days is too long! It should be 30 minutes!
Certificate per request
that's just OTR
OTR still has static identities, with DH used to ratchet the ephemeral keys. The comparison would be more like Signal ditching Safety Numbers and Registration Lock for hourly SMS verification of new independent keys with no successor signing.
There's a fundamental divide in what certificates mean: modern CAs view WebPKI as a fancy vantage-point check: cryptographic session tickets that attest to the actual root of trust, usually DNS. Short-lived certs (down to 10 minutes in Sigstore, 6 days trialed by LetsEncrypt) make perfect sense to them.
But DNS challenges are perfectly forgeable by whoever controls the DNS. This reduces authentication to "the CA says so" for 99% of users not running a private CA alongside the public one.
Transparency logs become impenetrable to human review, and even if you do monitor your log (most don't), you need a credible out-of-band identity to raise the alarm if compromised. The entire system becomes a heavier Convergence/DANE-like vantage point check, assuming log operators actually reverify the DNS challenges (I don't think one-time LetsEncrypt challenges are deterministic).
I think certificates should represent long-term cryptographic identity, unforgeable by your CA and registrar after issuance. The CA could issue a one-time attestation that my private root cert belongs to my domain, and when it changes, alert to the change of ownership.
Of course, so we have another global failure/censorship point besides Cloudflare…
Yes, that's the whole point...
Er, I mean, it's totally for security, guys!
> 7 days is too long! It should be 30 minutes!
And secure boot shall be signed with it. /s