Hi HN -- this is what I've been working on for the last 14 months, with the help of many contributors and the backing of several sponsors. (Thank You!)
Caddy 2 is a fresh new server experience. Some things might take getting used to, like having every site served over HTTPS unless you specify http:// explicitly in your config. But in general, it will feel familiar to v1 in a lot of ways. If you've used v1 before, I recommend going into v2 with a clean-slate using our Getting Started guide: https://caddyserver.com/docs/getting-started
I'm excited to finally get this out there. Let me know what questions you have!
Hey Matt - thanks for creating and maintaining Caddy all these years! Like others have said in this thread, it is so easy to set up and maintain that it really does feel like "magic".
In terms of speeding up adoption of Caddy 2, it may be useful to have a list somewhere of the concrete improvements between the two (as I'm sure there are many). A (very) brief look and search around only yielded this article[1] which referenced another link of improvements that now 404s[2].
Another piece of feedback: It’s scary to consider using such a crucial piece of software in production when documentation is referred to as “sort of deprecated” and “slightly outdated.”
Just tried it by replacing Nginx on my personal servers... I don't have anything complex (Python backends, some static files...) but so far the user experience is stellar :)
Thank you for your work on Caddy! After a false start during the beta (mainly because of the missing documentation) I upgraded my personal websites from v1 to v2 RC1, and since then I have been a fan of the new version. Caddy 2 makes somewhat complex configuration more consistent and easier to express in the Caddyfile. The only thing I miss from v1 is the default handling of when static files are not found.
I am very interested in the prospects of Starlark in Caddy 2. With an integrated scripting language Caddy could on its own be a replacement for OpenResty or Apache with mod_mruby. The preliminary implementation was removed in the beta phase with a note saying it would have to wait for when the project was financially stable [1]. Do you still plan to integrate Starlark if the project is a financial success?
Thanks, we'll work on it! The nav/flow of content could probably be improved.
Not gonna lie though, there's more to learn. V2 is a powerful machine -- so do expect that there will be some reading. Once you know how it works, it's easy to use. Very simple configs are possible, etc.
It's often used as a reverse proxy and static file server, but oh, so much more is possible. Today, if you're using HTTPS, you should almost certainly be using Caddy. Maybe tomorrow if you need to set up a memory-safe SSH server, you could be using Caddy. (Just an example.)
Not a stupid question, it's not a phrase you hear very often because there isn't... really... one... at all. (Not in mainstream use AFAIK?)
Memory safety is a class of guarantees certain software offers you against certain vulnerabilities. Software written in C is generally considered "memory unsafe" since it's hard to write correct C code when managing memory, so it's easier to find exploits that cause such programs to reveal secrets.
Go software has stronger memory safety guarantees than C programs like OpenSSH and Nginx. So that's one big benefit of using Caddy.
As it happens, someone in the audience here is writing an SSH app for Caddy, so you could have a pure Go SSH server that is less vulnerable to that class of attacks.
But what is the value add for Caddy here? Something like the out-of-the-box automatic SSL defaults it brings to HTTP?
Apart from memory safety, can the SSH version of caddy impose better defaults that OpenSSH doesn’t right now? Maybe TLS certs, security key support, etc?
Absolutely. And Caddy adds its online config API and simpler configuration experience for an all-around more secure, easier-to-maintain, harder-to-get-wrong system.
Right, I suppose I'm asking what an elegant SSH config would look like, having touched my sshd config fewer than 10 times in the 15+ years I've managed servers.
I know I'd ideally like easier SSO integration, for example. Or provisioning of users.
Yeah, my hope is that someone will write a sort of scheduler / supervisor app for it so that I don't have to keep re-learning systemd every time I stand up a new service...
Thank you for Caddy, it has saved me a lot of time and it's a joy to work with. Running it on about 8 boxes.
Is there an admin UI for v2? Seeing the configuration changes, it was the first idea that popped into my head. It would be great for the self-hosted community -- maybe someone picks it up as a side project if it's not in scope.
I know that HN is usually the first to outrage when a project website is too verbose or does not explain the product, but I just want to take a moment to say WHAT A GREAT WEBSITE Caddy has!
- The first screen tells me everything I need to know about what Caddy is and why it stands out
- Scroll down for the setup (hat tip to whoever did the angled asciinema embed. Looks so cool)
- Every page worth of scroll is exactly the right amount of information with links to dig deeper
- The Upcoming features at the end is such a great touch as well.
- All this in a UI which looks absolutely great!
Excellent work guys and congratulations on the V2 launch! Been playing with Caddy quite a bit for personal projects, and been a very happy user. Will definitely be using v2 more! :)
I was really nervous to post this on HN because HN has also been the source of great misery for me in the past, frankly. But I'm relieved at the overall positivity today. Maybe because it's Star Wars Day we're all in a good mood?
I'm glad you like the landing page. Took me a couple weeks of trying and throwing designs away, then a few days of concerted effort, just standing in front of my text editor cranking out HTML and CSS best I could muster. The visual touches aren't perfect but I'm quite pleased.
... come to think of it, reading the criticism from HN on other projects made me hyper-aware of certain things, like explaining what the product actually does, and how to lay it out, and not mess with scrolling, etc etc. (But not the toxic comments, I skip those.)
You have a really impressive site about a really impressive product. Kudos.
The only constructive criticism I'd offer immediately for the new updates is that they look very light on detail about setting up a production deployment on different platforms -- things like running Caddy as a service that starts automatically, monitoring its health and restarting if necessary, ensuring that any important security updates are known about and installed, etc. It's great to have so much that "just works" and a simple developer experience demonstrated immediately, but the other stuff is still important too.
Edit: I see https://caddyserver.com/docs/install now talks about some of these issues for Caddy 2. In that case, perhaps it would be useful to add a prominent link there from the v2 page at https://caddyserver.com/v2? I was half-expecting the "Download" button to take me to such a page, and was very surprised to find myself sent to GitHub (particularly since there are neither instructions about downloading nor links to the downloadable assets actually visible on the GH page you arrive at).
Thanks -- honestly, the website was the least of my concerns up to this point, but it'll get more attention now that the software is actually released.
In that case, having a smart and informative site already is even more of an impressive achievement. :-)
I don't personally think much needs to change dramatically. It's already useful and seems reasonably well organised. As someone who hadn't previously heard of Caddy other than incidentally and who is currently setting up some new projects for which Caddy is a very interesting find this evening, that production-ready setup information is the key information that was missing for me until I found the other page (and as it happens, I found it via a link that wingworks helpfully posted in this HN discussion, not via browsing the Caddy site or a search engine).
If I were to add one other possibly helpful point, the configuration via HTTP request idea is interesting, but everywhere I've seen it mentioned on the site so far seemed to imply it is on by default and did not say anything about locking it down. Again, this looks potentially useful for a quick start experimenting with Caddy or maybe for development purposes, but it doesn't seem like a killer feature and it's obviously a concern for using it in production if anything like that doesn't default to safe.
Please do try to mount an attack on your server's admin endpoint; let me know if you're able to break into it.
Keep in mind that if your system runs untrusted code, all bets are off: there's not really anything we can do as a single user-space process to protect it. In that case, you'll need to make sure your system is locked down properly... maybe you can use a permissioned unix socket for the admin endpoint, or just disable it entirely... it's really up to you. I think our defaults are safe, though. Let me know if they aren't.
I was trying to explain that during my initial browsing of the Caddy site, I found several references to using the API endpoints to configure Caddy, but nothing in the same places to say how to secure it or whether it was enabled by default. I wasn't talking about any sort of cunning attack, simply the issue of having such functionality accessible to anyone who could visit /config/ and knew anything about HTTP.
I did later discover the relevant configuration in the JSON config structure documentation, including the flag to disable it that is what I really wanted to know about. It just seemed like the kind of important detail that would be worth linking from places that introduce the REST API, such as the section about it on your v2 landing page, if you're refining the site.
My understanding is that the config is available via localhost only. In most instances it does not need to be disabled. I think the hope is that it will be left enabled in production, not disabled.
The default listen address is "localhost:2019", which means it'll only accept requests from apps running on the same machine. If you're running untrusted code on the same machine, then that might be problematic for you. You can also change the admin endpoint to be a unix socket instead of a TCP endpoint which allows you to use linux file permissions to protect it.
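For anyone wanting to lock this down further: the Caddyfile's global options block can move the admin endpoint or disable it. A sketch (the socket path is hypothetical; adjust for your system):

```Caddyfile
{
	# Serve the admin API on a unix socket so ordinary
	# filesystem permissions control who can reach it
	admin unix//var/run/caddy-admin.sock

	# ...or turn the admin endpoint off entirely:
	# admin off
}
```

With the socket form, only processes permitted to open that socket file can change the config.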
Forget the misery. Caddy has been sheer wonder to work with since more or less the beginning. 2.0 in beta and RC has been looking swell for a long time. Well done and congratulations to you.
It's a one-way street, though. Once I'd set up my first Caddy site, there really wasn't any way I was ever going back to the legacy stacks.
The landing page almost looked too slick and polished! I thought it was a for-sale program instead of an open source one at first. Neat tool... thanks for sharing it
I started using Caddy v1 for its lightning-fast, simple TLS setup and its extremely simple configuration in general, thanks to the excellent docs and handy plugins for common stuff like CORS, rewrites, and FastCGI.
Shortly after upgrading to Caddy v2, I switched to nginx, after spending way too much time trying to make sense of the new Caddyfile and docs (the things I miss most from the docs are the plugin list, along with the CORS plugin, and in general the non-existent entry curve of Caddy v1).
Tbh, I also miss the one-liner install script <3
I'm genuinely sorry for the project; I didn't mind at all any of the previous so-called shady behavior (I personally don't object at all against a developer trying to make a living out of his own open-source project); I've been using caddy for four years now, and I really hope to switch back to it, one day.
As francislavoie said, the website with its nice one-click custom download page and list of registered plugins is a lot of work. It'll come, but it's not ready yet. Much like the v1 release 5 years ago where the interactive download page didn't come until about a year later, it'll take some time. Hopefully not a full year though. :)
I've already got the new build server mostly written -- should be way faster.
Maybe this has changed since I looked at things yesterday, but I couldn't find a list of default plugins anywhere on the website, nor how to configure those default plugins. I think this is what the person you're replying to is talking about.
On the old site, there was a plugin documentation page that made it trivial to see what the plugins were called and what options they took.
I eventually asked my local copy of Caddy what plugins were available, which answered one of the two questions, but am I missing some kind of documentation page?
The interactive, custom build page is an unrelated, but also interesting topic. About a year ago, I was using abiosoft's Caddy Builder image to build a Caddy docker image with the plugins I wanted, but this no longer seems to work even for Caddy V1, and I don't think it has been updated for V2. It would be awesome if there were an official Caddy Builder docker image. Even if the website existed, I would still have to wrap that binary in a docker image somehow for my own use cases.
But, these are all just my personal opinions -- I'm sure you have your own priorities for the project, and I'm excited to see V2 has officially released!
If you're navigating from the homepage, click on "Documentation" at the top, then click on "modules" on the left navbar just above "JSON Config Structure". That list is generated directly from the source code, so it is the definitive list of what's included by default and supported by the native JSON config.
The other way is, as you found out, to run `caddy list-modules` to get the definitive list of what's packed into the binary.
The new website does not yet have the ability to register plugins. I'm writing it -- I already have the new build server mostly written -- but like with v1, it may take a few months.
It's not worth delaying the release for that, especially when we have tools like xcaddy that make it easy to build Caddy with plugins, no need to touch any code: https://github.com/caddyserver/xcaddy
Right now, the process is: find a plugin you like, then run:
xcaddy build --with <module-path>
But yeah, it will get easier with time. We'll get there.
If you need any help when coming back to Caddy v2, come ask for help on the community forums: https://caddy.community/
We'll be glad to help you get set up!
The docs are still a work in progress, we'll eventually have a plugin registry like in v1 but that requires a bunch of work behind the scenes to make it happen.
I have been using Caddy for quite some time now, as a replacement for "python -m SimpleHTTPServer" and, by just flipping a switch, as a real web server, and this is such great news.
I am so happy you ended up going down the route of an open license; this right here is a game changer. I can't even begin to imagine the ins and outs that must have been crossing your mind in the last year. But, as things stand, I am very sure there is a bright future for Caddy. What a fantastic piece of software.
I am one more to be grateful for this product. Thank you. You are helping people.
I recently switched my site from using Amazon Linux 1 + Apache + manual setup to Amazon Linux 2 + Caddy 2 + automated Ansible setup. The documentation for Caddy 2 wasn't quite 100% at the time, but it looks like it's in much better shape now. But Caddy 2 just worked, and gave me SSL seamlessly out of the box, zero config. Love it! My entire Caddyfile looks like this:
giftyweddings.com {
encode gzip
reverse_proxy localhost:8080 # Go web server
}
www.giftyweddings.com {
redir https://giftyweddings.com{uri} permanent
}
Also, mholt's "Caddy saves money" quip on the homepage was true in my case. After the switch I was able to downsize my instance to a very small size and save about 50% on my AWS bill, down to less than $10 per month. Not entirely sure it was due to Caddy, but I "blame" it on that anyway. :-)
The one thing I found tricky was getting the systemd config file correct (I'm not a Linux pro). Looks like he's addressed that too, with this README and demo systemd config -- nice! https://github.com/caddyserver/dist/tree/master/init
Be aware that the [trusted=yes] config disables GPG signature verification for the repository, which does slightly break the security model of Apt (where repositories can be untrusted/HTTP-only as long as all packages are signed by trusted keys).
Unless I'm mistaken, a malicious package with the same name as a critical system package could be added to that untrusted repository by an attacker who had compromised it, and if the version number is higher than that of the 'official' package and your Apt priority config prioritises it, your system would download and install it without any verification.
This is a fairly niche attack vector that may not be considered a significant risk in some environments, but it's one to consider when establishing your threat model and risk appetite.
As a side note, GPG signing packages is not a silver bullet either, especially if the signing keys and administrative access to the repository fall within the same security boundary (e.g. a developer doing both from their PC without any segregation/sandboxing). However, it has proven to be a robust method so far, and definitely beats an explicit [trusted=yes].
Yeah, we're aware. Unfortunately Gemfury doesn't support GPG signing yet (see https://gemfury.com/help/apt-repository#apt-setup) but we will set that up as soon as they do. Their service made it the easiest to get set up on short notice.
Thank you so much for investing the time to do it, this was the only thing holding me back from using Caddy for everything over the last ~2 years. I know it's a painful process for Go packages in particular, but it's definitely worth it to be more easily installable for diehard apt users like myself.
You may already know this, but it's very tricky to package Golang projects in official Debian repos because they require that every dependency be a separate package. This is a ton of work both upfront and to maintain in the long term, so we haven't made much progress on that front.
We'd love to get some help from people who are more comfortable with the debian ecosystem though! It's definitely something we want to achieve, but for the short term we had to make the decision to simply release the .deb via https://gemfury.com which provides us with free APT repo hosting for open source projects.
Unless I'm missing something, that only says you need to package all dependencies if it is a library. Which means you could make a binary-only, statically compiled package without adding a bunch of dependencies, right?
Of course, that isn't helpful if you need to compile with extra plugins, but in that case you are probably using standard go tools and not apt packages anyway.
We were specifically told that we would need to have all our dependencies packaged. Debian needs to be able to validate that it was built entirely from source code from library packages, not from a binary built externally.
That sidebar doesn't show on major landing pages, so it's not as silly a mistake as it might seem. I missed it for several minutes while looking around myself, as I started by following the prominent links on those landing pages. I only found the separate installation information page when someone else here helpfully linked to it.
Honestly, I hadn't even noticed the search box. It appears in a slightly unusual place and styled sufficiently subtly that I totally overlooked it throughout my entire browsing session. (FWIW, this is on a very large and colour-calibrated monitor on my PC, so my experience here may not be typical.)
In case it's of interest, as a new visitor, I started browsing from your main v2 landing page, scanning down most of that information. Then I followed the prominent download and get started links near the top. I think I next went to the documentation area, and started browsing the links on the left, though I totally missed the "Install" link just under "Welcome" because my eye was drawn first to the Tutorials section and its getting started link, and then I went exploring from there on down.
I suppose that was an unfortunate combination of two things to miss. :-)
I liked v1 a whole lot. I'm interested in v2, though I'm extremely skeptical of the additional complexity of the config adapter layer (https://caddyserver.com/docs/config-adapters) - I'm more or less sold on having an internal JSON structure with a DSL on top of it (allowing dynamic/programmatic config is really interesting), but the idea of having _multiple_ different DSLs or config syntax bindings for it seems a bit much.
The biggest feature in Caddy 2, from what I can tell, is the new HTTP config API, and I'd love to see a practical example of what could be done with that.
Don't worry about the "additional complexity" -- config adapters are modular, and only the Caddyfile adapter is baked into the standard modules. Just plug in the adapters you need. Caddy functionality can grow rather infinitely without bloat because of this.
True, the Caddyfile adapter is a bit of a beast because it is extensible itself -- but it's a lot cleaner and more flexible than the v1 implementation, where the Caddyfile was its native config syntax.
Believe me, this is a big improvement.
Anyway, you don't have to use the adapters if you don't want to. That's the beauty of it.
More info on the architecture and config adapters:
The Caddyfile is the primary supported DSL. The rest are nearly 1:1 mappings to the internal JSON with some sugar, except for the nginx config adapter, which is a bit special! For example, jsonc provides JSON with comments.
A few of those are third party maintained, and are plugins. They're listed on that page to point out what's possible.
Same here! We are huge fans of Caddy at Narration Box (https://narrationbox.com) and we are considering replacing NGINX completely in our Django production stack. (Currently we only use Caddy for development and staging) The author of Caddy, Matt Holt, is a really great person and is extremely responsive on Twitter and GitHub. Highly recommend checking them out.
Nginx requires too much configuration. It is very brittle if you do not have a full time DevOps person monitoring things. E.g. for production web apps, take a look at the amount of tooling that has been developed to manage Nginx:
This is not a criticism of Nginx per se; it comes from a time when Apache was the only other alternative and end-user UX was hardly a priority. An analogy would be systemd and pm2: pm2 has few advantages over systemd on paper, but in practice it has a much more pleasant UX and saner defaults, even though it requires a separate installation on most servers while systemd is available by default. It is the difference between Heroku and AWS. For the 99th percentile of efficiency Nginx still beats Traefik and Caddy, but sometimes scaling servers is cheaper than hiring DevOps staff.
I love Caddy 1, been using it for years after my annoyances with nginx. I've never looked back. Built-in TLS setup is amazing but the killer is that you can host a website with only a few lines of configuration in a single file. Now I'm managing dozens of websites with a single file with only ~4 lines of config per domain (most of it is the same).
I was nervous about Caddy 2 because I could imagine the core was better but I couldn't fathom how the config process could be any better than v1 (my favorite part). I was wrong - the v2 config introduces some clever ideas which makes it better too. It has a great simplicity to it that is extremely powerful and I found that I could get rid of a couple of the plugins I used with v1 because you can harness the power of the v2 matchers to do so many things.
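To give a flavor of the kind of thing v2 matchers enable (a hypothetical example, not my actual config): a named matcher groups conditions once, and any directive can then be scoped to it.

```Caddyfile
example.com {
	# Named matcher: requests for static assets
	@static {
		path *.css *.js *.png *.jpg
	}

	# Apply a directive only to matched requests
	header @static Cache-Control "public, max-age=31536000"

	file_server
}
```

In v1 this sort of conditional behavior often needed a plugin; in v2 it falls out of the matcher design.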
Thank you mholt and thank you francislavoie, Caddy 2 is awesome!
I am a happy user of Caddy v1, and I want to thank you developers! It's made my life much easier, managing TLS automatically for my home server, with an adequate config of fewer than 10 lines for the 3 web services I run behind it. I even forgot I was using Caddy, which is proof of how great it is for systems integration.
I'll be looking forward to v2 in a few weeks, when I will upgrade to newer ubuntu and go full ZFS.
I just heard about Caddy v2 four days ago in a reddit post and it is awesome! Thanks a lot for your great work :)
It is by far the easiest solution I could find to install a reverse proxy which can route by the URL /api/* and gets an automatic Let's Encrypt Certificate.
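For reference, a minimal Caddyfile along those lines (hypothetical domain, port, and web root; Caddy obtains the Let's Encrypt certificate for the site address automatically):

```Caddyfile
example.com {
	# Requests under /api/ go to the backend...
	reverse_proxy /api/* localhost:8080

	# ...everything else is served from disk
	root * /var/www/html
	file_server
}
```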
Hi mholt!
I am building an ecommerce SaaS. Obviously, I will have each domain pointed to my server. Is Caddy a good choice for automatically getting a certificate for each website? Are there any limitations?
Thanks!
Yes, it is! Ask Jack Ellis at Fathom Analytics, they're doing this for their customers: https://usefathom.com/blog/bypass-adblockers -- and I think they have a new blog post and video coming out soon about it.
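For readers curious what this looks like in config: Caddy 2's on-demand TLS obtains a certificate the first time a new hostname is requested. A sketch (the `ask` endpoint is a hypothetical internal service that confirms a domain belongs to one of your customers):

```Caddyfile
{
	on_demand_tls {
		# Caddy asks this endpoint whether it should issue
		# a certificate for the requested hostname
		ask http://localhost:5555/check-domain
	}
}

https:// {
	tls {
		on_demand
	}
	reverse_proxy localhost:8080
}
```

The `ask` check is important in production so arbitrary hostnames pointed at your server can't exhaust certificate issuance.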
We've had great fun with Caddy for Version 1 of our custom domains feature.
For Version 2, we've done a lot of work with Matt and the solution we've now reached is:
1. DynamoDB as storage for certificates, allowing sharing between servers without regenerating during an issue with an availability zone
2. Multiple Caddy reverse proxy servers in different regions
3. AWS Global Accelerator to route the user to the closest server to them
It's so great because we can also proxy our CDN through Caddy (yes, it needs to go through the user's custom domain), and we've got insanely fast response times on that.
We've load tested the proxy servers and they can handle an incredible amount, we're very pleased with the solution.
I will be publishing and sharing an article soon, detailing my journey and our final solution.
It really goes to show that you can write good software in any language/framework you're comfortable in. PHP powers Facebook, Wikipedia, and Slack, for instance, and it gets tons of hate on HN.
I'm mostly a C#/Typescript/Python guy .. but good software is good software!
Sooo... you bring up something I could use some "learn me somes" on... golang. Recently, in my personal life when choosing a "next language" to learn I chose Rust over Go because it seemed to offer everything Go does, and then some (like them systems level programmings). However, at my current employer, Go has a solid following and I'm unlikely to sell Rust (or even want to[1]): could you please speak to that real-world efficacy or what things you think it does superbly well to enable software like this?
I tried briefly to find some guides on Go over the weekend, but an awful lot of them are targeted at using "X web framework," which seems utterly worthless (or boring) to me. I guess, to your point, I could go read through the Caddy source code and see what I find, since this seems like a nice and useful application of Go.
[1]: There's almost no reason to sell a team on any new technology if there isn't a substantial gain from it. For the org I am at, I don't believe, Rust offers much over Go for most of the use cases. Sure, it certainly can be _faster_ but who fucking cares? Most of what we do isn't bound by machine time, it's bound by developer time. Thus, I seek to learn me some golang and what virtues it offers.
Go is excellent for writing CLIs and network services since it comes with built-in cross-platform sockets, HTTP, easy concurrency, crypto, SSL, and a comprehensive standard library. You don't need an overarching framework. You can pick and choose helper libs if you really need them.
Doing the same in something like C++ or Java is a headache simply because the standard library is not sufficient and you waste valuable time in framework and library decision making.
I learnt golang through books. The traditional GOPL, and OReilly's "Concurrency in Go". "Go Programming Blueprints 2nd Edition" shows how to implement several projects using (mostly) the Go standard library.
(Sometimes developer time is not always the top most consideration though. There is one internal service that we refactored from Python -> Golang -> C++ for low-latency and low-memory performance.)
> For the org I am at, I don't believe, Rust offers much over Go for most of the use cases. Sure, it certainly can be _faster_ but who fucking cares?
That's actually not at all why I use Rust. I switched because I prefer the code locality. After ~5 years professionally with Go, I found large code bases were a pain point because I never felt I had robust ways to solve problems, and instead always resorted to helper funcs. In Go it ended up being a ton of loops and helper functions, spreading the logic out all over the damn place.
Go is still a great language, I'm not knocking it. I just don't think speed is a selling point for Rust, honestly. Memory control is. Enums and pattern matching is. Iterators are (my favorite thing about Rust).
I used caddy on and off for many years and finally went back for good.
For one, the community support is fantastic. Matt, Francis, you will not recognize me from my name here but you were first class helpers (and still are).
The fact that Caddy has an API is a breakthrough. Automating config provisioning is a godsend.
I believe that you could easily take over Traefik in the edge router space if configuration were a bit more modular (today it is the Caddyfile or the API, with no external sources of data such as the Docker daemon; there was some discussion about that with Lucas Lorentz (@lucaslorentz), but it is, I think, dead-ish).
Yesterday I moved all my prod from Traefik to Caddy and it works just great.
I have been using Caddy v1 for more than a year now as a web server for some of my production sites. I'm really happy with the simplicity of it and especially out of the box SSL support with LetsEncrypt. I believe with v2 I will be moving more of my sites to Caddy. Thank you for all the hard work :)
Thank you for sharing your experience. Glad to hear it. Feel free to join our forums and be a part of the community! https://caddy.community - could always use more people with experience helping out. :)
I just upgraded a server to Caddy 2 this past weekend. It's running a single WordPress site.
There were some gotchas in converting the Caddyfile to the new format, but I eventually sorted it out. Caddy 2's method of updating the config made the iteration process a lot faster.
I'll admit I got bit by a few differences between the v1 Caddyfile and the v2 Caddyfile too. Namely forgetting * at the end of a path _prefix_ matcher. (In Caddy 2, path matches are exact, unless you suffix with a *.)
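A small illustration of the difference, with hypothetical routes:

```Caddyfile
example.com {
	# Exact match: only /ping, not /ping/foo
	respond /ping "pong"

	# Prefix match: /api, /api/, /api/v1/users, ...
	respond /api* "handled by the API"
}
```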
Thanks for the nice piece of software. I like Caddy a lot. I would like to see better integration with Docker, in terms of automated exposure of Docker services based on labels, for example, similar to how Traefik does it. That would be really nice to have.
I've always felt like that was kind of a hack, but with Caddy's API it's doable. I don't really use Docker myself, so... want to contribute the feature? I'll show you how to integrate it.
Excellent. I'm really excited about Caddy 2. We've been using Caddy 1 to serve all our main sites over HTTPS, simply, for the last 4 years. I moved away from Apache and have never ever looked back.
I really like Caddy and use it for general side projects. Super simple setup, handles all the LetsEncrypt certificates for me, can be running different apps on different paths.
We have several companies using it for tens of thousands of their customer sites. It's the only server we know of that can handle hundreds of thousands of certificates efficiently!
This new version looks amazing. I was actually looking at it earlier today. I had to postpone my upgrade because I couldn't find any alternative to, or a way to include, the http.git plugin which I rely on. Any direction in that regard would be appreciated. Congratulations on your work.
It is nice to see an alternative to Nginx. I am glad that Caddy took a stab at it.
It is good that the author is looking to have some kind of stable monetization too. It's hard enough to maintain OSS from a time perspective. I hope that it works out for him.
I see a mention of using it as an ingress controller, but not much in the docs. Are there more details on this? Or is Caddy better suited to running in a container, as opposed to using nginx for that, leaving the ingress to traefik/etc.?
This is really a side-note. I think Caddy is great.
> Still the only web server to use TLS automatically and by default.
This sounds great at first. It also makes it too easy to hit the Let's Encrypt rate limit by mistake and get stuck for a week. I really think that the default should be to use the staging environment certs[1], with the user actively deciding when they are ready to go live.
We thought of this! As francislavoie said, Caddy's retry logic involves switching to the Let's Encrypt staging endpoint until the DNS resolves properly. That is coupled with exponential backoff.
Hitting the staging endpoint as a test isn't a perfect thing, by the way -- it can help where DNS records are misconfigured, but for things like rate limits or other state internal to the CA server, switching endpoints will only give you a false positive.
So, better to not hit that problem in the first place, and to make sure your DNS records are configured properly first.
Caddy v2 does this! If an ACME error is encountered, it will continue to retry via the staging endpoint instead until it works.
This won't help if you're throwing away the data storage directory though (i.e. not persisting the volume with Docker), because Caddy won't have a way to keep track of what happened in past runs.
How fast is this bad boi? Or are there any benchmarks I can look at? I can tell the sell here isn't "fastest EVAR", but how fast is it? Where is it fucking righteous, and where is it horrible? I'm guessing it's doing okay.
Regardless, looks super cool, and excited to give this a try. One last thing... great job on the website, seriously 11/10 for a developer/command-line tool. Clearly says what it does, shows some neat shit it can do, and even looks pretty darn good doing it :)
I'm afraid I agree with a few sibling comments here, the documentation, although good, still feels a bit light for 2.0 (I recently set up a caddy reverse proxy on a legacy host, in order to be able to avoid an old version of openssl - and decided to go with 2.0 - then in pre-release).
That said, if your needs are simple, the examples are pretty good - a caddy file shouldn't have to be too many lines:
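For instance, a rough sketch of a small Caddyfile (the domain, paths, and port are all placeholders), serving static files plus one proxied backend:

```
example.com

root * /var/www/html
file_server
reverse_proxy /api/* localhost:8000
```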
There's a full tutorial on the Documentation section! Start here https://caddyserver.com/docs/getting-started and go through the links listed under "Tutorial" on the left side of the page.
Did you see the "Tutorials" section at the top of our docs, along with the quick-start guides, one of which is specifically for reverse proxying? https://caddyserver.com/docs/
The pairing of Caddy & LetsEncrypt have been spectacular for shipping TLS software with way less pain.
We're currently on a funny (Caddy docker external reverse proxy -> Nginx docker internal reverse proxy -> docker app) config. Our user admins can do custom TLS via Caddy, and thereby end the years of Nginx TLS pain. Meanwhile, we get to keep doing most of the fun Nginx tricks internally to make sure our GPU graph visual analytics streaming tech does cool things. It's been a breath of fresh air.
We've been watching Caddy 2. Project history like phoning home for a school project (this might have gotten people sued and endangered others!) taught us to be especially careful w/ upgrades, but w/ Nginx going open core startup and now being part of F5, I've gone pretty down on Nginx and up on Caddy over the last few years. The more we want to do more w/ Nginx, the more we want to get off Nginx for the internal proxy, so looking forward to Caddy continuing to improve!
Hi @mholt, thanks a lot for Caddy. It is really awesome and we are considering using it for www.mailerlite.com landing pages. The problem that we have right now is that there is no Cloudflare plugin built-in for V2 so it is a show stopper for us.
"Latest", as in "1 hour ago" latest, or "15 minutes ago" latest? Because the 2.0 image (not RCs) is needed and I think it only got published a few minutes ago.
I have just tried again, used caddy:latest and got the same error. Didn't mean to take much of your time, I can open up a GitHub issue if it's the better way to resolve this.
EDIT: I have just seen your edit. Will try and report back.
For me as a user, the documentation is sometimes not very clear when it comes to Caddyfile syntax vs. JSON. It would maybe be better to always provide both, or make it configurable via some switch (maybe it's just me who doesn't always find it fully clear).
-> Great product, nice security settings by default and for me definitely the way to go.
The REST API for dynamic updates is attractive. Was it influenced by Envoy? Are there any influences/overlap with Envoy? Is Caddy overlapping in the space of Envoy in any notable way?
I will say that getting started with and learning the configuration landscape of Envoy is quite painful, so the simplicity and approachability of Caddy has always been attractive.
Not really -- it was more like, I sat down and thought about it: "I want to be able to change ANYTHING about my server using a simple REST endpoint. Hmm..." and then I remembered that JSON can be traversed using path-like notation. I'm not the first to implement this kind of API, but it wasn't a borrowed design. It's elegant as heck though. Every time I use it, I'm like "Ahhh that felt nice."
> Are there any influences/overlap with Envoy?
Probably, but I don't use Envoy. I'm pretty confident that Caddy 2 is capable of doing anything Envoy can do (with the added benefit of memory safety) -- just gotta write modules for it, maybe with some extension to the core to expose surface for those modules to hook into, but shouldn't be too hard to do that.
> Is Caddy overlapping in the space of Envoy in any notable way?
Yeah, I think so; it's a highly flexible reverse/sidecar proxy.
Looks great. I have fond memories of Caddy 1.x, but I keep looking for a reverse proxy that can do social auth to protect selected endpoints (I was betting on Traefik having an easy to setup extension for that, but 2.x has been a challenge in other ways...).
Anyone know if Caddy 2 has any such extension beyond the basic auth provider I see in the docs?
Some parameters might have one or two more words to type, but the growth in flexibility is enormous, and we felt it was worth the tradeoff. You can see a comparison here: https://caddyserver.com/docs/v2-upgrade#caddyfile
We also think slightly more verbosity in the v2 Caddyfile is actually a good thing. Aside from more flexibility, it makes it a little clearer what's going on, and concerns are better separated.
For example, the reverse proxy is no longer in the business of rewriting requests: there are other directives for that.
In fact, we completely got rid of "rewrite hacks" that involved appending path components to the URI so they could be routed, then needing to strip those before proxying.
Request matching is very powerful in Caddy 2: you can still easily match on paths in-line with your handler directives, but you can also match on any other part of the request (client IP, query string, regular expressions, etc) with an intuitive @-block syntax: https://caddyserver.com/docs/caddyfile/matchers
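As a quick illustration (the matcher name, path, and port here are made up), a named @-block matcher that combines method and path:

```
example.com {
	# match only POST requests under /api/
	@apiPosts {
		method POST
		path /api/*
	}
	reverse_proxy @apiPosts localhost:9000
}
```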
Anyway, I think you'll feel right at home with it after a while. I've been using it for months now and it's so simple overall.
(I also use the JSON config directly on a couple of my servers, where the behavior I need to express is way more elegant and concise than in the Caddyfile.)
Thanks for the response! Looking forward to upgrading my stuff. Is there a reason why you decided to introduce an entirely new configuration method via JSON?
Basically, it is highly interoperable and allows us to build config adapters on top of it, so you don't have to use JSON if you really don't want to. Most people use the Caddyfile. You can use YAML or TOML or CUE if you prefer those. Or heck, you can use your nginx config: https://github.com/caddyserver/nginx-adapter
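If you're curious what your Caddyfile becomes, a sketch of the round trip (assumes the caddy binary is on your PATH, and that the Caddyfile is in the current directory):

```shell
# Translate a Caddyfile into the native JSON config
caddy adapt --config ./Caddyfile --pretty
```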
I started using Caddy 2 a couple of days ago. I'm using the Caddyfile because the JSON-based configuration looks really awful and is hard to maintain (quick changes take a lot of time). The only problem I've encountered is that you can't turn off TLS for a route using the Caddyfile. Why is that?
CommonName has been deprecated for years, so all of Caddy's certificates (and all of Let's Encrypt's) are SAN certificates. Caddy manages single-SAN certs, in accordance with recommended best practices: https://docs.https.dev/acme-ops#use-one-name-per-certificate
(We learned from experience that single SAN scales better and is less prone to troubles. For example, Caddy sites were not affected by the recent Let's Encrypt revocation incident because it manages only single-SAN certificates. And frankly, that shouldn't matter since you don't have to manage them.)
The nginx autoconfiguration plugin has no influence on how many domains the certificate you're requesting will be valid for.
Example: We run a SaaS on our client's domains, so for every client, we have to run `certbot -d subdomain.client1.com,client1.ourdomain.com,staging.client1.ourdomain.com,...`
That means we run certbot with ~4 domains (we include staging and other subdomains) for each client we have. This is highly automatable.
Whether you use nginx autoconfiguration or not is up to you.
No no, you should totally use separate certificates (for our SaaS that's crucial; otherwise we would let our competitors know who our entire client base is, if all the domains were included in one certificate)
I don't know too much about certbot's nginx plugin, I don't use it and don't see the benefit to be honest, we ran into problems with it (it didn't work for us because our clients have to set up a CNAME DNS entry, and that domain has to be included in the certificate)
We simply run certbot without the nginx plugin, and then have a config template for new virtual servers in nginx. Certbot's nginx plugin would mess with our config constantly, changing from version to version, leaving artifacts, and we didn't like that.
Good luck! I'd also recommend splitting the config into multiple files, as has been best practice for the last few decades [1] (link goes to Arch (which I also recommend using), but the advice is not Arch-specific)
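In nginx that usually comes down to one include line in the main config (the path here is a common convention, but it varies by distro):

```nginx
# in nginx.conf: pull in one file per site/concern
include /etc/nginx/conf.d/*.conf;
```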
There's no wsgi transport built-in, we haven't found any golang libraries that let us do this yet. For now, until we find a solution, we recommend using something like gunicorn and use the `reverse_proxy` directive in Caddy to proxy to it.
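To make that concrete, a hedged sketch (the app module, port, and domain are placeholders): run the WSGI app under gunicorn,

```shell
# serve the WSGI app locally (myapp:app is a placeholder)
gunicorn myapp:app --bind 127.0.0.1:8000
```

and point Caddy at it with a Caddyfile along the lines of:

```
example.com

reverse_proxy 127.0.0.1:8000
```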
I'm no *SGI expert, but ASGI might be easier to bridge with the Golang-style concurrency-model than WSGI. Not sure if there are any libraries for go<->ASGI yet though.
I see Caddy as your general use case production web server prior to hitting large scaling needs. I've never gotten it to perform as leanly or scale as well as nginx, but most people will probably not need the headroom nginx provides vs. caddy.
Caddy is significantly easier to manage and get running to a "good enough" stage for people that aren't familiar with nginx and apache, even for most medium sized production deployments.
But, if you've got a team that's comfortable with nginx or apache, and doesn't find themselves spending an undue amount of time fiddling around with managing them, I don't personally believe there's any reason to switch from them to caddy.
If you're just using nginx as a reverse proxy in your setup, it's almost certain Caddy can do the job! Come ask for help on the community forums if you need it https://caddy.community
* Citation needed? (I mean, yes, of the thousands of dimensions of a web server, Caddy is not the fastest or leanest in all of them, but what's important to you?)
I'm not trying to offend anyone, but am I the only one here who doesn't like the way this tool is pitched? It has been mentioned on HN like a dozen times before.
I somewhat remember the Caddy team having a history of shady behavior: advertisements in headers, hostility towards contributors, telemetry you have to opt-out of. Has any of this changed or has it become worse?
The telemetry server was shut down months ago and v2 has no telemetry client. Although, the counts we gathered were informative. We learned a lot about MITM activity on the Internet, the health and maturity of TLS clients, and how many connections Caddy has secured (trillions, if you're wondering).
As for whether things have gotten worse, I suppose that's up to you!
I wish there was a better way to manage telemetry for open source. We have this problem at smallstep, too. We don’t have any telemetry — nothing is instrumented / nothing phones home — but that means it’s really hard to know which features are popular. Which makes it hard to prioritize. We only find out when something breaks and we can look at the volume of issues, heh.
We ask people to let us know if / how they’re using our stuff, but very few people do. It’s a tough problem.
Why did you do it? What have you sought to achieve with it? It's always a head scratcher for me why people building open source tools would put time and effort towards telemetry. Advertisements in HTTP headers? That's a new one. Advertise to whom, devs looking at a browser debug console?
>It's always a head scratcher for me why people building open source tools would put time and effort towards telemetry.
Because you want to know what people are actually using in your projects. I've never put telemetry in anything I use, but I 100% understand why. It's hard to prioritize feature development if you don't know how people are using what you're making, especially if the users aren't very vocal. Without people talking to you and without telemetry on what people are doing, you might spend months worth of man-hours on something no one cares about.
For projects where all you do is build out the things you specifically need this isn't an issue - all you care about is making the stuff you're doing work, and people getting benefit out of things is a bonus but not necessarily your goal. People will submit pull requests or feature requests, or they won't. No skin off your nose. Not all projects are developed this way, though.
It was for a class during my grad program. It was an academic interest in understanding client behavior on the Internet that wasn't limited to proprietary networks like Cloudflare or Google.
We collected some good measurement data -- anonymous, technical, non-personal, etc -- but the terabytes added up and the academic community didn't seem particularly responsive to it in the end. The "opt-in" / "opt-out" depended on how you built/obtained Caddy, but essentially we made it a compile-time decision so that we could reduce biases from the data. By deciding on the download page whether telemetry was enabled, we could also know how representative the data actually was: otherwise it'd be meaningless.
We also wanted to know how widely Caddy was being used. Telemetry was mostly just counts of things, so that's how we know that Caddy has secured trillions of HTTPS requests and managed millions of certs. But it was expensive to run.
The "ads" in headers were intended to be a friendly nod to our sponsors who made it possible. It was a novel idea. I thought it was a good balance of non-intrusive and perfectly visible at the same time: developers who were peering into HTTP requests would see the headers and our sponsors would get some benign recognition from their target audience, while nobody else would see them. It also was supposed to encourage purchasing commercial licenses, which didn't have that header. The licenses were necessary to continue funding a desperately underfunded project.
Needless to say that didn't work out and the only reason the Caddy project didn't shut down entirely is because corporate sponsors (Ardan Labs and one other to be announced probably next week) believed in the project enough to pick it up.
So anyway, I also got college credit for implementing an Internet measurement system, which was really fun and interesting. And as mmalone said adjacently, open source projects really need to know what kind of usage they're getting. With no way to engage customers except at their voluntary discretion, it's impossible to know how to improve the project. Open source is, by definition, an open feedback loop. It only closes if users come back and provide information.
A lot of other major open source projects or free software ship telemetry, even on by default sometimes -- see Windows, Chrome, Ubuntu, Firefox, VS Code, macOS, and countless others. Yet nobody cares.
But having that information was critical in shaping the development of Caddy 2, FWIW.
It's not a stretch to say that other projects didn't get as much hate for it. It really hurt the project's momentum when there was such a strong reaction against it.
It is absolutely a stretch. Mozilla is still catching hell for it. People have switched operating systems over this. Caddy is no different in that regard... except that its creator did the right thing and got rid of it. Mozilla and Microsoft haven't.
> Windows, Chrome, Ubuntu, Firefox, VS Code, macOS
Those are projects funded by 800-pound gorilla companies/foundations. It's either their way or the highway and people complain all the time, some have even forked the code and ripped out telemetry (Vscodium). Homebrew got a response comparable to Caddy but they screwed up even worse because they use Google Analytics.
> It's always a head scratcher for me why people building open source tools would put time and effort towards telemetry.
It's not completely the same as telemetry, but as an administrator of a free open source web service, I have to say that web page analytics (1st party for us) are a huge motivator, when you see how many people from all around the world are using it.
Matt's already spoken to the ads/telemetry (which are ancient history).
My experience as a contributor has been nothing but positive. But if anyone's had a bad experience, I'm sure the Caddy team is more than willing to talk it out and reconcile.
Since the author appears to be in the comments, I'll ask here:
What benefits does this provide for the common Nginx usage (reverse proxy to a number of Python/etc application servers)? I see automatic HTTPS and HTTP/2. Am I missing something else notable?
It's written in Go, so right out the gate you get stronger memory safety guarantees than other servers written in C, which results in more secure software on your edge.
Caddy's HTTPS/TLS implementation is among the most robust in the industry - powered by Go's performant and fairly bulletproof TLS stack, and combined with CertMagic's advanced TLS automation logic, your sites can stay online through situations that are problematic for other setups.
Fewer moving parts means less that can go wrong. Caddy will make you more productive and lower your costs.
An on-line config API is nice if you are automating anything.
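For example (this assumes Caddy's admin endpoint is on its default of localhost:2019, and that caddy.json is your config file):

```shell
# replace the running config
curl localhost:2019/load \
  -H "Content-Type: application/json" \
  -d @caddy.json

# read back a slice of the running config
curl localhost:2019/config/apps/http
```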