Oh, it looks like it! Something went wrong with the Zola build and I must not have noticed. Thanks a lot for pinging me about that, I will fix it today!
EDIT: Fixed! That’s what you get when you forget to bump the Docker image version after upgrading the Zola version locally with a breaking change in the config! :) Thanks for letting me know, it would have taken me a long time to notice it was broken!
Comfort is the main reason, I suppose. If I mess up the Wireguard config, I need to go to the KVM console even just to debug the tunnel. It also means that if I go somewhere else and have to SSH into the box, I can’t just plug in my Yubikey and SSH from there. It’s a rare occurrence, but still…
Ultimately I do understand both points of view. The thing is, SSH bots pose no real threat once the bare minimum hardening for SSH has been done. The resource consumption is negligible, so they have no real impact.
To me the tradeoff is a slight inconvenience vs. a slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say the choice is not really a big one.
Hey, the short answer is yes, you can.
I would elaborate a little more:
In practice I personally would choose a simple setup where the interesting logs are just forwarded (in syslog format, for example) to a single crowdsec instance. If you have ingress from a single node, I’d go for running it on the host and banning via firewall; if you have multiple ingress nodes, then I would run it inside the cluster and ban via a load balancer/cloud firewall/whatever you have in front.
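For the single-instance flavour, a rough sketch of what I mean (assuming a Debian-ish host; the acquisition entry uses crowdsec’s syslog datasource, and the port is an arbitrary pick of mine, so check the crowdsec docs before copying):

```shell
# Sketch: one crowdsec instance receiving forwarded syslog from other nodes.
# An acquisition entry that listens for forwarded syslog messages:
acquis_entry='
source: syslog
listen_addr: 0.0.0.0
listen_port: 4242
labels:
  type: syslog
'
printf '%s\n' "$acquis_entry"
# e.g. append it to /etc/crowdsec/acquis.yaml, then:
#   sudo cscli collections install crowdsecurity/sshd
#   sudo systemctl restart crowdsec
# and install a firewall bouncer (e.g. crowdsec-firewall-bouncer-iptables)
# so decisions actually turn into bans.
```

The bouncer choice is where the single-node vs. multi-node distinction shows up: on one host a local firewall bouncer is enough, with multiple ingress nodes you’d point a bouncer at whatever sits in front instead.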
In essence, I would spend some time to think about your preferences, and it might take a little bit to make the setup clean, but I think you have plenty of flexibility to do what you prefer. Let me know if you want to bounce some more ideas!
Yeah I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you will install the security patches ASAP. There are also virtual patches (crowdsec has one for that CVE), but they might not be very effective.
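For reference, the relevant bits on Debian/Ubuntu look roughly like this (stock apt file paths as far as I recall; verify against your distro’s docs):

```shell
# Make sure unattended-upgrades actually runs daily (Debian/Ubuntu).
# These are the standard apt periodic settings:
auto_upgrades='
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
'
printf '%s\n' "$auto_upgrades"
# This content typically lives in /etc/apt/apt.conf.d/20auto-upgrades;
# "sudo dpkg-reconfigure -plow unattended-upgrades" can generate it for you.
```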
I argue that VPN software is a smaller attack surface, but the problem still exists (CVEs) for everything you expose.
AFAIK SSH has MaxAuthTries and LoginGraceTime, but all they do is terminate the SSH session (i.e., slow attempts down at most); they won’t block the IP via firewall or configuration.
Not sure if there is a recent feature that does the same.
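For reference, those two knobs as an sshd_config drop-in (values are just examples; as said, they only cut sessions short, they don’t ban anything):

```shell
# MaxAuthTries / LoginGraceTime only terminate the session early;
# they do NOT firewall the source IP.
sshd_tuning='
MaxAuthTries 3
LoginGraceTime 20
'
printf '%s\n' "$sshd_tuning"
# e.g. save as /etc/ssh/sshd_config.d/10-tuning.conf and reload sshd;
# for actual IP bans you still need fail2ban/crowdsec or similar.
```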
Yeah, what I mean is that it’s useless to use ports like 2222; that’s basically the unofficial SSH port! Bots are generally harmless (once you move to key auth), and you get functionally the same result with the automatic IP ban on failed auth, minus the bother of changing client configurations to your custom port. Anyway, if someone does want cleaner logs, changing the port works :)
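To make the “automatic IP ban on failed auth” concrete, this is roughly the shape of a minimal fail2ban jail (fail2ban ships an sshd filter out of the box; the thresholds here are my own arbitrary picks):

```shell
# Minimal fail2ban jail for sshd: ban an IP at the firewall after
# repeated failed logins (values are illustrative).
jail_sshd='
[sshd]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 24h
'
printf '%s\n' "$jail_sshd"
# e.g. save as /etc/fail2ban/jail.d/sshd.local and restart fail2ban.
# crowdsec plus a firewall bouncer achieves the same end result.
```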
Yep, I agree. Especially looking at all the usernames that are tried. I do the same, and the only risk comes from SSH vulnerabilities. Since nobody would burn a 0-day for SSH (priceless) on my server, unattended upgrades solve this problem too for the most part.
Thanks! I did mention this briefly, although I belong to the school of “since I am banning IPs that fail authentication a few times anyway, it’s not worth changing the port”. I think it’s a valid thing to do, especially if you ingest logs somewhere, but if you do, don’t choose 2222! I have added a link to Shodan in the post, which shows that almost everybody who changes the port changes it to 2222!
Been there…
I thought my API keys had expired, so I regenerated them, changed a couple of things, and checked all API calls to see if they had changed the API itself… then I searched for the exact error and found out.
For such a breaking change to the API, would it have been that hard to drop an email to every account not meeting the damn “requirements” that had performed an API call in the last x months, to warn about the change?
Yep, I like bunny in fact. It didn’t have all the features I needed back then, but it’s a very good product, I heard very good things.
I also agree about the pricing. I ended up not using desec.io, but if I did, I would have probably set a 1-2 Euros recurring donation, as I feel that’s a totally acceptable price.
As for why people use GoDaddy well… I feel personally attacked as that’s exactly how I ended up there, when I didn’t know better.
I also use porkbun, their API is not a masterpiece but it works and allows you to get, set and update records. In fact their API is now supported by some of the common ddns scripts out there.
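For the curious, the calls a DDNS script ends up making against Porkbun look roughly like this (endpoint names are from memory, so double-check their API docs; keys, domain and IP below are placeholders):

```shell
# Rough shape of a Porkbun DDNS update. The curl calls are left commented
# out so nothing hits the network; fill in real keys before using.
API='https://api.porkbun.com/api/json/v3'
AUTH='{"apikey":"pk_PLACEHOLDER","secretapikey":"sk_PLACEHOLDER"}'
# 1) /ping authenticates and returns your current public IP:
#    curl -s "$API/ping" -d "$AUTH"
# 2) then update the A record for e.g. home.example.com with that IP:
#    curl -s "$API/dns/editByNameType/example.com/A/home" \
#         -d '{"apikey":"pk_PLACEHOLDER","secretapikey":"sk_PLACEHOLDER","content":"203.0.113.7"}'
printf '%s\n' "$API"
```

Note that everything, including auth, goes in the JSON POST body, which is a bit unusual but easy to script.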
I think I used it in the past. It’s the one where every X months you need to go to the console and confirm the domain is still in use, right?
I think nowadays there are better options (incl. free ones) with less maintenance and more flexibility.
That’s a very interesting gotcha. They don’t seem to support address ranges either. Unless, once you add the whitelist, requests still work from any address (their documentation is ambiguous), which would be even more confusing.
I also migrated everything to Porkbun. Gandi used to be good too, we used it extensively at work in my previous org (~3 years ago).
Is the whole sector regressing? It seems these companies aren’t happy just earning a profit based on the service they offer. There is always something “more” that they need to do. Often this makes the experience worse. Meh.
Super happy with Porkbun BTW, it just works, does what it’s needed and I found the renewals to be 50% cheaper compared to GoDaddy…
I found it on their FAQ.
Yes, it is generally less restrictive, but… I have 4 domains, and I have now renewed all of them for the maximum amount of time. They will all expire after 2033. So unless I decide to add more domains (which is unlikely), I won’t spend a cent in the next ~9 years. I wonder if they really enforce it as written, or if they still count the renewal as spend “split” over the duration.
Still, I really don’t understand. You can - and should - have proper rate limits on the API. You have API keys that uniquely identify the source, what is “the abuse” they are trying to prevent this way…?
$20/month for a service that anyway is low traffic (especially for hobbyists) is a completely insane price. Even more insane is that their cheapest subscription still doesn’t offer any API access. I agree anyway, but are these staying in business just because they have a consolidated market share? Do they have access to more TLDs? I don’t know, I am genuinely confused. I have absolutely no reason whatsoever to even think of using GoDaddy again.
NameCheap
WOW! I did not know that. I just checked and after a little search:
We have certain requirements for activation to prevent system abuse. In order to have API enabled, your account should meet one of the following requirements:
- have at least 20 domains under your account;
- have at least $50 on your account balance;
- have at least $50 spent within the last 2 years
$50 in last 2 years is not much, but for those who renew for many years, it is still stupid.
Ironically, Namecheap is what the people in https://github.com/navilg/godaddy-ddns/issues/32 migrated to!
I really wish domain registration worked differently, but even in the current scenario, gutting features for such a basic service to extract a few bucks, at the risk of losing customers…?
Oh yeah, Porkbun does have an API (since sometime last year, it seems?). I think Cloudflare, Namecheap and many others do too.
I agree about GoDaddy. Using them years ago was my original sin, and I was lazy, with just one domain that I use for most of my emails etc. I deferred the move for a while and then, as often happens, I had to do it in “emergency” mode.
ClouDNS
I think I heard of it. I think most DDNS scripts support a lot of registrars as well, if one doesn’t want to go with full DNS hosting.
In the case of DNS hosting (I also linked it in the post, but it deserves a shoutout), there is desec.io too. EU-hosted, free (although donations are highly encouraged) and it has a ton of features! There is also a Terraform provider!
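As a taste of why it’s nice to automate against: everything is plain REST with token auth, so peeking at your zones is a one-liner (paths as I remember them from their docs, so verify; the token is a placeholder):

```shell
# desec.io REST API sketch; curl calls are commented out so nothing
# hits the network. Replace YOUR_TOKEN with a real token.
DESEC='https://desec.io/api/v1'
#   list your domains:
#     curl -s "$DESEC/domains/" -H "Authorization: Token YOUR_TOKEN"
#   list all records (rrsets) of one domain:
#     curl -s "$DESEC/domains/example.com/rrsets/" -H "Authorization: Token YOUR_TOKEN"
printf '%s\n' "$DESEC"
```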
I think the benefit of federation is discoverability. I can spin up my Gitea or Forgejo (or something else!) instance, but when people search for code on their instances, they can still discover my public repositories, and if they want to contribute, they can fork them and open PRs from their instances.
So yeah, it mostly means you can self-host and provide space to others, but with the same benefits that GitHub offers right now (i.e., everything is there).