IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.
Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”
He isn’t alone. An administrator on Reddit said 40 percent of their servers were affected, and roughly 70 percent of client computers, approximately 1,000 endpoints, were stuck in a boot loop.
Sadly, for our administrator, things are less than ideal.
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.
Lol, can you imagine? It hurts me just thinking about this situation. Enter that brave hero who kept the fileshare decryption key in a local KeePass :D
That’s why the 3-2-1 rule exists:
- 3 copies of everything on
- 2 different forms of media with
- 1 copy off site
For something like keys, that means:
- secure server share
- server share backup at a different site
- physical copy (USB drive, printout in a safe, etc.)
Any IT pro should be aware of this “rule.”
We have a cron job that once a quarter files a ticket with whoever is on-call that week to test all our documented emergency access procedures, to make sure they’re all working, accessible, and up to date.
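For anyone curious, here is a rough sketch of that kind of quarterly ticket filer. It assumes a generic REST ticketing endpoint; the URL, token, and payload fields are placeholders, not any particular product’s API:

```python
# Sketch of a quarterly "test the break-glass procedures" ticket filer.
# Crontab entry (9am on the 1st of Jan/Apr/Jul/Oct):
#   0 9 1 1,4,7,10 * /usr/local/bin/file-dr-drill-ticket.py
# The ticketing URL and token below are placeholders, not a real API.
import json
import urllib.request

TICKET_API = "https://tickets.example.internal/api/v1/issues"  # hypothetical endpoint
API_TOKEN = "changeme"                                          # hypothetical token

def file_drill_ticket() -> None:
    payload = {
        "title": "Quarterly drill: verify emergency access procedures",
        "body": (
            "Walk through every documented break-glass procedure "
            "(BitLocker key escrow, offline admin creds, DR runbooks) "
            "and confirm each one is reachable, current, and actually works."
        ),
        "assignee": "on-call",              # whoever is on call that week
        "labels": ["dr-drill", "recurring"],
    }
    req = urllib.request.Request(
        TICKET_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    urllib.request.urlopen(req, timeout=30)

if __name__ == "__main__":
    file_drill_ticket()
```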
Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.
Linux can shit the bed too. You need to maintain a physical copy.
Their point is not that Linux can’t fail, it’s that a mix of Windows and Linux is better than just one. That’s what “heterogeneous environment” means.
You should think of your network environment like an ecosystem; monocultures are vulnerable to systemic failure. Diverse ecosystems are more resilient.
Sure, but the chances of your Windows and Linux machines shitting the bed at the same time are lower than if everything is running Windows. It’s exactly the same reason you keep a physical copy (which, after all, can break or burn down): more baskets to spread your eggs across.
Very few businesses are going to spend the money running redundant infrastructure on two different operating systems. Most of them won’t even spend the money on a proper DR plan.
Then they get to suffer the consequences when shit like this happens
Oh, they are.
Hey Ralph can you get that post-it from the bottom of your keyboard?
CS did take down Linux a few years back… I forget the exact details.
Sounds like we may have an easier conclusion to draw here
We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.
The good news: this is a shakeout test, and they’re going to update those playbooks.
Sysadmins are lucky it wasn’t malware this time. Next time could be a lot worse than just a kernel driver with a crash bug.
3rd party companies really shouldn’t have access to ship out kernel drivers to millions of computers like this.
The bad news is that the next incident will be something else they haven’t thought about
I wish you were right. I really wish you were. I don’t think you are. I’m not trying to be a contrarian but I don’t think for a large number of organizations that this is the case.
For what it’s worth I truly hope that I’m 100% incorrect and everybody learns from this bullshit but that may not be the case.
We also back up our BitLocker keys with our RMM solution for this very reason.
I hope that system doesn’t have any dependencies on the systems it’s protecting (auth, mfa).
It’s outside the primary failure domain.
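For reference, a minimal sketch of the kind of collection script an RMM could push to each Windows endpoint to escrow those keys outside the failure domain. The manage-bde call is standard Windows tooling, but the escrow URL is a made-up placeholder:

```python
# Sketch: dump BitLocker recovery protectors on a Windows endpoint so an
# RMM/agent can ship them to storage outside the primary failure domain.
# Assumes it runs elevated; the upload URL is a placeholder, not a real API.
import socket
import subprocess
import urllib.request

def get_recovery_info(volume: str = "C:") -> str:
    # manage-bde prints all key protectors for the volume, including the
    # numerical recovery password.
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def ship_offsite(payload: str) -> None:
    # Placeholder: in practice this would be the RMM agent's own channel.
    req = urllib.request.Request(
        "https://rmm.example.internal/api/bitlocker-escrow",  # hypothetical endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    urllib.request.urlopen(req, timeout=30)

if __name__ == "__main__":
    info = f"host={socket.gethostname()}\n{get_recovery_info('C:')}"
    ship_offsite(info)
```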
I remember a few career changes ago, I was a back room kid working for an MSP.
One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.
I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.
It was our air-gapped encryption key backup.
I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.
They also don’t seem to have a process for testing updates like these…?
This seems to show some really shitty testing practices at a ton of IT departments.
Apparently, from what I was reading, these are forced updates from CrowdStrike; you don’t have a choice.
I’ve heard differently. But if it’s true, that should have been a non-starter for the product for exactly reasons like this. This is basic stuff.
Companies use CrowdStrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.
Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
It shouldn’t, but when the decisions are made by bean counters and not people with security knowledge things like this can easily (and frequently) happen.
Not bothering doing basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.
Unfortunately, the pace of attack development doesn’t really give much time for testing.
I get storing BitLocker keys in AD, but as a net admin and not a server admin… what do you do with the DCs’ keys? USB storage in a sealed envelope in a safe (or at worst, a locked file cabinet drawer in the IT manager’s office)?
Or do people forgo running BitLocker on servers, since encrypting data at rest can be compensated for by physical security in the data center?
Or do DCs run on SEDs (self-encrypting drives)?
Lemmy appears to be weathering the storm quite well…
…probably runs on Linux
The overwhelming majority of webservers run Linux
(it’s not even close, like high 90 percent range)
Edit: Upon double-checking it’s more like mid-80s, but the point stands
It runs on hundreds of servers. If any of them ran Windows they might be out, but unless you had an account on them you’d be fine with the rest. That’s the whole point of federation.
I wonder if any Lemmy servers run on Windows without WSL. I can’t think of any hard dependencies on Linux, so it should be possible.
Lmao this is incredible
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
“Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”
N.B.: Reddit link is from the source
I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.
C-suites fired? That’s the funniest thing I’ve heard yet today. They aren’t getting fired; the scale of the incident is its own ass-coverage. How can they be to blame when all these other companies were hit as well?
I guess this is a good week for me to still be laid off.
Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year.
Sounds like a lot of architects and admins are going to get thrown under the bus for this one.
“Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”
Stupid argument though; honestly it’s just chance that CrowdStrike was the vendor to shit the bed. It might as well have been ESET. You should still have procedures for this.
Fired? I hope they get class-actioned out of existence as a warning to anyone who skimps on QA
If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:
- Shut down the affected instance.
- Detach the boot volume.
- Move the boot volume (attach it) to a working instance in the same availability zone (us-east-1a or whatever).
- Remove the file(s) recommended by CrowdStrike:
- Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
- Locate the file(s) matching “C-00000291*.sys”, and delete them (unless they have already been fixed by CrowdStrike).
- Detach the volume and move it back to the original instance (attach).
- Boot the original instance.
Alternatively, you can restore from a snapshot taken before the bad CrowdStrike update went out. But that is not always ideal.
A word of caution: I’ve done this over a dozen times today, and I did have one server where the bootloader was wiped after I attached it to another EC2 instance. Always make a snapshot before doing the work, just in case.
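If you have a pile of these to do, here’s a rough boto3 sketch of the same volume shuffle. The instance/volume IDs and device names are placeholders, and the actual file deletion still happens by hand (or via SSM) on the helper instance:

```python
# Sketch of the detach/clean/reattach shuffle with boto3. IDs and device
# names are placeholders; snapshot first, per the caution above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

BROKEN_INSTANCE = "i-0123456789abcdef0"   # placeholder
HELPER_INSTANCE = "i-0fedcba9876543210"   # placeholder, same availability zone
BOOT_VOLUME = "vol-0123456789abcdef0"     # placeholder (root volume of the broken box)

def wait(name, **kwargs):
    ec2.get_waiter(name).wait(**kwargs)

# 1. Stop the broken instance, then snapshot its boot volume as a safety net.
ec2.stop_instances(InstanceIds=[BROKEN_INSTANCE])
wait("instance_stopped", InstanceIds=[BROKEN_INSTANCE])
ec2.create_snapshot(VolumeId=BOOT_VOLUME, Description="pre-crowdstrike-fix")

# 2. Detach the boot volume and attach it to the helper as a secondary disk.
ec2.detach_volume(VolumeId=BOOT_VOLUME, InstanceId=BROKEN_INSTANCE)
wait("volume_available", VolumeIds=[BOOT_VOLUME])
ec2.attach_volume(VolumeId=BOOT_VOLUME, InstanceId=HELPER_INSTANCE, Device="/dev/sdf")
wait("volume_in_use", VolumeIds=[BOOT_VOLUME])

# 3. Manual step on the helper: bring the disk online and delete
#    C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys from it.
input("Delete the C-00000291*.sys file(s) on the attached disk, then press Enter...")

# 4. Move the volume back and boot the original instance.
ec2.detach_volume(VolumeId=BOOT_VOLUME, InstanceId=HELPER_INSTANCE)
wait("volume_available", VolumeIds=[BOOT_VOLUME])
ec2.attach_volume(VolumeId=BOOT_VOLUME, InstanceId=BROKEN_INSTANCE, Device="/dev/sda1")
ec2.start_instances(InstanceIds=[BROKEN_INSTANCE])
```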
Good advice!
At least no mission critical services were hit, because nobody would run mission critical services in Windows, right?
…
RIGHT??
This is why every machine I manage has a second boot option to download a small recovery image off the Internet and phone home with a shell. And a copy of it on a cheap USB stick.
Worst case I can boot the Windows install in a VM with the real disk, do the maintenance remotely. I can reinstall the whole thing remotely. Just need the user to mash F12 during boot and select the recovery environment, possibly input WiFi credentials if not wired.
I feel like this should be standard if you have a lot of remote machines in the field.
This is why every machine I manage has a second boot option to download a small recovery image off the Internet and phone home with a shell. And a copy of it on a cheap USB stick.
You’re fucking killing it. Stay awesome.
Also gist this up pls. Thanks.
I wish it was more shareable, but it’s also not as magic as it sounds.
Fundamentally it’s just a Linux install with some heavy customizations so it does one thing only: boot Linux, show just enough prompts to get it online so the VPN works, download the root image into RAM and boot into it so I can SSH into the box, and then give me a bunch of Linux tools so I can reimage from there, or run QEMU with the physical disk passed through so I can VNC into the install even if it BSODs.
It’s a Linux UKI (combined kernel+initramfs in a single EFI file the firmware can boot directly without a bootloader), but you can just as easily get away with a hidden Debian install or whatever. It can even be a second Windows install if that’s your thing. The reason I went this particular route is that I don’t have to update it, since it downloads everything on the fly, much like the Mac recovery. And it runs entirely in RAM afterwards, so I can safely do whatever is needed with the disk.
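The QEMU step, roughly, for anyone who wants to replicate it; the disk path, memory size, and OVMF firmware path are guesses about a typical setup rather than anything the poster shared:

```python
# Sketch: from the recovery Linux environment (running as root), boot the
# machine's real disk inside QEMU and expose the console over VNC so the
# Windows install can be fixed remotely over the VPN.
# /dev/nvme0n1, the memory size, and the OVMF path are assumptions.
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                         # use hardware virtualization
    "-m", "4096",                          # 4 GiB for the guest
    "-bios", "/usr/share/ovmf/OVMF.fd",    # UEFI firmware, since the host boots UEFI
    # Pass the physical disk through; default IDE emulation, since a Windows
    # guest usually has no virtio drivers installed.
    "-drive", "file=/dev/nvme0n1,format=raw",
    "-vnc", ":1",                          # VNC on display :1 (TCP 5901)
]

subprocess.run(qemu_cmd, check=True)
```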
I didn’t know so many servers still run Windows.
On-prem AD, at least for my MSP’s clients. We’ve been pushing hard the last few years to migrate to Azure.
In the corporate world, Windows very much gets used. I know Lemmy likes a circlejerk around Linux, but in the corporate world you find various OSes for both desktop and servers. I had to support several different OSes and developed for only two. They all suck in different ways; there are no clear winners.
It’s not just a circle jerk in this case. Windows is dominant for desktop usage but Linux has like 90% of the server market and is used for basically all new server projects.
Paying for Windows licensing when it doesn’t benefit you is silly, and that’s been recognized for years.
I got super lucky: I got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.
It might be CrowdStrike’s fault, but maybe this will motivate companies to adopt better workflows and actual preproduction deployments to test these sorts of updates before they go live on the rest of the systems.
I know people at big tech companies that work on client engineering, where this downtime has huge implications. Naturally, they’ve called a sev1, but instead of dedicating resources to fixing these issues the teams are basically bullied into working insane hours to manually patch while clients scream at them. One dude worked 36 hours straight because his manager outright told him “you can sleep when this is fixed”, as if he’s responsible for CrowdStrike…
Companies won’t learn. It’s always a calculated risk, and much of the fallout of that risk lies with the workers.
That dude should not have put up with that.
Sounds so illegal that it would make the labour authority happy.
Is it illegal? I’m not American so I have no idea if there are laws in your country against on-call maximum hours.
1. It’s not about on-call; they are literally in the office.
2. See 1.
3. Not sure about America, but it is very illegal in Russia.
80% of our machines were hit. We were working through 9pm on Friday night, running around putting in BitLocker keys and running the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.
Also gotta say… the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.
Just a thought from experience: Be wary of any critical products and/or taking a job from a company run by an accountant. CrowdStrike CEO… accountant!
Accounting firms are an obvious exception.
If it only impacts a percentage of your machines then there was a problem in the deployment strategy or the solution wasn’t worthwhile to begin with.
… So your point was that it would have been better if everything went down?
There are plentiful reasons why deployments are done in parts, and I’m guessing that after today strategies will change to apply updates in groups to avoid everything going down.
Also, dear God, stop using Windows as a server, or even as a client for that matter. If you’re paying actual money to get this shit, then the results are on you.
Also, 😂
No.
My main point was that CrowdStrike has always been lazy man’s garbage.
Why the fuck does an antivirus need a kernel driver?
Because that’s where filesystem access lives? AV wouldn’t do very much good if it could only run from userspace.
Because the Windows OS is inherently insecure, with lots of privilege escalation opportunities.
Pretending Linux privilege escalation doesn’t exist… to fight something that gets root you have to be able to fight at the root level, or the root-access malware can simply nuke the AV from userland.
Or you could just use kernel namespaces, SELinux, systemd sandboxing, etc. There is zero need to run in ring 0 for security reasons.
Also, privilege escalation is a lot rarer on Linux than it is on Windows.