Does anyone here working in IT have a sense of how "ongoing" this issue is expected to be? Is this something that will largely be resolved in a day or two, or is it going to take weeks or months?
My guess as a Linux admin in IT:
I understand the fix takes ~5 minutes per system, must be done in person, and cannot be farmed out to users.
There are likely conversations going on about alternatives to, or mitigations for, CrowdStrike.
Most things were likely fixed yesterday (depending on staffing levels). Complications could go on for a week, and fallout of various sorts for a month.
Lawsuits, disaster planning, and cyberattacks (targeting CrowdStrike customers and those that hastily stopped using it) will go on for months and years.
The next CrowdStrike mistake could happen at any time…
Sounds like the tagline to an action movie.
Coming to a computer near you: Crowdstrikenado!
When will crowd strike next?
Fully agree, as a security engineer at a mostly Microsoft shop. We have some pending laptop fixes, but I think we've talked our CIO out of hastily pulling out of CrowdStrike. Really, it didn't hit us hard: maybe down for 2-3 hours around 4 am Friday morning. Microsoft gives us far more issues, far more often, and there's no constant talk of pulling that out…
Maybe you should ;)
As a Linux user, I deal with Windows issues way too often administering other people's laptops.
God, I wish!
My guess as a field technician is that this is going to take at least a week to resolve. As you probably know, it's an easy fix; the difficult part is going to every single store to actually do the procedure. Today I worked on 30-35 PCs, and most of my time was spent going from location to location. The Tour de France is on, so getting around is very time-consuming. Anyway, yeah: at least a week.
It's going to be a grind. This is causing blue screens of death on Windows machines, which can only be fixed if you have physical or console access.
In the cloud space, if this is happening to you, I think you're screwed. Theoretically there's a way to do it by detaching the disk from the broken virtual machine and attaching it to another, working virtual machine, but it's a freaking bear.
Basically everyone's going to have to grind this out machine by machine. There's not going to be an easy way to automate it unless they have a way to destroy and recreate all their computers.
I live in linuxland and it's been really fun watching this from the sidelines. I really feel for the admins having to deal with this right now, because it's just going to suck.
I'd have thought the cloud side would be pretty easy to script over. Presumably the images aren't encrypted from the host filesystem's point of view, so just ensure each VM is off, mount its image, delete the offending files, unmount the image, and start the VM back up. Check that it works on a few test machines, then let it rip on the whole fleet.
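Roughly something like this sketch, assuming a KVM/libvirt host with local, unencrypted disk images and the libguestfs Python bindings installed; the VM names and image paths are made up, and the wildcard is the channel-file pattern CrowdStrike published for this incident:

    # Offline-clean CrowdStrike channel files from stopped Windows VMs.
    # Sketch only: assumes libvirt/KVM, local qcow2 images, no disk
    # encryption, and python3-guestfs (libguestfs) on the host.
    import subprocess
    import guestfs

    # Hypothetical inventory; in practice you'd pull this from virsh or your CMDB.
    FLEET = [
        ("win-app-01", "/var/lib/libvirt/images/win-app-01.qcow2"),
        ("win-app-02", "/var/lib/libvirt/images/win-app-02.qcow2"),
    ]

    # Wildcard for the bad channel file from CrowdStrike's guidance.
    BAD_PATTERN = "/Windows/System32/drivers/CrowdStrike/C-00000291*.sys"

    def clean_image(disk_path):
        """Mount a Windows disk image offline and delete matching files."""
        g = guestfs.GuestFS(python_return_dict=True)
        g.add_drive_opts(disk_path, readonly=0)
        g.launch()
        removed = 0
        for root in g.inspect_os():                    # locate the Windows install
            mounts = g.inspect_get_mountpoints(root)
            for mp in sorted(mounts):                  # mount "/" before subpaths
                g.mount(mounts[mp], mp)
            for path in g.glob_expand(BAD_PATTERN):
                g.rm(path)
                removed += 1
            g.umount_all()
        g.shutdown()
        g.close()
        return removed

    for vm, disk in FLEET:
        # Force the boot-looping VM off before touching its disk.
        subprocess.run(["virsh", "destroy", vm], check=False)
        print(vm, "removed", clean_image(disk), "file(s)")
        subprocess.run(["virsh", "start", vm], check=True)

On a public cloud the same idea becomes the detach-disk/attach-to-a-rescue-VM/re-attach dance described above, which is exactly the "freaking bear" part.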
Oh, my friend. You think these companies do things in a logical, scalable way? I have some really bad news…
Theoretically that could work, but security policies often require the disks to be BitLocker-encrypted, and certain systems make it harder to pull off, like fixing a domain controller.
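If the recovery keys are escrowed to Active Directory you can at least look them up in bulk before unlocking each disk. A rough sketch with the ldap3 library; the server, account, and DN below are placeholders:

    # Sketch: pull BitLocker recovery passwords escrowed in Active Directory,
    # so encrypted disks can be unlocked for the offline fix.
    import getpass
    from ldap3 import Server, Connection, SUBTREE

    server = Server("ldaps://dc01.corp.example.com")
    conn = Connection(server,
                      user="CORP\\svc_helpdesk",
                      password=getpass.getpass("AD password: "),
                      auto_bind=True)

    # Recovery info lives in msFVE-RecoveryInformation child objects
    # under each computer object.
    computer_dn = "CN=WIN-APP-01,OU=Servers,DC=corp,DC=example,DC=com"
    conn.search(search_base=computer_dn,
                search_filter="(objectClass=msFVE-RecoveryInformation)",
                search_scope=SUBTREE,
                attributes=["msFVE-RecoveryPassword"])

    for entry in conn.entries:
        print(entry.entry_dn, entry["msFVE-RecoveryPassword"])

Still doesn't help much for a domain controller, since you need the DC up to ask it for the key.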