this post was submitted on 19 Jul 2024
632 points (98.5% liked)

Technology
IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: "It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers."

He isn't alone. An administrator on Reddit said 40 percent of servers were affected, along with 70 percent of client computers stuck in a bootloop, or approximately 1,000 endpoints.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

[–] catloaf@lemm.ee 193 points 3 months ago (5 children)

We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

[–] Zron@lemmy.world 52 points 3 months ago

I remember a few career changes ago, I was a back room kid working for an MSP.

One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

It was our air-gapped encryption key backup.

I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

[–] jet@hackertalks.com 43 points 3 months ago (3 children)

The good news is: this is a shake-out test, and they're going to update those playbooks.

[–] jlh@lemmy.jlh.name 40 points 3 months ago

Sysadmins are lucky it wasn't malware this time. Next time could be a lot worse than just a kernel driver with a crash bug.

3rd party companies really shouldn't have access to ship out kernel drivers to millions of computers like this.

[–] Quexotic@infosec.pub 15 points 3 months ago

I wish you were right. I really do. I don't think you are. I'm not trying to be a contrarian, but for a large number of organizations I don't think this is the case.

For what it's worth I truly hope that I'm 100% incorrect and everybody learns from this bullshit but that may not be the case.

[–] Evotech@lemmy.world 14 points 3 months ago

The bad news is that the next incident will be something else they haven't thought about

[–] SapphironZA@sh.itjust.works 17 points 3 months ago (1 children)

We also backup our bitlocker keys with our RMM solution for this very reason.

[–] catloaf@lemm.ee 12 points 3 months ago (1 children)

I hope that system doesn't have any dependencies on the systems it's protecting (auth, mfa).

[–] SapphironZA@sh.itjust.works 5 points 3 months ago

It's outside the primary failure domain.

[–] JasonDJ@lemmy.zip 11 points 3 months ago* (last edited 3 months ago) (2 children)

I get storing BitLocker keys in AD, but as a net admin and not a server admin... what do you do with the DCs' keys? USB storage in a sealed envelope in a safe (or at worst, a locked file cabinet drawer in the IT manager's office)?

Or do people forgo running BitLocker on servers, since encrypting data at rest can be compensated for by physical security in the data center?

Or DCs run on SEDs?

[–] catloaf@lemm.ee 10 points 3 months ago (1 children)

When I set it up at one company, the recovery keys were printed out and kept separately.

[–] nobleshift@lemmy.world 4 points 3 months ago

Paper never goes out of style...

[–] Tankton@lemm.ee 4 points 3 months ago (1 children)

A paper printout in a safe is what's usually done.

[–] modeler@lemmy.world 2 points 3 months ago (1 children)

You need at least two copies in two different places - places that will not burn down/explode/flood/collapse/be locked down by the police at the same time.

An enterprise is going to be commissioning new computers or reformatting existing ones at least once per day. This means the BitLocker key list would need printouts at least every day, in two places.

Given the above, it's easy to see that this process will fail from time to time, in ways like accidentally leaking a document with all these keys.

[–] JasonDJ@lemmy.zip 1 points 3 months ago (1 children)

I think the idea is to store most of the keys in AD. Then you just have to worry about restoring your DCs.

[–] modeler@lemmy.world 1 points 3 months ago

I think that's a better plan than physically printing keys. I'd also want to save the keys in another format somewhere, perhaps using a small script to export them to a safe store in the cloud or a box I control.
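A minimal sketch of that kind of export script. Everything here is hypothetical: the key source is stubbed rather than pulled from AD, and a real version would additionally encrypt the dump with a key held outside the AD failure domain (printed, or in an HSM). The checksum just lets a restore detect corruption.

```python
# Hypothetical sketch: dump {hostname: recovery key} pairs to a blob
# destined for a second location outside the AD failure domain.
import hashlib
import json

def export_keys(keys: dict[str, str]) -> bytes:
    """Serialize the key list with an integrity checksum."""
    payload = json.dumps(keys, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    # The checksum travels with the dump so a restore can detect damage.
    return json.dumps({"sha256": digest, "keys": keys}, sort_keys=True).encode()

def import_keys(blob: bytes) -> dict[str, str]:
    """Verify the checksum before trusting a restored dump."""
    doc = json.loads(blob)
    payload = json.dumps(doc["keys"], sort_keys=True).encode()
    if hashlib.sha256(payload).hexdigest() != doc["sha256"]:
        raise ValueError("backup corrupted")
    return doc["keys"]
```

The point is the separation, not the format: whatever holds the export must not depend on the systems whose keys it protects.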

[–] ripcord@lemmy.world 9 points 3 months ago (2 children)

They also don't seem to have a process for testing updates like these...?

This seems like showing some really shitty testing practices at a ton of IT departments.

[–] USSEthernet@startrek.website 20 points 3 months ago (1 children)

Apparently, from what I was reading, these are forced updates from CrowdStrike; you don't have a choice.

[–] ripcord@lemmy.world 9 points 3 months ago (1 children)

I've heard differently. But if it's true, that should have been a non-starter for the product for exactly reasons like this. This is basic stuff.

[–] Entropywins@lemmy.world 13 points 3 months ago (2 children)

Companies use CrowdStrike so they don't need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.

[–] hangonasecond@lemmy.world 6 points 3 months ago (1 children)

Automatic updates should still have risk mitigation in place, and the outage didn't only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

[–] kent_eh@lemmy.ca 7 points 3 months ago

Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

It shouldn't, but when the decisions are made by bean counters and not people with security knowledge things like this can easily (and frequently) happen.

[–] ripcord@lemmy.world 5 points 3 months ago

Not bothering doing basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.

[–] catloaf@lemm.ee 3 points 3 months ago (1 children)

Unfortunately, the pace of attack development doesn't really give much time for testing.

[–] ripcord@lemmy.world 5 points 3 months ago (1 children)

More than the zero time that companies appear to have invested here.

[–] TonyOstrich@lemmy.world 5 points 3 months ago (1 children)

I was just thinking about something similar. I can understand wanting to get a security update out as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean, for example, split all of your customers into 24 groups and push the update to another group once an hour. If it causes a massive fuck-up, it hits only some or most of them, but not all.
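The staged-rollout idea above can be sketched in a few lines. This is purely illustrative (the function names and 24-ring count are the commenter's example, not CrowdStrike's actual process): hash each customer into a stable ring, then release ring N at hour N, so a bad update hits ring 0 first and can be halted before the rest.

```python
# Illustrative sketch of a ring-based staged rollout: deterministically
# hash each customer into one of 24 rings; ring N receives the update
# N hours after release.
import hashlib

RINGS = 24

def ring_of(customer_id: str) -> int:
    """Stable assignment: the same customer always lands in the same ring."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % RINGS

def due_for_update(customer_id: str, hours_since_release: int) -> bool:
    """Ring 0 updates immediately; ring N waits N hours."""
    return ring_of(customer_id) <= hours_since_release
```

The deterministic hash matters: customers don't hop between rings across releases, so the earliest rings act as a consistent canary population.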

[–] hangonasecond@lemmy.world 4 points 3 months ago

Heck, even 30 minutes ahead for 1% of devices would've had a reasonable chance of catching this.