This post was submitted on 19 Jul 2024.

All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to the blue screen of death. It's all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told that we (who almost all work from home) need to come into the office Monday, as they can only apply the fix in person. We'll see if that changes over the weekend...
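
For context, the widely reported manual fix amounts to booting each affected machine into Safe Mode or the Windows Recovery Environment and deleting the faulty channel file, which is why it has to be done hands-on. Here's a minimal sketch of that step in Python (the directory and file pattern follow CrowdStrike's public guidance; the script itself is illustrative, not an official tool):

```python
# Illustrative sketch of the widely reported workaround; must be run
# from Safe Mode / WinRE, where the faulty CrowdStrike driver isn't loaded.
import glob
import os

# Directory and file pattern per CrowdStrike's public guidance
CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_CHANNEL_FILE = "C-00000291*.sys"

def remove_bad_channel_files() -> None:
    # Delete every channel file matching the faulty update's pattern
    pattern = os.path.join(CROWDSTRIKE_DIR, BAD_CHANNEL_FILE)
    for path in glob.glob(pattern):
        print(f"Removing {path}")
        os.remove(path)

if __name__ == "__main__":
    remove_bad_channel_files()
```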

Saik0Shinigami@lemmy.saik0.com 8 points 3 months ago

> I think we’re defining disaster differently. This is a disaster.

I've not read a single DR document that says "research potential options". DR stuff tends to come into play AFTER you've done the research and concluded that the system is unrecoverable. You shouldn't be rolling DR plans at all in this case, since it's recoverable.

> I imagine CrowdStrike pulled the update

I would also imagine that they'd test updates before rolling them out. But here we are... I honestly don't know, though. None of the systems under my control use it.

Skimflux@lemmy.world 3 points 3 months ago

Right, "research potential options" is usually part of Crysis Management, which should precede any application of the DR procedures.

But those procedures vary widely in scope: they might range from switching to secondary servers to a full rebuild from tape backups. In some cases DR might be the best option even if the system is easily recoverable (e.g., if the DR procedure is faster than the recovery options).

Just the 'figuring out what the hell is going on' phase can take several hours; if you can get the DR system up in less time than that, it's certainly a good idea to roll it out. And if it turns out that you can fix the main system with a couple of lines of code, that's great, but no one should be chastised for switching the DR system on to keep the business going while the main machines are borked.
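
That trade-off can be written down as a simple rule of thumb: fail over when bringing DR up is expected to beat diagnosing and fixing the primary. A hypothetical sketch (the function name and numbers are invented for illustration; real estimates come from your runbook):

```python
# Hypothetical helper illustrating the rule of thumb above; the function
# name and the example figures are made up for illustration only.
def should_activate_dr(est_diagnosis_hours: float,
                       est_fix_hours: float,
                       dr_activation_hours: float) -> bool:
    """Fail over if activating DR beats waiting out diagnosis + repair."""
    return dr_activation_hours < est_diagnosis_hours + est_fix_hours

# e.g., 4h just to figure out what's wrong, 2h to fix,
# 1h to bring the DR system up -> switch DR on.
print(should_activate_dr(4.0, 2.0, 1.0))  # True
```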

Monument@lemmy.sdf.org 2 points 3 months ago

That’s a really astute observation - I threw out disaster recovery when I probably ought to have used crisis management instead. Imprecise on my part.

Monument@lemmy.sdf.org 2 points 3 months ago (last edited 3 months ago)

The other commenter on this thread pointed out that I should have said crisis management rather than disaster recovery, and they're right - and so were you, but I wasn't thinking about that this morning.

Saik0Shinigami@lemmy.saik0.com 3 points 3 months ago

Nah, it's fair enough. I'm not trying to start an argument about any of this. But ya gotta talk in the terms the insurance people use (because that's how your C-suite understands it). If you say DR... and didn't actually DR... that can cause some auditing problems later. I unfortunately (or fortunately... I dunno) hold a C-suite position in a few companies. DR is a nasty word. Just like "security incident" is a VERY nasty phrase.