I mean, my point still stands, but if we want to talk about semantics - are you saying Betamax wasn’t a giant?

Obviously they entered the VHS war and lost, but after that it was pretty much downhill for the rest of their company and products. They were a big name brand and crashed out by entering a war they ultimately lost. That’s all I’m trying to get at.

That’s a broad leap, no? Giants rise and fall. Look at Betamax, Blockbuster, Kodak, etc.

There’s always going to be something better out there, as long as you keep looking and are willing to leave the old post behind. Chin up!

We’re not a fascist, propaganda-addicted cult, guys, trust us

stevedidwhat_infosec@infosec.pub 5 points 3 days ago (last edited 3 days ago)

Got snatched up at an airport

Fell out of a 5th story window

Sent to a religious work camp

Poisoned with polonium-210

All sorts of fun little things

stevedidwhat_infosec@infosec.pub 2 points 4 days ago (last edited 4 days ago)

Not at all what I meant. The premise was that this wouldn’t happen if they were being paid fairly. Supply chain attacks happen with or without fair pay.

Look at what happened with the XZ backdoor. Whether or not they’re getting paid, it just means a different door gets opened.

The root of the problem is that we blindly trust people based on brand name and popularity. That has never, in the history of technology, been a reliable or effective means of authentication.

If it’s not outright buying out companies, it’ll be vulnerabilities or a lack of proper management; if it’s not vulns, it’ll be insider threats.

These are problems we’ve known about for at least a decade, and we’ve done fuck all to address the root cause.

Never trust, always verify. Simple as that.
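To make that concrete, here’s a minimal Python sketch of what “verify” can look like for a third-party artifact: pin the hash you expect and refuse to use the download if it doesn’t match. The URL and hash below are placeholders, not real values.

```python
# Minimal sketch of "never trust, always verify" for a third-party artifact.
# ARTIFACT_URL and EXPECTED_SHA256 are placeholders, not real values.
import hashlib
import urllib.request

ARTIFACT_URL = "https://example.com/vendor/library-1.2.3.tar.gz"
EXPECTED_SHA256 = "0" * 64  # pin this from a source you actually trust

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download an artifact and refuse to use it unless its hash matches the pin."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"hash mismatch: got {digest}, expected {expected_sha256}")
    return data

# data = fetch_and_verify(ARTIFACT_URL, EXPECTED_SHA256)
```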

… he made plenty off the product and made more when he sold. Devs’ ability to make money has nothing to do with companies coming in and injecting malware into the service.

Any threat actor group with sufficient funds from various campaigns, spyware, etc. could use said funds to buy out a dev, owner, etc.

Not to mention state-sponsored threat actors. This is a perfect example of distracting from what actually happened.

Good catch! Missed that one

Because it exposes root and system internals. That’s the biggest reason Android devices get compromised, and your fun, quirky Android becomes a link in a botnet peddling god knows what, including attacks against other people and other illegal activity and media.

stevedidwhat_infosec@infosec.pub 26 points 4 days ago (last edited 2 days ago)

For anyone interested: if you’re using uMatrix to block shit, you can punch these lines into a new text file and import it as a blocklist, then commit it with the tiny arrow that points left toward the permanent list to save it permanently:

* www[.]googie-anaiytics[.]com * block

* kuurza[.]com * block

* cdn[.]polyfill[.]io * block

* polyfill[.]io * block

* bootcss[.]com * block

* bootcdn[.]net * block

* staticfile[.]org * block

* polyfill[.]com * block

* staticfile[.]net * block

* unionadjs[.]com * block

* xhsbpza[.]com * block

* union[.]macoms[.]la * block

* newcrbpc[.]com * block

Remove the square brackets before saving the file; they’re only there to prevent hyperlinks and misclicks. (There’s a small script for doing this automatically after the edit notes.)

Edit: this is not a bulleted list; every line must start with an asterisk. Mentioning this in case your instance doesn’t update edits to comments quickly.

Edit2: added new IOCs

Edit3: MOAR IOCS FOR THE HORDE
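If you’d rather not strip the brackets by hand, a tiny Python sketch like the one below does the refanging. The file names are placeholders for wherever you saved the list.

```python
# Minimal sketch: turn the defanged rules above ("[.]") back into real dots
# so the file can be imported into uMatrix. File names are placeholders.
with open("blocklist_defanged.txt") as src, open("blocklist.txt", "w") as dst:
    for line in src:
        dst.write(line.replace("[.]", "."))
```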

This has almost nothing to do with what you’re talking about.

A Chinese company bought the domain and the service in February and is attacking people under highly specific conditions (mobile devices at specific times).

This is an attack. Not negligence, not an uh oh oopsie woopsie fucky wucky. Attack.
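One standard guard against this kind of third-party script takeover is Subresource Integrity: a page pins the hash of the exact script it vetted, and the browser rejects anything else served from that URL. Here’s a minimal Python sketch for generating that pin, assuming a locally vetted copy saved as polyfill.min.js (a hypothetical file name):

```python
# Minimal sketch: compute a Subresource Integrity (SRI) value for a locally
# vetted copy of a script, so a page only accepts exactly those bytes from a CDN.
# "polyfill.min.js" is a hypothetical local file name.
import base64
import hashlib

def sri_sha384(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# print(sri_sha384("polyfill.min.js"))  # goes in the script tag's integrity attribute
```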

Intuit uses polyfill… and a lot of people use that service.

Cloudflare and Fastly wouldn’t be setting up mirrors if it weren’t still being used, I can guarantee that.


Anyone else getting tired of all the clickbait articles about PoisonGPT, WormGPT, etc. without them ever providing any sort of evidence to back up their claims?

They’re always talking about how the models are so good and can write malware, but damn near every GPT model I’ve seen can barely write basic code; no shot it’s writing actually valuable malware, let alone FUD (fully undetectable) malware as some are claiming.

Thoughts?

