
datahoarder


Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread


Using Archive.org doesn't work on Medium posts, and ideally I want to archive every post. The blog I'm trying to archive is https://itsairborne.com, in case the posts go down. Googling how to back up Medium posts only gives me articles on how to do it if it were my own blog. I found an extension called Monolith of Web that lets you back up a website using the Rust tool Monolith, so I went to each article, clicked the extension, and saved them all one by one.
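If clicking through every article gets tedious, a rough Python sketch like the one below could automate it: pull the post URLs from the site's sitemap and run the monolith CLI on each one. The /sitemap.xml location is an assumption (check where, or whether, this blog actually publishes one), and it assumes the monolith binary is on your PATH.

```python
# Sketch: collect post URLs from the blog's sitemap and feed each one to
# the monolith CLI instead of saving pages one by one in the browser.
import subprocess
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SITEMAP_URL = "https://itsairborne.com/sitemap.xml"  # assumed location

def post_urls(sitemap_url):
    """Yield every <loc> entry in the sitemap."""
    with urllib.request.urlopen(sitemap_url) as resp:
        tree = ET.fromstring(resp.read())
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    for loc in tree.findall(".//sm:loc", ns):
        yield loc.text.strip()

for url in post_urls(SITEMAP_URL):
    # Name each archive after the last path segment of the post URL.
    slug = urlparse(url).path.strip("/").split("/")[-1] or "index"
    subprocess.run(["monolith", url, "-o", f"{slug}.html"], check=True)
```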

Borger@lemmy.blahaj.zone 5 points 11 months ago

Write a scraper using Python and Selenium or something. You may have to log in manually as part of it.
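Something like this rough sketch, for example. The hard-coded post list and the wait on an <article> tag are assumptions; swap in however you actually collect the URLs:

```python
# Sketch: drive a real browser with Selenium so Medium's JavaScript renders,
# pausing once so you can log in manually, then save each rendered page.
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.by import By

POSTS = [
    "https://itsairborne.com/example-post",  # hypothetical URL
]

driver = webdriver.Firefox()
driver.implicitly_wait(10)  # wait up to 10s for elements to appear

driver.get("https://medium.com/")
input("Log in in the browser window if needed, then press Enter...")

for url in POSTS:
    driver.get(url)
    # Assumes Medium renders the post body inside an <article> element;
    # looking it up forces Selenium to wait until the page has rendered.
    driver.find_element(By.TAG_NAME, "article")
    slug = url.rstrip("/").split("/")[-1]
    Path(f"{slug}.html").write_text(driver.page_source, encoding="utf-8")

driver.quit()
```

Note that saving page_source won't inline images or CSS the way Monolith does, so you could also just use Selenium to enumerate the URLs and then hand each one to monolith afterwards.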