userbinator 9 hours ago

This incident brings up a good point: Who archives the archives?

  • divbzero 8 hours ago

    There have been collaborative computing projects like SETI@home [1] and Folding@Home [2] where unused computing power is put to productive use. Could there be something similar for storage: software that offers unused disk space for Internet archiving? (A rough sketch of what such a client might look like follows below.) In the best case, we'd have redundant backups of the Internet Archive distributed around the world.

    [1]: https://setiathome.berkeley.edu/

    [2]: https://foldingathome.org/
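
    Sketching loosely what I mean — a toy volunteer client, assuming a central coordinator that hands out shards and spot-checks holders (all names and paths here are made up):

      import hashlib
      import os

      STORE = os.path.expanduser("~/.ia-mirror")  # hypothetical local shard store

      def store_shard(shard_id: str, data: bytes) -> str:
          """Keep one shard of the archive; return its content hash,
          which the coordinator would record for later audits."""
          os.makedirs(STORE, exist_ok=True)
          with open(os.path.join(STORE, shard_id), "wb") as f:
              f.write(data)
          return hashlib.sha256(data).hexdigest()

      def prove_possession(shard_id: str, nonce: bytes) -> str:
          """Answer a spot-check: hashing nonce+data shows we still hold
          the shard without the coordinator re-downloading it."""
          with open(os.path.join(STORE, shard_id), "rb") as f:
              return hashlib.sha256(nonce + f.read()).hexdigest()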

    • boomlinde 5 hours ago

      Perhaps torrents?

      archive.org does use torrents, and I have one such torrent lying around in my client, which occasionally connects to peers even though the trackers are currently offline. I suppose a new client would find me and the other peers through the DHT. I'd share a magnet link for someone to try, but it's a copyright-ignoring ROM dump archive, so it may not be the best idea to post it here.
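
      For anyone wondering how peers get found with the trackers dead: the DHT is keyed on the info-hash embedded in the magnet link, so that one value is all a client needs. A minimal sketch of pulling it out (the hash below is a made-up placeholder):

        from urllib.parse import urlparse, parse_qs

        def magnet_infohash(magnet: str) -> str:
            """Extract the BitTorrent info-hash from a magnet link.
            The DHT is a distributed peer directory keyed on this hash,
            so a client can find peers even with the trackers offline."""
            query = parse_qs(urlparse(magnet).query)
            for xt in query.get("xt", []):
                if xt.startswith("urn:btih:"):
                    return xt[len("urn:btih:"):].lower()
            raise ValueError("no BitTorrent info-hash in magnet link")

        # Placeholder hash, not a real archive.org item:
        print(magnet_infohash("magnet:?xt=urn:btih:C12FE1C06BBA254A9DC9F519B335AA7C1367A88A"))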

      It's interesting that torrents may not be the first thing that comes to mind. They have the "PR issue" of being the now seemingly mundane way we've been downloading DVD rips for the last 20-odd years. Newer technology like IPFS does a better job of making the genuinely cool core of the idea sound cool.

    • binaryroof 3 hours ago

      That is (to an extent) the vision behind IPFS: https://ipfs.tech/

      • hypercube33 an hour ago

        IPFS on the tin seems pretty awesome; however, when I attempted to dig into it for an hour, I still had no idea how to actually do anything with it. Its usability has a long way to go before I give it another try. In my past experience, it's definitely not a two-step process where you download a client and click on a link to start load-sharing an archive.
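
        For what it's worth, the shortest path I've found since (assuming the kubo implementation and its default local RPC port; the CID is a placeholder) still involves running a daemon, so it's hardly the two-step ideal:

          import json
          import urllib.request

          API = "http://127.0.0.1:5001/api/v0"  # kubo's default local RPC endpoint

          def rpc(method: str, **params) -> dict:
              """Call the local kubo daemon (run `ipfs init` once, then
              `ipfs daemon`). The RPC requires POST, even for reads."""
              qs = "&".join(f"{k}={v}" for k, v in params.items())
              req = urllib.request.Request(f"{API}/{method}?{qs}", method="POST")
              with urllib.request.urlopen(req) as resp:
                  return json.load(resp)

          print(rpc("id")["ID"])  # our node's peer ID
          # Pinning makes our node fetch the data and keep serving it.
          # Placeholder CID below — substitute a real archive's root CID.
          print(rpc("pin/add", arg="bafybeih...placeholder...yku"))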

    • odo1242 7 hours ago

      There's currently an ArchiveTeam effort going on.

  • JKCalhoun 9 hours ago

    r/DataHoarder

    (or r/archiveteam ?)

    Personally, I have archived a few of the magazine collections.

    • notpushkin 8 hours ago
      • chambers 8 hours ago

        > The INTERNETARCHIVE.BAK experiment has come to a close a number of years ago.

        > Much was learned in the process, and many thanks are given to the dozens of people who donated time, space and coding efforts to make the system work as long as it did. A number of useful facts and observations came from the project.

        > The Internet Archive continues to explore methods and code to decentralize the collection, to have a mirror running in various ways - these include IPFS, FileCoin, and others. The INTERNETARCHIVE.BAK project also added general mirroring and tracking code to a number of projects that are still in use.

        IA called this their Postmortem, but it sounds... intentionally opaque. Also, I'm not sure if this website is affiliated with archive.org, since they say at the bottom of their homepage:

        > Archive Team is in no way affiliated with the fine folks at ARCHIVE.ORG. Archive Team can always be reached by e-mail at archiveteam@archiveteam.org or by IRC at the channel #archiveteam (on hackint).

        • notpushkin 7 hours ago

          Yeah, it’s a completely [1] separate team (they do run a bunch of archiving projects that end up in the IA / Wayback Machine, though). Just wanted to share – it’s a shame there isn’t much more info apart from some code; maybe worth looking into the IRC logs?

          [1]: On paper, at least; the founder, Jason Scott, seems pretty involved with the IA as well, and I’m not really sure how much the teams intersect.

          • textfiles 9 minutes ago

            The co-founder, Jason Scott, retired from Archive Team years ago and stays around as a cheerleader and advisor. He is employed by the Internet Archive.

  • Sakos 7 hours ago

    I really wish the EU had its own organisation for maintaining an internet archive that, at the very minimum, mirrored the IA. This is our history, and there's now only a single place that holds any significant archive of it. It seems like the EU should have a significant interest in preserving it for generations to come.

keepamovin 10 hours ago

How vulnerable is IA to a malicious actor who wants to rewrite history or run an 'information cleansing' operation?

- take offline

- purge 'problematic' archives

- return to service

Is that impossible? Are there redundancies that make this very hard?

  • cookiengineer 10 hours ago

    Don't give the SVR any ideas, man.

    The problem that multi-generational projects like this always have is tech debt. Any library/dependency chosen by the previous generation might sit unmaintained for decades before it falls through the cracks and someone notices.

    Heritrix, for example, was written in a very old "Java way" of doing things. They also have lots of services that were built in the PHP4 age, with globals by default and the like.

    Always keep in mind that whatever you choose is, essentially, a bet. Over time you'll realize that different language ecosystems have goals that are aligned or misaligned with your project's. Don't choose libraries because of hype; choose them for maintainability.

    • Apocryphon 8 hours ago

      I dunno about the state-actor hypothesis, but if there is one, it all sounds like Charles Stross's description of a future cold war in Halting State:

      > "And that's the twentieth-century model, what they used to call an electronic Pearl Habour. Things have moved on since then. Footnotes inserted in government reports feeding into World Trade Organization negotiating positions. Nothing we'd notice at first, nothing that would be obvious for a couple of years. You don't want to halt the state in its tracks, you simply want to divert it into a sliding of your choice."

      Who knows what will appear after the archives are restored?

    • keepamovin 10 hours ago

      Hah! As if they need ideas. But that's not the point: how possible is it?

      Re your comprehensive edit: I'm totally on board with that tech-choice idea. It's a bet; avoid the fads and pick stuff that's robust (or at least a fit for your possible futures).

      • cookiengineer 9 hours ago

        I'd say we have to differentiate between human error as an attack surface and software bugs / vulnerabilities as an attack surface here.

        Software-wise I wouldn't know where to start, honestly, because the Internet Archive as a project is so vast [1] that it's hard to get an architectural overview of how the pieces are glued together. Unifying the tech stack seems to have been no concern at all during its development...

        But from a pentesting perspective, I'd try to find vulnerabilities in the Perl-based services first, then Java, then PHP, then NPM, and so on... because older projects have a higher likelihood of being unmaintained or of using outdated libraries. (A quick way to surface candidates is sketched below.)

        [1] (~242 public repositories) https://github.com/orgs/internetarchive/repositories
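
        As a rough first pass at that triage — just the public GitHub API, unauthenticated (so rate-limited), sorting the org's repos by how long they've gone untouched; the function name is my own:

          import json
          import urllib.request

          def stale_repos(org: str = "internetarchive", pages: int = 3) -> None:
              """Print the org's least-recently-pushed public repos — a crude
              proxy for which codebases may be unmaintained."""
              repos = []
              for page in range(1, pages + 1):
                  url = f"https://api.github.com/orgs/{org}/repos?per_page=100&page={page}"
                  with urllib.request.urlopen(url) as resp:
                      repos += json.load(resp)
              for r in sorted(repos, key=lambda r: r["pushed_at"] or "")[:20]:
                  print(r["pushed_at"], r["language"], r["name"])

          stale_repos()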

  • bubblesnort 2 hours ago

    - openly speculate about the tactic, to preemptively address concerns

  • emmelaich 8 hours ago

    I hope that Google (for instance) has an occasional snapshot of everything tucked away somewhere on a tape in Norway. Like the Svalbard seed vault.

  • g-b-r an hour ago

    Yeah, last time I checked they weren't doing any timestamping.

    They definitely should.
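
    Even something minimal would help: hash each capture and anchor the digest with an independent third party (an RFC 3161 timestamping authority, OpenTimestamps, whatever), so nobody can silently rewrite it later. A sketch, with a hypothetical file name (IA's actual captures are WARCs):

      import hashlib

      def snapshot_digest(path: str) -> str:
          """SHA-256 of an archived capture. Publishing this digest through
          an independent third party lets anyone later verify the capture
          wasn't quietly altered after the fact."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      # Hypothetical file name:
      print(snapshot_digest("capture-2024-10-09.warc.gz"))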

Apocryphon 8 hours ago

The timing of Google getting rid of Google Cache couldn't be worse, with these ongoing DDoS attacks on the Internet Archive and the hardening they've made necessary. Wonder what kind of twisty narrative one could posit about why this is happening?