edent 2 days ago

About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

And for what? The cost of keeping a few TB online and a little bit of CPU power?

An absolute act of cultural vandalism.

  • toomuchtodo 2 days ago

    https://wiki.archiveteam.org/index.php/Goo.gl

    https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

    How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

    (edit: i see jaydenmilne commented about this further down thread, mea culpa)

    • progbits a day ago

      They appear to be doing ~37k items per minute, with 1.6B remaining that is roughly 30 days left. So that's just barely enough to do it in time.

      Going to run the warrior over the weekend to help out a bit.

    • xingped 13 hours ago

      For those in the know, is this heavy on disk usage? Should I install this on my hard drive or my SSD? Just want to avoid tons of disk writes on an SSD if it's unnecessary.

  • jlarocco a day ago

    IMO it's less Google's fault and more a crappy tech education problem.

    It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

    And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?

    And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.

    • justin66 a day ago

      > It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

      It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

      Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.

      • dingnuts a day ago

        Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.

        • justin66 13 hours ago

          A DOI handle type of thing could certainly point to an IPFS address. I can't speak to how you'd do truly decentralized access to the DOI handle. At some point DNS is a thing and somebody needs to host the handle.

        • nly a day ago

          CANs usually have complex hashy URLs, so you still have the compactness problem

    • gmerc a day ago

      Ahh classic free market cop out.

      • bbuut 12 hours ago

        Free market is a euphemism for “there’s no physics demanding this be worked on”

        If you want it archived do it. You seem to want someone else to take up your concerns.

        An HN genius should be able to crawl this and fix it.

        But you’re not geniuses. They’re too busy to be low affect whiners on social media.

      • jlarocco 11 hours ago

        Well, is the free market going anywhere?

        Who's lost out at the end of the day? People who didn't understand the free market and lost access to these "free" services? Or people who knew what would happen and avoided them? My links are still working...

        There are digital public goods (like Wikipedia) that are intended to stick around forever with free access, but Google isn't one of them.

      • FallCheeta7373 a day ago

        if the smartest among us publishing for academia cannot figure this out, then who will?

        • hammyhavoc a day ago

          Not infrequently, someone being smart in one field doesn't necessarily mean they can solve problems in another.

          I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.

      • kazinator a day ago

        Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

        The authors just had their heads too far up their academic asses to have heard of this.

  • epolanski 2 days ago

    Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can be changed or disappear).

    Even worse if your resource is a shortened link by some other service, you've just added yet another layer of unreliable indirection.

    • whatevaa a day ago

      Citations are citations, if it's a link, you link to it. But using shorteners for that is silly.

      • ceejayoz a day ago

        It's not silly if the link is a couple hundred characters long.

        • IanCal a day ago

          Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a pdf.

          • ceejayoz a day ago

            It's not the bytes.

            It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.

            • SR2Z a day ago

              I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

              This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

              This kind of luddite behavior sometimes makes using this site exhausting.

              • jtuple a day ago

                Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

                Reading on paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

                Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

                Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day tops, which is small enough to just do on a computer screen (and have less need to annotate, etc). Quite different than the 50-100 papers/week + deep analysis expected in academia.

                • Incipient a day ago

                  > Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

                  I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!

              • ceejayoz a day ago

                > I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

                This is by no means a universal experience.

                People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.

                • SR2Z a day ago

                  And how many of those people then proceed to type those links into their web browsers, shortened or not?

                  Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.

                  • ceejayoz a day ago

                    > And how many of those people then proceed to type those links into their web browsers, shortened or not?

                    That probably depends on the link's purpose.

                    "The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.

                    • epolanski a day ago

                      So he has a computer and can click.

                      In any case a paper should not rely on an ephemeral resource like internet links.

                      Have you ever tried to navigate to the errata corrige of computer science books? It's one single book, with one single link, and it's dead anyway.

                      • JumpCrisscross a day ago

                        I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.

                        There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs between why one untrustworthy provider is more untrustworthy than another is silly.

                        • ycombinatrix a day ago

                          The Google shortened link just redirects you to the PDF hosted god knows where...

              • andrepd a day ago

                I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

                • SR2Z a day ago

                  > People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

                  Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.

                  • andrepd a day ago

                    Very much an xkcd.com/2501 situation

              • reaperducer a day ago

                > This kind of luddite behavior sometimes makes using this site exhausting.

                We have many paper documents from over 1,000 years ago.

                The vast majority of what was on the internet 25 years ago is gone forever.

                • eviks 21 hours ago

                  What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?

                  • SR2Z 13 hours ago

                    We certainly have more paper documents from 1000 years ago than PDFs from 1000 years ago! Clearly that's the fault of the PDFs.

                • epolanski a day ago

                  25?

                  Try going back by 6/7 years on this very website, half the links are dead.

            • IanCal 16 hours ago

              That’s an even worse reason to use a temporary redirection service. If you really need to, put in both.

            • leumon a day ago

              which makes url shorteners even more attractive for printed media, because you don't have to type many characters manually

        • epolanski a day ago

          Fix that at the presentation layer (PDFs and Word files etc support links) not the data one.

          • ceejayoz a day ago

            Let me know when you figure out how to make a printed scientific journal clickable.

            • epolanski a day ago

              Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the url is.

              Just buy any scientific book and try to navigate to its own errata they link in the book. It's always dead.

            • diatone a day ago

              Take a photo on your phone, OS recognises the link in the image, makes it clickable, done. Or, use a QR code instead

  • zffr 2 days ago

    For people wanting to include URL references in things like books, what’s the right approach to take today?

    I’m genuinely asking. It seems like it's hard to trust that any service will remain running for decades.

    • toomuchtodo 2 days ago

      https://perma.cc/

      It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

      (https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for adhoc DOI registration, although I have not had time to research further)

      • afandian 8 hours ago

        Crossref is designed for publishing workflows. Not set up for ad hoc DOI registration. Not least because just registering a persistent identifier to redirect to an ephemeral page without arrangements for preservation and stewardship of the page doesn’t make much sense.

        That’s not to say that DOIs aren’t registered for all kinds of urls. I found the likes of YouTube etc when I researched this about 10 years ago.

        • toomuchtodo 6 hours ago

          Would you have a recommendation for an organization that can register ad hoc DOIs? I am still looking for one.

      • whoahwio a day ago

        While Perma is a solution specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here

        • toomuchtodo a day ago

          If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

          This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.

          • whoahwio a day ago

            This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.

    • edent a day ago

      The full URL to the original page.

      You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

      A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.

      • firefax a day ago

        > The full URL to the original page.

        I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.

        Short links were usually in addition to full URLs, and more in conference presentations than the papers themselves.

      • grapesodaaaaa a day ago

        I think this is the only real answer. Shorteners might work for things like old Twitter where characters were a premium, but I would rather see the whole URL.

        We’ve learned over the years that they can be unreliable, security risks, etc.

        I just don’t see a major use-case for them anymore.

    • danelski 2 days ago

      Real URL and save the website in the Internet Archive as it was on the date of access?

    • AbstractH24 16 hours ago

      What's the right approach to take for referencing anything that isn't preserved in an institution like the Library of Congress?

      Say the interview of a person, a niche publication, a local pamphlet?

      Maybe to certify that your article is of a certain level of credibility you need to manually preserve all the cited works yourself in an approved way.

  • kazinator 2 days ago

    The act of vandalism occurs when someone creates a shortened URL, not when they stop working.

  • djfivyvusn 2 days ago

    The vandalism was relying on Google.

    • toomuchtodo 2 days ago

      You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.

    • api 2 days ago

      The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

      The simplicity of the web is one of its virtues but also leaves a lot on the table.

  • SirMaster a day ago

    Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put up a list somewhere that everyone can go look up if they need to?

  • QuantumGood a day ago

    When they began offering this, their rep for ending services was already so bad I refused to consider goo.gl. It's amazing how many services with large user bases they have introduced and then ended over the years. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.

  • crossroadsguy a day ago

    I have always struggled with this. If I buy a book I don’t want an online/URL reference in it. Put the book/author/ISBN/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.

    • BobaFloutist a day ago

      I mean preferably do both, right? The URL is better for however long it works.

      • SoftTalker a day ago

        We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.

  • eviks 21 hours ago

    > And for what? The cost of keeping a few TB online and a little bit of CPU power?

    For the immeasurable benefits of educating the public.

  • lubujackson a day ago

    Truly, the most Googly of sunsets.

  • jeffbee a day ago

    While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.

  • asdll a day ago

    > An absolute act of cultural vandalism.

    It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.

  • nikanj a day ago

    The cost of dealing with and supporting an old codebase instead of burning it all and releasing a written-from-scratch replacement next year

  • bugsMarathon88 2 days ago

    [flagged]

    • edent 2 days ago

      Gosh! It is a pity Google doesn't hire any smart people who know how to build a throttling system.

      Still, they're a tiny and cash-starved company so we can't expect too much of them.

      • acheron a day ago

        Must not be any questions about that in Leetcode.

      • lyu07282 a day ago

        It's almost as if once a company becomes this big, burning it to the ground would be better for society or something. That would be the liberal position on monopolies if they actually believed in anything.

      • bugsMarathon88 a day ago

        It is a business, not a charity. Adjust your expectations accordingly, or expect disappointment.

    • quesera a day ago

      Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?

      I don't know if GCP has a free tier like AWS does, but 10kQPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
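
      A rough sketch of that kind of read-only redirector, in Python rather than an nginx map (the dump file name and its "code<TAB>url" format are my assumptions, not anything Google has published):

          # sketch: serve a static shortener map read-only, stdlib only
          import http.server

          REDIRECTS = {}
          with open("goo_gl_map.tsv", encoding="utf-8") as f:  # hypothetical dump file
              for line in f:
                  code, _, target = line.rstrip("\n").partition("\t")
                  if code and target:
                      REDIRECTS[code] = target

          class Redirector(http.server.BaseHTTPRequestHandler):
              def do_GET(self):
                  target = REDIRECTS.get(self.path.lstrip("/"))
                  if target is None:
                      self.send_error(404, "unknown short link")
                      return
                  self.send_response(301)               # permanent redirect
                  self.send_header("Location", target)
                  self.end_headers()

          if __name__ == "__main__":
              http.server.ThreadingHTTPServer(("", 8080), Redirector).serve_forever()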

      • bbarnett a day ago

        You could deprecate the service and archive the links as static HTML. 200 bytes of text for an HTML redirect (not JS).

        You can serve immense volumes of traffic from static html. One hardware server alone could so easily do the job.

        Your attack surface is also tiny without a back end interpreter.

        People will chime in with redundancy, but the point is Google could stop maintaining the ingress, and still not be douches about existing urls.

        But... you know, it's Google.
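
        For illustration, the static-HTML version is basically one meta refresh per code; a throwaway sketch that would generate those files from a hypothetical "code<TAB>url" dump (file names made up):

            # sketch: write one ~200-byte static redirect page per short code
            import html, pathlib

            PAGE = ('<!doctype html><meta charset="utf-8">'
                    '<meta http-equiv="refresh" content="0;url={url}">'
                    '<a href="{url}">Moved</a>')

            out = pathlib.Path("static")
            out.mkdir(exist_ok=True)
            with open("goo_gl_map.tsv", encoding="utf-8") as f:  # hypothetical dump
                for line in f:
                    code, _, url = line.rstrip("\n").partition("\t")
                    if code and url:
                        safe = html.escape(url, quote=True)
                        (out / code).write_text(PAGE.format(url=safe), encoding="utf-8")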

        • quesera a day ago

          Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.

          Why break this??

          Sure, deprecate the service. Add no new entries. This is a good idea anyway, link shorteners are bad for the internet.

          But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.

          You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.

          Google is acting like they are a one-person startup here.

          Since they are not a one-person startup, I do wonder if we're missing the real issue. Like legal exposure, or implication in some kind of activity that they don't want to be a part of, and it's safer/simpler to just delete everything instead of trying to detect and remove all of the exposure-creating entries.

          Or maybe that's what they're telling themselves, even if it's not real.

          • bugsMarathon88 a day ago

            > Why break this??

            We already told you: people are likely brute-forcing URLs.

            • quesera a day ago

              I'm not sure why that is a problem.

    • nomel a day ago

      Those numbers make it seem fairly trivial. You have a dozen bytes referencing a few hundred bytes, for a service that is not latency sensitive.

      This sounds like a good project for an intern, with server costs that might be able to exceed a hundred dollars per month!

mrcslws 2 days ago

From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...

This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".

  • bayindirh 2 days ago

    > The right question is "how much total value do all of the links provide", not "what percent are used".

    Yes, but it doesn't bring home the sweet promotion, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).

    This beancounting really makes me sad.

    • quesera 2 days ago

      Configuring a static set of redirects would take a couple of hours to set up, and require literally zero maintenance forever.

      Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.

      • bayindirh a day ago

        This is what I mean, actually.

        If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.

    • socalgal2 a day ago

      If they wanted the sweet promotion, they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.

    • ahstilde 2 days ago

      > just for fun (but, of course, pay them for their work).

      Doing things for fun isn't in Google's remit

      • kevindamm 2 days ago

        Alas, it was, once upon a time.

      • morkalork a day ago

        Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google, in all its 2-ton ADHD gorilla glory, will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down, leaving behind a desolate crater of ruined businesses and angry, abandoned users.

        • jsperson a day ago

          I’m still sore about Reader. The gap has never been filled for me.

      • ceejayoz 2 days ago

        It used to be. AdSense came from 20% time!

  • HPsquared 2 days ago

    Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.

  • sltkr a day ago

    I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.

    • chneu 17 hours ago

      Where'd all my porn go!?

  • fizx a day ago

    Don't be confused! That's not how they made the decision; it's how they're selling it.

    • esafak a day ago

      So how did they decide?

      • chneu 17 hours ago

        new person got hired after old person left. new person says "we can save x% by shutting down these links. 99% aren't used" and the new boss who's only been there for 6 months says "yeah sure".

        Why does google kill any project? the people who made it moved on, the new people don't care because it doesn't make their resume look any better.

        basically nobody wants to own this service and it requires upkeep to maintain it alongside other google services.

        google's history shows a clear choice to reward new projects, not old ones.

        https://killedbygoogle.com/

      • nemomarx a day ago

        I expect cost on a budget sheet, then an analysis was done about the impact of shutting it down

        • sltkr a day ago

          You can't get promoted at Google for not changing anything.

      • ratg13 14 hours ago

        They launched Firebase Dynamic Links and someone didn't like the overlap.

  • SoftTalker a day ago

    From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.

  • firefax a day ago

    > "more than 99% of them had no activity in the last month"

    Better to have a short URL and not need it, than need a short URL and not have it IMO.

  • esafak a day ago

    What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!

    • nomel a day ago

      YouTube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.

      Many videos I uploaded in 4k are now only available in 480p, after about a decade.

  • handsclean 2 days ago

    I don’t think they’re actually that dumb. I think the dirty secret behind “data driven decision making” is managers don’t want data to tell them what to do, they want “data” to make even the idea of disagreeing with them look objectively wrong and stupid.

    • HPsquared 2 days ago

      It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).

      It's less "data-driven decisions", more "how to lie with statistics".

  • FredPret a day ago

    "Data-driven decision making"

JimDabell a day ago

Cloudflare offered to keep it running and were turned away:

https://x.com/elithrar/status/1948451254780526609

Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.

  • fourseventy a day ago

    Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.

    • nomel a day ago

      I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is actually to be a honeypot of some sort, maybe for spam call detection.

      • joshstrange a day ago

        Because IIRC it’s essentially completely run by another company (I want to say Bandwidth?) and, again my memories might be fuzzy, originally came from an acquisition of a company called Grand Central.

        My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.

      • hnfong a day ago

        Another shocking story to share.

        I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.

        It's still running. I have no idea why.

        • coryrc a day ago

          It's the most enterprise-y and legacy thing Google sells.

      • throwyawayyyy a day ago

        Pretty sure you can thank the FCC for that :)

      • mrj a day ago

        Shhh don't remind them

      • kevin_thibedeau a day ago

        Mass surveillance pipeline to the successor of room 641A.

  • thebruce87m a day ago

    > Remember this next time you are thinking of depending upon a Google service.

    Next time? I guess there’s a wave of new people that haven’t learned that lesson yet.

jaydenmilne 2 days ago

ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.

https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

  • localtoast 2 days ago

    Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.

    • chneu 17 hours ago

      I'm about to go set up my spare N100 just for this project. If all it uses is a lil bandwidth then that's perfect for my 10gbps fiber and N100.

      • addandsubtract 17 hours ago

        Doing the same, even though I'm worried Google will throw even more captchas at me now, than before.

    • wobfan a day ago

      Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?

      • fsmv 6 hours ago

        They are downloading and archiving the pages that the links point to

      • fragmede a day ago

        save it, forever*.

        * as long as humanly possible, as is archive.org's mission.

  • hadrien01 a day ago

    After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication on what I should do on the ArchiveTeam wiki

  • ojo-rojo 2 days ago

    Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period susceptible to becoming a digital dark age for archaeologists studying history a few thousand years from now.

    Us preserving digital archives is a good step. I guess making hard copies would be the next step.

  • AstroBen a day ago

    Just started, super easy to set up

cpeterso 2 days ago

Google’s own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google’s original announcement doesn’t say as much, but it is carefully worded to specify that short URLs of the “https://goo.gl/* format” will be shut down.

Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.

  • growthwtf a day ago

    This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.

jedberg 2 days ago

I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
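
Something like this minimal sketch would do it, assuming a hypothetical "code<TAB>url" text dump existed (Google has published no such file):

    # sketch: load a hypothetical "code<TAB>url" dump into a queryable SQLite file
    import sqlite3

    con = sqlite3.connect("goo_gl.sqlite")
    con.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, url TEXT NOT NULL)")
    with open("goo_gl_map.tsv", encoding="utf-8") as f:
        rows = (line.rstrip("\n").split("\t", 1) for line in f)
        con.executemany("INSERT OR REPLACE INTO links VALUES (?, ?)",
                        (r for r in rows if len(r) == 2))
    con.commit()

    # anyone holding the file could then resolve a known short code offline:
    print(con.execute("SELECT url FROM links WHERE code = ?", ("abc123",)).fetchone())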

  • DominikPeters 2 days ago

    It will include many URLs that are semi-private, like Google Docs that are shared via link.

    • ryandrake a day ago

      If some URL is accessible via the open web, without authentication, then it is not really private.

      • bo1024 a day ago

        What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.

        • prophesi a day ago

          Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than a 64-byte alphanumeric code.

          • hombre_fatal a day ago

            Sure, but you have to make executive decisions on the behalf of people who aren't experts.

            Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.

            People also upload unlisted Youtube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.

          • bo1024 a day ago

            I'm not seeing why there's a clear line where GET cannot be authentication but POST can.

            • prophesi a day ago

              Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.

          • wobfan a day ago

            I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64 byte code is even more secure than this, IF it's handled correctly.

    • chneu 17 hours ago

      That's not any better than what archiveteam is doing. They're brute forcing the URLs to capture all of them. So privacy won't really matter here.

    • charcircuit a day ago

      Then use something like argon2 on the keys, so you have to spend a long time to brute force them all, similar to how it is today.
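
      Roughly like this sketch; it uses scrypt from the Python standard library as a stand-in for argon2, with a fixed pepper so anyone who already knows a short code can still resolve it, while enumerating the whole keyspace stays expensive:

          # sketch: publish hash(code) -> url instead of code -> url
          import hashlib

          PEPPER = b"public-fixed-value"   # fixed (not per-entry) so lookups by known code still work

          def slow_key(code: str) -> str:
              # scrypt as a stand-in for argon2; cost parameters make bulk enumeration slow
              return hashlib.scrypt(code.encode(), salt=PEPPER, n=2**14, r=8, p=1, dklen=32).hex()

          raw_map = {"abc123": "https://example.org/paper.pdf"}   # hypothetical original dump
          published = {slow_key(code): url for code, url in raw_map.items()}

          # resolving a code you already know costs one KDF call; guessing costs one per guess
          print(published.get(slow_key("abc123")))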

    • high_na_euv 2 days ago

      So exclude them

      • ceejayoz 2 days ago

        How?

        How will they know a short link to a random PDF on S3 is potentially sensitive info?

  • Nifty3929 2 days ago

    I'd rather see it as a searchable database, which I would think is super cheap and no maintenance for Google, and avoids these privacy issues. You can input a known goo.gl and get its real URL, but can't just list everything out.

    • growt 2 days ago

      And then output the search results as a 302 redirect and it would just be continuing the service.

  • devrandoom 2 days ago

    Are they all public? Where can I see them?

    • jedberg 2 days ago

      You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.

    • Alifatisk 2 days ago

      I don't think so, but you can find the indexed URLs here: https://www.google.com/search?q=site%3A"goo.gl" (it's about 9.6 million links). And those are just what got indexed; there should be way more out there

      • chneu 17 hours ago

        ArchiveTeam has the list at over 2 billion URLs, with over a billion left to archive.

      • sltkr a day ago

        I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.

ElijahLynn 2 days ago

OMFG - Google should keep these up forever. What a hit to trust. Trust with Google was already bad for everything they killed, this is another dagger.

spankalee a day ago

As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.

No one wants to own this product.

- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.

- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs so they manage the service themselves.

So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.

This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).

This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.

I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.

Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.

  • gsnedders a day ago

    To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.

    While maintenance and ownership are clearly still a major problem, one could easily imagine that deploying something similar — especially read-only — using GCP's Cloud Run and BigTable products would be less work to maintain, as you're not chasing anywhere near such a moving target.

  • rs186 a day ago

    Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'll be stuck at this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?

    • spankalee a day ago

      Definitely a valid question!

      Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.

      But some people are motivated to work on internet infrastructure, and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could of course quit): you're supposed to be with a team a minimum of 18 months, and after that, transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating on to the archive team, and/or it could be part-time.

      I think the harder thing is getting management buy-in, even from the front-line managers.

romaniv a day ago

URL shorteners were always a bad idea. At the rate things are going I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning or even client-side replication means that everything you see on the Web right now has an overwhelming probability to permanently disappear in the near future. This is an astounding engineering oversight for something that's basically the most popular communication system and medium in the world and in history.

Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".

hinkley a day ago

What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.

davidczech 2 days ago

I don't really get it, it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.

  • nikanj a day ago

    There are two things that are real torture to Google dev teams: 1) being told a product is complete and needs no new features or changes, 2) being made to work on legacy code

cyp0633 2 days ago

The runner of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:

Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)

https://news.ycombinator.com/item?id=44117722

krunck 2 days ago

Stop MITMing your content. Don't use shorteners. And use reasonable URL patterns on your sites.

  • Cyan488 a day ago

    I have been using a shortening service with my own domain name - it's really handy, and I figure that if they go down I could always manually configure my own DNS or spin up some self-hosted solution.

pentestercrab a day ago

There seems to have been a recent uptick in phishers using goo.gl URLs. Yes, even without new URLs being accepted: they register expired domains that old goo.gl links still reference.

citrin_ru 13 hours ago

A link shortener in read-only mode should be very cheap to run (highly available writes can be expensive in a distributed system, but it's easier to make a read-only system work efficiently).

They are saving pennies but reminding everyone one more time that Google cannot be relied upon.

ccgreg a day ago

Common Crawl's count of unique goo.gl links is approximately 10 million. That's in our permanent archive, so you'll be able to consult them in the future.

No search engine or crawler person will ever recommend using a shortener for any reason.

pluc 2 days ago

Someone should tell Google Maps

david422 a day ago

Somewhat related - I wanted to add short urls to a project of mine. I was looking around at a bunch of url shorteners - and then realized it would be pretty simple to create my own. It's my content pointed to my own service, so I don't have to worry about 3rd party content or other services going down.
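
The core of a shortener really is small. A rough stdlib-only sketch of the idea (domain, code length, and storage are made up; persistence and rate limiting left out):

    # sketch: the write path of a tiny self-hosted shortener
    import secrets

    links = {}   # in-memory; a real service would persist this (e.g. SQLite) and serve 301s

    def shorten(url, length=7):
        while True:
            code = secrets.token_urlsafe(8)[:length]   # random, URL-safe code
            if code not in links:                      # retry on the (rare) collision
                links[code] = url
                return code

    def resolve(code):
        return links.get(code)

    code = shorten("https://example.org/some/very/long/path?with=params")
    print(f"https://short.example/{code} -> {resolve(code)}")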

Brajeshwar 2 days ago

What will it really cost for Google (each year) to host whatever was created, as static files, for as long as possible?

  • malfist 2 days ago

    It'd probably cost a couple tens of dollars, and Google is simply too poor to afford that these days. They've spent all their money on AI and have nothing left

  • chneu 17 hours ago

    it's not the cost of hosting/sharing it. It's the cost of employing people to maintain this alongside other google products.

    So, at minimum, assuming there are 2 people maintaining this at google, that probably means it would cost them $250k/yr in just payroll to keep this going. That's probably a very lowball estimate on the people involved but it still shows how expensive these old products can be.

bunbun69 19 hours ago

Isn’t this a good thing? It forces people to think now before making decisions

rsync a day ago

A reminder that the "Oh By"[1] everything-shortener not only exists but can be used as a plain old URL shortener[2].

Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.

[1] https://0x.co

[2] https://0x.co/hnfaq.html

delduca 13 hours ago

Never trusted Google after Google Reader.

xutopia a day ago

Google is making it harder and harder to depend on their software.

  • christophilus a day ago

    That’s a good thing from my perspective. I wish they’d crush YouTube next. That’s the only Google IP I haven’t been able to avoid.

    • chneu 17 hours ago

      The alternatives just aren't there, either. Nebula is okay but not great. Floatplane is too exclusive. Vimeo..okay.

      But maybe a youtube disruption would be good for video on the internet. or it might be bad. idk.

pkilgore a day ago

Google probably spends more money a month than what it would take to preserve this service on coffee creamer for a single conference room.

gedy 2 days ago

At least they didn't release 2 new competing d.uo or re.ad, etc. shorteners and expect you to migrate

micromacrofoot 2 days ago

This is just being a poor citizen of the web, no excuses. Google is a 2 trillion dollar company, keeping these links working indefinitely would probably cost less than what they spend on homepage doodles.

charlesabarnes a day ago

Now I'm wondering why Chrome changed the behavior to use share.google links if this will be the inevitable outcome

mymacbook a day ago

Why is everyone jumping on the blame the victims bandwagon?! This is not the fault of users whether they were scientists publishing papers or the fault of the general public sharing links. This is absolutely 100% on Alphabet/Google.

When you blame your customer, you have failed.

  • eviks 21 hours ago

    They weren't customers since they didn't buy anything, and yes, as sweet as "free" is, it is the fault of users to expect free to last forever

ChrisArchitect a day ago

Noticed recently on some google properties where there are Share buttons that it's generating share.google links now instead of goo.gl.

Is that the same shortening platform running it?

ourmandave 2 days ago

A comment said they stopped making new links and announced back in 2018 it would be going away.

I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.

  • goku12 a day ago

    For one, not enough people seem to be aware of it. They don't seem to have given that announcement the importance and effort it deserved. Secondly, I can't say that they have a good migration plan when shutting down their services. People scrambling like this to backup the data is rather common these days. And finally, this isn't a service that can be so easily replaced. Even if people knew that it was going away, there would be short-links that they don't remember, but are important nevertheless. Somebody gave an example above - citations in research papers. There isn't much thought given to the consequences when decisions like this are taken.

    Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.

  • chneu 17 hours ago

    I just went through the old thread and it's comments. It appears google didn't specifically state they were going to end the service. They hinted that links would continue working, but new ones would not be able to be created. It was left a bit open-ended, and that likely made people think the links would work indefinitely.

    This seems to be echoed by the archiveteam scrambling to get this archived. I figure they would have backed these up years ago if it was more well known.

pfdietz 2 days ago

Once again we are informed that Google cannot be trusted with data in the long term.

fnord77 a day ago
  • quesera a day ago

    From the 2018 announcement:

    > URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links

    Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).

    • chneu 17 hours ago

      Thanks for pointing this out. That's hilarious.

Bluestein a day ago

Another one for the Google [G]raveyard.-

lrvick a day ago

Yet another reminder to never trust corpotech to be around long term.