Show HN: Any-LLM – Lightweight router to access any LLM Provider

github.com

123 points by AMeckes 4 days ago

We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done.

It uses official provider SDKs when available, which helps since providers handle their own compatibility updates. No proxy or gateway service needed either, so getting started is pretty straightforward - just pip install and import.
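
Here's roughly what that looks like (modeled on the README's completion call; the exact return shape may differ slightly):

  from any_llm import completion

  # One call shape for every provider; the "provider/model" string picks the backend.
  response = completion(
      model="openai/gpt-4",  # change to "anthropic/claude-3" to switch providers
      messages=[{"role": "user", "content": "Hello!"}],
  )
  print(response.choices[0].message.content)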

Currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!

swyx 4 days ago

> LiteLLM: While popular, it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications

with no vested interest in litellm, i'll challenge you on this one. what compatibility issues have come up? (i expect text to have the least, and probably voice etc have more but for text i've had no issues)

you -want- to reimplement interfaces because you have to normalize api's. in fact without looking at any-llm code deeply i question how you do ANY router without reimplementing interfaces. that's basically the whole job of the router.

  • AMeckes 3 days ago

    Both approaches work well for standard text completion. Issues tend to be around edge cases like streaming behavior, timeout handling, or new features rolling out.

    You're absolutely right that any router reimplements interfaces for normalization. The difference is what layer we reimplement at. We use SDKs where available for HTTP/auth/retries and reimplement normalization.

    Bottom line is we both reimplement interfaces, just at different layers. Our bet on SDKs is mostly about maintenance preferences, not some fundamental flaw in LiteLLM's approach.
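
    To make the layering concrete, here's a rough sketch (illustrative names, not our actual internals): the official SDK owns transport, auth, and retries; our layer only maps the provider-specific response onto a common shape.

      from dataclasses import dataclass
      from anthropic import Anthropic  # official SDK: owns HTTP, auth, retries

      @dataclass
      class NormalizedResponse:
          content: str
          model: str

      def complete(prompt: str, model: str = "claude-3-5-sonnet-latest") -> NormalizedResponse:
          client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
          msg = client.messages.create(
              model=model,
              max_tokens=1024,
              messages=[{"role": "user", "content": prompt}],
          )
          # The normalization layer: provider-specific shape -> common shape.
          return NormalizedResponse(content=msg.content[0].text, model=msg.model)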

  • scosman 3 days ago

    Yeah, official SDKs are sometimes a problem too. Together's SDK included Apache Arrow, a ~60MB dependency, for a single feature (I patched it to make it optional). If they ever lock dependency versions, it could conflict with your project.

    I'd rather have a library that just uses OpenAPI/REST than one that pulls in a ton of dependencies.
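
    The usual fix is the optional-dependency pattern (a sketch of the idea, not Together's actual code):

      # Guard the heavy import so it only costs you if the feature is used.
      try:
          import pyarrow
      except ImportError:
          pyarrow = None

      def to_arrow_table(rows):
          if pyarrow is None:
              raise ImportError("this feature needs pyarrow: pip install pyarrow")
          return pyarrow.Table.from_pylist(rows)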

  • chuckhend 4 days ago

    LiteLLM is quite battle tested at this point as well.

    > it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications

    Leveraging official SDKs also does not solve compatibility issues. any_llm would still need to maintain compatibility with those official SDKs. I don't think one way is clearly better than the other here.

    • AMeckes 3 days ago

      That's true. We traded API compatibility work for SDK compatibility work. Our bet is that providers are better at maintaining their own SDKs than we are at reimplementing their APIs. SDKs break less often and more predictably than APIs, plus we get provider-implemented features (retries, auth refresh, etc) "for free." Not zero maintenance, but definitely less. We use this in production at Mozilla.ai, so it'll stay actively maintained.

    • amanda99 4 days ago

      Being battle tested is the only good thing I can say about LiteLLM.

      • scosman 4 days ago

        You can add that it's still 10x better than LangChain

  • Szpadel 3 days ago

    I use litellm as my personal AI gateway, and from a user's point of view there is no difference whether the proxy uses official SDKs or not; that's mainly a benefit for the proxy's developers.

    But I can give you one example: litellm recently had an issue with handling DeepSeek reasoning. They broke the implementation, and for a while reasoning was missing from both sync and streaming responses.

amanda99 4 days ago

I'm excited to see this. Have been using LiteLLM but it's honestly a huge mess once you peek under the hood, and it's being developed very iteratively and not very carefully. For example, for several months recently (haven't checked in ~a month though), their Ollama structured outputs were completely botched and just straight up broken. Docs are a hot mess, etc.

piker 4 days ago

This looks awesome.

Why Python? Probably because most of the SDKs are Python, but something that could be ported across languages without requiring an interpreter would have been really amazing.

pglevy 3 days ago

How does this differ from this project? https://github.com/simonw/llm

  • peskypotato 3 days ago

    From my understanding of Simon's project, it only supports OpenAI and OpenAI-compatible models in addition to local model support. For example, if I wanted to use a model on Amazon Bedrock I'd have to first deploy (and manage) a gateway/proxy layer[1] to make it OpenAI-compatible.

    Mozilla's project boasts a lot of existing interfaces already, much like LiteLLM, which has the benefit of directly being able to use a wider range of supported models.

    > No Proxy or Gateway server required so you don't need to deal with setting up any other service to talk to whichever LLM provider you need.

    As for how it compares to LiteLLM, I don't have enough experience with either to tell.

    [1] https://github.com/aws-samples/bedrock-access-gateway

gapeleon 2 days ago

You guys need to fact check your AI-generated blog posts:

https://blog.mozilla.ai/introducing-any-llm-a-unified-api-to...

> One popular solution, LiteLLM, is highly valued for its wide support of different providers and modalities, making it a great choice for many developers. However, it re-implements provider interfaces rather than leveraging SDKs that are managed and released by the providers themselves. As a result, the approach can lead to compatibility issues and unexpected modifications in behavior, making it difficult to keep up with the changes happening among all the providers.

LiteLLM is rock-solid in practice. The underlying API providers announce breaking changes well in advance, and LiteLLM has never been caught out by this. LLMs will come up with hypothetical cons like this upon request.

> Lastly, proxy/gateway solutions like OpenRouter and Portkey require users to set up a hosted proxy server to act as an intermediary between their code and the LLM provider. Although this can effectively abstract away the complicated logic from the developer, it adds an extra layer of complexity and a dependency on external services, which might not be ideal for all use cases.

OpenRouter is a hosted service that provides the proxy/gateway infrastructure. Users don't "set up a hosted proxy server" themselves; they just make API calls to OpenRouter's endpoints. But older LLMs don't know what OpenRouter is and will assume it's a self-hosted proxy server.

> Another option, AISuite, was created by Andrew Ng and offers a clean and modular design. However, it is not actively maintained (its last release was in December of 2024) and lacks consistent Python-typed interfaces.

Okay, so you clicked the "releases" tab and saw December 2024. Next time check https://github.com/andrewyng/aisuite/commits/main/ Small, fast-moving community projects like this, exllamav2, etc. don't necessarily tag releases.

I've got nothing against using AI to write posts like this, but at least take the time to fact check before dumping on other people's work.

If not for the Mozilla branding, I'd have assumed this was a scam/malware - especially since its name is so similar to Anything-LLM.

sparacha 4 days ago

There is liteLLM, OpenRouter, Arch (although that’s an edge/service proxy for agents) and now this. We all need a new problem to solve

  • CuriouslyC 4 days ago

    LiteLLM is kind of a mess TBH. I guess it's ok if you just want a docker container to proxy to for personal projects, but actually using it in production isn't great.

    • tom_usher 4 days ago

      I definitely appreciate all the work that has gone in to LiteLLM but it doesn't take much browsing through the 7000+ line `utils.py` to see where using it could become problematic (https://github.com/BerriAI/litellm/blob/main/litellm/utils.p...)

      • swyx 4 days ago

        can you double click a little bit? many files in professional repos are 1000s of lines. LoC in itself is not a code smell.

        • otabdeveloper4 3 days ago

          LiteLLM is the worst code I have ever read in my life. Quite an accomplishment, lol.

          • swyx 3 days ago

            ok still not helpful in giving substantial criticism

            • otabdeveloper4 3 days ago

              Sorry if this sounds harsh, but I'm not really interested in spending time to code review the worst code I've ever seen in 30 years of programming.

              Is LiteLLM's code written by an LLM?

            • honorable_coder 3 days ago

              and you say you aren't "vested" in liteLLM?

              • swyx 3 days ago

                yes, green text hn account, i am not. i just want help in properly identifying flaws in litellm. clearly nobody here is offering actual analysis.

    • dlojudice 4 days ago

      > but actually using it in production isn't great.

      I only use it in development. Could you elaborate on why you don't recommend using it in production?

  • wongarsu 4 days ago

    And all of them despite 80% of model providers offering an OpenAI-compatible endpoint
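
    In practice that just means pointing the official OpenAI client at a different base URL (Mistral shown as an example; check each provider's docs):

      from openai import OpenAI

      # Many providers speak the OpenAI wire format; only base_url and the key change.
      client = OpenAI(
          base_url="https://api.mistral.ai/v1",
          api_key="...",  # that provider's API key
      )
      resp = client.chat.completions.create(
          model="mistral-small-latest",
          messages=[{"role": "user", "content": "Hello"}],
      )
      print(resp.choices[0].message.content)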

    • troyvit 3 days ago

      I think Mozilla of all people would understand why standardizing on one private organization's way of doing things might not be best for the overall ecosystem. Building a tool that meets LLM providers where they are instead of relying on them to homogenize on OpenAI's choices seems like a great reason for this project.

  • swyx 4 days ago

    portkey as well which is both js and open source https://www.latent.space/p/gateway

    • pzo 3 days ago

      why provide a link if there is not a single portkey keyword there?

      • swyx 3 days ago

        its my interview w portkey folks which has more thoughts on the category

  • ieuanking 4 days ago

    we are trying to apply model-routing to academic work and pdf chat with ubik.studio -- def lmk what you think

omneity 3 days ago

Crazy timing!

I shipped a similar abstraction for llms a bit over a week ago:

https://github.com/omarkamali/borgllm

pip install borgllm

I focused on making it LangChain-compatible so you could drop it in as a replacement. And it offers virtual providers for automatic fallback when you reach rate limits and so on.
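
The fallback idea, roughly (a generic sketch, not borgllm's actual API):

  class RateLimitError(Exception):
      """Stand-in for whatever rate-limit error a given provider raises."""

  def complete_with_fallback(prompt, providers):
      # providers: ordered list of callables, preferred provider first
      for call in providers:
          try:
              return call(prompt)
          except RateLimitError:
              continue  # virtual provider falls through to the next backend
      raise RuntimeError("all providers exhausted")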

nodesocket 4 days ago

This is awesome, will give it a try tonight.

I’ve been looking for something a bit different though, related to Ollama. I’d like a load-balancing reverse proxy that supports queuing requests to multiple Ollama servers and sending requests only when an Ollama server is up and idle (not processing). Does anything like that exist?

dlojudice 4 days ago

I use LiteLLM Proxy, even in a dev environment via Docker, because the Usage and Logs feature provides great visibility into LLM usage, and the caching functionality helps reduce costs for repetitive testing.

honorable_coder 4 days ago

a proxy means you offload observability, filtering, caching rules, and global rate limiters to a specialized piece of software - pushing this into application code means you _cannot_ do things centrally, and it doesn't scale as more copies of your application code get deployed. You can bounce a single proxy server neatly vs. updating a fleet of application servers just to monkey-patch some proxy functionality.

  • AMeckes 4 days ago

    Good points! any-llm handles the LLM routing, but you can still put it behind your own proxy for centralized control. We just don't force that architectural decision on you. Think of it as composable: use any-llm for provider switching, add nginx/envoy/whatever for rate limiting if you need it.

    • honorable_coder 4 days ago

      How do I put this behind a proxy? You mean run the module as a containerized service?

      But provider switching is built into some of these - and the folks behind Envoy built https://github.com/katanemo/archgw - developers can use an OpenAI client to call any model, it offers preference-aligned intelligent routing to LLMs based on usage scenarios that developers can define, and it acts as an edge proxy too.

      • AMeckes 4 days ago

        To clarify: any-llm is just a Python library you import, not a service to run. When I said "put it behind a proxy," I meant your app (which imports any-llm) can run behind a normal proxy setup.

        You're right that archgw handles routing at the infrastructure level, which is perfect for centralized control. any-llm simply gives you the option to handle routing in your application code when that makes sense (for example, premium users get Opus-4). We leave the architectural choice to you, whether that's adding a proxy, keeping routing in your app, or using both.
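
        Concretely, application-level routing is a one-liner (illustrative model names; assumes our completion-style API):

          from any_llm import completion

          def answer(user, prompt):
              # Route on business logic the proxy can't see: premium users get the stronger model.
              model = "anthropic/claude-opus-4" if user.is_premium else "openai/gpt-4o-mini"
              return completion(model=model, messages=[{"role": "user", "content": prompt}])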

        • sparacha 3 days ago

          But you can also use tokens to implement routing decisions in a proxy, and you can make RBAC natively available to all agents outside of code. The trade-off is incremental feature work in code vs. an out-of-process server. One gets you going super fast; the other offers a design choice that (I think) scales a lot better.

  • RussianCow 4 days ago

    You can do all of that without a proxy. Just store the current state in your database or a Redis instance.
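
    For example, a shared rate limiter is a few lines with Redis as the central store (fixed-window sketch):

      import redis

      r = redis.Redis()  # shared by every copy of the app

      def allow_request(user_id, limit=60, window_s=60):
          key = f"rl:{user_id}"
          count = r.incr(key)          # atomic across all app servers
          if count == 1:
              r.expire(key, window_s)  # first hit starts the window
          return count <= limit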

    • honorable_coder 4 days ago

      and managed from among the application servers that are greedily trying to store/retrieve this state? Not to mention you'll have to be in the business of defining, updating, and managing the schema, ensuring that upgrades to the db don't break the application servers, etc. The proxy server is the right design decision if you are truly trying to build something production-worthy and you want it to scale.

spooky_deep 3 days ago

Really needs a Docker image (maybe it's just not mentioned?) so one doesn’t have to wrestle with pip and Python versions.

renewiltord 4 days ago

In truth, it wasn’t that hard for me to ask Claude Code to just implement the text completion API, so routing wasn’t that much of a problem.

klntsky 3 days ago

Anything like this, but in TypeScript?

  • AMeckes 3 days ago

    Python only for now. Most providers have official TypeScript SDKs though, so the same approach (wrapping official SDKs) would work well in TS too.

  • funerr 3 days ago

    ai-sdk by vercel?

bdhcuidbebe 3 days ago

What is mozilla-ai?

Seems like reputation parasitism.

  • JohnPDickerson 3 days ago

    Common question, thanks for asking! We’re a public benefit corporation focused on democratizing access to AI tech, on enabling non-AI experts to benefit from and control their own AI tools, and on empowering the open source AI ecosystem. Our majority shareholder is the Mozilla Foundation - the other shareholders being our employees, soon :). As access to knowledge and people shifts due to AI, we’re working to make sure people retain choice, ownership, privacy, and dignity.

    We're very small compared to the Mozilla mothership, but moving quickly to support open source AI in any way we can.

    • bdhcuidbebe 3 days ago

      Many thanks for a detailed response! TIL!

  • daveguy 3 days ago

    It is an official Mozilla Foundation subsidiary. Their website is here: https://www.mozilla.ai/

    • bdhcuidbebe 3 days ago

      Interesting. I made my comment after visiting their repo and website. Didn't see a pixel's worth of the Mozilla brand there, hence my comment.

      On a second visit I noticed a link to mozilla.org in their footer.

      Still doesn't ring official to me as a veteran Mozilla user (Netscape, MDN, Firefox), but OK - thanks for the explanation.

      • JohnPDickerson 3 days ago

        Good feedback. Some of this is intentional - as an independent and growing ~20-person company, we're able to operate more quickly than the larger Mozilla organizations, and we're purposefully distancing ourselves from the associated bureaucracy that comes with any large organization. We are very much in line with the Mozilla ethos around personal ownership, privacy, control, and agency. We're figuring out how to best push on those principles in the world of AI, and appreciate feedback and contributions from the community.

      • daveguy 3 days ago

        I agree it's not very clear. They would do well to mention it somewhere besides the main site footer because it would probably help adoption / community / testing too. That said, any company with a lawyer wouldn't let that stand as a name-squat for long.

t_minus_100 3 days ago

https://xkcd.com/927/ . LiteLLM rocks !

  • AMeckes 3 days ago

    I didn't even need to click the link to know what this comic was. LiteLLM is great, we just needed something slightly different for our use case.

weinzierl 4 days ago

Not to be confused with AnythingLLM.

mkw5053 4 days ago

Interesting timing. Projects like Any-LLM or LiteLLM solve backend routing well but still involve server-side code. I’ve been tackling this from a different angle with Airbolt [1], which completely abstracts backend setup. Curious how others see the trade-offs between routing-focused tools and fully hosted backends like this.

[1] https://github.com/Airbolt-AI/airbolt

  • swyx 4 days ago

    (retracted after GP edited their comment)

    • mkw5053 3 days ago

      I didn’t intend my original comment to be overly promotional without relevance. I'm genuinely curious about the trade-offs between different LLM API routing solutions, most acutely as a consumer.

    • qntmfred 4 days ago

      don't you post links to your own stuff all the time? i don't think their comment was out of line.