charlie-haley a day ago

Hey HN, I wanted to show off my project, Marmot! I decided to build Marmot after discovering that a lot of data catalogs are complex and require many external dependencies, such as Kafka, Elasticsearch, or an external orchestrator like Airflow.

Marmot is a single Go binary backed by Postgres. That's it!

It already supports:

- Full-text search across tables, topics, queues, buckets, and APIs
- Glossary and asset-to-term associations
- A flexible API, so it can support almost any data asset
- Terraform/Pulumi/CLI for managing a catalog-as-code
- 10+ plugins (and growing)

Live demo: https://demo.marmotdata.io

  • wiredfool a day ago

    How does this get the maps of the data flows and so on? Does it require read credentials to each data silo, or is there a manual mapping process?

    • charlie-haley a day ago

      It supports either; I didn't want to restrict people to just one method of getting their catalog populated. The CLI and plugin system needs read credentials for a given service, and then populates the catalog with the assets it finds. Any lineage links currently need to be added manually (unless they're part of the same plugin). Otherwise, you can integrate with your existing IaC pipelines using Terraform or Pulumi to populate the catalog at deploy time instead of needing to scrape a bunch of services (roughly the flow sketched below).
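
      Very roughly, the deploy-time flow is just pushing asset metadata at the catalog's API. A minimal Go sketch follows; the endpoint path and payload fields are illustrative guesses, not the actual API (the Terraform/Pulumi providers wrap the real one):

      ```go
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "net/http"
      )

      // Asset is an illustrative payload shape, not the real schema.
      type Asset struct {
          Name        string   `json:"name"`
          Type        string   `json:"type"` // e.g. "kafka_topic", "s3_bucket"
          Description string   `json:"description"`
          Tags        []string `json:"tags"`
      }

      func main() {
          asset := Asset{
              Name:        "orders-events",
              Type:        "kafka_topic",
              Description: "Raw order events from the checkout service",
              Tags:        []string{"orders", "raw"},
          }
          body, err := json.Marshal(asset)
          if err != nil {
              panic(err)
          }
          // Illustrative endpoint; in practice your IaC pipeline makes
          // this call (or the Terraform/Pulumi equivalent) at deploy time.
          resp, err := http.Post("http://localhost:8080/api/v1/assets",
              "application/json", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()
          fmt.Println("catalog responded:", resp.Status)
      }
      ```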

pratio a day ago

Hey there, great to see Marmot here, and I'm a huge fan of your project. Recently we deployed a catalog, but we went with OpenMetadata (https://open-metadata.org/), another amazing project.

What we missed in Marmot were existing integrations with Airflow and plugins for tools like Tableau and Power BI, as well as features such as SSO and MCP.

We're an enterprise and needed a more mature product. Fingers crossed Marmot gets there soon.

  • charlie-haley a day ago

    That's great to know, I wasn't aware anybody had even attempted to use it yet! I'm currently in the process of overhauling the plugin system; it's been quite hard to build out plugins for closed-source enterprise integrations like Tableau and Snowflake because they're hard to test against.

    SSO is sort of available, but undocumented; it currently only supports Okta. I'm working on fleshing a lot of this out in the next big release (along with MCP).

    • pratio 6 hours ago

      We gave it a proper deployment and were blown away by the speed, but in the end we need a lot of features. SSO/SAML is really important, not just for access but also for governance. We also miss the Snowflake and dbt plugins, among others.

      I saw the plugin system, but having never written any production-ready Go code, it doesn't make sense for us to just use an LLM to generate code and pull requests which you would then need to spend time reviewing.

      Marmot is a wonderful project and I'm sure it'll be worth the wait.

  • esafak a day ago

    That's useful feedback. Charlie, what's the process for adding integrations? A tutorial would be great. The plugin links here don't work: https://marmotdata.io/docs/Plugins/

    • charlie-haley a day ago

      Hey, there's some documentation around creating plugins here: https://marmotdata.io/docs/Develop/creating-plugins. It's relatively simple and involves adding a new Go package to the repo. Currently plugins have to be compiled into the binary, but I'd like to support external plugins at some point. The rough shape is sketched below.
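
      To give a feel for it: a plugin is basically a Go type that can discover assets from a source. The types below are illustrative stand-ins, not the actual interface (that's in the docs above):

      ```go
      package myplugin

      import "context"

      // Asset and Plugin are illustrative stand-ins for the real types
      // in the repo; see the creating-plugins docs for the actual ones.
      type Asset struct {
          Name string
          Type string // e.g. "table", "topic", "bucket"
      }

      type Plugin interface {
          Name() string
          // Discover connects to the source using read credentials and
          // returns the assets it finds.
          Discover(ctx context.Context) ([]Asset, error)
      }

      // MyPlugin would be compiled into the binary as a new Go package.
      type MyPlugin struct {
          DSN string // connection string for the source system
      }

      func (p *MyPlugin) Name() string { return "myplugin" }

      func (p *MyPlugin) Discover(ctx context.Context) ([]Asset, error) {
          // Real code would connect via p.DSN and enumerate assets;
          // stubbed here to keep the sketch self-contained.
          return []Asset{{Name: "users", Type: "table"}}, nil
      }
      ```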

      Also, thanks for pointing out the issue with the docs, I'll get that fixed!

hilti a day ago

I’ve been burned by metadata platforms twice now and honestly, it’s exhausting.

The demo is always incredible - finally, we’ll know where our data lives! No more asking “hey does anyone know which table has the real customer data?” in Slack at 3pm.

Then reality hits.

Week 1 looks great. Week 8, you search “customer data” and get back 47 tables with brilliant names like `customers_final_v3` and `cust_data_new`. Zero descriptions because nobody has time to write them.

You try enforcing it. Developers are already swamped and now you’re asking them to stop and document every column? They either write useless stuff like “customer table contains customers” or they just… don’t. Can’t really blame them.

Three months in, half the docs are outdated.

I don’t know. Maybe it’s a maturity thing? Or maybe we’re all just pretending we’re organized enough for these tools when we’re really not.

paddy_m a day ago

When should you reach for a data catalog versus a data warehouse or data lake? If you're choosing a data catalog, this is probably obvious to you; if you just happened on this HN post, less so.

Also, what key decisions do other data catalogs make versus your choices? What led to those decisions, and what is the benefit to users?

  • charlie-haley a day ago

    It depends on your ecosystem. If everything lives under one vendor, their native catalog will probably work really well for you. But most of the time (especially for older orgs) there's a huge, fragmented ecosystem of data assets that aren't easily discoverable and are spread across multiple teams and vendors.

    I like to think of Marmot as more of an "operational" catalog, with a focus on usability for individual contributors and not just data engineers. The key focus is simplicity, in terms of both deployment and usability.

badmonster 12 hours ago

This looks great! I'm curious about the plugin architecture - how does Marmot handle schema evolution and versioning across different data sources? For instance, if a Postgres table's schema changes, does the catalog automatically detect and update the lineage, or is there a manual reconciliation step?

Also, given that you're using OpenLineage for cross-system lineage tracking, have you considered building native integrations with data orchestration tools beyond Airflow (e.g., Dagster, Prefect) to automatically capture DAG-level lineage?

  • charlie-haley 7 hours ago

    Hey, that's a good question! At the moment it treats the latest run as the desired state, so any new changes to a schema simply overwrite the old version. I'd like to version these so people can navigate schema versions in the UI. Plugins are currently triggered either via the CLI or a schedule in the UI, so updates only appear in the catalog after a plugin has run. Conceptually the semantics are an upsert, as sketched below.
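
    Something like this, with guessed table and column names rather than the actual schema:

    ```go
    package ingest

    // "Latest run wins": each plugin run upserts the asset and
    // overwrites the stored schema. Names here are guesses for
    // illustration, not the actual database schema.
    const upsertAsset = `
    INSERT INTO assets (name, type, schema_json, updated_at)
    VALUES ($1, $2, $3, now())
    ON CONFLICT (name, type) DO UPDATE
    SET schema_json = EXCLUDED.schema_json,
        updated_at  = EXCLUDED.updated_at;`

    // Versioning would instead append one row per run, so the UI could
    // let you browse and diff historical schema versions.
    const insertSchemaVersion = `
    INSERT INTO asset_schema_versions (asset_id, schema_json, run_id, created_at)
    VALUES ($1, $2, $3, now());`
    ```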

    I'd also love to have some native integrations beyond Airflow. Once I've matured the existing plugin ecosystem a bit more, it's high on my list (along with column-level lineage).

e1gen-v a day ago

How are you able to see a dataset's lineage across storage types? For example, how are you able to see that an S3 bucket's files are the ancestors of some table in Postgres?

  • e1gen-v a day ago

    Oh, I see, it uses OpenLineage. I thought it was able to handle discovery.

    • charlie-haley a day ago

      It can handle discovery within a plugin if the asset types are related. You can also manually add lineage via the UI, or use Terraform to create lineage links via IaC. It's pretty complicated to handle discovery of asset lineage automatically; I've yet to find a nice way of doing it that works for many use cases. For reference, the OpenLineage events that drive the cross-system case look roughly like the sketch below.
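
      Trimmed to the essentials (the real spec also requires producer and schemaURL fields), an OpenLineage run event carries a job, a run ID, and input/output datasets, which is where edges like "these S3 files fed that Postgres table" come from:

      ```go
      package main

      import (
          "encoding/json"
          "fmt"
          "time"
      )

      // Dataset identifies one side of a lineage edge.
      type Dataset struct {
          Namespace string `json:"namespace"`
          Name      string `json:"name"`
      }

      // RunEvent is a trimmed sketch of the OpenLineage event shape.
      type RunEvent struct {
          EventType string    `json:"eventType"` // START, COMPLETE, FAIL, ABORT
          EventTime time.Time `json:"eventTime"`
          Run       struct {
              RunID string `json:"runId"`
          } `json:"run"`
          Job struct {
              Namespace string `json:"namespace"`
              Name      string `json:"name"`
          } `json:"job"`
          Inputs  []Dataset `json:"inputs"`
          Outputs []Dataset `json:"outputs"`
      }

      func main() {
          var e RunEvent
          e.EventType = "COMPLETE"
          e.EventTime = time.Now().UTC()
          e.Run.RunID = "c3b7a1e2-5f4d-4b9a-8c2e-7d6f5a4b3c2d"
          e.Job.Namespace = "etl"
          e.Job.Name = "load_orders"
          e.Inputs = []Dataset{{Namespace: "s3://raw-events", Name: "orders/2024"}}
          e.Outputs = []Dataset{{Namespace: "postgres://analytics", Name: "public.orders"}}

          out, _ := json.MarshalIndent(e, "", "  ")
          fmt.Println(string(out))
      }
      ```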

rawkode a day ago

This looks fantastic! I’ll need to explore building a SQLite / D1 plugin to consolidate all my worker data

mrbluecoat 20 hours ago

If a single binary is a selling point, why not use SQLite instead of Postgres?

  • charlie-haley 20 hours ago

    Postgres has a lot of features, such as trigram-based search, which is pretty essential if I don't want to use a dedicated search indexer. It's also much better at handling concurrent writes than SQLite.
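
    Concretely, it's the pg_trgm extension doing the heavy lifting; generic usage looks like this (the assets table name is illustrative, not the actual schema):

    ```go
    package search

    // Generic pg_trgm usage: the % operator does fuzzy trigram
    // matching and a GIN index keeps it fast, which is what lets
    // Postgres stand in for a dedicated search indexer.
    const setup = `
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE INDEX IF NOT EXISTS assets_name_trgm
        ON assets USING gin (name gin_trgm_ops);`

    const query = `
    SELECT name, similarity(name, $1) AS score
    FROM assets
    WHERE name % $1  -- matches above pg_trgm.similarity_threshold
    ORDER BY score DESC
    LIMIT 20;`
    ```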

stym06 a day ago

How's it different from existing open source data catalogs like amundsen.io?

  • NortySpock a day ago

    Amundsen has two databases and three services in its architecture diagram. For me, that's a smell: you now have a risk of inconsistency between the two databases, and you may have to learn how to tune Elasticsearch and Neo4j...

    Versus the conceptually simpler "one binary, one container, one storage volume/database" model.

    I acknowledge it's a false choice and a semi-silly thing to fixate on (how do you perf-tune ingestion-queue problems vs. write problems vs. read problems for a Go binary?)...

    But, like, I have 10 different systems I'm already debugging.

    Adding another one, like a data catalog that is supposed to make life easier, and then discovering I now have 5-subsystems-in-a-trenchcoat to debug means I'm spending even more time babysitting the metadata manager rather than doing data engineering _for the business_.

    https://www.amundsen.io/amundsen/architecture/

nchmy a day ago

Not to be confused with Marmot, the multi-master distributed SQLite server, which has been around for a couple of years longer and just came out of two years in hibernation, shedding its NATS/Raft fat in favour of a native gossip protocol for replication.

https://github.com/maxpert/marmot