magicalhippo 6 hours ago

Having played with the GB sized Whisper models, I'm amazed to learn the 80MB version is actually useful for anything.

I was aiming for an agent-like experience, and found the accuracy dropped below what I'd consider useful even above the 1 GB mark.

Perhaps for shorter, few-word commands like "lights on"?

  • rjwilmsi 3 hours ago

    I've played quite a lot with all the Whisper models up to "medium" size, mostly via faster-whisper, since the original OpenAI Whisper only seems to be optimized for GPU.

    I would agree that the "tiny" model has a clear drop-off in accuracy; it's not good enough for anything real (even when transcribing your own speech, the error rate means too much editing is needed). In my experience, accuracy can be more of a problem on shorter sentences because there is less context to help the model.

    I think for serious use (on GPU) it would be the "medium" or "large" models only. There is now a "large-turbo" model which is apparently faster than "medium" on GPU while being more accurate; I haven't tried it yet.

    On CPU for personal use (via faster-whisper) I have found "base" is usable and "small" is good, though on a laptop CPU "small" is slow for real time. "Medium" is more accurate, mostly just on punctuation, but far too slow for CPU. Of course, all models will get some uncommon surnames and place names wrong.

    Since OpenAI have re-released the "large" model twice and have now done a "large-turbo", I hope they will re-release the smaller models too, so that the smallest ones become more useful.

    These Moonshine models are compared to the original OpenAI Whisper, but really I'd say they need to compare to faster-whisper: multiple projects are faster than the original OpenAI implementation.
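
    For reference, a minimal sketch of what CPU use with faster-whisper looks like; the model name, device and compute type are the knobs to tune, and the file path is just a placeholder:

      from faster_whisper import WhisperModel

      # int8 quantization keeps memory use and latency down on CPU
      model = WhisperModel("small", device="cpu", compute_type="int8")

      segments, info = model.transcribe("audio.wav", beam_size=5)
      for segment in segments:
          print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))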

  • heigh 5 hours ago

    There are libraries that can help with this, such as SpeechRecognition for Python. If all you're looking for is short phrases with minimal background noise, this should do it for you.
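
    A rough sketch of what that looks like (the wav path is a placeholder; recognize_google calls a free web API, and the library also supports offline engines such as PocketSphinx):

      import speech_recognition as sr

      r = sr.Recognizer()
      # works best on short clips with minimal background noise
      with sr.AudioFile("command.wav") as source:
          audio = r.record(source)  # read the entire file
      print(r.recognize_google(audio))  # e.g. "lights on"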

  • Randor 4 hours ago

    Looks like Moonshine is competing against the Whisper-tiny model. There isn't any information in the paper about how it compares to the larger whisper-large-v3.

    • magicalhippo 3 hours ago

      Yeah, I was just mildly surprised such a small variant would be useful. Will certainly try it when I get back home.

perihelion_zero 2 hours ago

Nice. Looks like a way to get live text transcripts on tiny devices without calling out to cloud APIs.

heigh 8 hours ago

This looks awesome! Actually something I’m looking at playing with this evening!

  • heigh 6 hours ago

    I don't mean to give negative feedback, as I don't consider myself a full-blown expert with Python/ML. However, for someone with passing experience, it fails out of the box for me, both with and without the typically required 16 kHz sample rate audio files (of various codecs/formats).

    I was really hoping it would be a quick, brilliant solution to something I'm working on now. Perhaps I'll dig in and invest in it, but I'm not sure I have the luxury right now to do the exploratory work... Hope someone else has better luck than I do!
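
    For what it's worth, this is roughly how I prepared the files; that the model wants 16 kHz mono is my assumption, based on what Whisper-family models expect:

      # decode, resample to 16 kHz and downmix to mono before transcribing
      # (assumption: Moonshine, like Whisper, expects 16 kHz mono input)
      import librosa
      import soundfile as sf

      audio, sr = librosa.load("input.mp3", sr=16000, mono=True)
      sf.write("input_16k.wav", audio, samplerate=16000)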

    • krisoft 4 hours ago

      > I don't mean to give negative feedback

      I would recommend then being more specific. Did you have trouble installing it? Did it give you an error? Was there no output? Was the output wrong? Is it not working on your files but working on the example files? Is it solving a different problem than the one you have?

      • heigh 4 hours ago

        Installing was okay, but it wouldn't run on any of the sample files I had. This is the output I got:

          UserWarning: You are using a softmax over axis 3 of a tensor of shape (1, 8, 1, 1).
          This axis has size 1. The softmax operation will always return the value 1, which
          is likely not what you intended. Did you mean to use a sigmoid instead?
            warnings.warn(

        I know this isn't the right place for this (the right place is raising an issue on GitHub), but since you asked, I posted it here...

spencerchubb 39 minutes ago

So they're claiming SOTA because they treat OpenAI as the SOTA baseline. What about Groq or fal.ai?

pabs3 6 hours ago

Wonder where the training data for this comes from.

  • heigh 5 hours ago

    They supply their paper in the Git repo, here: https://github.com/usefulsensors/moonshine/blob/main/moonshi...

    The section "3.2. Training data collection & preprocessing" covers what you're inquiring about: "We train Moonshine on a combination of 90K hours from open ASR datasets and over 100K hours from our own internally-prepared dataset, totalling around 200K hours. From open datasets, we use Common Voice 16.1 (Ardila et al., 2020), the AMI corpus (Carletta et al., 2005), GigaSpeech (Chen et al., 2021), LibriSpeech (Panayotov et al., 2015), the English subset of multilingual LibriSpeech (Pratap et al., 2020), and People’s Speech (Galvez et al., 2021). We then augment this training corpus with data that we collect from openly-available sources on the web. We discuss preparation methods for our self-collected data in the following."

    It does continue...

bbor 9 hours ago

Very, very cool. Will have to try it out! It’s all fun and games until a universal translator comes out in glasses or earpiece form…

elphinstone 9 hours ago

Kind of a weird name choice: not very searchable, and completely unrelated to what it does. But the tech looks great.

  • jaco6 8 hours ago

    I agree. It's also irresponsible to pick a name that reminds many people of alcohol. I think they were trying to evoke the idea of moonlight reflecting the sun's light, just as their software reflects the speech of the user.

    I think “Artemis” or “Luna” would work better.

    • lemonberry 2 hours ago

      As an alcoholic - sober 6 years on January 1, 2025 - I can comfortably say that my addiction is my problem. I do not expect the world to comport to my issues.

      If an alcoholic is triggered by this name they need more tools in their toolkit.

      The best thing anyone in recovery* can do is build up a toolkit and support system to stay sober. If one expects the world to isolate them from temptation they will never get sober.

      * recovery: I loathe this term, but use it because it's familiar. My dislike for it is another conversation and unnecessary here.

    • Veen 7 hours ago

      Luna stigmatizes people with mental illnesses (lunatic), and Artemis was the goddess of virginity; valorizing feminine virginity is misogynistic.

      You can play this game with every possible name.

      • oezi 3 hours ago

        Offensiveness is certainly a sliding scale, shaped by the culture of the recipients. I agree with the OP that some thought should be given to a name so as not to inflict hurt unnecessarily, but I also wouldn't have caught "moonshine" as a term celebrating alcoholism (which is still one of the most damaging drugs available, cf. David Nutt's Drugs Without the Hot Air).

      • vincnetas 6 hours ago

        Great example (tutorial) of how to get offended by anything.

    • emptiestplace 8 hours ago

      How is it irresponsible?

      • polotics 7 hours ago

        I guess jaco6 thinks recovering addicts shouldn't be reminded of their addiction. I'd also say that's casting quite a wide net on speech.

      • sirolimus 6 hours ago

        I’m also awaiting an answer to such a preposterous claim. Let’s rename Svelte because it reminds me of Swedish pancakes and I have diabetes.

    • perching_aix 6 hours ago

      Isn't it clearly a reference to the idiom "talking moonshine", making it a self-deprecating joke?

    • sirolimus 6 hours ago

      You think it’s irresponsible to name something after a drug?