Launch HN: Phind 3 (YC S22) – Every answer is a mini-app
Hi HN,
We are launching Phind 3 (https://www.phind.com), an AI answer engine that instantly builds a complete mini-app to answer and visualize your questions in an interactive way. A Phind mini-app appears as a beautiful, interactive webpage — with images, charts, diagrams, maps, and other widgets. Phind 3 doesn’t just present information more beautifully; interacting with these widgets dynamically updates the content on the page and enables new functionality that wasn’t possible before.
For example, asking Phind for “options for a one-bedroom apartment in the Lower East Side” (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) gives an interactive apartment-finding experience with customizable filters and a map view. And asking for a “recipe for bone-in chicken thighs” gives you a customizable recipe where changing the seasoning, cooking method, and other parameters will update the recipe content itself in real-time (https://www.phind.com/search/make-me-an-recipe-for-7c30ea6c-...).
Unlike Phind 2 and ChatGPT apps, which rely on brittle pre-built widgets that can’t truly adapt to your task, Phind 3 creates tools and widgets for itself in real time. We learned this lesson the hard way with our previous launch – the pre-built widgets made the answers much prettier, but they didn’t fundamentally enable new functionality. For example, “Give me round-trip flight options from JFK to SEA on Delta from December 1st-5th in both miles and cash” (https://www.phind.com/search/give-me-round-trip-flight-c0ebe...) is a question neither Phind 2 nor ChatGPT apps can handle, because the Expedia widget can only display cash fares, not award fares. We realized that Phind needs to be able to create and consume its own tools, with schemas it designs, all in real time. Because Phind 3 designs and builds fully custom widgets on the fly, it can answer these questions while those other tools can’t. Phind 3 now generates raw React code and can create any tool it needs to harness its underlying AI answer, search, and code execution capabilities.
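To make that concrete, here is a minimal sketch of the kind of React widget that could be generated for the flight question above. The names (FareOption, FlightToolResult, FlightWidget) are illustrative assumptions, not Phind’s actual output; the point is that a generated widget carries both cash and award fares in its own data model and re-renders as you toggle between them.

```tsx
// Hypothetical example only: a widget of roughly this shape could be generated
// for the JFK->SEA question. Names (FareOption, FlightToolResult, FlightWidget)
// are illustrative assumptions, not Phind's actual output.
import React, { useState } from "react";

interface FareOption {
  flightNumber: string;
  cashUSD: number;
  miles: number;
}

interface FlightToolResult {
  origin: string;      // e.g. "JFK"
  destination: string; // e.g. "SEA"
  options: FareOption[];
}

// The generated widget owns its own data model, so it can show both cash and
// award fares and re-render the page content as the user toggles between them.
export function FlightWidget({ result }: { result: FlightToolResult }) {
  const [showMiles, setShowMiles] = useState(false);
  return (
    <div>
      <h3>
        {result.origin} to {result.destination}
      </h3>
      <button onClick={() => setShowMiles((m) => !m)}>
        Show {showMiles ? "cash" : "miles"} fares
      </button>
      <ul>
        {result.options.map((o) => (
          <li key={o.flightNumber}>
            {o.flightNumber}:{" "}
            {showMiles ? `${o.miles.toLocaleString()} miles` : `$${o.cashUSD}`}
          </li>
        ))}
      </ul>
    </div>
  );
}
```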
Building on our history of helping developers solve complex technical questions, Phind 3 is able to answer and visualize developers’ questions like never before. For example, asking to “visualize quicksort” (https://www.phind.com/search/make-me-a-beautiful-visualizati...) gives an interactive step-by-step walkthrough of how the algorithm works.
Phind 3 can help visualize and bring your ideas to life in seconds — you can ask it to “make me a 3D Minecraft simulation” (https://www.phind.com/search/make-me-a-3d-minecraft-fde7033f...) or “make me a 3D roller coaster simulation” (https://www.phind.com/search/make-me-a-3d-roller-472647fc-e4...).
Our goal with Phind 3 is to usher in the era of on-demand software. You shouldn’t have to compromise by either settling for text-based AI conversations or using pre-built webpages that weren’t customized for you. With Phind 3, we create a “personal internet” for you, combining the visualization and interactivity of the web with the customization that AI makes possible. We think that the current “chat” era of AI is akin to the era of text-only interfaces in computers. The Mac bringing the GUI to the mainstream in 1984 didn’t just make computer outputs prettier; it opened up a whole new era of interactivity and possibilities. We aim to do the same now with AI.
On a technical level, we are particularly excited about:
- Phind 3’s ability to create its own tools with its own custom schemas and then consume them (see the sketch after this list)
- Significant improvements in agentic searching and a new deep research mode to surface hard-to-access information
- All-new custom Phind models that blend speed and quality. The new Phind Fast model is based on GLM-4.5-Air, while the new Phind Large model is based on GLM-4.6. Both models are state-of-the-art when it comes to reliable code generation, producing over 70% fewer errors than GPT-5.1-Codex (high) on our internal mini-app generation benchmark. Furthermore, we trained custom Eagle3 heads for both Phind Fast and Phind Large for fast inference. Phind Fast runs at up to 300 tokens per second, and Phind Large runs at up to 200 tokens per second, making them the fastest Phind models ever.
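As a rough illustration of the first bullet, here is a hypothetical sketch of what “design a tool, then consume it” could look like for the flight example above. Everything here (ToolSpec, searchBackend, awardFlightSearch) is an assumption for illustration, not Phind’s internal API: the model writes a JSON Schema for the tool it needs, wires it to an underlying capability, and then calls it to feed a generated widget like the one sketched earlier.

```ts
// Hypothetical sketch only: ToolSpec, searchBackend, and awardFlightSearch are
// assumptions made for illustration, not Phind's internal API.

interface ToolSpec<Input, Output> {
  name: string;
  description: string;
  inputSchema: object;                        // JSON Schema the model writes for itself
  execute: (input: Input) => Promise<Output>;
}

interface FlightQuery {
  origin: string;
  destination: string;
  airline?: string;
  departDate?: string;
  returnDate?: string;
}

interface SearchResult {
  query: string;
  results: unknown[];
}

// Stand-in for an underlying search/answer capability (assumption).
async function searchBackend(query: string): Promise<SearchResult> {
  return { query, results: [] };
}

// A tool the model might define on the fly for the flight question: fares
// priced in both cash and miles, which a fixed third-party widget can't express.
const awardFlightSearch: ToolSpec<FlightQuery, SearchResult> = {
  name: "award_flight_search",
  description: "Round-trip fares priced in both cash and miles",
  inputSchema: {
    type: "object",
    properties: {
      origin: { type: "string" },
      destination: { type: "string" },
      airline: { type: "string" },
      departDate: { type: "string" },
      returnDate: { type: "string" },
    },
    required: ["origin", "destination"],
  },
  execute: (q) =>
    searchBackend(`${q.airline ?? ""} ${q.origin} to ${q.destination} award and cash fares`.trim()),
};

// The model then consumes its own tool's output, feeding it into a generated
// React widget (like the one sketched earlier) for display.
awardFlightSearch
  .execute({ origin: "JFK", destination: "SEA", airline: "Delta" })
  .then((fares) => console.log(fares));
```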
While we have done Show HNs before for previous Phind versions, we’ve never actually done a proper Launch HN for Phind. As always, we can’t wait to hear your feedback! We are also hiring, so please don’t hesitate to reach out.
– Michael
I love the direction. It feels really fresh.
Thank you, great to hear :)
The problem is I don't think every answer needs a mini-app. I'd argue there are very few answers that do.
For example, it feels like Google's featured snippet (quick answer box) but expanded. But the thing is, many people don't like the featured snippet, and there's a reason it doesn't appear for many queries - it doesn't contribute meaningfully to those.
This functionality is doing exactly the opposite of the process of building good web apps: rather than "unpacking functionality" and making it specific for an audience, it "packs" all functionality into a generalized use case, at the cost of becoming extremely mediocre for each use case, which makes it worse than whichever dedicated tool you'd otherwise use for that job.
As a specific example, I clicked your apartments in LES search (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) and it shows us just 4 listings...? It shows some arbitrary subset of all things I could find on StreetEasy, and then provides a subset of the search functionality, losing things such as days on market, neighborhood, etc.
It's a cool demo, but "on-demand software" is exactly "Solution-In-Search-of-a-Problem".
The difficult question you need to ask is, as with the featured snippet: which questions are worth solving with this, and is the pain point big enough to make it worth solving?
Thanks for the feedback, and I agree that it is very much early days for this product category. To be clear, our goal is to make the software specific for an audience: you. What's exciting, though, is that models are rapidly improving at building on-demand software and this will directly benefit Phind. There are still many edge cases, but I think it will get better quickly.
We're using a similar approach at https://hallway.com ... launching soon!
Interesting -- could you try with a vanilla browser (no extensions or VPN) please? Preferably Chrome or Safari.
It seems to work except when I connect to my work VPN, which is very permissive -- I haven't observed it to break anything else
This is good. It’s fascinating how it spins up interactive pages instantly. Some of the mini-apps actually feel useful, but others break in ways you wouldn’t expect.
I’m curious to see how it evolves with more complex, multi-step queries.
Tried the prompt:
>A geometry app with nodes which interact based on their coordinates which may be linked to describe lines or arcs with side panels for variables and programming constructs.
which resulted in:
https://www.phind.com/search/a-geometry-app-with-nodes-ed416...
which didn't seem workable at all, and notably was lacking a side panel.
Hi, I just clicked the link and it's showing up for me. Could you refresh?
While I initially noted it as not showing up, things did appear after a while. But what I'm getting isn't what I would consider usable: in particular, the requested areas for values and variables do _not_ appear at the side, and it's not workable for my needs/expectations.
I agree that this answer was a bit wacky. Phind Fast is the fast, free model. Selecting Phind Large, GPT-5.1, or one of the Claude models would work better for a modeling task like this.
OK, I've had a chance to play with it in earnest.
First: my sense is that for most use cases, this will begin to feel gimmicky rather quickly and that you will do better by specializing rather than positioning yourself next to ChatGPT, which answers my questions without too much additional ceremony.
If you have any diehard users, I suspect they will cluster around very particular use cases: say, business users trying to create quick internal tools, users who want to generate a quick app on mobile, or scientists who want quick apps to validate data. Focusing on those clusters (your actual ones, not these specific examples) and building something optimized for their use cases seems like a stronger long-term play for you.
Secondly, I asked it to prove a theorem, and it gave me a link to a proof. This is fine, since LLM-generated math proofs are a bit of a mess, but I was surprised that it didn't offer any visualizations or anything further. I then asked it for numerical experiments that support the conjecture, and it just showed me some very generic code and print statements for a completely different problem, unrelated to what I asked about. Not very compelling.
Finally, and least important really: please stop submitting my messages when I hit return/enter! Many of us like to send more complex multi-line queries to LLMs.
Good luck
First time I'm seeing valid business advice on HN - unlike the infamous Dropbox comment haha :) But I strongly agree with the above advice on specializing for a vertical and hope the founders take it seriously!
Congrats on the launch, I love the idea! Super exciting to see these generative UIs.
I tried to make it generate an explainer page and it created an unrelated page: https://www.phind.com/search/explain-to-me-how-dom-66e58f3f-...
Hi, apologies for this -- it seems to have written a syntax error that it then failed to auto-fix (hence the white screen).
I tried generating your answer again: https://www.phind.com/search/explain-to-me-how-dom-78d20f04-....
hey michael, long term phind user here. phind became absolute sh*t. almost every answer is wrong. web search should be on by default to get accurate info. but even then it ends up hallucinating a lot.
if every response starts with "You're absolutely right -- ..." you know phind is hallucinating and you can immediately close the tab.
hey, sorry to hear that. web search is on by default, but we had some teething issues with it in the last hour. it should be fully fixed now. can you send some links that failed?
people often can't share their searches due to privacy concerns, so maybe you should at least provide an email address so they can share it privately rather than posting on HN? (going forward, does your app have a feedback button in each search? if not, it should)
anyway I think you need better QA processes
It's definitely cool, and engineering-wise close to SOTA given Lovable and all of the app generators.
But, assuming you are trying to sit in between Lovable and Google, how are you not going to be steamrolled by Google or Perplexity etc. the moment you get solid traction? Like, if your insight for v3 was that the model should make its own tools, so even less hardcoded, then I just don't see a moat or any vertical direction. What really is the difference?
Thanks, and great question. The custom Phind models are really key here -- off-the-shelf models (even SOTA models from big labs) are slow and error-prone when it comes to generating full websites on the fly.
Our long-term vision is to build a fully personalized internet. For Google this is an innovator's dilemma, as Google currently serves as a portal to the existing internet.
That is really cool! Congrats on the launch!
I was surprised not to see a share and embed button. I would expect that could be huge for growth.
Thank you! There is a share button in the upper-right corner of the answer page screen :)
The loading issues should be fixed now (as of 11am PST). Apologies for this -- one of our search providers went down right as we launched :(
After waiting 5 minutes, the only feedback I got was "You would've gotten a better answer with Phind Pro. Upgrade to unlock multiple rounds of searching for better answers -- automatically. Upgrade to Phind Plus, Pro, or Ultra to continue researching in depth!"
Not a single thing was actually shown or built. Astonishing what kind of crapware gets funded by YC if they slap AI on the application.
Hey, to be fair, getting on the front page of HN floods a site with traffic, and that's even harder for an AI app. Just wait a bit and it will likely be fine.
Congrats on the launch and keep up the great work.
Hi, sorry about that -- we are receiving an HN traffic "hug" spike right now and I'm working on getting that fixed ASAP.