These were nice early in the TensorFlow evolution, for things like Frigate...
But even CPU inference is both faster and more energy efficient with a modern Arm SBC chip, and things like the Hailo chip are way faster for similar price, if you have an M.2 slot.
I haven't seen a good USB port alternative for edge devices though.
The big problem is Google seems to have let the whole thing stagnate since like 2019. They could have some neat little 5/10/20 TOPS NPUs for cheap if they had continued developing this hardware ecosystem :(
>The big problem is Google seems to have let the whole thing stagnate since like 2019.
Google's flightiness strikes again. How they expect developers (and to some degree consumers) to invest in their churning product lines is beyond me. What's the point in buying a Google product when there's a good chance Google will drop software support and any further development in 5 years or less?
At my last day job, this issue (https://github.com/google-coral/libedgetpu/issues/26) was the last nail in the coffin that got us to move away from Coral hardware. And that was despite our willingness to look past even the poor availability of the hardware during the peak chip shortage.
> As per https://www.tensorflow.org/guide/versions , Can we assume that libedgetpu released along with a tflite version is compatible with all the versions of tflite in the same major version?
> Hi, we can't give any guarantee that libedgetpu released along with a tflite version is compatible with all the versions of tflite in the same major version.
Yea, right! *stares at my Nest smoke detectors*
For a device with a 10 year life, and enough connectivity to be future-proof, Google has handled them poorly.
It's only a few weeks ago that they added support for them in the Home app.
I believe they have an 802.15.4 radio; maybe the x chipset is too old, but it would have been great to get Matter-over-Thread support for them.
Newcomers like Aqara will instead take up that space.
Even OpenVINO on an Intel iGPU is as fast, with (I've heard) more accurate detection, and can be done in under 5W with an i3 mobile CPU or similar.
I'm not sure what's optimal for cost/performance, but a Pi 5 8GB + Hailo-8 looks like it will be a good option.
* https://www.reddit.com/r/frigate_nvr/s/ncxP1YQDfB
* https://github.com/blakeblackshear/frigate/blob/e773d63c16d9...
I tested Frigate on this combination and I must recommend against it. The Raspberry Pi 5 lacks the hardware-accelerated video encode of its predecessor, the Pi 4. Due to this limitation, a Pi 5 will struggle with more than a few cameras, even with the 26 TOPS Hailo.
I think the decoder is more important than the encoder. The Pi 5 does have an H.265 hardware decoder, if your cameras support that.
Just accelerate it with AI instead! :)
I'd also keep an eye on the Rubik Pi 3, as it looks specifically designed to compete with the Pi 5 + AI Kit and should provide a faster, cheaper, more efficient option. They're only just starting to ship and there's no Frigate support yet, so it's just something to consider if you're not in a hurry to build a system.
* https://www.rubikpi.ai/
* https://liliputing.com/rubik-pi-is-a-compact-dev-board-with-...
* https://www.cnx-software.com/2025/01/09/qualcomm-qcs6490-rub...
Just get OAK cameras from Luxonis.
If you want home surveillance, you can just tie a bunch to Ethernet and they'll do on-device AI.
Frigate is surprisingly not that CPU intensive if you have a Coral.
I've got a repurposed HP G2 SFF desktop with an old i5-6500 CPU running Proxmox with a bunch of VMs and LXC containers, including Frigate.
I am passing the Coral USB through to the Frigate container for object detection, and the Intel GPU through for video decoding.
With 10 cameras continuously recording, the Coral inference CPU usage is about 12% and Frigate's CPU usage is about 5%, although go2rtc, the service Frigate uses to read the camera streams and restream them, takes up about 15% of the CPU.
Overall, CPU usage fluctuates below 30% on that machine, which does more than just Frigate under Proxmox.
I did run the watt calculation on that machine and it was something reasonable; I don't recall the number right now.
I'm running Frigate on an SFF computer I got off eBay for $100 with an i7-8700. It averages around 14 watts, using OpenVINO for object detection and the intel-qsv hardware acceleration preset.
How many cameras are you running with that? What's your inference speed in Frigate's metrics?
6 cameras, 10.54 ms. All other metrics are 0-3.5%.
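For rough context on what 10.54 ms buys you, here's a back-of-envelope throughput sketch; it assumes detections run one at a time on a single accelerator, which is an illustrative simplification:

```python
# Back-of-envelope detector throughput from Frigate's reported
# inference speed; assumes detections run one at a time on a
# single accelerator (a simplification, not Frigate's exact model).
def max_detections_per_second(inference_ms: float) -> float:
    return 1000.0 / inference_ms

rate = max_detections_per_second(10.54)  # ~94.9 detections/sec
per_camera = rate / 6                    # ~15.8 detections/sec per camera
```

Since Frigate only runs detection on motion, roughly 95 detections/sec shared across 6 cameras leaves plenty of headroom.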
What are you trying to do, if you don't mind me asking?
I'm looking to upgrade my home surveillance setup, currently running Arlo Pro 2 cameras. They work fine, but I'd prefer higher resolution and to avoid saturating my internet upstream with frequent video uploads.
My needs are pretty much the same as people who buy camera bundles from big box stores. I want reliable motion detection for intruders, deliveries, and visitors, and the ability to watch videos recorded in the past couple of weeks.
Looks like the Hailo M.2 accelerator costs $190, whereas I bought a Coral accelerator for $55. So not exactly comparable.
The Hailo-8L hat for the Pi is only ~$80 and has more than 3x the compute power of the Coral USB.
* https://www.pishop.us/product/raspberry-pi-ai-hat-13-tops/
Even the bigger Hailo-8 hat with >6x the compute of the Coral is only $135.
* https://www.pishop.us/product/raspberry-pi-ai-hat-26-tops/
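The multiples above check out on raw TOPS, though TOPS figures are vendor-reported and not directly comparable across architectures, so treat this as a ballpark sketch:

```python
# Ballpark raw-TOPS comparison; TOPS are vendor-reported numbers
# and real-world performance depends on the model and toolchain.
coral_usb_tops = 4
hailo_8l_tops = 13
hailo_8_tops = 26

ratio_8l = hailo_8l_tops / coral_usb_tops  # 3.25x the Coral USB
ratio_8 = hailo_8_tops / coral_usb_tops    # 6.5x the Coral USB
```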
Oh nice, thanks for digging that up. I currently run Frigate (along with Home Assistant) on my HP ProDesk 400 with an 8th-gen Intel i5 and the Coral USB. I wonder if it would run better with that Hailo-8 on my Pi 5 with 4GB.
> I haven't seen a good USB port alternative for edge devices though.
Thunderbolt 4
I assume they meant "an accelerator device that plugs into a USB port"
TB4 is PCIe over a cable. USB is out of the picture after device initialization.
Finding anything that's not hundreds of dollars to go from Thunderbolt to actual PCIe slot or M.2 slot is tough though.
There are a couple solutions, but then you have to have hardware that has a Thunderbolt port as well... and those aren't everywhere, especially on cheaper computers and SBCs.
By "USB port alternative" they surely mean alternative accelerators that connect over USB, not alternatives to USB ports for connecting accelerators
Not sure if there is something new here but it looks like the same product that has been around for a few years now (wasn't Coral released in 2019-ish?)
Agreed. Hasn't this been out for years?
Yeah, I don't see anything new here, I am guessing that OP just chanced across it.
This is not something that is very useful or relevant these days - it's basically abandoned at this point and only works with older versions of Python etc.
Searching around, it appears that the Coral USB Accelerator does about 4 TOPS.
The Raspberry Pi 5 AI Kit, which costs a few bucks more (and came out last year instead of in 2020), does 13 TOPS.
The Jetson Orin Nano Super, which costs $250, does 67 TOPS, and was just updated last month (although it's a refresh of the original product).
I own all three of these products and they are all very frustrating to work with, so you need to have a very specific use-case to make them worthwhile - if you don't, just stick with your machine's GPU.
> The Jetson Orin Nano Super, which costs $250, does 67 TOPS, and was just updated last month (although it's a refresh of the original product).
FWIW, there wasn't actually a physical revision/refresh; it's all software. So older owners can just update and get the boost as well. [0]
> With the same hardware architecture, this performance boost is enabled by a new power mode which increases the GPU, memory, and CPU clocks. All previous Jetson Orin Nano Developer Kits can use the new power mode by upgrading to the latest version of JetPack.
[0]: https://developer.nvidia.com/blog/nvidia-jetson-orin-nano-de...
This requires a hat.
See my other comment regarding efficiency of my Intel Xe iGPU.
Jetson is a different league though. These can run even LLMs (though the 16 GB version was overpriced when I bought during COVID, so I went for the 8 GB). Ollama Just Works (tm); getting Ollama working with ROCm on my 6700 XT, by comparison, was frustrating.
So, object detection with TensorFlow works well with these Coral TPUs. However, you can forget about even running Whisper.cpp.
One nice thing the Coral USB has going for it, though, is that it is USB. You can get it to work on practically any machine. Great for demos.
For an old version of Python, fire up a VM or an OCI container, or use a decent package manager like uv or pipx.
Using Ollama/llama.cpp with Vulkan is much easier than ROCm and works across more GPUs. I wish they'd merge the PR that adds it across the board :(
Coral's claim to fame was 2 TOPS/W. But, to everyone's point here, there's literally no news here, since its core (Google's Edge TPU) was never updated. I wouldn't be surprised if there were newer products that perform better at this point.
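That 2 TOPS/W figure falls straight out of the Edge TPU's published specs (4 TOPS at roughly 2 W). A quick sanity check, with the Hailo-8's nominal numbers added for comparison; both power figures are ballpark datasheet values, not measurements:

```python
# Perf-per-watt sanity check; the wattages are nominal/typical
# vendor figures, so treat the results as ballpark only.
def tops_per_watt(tops: float, watts: float) -> float:
    return tops / watts

edge_tpu = tops_per_watt(4, 2.0)  # 2.0 TOPS/W, Google's headline number
hailo_8 = tops_per_watt(26, 2.5)  # ~10.4 TOPS/W, per Hailo's claims
```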
Yea, it has been out for years. I bought one, and during the COVID supply-chain crisis I sold it for $450-ish.
I have a couple of these; unfortunately I've been waiting for the ecosystem to get better and run newer/improved models, to no avail. I attempted some YOLO ports (since Coral uses a specific architecture) and I'm not sure if I'm just bad at this or it's actually hard, but beyond the basic examples in Google's own ecosystem I wasn't able to run anything else on these. I was hoping for an upgrade when I saw this on HN, but it seems to be the same old one.
Well, a small shoutout for my project to facilitate YOLO on the EdgeTPU without any dependencies on Ultralytics. Some contributors helped a bit with more recent versions, but it seems like all you need is to twiddle a few of the input shapes. I say "facilitate" rather than "port" because someone else did the hard work of actually implementing the converter to tflite.
https://github.com/jveitchmichaelis/edgetpu-yolo
You really do need the model to be defined in TensorFlow though. Channel ordering screws things up in PyTorch, and it's not trivial to get around it; otherwise the compiler does all sorts of weird gymnastics to permute for you.
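To illustrate the channel-ordering mismatch: PyTorch tensors are laid out NCHW while TFLite expects NHWC, so every tensor needs a permutation that the compiler has to account for. A dependency-free sketch of the index mapping:

```python
# NCHW (batch, channel, height, width) -> NHWC permutation, shown
# on nested lists to stay dependency-free; with NumPy this would
# simply be np.transpose(x, (0, 2, 3, 1)).
def nchw_to_nhwc(x):
    n, c = len(x), len(x[0])
    h, w = len(x[0][0]), len(x[0][0][0])
    return [[[[x[b][ch][i][j] for ch in range(c)]
              for j in range(w)]
             for i in range(h)]
            for b in range(n)]

x = [[[[1, 2], [3, 4]],   # channel 0
      [[5, 6], [7, 8]]]]  # channel 1; shape (1, 2, 2, 2)
y = nchw_to_nhwc(x)       # channels are now the innermost axis
```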
See, this looks great, so now I have to give it a try! Just a quick Q to avoid the pain of trying if it's known not to work: is there a major reason this might not work on a Raspberry Pi? Thanks for sharing.
If you can get the libraries installed then the code should work. The only risk is that Coral is not particularly well maintained and you might need to downgrade to e.g. an older version of Debian. Feel free to post an issue if you run into trouble.
The only repeating use I've found for these so far is object detection in Frigate (NVR software).
Same. I have a few Corals I was trying to build some prototypes with. Gave up and found Hailo. Much more powerful. Compatible with far more models. The documentation isn’t great, but it's far better than Coral.
which Hailo product are you using and for what?
Similar experience; broken software.
I put 8 of these on a mini-ITX computer on a fake moon in my backyard, AMA.
https://custom-images.strikinglycdn.com/res/hrscywv4p/image/...
I don’t know what’s more “wtf” - your first message or the actual follow-up with proof. Why?
Please post more pictures of your moon, I would like to see them.
So THAT'S where NASA filmed it!!
AMA? How about, why do you have a fake moon in your backyard?
That's none of your damn business and he'd thank you to stay out of his personal affairs.
Sincerely, Miles' council
He did say AMA :)
Do you emulate the ~1 sec latency a real moon rover would have?
What should I have for dinner?
Have you considered selling fake moon landing content to conspiracists as a side hustle?
that's super fun! what kind of jobs do you run on it?
FWIW, the M.2 and mini-PCIe form factors are more cost-effective. I added one to the WiFi+Bluetooth slot on a refurbished Dell desktop to perform object detection in my CCTV NVR.
https://frigate.video/
For anyone else looking at these devices to use with Frigate: I found that using the OpenVINO mode on a <$200 Intel mini PC was more than adequate for 5 cameras, and it probably could have handled more. Glad I tried it out first before buying an accelerator.
I have one of these. The USB model sucks. It overheats unless you put it in high-efficiency (low-performance) mode, which defeats the purpose.
The mini-PCIe variant is much more reliable, but I ended up ditching the Coral entirely and replacing it with a GTX 1060.
It seems more than dead and only supports small neural networks. Viable alternatives are Hailo and Axelera (https://www.axelera.ai), which is newer.
USB AI accelerator, in case anyone was wondering why USB needed to be accelerated.
Maybe I'm stupid, but I couldn't figure out how to set the pull-up or pull-down resistors on these boards. Maybe with LLMs I can figure this out now... something to do with device tree configs?
What about the Coral? I've been running Frigate with mine for 2 years.
What's the deal with memory? Does it use the system RAM? I've been considering a cluster for local inference.
The data has to go across PCIe/USB to get to the internal (tiny) SRAM. It's not a good choice for clustering; you should just use a GPU for that.
It has a tiny amount; I believe it was 8 MiB. That's mega, not giga.
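That ~8 MiB of on-chip SRAM is the real constraint: an int8-quantized model whose weights fit stays cached on the chip, while anything bigger has its weights streamed over USB/PCIe on every inference. A back-of-envelope check; the parameter counts below are illustrative assumptions:

```python
SRAM_BYTES = 8 * 1024 * 1024  # ~8 MiB of on-chip cache on the Edge TPU

def fits_on_chip(param_count: int, bytes_per_param: int = 1) -> bool:
    # int8 quantization => 1 byte per weight; this ignores activations
    # and per-layer overhead, so it's only a rough upper bound.
    return param_count * bytes_per_param <= SRAM_BYTES

small_detector = fits_on_chip(4_300_000)  # MobileNet-class detector: fits
big_model = fits_on_chip(39_000_000)      # Whisper-tiny-scale: must stream
```

This is why MobileNet-style detectors are the sweet spot and anything Whisper-sized is hopeless, as noted elsewhere in the thread.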
Don't know, sorry. The performance is not that awesome, but it works well enough for my setup doing realtime object detection with a few cameras.
My main reason for getting one was power efficiency compared to a traditional GPU for this task.
I have a Coral M.2 and an Intel Xe iGPU on the same machine, both passed through to a VM on Proxmox, the iGPU via SR-IOV. On Proxmox I can see the power usage of one camera for the SR-IOV device using intel_gpu_top: less than 0.1W.
iGPU for detection? What's the performance on that?
Sounds cool but for me that would mean more upgrades for that server.
The Coral runs on 0.5 watts, which is way less than the GeForce 1080 I used before.
Video uses 0.07W at 0.6% video engine utilization according to intel_gpu_top.
Idle is IIRC 0.01W.
How does a Coral unit compare to a current gen AMD or Intel CPU in terms of throughput for ML tasks?
It does not at all compare. The whole Coral/Edge TPU project was the idea of a guy at Google (https://www.linkedin.com/in/billy-rutledge-1b750a4/) who never did anything interesting with it except a series of press releases and various hardware drops and demos. It has nothing to do with Google's real TPUs. I still don't understand why this project has been allowed to live while talented engineers and projects were cancelled.
I don't get the impression Google has invested much more into Coral beyond the initial set of products.
In terms of longevity - it's kinda table stakes in the electronics industry. Nobody's going to integrate your edge AI chip into their smart CCTV camera or whatever if you can't promise 5+ years of availability. There are chipmakers still selling parts they launched in the 1980s.
A couple of questions for people who have been using it: where does this fit between a typical budget Arm Cortex and a GPU? And what are practical model sizes you could run on one of these?
This is way too outdated to be relevant in any way. Back in the day they had a board with a TPU on it, before everyone else did. That board ran object detection at a pretty good resolution at like 80fps in a 2.5W power budget. I still have that board in my drawer; I never did find any use for it at its price point. Plus, because it's Google, I expected that they'd abandon the board within 2 years tops, which is exactly what happened. The board was like $100 IIRC, which was a good chunk of cash when a Raspberry Pi was like $25. Nowadays there are _dozens_ of Chinese boards available with on-chip TPUs. Tooling still sucks mightily, but that's expected when dealing with embedded systems. Unlike with the Google board, you can usually build your own Linux for these using Yocto or Buildroot with minimal tweaks.
Is there a more cost-effective solution for Frigate NVR?
Rockchip are the most common "Chinese boards available with on-chip TPUs" as the GP comment puts it, and they are community supported by Frigate now: https://docs.frigate.video/frigate/hardware/#rockchip-platfo...
RK3566 boards are cheaper than a Raspberry Pi 5, but you get less model support, and no Frigate+ model.
A mini-PC like a Lenovo m720q or m920q can be had for about $120 on secondary markets. It works great for me as a Frigate server. I'm using it with 6 cameras and plan to add a couple more. If you look around on eBay, you might be able to find one with 16GB RAM and even an M.2 or SSD drive included. Then get the M.2 (A+E key) version of the Coral TPU board ($30-35); that can be inserted into the WiFi board slot. The m720q has an i5-8400T. While it's an 8th-gen Intel CPU, it's more than capable of handling Frigate, as well as several other applications, whether you run it under Proxmox or as a bare-metal install. It has one M.2 slot for storage, but can also hold one 2.5-inch SSD, so you can drop in a cheap NVMe drive for boot & OS and use a much cheaper, larger SSD for video storage (which is what I've done).
... and one day I'll get mine to work ...
copyright 2020 on their site lol.