Any ideas are welcome, thanks!
In general you’re going to want to go with a time-difference-of-arrival (TDOA) system, as it can work one-way to support many simultaneous locations. These generally require that you set up an array of anchors or base stations that send out synchronized ultra-wideband (UWB) radio pulses. The “tags” are the individual receivers that calculate position based on when these are received. The RF behavior is different, but ultimately it’s a similar type of system to GPS.
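To make the geometry concrete, here's a toy 2D TDOA solver (NumPy, assuming perfectly synchronized anchors at known positions and noise-free timestamps; a real system needs robust estimation on top of this):

```python
import numpy as np

def tdoa_solve(anchors, tdoas, c=299_792_458.0, iters=20):
    """Estimate a 2D tag position from time differences of arrival.

    anchors: (N, 2) array of anchor positions in metres.
    tdoas:   (N-1,) arrival-time differences relative to anchor 0, seconds.
    Returns the estimated (x, y) via Gauss-Newton on the range differences.
    """
    dd = np.asarray(tdoas) * c            # convert time diffs to range diffs
    p = anchors.mean(axis=0)              # initial guess: anchor centroid
    for _ in range(iters):
        r = np.linalg.norm(anchors - p, axis=1)          # distance to each anchor
        res = (r[1:] - r[0]) - dd                        # residuals vs. measurements
        # Jacobian of the range differences w.r.t. the position p
        J = (p - anchors[1:]) / r[1:, None] - (p - anchors[0]) / r[0]
        p = p - np.linalg.lstsq(J, res, rcond=None)[0]   # Gauss-Newton step
    return p
```

With four anchors you get three range-difference equations for two unknowns, so the solve is overdetermined, which is what you want once noise shows up.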
Two-way ranging is a different technique that can work with fewer anchors and be more precise, but won’t scale nearly as well. Most commercial products will support both of these modes of operation. In addition, some products have a channel to support reasonably high-rate data transport as well.
If you search for ‘local positioning system TDOA UWB’ you’ll start getting into the right area. I would start small and test heavily in realistic venue situations, as the protocols used are incredibly simple and can be subject to noise, reflections, etc. Most of the ones I've seen have relatively low-power transmitters; you may want to see if a licensed band is an option. You may also need to integrate onboard IMU/GPS streams with a Kalman filter or similar mechanism to patch over data loss and noise. GPS and/or visual failsafes will also be essential for safety. I'm sure there are plenty of regulations here as well if you want to go commercial.
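The "patch over data loss" part can start as simply as a constant-velocity Kalman filter that coasts on its own prediction when a fix is dropped. A 1D sketch (the noise parameters here are made-up illustration values):

```python
import numpy as np

def kalman_track(measurements, dt=0.01, q=1.0, r=0.05):
    """Constant-velocity Kalman filter over noisy 1D position fixes.

    measurements: iterable of floats, with None marking a dropped fix.
    Returns filtered position estimates; during dropouts the filter
    coasts on the predicted state instead of the missing measurement.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition
    H = np.array([[1.0, 0.0]])                    # we only observe position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],     # process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r**2]])                        # measurement noise
    x = np.zeros(2)                               # state: [position, velocity]
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        if z is not None:                         # update only when we got a fix
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```

The same structure extends to 3D with IMU acceleration as a control input; the point is just that the state keeps evolving through a dropout.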
Either way good luck!
A challenge with doing it from the ground is that the drones will be quite small relative to the size of the image. But with sufficient compute and several cameras, a tiling-based approach[2] should work.
If you want to do unique-identification you’ll also need object tracking[3].
This is exactly the type of project Roboflow (our startup) is built to empower! Happy to chat/help further (e.g. we might be able to help source a good dataset to start from). And if it's for non-commercial use it should be completely free.
[1] https://blog.roboflow.com/drone-computer-vision-autopilot/
I think some onboard positioning system might work best for your application. kognate suggests using "inside out" tracking based on the features observed by a camera on the drone. A nice thing here is that most FPV drones are already transmitting realtime video. It would require significant computational power on the ground to localize drones from their camera feeds though. See [2] for some inspiration.
Another idea that may be possible is to run inertial sensor-fusion algorithms on the data from the IMUs onboard the drones to work out their trajectory in real time. However, this is quite a tricky business. The sensors would have to be characterized extremely well and be able to deal with the highly dynamic forces felt by a racing drone. It would probably make sense as a standalone module that accepts 5V from the drone's power system and has its own IMU(s) and telemetry radios.
[2] https://matthewearl.github.io/2021/03/06/mars2020-reproject/
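For a sense of what the IMU-only version involves, here's the naive double integration. This assumes the accelerations are already rotated into the world frame and gravity-compensated, which is exactly the hard part; error grows quadratically with time, which is why you'd fuse it with an absolute reference:

```python
import numpy as np

def dead_reckon(accel, dt, v0=np.zeros(3), p0=np.zeros(3)):
    """Integrate world-frame, gravity-removed accelerometer samples (N, 3)
    twice to get a position track. Bias and noise accumulate without bound,
    so this only holds up over short horizons."""
    accel = np.asarray(accel)
    v = v0 + np.cumsum(accel * dt, axis=0)       # velocity
    p = p0 + np.cumsum(v * dt, axis=0)           # position
    return p
```

Even a small constant accelerometer bias b turns into (b/2)·t² of position error, which is the "tricky business" in a nutshell.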
RGB-based detection will probably be too slow and error-prone. Instead, put active IR LEDs or similar markers on the drones, which can be easily detected, and use high-framerate cameras that only let IR through. Then use computer vision to spot the blobs, and finally compute the 3D position by triangulation.
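For the final triangulation step, a minimal two-camera version: once calibration turns each blob's pixel coordinates into a viewing ray, the 3D position is roughly the closest point between the two rays.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Given two camera centres c1, c2 and unit rays d1, d2 toward the
    same IR blob, return the midpoint of the shortest segment between
    the two viewing rays (a least-squares 3D position)."""
    # Solve for ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.column_stack([d1, -d2])
    t = np.linalg.lstsq(A, np.asarray(c2) - np.asarray(c1), rcond=None)[0]
    p1 = np.asarray(c1) + t[0] * np.asarray(d1)
    p2 = np.asarray(c2) + t[1] * np.asarray(d2)
    return (p1 + p2) / 2
```

With more than two cameras you'd stack one such constraint per view into a single least-squares system, which also gives you outlier rejection for spurious blobs.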
Active IR tracking is still pretty much the state of the art for motion capture and the like.
A quick Google search leads to OptiTrack, which even advertises exactly this use case of drone tracking:
They put emitters/sensors in the hockey puck as well as on the players. The data gets processed and displayed on video for audiences as an "augmented reality" experience.
My understanding is that the puck has an infrared emitter that is tracked by sensors in various locations around the rink and this can locate the realtime position of the puck. The players also have sensors/transmitters and this makes it possible to have really responsive position tracking (the video in the link shows how it looks quite nicely).
I suppose the speed and erratic motion of a hockey puck is not unlike that of an FPV drone.
As a quick example of how well this works, here are a few Roombas bouncing around [1].
This is what that path integration can look like rendered in a video [2]
[1] https://transistor-man.com/PhotoSet/roomba_dance/animated/da...
As you noted in another comment, GPS by itself probably isn't accurate enough, but there is GPS augmentation tech. You put a base station in the area that measures the drift and sends that over the air to use as a correction. I'm thinking you'd take the raw GPS from the drones, apply the correction, and hopefully get sub-meter positioning. Look up DGPS and WAAS for options there.
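The correction idea is basically "subtract the error the base station sees", on the assumption that nearby receivers share most of the atmospheric/clock error. A toy sketch (the positions and error values are made up for illustration):

```python
import numpy as np

# Surveyed, known-good position of the base station (illustrative value)
BASE_TRUTH = np.array([100.0, 200.0])

def corrected_fix(rover_raw, base_raw):
    """Differential correction: the base station's measured error is
    assumed to match the systematic error the rover sees, so subtracting
    it cancels the shared drift and leaves mostly receiver noise."""
    correction = base_raw - BASE_TRUTH
    return rover_raw - correction
```

Real DGPS applies corrections per satellite in the pseudorange domain rather than on the final fix, but the cancellation principle is the same.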
The other idea that comes to mind is to triangulate based on their radios. You'd have base stations around the perimeter, each measuring the signal strength and direction of the target frequencies. Positioning would be a matter of fairly simple trig + error correction. I don't know if there's anything doing this off-the-shelf, but indoor positioning systems may be a rabbit hole to go down (even if used outdoors).
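The "fairly simple trig" for two direction-finding stations looks something like this (2D, ideal bearings; real signal-strength/direction estimates would need the error correction mentioned above, and a third station to resolve ambiguity and average out noise):

```python
import numpy as np

def intersect_bearings(s1, theta1, s2, theta2):
    """Locate a transmitter from two base stations s1, s2 (2D positions)
    and measured bearings theta1, theta2 (radians from the +x axis).
    Solves s1 + t1*u1 = s2 + t2*u2 for the crossing point of the rays."""
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])
    t = np.linalg.solve(A, np.asarray(s2) - np.asarray(s1))
    return np.asarray(s1) + t[0] * u1
```

Note this degenerates when the two bearings are nearly parallel, which is a real constraint on where you can place the stations around the course.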
A final idea is to use the video feed from the drones. You'd place QR Codes throughout the course, process the video feeds, and use the codes seen in each feed to tell which ones are ahead. Or instead of QR Codes, build a point cloud of each point on the course to use as position.
Sounds fun!
Doing it with machine vision would likely be challenging (and I say this having a fair bit of experience with AI/MV systems). The area you are covering means a very large field of view, and drones are generally very small, relatively speaking.
If you can't do it with native APIs, I would probably look into an RF-style system with a small transmitter on each drone and antennas placed around the stadium to detect the signals and triangulate them into 3D space.
You may want to predict future coordinates of drones to increase tracking accuracy.
Drones have inertia, and when a trajectory is split into small enough chunks, each chunk can be expressed as a Bézier curve. Given a few past coordinates you can predict the future one, which helps with object detection and with keeping track of each individual drone.
When doing object detection, instead of scanning the whole frame searching for a drone, you scan only the areas where it is likely to be, meaning you can run at a higher FPS and with a higher frame resolution.
https://hsto.org/r/w1560/webt/vq/ga/at/vqgaat7sqymkhlro_8vef...
You could also go the custom hardware route and trilaterate signals from small embedded transmitters. That would require a lot of effort, but it should work using FPGAs and/or analog electronics.
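On the math side, once each transmitter's range to several known receivers can be measured, trilateration reduces to a single least-squares solve (toy 2D version with ideal ranges; real range estimates would be noisy and need weighting or robust fitting):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Closed-form least-squares trilateration: anchors (N, 2) at known
    positions, ranges (N,) measured distances (e.g. from round-trip time).
    Subtracting the first range equation from the rest cancels the |p|^2
    term and linearises the problem, so one lstsq call gives the position."""
    a = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2 * (a[1:] - a[0])
    b = (np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)) - (r[1:]**2 - r[0]**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

This is the part that's cheap; the hard engineering is getting clean range measurements out of the RF front end in the first place, which is where the FPGA/analog effort goes.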
Another approach could be to use radar and/or RFID: http://rfidradar.com/howworks.html
There are a few different parts here from robotics that can help.
- Tracking lets you find how a patch of pixels moves. Look up "KLT" and "SIFT features". (Older.)
- Recognition lets you find a given object. Look up "YOLO". (Newer.)
- Motion modeling lets you predict where something should be, and can incorporate the transponder ego data. Look up "Kalman filter".
- All three of the above should be available in existing libraries
- If possible, engineer the environment, like putting easy-to-spot LED patterns on the drones. This is always the easiest approach.
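Tying the pieces together, the per-frame step of matching each track's motion-model prediction to the new detections can start as simple greedy nearest-neighbour association (a real system would use the Hungarian algorithm, e.g. scipy.optimize.linear_sum_assignment, for globally optimal matching):

```python
import numpy as np

def assign_detections(predictions, detections, max_dist=50.0):
    """Greedy nearest-neighbour data association: match each track's
    predicted 2D position to the closest unclaimed detection within
    max_dist pixels. Returns {track_index: detection_index}."""
    pred = np.asarray(predictions, dtype=float)
    det = np.asarray(detections, dtype=float)
    dists = np.linalg.norm(pred[:, None, :] - det[None, :, :], axis=2)
    matches, used = {}, set()
    for ti in np.argsort(dists.min(axis=1)):      # most confident tracks first
        for di in np.argsort(dists[ti]):
            if dists[ti, di] > max_dist:
                break                             # nothing plausible left
            if di not in used:
                matches[int(ti)] = int(di)
                used.add(di)
                break
    return matches
```

Unmatched tracks would then coast on the Kalman prediction for a few frames before being dropped, and unmatched detections spawn new tracks.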
Then interpolate with accelerometer data from the drones' telemetry?
Dunno, it seems hard whichever way you go, but having more than one tech involved to cross-check seems like it might help.
The big boys use sensor blending and real-time kinematic (RTK) GPS to get cm-level accuracy.
However, that requires good GPS coverage (i.e. no roof).
Second to that, it's fiducial tracking: either attaching markers to each drone and using multiple cameras to work out the 6 degrees of freedom, or giving the drones enough horsepower to do it onboard.
http://boofcv.org/index.php?title=Example_Tracker_Object
BoofCV also has a great Android App to check out its features.
Could Decawave work for this? What if the rat goes behind a tree?
Once you have the tracking data, how do you plan to view it?