We would like to support this task with some sort of ML computer vision contraption that effectively counts the chicken movement for us.
Camera placement, and therefore the in-picture features, will be constant across the footage. Lighting conditions will vary since the coop is outdoors. Some chickens might overlap, which could make a purely CV-based solution a bit harder.
There is already a body of video footage which we could use for ML training purposes. We are going to have an UP² AI vision devkit at our disposal via another department.
I checked for pre-trained models, but it seems that chicken counting has not been done yet. At least I could not find any relevant material so far.
So I guess we would have to somehow generate annotated training material from the footage, use that to train an ML model, verify it on held-out test material, and then apply the trained model to the live video feed.
What are your recommendations on how to approach this? How best to generate the training material? Which kind of NN would be suited for the task? Any advice on the workflow and tools would be highly appreciated.
Thanks in advance!
-rolf
https://github.com/jsbroks/coco-annotator/blob/master/README...
Which NN? We used YOLOv3 for detection, then OpenCV for tracking.
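To illustrate how the counting side of a detect-then-track pipeline can work: once a detector (e.g. YOLOv3) emits per-frame bounding boxes, you can match centroids between frames and count crossings of a virtual line (say, the coop entrance). This is a toy nearest-neighbour matcher standing in for a real tracker; the box format, distance threshold, and line position are all assumptions for the sketch:

```python
# Sketch: naive centroid matching + line-crossing counter.
# Input: a list of frames, each a list of (x, y, w, h) detection boxes
# as a detector like YOLOv3 would produce them.

def centroid(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def count_crossings(frames, line_y, max_dist=50):
    """Count objects whose centroid crosses line_y moving downward."""
    count = 0
    prev = []  # centroids seen in the previous frame
    for boxes in frames:
        cur = [centroid(b) for b in boxes]
        for cx, cy in cur:
            # match to the nearest previous centroid within max_dist
            best = None
            for px, py in prev:
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d <= max_dist and (best is None or d < best[0]):
                    best = (d, px, py)
            # count if the matched track moved from above to below the line
            if best and best[2] < line_y <= cy:
                count += 1
        prev = cur
    return count
```

A production version would use a proper tracker (OpenCV ships several) to keep identities stable through the overlaps you mention, but the counting logic stays essentially this.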