HACKER Q&A
📣 jsilence

CV and/or ML chicken tracking – recommendations for entry points?


In a research project at our agricultural department on a mobile chicken coop, the movement of the chickens into and out of the coop is currently monitored by filming them and later manually counting the relevant movements on fast forward. This work is quite tedious.

We would like to support this task with some sort of ML/computer vision contraption that effectively counts the chicken movements for us.

Camera placement, and therefore the in-picture features, will be constant across the footage. Lighting conditions will vary since the coop is placed outdoors. Some chickens might overlap, which could make a pure CV-based solution a bit harder.

There is already a body of video footage that we could use for ML training purposes. We will also have a UP² AI Vision devkit at our disposal via another department.

I checked for pre-trained models, but it seems that chicken counting has not been done yet. At least I could not find any relevant material so far.

So I guess we would have to somehow generate annotated training material from the footage, use that to train an ML model, verify it against held-out test material, and then apply the trained model to the live video feed.
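For instance, stills could be sampled from the existing footage and loaded into an annotation tool for labelling. A minimal sketch with OpenCV (file paths and the sampling interval are just placeholders, not part of our setup):

    import cv2
    from pathlib import Path

    # Rough sketch: save every Nth frame from the existing footage as a still
    # image so it can be labelled in an annotation tool. Paths and the
    # sampling interval are placeholder assumptions.
    def extract_frames(video_path, out_dir, every_n=50):
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        cap = cv2.VideoCapture(str(video_path))
        i = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                cv2.imwrite(str(out / f"frame_{i:06d}.jpg"), frame)
                saved += 1
            i += 1
        cap.release()
        return saved

    # Example: extract_frames("coop_footage.mp4", "frames_for_annotation")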

What are your recommendations on how to approach this? How best to generate the training material? Which kind of NN would be suited for the task? Any advice on the workflow and tools would be highly appreciated.

Thanks in advance!

-rolf


  👤 billconan Accepted Answer ✓
We built our annotation tool in-house, but maybe you can use this:

https://github.com/jsbroks/coco-annotator/blob/master/README...

Which NN? We used YOLOv3, then OpenCV for tracking.
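A rough sketch of how such a pipeline could look (not our exact code; the model files, the door-line position, and the thresholds are placeholder assumptions, and a naive nearest-neighbour centroid match stands in here for proper tracking): run a YOLOv3 chicken detector per frame via OpenCV's DNN module and count crossings of a virtual line at the coop entrance.

    import cv2
    import numpy as np

    # Assumes a YOLOv3 model already fine-tuned on the annotated chicken
    # footage; the file names below are hypothetical.
    net = cv2.dnn.readNetFromDarknet("yolov3-chicken.cfg", "yolov3-chicken.weights")
    out_layers = net.getUnconnectedOutLayersNames()

    DOOR_Y = 300          # y coordinate of a virtual line across the coop entrance (placeholder)
    CONF, NMS = 0.5, 0.4  # confidence and NMS thresholds (placeholders)

    def detect(frame):
        """Return centroids of detected chickens in one frame."""
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net.setInput(blob)
        boxes, confs = [], []
        for out in net.forward(out_layers):
            for det in out:
                conf = float(det[5:].max())
                if conf > CONF:
                    cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                    confs.append(conf)
        if not boxes:
            return []
        keep = cv2.dnn.NMSBoxes(boxes, confs, CONF, NMS)
        return [(boxes[i][0] + boxes[i][2] // 2, boxes[i][1] + boxes[i][3] // 2)
                for i in np.array(keep).flatten()]

    def count_crossings(video_path):
        """Naive nearest-neighbour matching + line-crossing count."""
        cap = cv2.VideoCapture(video_path)
        prev, ins, outs = [], 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            curr = detect(frame)
            for (x, y) in curr:
                if not prev:
                    continue
                # match to the nearest centroid from the previous frame
                px, py = min(prev, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
                # which direction counts as "in" vs "out" depends on camera placement
                if py < DOOR_Y <= y:
                    outs += 1
                elif y < DOOR_Y <= py:
                    ins += 1
            prev = curr
        cap.release()
        return ins, outs

Since your camera placement is fixed, a single virtual line (or a small region of interest around the pop hole) is usually enough; the tracking only needs to be good enough to tell on which side of that line each bird was in consecutive frames.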