I'm hoping for something that's free and can run on a modest desktop. It would really only need to detect one type of object, but then differentiate the individual objects within that type (sort of like facial recognition).
I have no idea if my project is even feasible, but I want to see what software is out there so that I'll know if I should give up the idea or keep looking into it.
To self-host, just run:
npx @roboflow/inference-server
Then you can POST an image to any of the models at localhost:9001, e.g.
base64 yourImage.jpg | curl -d @- "http://localhost:9001/some-model/1?api_key=xxxx"
And you get back JSON predictions.
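The curl one-liner above can also be reproduced from a script. A minimal sketch in Python, assuming the inference server is running locally; the model name `some-model/1` and `api_key` are placeholders from the example, not real values:

```python
# Sketch: building the same request the curl example sends -- a base64-encoded
# image POSTed to http://localhost:9001/<model>?api_key=<key>.
import base64

def build_request(image_bytes, model="some-model/1", api_key="xxxx",
                  host="http://localhost:9001"):
    """Return (url, body) for the inference server: the body is the
    base64-encoded image, matching `base64 yourImage.jpg | curl -d @- ...`."""
    url = f"{host}/{model}?api_key={api_key}"
    body = base64.b64encode(image_bytes).decode("ascii")
    return url, body

# Usage (requires the server started with `npx @roboflow/inference-server`):
#   import urllib.request
#   url, body = build_request(open("yourImage.jpg", "rb").read())
#   req = urllib.request.Request(url, data=body.encode(), method="POST")
#   predictions = urllib.request.urlopen(req).read()  # JSON predictions
```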
There are also client libs[2] and sample code[3] for pretty much any language you might want to use it in. You can also run any of the models directly in a browser with WebGL[4], in a native mobile app[5], or on an edge device[6].
[1] https://universe.roboflow.com
[2] https://github.com/roboflow-ai/roboflow-python
[3] https://github.com/roboflow-ai/roboflow-api-snippets
[4] https://docs.roboflow.com/inference/web-browser
[5] https://docs.roboflow.com/inference/mobile-ios-on-device
The first step is accomplished by training a detection model to generate a bounding box around your object; this can usually be done by fine-tuning an already-trained detection model. For this step, the data you need is all the images you have of the object, each with a bounding box drawn around it. The version of the object doesn't matter here.
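When preparing and evaluating that bounding-box data, the standard overlap score between a predicted box and a labeled one is intersection-over-union (IoU). An illustrative helper, assuming boxes are `(x1, y1, x2, y2)` corner coordinates (your labeling tool may use `(x, y, w, h)` instead):

```python
# Intersection-over-union of two axis-aligned boxes, each (x1, y1, x2, y2).
# Returns 1.0 for identical boxes and 0.0 for disjoint ones.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes don't intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Detection benchmarks typically count a prediction as correct when IoU with the ground-truth box exceeds some threshold (0.5 is common).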
The second step involves using a general image classification model that's been pretrained on generic data (VGG, etc.) together with a vector search engine/vector database. You would start by using the classification model to generate vector embeddings (https://frankzliu.com/blog/understanding-neural-network-embe...) of all the different versions of the object. The more ground-truth images you have the better, but it doesn't require nearly as many as training a classifier model would. Once you have your versions of the object as embeddings, you would store them in a vector database (for example Milvus: https://github.com/milvus-io/milvus).
Now, whenever you want to identify the object in an image, you run the image through the detection model to find the object, then run the cropped-out region through the embedding model. With this embedding you can then search the vector database, and the closest results will most likely be the matching version of the object.
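The lookup step above can be sketched end-to-end in a few lines. This is a toy stand-in, not the real thing: `embed()` substitutes for a pretrained CNN (e.g. VGG activations), a plain list substitutes for Milvus, and cosine similarity is one common metric for comparing embeddings:

```python
# Minimal in-memory sketch of the embed-then-search lookup described above.
import math

def embed(pixels):
    # Placeholder embedding: in practice you'd run the cropped detection
    # through a pretrained CNN and take an intermediate layer's activations.
    s = sum(pixels) or 1.0
    return [p / s for p in pixels]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_version(query_vec, database):
    # database: list of (label, embedding) pairs, like a vector-DB index.
    return max(database, key=lambda item: cosine(query_vec, item[1]))[0]

# Index a few "versions" of the object, then look one up.
db = [("version-a", embed([9, 1, 1])),
      ("version-b", embed([1, 9, 1])),
      ("version-c", embed([1, 1, 9]))]
crop = [8, 2, 1]                         # sliced-out detection from the image
print(nearest_version(embed(crop), db))  # -> version-a
```

A real vector database replaces the linear scan in `nearest_version` with an approximate-nearest-neighbor index, which is what makes this workable at scale.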
Hopefully this helps as a general rundown of what it would look like. Here is an example using Milvus and Towhee: https://github.com/towhee-io/examples/tree/3a2207d67b10a246f....
Disclaimer: I am a part of those two open source projects.
If you need something a little more complex, or that can recognize a wider variety of object variations within a single class, then you might like to experiment with something like Teachable Machine to see how you can train your own machine-learning model. You can then export and download the trained model and run it locally with something like Python or JavaScript on your own computer: https://teachablemachine.withgoogle.com/
Use that site to capture images from your webcam as examples of each class of object, and see if this tool can work for you.