It feels like using OpenAI's Ada to get text embeddings is probably no longer the best option at this point. What would be the best / most cost-efficient way of getting text embeddings these days? Preferably open source.
A good place to start is this eval system:
Possibly consider cross-encoding for semantic search depending on your use case, but wherever cross-encoding is useful, generative embeddings like Ada are usually much better... There used to be embeddings useful for things like classifying whether one sentence entails another, or whether a sentence is complete, but these are basically completely supplanted these days.
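To make the bi-encoder vs cross-encoder trade-off concrete, here is a toy sketch of the usual retrieve-then-re-rank pattern. The "encoders" below are deliberately trivial stand-ins (bag-of-words and word overlap, my own placeholders, not real models); in practice you would swap in a sentence-transformers bi-encoder and a CrossEncoder model. The structural point holds either way: embeddings can be precomputed once per document, while a cross-encoder must run once per (query, candidate) pair, so it only fits as a re-ranking stage.

```python
import numpy as np

corpus = [
    "the cat sat on the mat",
    "dogs are loyal pets",
    "a feline rested on the rug",
    "stock prices fell sharply today",
]

vocab = sorted({w for doc in corpus for w in doc.split()})

def embed(text):
    # bi-encoder stand-in: each text maps to a vector independently,
    # so corpus vectors can be precomputed and stored in a vector index
    v = np.array([text.split().count(w) for w in vocab], dtype=np.float32)
    n = np.linalg.norm(v)
    return v / n if n else v

def cross_score(query, doc):
    # cross-encoder stand-in: sees both texts jointly, so it cannot be
    # precomputed -- it must run once per (query, candidate) pair
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / len(q | d)

query = "cat on a mat"
doc_vecs = np.stack([embed(d) for d in corpus])  # precompute once
sims = doc_vecs @ embed(query)                   # cheap: one matrix-vector product
top_k = np.argsort(sims)[::-1][:2]               # retrieve a few candidates
reranked = sorted(top_k, key=lambda i: cross_score(query, corpus[i]), reverse=True)
print([corpus[i] for i in reranked])
```

Running the expensive pairwise scorer only on the retrieved top-k is what keeps cross-encoding affordable at search time.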
Do consider the all-MiniLM family of embeddings (the default in sentence-transformers) for speed or on-device use. The vectors are half the size (and therefore less than half the compute for distance functions), so they are faster for large searches etc., which is useful if you run your own setup with vector stores rather than a managed service.
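A quick way to see why vector size matters: a cosine-similarity search over n documents is one n-by-dim matrix-vector product, so the arithmetic scales linearly with the embedding dimension. The sketch below compares a MiniLM-sized dimension (384) against an Ada-sized one (1536) using random vectors; the specific dims are the models' published sizes, everything else is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs = 10_000

def cosine_search(query, docs):
    # normalize once, then one matrix-vector product yields all similarities
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    return docs_n @ q_n

for dim in (384, 1536):  # all-MiniLM-L6-v2 vs text-embedding-ada-002 sizes
    docs = rng.standard_normal((n_docs, dim)).astype(np.float32)
    query = rng.standard_normal(dim).astype(np.float32)
    sims = cosine_search(query, docs)
    # the search is one n_docs x dim matmul: flops grow linearly with dim
    print(f"dim={dim}: {sims.shape[0]} scores, ~{n_docs * dim * 2:,} flops")
```

The same linear scaling applies to index memory, which is often the binding constraint when you host the vector store yourself.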