How to distribute proprietary LLM prompt?
An interesting use-case for LLMs is to distribute an OSS LLM (like ollama) but specialize it with a custom, large prompt. This prompt may be proprietary. Is there any way to achieve this goal without putting the LLM behind a service API?
You could try generating many Q&A examples from a LLM with the desired prompt, then finetuning that model using those examples, but leaving out the prompt. If the prompt is very long and contains lots of information, you may even try to use a LLM to generate the Qs.
You can confuse the LLM to reveal its system prompt even if you put it behind a service API IMHO, as long as a user has access to the chat and can freely interact with the LLM.
Will it be a problem for you when--even with a hosted API--your proprietary prompt is leaked and other people or companies start using variations of it for their own purposes?