This is why closed LLMs have developed so fast.
There are not many inference hosting providers for OSS LLMs, and even fewer for training.
And fewer still offer full pay-as-you-go billing.
For instance, Hugging Face Inference Endpoints are currently billed per model, per VM: you configure a VM, choose a model, then pay for its usage.
This works, but to try out a model, or to temporarily switch one off, you must configure a new endpoint each time.
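To make that workflow concrete, here is roughly what it looks like with the huggingface_hub Python client. This is a minimal sketch: the endpoint name, model, and instance values are illustrative, not prescriptive.

```python
# A sketch of the per-model, per-VM endpoint workflow using the
# huggingface_hub client; names and instance values are illustrative.
from huggingface_hub import create_inference_endpoint

# One endpoint = one model on one dedicated VM, billed while it is up.
endpoint = create_inference_endpoint(
    "zephyr-demo",                              # hypothetical endpoint name
    repository="HuggingFaceH4/zephyr-7b-beta",  # the single model this VM serves
    framework="pytorch",
    task="text-generation",
    vendor="aws",
    region="us-east-1",
    accelerator="gpu",
    instance_size="x1",
    instance_type="nvidia-a10g",
    type="protected",
)
endpoint.wait()  # block until the VM is provisioned and the model is loaded

# ... send requests to endpoint.url while the VM accrues charges ...

# Stopping the bill means tearing the endpoint down; trying the model
# again later means configuring a new endpoint from scratch.
endpoint.delete()
```

Note how the model is fixed at creation time: switching to a different model is not an in-place change, it is a second endpoint with its own VM and its own bill.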
Interested in WooCommerce vector search? https://wpsolr.com