I have a friend who owns an M1 Max, and I'd like to "borrow" his GPU for Llama 3 or Stable Diffusion. Is there a way for me to use his compute when it's idle? I don't want to remote into his machine; an easy local API would be fine (I could set up Tailscale/ZeroTier and hit the API that way).
Not sure whether there's anything similar for SD, though.
> tailscale/zerotier
Same thing, isn't it?
In any case, it wouldn't be hard for you to just have an account on his machine, with Tailscale being perhaps the simplest setup. SSH in and cook his laptop at your leisure.
You make HTTP requests to the shared server, those get proxied via the SSH tunnel to his machine, and a client on his machine could decide when/whether to actually run the workload. Something like the sketch below.
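A minimal sketch of that gatekeeper client in Python, assuming Ollama (or any local model server) is already listening on localhost:11434 on the friend's machine, and that the friend keeps a reverse tunnel open with something like `ssh -R 8080:localhost:8080 user@shared-server`. The ports, hostnames, and load threshold are all illustrative:

    import os
    import urllib.request
    import urllib.error
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "http://localhost:11434"  # local model server (assumption: Ollama)
    MAX_LOAD = 2.0                       # crude "idle" threshold; tune to taste

    class Gatekeeper(BaseHTTPRequestHandler):
        def do_POST(self):
            # Only run the workload when the machine looks idle.
            if os.getloadavg()[0] > MAX_LOAD:
                self.send_response(503)
                self.end_headers()
                self.wfile.write(b"busy, try later\n")
                return
            # Forward the request body to the local model server.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            req = urllib.request.Request(
                UPSTREAM + self.path, data=body,
                headers={"Content-Type": "application/json"})
            try:
                with urllib.request.urlopen(req) as resp:
                    data = resp.read()
                    self.send_response(resp.status)
                    self.end_headers()
                    self.wfile.write(data)
            except urllib.error.URLError:
                self.send_response(502)
                self.end_headers()

    HTTPServer(("localhost", 8080), Gatekeeper).serve_forever()

Load average is a blunt idleness signal; you could just as well check for an active screen session or time of day, but the shape is the same: the friend's machine stays in control of when his GPU gets cooked.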
I think you'd be way better off just paying for a service designed for this, or renting a GPU from a provider set up for it; the cost won't be that significant.
Otherwise, set up Ollama's API.
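Ollama listens on port 11434 but by default only binds to localhost, so your friend would need to start it with OLLAMA_HOST=0.0.0.0 to expose it on the tailnet. After that it's a plain HTTP call; here the Tailscale hostname "friends-mbp" is a placeholder:

    import json
    import urllib.request

    # Hit /api/generate on the friend's machine over the tailnet.
    req = urllib.request.Request(
        "http://friends-mbp:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Why is the sky blue?",
            "stream": False,  # one JSON object instead of a token stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])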
Should I take this as an indicator that embedded GenAI is moving quite quickly?
(Also just wanted to say I find this thread incredibly cool generally, some very interesting stuff going on!!! :D )