Running LLMs Locally with Docker Model Runner and Python

(theaiops.substack.com)

1 point | by ramikrispin 4 hours ago

1 comment

  • ramikrispin 4 hours ago
    Docker Model Runner (DMR) is a new feature in Docker Desktop that runs open-weight LLMs locally, similar to Ollama. This tutorial shows how to call DMR from Python using the OpenAI Python SDK. No prior Docker knowledge is required, since DMR operates as a server; see the sketch below.
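
    A minimal sketch of what such a call can look like, assuming DMR's OpenAI-compatible endpoint is enabled on the commonly cited default host address (http://localhost:12434/engines/v1) and that an example model such as ai/smollm2 has already been pulled (e.g. with `docker model pull ai/smollm2`); the endpoint, port, and model name here are assumptions, not taken from the article:

    ```python
    # Sketch: calling Docker Model Runner (DMR) from Python via the OpenAI SDK.
    # Assumes DMR's host-side TCP access is enabled on port 12434 (assumed default)
    # and that the model named below has already been pulled locally.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # DMR's OpenAI-compatible endpoint (assumed)
        api_key="not-needed",  # no real key required for a local server; the SDK just needs a string
    )

    response = client.chat.completions.create(
        model="ai/smollm2",  # example model name; replace with whichever model you pulled
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain Docker Model Runner in one sentence."},
        ],
    )

    print(response.choices[0].message.content)
    ```

    Because the server speaks the OpenAI chat-completions protocol, the only DMR-specific pieces are the base_url and the model name; the rest is standard OpenAI SDK usage.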