### Set up WebLLM Chat for local development

Source: https://github.com/mlc-ai/web-llm-chat/blob/main/README.md

Instructions to install Node.js and Yarn, configure local environment variables in `.env.local`, and then install dependencies and start the development server for WebLLM Chat.

```shell
# 1. Install Node.js and Yarn first
# 2. Configure local environment variables in `.env.local`
# 3. Install dependencies, then start the development server
yarn install
yarn dev
```

--------------------------------

### Build Docker image for WebLLM Chat

Source: https://github.com/mlc-ai/web-llm-chat/blob/main/README.md

Command to build a Docker image for the WebLLM Chat application, tagging it as `webllm_chat`.

```shell
docker build -t webllm_chat .
```

--------------------------------

### Build or export WebLLM Chat with Next.js

Source: https://github.com/mlc-ai/web-llm-chat/blob/main/README.md

Commands to build the WebLLM Chat application with Next.js: `yarn build` creates a standard Next.js build, while `yarn export` generates a static site for deployment.

```shell
yarn build
yarn export
```

--------------------------------

### Run WebLLM Chat Docker container

Source: https://github.com/mlc-ai/web-llm-chat/blob/main/README.md

Command to run the WebLLM Chat Docker container in detached mode, mapping container port 3000 to port 3000 on the host.

```shell
docker run -d -p 3000:3000 webllm_chat
```

--------------------------------

### Run WebLLM Chat Docker container with proxy

Source: https://github.com/mlc-ai/web-llm-chat/blob/main/README.md

Commands to run the WebLLM Chat Docker container behind an HTTP proxy. The `PROXY_URL` environment variable configures the proxy and supports both unauthenticated and authenticated proxies; for an authenticated proxy, append the username and password after the URL, separated by spaces.

```shell
docker run -d -p 3000:3000 \
   -e PROXY_URL=http://localhost:7890 \
   webllm_chat
```

```shell
docker run -d -p 3000:3000 \
   -e PROXY_URL="http://127.0.0.1:7890 user pass" \
   webllm_chat
```
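The space-separated authenticated `PROXY_URL` format can be illustrated with a short shell sketch. The parsing below is illustrative only, not the application's actual proxy-handling code; the variable names are placeholders:

```shell
# Split an authenticated PROXY_URL of the form "url user pass"
# into its three components (illustrative parsing only).
PROXY_URL="http://127.0.0.1:7890 user pass"
read -r proxy_url proxy_user proxy_pass <<< "$PROXY_URL"

echo "proxy: $proxy_url"   # → proxy: http://127.0.0.1:7890
echo "user:  $proxy_user"  # → user:  user
```

An unauthenticated value like `http://localhost:7890` simply leaves the user and password fields empty under the same scheme.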