Set Up OpenClaw with Local LLM on Mac Studio (M4 Max, 128GB RAM)
Build a stable OpenClaw system with a local LLM on a Mac Studio M4
Configure OpenClaw with LM Studio and a local model such as Qwen. Ensure the agents start automatically and connect to Telegram/Discord.
Why This Role?
Get direct access to the founder and work on a project with clearly defined deliverables.
Original description from Contra
Description
I’m looking for an experienced AI/LLM engineer to help set up OpenClaw on my Mac Studio (M4 Max, 128GB RAM) so that it runs reliably with a fully local model. The objective is a clean, stable setup where OpenClaw agents can run and call tools using a local LLM (no reliance on cloud APIs). I have already experimented with several configurations and want someone who knows this stack well and can get it working correctly and efficiently.

Important Requirement
Please only apply if you have personally installed and run OpenClaw (or a very similar AI agent framework) using local models. This project requires hands-on experience with local LLM infrastructure.

Current Environment
- Mac Studio (M4 Max, 128GB RAM)
- macOS
- LM Studio installed
- Local models tested previously (Qwen variants)
- OpenClaw previously installed but removed for a clean restart

Scope of Work
- Install and configure OpenClaw from scratch
- Configure it to run with a local LLM via LM Studio
- Ensure tool calling works properly (web search, shell tools, etc.)
- Configure the optimal model + quantization for this hardware
- Validate stable operation with a test agent
- Configure the system so that agents automatically start after a machine reboot
- Set up access so the agents can be used through Telegram and Discord
- Provide brief documentation of the setup
- Optional: performance tuning for the Mac M4 Max

Preferred Models
Examples include (open to recommendations):
- Qwen3.5 35B
- Qwen2.5-Coder-32B / 72B
- MiniMax M2.5
- Other models known to work reliably with OpenClaw

Deliverables
- Fully working OpenClaw setup
- Local model configured and running
- Tools functioning correctly
- Agents verified to auto-start after reboot
- Agents accessible via Telegram and Discord
- Step-by-step instructions so I can reproduce or update later

Ideal Candidate
- Direct experience with OpenClaw or similar agent frameworks
- Strong familiarity with local LLM stacks
- Experience with LM Studio / llama.cpp / GGUF models
- Experience optimizing models on Apple Silicon (M-series)
- Experience integrating AI agents with Telegram and Discord bots

Project Details
- Fixed-price preferred
- Remote session is fine
- Should take ~1–3 hours for someone familiar with the stack

When applying, please include:
- Your experience running OpenClaw or similar agent frameworks
- Examples of local LLM environments you’ve configured
- Your recommended model for this hardware
- Confirmation that you have previously integrated AI agents with Telegram or Discord
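The "agents automatically start after a machine reboot" item is usually handled on macOS with a launchd LaunchAgent. A minimal sketch of a plist, assuming a hypothetical `openclaw` binary at `/usr/local/bin/openclaw` with a `start` subcommand (the label and program path are placeholders, not confirmed by the project):

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.openclaw-agent</string>
    <key>ProgramArguments</key>
    <array>
        <!-- Placeholder path and subcommand; use the actual OpenClaw launch command. -->
        <string>/usr/local/bin/openclaw</string>
        <string>start</string>
    </array>
    <!-- Start at login and restart the process if it exits. -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Saved as `~/Library/LaunchAgents/com.example.openclaw-agent.plist`, it can be activated with `launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.example.openclaw-agent.plist`, after which launchd keeps the agent process running across logins and reboots.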
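For context on the "local LLM via LM Studio" requirement: LM Studio can serve loaded models over an OpenAI-compatible HTTP API (by default at http://localhost:1234/v1), which is typically how an agent framework like OpenClaw is pointed at a local model. A minimal sketch of a smoke test against that endpoint, using only the standard library (the model name shown is an example, not a confirmed identifier):

```python
import json
import urllib.request

# LM Studio's default OpenAI-compatible endpoint; adjust if the port differs.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def ask_local_llm(model: str, prompt: str, timeout: int = 120) -> str:
    """POST a chat request to the local LM Studio server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Example model name only; use whatever identifier LM Studio shows for the loaded model.
    print(ask_local_llm("qwen2.5-coder-32b-instruct", "Reply with the single word: ready"))
```

If this round-trip works, the same base URL can be supplied to the agent framework's OpenAI-compatible provider configuration.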