Local Agent Studio (LAS): Open-Source Agent Orchestration

Visual. Local-first. Open source.

Design, run, and inspect multi-agent workflows on your own machine.

Build orchestration graphs with a React Flow canvas, mix providers per agent, stream traces live, and keep control over your models, keys, and runtime.

Local-first · Ollama + OpenAI-compatible + OpenAI · Visual orchestration · Bring your own providers
CEO coordinator · local ollama

Plans the workflow, delegates to specialists, and merges outputs.

Coordinator · Router · Traceable

Researcher · openai-compatible

Finds context and produces concise upstream briefs.

Developer · openai

Transforms the brief into implementation-ready output.

Any provider per agent

Mix local Ollama, OpenAI-compatible endpoints, and OpenAI in one graph.
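The coordinator pattern above can be sketched in a few lines of Python. This is a conceptual illustration of per-agent provider routing, not Local Agent Studio's actual API: the agent names, provider labels, and the `call_model` stub are all hypothetical, and a real implementation would call the provider's chat endpoint instead of formatting a string.

```python
def call_model(provider: str, model: str, prompt: str) -> str:
    # Stub: a real implementation would POST to the provider's chat endpoint.
    return f"[{provider}/{model}] {prompt}"

# Each agent profile points at its own provider and model (illustrative values).
AGENTS = {
    "coordinator": {"provider": "ollama", "model": "llama3"},
    "researcher": {"provider": "openai-compatible", "model": "qwen-72b"},
    "developer": {"provider": "openai", "model": "gpt-4o-mini"},
}

def run(agent: str, prompt: str) -> str:
    cfg = AGENTS[agent]
    return call_model(cfg["provider"], cfg["model"], prompt)

def orchestrate(task: str) -> str:
    plan = run("coordinator", f"Plan: {task}")        # plan the workflow
    brief = run("researcher", f"Research: {plan}")    # concise upstream brief
    output = run("developer", f"Implement: {brief}")  # implementation-ready output
    return run("coordinator", f"Merge: {output}")     # merge outputs
```

The point of the sketch is that routing lives in data, not code: changing which provider an agent runs on is a one-line edit to its profile.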

Install fast

Use the one-line installer instead of cloning and wiring the project manually.

Inspect every run

Track node-by-node events, outputs, and execution state from a single trace view.

Install

Get running from a single command.

The installer downloads a versioned release artifact from GitHub and sets up a local launcher. After install, start the studio and configure your providers from the UI.

curl -fsSL https://raw.githubusercontent.com/harishkotra/local-agent-studio/main/install.sh | bash
01 · Install

Run the curl command to fetch and install the latest packaged release.

02 · Launch

Start the studio locally and open the app in your browser.

03 · Configure

Add your provider keys or local Ollama instance, then build your orchestration.

Why This Exists

An orchestration tool that stays out of your way.

Bring your own stack

Users decide which provider and model each agent should run on. There is no forced platform lock-in.

Local by default

Keys stay on the user’s machine, workflows are portable, and the default operating mode does not require a hosted service.

Built for visual reasoning

Agents, routers, inputs, HTTP tools, and outputs are all composed on a graph users can actually inspect.

Open to extension

The long-term direction is compatibility with the providers and agent-skill formats the ecosystem already uses.

How It Works

Three steps from blank canvas to traceable workflow.

1 · Create providers and profiles

Add Ollama, OpenAI-compatible, or OpenAI providers, then define reusable agent profiles with their own prompts, roles, notes, and models.

2 · Connect your orchestration graph

Use the React Flow canvas to place agents, add inputs and outputs, and wire the handoff path visually.

3 · Run and inspect

Execute the workflow locally, stream node events in real time, and inspect the full trace after completion.
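Node-by-node tracing can be pictured with a minimal sketch. The event shape below (`node`, `phase`, `output`, `ts`) is an assumption made for illustration; the studio's actual trace format may differ.

```python
import time

def traced_run(graph: list, inputs: str):
    """Run a linear graph of nodes, recording a start/end event per node."""
    trace = []
    value = inputs
    for node in graph:
        trace.append({"node": node["id"], "phase": "start", "ts": time.time()})
        value = node["fn"](value)  # execute the node on the upstream value
        trace.append({"node": node["id"], "phase": "end",
                      "output": value, "ts": time.time()})
    return value, trace

# A tiny three-node pipeline: input -> agent -> output (illustrative only).
graph = [
    {"id": "input", "fn": lambda s: s.strip()},
    {"id": "agent", "fn": lambda s: s.upper()},
    {"id": "output", "fn": lambda s: f"result: {s}"},
]
final, trace = traced_run(graph, "  hello ")
# final == "result: HELLO"; trace holds start/end events for all three nodes
```

Because every node emits paired events with its output attached, the same trace list can drive both live streaming and post-run inspection.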

Compatibility

Built for the way people already run models.

Supported today

  • Ollama for local model execution
  • OpenAI-compatible APIs such as Featherless.ai
  • OpenAI models through user-provided keys
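A quick sketch of why these three provider types can share one code path: an "OpenAI-compatible" API accepts the same `/chat/completions` request shape, so providers differ only in base URL and key. The endpoint values below are common defaults used for illustration (Ollama exposes its OpenAI-compatible API under `/v1` by default), not values the studio necessarily hard-codes.

```python
# Illustrative provider table: only base_url and api_key vary.
PROVIDERS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "api_key": "ollama"},
    "featherless": {"base_url": "https://api.featherless.ai/v1", "api_key": "YOUR_KEY"},
    "openai": {"base_url": "https://api.openai.com/v1", "api_key": "YOUR_KEY"},
}

def chat_request(provider: str, model: str, prompt: str) -> dict:
    """Build the same OpenAI-style chat request for any configured provider."""
    p = PROVIDERS[provider]
    return {
        "url": p["base_url"] + "/chat/completions",
        "headers": {"Authorization": f"Bearer {p['api_key']}"},
        "json": {"model": model,
                 "messages": [{"role": "user", "content": prompt}]},
    }
```

Swapping a local model for a hosted one is then just a different key in the provider table.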

Planned next

  • AgentSkills compatibility for ecosystem-aligned capabilities
  • Richer tool builders and workflow inputs
  • Release and installer polish for broader distribution

Open Source

Use it locally. Fork it. Improve it.

The product is open source and built in public. The repo has issues, roadmap work, and installation docs so contributors can get involved quickly.

FAQ

Common questions before you install.

Is this local-first?

Yes. The MVP is designed for single-user local execution and local credential storage.

Do I need OpenAI?

No. You can run with Ollama or any OpenAI-compatible provider instead.

Can different agents use different providers?

Yes. Each agent profile can point to its own provider and model configuration.

Where do I contribute?

Contributions happen in the main project repository on GitHub through issues and pull requests.