
n8n + AI on Linux: Self-Hosted Workflow Automation with LLMs (2026)

Most AI workflow tools either lock you into a cloud platform, charge per workflow run, or require writing code for every integration. n8n solves all three: it is open-source, self-hosted, and now ships with native AI nodes that connect directly to Ollama or OpenAI.

This guide walks through installing n8n on Linux, connecting it to a local LLM via Ollama, and building real automations — from a simple summarizer to a support ticket classifier and a daily news digest.

What is n8n?

n8n is an open-source workflow automation platform — like Zapier or Make, but self-hosted and free. In 2026 it added native AI nodes, making it the fastest way to build LLM-powered automations without writing code.

Why n8n + AI?

  • Build AI workflows visually (no Python required)
  • Self-hosted: your data never leaves your server
  • Use local Ollama models (zero API cost)
  • 400+ integrations: webhooks, databases, APIs, email

The core idea is a canvas of connected nodes. Each node does one thing — receive a webhook, call an LLM, send an email, write to a database. You connect them with edges that pass data forward. The AI Agent node is special: it runs a reasoning loop, picking tools from the nodes you connect to it until it has an answer.

Install n8n on Linux with Docker

Docker is the fastest way to get n8n running. It handles all dependencies and makes upgrades trivial.

# Create data directory
mkdir -p ~/n8n-data

# Run n8n
docker run -d \
  --name n8n \
  -p 5678:5678 \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=yourpassword \
  -v ~/n8n-data:/home/node/.n8n \
  --restart always \
  n8nio/n8n

# Access at http://localhost:5678

The -v flag mounts a host directory for persistence — your workflows survive container restarts and upgrades. The --restart always flag means n8n comes back up automatically after a reboot.

Check that it started:

docker logs n8n

You should see Editor is now accessible via: http://localhost:5678.

Connect n8n to Ollama

Ollama must be running before you configure the connection. Start it if needed:

ollama serve
ollama pull llama3.2  # or any model you prefer

In n8n, add a new credential:

  • Type: Ollama API
  • Base URL: http://host.docker.internal:11434 (if Ollama runs on the host machine)

On Linux, host.docker.internal does not resolve inside containers by default. Pass --add-host=host.docker.internal:host-gateway to docker run (supported since Docker 20.10), or use the host's IP address directly.

Or run both on the same Docker network to avoid host networking:

docker network create n8n-net
docker run -d --network n8n-net --name ollama \
  -v ollama-data:/root/.ollama ollama/ollama
docker exec ollama ollama pull llama3.2
docker run -d --network n8n-net --name n8n \
  -p 5678:5678 \
  -v ~/n8n-data:/home/node/.n8n \
  n8nio/n8n

With this setup, use http://ollama:11434 as the Ollama base URL in n8n credentials.

Your First AI Workflow: Summarize Text

This workflow receives text via HTTP POST and returns a summary. It takes about five minutes to build.

Create a workflow with these nodes:

  1. Webhook — receives POST request with text
  2. AI Agent — summarizes using Ollama llama3.2
  3. Respond to Webhook — returns the summary

Configuration for the Webhook node:

  • HTTP Method: POST
  • Path: summarize
  • Response Mode: Using "Respond to Webhook" node

Configuration for the AI Agent node:

  • Model: Ollama (select your credential)
  • Model Name: llama3.2
  • System Prompt: "You are a concise summarizer. Return a 2-3 sentence summary of the text you receive."
  • User Message: {{ $json.text }}

Test it:

curl -X POST http://localhost:5678/webhook/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Python is a high-level programming language..."}'

The response comes back through the Respond to Webhook node with the AI-generated summary.

Practical Workflow: Classify Support Tickets

Use case: a customer submits a ticket via your website, AI categorizes it instantly, and it routes to the right team — no human triage needed.

Nodes:

  1. Webhook (receive ticket JSON with subject and body)
  2. AI Agent with system prompt: "You are a support ticket classifier. Classify tickets as exactly one of: billing, technical, or general. Respond with only the category word."
  3. Switch node routing on the AI output:
     • billing → Send Email to [email protected]
     • technical → Send Email to [email protected]
     • default → Send Email to [email protected]
  4. Respond to Webhook — return {"status": "routed", "category": "..."}

In the AI Agent, set the User Message to:

Subject: {{ $json.subject }}
Body: {{ $json.body }}

In the Switch node, add conditions matching the AI output text. Use the expression {{ $json.output.trim().toLowerCase() }} to normalize the model's response before matching.
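Local models do not always return the bare category word, so a Code node between the AI Agent and the Switch can normalize and validate the output before routing. A minimal sketch (normalizeCategory and the fallback to general are illustrative choices, not n8n built-ins):

```javascript
// Normalize the model's reply to one of the allowed categories.
// Anything unrecognized falls back to "general", so the Switch node's
// default branch still routes the ticket somewhere sensible.
const CATEGORIES = ["billing", "technical", "general"];

function normalizeCategory(output) {
  const cleaned = output.trim().toLowerCase().replace(/[^a-z]/g, "");
  return CATEGORIES.includes(cleaned) ? cleaned : "general";
}

// Inside an n8n Code node you would apply it per item, e.g.:
// return $input.all().map(item => ({
//   json: { ...item.json, category: normalizeCategory(item.json.output) }
// }));
```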

Practical Workflow: Extract Data from PDFs

Useful for processing invoices, contracts, or reports automatically.

Nodes:

  1. HTTP Request — download PDF from a URL (or use a webhook to receive an uploaded file)
  2. Extract from PDF (n8n built-in node) — converts PDF to plain text
  3. AI Agent — prompt: "Extract the following fields from this document and return them as JSON: company_name, date, total_amount, line_items (array of description and amount). Return only valid JSON."
  4. Code node — JSON.parse($json.output) to convert the AI's string response to a structured object
  5. Google Sheets — append a row with the extracted fields

This workflow can process dozens of PDFs per minute and costs nothing if you use Ollama.
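Even with "Return only valid JSON" in the prompt, models sometimes wrap the answer in Markdown fences or add prose around it. A defensive parser for the Code node in step 4 might look like this (parseModelJson is an illustrative helper, not part of n8n):

```javascript
// Parse the model's reply defensively: strip Markdown code fences and
// extract the outermost {...} object before handing it to JSON.parse.
function parseModelJson(output) {
  const stripped = output.replace(/`{3}(?:json)?/g, "").trim();
  const start = stripped.indexOf("{");
  const end = stripped.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(stripped.slice(start, end + 1));
}
```

In the Code node, returning something like [{ json: parseModelJson($input.first().json.output) }] passes the structured object on to the Google Sheets node.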

Practical Workflow: Daily News Digest

Automatically summarize the day's tech news and email it to yourself every morning.

Nodes:

  1. Schedule Trigger — set to run every day at 8:00 AM
  2. HTTP Request — fetch RSS feed XML from a tech news source (Hacker News, The Verge, etc.)
  3. XML node — parse the RSS XML into JSON items
  4. Limit node — take the first 20 items
  5. Aggregate node — combine all headline + description fields into one text block
  6. AI Agent — prompt: "Summarize these news headlines into a 5-bullet digest. Focus on the most significant developments. Be concise."
  7. Send Email — your daily digest lands in your inbox

You can extend this to pull from multiple RSS feeds by adding more HTTP Request nodes before step 3 and merging their outputs with a Merge node.
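If you prefer a single Code node over the Limit + Aggregate pair, the same combination step can be written directly. A sketch (buildDigestPrompt is an illustrative helper; the title and description fields match typical parsed RSS items):

```javascript
// Turn parsed RSS items into a single numbered text block for the
// AI Agent, capping the item count the way the Limit node does.
function buildDigestPrompt(items, limit = 20) {
  return items
    .slice(0, limit)
    .map((item, i) => `${i + 1}. ${item.title}: ${item.description}`)
    .join("\n");
}
```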

Run n8n as systemd Service

For production deployments on a dedicated server, running without Docker gives you more control over resource limits and logging:

npm install -g n8n   # requires a recent Node.js LTS release

Create the service file:

# /etc/systemd/system/n8n.service
[Unit]
Description=n8n workflow automation
After=network.target

[Service]
Type=simple
User=ubuntu
ExecStart=/usr/local/bin/n8n start
Restart=always
Environment=N8N_BASIC_AUTH_ACTIVE=true
Environment=N8N_BASIC_AUTH_USER=admin
Environment=N8N_BASIC_AUTH_PASSWORD=yourpassword
Environment=N8N_HOST=0.0.0.0
Environment=N8N_PORT=5678
WorkingDirectory=/home/ubuntu

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable --now n8n
sudo systemctl status n8n

View logs:

journalctl -u n8n -f

For a production setup, put nginx in front of n8n to handle SSL termination, just as you would with any other web service. Point nginx to http://localhost:5678 and configure Let's Encrypt for HTTPS.
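A minimal server block for that setup might look like this (n8n.example.com and the certificate paths are placeholders; the WebSocket headers matter because the editor holds live connections):

```nginx
server {
    listen 443 ssl;
    server_name n8n.example.com;

    # Paths as issued by certbot; adjust to your setup
    ssl_certificate     /etc/letsencrypt/live/n8n.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;

        # The n8n editor uses WebSockets for live execution updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```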

n8n AI Nodes Reference

n8n ships with purpose-built AI nodes that wrap common LLM patterns:

Node                    What it does
AI Agent                Autonomous agent that uses tools and LLMs
Basic LLM Chain         Simple prompt → LLM → response
Information Extractor   Extract structured data from text
Text Classifier         Classify text into categories
Summarization Chain     Summarize long documents
Question and Answer     Q&A over documents

The AI Agent node is the most powerful. It runs a ReAct-style loop: the model reasons about what to do, calls a sub-node tool (like a calculator, HTTP request, or code executor), reads the result, and continues until it decides it has a final answer. You can connect any n8n node as a tool to the AI Agent.

The Basic LLM Chain is simpler and faster: it takes a prompt template, fills in variables from earlier nodes, calls the LLM once, and passes the output forward. Use this when you know exactly what prompt you want and don't need multi-step reasoning.
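Conceptually, the Agent's loop can be sketched in a few lines. Here callModel and tools are hypothetical stand-ins for the chat model and the connected tool nodes; the real node manages the prompting and parsing for you:

```javascript
// A minimal sketch of a ReAct-style loop: the model either picks a tool
// or returns a final answer; each observation is appended to the
// scratchpad the model sees on the next step.
function runAgent(callModel, tools, question, maxSteps = 5) {
  let scratchpad = `Question: ${question}`;
  for (let step = 0; step < maxSteps; step++) {
    const action = callModel(scratchpad); // { tool, input } or { answer }
    if (action.answer !== undefined) {
      return action.answer;
    }
    const observation = tools[action.tool](action.input);
    scratchpad += `\nAction: ${action.tool}(${action.input})` +
                  `\nObservation: ${observation}`;
  }
  throw new Error("Agent reached the step limit without an answer");
}
```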

Connect to OpenAI Instead of Ollama

Add a credential of type OpenAI API with your API key. All AI nodes support both OpenAI and Ollama — just switch the credential in the node configuration. The workflow structure stays identical.

When to use which:

  • Ollama: development, private data, zero cost, models like llama3.2 or mistral
  • OpenAI: production, highest quality, when you need GPT-4o's reasoning or vision capabilities

You can mix both in the same n8n instance — use Ollama for high-volume low-stakes tasks and OpenAI for tasks where quality matters more.

Backup and Export Workflows

Always export your workflows to version control:

  1. Open a workflow in n8n
  2. Click the three-dot menu → Export
  3. Save the JSON file to a git repository

You can also use the n8n REST API to export all workflows programmatically. The public API authenticates with an API key (create one under Settings → n8n API in the UI) rather than the basic auth credentials:

curl -H "X-N8N-API-KEY: your-api-key" \
  http://localhost:5678/api/v1/workflows \
  | jq '.' > workflows-backup.json

Restore by importing the JSON file through the UI or the API. This makes migrations and disaster recovery straightforward.
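For version control it helps to store one file per workflow rather than a single dump. Assuming the API's usual {"data": [...]} response shape, a small script can split the export (splitWorkflows is an illustrative helper):

```javascript
// Split an exported workflows payload into per-workflow JSON strings,
// keyed by a filesystem-safe version of each workflow's name.
function splitWorkflows(exportJson) {
  const payload = JSON.parse(exportJson);
  const files = {};
  for (const wf of payload.data) {
    const safeName = wf.name.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    files[`${safeName}.json`] = JSON.stringify(wf, null, 2);
  }
  return files;
}
```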

Leonardo Lazzaro

Software engineer and technical writer. 10+ years experience in DevOps, Python, and Linux systems.