Ansible Roles and Galaxy: Modular Playbooks for Production (2026)
Ansible playbooks start simple: a list of tasks that configure a server. As your infrastructure grows, those playbooks grow with it. A single file that installs nginx, configures SSL, sets up a database, creates users, and deploys an application becomes hard to read, impossible to test in isolation, and impossible to reuse across projects.
Roles solve this problem. A role is a self-contained, reusable unit of automation with a standard directory layout. You write a role once, reuse it across playbooks and projects, share it on Ansible Galaxy, and test it independently with Molecule. This is how production Ansible infrastructure is organized.
Why Roles
The practical benefits of roles over flat playbooks:
- Reusability: A webserver role can be applied to any host in any playbook without copy-pasting tasks.
- Testability: Molecule runs a role against a container, verifies the result, and tests idempotency — all without touching real infrastructure.
- Shareability: Ansible Galaxy hosts thousands of community roles. geerlingguy.nginx is ready to use immediately.
- Separation of concerns: Each role has one responsibility. Your playbook just lists which roles to apply to which hosts.
- Variable scoping: Roles have their own variable namespaces. Defaults can be overridden by the playbook without editing the role.
Role Directory Structure
roles/
└── webserver/
├── tasks/
│ ├── main.yml # Entry point — Ansible runs this
│ └── ssl.yml # Included from main.yml
├── handlers/
│ └── main.yml # Handlers (e.g., restart nginx)
├── defaults/
│ └── main.yml # Low-priority default variables
├── vars/
│ └── main.yml # High-priority role variables
├── templates/
│ └── nginx.conf.j2 # Jinja2 templates
├── files/
│ └── index.html # Static files to copy
├── meta/
│ └── main.yml # Role metadata and dependencies
└── README.md # Role documentation
Every directory is optional except tasks/. Ansible only loads directories that exist.
Creating a Role
ansible-galaxy role init webserver
This creates the full directory structure under roles/webserver/. Initialize a new role for every distinct configuration concern: webserver, database, monitoring, users, firewall.
tasks/main.yml — The Entry Point
---
# roles/webserver/tasks/main.yml
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Ensure nginx is started and enabled
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true

- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: "0644"
  notify: restart nginx

- name: Include SSL tasks
  ansible.builtin.include_tasks: ssl.yml
  when: webserver_ssl_enabled | bool
Split large task files using include_tasks for conditional blocks and import_tasks for unconditional static imports. include_tasks is evaluated at runtime (supports when); import_tasks is parsed at load time (better for linting).
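As a sketch of what such an included file might contain, here is a hypothetical tasks/ssl.yml (the file paths, certificate names, and module choices are illustrative assumptions, not part of the original role):

```yaml
---
# roles/webserver/tasks/ssl.yml (illustrative sketch)
- name: Create SSL directory
  ansible.builtin.file:
    path: /etc/nginx/ssl
    state: directory
    owner: root
    group: root
    mode: "0700"

- name: Copy SSL certificate and key
  ansible.builtin.copy:
    src: "{{ item }}"                 # hypothetical files shipped in files/
    dest: "/etc/nginx/ssl/{{ item }}"
    mode: "0600"
  loop:
    - cert.pem
    - key.pem
  notify: reload nginx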
defaults vs vars
Both directories hold variables, but they have different priorities in Ansible's variable precedence order.
defaults/main.yml — low priority, meant to be overridden:
---
# roles/webserver/defaults/main.yml
webserver_port: 80
webserver_ssl_enabled: false
webserver_ssl_port: 443
webserver_worker_processes: auto
webserver_client_max_body_size: "10m"
webserver_server_name: "{{ ansible_fqdn }}"
Anyone using your role can override these by setting the variable anywhere with higher precedence (playbook vars, host vars, group vars, -e on the command line).
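For instance, a consumer could override these defaults from group_vars without touching the role at all (the file path follows the standard inventory convention; the values are illustrative):

```yaml
# group_vars/webservers.yml — overrides the role's defaults for this group
webserver_port: 8080
webserver_ssl_enabled: true
```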
vars/main.yml — high priority, internal role constants:
---
# roles/webserver/vars/main.yml
# These are internal to the role and not expected to be overridden
_webserver_config_dir: /etc/nginx
_webserver_log_dir: /var/log/nginx
Use vars/ for values that must not be accidentally overridden. Use defaults/ for everything the role consumer should be able to configure.
Templates with Jinja2
Templates live in templates/ and use the .j2 extension. They are deployed with the ansible.builtin.template module, which renders variables into the file before copying it to the target.
{# roles/webserver/templates/nginx.conf.j2 #}
worker_processes {{ webserver_worker_processes }};
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    client_max_body_size {{ webserver_client_max_body_size }};

    server {
        listen {{ webserver_port }};
        server_name {{ webserver_server_name }};
{% if webserver_ssl_enabled %}
        listen {{ webserver_ssl_port }} ssl;
        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
{% endif %}
        root /var/www/html;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
Jinja2 essentials for templates:
- {{ variable }} — output a variable value
- {% if condition %}...{% endif %} — conditional block
- {% for item in list %}...{% endfor %} — loop
- {{ variable | default('fallback') }} — filter with default
- {{ ansible_hostname }} — Ansible facts are available as variables
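As a sketch of the loop syntax, a template fragment could render an IP allowlist from a hypothetical webserver_allowed_ips list variable (this variable is not defined in the role's defaults; it is assumed here purely for illustration):

```jinja
{# illustrative only: webserver_allowed_ips is a hypothetical list variable #}
{% for ip in webserver_allowed_ips | default([]) %}
allow {{ ip }};
{% endfor %}
deny all;
```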
Handlers
Handlers are tasks that run only when notified by another task, and only run once even if notified multiple times. This prevents nginx from restarting five times if five tasks each modify a config file.
---
# roles/webserver/handlers/main.yml
- name: restart nginx
  ansible.builtin.service:
    name: nginx
    state: restarted

- name: reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded

- name: restart postgresql
  ansible.builtin.service:
    name: postgresql
    state: restarted
Notify a handler with notify: in a task:
- name: Update nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: reload nginx  # Handler name must match exactly
Handlers run at the end of the play, after all tasks complete (unless you force them with meta: flush_handlers).
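When a later task in the same play depends on the restarted service, the pending handlers can be flushed mid-play. A sketch (the health-check URL is an illustrative assumption):

```yaml
- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: restart nginx

- name: Run pending handlers now instead of at the end of the play
  ansible.builtin.meta: flush_handlers

- name: Verify the restarted service responds
  ansible.builtin.uri:
    url: http://localhost/   # hypothetical health-check endpoint
    status_code: 200
```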
Role Dependencies in meta/main.yml
---
# roles/webserver/meta/main.yml
galaxy_info:
  author: yourname
  description: Install and configure nginx as a web server
  license: MIT
  min_ansible_version: "2.14"
  platforms:
    - name: Ubuntu
      versions:
        - jammy
        - noble
    - name: EL
      versions:
        - "9"

dependencies:
  - role: common
  - role: firewall
    vars:
      firewall_allowed_ports: [80, 443]
Dependencies in meta/main.yml are applied automatically before the role runs. The firewall role in this example receives firewall_allowed_ports scoped to this dependency invocation.
Using Roles in Playbooks
---
# site.yml
- name: Configure web servers
  hosts: webservers
  become: true
  vars:
    webserver_port: 8080
    webserver_ssl_enabled: true
  roles:
    - common
    - webserver
    - monitoring

- name: Configure database servers
  hosts: databases
  become: true
  roles:
    - common
    - role: database
      vars:
        db_max_connections: 200
For conditional role inclusion at the task level:
- name: Apply webserver role if web tag
  ansible.builtin.import_role:
    name: webserver
  when: "'web' in group_names"

- name: Include role dynamically
  ansible.builtin.include_role:
    name: "{{ item }}"
  loop:
    - monitoring
    - logging
Ansible Galaxy: Community Roles
Galaxy hosts thousands of community roles. Jeff Geerling's roles are a gold standard for RHEL/Ubuntu compatibility.
# Install a role
ansible-galaxy install geerlingguy.nginx
ansible-galaxy install geerlingguy.postgresql
# Roles are installed to ~/.ansible/roles/ by default
# or to roles/ in your project if ansible.cfg sets roles_path
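A minimal ansible.cfg that keeps installed roles inside the project directory could look like this (the second path keeps the user-level location as a fallback):

```ini
# ansible.cfg
[defaults]
roles_path = ./roles:~/.ansible/roles
```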
requirements.yml with Version Pinning
Never use community roles in production without pinning to a specific version. Use requirements.yml:
---
# requirements.yml
roles:
  - name: geerlingguy.nginx
    version: "3.2.0"
  - name: geerlingguy.postgresql
    version: "3.4.3"
  - src: https://github.com/myorg/myrole
    scm: git
    version: v1.2.0
    name: myrole

collections:
  - name: community.postgresql
    version: "3.4.0"
  - name: ansible.posix
    version: "1.5.4"
Install all pinned requirements:
ansible-galaxy install -r requirements.yml
ansible-galaxy collection install -r requirements.yml
Commit requirements.yml to version control. Regenerate the installed roles from it in CI — never commit the installed role directories themselves.
Tags for Selective Execution
Tags let you run only specific parts of a playbook without running the whole thing:
# In tasks/main.yml
- name: Install nginx packages
  ansible.builtin.package:
    name: nginx
    state: present
  tags:
    - install
    - nginx

- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: reload nginx
  tags:
    - configure
    - nginx
Run only tagged tasks:
# Run only configuration tasks (skip install)
ansible-playbook site.yml --tags configure
# Skip a specific tag
ansible-playbook site.yml --skip-tags install
# List all available tags
ansible-playbook site.yml --list-tags
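Two special tags are worth knowing: tasks tagged always run on every invocation unless explicitly skipped, and tasks tagged never run only when their tag is requested. A sketch (the reset script path is a hypothetical example):

```yaml
- name: Gather deployment facts
  ansible.builtin.setup:
  tags:
    - always          # runs even with --tags configure

- name: Wipe and reinitialize the database
  ansible.builtin.command: /usr/local/bin/db-reset   # hypothetical script
  tags:
    - never           # runs only when requested, e.g. --tags reset
    - reset
```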
Molecule: Testing Roles
Molecule tests roles by running them in containers (or VMs), verifying the result, and testing idempotency (running twice should produce no changes the second time).
pip install molecule "molecule-plugins[docker]"   # quote the brackets for zsh
# Initialize Molecule in an existing role
cd roles/webserver
molecule init scenario
# Directory structure added:
# molecule/
# └── default/
# ├── molecule.yml # Driver config (Docker)
# ├── converge.yml # Applies the role
# └── verify.yml # Assertions
A minimal molecule/default/molecule.yml:
---
driver:
  name: docker
platforms:
  - name: instance
    image: geerlingguy/docker-ubuntu2404-ansible:latest
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
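The generated converge.yml simply applies the role under test; a minimal version might look like this (the variable override is an illustrative choice to keep the scenario simple):

```yaml
---
# molecule/default/converge.yml
- name: Converge
  hosts: all
  become: true
  vars:
    webserver_ssl_enabled: false   # illustrative: skip SSL in the test scenario
  roles:
    - role: webserver
```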
molecule/default/verify.yml — assertions after the role runs:
---
- name: Verify nginx is running
  hosts: all
  tasks:
    - name: Check nginx service is active
      ansible.builtin.service_facts:

    - name: Assert nginx is running
      ansible.builtin.assert:
        that:
          # service_facts keys carry the unit suffix on systemd hosts
          - "'nginx.service' in services"
          - "services['nginx.service'].state == 'running'"

    - name: Check nginx port is listening
      ansible.builtin.wait_for:
        port: 80
        timeout: 5
Run the full test cycle:
molecule test      # full cycle: create → converge → idempotence → verify → destroy
molecule converge  # apply the role (keep the container for debugging)
molecule verify    # run verify.yml against the running container
molecule login     # open a shell inside the test container
molecule destroy   # tear down test containers
Idempotency is tested automatically: molecule test runs the role twice and fails if the second run reports changes.
Ansible roles with Molecule tests are the foundation of reliable, maintainable infrastructure automation. Write the role, test it locally, pin dependencies in requirements.yml, and apply it confidently to production.