Overview & Architecture
This post shows a pattern used in many professional automation setups: a controller VM with a public IP runs Ansible (or acts as a jump host), and private nodes live in a subnet that accepts SSH only from the controller subnet. Terraform provisions the resources and injects your SSH public key; Ansible uses key-based authentication and ProxyJump to reach private nodes.
Why passwordless SSH for Ansible?
Key-based authentication is non-interactive and more secure than passwords. It pairs perfectly with automation — Ansible can run tasks reliably without prompts. When private nodes are placed in a subnet that only accepts SSH from a controller, you get a secure, production-friendly pattern.
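To make the jump-host idea concrete, here is a minimal `~/.ssh/config` sketch. It is illustrative only: the `controller` host alias is made up, and `<controller_public_ip>` is a placeholder for the Terraform output shown later in this post.

```sshconfig
# Hypothetical ~/.ssh/config; <controller_public_ip> comes from the
# controller_public_ip Terraform output further down.
Host controller
    HostName <controller_public_ip>
    User azureuser
    IdentityFile ~/.ssh/ansible_key

# Private nodes: hop through the controller transparently
Host 10.0.2.*
    User azureuser
    IdentityFile ~/.ssh/ansible_key
    ProxyJump controller
```

With this in place, `ssh 10.0.2.4` hops through the controller automatically; the inventory below achieves the same effect with a ProxyCommand instead.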
Terraform: what the provided main.tf does
It provisions:
- Resource group
- VNet and two subnets (controller + nodes)
- Network Security Groups to limit SSH (sketched below)
- Controller VM with a public IP and `admin_ssh_key`
- Node VMs without public IPs
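The NSG piece is the part that enforces "SSH only from the controller subnet". A hedged sketch of what that looks like — the resource names (`nodes_nsg`, `azurerm_subnet.nodes`) and the `10.0.1.0/24` controller CIDR are assumptions, not the post's exact `main.tf`:

```hcl
# Sketch: allow SSH into the nodes subnet only from the controller subnet.
# Names and the 10.0.1.0/24 CIDR are assumptions.
resource "azurerm_network_security_group" "nodes_nsg" {
  name                = "nodes-nsg"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "allow-ssh-from-controller"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "10.0.1.0/24" # controller subnet
    destination_address_prefix = "*"
  }
}

# Attach the NSG to the nodes subnet so the rule actually applies
resource "azurerm_subnet_network_security_group_association" "nodes" {
  subnet_id                 = azurerm_subnet.nodes.id
  network_security_group_id = azurerm_network_security_group.nodes_nsg.id
}
```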
Key snippet (controller VM)
resource "azurerm_linux_virtual_machine" "controller" {
name = "controller-vm"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
size = "Standard_B1s"
admin_username = var.admin_username
disable_password_authentication = true
admin_ssh_key {
username = var.admin_username
public_key = file(var.ssh_public_key_path)
}
...
}
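A node VM looks almost identical, except its NIC has no public IP, so it is reachable only through the controller. The following is a hedged sketch, not the post's exact `main.tf`: the NIC resource name, count, and image are assumptions.

```hcl
# Sketch: node VMs with no public IP; NIC name, count, and image are
# assumptions, not the post's exact main.tf.
resource "azurerm_linux_virtual_machine" "nodes" {
  count                           = 2
  name                            = "node-${count.index + 1}"
  resource_group_name             = azurerm_resource_group.rg.name
  location                        = azurerm_resource_group.rg.location
  size                            = "Standard_B1s"
  admin_username                  = var.admin_username
  disable_password_authentication = true
  network_interface_ids           = [azurerm_network_interface.node_nic[count.index].id]

  admin_ssh_key {
    username   = var.admin_username
    public_key = file(var.ssh_public_key_path)
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```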
Variables & outputs
variables.tf
variable "admin_username" {
type = string
default = "azureuser"
}
variable "ssh_public_key_path" {
type = string
default = "~/.ssh/ansible_key.pub"
}
output.tf
output "controller_public_ip" {
value = azurerm_public_ip.controller_ip.ip_address
}
output "nodes_private_ips" {
value = azurerm_linux_virtual_machine.nodes[*].private_ip_address
}
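After `terraform apply`, these outputs feed straight into the inventory below:

```bash
# Controller's public IP for the ProxyCommand / ProxyJump line
terraform output -raw controller_public_ip
# Nodes' private IPs (JSON list) for the ansible_host entries
terraform output -json nodes_private_ips
```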
Ansible inventory & playbooks
Inventory (ProxyJump)
```ini
[nodes]
node1 ansible_host=10.0.2.4
node2 ansible_host=10.0.2.5

[nodes:vars]
ansible_user=azureuser
ansible_ssh_private_key_file=~/.ssh/ansible_key
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/ansible_key azureuser@<controller_public_ip>"'
```

Replace `<controller_public_ip>` with the `controller_public_ip` Terraform output.
Optional play: push controller public key to nodes
```yaml
---
- name: Ensure controller public key present on nodes
  hosts: nodes
  gather_facts: no
  become: yes
  tasks:
    - name: read controller public key (local or delegate)
      slurp:
        src: /home/azureuser/.ssh/ansible_controller.pub
      register: pubkey_raw
      delegate_to: localhost
      run_once: true

    - name: add controller public key to authorized_keys
      authorized_key:
        user: azureuser
        key: "{{ pubkey_raw.content | b64decode }}"
        state: present
        manage_dir: true
        comment: "controller-key"
      tags: ['ssh', 'infra']
```
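Run it from wherever `ansible_controller.pub` lives, since the play slurps the key from localhost. The filename here is hypothetical:

```bash
# push-controller-key.yml is an assumed filename for the play above
ansible-playbook -i inventory.ini push-controller-key.yml
```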
Commands: run these

- Generate a keypair:

```bash
ssh-keygen -t ed25519 -f ~/.ssh/ansible_key -C "ansible@me"
```

- Terraform apply:

```bash
terraform init
terraform plan -out=tfplan
terraform apply tfplan
# or: terraform apply -var="ssh_public_key_path=~/.ssh/ansible_key.pub"
```

- Test ProxyJump & the Ansible ping (`<controller_public_ip>` is the Terraform output):

```bash
ssh -i ~/.ssh/ansible_key -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/ansible_key azureuser@<controller_public_ip>" azureuser@10.0.2.4
ansible -i inventory.ini nodes -m ping -u azureuser --private-key ~/.ssh/ansible_key
```
Troubleshooting & best practices
- Permission denied: ensure private key permissions: `chmod 600 ~/.ssh/ansible_key`.
- ProxyJump fails: confirm you can SSH to the controller first, then test the ProxyCommand manually (see the debug sketch after this list).
- Terraform problems: check quota, region, and image SKU availability for your selected VM size.
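A quick way to debug the jump in two steps; `-J` is OpenSSH's shorthand for ProxyJump, and `<controller_public_ip>` is the Terraform output:

```bash
# Step 1: verify the controller hop on its own
ssh -v -i ~/.ssh/ansible_key azureuser@<controller_public_ip>
# Step 2: verify the full jump to a private node (-J = ProxyJump)
ssh -v -i ~/.ssh/ansible_key -J azureuser@<controller_public_ip> azureuser@10.0.2.4
```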
Want the full quickstart ZIP?
I'll prepare a ZIP containing cleaned `main.tf`, `variables.tf`, `output.tf`, `inventory.ini`, sample playbooks, README, and the PNG architecture image sized for Open Graph (1200×630).