Packer and Talos Image Factory on Proxmox

HashiCorp’s Packer is an open-source tool for creating identical machine images for multiple platforms from a single source configuration. It automates the process of building pre-configured images, ensuring consistency across environments. Packer uses JSON or HCL templates to define the image build process, including provisioning steps.
In our 3000+ Clusters Part 2: The journey in edge compute with Talos Linux article, I mentioned that we are using Packer to create our image. Below is a quick guide on how this is done.
Getting Started
The idea is for Packer to create the VM to our specification and boot some sort of live OS, from which it can download Talos from their Image Factory API. This is an API where we can define how we want our image to look. For the live OS, I went for something small: Arch Linux. The image is then written to the VM's boot disk, and the VM is shut down and converted to a template with some sensible naming.
So, to start, either download a live image ISO and add it to your Proxmox host, or just use the download URL and let Packer/Proxmox do this for us.
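If you go the manual route, fetching the ISO into the default local ISO storage on the Proxmox host is enough. A minimal sketch, assuming the standard PVE storage layout and the mirror referenced further down:
# Run on the Proxmox host; /var/lib/vz/template/iso is where the "local"
# storage keeps ISO images by default.
wget -O /var/lib/vz/template/iso/archlinux-x86_64.iso \
  https://mirrors.dotsrc.org/archlinux/iso/2024.02.01/archlinux-x86_64.iso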
We will also need an API token for Packer to interact with the PVE API; we will use this later.
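If you do not have a token yet, one can be created on the PVE host with pveum. A sketch, assuming the user@domain.com user from the examples later in this post already exists (the secret is printed once, so note it down):
# Create an API token named "packer" without privilege separation, so it
# inherits the user's permissions. The user itself still needs
# VM-management privileges (e.g. a role such as PVEVMAdmin via an ACL).
pveum user token add user@domain.com packer --privsep 0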
Defining the build
$ mkdir talos_packer && cd talos_packer
Variables
Create a variables.pkr.hcl and add the following; these are the variables we will use in our build:
variable "proxmox_username" {
type = string
}
variable "proxmox_token" {
type = string
}
variable "proxmox_url" {
type = string
}
variable "proxmox_nodename" {
type = string
}
variable "proxmox_storage" {
type = string
}
variable "proxmox_storage_type" {
type = string
}
variable "static_ip" {
type = string
}
variable "gateway" {
type = string
}
variable "talos_version" {
type = string
default = "v1.9.2"
}
Create a build file
Create a proxmox.pkr.hcl file that defines our build. We need to create a source definition and a packer definition, and will utilise the github.com/hashicorp/proxmox plugin.
packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.7"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

locals {
  timestamp = timestamp()
}

source "proxmox-iso" "talos" {
  proxmox_url              = var.proxmox_url
  username                 = var.proxmox_username
  token                    = var.proxmox_token
  node                     = var.proxmox_nodename
  insecure_skip_tls_verify = true

  iso_file = "local:iso/archlinux-x86_64.iso"
  # If you want to use a mirror directly, the below works nicely.
  # iso_url          = "https://mirrors.dotsrc.org/archlinux/iso/2024.02.01/archlinux-x86_64.iso"
  # iso_checksum     = "sha256:891ebab4661cedb0ae3b8fe15a906ae2ba22e284551dc293436d5247220933c5"
  # iso_storage_pool = "local"
  # iso_download_pve = true
  unmount_iso = true

  os              = "l26"
  scsi_controller = "virtio-scsi-pci"

  network_adapters {
    bridge = "vmbr0"
    model  = "virtio"
  }

  disks {
    type         = "virtio"
    storage_pool = var.proxmox_storage
    format       = "qcow2"
    disk_size    = "20GB"
  }

  disks {
    type         = "virtio"
    storage_pool = var.proxmox_storage
    format       = "qcow2"
    disk_size    = "10GB"
  }

  memory   = 4096
  cpu_type = "host"
  sockets  = 2
  cores    = 2
  tags     = "${var.talos_version};template"

  ssh_username = "root"
  ssh_password = "packer"
  ssh_timeout  = "15m"
  qemu_agent   = true

  template_name        = "talos-template-${var.talos_version}-qemu"
  template_description = "${local.timestamp} - Talos ${var.talos_version} template"

  boot_wait = "20s"
  boot_command = [
    "<enter><wait50s>",
    "passwd<enter><wait1s>packer<enter><wait1s>packer<enter>",
    "ip address add ${var.static_ip} broadcast + dev ens18<enter><wait>",
    "ip route add 0.0.0.0/0 via ${var.gateway} dev ens18<enter><wait>",
    "ip link set dev ens18 mtu 1300<enter>",
  ]
}
There are a few things custom to our environment here, for example the naming of the template, talos-template-${var.talos_version}-qemu, and various tags. The interesting part (I think) is the boot_command:
"<enter><wait50s>",
"passwd<enter><wait1s>packer<enter><wait1s>packer<enter>",
This defines a sequence of actions, i.e. keystrokes and wait times, on the console of the VM, essentially mimicking a user at the keyboard: it sets the root password so Packer can SSH in, and it works around some quirky network limitations we have, namely by setting the MTU.
In addition, we also define our disks: one for the Talos OS, and one for the local-storage-provisioner for our Kubernetes workloads to use if needed. We define memory, CPUs, and in our case qemu_agent = true, which I will get to in a second.
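One step that is easy to miss: because the packer block declares required_plugins, the Proxmox plugin has to be installed once before the first build:
# One-time setup, run from the talos_packer directory; downloads the
# plugins declared under required_plugins.
packer init .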
Define local variables
In vars/local.pkrvars.hcl we set values for the variables we defined earlier. Add the following:
proxmox_storage = "local"
proxmox_storage_type = "lvm"
talos_version = "v1.9.2" // or leave out to accept the default
static_ip = "10.0.30.163/25" // static IP of the arch linux vm
gateway = "10.0.30.129" // and gw
// you can also add more in here, like proxmox_host etc..
I prefer to use env vars for my Proxmox configuration, that is proxmox_username, proxmox_url and so on, so I pass those with the command itself.
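For reference, Packer can also pick variables up straight from the environment when they are prefixed with PKR_VAR_, which would avoid the -var flags in the build command entirely. A sketch, mirroring the example values used later in this post:
# Packer maps PKR_VAR_<name> to the corresponding HCL variable.
export PKR_VAR_proxmox_username='user@domain.com!packer'
export PKR_VAR_proxmox_token='my-key'
export PKR_VAR_proxmox_url='https://pve.domain.com:8006/api2/json'
export PKR_VAR_proxmox_nodename='my-node-name'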
Creating and writing the Talos image
The Talos Linux Image Factory, developed by Sidero Labs, Inc., offers a method to download various boot assets for Talos Linux.
The Talos Image Factory service is an API (or web UI) that can take a schematic file and create a custom image for us to use. We are interested in the NoCloud image for our builds, and we would like to add the siderolabs/qemu-guest-agent extension, hence why we set qemu_agent = true in our Packer build file.

From here, we “simply” download our image through the API (through a proxy, in our case) using a schematic. Create a files/schematic.yaml and add the following:
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/qemu-guest-agent
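If you want to sanity-check the schematic before baking it into the build, you can POST it to the factory by hand; this is the same call the provisioner below makes, shown here without the proxy:
# The factory responds with a JSON document containing the schematic id,
# which is then used to address the generated image.
curl -sX POST --data-binary @files/schematic.yaml https://factory.talos.dev/schematics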
Then, using dd, we can write the image directly to the boot disk of the VM. That can easily be scripted in our Packer build; it is added to the proxmox.pkr.hcl file in the following way:
build {
  name    = "release"
  sources = ["source.proxmox-iso.talos"]

  provisioner "file" {
    source      = "./files/schematic.yaml"
    destination = "/tmp/schematic.yaml"
  }

  provisioner "shell" {
    inline = [
      "echo 'Setting Up Proxy'",
      "export HTTP_PROXY=http://proxy.domain.com:8080",
      "export HTTPS_PROXY=http://proxy.domain.com:8080",
      "export NO_PROXY=domain.com",
      "echo 'Requesting build image from Talos Factory'",
      "ID=$(curl -kLX POST --data-binary @/tmp/schematic.yaml https://factory.talos.dev/schematics | grep -o '\"id\":\"[^\"]*' | sed 's/\"id\":\"//')",
      "URL=https://pxe.factory.talos.dev/image/$ID/${var.talos_version}/nocloud-amd64.raw.xz",
      "echo \"Downloading build image from Talos Factory: $URL\"",
      "curl -kL \"$URL\" -o /tmp/talos.raw.xz",
      "echo 'Writing build image to disk'",
      "xz -d -c /tmp/talos.raw.xz | dd of=/dev/vda && sync",
      "echo 'Done'",
    ]
  }
}
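Should you ever need to double-check which device dd will hit, the live environment makes that easy; with the two virtio disks defined above, the 20GB boot disk shows up first. A quick manual check from the Arch shell:
# /dev/vda should be the 20GB boot disk, /dev/vdb the 10GB data disk.
lsblk -o NAME,SIZE,TYPE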
You should now have the following structure:
talos_packer/
├── proxmox.pkr.hcl
├── variables.pkr.hcl
├── files/
│   └── schematic.yaml
└── vars/
    └── local.pkrvars.hcl
Building our image
Once all the “hard work” is done, we can simply build our image from the CLI. As stated earlier, I like to define some variables in my environment so I can easily script this via some automation later on.
We need a token created in our Proxmox instance, and we can define the following env vars:
export PROXMOX_TOKEN_ID='user@domain.com!packer'
export PROXMOX_TOKEN_SECRET='my-key'
export PROXMOX_HOST="https://pve.domain.com:8006/api2/json"
export PROXMOX_NODE_NAME="my-node-name"
export CLUSTER_NAME="my-cluster-name"
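A quick way to confirm the token works before kicking off a build is to hit the version endpoint; note that PROXMOX_HOST already carries the /api2/json prefix:
# Proxmox API tokens authenticate with a single Authorization header.
curl -k -H "Authorization: PVEAPIToken=${PROXMOX_TOKEN_ID}=${PROXMOX_TOKEN_SECRET}" \
  "${PROXMOX_HOST}/version"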
Then run the build:
packer build -on-error=ask \
-var-file="vars/local.pkrvars.hcl" \
-var proxmox_username="${PROXMOX_TOKEN_ID}" \
-var proxmox_token="${PROXMOX_TOKEN_SECRET}" \
-var proxmox_nodename="${PROXMOX_NODE_NAME}" \
-var proxmox_url="${PROXMOX_HOST}" .
Eventually, the build will hopefully complete.

And we now have an image we can deploy using cloud-init NoCloud (or any other cloud-init method).
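As a rough sketch of what that deployment can look like with Proxmox's own cloud-init support (the VM IDs, name, and address here are made up; the template ID is whatever Proxmox assigned during the build):
# Clone the template to a new VM, attach a cloud-init drive so Talos'
# NoCloud datasource has something to read, and hand it an address.
qm clone 9000 100 --name talos-node-1 --full
qm set 100 --ide2 local:cloudinit --ipconfig0 ip=10.0.30.10/25,gw=10.0.30.129
qm start 100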
The process is now rinse and repeat for any new versions of Talos: simply adjust the vars/local.pkrvars.hcl to the version you need and re-run the build. This can of course be fully automated in the CI of your choice 😃
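Since the template name embeds the Talos version, a CI job really only needs to override talos_version and loop. A sketch, assuming the env vars from above:
# Build a template per Talos version; a later -var overrides the var-file.
for v in v1.9.2 v1.9.3; do
  packer build -var-file="vars/local.pkrvars.hcl" \
    -var talos_version="$v" \
    -var proxmox_username="${PROXMOX_TOKEN_ID}" \
    -var proxmox_token="${PROXMOX_TOKEN_SECRET}" \
    -var proxmox_nodename="${PROXMOX_NODE_NAME}" \
    -var proxmox_url="${PROXMOX_HOST}" .
done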
Have fun!