This howto is outdated. Please help update it.
Every cluster gets its own deployment host. This host is used to build the individual images for every node, and it also acts as an Ansible controller that can update the nodes' configuration via Ansible.
There is a single Ansible configuration per host. It is used during the build process (OpenWrt ImageBuilder) to bake each host's individual configuration into its image, and it is also used to deploy updated configurations to nodes that are already deployed and running.
To be more specific: the Ansible roles for OpenWrt cannot only deploy the configuration files to a running OpenWrt system, they can also deploy them into a local folder. This is especially useful if you want to build your own images with ImageBuilder and directly include the right configuration for each node.
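For illustration, deploying "into a folder" simply means the roles write every file into the ImageBuilder FILES overlay instead of onto the device; the overlay mirrors the image's root filesystem. A minimal sketch of that path mapping (the function name `overlay_path` is made up for this example):

```python
import os

def overlay_path(deployroot, target_path):
    # A file written to this location ends up at target_path inside the
    # built image, because ImageBuilder copies the FILES overlay onto the
    # image's root filesystem. The leading slash is stripped so the join
    # lands inside the overlay directory.
    return os.path.join(deployroot, target_path.lstrip("/"))

print(overlay_path("files/", "/etc/config/network"))
```

So a role that would normally write /etc/config/network on the router writes files/etc/config/network on the deployment host instead.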
Download and unpack the ImageBuilder. Adjust the release if you like.
mkdir pplznetdraft
cd pplznetdraft
wget https://downloads.openwrt.org/releases/22.03.0-rc5/targets/x86/64/openwrt-imagebuilder-22.03.0-rc5-x86-64.Linux-x86_64.tar.xz
tar -xvf openwrt-imagebuilder-22.03.0-rc5-x86-64.Linux-x86_64.tar.xz
cd openwrt-imagebuilder-22.03.0-rc5-x86-64.Linux-x86_64
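If you switch releases, only the version string in the URL changes; the rest of the download path follows a fixed pattern on downloads.openwrt.org. A small helper makes that explicit (a sketch; the pattern is the one used in the wget line above and holds for the x86/64 target — other targets use different archive names):

```python
def imagebuilder_url(release, target="x86", subtarget="64"):
    # Follows the downloads.openwrt.org layout used above:
    # releases/<release>/targets/<target>/<subtarget>/openwrt-imagebuilder-...
    return (
        f"https://downloads.openwrt.org/releases/{release}"
        f"/targets/{target}/{subtarget}"
        f"/openwrt-imagebuilder-{release}-{target}-{subtarget}"
        ".Linux-x86_64.tar.xz"
    )

print(imagebuilder_url("22.03.0-rc5"))
```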
Put this script into build.sh. In this example I create 60 images for n300-n359 using OpenWrt 22.03-rc5.
#!/usr/bin/env bash
ansibledir="/pvedisk/ansible_pplznet"
workdir="/pvedisk/pplznetdraft"
imagebuilderdir="openwrt-imagebuilder-22.03.0-rc5-x86-64.Linux-x86_64"
imagepath="bin/targets/x86/64/openwrt-22.03.0-rc5-x86-64-generic-ext4-combined.img.gz"
PACKAGES="python3 bind-host luci-app-ddns ddns-scripts-services ddns-scripts-nsupdate ddns-scripts bind-client tcpdump babeld collectd collectd-mod-cpu collectd-mod-interface collectd-mod-iwinfo collectd-mod-load collectd-mod-memory collectd-mod-network collectd-mod-rrdtool e2fsprogs htop iftop luci luci-app-babeld luci-app-statistics luci-app-vnstat2 luci-app-wireguard mtr nmap-full prometheus-node-exporter-lua vim-fuller vnstat2 vnstati2 wg-installer-client wireguard-tools zlib"
FILES="files/"
for index in {300..359}; do
    # delete old configs
    rm -rf "${workdir}/${imagebuilderdir}/files/"*
    # run ansible to deploy the host-specific config
    cd "${ansibledir}"
    ansible-playbook "buildimage_n${index}.yml"
    # build image
    cd "${workdir}/${imagebuilderdir}"
    # if you want to build without extra packages:
    #make image FILES="${FILES}"
    make image PACKAGES="${PACKAGES}" FILES="${FILES}"
    # copy and extract
    cp "${imagepath}" "../n${index}.img.gz"
    gunzip -f "../n${index}.img.gz"
done
This script runs Ansible to deploy the configuration files locally so they can be built into each individual image.
In the Ansible path I have configured those 60 hosts. For every host there is a playbook, like this example for n300:
- hosts: n300.demo.junicast.de
  become: true
  gather_facts: false
  connection: local
  ignore_errors: yes
  vars:
    ansible_host: 127.0.0.1
    skip_handlers: true
    openwrt_deployroot: "/pvedisk/pplznetdraft/openwrt-imagebuilder-22.03.0-rc5-x86-64.Linux-x86_64/files/"
    openwrt_system_deployroot: "{{ openwrt_deployroot }}"
    openwrt_dropbear_deployroot: "{{ openwrt_deployroot }}"
    openwrt_network_deployroot: "{{ openwrt_deployroot }}"
    openwrt_firewall_deployroot: "{{ openwrt_deployroot }}"
    openwrt_babeld_deployroot: "{{ openwrt_deployroot }}"
    openwrt_dropbear_runimagebuilder: true
    openwrt_network_runimagebuilder: true
    openwrt_firewall_runimagebuilder: true
    openwrt_babeld_runimagebuilder: true
  roles:
    - imp1sh.ansible_pplznet.pplznet_node_deploy
    - imp1sh.ansible_openwrt.ansible_openwrtsystem
    - imp1sh.ansible_openwrt.ansible_openwrtdropbear
    - imp1sh.ansible_openwrt.ansible_openwrtnetwork
    - imp1sh.ansible_openwrt.ansible_openwrtfirewall
    - imp1sh.ansible_openwrt.ansible_openwrtbabeld
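Since the 60 playbooks differ only in the node number, one option is to generate them from a template instead of writing them by hand. A sketch (assumes the nXXX.demo.junicast.de naming scheme from the example above; the template body is abbreviated, fill in the remaining vars and roles from your playbook):

```python
# generate_playbooks.py -- write one buildimage_nNNN.yml per node (sketch)
TEMPLATE = """\
- hosts: n{index}.demo.junicast.de
  become: true
  gather_facts: false
  connection: local
  ignore_errors: yes
  vars:
    ansible_host: 127.0.0.1
    skip_handlers: true
  # ... remaining vars and roles as in the example above ...
"""

def render_playbook(index):
    # fill the node number into the template
    return TEMPLATE.format(index=index)

def write_playbooks(start=300, end=360):
    # one playbook file per node, named as expected by build.sh
    for index in range(start, end):
        with open(f"buildimage_n{index}.yml", "w") as fh:
            fh.write(render_playbook(index))
```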
In the end there are individual .img files, which I import into Proxmox with this script.
#!/usr/bin/python3
# Unlike create_virtenv2, all interfaces are attached to vmbr1.
import os

vlans = [
    {'guestid': 300, 'vlanid': 1300, 'nic': 'net0', 'bridge': 'vmbr1'},
    {'guestid': 300, 'vlanid': 71,   'nic': 'net1', 'bridge': 'vmbr1'},
    {'guestid': 300, 'vlanid': 2300, 'nic': 'net2', 'bridge': 'vmbr1'},
    {'guestid': 300, 'vlanid': 3000, 'nic': 'net3', 'bridge': 'vmbr1'},
    {'guestid': 300, 'vlanid': 2999, 'nic': 'net4', 'bridge': 'vmbr1'},
    # [...] define all your vlans
]

for index in range(300, 360):
    # create the guest
    runcommandqm = "qm create " + str(index) + " --agent 0 --autostart 0 --balloon 0 --cores 1 --cpu host --memory 128 --name node" + str(index)
    print(runcommandqm)
    os.system(runcommandqm)
    # import the image built earlier
    runcommandimportdisk = "qm importdisk " + str(index) + " /pvedisk/pplznetdraft/n" + str(index) + ".img pvedisk"
    # this for zfs:
    # runcommandactivatedisk = "qm set " + str(index) + " --scsihw virtio-scsi-pci --scsi0 pvedisk:vm-" + str(index) + "-disk-0"
    # this for raw, e.g. btrfs:
    runcommandactivatedisk = "qm set " + str(index) + " --scsihw virtio-scsi-pci --scsi0 pvedisk:" + str(index) + "/vm-" + str(index) + "-disk-0.raw"
    print(runcommandimportdisk)
    os.system(runcommandimportdisk)
    print(runcommandactivatedisk)
    os.system(runcommandactivatedisk)
    # attach only the VLAN interfaces that belong to this guest
    for vlan in vlans:
        if vlan['guestid'] != index:
            continue
        runcommandvlan = "qm set " + str(vlan['guestid']) + " --" + str(vlan['nic']) + " virtio,bridge=" + str(vlan['bridge']) + ",tag=" + str(vlan['vlanid'])
        print(runcommandvlan)
        os.system(runcommandvlan)

for index in range(300, 360):
    runcommandstart = "qm start " + str(index)
    print(runcommandstart)
    os.system(runcommandstart)
Depending on how your storage volumes are named, some adjustments may be needed; the bridge names and the path where the disk ends up can also differ. The script contains one example line for ZFS and one for raw files (e.g. on btrfs).
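The script above builds each qm command by string concatenation and runs it through os.system(). A variant that builds an argument vector and runs it with subprocess avoids the intermediate shell and is easier to test in isolation; a sketch (the helper names are made up, the qm arguments are the ones used above):

```python
import subprocess

def qm_set_vlan_cmd(guestid, nic, bridge, vlanid):
    # build the argument vector for attaching one VLAN-tagged virtio NIC
    return ["qm", "set", str(guestid),
            f"--{nic}", f"virtio,bridge={bridge},tag={vlanid}"]

def apply_vlan(vlan):
    # subprocess.run with a list skips the shell entirely;
    # check=True raises if qm exits non-zero
    subprocess.run(qm_set_vlan_cmd(vlan['guestid'], vlan['nic'],
                                   vlan['bridge'], vlan['vlanid']),
                   check=True)
```

The list form also makes failures visible: with os.system() a failing qm call is silently ignored, while check=True stops the run.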