You can also install the 'warewulf4' package with zypper, but note that if you do,
you have to replace '/var/warewulf' with '/var/lib/warewulf'
throughout the rest of this document.
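For reference, a minimal sketch of that installation step (standard zypper usage; run as root):

```shell
# Install the Warewulf 4 package from the distribution repositories.
# Remember: this package keeps its state under /var/lib/warewulf.
zypper install warewulf4
```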
Edit the file
/etc/warewulf/warewulf.conf and ensure that you've set the appropriate
configuration parameters. Here are some of the defaults for reference, assuming that
is the IP address of your cluster's private network interface:
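As an illustrative sketch only (the range start, netmask, and controller address below are assumptions consistent with the 192.168.200.x examples in this document, not authoritative defaults), the relevant part of warewulf.conf might look like:

```yaml
ipaddr: 192.168.200.1
netmask: 255.255.255.0
network: 192.168.200.0
dhcp:
  enabled: true
  range start: 192.168.200.50
  range end: 192.168.200.99
```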
The DHCP range ends at
192.168.200.99 and, as you will see below, the first node's static IP
address (post boot) is configured to
There are a number of services and configurations that Warewulf relies on to operate.
If you wish to configure all services, you can do so individually (omitting the
will print help and usage instructions).
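A hedged sketch of that step (subcommand names come from the wwctl CLI; check `wwctl configure --help` on your version):

```shell
# Configure all dependent services in one pass:
wwctl configure --all

# Or configure them one at a time, for example:
wwctl configure dhcp
wwctl configure tftp
```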
If the dhcpd service was not used before, you will have to add the interface on which
the cluster network is running to the
DHCP_INTERFACE in the file
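On SUSE-style systems this is typically a file under /etc/sysconfig (the exact path and the interface name below are assumptions; check your distribution's dhcpd packaging):

```text
# e.g. /etc/sysconfig/dhcpd (illustrative)
DHCP_INTERFACE="eth1"
```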
This will pull a basic VNFS container from Docker Hub, import the currently running kernel from the controller node, and set both in the "default" node profile.
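Commands along these lines accomplish that (the container URI and local name are illustrative, not the only options):

```shell
# Import a base container image and make it the default profile's container:
wwctl container import docker://warewulf/rocky-8 rocky-8 --setdefault

# Import the kernel currently running on the controller and set it as default:
wwctl kernel import $(uname -r) --setdefault
```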
The --setdefault arguments above will automatically set those entries in the default
profile, but if you want to set them by hand to something different, you can do the following:
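A sketch of doing that by hand (the container name and kernel version here are assumptions carried over from the import step; verify flag names with `wwctl profile set --help`):

```shell
wwctl profile set default --container rocky-8
wwctl profile set default --kernelversion $(uname -r)
```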
Next we set some default networking configurations for the first Ethernet device. On modern Linux distributions, the name of the device is not critical, as it will be set up according to the HW address. Because all nodes will share the netmask and gateway configuration, we can set them in the default profile as follows:
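For example (addresses assume the 192.168.200.0/24 network used throughout this document; adjust for your site):

```shell
wwctl profile set default --netdev eth0 \
    --netmask 255.255.255.0 --gateway 192.168.200.1
```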
Adding nodes can be done while setting configurations in one command. Here we are setting
the IP address of
eth0 and setting this node to be discoverable, which will then
automatically have the HW address added to the configuration as the node boots.
Node names must be unique. If you have node groups and/or multiple clusters, designate them using dot notation.
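A sketch of adding a first node this way (hostname, IP address, and device name are illustrative):

```shell
# Add a node, assign its post-boot static IP, and mark it discoverable
# so its HW address is recorded automatically at first boot:
wwctl node add n0000.cluster --netdev eth0 --ipaddr 192.168.200.50 --discoverable
```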
Note that a node's full configuration comes from both cascading profiles and node-specific configurations, which always supersede profile configurations.
There are two types of overlays: system and runtime overlays.
System overlays are provisioned to the node before
/sbin/init is called. This enables us
to prepopulate node configurations with content that is node-specific, like networking and
Runtime overlays are provisioned after the node has booted and periodically during the normal runtime of the node. Because these overlays are provisioned at periodic intervals, they are very useful for content that changes, like users and groups.
Overlays are generated from a template structure that is viewed using the
commands. Files that end in the
.ww suffix are templates and abide by standard
text/template rules (as in Go's text/template package). This supports loops, arrays, variables, and functions, making overlays very flexible.
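Since .ww files follow text/template syntax, a minimal fragment might look like the following (the field names are hypothetical placeholders, not necessarily Warewulf's actual template data; inspect an existing overlay for the real ones):

```text
# example.ww (illustrative)
hostname={{ .Id }}
{{ if .Ipaddr }}address={{ .Ipaddr }}{{ end }}
```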
When using the overlay subsystem, system overlays are never shown by default. So when running
overlay commands, you are always looking at runtime overlays unless the
-s option is passed.
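In practice that looks like the following (assuming `wwctl overlay list` as the listing command):

```shell
# Shows runtime overlays only (the default view):
wwctl overlay list

# Include system overlays as well:
wwctl overlay list -s
```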
All overlays are compiled before being provisioned. This accelerates the provisioning process because there is less to do when nodes are being managed at scale.
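If you want to trigger that compilation by hand, something like the following is the usual entry point (verify against `wwctl overlay --help`):

```shell
# Rebuild the compiled overlay images ahead of provisioning:
wwctl overlay build
```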
Here are some of the common