Properly implement remote nodes as GlusterFS clients

Remote nodes are now true GlusterFS clients:
- Only install glusterfs-client packages (not server)
- Don't run glusterd service
- Don't contribute storage bricks
- Mount volume as client from full nodes
- Perfect for edge locations with high latency

Full nodes are GlusterFS servers:
- Install and run glusterfs-server
- Contribute storage bricks
- Participate in replication
- Must be used in low-latency environments

This prevents replication delays - writes only wait for full nodes,
not remote clients. Remote nodes get eventual consistency.
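The role split described above comes down to which packages each node type installs. A minimal sketch (the `packages_for` helper is illustrative, not part of the script; the package names are the ones the patch installs):

```shell
# Hypothetical helper showing the per-role package split this commit introduces.
packages_for() {
    if [ "$1" = "remote" ]; then
        # Client bits only: FUSE mount support, no glusterd daemon, no brick.
        echo "glusterfs glusterfs-fuse"
    else
        # Full node: the server package brings glusterd and brick management.
        echo "glusterfs-server"
    fi
}

packages_for remote   # -> glusterfs glusterfs-fuse
packages_for full     # -> glusterfs-server
```

Because remote nodes never run glusterd or hold a brick, they are invisible to the replication quorum; writes complete once the full nodes acknowledge them.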
Dominik Moritz Roth 2025-08-24 18:45:53 +02:00
parent b3db6f0f82
commit bb0514469d
2 changed files with 95 additions and 57 deletions


@@ -6,7 +6,7 @@
 Secure AlmaLinux (RHEL) Server setup with LUKS encryption, Tang, TPM and RAID1 for Hetzner Dedicated Servers.
-## Features
+### Features
 - AlmaLinux Server base
 - Full disk encryption with LUKS
@@ -16,7 +16,7 @@ Secure AlmaLinux (RHEL) Server setup with LUKS encryption, Tang, TPM and RAID1 f
 - SSH key-only access with early boot SSH via dropbear
 - Best-in-class terminal: zsh + powerlevel10k + evil tmux
-## Unlock Strategy
+### Unlock Strategy
 1. **Automatic unlock via Tang/TPM** (default):
    - Configure TPM2 and/or Tang servers in post-install.sh
@@ -28,7 +28,7 @@ Secure AlmaLinux (RHEL) Server setup with LUKS encryption, Tang, TPM and RAID1 f
    - Enter LUKS passphrase when prompted (twice, once per disk)
    - Used when automatic unlock fails or is not configured
-## Install
+### Install
 Boot your Hetzner server into rescue mode and run:
@@ -67,7 +67,10 @@ wget -qO- https://git.dominik-roth.eu/dodox/nullpoint/raw/branch/master/cluster-
 ```
 Choose your node type:
-- **Full node** - Contributes storage, becomes lighthouse, read/write access
-- **Remote node** - Full read/write access, no local storage contribution
+- **Full node** - Runs GlusterFS server, contributes storage, acts as lighthouse
+  - Use for servers in same datacenter/region with low latency
+- **Remote node** - GlusterFS client only, no storage contribution
+  - Use for edge locations, different regions, or high-latency connections
+  - Avoids replication delays since writes don't wait for this node
 Storage mounted at `/data/storage/` on all nodes.
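A remote node stays usable even if the server it first mounted from goes away, because the fstab entry the script writes lists fallback volfile servers. A hedged sketch of that entry (volume name is an assumption; the IPs are the ones appearing in the diff):

```shell
# Sketch of the client-side fstab line a remote node ends up with.
GLUSTER_VOLUME="cluster-storage"        # assumed name; the real one comes from the script's config
GLUSTER_MOUNT_PATH="/data/storage"
mount_ip="192.168.100.2"                # whichever full node answered first

# _netdev delays mounting until the network is up; backup-volfile-servers
# lets the FUSE client fetch the volume layout from another full node
# if $mount_ip is down at boot.
fstab_line="${mount_ip}:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} glusterfs defaults,_netdev,backup-volfile-servers=192.168.100.1:192.168.100.2:192.168.100.3 0 0"
echo "$fstab_line"
```

Note that `backup-volfile-servers` only covers fetching the volume description at mount time; once mounted, the client talks to all bricks directly.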


@@ -26,9 +26,9 @@ if [ "$EUID" -ne 0 ]; then
     exit 1
 fi
-# Install required packages
-echo -e "${YELLOW}[+] Installing required packages...${NC}"
-dnf install -y curl tar glusterfs-server glusterfs-client || exit 1
+# Install base packages
+echo -e "${YELLOW}[+] Installing base packages...${NC}"
+dnf install -y curl tar || exit 1
 # Download and install Nebula
 echo -e "${YELLOW}[+] Downloading Nebula ${NEBULA_VERSION}...${NC}"
@@ -38,13 +38,8 @@ tar -zxf nebula-linux-amd64.tar.gz
 mv nebula nebula-cert /usr/local/bin/
 chmod +x /usr/local/bin/nebula /usr/local/bin/nebula-cert
-# Enable and start GlusterFS
-systemctl enable glusterd
-systemctl start glusterd
 # Create directories
 echo -e "${YELLOW}[+] Creating directories...${NC}"
-mkdir -p "$GLUSTER_BRICK_PATH"
 mkdir -p "$GLUSTER_MOUNT_PATH"
 mkdir -p /data
 mkdir -p "$NEBULA_CONFIG"
@@ -132,6 +127,18 @@ create_cluster() {
     local hostname=$(hostname)
     local node_ip="192.168.100.1"
+    # First cluster node must be full node
+    echo -e "${YELLOW}First cluster node must be a full node (storage provider)${NC}"
+    # Install GlusterFS server packages
+    echo -e "${YELLOW}[+] Installing GlusterFS server packages...${NC}"
+    dnf install -y glusterfs-server || exit 1
+    systemctl enable glusterd
+    systemctl start glusterd
+    # Create brick directory
+    mkdir -p "$GLUSTER_BRICK_PATH"
     # Ask for lighthouse endpoints
     echo -e "${YELLOW}Enter lighthouse endpoints (DNS names or IPs).${NC}"
     echo -e "${YELLOW}Recommended: Use a DNS name with redundant backing for HA.${NC}"
@@ -142,21 +149,7 @@ create_cluster() {
         exit 1
     fi
-    # Ask about node type
-    echo -e "${YELLOW}Select node type:${NC}"
-    echo " 1) Full node (contributes storage, lighthouse, read/write)"
-    echo " 2) Remote node (no storage contribution, not lighthouse)"
-    read -p "Enter choice [1-2]: " node_type
-    if [ "$node_type" = "2" ]; then
-        is_remote="true"
-        am_lighthouse="false"
-        echo -e "${YELLOW}Configuring as remote node (no storage contribution)${NC}"
-    else
-        is_remote="false"
-        am_lighthouse="true"
-        echo -e "${YELLOW}Configuring as full node${NC}"
-    fi
+    am_lighthouse="true"
     # Generate Nebula CA
     generate_nebula_ca
@@ -228,7 +221,7 @@ EOF
     setup_firewall
     # Create cluster registry
-    echo "${lighthouse_ip} lighthouse ${hostname}" > "${NEBULA_CONFIG}/cluster-registry.txt"
+    echo "${node_ip} ${hostname} full $(date)" > "${NEBULA_CONFIG}/cluster-registry.txt"
     # Create GlusterFS volume
     echo -e "${YELLOW}[+] Creating GlusterFS volume...${NC}"
@@ -279,17 +272,28 @@ join_cluster() {
     # Ask about node type
     echo -e "${YELLOW}Select node type:${NC}"
     echo " 1) Full node (contributes storage, lighthouse, read/write)"
-    echo " 2) Remote node (no storage contribution, not lighthouse)"
+    echo " 2) Remote node (client only, no storage contribution)"
+    echo -e "${YELLOW}Note: Use remote nodes for locations with high latency to the cluster${NC}"
     read -p "Enter choice [1-2]: " node_type
     if [ "$node_type" = "2" ]; then
         is_remote="true"
         am_lighthouse="false"
-        echo -e "${YELLOW}Configuring as remote node (no storage contribution)${NC}"
+        echo -e "${YELLOW}Configuring as remote node (GlusterFS client only)${NC}"
+        # Install only GlusterFS client packages
+        echo -e "${YELLOW}[+] Installing GlusterFS client packages...${NC}"
+        dnf install -y glusterfs glusterfs-fuse || exit 1
     else
         is_remote="false"
         am_lighthouse="true"
-        echo -e "${YELLOW}Configuring as full node${NC}"
+        echo -e "${YELLOW}Configuring as full node (GlusterFS server)${NC}"
+        # Install GlusterFS server packages
+        echo -e "${YELLOW}[+] Installing GlusterFS server packages...${NC}"
+        dnf install -y glusterfs-server || exit 1
+        systemctl enable glusterd
+        systemctl start glusterd
+        # Create brick directory for full nodes
+        mkdir -p "$GLUSTER_BRICK_PATH"
     fi
     echo -e "${YELLOW}[+] Configuring Nebula (IP: ${my_ip})...${NC}"
@@ -379,49 +383,80 @@ EOF
     fi
     # Register with cluster
-    echo "${my_ip} ${hostname} $(date)" >> "${NEBULA_CONFIG}/cluster-registry.txt"
+    node_type_str=$([ "$is_remote" = "true" ] && echo "remote" || echo "full")
+    echo "${my_ip} ${hostname} ${node_type_str} $(date)" >> "${NEBULA_CONFIG}/cluster-registry.txt"
     # Handle GlusterFS setup based on node type
     if [ "$is_remote" = "true" ]; then
-        # Remote node - just mount, don't contribute storage
-        echo -e "${YELLOW}[+] Mounting GlusterFS...${NC}"
-        # Mount the volume with full read/write access
-        mount -t glusterfs 192.168.100.1:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH}
-        # Add to fstab
-        grep -q "${GLUSTER_VOLUME}" /etc/fstab || echo "192.168.100.1:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} glusterfs defaults,_netdev 0 0" >> /etc/fstab
-        echo -e "${GREEN}Remote node configured - full access to cluster storage without contributing local storage${NC}"
+        # Remote node - GlusterFS client only
+        echo -e "${YELLOW}[+] Mounting GlusterFS as client...${NC}"
+        # Find a full node to connect to (try first few IPs)
+        mount_successful=false
+        for ip in 192.168.100.1 192.168.100.2 192.168.100.3; do
+            if ping -c 1 -W 2 $ip > /dev/null 2>&1; then
+                echo -e "${YELLOW}Attempting to mount from $ip...${NC}"
+                if mount -t glusterfs ${ip}:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} 2>/dev/null; then
+                    mount_successful=true
+                    mount_ip=$ip
+                    break
+                fi
+            fi
+        done
+        if [ "$mount_successful" = "true" ]; then
+            # Add to fstab
+            grep -q "${GLUSTER_VOLUME}" /etc/fstab || echo "${mount_ip}:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} glusterfs defaults,_netdev,backup-volfile-servers=192.168.100.1:192.168.100.2:192.168.100.3 0 0" >> /etc/fstab
+            echo -e "${GREEN}Remote node configured - mounted cluster storage as client${NC}"
+        else
+            echo -e "${RED}Failed to mount GlusterFS volume!${NC}"
+            echo "Make sure at least one full node is running."
+        fi
     else
-        # Full node - full participation
-        echo -e "${YELLOW}[+] Joining GlusterFS cluster as full node...${NC}"
+        # Full node - GlusterFS server
+        echo -e "${YELLOW}[+] Joining GlusterFS cluster as server...${NC}"
         # Try to probe existing nodes
         echo -e "${YELLOW}[+] Looking for existing GlusterFS peers...${NC}"
-        gluster peer probe 192.168.100.1 2>/dev/null || echo "Could not reach 192.168.100.1"
-        # Wait for peer to be connected
+        peer_found=false
+        for ip in 192.168.100.1 192.168.100.2 192.168.100.3; do
+            if [ "$ip" != "${my_ip}" ] && ping -c 1 -W 2 $ip > /dev/null 2>&1; then
+                if gluster peer probe $ip 2>/dev/null; then
+                    echo "Connected to peer at $ip"
+                    peer_found=true
+                    break
+                fi
+            fi
+        done
+        if [ "$peer_found" = "false" ]; then
+            echo -e "${YELLOW}No existing peers found - this might be normal for early nodes${NC}"
+        fi
+        # Wait for peer connection
         sleep 3
         # Create brick directory
         mkdir -p "${GLUSTER_BRICK_PATH}/brick1"
-        # Add brick to existing volume and increase replica count
-        echo -e "${YELLOW}[+] Adding brick to GlusterFS volume...${NC}"
-        # Get current replica count
-        replica_count=$(gluster volume info ${GLUSTER_VOLUME} 2>/dev/null | grep "Number of Bricks" | grep -oE "[0-9]+" | head -1)
-        new_replica_count=$((replica_count + 1))
-        # Add brick with increased replica count
-        gluster volume add-brick ${GLUSTER_VOLUME} replica ${new_replica_count} $(hostname):${GLUSTER_BRICK_PATH}/brick1 force
-        # Mount the volume
-        mount -t glusterfs 192.168.100.1:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH}
+        if [ "$peer_found" = "true" ]; then
+            # Add brick to existing volume
+            echo -e "${YELLOW}[+] Adding brick to GlusterFS volume...${NC}"
+            # Get current replica count
+            replica_count=$(gluster volume info ${GLUSTER_VOLUME} 2>/dev/null | grep "Number of Bricks" | grep -oE "[0-9]+" | head -1)
+            if [ ! -z "$replica_count" ]; then
+                new_replica_count=$((replica_count + 1))
+                gluster volume add-brick ${GLUSTER_VOLUME} replica ${new_replica_count} $(hostname):${GLUSTER_BRICK_PATH}/brick1 force
+            fi
+        fi
+        # Mount the volume locally
+        mount -t glusterfs localhost:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} 2>/dev/null ||
+            mount -t glusterfs 192.168.100.1:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} 2>/dev/null
         # Add to fstab
-        grep -q "${GLUSTER_VOLUME}" /etc/fstab || echo "192.168.100.1:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} glusterfs defaults,_netdev 0 0" >> /etc/fstab
+        grep -q "${GLUSTER_VOLUME}" /etc/fstab || echo "localhost:/${GLUSTER_VOLUME} ${GLUSTER_MOUNT_PATH} glusterfs defaults,_netdev 0 0" >> /etc/fstab
         echo -e "${GREEN}Full node configured - contributing storage to cluster${NC}"
     fi
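When a full node joins an existing volume, its brick is added with `replica N+1`, where N is the current replica count. A small sketch of deriving the new count from `gluster volume info` output (the sample output is hard-coded here and the awk parse is one possible approach, not necessarily the script's exact pipeline; for a pure Replicate volume the total brick count equals the replica count):

```shell
# Sample of the relevant `gluster volume info` output for a replica-2 volume.
volume_info='Volume Name: cluster-storage
Type: Replicate
Number of Bricks: 1 x 2 = 2'

# The total brick count is the number after "=" on the "Number of Bricks" line.
replica_count=$(printf '%s\n' "$volume_info" | awk -F'= ' '/Number of Bricks/ { print $2 }')
new_replica_count=$((replica_count + 1))
echo "$new_replica_count"   # -> 3
```

The joining node would then run `gluster volume add-brick <vol> replica $new_replica_count <host>:<brick>` so the volume re-replicates onto the new brick.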