<div align="center">
<img src='./icon.svg' width="150px">
<h2>nullpoint</h2>
<br>
</div>

Secure AlmaLinux (RHEL) server setup with LUKS encryption, Tang, TPM and RAID1 for Hetzner dedicated servers.
### Features

- AlmaLinux Server base
- Full disk encryption with LUKS
- Remote unlock via Tang server
- TPM-based boot verification
- mdadm RAID1 + XFS (RHEL standard)
- SSH key-only access with early boot SSH via dropbear
- Best-in-class terminal: zsh + powerlevel10k + evil tmux
### Unlock Strategy

1. **Automatic unlock via Tang/TPM** (default):
   - Configure TPM2 and/or Tang servers in post-install.sh
   - System unlocks automatically if conditions are met
   - No manual intervention required

2. **Manual unlock via SSH** (fallback):
   - SSH to the server on port 22 (dropbear in early boot)
   - Enter the LUKS passphrase when prompted (twice, once per disk)
   - Used when automatic unlock fails or is not configured
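For reference, Tang/TPM auto-unlock of a LUKS device is typically wired up with clevis. A minimal sketch, assuming clevis, clevis-luks and clevis-dracut are installed; the device path, Tang URL, and PCR selection below are illustrative placeholders, not values taken from post-install.sh:

```bash
# Sketch only: bind a LUKS device to a Tang server and/or the local TPM2.
# /dev/md1 and tang.example.com are hypothetical placeholders.
sudo clevis luks bind -d /dev/md1 tang '{"url": "http://tang.example.com"}'
sudo clevis luks bind -d /dev/md1 tpm2 '{"pcr_bank": "sha256", "pcr_ids": "7"}'
sudo dracut -f  # rebuild the initramfs so early boot can use the bindings
```

With both pins bound, either a reachable Tang server or matching TPM PCR state is enough to unlock the disk without a passphrase.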
### Install

Boot your Hetzner server into rescue mode and run:

```bash
wget -qO- https://git.dominik-roth.eu/dodox/nullpoint/raw/branch/master/get.sh | bash
```
The installer will:

- Detect your SSH key from the current session
- Ask for hostname and username
- Generate a secure LUKS passphrase (SAVE IT!)
- Download and configure everything
- Run Hetzner's installimage automatically
---
<div align="center">
<img src='./icon_cluster.svg' width="150px">
<h2>nullpoint cluster</h2>
<br>
</div>

Encrypted network and storage pool using [Nebula](https://github.com/slackhq/nebula) mesh VPN and [GlusterFS](https://www.gluster.org/) distributed filesystem.
### Features

- **Encrypted mesh network** - All traffic encrypted via Nebula overlay (192.168.100.0/24)
- **Distributed storage** - Data replicated across all storage nodes
- **Simple joining** - Single preshared secret + lighthouse endpoint
- **Flexible nodes** - Full nodes (replicate data) or remote nodes (no storage)
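For context, a Nebula mesh like this is normally bootstrapped by generating a CA and per-node certificates with `nebula-cert`. A hedged sketch of those steps; the names and IPs are illustrative, and cluster-setup.sh handles the actual procedure:

```bash
# Illustrative only - cluster-setup.sh automates certificate handling.
nebula-cert ca -name "nullpoint-cluster"            # cluster CA
nebula-cert sign -name "node1" -ip "192.168.100.1/24"   # lighthouse / full node
nebula-cert sign -name "edge1" -ip "192.168.100.50/24"  # remote node
```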
### Setup

```bash
wget -qO- https://git.dominik-roth.eu/dodox/nullpoint/raw/branch/master/cluster-setup.sh | sudo bash
```
Choose your node type:

- **Full node** - Runs GlusterFS server, contributes storage, acts as lighthouse
  - Use for servers in the same datacenter/region with low latency
- **Remote node** - GlusterFS client only, no storage contribution
  - Use for edge locations, different regions, or high-latency connections
  - Avoids replication delays, since writes don't wait for this node

Storage is mounted at `/data/storage/` on all nodes.
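The full/remote split maps onto standard GlusterFS roles. A rough sketch, assuming two full nodes; the volume name, brick paths, and hostnames are illustrative, not the script's actual values:

```bash
# Full nodes (servers): create and start a replicated volume across bricks.
gluster volume create storage replica 2 \
    node1:/data/brick node2:/data/brick
gluster volume start storage

# Any node, including remote clients: mount the volume over the mesh.
mount -t glusterfs node1:/storage /data/storage
```

Because remote nodes only mount the volume as clients, writes replicate synchronously between the full nodes only, and remote nodes never sit on the write path.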