Home | Bill Tetrault

Prompt
You are an expert Linux homelab architect, AI agent systems builder, and technical writer. Create a complete, beginner-friendly, step-by-step guide for a first-time installer.
Goal
Produce a practical guide for installing and configuring OpenClaw with Mission Control on a minimal Rocky Linux 9.7 system, using a local Ollama-hosted Gemma 4 model as the LLM backend, with a Telegram interface, and a themed agent team based on the cast of Silicon Valley.
System details
- OS: Rocky Linux 9.7 minimal install
- CPU: Intel i5-8700K
- RAM: 32 GB
- GPU: NVIDIA RTX 2070 8 GB VRAM
- Storage: 1 TB NVMe + 5 TB HDD
- NVIDIA drivers are not installed yet
- User is new to OpenClaw
Guide requirements
Write the guide for someone who has never installed OpenClaw before.
Use clear explanations before commands, explain why each major step matters, and avoid assuming prior OpenClaw knowledge.
Storage and filesystem requirements
- Use /data for anything directly related to this OpenClaw guide
- Include a recommended directory layout under /data, such as for:
- OpenClaw app files
- configs
- logs
- backups
- Ollama-related persistent data if appropriate
- Mission Control data
- Explain what should live on the NVMe vs the 5 TB HDD
- Include ownership and permissions steps
Installation scope
The guide must include:
- Base OS preparation for Rocky Linux 9.7 minimal
- Required packages and utilities
- NVIDIA driver installation for Rocky Linux 9.7
- Verification that the GPU is usable
- Ollama installation
- Choosing and pulling the largest Gemma 4 model that realistically fits in 8 GB VRAM, and explain the tradeoffs
- Configuring Ollama as a local service
- Installing OpenClaw
- Configuring OpenClaw to use the local Ollama Gemma 4 model
- Installing and configuring Mission Control
- Configuring a Telegram interface for OpenClaw
- Startup, enablement, and persistence across reboots
- Validation and smoke testing
- Troubleshooting section for common failure cases
- Security hardening basics for a home lab deployment
- Backup/update strategy
Model selection requirement
Because the system has an RTX 2070 with 8 GB VRAM, explicitly choose the largest Gemma 4 variant that should fit reasonably on this hardware when served by Ollama.
- Explain why that model was chosen
- Mention when the user should fall back to a smaller model
- Mention expected performance limitations
- Prefer a practical, likely-to-work choice over an overly ambitious one
Agent design requirement
Create a themed agent team for a company called Pied-Piper HQ.
Do not use Pied Piper as the company name.
Map these agents to roles inspired by Silicon Valley:
- Richard
- Gilfoyle
- Dinesh
- Big Head
- Jared
- Monica
For each agent, include:
- Purpose
- Personality prompt/style
- Primary responsibilities
- Example tasks
- Recommended guardrails
- Suggested OpenClaw/Mission Control placement or routing
The agent team should cover at least:
- Coding and refactoring
- Infrastructure operations
- Security and review
- Documentation and onboarding
- Project coordination
- Research / business / product support
Keep the character inspiration light and fun, but still professional and useful in a real homelab environment.
Mission Control requirement
Include a section that designs and configures a Mission Control view for Pied-Piper HQ.
The guide should describe:
- Suggested dashboards or panes
- Which agents appear where
- Recommended workflows
- Safe approval points for infrastructure-changing actions
- Suggested queues for coding, infra, docs, and review
Structure the response as a real install guide with these sections:
- Overview
- Architecture plan
- Prerequisites
- Storage layout under /data
- Rocky Linux prep
- NVIDIA driver install
- Ollama install and model setup
- OpenClaw install
- Mission Control install
- Telegram integration
- Pied-Piper HQ agent roster
- First-run validation
- Troubleshooting
- Security and maintenance
- Next improvements
Style requirements
- Make it a first-time-user guide
- Use numbered steps where order matters
- Include command blocks the user can paste
- Include short explanations after important commands
- Call out places where values must be customized
- Include verification commands after each major installation phase
- Prefer systemd services where applicable
- Prefer stable, maintainable choices over clever ones
Important constraints
- Assume the user starts from a minimal Rocky install
- Assume no NVIDIA driver is installed yet
- Assume the user wants to keep this mostly local-first
- Assume OpenClaw should use the local Ollama model, not a cloud API by default
- Use Pied-Piper HQ consistently throughout the guide
- Use /data consistently for guide-related storage
- Do not skip beginner context
If upstream project details are uncertain
If exact package names, repo names, service names, or install steps may vary by release, explicitly label them as:
- “verify in upstream docs before running”
- “example configuration”
- “adjust for current release naming”
Deliverable
Return only the finished guide.
Do not ask follow-up questions.
Do not summarize what you are going to do.
Just produce the guide.
Overview
This guide walks you through installing and wiring up a local-first Pied-Piper HQ assistant stack on a fresh Rocky Linux 9.7 minimal install, using:
- Ollama with a local Gemma 4 E4B model as the LLM backend.
- OpenClaw as the agent framework and gateway, with Mission Control as the web UI.
- A Telegram bot as your primary chat interface.
You’ll end up with a small team of themed agents (inspired by the Silicon Valley cast) that you can talk to from Telegram and manage in a Mission Control dashboard.
Architecture plan
At a high level, your setup will look like this:
- Rocky Linux 9.7 host
- NVIDIA drivers + CUDA stack for your RTX 2070.
- Systemd services for Ollama and OpenClaw.
- Ollama
- Installed natively on Rocky.
- Runs as a background service exposing a local HTTP API on http://localhost:11434.
- Stores Gemma 4 model weights and related data under /data/ollama.
- Model choice: Gemma 4 E4B via Ollama
- Gemma 4 comes in E2B, E4B, 26B A4B, and 31B.
- According to Google’s Gemma 4 memory table, E4B in 4‑bit quantization (Q4_0) needs about 5 GB VRAM, E2B needs about 3.2 GB, and the 26B/31B variants require 15.6 GB–17.4 GB VRAM or more.
- With an RTX 2070 (8 GB VRAM), the largest Gemma 4 tier that realistically fits for general use is Gemma 4 E4B in Q4_0, exposed in Ollama as gemma4:e4b.
- The larger 26B/31B tiers are not practical on 8 GB VRAM; E4B is the “max safe” choice, and E2B is the fallback if you see out-of-memory errors or slowdowns.
- OpenClaw + Mission Control
- OpenClaw is installed and configured via Ollama (ollama launch openclaw; verify in upstream docs before running, as command naming may vary by release).
- The gateway runs as a background process; Mission Control is accessible in a browser (e.g. http://localhost:18789).
- OpenClaw is configured to use your local gemma4:e4b model as its default provider, not cloud APIs.
- Telegram
- A Telegram bot created via BotFather with an API token and chat ID.
OpenClaw’s channels configuration wired to that bot using openclaw configure --section channels.
- Pied-Piper HQ
- A themed agent roster (Richard, Gilfoyle, Dinesh, Big Head, Jared, Monica).
- Mission Control dashboard views tuned around: coding, infra, security, docs, coordination, and research.
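As a preview of the Telegram wiring, a channels section typically ends up looking something like this (example configuration; key names vary by release, so verify in upstream docs before using):

```
channels:
  telegram:
    bot_token: "123456:ABC-replace-me"   # from @BotFather - customize
    allowed_chat_ids:
      - 123456789                        # your numeric chat ID - customize
```

Treat the bot token like a password: keep the file readable only by the user that runs OpenClaw.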
Prerequisites
Before you start:
- Base system
- Rocky Linux 9.7 minimal installed and booting cleanly.
- You have a non-root user with sudo access (examples will use tetraserv).
- System is connected to the internet (for package and model downloads).
- Access
- SSH access or local console with copy–paste capability.
- A browser on a machine on your LAN to open Mission Control (or reach it through an SSH tunnel).
- Accounts
- Telegram account on your phone or desktop.
- Ability to talk to @BotFather in Telegram to create a bot.
- Hardware
- Intel i5‑8700K, 32 GB RAM, RTX 2070 8 GB VRAM, 1 TB NVMe, 5 TB HDD (as provided).
- Enough free space and patience for model downloads (several GB per model).
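Before pulling anything, it helps to sanity-check whether a model will fit in VRAM. A rough back-of-the-envelope estimate (illustrative only; real usage also depends on context length and KV-cache size) is weights at the quantized bit-width plus a fixed overhead:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float = 4.0,
                     overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: quantized weights plus a fixed allowance
    for KV cache and runtime buffers (the overhead is an assumption)."""
    weights_gb = params_billion * bits_per_weight / 8.0
    return weights_gb + overhead_gb

# A hypothetical 4B-parameter model at 4-bit quantization:
print(round(estimate_vram_gb(4.0), 1))  # prints 3.0 (GB), well under 8 GB
```

By this estimate, anything much above roughly 12B parameters at 4-bit starts crowding an 8 GB card, which matches the E4B-over-26B recommendation above.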
Storage layout under /data
You’ll keep everything for this project under /data so it’s easy to back up, move, and snapshot.
Recommended layout
We’ll use:
- On NVMe (fast, low latency)
- Active models, apps, configs, logs:
/data/ollama – models and Ollama state.
/data/openclaw/app – OpenClaw app files (if we choose to clone or store local bits).
/data/openclaw/config – OpenClaw config files, agent definitions, routing configs.
/data/openclaw/logs – OpenClaw and Mission Control logs.
/data/mission-control – Mission Control-specific dashboards, layouts, and assets.
- On HDD (big and slower)
- Backups, archives, exported logs:
/data/backups – config and snapshot backups.
/data/archive – old logs, export dumps from Mission Control.
If the HDD is a separate block device and not mounted yet, mount it at e.g. /mnt/hdd and then bind-mount parts into /data/backups later. That is environment-specific, so adjust as needed.
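As an example configuration for that HDD case (adjust for your hardware; find the real UUID with sudo blkid and replace the placeholder, and match the filesystem type):

```
# /etc/fstab - example configuration; YOUR-HDD-UUID is a placeholder
UUID=YOUR-HDD-UUID   /mnt/hdd        xfs   defaults,nofail  0 2
/mnt/hdd/backups     /data/backups   none  bind,nofail      0 0
/mnt/hdd/archive     /data/archive   none  bind,nofail      0 0
```

After editing fstab, run sudo systemctl daemon-reload and sudo mount -a, then confirm with findmnt /data/backups. The nofail option keeps the system booting even if the HDD is missing.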
Create directories and set ownership
- Create base layout (run as root or with sudo):
sudo mkdir -p \
/data/ollama \
/data/openclaw/app \
/data/openclaw/config \
/data/openclaw/logs \
/data/mission-control \
/data/backups \
/data/archive
- Make your main user the owner so OpenClaw and tools can write there:
sudo chown -R tetraserv:tetraserv /data
sudo chmod -R 750 /data
- Replace tetraserv with your actual username if different.
- 750 gives full access to the owner, read and execute to the group, and no access to others (a sensible default for a homelab).
- You can later tighten specific subdirectories (e.g. backups) with chmod 700.
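If octal modes are new to you, this throwaway demo (it only touches /tmp, not /data) shows what 750 means in practice:

```shell
# Demonstrate the 750 mode on a throwaway directory (safe to run anywhere)
mkdir -p /tmp/perm-demo
chmod 750 /tmp/perm-demo
stat -c '%a %A' /tmp/perm-demo   # octal mode and the rwx string
```

The output is 750 drwxr-x---: rwx for the owner, r-x for the group, nothing for others.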
Rocky Linux prep
These steps assume a fresh Rocky 9.7 minimal installation.
1. Update the base system
sudo dnf update -y
sudo dnf install -y epel-release
epel-release enables the Extra Packages for Enterprise Linux (EPEL) repository, which many tools rely on.

Act as a Senior DevOps and AI Solutions Architect. Provide a comprehensive, step-by-step technical guide to building an Agentic AI Home Lab on a minimal Rocky Linux 9.7 host.
Project Goals:
Core Tech: Use Docker Compose to manage the lab environment.
Agent Framework: Implement an MCP (Model Context Protocol) architecture. Recommend a specific MCP Client (e.g., Claude Desktop, Goose, or a CLI-based client) and an MCP Server that allows a Gemini-powered agent to interact with the local Linux filesystem and Docker socket.
LLM Integration: Configure the agent to use the Google Gemini API via API key.
System Administration & Security:
Initial Setup: Detailed steps for creating a non-root sudo user and hardening the minimal Rocky install (disabling root SSH, configuring firewalld).
Connectivity: Instructions for generating Ed25519 SSH keys on Windows 11 and deploying them to the Rocky host.
Persistence: Configuration for Docker volumes and environment variable management (using .env files) to keep API keys out of compose files.
Technical Requirements:
Provide the specific dnf commands for dependencies (Node.js, Python, Docker-CE).
Include a sample docker-compose.yml for a starter service (like Nginx or Portainer) that the agent will eventually manage.
Explain the ‘loop’ of how the Agent uses MCP to execute shell commands or modify files on the host safely.
Tone: Technical, concise, and security-first.
Agentic AI Home Lab on Rocky Linux 9.7 — Complete Build Guide
This guide walks you through standing up a security-hardened Rocky Linux 9.7 host, wiring in Docker, and deploying a Gemini-powered AI agent using Goose CLI as the MCP client with the official @modelcontextprotocol/server-filesystem and mcp-server-docker as MCP servers. The agent loop lets Gemini reason over your host and issue tool calls that are executed locally via MCP — with you staying in control of what surfaces are exposed.
Architecture Overview
Before writing a single command, understand the data flow:
┌──────────────────────────────┐
│ Goose CLI (MCP Host/Client)  │ ◄── You interact here
│ Provider: Google Gemini      │
└──────────────┬───────────────┘
               │ JSON-RPC 2.0 over stdio
┌──────────────┴───────────────┐
│ MCP Servers                  │
│ ① mcp/filesystem (Docker)   │ ← R/W scoped dirs
│ ② mcp-server-docker         │ ← Docker socket
└──────────────────────────────┘
MCP uses JSON-RPC 2.0 messages exchanged over stdio between the host (Goose) and each server subprocess. The Gemini model sees a tool schema for every MCP capability and decides when to call it — it never touches the socket or filesystem directly.
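Concretely, each tool call crossing that stdio boundary is a JSON-RPC 2.0 request/response pair. The sketch below shows the shape of such a message; the field names follow the MCP tools convention but should be checked against the current MCP specification:

```python
import json

# Illustrative MCP-style tool call; tool name and arguments are made up
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "/etc/hostname"}},
}
# The server replies with a result carrying the same id
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "ailab-host\n"}]},
}

# Each message is serialized as a single JSON object on the stdio pipe
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
```

The model only ever sees the tool schema and these messages; Goose executes the call and relays the result back.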
Phase 1 — Rocky Linux 9.7 Initial Setup
Create a Non-Root Sudo User
# As root on first boot
useradd -m -s /bin/bash ailab
passwd ailab
usermod -aG wheel ailab
# Verify
id ailab # should show wheel group
Lock the root account from password login immediately:
passwd -l root
Created using Perplexity AI
Agentic AI Home Lab on Proxmox
A step‑by‑step guide (from zero to working AI agents in Docker) written from the perspective of a recent computer science graduate.
1. What You Will Build
By the end of this guide you will have:
- A Proxmox VE host running on your homelab hardware.
- An Ubuntu Server VM dedicated to containers (Docker + Docker Compose).
- A basic “agentic AI” stack using a modern agent framework (for example LangGraph, CrewAI, or AutoGen) running inside Docker.
- A development workflow to:
- Edit code on your laptop.
- Build images in Docker.
- Deploy and test agents in your home lab.
- Optional: Portainer for container management via web UI.
2. Prerequisites
You do not need prior experience with Proxmox, Docker, or AI agents. You should have:
- A physical machine that will become your Proxmox host:
- 4+ cores, 16 GB+ RAM recommended.
- At least 256 GB SSD or NVMe.
- A second device (laptop/desktop) with:
- SSH client (Windows: PowerShell, macOS/Linux: Terminal).
- Web browser.
- Network:
- Home router handing out DHCP addresses.
- Ability to access your Proxmox host via local IP.
Accounts / software:
- Modern browser (Chrome, Edge, Firefox, etc.).
- GitHub account (optional but recommended).
- An OpenAI / compatible LLM API key (or local model later).
3. Proxmox VE Installation
3.1 Downloading Proxmox
- Go to the Proxmox VE download page.
- Download the latest Proxmox VE ISO.
- Use a tool such as Rufus (Windows) or dd (Linux/macOS) to create a bootable USB.
- Boot your server from the USB.
- Choose “Install Proxmox VE”.
- Follow the wizard:
- Target disk: your main SSD/NVMe.
- Country, time zone, keyboard: configure as appropriate.
- Password: choose a strong root password and record it.
- Management network: typically your main NIC with DHCP.
- After installation, the console will show a URL, for example:
https://192.168.1.50:8006
- On your laptop, open that URL and accept the browser’s TLS warning.
4. First Steps in Proxmox
4.1 Logging In
- Username: root
- Realm: pam
- Password: the one you set during install.
You will land on the Proxmox web UI.
4.2 Basic Proxmox Concepts
- Node: your physical Proxmox server.
- VM: full virtual machine (virtual hardware, runs its own OS).
- Container (LXC): lightweight OS-level virtualization.
For this guide, we will:
- Use a VM for Docker (simpler, clean separation).
- Optionally later use LXC if you prefer.
5. Create the Ubuntu Docker VM
5.1 Download an Ubuntu Server ISO
- Download Ubuntu Server LTS ISO.
- In the Proxmox UI:
- Select your node → “local” storage → “ISO Images” → “Upload”.
- Upload the Ubuntu ISO.
5.2 Create the VM
- Click “Create VM”.
- General:
- Node: your Proxmox node.
- VM ID: automatic or pick one (e.g., 100).
- Name: ubuntu-docker.
- OS:
- ISO Image: select your Ubuntu Server ISO.
- Type: Linux.
- System:
- Leave default for a first build or enable QEMU/UEFI if you prefer.
- Disks:
- Bus/Device: scsi.
- Disk size: 64–128 GB or more depending on your usage.
- CPU: 2–4 cores is a reasonable start.
- Memory: 4–8 GB (more if you will run many containers).
- Network:
- Bridge: vmbr0 (default bridge to your LAN).
- Finish and start the VM.
5.3 Install Ubuntu in the VM
- Open the VM console in Proxmox.
- Follow the Ubuntu installer:
- Language, keyboard.
- Install Ubuntu Server.
- Disk: use entire virtual disk.
- Create a user, for example:
- Enable OpenSSH server.
- Reboot into the installed system.
6. SSH Access and Basic Setup
6.1 Find VM IP Address
In the VM console:
ip a
Look for an inet address on ens18 or similar, such as 192.168.1.101/24.
6.2 SSH from Your Laptop
From your laptop/desktop:
ssh <your-user>@<vm-ip>
Accept the host key and log in with your password.
6.3 System Updates
sudo apt update
sudo apt upgrade -y
sudo reboot
Reconnect via SSH after reboot.
7. Install Docker and Docker Compose
7.1 Install Docker Engine
On Ubuntu VM:
sudo apt update
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Add your user to the docker group:
sudo usermod -aG docker $USER
newgrp docker
Verify:
docker --version
docker run hello-world
7.2 Install Docker Compose (v2 CLI)
Docker on Ubuntu now includes Docker Compose as docker compose. Test:
docker compose version
8. Optional: Install Portainer
Portainer is a web UI to manage Docker containers.
docker volume create portainer_data
docker run -d \
-p 8000:8000 \
-p 9443:9443 \
--name portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
Access Portainer at:
https://<ubuntu-vm-ip>:9443
9. Agentic AI Concepts (High Level)
Before deployment, understand key ideas:
- LLM (Large Language Model): A model like GPT that can generate and understand text.
- Tool use: Agents can call tools (APIs, scripts) to interact with the outside world.
- Agent: A process that uses an LLM plus tools, memory, and a planning loop to take actions.
- Multi-agent system: Several agents collaborating, often with roles (planner, researcher, executor).
We will start with:
- A single agent that can:
- Receive a task description.
- Call a web API or run a local script.
- Return a result.
Then you can expand to multi-agent workflows.
10. Choose an Agent Framework
You can pick any of the popular frameworks. Three common choices:
- LangGraph
- CrewAI
- AutoGen
For a first build, pick one and stay consistent through this guide. The steps below use a generic “Python agent service” pattern that works for all three with small adjustments.
11. Create a Project Structure
On your Ubuntu VM (or cloned from GitHub), create a directory:
mkdir -p ~/agent-lab
cd ~/agent-lab
Example structure:
agent-lab/
  docker-compose.yml
  agent-service/
    Dockerfile
    requirements.txt
    app.py
12. Write a Minimal Agent Service (Python)
12.1 requirements.txt
Example for a generic agent with OpenAI-compatible client:
fastapi
uvicorn[standard]
openai
langchain
Replace or extend with your chosen framework, for example langgraph or crewai.
12.2 app.py (Simple HTTP Agent)
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI
import os

app = FastAPI()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

class Task(BaseModel):
    prompt: str

@app.post("/agent")
async def run_agent(task: Task):
    # Simple single-call agent (no tools) as a starting point
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant in a homelab."},
            {"role": "user", "content": task.prompt},
        ],
    )
    return {"result": response.choices[0].message.content}
This is intentionally minimal. Later you can:
- Add tools.
- Maintain state between calls.
- Use an agent framework abstraction instead of directly calling the API.
13. Dockerfile for the Agent
agent-service/Dockerfile:
FROM python:3.11-slim
WORKDIR /app
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
14. Docker Compose Configuration
docker-compose.yml in agent-lab:
version: "3.9"

services:
  agent-service:
    build: ./agent-service
    container_name: agent-service
    ports:
      - "8000:8000"
    environment:
      OPENAI_API_KEY: "${OPENAI_API_KEY}"
    restart: unless-stopped
Create a .env file in agent-lab (never commit secrets):
cat <<EOF > .env
OPENAI_API_KEY=your-real-api-key-here
EOF
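Because .env now holds a real API key, lock its permissions down so only your user can read it. The demo below uses a throwaway file in /tmp so it is safe to run as-is; apply the same chmod to ~/agent-lab/.env:

```shell
# Demonstrate restrictive permissions on a throwaway file (apply to .env)
touch /tmp/env-demo
chmod 600 /tmp/env-demo
stat -c '%a' /tmp/env-demo   # 600: owner read/write only
```

Docker Compose reads .env automatically from the project directory, so nothing else is needed to make ${OPENAI_API_KEY} resolve.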
15. Build and Run the Agent Service
From ~/agent-lab:
docker compose build
docker compose up -d
Check status:
docker compose ps
You should see agent-service running and listening on port 8000.
16. Test the Agent API
From your laptop (replace IP):
curl -X POST "http://192.168.1.101:8000/agent" \
-H "Content-Type: application/json" \
-d '{"prompt": "Write a short Dockerfile that prints Hello World and explain each line."}'
You should receive JSON with a text response.
17. Evolving Towards Agentic Behavior
The minimal service above is a simple “stateless” chat wrapper. To make it agentic, incrementally add:
- Tools: Functions that the agent can call for:
- Shell commands (carefully sandboxed).
- HTTP APIs.
- File operations inside a restricted directory.
- Memory: Store conversation context or task history.
- Planning loop: Let the model decide:
- What to do next.
- Which tool to call.
- When to stop.
At the framework level this often means:
- Defining tools/functions.
- Writing a main loop that:
- Sends the current state + tool schema to the model.
- Parses tool calls, executes them, and feeds results back.
- Persisting state in a database or simple file store.
Here is a conceptual pattern (pseudo-code style) for adding a tool:
import subprocess
from typing import List

def list_files() -> List[str]:
    files = subprocess.check_output(["ls", "-1"], text=True).splitlines()
    return files
Then expose list_files to your LLM using your agent framework’s tool mechanism.
Note: For safety, start with read-only tools and limit directories.
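Tying these ideas together, the planning loop can be sketched as below. The model here is a stub (fake_model), so the example runs without any API key; in a real build, your framework replaces it with an LLM call and a proper tool schema:

```python
# Minimal illustrative agent loop with a stubbed model (no real LLM calls).
TOOLS = {"add": lambda a, b: a + b}

def fake_model(history):
    """Stand-in for an LLM: first requests a tool, then answers."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {history[-1]['content']}"}

def run_agent(task: str) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(5):                      # hard cap on loop iterations
        decision = fake_model(history)
        if "answer" in decision:            # the model chose to stop
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute tool
        history.append({"role": "tool", "content": result})   # feed result back
    return "gave up"

print(run_agent("What is 2 + 3?"))  # The result is 5
```

The hard iteration cap and the explicit tool table are the two safety levers: the agent can only call what you registered, and it cannot loop forever.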
19. Using the Agent Lab to Test Docker Containers
Your home lab is now ready to:
- Spin up new services as containers.
- Let the agent:
- Generate or modify Dockerfiles.
- Build images via CI or scripts.
- Suggest or automate test sequences.
Example workflow:
- Clone a containerized app into ~/projects/app1.
- Use your agent to:
- Analyze its Dockerfile.
- Propose improvements.
- Propose improvements.
- Build and run it with:
docker compose build
docker compose up -d
- Capture logs:
docker logs <container-name>
- Feed relevant logs back to the agent for debugging help.
20. Monitoring and Maintenance
20.1 Basic Docker Commands
- List containers:
docker ps
- View logs:
docker logs agent-service
- Restart:
docker restart agent-service
- Stop all:
docker stop $(docker ps -q)
20.2 Backups
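A simple, boring approach works well here: periodically archive the compose project directory. A runnable sketch follows (the project path is an assumption; the script falls back to a demo directory so you can try it anywhere):

```shell
# Illustrative backup sketch; adjust SRC to your real project path
SRC="$HOME/agent-lab"
DEST="/tmp/agent-lab-backups"
mkdir -p "$DEST"
# Use a demo directory if the real project is absent, so this stays runnable
if [ ! -d "$SRC" ]; then
  SRC="/tmp/agent-lab-demo"
  mkdir -p "$SRC"
  echo "demo" > "$SRC/docker-compose.yml"
fi
# Date-stamped, compressed archive of the whole project directory
tar -czf "$DEST/agent-lab-$(date +%F).tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls -1 "$DEST"
```

Named Docker volumes (like portainer_data) are not covered by this; copy those with a temporary container that mounts the volume and tars its contents. Point DEST at real backup storage and run the script from cron.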
21. Security Basics
- Never expose the Docker daemon socket (/var/run/docker.sock) to the internet.
- Keep Proxmox and Ubuntu updated.
- Use strong passwords and, ideally, SSH keys.
- Limit which services are accessible from outside your LAN.
- Consider:
- A reverse proxy (e.g., Traefik, Nginx Proxy Manager).
- Zero-trust access (e.g., Tailscale, Cloudflare Tunnel) if you want remote access.
22. Where to Go Next
Now that you have the basics:
- Swap the simple app.py for:
- LangGraph, CrewAI, or AutoGen examples from their docs.
- Multi-agent workflows (planner, researcher, executor).
- Add:
- Vector database (e.g., Qdrant, Weaviate, Chroma) via Docker for retrieval.
- Observability tools (Prometheus, Grafana, Loki) to monitor containers.
- Automate:
- Use GitHub Actions to build images and deploy to your homelab via SSH.
23. Appendix: Common Commands Cheat Sheet
Proxmox
- Web UI: https://<proxmox-ip>:8006
Ubuntu VM
- Update system:
sudo apt update && sudo apt upgrade -y
Docker
- Build:
docker compose build
- Up:
docker compose up -d
- Down:
docker compose down
- Remove unused:
docker system prune
End of Markdown guide (v1).
Rocky Linux 9 Docker Deployment Guide
Complete Beginner’s Guide to Docker Services on Proxmox
Table of Contents
- Prerequisites
- VM Hardware Recommendations
- Rocky Linux 9 VM Setup
- System Preparation
- Docker Installation
- User and Group Configuration
- Firewall Configuration
- Directory Structure Setup
- Docker Compose Configuration
- Service Configuration
- Deployment
- Post-Deployment Tasks
- Troubleshooting
- Tips and Reminders
Prerequisites
Before beginning this deployment, ensure you have:
- Proxmox VE 7.0+ installed and functional
- Administrative access to Proxmox
- Rocky Linux 9 ISO downloaded
- Basic familiarity with Linux command line
- Network connectivity for package downloads
- Sufficient storage for VM and container data
VM Hardware Recommendations
Minimum Specifications
- CPU: 4 vCores (host type recommended for Rocky Linux 9)
- Memory: 8GB RAM
- Storage: 60GB disk space
- Network: 1 NIC with internet access
Recommended Specifications
- CPU: 6-8 vCores (host or haswell+ type)
- Memory: 12-16GB RAM
- Storage: 100GB+ SSD storage
- Network: 1 Gbit NIC
Storage Breakdown
- OS: ~20GB
- Docker images: ~15GB
- Application data (/data): ~25GB
- Logs and backups: ~10GB
- Free space buffer: ~30GB
⚠️ Important: Rocky Linux 9 requires x86-64-v2 CPU features. In Proxmox, use the “host” CPU type or choose a CPU model that exposes the x86-64-v2 feature set (SSE4.2, SSSE3, POPCNT and related instructions; AVX2 belongs to x86-64-v3 and is not required).
Rocky Linux 9 VM Setup
Proxmox VM Creation
- Create new VM in Proxmox
- OS: Linux (6.x/2.6 Kernel)
- CPU: Set to “host” type with 4+ cores
- Memory: 8GB minimum
- Storage: 60GB+ on fast storage
- Network: Default bridge with DHCP or static IP
Rocky Linux 9 Installation
- Boot from ISO and select minimal installation
- Configure network with static IP (recommended)
- Create user account with sudo privileges
- Complete installation and reboot
System Preparation
Update System
# Update all packages
sudo dnf update -y
# Reboot to ensure kernel updates are active
sudo reboot
Install Essential Packages
# Install required utilities
sudo dnf install -y epel-release
sudo dnf install -y curl wget git nano vim htop tree
# Install development tools (optional but useful)
sudo dnf groupinstall -y "Development Tools"
Set Timezone (Optional)
# Set your timezone
sudo timedatectl set-timezone America/Chicago
# Verify
timedatectl
Docker Installation
Remove Conflicting Packages
# Remove podman and buildah if installed
sudo dnf remove -y podman buildah
Add Docker Repository
# Install dnf config manager
sudo dnf install -y dnf-utils
# Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Install Docker
# Install Docker and related packages
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# If you encounter containerd.io conflicts, use:
# sudo dnf install docker-ce --allowerasing -y
Start and Enable Docker
# Start Docker service
sudo systemctl start docker
# Enable Docker to start on boot
sudo systemctl enable docker
# Verify Docker is running
sudo systemctl status docker
Verify Installation
# Check Docker version
docker --version
# Check Docker Compose version
docker compose version
# Test Docker installation
sudo docker run hello-world
User and Group Configuration
Add User to Docker Group
# Add current user to docker group
sudo usermod -aG docker $USER
# Apply group changes without logout
newgrp docker
# Verify group membership
groups $USER
Create Service User (Optional but Recommended)
# Create a dedicated user for Docker services
sudo useradd -r -s /bin/false -d /data dockersvc
# Add dockersvc user to docker group
sudo usermod -aG docker dockersvc
Set Proper Permissions
# Membership in the docker group (configured above) is the supported way
# to grant access. Avoid "chmod 666 /var/run/docker.sock": it lets any
# local user control Docker, which is effectively root access.
# If ownership has drifted, restore the default:
sudo chown root:docker /var/run/docker.sock
Firewall Configuration
Basic Firewall Setup
# Start and enable firewalld
sudo systemctl start firewalld
sudo systemctl enable firewalld
# Add docker0 interface to trusted zone
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
# Enable masquerading for Docker networking
sudo firewall-cmd --permanent --zone=public --add-masquerade
# Create a custom zone for Docker services (optional)
sudo firewall-cmd --permanent --new-zone=docker-services
sudo firewall-cmd --permanent --zone=docker-services --set-target=ACCEPT
Open Required Ports
# Homepage (Port 3000)
sudo firewall-cmd --permanent --zone=public --add-port=3000/tcp
# OpenSpeedTest (Ports 3001-3002)
sudo firewall-cmd --permanent --zone=public --add-port=3001/tcp
sudo firewall-cmd --permanent --zone=public --add-port=3002/tcp
# Portainer (Port 9000)
sudo firewall-cmd --permanent --zone=public --add-port=9000/tcp
# Nginx Proxy Manager (Ports 80, 81, 443)
sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
sudo firewall-cmd --permanent --zone=public --add-port=81/tcp
sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
# Pi-hole (Ports 53, 67, 80-alt)
sudo firewall-cmd --permanent --zone=public --add-port=53/tcp
sudo firewall-cmd --permanent --zone=public --add-port=53/udp
sudo firewall-cmd --permanent --zone=public --add-port=67/udp
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
# Grafana (Port 3003, matching the compose file's port mapping)
sudo firewall-cmd --permanent --zone=public --add-port=3003/tcp
# Prometheus (Port 9090)
sudo firewall-cmd --permanent --zone=public --add-port=9090/tcp
# GitLab (Ports 8081, 2424)
sudo firewall-cmd --permanent --zone=public --add-port=8081/tcp
sudo firewall-cmd --permanent --zone=public --add-port=2424/tcp
# SSH (should already be open)
sudo firewall-cmd --permanent --zone=public --add-service=ssh
# Apply all firewall changes
sudo firewall-cmd --reload
Verify Firewall Rules
# Check active zones
sudo firewall-cmd --get-active-zones
# List all rules for public zone
sudo firewall-cmd --zone=public --list-all
# Check if Docker integration is working
sudo firewall-cmd --zone=trusted --list-all
Directory Structure Setup
Create Main Data Directory
# Create the main /data directory
sudo mkdir -p /data
# Set ownership
sudo chown $USER:$USER /data
# Set permissions
sudo chmod 755 /data
Create Service Directories
# Create directories for each service
mkdir -p /data/homepage/config
mkdir -p /data/openspeedtest
mkdir -p /data/portainer
mkdir -p /data/nginx-proxy-manager/data
mkdir -p /data/nginx-proxy-manager/letsencrypt
mkdir -p /data/pihole/config
mkdir -p /data/pihole/dnsmasq
mkdir -p /data/grafana/data
mkdir -p /data/prometheus/config
mkdir -p /data/prometheus/data
mkdir -p /data/gitlab/config
mkdir -p /data/gitlab/logs
mkdir -p /data/gitlab/data
# Create shared directories
mkdir -p /data/logs
mkdir -p /data/backups
mkdir -p /data/compose
Set Directory Permissions
# Set proper ownership for specific services
sudo chown -R 472:472 /data/grafana/data # Grafana user
sudo chown -R 65534:65534 /data/prometheus/data # Prometheus user
sudo chown -R 1000:1000 /data/pihole # Pi-hole default user
# Ensure main user can access all directories
sudo chown -R $USER:$USER /data/homepage
sudo chown -R $USER:$USER /data/openspeedtest
sudo chown -R $USER:$USER /data/portainer
sudo chown -R $USER:$USER /data/nginx-proxy-manager
sudo chown -R $USER:$USER /data/gitlab
Docker Compose Configuration
Create Main Docker Compose File
# Navigate to compose directory
cd /data/compose
# Create the main docker-compose.yml file
nano docker-compose.yml
Complete Docker Compose Configuration
version: '3.8'
networks:
frontend:
driver: bridge
backend:
driver: bridge
volumes:
portainer_data:
prometheus_data:
grafana_data:
gitlab_data:
gitlab_logs:
gitlab_config:
services:
# Homepage - Dashboard
homepage:
image: ghcr.io/gethomepage/homepage:latest
container_name: homepage
restart: unless-stopped
ports:
- "3000:3000"
volumes:
- /data/homepage/config:/app/config
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- PUID=1000
- PGID=1000
- HOMEPAGE_ALLOWED_HOSTS=localhost:3000,10.100.30.104:3000 # replace 10.100.30.104 with your VM's IP
networks:
- frontend
# OpenSpeedTest - Network Speed Testing
openspeedtest:
image: openspeedtest/latest
container_name: openspeedtest
restart: unless-stopped
ports:
- "3001:3000" # HTTP
- "3002:3001" # HTTPS
# For custom data volume (optional), comment out if not needed:
# volumes:
# - /data/openspeedtest:/var/log/nginx
networks:
- frontend
# Portainer - Docker Management
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
ports:
- "9000:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer_data:/data
networks:
- frontend
# Nginx Proxy Manager - Reverse Proxy
nginx-proxy-manager:
image: jc21/nginx-proxy-manager:latest
container_name: nginx-proxy-manager
restart: unless-stopped
ports:
- "80:80"
- "81:81"
- "443:443"
volumes:
- /data/nginx-proxy-manager/data:/data
- /data/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
environment:
- DB_SQLITE_FILE=/data/database.sqlite
networks:
- frontend
# Pi-hole - DNS Ad Blocker
pihole:
image: pihole/pihole:latest
container_name: pihole
restart: unless-stopped
ports:
- "53:53/tcp"
- "53:53/udp"
- "67:67/udp"
- "8080:80/tcp"
volumes:
- /data/pihole/config:/etc/pihole
- /data/pihole/dnsmasq:/etc/dnsmasq.d
environment:
- TZ=America/Chicago
- WEBPASSWORD=changeme123
- DNS1=1.1.1.1
- DNS2=1.0.0.1
cap_add:
- NET_ADMIN
dns:
- 127.0.0.1
- 1.1.1.1
networks:
- frontend
# Prometheus - Metrics Collection
prometheus:
image: prom/prometheus:latest
container_name: prometheus
restart: unless-stopped
ports:
- "9090:9090"
volumes:
- /data/prometheus/config:/etc/prometheus
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--web.enable-lifecycle'
networks:
- backend
# Grafana - Metrics Visualization
grafana:
image: grafana/grafana:latest
container_name: grafana
restart: unless-stopped
ports:
- "3003:3000"
volumes:
- grafana_data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=changeme123
- GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource
networks:
- backend
- frontend
# GitLab - Git Repository and CI/CD
gitlab:
image: gitlab/gitlab-ce:latest
container_name: gitlab
restart: unless-stopped
hostname: 'gitlab.local'
ports:
- "8081:80"
- "2424:22"
volumes:
- gitlab_config:/etc/gitlab
- gitlab_logs:/var/log/gitlab
- gitlab_data:/var/opt/gitlab
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.local:8081'
gitlab_rails['gitlab_shell_ssh_port'] = 2424
gitlab_rails['initial_root_password'] = 'changeme123'
shm_size: '256m'
networks:
- frontend
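Before moving on, it's worth asking Compose to parse the file; `docker compose config` catches indentation and key errors without starting anything:

```shell
cd /data/compose
docker compose config --quiet && echo "docker-compose.yml parses cleanly"
```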
Service Configuration
Prometheus Configuration
# Create Prometheus configuration
cat > /data/prometheus/config/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['host.docker.internal:9100']

  - job_name: 'docker'
    static_configs:
      - targets: ['host.docker.internal:9323']
EOF
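One caveat about the node and docker jobs above: on Linux, host.docker.internal does not resolve inside containers by default, and node_exporter and the Docker daemon's metrics endpoint must be set up separately (neither is part of this guide). Assuming you add them, mapping the name to the host gateway in the prometheus service makes those targets reachable:

```yaml
# Sketch: extra lines for the prometheus service in docker-compose.yml
prometheus:
  # ...existing settings...
  extra_hosts:
    - "host.docker.internal:host-gateway"  # resolves to the Docker host's IP
```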
Homepage Configuration
# Create basic Homepage configuration
cat > /data/homepage/config/settings.yaml << 'EOF'
title: Home Dashboard
headerStyle: clean
statusStyle: dot
layout:
  - Services:
      style: row
      columns: 4
providers:
  docker:
    endpoint: unix:///var/run/docker.sock
EOF
# Create services configuration (replace YOUR_IP with your server's LAN IP)
cat > /data/homepage/config/services.yaml << 'EOF'
- Infrastructure:
    - Portainer:
        href: http://YOUR_IP:9000
        description: Docker Management
        icon: portainer.png
        server: my-docker
        container: portainer
    - Pi-hole:
        href: http://YOUR_IP:8080/admin
        description: DNS Ad Blocker
        icon: pi-hole.png
        server: my-docker
        container: pihole
    - Nginx Proxy Manager:
        href: http://YOUR_IP:81
        description: Reverse Proxy
        icon: nginx-proxy-manager.png
        server: my-docker
        container: nginx-proxy-manager

- Monitoring:
    - Grafana:
        href: http://YOUR_IP:3003
        description: Metrics Visualization
        icon: grafana.png
        server: my-docker
        container: grafana
    - Prometheus:
        href: http://YOUR_IP:9090
        description: Metrics Collection
        icon: prometheus.png
        server: my-docker
        container: prometheus

- Development:
    - GitLab:
        href: http://YOUR_IP:8081
        description: Git Repository
        icon: gitlab.png
        server: my-docker
        container: gitlab
    - Speed Test:
        href: http://YOUR_IP:3001
        description: Network Speed Test
        icon: speedtest-tracker.png
        server: my-docker
        container: openspeedtest
EOF
# Create Docker configuration
cat > /data/homepage/config/docker.yaml << 'EOF'
my-docker:
  host: unix:///var/run/docker.sock
EOF
Deployment
Deploy Services
# Navigate to compose directory
cd /data/compose
# Deploy all services
docker compose up -d
# Check deployment status
docker compose ps
# View logs for any issues
docker compose logs -f
Verify Services
# Check all containers are running
docker ps
# Check specific service logs
docker compose logs homepage
docker compose logs pihole
docker compose logs grafana
Post-Deployment Tasks
Pi-hole Setup
# Access Pi-hole admin interface at http://YOUR_IP:8080/admin
# Default password: changeme123
# Change the password:
docker exec -it pihole pihole -a -p
Nginx Proxy Manager Setup
# Access at http://YOUR_IP:81
# Default credentials:
# Email: admin@example.com
# Password: changeme
Grafana Setup
# Access at http://YOUR_IP:3003
# Default credentials:
# Username: admin
# Password: changeme123
# Add Prometheus data source: http://prometheus:9090
GitLab Setup
# GitLab will take several minutes to initialize
# Access at http://YOUR_IP:8081
# Username: root
# Password: changeme123
Security Hardening
# Change all default passwords immediately after deployment
# Update firewall rules to restrict access as needed
# Consider using Nginx Proxy Manager for SSL termination
# Regularly update container images
# Example: Update all containers
cd /data/compose
docker compose pull
docker compose up -d
Troubleshooting
Common Issues and Solutions
Container Won’t Start
# Check logs
docker compose logs SERVICE_NAME
# Check resource usage
docker stats
# Restart specific service
docker compose restart SERVICE_NAME
Port Conflicts
# Check what's using a port (ss is the modern replacement for netstat)
sudo ss -tulpn | grep :PORT
# Stop conflicting service
sudo systemctl stop SERVICE_NAME
Permission Issues
# Fix data directory permissions
sudo chown -R $USER:$USER /data/SERVICE_NAME
# Fix Docker socket access — prefer the docker group over chmod 666 (world-writable is risky)
sudo usermod -aG docker $USER   # log out and back in for this to take effect
# Quick-and-dirty alternative (not recommended long-term):
sudo chmod 666 /var/run/docker.sock
Firewall Issues
# Check if ports are open
sudo firewall-cmd --zone=public --list-ports
# Add missing port
sudo firewall-cmd --permanent --zone=public --add-port=PORT/tcp
sudo firewall-cmd --reload
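If several services are unreachable, the ports this stack publishes can be opened in one pass (a sketch matching the compose file above — trim the list to the services you actually expose, and note that Docker's own iptables rules often publish container ports regardless of firewalld, so verify from another host):

```shell
# Open every host port published by docker-compose.yml, then reload
for p in 80/tcp 81/tcp 443/tcp 2424/tcp 3000/tcp 3001/tcp 3002/tcp 3003/tcp \
         8080/tcp 8081/tcp 9000/tcp 9090/tcp 53/tcp 53/udp 67/udp; do
  sudo firewall-cmd --permanent --zone=public --add-port=$p
done
sudo firewall-cmd --reload
```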
Docker Network Issues
# Restart Docker service
sudo systemctl restart docker
# Recreate networks
docker compose down
docker compose up -d
Resource Monitoring
# Monitor system resources
htop
# Monitor Docker resources
docker stats
# Check disk usage
df -h
du -sh /data/*
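When /data starts filling up, it helps to rank its contents rather than eyeball the full du listing; this small helper shows the biggest offenders (it falls back to the current directory on machines without /data, so the commands always run):

```shell
# Show total size of the target plus its five largest entries, largest first
target=/data
[ -d "$target" ] || target=.
report=$(du -sh "$target" 2>/dev/null)
echo "Total: $report"
du -sh "$target"/* 2>/dev/null | sort -rh | head -5
```

Pair it with `docker system df` to see how much of that space is Docker images, volumes, and build cache.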
Tips and Reminders
Regular Maintenance
- Update containers monthly:
docker compose pull && docker compose up -d
- Monitor disk usage: Docker logs and images can consume significant space
- Backup configurations: take regular backups of the /data directory
- Security updates: keep Rocky Linux updated with dnf update
- Use SSD storage for better performance
- Monitor memory usage - increase VM memory if needed
- Consider resource limits in docker-compose.yml for production use
- Use caching where possible (Redis for caching layer)
Security Best Practices
- Change all default passwords before production use
- Use strong passwords for all services
- Enable SSL/TLS via Nginx Proxy Manager
- Restrict firewall rules to necessary ports only
- Regular security updates for both OS and containers
- Monitor logs for suspicious activity
Backup Strategy
# Create backup script
cat > /data/backup.sh << 'EOF'
#!/bin/bash
# Run as root (sudo /data/backup.sh): /var/lib/docker/volumes requires root
set -euo pipefail
BACKUP_DIR="/data/backups/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
# Backup configurations
tar -czf "$BACKUP_DIR/configs.tar.gz" /data/*/config/
tar -czf "$BACKUP_DIR/compose.tar.gz" /data/compose/
# Backup Docker volumes (stop services first for a consistent snapshot)
docker compose -f /data/compose/docker-compose.yml stop
tar -czf "$BACKUP_DIR/volumes.tar.gz" /var/lib/docker/volumes/
docker compose -f /data/compose/docker-compose.yml start
echo "Backup completed: $BACKUP_DIR"
EOF
chmod +x /data/backup.sh
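To run the script automatically, a crontab entry keeps backups hands-off — here 02:30 every Sunday, logging to /data/logs (adjust the schedule to taste):

```
# Edit the root crontab with: sudo crontab -e
# m  h  dom mon dow  command
30   2  *   *   0    /data/backup.sh >> /data/logs/backup.log 2>&1
```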
Useful Commands
# Quick service restart
docker compose restart SERVICE_NAME
# View all logs
docker compose logs -f
# Update specific service
docker compose pull SERVICE_NAME
docker compose up -d SERVICE_NAME
# Clean up unused resources
docker system prune -a
# Monitor resources
watch docker stats
This completes your Rocky Linux 9 Docker deployment guide. All services should now be accessible and functional. Remember to change default passwords and implement proper security measures before using in production environments.
Created using Perplexity AI
Windows Subsystem for Linux (WSL) Guide for Windows 11
Inspired by NetworkChuck’s practical approach, adapted for any Linux distro!
1. Prerequisites
- Windows 11 (preferred for best compatibility)
- 64-bit CPU with virtualization enabled
- Enable virtualization (Intel VT-x / AMD-V) in your BIOS/UEFI if not already enabled.
- Windows Terminal is recommended (available in the Microsoft Store).
2. Install WSL - Windows Subsystem for Linux
Open Windows Terminal as Administrator and run:
powershell
wsl --install
- This installs WSL 2 by default and also installs Ubuntu, but you can skip Ubuntu and install another distro (see below).
Reboot your PC when prompted.
3. Install a Different Linux Distribution
To see available Linux versions:
powershell
wsl --list --online
Example: Install Fedora Linux
powershell
wsl --install -d FedoraLinux-42
- Substitute FedoraLinux-42 with any listing from the previous command (openSUSE, Debian, Alpine, etc.).
Alternative: Via Microsoft Store
- Open the Microsoft Store
- Search for your preferred distro (e.g., Fedora, Debian, Alpine, SUSE)
- Click Install
4. Initial Setup
- On first launch, create a new UNIX username & password. This is your Linux account.
5. Keeping Your WSL Distributions Up-to-Date
Update the Linux Kernel (if prompted):
powershell
wsl --update
Update Linux Packages (inside your distro):
- For Debian/Ubuntu/derived:
sudo apt update && sudo apt upgrade -y
- For Fedora:
sudo dnf upgrade --refresh
- For openSUSE:
sudo zypper refresh
sudo zypper update
6. Basic WSL Management
- List installed distros:
wsl --list --verbose
- Shutdown WSL completely:
wsl --shutdown
- Change default distro:
wsl --set-default <distro-name>
- Terminate a single distro:
wsl --terminate <distro-name>
7. Troubleshooting: Resetting the Root Password
If you forget/lost your root password for a WSL distro:
a. Launch as Root (no password needed):
powershell
wsl -d <distro-name> -u root
b. Reset Your User Password
passwd <your-username>
(Enter and confirm your new password.)
List all users if needed:
cut -d: -f1 /etc/passwd
c. Exit and Relaunch Normally
- Close the terminal and start your Linux session as usual.
- Access Windows files from Linux:
cd /mnt/c
- Access Linux files from Windows:
Open Windows Explorer and go to: \\wsl$\<distro-name>\home\<your-username>
- Backing Up & Restoring WSL Instances:
- Backup:
powershell
wsl --export <distro-name> <backup-file.tar>
- Restore:
powershell
wsl --import <new-distro-name> <install-location> <backup-file.tar>
Bonus: GUI Integration and Docker Support
A. GUI Integration with WSLg
Windows 11 (and updated Windows 10) includes WSLg (Windows Subsystem for Linux GUI), so you can run Linux graphical apps natively.
How to Use
- Launch GUI apps in your Linux terminal, e.g.:
xeyes &
gedit &
- No extra setup needed; Windows handles display and audio.
- GUI apps appear alongside Windows apps.
To install more GUI apps:
- For Fedora:
sudo dnf install gedit xeyes x11-tools
- For openSUSE:
sudo zypper install gedit xeyes x11-tools
B. Docker Support on WSL2
- Install Docker Desktop (from the official Docker website).
- During setup, select “Use WSL2 instead of Hyper-V”.
- Launch Docker Desktop, verify your WSL2 distros are listed in
Settings > Resources > WSL Integration.
- Enable integration for your distro.
- From your WSL terminal, test Docker:
docker --version
docker run hello-world
Enjoy combining the best of Linux and Windows!
Video Preview: Linux on Windows……Windows on Linux

Watch here: Linux on Windows……Windows on Linux
Short Description:
This video by NetworkChuck provides an in-depth, practical guide to using WSL 2 on Windows 11. It covers installation, setup, command and GUI interoperability, Docker usage, file integration, and advanced troubleshooting—perfect for anyone wanting to combine the power of Linux with the convenience of Windows!
Created using Perplexity AI
Setting Up a Home Lab PKI with XCA
1. Setting Up XCA and Creating a New Database
- Open XCA and create a new database (File → New) to store keys, CAs, and certificates.
- Use separate databases for different environments if needed for isolation.
2. Creating a Root CA with Templates
- Go to the Templates tab → New Template.
- Select a preset like [default] CA and create a Root CA template.
- On the Extensions tab, set a long validity period (e.g., 10-20 years) and key usage for CA signing rights.
- Create a new Root CA certificate with this template by clicking Certificates → New Certificate.
- Under Subject, fill in CA identifying info (Common Name, Organization).
- Click Generate new key for a secure RSA key (recommend 2048-bit minimum).
- Sign this certificate with its own key (self-signed).
3. Creating an Intermediate CA
- Create a new certificate using the Root CA for signing.
- Use the [default] CA template again but adjust validity for a shorter time (e.g., 5 years).
- Generate a new key for the Intermediate CA.
- The Intermediate CA is used for signing end-entity certificates, providing better operational security by keeping root CA offline.
4. Creating Certificates with Key Usages using Templates
- Create templates for end-entity certificates (e.g., web server, client) and set the appropriate Key Usage and Extended Key Usage (serverAuth, clientAuth) on the Extensions tab.
5. Issuing Certificates from Templates
- Use Certificates → New Certificate, select the appropriate template.
- Fill relevant Subject fields, apply extensions automatically.
- Sign web server or DPI certificates with the Intermediate CA key.
- Export certificates and keys for deployment.
Security Best Practices for XCA and Home Lab PKI
- Use strong cryptographic algorithms (RSA 2048-bit or higher, or ECC).
- Keep your Root CA offline or highly protected; use Intermediate CA for routine signing.
- Protect private keys with passwords and store them securely (e.g., encrypted volumes, hardware security modules).
- Regularly rotate keys and certificates especially for intermediates and end-entity certs.
- Implement strict access controls and multi-factor authentication for CA management.
- Maintain backups of the CA databases and keys securely.
- Use certificate revocation lists (CRLs) or OCSP to manage revoked certificates.
- Document certificate policies, key usage constraints, and certificate lifetimes.
Importing CA Certificates on Systems
Windows
- Double-click the Root CA certificate file (.crt or .cer).
- Click “Install Certificate.”
- Choose Local Machine store and run as Administrator.
- Navigate to Trusted Root Certification Authorities → Certificates.
- Use Import wizard, select the CA cert and import.
- Confirm and finish; restart browsers if needed.
Linux (Ubuntu/Debian example)
- Copy your CA certificate (.crt) to /usr/local/share/ca-certificates/:
sudo cp your-ca.crt /usr/local/share/ca-certificates/
- Update CA certificates:
sudo update-ca-certificates
- For other distros like RHEL/CentOS or Fedora, use the equivalent CA cert directory and trust update commands.
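For the RHEL family specifically (Rocky, CentOS Stream, Fedora), the anchor directory and update command are:

```shell
# RHEL/Rocky/Fedora: the trust anchor directory differs from Debian's
sudo cp your-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
```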
This guide equips users to establish a private PKI with XCA for home lab use, including creating root and intermediate CAs, issuing certificates with appropriate key usages, applying security best practices, and deploying CA certificates on client systems.
Created using Perplexity AI
Guide to Setting Up Custom Domain on Cloudflare Pages with GitHub Pages
1. Account Creation
- Create a Cloudflare account: Sign up for a free account on Cloudflare by providing your email and setting a password.
- Create a GitHub account: If you do not have one, create a GitHub account to host your website repository.
2. Prepare Your Website on GitHub Pages (Updated)
- Use Your Free Personal Website: Every GitHub user gets one free personal website at
username.github.io, tied to a repository named username.github.io. Content pushed to this repository publishes directly to your personal GitHub Pages domain.
- Create Your Repository: Use your personal website repository or create a separate repository for project sites.
- Use a Template Generator: It is highly recommended to use static site generators or templates for easier site management:
- Jekyll: Natively supported by GitHub Pages; allows blog and static site use with Markdown and Liquid templates.
- Other popular generators: Hugo, Gatsby, or simple HTML/CSS templates.
- Add Your Content: Push website files or Jekyll source (including
_config.yml) to the repository.
- Enable GitHub Pages: In repository settings, enable GitHub Pages to publish your site. Your site is then available at
https://username.github.io or https://username.github.io/repository-name.
3. Connect Cloudflare Pages to GitHub
- Log in to Cloudflare dashboard.
- Navigate to Workers & Pages > Pages, select Create a Project.
- Connect to your GitHub repository with your website files.
- Set build commands if needed (e.g.,
exit 0 for no build).
- Deploy to get your Cloudflare Pages subdomain (
<your-site>.pages.dev).
4. Add a Custom Domain on Cloudflare Pages
- In your Cloudflare Pages project, go to Custom domains > Setup a custom domain.
- Enter your custom domain (e.g.,
www.example.com).
- For apex domains, add your domain to Cloudflare and update nameservers at your domain registrar as provided by Cloudflare.
- For subdomains, add CNAME record pointing to your Cloudflare Pages subdomain if you keep DNS with another provider.
5. DNS Setup If Domain Is Not Hosted on Cloudflare
- At your current domain registrar:
- For full Cloudflare hosting (apex domains), update your nameservers to Cloudflare’s.
- For subdomains, add CNAME to point subdomain (e.g.,
www) to your Cloudflare Pages domain (<your-site>.pages.dev).
- Allow time for DNS propagation (up to 24-48 hrs).
6. Verify and Activate
- Verify domain ownership in Cloudflare Pages.
- Activate the custom domain.
- Cloudflare will provision SSL for HTTPS.
- Your custom domain now serves your website hosted on Cloudflare Pages backed by your GitHub repository.
This setup ensures automated deployment from GitHub with Cloudflare delivering content securely and efficiently.
Created using Perplexity AI
Watch “What is DNS? (and how it makes the Internet work)” on YouTube
This video by NetworkChuck explains how DNS (Domain Name System) serves as the crucial translator between the web addresses we type and the IP addresses computers use—illustrating how DNS failures can break internet and application connectivity in ways that mimic physical network problems.
Why DNS Issues Are Often Mistaken for Network Issues
DNS (Domain Name System) is a foundational component of all internet and internal network communication. When DNS fails, it can appear as if the entire network is down—even when physical connectivity is perfectly fine. This brief explains the impact on both client-server and server-server communication and why even experienced System Engineers often mistake DNS failures for network problems.
Impact on Client-Server and Server-Server Communication
- Clients rely on DNS: User devices and apps use DNS to resolve server names (e.g., mail, web, or API servers). DNS failure means clients can’t locate or connect to these services.
- Server-to-server communication breaks: Distributed applications, clusters, and microservices that reference each other by hostname will fail to connect if DNS does not resolve, even if all machines are online and reachable by IP.
- Automation and scheduled tasks fail: Scripts, services, and scheduled jobs that depend on name resolution will silently break, potentially causing outages that mimic broader network failures.
Why DNS Failures Mimic Physical Network Outages
- Identical symptoms: Timeouts, unreachable hosts, dropped connections, and “cannot connect” errors occur whether DNS or physical networking is broken.
- Partial connectivity confusion: Ping by hostname fails, but ping by direct IP succeeds—an easy diagnostic to overlook.
- Widespread effects: One DNS server outage can affect hundreds or thousands of systems at once, appearing as a catastrophic network problem.
- Complex environments deepen confusion: In networks with load balancers, proxies, or service discovery, DNS is deeply integrated. DNS failures can manifest as total unreachability or cascading application failures.
- Monitoring doesn’t always distinguish: Many monitoring and alerting systems use hostnames. A DNS failure makes all monitored endpoints appear down, regardless of real network health.
Example Scenario
A system engineer investigating why servers cannot communicate may:
- Observe all connections failing.
- Assume the switch, router, or firewall is at fault.
- Overlook that only name-based connections fail, while connections using direct IPs work.
- Spend hours checking physical and link-layer connectivity, before realizing the root cause is DNS (e.g., a misconfiguration, failed server, or expired record).
Key Takeaway
Because DNS underpins the very ability for computers to find each other, its failure breaks critical network functions—masking as a total network outage. Accurate troubleshooting requires checking both DNS and underlying connectivity, even for seasoned professionals.
Created using Perplexity AI
Tech Tip: Fixing 421 Misdirected Request SNI Issues Between HAProxy and Apache
Overview
The HTTP 421 Misdirected Request error occurs when Apache receives an HTTPS request with an SNI hostname that doesn’t match its configured virtual hosts. This often happens when HAProxy is used as a reverse proxy in front of Apache and does not properly forward or handle the Server Name Indication (SNI) during TLS negotiation.
Why It Happens
- Apache requires the correct SNI hostname during the TLS handshake to serve the appropriate site.
- Newer Apache versions enforce stricter SNI checks due to security improvements.
- If HAProxy does not forward SNI information correctly, Apache returns a 421 error indicating the request was sent to a server that cannot handle it.
How to Fix It
HAProxy Configuration (SSL Passthrough)
frontend https-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend apache-https-backend

backend apache-https-backend
    mode tcp
    server apache1 192.168.0.2:443
Key points:
- Use mode tcp and a plain bind line (no ssl/crt) for SSL passthrough: HAProxy forwards the TLS handshake untouched, so the client's SNI reaches Apache directly.
- The tcp-request lines wait for the TLS ClientHello, which lets you later add ACLs on req_ssl_sni to route different hostnames to different backends.
- Add send-proxy-v2 to the server line only if Apache is configured to accept the PROXY protocol (RemoteIPProxyProtocol On); otherwise Apache will drop the connection.
HAProxy Configuration (SSL Termination)
frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/haproxy.pem
    mode http
    default_backend apache-backend

backend apache-backend
    mode http
    server apache1 192.168.0.2:80 check
Notes:
- HAProxy terminates SSL and proxies plain HTTP to Apache.
- Ensure Apache virtual hosts are correctly configured to handle the forwarded Host header.
Apache Virtual Host Example
<VirtualHost *:443>
    ServerName example.com
    ServerAlias www.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
</VirtualHost>
- Virtual hosts should have the correct
ServerName matching the SNI hostname.
- Ensure SSL certificates are properly configured.
Testing & Validation
Use curl to test and confirm no 421 errors. curl takes its SNI value from the URL hostname (not the Host header), so point the hostname at your proxy with --resolve:
curl -Ik --resolve example.com:443:<haproxy-ip> https://example.com
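To see exactly which certificate comes back for a given SNI name (useful when diagnosing 421s across several virtual hosts), openssl s_client sends the name explicitly — replace haproxy-ip with your proxy's address:

```shell
# Print the subject and issuer of the certificate served for this SNI name
openssl s_client -connect haproxy-ip:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```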
Summary
- The 421 error is caused by an SNI mismatch between HAProxy and Apache.
- Properly forward SNI from HAProxy to Apache when using SSL passthrough.
- Configure Apache virtual hosts to match the SNI hostname.
- Validate the setup using HTTPS clients like curl.
This tip helps avoid 421 Misdirected Request errors in modern HAProxy-Apache reverse proxy TLS setups.
Created using Perplexity AI
▶️YouTube search: Learn Jekyll
If you’re looking to learn Jekyll and deploy static sites using GitHub Pages, these recent and popular YouTube videos offer step-by-step guidance for beginners and intermediate users.
1. Create an Online Resume Using GitHub Pages and Jekyll: Step-by-Step Guide
Published: May 2025 (41 min)
This comprehensive tutorial walks you through creating an online resume on GitHub Pages with Jekyll. Covers forking a template, editing with GitHub, and using GitHub Copilot for troubleshooting and personalization.

2. Learn Jekyll with GitHub Pages (Playlist)
Published: April 2025 (Multiple Episodes)
A playlist covering the essentials: starting your blog, themes, layouts, and advanced options for GitHub Pages hosting with Jekyll.
Note: For playlists, the thumbnail shows the first video.
3. Minimal Mistakes Jekyll Theme Tutorial
Published: Oct 2024 (1 hr 3 min)
Deep dive into the Minimal Mistakes theme with custom layouts, navigation, and other advanced features. Perfect for creating professional, polished websites.

4. Jekyll Static Site Generator & GitHub Pages Quickstart
Published: Oct 2024 (48 min)
A practical walkthrough from install to deployment: craft a local Jekyll site, push to GitHub, work with themes (Minima, Hacker), and automate with GitHub Actions. Great for first-timers!

5. How to Build a Website | Github Pages | Jekyll | Template
Published: Feb 2022 (11 min)
A beginner-friendly video for launching your personal
Created using Perplexity AI
Jekyll Front Matter Guide
Jekyll uses front matter, a block of YAML options placed at the top of your file, to add metadata and configure processing. This guide covers how to format front matter and all common configuration options for Jekyll posts and pages.
What is Front Matter?
Front matter is a YAML block at the very top of your file, surrounded by triple dashes (---). Jekyll reads this block to process the file—assigning templates, defining variables, or specifying metadata.
Example basic front matter:
---
layout: post
title: "Blogging Like a Hacker"
date: 2025-08-26
categories: jekyll guide
tags: [jekyll, blog, static-site]
author: "Jane Doe"
published: true
excerpt: "A quick intro to writing blog posts with Jekyll."
permalink: /blogging/introduction/
---
Anything below the front matter block is normal Markdown, HTML, or Liquid.
Required Syntax
- Start with three dashes (
---) on a new line.
- Add your key-value pairs (YAML).
- End with three dashes (
---) on a new line.
- No blank lines before the first dash!
Common Front Matter Variables
| Variable | Applies to | Description |
|----------|------------|-------------|
| layout | post/page | Template file to use from _layouts (default, post, page, etc.) |
| title | post/page | Title for the content |
| date | post/page | Publish/sort date. Format: YYYY-MM-DD HH:mm:ss +/-ZZ:ZZ (time optional) |
| categories | post/page | List or string; assign categories for grouping posts |
| tags | post/page | List or string; assign tags for filtering/metadata |
| author | post/page | Author attribution |
| published | post/page | true/false; mark as published/unpublished |
| excerpt | post/page | Short summary for lists/previews |
| permalink | post/page | Custom URL for the file/page |
| description | post/page | SEO, meta, preview, or context description |
| redirect_to | page | List or string; URLs to redirect this page to |
| hidden | page | Prevent from showing in navigation (used in some themes) |
| lang | page | Language code (e.g., en, for translation/localization support) |
| toc | page | Enable/disable automatic table of contents |
| showMiniToc | page | Enable/disable GitHub Docs-style mini TOC (true/false) |
| draft | post | Mark as draft (not published until manually published) |
| sitemap | post/page | Exclude from sitemap.xml (false) |
Examples
---
categories:
  - guides
  - markdown
tags:
  - tips
  - beginners
---
You may also use space-separated strings:
---
categories: guides markdown
tags: tips beginners
---
Multiline Variables
For long descriptions or excerpts, you can use YAML’s multiline format.
Folded style (line breaks become spaces):
---
description: >
  This page demonstrates advanced usage of Jekyll front matter.
  You can have multiple lines here.
---
Literal style (preserves newlines):
---
excerpt: |-
  This is a paragraph
  with multiple lines
  exactly as written.
---
Custom Variables
You can create any key-value you want:
---
layout: page
title: Custom Page
custom_variable: "Any value you like"
---
Then display the value anywhere in your content or layout with {{ page.custom_variable }}.
Special Page Types
- Posts (in
_posts): Jekyll automatically adds a date and slug based on filename.
- Pages (anywhere): Use
layout: page or custom layouts, add custom front matter as needed.
Optional: Empty Front Matter
To enable Liquid templating on a plain page, use empty front matter:
---
---
Best Practices & Tips
- Always start files with front matter in triple dashes.
- Use lists for
categories and tags for clarity.
- Prefer ISO 8601 date formats for international compatibility.
- Avoid unsupported YAML features (anchors, advanced references).
- Escape single quotes using two single quotes or switch to double quotes for safety.
- Preview your site: Jekyll reporting will indicate front matter errors.
- Use custom variables to pass data globally to layouts, includes, or navigation.
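As an example of that last tip, a layout reads any front-matter key through the page object; toc and author here are variables from the table above, and toc.html is a hypothetical include:

```liquid
<h1>{{ page.title }}</h1>

{% comment %} page.toc comes straight from this page's front matter {% endcomment %}
{% if page.toc %}
  {% include toc.html %}
{% endif %}

<p>By {{ page.author | default: "Anonymous" }}</p>
```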
Resources
You’re now ready to format front matter for any Jekyll post, page, or collection!
Created using Perplexity AI
Beginner’s Guide to Creating Markdown Files
Markdown is a lightweight markup language that makes it easy to format text for web pages, documentation, and README files. This guide will teach you everything you need to know to start creating your own .md files.
What is Markdown?
Markdown is a simple way to add formatting to plain text documents. It uses special characters and symbols to create headings, lists, links, and other formatting elements. The best part? It’s designed to be readable even in its raw form.
Creating Your First Markdown File
- Open a text editor (VS Code, Notepad++, Sublime Text, or even basic Notepad)
- Create a new file and save it with a
.md extension
- Example:
README.md, notes.md, guide.md
- Start writing using Markdown syntax
Basic Markdown Syntax
Headings
Use # symbols to create headings. More # symbols = smaller heading:
# Heading 1 (Largest)
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6 (Smallest)
Text Formatting
**Bold text**
*Italic text*
***Bold and italic***
~~Strikethrough~~
Lists
Unordered Lists:
- First item
- Second item
- Third item
- Sub-item
- Another sub-item
Ordered Lists:
1. First item
2. Second item
3. Third item
1. Sub-item
2. Another sub-item
Links and Images
Links:
[Link text](https://www.example.com)
[GitHub](https://github.com)
Images:


Code
Inline code:
Use `backticks` for inline code.
Code blocks:
```
This is a code block
You can write multiple lines here
```
Code blocks with syntax highlighting:
```python
def hello_world():
print("Hello, World!")
```
Blockquotes
> This is a blockquote
> It can span multiple lines
>
> And even include multiple paragraphs
Tables
| Header 1 | Header 2 | Header 3 |
|----------|----------|----------|
| Row 1 | Data | Data |
| Row 2 | Data | Data |
Horizontal Rules
Create horizontal lines with three or more dashes:
Advanced Elements
Task Lists
- [x] Completed task
- [ ] Incomplete task
- [ ] Another task
Line Breaks
- End a line with two spaces for a line break
- Use a blank line for a paragraph break
Escaping Characters
Use backslash \ to escape special characters:
\*This won't be italic\*
\# This won't be a heading
Common File Types
README.md - Project documentation
CHANGELOG.md - Version history
CONTRIBUTING.md - Contribution guidelines
LICENSE.md - License information
Example README.md
Here’s a sample README file structure:
# Project Title
Brief description of your project.
## Features
- Feature 1
- Feature 2
- Feature 3
## Installation
```bash
git clone https://github.com/username/project.git
cd project
npm install
Usage
Explain how to use your project here.
Contributing
Pull requests are welcome. For major changes, please open an issue first.
License
MIT
```
Tips for Better Markdown
- Keep it simple - Markdown is meant to be readable
- Use consistent formatting - Pick a style and stick with it
- Preview your work - Many editors show live previews
- Learn as you go - Start with basics and add complexity over time
Where Markdown is Used
- GitHub - README files, issues, pull requests
- Static site generators - Jekyll, Hugo, Gatsby
- Documentation platforms - GitBook, Notion
- Note-taking apps - Obsidian, Typora
- Forums and chat - Discord, Slack, Reddit
Resources
Happy writing! 🎉 Start with the basics and gradually incorporate more advanced features as you become comfortable with Markdown.