Appendix B: Build-From-Scratch Linux Toolbox
A working Linux engineer should have implemented each of the following at least once.
B.1 A Self-Healing systemd Service
# /etc/systemd/system/self-heal.service
[Unit]
Description=Self-healing application
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=300
StartLimitBurst=5

[Service]
Type=notify
ExecStart=/usr/local/bin/myapp
WatchdogSec=30s
Restart=always
RestartSec=2s
TimeoutStopSec=10
NotifyAccess=main
# health-check via WatchdogSec: app calls sd_notify(0, "WATCHDOG=1") periodically
# if it stops, systemd kills and restarts
# (paste the hardening block from Appendix A here)
[Install]
WantedBy=multi-user.target
The application calls sd_notify(0, "READY=1") once initialization is complete, then sd_notify(0, "WATCHDOG=1") periodically. If the watchdog timer expires, systemd kills the service and restarts it. This is the standard self-healing pattern in production. (Note that StartLimitIntervalSec= and StartLimitBurst= belong in [Unit] on current systemd; the older StartLimitInterval= spelling in [Service] is a legacy alias.)
B.2 A Minimal Init (PID 1)
For containers or micro-VMs:
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>
static pid_t child;
static void forward(int sig) { if (child > 0) kill(child, sig); }

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    child = fork();
    if (child == 0) {
        execvp(argv[1], argv + 1);
        _exit(127);                          /* exec failed */
    }
    /* PID 1 must forward signals and reap zombies */
    signal(SIGTERM, forward);
    signal(SIGINT, forward);
    for (;;) {
        int status;
        pid_t p = waitpid(-1, &status, 0);   /* also reaps reparented orphans */
        if (p == child)
            return WIFEXITED(status) ? WEXITSTATUS(status)
                                     : 128 + WTERMSIG(status);
    }
}
Use tini or dumb-init in production; this version is for understanding the pattern.
B.3 A Hand-Built Container
(Sketch; see the Month 3 lab.)
- clone(CLONE_NEWUSER | CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC | CLONE_NEWNET | CLONE_NEWCGROUP, ...)
- Write UID/GID maps via /proc/<pid>/uid_map and gid_map (unprivileged: write "deny" to setgroups before gid_map).
- Mount proc, sysfs, tmpfs inside.
- pivot_root into rootfs.
- Configure veth pair on the host; move one end into the new netns.
- execve user command.
B.4 A Kernel Module Skeleton
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
static int __init mymod_init(void) {
pr_info("mymod loaded\n");
return 0;
}
static void __exit mymod_exit(void) {
pr_info("mymod unloaded\n");
}
module_init(mymod_init);
module_exit(mymod_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("you");
MODULE_DESCRIPTION("skeleton");
Makefile (recipe lines must start with a tab):
obj-m += mymod.o
KDIR := /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
B.5 An eBPF Skeleton (libbpf + CO-RE)
prog.bpf.c:
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
struct {
__uint(type, BPF_MAP_TYPE_RINGBUF);
__uint(max_entries, 1 << 20);
} events SEC(".maps");
SEC("tracepoint/syscalls/sys_enter_execve")
int handle_execve(void *ctx) {
char *e = bpf_ringbuf_reserve(&events, 16, 0);
if (!e)
    return 0;                    /* ring buffer full: drop the event */
bpf_get_current_comm(e, 16);     /* record the exec'ing task's comm */
bpf_ringbuf_submit(e, 0);
return 0;
}
char LICENSE[] SEC("license") = "GPL";
User-side: use libbpf skeletons (bpftool gen skeleton prog.bpf.o > prog.skel.h). The full pattern is in the libbpf-bootstrap repository; clone it and modify.
B.6 A seccomp-BPF Allowlist
#include <seccomp.h>
scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);  /* default: kill on any other syscall */
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
// ... only what you actually need
seccomp_load(ctx);       /* installs the filter; irreversible for this process */
seccomp_release(ctx);    /* frees userspace state only (link with -lseccomp) */
For systemd-managed services, prefer SystemCallFilter= directives; this raw API is for embedded code, sandboxes, and runtime libraries.
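For a systemd-managed service, the same allowlist idea is a few directives in a drop-in; a minimal sketch, with the unit name and path hypothetical:

```ini
# /etc/systemd/system/myapp.service.d/syscalls.conf  (hypothetical unit)
[Service]
SystemCallFilter=@system-service
SystemCallFilter=~@privileged @resources
SystemCallErrorNumber=EPERM
SystemCallArchitectures=native
```

The second SystemCallFilter= line subtracts the @privileged and @resources groups from the allowed set; SystemCallErrorNumber= makes violations return EPERM instead of killing the process, which is often easier to debug.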
B.7 A udev Rule
/etc/udev/rules.d/99-mydev.rules:
SUBSYSTEM=="usb", ATTR{idVendor}=="abcd", ATTR{idProduct}=="1234", \
MODE="0660", GROUP="plugdev", SYMLINK+="mydev"
Apply with udevadm control --reload-rules && udevadm trigger.
B.8 A Repeatable VM Lab Setup
The final, hidden artifact: a Vagrantfile (or a qemu script) that boots a fresh Ubuntu or Fedora VM with cloud-init and pre-installs bpftrace, perf, trace-cmd, auditd, cryptsetup, and the toolbox above. Every lab in this curriculum should be reproducible from this base in under five minutes.
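A minimal sketch of such a Vagrantfile, assuming an Ubuntu box and the libvirt provider (box name, provider, and package names are assumptions; on Ubuntu, perf ships in the linux-tools packages):

```ruby
# Vagrantfile -- base image for every lab
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2204"        # assumption: swap for your base box
  config.vm.provider :libvirt do |lv|         # assumption: or :virtualbox, etc.
    lv.cpus   = 4
    lv.memory = 4096
  end
  # Provision the tracing/security toolbox in one shot.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
      bpftrace linux-tools-common "linux-tools-$(uname -r)" \
      trace-cmd auditd cryptsetup build-essential
  SHELL
end
```

`vagrant up && vagrant ssh` boots the lab; `vagrant destroy -f && vagrant up` is the "fresh base in under five minutes" reset.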