# Cilium — How It Works

eBPF data plane internals, packet flow, Hubble observability pipeline, and Tetragon security enforcement.
## eBPF Data Plane
Traditional CNIs use iptables — a linear chain of rules whose traversal cost grows with the number of rules (O(n)). Cilium replaces this with eBPF hash maps that provide constant-time, O(1) average-case lookups in kernel space.
```mermaid
flowchart LR
    subgraph Traditional["iptables-based CNI"]
        PKT1["Packet"] --> R1["Rule 1"] --> R2["Rule 2"] --> R3["Rule 3"] --> RN["Rule N\n(O(n) traversal)"]
    end
    subgraph Cilium_DP["Cilium eBPF Data Plane"]
        PKT2["Packet"] --> MAP["eBPF Hash Map\n(O(1) lookup)"] --> Action["Allow / Drop / Redirect"]
    end
    style Traditional fill:#c62828,color:#fff
    style Cilium_DP fill:#2e7d32,color:#fff
```
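The complexity difference above can be sketched as a toy model — plain Python data structures standing in for kernel maps, with simplified `(src_identity, dst_identity, dst_port)` keys that are illustrative, not Cilium's actual map layout:

```python
# Toy model (not real eBPF): contrast iptables-style linear rule
# traversal with a Cilium-style hash-map lookup.

def iptables_verdict(rules, packet):
    """O(n): walk the chain until a rule matches."""
    for match, verdict in rules:
        if match == packet:
            return verdict
    return "DROP"  # default policy at end of chain

def ebpf_verdict(policy_map, packet):
    """O(1) average: a single hash-map lookup."""
    return policy_map.get(packet, "DROP")

rules = [((1, 2, 80), "ACCEPT"), ((1, 3, 443), "ACCEPT")]
policy_map = dict(rules)

pkt = (1, 3, 443)
assert iptables_verdict(rules, pkt) == ebpf_verdict(policy_map, pkt) == "ACCEPT"
```

Both paths return the same verdict; the difference is that the hash-map cost stays flat as the rule set grows.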
## Packet Flow — Pod-to-Pod (Same Node)
```mermaid
sequenceDiagram
    participant PodA as Pod A
    participant VETH_A as veth (PodA)
    participant TC_A as TC eBPF (egress)
    participant TC_B as TC eBPF (ingress)
    participant VETH_B as veth (PodB)
    participant PodB as Pod B
    PodA->>VETH_A: Send packet
    VETH_A->>TC_A: eBPF TC hook (egress)
    TC_A->>TC_A: L3/L4/L7 policy check
    TC_A->>TC_A: Conntrack lookup
    TC_A->>TC_B: Direct redirect (bpf_redirect)
    TC_B->>VETH_B: Deliver to PodB veth
    VETH_B->>PodB: Receive packet
    Note over TC_A,TC_B: Bypasses host network stack entirely
```
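The egress hook's decision sequence can be sketched roughly as follows. All names here (`policy_map`, `conntrack`, `ifindex`, the verdict tuples) are illustrative stand-ins, not Cilium's actual map or helper names:

```python
# Hypothetical model of the TC egress hook for same-node traffic:
# conntrack fast path, policy check on first packet, then a direct
# redirect to the peer veth's interface index (as bpf_redirect does).

conntrack = {}          # flow -> state
policy_map = {("podA", "podB", 80): "ALLOW"}
ifindex = {"podB": 12}  # dest pod -> host-side veth ifindex

def tc_egress(src, dst, port):
    flow = (src, dst, port)
    if flow in conntrack:                # established: skip policy check
        return ("redirect", ifindex[dst])
    if policy_map.get(flow) != "ALLOW":  # first packet: consult policy
        return ("drop", None)
    conntrack[flow] = "ESTABLISHED"      # record state for later packets
    # bpf_redirect() hands the frame straight to the peer interface,
    # bypassing the host routing stack
    return ("redirect", ifindex[dst])

assert tc_egress("podA", "podB", 80) == ("redirect", 12)
assert tc_egress("podA", "podC", 80) == ("drop", None)
```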
## Packet Flow — Pod-to-Service (ClusterIP)
```mermaid
sequenceDiagram
    participant Pod as Client Pod
    participant eBPF as eBPF (socket/TC)
    participant CT as Conntrack Map
    participant SVC as Service Map
    participant Backend as Backend Pod
    Pod->>eBPF: connect() to ClusterIP:port
    eBPF->>SVC: Lookup Service → backend selection
    eBPF->>eBPF: DNAT to backend Pod IP
    eBPF->>CT: Create conntrack entry
    eBPF->>Backend: Forward packet directly
    Note over eBPF: Socket-level LB: resolved at connect(),<br/>not per-packet like kube-proxy/iptables
```
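A minimal sketch of that socket-level translation: the ClusterIP is rewritten to a concrete backend once, at `connect()` time, and the choice is pinned in conntrack so later packets never repeat the service lookup. Map names, addresses, and the random selection policy are all illustrative assumptions:

```python
# Sketch of socket-level load balancing (cf. kube-proxy, which
# rewrites every packet): one backend pick per connection.
import random

service_map = {("10.96.0.10", 443): ["10.0.1.5", "10.0.2.7"]}
conntrack = {}

def sock_connect(client, cluster_ip, port):
    backends = service_map[(cluster_ip, port)]
    backend = random.choice(backends)                # backend selection
    conntrack[(client, cluster_ip, port)] = backend  # pin the DNAT
    return backend  # the socket now talks to the backend IP directly

b = sock_connect("10.0.3.9", "10.96.0.10", 443)
assert b in service_map[("10.96.0.10", 443)]
# Every subsequent packet on this socket reuses the pinned entry:
assert conntrack[("10.0.3.9", "10.96.0.10", 443)] == b
```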
## Hubble Observability Pipeline
```mermaid
flowchart TB
    subgraph Kernel["Kernel Space"]
        eBPF_H["eBPF Programs\n(TC, socket, XDP)"]
        PerfBuf["Perf Event Buffer\n(flow events)"]
    end
    subgraph Userspace["Hubble Stack"]
        HubbleAgent["Hubble Agent\n(per-node, embedded in cilium-agent)"]
        HubbleRelay["Hubble Relay\n(cluster-wide aggregation)"]
        HubbleUI["Hubble UI\n(service map, flow table)"]
    end
    subgraph Export["Export"]
        Prom["Prometheus\n(metrics)"]
        OTEL["OpenTelemetry\n(traces)"]
        SIEM["SIEM / Log\n(JSON export)"]
    end
    eBPF_H -->|"perf events"| PerfBuf
    PerfBuf --> HubbleAgent
    HubbleAgent -->|"gRPC"| HubbleRelay
    HubbleRelay --> HubbleUI
    HubbleRelay --> Prom
    HubbleRelay --> OTEL
    HubbleRelay --> SIEM
    style Kernel fill:#f9a825,color:#000
    style Userspace fill:#7b1fa2,color:#fff
```
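The fan-in above can be sketched as a toy pipeline: per-node agents buffer flow events in a bounded ring (standing in for the kernel perf buffer) and a relay aggregates across nodes. Event fields and class names are illustrative, not Hubble's actual schema:

```python
# Sketch of the Hubble fan-in: node-local buffering, cluster-wide
# aggregation of verdicts (as a Prometheus-style export might expose).
from collections import Counter, deque

class HubbleAgent:
    def __init__(self, maxlen=1024):
        self.ring = deque(maxlen=maxlen)  # oldest events drop when full

    def on_perf_event(self, flow):
        self.ring.append(flow)

def relay_metrics(agents):
    """Cluster-wide verdict counts across all per-node agents."""
    return Counter(f["verdict"] for a in agents for f in a.ring)

a1, a2 = HubbleAgent(), HubbleAgent()
a1.on_perf_event({"src": "podA", "dst": "podB", "verdict": "FORWARDED"})
a2.on_perf_event({"src": "podC", "dst": "podB", "verdict": "DROPPED"})
assert relay_metrics([a1, a2]) == {"FORWARDED": 1, "DROPPED": 1}
```

The bounded ring mirrors a real trade-off: observability buffers are lossy by design so they never back-pressure the data path.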
## Tetragon — Runtime Security
```mermaid
flowchart LR
    subgraph Kernel_T["Kernel"]
        LSM["LSM Hooks"]
        Kprobes["kprobes / tracepoints"]
        eBPF_T["Tetragon eBPF\nprograms"]
    end
    subgraph Userspace_T["Tetragon Agent"]
        PolicyEngine["Policy Engine\n(TracingPolicy CRDs)"]
        EventProc["Event Processor"]
    end
    subgraph Actions["Enforcement"]
        Log["Log event"]
        Kill["Kill process\n(SIGKILL)"]
        Alert["Send alert"]
    end
    LSM --> eBPF_T
    Kprobes --> eBPF_T
    eBPF_T --> EventProc
    PolicyEngine --> eBPF_T
    EventProc --> Log
    EventProc --> Kill
    EventProc --> Alert
    style Kernel_T fill:#c62828,color:#fff
```
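The event-to-action step can be sketched as a rule matcher: each kernel event is checked against TracingPolicy-like rules to select an enforcement action. The rule shape, field names, and example syscalls are illustrative, not Tetragon's actual CRD schema:

```python
# Sketch of policy-driven event handling: first matching rule wins,
# unmatched events fall through to passive logging.

policies = [
    {"syscall": "setuid", "arg0": 0, "action": "alert"},       # priv escalation
    {"syscall": "write", "path": "/etc/shadow", "action": "kill"},
]

def handle_event(event):
    for rule in policies:
        # every non-"action" field in the rule must match the event
        if all(event.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "log"  # default: observe only

assert handle_event({"syscall": "setuid", "arg0": 0}) == "alert"
assert handle_event({"syscall": "write", "path": "/etc/shadow"}) == "kill"
assert handle_event({"syscall": "open", "path": "/tmp/x"}) == "log"
```

In the real system this matching is compiled into the eBPF programs themselves, so a `kill` verdict (SIGKILL) can be applied in-kernel before the syscall completes.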
## Tetragon Use Cases
| Use Case | TracingPolicy Target |
|---|---|
| Detect container escape | Monitor `setns`, `unshare` syscalls |
| Block crypto mining | Kill processes connecting to mining pools |
| File integrity | Alert on writes to `/etc/passwd`, `/etc/shadow` |
| Network forensics | Log all TCP connections from a namespace |
| Privilege escalation | Detect `setuid(0)` calls |
## eBPF Map Types Used
| Map Type | Purpose |
|---|---|
| Hash map | Policy rules, service → endpoint mapping |
| LRU hash | Conntrack entries (connection state) |
| Array | Per-CPU counters, configuration |
| Ring buffer | Event export to Hubble |
| LPM trie | CIDR-based policy matching |
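The LPM trie's lookup semantics — among all CIDR prefixes containing an address, the most specific wins — can be sketched with a linear scan (the kernel trie reaches the same answer without scanning). The policy entries are illustrative:

```python
# Longest-prefix match over CIDR-keyed policy, as an LPM trie provides.
import ipaddress

cidr_policy = {
    "10.0.0.0/8": "DROP",
    "10.1.0.0/16": "ALLOW",  # more specific: overrides the /8
}

def lpm_lookup(ip):
    addr = ipaddress.ip_address(ip)
    best_len, verdict = -1, None
    for cidr, v in cidr_policy.items():
        net = ipaddress.ip_network(cidr)
        if addr in net and net.prefixlen > best_len:
            best_len, verdict = net.prefixlen, v  # keep the longest match
    return verdict

assert lpm_lookup("10.1.2.3") == "ALLOW"  # /16 beats /8
assert lpm_lookup("10.9.9.9") == "DROP"   # only the /8 matches
```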