# OpenNebula — How It Works
Internal mechanisms, VM lifecycle, the centralized daemon model, and the AI-powered DRS.
## Centralized Architecture Model
Unlike OpenStack's collection of distributed services, OpenNebula uses a centralized daemon model in which `oned` is both the single source of truth and the orchestration engine. This considerably simplifies operations, troubleshooting, and upgrades: there is one daemon to monitor, one database to back up, and one component to upgrade.
```mermaid
flowchart TB
    subgraph Frontend["Front-End Node"]
        ONED["oned\n(core daemon)"]
        Sched["Scheduler"]
        OneDRS["OneDRS\n(AI DRS)"]
        DB["MySQL / SQLite"]
        Sunstone["Sunstone UI"]
        OneFlow["OneFlow"]
        OneGate["OneGate"]
    end
    subgraph Host1["KVM Host 1"]
        Libvirt1["libvirtd"]
        VMs1["VMs"]
    end
    subgraph Host2["KVM Host 2"]
        Libvirt2["libvirtd"]
        VMs2["VMs"]
    end
    ONED -->|"SSH + drivers"| Host1
    ONED -->|"SSH + drivers"| Host2
    Sched --> ONED
    OneDRS --> ONED
    Sunstone --> ONED
    ONED --> DB
    style Frontend fill:#00758f,color:#fff
```
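Clients talk to `oned` through its XML-RPC API (port 2633 by default); the API returns entity data as XML documents. The sketch below is illustrative: the host name and credentials are placeholders, the live call is shown only as a comment, and the parsing runs against an abridged sample of a `one.vm.info` response.

```python
import xml.etree.ElementTree as ET

# Against a live front-end (hypothetical host/credentials), the call would be:
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://frontend:2633/RPC2")
#   ok, body, *rest = proxy.one.vm.info("oneadmin:password", 42)
# `body` is the VM's XML representation; here we parse an abridged sample.

def vm_summary(vm_xml: str):
    """Extract (NAME, STATE, LCM_STATE) from a VM info XML body."""
    root = ET.fromstring(vm_xml)
    return (root.findtext("NAME"),
            int(root.findtext("STATE")),      # 3 = ACTIVE
            int(root.findtext("LCM_STATE")))  # 3 = RUNNING

sample = ("<VM><ID>42</ID><NAME>web-1</NAME>"
          "<STATE>3</STATE><LCM_STATE>3</LCM_STATE></VM>")
print(vm_summary(sample))  # ('web-1', 3, 3)
```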
## VM Provisioning Flow
```mermaid
sequenceDiagram
    participant User as User / Sunstone
    participant API as oned (XML-RPC API)
    participant DB as MySQL
    participant Sched as Scheduler
    participant Host as KVM Host
    User->>API: vm.allocate (template)
    API->>DB: Store VM record (PENDING)
    Sched->>DB: Read pending VMs
    Sched->>Sched: Match requirements → host
    Sched->>API: Deploy VM on Host X
    API->>Host: SSH: deploy script
    Host->>Host: Create libvirt domain
    Host->>Host: Attach disks, configure NICs
    Host->>Host: Start QEMU/KVM
    Host-->>API: VM RUNNING
    API->>DB: Update state → RUNNING
```
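Note that `vm.allocate` only stores a PENDING record and returns immediately; scheduling and deployment happen asynchronously, so clients typically poll until the VM reaches RUNNING. A minimal sketch of such a poll loop, with an injectable `get_state` callable standing in for a real API call (not an official client helper):

```python
import time

def wait_for_running(vm_id, get_state, timeout=300, interval=5):
    """Poll until the VM reports RUNNING.

    get_state(vm_id) -> state string, e.g. "PENDING", "PROLOG", "RUNNING".
    Raises RuntimeError on FAILURE, TimeoutError if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(vm_id)
        if state == "RUNNING":
            return True
        if state == "FAILURE":
            raise RuntimeError(f"VM {vm_id} failed to deploy")
        time.sleep(interval)
    raise TimeoutError(f"VM {vm_id} not RUNNING after {timeout}s")

# Simulated deployment following the sequence above:
states = iter(["PENDING", "PROLOG", "BOOT", "RUNNING"])
print(wait_for_running(42, lambda _vm: next(states), interval=0))  # True
```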
## OneDRS (Distributed Resource Scheduler)
```mermaid
flowchart LR
    Metrics["Host Metrics\n(CPU, RAM, I/O)"] --> DRS["OneDRS Engine"]
    History["Historical Data\n(trends, patterns)"] --> DRS
    Policies["Placement Policies\n(packing, striping, load-aware)"] --> DRS
    DRS -->|"migration\nrecommendations"| ONED["oned"]
    ONED -->|"live migrate"| Hosts["KVM Hosts"]
    style DRS fill:#ff6f00,color:#fff
```
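The packing and striping policies in the diagram can be illustrated with a toy scorer. This is a deliberately simplified sketch, not OneDRS's actual engine (which also weighs historical trends and I/O): it filters hosts that fit the VM, then picks the one with the most remaining headroom (striping, to spread load) or the least (packing, to consolidate).

```python
# Toy host-placement scorer. Illustrative only, NOT the OneDRS algorithm.

def pick_host(hosts, vm_cpu, vm_ram, policy="striping"):
    """hosts: list of dicts with free_cpu / free_ram in the VM's units.
    Returns the chosen host dict, or None if nothing fits."""
    candidates = [h for h in hosts
                  if h["free_cpu"] >= vm_cpu and h["free_ram"] >= vm_ram]
    if not candidates:
        return None

    def headroom(h):
        # Crude combined metric; a real engine would weight CPU and RAM.
        return (h["free_cpu"] - vm_cpu) + (h["free_ram"] - vm_ram)

    # striping: maximize leftover capacity (spread); packing: minimize it.
    if policy == "striping":
        return max(candidates, key=headroom)
    return min(candidates, key=headroom)

hosts = [
    {"name": "kvm1", "free_cpu": 8, "free_ram": 32},
    {"name": "kvm2", "free_cpu": 2, "free_ram": 8},
]
print(pick_host(hosts, vm_cpu=1, vm_ram=4, policy="striping")["name"])  # kvm1
print(pick_host(hosts, vm_cpu=1, vm_ram=4, policy="packing")["name"])   # kvm2
```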
## Storage Data Path
| Driver | Access Pattern | Best For |
|---|---|---|
| Shared FS (NFS) | Front-end exports, hosts mount | Simple, live migration works natively |
| SSH | Front-end copies images via SSH | No shared storage needed |
| Ceph (RBD) | Direct RBD access from each host | Scale, performance, live migration |
| LVM | Local LVM on each host | Performance (local I/O) |
| iSCSI / FC | SAN target from each host | Enterprise, existing SAN infrastructure |
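As an example of how a driver from the table is wired up, a Ceph-backed image datastore is defined with a small template and registered via `onedatastore create`. The attribute names (`DS_MAD`, `TM_MAD`, `POOL_NAME`, `BRIDGE_LIST`, `CEPH_HOST`, ...) are the standard Ceph datastore attributes; all values below are placeholders for your environment.

```
NAME        = ceph_images
TYPE        = IMAGE_DS
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one                    # RBD pool used for images
BRIDGE_LIST = "kvm1 kvm2"            # hosts with Ceph client access
CEPH_HOST   = "mon1:6789"            # monitor address(es)
CEPH_USER   = libvirt
CEPH_SECRET = "uuid-of-libvirt-secret"
```

Registered with `onedatastore create ceph.ds`; because every host talks to RBD directly, live migration needs no image copy.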
## Networking Model
```mermaid
flowchart TB
    subgraph VM["Virtual Machine"]
        VNIC["vNIC\n(virtio)"]
    end
    subgraph Host["Host Network"]
        Bridge["Linux Bridge\nor OVS"]
        VLAN["802.1Q VLAN\nor VXLAN"]
        PF["Physical NIC"]
    end
    VNIC --> Bridge --> VLAN --> PF
    subgraph SecurityGroups["Security"]
        FW["iptables / nftables\n(security groups)"]
    end
    Bridge --> FW
    style Host fill:#00758f,color:#fff
```
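The 802.1Q path in the diagram corresponds to a virtual network template: `VN_MAD` selects the driver, `PHYDEV` names the physical NIC the tagged bridge hangs off, and an address range (`AR`) supplies leases. Values below are illustrative placeholders; register with `onevnet create`.

```
NAME    = vlan100
VN_MAD  = 802.1Q
PHYDEV  = eth0                       # physical NIC carrying the trunk
VLAN_ID = 100
AR      = [ TYPE = IP4, IP = 192.168.100.10, SIZE = 50 ]
```

When a VM attaches a NIC on this network, the driver creates the VLAN-tagged bridge on the host if needed, plugs the virtio vNIC into it, and applies the VM's security-group rules as iptables/nftables filters on the bridge port.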