Container Breakout Detection with eBPF and Tracee
Containers are not VMs. If an attacker escapes the container boundary, they often gain access to the host. eBPF-based sensors give you a low-level view of kernel activity and make it possible to detect breakout techniques in real time. Tracee is a great tool for this because it ships with sane rules and a usable event model.
This post walks through a lab setup that uses Tracee to detect suspicious container activity such as host namespace access, sensitive file access, and kernel module operations.
Why eBPF matters for container security
Traditional host logs do not always tell you which container triggered a system call. eBPF can attach to kernel hooks and capture process, cgroup, and container IDs. That makes it ideal for detecting cross-namespace access or syscalls that are uncommon for containers.
Tracee provides a rule engine that detects common container escape techniques. It does not magically block the attack, but it gives you evidence and a starting point for response.
Lab prerequisites
- A Linux host with a recent kernel (5.8+ is a good baseline)
- Docker or containerd
- Tracee binary or container image
On Ubuntu, you can run Tracee in a container with the necessary privileges:
```shell
docker run --name tracee \
  --rm -it --pid=host --cgroupns=host --privileged \
  -v /etc/os-release:/etc/os-release \
  aquasec/tracee:latest
```
Tracee will start streaming events to stdout. For long-term logging, forward the output to a file or a SIEM.
Triggering suspicious activity
Run a benign container and then simulate suspicious actions:
```shell
# start a test container
docker run --rm -it alpine sh

# inside the container, attempt to read sensitive paths
cat /proc/kallsyms
cat /etc/shadow
```
Some of these actions might be blocked, which is fine. You want to see how Tracee reports the attempts.
Understanding Tracee events
Tracee emits JSON events that include eventName, processName, and container.id. A typical event for proc_kallsyms_access looks like this:
```json
{
  "eventName": "proc_kallsyms_access",
  "container": {"id": "b7d..."},
  "processName": "cat",
  "args": {"pathname": "/proc/kallsyms"}
}
```
In a lab SIEM, you can store these events and correlate them with other host telemetry to identify the container and image responsible.
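Before wiring events into a SIEM, it helps to pull out the handful of fields you will actually pivot on. A minimal sketch, assuming events shaped like the JSON example above (real Tracee output has more fields, and the exact shape varies by version):

```python
import json

def parse_tracee_event(line: str) -> dict:
    """Extract the fields this post cares about from one Tracee JSON event.

    Field names (eventName, processName, container.id, args.pathname)
    follow the example event above; adjust for your Tracee version.
    """
    event = json.loads(line)
    return {
        "event": event.get("eventName", ""),
        "process": event.get("processName", ""),
        "container_id": event.get("container", {}).get("id", ""),
        "pathname": event.get("args", {}).get("pathname", ""),
    }

# sample line mirroring the event above
raw = ('{"eventName": "proc_kallsyms_access", "container": {"id": "b7d"}, '
       '"processName": "cat", "args": {"pathname": "/proc/kallsyms"}}')
parsed = parse_tracee_event(raw)
```

Run this over a JSONL file one line at a time and you have the container ID and pathname ready for correlation.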
Namespace and capability signals
Breakouts often involve namespace manipulation or privilege changes. Watch for events like setns, unshare, or attempts to load kernel modules. Containers should rarely request capabilities like CAP_SYS_ADMIN or CAP_SYS_MODULE. Tracee can flag these events, and you can treat them as high severity by default.
In a lab, run a container with elevated capabilities and observe the event stream. This helps you understand what legitimate admin containers look like and prevents you from treating all capability usage as malicious.
Integration and storage
Tracee can output JSON to stdout, file, or webhook. For a lab SIEM, forward the events into OpenSearch or even a flat JSONL file that you can parse with Python. Normalize fields to a common schema so you can correlate container events with host logs.
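The normalization step can be a small mapping function. The schema keys below (host, source, entity_id, action, detail) are an illustration of a common schema, not a standard; pick whatever your other host logs already use:

```python
def normalize(event: dict, hostname: str) -> dict:
    """Map a Tracee event onto a shared schema so it lines up with host logs.

    Schema keys here are hypothetical; align them with your SIEM's field names.
    """
    return {
        "host": hostname,
        "source": "tracee",
        "entity_id": event.get("container", {}).get("id", "unknown"),
        "action": event.get("eventName", "unknown"),
        "detail": event.get("args", {}),
    }

sample = {"eventName": "openat", "container": {"id": "b7d"},
          "args": {"pathname": "/etc/shadow"}}
record = normalize(sample, "lab-host-01")
```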
If you already ingest container logs, add the container ID and image name as tags. That way you can pivot from a suspicious syscall to the exact image version and deployment that triggered it.
Baselines and noise reduction
Container workloads can be noisy. Build a baseline by running your normal lab containers for a day and recording the most common events. Then tune Tracee rules to focus on deviations. For example, if your web container never touches /proc or /sys, any access there should be high priority.
Use image-specific profiles. A build container will legitimately execute compilers and access /usr/include, while a minimal runtime container should not. Group rules by image or namespace so you do not suppress real signals in production-like containers.
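Building the baseline can be as simple as counting event names per container over a recorded JSONL capture. A sketch, again assuming the simplified event shape shown earlier:

```python
import json
from collections import Counter

def baseline_counts(jsonl_lines) -> Counter:
    """Count (container_id, event_name) pairs from captured Tracee output."""
    counts = Counter()
    for line in jsonl_lines:
        event = json.loads(line)
        cid = event.get("container", {}).get("id", "host")
        counts[(cid, event.get("eventName", "unknown"))] += 1
    return counts

# toy capture; in practice, read lines from your recorded JSONL file
lines = [
    '{"eventName": "openat", "container": {"id": "web1"}}',
    '{"eventName": "openat", "container": {"id": "web1"}}',
    '{"eventName": "setns", "container": {"id": "web1"}}',
]
common = baseline_counts(lines).most_common(1)
```

Anything that does not show up in a day of normal traffic for a given image is a candidate for a high-priority rule.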
Mapping to ATT&CK and severity
Mapping events to MITRE ATT&CK techniques helps you prioritize alerts. For example, access to /proc/kallsyms aligns with kernel information discovery, while attempts to load kernel modules align with privilege escalation. Tag these events with a technique ID and a severity score in your pipeline.
In a lab, build a small lookup table that maps Tracee event names to technique IDs. This makes it easier to create dashboards that show which tactics are being exercised during a test.
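The lookup table can be a plain dictionary. The mappings below are illustrative examples, not an official Tracee-to-ATT&CK mapping; verify technique IDs against the current ATT&CK matrix before relying on them:

```python
# Illustrative mapping from Tracee event names to ATT&CK technique IDs
# and default severities. Verify and extend for your own rule set.
EVENT_TO_TECHNIQUE = {
    "proc_kallsyms_access": ("T1082", "medium"),   # System Information Discovery
    "init_module":          ("T1547.006", "high"), # Kernel Modules and Extensions
    "setns":                ("T1611", "high"),     # Escape to Host
    "unshare":              ("T1611", "high"),
}

def tag_event(event: dict) -> dict:
    """Attach a technique ID and severity to an event; unknowns default low."""
    technique, severity = EVENT_TO_TECHNIQUE.get(
        event.get("eventName", ""), ("unknown", "low"))
    return {**event, "technique": technique, "severity": severity}

tagged = tag_event({"eventName": "proc_kallsyms_access"})
```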
Response automation in a lab
Once you trust the signals, add a small response hook. For example, if a container triggers a high severity event, automatically stop the container and snapshot its filesystem. This gives you a repeatable workflow for incident response and analysis.
In a homelab, you can implement this with a simple webhook that triggers a script or an Ansible playbook. The goal is not to automate everything, but to make sure you can capture evidence quickly when a breakout indicator fires.
Custom rules for lab services
If you run specific lab services, write rules that fit your environment. For example, alert if a container tries to access /etc/shadow or if it loads a kernel module.
Tracee rule example (YAML):
```yaml
rules:
  - name: "container_reads_shadow"
    description: "Container attempts to read /etc/shadow"
    condition: "event == openat && args.pathname == '/etc/shadow'"
    output: "Container read /etc/shadow"
```
Run Tracee with your custom rules file and verify the alert triggers.
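Before deploying a rule, you can sanity-check its matching logic offline against captured events. This toy matcher is not Tracee's rule engine; it just re-implements the same two-field condition in Python, assuming the simplified event shape used earlier:

```python
def matches_shadow_rule(event: dict) -> bool:
    """Toy re-implementation of the container_reads_shadow condition above."""
    return (event.get("eventName") == "openat"
            and event.get("args", {}).get("pathname") == "/etc/shadow")

hit = matches_shadow_rule(
    {"eventName": "openat", "args": {"pathname": "/etc/shadow"}})
miss = matches_shadow_rule(
    {"eventName": "openat", "args": {"pathname": "/etc/passwd"}})
```

Replaying a recorded JSONL capture through this kind of matcher tells you whether the rule would have fired before you rely on it live.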
Lab checklist
Use this checklist to validate your container detection setup:
- Confirm Tracee sees container IDs and image names in events.
- Trigger a high-risk event like /proc/kallsyms access and verify the alert.
- Store events in your SIEM and test a dashboard filter by container ID.
- Stop a container via a response hook and confirm evidence capture.
From detection to response
In a lab, you can respond manually by stopping the container and inspecting its image. In production, you would trigger an alert, collect container metadata, and possibly isolate the host.
A useful response workflow is:
- Identify the container ID and image.
- Capture the container filesystem for analysis.
- Check for privilege escalation attempts in host logs.
- Rotate credentials if secrets might be exposed.
Tuning and noise reduction
Tracee has a lot of rules. Start with a small subset and expand as you learn your baseline. If you run build containers or CI jobs, they might legitimately touch unusual files. Tag those images and reduce alerting on them.
Takeaways
eBPF makes container security visible. Tracee gives you a clean starting point, and in a lab you can quickly validate detections by running known actions. The important part is to build a feedback loop: test, alert, tune, and repeat. That is how you turn low-level kernel telemetry into reliable detection.