Upbit was hacked for $37M of Solana. How could we have hacked it, and how could we have protected it?
Upbit was hacked and lost $37M of Solana. Here's how it could have happened and how we could have defended it with eBPF and LSM.
Upbit, a crypto exchange, was hacked on November 27, 2025, and 37 million dollars' worth of Solana was stolen. We don't know much about how the breach happened, but to move crypto you need the private keys; there's no other way. So whatever the attack was, it ended with someone getting those keys.
In this post, let's explore how the attack could have been carried out and how we could protect against it by using eBPF Linux Security Module (LSM) hooks to secure the runtime. We don't know how the private keys were compromised; these are our assumptions about how someone could have compromised them.
A quick caveat: we at Bomfather do have an eBPF security solution that implements everything discussed in this blog post, but I wrote this post so that anyone can write their own eBPF code to implement runtime security!
Why use eBPF and LSM?
Why are we using eBPF and LSM? Trusting the kernel beats trusting a userspace process, LSM hooks eliminate TOCTTOU vulnerabilities, and eBPF lets you hook into the kernel without writing a kernel module.
Why this approach?
Crypto is built on the assumption that everyone is trying to steal your money. You don’t trust servers, you don’t trust exchanges, you don’t trust anyone. You trust keys and signatures.
We’re doing the same thing, but for the runtime. Don’t trust userspace or root. Trust the kernel and trust cryptographic verification.
If we have a security agent running, shutting it down should require a signed challenge-response based on a nonce. Updating the policies around the agent should require the same thing. We copied this directly from how crypto works because it makes sense.
And LSM hooks solve a problem that has bugged security people forever: TOCTTOU (time of check to time of use). You check that something is safe, then use it, but it has changed in the meantime. With LSM, the check happens at the exact moment of access, so there is no gap.
Crypto was built assuming everyone on the network is trying to rob you. We’re just applying that same paranoia to the server.
For this, we assume we're running on a Linux machine. The approach works everywhere, with or without containers.
How does YAML config turn into kernel enforcement?
The userspace agent reads your policy file, parses it, and writes the rules into eBPF maps. Maps are shared memory between userspace and the kernel. When an LSM hook fires, the kernel-side eBPF program looks up the policy in the map and decides whether to allow or deny. The YAML is for humans; the map is for the kernel.
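To make that concrete, here is a minimal sketch of what the kernel side of that plumbing could look like. The map name, key, and value layout below are illustrative assumptions, not any particular agent's schema; the userspace agent would fill the map from the parsed YAML (for example with libbpf's bpf_map__update_elem()), and the LSM programs shown throughout this post would look entries up at enforcement time.

// Sketch only: an illustrative policy map shared between the userspace
// agent (writer) and the kernel-side LSM programs (readers).
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

#define MAX_PATH_LEN 256

struct policy_key {
    char path_prefix[MAX_PATH_LEN];   /* e.g. "/hotwallet" */
};

struct policy_value {
    char allowed_exe[MAX_PATH_LEN];   /* e.g. "/app/signer" */
    __u32 permission;                 /* e.g. 1 = read-only */
};

/* The agent parses the YAML and writes one entry per protected directory.
 * A real implementation might use BPF_MAP_TYPE_LPM_TRIE for prefix matching
 * instead of an exact-match hash map. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 128);
    __type(key, struct policy_key);
    __type(value, struct policy_value);
} protected_paths SEC(".maps");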
Attack Vector 1: Just read the key file.
Attack: cat /path/to/hot-wallet-key.json
If you have shell access (SSH, container escape, etc.), this is step one. The compromise is easy and it just works. I wish it weren't this easy.
Defense: An eBPF LSM hook denies all reads and writes by default, and only the "signer" gets read access. No one else can read the private keys, not even root. So you can give your employees root access without worrying about them accessing the keys.
This LSM policy is the easiest way to keep a random user or process from reading the private keys.
- container_path: "ns:pod:container"
  # The format is Kubernetes namespace, pod name, and container name.
  # If you're not running k8s, leave it empty, and the policy applies to the host.
  executables:
    - path: "/app/signer"
      directories:
        - path: "/hotwallet"
          permission: "read"
SEC("lsm/file_open")
int BPF_PROG(lsm_file_open, struct file *file) {
/*
* Default deny file access to protected paths.
* Only explicitly allowed executables can read protected files.
* Even root is denied.
*/
// Get the file path being opened
// Get the current process’s executable path
// Check if the file is in a protected directory
// Check if this executable is allowed to access this directory
// If not allowed, return -EPERM
}
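To fill in that outline, here is a minimal sketch. It is a simplification of the policy above: the "/hotwallet" prefix is hardcoded instead of coming from a policy map, and the allowed signer is identified by a TGID that the userspace agent registers in allowed_signers rather than by its executable path. The hook is sleepable (lsm.s) so that bpf_d_path() is available.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define EPERM 1

/* TGIDs allowed to read the hot wallet keys, registered by the agent. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 16);
    __type(key, __u32);
    __type(value, __u8);
} allowed_signers SEC(".maps");

/* Sleepable LSM hook: bpf_d_path() is only usable from sleepable LSM programs. */
SEC("lsm.s/file_open")
int BPF_PROG(deny_key_reads, struct file *file)
{
    char path[256];
    __u32 tgid = bpf_get_current_pid_tgid() >> 32;

    /* Resolve the full path of the file being opened. */
    if (bpf_d_path(&file->f_path, path, sizeof(path)) < 0)
        return 0;

    /* Only enforce on the protected directory. */
    if (bpf_strncmp(path, sizeof("/hotwallet") - 1, "/hotwallet") != 0)
        return 0;

    /* Registered signer processes may proceed; everyone else, including
     * root, gets -EPERM. */
    if (bpf_map_lookup_elem(&allowed_signers, &tgid))
        return 0;

    return -EPERM;
}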
Attack Vector 2: Backdoor the Binary.
Suppose we had an insider attack, which is a lot easier. Think about it: if someone can bribe an insider with 5% of the stolen assets, and the insider can move to a country that won't extradite them, that's millions of dollars of motivation. The insider can replace the binary and leave the country the moment the tokens are stolen.
Attack: Replace the signing binary with a modified version that exfiltrates keys.
Defense: Immutable executables and SHA256 verification. eBPF LSM can prevent binaries from being modified. We should protect not only the "signer" of the transaction, which has access to the private keys, but also the platform's critical system binaries.
# Critical executables that cannot be modified at runtime
immutable_executables:
  # Application Binaries
  - path: "/app/signer"
  # Container Runtime Infrastructure
  - path: "/bin/runc"
  - path: "/bin/k3s"
  - path: "/bin/containerd-shim-runc-v2"
  # Network Infrastructure
  - path: "/bin/cni"
  - path: "/bin/aux/xtables-nft-multi"
  - path: "/bin/aux/nft"
This is the solution for vectors 2, 6, and 7.
SEC("lsm/file_open")
int BPF_PROG(lsm_file_open, struct file *file) {
/*
* Block writes to immutable executables and directories.
*/
// If open mode is write
// Check if path is a protected executable
// Check if path is under a protected directory
// If protected and write attempted, return -EPERM
}
SEC("lsm/path_rename")
int BPF_PROG(lsm_path_rename, struct path *old_dir, struct dentry *old_dentry,
struct path *new_dir, struct dentry *new_dentry) {
/*
* Block rename of protected files.
*/
// Check if source or destination is protected
// If either is protected, return -EPERM
}
SEC("lsm/path_unlink")
int BPF_PROG(lsm_path_unlink, struct path *dir, struct dentry *dentry) {
/*
* Block deletion of protected files.
*/
// Check if file is protected
// If protected, return -EPERM
}
The next question: how do we know these executables weren't already compromised before the eBPF security agent started enforcing the policies? Once the agent is running, it denies any delete, modify, or symlink against the protected executables, but that says nothing about what happened before it started.
To be sure the executables weren't tampered with, we need the kernel to validate their hashes.
fs-verity handles this. fs-verity is a kernel feature that builds a Merkle tree over a file's contents and exposes a digest; if the file is replaced or altered, the digest no longer matches (or there is no digest at all).
Then, at execution time, the eBPF hook checks bpf_get_fsverity_digest() (https://docs.ebpf.io/linux/kfuncs/bpf_get_fsverity_digest/) against your expected hash.
If there is a mismatch, we deny execution of the binary. This kfunc works only in LSM programs, not in tracepoints. There is a fantastic talk about this from the Linux Plumbers Conference: "BPF_LSM + fsverity for Binary Authorization" by Song Liu and Boris Burkov.
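Here is a minimal sketch of that check, modeled on the kernel's fs-verity BPF selftests. The expected_digest global (which the userspace agent would fill in, for example from a signed policy), the choice of the bprm_check_security hook, and the lack of an "is this a protected binary?" pre-check are all simplifications for illustration. bpf_get_fsverity_digest() is a kfunc and needs a sleepable LSM program (lsm.s).

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define EPERM 1
#define SHA256_DIGEST_SIZE 32
#define FSVERITY_HDR_SIZE  4   /* sizeof(struct fsverity_digest) header */

/* Expected digest of the protected binary: a small fsverity_digest header
 * (algorithm + size) followed by the SHA256 bytes. Filled in by the agent. */
char expected_digest[FSVERITY_HDR_SIZE + SHA256_DIGEST_SIZE];
char digest[FSVERITY_HDR_SIZE + SHA256_DIGEST_SIZE];

extern int bpf_get_fsverity_digest(struct file *file,
                                   struct bpf_dynptr *digest_p) __ksym;

SEC("lsm.s/bprm_check_security")
int BPF_PROG(verify_signer_digest, struct linux_binprm *bprm)
{
    struct bpf_dynptr digest_ptr;
    int i;

    /* A real agent would first check that bprm->file is one of the
     * protected executables; this sketch checks every exec. */
    bpf_dynptr_from_mem(digest, sizeof(digest), 0, &digest_ptr);
    if (bpf_get_fsverity_digest(bprm->file, &digest_ptr) < 0)
        return -EPERM;   /* no fs-verity digest at all: refuse to execute */

    for (i = 0; i < FSVERITY_HDR_SIZE + SHA256_DIGEST_SIZE; i++) {
        if (digest[i] != expected_digest[i])
            return -EPERM;   /* contents differ from the expected binary */
    }
    return 0;
}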
Attack Vector 3: Memory dump/ptrace
“But we don’t store keys on disk. We use a KMS.” Cool. But the key still has to exist in memory to sign a transaction.
Wherever it came from (a file, a KMS, an HSM API, or a remote vault), at some point the signing process has the key material in RAM. That's the target.
Here is the problem: how do we give administrators the access they need to maintain the systems without letting them siphon funds from the crypto wallet? The answer is to make sure they can't ptrace the security agent or the signer, and can't replace the signer binary.
You can't trust admins. Not because they're bad people, but because compromised accounts are a thing. An attacker with root can use ptrace to attach to any process and dump its memory. That's how you'd pull keys out of RAM. So we have a conundrum: we need to give people root to maintain the system, but we can't let them ptrace the signer or the security agent.
Attack: Attach a debugger to the signer process, dump the heap, and extract key material.
Again, this is where eBPF comes to the rescue.
SEC("lsm/ptrace_access_check")
int BPF_PROG(lsm_ptrace_access_check, struct task_struct *child, unsigned int mode) {
/*
* Block ALL ptrace operations on protected processes:
* - GDB/debugger attachment
* - strace system call tracing
* - /proc/<pid>/mem access
* - process_vm_{readv,writev} syscalls
* - Any other ptrace-based inspection
*
* prevents debugging.
*/
// Check if ptrace blocking is disabled
// Check if the target PID is part of the protected processes list
// Check if the security agent is being ptrace’d. If we compromise the security agent, then we compromise everything.
// if any of these checks fail, return -EPERM
}
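A concrete version of this check can be very small. In this sketch, protected_tgids (an illustrative name) is populated by the userspace agent with the TGIDs of the signer and of the agent itself; blocking ptrace_access_check also covers /proc/<pid>/mem and process_vm_readv, since they go through the same check.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define EPERM 1

/* TGIDs that must never be traced, registered by the userspace agent. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 64);
    __type(key, __u32);
    __type(value, __u8);
} protected_tgids SEC(".maps");

SEC("lsm/ptrace_access_check")
int BPF_PROG(deny_ptrace, struct task_struct *child, unsigned int mode)
{
    __u32 tgid = child->tgid;

    /* If the target is protected, refuse, no matter who is asking. */
    if (bpf_map_lookup_elem(&protected_tgids, &tgid))
        return -EPERM;

    return 0;
}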
Attack Vector 4: Kill the security agent
All of this depends on the eBPF security agent. What if a malicious user with sudo privileges kills the agent? Then it’s game over.
Attack: sudo kill -9 <agent-pid>. Then retry vectors 1-3.
Defense: The eBPF LSM hook blocks kill signals aimed at the agent. You can't kill it without a Public Key Infrastructure (PKI) challenge-response with a nonce. No key, no shutdown.
SEC("lsm/task_kill")
int BPF_PROG(lsm_task_kill, struct task_struct *p, struct kernel_siginfo *info,
int sig, const struct cred *cred) {
/*
* Block kill signals to the security agent.
*/
// Check if target process is the security agent
// If yes, return -EPERM
}
Here is a detailed write up and a video on how to prevent this. https://substack.bomfather.dev/p/stopping-kill-signals-against-your?r=2bngj5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
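The challenge-response itself lives in userspace, inside the agent. Here is a minimal sketch of the idea using libsodium and an Ed25519 operator key; the function names, the pinned-key layout, and the single-nonce state are illustrative assumptions, not Bomfather's implementation. The same gate can sit in front of policy updates (Vector 5).

// Sketch of nonce-based shutdown/policy-update authorization with libsodium.
// Call sodium_init() once at agent startup before using these functions.
#include <sodium.h>
#include <string.h>

#define NONCE_LEN 32

static unsigned char pending_nonce[NONCE_LEN];
static const unsigned char operator_pubkey[crypto_sign_PUBLICKEYBYTES] = {
    0 /* replace with the pinned Ed25519 public key bytes */
};

/* Step 1: the agent generates a fresh nonce and hands it to the operator. */
void issue_challenge(unsigned char out_nonce[NONCE_LEN])
{
    randombytes_buf(pending_nonce, NONCE_LEN);
    memcpy(out_nonce, pending_nonce, NONCE_LEN);
}

/* Step 2: the operator signs the nonce offline and sends back the signature.
 * Only a valid signature over the current nonce authorizes the action. */
int authorize_privileged_action(const unsigned char sig[crypto_sign_BYTES])
{
    if (crypto_sign_verify_detached(sig, pending_nonce, NONCE_LEN,
                                    operator_pubkey) != 0)
        return -1;                      /* bad signature: keep enforcing */

    /* Rotate the nonce so the signature cannot be replayed. */
    randombytes_buf(pending_nonce, NONCE_LEN);
    return 0;                           /* verified: allow shutdown/update */
}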
Attack Vector 5: Update Policy as Root (eBPF Maps)
Most eBPF security agents use eBPF maps to communicate security policies to the kernel. By default, those maps are writable by any privileged user.
Attack: sudo bpftool map update against the policy map to disable the security. Then retry vectors 1-3.
Defense: The agent installs an eBPF LSM policy that prevents its maps from being updated by anyone except the userspace process that started the agent. We can go further and require the nonce-based PKI challenge-response mentioned above for policy updates, so we know every update is trusted.
SEC("lsm/bpf_map")
int BPF_PROG(lsm_bpf_map, struct bpf_map *map, fmode_t fmode) {
/*
* Block unauthorized modifications to security policies.
* Only the security agent can update its own policy maps.
*/
// Check if this is a security policy map
// Check if the calling process is the security agent
// If not the agent, return -EPERM
}
We wrote about how to do this in depth, with a video: https://substack.bomfather.dev/p/attacking-and-securing-ebpf-maps
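As a concrete sketch of the hook above: here the policy maps are recognized by a "policy_" name prefix and the agent pins its own TGID into the program before loading it. Both of those conventions are illustrative assumptions.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define EPERM 1
#define FMODE_WRITE 0x2   /* from include/linux/fs.h */

const volatile __u32 agent_tgid;   /* set by the agent before load (.rodata) */

SEC("lsm/bpf_map")
int BPF_PROG(guard_policy_maps, struct bpf_map *map, fmode_t fmode)
{
    char name[16];   /* BPF_OBJ_NAME_LEN */
    __u32 tgid = bpf_get_current_pid_tgid() >> 32;

    /* Only guard maps whose name marks them as policy maps. */
    bpf_probe_read_kernel_str(name, sizeof(name), map->name);
    if (bpf_strncmp(name, sizeof("policy_") - 1, "policy_") != 0)
        return 0;

    /* Read-only access (e.g. bpftool map dump) can stay allowed. */
    if (!(fmode & FMODE_WRITE))
        return 0;

    /* Anyone other than the agent, including root with bpftool, is denied. */
    if (tgid != agent_tgid)
        return -EPERM;

    return 0;
}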
Attack Vector 6: Write to sensitive directories
Now you can’t read the keys or backdoor the signer. But what if you drop a malicious binary into a directory that’s in the PATH or LD_LIBRARY_PATH? Or write a cron job? Or modify a config file?
Attack: echo "malicious script" > /etc/cron.d/exfil or cp /tmp/evil /usr/local/bin/signer-helper
Defense: An eBPF LSM policy that prevents any writes to these directories.
immutable_directories:
  - path: "/private-keys"
  - path: "/etc/LD_LIBRARY_PATH"
Attack Vector 7: Replace system executables
Similar to Vector 2, but broader. Why target just the signer? Replace ls, cat, bash - any tool an admin might run. Wait for someone to interact with the keys.
Attack: cp /tmp/evil-bash /bin/bash
Defense: Globally read-only executables. Not just an immutable signer binary: all system binaries are protected from modification.
immutable_executables:
- /bin/ls
Attack Vector 8: LD_PRELOAD hijacking
The "signer" binary is legit, it passed fs-verity, and it is the only process allowed to read the keys.
Attack: Use the LD_PRELOAD env variable. LD_PRELOAD=/tmp/evil.so ./signer
The injected library can intercept the signer's library calls. When the signer reads the private key, the injected malicious code sees it too. The LSM hook allows the read because it comes from the authorized process, but that process has been hijacked. LD_PRELOAD is a legitimate mechanism used in plenty of places, but it can be abused.
Defense: Use eBPF to track which processes are allowed to use the LD_PRELOAD environment variable. You can capture this in the kernel when a process starts.
We aren't capturing the environment in an LSM hook here, so there is a possibility of TOCTTOU: a tracepoint captures the ENV data at execve entry, and the LSM hook then enforces the decision.
SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(struct trace_event_raw_sys_enter* ctx) {
/*
* Capture environment variables at process start.
* Flag processes that have LD_* variables set.
*/
// Read environment variables from execve args
// If any starts with “LD_”, flag this process
}
SEC("lsm/bprm_check_security")
int BPF_PROG(lsm_bprm_check_security, struct linux_binprm *bprm, int ret) {
/*
* Block unauthorized LD_PRELOAD.
* Only allowlisted executables can use LD_* environment variables.
*/
// Get executable path
// If process was flagged for LD_* usage
// Check if this executable is allowed to use LD_*
// If not allowed, return -EPERM
}
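Here is a minimal sketch of that two-stage design. The ld_flagged map name, the cap of 64 scanned environment variables, and the hardcoded "/app/signer" check are simplifications; a real agent would consult an allowlist from its policy maps instead.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define EPERM   1
#define MAX_ENV 64

/* TGIDs that called execve with an LD_* variable in their environment. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __u32);
    __type(value, __u8);
} ld_flagged SEC(".maps");

/* Stage 1 (tracepoint, observation only): remember who set LD_* variables. */
SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(struct trace_event_raw_sys_enter *ctx)
{
    const char **envp = (const char **)ctx->args[2];
    __u32 tgid = bpf_get_current_pid_tgid() >> 32;
    __u8 one = 1;
    char buf[4];
    int i;

    for (i = 0; i < MAX_ENV; i++) {
        const char *env;

        if (bpf_probe_read_user(&env, sizeof(env), &envp[i]) || !env)
            break;
        if (bpf_probe_read_user_str(buf, sizeof(buf), env) < 0)
            break;
        if (buf[0] == 'L' && buf[1] == 'D' && buf[2] == '_') {
            bpf_map_update_elem(&ld_flagged, &tgid, &one, BPF_ANY);
            break;
        }
    }
    return 0;
}

/* Stage 2 (LSM, enforcement): refuse to exec the signer if LD_* was set. */
SEC("lsm/bprm_check_security")
int BPF_PROG(block_ld_preload, struct linux_binprm *bprm)
{
    __u32 tgid = bpf_get_current_pid_tgid() >> 32;
    char path[64];

    if (!bpf_map_lookup_elem(&ld_flagged, &tgid))
        return 0;
    bpf_map_delete_elem(&ld_flagged, &tgid);

    if (bpf_probe_read_kernel_str(path, sizeof(path), bprm->filename) < 0)
        return 0;
    if (bpf_strncmp(path, sizeof("/app/signer") - 1, "/app/signer") == 0)
        return -EPERM;   /* the key-reading process must not be preloaded */

    return 0;
}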
Turns out real-world processes actually use it. For example, k3s and CNI (Container Network Interface) plugins do. We wrote about this and provided an actual example: https://substack.bomfather.dev/p/ld_preload-the-invisible-key-theft?r=2bngj5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Attack Vector 9: Compromise before boot.
All of this assumes the eBPF agent is running and was started on a clean system. But how do we trust that? What if the attacker loaded a rootkit through the bootloader? What if they modified the kernel?
Attack: Modify the bootloader or kernel image and load malicious code before the eBPF agent starts.
Defense: Configure the system with a Secure Boot chain: signed bootloader, signed kernel. The system won't boot if anything in the chain has been tampered with. By the time userspace starts and the eBPF agent loads, you have cryptographic proof that nothing beneath it was modified.
What’s left?
Hardware implants.
Side-channel attacks.
Compromised chips from the factory.
I can’t help you there. Different threat model. Different budget. If someone is decapping your CPU, you have other problems.
Most of the time, digital access is way easier than physical. Physical attacks cost real money: you need people, equipment, and you need to be somewhere. Compromising one employee or finding one vulnerability? That's a laptop and patience.
So the money gets stolen because cat /secrets/key.json worked. Or because someone attached gdb and dumped memory. Or because an insider replaced a binary and flew to a country without an extradition treaty.
Upbit lost $37M. We don’t know which attack vector it was. But vectors 1-9? All stoppable with eBPF LSM. No HSM. No air-gapped signing setup. Just kernel enforcement on regular Linux.
You don’t have to stop North Korea. You just have to be harder to rob than the next exchange!
