Control Barrier Functions – Placeholder
This article was written entirely by Claude Sonnet 4.6 and will serve as a placeholder until I write my first post.
May 9th, 2026
What is Safety?
In control theory, we often care not just about where a system goes, but about where it never goes. A robot arm should reach a target pose — but it must never collide with an obstacle on the way there. An autonomous vehicle should follow a reference trajectory — but it must never leave its lane or rear-end the car ahead.
Formally, we define a safe set \(\mathcal{S} \subset \mathbb{R}^n\) and ask: can we design a controller that keeps the system inside \(\mathcal{S}\) for all time?
\[\mathcal{S} = \{x \in \mathbb{R}^n : h(x) \geq 0\}\]
Here \(h : \mathbb{R}^n \to \mathbb{R}\) is a safety function whose zero superlevel set defines the safe region. The state \(x\) is safe when \(h(x) \geq 0\), and unsafe when \(h(x) < 0\).
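As a concrete instance (a toy example of my own, not taken from any particular paper), a circular obstacle of radius \(r\) centered at \(x_{\text{obs}}\) gives \(h(x) = \|x - x_{\text{obs}}\|^2 - r^2\), which is positive outside the obstacle and negative inside it:

```python
def h_obstacle(x, x_obs=(2.0, 0.0), r=1.0):
    """Safety function for a circular obstacle: h(x) >= 0 exactly when x is
    at least r away from the obstacle center (squared form keeps h smooth)."""
    dx = x[0] - x_obs[0]
    dy = x[1] - x_obs[1]
    return dx * dx + dy * dy - r * r

print(h_obstacle((0.0, 0.0)))  # 3.0  -> safe, well away from the obstacle
print(h_obstacle((2.0, 0.5)))  # -0.75 -> unsafe, inside the obstacle
```

The squared distance is used instead of the distance itself so that \(h\) stays differentiable everywhere, including at the obstacle center.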
Invariance and Nagumo’s Theorem
The key concept is forward invariance: a set \(\mathcal{S}\) is forward invariant for a system \(\dot{x} = f(x)\) if every trajectory starting in \(\mathcal{S}\) remains in \(\mathcal{S}\) for all future time.
A classical result (Nagumo, 1942) tells us exactly when a closed set is forward invariant: the vector field must point inward (or tangentially) at every boundary point. For a smooth \(h\) with \(\nabla h(x) \neq 0\) on the boundary, this condition reduces to
\[\dot{h}(x) = \nabla h(x)^\top f(x) \geq 0 \quad \text{whenever } h(x) = 0.\]
The problem with this condition is that it only constrains the derivative on the boundary. When the system is well inside \(\mathcal{S}\), no constraint is imposed at all — the controller is free to drive the state toward the boundary without any corrective action until it’s too late.
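A quick numeric check makes the boundary condition tangible. In this toy setup of my own, the unsafe region is a circular obstacle of radius 1 at \((2, 0)\), so \(h(x) = \|x - (2,0)\|^2 - 1\), and the dynamics \(\dot{x} = -x\) simply flow to the origin:

```python
def grad_h(x, x_obs=(2.0, 0.0)):
    # Gradient of h(x) = ||x - x_obs||^2 - r^2
    return (2.0 * (x[0] - x_obs[0]), 2.0 * (x[1] - x_obs[1]))

def f(x):
    # Toy dynamics: exponential convergence to the origin
    return (-x[0], -x[1])

def hdot(x):
    # Nagumo quantity: grad h(x)^T f(x)
    g = grad_h(x)
    fx = f(x)
    return g[0] * fx[0] + g[1] * fx[1]

# Two points on the obstacle boundary (r = 1, centered at (2, 0)):
print(hdot((1.0, 0.0)))  # 2.0  -> flow points into the safe set here
print(hdot((3.0, 0.0)))  # -6.0 -> Nagumo violated: this trajectory enters the obstacle
```

The check correctly flags that a trajectory started at \((3, 0)\) heads straight through the obstacle on its way to the origin, so the flow-to-origin dynamics are not safe for this set.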
Control Barrier Functions
Control Barrier Functions (CBFs), introduced by Ames et al., relax the boundary condition into a global inequality. A continuously differentiable function \(h\) is a CBF for the control-affine system
\[\dot{x} = f(x) + g(x)\,u\]
if there exists an extended class \(\mathcal{K}\) function \(\alpha\) (strictly increasing with \(\alpha(0) = 0\), defined on all of \(\mathbb{R}\)) such that for all \(x \in \mathbb{R}^n\):
\[\sup_{u \in \mathcal{U}} \left[\nabla h(x)^\top (f(x) + g(x)\,u)\right] \geq -\alpha(h(x)).\]
The term \(-\alpha(h(x))\) is the key innovation. When \(h(x)\) is large (state well inside \(\mathcal{S}\)), the right-hand side is a large negative number, imposing almost no constraint. As \(h(x) \to 0\) (state approaching the boundary), the bound tightens. Once \(h(x) < 0\) (unsafe), the bound becomes positive, demanding that \(\dot{h}\) push back toward safety.
A common linear choice is \(\alpha(s) = \gamma s\) for some \(\gamma > 0\), giving the condition
\[\dot{h}(x, u) \geq -\gamma\, h(x).\]
This ensures \(h\) decays no faster than exponentially, which is enough to guarantee \(h(x(t)) \geq 0\) for all \(t \geq 0\) whenever \(h(x(0)) \geq 0\).
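To see the exponential bound in action, here is a worst-case rollout (forward Euler, with step size and gain invented for illustration) in which the constraint is active at every step, so \(h\) decays exactly at the fastest rate the condition allows:

```python
import math

def worst_case_h(h0, gamma, dt, steps):
    """Forward-Euler rollout of hdot = -gamma * h, the fastest decay the
    linear CBF condition permits (constraint active at every step)."""
    h = h0
    for _ in range(steps):
        h += dt * (-gamma * h)
    return h

h_final = worst_case_h(h0=2.0, gamma=1.0, dt=1e-3, steps=5000)  # 5 seconds
print(h_final > 0.0)                         # True: h never crosses zero
print(abs(h_final - 2.0 * math.exp(-5.0)))   # small gap from the analytic h0*exp(-gamma*t)
```

Even in this worst case, \(h\) only approaches zero asymptotically and never crosses it, which is exactly the invariance guarantee.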
Safety Filters via QP
The power of CBFs comes from how cleanly they integrate with existing controllers. Given a nominal controller \(u_{\text{nom}}(x)\) designed for performance (e.g., a PD controller or an MPC policy), we can find the closest safe control input by solving a small quadratic program at each timestep:
\[u^*(x) = \underset{u \in \mathcal{U}}{\arg\min} \;\|u - u_{\text{nom}}(x)\|^2\] \[\text{s.t.} \quad \nabla h(x)^\top(f(x) + g(x)\,u) \geq -\gamma\, h(x).\]
This is a single linear constraint in \(u\), so when \(\mathcal{U} = \mathbb{R}^m\) the QP has a closed-form solution, and even with input bounds it solves in microseconds. The result is a safety filter: an online projection of the nominal input onto the set of safe inputs. When \(u_{\text{nom}}\) is already safe, the filter passes it through unmodified. When it isn’t, the filter makes the minimal intervention to restore safety.
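Here is a minimal sketch for the unconstrained case \(\mathcal{U} = \mathbb{R}^2\) on a single integrator \(\dot{x} = u\) (so \(f = 0\), \(g = I\)), where the projection reduces to one conditional correction along \(\nabla h\). The circular obstacle, gains, and proportional go-to-goal law are my own toy choices:

```python
X_OBS, R, GAMMA = (2.0, 0.0), 1.0, 1.0

def h(x):
    return (x[0] - X_OBS[0]) ** 2 + (x[1] - X_OBS[1]) ** 2 - R ** 2

def grad_h(x):
    return (2.0 * (x[0] - X_OBS[0]), 2.0 * (x[1] - X_OBS[1]))

def safety_filter(x, u_nom):
    """Closed-form CBF-QP solution for a single integrator: project u_nom
    onto the half-space grad_h(x) . u >= -GAMMA * h(x)."""
    a = grad_h(x)
    slack = a[0] * u_nom[0] + a[1] * u_nom[1] + GAMMA * h(x)
    if slack >= 0.0:
        return u_nom                       # nominal input already satisfies the CBF
    aa = a[0] * a[0] + a[1] * a[1]
    return (u_nom[0] - slack * a[0] / aa,  # minimal correction along grad h
            u_nom[1] - slack * a[1] / aa)

goal, x, dt = (4.0, 0.0), (0.0, 0.5), 0.01
min_h = float("inf")
for _ in range(2000):
    u_nom = (goal[0] - x[0], goal[1] - x[1])   # proportional go-to-goal law
    u = safety_filter(x, u_nom)
    x = (x[0] + dt * u[0], x[1] + dt * u[1])
    min_h = min(min_h, h(x))

print(min_h >= 0.0)   # True: the trajectory never enters the obstacle
```

The straight line from start to goal passes through the obstacle; the filter leaves the nominal input untouched far away and deflects it near the boundary, so the state slides around the obstacle instead of through it.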
Relationship to CLFs
CBFs pair naturally with Control Lyapunov Functions (CLFs). A CLF \(V\) certifies stability — it guarantees the system converges to a goal. The CLF-CBF QP unifies both objectives:
\[\min_{u,\,\delta}\; u^\top H u + p\,\delta^2\] \[\text{s.t.} \quad \dot{V}(x,u) \leq -\lambda V(x) + \delta \quad \text{(CLF, soft)}\] \[\quad\quad\quad \dot{h}(x,u) \geq -\gamma\, h(x) \quad\;\; \text{(CBF, hard)}\]
The CLF constraint is relaxed with slack \(\delta\) so that safety (CBF, hard constraint) always takes precedence over convergence. The penalty \(p\) controls how much we care about reaching the goal. This framework reduces the entire design problem — stability and safety — to a real-time QP.
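A one-dimensional sketch shows the precedence at work. Everything here is invented for illustration (system \(\dot{x} = u\), goal \(x = 2\) encoded by \(V = (x-2)^2\), safe set \(x \leq 1\) encoded by \(h = 1 - x\), so the goal is deliberately unsafe), and a coarse grid search stands in for a real QP solver:

```python
def clf_cbf_qp_1d(x, lam=1.0, gamma=1.0, p=100.0, n=20001, u_lim=5.0):
    """Grid-search sketch of the CLF-CBF QP for xdot = u with
    V = (x - 2)^2 (goal x = 2) and h = 1 - x (safe set x <= 1).
    CBF constraint is hard; CLF violation is absorbed by the slack delta."""
    u_cbf_max = gamma * (1.0 - x)       # hard CBF constraint: u <= gamma * h(x)
    best_u, best_cost = None, float("inf")
    for i in range(n):
        u = -u_lim + 2.0 * u_lim * i / (n - 1)
        if u > u_cbf_max:
            continue                    # violates the hard safety constraint
        # delta = max(0, Vdot + lam * V) is the smallest feasible CLF slack
        delta = max(0.0, 2.0 * (x - 2.0) * u + lam * (x - 2.0) ** 2)
        cost = u * u + p * delta * delta
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u = clf_cbf_qp_1d(0.9)      # state close to the safety boundary x = 1
print(abs(u - 0.1) < 1e-3)  # True: CBF caps u at gamma*h = 0.1, overriding the CLF
```

Far inside the safe set the same solver returns roughly the unconstrained CLF input, because the safety constraint is slack there; near the boundary, the hard constraint wins and the slack \(\delta\) absorbs the CLF violation.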
Limitations and Open Problems
CBFs are elegant, but several challenges arise in practice:
- Construction. Finding a valid \(h\) for a complex safe set is non-trivial. For obstacles defined by non-convex geometry or learned from data, constructing a smooth, valid CBF is an active research area.
- Relative degree. If \(u\) does not appear in \(\dot{h}\) (e.g., \(\nabla h(x)^\top g(x) = 0\)), the CBF constraint is vacuous. High-order CBFs (HOCBFs) handle this by differentiating until the control appears, but each differentiation introduces additional assumptions.
- Feasibility. The CBF and CLF constraints can be simultaneously infeasible, particularly near the boundary of \(\mathcal{S}\). Careful design of \(\alpha\) and \(\mathcal{U}\) is needed.
- Uncertainty. Standard CBFs assume exact knowledge of \(f\) and \(g\). Robust and stochastic extensions — using tubes, chance constraints, or concentration inequalities — are needed for real systems with model error and noise.
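The relative-degree point can be made concrete with a double integrator, where \(h = p\) has \(\dot{h} = v\) and \(u\) is absent from the constraint. The HOCBF recipe, sketched below under my own toy gains and PD law, enforces the condition on \(\psi_1 = \dot{h} + k_1 h\) instead, where the control does appear:

```python
K1, K2 = 2.0, 2.0   # gains of the linear class-K functions at each layer

def hocbf_filter(p, v, u_nom):
    """High-order CBF for the double integrator pdot = v, vdot = u with
    h(p) = p (keep position nonnegative). Since u is missing from hdot = v,
    enforce the first-order condition on psi1 = hdot + K1*h = v + K1*p:
    psi1dot = u + K1*v >= -K2*psi1."""
    psi1 = v + K1 * p
    u_min = -K1 * v - K2 * psi1
    return max(u_nom, u_min)

p, v, dt = 2.0, 0.0, 1e-3
min_p = p
for _ in range(10000):                       # 10 seconds of simulation
    u_nom = -4.0 * (p - (-2.0)) - 2.0 * v    # PD law toward an unsafe goal p = -2
    u = hocbf_filter(p, v, u_nom)
    p, v = p + dt * v, v + dt * u            # explicit Euler step
    min_p = min(min_p, p)

print(min_p >= 0.0)   # True: position never goes negative despite the unsafe goal
```

The nominal controller actively tries to drive the position to \(-2\); the filtered system instead settles at the boundary \(p = 0\), illustrating both the HOCBF construction and the safety-over-performance precedence.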
Citation
@article{claude2026cbf,
  title  = {Control Barrier Functions},
  author = {Claude Sonnet 4.6},
  year   = {2026}
}