Autonomy, As It Actually Exists
Systems, automation, and human oversight in autonomous flight.
Autonomy is often spoken about as a destination. In practice, it is a process. Most systems described as autonomous today are not independent, self-directing agents operating without oversight. They are layered systems that combine automation, human judgment, safety constraints, and carefully bounded decision-making.
That reality is where Autonomy Stack begins.
This newsletter exists as a working notebook for thinking through how autonomous and automated flight systems are actually built and operated. It focuses on the structure beneath the surface: the software layers, control logic, data flows, failure modes, and human-in-the-loop mechanisms that together make modern UAV systems function in the real world.
This is not a space for marketing claims or speculative futurism. It is a place to examine systems as they exist today, with all their limits intact.
When autonomy is discussed here, it is not treated as a binary condition. It is treated as a stack of capabilities that must cooperate under uncertainty. Perception feeds planning. Planning feeds control. Control operates inside safety envelopes. Humans remain part of the system, whether through supervision, intervention, or design decisions made long before a vehicle ever takes flight. Regulation and airspace rules shape behavior just as strongly as code does.
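To make that layering concrete, here is a deliberately simplified sketch of the flow. Every name, threshold, and unit is invented for illustration; no real autopilot works this simply. The point is the shape: perception produces an estimated state, planning chooses an action, control turns it into a command, and a safety envelope clamps the result no matter who decided, human or machine.

```python
# A minimal, hypothetical skeleton of the layered flow described above:
# perception feeds planning, planning feeds control, control runs inside a
# safety envelope, and a human supervisor can override at any point.
# All names and numbers are illustrative, not drawn from any real system.

from dataclasses import dataclass


@dataclass
class State:
    altitude_m: float
    obstacle_ahead: bool


def perceive(raw_altitude_m: float, raw_range_m: float) -> State:
    """Perception layer: turn raw sensor values into an estimated state."""
    return State(altitude_m=raw_altitude_m, obstacle_ahead=raw_range_m < 10.0)


def plan(state: State) -> str:
    """Planning layer: pick a high-level action from the estimated state."""
    return "climb" if state.obstacle_ahead else "cruise"


def control(action: str) -> float:
    """Control layer: map the planned action onto an actuator command
    (here, a climb rate in m/s)."""
    return 2.0 if action == "climb" else 0.0


def safety_envelope(climb_rate: float, state: State) -> float:
    """Safety constraint: clamp commands so the vehicle stays inside its
    operating limits regardless of what planning or a human asked for."""
    if state.altitude_m > 120.0:       # e.g. a regulatory ceiling
        return min(climb_rate, 0.0)    # never command a climb above it
    return climb_rate


def step(raw_altitude_m: float, raw_range_m: float,
         human_override: str | None = None) -> float:
    """One cycle of the stack. A human supervisor can replace the planner's
    decision, but the safety envelope still applies to everyone."""
    state = perceive(raw_altitude_m, raw_range_m)
    action = human_override or plan(state)
    command = control(action)
    return safety_envelope(command, state)


if __name__ == "__main__":
    print(step(50.0, 8.0))                            # planner decides: climb
    print(step(125.0, 8.0))                           # envelope blocks the climb
    print(step(50.0, 50.0, human_override="climb"))   # human intervenes, envelope still applies
```

Note where the envelope sits: after both the planner and the human override. That ordering is the whole argument of the paragraph above, expressed in about forty lines.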
Most drones today are better described as automated than autonomous. They execute instructions well and repeat behaviors reliably, but they struggle when assumptions break. They depend on humans to resolve ambiguity and absorb risk. That gap between automation and autonomy is not a weakness to gloss over. It is the most interesting part of the system.
Autonomy Stack is where that gap is explored.
Posts here may examine how flight control abstractions are designed, how telemetry reveals behavior that interfaces conceal, how safety systems interact with autonomy claims, or how data pipelines influence decision-making long after a flight ends. Some writing will be technical. Some will be conceptual. Some will be incomplete ideas shared early because they are worth thinking about in public.
Publishing is intentionally irregular. This is not a feed optimized for cadence or engagement. It is a place to slow down and write when there is something worth examining carefully.
Some posts draw on ongoing experimental and analytical work, including Flight Data Lab and Flight Lab. These projects are not the focus of the newsletter, but they provide real material from which observations emerge. They ground theory in data and constraint.
Autonomy Stack is not a guide, a course, or a prediction engine. It does not promise timelines or breakthroughs. It does not frame autonomy as inevitable or frictionless. Autonomous systems advance slowly because they must. Safety, oversight, and reliability demand it.
This space exists to stay with that reality rather than skip past it.
If you are interested in how autonomous flight systems actually behave, how they are constrained, and how they are built layer by layer rather than imagined whole, you are in the right place.

