EyesOn · 2026-04-17

How WebRTC Actually Works for Drone Video — And Why the Architecture Decisions Matter

Every drone streaming platform on the market tells you they use WebRTC. What they don't tell you is how the architecture around it determines whether you get 200ms of usable latency or 4 seconds of buffered footage delivered through a data center in Virginia.

WebRTC is a protocol, not a product. The standard that lets EyesOn stream your DJI controller OSD in real time is the same one that, configured badly, makes live video unusable. The difference is in the decisions made between the drone and the screen: where the signal goes, how many hops it takes, and what processes it touches along the way.

This isn't a pitch. It's an explanation of how the pipeline works, what breaks it, and why the hosting decisions you make before you ever fly a single mission determine the quality of every stream you'll ever run.

What WebRTC Is Actually Doing

The Basics Without the Marketing Spin

WebRTC — Web Real-Time Communication — is an open standard that enables peer-to-peer audio and video transmission directly between endpoints. It was designed for low-latency communication. Video calls. Voice chat. Applications where you need the signal to travel as fast as the infrastructure allows.

At its core, a WebRTC stream involves a few key components:

A signaling server, which exchanges session descriptions (SDP offers and answers) so both endpoints can agree on codecs and connection parameters.

STUN, which lets each endpoint discover its public-facing address so a direct peer-to-peer path can be attempted through NAT.

TURN, a relay server that carries the media whenever a direct path can't be established.

Encrypted media transport (DTLS-SRTP), which moves the actual audio and video between peers.

When everything is configured correctly and the TURN relay is either unnecessary (P2P path established) or hosted close to your operation, you get sub-second delivery. When the relay is a cloud server in a remote region, you get whatever latency that routing introduces on top of the encoding and decoding time.
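Whether a session landed on a direct path or fell back to the relay is visible in the ICE candidates negotiated during setup: each candidate line carries a `typ` field of `host` (local interface), `srflx` (public address discovered via STUN), or `relay` (TURN). Here's a minimal sketch that classifies candidate types from an SDP blob; the sample SDP lines are illustrative, not captured from a real session:

```python
import re

# Candidate type appears in the "typ" field of each ICE candidate line:
#   host  -> local interface (direct P2P possible)
#   srflx -> server-reflexive address discovered via STUN
#   relay -> media relayed through a TURN server
CANDIDATE_RE = re.compile(r"typ (host|srflx|relay)")

def classify_candidates(sdp: str) -> dict:
    """Count ICE candidate types in an SDP blob."""
    counts = {"host": 0, "srflx": 0, "relay": 0}
    for line in sdp.splitlines():
        if line.startswith("a=candidate:"):
            m = CANDIDATE_RE.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

sample_sdp = "\n".join([
    "a=candidate:1 1 udp 2122260223 192.168.1.10 51000 typ host",
    "a=candidate:2 1 udp 1686052607 203.0.113.7 51000 typ srflx raddr 192.168.1.10 rport 51000",
    "a=candidate:3 1 udp 41885695 198.51.100.4 3478 typ relay raddr 203.0.113.7 rport 51000",
])

print(classify_candidates(sample_sdp))  # {'host': 1, 'srflx': 1, 'relay': 1}
```

If the selected candidate pair ends up being `relay`, the location of that TURN server is exactly what determines your latency floor, which is the subject of the next section.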

Where Cloud Platforms Break the Promise

Every SaaS drone streaming platform that advertises WebRTC is technically accurate. They do use WebRTC. What they don't advertise is that their TURN relay infrastructure — the server that handles connections when direct P2P isn't possible — is centralized. Your drone video leaves the field, hits their server cluster, and gets redistributed to your viewers.

For a team in Eugene, Oregon streaming to a field commander two miles away, that stream might route through a data center in the Midwest, back to the Pacific Northwest, and arrive several hundred milliseconds later than it needed to. That's not a WebRTC problem. That's an architecture problem.
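The physics here is simple to sanity-check. Light in fiber covers roughly 200 km per millisecond, so route length sets a hard floor on propagation delay before any router queuing or processing is added. The distances below are illustrative round-trip figures, not measured paths:

```python
FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200 km per millisecond

def propagation_ms(route_km: float) -> float:
    """Best-case propagation delay for a given fiber route length.
    Real paths add router queuing and processing on top of this floor."""
    return route_km / FIBER_KM_PER_MS

# Hypothetical routes (illustrative distances, not measured paths):
local_relay = propagation_ms(4)        # relay on or near the local network
cloud_detour = propagation_ms(3200)    # out to a Midwest data center and back

print(f"local relay: {local_relay} ms, cloud detour: {cloud_detour} ms")
```

Sixteen milliseconds of raw propagation doesn't sound like much, but congested peering points, queuing, and the relay's own processing multiply it in practice, and it's pure overhead that a locally hosted relay never pays.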

It's also a data custody problem. The video is on their infrastructure, processed by their systems, and subject to their retention policies and breach surface.

EyesOn's Architecture and Why It Produces 200ms Latency

The Relay Lives on Your Server

EyesOn is self-hosted. You run the TURN server. You run the signaling server. The entire WebRTC infrastructure lives on hardware you control — whether that's a VPS, an on-premise server, or a bare metal machine sitting in your operations center.

The practical result: when your M30T's controller transmits through the EyesOn Android companion app and the stream hits your TURN relay, it's hitting a machine on your network or as close to your operation as you've chosen to place it. There's no detour through a commercial cloud region. The path from drone to screen is as short as your network topology makes it.

That's how you get ~200ms end-to-end latency consistently. Not because EyesOn uses a different version of WebRTC than anyone else — because the relay isn't in someone else's data center.
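For a sense of what "you run the TURN server" looks like in practice, here is a generic Docker Compose sketch using the open-source coturn server. This is not EyesOn's actual deployment file; the image, hostnames, credentials, and port ranges are placeholders:

```yaml
# Illustrative self-hosted TURN relay using the open-source coturn image.
# Not EyesOn's deployment file — realm, credentials, and ports are placeholders.
services:
  turn:
    image: coturn/coturn
    network_mode: host        # TURN hands out real ports; host networking avoids NAT-in-NAT
    command: >
      --realm=turn.example.org
      --user=operator:change-this-secret
      --listening-port=3478
      --min-port=49160 --max-port=49200
      --fingerprint --lt-cred-mech
```

The point of the sketch is how little infrastructure this actually is: one container on a machine you control, placed wherever your network topology makes the path shortest.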

The Companion App's Role in OSD Capture

Most drone streaming approaches treat the video feed as the only signal worth transmitting. EyesOn's Android companion app captures the full DJI controller screen — which means everything the operator sees is what the remote viewer sees. Battery percentage. GPS coordinates. Altitude. Return-to-home status. Flight mode. All of it, in real time, at the same sub-second latency as the video.

This matters more than it sounds. In a SAR operation, the incident commander watching the stream from the command post doesn't just need to see what the camera sees — they need to see that the aircraft is at 200 feet AGL, has 14 minutes of battery remaining, and is holding a GPS lock with 8 satellites. That's operational data. Losing it means the commander is flying partially blind on mission-critical decisions.

Capturing the controller screen instead of just the video output is a deliberate architectural choice. It treats the drone operator's full information environment as the signal, not just the camera frame.
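To make the "full information environment" idea concrete, here is a hypothetical snapshot of the OSD fields the article lists. This struct is purely illustrative; the companion app transmits the rendered controller screen, not a data structure like this:

```python
from dataclasses import dataclass

# Hypothetical telemetry record mirroring the OSD fields listed above
# (battery, GPS, altitude, RTH status, flight mode). Illustrative only —
# the app streams the rendered controller screen, not a struct.
@dataclass
class OsdSnapshot:
    battery_pct: int
    gps_satellites: int
    altitude_ft_agl: float
    flight_mode: str
    rth_active: bool

    def mission_critical(self) -> bool:
        """Flag states a commander must act on immediately."""
        return self.battery_pct < 20 or self.gps_satellites < 6 or self.rth_active

snap = OsdSnapshot(battery_pct=54, gps_satellites=8,
                   altitude_ft_agl=200.0, flight_mode="P-GPS", rth_active=False)
print(snap.mission_critical())  # False
```

Every one of those fields is a decision input for the commander; a video-only pipeline throws all of them away.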

What Self-Hosting Means for Reliability

Cloud platforms have scheduled maintenance windows, unexpected outages, and deprecation cycles. They can change their pricing model, kill a tier, or get acquired. These aren't hypothetical risks — they're recurring events in the SaaS market.

With EyesOn, the software keeps running after your subscription lapses. There's no remote kill switch. If the subscription ends, updates stop, support access changes — but your server keeps streaming. That's not a common feature. Most SaaS platforms are designed so that lapsed billing means immediate service interruption. EyesOn isn't built that way, because drone operations don't get to schedule around a billing cycle.

The Latency Stack — Where Time Gets Lost

Every Hop Has a Cost

Latency in a live video stream is cumulative. Each process in the chain adds time:

Capture and encoding on the aircraft and controller.

The hop from the controller to the companion app.

Network transit from the app to the relay infrastructure (signaling, STUN, and TURN).

Network transit from the relay to each viewer.

Jitter buffering, decoding, and rendering on the viewer's device.

The only hop in that chain that EyesOn gives you full control over is the server infrastructure — signaling, STUN, and TURN. That's also the hop where cloud platforms introduce the most variability. By owning that infrastructure, you're not eliminating latency from the stack, but you're removing the part of it that's outside your control.
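A rough latency budget makes the point numerically. All figures below are illustrative; real values depend on hardware, codec settings, and network conditions:

```python
# Rough end-to-end latency budget for a live drone stream.
# All numbers are illustrative, not measured EyesOn figures.
BUDGET_MS = {
    "capture + encode": 40,
    "controller -> companion app": 20,
    "app -> relay (network)": 15,     # the hop you control by hosting the relay
    "relay -> viewer (network)": 15,
    "jitter buffer": 60,
    "decode + render": 30,
}

def total_latency(budget: dict) -> int:
    return sum(budget.values())

print(total_latency(BUDGET_MS), "ms")  # 180 ms — in the ~200 ms range

# Swap the relay hops for a cross-country cloud detour and the budget balloons:
cloud = {**BUDGET_MS,
         "app -> relay (network)": 80,
         "relay -> viewer (network)": 80}
print(total_latency(cloud), "ms")      # 310 ms
```

Notice that only the two network hops changed between the scenarios. Everything else in the stack is fixed cost; the relay's location is the lever.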

Why 200ms vs. 4 Seconds Is a Real Operational Difference

Two hundred milliseconds of latency is effectively imperceptible. Four seconds is not. Four seconds means that when you're guiding a ground team to a subject's position using live overhead video, what you're seeing is where the subject was four seconds ago. In a foot chase, that's 50 meters of error. In a flood rescue, it's the difference between an accurate hand signal and a boat going the wrong direction.
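The 50-meter figure is just latency times subject speed. A quick sanity check, assuming an illustrative subject speed of 12.5 m/s:

```python
def positional_error_m(latency_s: float, subject_speed_mps: float) -> float:
    """Ground-position error of a moving subject as seen on a delayed stream."""
    return latency_s * subject_speed_mps

# Illustrative subject speed of 12.5 m/s:
print(positional_error_m(0.2, 12.5))   # 200 ms of latency -> 2.5 m
print(positional_error_m(4.0, 12.5))   # 4 s of latency -> 50.0 m
```

At 200 ms the error is smaller than the subject. At four seconds it's longer than most city lots.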

This is not an exaggerated scenario. Before working on EyesOn, I flew real missions where latency in consumer cloud streaming platforms made coordination genuinely difficult. The 1 AM Doberman recovery in Springfield wasn't a scenario where four-second delayed video was acceptable — Beau was deep under canopy and I was guiding Bryan in via radio using real-time thermal acquisition. A 4-second lag between what I saw and what I told him would have had him walking past the dog in the dark.

Sub-second latency isn't a spec on a features list. It's an operational requirement for anything more than recording footage to review later.

Choosing Your EyesOn Tier Based on Infrastructure Needs

Matching Architecture to Operation Scale

EyesOn's tier structure (Personal, Professional, Enterprise, and Managed) reflects real differences in deployment complexity and operational scale.

The Docker-based deployment means the server setup is consistent across tiers. You're not dealing with different software versions or feature sets by tier — you're dealing with different license scopes and support access levels.

The Honest Technical Takeaway

WebRTC is not magic. It's a well-designed open standard that performs as advertised when the infrastructure around it is configured to let it. The platforms charging $1,500 to $5,000 per drone per year aren't providing better WebRTC — they're providing convenience at the cost of latency, data custody, and price-per-unit economics that don't scale.

If you're running drone operations where live video latency is operationally significant — public safety, SAR, infrastructure inspection, security patrol — the architecture of your streaming platform is not a background technical detail. It's a mission parameter.

EyesOn exists because the alternative was paying cloud rates for a service I could run better on my own hardware. The same logic applies to any operator who takes their video pipeline seriously.

Your server. Your relay. Your latency.

[EyesOn is available now at BarnardHQ.com — Personal, Professional, Enterprise, and Managed tiers with Docker deployment and the companion Android app included in setup.]

Tags: DroneStreaming · WebRTC · SelfHosted · EyesOn · BarnardHQ · DroneTech · LiveStreaming · DroneOps · PublicSafety · DJIEnterprise · LowLatency · DroneVideo · SAR · OpenSource