What Happens to Your Drone Feed When the Internet Goes Down
The cellular tower serving the job site goes offline at 11:42 AM. You have a DJI M30T airborne over a flooded road corridor, your incident commander is watching the feed from a command trailer 400 meters away, and the LTE connection that was delivering your WebRTC stream just dropped to zero bars. The drone is still flying. The cameras are still recording. But the feed that everyone was watching is gone.
This is not a hypothetical. It happens on flood response operations in Oakridge. It happens on wildfire perimeters in Southern Oregon where every cell tower in the drainage is either overloaded with responder traffic or physically gone. It happens on construction sites in dead zones, on private agricultural land in the Willamette Valley where the nearest tower is six miles away, and on any SAR operation in the Coast Range foothills west of Eugene where coverage maps are optimistic fiction.
The question is not whether your connection will fail. It is what your streaming architecture does when it fails.
The Problem With Cloud-Dependent Streaming
Most commercial drone streaming platforms are built on a cloud relay model. The drone feed leaves the controller, travels to a data center somewhere in the Midwest or the Pacific coast, and then gets redistributed to viewers from there. That architecture has some real advantages — it makes multi-viewer distribution easy, it offloads processing from the operator's local hardware, and it simplifies the software stack for the vendor.
It also means that every frame of your video has to make a round trip through infrastructure you do not control and cannot guarantee.
When you are flying a DJI M30T at 350 feet over an incident area and the LTE uplink drops, a cloud-dependent platform does not gracefully degrade. It cuts. Your incident commander's browser tab goes gray. The feed stops. Whatever was on screen at the moment of disconnection is the last thing anyone in your command structure saw.
For a commercial real estate inspection or a wedding flyover, that is an inconvenience. For an active flood response, a wildfire mapping mission, or a 2:30 AM security patrol over an industrial yard, it is a capability failure at the worst possible moment.
The platforms that charge per drone — DroneSense at $1,500 to $5,000 per aircraft per year, FlytBase with its metered per-viewer-minute model — are not designed around the edge case where connectivity is unreliable. They are designed around the assumption that you have a stable uplink. That assumption is wrong on a significant percentage of real-world deployments.
How EyesOn Handles Edge and Offline Conditions
EyesOn is self-hosted. The server running your WebRTC relay lives on hardware you control — whether that is a rack-mounted machine at your base, a ruggedized mini-PC in your operations vehicle, or a server at a facility with its own power and network infrastructure. The feed path goes from the DJI controller to your local network to your local server to your viewers. No cloud intermediary. No third-party relay.
The practical consequence of that architecture shows up in the situations that matter most.
Local Network Streaming Survives WAN Failure
When the internet connection dies, a self-hosted EyesOn instance keeps delivering video to anyone on the local network. If your incident commander is connected to the same Wi-Fi hotspot or local mesh network as your operations setup, they continue receiving the stream at the same sub-200ms WebRTC latency they had before the WAN dropped.
This is not a feature that requires any special configuration. It is a direct consequence of where the server lives. The relay is local. The viewers are local. The route never touched the internet in the first place.
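A quick way to confirm this split in the field is to probe the local relay and a WAN host separately before launch. The sketch below uses only Python's standard library; the relay address and port are hypothetical placeholders for your own setup, not EyesOn defaults.

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical addresses -- substitute your own relay IP/port and WAN probe.
LOCAL_RELAY = ("192.168.8.10", 8443)  # self-hosted relay on the field LAN
WAN_PROBE = ("1.1.1.1", 443)          # any well-known external host

relay_up = reachable(*LOCAL_RELAY)
wan_up = reachable(*WAN_PROBE)

if relay_up and not wan_up:
    print("WAN down, local relay up: on-site viewers unaffected")
elif relay_up:
    print("Relay and WAN both up: local and remote viewers served")
else:
    print("Local relay unreachable: check server and hotspot first")
```

Running this as a pre-flight step tells you which viewer population you can actually serve before the aircraft leaves the ground.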
For operations where you bring a dedicated hotspot or a Starlink terminal to the site — which is standard kit for BarnardHQ deployments in remote terrain — your local network remains functional even when the satellite link is saturated or temporarily unavailable. The feed continues. The drone continues. The mission continues.
The Android Companion App Captures OSD Regardless of Stream Status
EyesOn's Android companion app captures the full DJI controller screen, including all OSD data — altitude, distance, speed, battery percentage, signal strength, GPS coordinates, flight mode. That capture happens at the device level. It does not depend on a cloud connection to function.
When the stream is live and viewers are watching, they see the full controller screen with all that telemetry overlaid. When the WAN drops and local viewers are watching via the self-hosted relay, they still see that same OSD data. The thermal sensor readouts, the zoom camera feed, the laser rangefinder data from an M30T — all of it stays in the stream as long as the drone is flying and the app is capturing.
This matters more than it sounds. During a SAR operation or an emergency response, the OSD is often as important as the video itself. Knowing that the aircraft is at 4.8 km distance, 280 feet AGL, with 34% battery remaining — that is operational data that drives decisions on the ground. Losing it because a cloud relay went down is not acceptable.
What Happens to the Recording
DJI enterprise platforms record locally to the aircraft's onboard storage regardless of whether any streaming connection is active. The M30T stores video on its own SD card. The M4TD does the same. A connection failure mid-mission does not create a gap in your recorded footage — the drone was capturing to local storage the entire time.
EyesOn does not change that. What it adds is the ability to review that footage through your own infrastructure, with your own access controls, without handing it to a cloud provider that may have data retention policies you have not read carefully enough.
The combination — local recording on the aircraft, local relay on your server — means that a connectivity failure affects your remote viewers' experience but does not affect the data integrity of the mission itself.
Starlink as a Practical Backstop
For operations in genuinely remote terrain — the kind of country west of Junction City where the Jonathan House SAR search covered 800 acres of Coast Range foothills with limited cell coverage, or the kind of wildfire perimeter work in Southern Oregon where tower infrastructure is either absent or overwhelmed — Starlink provides an uplink path that terrestrial cellular cannot.
BarnardHQ carries Starlink as standard equipment for extended operations. When the Starlink terminal is active and maintaining a WAN connection, remote viewers outside the local network can access the EyesOn stream over that link. When the terminal experiences weather interruption or orbital handoff latency spikes, the local relay keeps the on-site team connected.
Starlink plus a self-hosted EyesOn instance is a genuinely resilient architecture. It is not bulletproof — no communications stack in the field is — but it gives you two independent delivery paths rather than a single dependency on a cloud relay that you cannot control and cannot reach when it fails.
What This Costs vs. What Cloud Dependency Costs
EyesOn's Personal tier is $149 setup plus $39 per month, or $390 prepaid for a full year. First-year total on monthly billing: $617 ($539 with the annual prepay). That covers one server, unlimited drones, unlimited viewers. There is no per-drone fee, no per-viewer-minute charge, and no service interruption if a payment lapses — the software keeps running on your server.
The platforms built on cloud dependency charge differently. DroneSense starts at $1,500 per drone per year and scales from there. FlytBase meters your costs based on how many viewer-minutes your streams consume — a model that penalizes you for running longer missions or briefing larger teams. LiveU's hardware entry point is north of $10,000.
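The arithmetic is worth running for your own fleet. The sketch below uses only the figures quoted above; the three-drone fleet size is an illustrative assumption, not a quoted number.

```python
# First-year cost comparison using the prices quoted above.
DRONES = 3  # illustrative fleet size

# EyesOn: one-time setup plus a flat monthly fee, fleet size irrelevant.
eyeson_setup, eyeson_monthly = 149, 39
eyeson_year_one = eyeson_setup + 12 * eyeson_monthly

# DroneSense: the quoted entry price is per drone, per year.
dronesense_per_drone = 1_500
dronesense_year_one = DRONES * dronesense_per_drone

print(f"EyesOn, any fleet size:  ${eyeson_year_one}")       # $617
print(f"DroneSense, {DRONES} drones:   ${dronesense_year_one}")  # $4,500 at the entry price
```

The gap widens with every aircraft you add, because only one side of the comparison scales with fleet size.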
None of those platforms can offer local-network resilience during WAN failure, because all of them require the cloud relay to function. The architecture difference is not a marketing claim. It is a physical fact about where the relay server lives.
The Specific Failure Mode You Should Plan For
Every serious operator should war-game this scenario before they need to respond to it in the field: you are airborne, something important is happening in your camera frame, and your uplink to external stakeholders drops.
With a cloud-dependent platform, the answer is: your remote viewers are blind, and you are managing the drone, the mission, and the communications failure simultaneously.
With a self-hosted EyesOn instance and a local network, the answer is: anyone physically at the operation site continues watching the live feed, the OSD data stays intact, the recording on the aircraft is uninterrupted, and you deal with the remote viewer problem separately when connectivity restores.
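The difference between those two answers reduces to one line of logic per architecture. A minimal sketch of the claim — `feed_available` is a hypothetical helper name for this model, not EyesOn code:

```python
def feed_available(viewer_on_site: bool, wan_up: bool, relay: str) -> bool:
    """Whether a viewer still sees live video after a connectivity event.

    relay: "cloud" (vendor-hosted relay) or "local" (self-hosted on the field LAN).
    """
    if relay == "cloud":
        return wan_up  # every viewer, local or remote, depends on the uplink
    # Self-hosted: on-site viewers ride the LAN; only remote viewers need WAN.
    return viewer_on_site or wan_up

# The scenario from the text: the WAN drops mid-mission.
assert feed_available(viewer_on_site=True, wan_up=False, relay="cloud") is False
assert feed_available(viewer_on_site=True, wan_up=False, relay="local") is True
assert feed_available(viewer_on_site=False, wan_up=False, relay="local") is False
```

The assertions encode the comparison directly: with a cloud relay, a dead WAN blinds everyone; with a local relay, it only blinds viewers who were never on site.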
At the moment it happens, that is not a small difference in operational pressure.
The practical step is straightforward: run EyesOn on a machine you bring to the field, keep a local Wi-Fi hotspot active for on-site team members, and treat your WAN connection as a bonus for remote viewers rather than a requirement for your core team. The architecture supports that model. The cloud platforms structurally cannot.
If you are flying missions where connectivity is genuinely guaranteed and stable, the self-hosted model still gives you data sovereignty and zero per-drone costs. But if you have ever watched a stream die mid-mission because a tower was overloaded or a cable was cut, the value of keeping the relay on your own hardware is not theoretical.