Real-time IoT architectures work in factories with reliable connectivity. On farms, they lose data. Here is how to choose between store-and-forward and real-time ingestion for agricultural operations, and when you need both.
Kinesiis Engineering Team
Cloud Engineering
When agricultural operations start building IoT infrastructure, the default assumption is real-time: sensors transmit data continuously, a cloud ingestion layer processes it immediately, and dashboards update live. This architecture works well in factories and processing facilities with wired networks and cellular backup. It fails predictably on farms, where connectivity is intermittent, power is constrained, and the cost of data loss varies dramatically by sensor type. The choice between store-and-forward and real-time ingestion is one of the first architectural decisions in any agricultural IoT project, and getting it wrong means either losing data or overbuilding infrastructure.
A real-time IoT architecture sends sensor readings to the cloud as they are collected. The sensor transmits, the ingestion layer receives, and the data is available for processing within seconds. This requires continuous network connectivity between the sensor and the cloud endpoint.
In agricultural environments, continuous connectivity is not available for most sensor locations. Remote paddocks, grain storage facilities, and livestock monitoring points often have cellular coverage that drops out for hours at a time. When the connection drops, a real-time architecture has no mechanism to recover the data collected during the outage. It is simply lost.
For some sensor types (ambient temperature logging, soil moisture trends), losing a few hours of data is acceptable. For others (grain temperature monitoring where a spike indicates spoilage risk), any data gap is a problem. The architecture needs to reflect this difference.
Store-and-forward architecture buffers data on the device or a nearby edge gateway. When connectivity is available, the buffer syncs to the cloud. When connectivity drops, data continues to accumulate locally. The ingestion layer handles out-of-order and delayed records as a normal operating condition.
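A minimal sketch of the device-side buffer described above, assuming a Python-capable edge gateway and a durable SQLite store (the class name, schema, and `uplink` callback are illustrative, not a reference to any specific product):

```python
import json
import sqlite3
import time


class StoreAndForwardBuffer:
    """Durable local buffer: readings survive power cycles and outages,
    and sync to the cloud only when a link is actually available."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS readings ("
            "id INTEGER PRIMARY KEY, ts REAL, payload TEXT, "
            "synced INTEGER DEFAULT 0)"
        )

    def record(self, sensor_id, value, ts=None):
        # Always write locally first; transmission is a separate concern.
        payload = json.dumps({"sensor": sensor_id, "value": value})
        self.db.execute(
            "INSERT INTO readings (ts, payload) VALUES (?, ?)",
            (ts or time.time(), payload),
        )
        self.db.commit()

    def sync(self, uplink, batch=100):
        """Push unsynced rows oldest-first; mark a row synced only after
        the uplink acknowledges it, so a dropped link never loses data."""
        rows = self.db.execute(
            "SELECT id, ts, payload FROM readings WHERE synced = 0 "
            "ORDER BY ts LIMIT ?",
            (batch,),
        ).fetchall()
        sent = 0
        for row_id, ts, payload in rows:
            if uplink(ts, payload):  # uplink returns True on confirmed delivery
                self.db.execute(
                    "UPDATE readings SET synced = 1 WHERE id = ?", (row_id,)
                )
                sent += 1
            else:
                break  # connectivity dropped mid-batch; remaining rows wait
        self.db.commit()
        return sent
```

Because rows are marked synced only after acknowledgement, records arriving at the cloud late or out of order is the expected case, not an error, which is exactly the contract the ingestion layer has to support.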
The tradeoff is latency. Data arrives in the cloud minutes or hours after collection, not seconds. Dashboards show recent data, not live data. For most agricultural telemetry (weather, soil, equipment utilisation), this delay is acceptable. Field managers check conditions once or twice a day, not continuously.
The cost is in device storage and edge infrastructure. Sensors need local storage (flash memory or an SD card) or a nearby edge gateway with storage capacity. The edge gateway needs power and weather protection. These are real costs, but they are usually lower than the cost of the connectivity infrastructure required for reliable real-time transmission from remote locations.
Most agricultural IoT deployments end up with a hybrid architecture. Routine telemetry (temperature, humidity, soil moisture, equipment hours) uses store-and-forward. Critical alerts (grain temperature above threshold, water system pressure drop, livestock distress signals) use real-time transmission with priority queuing.
The alert path typically uses a different connectivity mechanism: satellite messaging, SMS, or a dedicated cellular connection with higher reliability than the bulk data path. The alert payload is small (a few bytes) and infrequent, so the cost of a more reliable transmission channel is manageable.
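The split between the two paths can be sketched as a routing function at the edge. The rules and thresholds below are illustrative assumptions, not values from any real deployment; `send_alert` and `buffer_bulk` stand in for the priority channel and the store-and-forward buffer respectively:

```python
# Hypothetical alert rules: each maps a sensor type to a predicate that
# decides whether a reading is urgent. Thresholds are illustrative only.
ALERT_RULES = {
    "grain_temp_c": lambda v: v > 30.0,        # temperature spike -> spoilage risk
    "water_pressure_kpa": lambda v: v < 150.0, # pressure drop -> line break
}


def route_reading(sensor_type, value, send_alert, buffer_bulk):
    """Send critical conditions on the priority path immediately;
    everything else goes into the store-and-forward buffer."""
    rule = ALERT_RULES.get(sensor_type)
    if rule and rule(value):
        send_alert(sensor_type, value)  # small payload, reliable channel
        return "alert"
    buffer_bulk(sensor_type, value)     # bulk telemetry, synced opportunistically
    return "bulk"
```

Note that an alerting sensor still records its reading locally in a real system; the alert path carries the notification, not the telemetry history.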
This hybrid approach optimises for what actually matters: complete telemetry data for trend analysis and operational planning, plus immediate notification for conditions that require urgent action. It avoids the false choice between real-time everything (expensive, unreliable) and store-and-forward everything (no urgent alerting).
The right architecture for a given agricultural operation depends on the answer to one question: for each sensor type, what is the cost of losing data during a connectivity outage?
If the answer is 'we'll check it tomorrow anyway,' store-and-forward is sufficient. If the answer is 'we need to know within minutes or we risk spoilage, equipment damage, or animal welfare issues,' that sensor needs a real-time alert path.
Map every sensor type to one of these categories before choosing an architecture. The result is usually 80-90% store-and-forward and 10-20% real-time alerts, which is a much cheaper and more reliable system than attempting real-time for everything.
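That mapping exercise can be captured as a simple policy table. The sensor names and assignments here are assumptions for the sketch; the point is that the split is explicit and checkable before any infrastructure is built:

```python
# Illustrative sensor inventory mapped to an ingestion path.
# Names and assignments are examples, not a prescribed inventory.
SENSOR_POLICY = {
    "ambient_temp": "store_and_forward",
    "soil_moisture": "store_and_forward",
    "equipment_hours": "store_and_forward",
    "weather_station": "store_and_forward",
    "grain_temp": "realtime_alert",
}


def policy_split(policy):
    """Fraction of sensor types on each path -- a quick sanity check
    against the expected 80-90% / 10-20% split."""
    total = len(policy)
    alerts = sum(1 for path in policy.values() if path == "realtime_alert")
    return {
        "store_and_forward": (total - alerts) / total,
        "realtime_alert": alerts / total,
    }
```

If the real-time fraction comes out much higher than 20%, that is usually a signal to revisit the cost-of-data-loss question for each sensor rather than to build more real-time infrastructure.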
In summary
The choice between store-and-forward and real-time IoT architecture in agriculture should be driven by operational requirements, not technology preferences. Real-time sounds better but fails in environments without reliable connectivity. Store-and-forward preserves data completeness at the cost of latency. Most agricultural operations need both: store-and-forward for the bulk of their telemetry and a real-time alert path for the small number of conditions that require immediate action. The architecture that works is the one designed around the connectivity environment you actually have, not the one you wish you had.