By early 2026, robotaxis are completing millions of rides monthly across
cities from San Francisco to Shanghai—yet a single rainstorm can still paralyze
entire fleets.
While the industry races toward full autonomy, adverse weather has emerged as the critical bottleneck
preventing widespread deployment. The challenge isn’t just about better sensors; it’s about fundamentally
understanding what cameras and LiDAR cannot perceive: the physical grip between tire and road. Companies
like Easyrain are pioneering a solution that bridges this gap through virtual sensing and active safety
systems, offering a pathway to truly weather-resilient autonomous mobility.
The Current Landscape: Autonomy Hits a Wall
The robotaxi market has reached an inflection point. Waymo, the market leader, operates over 2,500
vehicles completing 450,000 paid rides weekly as of late 2025, with plans to
scale to 1 million rides per week by the end of 2026. In China, Baidu’s Apollo Go delivered 3.1 million fully driverless rides in
Q3 2025
alone, operating across more than 20 cities.
Yet beneath these impressive numbers lies a persistent vulnerability. Tesla’s limited Austin deployment faced
immediate scrutiny when vehicles reportedly “gave up” during heavy downpours, requiring passenger extraction
mid-ride. Even Waymo’s more sophisticated sensor suite exhibited “indecisive” behavior in rain, pulling over
frequently or failing to locate pickup points due to sensor noise. NHTSA opened a formal review in June 2025 specifically
targeting Tesla’s “Vision-Only” performance in low-visibility conditions.
The economic implications are stark. For a fleet of 1,000 robotaxis, a single 4-hour rainstorm can block
hundreds of thousands of dollars in potential revenue while eroding the user trust essential for mass
adoption. In cities like Seattle, Boston, or London—where precipitation occurs 15-20% of the
year—“fair-weather” autonomy is fundamentally unviable as a public transit replacement.
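The arithmetic behind that estimate is easy to sketch; the fare, utilization, and downtime figures below are illustrative assumptions rather than reported operator data.

```python
# Back-of-envelope estimate of revenue blocked by one storm.
# All inputs are illustrative assumptions, not reported operator figures.
fleet_size = 1_000        # robotaxis idled by the storm
rides_per_hour = 2        # assumed completed rides per vehicle per hour
avg_fare_usd = 25.0       # assumed average fare
storm_hours = 4           # duration of the disruption

blocked_revenue = fleet_size * rides_per_hour * avg_fare_usd * storm_hours
print(f"Blocked revenue: ${blocked_revenue:,.0f}")   # Blocked revenue: $200,000
```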
The “Blind Spot” of Sensors: Why Cameras Are Not Enough
The autonomous vehicle industry has invested billions in sensor technology, yet adverse weather exposes
fundamental limitations in how machines perceive their environment.
LiDAR, the backbone of most robotaxi perception systems,
relies on laser pulses to build 3D
point clouds. Raindrops and snowflakes scatter these beams, creating
“salt-and-pepper” noise that degrades range and generates phantom obstacles. As precipitation intensity
increases, detection accuracy plummets—forcing vehicles to slow dramatically or stop entirely.
Cameras, particularly in Tesla’s vision-only architecture, face even more severe
constraints. Water droplets on lenses cause occlusion, wet asphalt creates blinding glare from streetlights,
and snow obliterates the contrast needed to distinguish lane markings. As Geotab’s analysis notes, cameras “physically cannot see through heavy
precipitation better than a human eye”—yet unlike humans, they lack decades of experiential learning to
compensate.
Radar offers weather resilience but trades it for resolution. While radio waves penetrate
rain and fog effectively, standard automotive radar lacks the elevation resolution to reliably classify
stationary objects. The result: false positives that cause unnecessary braking or, worse, failure to detect
genuine hazards obscured by weather-induced clutter.
The industry response has been to add more sensors—thermal cameras, higher-resolution radar, redundant LiDAR
arrays. But this approach inflates vehicle costs by thousands of dollars while still failing to address the
core issue.
The Grip Physics Dilemma
Sensors perceive geometry, not friction. This is the fundamental paradox facing autonomous
systems in adverse weather.
A LiDAR can map a snow-covered road surface with millimeter precision. A camera can detect ice crystals. But
neither can reliably distinguish between wet asphalt offering adequate traction and black
ice that will cause complete loss of control the instant brakes are applied. The critical gap is tactile feedback—the physical
sensation human drivers
use to “feel” when grip is deteriorating.
Aquaplaning exemplifies this challenge. When a thin water layer forms between tire and road, the tires ride on the water film instead of the asphalt, and the vehicle loses both steering authority and braking effectiveness. By the time
wheel slip sensors detect the problem, control is already compromised. Autonomous systems, lacking the
predictive capability humans develop through experience, typically respond with extreme caution—slowing to
impractical speeds or refusing to operate entirely.
Current approaches attempt to infer grip from wheel slip data or correlate weather forecasts with
conservative speed limits. But these are reactive measures. What’s needed is predictive
capability: the ability to detect loss of traction before it occurs, giving the
vehicle time to adjust trajectory, speed, or route.
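To see why slip-based inference is inherently reactive, consider a minimal sketch of the conventional approach (the signal names and threshold are illustrative, not taken from any production system): the alarm can only fire once the wheel is already slipping.

```python
def slip_ratio(wheel_speed_mps: float, vehicle_speed_mps: float) -> float:
    """Longitudinal slip ratio: ~0 when the wheel rolls freely, ~1 when locked or spinning."""
    if vehicle_speed_mps < 0.1:                  # avoid division by zero at standstill
        return 0.0
    return abs(vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps


def grip_alarm(wheel_speed_mps: float, vehicle_speed_mps: float,
               threshold: float = 0.15) -> bool:
    """Reactive detection: fires only once measurable slip already exists,
    i.e. after traction has already started to go."""
    return slip_ratio(wheel_speed_mps, vehicle_speed_mps) > threshold


# Example: vehicle reference speed 20 m/s, a driven wheel spinning at 24 m/s on a wet patch.
print(grip_alarm(24.0, 20.0))   # True, but the tire is already slipping
```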
This is where the industry’s sensor-centric approach reaches its limit. No camera, LiDAR, or radar array can
measure the coefficient of friction between rubber and ice. The solution requires a different paradigm
entirely.
Bridging the Gap: The Role of Virtual Sensing and Active Safety
Solving the weather problem demands technologies that complement visual perception with haptic
intelligence—systems that give autonomous vehicles the sense of “touch” they currently lack.
Predicting the Invisible
DAI (Digital Advanced
Information) represents a fundamental shift from observing the environment to feeling it.
This virtual sensor platform analyzes vehicle dynamics—microscopic variations in wheel rotation, suspension
feedback, and chassis behavior—to detect aquaplaning, snow, and ice in real-time, without requiring
additional hardware.
Unlike traditional traction control systems that react to wheel slip, DAI provides predictive
detection, identifying partial aquaplaning or early grip reduction before tire slip occurs. For
a robotaxi navigating an unfamiliar city during a rainstorm, this translates to actionable intelligence:
“Zone ahead has standing water; reduce speed by 15 km/h” rather than waiting for the wheels to hydroplane and then triggering emergency protocols.
The system’s independence from internet connectivity, cloud services, or AI inference means it operates with
millisecond-level latency—critical when aquaplaning can develop in less than a second. By detecting
irregular terrain, tire wear, and even wheel misalignment through the same dynamic analysis, DAI functions
as a comprehensive “nervous system” for the vehicle, continuously monitoring the interface between machine
and road.
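Easyrain has not published DAI’s internals, but the general idea of a virtual sensor can be sketched in a few lines: monitor the high-frequency wheel-rotation signal for anomalies that precede measurable slip, and issue an advisory before traction is lost. The class, window size, and threshold below are hypothetical, not Easyrain’s parameters or API.

```python
from collections import deque
from statistics import pstdev


class VirtualGripSensor:
    """Illustrative sketch of a 'virtual sensor': flags early grip reduction from
    irregularities in wheel-rotation data alone, with no extra hardware.
    Window size and threshold are hypothetical, not Easyrain's parameters."""

    def __init__(self, window: int = 50, noise_threshold: float = 0.35):
        self.residuals = deque(maxlen=window)    # recent wheel-speed residuals (m/s)
        self.noise_threshold = noise_threshold

    def update(self, wheel_speed_mps: float, vehicle_speed_mps: float) -> str:
        # Residual between measured wheel speed and the vehicle reference speed.
        self.residuals.append(wheel_speed_mps - vehicle_speed_mps)
        if len(self.residuals) < self.residuals.maxlen:
            return "OK"
        # Rising scatter in the residual hints at a degrading tire-road contact
        # before a sustained slip develops, so the advisory precedes actual slip.
        if pstdev(self.residuals) > self.noise_threshold:
            return "REDUCE_SPEED"
        return "OK"
```

Because a check like this consumes only signals the vehicle already produces, it can run onboard with negligible latency, which is the property the section above attributes to DAI.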
Restoring Control
Detection alone is insufficient if the vehicle cannot respond effectively. This is where active intervention
changes the equation.
AIS (Aquaplaning Intelligent
Solution) is the first active system capable of restoring grip before control is
lost. By intelligently spraying pressurized fluid ahead of the tires, AIS eliminates the water
layer that causes aquaplaning—transforming a dangerous loss-of-control event into a managed scenario where
ABS and ESC can function normally.
For robotaxi operations, this represents critical safety redundancy. Even if the vehicle’s
planning system miscalculates and enters a deep puddle at excessive speed, AIS provides a physical
countermeasure. Testing demonstrates a 20% reduction in braking distance on heavily wet surfaces and a 225% increase in lateral traction in aquaplaning conditions—performance margins that can mean the difference between a safe stop and a collision.
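Easyrain’s published figures are the relative deltas; translating the 20% braking-distance reduction into metres requires assumptions, so the speed and friction coefficient in the sketch below are illustrative textbook values, not test data.

```python
# Illustrative effect of a 20% shorter braking distance on a wet road.
# Speed and friction coefficient are assumed textbook values, not test data.
v = 100 / 3.6              # 100 km/h expressed in m/s
mu_wet, g = 0.4, 9.81      # assumed wet-asphalt friction coefficient, gravity

baseline = v**2 / (2 * mu_wet * g)   # idealized stopping distance, ~98 m
with_ais = 0.8 * baseline            # 20% reduction -> ~79 m
print(f"{baseline:.0f} m vs {with_ais:.0f} m ({baseline - with_ais:.0f} m shorter)")
```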
The system’s modular architecture, with configurations starting at just 2.7 kg, makes it viable for
integration into purpose-built robotaxis like Zoox’s carriage-style vehicles or retrofitted platforms like
Waymo’s Jaguar I-Pace fleet. Critically, AIS enables the use of low
rolling-resistance tires that optimize
energy efficiency—a key consideration for electric autonomous fleets where range directly impacts
operational economics.
Fleet Intelligence
Individual vehicle capability must scale to fleet-level intelligence. ERC (Easyrain Road Cloud) aggregates real-time road
surface data from equipped vehicles, creating dynamic maps of grip conditions across entire operational
areas.
For a robotaxi fleet manager, this transforms weather from an unpredictable disruption into a manageable
variable. When ERC identifies a section of downtown experiencing aquaplaning conditions, the dispatch system
can reroute vehicles proactively, redistributing demand to safer zones while maintaining service
availability. The platform’s integration of tire health data—wear levels, pressure deviations, alignment
issues—enables predictive maintenance that prevents weather-related failures before they occur.
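Easyrain has not documented ERC’s interface publicly, but how a dispatch layer might consume such a grip map is easy to sketch; the data model, field names, and threshold below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class SegmentCondition:
    """Hypothetical record for one road segment in a fleet-wide grip map."""
    segment_id: str
    grip_estimate: float        # 0.0 (no traction) to 1.0 (dry asphalt)
    aquaplaning_risk: bool


def reroute_needed(route: list[SegmentCondition], min_grip: float = 0.45) -> bool:
    """Dispatch-side check: avoid routes that contain low-grip or aquaplaning segments."""
    return any(seg.aquaplaning_risk or seg.grip_estimate < min_grip for seg in route)


# Example: one flooded downtown segment is enough to trigger proactive rerouting.
route = [SegmentCondition("dt-041", 0.62, False),
         SegmentCondition("dt-042", 0.30, True)]
print(reroute_needed(route))    # True
```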
This shared intelligence model addresses what market analysts identify as a critical scaling
requirement: the ability to operate reliably in variable conditions without requiring perfect information
about every road segment. As the fleet grows, so does the precision of the road intelligence—a network
effect that makes each additional equipped vehicle more capable than the last.
Future Outlook
The path to Level 5 autonomy—vehicles capable of operating anywhere, anytime, in any condition—requires
acknowledging that perception alone is insufficient. The industry’s sensor revolution has delivered
extraordinary capabilities in good weather. But true autonomy demands systems that can predict and respond
to the physical dynamics that visual sensors cannot capture.
The integration of virtual sensing platforms like DAI with active safety systems like AIS represents a
convergence of complementary technologies: one that detects invisible threats, the other that physically
neutralizes them. When coupled with cloud intelligence systems like ERC, this creates a resilient
architecture capable of scaling beyond fair-weather operational design domains.
Regulatory frameworks are evolving to reflect this reality. NHTSA’s scrutiny of vision-only systems and the EU’s
strict ODD requirements signal that authorities recognize the limitations of current approaches. Future
certifications will likely mandate demonstrable capabilities in adverse conditions—not as edge cases, but as
core competencies.
For the robotaxi industry, the weather problem is no longer a distant concern. It is the immediate barrier
separating today’s promising pilots from tomorrow’s ubiquitous mobility networks. Solving it requires moving
beyond the assumption that adding more cameras and LiDAR will eventually be sufficient. It demands
technologies that give autonomous systems what they fundamentally lack: the ability to feel the road beneath
them, respond when vision fails, and learn from every rainy mile driven.
The future of autonomous mobility will not be built on perfect weather. It will be built on systems
resilient enough to operate when the rain won’t stop.