15.4 · Advanced

AI and GNSS: Machine Learning Meets Satellite Navigation

Introduction

Artificial intelligence and machine learning are transforming virtually every field of engineering, and GNSS is no exception. Researchers are applying neural networks, deep learning, and classical ML algorithms to problems that have resisted conventional solutions for decades - particularly multipath detection, Non-Line-of-Sight (NLOS) signal identification, and cycle slip detection in urban environments. While AI-based GNSS enhancements are largely confined to research contexts today, the trajectory toward practical deployment is clear.

Key Concept: AI improves GNSS robustness in urban environments where conventional methods struggle most. AI-based techniques are not yet in mainstream commercial receivers but represent the most active area of GNSS signal processing research. The field is evolving rapidly.

Machine Learning for Multipath Detection and Mitigation

Multipath - the reception of GNSS signals that have reflected off buildings or terrain before reaching the receiver - causes positioning errors ranging from a few metres to tens of metres in urban environments. Traditional detection methods rely on signal strength thresholds, elevation angles, or geometric consistency checks. These methods are effective for obvious multipath but miss subtle cases where multipath and direct signals arrive nearly simultaneously.

Machine learning approaches exploit the fact that multipath distorts the shape of the GNSS signal correlation function - the pattern of signal power versus code offset that the receiver uses to measure pseudorange. A clean line-of-sight (LOS) signal produces a symmetric triangular correlation peak; a multipath-contaminated signal produces an asymmetric, distorted peak. Support Vector Machines (SVMs) and Neural Networks trained on examples of LOS and NLOS correlation shapes have achieved NLOS detection rates above 97%. Convolutional Neural Networks applied to 2D delay-Doppler correlation maps have reached 98% classification accuracy, reducing positioning errors from 34 metres to 1.6 metres in severely urban environments - a dramatic improvement over conventional processing.
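The correlation-shape distortion described above can be illustrated with a toy model. The sketch below builds an ideal triangular code-correlation peak, adds one delayed, attenuated replica to represent a reflection, and computes a simple early-minus-late asymmetry feature of the kind an SVM or neural network would consume; the delay, amplitude, and correlator spacing values are illustrative assumptions, not measurements.

```python
import numpy as np

def correlation_peak(offsets, delay=0.0, amp=1.0):
    # Ideal triangular code correlation: amp * max(0, 1 - |offset - delay|)
    return amp * np.maximum(0.0, 1.0 - np.abs(offsets - delay))

# Code-phase offsets (in chips) around the prompt correlator
offsets = np.linspace(-1.5, 1.5, 301)

# Clean LOS peak vs. LOS plus one reflected replica (delayed, attenuated)
los = correlation_peak(offsets)
multipath = los + correlation_peak(offsets, delay=0.4, amp=0.5)

def asymmetry(corr, offsets, spacing=0.5):
    # Early-minus-late power difference: the classic peak-distortion feature
    early = np.interp(-spacing, offsets, corr)
    late = np.interp(+spacing, offsets, corr)
    return (late - early) / corr.max()

print(asymmetry(los, offsets))        # ~0: symmetric peak
print(asymmetry(multipath, offsets))  # positive: late side inflated by the reflection
```

A real classifier would use many such features (or the whole correlator output image, as the CNN work does) rather than a single hand-crafted statistic, but the underlying signal is the same: reflections break the symmetry of the peak.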

Neural Networks for NLOS Signal Classification

NLOS signals - those that reach the receiver only via reflection, with no direct path - cause the most severe pseudorange errors because the reflected path is always longer than the direct path, systematically biasing the range measurement. Detecting and excluding NLOS signals is more valuable than correcting them, since the bias magnitude depends on the unknown reflection geometry.
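The systematic nature of the NLOS bias follows directly from mirror-image geometry. As a hedged sketch under a deliberately simple assumption - a single vertical wall behind the receiver, with the satellite on the wall's far side - the extra path length is 2d·cos(el), where d is the receiver-to-wall distance and el the satellite elevation:

```python
import math

def nlos_bias(wall_distance_m, elevation_deg):
    # Mirror-image geometry: a receiver wall_distance_m in front of a vertical
    # wall, satellite behind the receiver at the given elevation. The reflected
    # path exceeds the direct path by 2 * d * cos(el), always a positive bias.
    return 2.0 * wall_distance_m * math.cos(math.radians(elevation_deg))

for d in (10, 30):
    for el in (15, 45):
        print(f"wall at {d} m, elevation {el} deg: bias = {nlos_bias(d, el):.1f} m")
```

Even this idealised case shows why exclusion beats correction: a 30 m wall distance at 15 degrees elevation yields a bias near 58 m, and the receiver generally does not know d or the reflection geometry.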

Several architectures have been applied to LOS/NLOS classification:

  • Feedforward neural networks: Trained on features extracted from RINEX data - elevation angle, SNR, pseudorange residuals, carrier-to-code divergence - to classify each satellite measurement as LOS or NLOS
  • Convolutional neural networks (CNN): Applied to receiver correlator output images or time-series representations of signal quality metrics, achieving state-of-the-art classification accuracy
  • CarNet: A generative CNN architecture that transforms multivariate time-series of GNSS measurements into 2D images for LOS/NLOS classification, capturing temporal patterns invisible to instantaneous classifiers
  • XGBoost: Gradient-boosted decision trees applied to RINEX-derived features and pseudorange residuals, providing computationally efficient classification suitable for real-time use

Deep Learning for Position Error Prediction

Rather than attempting to detect and remove individual bad measurements, some AI approaches directly predict the expected positioning error from contextual features - satellite count, DOP, SNR distribution, urban density indicators from map data, and historical error patterns at similar locations. Recurrent Neural Networks (RNNs) and Transformer architectures, which capture temporal dependencies in the error time series, have shown promising results for predicting and compensating systematic positioning errors in urban environments.
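A minimal sketch of the idea, with a linear autoregressive predictor standing in for the RNN/Transformer sequence models (and a synthetic error series standing in for real urban data - both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic urban error time series: slowly varying systematic bias + noise
t = np.arange(400)
systematic = 3.0 * np.sin(2 * np.pi * t / 120)       # drifting multipath-like bias
errors = systematic + rng.normal(0, 0.5, t.size)

# Fit an order-p autoregressive predictor by least squares: predict the next
# error from the previous p errors, then subtract the prediction
p = 5
A = np.column_stack([errors[i:len(errors) - p + i] for i in range(p)])
target = errors[p:]
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
pred = A @ coef

raw_rms = np.sqrt(np.mean(target ** 2))
residual_rms = np.sqrt(np.mean((target - pred) ** 2))
print(f"RMS error: {raw_rms:.2f} m raw, {residual_rms:.2f} m after compensation")
```

The compensation works because the systematic component is temporally correlated and therefore predictable; the white-noise floor remains. Deep sequence models extend this by conditioning on contextual features (DOP, SNR distribution, map-derived urban density) rather than on the error history alone.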

AI-Assisted Cycle Slip Detection

Cycle slips - abrupt jumps in carrier phase caused by temporary loss of satellite lock - are a persistent challenge in GNSS processing. Traditional detection algorithms (TurboEdit, geometry-free combination analysis) work well for large slips but struggle with small slips of one or two cycles that are masked by noise. Deep learning approaches, particularly LSTM (Long Short-Term Memory) networks applied to carrier phase time series, can detect subtle slip signatures that conventional algorithms miss. This is especially valuable in kinematic applications where high platform dynamics can cause frequent small slips.
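As a baseline for what the LSTM detectors are trying to beat, the sketch below applies third-order differencing to a simulated carrier phase series - differencing removes smooth receiver dynamics and leaves a slip as a sharp spike. The phase model, noise level, and threshold rule are illustrative assumptions, not a TurboEdit implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated carrier phase (cycles): smooth geometry term + noise,
# with a 1-cycle slip injected at epoch 300
t = np.arange(600, dtype=float)
phase = 0.05 * t + 1e-4 * t**2 + rng.normal(0, 0.01, t.size)
phase[300:] += 1.0  # a small slip that coarse threshold tests can miss

# Third-order differencing cancels the smooth dynamics; a slip leaves a
# short [1, -2, 1]-shaped spike pattern in the differenced series
d3 = np.diff(phase, n=3)
sigma = np.median(np.abs(d3)) / 0.6745        # robust noise estimate
candidates = np.flatnonzero(np.abs(d3) > 6 * sigma)
print("slip detected near epochs:", candidates + 2)
```

This works cleanly for a 1-cycle slip at low dynamics; the hard cases motivating the deep learning work are slips comparable in size to the differenced noise, where a fixed threshold either misses the slip or false-alarms constantly.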

Reinforcement Learning for Receiver Tracking Loop Optimisation

The tracking loops within a GNSS receiver - Phase Lock Loops (PLLs) and Delay Lock Loops (DLLs) - are traditionally designed with fixed parameters based on expected signal dynamics. Reinforcement learning offers the possibility of adaptive tracking loops that learn optimal bandwidth and filter settings for different signal environments. In theory, a tracking loop that adapts to the current environment could maintain lock in conditions where a fixed-bandwidth loop would lose it. This application is at the frontier of research and has not yet reached commercial deployment.
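The core idea can be sketched as a bandit problem - the simplest form of reinforcement learning. In this toy, the agent repeatedly chooses a PLL bandwidth and is rewarded when the loop keeps lock; the two bandwidth options and their lock probabilities are entirely hypothetical values chosen for illustration:

```python
import random

random.seed(0)

# Hypothetical environment: narrow bandwidth keeps lock more often here
# (e.g. a static receiver in noise); in high dynamics the ranking would flip.
LOCK_PROB = {"narrow_10Hz": 0.9, "wide_25Hz": 0.6}

counts = {a: 0 for a in LOCK_PROB}
values = {a: 0.0 for a in LOCK_PROB}   # running estimate of lock rate per action

for step in range(2000):
    if random.random() < 0.1:                      # explore occasionally
        action = random.choice(list(LOCK_PROB))
    else:                                          # otherwise exploit best estimate
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < LOCK_PROB[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

best = max(values, key=values.get)
print("learned preference:", best,
      {a: round(v, 2) for a, v in values.items()})
```

A practical system would condition the choice on the observed signal environment (making it a contextual or full RL problem over C/N0, dynamics, and discriminator outputs) and would adjust continuous loop parameters rather than picking between two fixed settings.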

AI Application                 | Method                       | Reported Performance                        | Deployment Status
-------------------------------|------------------------------|---------------------------------------------|-----------------------------
NLOS classification            | CNN on correlator output     | 98% accuracy; 34 m -> 1.6 m position error  | Research
LOS/NLOS from RINEX features   | SVM + pseudorange residuals  | >80% improvement in 10 m error              | Research
Position error prediction      | RNN / Transformer            | 40%+ error reduction in urban environments  | Research
Cycle slip detection           | LSTM on time series          | Sub-cycle slip detection                    | Research / early prototypes
Multipath mitigation (spatial) | Neural network regression    | Improved over conventional processing       | Research
Tracking loop optimisation     | Reinforcement learning       | Theoretical promise                         | Research only

Limitations of AI in Safety-Critical GNSS

Despite impressive research results, deploying AI in safety-critical GNSS applications faces fundamental challenges. Neural networks are black boxes - their internal decision-making is not interpretable in the way required by aviation or railway safety standards. A classifier that achieves 98% NLOS detection accuracy still fails 2% of the time, and the nature of those failures is unpredictable. Safety certification frameworks such as DO-178C (aviation software) and EN 50128 (railway) require deterministic, verifiable behaviour that current deep learning architectures cannot provide.

Data-driven methods also require representative training data - a model trained on GNSS data from European city centres may not generalise to Middle Eastern desert cities or Asian high-rise districts. Distributional shift (the training environment differing from the deployment environment) is a known weakness of all ML systems and is particularly concerning in safety-critical navigation.

Note: The gap between research accuracy numbers and practical deployment in safety-critical systems is substantial. Peer-reviewed papers report performance on specific test datasets; real-world performance across diverse environments, edge cases, and adversarial conditions will be lower. Treat AI-GNSS performance claims in research papers as upper bounds achievable under ideal conditions.

Despite these limitations, AI will undoubtedly play an increasing role in GNSS signal processing. Deployment will come first in non-safety-critical applications such as mapping, agriculture, and logistics, where performance improvements are valuable even without certification, and will extend to safety applications as interpretable AI and formal verification methods mature.