Reliable traversability estimation is crucial for autonomous robots to navigate complex outdoor environments safely.
Existing self-supervised learning frameworks rely primarily on positive and unlabeled data; however, the lack of explicit
negative data remains a critical limitation, hindering a model's ability to accurately identify diverse non-traversable regions.
To address this issue, we introduce a method that explicitly constructs synthetic negatives and integrates them into
vision-based traversability learning through a training strategy that plugs into both Positive–Unlabeled (PU) and
Positive–Negative (PN) frameworks without modifying the inference architecture. We further propose an object-centric false positive rate (FPR) evaluation
that measures errors specifically on the inserted negative regions, providing an annotation-free indicator of how well non-traversable regions are recognized.
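To make the training strategy concrete, the following is a minimal sketch of how synthetic negatives might be constructed: an object patch is pasted into a training image and the covered pixels are marked as explicit negatives. The label convention (1 = positive, 0 = unlabeled, -1 = negative), the uniform placement policy, and all function and parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def paste_synthetic_negative(image, label, patch, mask, rng=None):
    """Paste an object patch into a scene and mark it as negative.

    image : (H, W, 3) uint8 training image, modified in place
    label : (H, W) int8 traversability labels, modified in place
            (assumed convention: 1 = positive, 0 = unlabeled, -1 = negative)
    patch : (h, w, 3) uint8 object crop to insert
    mask  : (h, w) bool alpha mask of the object within the patch
    """
    rng = rng if rng is not None else np.random.default_rng()
    H, W = label.shape
    h, w = mask.shape
    # Uniform random placement; a real pipeline would likely bias toward
    # plausible ground locations (e.g., the lower half of the image).
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    # Composite the object into the image.
    image[y:y + h, x:x + w][mask] = patch[mask]
    # Explicit negative supervision on the inserted pixels.
    label[y:y + h, x:x + w][mask] = -1
    return image, label
```

Because the inserted regions carry explicit negative labels, the same sample can supervise a PN loss directly or refine the unlabeled pool in a PU setup.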
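Similarly, the object-centric FPR can be read as: of the inserted negative objects, what fraction does the model wrongly predict as traversable? Below is a sketch under assumed conventions; the boolean prediction map, per-object masks, and the majority-pixel decision rule (tol=0.5) are assumptions chosen for illustration.

```python
import numpy as np

def object_centric_fpr(pred_traversable, object_masks, tol=0.5):
    """Fraction of inserted negative objects wrongly predicted traversable.

    pred_traversable : (H, W) bool map, True where the model predicts traversable
    object_masks     : list of (H, W) bool masks, one per inserted object
    tol              : an object counts as a false positive when more than this
                       fraction of its pixels is predicted traversable
                       (the 0.5 decision rule is an assumption, not from the paper)
    """
    false_positives = 0
    for m in object_masks:
        area = int(m.sum())
        if area == 0:
            continue  # skip degenerate masks
        if pred_traversable[m].sum() / area > tol:
            false_positives += 1
    return false_positives / max(len(object_masks), 1)
```

Since the inserted masks are known by construction, the metric requires no manual annotation, which is what makes the evaluation annotation-free.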