Purely camera-based perception suffers from occlusion, a problem well known in autonomous driving. By transferring concepts from autonomous driving, such as V2X (Vehicle-to-Everything) communication, to intra-logistics, we enable the fusion of data from different entities and sensor modalities. This yields the more robust perception required for industrial use cases involving multiple mobile robots and infrastructure. The approach also has the potential to reduce the cost of individual mobile robots by shifting sensing to central infrastructure, which improves scalability while enhancing safety.