MWC Area

DEMO 1: 6G-EWOC Project

The demo presents an example of multimodal fusion, where sensor data is combined after processing to compensate for the shortcomings of the individual sensors.

  • Radar: A MIMO FMCW radar that detects objects and measures their distances (represented by small squares in the image) using embedded analysis technology. It is calibrated to detect objects that match the typical size and velocity of people.

The detector recognizes objects with the size and movement patterns of people, but it may misidentify the type of object, detect multiple objects for a single person, or fail to detect a person altogether.

  • Camera: Identifies people (bounding box in the image) using a deep learning analysis model (YOLO-v8). 

The camera analysis is performed in projective space, extracting and recognizing features of objects classified as “person”. However, it cannot determine the actual distance or size of the detected person.

  • Fusion result: The fusion system associates a distance with detected persons and filters out objects that do not belong to the “person” class. 
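
As a rough illustration of how such decision-level fusion might work, the Python sketch below matches camera “person” boxes to radar detections projected into the image and attaches a range to each match. The data structures and matching rule are made up for illustration; they are not taken from the demo’s actual code.

from dataclasses import dataclass

@dataclass
class RadarDetection:          # hypothetical structure, not from the demo
    u: float                   # horizontal image coordinate after projection (px)
    distance_m: float          # range measured by the FMCW radar

@dataclass
class CameraDetection:         # hypothetical structure, not from the demo
    label: str                 # class name from the vision model, e.g. "person"
    u_min: float               # bounding-box left edge (px)
    u_max: float               # bounding-box right edge (px)

def fuse(radar: list[RadarDetection],
         camera: list[CameraDetection]) -> list[tuple[CameraDetection, float]]:
    """Decision-level fusion: keep only 'person' boxes and attach the
    range of the nearest radar detection falling inside the box."""
    fused = []
    for det in camera:
        if det.label != "person":          # filter out non-person classes
            continue
        inside = [r for r in radar if det.u_min <= r.u <= det.u_max]
        if inside:                          # associate the closest radar return
            fused.append((det, min(r.distance_m for r in inside)))
    return fused

# Toy usage: one person with a radar return at 4.2 m; the "car" box is dropped.
print(fuse([RadarDetection(u=120, distance_m=4.2)],
           [CameraDetection("person", 100, 160),
            CameraDetection("car", 300, 420)]))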

While this is a simple example of multimodal fusion, it has two major drawbacks:

  1. “Late” or “decision-level” fusion

In this approach, different analysis models independently process data from each sensor and make separate decisions. The fusion system then combines these decisions; because the models operate independently, their errors tend to be uncorrelated. However, this method has no access to the original sensor data.

In contrast, an “early” or “data-level” fusion approach would combine raw sensor data, allowing for more accurate and informed decisions by mitigating errors introduced by individual monomodal analyzers.
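
For contrast, here is a minimal sketch of feature-level fusion, where per-sensor features are concatenated into one joint representation before a single decision is made. The feature vectors and the classifier weights are purely illustrative (random numbers), not the project’s models.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-sensor feature vectors (e.g. radar range/Doppler features
# and camera appearance features), produced before any decision is taken.
radar_features = rng.normal(size=(1, 8))
camera_features = rng.normal(size=(1, 32))

# Early/intermediate fusion: concatenate features into one joint representation
# so a single model can exploit cross-sensor correlations.
joint = np.concatenate([radar_features, camera_features], axis=1)

# Stand-in for a jointly trained classifier (random weights for illustration).
weights = rng.normal(size=(joint.shape[1], 1))
score = 1.0 / (1.0 + np.exp(-(joint @ weights)))   # sigmoid "person" score
print(f"joint 'person' score: {score.item():.3f}")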

  2. Limited sensor placement:

The sensors are positioned in close proximity, providing only a single viewing angle. As a result, both the radar and the camera share a similar field of view and are subject to the same occlusions.

Possible solutions:

  • Implement fusion at the data level (early) or feature level (intermediate).
  • Increase the number of sensor viewpoints and ensure effective communication between them.
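
The second idea can be illustrated with a small sketch: detections from sensors at different positions and orientations are transformed into a shared world frame, where detections of the same person can be merged. The poses, the 0.5 m gating threshold, and the averaging rule are assumptions for illustration only.

import numpy as np

def to_world(detection_xy: np.ndarray, yaw_rad: float,
             sensor_pos: np.ndarray) -> np.ndarray:
    """Transform a detection from a sensor's local frame into a shared
    world frame, so detections from several viewpoints can be merged."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rotation = np.array([[c, -s], [s, c]])
    return rotation @ detection_xy + sensor_pos

# Two sensors at different positions/orientations observe the same person.
p1 = to_world(np.array([3.0, 0.0]), yaw_rad=0.0,
              sensor_pos=np.array([0.0, 0.0]))
p2 = to_world(np.array([3.0, 0.0]), yaw_rad=np.pi / 2,
              sensor_pos=np.array([3.0, -3.0]))

# If the world-frame positions agree within a gate, merge them into one track.
if np.linalg.norm(p1 - p2) < 0.5:
    print("same person seen from two viewpoints:", (p1 + p2) / 2)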


UPC groups and research centres involved in the 6G-EWOC Project: CCABA, IDEAI-UPC, CD6 & CommSensLab-UPC.
More information here.

DEMO 2: 6G-OpenLab & ELEGANT Projects

The demo showcases a machine learning-based traffic signal processing system designed for autonomous vehicles, controlled by edge computing nodes.

The demonstration features a small-scale testbed focused on image processing in edge computing, specifically for traffic signal detection in controlled urban environments within the 6G-OpenLab and ELEGANT infrastructure.

The system leverages advanced technologies, including secure, real-time video streaming with low latency and the execution of deep learning models on high-capacity edge computing nodes.

The logical diagram illustrates the real-time transmission of RGB video, low-latency wireless streaming, and the deployment of deep learning algorithms for traffic signal recognition. The system efficiently detects and processes traffic signals, utilizing edge computing nodes for real-time, high-performance decision-making.
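
As a sketch of the kind of edge-side loop such a system might run, the snippet below reads a video stream and runs a detector on each frame, acting on traffic-signal classes. It uses OpenCV and the Ultralytics YOLO API as stand-ins; the stream URL and model weights are placeholders, and the demo’s actual pipeline is not shown here.

import cv2
from ultralytics import YOLO

STREAM_URL = "rtsp://edge-node.example/vehicle-cam"   # placeholder URL
model = YOLO("yolov8n.pt")                            # stand-in detector weights

capture = cv2.VideoCapture(STREAM_URL)                # low-latency video feed
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # Run the detector on the edge node and act on traffic-signal detections.
    for result in model(frame, verbose=False):
        for box in result.boxes:
            label = result.names[int(box.cls)]
            if label in {"traffic light", "stop sign"}:   # COCO class names
                print(f"detected {label} (conf {float(box.conf):.2f})")
capture.release()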


UPC research centre involved in the 6G-OpenLab & ELEGANT Projects: CCABA.
More information here.

New Partnership Programme

CONNÈXIA UPC

The Connèxia Programme is the new partnership programme addressed to a limited number of businesses seeking to build a strategic alliance with the UPC.

Connèxia is addressed to those businesses interested in having a close and holistic relationship with the UPC to secure their competitive edge and drive growth.

UPC Technological Services


The technological services include facilities, infrastructures and large-scale equipment dedicated to offering specialised technical services.

More information on technological services below.
