NEC Laboratories Europe

Technology Blog


5G is specifically designed to accelerate the digital transformation of industries

NEC Laboratories Europe is helping lead the way for NEC with its core and applied research. In a recent article in the Spanish newspaper El Mundo, Dr. Xavier Costa, Head of 5G & 6G Networks at NEC Laboratories Europe, discussed some of our latest research, including our open RAN solutions, our edge approach to cross-border autonomous driving, our automated search and rescue drone technology (SARDO) and the future of smart surfaces.


A Study on Ensemble Learning for Time Series Forecasting and the Need for Meta-Learning

Time series forecasting estimates how a sequence of observations continues into the future. In this blog post, we discuss the performance of ensemble methods for time series forecasting. Our insights come from an experiment comparing 12 ensemble methods for time series forecasting, their hyperparameters and the different strategies used to select forecasting models. We also describe the meta-learning approach we developed, which automatically selects a subset of these ensemble methods (plus their hyperparameter configurations) to run for any given time series dataset.
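To make the basic idea concrete, here is a minimal sketch of a weighted-average forecasting ensemble in Python. It is purely illustrative: the base models and the inverse-error weighting scheme are simple stand-ins of our own choosing, not the 12 ensemble methods compared in the post.

import numpy as np

def naive_forecast(train, horizon):
    # Repeat the last observed value.
    return np.full(horizon, train[-1])

def mean_forecast(train, horizon):
    # Repeat the historical mean.
    return np.full(horizon, train.mean())

def drift_forecast(train, horizon):
    # Extrapolate the average first difference (linear drift).
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return train[-1] + slope * np.arange(1, horizon + 1)

def ensemble_forecast(series, horizon, models):
    # Hold out the last `horizon` points to estimate each model's error.
    train, valid = series[:-horizon], series[-horizon:]
    errors = np.array([np.mean((m(train, horizon) - valid) ** 2) for m in models])
    # Weight each model by its inverse validation MSE, then refit on all data.
    weights = 1 / (errors + 1e-12)
    weights /= weights.sum()
    forecasts = np.stack([m(series, horizon) for m in models])
    return weights @ forecasts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0.5, 1.0, size=200))  # toy trending series
    print(ensemble_forecast(series, horizon=10,
                            models=[naive_forecast, mean_forecast, drift_forecast]))

A meta-learner, in this picture, would sit one level above: given features of a new time series, it predicts which ensemble methods (and hyperparameter configurations) are worth running at all.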


Understanding Gradient Rollback

For many, including scientific researchers, artificial intelligence (AI) is a mystery – its reasoning opaque. AI systems and models are often referred to as “black boxes” because we do not understand the logic of what they do. Neural networks are powerful AI tools trained to recognize meaningful data relationships and predict new knowledge; nonetheless, it is not commonly understood how they function or arrive at predictions. When AI systems affect our lives, we need to ensure their predictions and decisions are reasonable. NEC Laboratories Europe has recently achieved a milestone in explainable AI (XAI) research by developing Gradient Rollback, a method that opens neural “black box” models and explains their predictions. Gradient Rollback reveals the training data that had the greatest influence on a prediction. Users can judge how plausible a prediction is by viewing its explanation: the training instances with the highest influence. The more plausible a prediction is, the more likely it is to be trusted – a key factor in the adoption of AI.
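As a rough illustration of the underlying idea, the sketch below applies Gradient-Rollback-style bookkeeping to a toy logistic-regression model: per-example parameter updates are recorded during training and later “rolled back” to estimate each training example’s influence on a prediction. The model, names and scale are ours for illustration only; the actual method targets neural models such as those used for knowledge-graph link prediction.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(5)
lr = 0.1
# influence[i] accumulates the parameter updates contributed by example i.
influence = np.zeros_like(X)

for epoch in range(20):
    for i in range(len(X)):
        grad = (sigmoid(X[i] @ w) - y[i]) * X[i]  # per-example gradient
        update = -lr * grad
        w += update
        influence[i] += update                    # record this example's updates

def explain(x_test, top_k=5):
    # "Roll back" each training example's accumulated updates and measure how
    # much the predicted score changes; large changes mean high influence.
    base = sigmoid(x_test @ w)
    deltas = [base - sigmoid(x_test @ (w - influence[i])) for i in range(len(X))]
    return np.argsort(np.abs(deltas))[::-1][:top_k]

print("Most influential training examples:", explain(rng.normal(size=5)))

The key design point this captures is that influence is tracked during training rather than recomputed afterwards, so producing an explanation only requires replaying stored updates, not retraining the model.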


Inferring Dependency Structures for Relational Learning

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This makes it possible to apply GCNs not only in scenarios where the given graph is incomplete or corrupted, but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
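The sketch below illustrates the core mechanism in PyTorch: a Bernoulli probability is learned for each candidate edge and trained jointly with the GCN weights, here via a straight-through gradient estimator. This is a simplified, single-level stand-in for the bilevel program in the paper, with names, dimensions and hyperparameters of our own choosing.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, h, c = 20, 8, 16, 3          # nodes, features, hidden units, classes
X = torch.randn(n, d)              # toy node features
y = torch.randint(0, c, (n,))      # toy node labels

edge_logits = torch.zeros(n, n, requires_grad=True)  # one logit per candidate edge
W1 = (0.1 * torch.randn(d, h)).requires_grad_()
W2 = (0.1 * torch.randn(h, c)).requires_grad_()
opt = torch.optim.Adam([edge_logits, W1, W2], lr=0.01)

def sample_adjacency(logits):
    # Sample a discrete graph; the straight-through trick keeps it differentiable:
    # the forward pass uses the hard samples, the backward pass uses the probabilities.
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs.detach())
    adj = hard + probs - probs.detach()
    adj = torch.maximum(adj, adj.t()) + torch.eye(n)  # symmetrize, add self-loops
    deg = adj.sum(dim=1, keepdim=True)
    return adj / deg.clamp(min=1e-6)                  # row-normalize

for step in range(200):
    A = sample_adjacency(edge_logits)
    H = torch.relu(A @ X @ W1)     # two-layer GCN on the sampled graph
    out = A @ H @ W2
    loss = F.cross_entropy(out, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean learned edge probability:", torch.sigmoid(edge_logits).mean().item())

In the paper, the edge distribution and the GCN weights are instead optimized at separate levels of a bilevel program, with the outer level updating the edge probabilities based on validation performance of the inner GCN training.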
