Marco Brambilla, Full Professor of Computer Science and Engineering at Politecnico di Milano, Italy

I’m Marco Brambilla, a Full Professor of Computer Science and Engineering at Politecnico di Milano, Italy.
I lead the Data Science Lab at Politecnico di Milano’s Department of Electronics, Information and Bioengineering (DEIB).
I’m the director of the Computer Science and Engineering B.Sc. and M.Sc. curricula at Politecnico.
My current research interests are AI Explainability and Transparency, Web Science, Big Data Analysis, Social Media Analytics, and Model-driven Development.
I’m the co-inventor of the Interaction Flow Modeling Language (IFML) standard adopted by the OMG, and of two patents on crowdsourcing and multi-domain search.
I have been involved in the creation of four startups: WebRatio, Servitly, Fluxedo, and Quantia.
You can find my publications on Google Scholar or Scopus.
My ORCID ID is 0000-0002-8753-2434.

I teach:

  • Enterprise ICT Architectures
  • Systems and Methods for Big and Unstructured Data
  • Web Science (course materials available online)
  • Digital Innovation Lab
  • Model-driven Engineering (companion book available)

Recent Posts

A Graph-based RAG for Energy Efficiency Question Answering

In this work, we investigate the use of Large Language Models (LLMs) within a Graph-based Retrieval Augmented Generation (RAG) architecture for Energy Efficiency (EE) Question Answering. First, the system automatically extracts a Knowledge Graph (KG) from guidance and regulatory documents in the energy field. Then, the generated graph is navigated and reasoned upon to provide users … Continue reading A Graph-based RAG for Energy Efficiency Question Answering
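The core retrieval step of a graph-based RAG pipeline can be sketched in a few lines: match entities mentioned in the question against the triples of an extracted KG, then serialize the retrieved neighborhood into grounding context for the LLM prompt. This is a minimal, hypothetical illustration of the general pattern, not the actual system described in the post; the toy graph, function names, and matching logic are all assumptions.

```python
# Toy knowledge graph of (subject, predicate, object) triples, as if
# extracted from energy-efficiency guidance documents. Purely illustrative.
KG = [
    ("heat pump", "reduces", "energy consumption"),
    ("heat pump", "eligible_for", "tax incentive"),
    ("LED lighting", "reduces", "energy consumption"),
]

def retrieve_triples(question: str, kg=KG):
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0] in q or t[2] in q]

def build_context(triples):
    """Serialize retrieved triples into textual context for an LLM prompt."""
    return "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples)

triples = retrieve_triples("Is a heat pump eligible for a tax incentive?")
context = build_context(triples)
```

A real system would use entity linking and multi-hop graph traversal instead of substring matching, but the retrieve-then-ground structure is the same.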

Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Data

Large manufacturing companies in mission-critical sectors such as aerospace, healthcare, and defense typically design, develop, integrate, verify, and validate products characterized by high complexity and low volume. They carefully document all phases for each product, but analyses across products are challenging due to the heterogeneous and unstructured nature of the data in those documents. In our research, … Continue reading Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Data

Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification

Transparency and explainability in image classification are essential for establishing trust in machine learning models and for detecting biases and errors. State-of-the-art explainability methods generate saliency maps that show where a specific class is identified, without providing a detailed explanation of the model’s decision process. To address this need, we introduce a post-hoc method … Continue reading Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification
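To make the notion of a saliency map concrete: the simplest post-hoc approach is occlusion, which masks each input region in turn and records how much the class score drops. The sketch below is a toy illustration of that generic idea on a hypothetical 4×4 "image" and scoring function; it is not the human-in-the-loop method introduced in the post.

```python
def class_score(image):
    """Toy classifier score: intensity sum over the top-left 2x2 patch."""
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_saliency(image):
    """Zero out each pixel in turn; saliency = drop in the class score."""
    base = class_score(image)
    h, w = len(image), len(image[0])
    saliency = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            occluded = [row[:] for row in image]  # copy, then mask one pixel
            occluded[r][c] = 0
            saliency[r][c] = base - class_score(occluded)
    return saliency

img = [[1, 2, 0, 0],
       [3, 4, 0, 0],
       [0, 0, 5, 0],
       [0, 0, 0, 6]]
smap = occlusion_saliency(img)  # nonzero only in the top-left 2x2 patch
```

The resulting map highlights exactly the pixels the (toy) classifier relies on, which is the kind of localization evidence a saliency map provides without explaining the decision process itself.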

Unveiling Human-AI Interaction and Subjective Perceptions About Artificial Intelligent Agents

Our research focuses on human-AI interaction, employing a crowd-based methodology to collect and assess the reactions and perceptions of a human audience to a dialogue between a human and an artificially intelligent agent. The study is conducted through a live streaming platform where human streamers broadcast interviews with a custom-made GPT voice … Continue reading Unveiling Human-AI Interaction and Subjective Perceptions About Artificial Intelligent Agents

More Posts