Marco Brambilla, Full Professor of Computer Science and Engineering at Politecnico di Milano, Italy

I’m Marco Brambilla, Full Professor of Computer Science and Engineering at Politecnico di Milano, Italy.
I lead the Data Science Lab at Politecnico di Milano, DEIB.
I’m the director of the Computer Science and Engineering B.Sc. and M.Sc. curricula at Politecnico.
My current research interests are AI Explainability and Transparency, Web Science, Big Data Analysis, Social Media Analytics, and Model-driven Development.
I’m the co-inventor of the Interaction Flow Modeling Language (IFML) standard by the OMG, and of two patents on crowdsourcing and multi-domain search.
I have been involved in the creation of four startups: WebRatio, Servitly, Fluxedo, and Quantia.
You can find my publications on Google Scholar or Scopus.
My ORCID ID is 0000-0002-8753-2434

I teach:

  • Enterprise ICT Architectures
  • Systems and Methods for Big and Unstructured Data
  • Web Science
  • Digital Innovation Lab
  • Model-driven Engineering

Recent Posts

Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Data

Large manufacturing companies in mission-critical sectors like aerospace, healthcare, and defense typically design, develop, integrate, verify, and validate products characterized by high complexity and low volume. They carefully document all phases for each product, but cross-product analyses are challenging due to the heterogeneity and unstructured nature of the data in the documents. In our research, … Continue reading Integrating Large Language Models and Knowledge Graphs for Extraction and Validation of Textual Data

Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification

Transparency and explainability in image classification are essential for establishing trust in machine learning models and detecting biases and errors. State-of-the-art explainability methods generate saliency maps to show where a specific class is identified, without providing a detailed explanation of the model’s decision process. Striving to address such a need, we introduce a post-hoc method … Continue reading Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification

Unveiling Human-AI Interaction and Subjective Perceptions About Artificial Intelligent Agents

We conducted a study that focuses on human-AI interactions, employing a crowd-based methodology to collect and assess the reactions and perceptions of a human audience to a dialogue between a human and an artificial intelligent agent. The study is conducted through a live streaming platform where human streamers broadcast interviews to a custom-made GPT voice … Continue reading Unveiling Human-AI Interaction and Subjective Perceptions About Artificial Intelligent Agents

Policy Sandboxing: Empathy As An Enabler Towards Inclusive Policy-Making

Digitally-supported participatory methods are often used in policy-making to develop inclusive policies by collecting and integrating citizens’ opinions. However, these methods fail to capture the complexity and nuances in citizens’ needs, i.e., citizens are generally unaware of others’ needs, perspectives, and experiences. Consequently, policies developed with this underlying gap tend to overlook the alignment of … Continue reading Policy Sandboxing: Empathy As An Enabler Towards Inclusive Policy-Making

More Posts