MSR 2026
Mon 13 - Tue 14 April 2026, Rio de Janeiro, Brazil
co-located with ICSE 2026

This program is tentative and subject to change.

Mon 13 Apr 2026 11:10 - 11:20 at Oceania V - Session 1-A: AI Agents & Automation

Infrastructure as Code (IaC) enables automated provisioning of large-scale cloud and on-premise environments, reducing the need for repetitive manual setup. However, this automation is a double-edged sword: a single misconfiguration in IaC scripts can propagate widely, leading to severe system downtime and security risks. Prior studies have shown that IaC scripts often contain security smells—bad coding patterns that may introduce vulnerabilities—and have proposed static analyzers based on symbolic rules to detect them. Yet, our preliminary analysis reveals that rule-based detection alone tends to over-approximate, producing excessive false positives and increasing the burden of manual inspection. In this paper, we present IntelliSA, an intelligent static analyzer for IaC security smell detection that integrates symbolic rules with neural inference. IntelliSA applies symbolic rules to over-approximate potential smells for broad coverage, then employs neural inference to filter false positives. While LLMs can effectively perform this filtering, reliance on LLM APIs introduces high cost and latency, raises data governance concerns, and limits reproducibility and offline deployment. To address these challenges, we adopt a knowledge distillation approach: an LLM teacher generates pseudo-labels to train a compact student model—over 500× smaller—that learns from the teacher’s knowledge and efficiently classifies false positives. We evaluate IntelliSA against two static analyzers and three LLM baselines (Claude-4, Grok-4, and GPT-5) on a human-labeled dataset of 241 security smells across 11,814 lines of real-world IaC code. Experimental results show that IntelliSA achieves the highest F1 score (83%), outperforming baselines by 7–42%. Moreover, IntelliSA demonstrates the best cost-effectiveness, detecting 60% of security smells while inspecting less than 2% of the codebase.
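The two-stage design described in the abstract (symbolic rules that over-approximate candidate smells, followed by a learned filter that discards false positives) can be sketched as follows. This is purely illustrative and not IntelliSA's actual rule set or student model: the regex rule, the templated-value heuristic standing in for the distilled classifier, and the sample IaC lines are all assumptions.

```python
import re

# Hypothetical rule: flag assignments that look like hard-coded secrets,
# a classic IaC security smell. Deliberately over-approximates for coverage.
SECRET_RULE = re.compile(r'(password|secret|token)\s*[:=]\s*["\']?(\S+)',
                         re.IGNORECASE)

def symbolic_pass(lines):
    """Stage 1: apply symbolic rules, returning all candidate smells."""
    return [(i, m.group(0)) for i, line in enumerate(lines, 1)
            if (m := SECRET_RULE.search(line))]

def neural_filter(candidate):
    """Stage 2 stand-in: a distilled student model would score each
    candidate; here a trivial heuristic keeps only literal values and
    drops templated references (likely false positives)."""
    _, text = candidate
    value = text.split(":", 1)[-1].split("=", 1)[-1].strip().strip('"\'')
    return not (value.startswith("{{") or value.startswith("$"))

iac = [
    'db_password: "hunter2"',          # literal secret: a true smell
    'api_token: "{{ vault_token }}"',  # templated lookup: likely benign
    'region: us-east-1',
]
candidates = symbolic_pass(iac)                      # rule stage flags 2 lines
smells = [c for c in candidates if neural_filter(c)] # filter keeps only line 1
print(smells)
```

The point of the split is that the cheap symbolic stage guarantees recall, while the learned stage restores precision; distilling the LLM teacher into a small student keeps that second stage fast and deployable offline.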

Preprint: 2601.14595v1.pdf (4.32 MiB)
Slides: 1110_Fu.pptx (17.46 MiB)


Mon 13 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Session 1-A: AI Agents & Automation (Technical Papers / Industry Track / MSR Program) at Oceania V
11:00
10m
Talk
Toward Linking Declined Proposals and Source Code: An Exploratory Study on the Go Repository
Technical Papers
Sota Nakashima Kyushu University, Masanari Kondo Kyushu University, Mahmoud Alfadel University of Calgary, Aly Ahmad University of Calgary, Toshihiro Nakae DENSO CORPORATION, Hidenori Matsuzaki DENSO CORPORATION, Yasutaka Kamei Kyushu University
Pre-print
11:10
10m
Talk
IntelliSA: An Intelligent Static Analyzer for IaC Security Smell Detection Using Symbolic Rules and Neural Inference
Technical Papers
Qiyue Mei The University of Melbourne, Michael Fu The University of Melbourne
Pre-print File Attached
11:20
10m
Talk
Model See, Model Do? Exposure-Aware Evaluation of Bug-vs-Fix Preference in Code LLMs
Technical Papers
Ali Al-Kaswan Delft University of Technology, Netherlands, Claudio Spiess University of California, Davis, Prem Devanbu University of California at Davis, Arie van Deursen TU Delft, Maliheh Izadi Delft University of Technology
Pre-print
11:30
10m
Talk
A Match Made in Heaven? AI-driven Matching of Vulnerabilities and Security Unit Tests
Technical Papers
Emanuele Iannone Hamburg University of Technology, Quang-Cuong Bui Hamburg University of Technology, Riccardo Scandariato Hamburg University of Technology
Pre-print
11:40
10m
Talk
PhantomRun: Auto Repair of Compilation Errors in Embedded Open Source Software
Technical Papers
Han Fu, Sigrid Eldh Ericsson AB, Mälardalen University, Carleton University, Kristian Wiklund Ericsson AB, Andreas Ermedahl Ericsson AB; KTH Royal Institute of Technology, Philipp Haller KTH Royal Institute of Technology, Cyrille Artho KTH Royal Institute of Technology, Sweden
11:50
10m
Talk
Promises, Perils, and (Timely) Heuristics for Mining Coding Agent Activity
Technical Papers
Romain Robbes CNRS, LaBRI, University of Bordeaux, Théo Matricon CNRS, LaBRI, University of Bordeaux, Thomas Degueule CNRS, Andre Hora UFMG, Stefano Zacchiroli LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France
Pre-print
12:00
10m
Talk
From Logic to Toolchains: An Empirical Study of Bugs in the TypeScript Ecosystem
Technical Papers
TianYi Tang Simon Fraser University, Saba Alimadadi Simon Fraser University, Nick Sumner Simon Fraser University
Pre-print
12:10
10m
Talk
Are We All Using Agents Now? An Empirical Study of Core and Peripheral Developers’ Use of Coding Agents
Technical Papers
Shamse Tasnim Cynthia University of Saskatchewan, Joy Krishan Das University of Saskatchewan, Banani Roy University of Saskatchewan
12:20
5m
Talk
Context Engineering for AI Agents in Open-Source Software
Technical Papers
Seyedmoein Mohsenimofidi Heidelberg University, Matthias Galster University of Canterbury, Christoph Treude Singapore Management University, Sebastian Baltes Heidelberg University
Pre-print
12:25
5m
Talk
A Blueprint for Trustworthy Code Annotation at Scale: An LLM-Powered Pipeline for Industrial Software Analytics
Industry Track