MSR 2026
Mon 13 - Tue 14 April 2026 Rio de Janeiro, Brazil
co-located with ICSE 2026

Below is the detailed program schedule for the event, including all sessions, talks, tutorials, and plenary activities organized by day, time, and room.

First Day Schedule

Plenary (Room A)

Time Session & Details Track / Program
09:00 - 10:30 Plenary: Opening + Keynote I Room A
09:00 30m Day opening
09:30 60m Keynote MSR + ICPC (EMERSON)
10:30 - 11:00 Break Monday Morning Break

Session 1-A (Room A)

Time Session & Details Track / Program
11:00 - 12:30 Session 1-A: AI Agents & Automation Room A
11:00 10m Toward Linking Declined Proposals and Source Code: An Exploratory Study on the Go Repository
11:10 10m IntelliSA: An Intelligent Static Analyzer for IaC Security Smell Detection Using Symbolic Rules and Neural Inference
11:20 10m Model See, Model Do? Exposure-Aware Evaluation of Bug-vs-Fix Preference in Code LLMs
11:30 10m A Match Made in Heaven? AI-driven Matching of Vulnerabilities and Security Unit Tests
11:40 10m PhantomRun: Auto Repair of Compilation Errors in Embedded Open Source Software
11:50 10m Secret Leak Detection in Software Issue Reports using LLMs: A Comprehensive Evaluation
12:00 5m Context Engineering for AI Agents in Open-Source Software
12:05 5m A Blueprint for Trustworthy Code Annotation at Scale: An LLM-Powered Pipeline for Industrial Software Analytics
12:10 10m Modeling Sampling Workflows for Code Repositories
12:20 10m Are We All Using Agents Now? An Empirical Study of Core and Peripheral Developers’ Use of Coding Agents

Session 1-B (Room B)

Time Session & Details Track / Program
11:00 - 12:30 Session 1-B: Quality & Security Room B
11:00 10m Where Do Smart Contract Security Analyzers Fall Short?
11:10 10m An Empirical Study of Vulnerabilities in Python Packages and Their Detection
11:20 10m Does Programming Language Matter? An Empirical Study of Fuzzing Bug Detection
11:30 10m An Empirical Study on Line-Level Software Defect Prediction
11:40 10m Characterizing and Modeling the GitHub Security Advisories Review Pipeline
11:50 10m Linux Kernel Recency Matters, CVE Severity Doesn’t, and History Fades
12:00 5m Finding Important Stack Frames in Large Systems
12:05 5m Stop Comparing Apples and Oranges: Matching for Better Results in Mining Software Repositories Studies
12:10 10m From Logic to Toolchains: An Empirical Study of Bugs in the TypeScript Ecosystem
12:20 10m LogSieve: Task-Aware CI Log Reduction for Sustainable LLM-Based Analysis

Poster Session (Hall)

Time Session & Details Track / Program
12:30 - 14:00 Lunch Break Monday Lunch
14:00 - 15:30 Session 2 - Posters Poster Area
14:00 90m Mining Challenge posters
14:00 - 15:30 Session 2 - Discussion Poster Area/Room A/Room B
14:00 90m Discussion groups
15:30 - 16:00 Break Monday Afternoon Break

Session 3-A (Room A)

Time Session & Details Track / Program
16:00 - 17:30 Session 3-A: AI Evaluation & Vision Room A
16:00 10m Speed at the Cost of Quality? The Impact of LLM Agent Assistant on Software Development
16:10 10m LLM-Based Detection of Tangled Code Changes for Higher-Quality Method-Level Bug Datasets
16:20 10m Adversarial Bug Reports as a Security Risk in Language Model-Based Automated Program Repair
16:30 10m Investigating Autonomous Agent Contributions in the Wild: Activity Patterns and Code Change over Time
16:40 10m Analyzing GitHub Issues and Pull Requests in nf-core Pipelines
16:50 10m Beyond Single Code Changes: An Empirical Study of Topic-Based Code Review Practices in Gerrit for OpenStack
17:00 5m Ask, Then Think: Enhancing LLM Performance with Socratic Reasoning
17:05 5m Beyond the Prompt: Assessing Domain Knowledge Strategies for High-Dimensional LLM Optimization in Software Engineering
17:10 10m Bridging Design and Implementation: A Study of Multi-Agent LLM Architectures for Automated Front-End Generation
17:20 10m From Logic to Toolchains: An Empirical Study of Bugs in the TypeScript Ecosystem

Session 3-B (Room B)

Time Session & Details Track / Program
16:00 - 17:30 Session 3-B: Maintenance & Tutorial Room B
16:00 10m Source Code Hotspots: A Diagnostic Method for Quality Issues
16:10 10m Evolving Kubernetes: A Technical Debt Perspective
16:20 10m How do third-party Python libraries use type annotations?
16:30 10m Coordination at Scale in Large Distributed Development: The Case of Kubernetes
16:40 5m Underutilization in Research GPU Clusters: SE Challenges
16:45 5m Does Impact Analysis Support the Review of Changes to Build Specifications?
16:50 40m Tutorial 1: Running Large Language Models at Scale for Mining Software Repositories: Lessons Learned from HPC-Based Batch Inference

Second Day Schedule

Plenary (Room A)

Time Session & Details Track / Program
09:00 - 10:30 Plenary: Awards & Keynote II Room A
09:00 60m Keynote: Pick
10:00 30m MIP 2016 Presentation
10:30 - 11:00 Break Tuesday Morning Break

Session 1-A (Room A)

Time Session & Details Track / Program
11:00 - 12:30 Session 1-A: AI & Autonomous Agents Room A
11:00 30m + 10m Q&A and room change Vision 1: Title TBA
11:40 10m Evaluating the Use of LLMs for Automated DOM-Level Resolution of Web Performance Issues
11:50 10m Are Coding Agents Generating Over-Mocked Tests? An Empirical Study
12:00 10m Consistent or Sensitive? Automated Code Revision Tools Against Semantics-Preserving Perturbations
12:10 10m Beyond the Prompt: An Empirical Study of Cursor Rules
12:20 10m Promises, Perils, and (Timely) Heuristics for Mining Coding Agent Activity

Session 1-B (Room B)

Time Session & Details Track / Program
11:00 - 12:30 Session 1-B: Maintenance, Evolution & Processes Room B
11:00 30m + 10m Q&A and room change Vision 1: Title TBA
11:40 10m Combining Example-Based and Rule-Based Program Transformations to Resolve Build Conflicts
11:50 10m Mining Quantum Software Patterns in Open-Source Projects
12:00 10m Analyzing Dependency Distribution Changes Arising from Code Smell Interactions
12:10 10m The Value of Effective Pull Request Description
12:20 5m Can Data Mining Help to Survive the Annual Compiler Upgrade?
12:25 5m How Does Experience Influence Developer Perceptions of Atoms of Confusion?
12:30 - 14:00 Lunch Break Tuesday Lunch

Session 2-A (Room A)

Time Session & Details Track / Program
14:00 - 15:30 Session 2-A: Ecosystems & Methods Room A
14:00 30m + 10m Q&A and room change Vision 2: The State of Data Mining, Benchmarks, Double Blind Trials, and Software Engineering Systems in Industry
14:40 10m Quantifying Competitive Relationships Among Open-Source Software Projects
14:50 10m Role of CI Adoption in Mobile App Success: An Empirical Study of Open-Source Android Projects
15:00 10m ML in a Box: Analyzing Containerization Practices in Open Source ML Projects
15:10 10m An Empirical Study of Policy as Code: Adoption, Purpose, and Maintenance
15:20 10m Tracing Stereotypes in Pre-trained Transformers: From Biased Neurons to Fairer Models

Session 2-B (Room B)

Time Session & Details Track / Program
14:00 - 15:30 Session 2-B: Quality Room B
14:00 30m + 10m Q&A and room change Vision 2: The State of Data Mining, Benchmarks, Double Blind Trials, and Software Engineering Systems in Industry
14:40 10m How are MLOps Frameworks Used in Open Source Projects? An Empirical Characterization
14:50 10m Do We Agree on What an “Audit” Is? Toward Standardized Smart Contract Audit Reporting
15:00 10m AFGNN: API Misuse Detection using Graph Neural Networks and Clustering
15:10 10m An Empirical Analysis of Cross-OS Portability Issues in Python Projects
15:20 10m Learning Compiler Fuzzing Mutators from Historical Bugs
15:30 - 16:00 Break Tuesday Afternoon Break

Tutorial + Registered Reports (Room A)

Time Session & Details Track / Program
16:00 - 17:00 Tutorial + Registered Reports talks Room A
16:00 5m Parameterized Tests in Practice: Adoption, Styles, and Impact in Apache Java Projects
16:05 5m Causal Inference for the Effect of Code Coverage on Bug Introduction
16:10 5m Automated Testing of Task-based Chatbots: How Far Are We?
16:15 5m The Influence of Code Smells in Efferent Neighbors on Class Stability
16:20 40m Tutorial 2: Selecting the Data Source that Matter: Fine-Tuning Domain-Specific Ecosystem Studies with MARIN

Demo Session (Room B)

Time Session & Details Track / Program
16:00 - 17:00 Demo Session & Discussion Room B
16:00 45m Interactive Tool Showcase & Discussion

Closing Ceremony (Room A)

Time Session & Details Track / Program
16:45 - 17:30 Plenary: Closing Ceremony Room A
16:45 15m Mining Challenge top 3 presentations
17:00 15m Closing Remarks
17:15 15m Invitation to MSR 2027

This concludes the program schedule for both days. Please refer to the tables above for exact times, session titles, and room assignments.