MSR 2026
Mon 13 - Tue 14 April 2026 Rio de Janeiro, Brazil
co-located with ICSE 2026
Tue 14 Apr 2026 12:10 - 12:20 at Oceania V - Session 1-A: AI & Autonomous Agents Chair(s): Filipe Cogo

While Large Language Models (LLMs) have demonstrated remarkable capabilities, research shows their effectiveness depends heavily not only on explicit prompts but also on the broader context provided. This requirement is particularly pronounced in software engineering, where the goals, architecture, and collaborative conventions of an existing project play critical roles in response quality. To support this, many AI coding assistants have introduced ways for developers to author persistent, machine-readable directives that encode a project's unique constraints. While this practice is growing, the content of these directives remains unstudied.
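As an illustration of such a directive, Cursor reads project-level rule files from `.cursor/rules/` (MDC files with a metadata frontmatter). The following sketch is hypothetical: the description, glob pattern, and referenced module names are invented for illustration, not drawn from the paper.

```markdown
---
description: API conventions for the server package
globs: ["src/server/**/*.ts"]
alwaysApply: false
---

- Use the shared `logger` module instead of `console.log`.
- Validate all public endpoint inputs with the project's schema helpers.
- Follow the error-handling pattern established in `src/server/errors.ts`.
```

A file like this is attached to matching requests automatically, so the assistant receives the project's conventions without the developer restating them in every prompt.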

This paper presents the first large-scale empirical study to characterize this emerging form of developer-provided context. Through a qualitative analysis of 401 open-source repositories containing cursor rules, we developed a comprehensive taxonomy of project context that developers consider essential, organized into four high-level themes: Conventions, Guidelines, Project Information, and LLM Directives. Our study also explores how this context varies across different project types and programming languages, offering implications for the next generation of context-aware AI developer tools.

Tue 14 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
Session 1-A: AI & Autonomous Agents
Technical Papers / MSR Program at Oceania V
Chair(s): Filipe Cogo Centre for Software Excellence, Huawei Canada
11:00
10m
Talk
Speed at the Cost of Quality: How Cursor AI Increases Short-Term Velocity and Long-Term Complexity in Open-Source Projects
Technical Papers
Hao He Carnegie Mellon University, Courtney Miller Carnegie Mellon University, Shyam Agarwal Carnegie Mellon University, Christian Kästner Carnegie Mellon University, Bogdan Vasilescu Carnegie Mellon University
Pre-print Media Attached
11:10
10m
Talk
LLM-Based Detection of Tangled Code Changes for Higher-Quality Method-Level Bug Datasets
Technical Papers
Md Nahidul Islam Opu University of Manitoba, Shaowei Wang University of Manitoba, Shaiful Chowdhury University of Manitoba
Pre-print
11:20
10m
Talk
Adversarial Bug Reports as a Security Risk in Language Model-Based Automated Program Repair
Technical Papers
Piotr Przymus Nicolaus Copernicus University in Toruń, Poland, Andreas Happe TU Wien, Jürgen Cito TU Wien
Pre-print
11:30
10m
Talk
Investigating Autonomous Agent Contributions in the Wild: Activity Patterns and Code Change over Time
Technical Papers
Răzvan Mihai Popescu Delft University of Technology, David Gros University of California, Davis, Andrei Botocan Delft University of Technology, Rahul Pandita GitHub, Inc., Prem Devanbu University of California, Davis, Maliheh Izadi Delft University of Technology
11:40
10m
Talk
Evaluating the Use of LLMs for Automated DOM-Level Resolution of Web Performance Issues
Technical Papers
Gideon Peters Concordia University, SayedHassan Khatoonabadi Concordia University, Emad Shihab Concordia University
11:50
10m
Talk
Are Coding Agents Generating Over-Mocked Tests? An Empirical Study
Technical Papers
Andre Hora UFMG, Romain Robbes CNRS, LaBRI, University of Bordeaux
Pre-print Media Attached
12:00
10m
Talk
Consistent or Sensitive? Automated Code Revision Tools Against Semantics-Preserving Perturbations
Technical Papers
Shirin Pirouzkhah University of Zurich, Souhaila Serbout Quantena AG, Alberto Bacchelli IfI, University of Zurich
Pre-print
12:10
10m
Talk
Beyond the Prompt: An Empirical Study of Cursor Rules
Technical Papers
Shaokang Jiang University of California, Irvine, Daye Nam University of California, Irvine
Pre-print
12:20
10m
Talk
Bridging Design and Implementation: A Study of Multi-Agent LLM Architectures for Automated Front-End Generation
Technical Papers
Caren Rizk Concordia University, SayedHassan Khatoonabadi Concordia University, Emad Shihab Concordia University