Modern Software Engineering in the AI Era

Workshop

February 4, 2026 | 10:30 - 16:30
Organised by the Decision Systems Lab, University of Wollongong

About the Workshop

Software engineering (SE) is moving toward a future in which a development team may consist of only a few human developers, supported by tens or even hundreds of autonomous AI agents that design, build, test, and continuously evolve software at machine speed. This workshop explores how rapidly we are approaching this reality and what it means for the foundations of software engineering. It features talks from leading software engineering researchers from North Carolina State University, University of Sydney, Monash University, Macquarie University, CSIRO Data61, and the University of Wollongong, and challenges participants to rethink core SE practices (requirements, design, testing, maintenance, and accountability) when AI becomes an active collaborator rather than a tool. The workshop also serves as a catalyst for collaboration, providing a forum to share recent work, explore joint research directions and define concrete initiatives such as collaborative grant proposals, co-authored publications and related activities.

Keynote Speaker

Professor Tim Menzies

North Carolina State University, USA

Bio: Tim Menzies (Ph.D. UNSW 1995; ACM Fellow, IEEE Fellow, ASE Fellow) is a globally recognized leader in software engineering research, best known for his pioneering work in data-driven, explainable, and minimal AI for software systems. Over the past two decades, his contributions have redefined defect prediction, effort estimation, and multi-objective optimization, emphasizing transparency and reproducibility. As the co-creator of the PROMISE repository, Tim helped establish modern empirical software engineering, showing that small, interpretable AI models can outperform larger, more complex ones. He is currently a full Professor of computer science at NC State, USA, and director of the Irrational Research lab (mad scientists r'us). His research has earned over $19 million in funding from agencies such as NSF, DARPA, and NASA, as well as from private companies such as Meta, Microsoft, and IBM. Tim has published over 300 papers, with more than 24,000 citations, and advised 24 Ph.D. students. He is the editor-in-chief of the Automated Software Engineering journal and an associate editor for IEEE TSE. His work continues to shape the future of software engineering, focusing on creating AI tools that are not only intelligent but also fair, transparent, and trustworthy. For more information, visit http://timm.fyi

Presenters

Associate Professor Xi Zheng

Macquarie University, Australia

Bio: A/Prof. Xi Zheng (Macquarie University, Australia) is an ARC Future Fellow (2024–2028) whose research focuses on testing and verification of learning-enabled cyber-physical systems, with applications to autonomous vehicles and UAVs. He has secured over $2.4M in competitive funding and published extensively in top venues such as ICSE, FSE, and TSE. His research outputs have been adopted in industry by partners including Ant Group and UAV companies. Beyond research, he has taken on significant leadership and service roles, serving as TPC Chair (MobiQuitous 2026) and as an OC/TPC member (ICSE 2026, FSE 2026, PerCom 2026, CAV 2025). He also co-founded the TACPS workshop series and is a co-organizer of Shonan Seminar #235 and Dagstuhl Seminar 202501048 (2026) on neurosymbolic AI and LLMs for reliable autonomous systems.

Dr. Xiao Cheng

Macquarie University, Australia

Bio: Xiao Cheng is a lecturer (equivalent to a U.S. Assistant Professor) in the School of Computing, Faculty of Science and Engineering, Macquarie University. His research lies at the intersection of Programming Languages (PL) and Software Engineering (SE), focusing on enhancing the security and reliability of modern software systems through formal-method-based program analysis and verification techniques. He is also exploring the integration of artificial intelligence with classical PL/SE tasks to further enhance these domains.
His papers have been published in top-tier conferences and journals in software engineering (TOSEM, FSE, ICSE, ISSTA), programming languages (OOPSLA), and security (TDSC, NDSS), and have received the ACM SIGSOFT Distinguished Paper Award at FSE 2024 and the ACM SIGPLAN Distinguished Paper Award at OOPSLA 2020. He is one of the major contributors to the SVF project, a widely used open-source framework for code analysis and verification, and the author of the DeepWukong project, which is the second most cited TOSEM work of the past five years (2/1042) according to ACM.

Dr. Yongqiang Tian

Monash University, Australia

Bio: Dr. Yongqiang Tian is a lecturer (Assistant Professor) at Monash University. He holds a dual Ph.D. from the University of Waterloo and the Hong Kong University of Science and Technology. His research lies in software testing and debugging, with a particular focus on compilers and deep learning systems. His techniques have uncovered over 200 bugs in widely used software systems, including GCC, LLVM, and TVM. His work has been published in leading peer-reviewed journals and conferences such as TOSEM, ICSE, ASPLOS, FSE, ASE, ISSTA, EMSE, and IJCAI. His research has received support from prominent funding agencies and industry partners, including Microsoft and Cisco. He also contributes actively to the research community, serving as the SIGSOFT Information Director and as a reviewer and program committee member for several top-tier conferences and journals.

Dr. James Hoang

CSIRO, Australia

Bio: James Hoang is a Research Scientist at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia, within its Data61 unit. At Data61, he leads a subgroup of research scientists focusing on quantum software engineering and related foundational research problems.
His research interests lie in inventing and developing AI agentic systems to address challenging problems in complex software systems, including software repository mining, large-scale code search, automated bug detection, and program analysis. Through this work, he aims to improve software quality, accelerate developer productivity, reduce software maintenance costs, and promote the development of responsible and trustworthy AI-assisted software engineering tools.
James’s research has been published in flagship conferences and journals in software engineering and security, including ICSE, ASE, IEEE Security & Privacy, and IEEE Transactions on Software Engineering (TSE). He received his Ph.D. from the School of Computing and Information Systems at Singapore Management University, where he conducted his doctoral research under the supervision of Professor David Lo.

Dr. Hong Jin Kang

University of Sydney, Australia

Bio: Hong Jin Kang is a Lecturer at the University of Sydney. Before this role, he was a postdoctoral researcher at the University of California, Los Angeles. He obtained his PhD from Singapore Management University. With the broad goal of boosting developer productivity, his research focuses on using AI to integrate human knowledge into automated software development techniques. His research has led to publications in top-tier venues, including conferences such as ICSE, FSE, and ASE, and journals such as TSE and TOSEM. Through industrial collaborations, his research has been deployed within large companies and has uncovered security vulnerabilities that were assigned CVE IDs.

Dr. Guoxin Su

University of Wollongong, Australia

Bio: Dr Guoxin Su is a senior lecturer at the University of Wollongong (UOW), Australia. Prior to this, he was a senior research fellow with the School of Computing at the National University of Singapore. He received his PhD from the University of Technology Sydney in 2013. Dr Su's research focuses on the intersection of AI, data science, and software engineering. His primary research directions include formal methods, probabilistic verification, multi-agent systems, deep reinforcement learning, and stream processing. He has published over 60 papers, including publications in premier journals and conferences such as IEEE TSE, ICSE, ESEC/FSE, AAAI, AAMAS, NeurIPS, and Pattern Recognition. He has served as an SPC/PC member for a number of prestigious conferences and has supervised more than 10 PhD students to completion.

Professor Hoa Khanh Dam

University of Wollongong, Australia

Bio: Hoa Khanh Dam is a Professor of Software Engineering in the School of Computing and Information Technology at the University of Wollongong, Australia, and Director of the Decision Systems Lab. His research focuses on the intersection of Software Engineering and Artificial Intelligence, with an emphasis on AI-driven solutions to improve software quality, cybersecurity, and developer productivity. He also investigates methodologies for building autonomous, cyber-resilient AI/IoT and multi-agent systems. His work has received multiple Best Paper Awards (WICSA, APCCM, ASWEC) and an ACM SIGSOFT Distinguished Paper Award (MSR), and has been published in leading journals (IEEE TSE, JSS, EMSE) and conferences (ICSE, ASE, FSE, IJCAI, AAMAS). He is Deputy Editor-in-Chief of Automated Software Engineering and has served in senior editorial, program chair and committee roles across major international venues.

Schedule

10:30 - 11:15 Opening & Keynote Speech: Modern SE in the AI Age

Presenter: Prof. Tim Menzies

Abstract: Everyone says AI needs more: more parameters, more data, more GPUs, more money. Really? Complexity has costs, and they're not always obvious until something goes boom. In this talk, I'll walk through thirty years of building AI systems that worked because they were simple, not despite it. Like the expert system for raising pigs that outperformed the human expert who wrote its rules—built by a junior master's student (i.e. me). Or modern "compact AI" methods that match GPT-class results using 1/1000th of the data. Here's the heresy: in most real software systems, complexity collapses. Behavior funnels. The winning move isn't throwing more compute at the problem—it's asking a better question. If you've ever suspected that the AI hype machine is missing something important, come find out what.

11:15 - 11:45 Towards Verifiable Autonomous Systems with NeuroSymbolic Reasoning

Presenter: A/Prof. Xi Zheng

Abstract: Learning-enabled autonomous systems—such as self-driving vehicles and intelligent drones—pose unprecedented challenges for safety assurance due to the opaque and unpredictable nature of deep neural networks. This talk introduces NeuroStrata, a new neurosymbolic architecture for autonomous systems, which marks a paradigm shift from black-box learning to interpretable, reasoning-based intelligence. By integrating neural perception with symbolic reasoning, NeuroStrata enables certifiable AI, bridging the gap between data-driven adaptability and formal verifiability. This vision is now being realized through a neurosymbolic perception module deployed in collaboration with an Australian drone company, demonstrating real-world feasibility for safety-critical applications.

11:45 - 12:15 LLM-Powered Whole-Program Analysis

Presenter: Dr. Xiao Cheng

Abstract: Program analysis is fundamental to software quality assurance, as it enables automated reasoning about program behavior and the verification of properties against specifications. Large language models (LLMs) have shown great promise for program analysis due to their strong capability for semantic understanding, acquired through extensive training on large-scale codebases. However, because of the hallucination tendency of LLMs, they may produce spurious or counterfactual outputs, hurting the accuracy of program behavior reasoning. This problem becomes significantly more serious when applying LLMs to whole-program analysis, where the target is not toy, function-level code but industrial-scale codebases consisting of millions of lines of code. These limitations hinder the practical adoption of LLM-powered program analysis.

In this talk, I will present three works on LLM-powered whole-program analysis and its key applications: library API specification generation (ICSE 2026), fuzz driver enhancement (NDSS 2026), and codebase-level program repair (FSE 2026). I will elaborate on our methodologies for effectively combining LLMs with fundamental program analysis techniques and for mitigating hallucinations in LLM-powered whole-program analysis through program decomposition, verification, and retrieval-augmented generation techniques.

12:15 - 13:00 Lunch Break

13:00 - 13:30 Supercharge Compiler Engineering with LLMs

Presenter: Dr. Yongqiang Tian

Abstract: The field of compiler engineering has long relied on deep human expertise and painstaking manual effort for tasks like finding subtle bugs or optimizing performance. This labour-intensive process, however, fundamentally limits the pace of development and analysis. This talk presents our recent work on how Large Language Models (LLMs) can fundamentally supercharge this process, allowing engineers to operate with unprecedented speed and efficiency. We will demonstrate how LLMs can intelligently (1) generate candidates for missed peephole optimizations and (2) implement language-specific transformations to automate the debugging workflow.

13:30 - 14:00 Towards Autonomous Normative Multi-Agent Systems for Human-AI Software Engineering Teams

Presenter: Professor Hoa Khanh Dam

Abstract: This talk envisions a transformative paradigm in software engineering, where Artificial Intelligence, embodied in fully autonomous agents, becomes the primary driver of the core software development activities. We introduce a new class of software engineering agents, empowered by Large Language Models and equipped with beliefs, desires, intentions, and memory to enable human-like reasoning. These agents collaborate with humans and other agents to design, implement, test, and deploy software systems with a level of speed, reliability, and adaptability far beyond the current software development processes. Their coordination and collaboration are governed by norms expressed as deontic modalities - commitments, obligations, prohibitions and permissions - that regulate interactions and ensure regulatory compliance. These innovations establish a scalable, transparent and trustworthy framework for future Human-AI software engineering teams.

14:00 - 14:30 Architectural Patterns for Designing Quantum AI Systems

Presenter: Dr. James Hoang

Abstract: Utilising quantum computing technology to enhance artificial intelligence systems is expected to improve training and inference times, increase robustness against noise and adversarial attacks, and reduce the number of parameters without compromising accuracy. However, moving beyond proof-of-concept or simulations to develop practical applications of these systems while ensuring high software quality faces significant challenges due to the limitations of quantum hardware and the underdeveloped knowledge base in software engineering for such systems. In this work, we have conducted a systematic mapping study to identify the challenges and solutions associated with the software architecture of quantum-enhanced artificial intelligence systems. The results of the systematic mapping study reveal several architectural patterns that describe how quantum components can be integrated into inference engines, as well as middleware patterns that facilitate communication between classical and quantum components. Each pattern realises a trade-off between various software quality attributes, such as efficiency, scalability, trainability, simplicity, portability, and deployability. The outcomes of this work have been compiled into a catalogue of architectural patterns.

14:30 - 15:00 Real-Time Deep Reinforcement Learning Systems: Fault-Tolerant Design and Multi-Objective Analysis

Presenter: Dr. Guoxin Su

Abstract: Deep reinforcement learning (DRL) has emerged as a powerful paradigm for solving complex decision-making problems. However, DRL-based systems still face significant dependability challenges, particularly in real-time environments, due to the simulation-to-reality gap, out-of-distribution observations, and the critical impact of latency. Latency-induced faults, in particular, can lead to unsafe or unstable behaviour, yet existing fault-tolerance approaches for DRL systems lack formal methods to rigorously analyse and optimise performance and safety simultaneously in real-time settings. In this talk, I will present a formal framework for the design and analysis of real-time switching mechanisms between DRL agents and alternative controllers. Our framework models switching behaviour using timed automata and introduces multi-objective model checking to evaluate the switch design against both soft and hard performance requirements. A GPU-accelerated implementation of our novel method demonstrates superior scalability compared to state-of-the-art probabilistic model checking tools.

15:00 - 15:30 Towards Human-Centered Program Analysis for Secure Software Engineering

Presenter: Dr. Hong Jin Kang

Abstract: Program analysis is essential for software correctness and security, yet its practical impact is often limited by tools that do not align with developers' reasoning processes or accommodate multiple perspectives. A human-centered approach can help close this gap.

This talk will present a multi-agent framework for vulnerability detection in which multiple LLM agents adopting different roles debate and refine vulnerability hypotheses, improving detection performance and enabling the discovery of multiple CVEs. It will also cover an interrogative debugger for taint analysis that allows developers to ask "why," "why-not," and "what-if" questions to explain unexpected or missing security warnings, reducing cognitive load and improving sensemaking. Finally, we will outline future directions for advancing program analysis tools to address these long-standing challenges.

15:30 - 16:30 Discussions

Open discussion on research collaboration opportunities and future directions

Venue

University of Wollongong
Room 6.209
Northfields Ave, Wollongong NSW 2522
Australia