AI-Agent Support for Requirements Engineering
The master’s thesis should investigate the potential of multi-agent systems (MAS), possibly enhanced with Large Language Models (LLMs), for supporting requirements engineering processes.
Requirements engineering is a critical phase in software development, but it is often challenged by conflicting stakeholder perspectives, ambiguous language, and the need for continuous refinement. Traditional approaches rely heavily on manual elicitation, negotiation, and validation, which are time-consuming, error-prone, and difficult to scale. Multi-agent systems offer a promising approach by modeling stakeholders, requirements analysts, and validators as autonomous agents that interact to generate, negotiate, and refine requirements collaboratively.
To explore this, relevant methods for agent-based modeling, negotiation, and requirement validation should be reviewed and analyzed. LLMs may be incorporated to strengthen agents’ natural language understanding and reasoning capabilities, enabling them to transform unstructured stakeholder input into structured requirements and to engage in negotiation dialogues.
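As a first intuition for the extraction step, the sketch below shows how unstructured stakeholder input might be mapped to a structured requirement artifact. A simple rule-based parser stands in for the LLM here; the `Requirement` fields and the user-story pattern are illustrative assumptions, not a prescribed schema.

```python
import re
from dataclasses import dataclass

@dataclass
class Requirement:
    rid: str        # requirement identifier
    actor: str      # stakeholder role the requirement serves
    capability: str # desired system capability
    priority: str   # to be set during negotiation

def extract_requirement(text: str, rid: str) -> Requirement:
    # Rule-based stand-in for an LLM extraction step: parse a
    # user-story-style statement into structured fields.
    m = re.match(r"As an? (?P<actor>[\w ]+?), I want to (?P<cap>.+?)\.?$",
                 text.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unrecognized stakeholder statement")
    return Requirement(rid=rid, actor=m.group("actor"),
                       capability=m.group("cap"), priority="unset")

req = extract_requirement("As a customer, I want to track my order status.", "R1")
```

In the envisioned system, an LLM-backed agent would replace the regular expression, handling free-form meeting notes or interview transcripts rather than pre-formatted user stories.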
The research focus should include:
- Multi-Agent Systems (MAS): Simulating autonomous agents with different roles (e.g., requirement generator, stakeholder proxy, validator).
- Large Language Models (LLMs): Processing and interpreting natural language input, supporting reasoning, and generating requirement artifacts.
- Coordination and Negotiation Mechanisms: Handling conflicts and achieving consensus among agents.
- Integration: connecting the system with existing requirements engineering tools or modeling environments.
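The interplay of these roles can be sketched as a minimal negotiation round: agents with different responsibilities review a candidate requirement, and the requirement is revised until all objections are resolved or a round limit is reached. All class and function names below are illustrative assumptions, with simple rule-based checks standing in for LLM reasoning.

```python
class Agent:
    # Base role; concrete agents implement review().
    def __init__(self, name):
        self.name = name
    def review(self, requirement: str) -> bool:
        raise NotImplementedError

class StakeholderProxy(Agent):
    # Accepts a requirement only if the stakeholder's key concern is covered.
    def __init__(self, name, must_have):
        super().__init__(name)
        self.must_have = must_have
    def review(self, requirement):
        return self.must_have in requirement

class Validator(Agent):
    # Rejects requirements containing untestable, ambiguous wording.
    AMBIGUOUS = ("fast", "easy", "user-friendly")
    def review(self, requirement):
        return not any(word in requirement for word in self.AMBIGUOUS)

def negotiate(requirement, agents, revise, max_rounds=3):
    # Iterate until all agents accept the requirement or rounds run out.
    for _ in range(max_rounds):
        objections = [a.name for a in agents if not a.review(requirement)]
        if not objections:
            return requirement, True
        requirement = revise(requirement, objections)
    return requirement, False

agents = [StakeholderProxy("customer", "order status"), Validator("qa")]

def revise(req, objections):
    # Stand-in for an LLM-driven rewrite that resolves objections.
    return req.replace("fast", "within 2 seconds")

final_req, consensus = negotiate("Show order status fast", agents, revise)
```

In a full implementation, both `review` and `revise` would be driven by LLM prompts, and the consensus criterion could be replaced by richer mechanisms such as voting or utility-based bargaining.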
The system should be designed to be modular and extensible, enabling the addition of new agent types, negotiation strategies, or reasoning methods. The evaluation should include a comparison of MAS-supported requirements engineering with conventional/manual approaches in terms of efficiency, scalability, quality of requirements, and stakeholder satisfaction.
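One way to achieve the required modularity is a plugin-style registry, so new agent types or negotiation strategies can be added without modifying the core loop. The registry pattern below is a sketch under that assumption; the names are not part of any existing framework.

```python
# Hypothetical registry mapping role names to agent classes.
AGENT_REGISTRY: dict[str, type] = {}

def register_agent(role: str):
    # Decorator that makes an agent class available under a role name.
    def wrap(cls):
        AGENT_REGISTRY[role] = cls
        return cls
    return wrap

@register_agent("validator")
class KeywordValidator:
    # Toy validator: accepts only requirements phrased with "shall".
    def review(self, text: str) -> bool:
        return "shall" in text

def build_agent(role: str, *args, **kwargs):
    # Core loop instantiates agents by role, decoupled from concrete classes.
    return AGENT_REGISTRY[role](*args, **kwargs)

agent = build_agent("validator")
```

Adding a new agent type then only requires defining a class and decorating it; the evaluation harness could swap agent sets in and out to compare MAS configurations against the manual baseline.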
