Decision-Centric Architecture Reviews (DCAR) are a lightweight method for evaluating software architectures by focusing on the rationale behind key design decisions. DCAR uses a structured process with defined roles, including reviewers, a lead architect, developers, and a management/customer representative, to systematically uncover, document, and evaluate architecture decisions and their underlying decision forces. This topic explores how AI agents can take over specific DCAR roles to make architecture reviews more accessible, scalable, and efficient. Students will investigate which roles lend themselves to AI support, design and configure appropriate agents, and implement them using technologies such as the Model Context Protocol (MCP) or API integrations.
Prof. Dr. Uwe van Heesch
Decision-Centric Architecture Reviews (DCAR) have been used for over a decade to systematically evaluate software architectures. Unlike scenario-based methods such as ATAM, which typically consume considerable time and resources, DCAR was designed from the ground up to be lightweight and is therefore a favored method for architecture evaluation in agile contexts. DCAR takes a decision-by-decision approach: stakeholders select a set of architecture decisions (e.g., the choice of an architectural pattern, the selection of a middleware framework, or a technology trade-off) and analyze them in the context of relevant decision forces, that is, the constraints, risks, business goals, experience, and organizational considerations that push an architect toward or away from a specific solution.
A DCAR review follows nine sequential steps.
The process involves several defined roles: a review team (external or internal reviewers with architecture experience), the lead architect, one or two developers, and a management/customer representative. During the evaluation session, participants identify decisions and their interrelationships (visualized in decision relationship diagrams), elicit and weigh decision forces for and against each solution, and finally vote on each decision using a traffic-light scheme (green = good, yellow = acceptable, red = reconsider). Industrial experience has shown that a full DCAR evaluation, including reporting, can be conducted in fewer than five person-days.
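To make the review artifacts concrete, the decisions, forces, and traffic-light votes described above can be modeled as simple data structures. The following is a minimal sketch; the class and field names are illustrative choices, not part of the DCAR method itself, and the pessimistic tie-breaking rule is an assumption for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class Vote(Enum):
    """DCAR traffic-light voting scheme."""
    GREEN = "good"
    YELLOW = "acceptable"
    RED = "reconsider"


@dataclass
class Force:
    """A decision force; the sign of `weight` indicates whether it pushes
    toward (+) or away from (-) the chosen solution."""
    description: str
    weight: int


@dataclass
class Decision:
    """An architecture decision under review."""
    name: str
    forces: list = field(default_factory=list)
    votes: list = field(default_factory=list)
    related_to: list = field(default_factory=list)  # edges in the decision relationship diagram

    def outcome(self) -> Vote:
        """Majority vote; ties (and the empty case) resolve pessimistically,
        i.e. RED before YELLOW before GREEN."""
        counts = {v: self.votes.count(v) for v in Vote}
        top = max(counts.values())
        for v in (Vote.RED, Vote.YELLOW, Vote.GREEN):
            if counts[v] == top:
                return v
```

A decision such as "adopt a message broker" would then carry its forces (e.g., team experience as a positive force, operational complexity as a negative one) alongside the participants' votes.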
With the rise of agentic AI, a natural question arises: Can some of these roles be fulfilled, or at least supported, by AI agents? An AI agent could, for instance, act as a reviewer that challenges architectural decisions by identifying counter-forces, as a moderator that guides participants through the nine DCAR steps, or as a documentation assistant that captures decisions, forces, and rationale in real time. This could lower the barrier for conducting architecture reviews, especially in smaller teams that lack the personnel to fill all DCAR roles.
The concrete form of AI support is deliberately left open for students to explore. Possible directions include fully autonomous AI agents that assume a DCAR role, semi-automated assistants that prepare review artifacts, or interactive agents that guide a human reviewer through the process.
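As one illustration of the reviewer direction, an agent could prompt a language model to generate counter-forces for a given decision. The sketch below is framework-agnostic: `llm` stands in for any chat-completion function, and the prompt text and parsing convention are assumptions, not a prescribed design.

```python
from typing import Callable, List

# Placeholder for any chat-completion function (hypothetical signature):
# takes a prompt string, returns the model's reply as a string.
LLM = Callable[[str], str]

REVIEWER_PROMPT = (
    "You act as a DCAR reviewer. For the architecture decision below, list "
    "decision forces that argue AGAINST the chosen solution (risks, constraints, "
    "organizational concerns). One force per line, prefixed with '- '."
)


def challenge_decision(llm: LLM, decision: str, rationale: str) -> List[str]:
    """Ask the model for counter-forces and parse them into a force list."""
    reply = llm(f"{REVIEWER_PROMPT}\n\nDecision: {decision}\nRationale: {rationale}")
    # Keep only lines that follow the requested '- ' bullet convention.
    return [line[2:].strip() for line in reply.splitlines() if line.startswith("- ")]
```

The counter-forces returned this way could then be weighed against the architect's stated forces during the evaluation session, with a human reviewer deciding which are relevant.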
In the first half of the semester, you write a research paper investigating the DCAR method, including its process steps, roles, artifacts (decision relationship diagrams, decision documentation templates, force lists), and evaluation criteria, as well as the current state of AI-assisted software architecture evaluation.
In the second half, you build a proof-of-concept in which one or more DCAR roles are supported or replaced by AI agents. The primary focus of this practical part is the agent configuration itself.
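A starting point for such a configuration could be a per-role mapping of system prompts to the tools an agent may call (exposed, for example, via MCP). The role names, prompt texts, and tool names below are purely illustrative assumptions, not tied to any specific agent framework.

```python
# Hypothetical agent configurations for three DCAR roles (illustrative only).
AGENT_CONFIGS = {
    "reviewer": {
        "system_prompt": "Challenge each architecture decision by naming "
                         "forces against the chosen solution.",
        "tools": ["list_decisions", "add_force"],
    },
    "moderator": {
        "system_prompt": "Guide participants through the nine DCAR steps in "
                         "order and summarize each decision before the vote.",
        "tools": ["advance_step", "start_vote"],
    },
    "scribe": {
        "system_prompt": "Capture decisions, forces, and rationale in the "
                         "documentation template in real time.",
        "tools": ["record_decision", "record_force"],
    },
}
```

Part of the project work is deciding which of these roles to implement, how to phrase the prompts, and which tools the agents actually need.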
The author of the DCAR method is available for interviews and can provide first-hand insights into the method, its evolution, and its potential for AI-assisted support.