AI-Assisted Decision-Centric Architecture Reviews (DCAR)

Decision-Centric Architecture Reviews (DCAR) are a lightweight method for evaluating software architectures by focusing on the rationale behind key design decisions. DCAR uses a structured process with defined roles, including reviewers, a lead architect, developers, and a management/customer representative, to systematically uncover, document, and evaluate architecture decisions and their underlying decision forces. This topic explores how AI agents can take over specific DCAR roles to make architecture reviews more accessible, scalable, and efficient. Students will investigate which roles lend themselves to AI support, design and configure appropriate agents, and implement them using technologies like MCP or API integrations.

Supervisor

Prof. Dr. Uwe van Heesch

Background

Decision-Centric Architecture Reviews (DCAR) have been used for over a decade to systematically evaluate software architectures. Unlike scenario-based methods such as ATAM, which typically consume considerable time and resources, DCAR was designed from the ground up to be lightweight and is therefore the favored method for architecture evaluation in agile contexts. DCAR takes a decision-by-decision approach: stakeholders select a set of architecture decisions (e.g. the choice of an architectural pattern, the selection of a middleware framework, or a technology trade-off) and analyze them in the context of relevant decision forces, that is, the constraints, risks, business goals, experience, and organizational considerations that push an architect toward or away from a specific solution.

A DCAR follows nine sequential steps:

  1. Preparation
  2. DCAR Introduction
  3. Management Presentation
  4. Architecture Presentation
  5. Forces and Decision Completion
  6. Decision Prioritization
  7. Decision Documentation
  8. Decision Evaluation
  9. Retrospective and Reporting

The process involves several defined roles: a review team (external or internal reviewers with architecture experience), the lead architect, one or two developers, and a management/customer representative. During the evaluation session, participants identify decisions and their interrelationships (visualized in decision relationship diagrams), elicit and weigh decision forces for and against each solution, and finally vote on each decision using a traffic-light scheme (green = good, yellow = acceptable, red = reconsider). Industrial experience has shown that a full DCAR evaluation, including reporting, can be conducted in less than five person-days.
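The decision artifacts described above can be modeled as simple data structures, which an AI agent would need in some form to reason about a review. The following is a minimal sketch; the class and field names (Force, ArchitectureDecision, weight, etc.) are illustrative assumptions, not part of the DCAR method itself.

```python
from dataclasses import dataclass, field
from enum import Enum

class Vote(Enum):
    """Traffic-light voting scheme used in DCAR decision evaluation."""
    GREEN = "good"
    YELLOW = "acceptable"
    RED = "reconsider"

@dataclass
class Force:
    """A decision force: a consideration pushing toward or against a solution."""
    description: str
    supports_decision: bool   # True = argues for the decision, False = against
    weight: int               # relative importance, e.g. 1 (minor) to 3 (major)

@dataclass
class ArchitectureDecision:
    name: str
    rationale: str
    forces: list = field(default_factory=list)
    votes: list = field(default_factory=list)

    def force_balance(self) -> int:
        """Sum of weighted forces; positive values favor keeping the decision."""
        return sum(f.weight if f.supports_decision else -f.weight
                   for f in self.forces)

# Example: evaluating a (hypothetical) middleware decision
decision = ArchitectureDecision(
    name="Use a message broker for service integration",
    rationale="Decouples services and supports asynchronous workflows.",
)
decision.forces.append(Force("Team has prior operational experience", True, 2))
decision.forces.append(Force("Adds an infrastructure component to maintain", False, 1))
decision.votes.append(Vote.GREEN)

print(decision.force_balance())  # → 1
```

A documentation-assistant agent could populate such structures in real time during the review session, making them available for the final report.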

With the rise of agentic AI, a natural question arises: Can some of these roles be fulfilled, or at least supported, by AI agents? An AI agent could, for instance, act as a reviewer that challenges architectural decisions by identifying counter-forces, or as a moderator that guides participants through the nine DCAR steps, or as a documentation assistant that captures decisions, forces, and rationale in real time. This could lower the barrier for conducting architecture reviews, especially in smaller teams that lack the personnel to fill all DCAR roles.

The concrete form of AI support is deliberately left open for students to explore. Possible directions include fully autonomous AI agents that assume a DCAR role, semi-automated assistants that prepare review artifacts, or interactive agents that guide a human reviewer through the process.

Objective(s)

In the first half of the semester, you write a research paper investigating the DCAR method, including its process steps, roles, artifacts (decision relationship diagrams, decision documentation templates, force lists), and evaluation criteria, as well as the current state of AI-assisted software architecture evaluation.

In the second half, you build a proof-of-concept in which one or more DCAR roles are supported or replaced by AI agents. The primary focus of this practical part is the agent configuration itself:

  • Designing system prompts, process descriptions, and skills that enable an AI agent to fulfill a specific DCAR role (e.g. reviewer, devil’s advocate, moderator, documentation assistant)
  • Equipping agents with the necessary technical capabilities via MCP servers, tool-use APIs, or similar integrations (e.g. access to architecture documentation, generation of decision relationship diagrams, force analysis)
  • Defining the interaction model between human participants and AI agents during a DCAR session
  • Evaluating the quality of AI-assisted reviews compared to purely human-driven ones
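To make the first two bullet points concrete, an agent configuration could be sketched as below. Everything here is a hypothetical starting point: the field names (system_prompt, tools, interaction_model), the tool names, and the prompt wording are assumptions to be adapted to whichever agent framework or tool-use API is chosen.

```python
# Hypothetical configuration for an AI "devil's advocate" reviewer agent.
reviewer_agent = {
    "role": "devil's advocate",
    "system_prompt": (
        "You are a reviewer in a Decision-Centric Architecture Review (DCAR). "
        "For each architecture decision presented, identify counter-forces: "
        "constraints, risks, and organizational considerations that argue "
        "against the chosen solution. Do not propose alternatives unless asked."
    ),
    "tools": [
        "fetch_architecture_documentation",   # e.g. backed by an MCP server
        "list_decision_forces",
        "record_counter_force",
    ],
    # Part of the human/AI interaction model (third bullet point above):
    "interaction_model": "responds only when the moderator hands over the floor",
}

def build_prompt(agent: dict, decision_name: str) -> str:
    """Combine the agent's system prompt with the decision under review."""
    return f"{agent['system_prompt']}\n\nDecision under review: {decision_name}"

# Usage: assemble the prompt for one decision from the review's priority list
build_prompt(reviewer_agent, "Use a message broker for service integration")
```

The same structure could be instantiated with different prompts and tool sets for the moderator or documentation-assistant roles, which makes it easy to compare role-specific configurations in the evaluation.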

The author of the DCAR method is available for interviews and can provide first-hand insights into the method, its evolution, and its potential for AI-assisted support.

Possible Research Question(s)

  • Which DCAR roles can be meaningfully supported or replaced by AI agents, and what are the limitations?
  • How should AI agents be configured (e.g. via system prompts, tool access, knowledge bases) to effectively fulfill a specific DCAR role such as the reviewer or the devil’s advocate?
  • Can an AI agent reliably identify and weigh decision forces (pros and cons) for a given architecture decision, comparable to an experienced human reviewer?
  • Does AI-assisted DCAR produce review results of comparable quality to traditional, fully human-driven reviews?
  • How can technologies like MCP (Model Context Protocol) or tool-use APIs be leveraged to give AI agents the technical capabilities needed for architecture review tasks (e.g. accessing architecture documentation, generating decision relationship diagrams)?
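Regarding the last question, tool-use APIs and MCP servers typically expose capabilities through JSON-Schema-style tool declarations. The sketch below shows what such a declaration might look like for one of the capabilities mentioned above; the tool name and parameters are illustrative assumptions, not a prescribed interface.

```python
import json

# Hypothetical tool declaration in the JSON-Schema style used by common
# tool-use APIs and MCP servers.
fetch_docs_tool = {
    "name": "fetch_architecture_documentation",
    "description": (
        "Retrieve a section of the system's architecture documentation "
        "so the agent can ground its review in the actual design."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "section": {
                "type": "string",
                "description": "e.g. 'deployment view' or 'decision log'",
            },
        },
        "required": ["section"],
    },
}

# The declaration is serialized and passed to the model alongside the prompt
json.dumps(fetch_docs_tool)
```

Comparable declarations could cover generating decision relationship diagrams or recording forces, giving each DCAR agent role its own small, auditable tool set.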

Sources

  1. van Heesch, U., Eloranta, V.-P., Avgeriou, P., Koskimies, K., Harrison, N.: Decision-Centric Architecture Reviews, IEEE Software, vol. 31, no. 1, Jan/Feb 2014, pp. 69–76. Introduces the DCAR method with its nine-step process, roles, decision forces concept, and industrial experiences from large-scale projects. https://doi.org/10.1109/MS.2013.140
  2. Toth, S., Zörner, S.: Risk analysis with lightweight architecture reviews, heise online / iX 14/2024, April 2025. Compares DCAR with other architecture evaluation methods such as ATAM, PBAR, TARA, Pre-Mortem, and LASR, and discusses their suitability for different project contexts. https://www.heise.de/en/background/Risk-analysis-with-lightweight-architecture-reviews-10355745.html
  3. Embarc Software Consulting GmbH: Es muss nicht immer ATAM sein - Decision-Centric Architecture Reviews (DCAR), YouTube. https://www.youtube.com/watch?v=-5U4ERA5gBE
  4. Erder, M., Pureur, P.: Continuous Architecture: Sustainable Architecture in an Agile and Cloud-Centric World, Morgan Kaufmann, 2015. Discusses DCAR in the context of continuous and agile architecture practices. https://www.google.de/books/edition/Continuous_Architecture/xxYoCgAAQBAJ