Does agentic AI lead to de-skilling when learning a new language?

  This page is still in draft state. Please note that the content may be subject to change.

Agentic AI companions like Claude Code are transforming software development, but there is growing evidence that heavy reliance on AI coding assistants may impair developers’ skill formation — particularly when learning new languages or frameworks. This topic investigates the de-skilling hypothesis through literature research and a small-scale controlled experiment comparing AI-assisted vs. traditional learning outcomes.

Background

Agentic AI enables an immensely powerful and productive new style of developing software. You interact with the AI as with an "always online, never tiring" companion with whom you do pair programming, but with only your AI counterpart on the keyboard. Your part is then to carefully review what your AI companion has changed in the repository: you go through the changes step by step, ask questions about code you don't understand, request changes, and build a developer memory by formulating coding and architectural principles you would like to be applied in the future.
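Claude Code, for instance, reads such a "developer memory" from a CLAUDE.md file in the repository root. The following is a purely illustrative sketch of what formulated principles in such a file might look like; the specific rules are invented examples, not recommendations:

```markdown
# Project conventions (read by the AI companion at the start of every session)

## Coding principles
- Prefer pure functions; isolate side effects in a thin adapter layer.
- Every bug fix must come with a regression test before the fix itself.

## Architectural principles
- New modules follow the existing ports & adapters structure.
- No direct database access outside the repository classes.

## Review workflow
- Explain non-obvious changes in the commit message, not only in chat.
```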

One of the major fears currently discussed in both the developer and research communities is that this approach will lead to a massive de-skilling of developers. When you tackle a new technology, programming language, or architectural style with an agentic AI as your development companion: Do you still commit your learnings to active memory, so that you can access them long term? Or are you destined to fall into the vibe-coding trap and, bit by bit, understand less and less of what your AI companion does?

This is the hypothesis we would like to put to the test in this topic.

Objective(s)

You research what current studies say about de-skilling in the software development and software architecture area. Based on your findings, you design a small experiment that helps you tell, in a very small and non-representative setting of course, whether the de-skilling effect can actually be observed.

This could be done, for instance, by giving all team members the task of learning two new languages, architecture styles, or frameworks they do not yet know: one using an agentic AI like Claude Code, the other in the old-fashioned manual style with no AI at all, with just https://stackoverflow.com/ and the like as your guide. At the end, all team members take an "exam", and the results of manual learning are compared with those of AI-assisted learning.
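Such a within-subjects design (every participant experiences both conditions) can be evaluated with a simple paired comparison of exam scores. The sketch below is a minimal illustration using only the Python standard library; all scores are invented placeholder data, not results:

```python
from statistics import mean, stdev

# Hypothetical exam scores (0-100), one pair per team member.
# These numbers are purely illustrative, not real measurements.
ai_assisted = [62, 70, 55, 68, 60]   # learned with an agentic AI companion
manual      = [74, 72, 66, 71, 69]   # learned with docs / Stack Overflow only

# Paired differences: positive values mean manual learning scored higher.
diffs = [m - a for m, a in zip(manual, ai_assisted)]

mean_diff = mean(diffs)
# Cohen's d for paired samples: mean difference over the SD of the differences.
effect_size = mean_diff / stdev(diffs)

print(f"mean difference: {mean_diff:.1f} points")
print(f"paired Cohen's d: {effect_size:.2f}")
```

With a team-sized sample, no statistical test will be conclusive; reporting the raw paired differences and an effect size, and counterbalancing which condition each member starts with, is about as much rigor as the setting allows.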

Possible Research Question(s)

  • Does using an agentic AI coding companion (e.g. Claude Code) to learn a new programming language or framework lead to measurably lower comprehension and retention compared to unassisted learning?
  • Which specific skill dimensions (e.g. debugging, code reading, architectural reasoning) are most affected by AI-assisted learning?
  • Can specific interaction strategies with the AI (e.g. asking “why” questions, requesting explanations before code) mitigate the de-skilling effect?

Sources (Example)

  1. Anthropic Research (2026): How AI Assistance Impacts the Formation of Coding Skills — A controlled study finding that AI-assisted developers scored 17% lower on comprehension tests, with the largest gaps in debugging ability. Also shows that interaction strategy matters: asking “why” questions preserves learning outcomes. https://www.anthropic.com/research/AI-assistance-coding-skills