This page is still in draft state. Please note that the content may be subject to change.
Agentic AI companions like Claude Code are transforming software development, but there is growing evidence that heavy reliance on AI coding assistants may impair developers’ skill formation — particularly when learning new languages or frameworks. This topic investigates the de-skilling hypothesis through literature research and a small-scale controlled experiment comparing AI-assisted vs. traditional learning outcomes.
Agentic AI enables an immensely powerful and productive new style of developing software. You interact with the AI as with an "always online, never tiring" companion with whom you do pair programming — but with only your AI counterpart at the keyboard. Your part is then to very carefully review what your AI companion has changed in the repository. You go through the changes step by step, ask questions about code you don't understand, request changes, and build a developer memory by formulating coding and architectural principles that you would like to be applied in the future.
One of the major fears currently discussed in the developer and research communities is that this approach will lead to a massive de-skilling of developers. When you tackle a new technology, programming language, or architectural style with an agentic AI as your development companion: Do you still commit your learnings to your active memory, so that you can access them long term? Or are you destined to fall into the vibe-coding trap, understanding less and less of what your AI companion does over time?
In this topic, we would like to put this hypothesis to the test.
You research what current studies say about de-skilling in software development and software architecture. Based on your findings, you design a small experiment that helps you determine — in a very small and non-representative setting, of course — whether the de-skilling effect can actually be observed.
This could be done, for instance, by giving all of your team members the task of learning two new languages, architectural styles, or frameworks they do not yet know: one using an agentic AI like Claude Code, the other in the old-fashioned manual style with no AI at all, relying only on resources such as https://stackoverflow.com/ as a guide. At the end, all team members take an "exam", and you compare the outcomes of manual learning with those of AI-assisted learning.