Generative AI in Software Development: Upskilling vs Deskilling - The Battle for Engineering Craftsmanship

Hong

The hum of generative AI is everywhere in software development. Tools like Spring AI promise to turn complex AI engineering into a matter of simple API calls. GitHub Copilot suggests entire functions. ChatGPT drafts SQL queries and API integrations in seconds. It feels like development has entered an "easy mode" – but is this empowerment or the slow erosion of engineering craftsmanship?

We’re witnessing two competing narratives unfold. On one side, the upskilling camp argues AI acts as a force multiplier for skilled engineers. It automates boilerplate code, routine debugging, and repetitive tasks, freeing experts to focus on high-value problems: system architecture, novel algorithm design, or tackling ambiguous edge cases. As IBM’s research notes, humans shift from creation to curation and direction, demanding skills like critical evaluation, strategic AI prompting, and contextual problem-solving. Senior engineers aren't replaced; they're amplified, leveraging AI to operate at higher levels of abstraction and innovation.

Conversely, the deskilling narrative warns of a troubling leveling effect. Studies on GitHub Copilot show junior developers completing tasks significantly faster, sometimes matching the output speed of seasoned pros with AI assistance. Research highlighted by Crowston found non-programmers could generate functional HTML code with ChatGPT at speeds rivaling professionals. This raises a critical question: if foundational coding tasks become largely automated, how do junior engineers develop the deep intuition, debugging instincts, and problem-solving resilience traditionally built through hands-on struggle? The "missing ladder rung" problem looms large – if entry-level tasks vanish into AI, where does foundational skill-building happen?

This isn't just about productivity metrics. It strikes at the core identity of engineering. When complex tasks are reduced to crafting prompts and integrating pre-built AI outputs, does "engineering" become more about API orchestration than deep technical mastery? The Spring AI article’s provocative statement – that "AI engineering" often just means calling an LLM over HTTP – captures this tension. Are we cultivating a new breed of "builders" empowered by accessible AI tools, or are we creating a dependency that erodes the very expertise needed to validate, optimize, and secure AI-generated solutions?
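To make the "LLM over HTTP" point concrete, here is a minimal Go sketch of what much of this work reduces to: marshaling a prompt into JSON and building a POST request against an OpenAI-compatible chat endpoint. The base URL, API key, and model name are placeholders, not any specific provider's values.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Message and ChatRequest mirror the request body shape of a typical
// OpenAI-compatible chat-completions endpoint.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
}

// newChatHTTPRequest builds the POST request that "AI engineering"
// so often boils down to: a JSON payload sent over HTTP.
func newChatHTTPRequest(baseURL, apiKey, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(ChatRequest{
		Model:    model,
		Messages: []Message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+apiKey)
	return req, nil
}

func main() {
	req, err := newChatHTTPRequest(
		"https://api.example.com", "sk-placeholder",
		"some-model", "Draft a SQL query for me.")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

A few dozen lines, no model weights, no math – which is exactly why the question of what remains "engineering" here is worth asking.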

The risk of over-reliance is real. If engineers uncritically accept AI outputs, subtle bugs, security flaws, or inefficient patterns proliferate. Crowston’s research flags this as a particular liability for less experienced developers who lack the expertise to spot flawed AI suggestions. This isn't hypothetical – consider an AI-generated SQL query that works functionally but introduces a massive performance bottleneck, or a Copilot-suggested algorithm that's mathematically unsound for edge cases. Without deep understanding, engineers become troubleshooters in a system they don’t fully comprehend, leading to organizational fragility when complex failures occur.

So, where does the balance lie? The future likely involves a hybrid reality, and the outcome depends heavily on deliberate choices:

  1. Prioritize Critical Evaluation Skills: Treat AI output as a first draft, not a final product. Training must emphasize rigorous testing, code review of AI suggestions, and understanding the "why" behind generated solutions. Prompt engineering isn’t just about getting output; it’s about framing problems effectively and knowing how to interrogate the results.
  2. Preserve the Learning Curve: Organizations must intentionally design roles and tasks that prevent skill atrophy. This could mean "training mode" projects with limited AI assistance for juniors, mandatory deep dives into AI-generated code, or rotating engineers onto legacy systems requiring manual understanding. We cannot automate the accumulation of engineering intuition.
  3. Redefine "Senior" Expertise: The value of senior engineers will shift even more towards architecture, system design, complex integration, risk management, and mentoring – guiding the use of AI tools effectively and ensuring robust outcomes. Their deep experience becomes vital for solving novel problems AI hasn’t encountered.
  4. Focus on the Human Edge: AI struggles profoundly with true creativity, complex interpersonal dynamics, ethical reasoning, and navigating ambiguous, ill-defined problems. These uniquely human skills become more valuable, not less. As the Laetitia at Work analysis emphasizes, soft skills like navigating conflict or reading a room remain irreplaceable and take years to develop.

The "easy mode" of AI-powered development isn't inherently good or bad. It's a powerful tool. Used thoughtlessly, it risks creating a generation of engineers with surface-level skills, unable to operate without their AI co-pilot, vulnerable to the brittleness of systems they don't deeply understand. Used strategically, it can democratize access to development, amplify human potential, and free engineers from drudgery to focus on truly innovative, impactful work. The difference lies not in the technology, but in how we choose to integrate it – prioritizing deep understanding alongside efficiency, and ensuring that as AI handles the "easy," we continuously cultivate the hard-won expertise that makes engineering an art as much as a science.

Written by

Hong

I am a developer from Malaysia. I work with PHP most of the time; recently I fell in love with Go. When I'm not working, you'll find me ballroom dancing :-)