A couple of years ago, mentioning Generative AI in software companies led to skepticism and strict security measures. IT departments viewed tools like ChatGPT and GitHub Copilot as potential sources of intellectual property leaks and significant risks. Corporate policies were explicit: “prohibited or strictly regulated usage.” The vast majority of programmers used artificial intelligence (AI) at their own risk and without official permission.
But industry standards changed in the blink of an eye. What was once seen as a threat has become the standard. The question is no longer “Should we use AI?” but “How can we integrate it more efficiently?” Some companies now measure developer productivity by how well their engineers adopt and use AI tools. We have moved from prohibition to a tacit obligation.
The evidence of this change is everywhere, and we developers have witnessed it. Recent studies and surveys paint a clear picture:
- GitHub Survey (October 2024): Revealed that 99% of developers in the United States already use AI tools in their workflows. GitHub claims that Copilot, its flagship tool, is responsible for generating up to 40% of the code in supported languages.
- JetBrains Developer Survey (2024): Corroborates this trend, indicating that over 70% of developers have incorporated AI into their processes, primarily for autocompletion, code generation, and error detection. This has given rise to a flourishing ecosystem of tools, from Copilot and Amazon CodeWhisperer to specialized “Prompt Engineering for Developers” courses and mandatory ongoing training within companies. The critical skill is no longer just knowing how to program, but knowing how to orchestrate AI to program for you, or alongside you.
My Experience: The Project Where AI Writes the Code (And Everything Else)
My current project, a legacy migration from Java to C#, has shown me firsthand how software roles are shifting. This is not a traditional migration project; it is a laboratory of extreme, AI-driven automation where our role as programmers has been radically redefined.
These are the steps we have followed, working hand-in-hand with AI:
Ticket and Documentation Generation: User stories are no longer written manually. An AI prompt analyzes the legacy Java code and automatically generates the ticket description, acceptance criteria, and necessary technical documentation.
C# Code Generation: The core of the process. We use advanced prompts and AI tools to automatically translate Java functions and classes into C#. Our task is not to write code from scratch, but to provide the precise context.
Unit Test Creation: Once the migrated code is written and its functionality verified, unit tests are generated with a simple prompt in Copilot.
Automated Code Review: The code review process is also AI-assisted. A tool handles static analysis, checks patterns, suggests improvements, and detects obvious inconsistencies before a human peer reviews it. All of this happens in the pull request, where feedback is provided through comments on the code lines.
Validation via Automated Testing: The method for testing the fidelity of the migration is simple. Automated tests feed the same data into both systems (the legacy Java and the new C#) and compare the results. If the outputs match, the migrated code is considered functionally equivalent.
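To make the code-generation step concrete, here is a minimal sketch of how a migration prompt might be assembled before it is sent to the model. The `build_migration_prompt` helper and the prompt wording are illustrative, not the actual prompts used on the project:

```python
# Hypothetical helper that wraps a Java snippet in the context the model
# needs to emit idiomatic C#. The wording is an assumption for illustration.

def build_migration_prompt(java_source: str, context_notes: str = "") -> str:
    """Build a translation prompt from legacy Java source plus business context."""
    return (
        "You are migrating a legacy Java codebase to C# (.NET).\n"
        "Translate the following Java code into idiomatic C#, preserving behavior.\n"
        "Keep public method signatures equivalent and note any semantic differences.\n"
        f"Additional business context: {context_notes or 'none'}\n\n"
        "Java source:\n"
        f"{java_source}\n"
    )

java_snippet = "public int add(int a, int b) { return a + b; }"
print(build_migration_prompt(java_snippet, "simple arithmetic utility"))
```

The point of the helper is the article's "provide the precise context" step: the value added by the human is the business context and constraints, not the translation itself.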
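The validation step can likewise be sketched as a small harness. On the real project the two sides are the legacy Java service and the new C# service; the Python callables below are stand-ins for those systems:

```python
# Sketch of a dual-run validation harness: feed identical inputs to both
# implementations and collect any mismatches. The two lambdas below are
# stand-ins for calls to the legacy Java and migrated C# systems.

def compare_systems(legacy_fn, migrated_fn, test_inputs):
    """Run every test input through both implementations; return mismatches."""
    mismatches = []
    for case in test_inputs:
        legacy_out = legacy_fn(case)
        migrated_out = migrated_fn(case)
        if legacy_out != migrated_out:
            mismatches.append(
                {"input": case, "legacy": legacy_out, "migrated": migrated_out}
            )
    return mismatches

# Stand-ins: pretend these call the Java and C# systems respectively.
legacy_tax = lambda amount: round(amount * 0.19, 2)
migrated_tax = lambda amount: round(amount * 0.19, 2)

diffs = compare_systems(legacy_tax, migrated_tax, [100, 250.5, 0])
print("migration OK" if not diffs else f"{len(diffs)} mismatching cases: {diffs}")
```

An empty mismatch list is what the article calls a match; any entry in the list points directly at the input that exposed a behavioral difference, which makes failures easy to triage.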
In this ecosystem, my primary function has ceased to be “programming.” I have become a supervisor of AI-generated code. My value lies in:
- Designing the right prompts to get the desired output.
- Understanding the business context to validate that the AI has not made a conceptual error.
- Making high-level architectural decisions that the AI is not yet capable of assuming.
- Keeping constant communication with the client to avoid migrating outdated components and to enhance the code during the migration.
Conclusion and Reflection:
This new role of “supervisor” is incredibly efficient and undoubtedly marks the future of our craft. However, an uncomfortable and inevitable question arises: Along the way, are we beginning to lose the analytical capacity and programming logic that has always characterized us?
The dependence on AI for routine tasks frees us to think about more complex problems. But what about the junior developer who no longer needs to struggle for hours with a loop or a data structure because the AI provides it instantly? Are they missing the opportunity to forge their intuition and mental ability for problem-solving?
Programming was never just about the final code; it was about the thought process, the methodical decomposition of a problem, the teamwork to build code from ideas, and the creative search for a solution. If we outsource that thought process to an AI, we risk becoming technicians who only know how to push the right button, without truly understanding the machinery underneath.
The challenge for the new generation of developers (and for those of us in this transition) will be to find the balance. We must embrace the power of AI while still actively cultivating our skills in logical thinking, deep analysis, and creative problem-solving. There is no near-future scenario in which AI has total control over a project and human intervention becomes unnecessary. We build software with AI for use by human beings, and as long as that is true, our role as programmers will remain relevant. Maintaining analytical and logical skills is not optional: the ability to detect AI errors, as code supervisors, will be a necessary skill for staying competitive in the technology field in the near future.
Top comments (14)
I think we should still learn to code, because how else are we to validate the code that's written by AI ? So we still need to understand the basics of coding, the web (HTML/CSS/JS), the HTTP protocol, and so on ...
You could say "but, AI will also do the code reviews" - but then we'd be totally at the mercy of what the AI tools are doing ...
Final conclusion (at least for now): mastering the basics will still be important - and even writing some code "by hand" will remain important.
Thanks for your comment. For my current project, we require an AI code review from Copilot and two code reviews from developers. This entails one endorsement from AI followed by two approvals from humans. We need a senior review to ensure quality, and I believe this will continue for a long time.
I fixed the article's format and would appreciate it if you could share it.
Share it? :-)
However, my point, basically (but I'm sure you understood that) was that devs, even when they would mainly be acting as "managers" or "directors" (of AI tools), will still need to learn at least basic coding skills, and have a grasp of the 'fundamentals' (HTML, CSS, JS, HTTP etc) - because, how else are they going to assess whether what the AI tools produce make any sense at all?
So, at least that part of the education (training) of devs will (should) remain, for the foreseeable future ...
But, I completely agree that the focus, both of education/training, and of the day to day work, is going to shift, no doubt about it ... a lot will change, even when some things will remain the same.
Yes, I agree! Leveraging AI as a supplementary tool will be seamless for experienced developers who have the knowledge to understand how AI operates and to identify potential issues in the process. However, I am concerned about how novice developers, who rely solely on AI for their development tasks, will address any problems that may arise from these tools.
"I am concerned about how novice developers, who rely solely on AI for their development tasks, will address any problems that may arise from these tools"
That's exactly why I'm saying that mastering the basics/fundamentals (also by, or especially by, junior/novice devs) will remain important (necessary), even when a large part of the code will be 'written' by AI tools ...
@leob I would go even one step further, and say that we need not just basic, but deep understanding of our craft to review and ideate on complex problems and solutions.
Although a lot of things @mteheran stated here stand.
I think we should definitely adopt AI, but in a strategic manner - augmenting our abilities and potential by leveraging human strengths (critical thinking, judgment, creativity) and AIs (pattern recognition, scale, performance) in synergy.
I read this and thought, nah, AI is just an overpriced autocomplete
Brilliant, that's a dose of sorely-needed antidote to the AI hype :-)
I believe AI still has limitations that require human oversight. A human companion is essential to catch errors, guide improvements, and ensure AI systems behave responsibly and effectively.
Thank you for sharing your experience.
Your post made me think about how I want to work with AI in the future.
Thanks for your comment.
Great post.
The “AI code supervisor” vision probably captures the short-term trajectory well. Humans will prompt, verify, and apply judgment. But that framing assumes that software will continue to be structured around what fits in human working memory—our tendency to bundle complexity into a few clean abstractions we can reason about consciously.
That constraint has actually served us. It nudges us toward simple, compressible models, and by Solomonoff induction, simpler explanations have a stronger prior. In that sense, our limited working memory acts as a kind of inductive filter, making us surprisingly effective and sample-efficient in problems that can be expressed through clean conceptual structure.
We do operate in high-dimensional spaces too—recognizing a cat in an image involves thousands of interacting features—but that process happens outside conscious reasoning. We can perform the recognition without being able to articulate it.
AI systems don’t inherit that same cognitive bottleneck. They can blend symbolic structure with sub-symbolic pattern recognition without needing everything to collapse into a human-legible abstraction. As these systems take on more of the “thinking,” it's possible that software development drifts toward latent-space manipulation, where the core units of organization are no longer concepts we can fully grasp or name.
If that happens, our role won’t be to “understand all the code” in the traditional SICP sense. It will be closer to probing and steering behavior, accepting that parts of the system exceed our capacity for explicit comprehension. Our taste for simplicity will remain useful—but maybe only within the subset of problems that are amenable to being simplified at all.
We’re moving from writing code to guiding AI, but knowing the fundamentals is still what keeps us in control.
good idea, interesting to try