How do you envision the role of AI in software development evolving in the future?

Last Updated: 26.06.2025 01:15

In the past three years there have been three pivotal moments:

the introduction of code completion tools (GitHub Copilot etc.) which liberate devs from memorizing precise syntax,

conversational LLM agents (ChatGPT, Claude etc.) that can accelerate research, simulate brainstorming and perform small technical tasks,

agent-centric IDEs (Cursor, Windsurf, Claude Code…) which empower agents to reason about an entire codebase, provide more actionable answers and perform more useful tasks.

We are entering a new phase of uncertainty. In the late 2010s/early 2020s (the “pre-Copilot era”) the developer experience was concentrating around fewer tools with large adoption. Now the market for these tools is fragmented again.

We’re still waiting to see how the dust is going to settle, IMO.

I think that “vibe coding”, i.e. giving a brief description of what you want to achieve and getting fully functional code as a result, is going to have very limited impact. It works, yes, but only in very specific cases; it doesn’t scale well, and the savings it creates are not worth the trouble in the general case.

The trends I expect to continue are:

a larger part of the code in codebases is going to be generated. This doesn’t mean that a large portion of the tasks once handled by humans can be entirely delegated to AI, but rather that, in a typical commit, an increasingly large proportion of the changed lines of code will be written automatically.

In the “pre-Copilot era” there was a general push towards code quality, in the sense that developers were nudged into writing code that was easy for their fellow developers to maintain. Code quality is going to evolve into: code that AI agents find easy to work with. The two are not incompatible, but it means things like more comments and more tests, as sketched below.

developers will spend less time typing code and more time thinking about code: describing their projects and discussing what they want to achieve with an agent, which requires reasoning about and formalizing what they want to accomplish.

Developers will spend more time on quality assurance, both upstream and downstream: thinking about how a piece of code should integrate into the larger whole, what the signals are that it’s broken, and what logs, testing, monitoring and alerting should be put in place.
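To make those last two points concrete, here is a minimal, hypothetical sketch (in Python; the function, names and numbers are purely illustrative, not taken from any real codebase) of code written with comments, tests, logging and failure signals in mind:

```python
# A hypothetical illustration of "code that AI agents find easy to work with":
# explicit types, descriptive comments, logging that surfaces failure signals,
# and a test kept next to the implementation.
import logging

logger = logging.getLogger("orders")


def apply_discount(total_cents: int, discount_pct: float) -> int:
    """Return the order total in cents after applying a percentage discount.

    Raises ValueError for discounts outside 0-100 so that a broken caller
    fails loudly instead of silently corrupting totals - a useful signal
    for both humans and agents reading the logs.
    """
    if not 0.0 <= discount_pct <= 100.0:
        logger.error("rejected discount_pct=%s for total_cents=%s", discount_pct, total_cents)
        raise ValueError("discount_pct must be between 0 and 100")
    discounted = round(total_cents * (1.0 - discount_pct / 100.0))
    # Log the decision so monitoring/alerting can track unusual discount rates.
    logger.info("applied discount_pct=%s: %s -> %s cents", discount_pct, total_cents, discounted)
    return discounted


# A test alongside the code: cheap to run, and it documents intent.
def test_apply_discount() -> None:
    assert apply_discount(10_000, 25.0) == 7_500
    assert apply_discount(10_000, 0.0) == 10_000


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    test_apply_discount()
    print(apply_discount(10_000, 25.0))
```

The point is not the specific function, but that the intent, the failure modes and the verification live right next to the code, which is exactly what both a human reviewer and an AI agent need in order to work with it safely.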
