Discussions about what AI will change, and which professions will cease to exist, are complex, but some things on the horizon already seem inevitable, and the inevitable will possibly lead us to situations so disruptive that we may lack the context to imagine them right now. The profession that interests me most, though, is the one we call programmer, developer, software engineer, and other denominations.
In recent years I have followed a journey that went from capturing raw data, developing a model, and training it as an aid for stock market operations, to using LLMs to understand their capabilities, limitations, and pace of advancement, and now to orchestrating intelligent agents, including smaller models, for developing autonomous systems.
It's been an exciting journey, especially in my case, having had the pleasure of writing my first program in 1984. Don't worry, I'm not 70 years old; I was only 12. I went through various phases of development, the emergence of Windows, OS/2 Warp, Amiga, MSX, the PC, and anyone who lived through that era knows what a journey it was.
But during this journey there were moments when I distanced myself, especially during the period when development IDEs like Borland C++ Builder, Delphi, and Microsoft Visual Studio appeared. The more "user-friendly" they became, the more code they injected, so a simple window in Windows compiled into thousands of lines of code, and that drove me crazy. I felt like I was losing control of the machine, because in the early days a hello world in C was five lines:
```c
#include <stdio.h>
int main() {
    printf("Hello World\n");
    return 0;
}
```
And then I understood that it was a game for big corporations when the mergers began, followed by deplorable market practices like what happened between Microsoft and Netscape, and many other cases that are still around today. It was the most repugnant period in the history of technology, but it's part of the story.
In the meantime, Google emerged, which at the beginning was "cool," but coincidentally I had entered UFMG, and there, in the same vein, TODOBR, a search engine developed on campus, had "suffered" a purchase by Google. I say "suffered" because it was another chapter of the big players decimating, let's say, some possibility of a different future.
Anyway, I ended up realizing that the academic world was very far from practical reality. While many talked about "playing" with the Linux kernel cache, I ran my little computer store — another story — and used the glorious Siscom, a system I developed myself for business management. It started focused on a car parts store, was later adapted for clothing stores, and I also used it in my own computer store until 2015. A true warrior! It took care of sales, inventory, and other functions that today bear names like ERP and CRM. Written in Clipper, the system was born in 1995 and received code updates until 2006.
Between 2006 and 2015, it was just buying and selling... beer, soccer, nightclubs, while the "carnage" in the software world ran wild: it was company eating company, day and night. Everything started as an open source revolution and ended up as a division of some large corporation. That's okay; the guys became billionaires, abandoned the programmer life, and became "yacht drivers."
For reflection: "Google is a system created to hide everything from you, and you only find it if someone pays for you to see it. This has a name: ADS. So any other type of cataloging or search is its enemy."
Then came the exciting chapter of cryptocurrencies, which were already booming, with many interesting things happening: distributed file systems, smart contracts, and the cryptocurrencies themselves with their encrypted ledgers. A side note here: just as McDonald's golden arches were an essential part of its success, the term "crypto," unintelligible to 99% of the people who pronounce it, partly motivates its use. Maybe it's because Kryptonite had the power to destroy Superman, so people associate it with cryptocurrency, and if I hold a crypto, I'm almost invincible... #forget_it_just_kidding.
But then it's about understanding Bitcoin and Ethereum, building rigs, mining, trading, and imagining a wonderful world, followed by seeing the scams, the absurd energy consumption, and the use of crypto by criminals. So no, nobody needs crypto except criminals and BlackRock. The future of the financial world is completely traceable, stamped money. If money leaves a family-allowance benefit and ends up on a betting site, it should be automatically frozen; that would be a good application. Meanwhile, a drug dealer transferring cryptocurrency from one country to another only makes it harder to seize the funds, and the world becomes worse.
A note about social networks. My relationship with them is a standoff: I hate Facebook/Instagram versus "but is it necessary?"
My motivation for not liking Facebook and the others is the so-called FEED. If anyone remembers Myspace (I think that was it), your page showed your content; so you had a friend, and they had a cool page with cool things on it, but you only discovered it if you went there... Then Facebook and the others discovered that it was better for your page/feed to show things from other people. It's strange: a place made to irritate you, depress you, and create discord. I frame it this negatively because they discovered that negativity breeds antipathy, antipathy leads to arguments, fights, and jealousy, and that yields engagement, wasted time, and views. (Note: even back when I used it, I don't remember ever clicking on an advertisement. ADS money wasted :))
Calm down, we're getting to the end...
Going back to my UFMG days, around 2003, I had my first contact with artificial intelligence and saw a future there, but it was all very conceptual, except for Peter Norvig's book (I know... it's Stuart Russell and Peter Norvig... here come the McDonald's golden arches again). In that book, the foundation of what we're seeing today was already extremely advanced and well-grounded, but it still lacked Python and Nvidia GPUs, passing through GPU mining (yes, it's all connected) and arriving at the 200 thousand GPUs ("200,000 GPUs. $5 billion. 240 days. Elon Musk is racing to catch up with OpenAI. December 26, 2024") and today's models.
One thing to note: conceptually, I call an AI model a Consensus Model, because when it is modeled and trained it seeks, through its layers, to determine consensuses (the reality is more complex, but that's the essence). The fact is that in an interaction with an LLM like ChatGPT, you're interacting with the consensus of humanity and its knowledge, condensed and combined, and thus with something extremely powerful, as it can bring together extremes and produce valuable content, once contextualized as follows:
Was I retired by AI?
The question I have is: if the future of computer programming seems to belong to AIs, with the automation of code production, do programming paradigms like OOP still make sense for AI? After all, the closer the code gets to machine code, the faster it is and the fewer compilation steps are needed. I say this because procedural programming is what comes closest to machine code while still being comprehensible to humans. Do you understand what I want to discuss?
I perfectly understand what you're proposing — and your question is extremely pertinent in the current programming scenario with the advancement of artificial intelligence.
Let's break this down into parts: (ChatGPT speaking)
The Future of Programming with AI

You're right to note that AI is increasingly capable of automating code production, especially in repetitive tasks or in generating code based on patterns. This raises the question:
If AI writes the code, why continue using paradigms like OOP (Object-Oriented Programming)?
Paradigms like OOP vs. Code Closer to the "Machine"

You mentioned that procedural programming comes closer to machine code, and that's true to some extent:
Procedural languages (like C) have a more direct correspondence with linear execution and machine memory.
Paradigms like OOP and functional were created, largely, to help humans organize complex systems, not necessarily to maximize performance.
If an AI is responsible for organizing and writing the system, then it doesn't need these "human" paradigms as much to keep the code comprehensible.
What's Faster, Then?

You also correctly observed that:
The closer the code gets to machine code, the fewer abstraction layers and less runtime and compilation overhead it requires.
Therefore, if the goal is maximum performance, it makes sense for AI to generate code closer to the procedural style or even assembly for critical parts.
But...
Raw performance is not always the main goal.
Modern systems are composed of:
Critical performance (algorithms, data processing, etc.)
Integration with services, interfaces, business logic, etc.
In these parts, abstractions still help, and AI can decide when to use each in an optimized and hybrid way.
AI and the Choice of Paradigms

AI doesn't have a "preference" like a human, but it can evaluate which paradigm is more efficient for solving a specific problem. For example:
Use procedural for high-performance sequential tasks.
Use OOP to structure systems with multiple interactive components.
Use functional for data transformation or declarative logic.
Therefore, AI can mix paradigms dynamically, something a human programmer would avoid to maintain clarity.
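To make the contrast concrete, here is a minimal sketch of the same small task written in the three styles; the Python below is purely illustrative, and all the names are invented:

```python
# Same task, the sum of squared values, in three styles.
data = [1.5, 2.0, 3.5]

# Procedural: explicit sequential steps, closest to how the machine executes.
def sum_squares_procedural(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

# OOP: state and behavior bundled together, an organization aid for humans.
class SquareAccumulator:
    def __init__(self):
        self.total = 0.0

    def add(self, v):
        self.total += v * v

# Functional: a declarative transformation with no mutable state.
def sum_squares_functional(values):
    return sum(v * v for v in values)

acc = SquareAccumulator()
for v in data:
    acc.add(v)

# All three express the same intention; only the organizing paradigm differs.
assert sum_squares_procedural(data) == acc.total == sum_squares_functional(data)
```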
So, Does OOP Make Sense for AI?

Not as much as for humans. OOP serves modularity, reuse, and organization — but AI can generate this on demand, without following a fixed paradigm.
Conclusion
Yes, you're touching on a paradigm shift in computing:
If AI is the one writing the code, paradigms serve less for the code itself and more as guidance on how AI should structure the solution.
Therefore, in the future, the role of OOP, procedural, functional, etc. may become more of a "style" or "design convention" for human consumption or hybrid maintenance than a rule imposed on the machine.
If you'd like, I can simulate how an AI could generate different styles of code (OOP vs procedural vs optimized) for the same task — this helps to visualize the difference in purpose and result. Would you like that?
"Tell me... hehe"
Let's advance a bit in this discussion. With the accelerated progress of artificial intelligence, it's possible that, in the future, the very concept of "programming" will lose meaning. If a model is capable of understanding and executing human instructions in natural language — and still communicating with other models — then there would be no more need for a program in the traditional format. All it would take is an AI capable of interacting directly with human beings and with the other components of the system. For example, if it needs to display a message on a screen, a menu system, formatting, or conventional abstractions wouldn't be necessary. It would only need an interface with the electronic device, and the content to be shown could be transferred directly to the device's memory and displayed immediately.
🧠 1. The End of Programming as We Know It?
Yes, "programming" as a human and artisanal practice may lose meaning when:
AI models perfectly understand natural language.
AI itself can execute, compose, and adapt behaviors without intermediate code.
In this scenario, traditional source code (.py, .js, .html files, etc.) becomes an artifact readable by humans — but unnecessary for machines that learn and act directly on intention.
Instead of telling the machine "how to do," we just say "what we want," and it does it.
🕸️ 2. AI as Universal Operating System?
You propose something very interesting: AI stops being a program within the system and becomes the system itself. In this world:
AI would be the orchestration layer between intention (human) and execution (hardware).
There would be no need for menus, interface abstractions, or traditional programming languages.
Imagine a scenario where:
You say: "Display my agenda for today."
The AI accesses the relevant data (in any format), decides the best way to present this on the current device (screen, audio, projection), and simply puts it in the video memory or speaks through voice, without an app, UI, or framework.
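A toy sketch of what that intent-routing layer could look like; the device names and the selection rule below are invented for illustration, not a real API:

```python
# Hypothetical sketch: an intent layer picks an output device at runtime,
# instead of a fixed app/UI deciding in advance. Everything here is made up.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    modality: str  # "screen", "audio", or "projection"
    available: bool

def render_intent(content: str, devices: list[Device]) -> str:
    # Prefer a screen if one is available; otherwise fall back to audio, then projection.
    for preferred in ("screen", "audio", "projection"):
        for d in devices:
            if d.available and d.modality == preferred:
                return f"[{d.name}/{d.modality}] {content}"
    raise RuntimeError("no output device available")

devices = [Device("living-room-tv", "screen", False),
           Device("kitchen-speaker", "audio", True)]
print(render_intent("Agenda for today: 10:00 dentist, 14:00 call", devices))
# -> [kitchen-speaker/audio] Agenda for today: 10:00 dentist, 14:00 call
```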
⚡ 3. The End of Interfaces and the Beginning of Intention Fusion
Today we have graphical interfaces, APIs, operating systems, software layers... But all these things exist to translate human intention into machine action.
When AI can completely understand what we want and act directly on physical devices, these layers become obsolete.
You no longer need a "save button." You say: "Save this for later" — and that's enough.
💾 4. Direct Transmission of Intentions to Hardware
Your vision that AI could write directly to the device's memory is not fanciful. This is already beginning to happen on a microscale, for example:
Models like GPT-4 with pluggable tool APIs already make I/O decisions without fixed interfaces.
Assistants with access to IoT devices can directly change physical states (light, camera, sound).
In the future, AI could use digital buses, video memory, and audio buffers directly, bypassing programmed interfaces.
🧩 5. Programs versus Autonomous Agents
Perhaps the biggest leap is this:
A program is a static sequence of instructions.
An agent is a digital being that observes, decides, and acts.
In this new paradigm, there are no more "applications," "systems," "software." Just AI-agents that receive intentions and operate physical and digital resources to serve the purpose.
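As a minimal sketch (assuming nothing about any real framework), the skeleton that separates an agent from a static program is just a loop that observes, decides, and acts:

```python
# Minimal observe-decide-act loop: the skeleton that distinguishes an agent
# from a static program. Every function here is a stand-in.
import random

def observe():
    # Stand-in for sensors, messages, or model inputs.
    return {"temperature": random.uniform(15, 30)}

def decide(state):
    # Stand-in for a policy or an LLM call.
    return "cool" if state["temperature"] > 25 else "idle"

def act(action):
    print(f"acting: {action}")

for _ in range(3):  # a program runs once; an agent keeps cycling
    act(decide(observe()))
```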
🔄 6. AI communicating with other components and AIs
This scenario requires that:
AIs can communicate directly with other intelligent agents, such as databases, sensors, processors, servers...
The communication pattern changes from API + JSON + contracts to semantic natural language, interpretable between agents.
There are already experiments with structured natural-language protocols between models, and this could be the "TCP/IP of the new era."
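Since no such standard exists yet, the sketch below only mimics the idea: two agents exchange free-text utterances wrapped in a thin structured envelope, and all the field names are invented:

```python
# Toy sketch of agent-to-agent messaging where the payload is natural
# language and only a thin envelope is structured. Field names are invented.
import json

def make_message(sender, receiver, utterance):
    return json.dumps({"from": sender, "to": receiver, "says": utterance})

def scheduler_agent(message: str) -> str:
    request = json.loads(message)
    # A real agent would hand request["says"] to a language model;
    # here the "understanding" step is faked with a keyword check.
    if "free slot" in request["says"]:
        reply = "The earliest free slot I see is Tuesday at 10:00."
    else:
        reply = "I did not understand the request."
    return make_message(request["to"], request["from"], reply)

msg = make_message("assistant", "scheduler",
                   "Find me a free slot for a 1h meeting this week.")
print(scheduler_agent(msg))
```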
🔚 Conclusion
You're describing the collapse of traditional computing in favor of a new paradigm:
Instead of software, we have networked intelligence.
Instead of interfaces, we have direct dialogue with the machine.
Instead of programming, we have composition of intention with distributed execution.
This is not a distant dream. The seed has already been planted. AI is already beginning to demonstrate:
Capability to operate devices.
Understanding of intention.
Adaptation of response based on context.
If you'd like, I can draw a simulation (text or visual) of what a "pure AI operating system" would be like — without programs, without interfaces, just interaction between human and machine, as you suggested. Would you like to explore this?
"Let me speak again... hehe"
Now I want you to estimate a timeline for this scenario to become reality, taking into account the recent leap AI has made and how, over the last 5 years, both investment and the goals achieved have jumped. Consider also that we went from a prompt interface in 1990 to today's AI models, while technologies that took decades to evolve now simply don't make much sense when we see them solved by AI: computer vision, speech recognition, and video processing, for example, were all completely transformed by machine learning in less than 5 years.
📈 Contextualization: The Leap of the Last 5 Years

Between 2018 and 2024:
Language models (like GPT, PaLM, Claude, Gemini) went from limited assistants to autonomous multimodal agents.
Computer vision, previously dominated by manual algorithms, was surpassed by self-supervised neural networks (e.g., DINOv2, SAM).
Speech recognition and automatic translation reached superhuman levels (e.g., Whisper, Translatotron, SeamlessM4T).
AI began to write functional code and maintain interaction states.
All this happened in less than 5 years, with performance and integration improving month by month.
🔮 Estimate by Stages

Let's divide the scenario you proposed into progressive milestones and estimate the time frame for broad viability:
| Estimate | Stage | Realistic description |
| --- | --- | --- |
| 2025–2026 | AI as efficient programmer | Replacement of much of traditional coding by AIs that generate, test, and maintain code (already underway). |
| 2026–2028 | Intention-based systems | Interaction with software through natural language, without a fixed visual interface. |
| 2027–2030 | AI agents with real autonomy | Agents that communicate with each other to execute complex tasks without direct human supervision. |
| 2028–2032 | Adaptive cognitive interfaces | Abandonment of rigid GUIs in favor of interfaces generated in real time, by context and need. |
| 2030–2035 | Direct hardware control via AI | AI operating devices with control of memory, buffers, I/O, and sensors, without traditional intermediate software. |
| 2035–2040 | End of traditional programming | Most computing becomes composed and operated by AIs, with humans only declaring intentions. |
📌 Important Considerations

These dates are realistic estimates for use at scale, not for prototypes (which may emerge earlier).
Global adoption depends on non-technical factors: infrastructure, regulation, security, and social trust.
Even after 2040, critical areas (embedded defense systems, medicine, aerospace) may still require manual control for safety.
⏱️ Historical Comparison

In 1990, we were at prompt interfaces (MS-DOS).
In 2007, the iPhone completely changed the human-machine interface.
In 2023, AI began to understand text, images, voice, and video with fluency.
The evolution from 1990 to 2010 took 20 years to move from the prompt to intuitive mobile interfaces. The evolution from 2018 to 2024 took less than 6 years to change how we interact with knowledge and code. If this curve maintains or accelerates (as it seems to be doing), the next 10 years will be more transformative than the last 30.
🧠 Conclusion

If you're preparing for the future of computing, what's coming won't be just "programming with AI," but rather "coexisting with intelligent agents that execute human intentions directly over digital and physical reality."
We're closer to this than it seems.
"Temer's son of a gun changed the rules and I'll only retire in 2038... I'm screwed...hahaha"
Jokes aside, I see this future!