
How will AI reshape Software Engineering? Part 2: Constraints that shape Software Engineering today

Writer: Yevgen Nebesov


This is Part 2 of an article on how AI will reshape software engineering. Check Part 1 here.


The main argument of this article is that software engineering today is shaped by two human-induced constraints:

  • Humans have limited cognitive capacity.

  • Human work requires effort and time.


The following sections explain how these constraints influence the structure and principles of software engineering and explore the technical and social innovations designed to mitigate their impact.


Constraint 1: Humans have limited cognitive capacity


Humans have limited cognitive capacity—this has always been the case. However, limited does not mean small. It means that some tasks require more knowledge and skills than a single mind can possess. We no longer live in the era of Alan Turing, when computational systems could be built by one person—a genius who was simultaneously a software engineer, business analyst, hardware engineer, and tester, all in one.


Today, the systems we build are far more complex. We need specialists working in different roles at various stages of the Systems Development Life Cycle. To manage this complexity, we have introduced distinct categories of work, such as business analysis, requirements engineering, systems design, systems implementation, quality assurance, and operations.


However, this decomposition has introduced a new challenge: integration. When different people work in separate functions and produce different artifacts, how can they ensure that these artifacts remain aligned?


Over the past eighty years, two key strategies have emerged to address this challenge:

  1. Expanding the capabilities of individual practitioners.

  2. Enhancing the efficiency of information exchange between individuals.


Strategy 1: Expanding the capabilities of individuals


Two computers might share the same database or network file system, but this is nothing compared to two threads on the same machine sharing process memory. The same applies to people: two individuals on the same team will never be as aligned as one person working within their own mind.


However, if every team member is a specialist whose cognitive capacity is already fully occupied with the deep knowledge required to master their specific discipline, how can these specialists expand their expertise and become more general?


The answer is simple: reduce the depth of their tasks to trade specialization for broader capabilities.


Example: High-Level Programming Languages


Before the invention of high-level programming languages like Fortran, Lisp, or Algol, coding in assembly language was an arduous task. Engineers needed a deep understanding of CPU architecture to write code and had to manage a difficult-to-maintain codebase. This left them with little cognitive capacity for other critical tasks, such as system design and specification.


High-level programming languages changed this by abstracting away low-level details, enabling developers to focus not just on coding but also on broader aspects like analysis and design.
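
To make the contrast concrete, here is a minimal sketch (my own illustration, written in Java rather than Fortran or Lisp) of what this abstraction buys: summing a list of numbers takes a handful of readable lines, with no registers, addressing modes, or knowledge of the CPU pipeline involved.

  // Summing an array in a high-level language: no registers, no memory addressing,
  // no knowledge of the CPU architecture is required.
  public class Sum {
      public static void main(String[] args) {
          int[] values = {3, 1, 4, 1, 5, 9};
          int total = 0;
          for (int v : values) {
              total += v;   // the compiler and runtime decide how this maps to machine code
          }
          System.out.println("Total: " + total);
      }
  }

The cognitive capacity that an assembly programmer would spend on register allocation and branching can instead go into analysis and design.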


Example: Object-Oriented Programming


The invention of Object-Oriented Programming (OOP) has helped engineers encapsulate complex business domain entities behind abstractions such as classes and interfaces.


Instead of managing the combinatorial complexity of both depth (implementation details of domain entities) and breadth (interactions between domain entities) simultaneously, engineers can now separate these concerns. This reduction in cognitive load has allowed engineers to move beyond coding and become more involved in business analysis.
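
A minimal sketch of this separation of concerns, with hypothetical names (Account, LedgerAccount) chosen purely for illustration: the interface captures the breadth of interactions with the rest of the system, while the depth of implementation details stays encapsulated behind it.

  // Breadth: other parts of the system interact only with this abstraction.
  interface Account {
      void deposit(long amountCents);
      long balanceCents();
  }

  // Depth: implementation details (validation, storage representation) are encapsulated
  // here and can change without touching any of the callers.
  class LedgerAccount implements Account {
      private long balance = 0;

      @Override
      public void deposit(long amountCents) {
          if (amountCents <= 0) {
              throw new IllegalArgumentException("amount must be positive");
          }
          balance += amountCents;
      }

      @Override
      public long balanceCents() {
          return balance;
      }
  }

Callers reason only about the Account contract, which is exactly the reduction in cognitive load described above.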


Strategy 2: Enhancing the efficiency of information exchange between individuals


The second alignment strategy focuses on facilitating information exchange between individuals. This is analogous to multiple computers requiring proper middleware to operate coherently.


Over the past decades, numerous social and technical innovations have emerged to support this strategy. Here are some examples:


Example: Social Innovation - Teams


Teams function as virtual boundaries around a group of people. The concept is simple: by bringing individuals together under a shared set of tasks and guiding them through structured meetings, communication is streamlined, leading to the creation of aligned artifacts.


This process is similar to placing components within the same computational pod, where resources such as the network namespace and filesystem are shared, enabling seamless interaction and coordination.


Example: Social Innovation - DevOps


No, DevOps is not just about Kubernetes or CI/CD automation. While teams serve as a fundamental social platform that groups individuals based on a shared set of tasks, DevOps can be seen as an add-on to this platform—one that assigns a shared end-to-end responsibility to the team for the deliverables it produces.


This add-on acts as a catalyst for communication and collaboration among team members, ultimately leading to better alignment between the artifacts they create.


Example: Design Innovation - Domain-Driven Design


Domain-Driven Design (DDD) is to English (or any other natural language) what Fortran is to Assembler. It introduces a high-level communication language across organizational functions, fostering shared cognition among individuals in different roles. This shared understanding helps achieve congruency between the problem and solution spaces, ultimately leading to the creation of aligned artifacts.
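
As a hypothetical sketch of what such a ubiquitous language can look like in code (the Policy and Claim names are my own, not from the article): class and method names mirror the terms a business analyst would use, so the same vocabulary travels unchanged from requirements to implementation.

  // Domain terms taken straight from the business vocabulary: analysts, testers, and
  // developers all say "file a claim" and "settle a claim" and mean the same thing.
  class Claim {
      private boolean settled = false;

      void settle() {
          settled = true;
      }

      boolean isSettled() {
          return settled;
      }
  }

  class Policy {
      Claim fileClaim() {
          return new Claim();
      }
  }
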


Example: Model-Based Systems Engineering


The idea behind Model-Based Systems Engineering (MBSE) is as simple as it is powerful. Instead of creating a multitude of artifacts and then trying to keep them externally aligned, MBSE enables the creation of an intrinsically connected set of artifacts—primarily designs and specifications at different levels of system granularity—within a single model.


All other artifacts are considered derivatives or views of this model.

Unfortunately, this theoretical utopia often fails in practical reality, where bringing the critical set of project artifacts under the umbrella of a single tool, such as Sparx Enterprise Architect, proves to be nearly impossible.


Example: Application Lifecycle Management Tools


Application Lifecycle Management (ALM) tools, such as HP ALM, Siemens Polarion, or Azure DevOps, generalize the concept of Model-Based Systems Engineering (MBSE) to a broader range of artifacts, enabling their integration within a single system.


While these tools do not inherently ensure alignment between artifacts, the establishment of links between them typically contributes to greater cohesion and congruency of their contents.


Example: Org Design Frameworks


Finally, multiple frameworks for organizational modeling and design, such as Team Topologies, OrgTopologies, EDGY, and UnFix, have emerged in recent years. The implicit goal of these frameworks is to ensure coherence and congruency between the various entities of sociotechnical systems: people, roles, and artifacts (check this post for more details on sociotechnical systems).


This long list of innovations represents just a small fraction of what has been achieved over the past decades to overcome and manage the limitations of human cognitive capacity. Even fundamental principles such as Agile and frameworks like Scrum, LeSS, and SAFe fall into this category of innovation.


Constraint 2: Human work requires effort and time


The second constraint shaping human-led software engineering today is the simple fact that human work requires effort and time. Because software, unlike hardware, is soft, most of the time we are not actually creating something new but reworking existing structures. Unlike buildings, software systems are not built additively: we don't just lay another layer of bricks; rather, we partially redesign previous layers every time we add something new.


This rework is what primarily determines future effort. As a result, we typically strive to minimize it, using two key strategies: avoiding rework and keeping rework small.


As with the first constraint—limited cognitive capacity—numerous innovations have emerged to address the challenge of managing rework efficiently.


Strategy 1: Avoid rework


Avoiding rework is very simple in theory—one just has to think ahead. This is where analysis comes into play. We try to prevent mistakes that could lead to undesired refactoring by analyzing the problem and making informed decisions before executing work further down the Software Development Life Cycle (SDLC).


For example, we know that rewriting an entire codebase from one programming language to another could take months of effort, so we strive to make critical decisions upfront to avoid such costly rework. We want to prevent the shit from hitting the fan.


I believe the greatest innovation in rework management was the emergence of systems architecture as a technical discipline. Ultimately, the role of systems architects is to navigate complexity and reduce residual risks associated with rework.


Strategy 2: Keeping rework small


From a certain point onward, upfront analysis costs more than the expected rework. So we stop analyzing and start doing. Nevertheless, we aim to hedge risks and contain the possible consequences of mistakes.


We want to prevent the shit from hitting the fan—but if we can’t, we at least try to keep the blast radius small.


Our first step in containment is introducing abstractions. Abstractions help minimize the scope of potential changes by hiding non-essential details.


More than twenty patterns from the classical GoF book revolve around abstractions. All five SOLID principles employ abstraction as a core mechanism.
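
To make the containment idea concrete, here is a small sketch (an illustrative example of my own, not taken from the GoF book) of the Dependency Inversion Principle: callers depend on an abstraction, so reworking or replacing the concrete implementation does not ripple outward.

  // The abstraction is the "box": callers only ever see this interface.
  interface MessageSender {
      void send(String recipient, String text);
  }

  // A concrete implementation can be reworked or replaced (say, by an SMS sender)
  // without any change to the code that depends on MessageSender.
  class EmailSender implements MessageSender {
      @Override
      public void send(String recipient, String text) {
          System.out.println("Emailing " + recipient + ": " + text);
      }
  }

  class Notifier {
      private final MessageSender sender;

      Notifier(MessageSender sender) {
          this.sender = sender;   // the dependency points at the abstraction, not the detail
      }

      void notifyUser(String user) {
          sender.send(user, "Your build has finished.");
      }
  }

A Notifier built this way can be rewired to a different sender, or to a test double, without touching its own code; that is precisely the small blast radius the abstraction is paying for.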


Not only do design innovations promise containment through abstraction—many technical innovations do as well.


  • The JVM abstracts operating systems.

  • Virtual machines and containers abstract infrastructure.

  • Serverless models abstract infrastructure provisioning.


All these innovations encapsulate complexity within a box of abstractions. And, unsurprisingly, this box comes with a price—whether in the form of reduced performance, limited flexibility, increased troubleshooting efforts, or hidden dependencies.


Yet, we humans are willing to pay this price, because abstraction helps contain the scope of rework—and our work requires effort and time. That is the reason.


Conclusion


Software engineering is shaped by two fundamental human constraints. Most innovations of recent decades can be traced back to these limitations. But will these innovations still be needed if AI alleviates these constraints?


The next and final part presents a projection of how software engineering might evolve in the coming years and seeks to answer this question.
