In a recent meeting, one of my colleagues casually introduced me to a new partner as an important “cog.” It slipped out during the opening pleasantries, a description rich with implication.
Cog: a metaphor steeped in the literary history of post-industrial alienation. I found it humorous. After all, I’ve stumbled over my own choice of words many times during the constant context switching of video conference roulette. My colleague simply misspoke in haste, intending to refer to the process I would be helping them with.
Yes, of course.
But at times, in the ephemeral breath of a passing word—uttered without malice, absent of intent—one glimpses a fractal of subtext, a luminous shard that cleaves through the fabric of routine. The spell of the present dissolves, and a wider meta-narrative reveals itself, breaking through the surface of the waters in which we are immersed.
What stuck with me was that, despite any illusions I might have of self-importance, it was a fair and accurate description. The more I meditated on the idea, the more I came to embrace the reality that I am indeed like a cog in a very large institutional machine. One person’s sense of alienation is another’s sense of belonging to a purpose greater than the sum of its parts.
I was reconciling the Western cultural values of individualism with those of collective purpose. Should I take comfort that I’m not just any ordinary cog, but an important one? Should I have the spiritual humility to accept that serving a greater collective carries a thread of quiet honor when it helps and serves others?
Yes.
I don’t blame individuals for the social structures that influence and shape behavior. Read a bit of neuroscientist Robert Sapolsky’s work and one might even question the possibility of free will itself, much less the power of social conformity.
We are social animals. Our sense of self-worth is deeply tied to how we’re valued by society. While the fear of economic insecurity is certainly real in the modern world, many of us harbor a deeper anxiety: the fear of becoming irrelevant—of having nothing meaningful to offer in exchange for recognition or belonging. Despite whatever criteria of professional success we’ve internalized, what we unconsciously crave is fairness and social validation; it’s a primal need to feel useful to the collective.
Engineers and those who lead them are frequently reduced to function: necessary, but peripheral. Indeed, the machinery matters more than the mechanic. An individual’s contribution is ephemeral, but the institution’s work lives on.
It’s not unlike opening your old high school yearbook, but one published five years after you graduated. There are photos of all the familiar chambers, hallways, staff and rituals that you and your classmates once animated, but the faces and names are unrecognizable. As Ram Dass once said, “You went into somebody training when you took birth, and you ended up somebody. I bet you think you’re real.” He saw this as a kind of cosmic joke: we become so invested in our role identities that we forget our deeper nature, which is timeless and interconnected with everything.
Scaling Humans
Born of Weberian rationalism and animated by legal fiction, the modern corporation emerges not as a man but as a machine of sanctioned abstraction—an entity whose personhood is purely symbolic, yet whose will supersedes the sum of its constituents. Like a Leviathan stitched from contracts and capital flows, it operates beyond empathy, guided not by conscience but by code, preserving its continuity across generations as the flesh-and-blood actors within dissolve into anonymity.
Anthropologists and social theorists argue that as human societies grew beyond kin-based tribes, they required formalized structures—bureaucracies and institutions—to manage complexity, coordinate strangers, and transmit norms across generations. Bureaucracy is not a flaw, but the inevitable scaffolding of rational-legal authority that replaced charismatic and traditional forms of rule. From this scaffolding rose states, corporations, and nations—systems capable of scaling cooperation across millions, enabling humanity not just to survive, but to organize, extract, and dominate the planetary order.
Corporations dominate as the primary bureaucratic form in modern capitalism because they efficiently concentrate capital, scale decision-making, and align labor toward profit under a legal structure that grants perpetual life and limited liability. Their hierarchical organization enables control across global markets, while their abstract nature allows for an arrangement of distributed accountability, making them highly adaptable vessels for capital accumulation and systemic coordination beyond the reach of states or individuals.
To understand this, it helps to use the lens of social theory to unpack the various types of corporate cultures that emerge over time.
Product-led corporate cultures center the product itself as the primary driver of growth, customer acquisition, and retention. In these environments, teams focus on building intuitive, self-serve experiences that allow users to discover value quickly—often before ever speaking to a salesperson.
Sales-led cultures prioritize relationship-building and enterprise deals, often involving longer sales cycles and high-touch engagement.
Engineering-led cultures emphasize technical excellence and innovation. These companies are often driven by solving hard problems or building robust infrastructure.
Design-led cultures prioritize aesthetics, usability, and emotional resonance.
Mission-led cultures rally around a core purpose or social impact, using that vision to guide product and business decisions.
Each paradigm has trade-offs. Product-led cultures, the most dominant in the software industry, can scale quickly but may struggle with complex enterprise needs. Sales-led models can drive big revenue but risk becoming overly dependent on a few accounts. Engineering-led cultures can build brilliant systems that no one uses. The best companies blend elements of each, adapting their culture to their market, maturity, and goals.
Managerial Abstraction
Corporate cultures come to rely heavily on managerial abstraction. As companies scale, there’s a natural push to abstract decision-making away from builders and toward coordination layers: product managers, program managers, and operational leads. This is functional, because someone has to wrangle the tremendous complexity of scale, but it often displaces builders from strategic conversations, relegating them to execution. This becomes problematic when nuanced expertise is lacking among the personnel who make up the managerial abstraction layer. Put simply, if the wrong social hierarchy reifies and a gap develops between the products and the people who build them, management risks blind spots where they won’t know what they don’t know.
There are notable exceptions throughout history, as corporate cultures experimented with solutions to this perennial problem.
In the great industrial cathedrals of the 20th century, a few visionary companies dared to redraw the sacred geometry of the factory floor—not to elevate management above the workers, but to bring them shoulder to shoulder, eye to eye, breath to breath.
At Toyota, in the crucible of postwar Japan, Taiichi Ohno reimagined the factory not as a hierarchy of command but as a living organism. Managers walked the floor daily, not as overseers but as learners, absorbing the rhythm of the line, the cadence of kaizen. The layout itself was a testament to humility: open, cellular, and fluid, designed so that supervisors could see, hear, and respond in real time. The line was not a wall—it was a bridge.
In Sweden, Volvo’s Kalmar plant became a quiet revolution. Gone were the endless rows of conveyor belts; in their place, self-managed teams worked in circular pods, with managers embedded like mentors in a guild. The architecture whispered a new philosophy: that dignity and productivity could coexist, that proximity bred trust, not surveillance.
Even in the steel heart of America, at NUMMI—a joint venture between GM and Toyota in Fremont, California—the layout was transformed. Managers no longer watched from glass towers. They stood beside workers, pulling the same andon cords, solving the same problems. The floor became a commons, not a battlefield.
These were not just factories. They were manifestos in concrete and steel, declarations that the distance between mind and hand, between planner and maker, need not be so vast. In drawing management closer to the work, these companies redrew the very blueprint of industrial power.
The Agile software movement emerged as a response to the inefficiencies of traditional corporate hierarchies, promoting flatter structures and team autonomy. By empowering cross-functional units to make decisions independently, Agile diminished the need for top-down oversight. It replaced layers of managerial abstraction with visible, iterative workflows—prioritizing working solutions over bureaucratic procedure. The result was a shift from control to collaboration, designed to keep pace with a rapidly changing business environment.
However, over the course of my career—even during the height of “peak Agile”—a subtle but significant shift has become evident. Software developers, once the crown jewels of tech companies—courted with perks and revered for their expertise—have increasingly been marginalized, as management has co-opted Agile’s language and adapted it for their own ends. Rather than being viewed as vital contributors to innovation, developers are now often treated as fungible components, cogs in a machine, or merely a means to an end.
Agile began as a manifesto for creative autonomy, yet many of its founders are now among its sharpest critics. Dave Thomas declared “Agile is dead,” lamenting its transformation into a commercialized brand. Martin Fowler warns of “Agile in name only,” rituals devoid of purpose, while Ron Jeffries urges developers to abandon Agile altogether. Others, like Melissa Perri, argue it’s lost its focus, accelerating delivery but not value. What began as a rebellion against control has, they say, become yet another tool of it.
Every few decades, a disruptive paradigm reshapes corporate culture—only to be absorbed and defanged by the very systems it challenged. As adoption peaks, revolutionary ideas are repackaged into familiar hierarchies, dressed in new jargon but driven by the same metrics and control. What began as transformation ends in management-branded mimicry.
There’s an old joke from IBM’s “Big Blue” era, when it was the bloated bureaucratic juggernaut with a target on its back.
How many IBM employees does it take to change a lightbulb?
Just one—but they’ll need to schedule a strategy session, assemble a cross-functional team, put together a cost proposal, seek funding approval and produce a marketing strategy paper before touching the ladder. Work is estimated to begin sometime next year.
Seldom is such a prodigious torrent of institutional will—manifested in plans, protocols, regulations, oversight committees, and bureaucratic choreography—expended upon a craft so fluid and unbounded as software creation. In a domain where the threshold to entry is scarcely more than curiosity and a keyboard, one often finds the paradox laid bare: what a legion of managers, legal minds, and administrators labor over for weeks or even seasons, a solitary engineer may summon forth in a matter of days, or hours, with nothing but clarity of vision and access to the compiler.
In the haze of managerial abstraction, the connective tissue that once bound customer, creator, and stakeholder frays to near invisibility. The organization becomes a theater of compartmentalized performance—each actor rehearsing their part in splendid isolation, oblivious to the arc of the greater narrative. Hierarchies ossify into ritual, and what was once purposeful coordination calcifies into recursive ceremony. Workers gather in clusters, not to collaborate, but to orbit discrete objectives shaped more by internal incentives than shared vision. Time itself is cannibalized by the machinery of alignment—endless calendars populated with meetings, while the original purpose drifts silently into irrelevance. In such a system, productivity does not falter from laziness or lack of talent, but from a tragic amnesia of why the work began.
We can spend weeks beholding elegant renderings of software concepts and plaster them across presentations, brochures and marketing sites, but as Magritte reminded us, “Ceci n’est pas une pipe.” The image isn’t the thing. After years working across disciplines — as a UI designer, software engineer, marketing consultant, CTO, engineering manager, salesperson, and product manager — I’ve seen firsthand the pressures and blind spots in each role. None are without difficulty, but some are clearly more essential than others in the context of software product development. In smaller startups, redundant roles may be collapsed. Exceptional software developers, on the other hand, are indispensable. Put a good salesperson together with a great software engineer and you have the archetypal garage startup. Everything else is optional.
Of Makers & Managers
The original 1984 Macintosh is considered a major milestone in product engineering and design. Its development also happens to be one of the best examples of the tensions and compromises between product vision, cost and engineering reality, and of how those compromises can produce something that is all sizzle and no steak. The first edition was the greatest mass-market prototype ever sold: a demonstration of the future’s user interface and form factor that you could buy and place on a desk, but do little with, as nearly every byte of memory and cycle of compute was optimized to prove a concept, not to accomplish practical work. On paper, it was impossible. And despite its status as a superlative museum piece that spawned countless references and even an entire Hollywood film, anyone who actually bought the Macintosh 128K can tell you it was a beautiful desktop toy version of the Xerox Alto that had to pass through multiple revisions before it grew into a practical personal computing appliance. It was a machine built for demo day.

It never would have existed without the relentless ambition of a brash young Steve Jobs to “bend” reality, but the product would never have shipped without the incredible feats of engineering by Bill Atkinson and the rest of the team: QuickDraw used precomputed tables and bitwise operations, squeezing every ounce of performance from the anemic Motorola 68000. The entire operating system was written in assembly language, which allowed them to control memory usage down to the byte. Memory-mapped I/O allowed custom interrupt routines to handle input and screen refreshes without wasting cycles. Even the fonts were bitmaps, hand-tuned for legibility and speed.
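The precomputed-table trick QuickDraw relied on is easy to illustrate. The sketch below, in C, is purely illustrative; the names and the population-count example are my own invention, not actual QuickDraw code. It trades a 256-byte table built once up front for the elimination of a per-bit inner loop on every subsequent query, the kind of memory-for-cycles bargain the Macintosh team struck throughout the system.

```c
#include <stdint.h>

/* Build a 256-entry bit-count table once, so each later lookup
   replaces an eight-iteration shift-and-mask loop. */
static uint8_t popcount_table[256];

static void init_table(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t bits = 0;
        for (int b = 0; b < 8; b++)
            bits += (i >> b) & 1;
        popcount_table[i] = bits;
    }
}

/* Count set pixels in a row of 1-bit bitmap data:
   one table lookup per byte instead of eight bit tests. */
static int row_popcount(const uint8_t *row, int nbytes) {
    int total = 0;
    for (int i = 0; i < nbytes; i++)
        total += popcount_table[row[i]];
    return total;
}
```

On a processor like the 68000, where every cycle mattered, this sort of table pays for itself almost immediately: the per-byte cost drops to a single indexed load and add.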
Over the past decade, I’ve developed a few working theories on why software engineers have steadily lost ground in corporate hierarchies—even in companies ostensibly built on software products and services. The decline isn’t accidental. It’s structural. And artificial intelligence will eventually completely disrupt software development as we know it.
To unpack this, a bit of historical context helps.
Software engineers wielded tremendous power in the past. From the 1990s through the 2000s, they were scarce in an era of sudden and rapidly accelerating technological disruption, creating a mismatch between supply and demand. This new generation of software elite, often young and intolerant of traditional corporate culture norms, dressed in street clothes, worked the odd hours that maximized productivity, and commanded high salaries and lavish benefits from the corporations that competed with disruptive startups for their talent. These software engineers were a unique breed in a unique era. Their intelligence was necessarily higher than average; there were few of the modern crutches developers enjoy today in the form of frameworks and endless GitHub snippets to copy-paste together into working software. Individuals entered the field for various reasons, but rarely because they enjoyed or excelled at interpersonal communication. With their unique skill set often came a natural grasp of advanced problem solving, pattern recognition, systems theory, and game theory. Many of these brash young engineers were smarter and more critical to the company’s business continuity than those who managed them, and they knew it.
I wasn’t one of those superstars. I was a mere dilettante — a philosopher, graphic artist, musician, economist, anthropologist and technology generalist who developed a pragmatic proficiency in software development and systems architecture, not because I particularly enjoyed it, but because it was an obstacle in my path to making creative products a reality without the benefit of adequate capital in small ventures. For whatever reason, that put me on the fast track to management and consultancy by age 30, and in my interactions with stakeholders I immediately observed the peculiar psycho-social dynamics between technologists and their stakeholder customers: essentially the same anxiety and distrust the average consumer feels toward an auto mechanic who doesn’t take the time to explain the work.
The awkward dynamic between engineering and management has evolved, but it persists to this day. At this point, my duty is to bridge the gap between the technologists I champion and the product visions that will move the market and make that investment profitable. I have the deepest respect for what engineers do — the long thankless hours they work, and the dismissive insults they often bear. I simultaneously understand the tremendous psychological pressures that executives and managers face: imagining new ideas, making incredibly difficult decisions about how to allocate a finite pool of capital, and assembling teams to build winning products and features on compressed timelines.
But at the end of the day, in software companies, without the systems, there is no business. Frustration with a lack of quality content on Netflix this week is irrelevant when the power goes out. No reputable software corporation operates without a disaster recovery plan, and those plans are almost entirely composed of actions by engineers. A SaaS company could send its executives and middle managers to an off-grid meditation retreat and strategy session for two weeks and essential technical operations would hum along as normal — even more efficiently without the interruptions of meetings and status updates. Do the same with your engineers and operations could suffer catastrophic failures within days or even hours if something goes wrong. There is no super-engineer who can fix every problem in today’s complex corporate software architectures, which means that frontline monitoring engineers can only do so much: calling the system leads and running a playbook that usually amounts to an exotic variation of “turning it off and back on again.”
That’s the same tension between management and labor that has long existed under industrial capitalism, and the same awkward scenario that played out during the pandemic, when many medical and blue-collar workers were labeled “essential,” regardless of the perceived mortality risks. It also explains why the promise of AI automation is so appealing to software companies, where labor is the largest expense and continuity of service is critical.
When an intelligent engineer is engaged on a project for a product-led corporate culture that treats them as an order-taker rather than a stakeholder, the first thing they do is determine how knowledgeable their “customers” are about the universe of possible solutions. Product-led cultures often staff middle management positions with individuals who understand user experience or the unique needs of the market segment, but who have limited expertise in software engineering. The probe usually takes the form of a conversation with the product and project managers, seeded with strategically placed high-jargon language that might be called “techno-babble,” to see how they react. The responses range from uncomfortable silence or an admission of ignorance (low expertise, non-assertive), to immediate push-back to discuss the solution in basic user-experience terms (low expertise, highly assertive), to dialog that uses heuristics to translate and understand the techno-babble (mid expertise, highly assertive), to direct discussion of the specific technology in high-jargon language (high expertise).
If the engineers are ethical, capable and take pride in their work, they will proceed to work with the stakeholders in common-denominator language and deliver a solution and source code that is well-planned, efficient, fast, free of defects and accurately documented. The code will be structured as a set of units and functions that are testable and reasonably well commented, so that future collaborators or maintainers can understand it after the original authors have moved on.
If the engineer is lazy, inexperienced or unethical, they will crank out whatever slop of spaghetti code is needed to open a pull request (a request to review a change to the codebase) — often copied from other sources — that runs on their machine just well enough to satisfy the bare minimum requirement in an hour, then claim at the next standup that they spent all day on it, peppering the corporate channel with a few technical comments and questions along the way. If problems arise, and depending on the peer group of engineers reviewing the code, they can always try to use techno-babble as a hedge. Ignorant or overtaxed managers are often none the wiser. The irony is that if the results are “fast enough” to meet the sprint goal and satisfy the user story with a well-designed, demonstrable user interface, everyone gets the short-sighted win they wanted for that sprint.
Over time, under the conditions of product-led culture with low management expertise, such inefficient developer habits can be self-reinforcing. This is especially true for contractors who are paid by the hour rather than by the job. By delaying progress or citing exaggerated “technical blockers,” teams can easily push a project weeks or months past projected deadlines. Developers learn this early in their careers. Some teams even tacitly collude to do so if their manager pisses them off or refuses to accept reasonable deadlines. Managers, under intense scrutiny and pressure, can be left feeling powerless and frustrated.
Engineers embedded in a large project know — either consciously or subconsciously — that they have a choice: build the solution well so they can finish and move on from the assignment as soon as possible, or design it so that it is always “almost there,” needing constant small bug fixes and updates that ensure months or years of gainful employment.
If you think that’s a problem, you’d be right. And guess which is more likely to occur under the incentive structures of hourly temporary hire developers?
Over time, the managers and product stakeholders either observe and correct these problems with the right contractual or employment incentives, or learn to distrust software engineers’ tacit technical control over their performance, which can lead to a vicious cycle of low-trust that reinforces alienation among the developers.
Globalization
In the mid-to-late 2000s, as Internet connectivity spread globally, it occurred to businesses that there was no particular reason why software developers needed to be supervised in an office, in the same city, or even in the same country. Thousands of person-hours’ worth of computer code can now be transported across the planet in seconds. Programming languages are global languages that transcend the boundaries of natural language. The open source software that forms the backbone of the modern Internet is built by millions of volunteers around the world who have never met or even spoken to each other. A person who only speaks Cantonese can review, debug and edit the source code of a person who only speaks Portuguese in a fruitful collaboration.
Companies began hiring contractors in global markets and discovered that, thanks to the strength of the dollar, highly skilled software engineers could be employed from other countries at a fraction of the cost of their American counterparts. Over time, this trend, coupled with an ever-growing supply of computer science graduates, gradually eroded the dominance once held by the U.S. software engineering elite.
As companies and their managers increasingly turned to offshore software development talent, the field of software engineering experienced an unfortunate consequence: a gradual erosion of stakeholder status within corporate culture. In contrast to the often aloof and assertive demeanor of U.S. engineers, offshore engineers were frequently polite and sometimes overtly deferential toward management. Even without factoring in perceptions of entitlement among those I once heard cynically described as “full-time salaried software development customers,” the mere availability of globally competitive labor reinforced the perception of software engineers as order-takers in the corporate hierarchy who needed to be closely managed to deliver on unique market needs. As in other sectors during the peak of globalization, maximizing cost efficiency in software development led companies to prioritize U.S.-based management skilled in sales, marketing, finance, and operations, while the technical work of coding could be outsourced to flexible offshore teams as needed, without the complications of payroll taxes or employee benefits.
As software engineering matured, the mystique surrounding it eroded. Once a rarefied skill set, it’s now widely taught, globally distributed, and supported by abundant tools, libraries, and frameworks. This has led to the perception—accurate or not—that much of the work is repeatable, replaceable, or automatable. Organizations start to see engineers less as creators and more as interchangeable parts in a machine.
Many tech companies now optimize aggressively for quarterly results or investor sentiment. This leads to a focus on surface-level features, speed, and metrics, often at the expense of engineering excellence and innovation. Engineers who push for sound architecture or long-term stability may be seen as obstacles rather than enablers.
History has repeatedly demonstrated that while software companies can coast on product-led feature additions for years, those companies will eventually be disrupted, either internally through aggressive investment in research and development or externally by competitors who invest in R&D or venture capital bets on startups with bold ideas. All are usually engineering-led or engineering intensive endeavors.
The Return of The Generalist
As Artificial Intelligence inevitably augments or replaces large sectors of labor, the art of software engineering will enter a new phase, likely shifting to fewer people with broader skill sets in user experience, software design, data science and systems architecture.
Having been born in the 1970s, I’m a part of a cohort that journalists and sociologists have called Generation X — the last pre-internet generation — an age group that rose to adulthood and began careers in a largely “analog” world, before the rapid acceleration of digital transformation.
At the dawn of the digital revolution in the 1990s, many people speculated about what the future would bring, but nobody was sure.
While some members of Gen X continued down the path of relative technological ignorance until the widespread adoption of the smartphone, others were among the early adopters of that technological innovation, along with celebrated baby boomer pioneers like Steve Jobs, Bill Gates and Larry Ellison. We found ourselves with a tremendous, albeit short-lived, advantage in a labor market where young, adaptable minds were needed to harness the powers of an emerging, highly valued technology that was rapidly changing the world.
Over the past twenty-five years I’ve found myself working in a wide range of roles as I’ve traversed that so-called digital divide. That holistic skillset was once considered “unicorn” material for a partner in a tech startup, and it will still be familiar to any smaller company with big ideas and limited resources. Today, however, it is perceived as too diverse for a massive industry sector that has surged from around $60 billion in market capitalization in 1990 to around $10 trillion today, one dominated by mid-size to large corporations that seek a high level of role specialization and domain expertise that can be siloed, managed and scaled.
Our experiences in our salad days and early career often shape our expectations of work life. Individuals who begin their careers in entrepreneurial cultures often struggle with the bureaucracy required to scale human resources in large organizations. The inverse is also true — individuals who graduate from college and go directly into large corporate bureaucracies are baffled by the lack of resources and formal processes if they transition to a small startup.
However, those who are conditioned to be autodidactic generalists may have a foundational advantage over those who have hyper-specialized. Capitalism’s success owes in no small part to its social-Darwinian incentives for risk taking, innovation and creative destruction, with no inherent concern for the welfare of its participants, for whom wealth and general welfare are sometimes a convenient byproduct.
What that means for an individual working in a capitalist society like the United States is an ability and willingness to accept uncertainty and adapt to radical change, usually through several reinventions during their careers, which is antithetical to what most people would consider a sense of security.
Social critics have observed that the trend toward specialized degrees in higher education, largely driven by the return on investment (or non-dischargeable debt) required to justify inflated and ever-increasing tuition costs, may prove to have been a poor bet for those who thought it a wise investment. In the age of AI labor disruption, those individuals might have benefited more from a cost-efficient course of study in the waning classical liberal arts tradition, which aims to produce “well-rounded” young adults who are exposed to a wide sample of the sciences, mathematics, literature, philosophy, and history; who think critically, are well read, and love knowledge for its own sake; who write and communicate effectively; and who understand the broad arc of human civilization and the many tragic consequences of institutional collapse, lest we repeat them.
Anecdotal evidence of such well-rounded talent can be found in surprising places. One would expect the best fund managers to be drawn from the brightest graduates in finance, but several firms prefer to hire only liberal arts majors. Daniel Rasmussen, the founder of Verdad Fund Advisors, noted in a 2017 interview with Business Insider that “Students of history and literature are more trained to understand the existence of multiple perspectives and to engage with them, and so can often more accurately understand the human dynamics that drive stock market flows.”
Alex Karp, the CEO of Palantir and friend of fellow Gen X scion Peter Thiel, is another example. Karp, arguably among the most powerful and controversial people alive today, followed his passion for social theory to a B.A. in philosophy (like Thiel) and later a Ph.D. in neoclassical social theory from Goethe University in Frankfurt, where he studied under the influential German philosopher Jürgen Habermas. His dissertation explored the relationship between language, aggression, and culture, reflecting a strong interest in how societal structures influence human behavior — an interest that overlaps with Thiel’s post-graduate work on mimetic theory and his friendship with literary critic and social scientist René Girard.
Karp’s worldview is heavily influenced by European philosophical traditions, particularly German thinkers. He’s known for integrating these ideas into his business ethos, emphasizing ethical responsibility, civil liberties, and the societal impact of technology. Unlike many Silicon Valley leaders, Karp often critiques the tech industry’s profit-driven mindset and advocates for regulatory oversight to prevent misuse of powerful technologies.
He also practices what he preaches. Karp is known for leading meditation sessions at Palantir and promoting mindfulness and well-being among employees, blending philosophical introspection with corporate culture.
Karp and Thiel are controversial and outspoken champions of “hard political realism” rooted in the traditions of Western Enlightenment rationality and ethics. They are not without their detractors. The writer Stephen Diehl argues that Thiel’s philosophy is a “profound misunderstanding of philosophy, human nature, society, and progress.” Diehl contends that Thiel’s framework, rooted in Christian eschatology, mimetic theory, and a belief in technological determinism, romanticizes hierarchy and heroic individualism while dismissing democratic norms and collective progress. He sees Thiel’s ideas as intellectually inconsistent and self-serving, designed to justify elite dominance under the guise of innovation.
As the world labor market lurches shakily into the rapidly changing era of Artificial Intelligence, will specialization become a liability rather than an asset? In just a few years, LLMs have reached rough parity with the capabilities of a graduate research assistant. The once-elite echelons of software engineering are among the first highly specialized occupations competing with AI, and losing. Programmers face a dilemma: adopt AI tools to multiply their productivity, or lose their positions in a rapidly approaching future where much of the arduous work of writing custom code can be done almost entirely by AI with minimal human intervention, a process its proponents have dubbed “vibe coding.”
The principles of vibe coding will be familiar to anyone who has worked in music production over the past 30 years. What once required a large studio, dozens of expensive microphones, hundreds of thousands of dollars of specialized equipment, and a group of highly talented musicians can now be emulated rather convincingly by anyone with a laptop and a small outboard audio interface. Don’t have expensive guitar or bass amplifiers? No problem — they can be emulated in software. Can’t play drums, or don’t have good drums in a good room? No problem — you can program hyper-realistic drum performances or have an AI drummer follow along with your guitar part. Can’t sing? No problem — use pitch correction. Can’t play an instrument at all? No problem — play one key on a keyboard and the band will follow the general idea. Don’t have a musical bone in your body? No problem — just describe the type of music you want and an AI will generate the entire finished recording for you.
Given the apparent clarity of this trajectory, specialization suddenly faces a future in which its value to society may plummet. We may no longer need a paralegal or a lawyer who spent years in school and training to review or draft a contract. We may no longer need a physician who sacrificed years of their life to analyze a battery of imaging and lab results and reach an accurate diagnosis. Meanwhile, the “useless” skills and soft skills we traditionally associate with the humanities and general life experience are back in the vanguard: interpersonal skills, empathy, theoretical science, creative AI-assisted design, advanced problem solving, and even (gasp) areas of philosophy such as ethics, epistemology, metaphysics, and aesthetics.