Software development and jazz music share a remarkable similarity in their essence: both are grounded in creativity, exploration, and improvisation.
Software engineering is essentially scientific experimentation. Engineers formulate hypotheses and design experiments to test their ideas, much like jazz musicians who invent melodies and harmonize on the spot. Both processes involve a blend of structured frameworks and spontaneous innovation. Software developers, like jazz musicians, must remain flexible and open-minded, ready to adjust their approach based on emerging data or changing dynamics within the ensemble.
Just as a jazz musician responds to the rhythm and melody of fellow performers, software engineers react to experimental results. This iterative process of hypothesizing, testing, observing, and refining is akin to the call-and-response nature of jazz, where each note played can inspire new directions and unexpected discoveries. The beauty of both lies in the balance between discipline and freedom—while there are rules and theories to guide them, the real magic happens in the moments of improvisation.
Ultimately, both celebrate the pursuit of the unknown. They thrive on curiosity, innovation, and the willingness to explore uncharted territories, turning uncertainty into a creative force. Just as jazz transforms individual notes into a cohesive and dynamic piece, software development teams weave disparate observations into a comprehensive understanding of great products that delight users.
The Apple Macintosh “Artists”
In late 1983, a 28-year-old Steve Jobs was at the helm of the Macintosh group at Apple, a company he had co-founded seven years earlier. The rival Apple Lisa computer had launched earlier that year, backed by another team with more budget, personnel, and marketing power than the Macintosh group. The Macintosh launch had been delayed multiple times, and the project was fraught with challenges—unfinished engineering, unreliable disk drives, and unfulfilled software promises. The factory for production didn’t even exist.
Facing these hurdles, Jobs decided to take the 100-member Macintosh team on an off-site retreat to a beachside hotel in Carmel, California. The aim was to boost team spirit and prepare for the final push. Here, Jobs delivered a pivotal lesson to his talented yet sometimes undisciplined team, emphasizing the difference between creativity and innovation.
Standing before an easel in a narrow conference room, Jobs revealed a bold statement: “REAL ARTISTS SHIP.” He explained to his team that true artists don’t just create; they complete their work and deliver it to the world. “You are all artists,” he told them. “But real artists don’t hang on to their creations. Real artists ship. Matisse shipped. Picasso shipped. You are going to ship too.”
With this rallying cry, Jobs inspired the Macintosh team to push through their challenges and deliver a product that would revolutionize personal computing.
In 1979, the Macintosh was just an idea proposed by Jef Raskin, a veteran of the Apple II team. He envisioned an affordable, user-friendly computer, named Macintosh, that could sell for $1,000 if produced in high volumes. However, his proposal failed to gain traction among Apple’s board of directors and engineers, who were preoccupied with the Lisa project and issues with the Apple III.
By September 1980, the Macintosh project faced cancellation, but Raskin managed to secure a three-month extension. At the same time, Steve Jobs, then Apple’s vice president, was struggling to find his niche within the company. His interest in the Macintosh project eventually led to his appointment as its manager.
Jobs transformed the project with a small, dedicated team working independently of the corporate mainstream. He recruited talented engineers with promises of stock options, and the project, though still seen skeptically as “Steve’s folly,” began to gain credibility.
The Macintosh team prioritized simplicity and efficiency in their design, using inexpensive parts and creating user-friendly software inspired by the Lisa workstation. The lack of strict definitions allowed them to innovate continuously, leading to a better product. This small, tightly-knit team thrived on collaboration and rapid prototyping, with minimal bureaucratic interference.
One of Jobs’ first moves was to provide the team with a separate workspace behind a Texaco service station, away from Apple’s corporate headquarters. There were no set work hours or development schedules, fostering a culture of creativity and rapid iteration. Weekly meetings with Jobs allowed the team to report progress and stay aligned with the project’s goals.
The collaborative and flexible approach enabled the Macintosh team to make significant design tradeoffs and innovate effectively, resulting in a groundbreaking personal computer that would eventually redefine the industry.
Ship Early, Fail Fast & Move On
“Successful startups wiggle — they try something, they try something else, and they’re very quick to discard an old idea. Corporations may spend years on a belief system that is factually false, and they don’t actually change their opinion until after they’ve lost all the contracts, even though all the signs were there: nobody wanted to talk to them, nobody cared about the product, and yet they kept pushing it. So if you’re a CEO of a larger company, what you want to do is basically figure out how to measure this innovation so that you don’t waste a lot of time.”
– Eric Schmidt in a 2024 interview on Diary of a CEO podcast.
The “fail fast” approach, also known as “fail often” or “fail cheap,” is a business management concept and organizational psychology theory that promotes a trial-and-error process to quickly assess the long-term viability of a product or strategy. This methodology encourages businesses, particularly in the tech industry and Silicon Valley, to cut their losses early rather than continue investing in doomed projects.
The core idea is to identify potential failures before significant investment is made. By addressing concerns at the earliest stages, companies can avoid extensive research and development costs and problematic product rollouts. A critical principle within this concept is to “contain the downside risk—fail cheaply.” This means recognizing failure early enough to minimize its negative impact, both on the business’s investment and on the positions, jobs, and careers of the people involved.
However, the “fail fast” approach has its critics. Some argue it creates a culture of mediocrity and overestimates the learning benefits of failure. Despite these criticisms, it remains a cornerstone of agile methodology, which values speed of execution over perfection. In today’s complex business environment, the initial solution often becomes the foundation for further development and learning.
Facebook’s evolution of mottos—from “Done is better than perfect” to “Move fast, break things,” and later to “Move fast, but please, please, don’t break anything”—illustrates the practical application of the “fail fast” philosophy.
Several principles of agile methodology are linked to the “fail fast” concept, including:
- Achieving customer satisfaction by delivering valuable work early and consistently.
- Trusting and supporting team members to try, fail, and learn from their failures.
- Providing regular feedback to ensure teams are aware of any failures.
- Breaking down large projects into smaller, quickly completed tasks to reduce risk and foster experimentation.
- Measuring progress by the amount of work done, with “fail fast” creating more measurable steps for proper tracking.
In essence, the “fail fast” approach is about embracing failure as a stepping stone to innovation and success, fostering a corporate culture that values learning and adaptability.
The Mythical Project Plan
“Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
– Douglas Hofstadter in his book Gödel, Escher, Bach: An Eternal Golden Braid (1979)
A study by McKinsey found that, regardless of project size, 53% of all IT projects are not completed on time, 41% are not completed within budget, and 56% fail to deliver the intended benefits.
Under 1% of IT projects meet all three of these core goals, and only one in 14 is both on time and within budget. Projects that failed to meet expectations exceeded their budgets by 75%, overran their schedules by 46%, and generated 39% less value than predicted, on average. McKinsey has also conducted research on large projects with budgets greater than $15 million; due to the operational scale, these trend worse, some even spiraling into what are called “black swan” projects, where a series of mishaps and overruns leads to large losses or even bankruptcy.
These numbers represent a broad range of IT project categories across many different industry sectors, and while the big trend stats abstract away the details, this research indicates that the success rates in the category of “new software development” are worse than average.
Experienced financial analysts, software engineers, architects, product owners and project managers are aware of optimism and overshoot. Why does it persist? And why is it more prevalent in larger companies undertaking larger projects?
Superficially, the answer is straightforward: the larger the project scope, the more people are involved, the more points of potential miscommunication there are, and the less able anyone is to accurately plan for all possible contingencies. Smaller teams are more likely to execute effectively thanks to less administrative overhead — it’s easy to coordinate a project involving five people, but fifty and beyond is a different matter. Any business that successfully scales beyond the cottage startup phase must implement standardized processes, procedures, practices, and intentional redundancies to ensure business continuity. But the line between autonomous, agile business units and sprawling bureaucracy is difficult to draw, and once bureaucracy is established it tends to expand, too infrequently redressed until the structural problems become endemic and entrenched as culture.
In computer science we have familiar systems patterns like garbage collection, which reclaims memory from objects that are no longer in use so it can be put back to productive work. Similarly, our methodologies for collaborating emphasize iteration on small units of work and continuous improvement in the spirit of the scientific method: experiment until satisfactory solutions emerge. Conversely, bureaucracies emphasize planning, reproducibility, consistency, predictability, stability, and security at scale through the expansion of processes, rules, measurement, oversight, management, regulations, and controls. Bureaucracies, as human cultures, seek to preserve norms and methods as institutions that are formal, well-organized, hierarchical, and difficult to change. To put it in software terms, bureaucratic operating systems don’t “ship” with efficient garbage collection installed by default.
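The analogy can be made concrete. Below is a minimal Python sketch of the garbage-collection pattern referenced above: a reference cycle that plain reference counting can never reclaim, swept up by CPython's cycle collector. (The `Node` class is a made-up illustration, not anything from the text.)

```python
import gc

class Node:
    """Two of these pointing at each other form a reference cycle."""
    def __init__(self):
        self.partner = None

a, b = Node(), Node()
a.partner, b.partner = b, a  # each keeps the other's refcount above zero

# Drop our own references; the cycle is now unreachable garbage.
del a, b

# The cycle collector sweeps through and reclaims the orphaned pair.
unreachable = gc.collect()
print(unreachable >= 2)  # True: at least the two Node objects were freed
```

A bureaucracy without an analogous mechanism simply accumulates such cycles: processes that exist only to sustain each other, with nothing external referencing them.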
Learning From Failure
I find the most interesting business stories fall into two broad categories: how great companies soar to success, and how great companies recover from near catastrophic failure. There’s not much to learn from the romantic startup trope, as we tend to only focus on the few bets that win and disregard the vast majority that don’t. It’s much more interesting to look at great companies between their second and third acts and whether the story turns out to be a comedy or tragedy.
There’s an economic sea change happening. When capital markets tighten, there are fewer private bets and generally a higher aversion to risk, yet innovation requires a lot of bets. When the velocity of capital decreases, large corporations and government remain at the table. Silicon Valley’s overnight success stories have been so celebrated in the annals of publications like Fast Company over the past thirty years that it’s easy to overlook the slow companies that have navigated the three-act play multiple times, in epic sagas that in some cases span decades.
In this category there are two companies we can learn from: Eastman Kodak and Nokia.
Nokia
On February 11, 2011, at a press conference in London, Nokia CEO Stephen Elop unveiled a new strategic direction for the company. He announced a “strategic partnership” with Microsoft, focusing on integrating Microsoft’s Windows Phone into Nokia’s smartphone lineup. This shift involved phasing out Nokia’s in-house Symbian and MeeGo operating systems, a process initially expected to conclude by 2016 but completed by January 2014. Elop, quoting Winston Churchill, emphasized optimism in the face of challenges.
The decision to partner with Microsoft was finalized just hours before the announcement. Nokia’s chairman, Jorma Ollila, backed the alliance, expressing confidence in a strong recovery for the business. This move marked a significant change in strategy as Nokia aimed to regain its competitive edge in the rapidly evolving smartphone market.
It was a sensible but risky strategy. With the benefit of hindsight, it failed.
In 2010, before Elop took the reins, Nokia held a commanding 33% market share. By the time he departed, this figure had plummeted to under 3%. Elop, a former Microsoft executive, had tried to steer Nokia through the burgeoning smartphone wars against Apple’s iOS and the open-source Android platform, ultimately failing. His attempt to leverage Microsoft’s mobile OS proved disastrous for both companies, but Microsoft could absorb the loss. They sold Nokia’s phone division in 2016 for a mere $350 million and shifted their focus to cloud computing, whereas Nokia was left reeling with a cumulative €4.9 billion loss and a diminished executive team.
Despite the collapse of its phone business, another division of Nokia, the Nokia Solutions and Networks department, thrived under the leadership of Rajeev Suri. Appointed in 2009, Suri turned this unprofitable division into a profitable and stable one by 2014. When he became CEO in 2014, Suri embarked on a comprehensive restructuring plan, cutting costs and refocusing the company.
Suri’s strategy centered on financial stability, rebuilding shareholder confidence through share buybacks funded by a deal with Microsoft. This strategy paid off when Nokia acquired Alcatel-Lucent in 2015 for $16.6 billion, gaining access to Bell Labs and their 29,000 patents, allowing Nokia to invest heavily in 5G technology.
Nokia’s fortunes began to change significantly in 2018 with the introduction of ReefShark chips, which tripled their 5G bandwidth capability and reduced power consumption. This innovation led to partnerships with major telecom companies like NTT DOCOMO, AT&T, and Vodafone, securing over 300 commercial 5G agreements by late 2024 and capturing 29% of the global 5G market outside China.
By 2022, Suri’s restructuring efforts had increased Nokia’s revenue to over $20 billion. Today, Nokia’s market cap stands at about $25 billion, a tenth of its peak, but with newfound stability.
Stephen Elop’s risky strategies ultimately led to his and Nokia’s mobile division’s downfall, whereas Rajeev Suri’s focus on financial stability and strategic investments has allowed Nokia to survive and thrive in the technology sector.
Eastman Kodak
Eastman Kodak is near and dear to me. My family and all my childhood friends’ families worked for Eastman Kodak’s chemical division, which in 1994 split off as the highly profitable Eastman Chemical Company, leaving behind the debt-saddled photography and imaging business. Once a titan in the photography industry, Kodak controlled 90% of the US film market and boasted a $30 billion valuation. However, the company’s reluctance to embrace the digital revolution led to a sharp decline, culminating in a bankruptcy filing in January 2012.
Kodak’s descent was lengthy and arduous, involving 19 months of bankruptcy proceedings. Despite pioneering the digital camera, Kodak failed to capitalize on it, and at the time of its filing the company was saddled with $6.75 billion in liabilities. The bankruptcy process helped eliminate $4.1 billion of debt, but Kodak still owed over $2.5 billion. The company had hoped to raise $2 billion by selling its digital patent portfolio but secured only $527 million.
A significant portion of Kodak’s debt was owed to the UK Kodak Pension Plan, which represented $2.8 billion in pension obligations. Kodak managed to sell its imaging and document units to the Pension Plan for $650 million, which led to a $2.1 billion debt reduction. Although creditors were left with minimal repayment, Kodak was allowed to exit bankruptcy in August 2013 under the supervision of a federal bankruptcy judge.
In March 2014, Jeffrey J. Clarke took over as CEO, bringing a wealth of experience from companies like HP, Travelport, and Orbitz. Clarke focused on listening to Kodak’s employees and prioritizing rapid innovation. He streamlined Kodak’s operations, fostering a culture of agility and innovation.
Under Clarke’s leadership, Kodak doubled down on a pivot to the commercial printing market that had begun during bankruptcy. In February 2013, Kodak had unveiled the Prosper 5000XLi Press, a high-speed, high-quality printer that was a genuine breakthrough, though not enough to immediately restore profitability. Kodak persisted, releasing the even more advanced Prosper 6000 in June 2014, which further strengthened its position in the printing industry.
Kodak’s commitment to innovation paid off. By 2016, they recorded their first annual profit post-bankruptcy, and in 2022, they launched the Prosper 7000, the world’s fastest digital press. Alongside their success in commercial printing, Kodak saw a resurgence in film photography, driven by a growing niche market that valued analog experiences. Esteemed directors like Christopher Nolan and Quentin Tarantino continued to use Kodak film for their projects.
By 2023, Kodak reported over $1.1 billion in revenue and a net income of $75 million. The company, now significantly different from its pre-bankruptcy self, had found stability through innovation and strategic leadership. Kodak’s turnaround from the brink of collapse is a testament to the right leadership and the power of reinvention, mirroring the resilience seen in other companies like Nokia.
Planning Big Problems
The problem of accurately planning complex projects has been around as long as humans have been collaborating to build complex, novel things. From bridges to dams to nuclear plants to large software projects, humans have managed to coordinate these efforts, and sometimes they even finish on time and under budget. In the early days, it was assumed that software would follow patterns similar to any large-scale engineering project: any component with a reasonable precedent should be relatively predictable, and these components would combine to allow a reasonable forecast of project time and cost. This turned out to be untrue, because the means of building software moved beyond assembly code to human-friendly programming languages that are abstractions, with fewer obvious limits and many possible (and often highly inefficient) paths to solve the same problem. Even when the technical aspects of a software project are highly predictable, such as onboarding an organization onto a well-established enterprise software platform, administrative overhead, bureaucratic inefficiency, resistance to change, poor fit for purpose, and dysfunctional communication can radically change the timeline.
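The point about many possible, often highly inefficient paths to the same solution is easy to demonstrate with a toy sketch in Python (a made-up illustration, not from the text): two functionally identical implementations of the Fibonacci sequence whose costs differ by orders of magnitude. Nothing in the language forbids the first one.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Correct, but exponentially slow: the call tree grows ~1.6x per step."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """The same result in linear time, by caching each subproblem once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Identical answers, wildly different costs: the naive version makes
# over 2.6 million recursive calls for n=30; the memoized one makes 31.
assert fib_naive(30) == fib_memo(30) == 832040
```

Two programs that pass the same tests can hide a similar gap in schedule and cost, which is part of why estimating software by analogy to components with "reasonable precedent" breaks down.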
Early methods of software project management focused on extensive research and planning up front, sometimes known as waterfall planning, in which each stage of a project would cascade to the next in a well-planned, well-executed sequence, starting with a comprehensive stage of design and documentation of requirements. The problem was that it didn’t work: the plans inevitably changed when problems and limitations were discovered during coding, or when stakeholders provided feedback. Customers and stakeholders change their minds. An unexpected technical discovery, whether a groundbreaking innovation or a catastrophically incorrect design assumption, could change the scope in a matter of days. Moreover, the competitive market landscape is constantly shifting during a product development life cycle. A competitor’s release and feature set could make the current plans obsolete overnight.
Perhaps the best-known description of this problem was articulated in The Mythical Man-Month (1975) by Fred Brooks, which included “Brooks’ Law”: “Adding manpower to a late software project makes it later.” By 2001, the problem was so obvious to the engineers developing software that a group of seventeen software developers got together to draft and sign a revolutionary declaration called The Agile Manifesto. It soberly accepted the impossibility of large project estimation and demanded that teams instead focus on small, deliverable increments of working, demonstrable software, repeatedly shared with stakeholders, that would eventually build up to a satisfactory release.
You cannot “transform” to Agile (whatever that means). A gazelle is born agile. Gazelles don’t take 2-day classes and bring in armies of coaches to become agile. They just are. A CEO who wants more agility needs to change themselves first. Once they are, the org will follow.
– Allen Holub
The core objective of the agile way was to admit that stakeholders rarely knew precisely what they wanted, and to avoid the resulting pitfall of spending massive amounts of time and money on things that were unclear and unknown. It formalized what software engineers had been doing all along. It substituted experimentation, speed, transparency, and constant feedback on evolving prototypes for the slow, highly orchestrated planning approaches that hadn’t worked. The idea was that, given this degree of flexibility and transparency, stakeholders could see results faster, provide feedback, and more accurately forecast and announce release dates as the projects materialized, or even cancel a project if it was trending to be too costly or impossible. In other words, an agile project is essentially experimental research and development until it proves to be feasible and incubates into a shippable, high-quality consumer software product.
Agile emphasized the importance of small, relatively autonomous teams working toward small incremental goals, because this method of working had perennially proven to be more efficient.
Somewhere along the way, the term “Agile” was co-opted by corporate consultants and transformed into something more palatable to customers who wanted to jump on the “Agile” bandwagon they read about in Forbes or Fast Company, but who still demanded predictability — a compromise antithetical to the original premise. The mutations that emerged are usually a hybridization of agile and waterfall, often called agilefall: ambitious and ambiguous promises from product management in a five-to-ten-minute pitch deck to secure funding, little to no planning or feasibility discovery up front, agile methodology at the team level to “adapt” to constantly evolving requirements, and an arbitrary deadline promised at the outset by which the product will deliver all the promised features on time and on budget.
On the positive side, this “agilefall” approach accepts the reality that too much planning is a waste of time, because too many technical variables are unknowable at the outset of a project, and that specific requirements are emergent and subject to change. On the negative side, there is a promise to deliver a product on a specific date no matter what, with insufficient feasibility discovery to estimate accurate timelines. There is typically a project or program management layer tracking the trendline of fixed milestones toward the target completion dates, all while the milestone requirements remain emergent and ambiguous. While there are practically unlimited combinations of this sort of “agilefall” in practice, few of them accept the fundamental realities of time and uncertainty that the original Agile Manifesto sought to address. Accepting that uncertainty is simply anathema to the common sensibilities of any business person accustomed to fixed-price or cost-plus business arrangements. The paradox is that asking a group of salaried employees to estimate the cost of building something that has never been built before yields, at best, what engineers call a SWAG: a Scientific Wild-Ass Guess.
Work Small & Fail Fast
We often find that small to medium-sized companies with inspirational leaders who staff their teams with experienced and motivated talent execute on projects more effectively than large bureaucratic organizations. It’s practically a trope at this point that Silicon Valley innovated because so much concentrated private venture capital was willing to gamble on small teams that demonstrated the potential to innovate and disrupt. As a musician, this is a familiar strategy: it’s like a record label that signs a roster of artists and hopes that at least one or two will recoup their investment. A venture capital fund can make ten bets, and if just one pays off, it can not only cover the losses on the others but potentially reap massive rewards in an eventual IPO.
Superlative examples like Instagram, which employed only thirteen people at the point of its billion-dollar acquisition by Facebook (now Meta), demonstrated that extremely successful consumer software products don’t need an army of employees to succeed. In fact, perhaps the opposite is true.
One of the key differences between a lean, fast company and a bloated one is the administrative overhead that is often an inherent byproduct of management. While effective management is absolutely essential to organization and execution, in a lean company the number of managers is kept to a minimum and the org chart is not very deep. Engineering, program, product, and project manager roles may be vertically consolidated, or the team leads, who have a nuanced and complete understanding of project progress, communicate directly with senior management and stakeholders on a regular basis.
The issue of middle management bloat has been a subject of scrutiny since the rise of the knowledge worker economy. Middle managers often find themselves entangled in the role of information coordinators among their peers, which detracts from their capacity to lead and oversee the detailed execution of project goals. Iconic leaders like Steve Jobs and Elon Musk, despite their distinct personalities, shared a hands-on approach in their areas of expertise and a relentless work ethic, which helped them eliminate inefficiencies within their organizations.
While managers may not be experts in every domain they oversee, the most effective ones possess a blend of project management skills and sufficient domain knowledge to engage effectively with their teams. In contrast, the bureaucratic incentive structure often rewards spinning information as it travels upward, distorting facts through a lack of understanding, and hiring subordinates or collaborating with colleagues who pose no threat to one’s status. In the worst cases, this leads to creating busywork to simulate value rather than genuinely delivering it.
Balancing Agile At Scale
Allen Holub offers a set of valuable heuristics for being agile, which he outlines in his lectures and writings:
- Psychological safety, respect, and trust are fundamental to any successful organization. These elements form the bedrock for effective work, which is inherently interconnected. Changing one aspect means altering the entire system; piecemeal adjustments are insufficient for genuine improvement.
- Processes should serve the people who use them, placing individuals at the forefront. When employees develop these processes, they are more effective and efficient. Collaboration is key, not negotiation or isolated heroic efforts. Optimal results are achieved when customers, business people, and developers work together.
- Embracing change at any stage—in organizations, processes, products, and plans—is crucial. Rigidity is incompatible with agility. Focusing on outcomes rather than mere output ensures higher quality results. Knowledge work, distinct from factory or construction tasks, requires continuous learning. This learning encompasses both product development and the methods used to create them.
- Continuous improvement stems from observing work practices and addressing problems as they arise. This proactive approach focuses on systems rather than individuals, fostering ongoing enhancements. Simplicity is vital in everything from organizational structure to product details, avoiding unnecessary complications for unpredictable futures.
- Everything is transient in the workplace, including products, organizations, and processes. Experimentation is a constant. Enhancing customers’ lives and work requires valuable, consistent deliverables. This process involves frequent communication, understanding user needs, and collaborative solutions.
- Holistic thinking is paramount. Working on complete products rather than isolated projects eliminates the need for traditional project management. Rapid, continuous feedback ensures swift adjustments, maintaining high-quality standards without compromising efficiency.
- Quality remains non-negotiable in all aspects, from testing to final delivery. Strategic plans, rather than tactical maneuvers, guide the organization. Predictions and estimates are inherently unreliable and should not be confused with promises.
- Progress is measured by delivering valuable items into customers’ hands, with flexibility for changes in their needs. Management’s role is to provide strategic guidance and support, trusting teams to determine execution methods. Effective teams are autonomous, self-organizing, and self-managing, selecting their tools and methodologies.
- Autonomy does not negate coordination; alignment on strategic goals and implementation technology is essential. Stable, self-selecting teams perform best, with work brought to them, not the reverse. Teams should have all necessary skills to achieve goals independently.
- Employees must begin each day refreshed, ensuring optimal performance. Core drivers include relatedness, autonomy, mastery, and purpose, while rewards and punishments are counterproductive. Effective communication relies on proximity and rich media, with face-to-face interactions being ideal.
- Management dysfunction often stems from fear and a lack of transparency. Ensuring transparent processes can mitigate these issues, fostering a healthier, more effective work environment.