Startups are known for the mantra “move fast and break things.” This motto, popularised by Facebook’s early days, captures a willingness to push products out rapidly and worry about consequences later. For many young companies, speed and disruption have been badges of honour, a way to outpace incumbents by any means necessary. We at Technology Policy Advisory have seen this ethos fuel incredible growth. But as technology evolves, especially with powerful new tools like artificial intelligence (AI) entering the fray, can startups afford to follow this model indiscriminately? In today’s landscape, true innovation may require a more balanced approach that couples agility with responsibility.
The Classic Startup Ethos: Moving Fast and Iterating
The startup world traditionally operates at breakneck speed, driven by the imperative to achieve rapid growth and market fit. Conventional wisdom encourages launching a minimum viable product (MVP) quickly, then iterating based on user feedback. This lean approach has given rise to some of the most transformative companies of our era. For example, Uber began as a bare-bones ride-hailing app in a single city and gradually refined its features; fast forward, and Uber became a global giant valued in the tens of billions, operating in over 70 countries and completing more than 9.4 billion trips in 2023. Similarly, Airbnb, which started with a simple room-sharing website, grew into a platform worth an estimated $30 billion, operating in over 220 countries and regions with more than 2 billion all-time guest arrivals. These successes underscore how starting small and iterating rapidly can turn a startup into an industry juggernaut.
This culture of speed has also popularised strategies like “blitzscaling,” which prioritises explosive growth over efficiency or stability. The idea is that if you’re not breaking things, you’re probably not moving fast enough. Indeed, moving quickly and accepting occasional mistakes allowed many startups to outrun slower competitors. However, the blitzscaling era taught hard lessons too: while it produced some mega-successes, it also led to a colossal failure rate, with many companies scaling unsustainably and flaming out. In other words, growing “at all costs” often incurs hidden costs of its own, from technical debt to customer mistrust.
High Stakes of Cutting-Edge Tech
Today, startups are increasingly building on emerging technologies that carry far higher stakes than a typical app or website. Artificial intelligence is a prime example, not just an incremental feature, but a paradigm shift in how products and services operate. AI’s power lies in processing vast data, recognising patterns, and making decisions at superhuman speed. This offers startups incredible capabilities, but it also introduces new complexities and risks. AI algorithms can be opaque, leading to the notorious “black box” problem where even the creators struggle to explain how decisions are made. If something goes wrong with an algorithm, diagnosing and fixing it is not as straightforward as debugging a simple app. As one computer scientist put it, ensuring an AI algorithm is fair and explainable is a challenge that’s still “quite far off” from being solved.
Some applications of AI might seem low-risk, say, a machine learning model suggesting products to online shoppers. But others venture into ethically fraught territory. Consider facial recognition or predictive policing algorithms: if the data feeding these systems is biased or incomplete, the software can end up discriminating against certain groups with serious real-world consequences.
Biased training data led Amazon to scrap an AI recruiting tool after it taught itself to penalise women candidates by mimicking patterns in the male-dominated résumés it was trained on. We have also seen an AI chatbot (Microsoft's infamous "Tay") go off the rails and start spewing hate speech within hours of being exposed to toxic online inputs. These examples highlight a sobering truth: when startups deploy advanced tech in the real world, unintended outcomes aren't just minor glitches; they can be headline-making blunders or, worse, harm people.
Crucially, these higher-stakes innovations blur the lines between moving fast and breaking things that matter. In less regulated arenas, a startup might get away with an “ask forgiveness, not permission” approach. But when the “things” at risk include privacy, safety, or lives, the margin for error narrows. Startups can no longer assume that if something breaks, they can simply iterate their way out of trouble. Innovation now comes bundled with accountability to users, to society, and to regulators.
Pause and Plan: Integrating Innovation Responsibly
Given the transformative potential and pitfalls of cutting-edge technologies, it’s wise for startups to pause and plan before leaping into the fray. This doesn’t mean losing the agility that defines a startup; it means channelling that agility in a thoughtful direction. Whether a company is sketching its first business plan or scaling up for a new market, introducing a powerful technology (like AI, biotech, or fintech innovations) calls for a clear-eyed strategy.
First, assess the fit. How does this new technology align with your startup’s core mission and values? Not every hot technology is right for every business. If AI or another innovation doesn’t fundamentally enhance your product’s value proposition, bolting it on for buzz can do more harm than good. If it does align, decide whether it’s central to your solution or just a supporting tool. This will guide how much investment and oversight it warrants.
Next, prepare the groundwork. Ask what infrastructure, talent, and data you need to implement the technology properly. Incorporating something like AI often requires expert data scientists, robust computing resources, and careful data curation. Under the hood, an AI-driven platform is very different from a traditional software service. Investing in testing and validation processes is essential. For example, if you deploy a machine learning feature, how will you monitor its outputs for errors or bias? Startups should build in mechanisms for human review or fail-safes, especially early on.
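To make the monitoring point concrete, here is a minimal sketch of one common approach: periodically computing selection rates per demographic group from a model's logged decisions and flagging large disparities. The threshold of 0.8 (the informal "four-fifths rule" used in some employment contexts) and all names here are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group's selection rate divided by the highest's.
    Values well below 1.0 suggest the model favours one group."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log from a screening model, tagged by group
log = ([("A", True)] * 40 + [("A", False)] * 10 +
       [("B", True)] * 20 + [("B", False)] * 30)

ratio = disparate_impact_ratio(log)
if ratio < 0.8:  # assumed alert threshold; tune per domain
    print(f"ALERT: disparate impact ratio {ratio:.2f} below 0.8")
```

Even a crude check like this, run on a schedule against production logs, turns "monitor for bias" from an aspiration into an alert a team can act on.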
Always consider unintended consequences. This is perhaps the most important mindset shift. It’s impossible to foresee every outcome, but you can identify many “what ifs” in advance. What if your AI chatbot starts responding inappropriately? What if your recommendation algorithm inadvertently favours or excludes a certain demographic? By brainstorming failure modes, you can implement safeguards or at least respond faster when issues arise. In practice, this might involve running a product through diverse test user groups to catch biases, or setting strict boundaries on an algorithm’s autonomy (e.g. having a human double-check high-stakes AI decisions). We have learned that even humorous quirks in an AI can erode user trust, while serious ethical blunders can derail a company’s trajectory. In sensitive domains, say a healthtech startup using AI diagnostics, moving fast without adequate validation could endanger lives. In short, speed must be balanced with due diligence.
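One safeguard mentioned above, having a human double-check high-stakes decisions, can be sketched as a simple routing rule: auto-approve only confident, low-stakes predictions and send everything else to a reviewer. The threshold and function names are assumptions for illustration.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tune per domain

def route_decision(prediction, confidence, high_stakes):
    """Return ("auto", prediction) only for confident, low-stakes
    outputs; otherwise route to ("human_review", prediction)."""
    if high_stakes or confidence < CONFIDENCE_THRESHOLD:
        return ("human_review", prediction)
    return ("auto", prediction)

# A hypothetical loan-screening model's outputs
print(route_decision("approve", 0.97, high_stakes=False))
print(route_decision("approve", 0.97, high_stakes=True))
```

The design choice is deliberate: the system defaults to human review, so a miscalibrated model errs on the side of slower, supervised decisions rather than unsupervised mistakes.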
This balanced approach doesn’t mean slowing to a crawl. It means being smart about when to speed up and when to tap the brakes. A startup can still be nimble and innovative while also building a culture of responsibility from day one. Doing so might save time in the long run, preventing PR disasters or regulatory run-ins that would consume far more resources to fix than the upfront effort to avoid them.
Ethics and Regulation
As startups integrate technology deeper into society’s fabric, governments and regulators around the world are paying close attention. We live in a time when data has been likened to “the new oil”, a resource as valuable and powerful as oil was in the industrial age. But like oil, if misused, data can pollute and cause damage.
Regulators have woken up to the fact that algorithms can shape financial outcomes, job prospects, justice, and more, and they are determined to ensure innovation doesn’t come at the expense of fundamental rights and fairness.
A wave of ethical and legal guidelines is emerging to govern how new tech is used. Bias and discrimination in AI systems are prime targets. Legislatures are keen to prevent AI from making consequential decisions that unfairly disadvantage people. In practice, this has led to laws in some jurisdictions that specifically regulate automated decision-making systems and mandate transparency.
Many privacy laws now even give individuals the right to opt out of being subject to purely algorithmic decisions.
Importantly, the regulatory focus isn’t only on AI. Other frontier industries are seeing similar scrutiny. Fintech startups, for instance, have to navigate financial regulations meant to protect consumers and stability. Health and biotech startups cannot simply “move fast” with experimental treatments without satisfying rigorous safety approvals.
Even platform-based startups like ride-sharing and home-sharing have faced regulatory backlash once their innovations began affecting public interests. Uber's growth, for example, was famously fuelled by skirting taxi laws, launching in cities even when it was technically illegal and then leveraging its popularity to change the rules. This bold approach helped Uber become a household name, but not without a trail of legal battles and fines (Pennsylvania once fined Uber $11.4 million for operating without permission). Likewise, Airbnb's success in enabling short-term rentals sparked concerns about housing costs in many cities; in response, local governments from Barcelona to New York have imposed stricter rules on short-term rentals to protect the long-term housing supply. These cases illustrate that when innovation "breaks" something important, be it laws, markets, or community norms, regulators will eventually step in to fix it.
For startup leaders, the rise of regulation is not necessarily a hindrance; it can be a guidepost. Embracing ethical design and compliance early can become a competitive advantage. It builds trust with users and preempts costly showdowns with authorities.
We advise startups to stay informed about the policy landscape in their sector. If you operate in AI, be aware of emerging standards on data handling, explainability, and user consent. If you're in a gig-economy or sharing-economy space, keep an ear to the ground on labour laws and local ordinances. By engaging proactively, perhaps by participating in industry self-regulation or certification schemes, startups can help shape the rules in pragmatic ways and avoid being caught flat-footed.
After all, regulators ultimately want technology to succeed responsibly. The old habit of charging ahead without regard for governance is “increasingly untenable” in the current climate.
The Path to Balanced Innovation
In the exciting, pressure-cooker environment of startups, it’s tempting to focus on being first to market and worry about consequences later. But as we integrate technologies that impact people’s lives at a profound level, innovation can’t be just about speed or disruption for its own sake. True innovation is as much about responsibility as it is about revolution. The mantra of “move fast and break things” may still have its place when experimenting in low-stakes domains, but when those “things” include societal values, customer trust, or human wellbeing, a more tempered approach wins out.
For the startup aiming to lead in the modern era, blending innovation with responsibility is the blueprint for lasting success. Moving fast is still possible, but let’s make sure we are not simply breaking things, but building things that make the world better without breaking its trust. In pursuing growth with guardrails and progress with principles, startups can indeed have the best of both worlds: the thrill of innovation and the goodwill that comes from doing it right.