What I Learned Scaling Nx to 100+ Engineers
If you’re building a platform with more than a handful of services, you eventually have to decide how to organize your code. There are three common options:
Polyrepo: every service or app gets its own repository. Simple to reason about at first, but you pay a coordination tax as the number of repos grows. Shared libraries become versioned packages, cross-cutting changes become multi-repo PRs, and keeping tooling consistent across dozens of repos is a full-time job.
Monorepo with basic tooling: everything in one repository, managed with something like npm/yarn/pnpm workspaces. You get colocated code and easier cross-project changes, but you’re on your own for caching, affected-project detection, and task orchestration.
Monorepo with a build system: everything in one repository, managed by a tool like Nx, Turborepo, Bazel, or similar. These tools give you dependency-aware task execution, caching, and the ability to only build and test what actually changed. This is the path we took with Nx, scaling it to over 70 services and around 100 engineers.
It was worth it. But it took more discipline than I expected.
Decide on your dependency strategy early
The first decision you need to make is whether you’re going with a single set of shared dependencies at the root or allowing individual projects to manage their own. Nx supports both, and that’s the problem. If you don’t pick one and enforce it, you’ll end up with both.
That’s what happened to us. Most projects pulled from the root package.json, but a few had been created with their own nested package-lock.json. This caused dependency drift between those projects and the rest of the repo: one project would be running React 18.2 while everything else was on 18.3. Tests would pass in isolation and fail in CI. Debugging these issues always took longer than it should have, because the root cause (a stale lockfile in a nested directory) wasn’t obvious.
Pick a strategy, document it, and lint for violations. If you find a nested lockfile, that should be a CI error, not something that quietly drifts for months.
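The check itself can be a few lines of script run as a CI lint step. This is a sketch, not a drop-in tool: the function name is made up, and it assumes you feed it the repo's tracked file paths (e.g. the output of `git ls-files`).

```typescript
// Sketch of a CI lint that fails the build when a lockfile exists anywhere
// but the repo root. The lockfile names are the usual npm/yarn/pnpm ones;
// the caller supplies repo-relative paths.
const LOCKFILES = new Set(['package-lock.json', 'yarn.lock', 'pnpm-lock.yaml']);

export function findNestedLockfiles(paths: string[]): string[] {
  return paths.filter((p) => {
    const parts = p.split('/');
    const name = parts[parts.length - 1];
    return LOCKFILES.has(name) && parts.length > 1; // nested = not at the root
  });
}

// In CI:
//   const offenders = findNestedLockfiles(trackedFiles);
//   if (offenders.length > 0) { console.error(offenders); process.exit(1); }
```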
Remote caching is not optional
We required branches to be up to date with main before merging. This is a common policy, but it means that every time someone merges a PR, every other open PR needs to rebase and re-run CI. Without caching, this meant full rebuilds on every rebase. CI times ballooned. Engineers were spending more time waiting for green checks than writing code.
Remote caching changed this completely. We built a custom remote cache rather than using Nx Cloud, which gave us more control over storage and retention policies. If the code hadn’t changed, the cached result from a previous run was reused. Rebasing onto main, when your project wasn’t affected by the merged changes, went from a 30-minute CI run to under a minute.
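Under the hood, this kind of caching is content-addressed: hash everything that can affect a task’s output, and use the hash as the cache key. A minimal sketch of the idea (not our actual implementation, and not Nx’s internals; all shapes here are illustrative):

```typescript
import { createHash } from 'node:crypto';

// Hash everything that can affect a task's output: the project's source
// files, the hashes of its upstream dependencies, and the command itself.
export function computeTaskHash(inputs: {
  sourceFiles: Record<string, string>; // path -> file contents
  dependencyHashes: string[];          // hashes of upstream project builds
  command: string;                     // e.g. "nx build my-app"
}): string {
  const h = createHash('sha256');
  const entries = Object.entries(inputs.sourceFiles)
    .sort(([a], [b]) => a.localeCompare(b)); // order-independent hashing
  for (const [path, contents] of entries) h.update(path).update(contents);
  for (const dep of [...inputs.dependencyHashes].sort()) h.update(dep);
  h.update(inputs.command);
  return h.digest('hex');
}

// The remote cache is then a key-value store: if the key exists, download
// the stored outputs instead of re-running the task.
```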
If you’re running a monorepo at any real scale without remote caching, you’re paying a massive hidden tax on every PR.
Affected detection is powerful but opaque
Nx can determine which projects are affected by a given change and only run tasks for those projects. This is one of its most important features at scale. It’s also one of the most frustrating when it gets it wrong.
We regularly saw CI trigger tasks for projects when, as far as we could tell, nothing relevant had changed. The hashing logic Nx uses to decide whether a project needs to rebuild is not easy to inspect. You can see that a project was marked as affected, but working backward from there to understand why (which file change, which dependency edge) involves more digging than it should.
It would help enormously if there was a straightforward way to diff the hashes between two commits and see exactly which inputs changed for a given project. Without that, you end up with engineers re-running CI on PRs that shouldn’t have been affected, or worse, assuming the tool is wrong and ignoring the signal.
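The diffing half of such a tool is trivial; extracting per-input hashes for a project at each commit is the missing piece. Assuming you could get hold of those maps, the comparison would look something like this (hypothetical shapes, not an Nx API):

```typescript
// Given per-input hashes for the same project at two commits, report exactly
// which inputs changed. The maps are illustrative: input name -> hash.
export function diffProjectInputs(
  before: Record<string, string>,
  after: Record<string, string>,
): { added: string[]; removed: string[]; changed: string[] } {
  return {
    added: Object.keys(after).filter((k) => !(k in before)),
    removed: Object.keys(before).filter((k) => !(k in after)),
    changed: Object.keys(after).filter(
      (k) => k in before && before[k] !== after[k],
    ),
  };
}
```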
Get your CODEOWNERS right
In a monorepo with many teams, the CODEOWNERS file is how you route reviews. It maps directory patterns to teams. This is simple in theory and fragile in practice.
If your folder convention is apps/team-name/project-name and someone creates a project at apps/project-name (skipping the team directory), it might match a different CODEOWNERS rule. Now a team that has no context on the change is required to approve the PR. This blocks velocity and annoys everyone involved.
We found that CODEOWNERS drift was one of the most common sources of friction. It’s worth investing in automation that validates new projects conform to your directory conventions and that CODEOWNERS entries stay current.
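A conventions check like that can be small. This sketch validates that projects under apps/ live at apps/team-name/project-name with a recognized team segment; the team list is hypothetical, and in practice you’d load it from wherever your org defines teams.

```typescript
// Sketch: every project under apps/ must live at apps/<team>/<project>,
// exactly two levels deep, so directory-based CODEOWNERS rules route to
// the right team. Team names here are made up.
const KNOWN_TEAMS = new Set(['payments', 'identity', 'growth']);

export function violatesConvention(projectRoot: string): boolean {
  const parts = projectRoot.split('/');
  if (parts[0] !== 'apps') return false; // only apps/ is checked here
  return parts.length !== 3 || !KNOWN_TEAMS.has(parts[1]);
}
```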
Breaking changes hit differently in a monorepo
When a shared dependency introduces a breaking change, you feel it everywhere at once. We use Vitest for testing across the repo. When Nx updated its Vitest plugin to v4, it included a breaking change in how test configuration worked. This wasn’t a matter of updating one project. It meant updating the test configuration for practically every project in the repo.
In a polyrepo world, you’d roll this out incrementally. In a monorepo, you either update everything at once or you maintain two parallel configurations. Neither option is great. This is the tradeoff: you get consistency and visibility across all your projects, but you also get the blast radius of every breaking change applied to all your projects simultaneously.
Shared packages need careful consideration
Our design system lived in the monorepo alongside around 20 business-facing frontend applications. This meant that any change to the design system triggered CI for all 20 downstream apps.
On one hand, this is the whole point. You know immediately whether a design system change breaks something. On the other hand, it means that even a small token update or a new icon triggers 20 build and test pipelines. The overhead added up, and it made the design system feel expensive to change even when the changes were trivial.
In practice, the design system didn’t change often once it was established, so this was manageable. But if I were setting this up again, I’d think carefully about whether a heavily-depended-on package like a design system belongs in the same repo as its consumers, or whether it’s better served as a versioned external package where consumers opt into updates on their own schedule.
One practical middle ground: keep the design system source in the monorepo, but have downstream feature libraries depend on the published version rather than the repo version. This way a design system change triggers its own build and publish pipeline, but doesn’t cascade CI across every consumer. Consumers update to new versions on their own cadence. This has its own tradeoffs. You lose the immediate integration signal, and you can end up with different apps running different versions of the design system. But for an upstream package that changes frequently and has many downstream consumers, decoupling the CI graph is often worth it.
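Concretely, under that model a consumer declares the published package in its package.json instead of resolving it from the workspace (package name and version are hypothetical):

```json
{
  "dependencies": {
    "@acme/design-system": "3.2.1"
  }
}
```

With pnpm workspaces, the in-repo alternative would be a workspace:* specifier; the point of the middle ground is that consumers pin a published version and upgrade deliberately.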
Tailwind in a monorepo has rough edges
Tailwind is great as a technology, and LLMs are particularly good at writing and editing it. But in a monorepo context, it felt awkward. Tailwind’s content scanning assumes a single project: you point it at glob patterns that match files across directories, and it generates one set of utility classes for everything they match.
This caused issues for us. React feature libraries (shared packages intended to be consumed by apps) were picking up Tailwind tokens in their bundled output. The result was unexpected styling in the consuming application because the library was shipping CSS that didn’t belong to it.
Getting Tailwind scoping right in a monorepo takes deliberate configuration, and it’s not well-documented for this use case. If you’re using Tailwind across many packages in a single repo, budget time for getting the content paths and build boundaries correct.
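One way to get the scoping right is per-app configs: each app scans only its own sources plus the libraries it actually consumes, and feature libraries don’t run a Tailwind build at all. A sketch with hypothetical paths (Nx also documents a createGlobPatternsForDependencies helper in @nx/react/tailwind that derives the dependency globs from the project graph):

```typescript
// Hypothetical per-app Tailwind config (e.g. tailwind.config.ts for one app):
// scan only this app's own sources plus the shared libraries it actually
// imports, instead of repo-wide globs.
const config = {
  content: [
    'apps/checkout/src/**/*.{ts,tsx,html}',
    'libs/shared/ui/src/**/*.{ts,tsx}', // a library this app depends on
  ],
  theme: { extend: {} },
  plugins: [],
};

export default config;
```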
Your CI platform matters more than you think
We initially ran CI on an internal enterprise platform built on Jenkins. It was the required tooling at the company. It was also designed for small, individual repositories, and it showed.
With more than 20 PRs open at once, the platform struggled. Builds queued. Parallelism was limited. The feedback loop that makes a monorepo productive (fast, targeted CI on affected projects) was completely negated by a CI system that couldn’t keep up.
When we switched to GitHub Actions, the problems disappeared almost immediately. Nx’s affected detection plus GitHub Actions’ parallelism meant we could run targeted CI for dozens of concurrent PRs without bottlenecking. The monorepo went from feeling like a burden to feeling like the productivity multiplier it was supposed to be.
If your CI platform can’t handle the concurrency and parallelism that a large monorepo demands, the monorepo will make things worse, not better. The tool matters less than the infrastructure running it.
Monorepos and agentic coding
One thing I didn’t anticipate when we adopted Nx is how well monorepos pair with agentic coding tools. Having all your services, libraries, and configuration in a single repository means a coding agent has full visibility into the platform. It can trace a type from the API layer through shared packages into the frontend. It can make a cross-cutting change and run the relevant tests, all without switching repositories or guessing about downstream effects.
You could approximate this by cloning multiple repos into a single workspace and writing a high-level document explaining the layout. But a monorepo with unified tooling gives you this for free. The agent can run nx affected to know what it broke. It can run nx test project-name to validate a change. The feedback loop is tight, and agents thrive on tight feedback loops.
I think this alone will push more teams toward monorepos in the coming years. The organizational overhead is real, but the leverage you get from giving a coding agent access to your entire platform in a single context is significant.
Would I do it again?
Yes.
The benefits of a monorepo at scale are real. Shared tooling, unified caching, cross-project visibility, atomic changes that span multiple services. We were able to standardize build, test, and deploy pipelines across 70+ services. Engineers could move between teams and find the same setup everywhere.
The Nx team specifically has been great. I was a regular in their Slack, and they consistently went out of their way to help when we ran into issues. The community around Nx is responsive and genuinely helpful, which matters when you’re running into edge cases that the documentation doesn’t cover.
Turborepo is another solid option, particularly if its opinions align with your existing tooling. It’s more opinionated than Nx, which can be a feature or a limitation depending on your setup.
The main thing I’d emphasize to anyone considering this path: the tool is the easy part. The discipline is what makes it work. Dependency strategy, CI infrastructure, code ownership, directory conventions. Get these right and a monorepo is a multiplier. Get them wrong and you’ll spend more time fighting the tool than building software.