Digital Archaeology for Engineering Leaders 🧱
Tech Due Diligence - When You Inherit an R&D Org You Don't Know
So here's the situation.
You've just inherited an R&D team in another country. You don't speak the language - of the code or the people. The architecture diagram is from 2018. There are 42 GitHub repositories, and no one remembers what half of them do. The team shrugs and says things like, "Oh yeah, that microservice kinda just works - don't touch it."
Congrats! You're now the captain of this spaceship. 🚀
In this context, tech due diligence isn't a checkbox exercise. It's detective work 🕵🏼‍♀️. It's anthropology. It's digital archaeology. You're trying to figure out what's been built, how it works (or doesn't), who understands what, and where the landmines are buried - all while smiling in team meetings and pretending like you're not silently panicking.
I've been there. This post is the playbook I wish someone had handed me on Day 1. It's how I start making sense of the mess, build trust with the team, improve the developer experience, and move the org forward - without rewriting everything or accidentally nuking morale.
Start with the People, Not the Code 👩🏼‍💻
When you step into a new team, especially in unfamiliar territory, the instinct to open GitHub and start poking around is strong. But resist it! The code can wait. The real story - and the real leverage - starts with the people. Spend time with the team at all levels, not just leads or architects. Talk to the developers who've been around forever, and especially the ones who've just joined. Chat with QA, DevOps, even the person who maintains the build scripts but sits on a different Slack channel.
Ask how they feel about the system. What's painful? What's delightful? What's been broken so long they stopped noticing? These stories give you a visceral sense of how the system operates in practice. They also reveal early insights about morale, confidence, and team health. People will hint at the parts of the system they avoid or whisper about the service that "just works" but no one dares to touch. These aren't just anecdotes - they're early indicators of DevEx friction points.
What's more, starting with conversations helps build psychological safety. Bring in more personal topics, too: What do they enjoy about the work? What are their career aspirations? It tells the team you're listening, not judging. You're not here to bulldoze - you're here to understand. That shift in posture can unlock the collaboration you'll need later when you propose change.
Build a Knowledge Map and Capability View 🗺️
Once you've had enough conversations to spot themes, begin stitching them together into a working map of how knowledge and capability are distributed. This isn't about charting a perfect org diagram or building a Notion wiki just yet. Instead, you're trying to answer a few practical questions: Who understands what? Who are the go-to folks for particular parts of the system? Where do we have redundancy - and where do we have fragile, single-person dependencies?
This kind of map often starts with post-it notes or a whiteboard (virtual or physical). You're mapping code ownership, operational expertise, and institutional memory - and identifying where the gaps are. Often, what emerges is a patchwork: certain repos no one claims, tools that only one engineer knows how to maintain, and pipelines everyone complains about but no one feels responsible for.
These gaps are not just technical risks; they're DevEx liabilities. When developers don't know who to ask or lack confidence that anyone understands a service deeply, it slows down progress and makes change feel dangerous. Building this knowledge map gives you the foundation to improve onboarding, reduce key-person risk, and surface the hidden work that keeps everything afloat.
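One lightweight way to seed this map is straight from Git history. The sketch below flags repositories where a single person owns almost all of the commits - the classic bus-factor-of-one risk. All repo and contributor names are hypothetical; in practice you'd feed it real counts from `git shortlog -sn` in each repo.

```python
def knowledge_concentration(commits_by_author: dict[str, int]) -> float:
    """Share of commits held by the single most active contributor."""
    total = sum(commits_by_author.values())
    return max(commits_by_author.values()) / total if total else 0.0

def flag_single_person_repos(repos: dict[str, dict[str, int]],
                             threshold: float = 0.8) -> list[str]:
    """Repos where one person owns more than `threshold` of the commit history."""
    return sorted(repo for repo, authors in repos.items()
                  if knowledge_concentration(authors) > threshold)

# Hypothetical per-repo commit counts, e.g. parsed from `git shortlog -sn`
repos = {
    "billing-service": {"dana": 412, "ori": 9},
    "auth-gateway":    {"maya": 120, "ori": 98, "lee": 75},
    "legacy-exporter": {"dana": 57},
}
print(flag_single_person_repos(repos))  # → ['billing-service', 'legacy-exporter']
```

Commit counts are a crude proxy for knowledge, but they reliably surface the "risk disguised as a hero" repos worth a conversation.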
Try to Understand What's Actually There 🌳
Now that you've talked to people and started piecing together how things are connected, it's time to dig into the repos. Ask for architecture diagrams and other formal docs - even if they're out of date. What matters isn't precision but intention. These diagrams tell you how the system was supposed to be structured, and that context is invaluable.
From there, go into GitHub and start exploring. Look at which repos are active, which ones have recent commits, and which ones haven't been touched in months (or years). Try to match code with deployed services. Often, you'll find orphaned repos still consuming cloud resources, contributing nothing but cost and confusion. Document what you can, and pay attention to what isn't documented - those are often the parts no one wants to deal with.
At this stage, you're not trying to do a full technical review. You're orienting yourself - mapping terrain, not optimizing it. You're answering, "What's actually live? What's running? And who's touching what?" Gaps in clarity here directly affect DevEx: if engineers don't know whether a repo is active, or if modifying a service feels risky, it slows down everything else.
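To separate the live repos from the orphans, the last commit date per repo is a decent first signal. A minimal sketch, assuming you've already collected the dates (for example with `git log -1 --format=%cI` in each clone); the repo names and timestamps are made up:

```python
from datetime import datetime, timedelta, timezone

def stale_repos(last_commit: dict[str, datetime], now: datetime,
                months: int = 12) -> list[str]:
    """Repos whose newest commit is older than `months` (30-day approximation)."""
    cutoff = now - timedelta(days=30 * months)
    return sorted(repo for repo, ts in last_commit.items() if ts < cutoff)

# Hypothetical data gathered with `git log -1 --format=%cI` per repo
now = datetime(2025, 7, 1, tzinfo=timezone.utc)
last_commit = {
    "auth-gateway":    datetime(2025, 6, 20, tzinfo=timezone.utc),
    "legacy-exporter": datetime(2018, 3, 4, tzinfo=timezone.utc),
    "old-poc":         datetime(2021, 11, 30, tzinfo=timezone.utc),
}
print(stale_repos(last_commit, now))  # → ['legacy-exporter', 'old-poc']
```

A stale repo isn't automatically dead - some are stable by design - but anything on this list deserves an owner or an archive button.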
Run Static Analysis - and Not Just for the Score 💪
Now that you have a sense of what's running, it's time to go deeper. Static analysis tools like SonarQube, CodeClimate, or DeepSource can help you get a quick view into maintainability, complexity, duplication, and code smells. You don't need to treat these scores as gospel - but they give you a consistent way to compare services and spot outliers.
Focus on where the rough edges are. You're looking for clusters of technical debt, but more importantly, for signs of inconsistent engineering standards. Are there pockets of clean, well-tested code? Are there entire services riddled with warnings and skipped tests? These differences often reflect not just technical maturity, but how much love and attention each part of the system receives.
Healthy codebases improve confidence, and confidence improves velocity. Developers are more willing to ship when they trust what they're working with. Static analysis surfaces the invisible factors that make engineering feel slow or stressful - and fixing them is often the fastest path to improving DevEx across the board.
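Once you export per-service metrics (from SonarQube or any of the tools above), spotting the outliers can be as simple as a standard-deviation check. A toy sketch with hypothetical debt ratios:

```python
from statistics import mean, stdev

def outliers(metric_by_service: dict[str, float], z: float = 1.5) -> list[str]:
    """Services sitting more than `z` standard deviations above the mean."""
    mu = mean(metric_by_service.values())
    sigma = stdev(metric_by_service.values())
    return sorted(s for s, v in metric_by_service.items() if v > mu + z * sigma)

# Hypothetical technical-debt ratios (%), e.g. exported per project from SonarQube
debt_ratio = {"auth": 3.1, "billing": 2.8, "search": 2.9, "exporter": 14.0, "ui": 3.5}
print(outliers(debt_ratio))  # → ['exporter']
```

The exact metric matters less than the comparison: one service that's wildly worse than its siblings is where the conversation should start.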
Code Coverage Tells a Story 📖
Code coverage isn't about chasing arbitrary thresholds. It's about understanding how safe it feels to make changes. A service with robust test coverage and stable builds is one that developers feel comfortable evolving. On the flip side, a codebase limping along with 12% test coverage - and a reputation for breaking in production - breeds fear. That fear changes how teams behave.
Developers avoid touching fragile code. They invent workarounds. They silently accumulate risk. Low coverage becomes a tax on innovation and experimentation. It's not just a testing issue - it's a trust issue.
So coverage data becomes a proxy for psychological safety. You're not aiming for perfection, but for enough confidence that changes don't feel like walking on glass. That confidence is a cornerstone of a healthy DevEx environment.
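If you want to turn raw coverage numbers into something the whole org can read at a glance, a simple RAG banding works well. The thresholds below mirror the ones used in the OKR examples later in this post; the service names and percentages are invented:

```python
def coverage_rag(percent: float) -> str:
    """Map a test-coverage percentage onto a Red/Amber/Green band."""
    if percent < 50:
        return "Red"
    if percent < 80:
        return "Amber"
    return "Green"

# Hypothetical per-service coverage numbers
for service, cov in {"auth": 46.0, "billing": 67.5, "search": 83.2}.items():
    print(f"{service}: {cov}% → {coverage_rag(cov)}")
```

Bands are blunt on purpose: a single color per service is easier to rally a team around than a decimal.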
Treat the Code Like a Crime Scene 🔪
This is where things get really revealing. Instead of reading the code, start reading its history. Dig into Git logs and analyze how files evolve over time. Look for patterns - which modules are changed most frequently? Which ones have the largest pull requests? Are there hotspots where every fix requires multiple reworks?
This kind of forensic analysis helps you pinpoint where technical debt lives, but also where cognitive load is highest. One file with hundreds of commits, all made by the same person? That's a risk disguised as a hero. Long-lived PRs with high churn? That's complexity bleeding into delivery.
Adam Tornhill's "Your Code as a Crime Scene" nails this perspective. You're not looking for bad code - you're looking for fragile process, silent friction, and hidden risk, all of which affect how developers work day to day. When the codebase stops surprising you, that's when DevEx improves.
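A bare-bones version of this forensic pass needs nothing more than `git log --name-only`. The sketch below counts how often each file changes and surfaces the top hotspots; the sample log text is invented, and real logs (merge commits, renames) would need a sturdier parser:

```python
from collections import Counter

def hotspots(git_log: str, top: int = 3) -> list[tuple[str, int]]:
    """Count file occurrences in `git log --name-only` output.

    Rough filter: skips commit headers and indented message lines.
    """
    files = [line.strip() for line in git_log.splitlines()
             if line.strip()
             and not line.startswith(("commit ", "Author:", "Date:", "Merge:", "    "))]
    return Counter(files).most_common(top)

# Invented sample of `git log --name-only` output
sample = """\
commit abc123
Author: Dana <dana@example.com>
Date:   Mon Jun 2 10:00 2025

    Fix billing rounding

src/billing/invoice.py
src/billing/tax.py

commit def456
Author: Dana <dana@example.com>
Date:   Tue Jun 3 09:30 2025

    Fix billing rounding, again

src/billing/invoice.py
"""
print(hotspots(sample))  # → [('src/billing/invoice.py', 2), ('src/billing/tax.py', 1)]
```

For anything serious, Tornhill's own code-maat tooling does this properly, including coupling and author analysis.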
Secrets and Dependencies Will Surprise You 🤫
Security is not a project you check off - it's a living process. And left unchecked, entropy always wins. Run scans for exposed secrets with tools like Gitleaks or TruffleHog. Audit dependencies using Snyk or your package manager's built-in tools. Yes, the results will be messy. That's expected.
But every exposed secret or outdated library is an operational risk. Worse, it's often a symptom of broken process - secrets committed out of convenience, packages left behind because the upgrade path was painful.
Cleaning this up is about more than just hardening security. It's about reducing stress, increasing flow, and giving your engineers more headspace. That's the kind of change that meaningfully improves DevEx.
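For intuition, here is a deliberately naive secret scan - two toy regex patterns only. A real audit should rely on Gitleaks' or TruffleHog's curated rule sets, which cover hundreds of credential formats:

```python
import re

# Toy patterns only; real scanners ship far richer, battle-tested rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan(text: str) -> list[str]:
    """Names of every pattern that matches somewhere in `text`."""
    return sorted(name for name, rx in SECRET_PATTERNS.items() if rx.search(text))

# Invented config snippet with two planted problems
config = 'db_password = "hunter2hunter2"\nkey = AKIAABCDEFGHIJKLMNOP'
print(scan(config))  # → ['aws_access_key', 'generic_assignment']
```

The point of the toy is the workflow, not the patterns: wire the real scanner into CI so new secrets never land, then burn down the backlog.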
Walk the Pipeline 🚶🏻
Next, follow the delivery journey. Pick a real change - ideally a bug fix or small feature - and trace it from the moment code is committed to when it lands in production. What's fast? What's slow? Where does it break?
Talk to the team as you do this. Ask what frustrates them most about the build process. Which tests are flaky? How long does it take to get a review? How often do deploys fail for mysterious reasons? These aren't just process hiccups - they're invisible forces dragging down productivity and morale.
A slow or unreliable CI/CD pipeline is a hidden DevEx killer. It turns shipping into a gamble. Streamlining this journey is one of the highest-leverage improvements you can make, because it touches every engineer, every day.
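Even a back-of-the-envelope timing of one change's journey makes the bottleneck obvious. The timestamps below are hypothetical; in practice you'd pull them from Git, your CI system, and your deploy logs:

```python
from datetime import datetime

def stage_durations(events: dict[str, datetime]) -> dict[str, float]:
    """Hours spent between consecutive pipeline events, in insertion order."""
    names = list(events)
    return {f"{a}→{b}": (events[b] - events[a]).total_seconds() / 3600
            for a, b in zip(names, names[1:])}

# Hypothetical timestamps for one bug fix travelling to production
events = {
    "commit":      datetime(2025, 6, 2, 9, 0),
    "ci_green":    datetime(2025, 6, 2, 9, 42),
    "review_done": datetime(2025, 6, 3, 15, 10),
    "deployed":    datetime(2025, 6, 3, 16, 5),
}
durations = stage_durations(events)
slowest = max(durations, key=durations.get)
print(slowest, round(durations[slowest], 1))  # here the review wait dwarfs CI time
```

One traced change won't be statistically sound, but it's usually enough to know whether to attack build times, review latency, or deploy reliability first.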
Track How the System Behaves, Not Just What It Contains 🕹️
At this point, you've looked at the system's structure. Now look at how it behaves.
DORA metrics give you a clean, actionable view into how well your engineering org is functioning. How fast do you ship? How often do things break? How quickly do you recover?
You can extract this from GitHub and CI/CD data, or plug in tools like LinearB or getDX. Either way, don't obsess over the numbers - focus on what's slowing the team down. Long PR review times, brittle integration tests, unclear ownership - these will show up as red flags.
When teams operate in a high-trust, low-friction environment, these metrics trend in the right direction. When they don't, it's often a reflection of systemic issues that make good DevEx impossible.
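Two of the DORA metrics - deployment frequency and change failure rate - fall out of a simple deploy log. A sketch over invented data:

```python
from datetime import date

# Hypothetical deploy log over two weeks: (day, caused_incident)
deploys = [
    (date(2025, 6, 2), False),
    (date(2025, 6, 4), True),
    (date(2025, 6, 9), False),
    (date(2025, 6, 11), False),
]

def deploy_frequency_per_week(deploys: list, weeks: int) -> float:
    """DORA deployment frequency, normalised per week."""
    return len(deploys) / weeks

def change_failure_rate(deploys: list) -> float:
    """Fraction of deploys that triggered an incident."""
    return sum(1 for _, bad in deploys if bad) / len(deploys)

print(deploy_frequency_per_week(deploys, weeks=2))  # → 2.0
print(change_failure_rate(deploys))                 # → 0.25
```

Lead time and time-to-restore need richer event data (commit timestamps, incident resolution times), which is where tools like LinearB or getDX earn their keep.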
Understand the Stack with a Tech Radar 🧭
Once you've got a grip on how the system works, turn your attention to what it's made of. Build a tech radar to capture the tools, frameworks, libraries, and platforms in use - and more importantly, their status. Is this framework being actively developed? Is that database still serving production, or just a leftover experiment?
A tech radar helps you spot unnecessary sprawl, highlight deprecated tools, and identify where the stack is aging in place. It's also a fantastic way to empower developers - giving them clarity on which tools are supported and which ones are on the way out.
Good tech choices reduce friction. Bad or ambiguous ones increase it. Reducing stack chaos is a gift to DevEx.
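A tech radar doesn't need tooling to get started; a plain list of (name, quadrant, ring) entries is enough to answer "what are we standing behind, and what's on the way out?". The entries below are purely illustrative:

```python
# Illustrative radar entries: (name, quadrant, ring). Rings follow the common
# adopt / trial / assess / hold convention.
RADAR = [
    ("PostgreSQL", "platforms", "adopt"),
    ("Kotlin",     "languages", "trial"),
    ("OldQueueX",  "platforms", "hold"),   # hypothetical leftover tech
]

def ring(entries: list[tuple[str, str, str]], which: str) -> list[str]:
    """Everything sitting in a given ring of the radar."""
    return sorted(name for name, _, r in entries if r == which)

print(ring(RADAR, "hold"))  # → ['OldQueueX'] (on the way out)
```

When the list outgrows a README, the ThoughtWorks build-your-own-radar tooling renders exactly this shape of data as the familiar radar visual.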
Synthesize with a Tech Health Radar 🏥
Now it's time to pull everything together. A Tech Health Radar is your way of showing - visually and honestly - where things hurt.
You're capturing reality, not presenting a polished status report. Maybe onboarding is slow. Maybe test failures are random. Maybe no one understands how the deployment system actually works. That's okay. What matters is surfacing those issues clearly and consistently.
This radar becomes your shared map. It helps you have better conversations with leadership, make smarter bets with the team, and prioritize invisible work that's often deprioritized. If you want to advocate for better DevEx, this is your tool.
Drive Change with SMART OKRs 🎯
Finally, once you understand the system, you can start to improve it - deliberately. That means setting goals that are specific, measurable, and connected to real pain.
Forget vague promises to "clean up tech debt." Instead, pick clear outcomes tied to your radar. Improve test reliability. Raise confidence in deployment. Reduce security exposure. Shorten feedback loops.
🎯 Objective: Improve Code Quality and Security (Q3 2025)
We want to move from reactive cleanups to a proactive security and quality posture - with each KR shifting a measurable RAG status.
👉🏻 KR1: Scan 100% of active repositories for hardcoded secrets by July 31, and remove any found
Red: No scanning in place, secrets show up in prod occasionally
Amber: Manual scanning in a few repos, issues caught late
Green: Automated scanning across all active repos, zero hardcoded secrets in main branches
Needle: Currently Amber → Targeting Green
👉🏻 KR2: Increase average SonarQube maintainability rating from B to A across the top 10 repos by September 15
Red: C or below - code is hard to understand, maintain, or extend
Amber: B - maintainable, but technical debt is growing
Green: A - high readability, clean complexity, reduced bugs over time
Needle: Currently Amber → Targeting Green
👉🏻 KR3: Reduce the number of critical CVEs in production dependencies by 90% by end of Q3
Red: Multiple known CVEs in production, unmanaged
Amber: CVEs tracked but patching is ad hoc or lagging
Green: Near-zero critical CVEs, updates automated or prioritized
Needle: Currently Red → Targeting Green
🎯 Objective: Increase Delivery Confidence and Developer Trust (Q3 2025)
Reduce frustration, flakiness, and failed builds. Get devs back to trusting the system - and shipping with confidence.
👉🏻 KR1: Raise unit test coverage to at least 80% for the top 3 services (by usage) by August 31
Red: Coverage <50%, bugs often hit prod
Amber: 50-79%, tests exist but don't fully cover logic
Green: 80%+, meaningful coverage with reduced regression issues
Needle: Currently Amber → Targeting Green
👉🏻 KR2: Optimize CI pipeline to ensure average build time is under 10 minutes for all services by September 15
Red: Builds regularly take 15+ minutes, devs avoid pushing
Amber: Inconsistent times, some services optimized, some lag
Green: Sub-10-minute builds across the board, fast feedback loop
Needle: Currently Red → Targeting Green
👉🏻 KR3: Improve CI/CD reliability to >90% successful runs on first try by quarter end
Red: Frequent flaky failures, retries required
Amber: 70-89% success, some systems solid, others brittle
Green: 90%+ success on first run, devs confident in pipeline
Needle: Currently Amber → Targeting Green
These OKRs aren't about making your slide deck look good. They're about aligning leadership with engineering - and showing the team that progress is real, visible, and worth investing in.
The TL;DR
When you inherit an unfamiliar R&D org, don't start by rewriting code. Start by listening. Map the knowledge. Understand what's running. Pay attention to behavior, not just structure.
Use tools like SonarQube, Git history, DORA metrics, and yes, "Your Code as a Crime Scene" to surface what's really going on. Then fix the right things - in the right order - with buy-in from the team.
And always remember: better developer experience isn't a luxury. It's the foundation for sustainable, joyful, high-performing engineering.
You've got this. Now go tame that jungle and maybe bring some extra coffee ☕️.





