SR&ED has a reputation for being unpredictable. Ask any founder or R&D lead who has been through the process and you will hear some version of “it feels like a coin flip.” A colleague files on borderline work and gets approved. You file on a slam dunk and get denied. Over time, the whole program starts to feel arbitrary.
It isn’t.
When you look at enough outcomes across industries and company sizes, patterns show up. The differences between companies that consistently succeed and those that consistently struggle have nothing to do with the sophistication of their technology or the prestige of their consultants. They come down to how a team thinks about its own work, and whether that thinking shows up in the record.
Winners document as they work. Losers reconstruct after the fact.
This is the single most reliable predictor of outcomes we have seen. It is also the one companies are most resistant to hearing, because it means the problem started months before anyone thought about filing.
Companies that win SR&ED claims tend to have records created during the work. Not polished reports. Messy notes, Slack threads, commit messages, whiteboard photos, engineering journals with coffee stains on them. Imperfect, but real. They carry the weight of having been written by someone in the middle of solving a problem, not someone trying to remember what happened eighteen months ago.
Companies that lose tend to have cleaner documentation. That should alarm you. The cleanliness is the tell. It means the documentation was written all at once, well after the work was finished, specifically for the claim. Reconstructed narratives are too coherent, too linear, too neatly structured around the outcome. Real R&D is messy. The documentation should reflect that.
Two biotech teams stabilizing a compound for transport at variable temperatures. Team A kept a running lab notebook with dated entries, including crossed-out calculations and a note reading “this contradicts what we expected, revisit assumptions.” Team B wrote a clean summary after the project shipped, describing the challenge and solution in polished paragraphs. Team A’s claim is far stronger. Not because the work was better. Because the evidence is credible.
The best SR&ED documentation looks like it was written by someone solving a problem, not someone filing a tax claim.
Winners describe what they didn’t know. Losers describe what they built.
This is the pattern that surprises people most. And it is the one that most directly reflects whether a team actually understands what SR&ED rewards.
When you read a strong claim, the emphasis is on the questions the team faced at the outset. What didn’t they know? What couldn’t they predict? The narrative starts with uncertainty and works forward. When you read a weak claim, the emphasis is on the solution the team delivered. Here is what we built. Here is how it works. It reads like a product spec.
The program rewards the investigation, not the outcome [1]. A project that fails spectacularly can be a stronger SR&ED claim than one that succeeds on the first try, if the failed project involved genuine uncertainty and the successful one was just good engineering. Most people find that counterintuitive. It is still true.
Consider two manufacturing firms developing a new coating process. The first firm’s claim opens with: “We developed a proprietary coating that achieves 40% better adhesion than industry standard.” The second opens with: “Existing adhesion models predicted that our substrate-coating combination would delaminate under thermal cycling. We did not know whether modifying the cure profile, the surface preparation, or the formulation itself would resolve the failure mode.”
One tells reviewers there was a product to build. The other tells them there was a genuine problem to solve. Guess which one gets approved.
Winners can explain why the answer wasn’t obvious
Strong claims articulate why standard approaches were insufficient. Not just “we tried something new” but “here is why the established methods were expected to fail in our specific situation, and here is the technical reasoning behind that expectation.”
Weak claims skip this entirely. They jump from “we had a goal” to “we achieved it” without explaining why the work required something beyond what a competent professional would already know how to do.
This is where a lot of legitimate R&D gets rejected. Plenty of engineering work is genuinely hard without being genuinely uncertain. If a qualified engineer could have laid out a viable approach on a whiteboard before anyone wrote a line of code, the outcome was not uncertain. It was labor-intensive. Those are different things, and reviewers know the difference even when the applicant does not.
A real-time data pipeline that needs sub-millisecond latency with strict ordering guarantees across distributed nodes. One team’s claim describes the architecture they shipped. The other describes why the message broker’s built-in ordering couldn’t handle their throughput, why their custom partitioning scheme introduced race conditions they didn’t anticipate, and how each failure redirected the investigation in a direction they couldn’t have predicted at the outset. The second claim gives a reviewer something to work with. The first gives them a product datasheet.
Winners treat SR&ED as part of their engineering process
In companies that consistently win, the engineering team is involved in the claim from day one. When they hit a problem where the outcome is uncertain, someone flags it. When they run experiments, someone notes the hypothesis and the result. It is not extra work layered on top of development. It is a natural extension of how they already operate when doing their most challenging technical work.
In companies that consistently lose, SR&ED is something that happens in accounting. A consultant shows up after the fiscal year ends, interviews the engineering team for a few hours, and writes up a claim based on whatever people can remember. The result is a claim disconnected from the actual technical narrative.
The companies that get the most out of SR&ED are the ones that see it as a byproduct of good engineering discipline, not a financial exercise that happens once a year. And no, this does not mean SR&ED should dictate your engineering process. It means the habits that produce strong claims (recording what you tried, noting what didn’t work, articulating why a problem is hard) are the same habits that produce strong engineering. If capturing SR&ED evidence feels like a completely separate activity from your actual work, something is misaligned.
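What that habit can look like in practice: an append-only log that an engineer touches at the moment they run an experiment, capturing the hypothesis, the result, and the next step while the details are fresh. A minimal sketch in Python (the function name, fields, and file path here are invented for illustration; any format that timestamps entries and never rewrites them serves the same purpose):

```python
# Hypothetical sketch of a contemporaneous experiment log.
# The names (log_experiment, LOG_PATH) and the Markdown layout
# are illustrative, not a prescribed format.
from datetime import date
from pathlib import Path

LOG_PATH = Path("experiment_log.md")

def log_experiment(hypothesis: str, result: str, next_step: str) -> None:
    """Append one dated entry. Entries are never edited or deleted,
    so the log reads the way the work actually unfolded."""
    entry = (
        f"## {date.today().isoformat()}\n"
        f"- Hypothesis: {hypothesis}\n"
        f"- Result: {result}\n"
        f"- Next: {next_step}\n\n"
    )
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(entry)

# Example entry, written at the moment the result came in --
# including a result that contradicted expectations.
log_experiment(
    hypothesis="Raising the cure temperature will prevent delamination",
    result="Delaminated faster; contradicts the adhesion model, revisit assumptions",
    next_step="Vary surface preparation instead of the cure profile",
)
```

The design choice that matters is append-only with dates: a reviewer can see that the entry recording a failed hypothesis was written before the eventual solution existed, which is exactly the credibility that a reconstructed narrative cannot manufacture.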
Show your wrong turns
This one is short because the point is simple: failed experiments are not embarrassing. They are evidence.
Strong claims include tests that did not produce expected results, architectures that were abandoned, approaches that looked promising and then fell apart. The narrative includes wrong turns and dead ends, and it explains what was learned from each one.
Weak claims present a clean path from problem to solution. The space between the challenge and the answer is empty. No evidence that multiple approaches were considered. No record of anything that did not work.
The absence of wrong turns undermines the claim that uncertainty existed. If everything you tried worked, it is hard to argue the outcome was unpredictable. If you only show the final solution, a reviewer has no way to distinguish “we investigated under genuine uncertainty” from “we knew what to build and we built it.”
The difference isn’t luck
Most companies land on the losing side of these patterns. Not because their work does not qualify, but because nothing in their process captures the evidence that it does. The work was real. The uncertainty was real. But the record doesn’t show it, and the record is all a reviewer ever sees.
That is a solvable problem. It does not require overhauling how your team works. It requires building a few small habits into work you are already doing.
If you read through this thinking “that is exactly what we do not do,” take our readiness check. It is the fastest way to find out whether your work is likely to qualify and what it would take to make sure the record reflects it.
References
[1] Canada Revenue Agency, “Eligibility of Work for SR&ED Investment Tax Credits Policy,” 2015. [Online]. Available: https://www.canada.ca/en/revenue-agency/services/scientific-research-experimental-development-tax-incentive-program/eligibility-work-investment-tax-credits/eligibility-work-investment-tax-credits-2015.html