Author: R&D Tax Advisors
Role: CPAs
Publish Date: Dec 29, 2025
The Question
“We don’t really do R&D… so this probably doesn’t apply to us, right?”
This is one of the most common self-disqualifications founders and CFOs make — and it usually happens fast.
No analysis.
No discussion.
Just an assumption.
In many cases, that assumption is wrong.
In some cases, it’s right.
The problem isn’t that companies say “we don’t do R&D.”
It’s why they say it — and what that misunderstanding leads to.
Why People Picture R&D as Labs and White Coats
When most people hear “research and development,” they picture something very specific:
• scientists in labs
• prototypes made of hardware
• breakthroughs that feel academic
• work that looks like pure invention
Software development doesn’t look like that on the surface. It’s iterative, incremental, fast-moving, and often invisible once it’s shipped.
So founders conclude:
“We’re just building product. That’s not R&D.”
That intuition makes sense — but it doesn’t match how the R&D credit actually works.
The credit doesn’t reward how impressive the work looks.
It rewards whether the work involved technical uncertainty and systematic experimentation.
That distinction matters.
What Software R&D Actually Looks Like (In Practice)
In software companies, qualifying R&D almost never looks like a single “big breakthrough.”
It looks like teams trying to solve problems where the outcome isn’t known in advance.
That often includes:
• designing new architectures or data models
• improving performance, scalability, or reliability
• resolving bottlenecks that don’t have obvious solutions
• refactoring systems to support new functionality
• building internal tools because off-the-shelf options don’t work
• experimenting with algorithms, workflows, or integrations
None of that requires a lab.
But it does require technical judgment, failed attempts, and iteration.
Those are the hallmarks of R&D — even if the work feels routine to the people doing it.
Why Companies Incorrectly Rule Themselves Out
Most self-disqualification comes from one of a few common beliefs.
Some companies think R&D only applies if they’re inventing something entirely new to the world.
Others believe that because their product already exists, ongoing development can’t qualify.
Some assume that refactoring or performance work is “just maintenance.”
Others worry that using modern frameworks or cloud tools automatically disqualifies them.
None of these are reliable tests.
The R&D credit doesn’t ask whether your work is novel globally.
It asks whether you faced technical uncertainty and had to experiment to resolve it.
When companies skip that analysis and jump straight to “we don’t qualify,” they often miss legitimate opportunities — or, just as bad, misunderstand what would qualify later.
Where the Confusion Cuts Both Ways
Misunderstanding eligibility creates two opposite — but equally costly — outcomes.
On one side, companies leave value on the table.
They assume nothing qualifies, so they never look closely at their work. Over time, that becomes institutional belief: “we’ve never done R&D.”
On the other side, some companies do the opposite.
They hear that “software qualifies” and assume everything qualifies. Routine implementation, customer-specific work, cosmetic changes — all treated as R&D.
That leads to weak claims, audit risk, and eventual disallowance.
Both outcomes come from the same root issue: not understanding what the credit is actually testing.
When “We Don’t Do R&D” Is Actually the Right Answer
It’s important to say this plainly:
sometimes the answer really is no.
If your engineering work is primarily:
• implementing well-understood solutions
• configuring existing tools
• maintaining stable systems
• making cosmetic or content-level changes
• delivering customer-specific customizations without technical uncertainty
then the R&D credit may not be a good fit — or may only apply in narrow situations.
Saying “no” in those cases isn’t conservative.
It’s accurate.
The goal isn’t to force qualification.
It’s to be honest about where uncertainty and experimentation actually exist.
Why This Matters More Than People Realize
Eligibility misunderstandings don’t just affect whether a company claims the credit. They affect how companies approach it.
Companies that incorrectly rule themselves out never build documentation habits.
Companies that incorrectly assume everything qualifies often build fragile claims.
Both outcomes create problems later — especially when companies grow, file amended claims, or face diligence in an acquisition.
Understanding eligibility early creates leverage:
• better documentation
• better scoping
• more realistic expectations
• lower risk over time
That’s why this question matters even if the answer ends up being “not yet.”
The Takeaway
“We don’t do R&D” is often a shortcut — not a conclusion.
For software companies, R&D rarely looks dramatic. It looks like teams solving hard problems, testing approaches, and learning through iteration.
Sometimes that work qualifies.
Sometimes it doesn’t.
The mistake isn’t saying no.
The mistake is saying no without understanding what the question actually asks.
The companies that get the most value from the R&D credit aren’t the ones stretching definitions.
They’re the ones that take eligibility seriously — early, honestly, and with clear eyes.
That’s how you avoid both missed opportunities and bad claims.