This year, the §280C reduced credit election is back on the table for every company claiming the R&D tax credit, and the question of whether to replace time tracking with AI keeps coming up in conversations with large technology companies. We wanted to address both in one place, because while they often surface together, one is a fairly clean call and the other carries real audit risk depending on how you approach it.
The §280C Decision Is Usually Straightforward
Now that the §280C reduced credit election is back in play, the decision for most companies comes down to tax posture. It is less a strategic puzzle than a clear-cut choice based on where the company stands financially.
Companies generating taxable income almost always elect the reduced credit. Taking the gross credit would require reducing §174 research expenditures, which increases current taxable income at exactly the wrong time. Preserving the full §174 deduction while accepting a slightly lower credit is the better trade in a profitable year.
Companies in a loss position frequently reach the opposite conclusion. For them, the gross credit is often more attractive, even though it reduces the §174 deduction and lowers the NOL carryforward. When taxable income is not a near-term concern, maximizing the current-year credit tends to outweigh the downstream impact on future NOLs.
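To make the trade concrete, here is a minimal sketch of the two options for a hypothetical profitable company. All figures (a $100,000 gross credit, the 21% federal corporate rate) are illustrative assumptions, and the one-year view deliberately ignores state conformity and the timing effects of §174 capitalization, which is where the real preference usually gets decided.

```python
# Hypothetical §280C comparison for a company with current taxable income.
# Figures are illustrative only, not tax advice.

CORP_RATE = 0.21          # federal corporate tax rate (assumption)
gross_credit = 100_000    # hypothetical gross R&D credit

# Option A: elect the reduced credit under §280C(c)(2).
# The credit is cut by the corporate rate; §174 amounts are untouched.
option_a_benefit = gross_credit * (1 - CORP_RATE)   # ≈ 79,000

# Option B: take the gross credit.
# §174 research expenditures must be reduced by the credit amount,
# which raises current taxable income for a company paying tax today.
extra_tax = gross_credit * CORP_RATE                # ≈ 21,000
option_b_benefit = gross_credit - extra_tax         # ≈ 79,000

print(round(option_a_benefit), round(option_b_benefit))
```

In this stripped-down single-year view the two options net out the same federally, which is exactly why the decision turns on the factors in the surrounding text: a loss-position company pays no extra tax today under Option B, so the larger current-year credit wins, while a profitable company usually prefers Option A to preserve the full §174 deduction when timing and state treatment are layered back in.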
The key is making this call early and with your full tax picture in view, not revisiting it at filing.
The Time Tracking Question Is More Complicated
The push to replace time-tracking tools with AI is something we are now hearing across many large technology organizations, particularly those with engineering talent that came up through companies like Google. The objection is rarely about cost. It is about whether structured time tracking adds anything meaningful to the engineering function. Many teams view tools like Clarity as administrative overhead that sits outside their natural workflow.
The alternative being discussed is to rely on systems engineers already use, such as Jira, Git, and Confluence, and then apply AI to analyze the records in those tools to show what work was performed and how it maps to R&D activity. The appeal is obvious. These systems capture real work as it happens, and AI can process them at scale.
In our view, relying solely on that approach introduces audit risk that is worth understanding before any decisions are made.
What the IRS Actually Requires
The IRS does not mandate a specific tool or require formal time sheets. There is no regulation that says Clarity or any other time-tracking platform must be used.
What the IRS does require is that Qualified Research Expenses be quantified by Business Component using a method that is reasonable, consistent, and inspectable. That standard is the lens through which any documentation approach will eventually be evaluated.
Time tracking has historically been the cleanest way to satisfy it. It creates a direct, auditable connection between payroll dollars and specific projects, and it does so with records that were created in the ordinary course of business. We refer to it as the gold standard not because it is required, but because it is genuinely hard to replicate.
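The mechanics behind that connection are simple pro rata arithmetic, which is part of what makes it so inspectable. The sketch below is a hypothetical illustration, with made-up engineers, hours, wages, and Business Component names; it assumes the common approach of allocating each employee's wages across components in proportion to hours logged and keeping only the components treated as qualified research.

```python
# Hypothetical sketch of wage-to-Business-Component allocation from
# time-tracking data. All names and figures are invented for illustration.

hours = {  # hours logged per (engineer, business_component)
    ("alice", "search-ranking"): 800,
    ("alice", "maintenance"):    200,   # non-qualifying activity
    ("bob",   "ml-platform"):    1000,
}
wages = {"alice": 200_000, "bob": 180_000}
qualifying = {"search-ranking", "ml-platform"}

def qre_by_component(hours, wages, qualifying):
    """Allocate each engineer's wages to components pro rata by hours,
    keeping only components treated as qualified research."""
    total_hours = {}
    for (eng, _), h in hours.items():
        total_hours[eng] = total_hours.get(eng, 0) + h
    qre = {}
    for (eng, comp), h in hours.items():
        if comp in qualifying:
            qre[comp] = qre.get(comp, 0.0) + wages[eng] * h / total_hours[eng]
    return qre

print(qre_by_component(hours, wages, qualifying))
# alice's 800 of 1,000 hours puts 160,000 of her wages in search-ranking;
# her 200 maintenance hours drop out of QREs entirely.
```

The point of the sketch is the audit trail: every dollar in the output traces back to a contemporaneous hour entry an examiner can test, which is precisely what year-end AI-generated allocations lack.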
Where AI Fits and Where It Does Not
AI is genuinely useful in an R&D credit program: identifying qualifying projects, pulling together the technical narrative, showing what uncertainties existed and how the team worked through them, and analyzing tickets, commits, and design docs to tell that story. We have written before about what AI can and cannot do in this context, and that line matters a great deal here.
The problem shows up when AI is asked to do something different: to both define the qualifying projects and generate the labor allocations for those projects after the year is over. That is a different ask entirely. In an examination, the IRS wants to understand the methodology, not just see the output. They want to know how you got there, whether you could get there again the same way, and whether the records you are pointing to actually existed when the work was happening. Allocations built by a model at year-end, with nothing anchoring them to decisions made during the year, do not hold up well to that kind of scrutiny.
It is tempting to frame this as an AI limitation. We think it is more accurate to call it a substantiation problem. The source material matters as much as the method.
What Should Replace Time Tracking If a Company Moves Away From It
If engineering decides to eliminate tools like Clarity, something still needs to do what those tools did: give you a defensible, traceable way to connect employee time to specific Business Components.
The approaches we see working best are hybrid ones. AI handles the heavy lifting on visibility and reduces the administrative burden, but the labor allocations are still anchored to human-defined controls that existed during the year: staffing plans, sprint assignments, technology roadmaps, and project governance records. These are things an examiner can actually look at and test. AI can analyze and synthesize all of that. What it cannot do is replace it.
What We Are Seeing Go Wrong Mid-Year
In several situations this year, engineering teams made mid-year changes to their time-tracking approach without bringing the tax function into the decision. The result in each case was partial-year data, a greater reliance on manual allocation methods, and more exposure in the event of an examination. None of that was intentional. The tax team simply was not in the room when the tooling decision was made.
This is becoming more common, and it reflects a broader gap between how engineering leadership and tax leadership think about these systems. To an engineer, replacing a tool that feels low-value is a reasonable operational decision. To a tax examiner, it is a change to the methodology used to substantiate millions of dollars in credits.
Eliminating time tracking without replacing the underlying allocation discipline is not really a technology decision. It is an audit risk decision, and it should be made with tax leadership involved.
The Bottom Line
AI has a real role in R&D credit programs, and we are actively building around it. But right now it works best layered on top of a defined methodology, not as a substitute for the allocation discipline that methodology requires. If your engineering team is pushing to get rid of structured time tracking, that conversation needs to include your tax team before anything changes.
If you are working through either of these questions, we’re happy to talk.