May 4, 2026
When an Alleged “AACE-Compliant” Delay Analysis Is Anything But
I recently testified in an arbitration hearing involving a delay dispute between a developer and its general contractor on a multimillion-dollar, multi-building apartment complex. During that hearing, the opposing delay expert and I each presented our analyses and testified about whether the general contractor had reliably established critical path delay. The dispute involved familiar project issues: phased turnover obligations, inspection timing, utility coordination, weather, substantial completion sign-offs, and alleged impacts to later-building work. But the central issue was more fundamental: whether the delay expert who claimed to rely on AACE RP 29R-03 actually performed the disciplined forensic schedule analysis that RP 29R-03 requires.
One of the more troubling trends I see in construction delay disputes is the misuse of accepted industry delay analysis standards, including AACE International Recommended Practice No. 29R-03. Some so-called delay experts cite RP 29R-03 as though merely invoking it gives their opinions credibility. They say they relied on it. They say they followed it. They use the terminology. They identify a delay analysis Method Implementation Protocol (MIP). They present charts and tables. But when the analysis is tested, it becomes clear that the expert did not actually follow the discipline the Recommended Practice requires, not even a little bit.
That is not a harmless technical shortcut. It is an abuse of the credibility that AACE RP 29R-03 has earned in the construction claims and project controls industry. RP 29R-03 is intended to guide disciplined forensic schedule analysis. It is not intended to be used as a label pasted onto an unsupported delay narrative. When an expert cites RP 29R-03 but does not validate the schedules, does not demonstrate the controlling path, does not test cause and effect, does not address concurrency or pacing, and does not explain the limitations of the selected method, the citation becomes misleading.
In my recent testimony, this problem was front and center. The general contractor’s delay expert claimed to have performed a planned-versus-as-built analysis consistent with AACE RP 29R-03, specifically MIP 3.1, an observational/static logic/gross method. But the work product did not, in my opinion, support at any credible level the certainty expressed in the expert’s conclusions. My arbitration presentation focused on why the expert’s findings were unsupported, flawed, subjective, and unreliable, including the absence of schedule validation, the use of static logic assumptions, the failure to adequately address concurrency and pacing, and the presentation of an alleged as-built critical path that included high-level bar chart summaries with no logic links.
The issue was not whether the project experienced problems. It did. Projects of this size and complexity commonly do. The issue was whether the expert proved that the alleged problems delayed the controlling path of work. That is the difference between project chronology and forensic delay analysis.
RP 29R-03 Is Not a Credibility Shield
AACE RP 29R-03 is an important industry standard because it provides a structured framework for forensic schedule analysis. But it does not perform the analysis for the expert. It does not validate a baseline schedule. It does not determine whether an update is reliable. It does not convert a bar chart into a CPM schedule. It does not prove critical path delay simply because an expert says the words “planned versus as-built.”
That is why the misuse of RP 29R-03 is so concerning.
When an expert cites RP 29R-03, counsel and tribunals may assume the analysis followed an accepted technical framework. That assumption can be dangerous. The real question is not whether the report mentions RP 29R-03 or other industry-accepted references. The real question is whether the report actually does what RP 29R-03 requires for the selected method and the opinion offered.
A credible delay analysis must validate the schedule data, identify the controlling path, test the relationship between alleged events and completion impact, evaluate concurrent delays, consider pacing, and disclose the limitations of the method. If those steps are missing, the report may contain RP 29R-03 language, but it does not contain RP 29R-03 discipline.
That was the problem in this particular case.
The Scheduling Record Became Weakest When It Mattered Most
One of the central facts in the arbitration was that the general contractor’s scheduling practices deteriorated during the same period when its expert later alleged critical delays occurred.
Earlier in the project, the general contractor maintained more conventional schedule updates. Later, the schedule structure changed. Completed activities were removed. Logic was no longer preserved in a way that allowed critical path movement to be objectively tested. Eventually, the contractor relied on look-ahead and bar chart-style schedules that could communicate broad time expectations but could not support critical path calculations.
That should have been a major warning sign for the contractor’s expert.
If the general contractor claims that owner, utility, jurisdictional, weather, or inspection issues critically delayed the project, the contemporaneous schedule record should allow that claim to be tested. It should show the planned sequence, remaining work, actual progress, logic relationships, driving predecessors, float, and forecasted completion path. If the schedule record does not show those things, the expert must confront that problem directly.
In my opinion, the expert did not do that, not even close.
Instead, the expert relied on the contractor’s deteriorated scheduling record while still offering critical path opinions. That approach raised a fundamental credibility problem. If the schedules do not contain the logic needed to calculate the critical path, then an expert cannot simply fill the gap with unsupported and unverifiable judgment and call the result a forensic schedule analysis.
A Bar Chart Is Not a Critical Path Schedule
One of the more remarkable points during the hearing was the general contractor’s view of scheduling itself. The general contractor’s superintendent testified that it was common to schedule the project using bar charts because that was the best way to communicate timeline expectations to subcontractors.
I understand why field personnel may like bar charts. They are easy to read. They can be useful in weekly meetings. They can help communicate broad sequencing expectations to trade contractors.
But that testimony revealed a fundamental misunderstanding of the purpose of a project schedule.
A project schedule is not prepared primarily for subcontractors to glance at a timeline. A proper project schedule is a planning, coordination, forecasting, risk management, accountability, and contract administration tool. On a delayed project, it also becomes one of the most important records for determining causation.
A bar chart can show dates and durations. It can visually suggest a sequence. But a bar chart without logic ties does not calculate the critical path. It does not identify driving predecessors. It does not show float. It does not reveal whether an activity controlled completion. It does not show whether the critical path shifted from one building, phase, trade, or system to another.
A bar chart is a picture of time. A CPM schedule is a logic-based model of causation.
Those are not the same thing.
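To make the distinction concrete, here is a minimal sketch (not drawn from the case record; the activity names, durations, and logic ties are entirely hypothetical) of what a CPM calculation does that no bar chart can: run a forward and backward pass through the network logic, compute total float, and identify which activities control completion.

```python
# Hypothetical sketch of a CPM forward/backward pass. With logic ties we can
# compute total float and the critical path; a bar chart, which has only
# dates and durations, cannot support this calculation.

def cpm(durations, predecessors):
    """Activity-on-node CPM. Returns (project_finish, total_float, critical)."""
    # Topological order via repeated scan (adequate for small, acyclic networks)
    order, remaining = [], set(durations)
    while remaining:
        for a in sorted(remaining):
            if all(p in order for p in predecessors.get(a, [])):
                order.append(a)
                remaining.remove(a)
                break

    # Forward pass: early start / early finish
    es, ef = {}, {}
    for a in order:
        es[a] = max((ef[p] for p in predecessors.get(a, [])), default=0)
        ef[a] = es[a] + durations[a]
    project_finish = max(ef.values())

    # Backward pass: late start / late finish
    successors = {a: [] for a in durations}
    for a, preds in predecessors.items():
        for p in preds:
            successors[p].append(a)
    ls, lf = {}, {}
    for a in reversed(order):
        lf[a] = min((ls[s] for s in successors[a]), default=project_finish)
        ls[a] = lf[a] - durations[a]

    total_float = {a: ls[a] - es[a] for a in order}
    critical = [a for a in order if total_float[a] == 0]
    return project_finish, total_float, critical

# Hypothetical four-activity network (working days)
durations = {"excavate": 5, "foundation": 10, "utilities": 8, "framing": 12}
preds = {"foundation": ["excavate"], "framing": ["foundation", "utilities"]}
finish, tf, critical = cpm(durations, preds)
# Here utilities carries 7 days of float; excavate-foundation-framing controls.
```

The float and driving-path outputs are exactly the information that disappears when a project is tracked only with bar charts, which is why the distinction matters forensically.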
That distinction should be obvious to a qualified delay expert. Yet the general contractor and its expert effectively treated forward-looking bar chart schedules as though they could support critical path conclusions. From a forensic scheduling standpoint, that is not a minor weakness. It is foundational.
Source Validation Is Not Optional
Another major failure was the lack of demonstrated schedule validation.
AACE RP 29R-03 dedicates an entire section to schedule validation. Schedule validation is not a suggestion. It must be performed to support the credibility of the delay analysis. The RP does not invite an expert to accept a contractor’s schedules at face value and then proceed directly to delay allocation. Source validation is a prerequisite to reliable analysis. Before relying on a baseline schedule, progress update, look-ahead, reconstructed as-built, daily report, or schedule exhibit, the analyst must evaluate whether that source is reliable for the purpose being assigned to it.
That includes reviewing logic integrity, open ends, constraints, lags, calendars, missing activities, scope inclusion, actual dates, contract conformance, and whether the schedule reflected the contractor’s real plan or merely an administrative reporting document.
Without that step, the expert cannot distinguish true controlling work from preferential sequencing, distorted status, contractor-created float consumption, incomplete logic, or after-the-fact narrative structure.
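Some of these validation checks are mechanical enough to script. The sketch below is purely illustrative (the field names, the data-date rule, and the sample records are my assumptions, not RP 29R-03 prescriptions); it flags two of the defects listed above: open-ended activities and actual dates recorded after the data date.

```python
# Hypothetical sketch of mechanical schedule-validation checks an analyst
# might run before relying on a schedule: open ends (missing predecessor or
# successor logic) and actual finish dates that post-date the data date.
# Field names and rules are illustrative assumptions.

def validate(activities, data_date):
    """activities: list of dicts with id, predecessors, successors,
    actual_finish (working-day number or None), and optional is_start /
    is_finish flags. Returns a list of (activity_id, issue) tuples."""
    issues = []
    for a in activities:
        if not a["predecessors"] and not a.get("is_start"):
            issues.append((a["id"], "open start: no predecessors"))
        if not a["successors"] and not a.get("is_finish"):
            issues.append((a["id"], "open end: no successors"))
        if a["actual_finish"] is not None and a["actual_finish"] > data_date:
            issues.append((a["id"], "actual finish after data date"))
    return issues

# Hypothetical two-activity record: A200 has no successor logic and an
# actual finish beyond the data date, so both defects are flagged.
sample = [
    {"id": "A100", "predecessors": [], "successors": ["A200"],
     "actual_finish": None, "is_start": True},
    {"id": "A200", "predecessors": ["A100"], "successors": [],
     "actual_finish": 45},
]
flags = validate(sample, data_date=40)
```

Checks like these do not replace the analyst's judgment about scope, calendars, and contract conformance, but they make the first layer of source validation transparent and reproducible.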
In the arbitration, my concern was not merely that the expert used imperfect schedules. Most forensic analyses require working with imperfect project records. The problem was that the expert did not adequately explain how the flaws in the scheduling record affected the reliability of the opinions. The expert relied on the record as though it could prove critical path causation, even though the record did not preserve the information necessary to objectively support that conclusion.
That is precisely the type of misuse that gives RP 29R-03 a false appearance of compliance.
Event Descriptions Are Not Delay Proof
The same fundamental problem appeared across the alleged delay events: the contractor’s schedules did not demonstrate cause and effect between the events being alleged and the critical delays being claimed.
That distinction is essential. A project record may show that an event occurred. It may show that an activity finished later than planned. It may even show that multiple problems existed during the same general period. But none of that proves that the event delayed the project’s controlling path. To establish critical path delay, the schedule must show how the alleged event affected a specific activity, how that activity affected its successors, how the impact moved through the schedule logic, and how it ultimately delayed the applicable completion milestone.
The contractor’s schedules did not do that. They did not provide a reliable logic-based demonstration of how the alleged events drove critical path delay when the events occurred. Instead, the expert relied heavily on event descriptions, timing relationships, and after-the-fact narrative explanations. That approach may describe project difficulty, but it does not prove forensic delay causation.
This is where the expert’s reliance on AACE RP 29R-03 became especially problematic. RP 29R-03 does not treat chronology as causation. It does not allow an analyst to identify an event, observe that the project finished late, and then assign delay responsibility without demonstrating the intervening schedule logic. A credible delay opinion must show the causal chain from event, to affected activity, to driving path, to milestone impact.
That causal chain was missing. The schedules did not objectively show which work was controlling, how the alleged impacts changed the controlling path, whether other activities were concurrent, or whether the contractor was pacing or otherwise constrained by its own performance. Without that proof, the analysis reduced complex project delay issues to a simple narrative: an event happened, time passed, and the event was assigned as critical delay.
That is not enough. A delay expert must do more than describe events. The expert must prove that those events caused critical delay through a transparent, logic-based schedule analysis.
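The difference between "an event occurred" and "the event delayed the controlling path" can be demonstrated with a toy model (again hypothetical; the activity names and durations are mine, not the project's). Insert the alleged delay into the network and recompute the finish date through the logic: if the finish does not move, the event consumed float rather than critical time.

```python
# Hypothetical sketch: testing whether an alleged delay event actually moved
# the completion date, rather than merely occurring. A forward pass through
# the network logic answers the question a narrative cannot.

def forward_pass(durations, predecessors):
    """Early-finish forward pass; returns the project finish (working days)."""
    ef = {}
    pending = dict(durations)
    while pending:
        for a in list(pending):
            preds = predecessors.get(a, [])
            if all(p in ef for p in preds):
                start = max((ef[p] for p in preds), default=0)
                ef[a] = start + pending.pop(a)
    return max(ef.values())

# Hypothetical network: building work needs both sitework and inspections.
durations = {"sitework": 10, "inspections": 4, "building": 20}
preds = {"building": ["sitework", "inspections"]}

baseline = forward_pass(durations, preds)
# An alleged 5-day inspection delay: the finish does not move, because
# inspections carried float. The event occurred; it was not critical.
assert forward_pass(dict(durations, inspections=9), preds) == baseline
# The same 5 days on sitework, the driving predecessor, moves completion.
assert forward_pass(dict(durations, sitework=15), preds) == baseline + 5
```

This is the intervening demonstration that was missing: the same event duration produces critical delay on one path and no delay on another, and only the logic reveals which occurred.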
Concurrency and Pacing Cannot Be Assumed Away
One of the most serious omissions was the failure to adequately address concurrency and pacing, especially with the simplistic analysis methodology the expert used.
Delay responsibility often turns on what else was happening at the same time. If an alleged owner-caused issue occurred during a period when contractor-caused conditions were also controlling completion, that must be analyzed. If the contractor slowed its work because another issue had already delayed the project, or because it was pacing its effort to match a later available date, that must also be analyzed.
An expert cannot simply ignore those questions and still claim a reliable allocation of delay responsibility.
In my opinion, the opposing expert’s report did not adequately test whether contractor-caused conditions overlapped the alleged impacts. It did not sufficiently evaluate whether production slowdowns reflected contractor pacing rather than owner-caused delay. It did not reliably distinguish between apparent delay and actual critical path delay.
That omission is especially important when the expert claims to have followed RP 29R-03. The Recommended Practice recognizes that method selection and implementation must be appropriate to the facts, available records, and opinions being offered. A gross observational method may identify activities that finished later than planned. But that does not mean it can reliably prove interim cause and effect, shifting critical paths, concurrency, pacing, and responsibility allocation without further analysis.
The Credibility Problem
This is why the central issue was not simply methodological disagreement. It was credibility.
An expert report loses credibility when it cites a respected industry practice but does not follow the analytical discipline behind it. It loses credibility when it relies on unvalidated schedules. It loses credibility when it treats bar charts as though they can calculate critical paths. It loses credibility when it ignores the general contractor’s poor scheduling practices during the critical period. It loses credibility when it describes events but does not prove causation.
The report may still look polished. It may include exhibits, tables, schedule excerpts, and technical terminology. But credibility does not come from format. It comes from transparent, reproducible, logic-based proof.
That proof was missing.
The Practical Lesson for Counsel
For construction attorneys and in-house counsel, the lesson is straightforward: do not accept an expert’s RP 29R-03 citation, or any representation that the expert followed an industry-accepted delay analysis methodology, at face value.
Ask the harder questions.
- Did the expert validate the schedules?
- Did the expert identify the critical path using logic-based analysis?
- Did the expert explain whether and how the critical path shifted?
- Did the expert analyze concurrency?
- Did the expert consider pacing?
- Did the expert distinguish field problems from critical path delay?
- Did the expert explain the limitations of the selected method?
- Could another qualified analyst reproduce the result?
If the answer is no, then the report may be using RP 29R-03 terminology without actually following RP 29R-03 discipline.
Final Thought
The abuse of AACE RP 29R-03 by some so-called delay experts should concern everyone involved in construction disputes. RP 29R-03 is a respected forensic schedule analysis reference. It should not be used as window dressing for unsupported opinions. It should not be invoked to give credibility to a report that does not validate the schedules, does not prove the controlling path, does not test causation, and does not address concurrency or pacing.
In the arbitration where I testified, the general contractor’s delay expert said he relied on and followed RP 29R-03. But in my opinion, the analysis told a different story. The report relied on a weakened scheduling record, treated bar chart information as though it could support critical path conclusions, failed to adequately confront the contractor’s scheduling failures, and offered delay opinions that were too dependent on inference rather than demonstrable critical path proof.
That is not merely a technical flaw.
It is a credibility problem.
AACE RP 29R-03 deserves better than to be cited without being followed. So do tribunals, counsel, owners, contractors, and project participants who depend on forensic schedule analysis to fairly determine responsibility for delay.
Stephen P. Warhoe, Ph.D., P.E., CCP, CFCC, is a Vice President with Long International and a construction delay expert with more than 40 years of experience in design, construction, project controls, and dispute resolution. He has served as a testifying expert on major domestic and international disputes involving schedule delays, productivity loss, and damages on projects exceeding US$6 billion in value. Dr. Warhoe is a former President of AACE International, a recipient of its 2025 Lifetime Achievement Award, and a primary author or contributor to several widely cited AACE Recommended Practices.
Long International provides expert schedule delay and construction claims consulting, project controls and risk analysis, and arbitration and litigation support tailored to complex infrastructure and industrial projects. Its professionals assist with schedule quality assurance, delay and impact quantification, entitlement and damages assessments, and expert testimony services. For more information, contact Stephen at swarhoe@long-intl.com.