Prosecution Insights
Last updated: April 19, 2026
Application No. 18/767,201

AUTOMATING VEHICLE DAMAGE INSPECTION USING CLAIMS PHOTOS

Status: Non-Final OA (§103)
Filed: Jul 09, 2024
Examiner: CHU, DAVID H
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Cambridge Mobile Telematics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 78% (above average; +16.0% vs TC avg), 532 granted of 682 resolved
Interview Lift: +2.7% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 32 applications currently pending
Total Applications: 714, across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center average shown for comparison. Based on career data from 682 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Chatfield et al. (PGPUB Document No. US 2022/0138860) in view of Li et al. (PGPUB Document No. US 2018/0260793).
Regarding claim 8, Chatfield teaches a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations, comprising:

- capturing, as a captured set of images, a set of images of a vehicle, wherein the set of images are captured from different viewpoints around the vehicle (capturing images of the vehicle from varying angles as demonstrated in FIG. 3A-C (Chatfield: 0019, 0021, 0023));
- mapping, as a mapped image set, the captured set of images using an image classification model (mapping regions of the images to vehicle parts using deep learning classifiers (Chatfield: 0024-0026), wherein "the classifiers utilized may be trained to evaluate multiple images, such as evaluating multi-frame portions of video files" (Chatfield: 0028));
- performing image-level damage detection on each image in the aligned image set to estimate a damage probability for each pixel in each image; and predicting part damage and severity for each image (dynamic classifiers used in the application make damage assessments, such as whether a part is damaged or not, whether a part should be repaired or replaced, labor hours, or damage severity level, and, for these assessments, generate associated confidence levels (Chatfield: 0036). The Examiner submits that damage assessment based on the "multiple images" (Chatfield: 0028) is consistent with pixels that are part of the images also being assessed for damage, as presently claimed).
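The per-pixel damage probability and part-level severity step recited above can be sketched roughly as follows. This is an illustrative numpy sketch, not code from the application or from Chatfield; the function names, the logit input, and the part-mask representation are all hypothetical stand-ins.

```python
import numpy as np

def pixel_damage_probabilities(logits):
    """Convert per-pixel damage logits from a detector into probabilities."""
    return 1.0 / (1.0 + np.exp(-logits))  # elementwise sigmoid

def part_damage_summary(prob_map, part_mask, threshold=0.5):
    """Aggregate per-pixel damage probabilities into a per-part verdict.

    prob_map  : (H, W) array of per-pixel damage probabilities
    part_mask : (H, W) integer array assigning each pixel to a part id
                (conceptually, the output of the image-to-part mapping step)
    Returns {part_id: (is_damaged, mean_probability)}.
    """
    summary = {}
    for part_id in np.unique(part_mask):
        probs = prob_map[part_mask == part_id]
        mean_p = float(probs.mean())
        summary[int(part_id)] = (mean_p > threshold, mean_p)
    return summary
```

A severity level could similarly be binned from the per-part mean probability; the claim language leaves the aggregation rule unspecified, so the mean-and-threshold rule here is only one plausible choice.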
However, Chatfield does not expressly teach, but Li teaches:

- aligning, as an aligned image set, the mapped image set onto a three-dimensional vehicle model for the vehicle (some embodiments then project the images onto the 3D model of the vehicle using the camera angles determined during the alignment process (Li: 0164, 0249-0250, 0287, FIG. 27));
- organizing the captured set of images (applying the damage-classifying ("organizing") AI of Chatfield to the set of images captured by Li (Li: 0233, FIG. 26, step 2606)).

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to create and display a 3D model from the images captured by Chatfield according to the teachings of Li, because this enables applying the teachings of Li to assess damage from only a set of still images.

Claim 1 is a corresponding method claim of claim 8. The limitations of claim 1 are substantially similar to the limitations of claim 8; therefore, claim 1 has been analyzed and rejected substantially similarly to claim 8. Claim 15 is a corresponding computer system claim of claim 8. The limitations of claim 15 are substantially similar to the limitations of claim 8; therefore, claim 15 has been analyzed and rejected substantially similarly to claim 8. Note, the combined teachings above teach a computing system (Chatfield: 0016, FIG. 2).

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chatfield in view of Li as applied to the claims above, and further in view of Murez (PGPUB Document No. US 2021/0279943).
Regarding claim 11, the combined teachings above do not expressly teach, but Murez teaches, the non-transitory, computer-readable medium of claim 8, wherein the aligning the mapped image set onto a three-dimensional vehicle model for the vehicle comprises:

- discretizing a surface of the three-dimensional vehicle model, wherein each discretized point is referred to as a voxel (features projected into a 3D voxel volume (Murez: 0024));
- learning a high-dimensional embedding to represent each voxel and each image pixel, wherein a high similarity score in an embedding space yields a pixel-voxel correspondence (using a convolutional neural network (high-dimensional embedding), features are projected ("embedding") into a 3D voxel volume (Murez: 0024));
- determining an optimal camera pose using at least the pixel-voxel correspondence (the disclosed camera intrinsics and extrinsics (Murez: 0024) are known in the art as parameters comprising pose information);
- and determining a mapping between each image pixel and each voxel using ray tracing with the optimal camera pose ("the extracted features from each frame are then back-projected using known camera intrinsics and extrinsics into a 3D voxel volume wherein each pixel of the voxel volume is mapped to a ray in the voxel volume" (Murez: 0024)).

Claims 4 and 18 are similar in scope to claim 11.

Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chatfield in view of Li as applied to the claims above, and further in view of Bouette et al. (PGPUB Document No. US 2023/0377047).
Regarding claim 12, the combined teachings above teach the non-transitory, computer-readable medium of claim 8, wherein aligning, as an aligned image set, the mapped image set onto a three-dimensional vehicle model for the vehicle yields a part correspondence for each pixel in each image of the aligned image set ("classifying AI displays a list 315 of parts of the vehicle having a greatest probability of being classified correctly from the image." (Chatfield: 0024)), and artificial intelligence is used to estimate the damage probability for each pixel in each image (providing a real-time damage estimate using an artificial intelligence (AI) (Chatfield: 0011)). However, the combined teachings above do not expressly teach the artificial intelligence being a neural network model; Bouette does ("convolutional neural network is trained to recognize and assess damage" (Bouette: 0048)).

The combined teachings above contained a device which differed from the claimed process by the substitution of the step of using artificial intelligence for predicting damage. Bouette teaches the substituted step of using a neural network for predicting damage. Both methods, as disclosed by the combined teachings and by Bouette, were known in the art to effectively assess damage. The artificial intelligence teaching of the combined teachings above could have been substituted with the neural network teaching of Bouette, and the results would have been predictable, resulting in equally predicting damage. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Claims 5 and 19 are similar in scope to claim 12.

Allowable Subject Matter

Claims 2, 3, 6, 7, 9, 10, 13, 14, 16, 17, 19 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
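For context on the Murez-style back-projection relied on for claim 11 (each image pixel mapped along a camera ray into a voxel volume using known intrinsics and extrinsics), a minimal geometric sketch follows. It is illustrative only: the function name is hypothetical, and a per-pixel depth input stands in for the feature back-projection that Murez actually describes.

```python
import numpy as np

def backproject_pixels_to_voxels(depth, K, cam_to_world, voxel_origin, voxel_size):
    """Map each image pixel to a voxel index via its camera ray (hypothetical helper).

    depth        : (H, W) depth of each pixel along its camera ray
    K            : (3, 3) camera intrinsics
    cam_to_world : (4, 4) camera extrinsics (camera-to-world transform)
    voxel_origin : (3,) world coordinate of the voxel grid corner
    voxel_size   : edge length of one voxel
    Returns an (H, W, 3) array of integer voxel indices, one per pixel.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous image coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)
    # Unproject: ray directions in the camera frame, scaled by depth.
    rays_cam = np.linalg.inv(K) @ pix
    pts_cam = rays_cam * depth.reshape(1, -1)
    # Transform the 3D points into the world frame with the extrinsics.
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_world = (cam_to_world @ pts_h)[:3].T  # (H*W, 3)
    # Quantize world points into voxel grid indices.
    idx = np.floor((pts_world - voxel_origin) / voxel_size).astype(int)
    return idx.reshape(H, W, 3)
```

With real features, the same unproject-and-quantize geometry determines which voxel each pixel's feature vector is accumulated into; the embedding-similarity and pose-optimization limitations of claim 11 are not modeled here.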
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu, whose telephone number is (571) 272-8079. The examiner can normally be reached M-F: 9:30-1:30pm and 3:30-8:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel F Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID H CHU/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jul 09, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection (§103)
Mar 24, 2026: Examiner Interview Summary
Mar 24, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602881: ELECTRONIC DEVICE AND METHOD FOR PROVIDING AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591402: AUGMENTED REALITY COLLABORATION SYSTEM WITH ANNOTATION CAPABILITY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591695: METHOD OF IMAGE PROCESSING FOR THREE-DIMENSIONAL RECONSTRUCTION IN AN EXTENDED REALITY ENVIRONMENT AND A HEAD MOUNTED DISPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12524907: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-EXECUTABLE MEDIUM
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12494011: RAY TRACING HARDWARE AND METHOD
Granted Dec 09, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 81% (+2.7%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 682 resolved cases by this examiner. Grant probability derived from career allow rate.
