Prosecution Insights
Last updated: April 19, 2026
Application No. 18/406,733

MACHINE LEARNING BASED CYCLE TIME TRACKING AND REPORTING FOR VEHICLES

Status: Non-Final OA (§103)
Filed: Jan 08, 2024
Examiner: MILIA, MARK R
Art Unit: 2681
Tech Center: 2600 (Communications)
Assignee: Rivian IP Holdings, LLC
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 10m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 340 granted / 583 resolved; -3.7% vs TC avg)
Interview Lift: +23.7% (strong), measured across resolved cases with interview
Avg Prosecution: 2y 10m (26 applications currently pending)
Total Applications: 609 across all art units

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 583 resolved cases.
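As a sanity check, the headline figures can be recomputed from the counts reported above (340 granted of 583 resolved). This is a minimal Python sketch; the variable names are mine, and the 82% with-interview figure is taken directly from this page rather than derived.

```python
# Cross-check of the examiner statistics shown on this page.
# All input figures come from the dashboard; variable names are illustrative.

granted = 340          # applications granted by this examiner
resolved = 583         # total resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 58.3%, shown rounded as 58%

# The interview lift is the gap between the grant probability with an
# interview (82%, as reported) and the career allow rate.
with_interview = 0.82
lift = with_interview - allow_rate
print(f"Interview lift: {lift:+.1%}")           # +23.7%
```

Note that the unrounded allow rate (58.3%) is what makes the lift come out to the +23.7% shown, rather than the +24% of the rounded figures.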

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: reference numerals 500, 520, and 525 in reference to Fig. 5, and reference numeral 1600 in reference to Fig. 16. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are also objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “420” has been used to designate both Threshold and Duration in Fig. 4. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action.
The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: in paragraph 37, reference numeral 105 should read 102. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Henderson et al. (US 2024/0135714) in view of ElHattab et al. (US 11,158,177).
Regarding claims 1, 13, and 19, Henderson discloses a non-transitory computer-readable medium storing processor executable instructions, a method, and a system, comprising one or more processors, coupled with memory, to:

identify one or more models trained with machine learning relating to physical characteristics of vehicles and location designations associated with one or more vehicle areas (see paras 15, 26, 36, 46, and 58-59: a first trained machine learning model identifies a vehicle 108 via images, and a second trained machine learning model identifies a vehicle 108 via audio);

receive, from one or more cameras, a video stream that captures a vehicle disposed in a vehicle area comprising a location designation (see paras 25-26: one or more cameras are used to generate a video stream used to determine a vehicle location);

determine, based on an analysis of a plurality of frames of the video stream with the one or more models, a type of the vehicle disposed in the vehicle area and a duration the vehicle is disposed in the vehicle area (see paras 39, 44, 57, 62, 67-68, and 90: a first trained machine learning model identifies a vehicle 108 type and the duration a vehicle spends at a particular location); and

perform, based on the type of the vehicle, an action to cause delivery of the vehicle from the vehicle area (see paras 67-68 and 72: a workflow of the vehicle can be controlled to move the vehicle from one zone to another based on a predetermined time period).

Henderson does not expressly disclose a comparison of the duration with a threshold, or an action, based on that comparison, to cause delivery of an object from the area.
ElHattab discloses performing, based on a comparison of the duration with a threshold, an action to cause delivery of an object from the area (see col 39 line 11-col 40 line 46: a threshold can be used to trigger an action, such as alerting a user of the need to perform a task). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the time duration threshold used to trigger an action, as described by ElHattab, with the system of Henderson. The suggestion/motivation for doing so would have been to avoid bottlenecks and ensure efficient vehicle workflow, thereby saving time and money. Therefore, it would have been obvious to combine ElHattab with Henderson to obtain the invention as specified in claims 1, 13, and 19.

Regarding claims 2, 14, and 20, Henderson further discloses determining that the type of the vehicle matches a predetermined vehicle type established for the vehicle area, and performing the action responsive to the match (see paras 67-68, 72, and 90: a workflow of the vehicle can be controlled to move the vehicle from one zone to another based on a predetermined time period).

Regarding claims 3 and 15, ElHattab further discloses providing, for display via a graphical user interface, an indication of the vehicle disposed in the vehicle area and the duration (see Figs. 8D, 8F, and 11B, col 47 line 39-col 48 line 16, col 48 lines 25-44, and col 57 line 56-col 58 line 53: a GUI view of the manufacturing area is displayed to a user).
Regarding claim 4, Henderson further discloses determining, based on an identifier associated with the vehicle area and the type of the vehicle, a status of a workflow for the vehicle (see paras 39, 67-68, and 72: a workflow of the vehicle can be controlled to move the vehicle from one zone to another based on a predetermined time period); and ElHattab further discloses providing, for display via a graphical user interface, an indication of the vehicle disposed in the vehicle area and the status (see Figs. 8D, 8F, and 11B, col 47 line 39-col 48 line 16, col 48 lines 25-44, and col 57 line 56-col 58 line 53: a GUI view of the manufacturing area is displayed to a user).

Regarding claims 7 and 16, ElHattab further discloses providing at least one of a visual alarm or an audio alarm that indicates the duration is greater than or equal to the threshold (see col 38 line 41-col 41 line 49: visual and audio alarms can be utilized).

Regarding claim 8, Henderson further discloses wherein the one or more models comprise a multi-modal model (see para 15: a first trained machine learning model identifies a vehicle 108 via images, and a second trained machine learning model identifies a vehicle 108 via audio).

Regarding claims 9 and 17, Henderson further discloses wherein the one or more models are trained with training data generated to represent a plurality of features of the type of the vehicle captured from a plurality of perspectives of the one or more cameras (see paras 20, 25-26, 39, and 90: a first trained machine learning model identifies a vehicle 108 via images captured by one or more cameras; vehicle type and/or manufacturer are determined).
Regarding claim 10, Henderson further discloses wherein the one or more models are trained with training data generated to represent a plurality of features of the type of the vehicle captured from at least one camera with noise (see para 38: a first trained machine learning model can correct for image distortion/noise).

Regarding claim 11, Henderson further discloses detecting, based on a first one or more frames of the plurality of frames input into the one or more models, the vehicle disposed in the vehicle area at a first time stamp; identifying, based on a second one or more frames of the plurality of frames input into the one or more models, an absence of the vehicle in the vehicle area at a second time stamp; and determining the duration based on a difference between the second time stamp and the first time stamp (see paras 28 and 67-68: timestamps are used to aid in the movement of a vehicle 108 from one zone to another).

Regarding claims 12 and 18, Henderson further discloses detecting, based on a first one or more frames of the plurality of frames input into the one or more models, the vehicle disposed in the vehicle area at a first time stamp; identifying, based on a second one or more frames of the plurality of frames input into the one or more models, an absence of the vehicle in the vehicle area at a second time stamp; and detecting, based on a third one or more frames of the plurality of frames input into the one or more models, the vehicle disposed in the vehicle area at a third time stamp (see paras 28 and 67-68: timestamps are used to aid in the movement of a vehicle 108 from one zone to another); and determining, based on the second time stamp, the third time stamp, and a buffer threshold, that an obstacle between the one or more cameras and the vehicle area prevents the vehicle from capture in the video stream in the second one or more frames (see paras 28, 38-39, and 67-68: timestamps are used to aid in the movement of a vehicle 108 from one zone to another, and the trained machine learning model can correct for image distortion).

Allowable Subject Matter

Claims 5 and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R MILIA, whose telephone number is (571) 272-7408. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at 571-270-3438. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARK R MILIA/
Primary Examiner, Art Unit 2681
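The timestamp logic the rejection maps onto claims 1, 11, and 12 (duration as the difference between detection timestamps, a dwell threshold that triggers an action, and a buffer threshold that treats short absences as camera occlusion) can be sketched as follows. This is an illustrative reconstruction, not code from the application or the cited references; all names and threshold values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float   # seconds into the video stream
    present: bool      # whether the model detected the vehicle in the area

def measure_duration(detections, buffer_threshold=5.0):
    """Total time the vehicle occupies the area (claim 11: duration is the
    difference between detection timestamps), bridging absence gaps shorter
    than buffer_threshold, which claim 12 attributes to an obstacle
    occluding the camera rather than the vehicle actually leaving."""
    entered = last_seen = None
    total = 0.0
    for d in sorted(detections, key=lambda d: d.timestamp):
        if not d.present:
            continue
        if entered is None:
            entered = last_seen = d.timestamp
        elif d.timestamp - last_seen > buffer_threshold:
            # Gap exceeds the buffer: the vehicle left and returned,
            # so close the current interval and start a new one.
            total += last_seen - entered
            entered = last_seen = d.timestamp
        else:
            last_seen = d.timestamp
    if entered is not None:
        total += last_seen - entered
    return total

def dwell_action_needed(detections, dwell_threshold=600.0):
    """Claim 1 / ElHattab-style comparison: trigger a delivery action
    when the measured duration meets or exceeds a dwell threshold."""
    return measure_duration(detections) >= dwell_threshold
```

For example, with detections at t = 0, 2, 4, 6, and 9 s, then again at t = 60 s, a 5-second buffer bridges the small gaps in the first run but treats the 51-second gap as a departure, so the measured duration is 9 s.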

Prosecution Timeline

Jan 08, 2024: Application Filed
Jan 07, 2026: Non-Final Rejection (§103)
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602843: METHOD FOR CONVERTING ENDOSCOPE IMAGES TO NARROW BAND IMAGES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591972: DEVICE FOR INFERRING MATERIAL DENSITY IMAGE, CT SYSTEM, STORAGE MEDIUM, AND METHOD OF CREATING TRAINED NEURAL NETWORK
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12575888: PREDICTING STEREOSCOPIC VIDEO WITH CONFIDENCE SHADING FROM A MONOCULAR ENDOSCOPE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579187: INFORMATION-PROCESSING DEVICE, INFORMATION-PROCESSING METHOD AND INFORMATION-PROCESSING PROGRAM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12578309: METHOD, DEVICE AND PROGRAM FOR DETECTING, BY ULTRASOUND, DEFECTS IN A MATERIAL
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58% (82% with interview, +23.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 583 resolved cases by this examiner. Grant probability derived from career allow rate.
