Prosecution Insights
Last updated: April 19, 2026
Application No. 18/921,915

Systems, Methods, and Apparatus for using Remote Assistance to Annotate Images of an Environment

Non-Final OA: §102, §103
Filed: Oct 21, 2024
Examiner: MILLER, PRESTON JAY
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Waymo LLC
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 56% (28 granted / 50 resolved; +4.0% vs TC avg)
Interview Lift: +18.8% (strong), comparing allow rates among resolved cases with and without an examiner interview
Typical Timeline: 3y 1m average prosecution; 39 applications currently pending
Career History: 89 total applications across all art units
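The dashboard does not publish its formulas, but the headline figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of the presumed arithmetic; the variable names, the backed-out TC average, and the with/without-interview split are assumptions for illustration, not the tool's documented method.

```python
# Sketch of the examiner-metric arithmetic shown above. The dashboard's
# exact methodology is not published; the TC average and the interview
# split below are assumptions backed out from the displayed figures.

granted = 28            # granted applications among resolved cases
resolved = 50           # resolved cases (granted plus abandoned/withdrawn)
tc_avg_allow = 0.52     # assumed TC 3600 average implied by "+4.0% vs TC avg"

allow_rate = granted / resolved              # 28 / 50 = 0.56
delta_vs_tc = allow_rate - tc_avg_allow      # 0.56 - 0.52 = +0.04

# Interview lift: allow rate among resolved cases that had an examiner
# interview minus the rate among those that did not. Only the +18.8%
# delta is reported; the two underlying rates here are hypothetical.
allow_with_interview = 0.75
allow_without_interview = allow_with_interview - 0.188

print(f"Career Allow Rate: {allow_rate:.0%} ({granted} granted / {resolved} resolved)")
print(f"{delta_vs_tc:+.1%} vs TC avg")
print(f"Interview Lift: {allow_with_interview - allow_without_interview:+.1%}")
```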

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 50 resolved cases
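Each row reads as the examiner's per-statute rate next to its gap from the estimated Tech Center average. A short sketch under that reading follows; notably, backing the average out of each of the four reported deltas gives 40.0% every time, so that single constant is used here (the estimate itself is not published and is an assumption).

```python
# Per-statute performance: examiner rate minus estimated TC average.
# Rates come from the table above; the 40.0% TC average is backed out
# from the reported deltas (e.g., 17.7% - 40.0% = -22.3%).
examiner_rate = {"§101": 0.177, "§103": 0.480, "§102": 0.153, "§112": 0.170}
TC_AVG_ESTIMATE = 0.400

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```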

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

2. This office action is in response to application with case number 18/921,915 filed on 10/21/2024, in which claims 1-20 are presented for examination.

Information Disclosure Statement

3. The information disclosure statement(s) (IDS(s)) submitted on 10/21/2024 has/have been received and considered.

Examiner Notes

4. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. The prompt development of a clear issue requires that the replies of the Applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure (see MPEP §2163.06). Applicant is reminded that the Examiner is entitled to give the Broadest Reasonable Interpretation (BRI) of the language of the claims. Furthermore, the Examiner is not limited to Applicant's definition which is not specifically set forth in the claims. See MPEP 2141.02 [R-07.2015] VI. PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS: A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.

Claim Rejections - 35 USC § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claim(s) 1, 4-6, 8-12, and 15-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dolgov et al. (US-20190019349-A1).

In regard to claim 1, Dolgov discloses a method comprising (Dolgov, in at least [0031], discloses methods and systems for remote assistance): receiving, at a computing device, sensor data and a request for assistance from a vehicle, wherein the sensor data depicts a trajectory of the vehicle in an environment (Dolgov, in at least Figs. 4A-4D, 6B, [0142 & 0152], discloses the vehicle requests remote assistance in substantially real-time [i.e., receiving a request for assistance from a vehicle], which the computing system [i.e., at a computing device] uses as a means for alerting the human operator. At block 622, the computing system operates by receiving image data from the autonomous vehicle of an environment of the autonomous vehicle [i.e., receiving, at a computing device, sensor data from a vehicle, wherein the sensor data depicts a trajectory of the vehicle in an environment]. Examiner notes, as illustrated by Figs. 4C-4D, the sensor data, such as the image of the environment, depicts the trajectory of the vehicle in the environment); displaying, by the computing device and based on the sensor data, a representation of the environment showing the trajectory of the vehicle (Dolgov, in at least Figs. 1, 4A-4D, 6B, and [0050 & 0155], discloses navigation/pathing system 142 determines a driving path for vehicle 100. At block 626, the computing system operates by providing at least one image to an operator [i.e., displaying, by the computing device and based on the sensor data, a representation of the environment showing the trajectory of the vehicle] from the memory, wherein at least one image comprises previously-stored image data related to the at least one object of an environment of the autonomous vehicle has a detection confidence below a threshold. Examiner notes, as illustrated by Figs. 4C-4D, the representation of the environment shows the trajectory of the vehicle); receiving an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment (Dolgov, in at least Fig. 6B, and [0156], discloses at block 628, the computing system operates by receiving an operator input. The operator provides a correct identification of the object having the low detection confidence [i.e., receiving an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment]); and providing, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects (Dolgov, in at least Fig. 6B, and [0157], discloses at block 630, in response to receiving an operator input, the computing system operates by providing an instruction to the autonomous vehicle for execution by the autonomous vehicle [i.e., providing, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects] by way of a network).

In regard to claim 4, Dolgov discloses the method of claim 1, wherein receiving the input provided by the operator comprises: receiving a first selection of an area from the representation of the environment (Dolgov, in at least Fig. 4E and [0125], discloses the control menu 418 allows the operator to input guidance to the vehicle in a number of different ways (e.g., selecting from a list of operations, typing in a particular mode of operation, selecting a particular region of focus within an image of the environment, etc.) [i.e., receiving a first selection of an area from the representation of the environment]); and receiving a second selection that assigns the bulk annotation to the plurality of objects located in the area, wherein the bulk annotation associates a label with each object of the plurality of objects located in the area (Dolgov, in at least Fig. 4E, and [0126], discloses the human operator indicates a natural-language question 420 to identify the object identified as the temporary stop sign 404. Examiner notes, as illustrated by Fig. 4E, a label "Stop Sign" is associated with object 404 based on the human operator selection. That is, receiving a second selection that assigns the bulk annotation to the plurality of objects located in the area, wherein the bulk annotation associates a label with each object of the plurality of objects located in the area).

In regard to claim 5, Dolgov discloses the method of claim 1, further comprising: displaying a selectable option with the representation of the environment, wherein the selectable option enables bulk annotation of the plurality of objects (Dolgov, in at least Fig. 4E, [0125 & 0144-0148], discloses the control menu 418 allows the operator to input guidance to the vehicle in a number of different ways (e.g., selecting from a list of operations, typing in a particular mode of operation, selecting a particular region of focus within an image of the environment, etc.). The user-interface includes various selectable and non-selectable elements for presenting aspects of the at least one image [i.e., displaying a selectable option with the representation of the environment], such as windows, sub-windows, text boxes, and command buttons. The GUI enables the human operator to select an area of interest in the pre-stored data for further analysis. Such an area of interest includes important objects in the environment that the vehicle did not correctly identify or did not attempt to identify, or includes any object for which the human operator believes their feedback is desired [i.e., wherein the selectable option enables bulk annotation of the plurality of objects]. The computing system displays, to the human operator, an image of the pre-stored data that the vehicle may have annotated with the alleged identities of various relevant objects).

In regard to claim 6, Dolgov discloses the method of claim 1, wherein the representation of the environment shows a construction site positioned along the trajectory of the vehicle (Dolgov, in at least [0091], discloses the vehicle is configured to determine objects based on the context of the data. Street signs related to construction generally have an orange color. Accordingly, the vehicle is configured to detect objects that are orange, and located near the side of roadways, as construction-related street signs [i.e., wherein the representation of the environment shows a construction site positioned along the trajectory of the vehicle]), and wherein the bulk annotation is assigned to a plurality of construction elements positioned proximate the construction site (Dolgov, in at least [0109], discloses if the object at issue is an orange construction cone, the human operator enters via a keyboard, or speaks via a microphone, a response including the words "construction cone" [i.e., wherein the bulk annotation is assigned to a plurality of construction elements positioned proximate the construction site]).

In regard to claim 8, Dolgov discloses the method of claim 1, wherein receiving sensor data and the request for assistance from the vehicle comprises: receiving image data from one or more cameras coupled to the vehicle (Dolgov, in at least Fig. 1, and [0026 & 0044], discloses the remote assistance process acquires (e.g., via cameras, LIDAR, radar, and/or other sensors) [i.e., receiving image data from one or more cameras coupled to the vehicle] environment data including an object or objects in the vehicle's environment. Camera 130 includes one or more devices (e.g., still camera or video camera) configured to capture images of the environment of vehicle 100); and wherein displaying the representation of the environment showing the trajectory of the vehicle comprises: displaying the image data received from the one or more cameras (Dolgov, in at least Fig. 4D, and [0124], discloses Fig. 4D shows a GUI on a remote computing system that is presented to a human operator. The GUI 412 includes separate sub-windows 414 and 416. The first sub-window 414 includes the vehicle's sensor data representation of its environment. The second sub-window 416 includes a video stream of a portion of the environment [i.e., displaying the image data received from the one or more cameras]).

In regard to claim 9, Dolgov discloses the method of claim 1, wherein receiving sensor data and the request for assistance from the vehicle comprises: receiving lidar data from one or more lidars coupled to the vehicle (Dolgov, in at least [0159], discloses the computing system receives the sensor data, and detects the tactile event based on the received data. The sensor data come from sensors such as an IMU, accelerometer, RADAR, LIDAR [i.e., receiving lidar data from one or more lidars coupled to the vehicle], impact sensor, etc.); and generating the representation of the environment based on the lidar data (Dolgov, in at least Fig. 4E, and [0125], discloses a GUI that contains a first sub-window showing the vehicle's sensor data representation of its environment [i.e., generating the representation of the environment based on the lidar data] and a second sub-window showing a video stream of a portion of the vehicle's environment).

In regard to claim 10, Dolgov discloses the method of claim 1, wherein the computing device is positioned remotely from the vehicle (Dolgov, in at least Figs. 3A, 6A and [0136], discloses a computing system (e.g., remote computing system 302 or server computing system 306) [i.e., wherein the computing device is positioned remotely from the vehicle] operates in a rewind mode as shown by method 600. Examiner notes, as depicted by Fig. 3A, the server and remote computing systems are positioned remotely from the vehicle).

In regard to claim 11, Dolgov discloses the method of claim 1, wherein the request for assistance indicates that the vehicle is stopped (Dolgov, in at least Fig. 6A, and [0137-0138], discloses block 602 further includes periodically determining if the vehicle is stopped [i.e., wherein the request for assistance indicates that the vehicle is stopped] and/or determining if the vehicle has been stopped for a predetermined threshold period of time. After each minute the vehicle is stopped, the review criterion triggers remote assistance).

In regard to claim 12, Dolgov discloses a system comprising (Dolgov, in at least [0031], discloses methods and systems for remote assistance): a vehicle (Dolgov, in at least Fig. 2, and [0036], discloses vehicle 200 [i.e., a vehicle] includes sensor unit 202); and a computing device positioned remote from the vehicle (Dolgov, in at least Figs. 3A, 6A and [0136], discloses a computing system (e.g., remote computing system 302 or server computing system 306) [i.e., wherein the computing device is positioned remotely from the vehicle] operates in a rewind mode as shown by method 600. Examiner notes, as depicted by Fig. 3A, the server and remote computing systems are positioned remotely from the vehicle), receive sensor data and a request for assistance from the vehicle, wherein the sensor data depicts a trajectory of the vehicle in an environment (Dolgov, in at least Figs. 4A-4D, 6B, [0142 & 0152], discloses the vehicle requests remote assistance in substantially real-time [i.e., receive a request for assistance from the vehicle], which the computing system uses as a means for alerting the human operator. At block 622, the computing system operates by receiving image data from the autonomous vehicle of an environment of the autonomous vehicle [i.e., receive sensor data, wherein the sensor data depicts a trajectory of the vehicle in an environment]. Examiner notes, as illustrated by Figs. 4C-4D, the sensor data, such as the image of the environment, depicts the trajectory of the vehicle in an environment); display, based on the sensor data, a representation of the environment showing the trajectory of the vehicle (Dolgov, in at least Figs. 1, 4A-4D, 6B, and [0050 & 0155], discloses navigation/pathing system 142 determines a driving path for vehicle 100. At block 626, the computing system operates by providing at least one image to an operator [i.e., display, based on the sensor data, a representation of the environment showing the trajectory of the vehicle] from the memory, wherein at least one image comprises previously-stored image data related to the at least one object of an environment of the autonomous vehicle has a detection confidence below a threshold. Examiner notes, as illustrated by Figs. 4C-4D, the representation of the environment shows the trajectory of the vehicle); receive an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment (Dolgov, in at least Fig. 6B, and [0156], discloses at block 628, the computing system operates by receiving an operator input. The operator provides a correct identification of the object having the low detection confidence [i.e., receive an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment]); and provide, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects (Dolgov, in at least Fig. 6B, and [0157], discloses at block 630, in response to receiving an operator input, the computing system operates by providing an instruction to the autonomous vehicle for execution by the autonomous vehicle [i.e., provide, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects] by way of a network).

In regard to claim 15, Dolgov discloses the system of claim 12, wherein the computing device is further configured to: receive a first input selecting an area of the representation of the environment (Dolgov, in at least Fig. 4E and [0125], discloses the control menu 418 allows the operator to input guidance to the vehicle in a number of different ways (e.g., selecting from a list of operations, typing in a particular mode of operation, selecting a particular region of focus within an image of the environment, etc.) [i.e., receiving a first selection of an area from the representation of the environment]); and receive a second input that assigns a label to each object located in the area that matches a first type of object (Dolgov, in at least Fig. 4E, and [0126], discloses the human operator indicates a natural-language question 420 to identify the object identified as the temporary stop sign 404. Examiner notes, as illustrated by Fig. 4E, a label "Stop Sign" is associated with object 404 based on the human operator selection. That is, receiving a second input that assigns a label to each object located in the area that matches a first type of object).

In regard to claim 16, Dolgov discloses the system of claim 12. Claim 16 recites a system having substantially the same features of claim 5 above; therefore, claim 16 is rejected for the same reasons as claim 5.

In regard to claim 17, Dolgov discloses the system of claim 12. Claim 17 recites a system having substantially the same features of claim 6 above; therefore, claim 17 is rejected for the same reasons as claim 6.

In regard to claim 18, Dolgov discloses the system of claim 12, wherein the sensor data is provided by a camera or a lidar coupled to the vehicle (Dolgov, in at least Fig. 1, and [0026 & 0044], discloses the remote assistance process acquires (e.g., via cameras, LIDAR, radar, and/or other sensors) [i.e., wherein the sensor data is provided by a camera or a lidar coupled to the vehicle] environment data including an object or objects in the vehicle's environment. Camera 130 includes one or more devices (e.g., still camera or video camera) configured to capture images of the environment of vehicle 100).

In regard to claim 19, Dolgov, as modified by David, teaches the system of claim 12, wherein the computing device is further configured to: receive a first input selecting an area of the representation of the environment (Dolgov, in at least Fig. 4D, and [0125], discloses the control menu 418 allows the operator to input guidance to the vehicle in a number of different ways, such as selecting a particular region of focus within an image of the environment [i.e., receive a first input selecting an area of the representation of the environment]); based on the first input, generate a bounding region around the area (Dolgov, in at least Fig. 4E (reproduced and annotated below for Applicant's convenience), and [0126], discloses the human operator indicates a natural-language question 420 to identify the object identified as the temporary stop sign 404. Examiner notes, as illustrated by Fig. 4E of Dolgov, a bounding region is generated); and maintain the bounding region around the area after updating the representation of the environment based on additional sensor data (Dolgov, in at least [0126], discloses when an identification is confirmed, the identification is added to a global map [i.e., maintain the bounding region around the area after updating the representation of the environment based on additional sensor data]. When the identification is added to the global map, other vehicles may not have to request an identification of the object in the future).

[Annotated Fig. 4E of Dolgov - Generating a bounding region (image omitted)]

In regard to claim 20, Dolgov discloses a non-transitory computer-readable medium storing instructions, the instructions being executable by one or more processors to perform operations comprising (Dolgov, in at least [0164], discloses methods are implemented as computer program instructions encoded on a non-transitory computer-readable storage media in a machine-readable format, or on other non-transitory media or articles of manufacture): receiving sensor data and a request for assistance from a vehicle, wherein the sensor data depicts a trajectory of the vehicle in an environment (Dolgov, in at least Figs. 4A-4D, 6B, [0142 & 0152], discloses the vehicle requests remote assistance in substantially real-time [i.e., receiving a request for assistance from a vehicle], which the computing system uses as a means for alerting the human operator. At block 622, the computing system operates by receiving image data from the autonomous vehicle of an environment of the autonomous vehicle [i.e., receiving sensor data, wherein the sensor data depicts a trajectory of the vehicle in an environment]. Examiner notes, as illustrated by Figs. 4C-4D, the sensor data, such as the image of the environment, depicts the trajectory of the vehicle in an environment); displaying, based on the sensor data, a representation of the environment showing the trajectory of the vehicle (Dolgov, in at least Figs. 1, 4A-4D, 6B, and [0050 & 0155], discloses navigation/pathing system 142 determines a driving path for vehicle 100. At block 626, the computing system operates by providing at least one image to an operator [i.e., displaying, based on the sensor data, a representation of the environment showing the trajectory of the vehicle] from the memory, wherein at least one image comprises previously-stored image data related to the at least one object of an environment of the autonomous vehicle has a detection confidence below a threshold. Examiner notes, as illustrated by Figs. 4C-4D, the representation of the environment shows the trajectory of the vehicle); receiving an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment (Dolgov, in at least Fig. 6B, and [0156], discloses at block 628, the computing system operates by receiving an operator input. The operator provides a correct identification of the object having the low detection confidence [i.e., receiving an input provided by an operator, the input assigning a bulk annotation to a plurality of objects located in the environment]); and providing, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects (Dolgov, in at least Fig. 6B, and [0157], discloses at block 630, in response to receiving an operator input, the computing system operates by providing an instruction to the autonomous vehicle for execution by the autonomous vehicle [i.e., providing, based on the input, a response to the vehicle, the response conveying the bulk annotation assigned to the plurality of objects] by way of a network).

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claim(s) 2, 7 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dolgov et al. (US-20190019349-A1) in view of Korjus et al. (US-20210209367-A1).

In regard to claim 2, Dolgov discloses the method of claim 1; accordingly, the rejection of claim 1 is incorporated. Dolgov is silent on all limitations of the claim. However, Korjus teaches wherein each object of the plurality of objects is a first type of object, and wherein the bulk annotation assigns a classification to each object of the plurality of objects based on the first type of object (Korjus, in at least [0030 & 0066], teaches the method comprises identifying a type of occlusion present in the preprocessed data [i.e., wherein each object of the plurality of objects is a first type of object]. That is, upon positively identifying occlusion, a further step comprises detecting what the occlusion corresponds to. The method further comprises classifying the detected occlusion [i.e., wherein the bulk annotation assigns a classification to each object of the plurality of objects based on the first type of object]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify Dolgov in view of Korjus with a reasonable expectation of success, as both inventions are directed to the same field of endeavor (mobile robots) and identify the object type and classify the detected objects, and the combination would provide for increased safety of operations (Korjus, see at least [0001]).

In regard to claim 7, Dolgov discloses the method of claim 1; accordingly, the rejection of claim 1 is incorporated. Dolgov is silent on all limitations of the claim. However, Korjus teaches wherein the bulk annotation is assigned to a plurality of vegetation positioned proximate a road associated with the trajectory of the vehicle (Korjus, in at least [0121], teaches the neural network first is trained on images where a certain type of occlusion (such as occlusion of a traffic road via parked cars, vegetation, etc.) is annotated, and then applied to the images detected by the mobile robot [i.e., wherein the bulk annotation is assigned to a plurality of vegetation positioned proximate a road associated with the trajectory of the vehicle]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify Dolgov in view of Korjus with a reasonable expectation of success, as both inventions are directed to the same field of endeavor (mobile robots) and use the method of Korjus to train and annotate the vegetation along the trajectory of the vehicle, and the combination would provide for increased safety of operations (Korjus, see at least [0001]).

In regard to claim 13, Dolgov discloses the system of claim 12. Claim 13 recites a system having substantially the same features of claim 2 above; therefore, claim 13 is rejected for the same reasons as claim 2.

9. Claim(s) 3 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dolgov et al. (US-20190019349-A1) in view of Korjus et al. (US-20210209367-A1) and further in view of Kario et al. (US-20240163402-A1).

In regard to claim 3, Dolgov, as modified by Korjus, teaches the method of claim 2; accordingly, the rejection of claim 2 is incorporated. Dolgov, as modified by Korjus, is silent on all limitations of the claim. However, Kario teaches wherein the response further indicates that the bulk annotation assigned to the plurality of objects is applicable to additional objects that match the first type of object for a predetermined amount of time (Kario, in at least [0059-0061], teaches performing surveillance on the target and/or the area over a predetermined period of time, wherein the target and one or more properties of the target are identified based on data gathered at a first point in time in the predetermined period of time. The set of steps comprises identifying the target at a second point in time in the predetermined period of time based on the one or more properties [i.e., wherein the response further indicates that the bulk annotation assigned to the plurality of objects is applicable to additional objects that match the first type of object for a predetermined amount of time]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify Dolgov, as modified by Korjus, in view of Kario with a reasonable expectation of success, as both inventions are directed to the same field of endeavor (vehicle systems) and identify or match an object in a predetermined period of time, and the combination would provide for detecting objects using a fixed angle and/or known angles (Kario, see at least [0003]).

In regard to claim 14, Dolgov, as modified by Korjus, teaches the system of claim 13. Claim 14 recites a system having substantially the same features of claim 3 above; therefore, claim 14 is rejected for the same reasons as claim 3.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. David et al. (US-20220388543-A1) teaches a vehicle that renders a trust zone on the display to indicate safe trajectories around an object.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Preston J. Miller, whose telephone number is (703) 756-1582. The examiner can normally be reached Monday through Friday, 7:30 AM - 4:30 PM EST.

12. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

13. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramya P. Burgess, can be reached at (571) 272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

14. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P.J.M./
Examiner, Art Unit 3661

/Tarek Elarabi/
Primary Examiner, Art Unit 3661

Prosecution Timeline

Oct 21, 2024 • Application Filed
Jan 28, 2026 • Non-Final Rejection (§102, §103)
Apr 06, 2026 • Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12559091
CONTROL DEVICE FOR CONTROLLING SAFETY DEVICE IN VEHICLE
2y 5m to grant • Granted Feb 24, 2026
Patent 12490678
VEHICLE LOCATION WITH DYNAMIC MODEL AND UNLOADING CONTROL SYSTEM
2y 5m to grant • Granted Dec 09, 2025
Patent 12466388
Method for Operating a Motor Vehicle Drive Train and Electronic Control Unit for Carrying Out Said Method
2y 5m to grant • Granted Nov 11, 2025
Patent 12454806
WORK MACHINE
2y 5m to grant • Granted Oct 28, 2025
Patent 12447827
Electric Vehicle Control Device, Electric Vehicle Control Method, And Electric Vehicle Control System
2y 5m to grant • Granted Oct 21, 2025
Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 75% (+18.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
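The with-interview figure is consistent with a simple additive model over the base grant probability. A sketch under that assumption follows; the additive model is an inference from the displayed numbers, not a documented formula.

```python
# With-interview projection: base grant probability plus interview lift,
# capped at 100%. The additive model is an assumption consistent with
# the displayed values: 56% + 18.8% = 74.8%, shown rounded as 75%.
base_grant_prob = 0.56     # career allow rate used as the base probability
interview_lift = 0.188     # this examiner's observed interview lift

with_interview = min(base_grant_prob + interview_lift, 1.0)
print(f"Grant Probability: {base_grant_prob:.0%}")
print(f"With Interview: {with_interview:.0%} ({interview_lift:+.1%})")
```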
