Prosecution Insights
Last updated: April 19, 2026
Application No. 18/180,350

AUTOMATED AND ENHANCED REMOTE ASSISTANCE AND FLEET RESPONSE

Final Rejection — §103, §112
Filed
Mar 08, 2023
Examiner
MILES, JONATHAN WADE
Art Unit
3656
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
GM Cruise Holdings LLC
OA Round
2 (Final)
70%
Grant Probability (Favorable)
3-4
OA Rounds
3y 2m
To Grant
99%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
406 granted / 578 resolved
+18.2% vs TC avg
Strong +48% interview lift
+48.1%
Interview Lift
resolved cases with an interview vs. without
Typical timeline
3y 2m
Avg Prosecution
Career history
583
Total Applications
across all art units
5 currently pending
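The headline figures above fit together arithmetically. As a quick check, here is a minimal Python sketch of how these examiner metrics can be reproduced; the 52% Tech Center baseline and the reading of "interview lift" as a percentage-point gap are assumptions about the tool's methodology, not stated facts.

```python
# Minimal sketch reproducing the examiner metrics above.
# ASSUMPTIONS: the 0.52 TC 3600 baseline is back-calculated from the stated
# "+18.2% vs TC avg", and "interview lift" is read as the percentage-point
# gap in allowance rate between interviewed and non-interviewed cases.
granted = 406                     # granted applications (dashboard figure)
resolved = 578                    # resolved applications (dashboard figure)
pending = 5                       # currently pending (dashboard figure)

career_allow_rate = granted / resolved        # ~0.702 -> "70%"
total_applications = resolved + pending       # 583 "across all art units"
tc_average_estimate = 0.52                    # assumed baseline
delta_vs_tc = career_allow_rate - tc_average_estimate   # ~ +0.182

def interview_lift(rate_with_interview: float, rate_without_interview: float) -> float:
    """Assumed definition: percentage-point difference in allowance rate."""
    return rate_with_interview - rate_without_interview

print(f"Career allow rate: {career_allow_rate:.1%} ({delta_vs_tc:+.1%} vs TC avg)")
print(f"Total applications: {total_applications}")
```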

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 33.1% (-6.9% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Compared against a Tech Center average estimate • Based on career data from 578 resolved cases
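Each per-statute delta above differs from its rate by exactly 40 points, so the comparison baseline appears to be roughly 40% for every statute. A small sketch, under the assumption that these figures are the share of the examiner's actions raising each statute (the tool does not define the metric):

```python
# Sketch reproducing the per-statute comparison above. ASSUMPTION: each rate
# is the share of this examiner's office actions citing that statute, and the
# Tech Center baseline (~40% per statute) is back-calculated from the stated
# deltas rather than taken from any published figure.
examiner_rates = {"§101": 0.006, "§103": 0.381, "§102": 0.331, "§112": 0.218}
tc_average_estimate = 0.40   # implied: every rate + |delta| sums back to 40.0%

for statute, rate in examiner_rates.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```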

Office Action

§103 §112
DETAILED ACTION

Response to Amendment

This Office action is in response to the claims filed May 19, 2025. Claims 1-3, 11-14, 16, 18, and 19 are amended, and claims 1-20 are pending and addressed below.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 3 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 3 repeats the new limitations in the amendments made to claim 1. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-10, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dolgov et al. (“Dolgov”; US 20190019349) in view of Sharma et al. (“Sharma”; US 20190220003).
Regarding claim 1, Dolgov discloses a method for providing remote assistance to a vehicle subsequent to a remote assistance (RA) event (abstract), the method comprising: receiving perception data generated by onboard sensors of the vehicle (sensor system 104); receiving supplemental data generated by at least one source external to the vehicle (“data storage 114 may store data such as roadway maps”, wherein the maps are data generated external to the vehicle and stored in the data storage for use by the vehicle); processing the perception data and the supplemental data using a digital twin co-simulation process for simulating an environment of the vehicle, wherein the digital twin co-simulation process uses a digital twin comprising a digital representation of the vehicle ([0161]; “In the vehicle simulator, the remote operator may be able to see a simulated view and feel tactile events as if the operator was in the autonomous car. Block 646 may also include generating a simulation in response to the tactile event. The simulation may include a predetermined period of time before the tactile event. In practice, the simulation may enable the operator to see and feel what the autonomous vehicle occupants saw and felt leading up to the tactile event. The operator may provide an input at block 648 after experiencing the simulated tactile event.”); and providing results of the digital twin co-simulation process to a remote assistance (RA) service, wherein the results of the digital twin co-simulation process are used by the RA service to determine an RA response ([0162]; “In practice, the operator may provide an identification of the cause of the tactile event and/or provide an instruction for the autonomous vehicle to perform.”); but Dolgov fails to disclose receiving proximate vehicle perception data generated by onboard sensors of another vehicle proximate to the vehicle and processing the proximate vehicle perception data in the simulation process.

However, Sharma discloses an autonomous vehicle receiving proximate vehicle perception data generated by onboard sensors of another vehicle proximate to the vehicle ([0034]; “assisting in sharing their sensor and/or object detection data”). It would have been obvious to one having ordinary skill in the art at the time of applicant’s effective filing date to combine the proximate vehicle perception data taught by Sharma with the method of Dolgov because it builds an accurate 3-D map of the environment (Sharma, [0034]). The motivation for the modification would have been to accurately map objects that may be blocked from the vehicle (Sharma, [0034]). Furthermore, it would have been obvious that when Sharma is combined with Dolgov, the enhanced 3-D mapping would be processed and used in the simulation process to provide an enhanced 3-D map of the proximate area including blocked objects.

Regarding claims 2-3, Dolgov in view of Sharma discloses the method of claim 1, and Sharma further discloses: (claim 2) wherein the other vehicle is distinct from the first vehicle (see Fig. 1; vehicle 102 uses proximate vehicle perception data from vehicles 104 and 106); and (claim 3) wherein the proximate vehicle perception data is also processed using the digital twin co-simulation process for simulating the environment of the vehicle (see rationale for combining in claim 1, wherein it would have been obvious to use the enhanced 3-D mapping taught by Sharma to improve the mapping from the vehicle in Dolgov).
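To make the claim 1 limitations being mapped above easier to follow, here is a purely illustrative Python sketch of the recited data flow (onboard perception data, external supplemental data, and proximate-vehicle data feeding a digital-twin co-simulation whose results go to an RA service). Every name and structure is hypothetical; this is not the applicant's, Dolgov's, or Sharma's implementation.

```python
# Illustrative sketch of the claim 1 data flow only. All names, fields, and
# return values are hypothetical and invented for readability.
from dataclasses import dataclass, field

@dataclass
class SimulationResult:
    environment_snapshot: dict
    notes: str = ""

@dataclass
class DigitalTwin:
    """Digital representation of the vehicle used by the co-simulation."""
    vehicle_id: str
    state: dict = field(default_factory=dict)

def run_cosimulation(twin: DigitalTwin,
                     perception: dict,
                     supplemental: dict,
                     proximate_perception: dict) -> SimulationResult:
    # Merge the vehicle's own sensor view with external data (maps, traffic)
    # and a neighboring vehicle's sensor view, then "simulate" the environment.
    merged = {**perception, **supplemental, **proximate_perception}
    twin.state.update(merged)
    return SimulationResult(environment_snapshot=merged,
                            notes=f"simulated environment for {twin.vehicle_id}")

def provide_to_ra_service(result: SimulationResult) -> str:
    # The RA service (human or automated) would use the results to pick a response.
    return "redirect_vehicle"  # placeholder RA response

twin = DigitalTwin(vehicle_id="AV-1")
result = run_cosimulation(twin,
                          perception={"lidar": "...", "camera": "..."},
                          supplemental={"map": "...", "traffic": "..."},
                          proximate_perception={"neighbor_lidar": "..."})
ra_response = provide_to_ra_service(result)
```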
Regarding claims 4-6 and 8-10, Dolgov in view of Sharma discloses the method of claim 1, and Dolgov further discloses: (claim 4) wherein the perception data comprises at least one of camera data, light detection and ranging (LIDAR) data, radio detection and ranging (RADAR) data, inertial measurement unit (IMU) data, and global positioning system (GPS) data ([0041]; “Sensor system 104 can include various types of sensors, such as Global Positioning System (GPS) 122, inertial measurement unit (IMU) 124, radar 126, laser rangefinder/LIDAR 128, camera 130, steering sensor 123, and throttle/brake sensor 125, among other possible sensors.”); (claim 5) wherein the supplemental data comprises at least one of traffic information, traffic signal information, weather information, vulnerable road user (VRU) information, emergency information, mapping information, and legal information ([0056]; “data storage 114 may store data such as roadway maps, path information, among other information”); (claim 6) wherein the RA response comprises redirecting the vehicle ([0111]; “Upon receipt of the remote assistance data by the vehicle, or perhaps sometime thereafter, the vehicle may control itself to operate in a manner that is in accordance with the remote assistance data. For example, the vehicle may alter its movement, such as by stopping the vehicle, switching the vehicle to a human-controlled mode, changing a velocity of vehicle (e.g., a speed and/or direction), and/or another movement alteration.”); (claim 8) wherein the RA response comprises predicting a time at which the vehicle can be redirected ([0061] discloses “Computer system 112 may use the outputs from the various sensors to determine information about objects in a field of view of the vehicle, and may determine distance and direction information to the various objects” and [0111] further discloses “Upon receipt of the remote assistance data by the vehicle, or perhaps sometime thereafter, the vehicle may control itself to operate in a manner that is in accordance with the remote assistance data. For example, the vehicle may alter its movement, such as by stopping the vehicle, switching the vehicle to a human-controlled mode, changing a velocity of vehicle (e.g., a speed and/or direction), and/or another movement alteration.” In order for the RA response to make an effective change in control based on the RA data and in response to the object, the RA must predict the time at which the change will take place based on the location and movement of the object as sensed in [0061].); (claim 9) further comprising advising a passenger of the vehicle of the RA response ([0095]; “For example, the remote computing system may be a computing device within the vehicle that is separate from the vehicle, but with which a human operator can interact while a passenger or driver of the vehicle, such as a touchscreen interface for displaying remote assistance information.”); and (claim 10) wherein the RA service comprises a human operator ([0078]; “The remote computing system may enable a human operator to provide this support in real-time or less frequently than real-time”).

Regarding claim 13, Dolgov discloses a method for providing a remote assistance (RA) response to an autonomous vehicle (AV) subsequent to a determination that the AV is stuck ([0136]; “a vehicle may be stuck and await a command before moving again. The term stuck generally may mean that a vehicle is not sure how to proceed until a further command is received.”), the method comprising: receiving by an RA service a plurality of possible options for the RA response ([0108]; “the remote computing system may provide the human operator with multiple options for instructing the vehicle”), wherein the plurality of possible options for the RA response are determined by a digital twin co-simulation process that processes perception data generated by onboard sensors of the AV and AV supplemental data obtained from at least one source external to the AV to simulate operation of the AV, wherein the digital twin co-simulation process uses a digital twin comprising a digital representation of the AV ([0161]; “the remote operator may be located at a vehicle simulator. In the vehicle simulator, the remote operator may be able to see a simulated view and feel tactile events as if the operator was in the autonomous car. Block 646 may also include generating a simulation in response to the tactile event. The simulation may include a predetermined period of time before the tactile event. In practice, the simulation may enable the operator to see and feel what the autonomous vehicle occupants saw and felt leading up to the tactile event”); selecting by the RA service one of the plurality of possible options for the RA response ([0108]; “For instance, the remote computing system may display two GUI elements on a touchscreen representing options from which the human operator may choose: “Yes, this is a stop sign. Stop at the stop sign,” or “No, this is not a stop sign. Do not stop.” Other examples are possible as well.”); and directing the AV to operate in accordance with the selected one of the plurality of possible options ([0110]; “the remote computing system may transmit, to the vehicle, remote assistance data that includes a representation of the human operator's feedback regarding the environment data, whether in the form of an instruction to control the vehicle, a correct identification of the object at issue, and/or some other form of feedback. The remote computing system may transmit the remote assistance data wirelessly or by some other manner”), wherein each of the plurality of possible options has associated therewith a confidence score ([0095]; “In response to determining that an object has a detection confidence that is below the threshold, the vehicle may transmit, to the remote computing system, a request for remote assistance with the identification of the object”) ([0108]; “the remote computing system may provide the human operator with multiple options for instructing the vehicle”); but Dolgov fails to disclose the co-simulation process processing perception data generated by onboard sensors of another vehicle proximate to the AV.

However, Sharma discloses an autonomous vehicle receiving proximate perception data generated by onboard sensors of another vehicle proximate to the vehicle ([0034]; “assisting in sharing their sensor and/or object detection data”). It would have been obvious to one having ordinary skill in the art at the time of applicant’s effective filing date to combine the proximate vehicle perception data taught by Sharma with the method of Dolgov because it builds an accurate 3-D map of the environment (Sharma, [0034]). The motivation for the modification would have been to accurately map objects that may be blocked from the vehicle (Sharma, [0034]).
Furthermore, it would have been obvious that when Sharma is combined with Dolgov, the enhanced 3-D mapping would be processed and used in the simulation process to provide an enhanced 3-D map of the proximate area including blocked objects.

Regarding claim 14, Dolgov in view of Sharma discloses the method of claim 13, and Sharma further discloses: (claim 14) wherein the other vehicle is distinct from the first vehicle (see Fig. 1; vehicle 102 uses proximate vehicle perception data from vehicles 104 and 106).

Regarding claims 15-18, Dolgov in view of Sharma discloses the method of claim 13, Dolgov further disclosing: (claim 15) wherein the perception data comprises at least one of camera data, light detection and ranging (LIDAR) data, radio detection and ranging (RADAR) data, inertial measurement unit (IMU) data, and global positioning system (GPS) data ([0041]; “Sensor system 104 can include various types of sensors, such as Global Positioning System (GPS) 122, inertial measurement unit (IMU) 124, radar 126, laser rangefinder/LIDAR 128, camera 130, steering sensor 123, and throttle/brake sensor 125, among other possible sensors.”); (claim 16) wherein the supplemental data comprises at least one of traffic information, traffic signal information, weather information, vulnerable road user (VRU) information, emergency information, mapping information, and legal information ([0056]; “data storage 114 may store data such as roadway maps, path information, among other information”); (claim 17) wherein the selected one of the plurality of possible options comprises redirecting the AV, initiating a vehicle recovery event (VRE), and predicting a time at which the AV can be redirected (The claim only requires “the selected one of the plurality of possible options” and does not require all three options being taught. [0111]; “Upon receipt of the remote assistance data by the vehicle, or perhaps sometime thereafter, the vehicle may control itself to operate in a manner that is in accordance with the remote assistance data. For example, the vehicle may alter its movement, such as by stopping the vehicle, switching the vehicle to a human-controlled mode, changing a velocity of vehicle (e.g., a speed and/or direction), and/or another movement alteration.”); and (claim 18) further comprising advising a passenger of the AV of the selected one of the plurality of possible options ([0095]; “For example, the remote computing system may be a computing device within the vehicle that is separate from the vehicle, but with which a human operator can interact while a passenger or driver of the vehicle, such as a touchscreen interface for displaying remote assistance information.” One having ordinary skill in the art would find it obvious that “remote assistance information” would include the selected RA option.).

Regarding claim 19, Dolgov discloses a non-transitory computer-readable medium having stored thereon instructions that, when executed by a processor of a computer ([0006]), cause the computer to: process perception data generated by onboard sensors of a vehicle and supplemental data received from at least one source external to the vehicle using a digital twin co-simulation process for simulating an operation of the vehicle in real-time, wherein the digital twin co-simulation process uses a digital twin comprising a digital representation of the vehicle ([0161]; “the remote operator may be located at a vehicle simulator. In the vehicle simulator, the remote operator may be able to see a simulated view and feel tactile events as if the operator was in the autonomous car. Block 646 may also include generating a simulation in response to the tactile event. The simulation may include a predetermined period of time before the tactile event. In practice, the simulation may enable the operator to see and feel what the autonomous vehicle occupants saw and felt leading up to the tactile event”); provide results of the digital twin co-simulation process to a remote assistance (RA) service, wherein the RA service selects an RA response from the results ([0162]; “In practice, the operator may provide an identification of the cause of the tactile event and/or provide an instruction for the autonomous vehicle to perform.”); and directing the vehicle to implement the RA response ([0110]; “the remote computing system may transmit, to the vehicle, remote assistance data that includes a representation of the human operator's feedback regarding the environment data, whether in the form of an instruction to control the vehicle, a correct identification of the object at issue, and/or some other form of feedback. The remote computing system may transmit the remote assistance data wirelessly or by some other manner”); wherein the onboard sensors of the vehicle comprise at least one of a camera, a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, an inertial measurement unit (IMU) sensor, and a global positioning system (GPS) ([0041]; “Sensor system 104 can include various types of sensors, such as Global Positioning System (GPS) 122, inertial measurement unit (IMU) 124, radar 126, laser rangefinder/LIDAR 128, camera 130, steering sensor 123, and throttle/brake sensor 125, among other possible sensors.”); and wherein the supplemental data comprises at least one of perception data generated by onboard sensors of another vehicle proximate the vehicle, traffic information, traffic signal information, weather information, vulnerable road user (VRU) information, emergency information, mapping information, and legal information ([0056]; “data storage 114 may store data such as roadway maps, path information, among other information”); but Dolgov fails to disclose processing perception data generated by onboard sensors of another vehicle proximate to the vehicle for use in the co-simulation process.

However, Sharma discloses an autonomous vehicle receiving proximate perception data generated by onboard sensors of another vehicle proximate to the vehicle ([0034]; “assisting in sharing their sensor and/or object detection data”). It would have been obvious to one having ordinary skill in the art at the time of applicant’s effective filing date to combine the proximate vehicle perception data taught by Sharma with the method of Dolgov because it builds an accurate 3-D map of the environment (Sharma, [0034]). The motivation for the modification would have been to accurately map objects that may be blocked from the vehicle (Sharma, [0034]). Furthermore, it would have been obvious that when Sharma is combined with Dolgov, the enhanced 3-D mapping would be processed and used in the co-simulation process to provide an enhanced 3-D map of the proximate area including blocked objects.
Regarding claim 20, Dolgov in view of Sharma discloses the non-transitory computer-readable medium of claim 19, and Dolgov further discloses wherein the RA response comprises redirecting the vehicle, initiating a vehicle recovery event (VRE), or predicting a time at which the vehicle can be redirected ([0111]; “Upon receipt of the remote assistance data by the vehicle, or perhaps sometime thereafter, the vehicle may control itself to operate in a manner that is in accordance with the remote assistance data. For example, the vehicle may alter its movement, such as by stopping the vehicle, switching the vehicle to a human-controlled mode, changing a velocity of vehicle (e.g., a speed and/or direction), and/or another movement alteration.”).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Dolgov in view of Sharma as applied to claim 1 above, and further in view of Xia (US 20200082304). Regarding claim 7, Dolgov in view of Sharma discloses the method of claim 1, but not wherein the RA response comprises initiating a vehicle recovery event (VRE). However, Xia teaches an RA response comprising initiating a VRE ([0053]; “when the security staff in the vehicle cannot recover the system, then the system log of the unmanned vehicle is acquired and sent to a remote dispatcher, so that the system recovery operation of the unmanned vehicle may be performed with remote assistance by the remote dispatcher”). It would have been obvious to one having ordinary skill in the art at the time of applicant’s effective filing date to initiate a VRE as an RA response as taught by Xia with the method of Dolgov in view of Sharma because it would lead to recovery of the vehicle when it is otherwise stuck. The motivation for the modification would have been to recover expensive equipment.

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Dolgov in view of Sharma as applied to claim 1 above, and further in view of Kentley et al. (“Kentley”; US 20170123421). Regarding claims 11-12, Dolgov in view of Sharma discloses the method of claim 1, but not (claim 11) wherein results of the digital twin co-simulation process comprise a ranked list of options, or (claim 12) wherein the ranked list of options comprises, for each of the options, a confidence score associated with the option. However, Kentley discloses remote assistance utilizing a ranked list of options, each option ranked by a confidence score associated with the option ([0061]; “The path guidance data may be configured to assist a teleoperator in selecting a guided trajectory from one or more of the candidate trajectories. In some instances, the path guidance data specifies a value indicative of a confidence level or probability that indicates the degree of certainty that a particular candidate trajectory may reduce or negate the probability that the event may impact operation of an autonomous vehicle. … The selection may be made via an operator interface that lists a number of candidate trajectories, for example, in order from highest confidence levels to lowest confidence levels.”). It would have been obvious to one having ordinary skill in the art at the time of applicant’s effective filing date to combine the ranked list of options taught by Kentley with the method of Dolgov in view of Sharma because it assists the remote assistance operator in selecting the best option (Kentley, [0061]). The motivation for the modification would have been to resolve the condition such that the AV may return to a normative operation state (Kentley, [0061]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Wade Miles whose telephone number is (571)270-7777. The examiner can normally be reached Monday-Friday 10:00 am - 7:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WADE MILES/
Supervisory Patent Examiner, Art Unit 3656
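For orientation on the claim 11-13 limitations addressed above (RA response options that each carry a confidence score and are presented as a ranked list, per Kentley [0061]), here is an illustrative sketch of that data structure. All names and numbers are hypothetical and do not come from the claims or the cited references.

```python
# Illustrative sketch only: a confidence-scored, ranked option list of the
# kind claims 11-13 recite. Option text and scores are invented placeholders.
from dataclasses import dataclass

@dataclass
class RAOption:
    description: str
    confidence: float  # degree of certainty the option resolves the event

def rank_options(options: list[RAOption]) -> list[RAOption]:
    """Return options ordered from highest to lowest confidence score."""
    return sorted(options, key=lambda o: o.confidence, reverse=True)

candidates = [
    RAOption("redirect the vehicle around the obstruction", 0.82),
    RAOption("initiate a vehicle recovery event (VRE)", 0.35),
    RAOption("predict a time at which the vehicle can be redirected", 0.61),
]

ranked = rank_options(candidates)
selected = ranked[0]  # an RA operator could instead pick any listed option
print(f"Selected RA response: {selected.description} ({selected.confidence:.0%})")
```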

Prosecution Timeline

Mar 08, 2023
Application Filed
Feb 12, 2025
Non-Final Rejection — §103, §112
May 19, 2025
Response Filed
Aug 21, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12446892
DUAL SIDE SPRING V-CLIP FOR SURGICAL TREATMENT OF LEFT ATRIAL APPENDAGE
2y 5m to grant • Granted Oct 21, 2025
Patent 12193935
PULL-THROUGH CHORDAE TENDINEAE SYSTEM
2y 5m to grant • Granted Jan 14, 2025
Patent 12193680
OCCLUSION CLIP
2y 5m to grant • Granted Jan 14, 2025
Patent 12185959
ASPIRATION THROMBECTOMY SYSTEM AND METHODS FOR THROMBUS REMOVAL WITH ASPIRATION CATHETER
2y 5m to grant • Granted Jan 07, 2025
Patent 12161343
IMPLANT DETACHMENT
2y 5m to grant • Granted Dec 10, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+48.1%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 578 resolved cases by this examiner. Grant probability derived from career allow rate.
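The relationship between the 70% baseline, the +48.1% interview lift, and the 99% with-interview figure is not documented by the tool. One reading that reproduces the numbers is a multiplicative lift capped at 99%, sketched below purely as a guess about the methodology.

```python
# SPECULATIVE reconstruction of the projection figures above. The tool only
# states that grant probability is derived from the career allow rate; the
# multiplicative lift and the 99% cap are assumptions, not documented formulas.
base_grant_probability = 0.70      # ~ career allow rate (406 / 578 ~ 70.2%)
interview_lift = 0.481             # +48.1% lift observed for this examiner
cap = 0.99                         # displayed probabilities appear capped

with_interview = min(base_grant_probability * (1 + interview_lift), cap)
print(f"Grant probability with interview: {with_interview:.0%}")   # -> 99%
```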
