Prosecution Insights
Last updated: April 19, 2026
Application No. 18/415,917

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR AUTONOMOUSLY OR SEMI-AUTONOMOUSLY OPERATING A MOTOR VEHICLE

Final Rejection: §103, §112
Filed: Jan 18, 2024
Examiner: KIM, ANDREW SANG
Art Unit: 3668
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: DR. ING. H.C. F. PORSCHE AG
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 83% (above average; 146 granted / 175 resolved; +31.4% vs TC avg)
Interview Lift: +3.8% (a minimal ~+4% lift for resolved cases with an interview vs. without)
Avg Prosecution: 2y 6m (typical timeline; 22 applications currently pending)
Total Applications: 197 career total, across all art units
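The career figures above are internally consistent, which can be verified with simple arithmetic. The sketch below uses only the numbers quoted in this report; the variable names are invented for illustration:

```python
# Consistency check of the examiner career figures quoted above.
# All values come directly from this report; nothing external is used.
granted, resolved, pending, total = 146, 175, 22, 197

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # rounds to the reported 83%

# Resolved plus currently pending should account for every application.
assert resolved + pending == total
```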

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages shown are estimates. Based on career data from 175 resolved cases.
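Each statute's "vs TC avg" delta can be inverted to recover the Tech Center baseline the chart implies. Doing so (an illustrative check, not an official USPTO formula) shows a single flat ~40% baseline behind all four figures:

```python
# Invert each statute's "vs TC avg" delta to recover the implied
# Tech Center baseline. Rates and deltas are the percentages quoted
# in the section above.
rate = {"101": 12.3, "102": 14.7, "103": 44.9, "112": 22.2}
delta = {"101": -27.7, "102": -25.3, "103": 4.9, "112": -17.8}

implied_tc_avg = {s: round(rate[s] - delta[s], 1) for s in rate}
print(implied_tc_avg)

# All four statutes imply the same flat 40.0% baseline estimate.
assert set(implied_tc_avg.values()) == {40.0}
```

That every delta backs out to exactly 40.0% suggests the dashboard compares against one flat Tech Center estimate rather than per-statute averages.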

Office Action

§103, §112
DETAILED ACTION

This Office Action is in response to Applicant’s Amendment and Remarks filed on 10/31/2025. This Action is made FINAL. Claims 1-8 and 12-18 received on 10/31/2025 are considered in this Office Action. Claims 1-8 and 12-18 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The drawings were previously objected to as failing to comply with 37 CFR 1.84(p)(5) because the unlabeled rectangular boxes shown in the drawings did not provide descriptive text labels. In response to the Applicant’s amendment to the specification, the objection has been withdrawn. In response to the Applicant’s amendment, the objection to claim 1 is withdrawn.

Claims 2-3, 6-8 and 12 were previously rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In response to the Applicant’s amendment, the rejection is withdrawn.

Claims 16-20 were previously rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In response to the Applicant’s amendment to claims 16 and 20, the rejection has been withdrawn.

In response to the amendment of claim 14, the rejection under 35 U.S.C. 101 is withdrawn, as the amended claim is directed to an apparatus.
Applicant’s arguments with respect to claims 1, 14 and 18-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are:

Claim 13: driving assistance system (generic placeholder); autonomous or semi-autonomous operation of a motor vehicle (function)
Claim 13: analysis module (generic placeholder); comparing real operating data and virtual operating data (function)

Because this/these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

Regarding “driving assistance system”, it is interpreted to cover the corresponding structure of a computer and equivalents thereof as supported by FIG. 1 (elements 6 and 10) and a portion of paragraphs [0020] and [0024] of the specification reproduced below:

[0020] the at least one computer is caused to perform the method described above.
[0024] The on-board computer 10 is used together with an actuator 14 to represent a driving assistance system 6 in the motor vehicle.

Regarding “analysis module”, it is interpreted to cover the corresponding structure of a software component within the computer and equivalents thereof as supported by FIGs. 1-2 (wherein FIG.
2 shows the software architecture/components, and the input data “11” and output data “13” correspond to the block representing the computer) and a portion of paragraphs [0020], [0023], [0024], [0027] and [0035] of the specification reproduced below:

[0020] the at least one computer is caused to perform the method described above.
[0023] Fig. 2 schematically illustrates a software architecture having different versions that are operated and analyzed for testing purposes in a shadow mode.
[0024] The electrical/electronic architecture that is illustrated schematically in Fig. 1 includes a control unit 100. The control unit 10 is, for example, an on-board computer of a motor vehicle that can be operated autonomously or semi-autonomously. The on-board computer 10 is used together with an actuator 14 to represent a driving assistance system 6 in the motor vehicle.
[0027] The input data 11 can be processed in the control unit 10 in both real mode 8 and shadow mode 9.
[0035] The software versions 21, 22 represent second or further software versions. The input data 11 are processed with software versions 21, 22 in shadow mode 9 to generate virtual operating data 25. The virtual operating data 25 are compared to the real operating data 12 in an analysis module 23. The resulting analysis data 26 are fed to an output module 24. The output module 24 then is used to feed the output data 13 to the display.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
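The shadow-mode architecture described in the quoted specification passages (real mode 8, shadow mode 9, analysis module 23, output module 24) can be sketched as follows. This is a hypothetical illustration only: the function names, data fields, and deceleration figures are invented for clarity and do not come from the application or the cited references:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the claimed shadow-mode comparison: the same
# input data is fed to the production software (real mode) and to a
# candidate version (shadow mode); only the real-mode output would
# actuate the vehicle, while an analysis module compares the two.

@dataclass
class AnalysisData:
    real_braking_distance_m: float
    shadow_braking_distance_m: float

    @property
    def improvement_m(self) -> float:
        return self.real_braking_distance_m - self.shadow_braking_distance_m

def run_shadow_comparison(
    input_data: dict,
    real_version: Callable[[dict], float],
    shadow_version: Callable[[dict], float],
) -> AnalysisData:
    real_out = real_version(input_data)      # real mode (8): drives the actuator
    shadow_out = shadow_version(input_data)  # shadow mode (9): evaluated, never actuated
    return AnalysisData(real_out, shadow_out)  # analysis module (23)

# Toy stand-ins for two emergency-braking software versions
# (the deceleration values 7.0 and 8.5 m/s^2 are invented).
v1 = lambda d: d["speed_mps"] ** 2 / (2 * 7.0)
v2 = lambda d: d["speed_mps"] ** 2 / (2 * 8.5)

analysis = run_shadow_comparison({"speed_mps": 25.0}, v1, v2)
# Output module (24): report the advantage of switching versions.
print(f"Shadow version would brake {analysis.improvement_m:.1f} m shorter.")
```

The reported improvement is the kind of comparison the claims describe presenting to the driver as an incentive to switch to the newer software version.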
Examiner’s Note - 35 USC § 101

Regarding claims 1, 14 and 17-18, the additional limitation of “reporting the comparison to the driver and thereby making the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software” applies or uses the judicial exception (step of comparison) in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, thus integrating the judicial exception into a practical application as supported by at least para. [0016]-[0018]: “the comparison between the real emergency braking behavior of the driver and the emergency braking behavior of the enhanced emergency braking assistant could create a significant incentive for purchase with regard to a reduced accident risk”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S.
Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Geluk (US 20250026372 A1), in view of Limbacher (US 20210016789 A1).

Regarding claim 1, Geluk teaches a method for autonomously or semi-autonomously operating a motor vehicle using a driving assistance system that is operated in real terms with a first version of hardware and/or software (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0057]: “In this type of shadow testing, a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS […] As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running”, wherein first autonomous driving control (ADF 1.0) corresponds to first version of hardware and/or software, and is actually performed and NOT simulated), the method comprising: recording real operating data of the motor vehicle during operation of the driving assistance system with the first version of hardware and/or software (FIG. 1; para.
[0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, sensors SNR data comprises of real operating data); recording the real operating data of the motor vehicle while operating the motor vehicle by the driver with or without assistance from the driving assistance system (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving.”; para. [0057]: “a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS”; para. [0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, wherein ADF 1.0 indicates with or without assistance from the driving assistance system); operating the driving assistance system virtually in a shadow mode with at least a second version of hardware and/or software (FIG. 1; FIG. 2; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment”; para. 
[0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y”); determining virtual operating data of the motor vehicle while operating the driving assistance system with the second version of the hardware and/or software (FIG. 3; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment, shadow mode SWM module SMM is receiving sensor data SRD from the sensors SNR and control actions HAC of the human driver HDR but not taking control. This revised software makes decisions about how to drive based on the sensor SNR outputs”; para. [0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y generating vehicle operation control commands for at least partly controlling the digital twin's driving”; para. [0063]: “DriveTwin DTW, which may be considered a simulation of the complete vehicle's driving, the DriveTwin DTW includes modules for: […]”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV.”, wherein a simulation of the complete vehicle’s driving comprises of operation control commands and sensor data which corresponds to virtual operating data, and is visualized in FIG. 3 VS2); comparing the real operating data of the motor vehicle in a driving mode with the virtual operating data of the motor vehicle (FIG. 3; para. [0057]: “This revised software makes decisions about how to drive based on the sensor SNR outputs. 
Those decisions are compared to the decisions of a human driver or the older version of the autonomous driving function ADF 1.0 or safety system SFS”; para. [0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV”; para. [0071]: “These are fed into the simulation, wherein a controller provides that the simulated vehicle copies the operation of the actual vehicle, which simulation is illustrated in FIG. 5 as a separate vehicle, so called the (first) shadow vehicle VS1”, wherein FIG. 3 shows VS1 and VS2 which are created based on the actual driving and simulated driving, respectively, wherein their trajectories, operations are compared); reporting the comparison to the driver and thereby making the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software (FIG. 3; para. [0041]: “the method may further include comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors. The method may further include performing at least one of the following: (i) displaying the comparison results via a human machine interface or a vehicle display;”; para. 
[0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors, wherein the comparison results are output to a vehicle display DSP […] The driving module DRM respectively the autonomous driving control ADC may be changed or improved based on the comparison results and the driving module output”; para. [0074]: “A comparing evaluation of the performance of the virtual driving module DMD with the performance of the real driver DRV enables to improve the driving module DMD if the real driver makes decisions and driving actions leading to a more beneficial result. After the incident, the driving module may act as an advisor for the driver or may display evaluations of the incident.”; para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver, e.g., via a vehicle console and may report as an assessment of the driver's driving considering an estimation of perception precision, perception time, reaction time, rection type (steering, braking, both), and maybe other aspects. This report may be optionally done for different versions of the autonomous driving control”, wherein as shown in FIG. 3, the comparison of reaction time to the child for second version as shown by VS2 is improved compared to VS1, and subsequent versions will improve thus indicating the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software) and wherein the second version of the hardware and/or software comprises a freeway pilot that is tested in the shadow mode (para. 
[0022]-[0023]: “An autonomous driving control may also be understood as any driving supportive feature or module like an adaptive cruise control an autonomous driving system or a driving safety system”, wherein autonomous driving corresponds to a freeway pilot) and wherein the reporting of the comparison includes identifying a (para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver”, wherein route or sections driven by the driver is shown), but fails to specifically teach identifying a freeway section repeatedly traveled by the driver that can be driven fully automatically by the freeway pilot. However, Limbacher teaches driving supportive feature comprising a freeway pilot (para. [0014]: “Various piloted driving functions for road-bound motor vehicles have already been proposed, which can be provided by autopilot systems which are designed for completely automatically guiding the motor vehicle. One example is the so-called “traffic jam pilot,” which can completely take over vehicle guidance in operating situations with slow traffic. Other such driving functions include, for example, a highway pilot and a city pilot.”), and wherein the reporting of the comparison (FIG. 4; para. [0032]: ““Autopilot available: in 10 minutes” or “Autopilot available: for 17 km.” The same can be provided with regard to the recommendation information, so that, for example, the following can be displayed: “Autopilot activation recommended.”; para. [0053]: ““You could activate the autopilot for about 15 minutes to continue your movie””; para. [0055]: “Thus, for example, a symbol can be used which indicates the activatability state “autopilot system=available” by means of a white color, but it is colored green when the piloted driving function is activated”) includes identifying a freeway section repeatedly traveled by the driver (para. 
[0028]: “driver-related recommendation information can also be determined for road sections lying in front of the motor vehicle, wherein such predictive road data can be obtained from various sources. For example, it is conceivable to obtain predictive road data at least partially from a navigation system of the motor vehicle, be it by analyzing the road currently being traveled and/or on the basis of an already predetermined route that the motor vehicle is following, wherein additional historical data on preferred roads/routes of the driver can also be used to predict the future path of the motor vehicle if no predetermined route is present”, wherein preferred roads indicate roads repeatedly traveled by the driver) that can be driven fully automatically by the freeway pilot (para. [0069]: “activation of the piloted driving function, which, for example, can be a traffic jam pilot, a highway pilot, or the like, appears meaningful to the driver based on the preferences and/or the current state of said driver”). Geluk is considered to be analogous to the claimed invention because it is in the same field of validating new versions of driver assistance functions using the shadow mode. Limbacher is considered analogous to the claimed invention because it is reasonably pertinent to the problem of recommending driving assistance functions. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the ADAS functions of Geluk to incorporate the teachings of Limbacher and include a freeway pilot, which is an example of a driver assistance function. Doing so would result in testing and validation of the freeway pilot in sections that the driver drives, which comprise preferred routes.

Regarding claim 2, Geluk in view of Limbacher teaches the method of claim 1.
Geluk further teaches wherein input data is fed to the driving assistance system during operation with the first version and the second version of the hardware and/or software, the input data being the same during operation of both the first version and the second version of the hardware and/or software (FIG. 1; FIG. 2; para. [0015]: “simulating the vehicle operation by a digital twin of the vehicle, the digital twin including a second autonomous driving control generating vehicle operation control commands for at least partly controlling the digital twin's driving; aligning the vehicle simulation with the vehicle's driving by feedback of the sensed vehicle's surroundings and the sensed vehicle's operating parameters;”; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment, shadow mode SWM module SMM is receiving sensor data SRD from the sensors SNR and control actions HAC of the human driver HDR but not taking control.”; para. [0068]: “the operating parameters of the vehicle VHC include sensed vehicle operating actions VAC of a driver and vehicle's states VST. The illustrated system is provided with an edge device including a processor CPU simulating the vehicle's operation. […]. This operation simulation running on the at least one processor CPU receives at least partly the sensor's SNR output as an input including at least partly the sensed vehicle operating actions VAC and states VST. The system with the digital twin module DTM or DriveTwin DTW includes a driving module DRM or autonomous driving control (ADF X.Y) generating the vehicle operating actions VAC”; also see para. 
[0061]-[0062], wherein “SWM module SMM is receiving sensor data SRD from the sensors SNR” indicates that the digital twin or SMM testing the other version is also receiving the same sensor data thus indicating input data being the same during operation of both the first version and the second version).

Regarding claim 3, Geluk in view of Limbacher teaches the method of claim 2. Geluk further teaches wherein the input data comprise sensor data, driver data, and/or partner control unit data (para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment, shadow mode SWM module SMM is receiving sensor data SRD from the sensors SNR and control actions HAC of the human driver HDR but not taking control.”).

Regarding claim 13, Geluk in view of Limbacher teaches a system for autonomous or semi-autonomous operation of a motor vehicle using a driving assistance system (FIG. 1; FIG. 2; para. [0057]: “a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS.”; para. [0068]: “The illustrated system is provided with an edge device including a processor CPU simulating the vehicle's operation. This processor CPU may be located at the vehicle VHC or remotely in a cloud environment CLD.”; para. [0022]: “The real vehicle and the DriveTwin, respectively, use safety systems like an advanced driver assistant system (ADAS) and/or Adaptive Cruise Control (ACC), which operate as a computer implemented module on basis of a method”, wherein the CPU in the vehicle is used to operate ADAS and also simulate ADAS using the DriveTwin) according to the method of claim 1 (See Claim 1 above), the system comprising an analysis module in which the real operating data of the motor vehicle in driving mode is compared to the virtual operating data of the motor vehicle (FIG. 2 CPM; para.
[0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV”; para. [0019]: “This digital twin vehicle module may be operated by an edge device processor as part of the real vehicle enabling autonomous real time digital twin operation”; para. [0038]: “to expose a digital twin/xDT to real world traffic data in a closed loop evaluation”, wherein FIG. 2 and “closed loop” indicates that the CPM is a module of the DriveTwin (DTW), thus performs the function using a processor).

Regarding claim 16, Geluk in view of Limbacher teaches the method according to claim 1. Limbacher further teaches wherein the reporting the comparison to the driver includes reporting to the driver an amount of time that could be dedicated by the driver to secondary activities during automated driving on the identified freeway section repeatedly traveled by the driver that can be driven fully automatically by the freeway pilot (FIG. 4; para. [0053]: “an activation recommendation related to it can also be assigned a time period that can be linked to the time information […] for example, a message such as “You could activate the autopilot for about 15 minutes to continue your movie” can be output acoustically”; para. [0028]: “additional historical data on preferred roads/routes of the driver can also be used to predict the future path of the motor vehicle”; para. [0072]: “Different types of data are included as driver data for the determination of the recommendation information.
At first, these are the historical data describing the driver behavior in the past”, wherein the route is selected based on historical data which comprises the driver's preferences, thus indicating the identified freeway section repeatedly traveled by the driver that can be driven fully automatically by the freeway pilot).

Claims 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Geluk in view of Limbacher, and further in view of Sadakuni (US 20240025433 A1).

Regarding claim 4, Geluk in view of Limbacher teaches the method of claim 1. Geluk further teaches wherein reporting the comparison to the driver comprises displaying the comparison to the driver (para. [0041]: “the method may further include comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors. The method may further include performing at least one of the following: (i) displaying the comparison results via a human machine interface or a vehicle display”), but fails to specifically teach displaying the comparison along with an offer to purchase the second version of the hardware and/or software. However, in the same field of endeavor, Sadakuni teaches displaying the comparison to the driver along with an offer to purchase the second version of the hardware and/or software (FIG. 15; Abstract: “An additional function extraction unit extracts an additional assistance function that is a driver assistance function that can be newly added for each registered driver based on the driving history”; para. [0042]: “When installation of the additional assistance function does not involve service work on the vehicle 100, such as software update, an update program is transmitted from the OTA center server 150 to the vehicle 100.”; para. [0043]: “a purchase button image 43A is displayed in a proposal image 43 on the center display 40 (see FIG. 15 ). When the registered driver performs an input operation such as tapping on the purchase button image 43A,”; para.
[0079]: “display control unit 92 causes the center display 40 to display a driving diagnostic image 41 as exemplified in FIG. 15 . Furthermore, the display control unit 92 displays the proposal image 43, the exterior camera moving image 45, and a noted action explanation image 47 so as to be arranged in the driving diagnostic image 41”, wherein the recommended function comprises of a software update). Sadakuni is considered to be analogous to the claimed invention because it is in the same field of recommending driver assistance function to the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display showing the comparison of outputs of Geluk to incorporate the teachings of Sadakuni and provide a comprehensive display which comprises of diagnostic, explanation information and option to purchase the recommended function. Doing so would result in the basis for proposing the additional assistance function or an updated software is presented, such as the comparison of results shown by Geluk in view of Limbacher, thus the registered driver can feel a sense of satisfaction upon purchase (Sadakuni, para. [0010]). Regarding claim 5, Geluk in view of Limbacher and further in view of Sadakuni teaches the method of claim 4. The combination of Geluk and Sadakuni further teaches wherein displaying the comparison to the driver comprises displaying the comparison on a display device in the motor vehicle (Geluk para. [0041]: “According to an embodiment, the method may further include comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors. The method may further include performing at least one of the following: (i) displaying the comparison results via a human machine interface or a vehicle display”; Sadakuni FIG. 15; Sadakuni para. 
[0010]: “since the basis for proposing the additional assistance function is presented, the registered driver can feel a sense of satisfaction upon purchase”).

Regarding claim 6, Geluk in view of Limbacher and further in view of Sadakuni teaches the method of claim 5. Sadakuni further teaches wherein the offer to purchase the second version of the hardware and/or software is made via the display device in the motor vehicle (Sadakuni FIG. 15), and wherein the offer to purchase the second version of the hardware or software is accepted by the operator of the motor vehicle via the display device (FIG. 15; para. [0043]: “a purchase button image 43A is displayed in a proposal image 43 on the center display 40 (see FIG. 15 ).”; para. [0144]: “such as tapping by the registered driver on the purchase button image 43A displayed on the center display 40”).

Regarding claim 7, Geluk in view of Limbacher and further in view of Sadakuni teaches the method of claim 6. Sadakuni further teaches further comprising installing the second version of the software as an over-the-air update after the offer to purchase the second version of the hardware and/or software is accepted by the operator of the motor vehicle (para. [0042]: “When installation of the additional assistance function does not involve service work on the vehicle 100, such as software update, an update program is transmitted from the OTA center server 150 to the vehicle 100.”; para. [0039]: “As will be described below, the OTA center server 150 can transmit update programs for software and firmware installed in various ECUs of the vehicle 100 by wireless communication (over-the-air).”; para. [0144]: “Further, the display control unit 92 displays the purchase button image 43A and the free trial button image 43B when the process in step S52 is performed in the proposal image 43.
When an input operation for acquiring the additional assistance function is received (S60), such as tapping by the registered driver on the purchase button image 43A displayed on the center display 40, the display control unit 92 causes the center display 40 to display the workable date and time received from the service work setting unit 162 of the registered store terminal 160 (S64).”).

Regarding claim 8, Geluk in view of Limbacher and further in view of Sadakuni teaches the method of claim 6. Sadakuni further teaches further comprising scheduling a vehicle service via the display device after the offer to purchase the second version of the hardware and/or software is accepted by the operator of the motor vehicle (para. [0144]: “When an input operation for acquiring the additional assistance function is received (S60), such as tapping by the registered driver on the purchase button image 43A displayed on the center display 40, the display control unit 92 causes the center display 40 to display the workable date and time received from the service work setting unit 162 of the registered store terminal 160 (S64).”; para. [0019]: “The service work setting unit extracts a workable date and time when service work of the additional assistance function is able to be performed at the registered store when the device is in stock at the store. In the proposal image, when an input operation for acquiring the additional assistance function is received from the registered driver, the display control unit causes the display unit to display the workable date and time”).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Geluk in view of Limbacher and further in view of Turan (US 20220222172 A1).

Regarding claim 12, Geluk in view of Limbacher teaches the method of claim 1. Geluk further teaches wherein the second version of the hardware and/or software comprises an ADAS function that is tested in the shadow mode (para. [0022]-[0023]: “According to the disclosure, the complete real vehicle is escorted by a digital twin (i.e., the DriveTwin). The real vehicle and the DriveTwin, respectively, use safety systems like an advanced driver assistant system (ADAS) and/or Adaptive Cruise Control (ACC), which operate as a computer implemented module on basis of a method. The method includes: acquiring sensor output data; evaluating the acquired data; and generating an adaptive cruise control or autonomous driving or safety system output such as status information, (e.g., a warning), which is output via a human-machine interface (HMI) and/or a vehicle/driving control signal and/or traffic status information. An autonomous driving control may also be understood as any driving supportive feature or module like an adaptive cruise control an autonomous driving system or a driving safety system”), but fails to specifically teach an enhanced emergency braking assistant. However, Turan teaches an ADAS comprising an enhanced emergency braking assistant (para. [0003]: “A driver assistance system for motor vehicles, for example an adaptive cruise control, an automatic emergency brake system, a lane keeping system or the like, or also a combination of all these systems, includes sensory components, for example radar sensors, cameras, etc., actuating components for control interventions into the drive system, the brake system, the steering, etc., and an electronic data processing system, including associated software, which controls the sensory components, processes sensor data and generates control commands for the actuating components therefrom”; Abstract: “A method for validating software functions in a driver assistance system for motor vehicles”). Geluk is analogous to the claimed invention because it pertains to validating an updated version of an ADAS function using the shadow mode.
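The shadow-mode arrangement Geluk describes, in which a revised driving function receives the same sensor data as the active function but never takes control, can be sketched in simplified form as follows. This is an illustrative sketch only; the names (`run_shadow_mode`, `Decision`) and the reduced decision structure are assumptions for illustration, not taken from any cited reference.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Decision:
    # Hypothetical, simplified driving decision; a real ADAS emits far richer
    # trajectories and actuator commands.
    steering: float
    braking: float

def run_shadow_mode(
    sensor_stream: Iterable,
    active_controller: Callable,   # e.g. ADF 1.0: its output actuates the vehicle
    shadow_controller: Callable,   # e.g. ADF 2.0: evaluated in parallel, never actuates
) -> List[Tuple[Decision, Decision]]:
    """Feed identical sensor frames to both controllers; only the active
    controller's output would drive the vehicle, while both decisions are
    logged for later comparison."""
    log = []
    for frame in sensor_stream:
        actuated = active_controller(frame)   # applied to the vehicle
        shadowed = shadow_controller(frame)   # recorded only, not applied
        log.append((actuated, shadowed))
    return log
```

The logged decision pairs are what a comparing module (such as Geluk's CPM) would evaluate offline to assess the candidate version without it ever influencing the real drive.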
Turan is considered analogous to the claimed invention because it is reasonably pertinent to introducing various ADAS functions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ADAS system of Geluk in view of Limbacher and incorporate the automatic emergency brake system of Turan. Doing so would enhance the safety of the user by performing emergency braking, and further allow validation of upgrades to the ADAS function as taught by Geluk.

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Geluk in view of Limbacher and further in view of Milani (US20220171616A1).

Regarding claim 14, Geluk in view of Limbacher teaches the method of claim 1 (see the rejection of claim 1), but fails to specifically teach a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed by one or more processors. However, in the same field of endeavor, Milani teaches a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed by one or more processors (Claim 19: “A non-transitory machine-readable storage medium on which is stored a computer program for testing an updated version of an application for a vehicle, the computer program, when executed by a vehicle-external computing system, causing the vehicle-external computing system to perform the following steps”). Geluk and Milani are both considered to be analogous to the claimed invention because they are in the same field of validating a new version of driver assistance functions using the shadow mode. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Geluk in view of Limbacher and incorporate the teachings of Milani and store the method in a computer program. Doing so would result in efficient use of the computing system, and further allow distribution of the functions performed (Milani, para. [0017]).

Regarding claim 15, Geluk in view of Limbacher and further in view of Milani teaches the non-transitory computer-readable medium according to claim 14. Geluk and Milani further teach wherein the non-transitory computer-readable medium is arranged within an on-board computer of a motor vehicle (Geluk para. [0068]: “As explained above in the context of FIG. 1 , the operating parameters of the vehicle VHC include sensed vehicle operating actions VAC of a driver and vehicle's states VST. The illustrated system is provided with an edge device including a processor CPU simulating the vehicle's operation. This processor CPU may be located at the vehicle VHC or remotely in a cloud environment CLD. This operation simulation running on the at least one processor CPU receives at least partly the sensor's SNR output as an input including at least partly the sensed vehicle operating actions VAC and states VST.”; Milani FIG. 2 control unit 120; Milani Claim 16: “a test mode is activated on a computer unit of the at least one vehicle, which executes the present version of the application, in which the input data and the output data are detected and transferred to the vehicle-external computing system”, wherein the simulation and comparison may be performed by the CPU in the vehicle), and wherein the on-board computer of the motor vehicle is configured to communicate with a cloud (Geluk para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment”; Milani FIG. 2; Milani para. [0031]: “input data 130 of the vehicle, which are used as the function input, are sent to computing system 100 via wireless data link 105 (possible in the stationary state, possibly also via WLAN).
As a result, the application may be operated on the computing system with the aid of real inputs and outputs from vehicle 110”; para. [0005]: “this type may be downloaded or transferred to a vehicle from a vehicle-external computing unit (informally referred to as a “cloud”).”).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Geluk, in view of JURGEN (DE102021000595A1), and further in view of Wang (US 20180194344 A1). The merged PE2E English translation and foreign copy of JURGEN referenced by the Examiner is attached as an NPL in the previous Office Action.

Regarding claim 17, Geluk teaches a method for autonomously or semi-autonomously operating a motor vehicle using a driving assistance system that is operated in real terms with a first version of hardware and/or software (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0057]: “In this type of shadow testing, a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS […] As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running”, wherein the first autonomous driving control (ADF 1.0) corresponds to the first version of hardware and/or software, and is actually performed and NOT simulated), the method comprising: recording real operating data of the motor vehicle during operation of the driving assistance system with the first version of hardware and/or software (FIG. 1; para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, wherein the sensors SNR data comprises real operating data); recording the real operating data of the motor vehicle while operating the motor vehicle by the driver with or without assistance from the driving assistance system (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving.”; para. [0057]: “a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS”; para. [0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, wherein ADF 1.0 indicates with or without assistance from the driving assistance system); operating the driving assistance system virtually in a shadow mode with at least a second version of hardware and/or software (FIG. 1; FIG. 2; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment”; para. [0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y”); determining virtual operating data of the motor vehicle while operating the driving assistance system with the second version of the hardware and/or software (FIG. 3; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment, shadow mode SWM module SMM is receiving sensor data SRD from the sensors SNR and control actions HAC of the human driver HDR but not taking control. This revised software makes decisions about how to drive based on the sensor SNR outputs”; para. [0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y generating vehicle operation control commands for at least partly controlling the digital twin's driving”; para. [0063]: “DriveTwin DTW, which may be considered a simulation of the complete vehicle's driving, the DriveTwin DTW includes modules for: […]”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV.”, wherein a simulation of the complete vehicle’s driving comprises operation control commands and sensor data which correspond to virtual operating data, and is visualized in FIG. 3 VS2); comparing the real operating data of the motor vehicle in a driving mode with the virtual operating data of the motor vehicle (FIG. 3; para. [0057]: “This revised software makes decisions about how to drive based on the sensor SNR outputs. Those decisions are compared to the decisions of a human driver or the older version of the autonomous driving function ADF 1.0 or safety system SFS”; para. [0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV”; para. [0071]: “These are fed into the simulation, wherein a controller provides that the simulated vehicle copies the operation of the actual vehicle, which simulation is illustrated in FIG. 5 as a separate vehicle, so called the (first) shadow vehicle VS1”, wherein FIG. 3 shows VS1 and VS2 which are created based on the actual driving and simulated driving, respectively, wherein their trajectories and operations are compared); reporting the comparison to the driver and thereby making the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software (FIG. 3; para. [0041]: “the method may further include comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors. The method may further include performing at least one of the following: (i) displaying the comparison results via a human machine interface or a vehicle display;”; para. [0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors, wherein the comparison results are output to a vehicle display DSP […] The driving module DRM respectively the autonomous driving control ADC may be changed or improved based on the comparison results and the driving module output”; para. [0074]: “A comparing evaluation of the performance of the virtual driving module DMD with the performance of the real driver DRV enables to improve the driving module DMD if the real driver makes decisions and driving actions leading to a more beneficial result. After the incident, the driving module may act as an advisor for the driver or may display evaluations of the incident.”; para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver, e.g., via a vehicle console and may report as an assessment of the driver's driving considering an estimation of perception precision, perception time, reaction time, rection type (steering, braking, both), and maybe other aspects. This report may be optionally done for different versions of the autonomous driving control”, wherein as shown in FIG. 3, the comparison of the reaction time to the child for the second version as shown by VS2 is improved compared to VS1, and subsequent versions will improve, thus making the driver aware of the advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software) and wherein the second version of the hardware and/or software comprises an ADAS function that is tested in the shadow mode (para.
[0022]-[0023]: “An autonomous driving control may also be understood as any driving supportive feature or module like an adaptive cruise control an autonomous driving system or a driving safety system”, wherein autonomous driving corresponds to a freeway pilot) and wherein the reporting of the comparison (para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver”, wherein route or sections driven by the driver is shown), but fails to specifically teach a park assistant and identify a parking operation that has been repeatedly performed by the driver that can instead be performed fully automatically by the parking assistant. However, JURGEN teaches the second version of the hardware and/or software comprises a park assistant that is tested in the shadow mode (FIGs. 1-4; Abstract: “validating an assistance function of a driver assistance system of a vehicle”; pg. 2 last paragraph: “The control unit of the parking assistant is designed in such a way that a development status of the software is parallel to a serial status of the software in to operate a so-called shadow mode, whereby in the field, in particular by a vehicle user of the vehicle 1 , new assistance functions and updates of certain software can be tested”). Geluk and JURGEN are both considered to be analogous to the claimed invention because they are in the same field of validating a new version of driver assistance functions using the shadow mode. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the ADAS functions of Geluk to incorporate the teachings of JURGEN and include a parking assistant, which is an example of a driver assistance function. Doing so would result in validating the updated version of the parking assist, thus shortening the software development time (JURGEN pg. 2), and also provide parking assistance to the driver. Geluk in view of JURGEN fails to specifically teach identify a parking operation that has been repeatedly performed by the driver that can instead be performed fully automatically by the parking assistant. However, Wang further teaches identify a parking operation that has been repeatedly performed by the driver (para. [0011]: “Autonomous vehicles can use such information for performing autonomous driving and/or parking operations. In many instances, a driver will repeat an identical or nearly identical parking maneuver on a daily basis. For example, a driver may drive onto a driveway of their home, and subsequently navigate the vehicle into a garage. As another example, a driver may drive to a parking lot, enter the parking lot entrance and then navigate the vehicle into a designated or reserved parking space. As part of the navigation, the driver may follow an approximately identical route each time the parking maneuver is completed, while remaining aware of pedestrians and other vehicles and avoiding potential collisions”) that can instead be performed fully automatically by the parking assistant (FIG. 1B; para. [0014]: “Alternatively, if the vehicle 100 at stopping location 112 is within a threshold distance 114 from the start position 102, the vehicle may provide an indication and/or notification (e.g., as described above) that the autonomous parking maneuver is available. In some examples, the threshold distance may be on the order of less than a meter such that the amount of driving by the vehicle 100 outside of path 104 is extremely limited. As discussed further below, the autonomous parking navigation path 104 is generated based on recorded driver behaviors, and thus the length of the path 104 is expected to be safe for an autonomous parking maneuver”; para. [0011]). Wang is considered analogous to the claimed invention because it is reasonably pertinent to the problem of recommendation of driving assistance functions. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the parking assist of Geluk in view of JURGEN to incorporate the teachings of Wang and identify a frequently used parking lot, and recommend activation of autonomous parking. Doing so would enhance the user experience by allowing the autonomous parking assist to replace the manual parking performed by the user.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Geluk, in view of KANG (US20210162988A1).

Regarding claim 18, Geluk teaches a method for autonomously or semi-autonomously operating a motor vehicle using a driving assistance system that is operated in real terms with a first version of hardware and/or software (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0057]: “In this type of shadow testing, a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS […] As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running”, wherein the first autonomous driving control (ADF 1.0) corresponds to the first version of hardware and/or software, and is actually performed and NOT simulated), the method comprising: recording real operating data of the motor vehicle during operation of the driving assistance system with the first version of hardware and/or software (FIG. 1; para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving”; para. [0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, wherein the sensors SNR data comprises real operating data); recording the real operating data of the motor vehicle while operating the motor vehicle by the driver with or without assistance from the driving assistance system (para. [0046]: “The vehicle further includes a first autonomous driving control configured to support the vehicle's driving by receiving the sensed vehicle's surroundings and sensed vehicle's operating parameters and the first autonomous driving control (ADF 1.0) generating vehicle driving commands for at least partly controlling the vehicle's driving.”; para. [0057]: “a vehicle VHC is being driven by a human driver HDR supported by an autonomous driving function ADF 1.0 or safety system SFS”; para.
[0071]: “The vehicle VHC sensors SNR include sensors recording the driver's DRV driving actions like steering, braking, or accelerating as well as the vehicle's VHC states.”, wherein ADF 1.0 indicates with or without assistance from the driving assistance system); operating the driving assistance system virtually in a shadow mode with at least a second version of hardware and/or software (FIG. 1; FIG. 2; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment”; para. [0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y”); determining virtual operating data of the motor vehicle while operating the driving assistance system with the second version of the hardware and/or software (FIG. 3; para. [0057]: “As the shadow mode SWM testing a new revision of the autonomous driving function ADF 2.0 software is parallelly running on the vehicle VHC or remotely in a cloud environment, shadow mode SWM module SMM is receiving sensor data SRD from the sensors SNR and control actions HAC of the human driver HDR but not taking control. This revised software makes decisions about how to drive based on the sensor SNR outputs”; para. [0062]: “a digital twin DTW respectively digital twin DTM module is simulating the vehicle operation applying a second autonomous driving control ADF X.Y generating vehicle operation control commands for at least partly controlling the digital twin's driving”; para. [0063]: “DriveTwin DTW, which may be considered a simulation of the complete vehicle's driving, the DriveTwin DTW includes modules for: […]”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. 
The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV.”, wherein a simulation of the complete vehicle’s driving comprises operation control commands and sensor data which correspond to virtual operating data, and is visualized in FIG. 3 VS2); comparing the real operating data of the motor vehicle in a driving mode with the virtual operating data of the motor vehicle (FIG. 3; para. [0057]: “This revised software makes decisions about how to drive based on the sensor SNR outputs. Those decisions are compared to the decisions of a human driver or the older version of the autonomous driving function ADF 1.0 or safety system SFS”; para. [0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors”; para. [0072]: “The driving module DMD may generate driver's DRV driving actions like steering, braking, or accelerating. The detachment enables an evaluation of at least one different event sequence of the driving during and after the incident and a comparison of the performance of the virtual driving module DMD with the performance of the real driver DRV”; para. [0071]: “These are fed into the simulation, wherein a controller provides that the simulated vehicle copies the operation of the actual vehicle, which simulation is illustrated in FIG. 5 as a separate vehicle, so called the (first) shadow vehicle VS1”, wherein FIG. 3 shows VS1 and VS2 which are created based on the actual driving and simulated driving, respectively, wherein their trajectories and operations are compared); reporting the comparison to the driver and thereby making the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software (FIG.
3; para. [0041]: “the method may further include comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors. The method may further include performing at least one of the following: (i) displaying the comparison results via a human machine interface or a vehicle display;”; para. [0066]: “The method includes a comparing module CPM comparing attributes of the detached simulation with the real vehicles operation recognized from the sensors, wherein the comparison results are output to a vehicle display DSP […] The driving module DRM respectively the autonomous driving control ADC may be changed or improved based on the comparison results and the driving module output”; para. [0074]: “A comparing evaluation of the performance of the virtual driving module DMD with the performance of the real driver DRV enables to improve the driving module DMD if the real driver makes decisions and driving actions leading to a more beneficial result. After the incident, the driving module may act as an advisor for the driver or may display evaluations of the incident.”; para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver, e.g., via a vehicle console and may report as an assessment of the driver's driving considering an estimation of perception precision, perception time, reaction time, rection type (steering, braking, both), and maybe other aspects. This report may be optionally done for different versions of the autonomous driving control”, wherein as shown in FIG. 
3, the comparison of reaction time to the child for second version as shown by VS2 is improved compared to VS1, and subsequent versions will improve thus indicating the driver aware of advantages of switching from the first version of the hardware and/or software to the second version of the hardware and/or software) and wherein the second version of the hardware and/or software comprises ADAS function that is tested in the shadow mode (para. [0022]-[0023]: “An autonomous driving control may also be understood as any driving supportive feature or module like an adaptive cruise control an autonomous driving system or a driving safety system”, wherein autonomous driving corresponds to a freeway pilot), but fails to specifically teach a driverless parking garage service and wherein operating the driving assistance system virtually in the shadow mode comprises identifying a parking garage used by the driver that includes sensor infrastructure for automated valet parking and checking whether the driverless parking garage service is compatible with the sensor infrastructure of the identified parking garage (para. [0032]: “outputting a comparison of the autonomous driving control driving control actions or outputs and the human drivers driving actions. This output may be displayed to the driver”, wherein route or sections driven by the driver is shown), but fails to specifically teach a park assistant and identify a parking operation that has been repeatedly performed by the driver that can be instead be performed fully automatically by the parking assistant. However, KANG teaches an ADAS comprising of a driverless parking garage service (Abstract: “vehicle having an automated valet parking feature”), and operating the driving assistance system (para. [0038]: “The term “automated valet parking apparatus” may be referred to as an autonomous valet parking device or vehicle”; para. 
[0114]: “The infrastructure 300 determines whether the automated valet parking of the automated valet parking apparatus 400 is allowed in a parking facility on the basis of the checking results. According to forms, the infrastructure 300 determines whether the automated valet parking of the automated valet parking apparatus 400 is allowed on the basis of the list of sensors that normally operate and the list of desired functions or operations that can be performed by the automated valet parking apparatus 400”). KANG is considered analogous to the claimed invention because it is reasonably pertinent to offering driving assistance function. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the ADAS functions of Geluk to incorporate the teachings of KANG and include valet parking feature, which is an example corresponding to feature in an autonomous. Doing so would enable an automated valet parking service by which a vehicle parked at a specific parking spot in a parking lot autonomously moves to a predetermined pickup area so that a driver can conveniently leave the parking (KANG, para. [0006]), and further teach operating the driving assistance system virtually in the shadow mode comprises identifying a parking garage used by the driver that includes sensor infrastructure for automated valet parking and checking whether the driverless parking garage service is compatible with the sensor infrastructure of the identified parking garage, as the same operations will be performed virtually. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. YAMAURA (WO 2021075499 A1) teaches a parking support device 40 collates the received version information with the version information of the function of the parking lot system 1 and confirms whether or not the automatic valley parking function is consistent. 
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW S KIM, whose telephone number is (571) 272-7356. The examiner can normally be reached Mon - Fri 8AM - 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James J Lee, can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.S.K./ Examiner, Art Unit 3668
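The reply windows in the final-action boilerplate above reduce to calendar-month arithmetic from the mailing date. The sketch below is a minimal docketing aid, assuming the same-day-of-month convention for month-based periods; it deliberately ignores the weekend/holiday rollover that can extend a due date, so treat it as illustrative, not as official deadline calculation:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, keeping the day number
    (clamped to the last day of the target month)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

mailing = date(2026, 2, 14)             # Final Rejection mailed (per the timeline below)
ssp_end = add_months(mailing, 3)        # shortened statutory period: THREE MONTHS
statutory_end = add_months(mailing, 6)  # absolute cutoff: SIX MONTHS, with extension fees
```

Here `ssp_end` falls on May 14, 2026 and `statutory_end` on August 14, 2026; replies filed between the two require an extension of time under 37 CFR 1.136(a).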

Prosecution Timeline

Jan 18, 2024
Application Filed
Jun 27, 2025
Non-Final Rejection — §103, §112
Oct 31, 2025
Response Filed
Feb 14, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594949
NOTIFICATION DEVICE, NOTIFICATION METHOD, AND NONTRANSITORY RECORDING MEDIUM PROVIDED WITH COMPUTER PROGRAM FOR NOTIFICATION DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12594940
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12589725
VEHICLE AND CONTROL METHOD FOR DETERMINING AN EMERGENCY SITUATION
2y 5m to grant Granted Mar 31, 2026
Patent 12583487
APPARATUS FOR CONTROLLING AUTONOMOUS DRIVING AND METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12565331
FIRE DETECTION SYSTEM AND METHOD FOR MONITORING AN AIRCRAFT COMPARTMENT AND SUPPORTING A COCKPIT CREW WITH TAKING REMEDIAL ACTION IN CASE OF A FIRE ALARM
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
87%
With Interview (+3.8%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
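The headline figures in this panel are simple ratio arithmetic over the examiner's resolved cases; the sketch below reproduces them (variable names are mine, the numbers come from the panel above):

```python
# Career allow rate: granted cases over all resolved cases
granted, resolved = 146, 175
career_allow_rate = granted / resolved        # ~0.834, displayed as 83%

# Interview-adjusted probability: allow rate plus the observed
# +3.8 percentage-point lift among cases with an interview
interview_lift = 0.038
with_interview = career_allow_rate + interview_lift  # ~0.872, displayed as 87%
```

Note this treats the interview lift as a flat additive adjustment, which matches how the panel presents it; it is a correlation across past cases, not a causal guarantee for any one application.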
