DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to the application filed on 12/05/2022.
Claims 1-15 are presented for examination.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/24/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 (Does this claim fall within at least one statutory category?):
Claims 1-13 are directed to a method.
Claim 14 is directed to a system.
Claim 15 is directed to a product.
Therefore, claims 1-15 fall into at least one of the four statutory categories.
Step 2A, Prong 1 ((a) identify the specific limitation(s) in the claim that recites an abstract idea; and (b) determine whether the identified limitation(s) falls within at least one of the groups of abstract ideas enumerated in MPEP 2106.04(a)(2)):
Claim 1:
A method for minimizing a computational effort for executing a plurality of virtual tests of a device for driving a motor vehicle at least partly autonomously, comprising:
providing, by a testing unit, both a parameter set of a first virtual test and a parameter set of a second virtual test on driving situation parameters and configuration data of an algorithm that implements the first virtual test and the second virtual test (“mental process, i.e., concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, opinion) and/or mathematical concepts”);
determining, by the testing unit, an identical component and/or a difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or a point in time at which at least one parameter of the second virtual test varies compared with the first virtual test, using the parameter set of the first virtual test and the parameter set of the second virtual test and the configuration data of the algorithm that implements the first virtual test and the second virtual test (“mental process, i.e., concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, opinion) and/or mathematical concepts”); and
executing, by the testing unit, the first virtual test and the second virtual test while taking into account the identical component and/or the difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or the point in time at which the at least one parameter varies, so as to minimize the computational effort for test execution (“mental process, i.e., concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, opinion) and/or mathematical concepts”).
Step 2A, Prong 2 (1. Identifying whether there are any additional elements recited in the claim beyond the judicial exception; and 2. Evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application): The claim is directed to the judicial exception.
Claim 1 recites the additional element of a “testing unit”. This component is recited at a high level of generality (i.e., as a generic computer element performing generic computer functions) such that it amounts to no more than mere application of the judicial exception using generic computer component(s). Accordingly, the additional element(s) of the claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Further, the claim recites the additional element of “executing”, which is performed by generic computer components.
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception? No): Further, as discussed above with respect to the integration of the abstract idea into a practical application, the additional element of a “testing unit” amounts to no more than mere instructions to apply the judicial exception using generic computer component(s). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the claim recites the additional element of “executing”, which is performed by generic computer components.
As per claims 2-11, the claims fall into “mental process, i.e., concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, opinion) and/or mathematical concepts”.
As per claim 12, the claim falls into “a generic computer element for performing generic computer functions”.
As per claim 13, the claim falls into “mental process, i.e., concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, opinion) and/or mathematical concepts”.
As per claim 14, claim 14 recites limitations analogous in scope to those of claim 1 and, as such, is similarly rejected. Further, claim 14 recites the additional elements of “one or more memories” and “one or more processors”. These components are recited at a high level of generality (i.e., as generic computer elements performing generic computer functions) such that they amount to no more than mere application of the judicial exception using generic computer component(s). Accordingly, the additional element(s) of the claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Further, as discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “one or more memories” and “one or more processors” amount to no more than mere instructions to apply the judicial exception using generic computer component(s). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
As per claim 15, claim 15 recites limitations analogous in scope to those of claim 1 and, as such, is similarly rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Wachenfeld et al (Walther Wachenfeld and Hermann Winner, “Virtual Assessment of Automation in Field Operation: A New Runtime Validation Method”, pgs. 1-10, 2015) in view of U.S. Patent No. 9,715,711 B1 issued to Konrardy et al.
1. Wachenfeld et al discloses a method for minimizing a computational effort for executing a plurality of virtual tests of a device for driving a motor vehicle at least partly autonomously (See: Summary: Highly automated vehicles being a new technology in public traffic have to fulfill the demanding safety requirements resulting from human driving. To assess automated systems in means of safety a new runtime validation method the “Virtual Assessment of Automation in Field Operation” is introduced; “3.1. VAAFO Concept Architecture”, Consequently, this makes the automation assessable for a time of some seconds. The situation assessment is based on the two world models that reflect the driver’s behavior and/or the behavior of automation. Based on the retrospective situation assessment the virtual behavior of the OUT is assessed and relevant cases are identified and logged), comprising:
providing, by a testing unit, both a parameter set of a first virtual test and a parameter set of a second virtual test on driving situation parameters and (See: “3.1 VAAFO Concept Architecture”, The situation assessment is based on the two world models that reflect the driver’s behavior and/or the behavior of automation. Based on the retrospective situation assessment the virtual behavior of the OUT is assessed and relevant cases are identified and logged. This new way of assessing the automation will be described in the following part using an example; “3.2 Assessment of Automation” When automatically assessing vehicle automation, the assessment needs additional information the vehicle automation doesn’t have during behavior planning. For example, a test driver who nowadays assesses a vehicle automation uses his own perception and cognition to compare his behavior planning with the execution of the automation to intervene if necessary. The test driver has additional information available due to his more advanced perception and cognition of the world (at least in most cases at present); “3.2 Assessment of Automation: One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. 
For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality);
determining, by the testing unit, an identical component and/or a difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or a point in time at which at least one parameter of the second virtual test varies compared with the first virtual test, using the parameter set of the first virtual test and the parameter set of the second virtual test and the configuration data of the algorithm that implements the first virtual test and the second virtual test (See: “3.2 Assessment of Automation” One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it); and
executing, by the testing unit, the first virtual test and the second virtual test while taking into account the identical component and/or the difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or the point in time at which the at least one parameter varies, so as to minimize the computational effort for test execution (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it).
Wachenfeld et al does not disclose but Konrardy et al discloses configuration data of an algorithm that implements the first virtual test and the second virtual test (See: Col. 25 lines 38-49, Using the test result data received at block 702 and the reference data received at block 704, the server 140 determines the expected actual loss or operating data for the autonomous operation feature at block 706. The server 140 may determine the expected actual loss or operating data using known techniques, such as regression analysis or machine learning tools (e.g., neural network algorithms or support vector machines). The expected actual loss or operating data may be determined using any useful metrics, such as expected loss value, expected probabilities of a plurality of collisions or other incidents, expected collisions per unit time or distance traveled by the vehicle, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous operation features taught by Konrardy et al with the virtual assessment of automation in field operation of Wachenfeld et al. The motivation for doing so would have been to determine risk, price, and offer vehicle insurance policies (Konrardy et al, Col. 1 line 66 through Col. 2 line 1).
2. Wachenfeld et al discloses the method according to claim 1, wherein the first virtual test and the difference component of the second virtual test in relation to the first virtual test on driving situation parameters are executed to minimize the computational effort required for the test execution (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it).
3. Wachenfeld et al discloses the method according to claim 2, wherein the difference component of the second virtual test in relation to the first virtual test on driving situation parameters is executed from the point in time at which the at least one parameter varies (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
4. Wachenfeld et al discloses the method according to claim 1, wherein the difference component of the second virtual test in relation to the first virtual test on driving situation parameters chronologically follows the identical component of the second virtual test in relation to the first virtual test on driving situation parameters (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
5. Wachenfeld et al discloses the method according to claim 1, wherein the first virtual test and the difference component of the second virtual test in relation to the first virtual test on driving situation parameters are executed sequentially (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
6. Wachenfeld et al discloses the method according to claim 5, wherein the parameter set of the first virtual test on driving situation parameters and the configuration data of the algorithm that executes the first virtual test are duplicated, in order to generate the parameter set of the second virtual test on driving situation parameters at the point in time at which the at least one parameter varies, and are varied on account of the at least one parameter (See: “3.2.2 Retrospective Approach” Figure 3 shows this for the example stated above. The collision-free world (row #3) is post-processed and the obstacle detected in time step 2 s is already placed in the world model (row #4) from the beginning, as the static object doesn’t change during this time span. When repeating the trajectory of the automated vehicle, the vehicle collides with the obstacle. A second indicator that the automated vehicle behavior is not adequate is generated).
7. Wachenfeld et al discloses the method according to claim 6, wherein the first virtual test is executed on a first computing node, wherein, once the parameter set of the first virtual test on driving situation parameters and the configuration data of the algorithm that executes the first virtual test have been duplicated in order to generate the parameter set of the second virtual test on driving situation parameters, the second virtual test is executed on the first computing node or on a second computing node (See: “3.2.2 Retrospective Approach”, Figure 3 shows this for the example stated above. The collision-free world (row #3) is post-processed and the obstacle detected in time step 2 s is already placed in the world model (row #4) from the beginning, as the static object doesn’t change during this time span. When repeating the trajectory of the automated vehicle, the vehicle collides with the obstacle. A second indicator that the automated vehicle behavior is not adequate is generated; “3.2.3 Credibility of Assessment”, A similar challenge occurs when the trajectory is the same but the retrospective assessment identifies a collision. This could result from two causes: either the perception suffers false positive detections or an accident isn’t reported (or isn’t severe)).
8. Konrardy et al discloses the method according to claim 1, wherein the configuration data of the algorithm that implements the first virtual test and the second virtual test comprise value ranges of driving situation parameters to be tested, a step size of the driving situation parameters to be tested, which in particular is either predetermined or parameterizable by the algorithm, and/or a number of simulations per iteration (See: Col. 22 lines 49-67, At block 502, the effectiveness of an autonomous operation feature is tested in a controlled testing environment by presenting test conditions and recording the responses of the feature. The testing environment may include a physical environment in which the autonomous operation feature is tested in one or more vehicles 108. Additionally, or alternatively, the testing environment may include a virtual environment implemented on the server 140 or another computer system in which the responses of the autonomous operation feature are simulated. Physical or virtual testing may be performed for a plurality of vehicles 108 and sensors 120 or sensor configurations, as well as for multiple settings of the autonomous operation feature; Col. 23 line 60 through Col. 24 line 3, At block 604, the autonomous operation feature is enabled within a test system with a set of parameters determined in block 602. The test system may be a vehicle 108 or a computer simulation, as discussed above. The autonomous operation feature or the test system may be configured to provide the desired parameter inputs to the autonomous operation feature. For example, the controller 204 may disable a number of sensors 120 or may provide only a subset of available sensor data to the autonomous operation feature for the purpose of testing the feature's response to certain parameters; Col. 24 lines 4-20, The test inputs may include simulated data presented by the on-board computer 114 or sensor data from the sensors 120 within the vehicle 108. 
In some embodiments, the vehicle 108 may be controlled within a physical test environment by the on-board computer 114 to present desired test inputs through the sensors 120. For example, the on-board computer 114 may control the vehicle 108 to maneuver near obstructions or obstacles, accelerate, or change directions to trigger responses from the autonomous operation feature. The test inputs may also include variations in the environmental conditions of the vehicle 108, such as by simulating weather conditions that may affect the performance of the autonomous operation feature (e.g., snow or ice cover on a roadway, rain, or gusting crosswinds, etc.)).
9. Wachenfeld et al discloses the method according to claim 1, wherein the parameter set of the second virtual test on driving situation parameters is provided before or during the execution of the first virtual test (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
10. Wachenfeld et al discloses the method according to claim 1, wherein the identical component and/or the difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or the point in time at which the at least one parameter of the second virtual test varies compared with the first virtual test is/are determined before or during the implementation of the first virtual test (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
11. Wachenfeld et al discloses the method according to claim 1, wherein a parameter set of a third virtual test on driving situation parameters and configuration data of an algorithm that implements the third virtual test are provided, and wherein an identical component and/or a difference component of the third virtual test in relation to the second virtual test on driving situation parameters and/or a point in time at which at least one parameter of the third virtual test varies compared with the second virtual test is/are determined (See: 3.2.1 The Parallel World”, One virtual world representation is built up based on the real sensor perception. In this virtual world, two trajectories or vehicle behaviors can be compared. One is the real trajectory the other the trajectory of the automation. Figure 2 illustrates this with an example: In reality (row #1) a human-driven vehicle drives in the right lane. It approaches an obstacle and goes around it by moving one lane to the left. The perceived worlds (rows #2 and #3) look similar to the first, but with the difference that the obstacle is not perceived before time step 2 s. A reason for this kind of false negative detection that is corrected over time could be the different characteristics of the mounted sensors. For example, the long range sensors like radar don’t detect the bush but the sideways mounted short range sensors like radar, ultrasonic, or 360° camera do. In this perceived world, one trajectory that is measured as well is the human driven one (row #2). The vehicle decelerates a bit at t1 = 1 s and goes around the obstacle at t2 = 2 s like in reality. Based on the perceived world, a parallel world is started where the automation drives the vehicle (row #3). As the automation is not aware of the obstacle, it doesn’t decelerate the vehicle and goes straight. In this example, the obstacle appears after the vehicle has passed it; Figs. 3 and 4 and corresponding texts).
12. Konrardy et al. discloses the method according to claim 1, wherein the first virtual test and each further virtual test are executed in a cloud environment (See: Col. 23 lines 11-19, The test results may be recorded by the server 140. The test results may include responses of the autonomous operation feature to the test conditions, along with configuration and setting data, which may be received by the on-board computer 114 and communicated to the server 140. During testing, the on-board computer 114 may be a special-purpose computer or a general-purpose computer configured for generating or receiving information relating to the responses of the autonomous operation feature to test scenarios).
13. Wachenfeld et al. discloses the method according to claim 1, wherein the identical component and/or the difference component of the second virtual test in relation to the first virtual test on driving situation parameters and/or the point in time at which the at least one parameter of the second virtual test varies compared with the first virtual test is/are determined by analyzing the algorithm that simulates the parameter set or by establishing beforehand the point in time at which the at least one parameter of the second virtual test varies compared with the first virtual test (See: 3.2.1 "The Parallel World", quoted in full in the rejection of claim 10 above; Figs. 3 and 4 and corresponding texts).
As per claims 14-15: claims 14-15 recite limitations analogous in scope to those of claim 1, and as such are similarly rejected.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIBROM K GEBRESILASSIE whose telephone number is (571)272-8571. The examiner can normally be reached M-F 9:00 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen, can be reached at (571) 272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
KIBROM K. GEBRESILASSIE
Primary Examiner
Art Unit 2189
/KIBROM K GEBRESILASSIE/Primary Examiner, Art Unit 2189 01/30/2026