DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication dated 3/26/2025.
Claims 1-20 are presented for examination.
Priority
The ADS dated 11/15/2022 has been reviewed. No domestic benefit or foreign priority is claimed.
Drawings
The drawings dated 11/15/2022 have been reviewed. They are accepted as they illustrate every element of the claims.
Specification
The abstract dated 11/15/2022 has been reviewed. It contains 135 words in 10 lines and no legal phraseology. It is accepted.
Claim Objections
Claim 17 is objected to because of the following informalities: “plurality of feature values is based the simulation” should read “plurality of feature values is based on the simulation”. Appropriate correction is required.
Claim 18 is objected to because of the following informalities: “wherein the executing the sensor test is based on” should read “wherein executing the sensor test is based on”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1
STEP 1: YES. The claim recites “A computer-implemented system”.
STEP 2A PRONG ONE: YES.
The claim recites:
“receiving data;
extracting, from the data, features;
obtaining, from the data, response data;
calculating a statistical measure of the response data;
and generating a data response prediction model to map the extracted features of the data to the statistical measure of the response data” which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2A PRONG TWO: NO. The claim does not recite additional elements that integrate the exception into a practical application of the exception because the claim does not have additional elements or a combination of additional elements that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception.
The claim receives data, extracts features from the data, obtains response data from the data, calculates a statistical measure of the response data, and generates a data response prediction model to map the extracted features of the data to the statistical measure of the response data. The claim is nothing more than a mathematical exercise resulting in a model that maps extracted features to a statistical measure.
While the claim recites: “A computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising:” and then characterizes the operations performed by the computer, this is not indicative of a practical application because these are mere instructions to implement the abstract idea on a computer and the mere use of a computer is not a practical application.
While the claim recites: “road”, “collected from a reference driving scene”, “of the reference driving scene”, and “associated with a vehicle in the reference driving scene and responsive to the extracted features” when describing the mathematical operations performed, these merely relate the judicial exception to the field of endeavor called vehicles. Merely linking the claim to the field of vehicles is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
STEP 2B: NO. The claim does not recite additional elements which are significantly more than the abstract idea. As outlined above, the claim merely recites a computer as a tool for implementing the abstract idea; merely using a computer does not improve the functioning of the computer, and the recited computer is a generic one and therefore not a particular machine. Moreover, because the claim merely performs mathematical calculations, it does not transform or reduce a particular article to a different state or thing. While the claim characterizes the mathematical operations in the context of vehicles, this merely links the mathematical operations to the technology of vehicles, which is not significantly more than the abstract idea.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 9
STEP 1: YES. The claim recites, “A computer-implemented system”.
STEP 2A PRONG ONE: YES. The claim recites:
“determining a plurality of feature values;
and applying a data response prediction model to the plurality of feature values to obtain a predicted statistical measure” which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2A PRONG TWO: NO. The claim does not recite additional elements that integrate the exception into a practical application of the exception because the claim does not have additional elements or a combination of additional elements that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception.
While the claim recites: “A computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising:” this is not indicative of a practical application because these are mere instructions to implement the abstract idea on a computer and the mere use of a computer is not a practical application.
While the claim recites “associated with a driving scene”, “road”, and “of a sensor response for the driving scene with respect to a vehicle in the driving scene” when describing the mathematical operations performed, these merely relate the judicial exception to the field of endeavor called vehicles. Merely linking the claim to the field of vehicles is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
STEP 2B: NO. The claim does not recite additional elements which are significantly more than the abstract idea. As outlined above, the claim merely recites a computer as a tool for implementing the abstract idea; merely using a computer does not improve the functioning of the computer, and the recited computer is a generic one and therefore not a particular machine. Moreover, because the claim merely performs mathematical calculations, it does not transform or reduce a particular article to a different state or thing. While the claim characterizes the mathematical operations in the context of vehicles, this merely links the mathematical operations to the technology of vehicles, which is not significantly more than the abstract idea.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 16
STEP 1: YES. The claim recites “A method comprising:”.
STEP 2A PRONG ONE: YES. The claim recites
“determining a plurality of feature values
and querying a data response prediction model based on the plurality of feature values to obtain a predicted statistical measure” which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2A PRONG TWO: NO. The claim does not recite additional elements that integrate the exception into a practical application of the exception because the claim does not have additional elements or a combination of additional elements that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception.
While the claim recites “associated with a driving scene”, “road”, and “of a sensor response for the driving scene with respect to a vehicle in the driving scene, wherein the predicted statistical measure of the sensor response includes a predicted number of light detection and ranging (LIDAR) data points responsive to the driving scene” when describing the mathematical operations performed, these merely relate the judicial exception to the field of endeavor called vehicles. Merely linking the claim to the field of vehicles is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
STEP 2B: NO. The claim does not recite additional elements which are significantly more than the abstract idea. As outlined above, the claim merely recites a computer as a tool for implementing the abstract idea; merely using a computer does not improve the functioning of the computer, and the recited computer is a generic one and therefore not a particular machine. Moreover, because the claim merely performs mathematical calculations, it does not transform or reduce a particular article to a different state or thing. While the claim characterizes the mathematical operations in the context of vehicles, this merely links the mathematical operations to the technology of vehicles, which is not significantly more than the abstract idea.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 2
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites that the response data from claim 1 is sensing information and then describes how the sensing information is utilized in the mathematical operations of claim 1, which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 3
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the statistical measure from claim 1, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 4
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the extracted features from claim 3, originally from claim 1, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 5
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the extracted features from claim 3, originally from claim 1, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 6
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the extracted features from claim 3, originally from claim 1, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 7
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the extracted features from claim 3, originally from claim 1, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 8
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites “wherein the road data response prediction model is a generalized linear model (GLM)”, which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. There are no additional elements that amount to significantly more than the judicial exception.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 10
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the plurality of feature values from claim 9, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 11
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the predicted statistical measure from claim 10, originally from claim 9, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 12
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites one or more of the plurality of feature values from claim 10, originally from claim 9, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 13
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the plurality of feature values from claim 10, originally from claim 9, which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles and give examples of feature values included in the system, which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 14
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites the operations performed by a processing unit, which is a recitation of an abstract idea. The claim also recites “extracting, from the simulation, the plurality of feature values”, which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. While the claim recites “executing a sensor test in a simulation”, this merely links the claim to the field of vehicle simulation.
The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 15
STEP 1: YES. A “computer-implemented system”.
STEP 2A: YES. The claim recites “determining a fidelity of the simulation based on a comparison between a statistical measure of a reference sensor response and the predicted statistical measure of the sensor response”, which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 17
STEP 1: YES. A method.
STEP 2A: YES. The claim recites
“executing a sensor test in a simulation that simulates the driving scene;
and determining a fidelity of the simulation based on a comparison between a statistical measure of a reference sensor response and the predicted statistical measure of the sensor response obtained from the querying,
wherein the determining the plurality of feature values is based the simulation”
which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 18
STEP 1: YES. A method.
STEP 2A: YES. The claim recites “adjusting a parameter of the simulation based on a comparison between the statistical measure of the reference sensor response and a previous predicted statistical measure of a sensor response, wherein the executing the sensor test is based on the adjustment”, which is a recitation of mathematical relationships, formulas, equations, or calculation.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 19
STEP 1: YES. A method (the claim depends from claim 16, “A method comprising:”).
STEP 2A: YES. The claim recites the plurality of feature values from claim 16 which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim 20
STEP 1: YES. A method (the claim depends from claim 16, “A method comprising:”).
STEP 2A: YES. The claim recites the plurality of feature values from claim 16 which has been previously established to be a judicial exception.
STEP 2B: NO. The claim does not recite additional elements that amount to significantly more than the judicial exception. The additional elements in the claim merely link the claim to the field of vehicles which is not indicative of a practical application because the judicial exception is merely utilized to perform a mathematical operation and the results from the mathematical operation are not used or relied upon by any other elements in the claim to perform any application of, for example, controlling a vehicle.
Therefore, the claim is ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 (Integrated System And Method For Road Data Collection And Simulation Scene Building) in view of Li_2021 (Visual-based Driving Scene Generator For Automatic Driving Simulation), further in view of Zheng_2021 (Scene-aware Learning Network for Radar Object Detection), and further in view of Hou_2020 (Review on the new development of vibration-based damage identification for civil engineering structures: 2010-2019).
Claim 1. Wang_2021 makes obvious “[collecting] road data collected” (title: “Integrated System And Method For Road Data Collection And Simulation Scene Building”; abstract: “The invention relates to a road data collection and simulation scene building integrated system”) from a reference driving scene (abstract: “solving the problem that the collecting vehicle road test cost is high in the existing technology…and can widely cover the driving scene of the potential type that the automatic driving vehicle may meet, greatly reducing the number of kilometers of the road test”; par 90: “it should include important static object information in the automatic driving scene”).
Wang_2021 makes obvious [recognizing], from the road data, features of the reference driving scene (par 9: “detecting the target object by the radar”; par 85: “The detection algorithm can realize target detection with high accuracy of deep learning algorithm and algorithm acceleration, detection comprises performing feature recognition and tracking of the video image data and point cloud data, and identifying the target by a boundary frame mark in the data stream”). Additionally, Wang_2021 makes obvious that detecting objects is recognizing features (par 9: “the environment data fusion system…using the visual image collected by the camera to perform the sensing link; detecting and detecting the target object by the radar”; par 81-85: “In a preferred embodiment, as shown in FIG. 1, data fusion centre 2 comprises…a semantic recognition module 202… the semantic identification module uses the sensor collecting unit to obtain the video data and point cloud data, detecting and identifying can include any influence to the automatic driving decision of the traffic participant… The detection algorithm can realize target detection with high accuracy by deep learning algorithm and algorithm acceleration, detection comprises performing feature recognition”).
Wang_2021 makes obvious obtaining (par 90: “by obtaining the information of the data fusion centre and storing in the frame of time synchronization, the target information of each frame and the motion position information are projected to the map to generate the dynamic configuration data of the self-vehicle for driving scene and other road users”; par 95: “inputting the vehicle environment data to the data fusion centre; obtaining the object fusion data after processing; inputting the object fusion data into the road environment simulation platform for simulation”), from road data, response data associated with a vehicle in the reference driving scene (par 43: “The data acquisition process of the invention collects and marks various automatic driving complex scenes defined in the (file); the response behaviour of the automatic driving vehicle in various scenes is fully recorded for different scenes; it is convenient for data analysis scene simulation and verification”; par 90: “the target information of each frame and the motion position information are projected to the map to generate the dynamic configuration data of the self-vehicle for driving scene and other road users”).
Wang_2021 makes obvious generating (abstract: “the collected environment data is used for scene simulation and generating corresponding test case”; par 90: “the target information of each frame and the motion position information are projected to the map to generate the dynamic configuration data of the self-vehicle for driving scene and other road users”) a road data response (par 43: “The data acquisition process of the invention collects and marks various automatic driving complex scenes defined in the (file); the response behaviour of the automatic driving vehicle in various scenes is fully recorded for different scenes; it is convenient for data analysis scene simulation and verification”) [algorithm] (par 39: “As a further preferred technical solution, in the step (b), using the D-S evidence theory information synthesis algorithm for multi-source information fusion”; par 96: “The multi-source information fusion is carried out by using the synthesis algorithm”) to map the [recognized] features of the road data (par 85: “The detection algorithm can realize target detection with high accuracy of deep learning algorithm and algorithm acceleration, detection comprises performing”; par 90: “the target information of each frame and the motion position information are projected to the map to generate the dynamic configuration data of the self-vehicle for driving scene and other road users”; par 96: “A large amount of comprehensive environmental element information can map the recorded objects on the map by driving the simulator to reduce the test case scene”).
Additionally, Wang_2021 makes obvious that road data includes radar data and image data (abstract: “The invention relates to a road data collection and simulation scene building integrated system. The integrated system comprises a sensor collecting unit, a data fusion centre and a road environment simulation platform connected in turn; par 5: “the front view camera and the millimetre wave radar are combined with a set of collecting data fusion system; the collected front view image data and radar output data are collectively processed”).
While Wang_2021 makes obvious a method that could reasonably be implemented on a computer by a person of ordinary skill in the art, Wang_2021 does not expressly recite “a computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising:”.
Li_2021, however, makes obvious a computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising: (claim 9: “wherein the data processing system comprises: a processor; and a memory; the memory is coupled to the processor to store instructions; when the instructions are executed by the processor, causing the processor to perform operations; the operation comprises”).
Wang_2021 and Li_2021 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 and Li_2021. The rationale for doing so would have been that Wang_2021 teaches a method that could reasonably be implemented on a computer by a person of ordinary skill in the art, while Li_2021 teaches a computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations. Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method from Wang_2021 with the computer-implemented system from Li_2021 for the benefit of implementing the system using a computer, to obtain the invention as specified in the claims.
While Wang_2021 recites collecting road data from a reference driving scene, which may properly imply to one of ordinary skill in the art the notion of receiving, Wang_2021 does not expressly recite receiving road data from a reference driving scene.
Li_2021, however, makes obvious receiving image data collected (par 46: “In operation 501, the system can receive the image data collected by the vehicle (e.g., image collecting vehicle 101) of the camera in the live driving scene”; par 51: “The wireless communication system 612 and/or the user interface system 613 receives the information, processes the received information, plans the route or path from the starting point to the target point, and then drives the vehicle”) from a reference driving scene (title: “Visual based Driving Scene Generator for Automatic Driving Simulation”; abstract: “A system (and method) for generating a driving scene for an automatic driving simulator is described. The system may use a camera mounted to a vehicle (which is a means of economic calculation) to obtain real life driving scene data”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 and Li_2021. The rationale for doing so would have been that Wang_2021 teaches collecting road data from a reference driving scene, wherein the road data includes image data. Li_2021 teaches receiving image data from a reference driving scene. Therefore, it would have been obvious to one of ordinary skill in the art to combine collecting road data from a reference driving scene from Wang_2021 with receiving image data collected from a reference driving scene from Li_2021, because Wang_2021 teaches that road data includes image data, for the benefit of receiving road data collected from a reference driving scene to obtain the invention as specified in the claims.
While Wang_2021 makes obvious recognizing features from the road data, which may properly imply to one of ordinary skill in the art the notion of extracting, Wang_2021 in view of Li_2021 does not expressly recite extracting features from the road data.
Zheng_2021, however, makes obvious extracting, from radar data, features (section 2.1 par 3: “which builds 3D convolutional networks to extract features from radar snippets”; section 2.2 par 2: “Nobis et al. [18] extract and combine features of visual images and sparse radar data in the network encoding layers to improve the 2D object detection results”; section 2.3 par 1: “In the processing of radar data, a series of research [1, 3, 13, 21, 30] explores convolution neural networks to extract features of radar data. To obtain good feature representations for radar data Capobianco et al. [3] apply a convolutional neural network to the rage Doppler signature”).
Wang_2021, Li_2021, and Zheng_2021 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 and Zheng_2021. The rationale for doing so would have been that Wang_2021 teaches recognizing, from road data, features of the reference driving scene, wherein the road data includes radar data. Zheng_2021 teaches extracting features from radar data to improve detection. Therefore, it would have been obvious to one of ordinary skill in the art to combine recognizing, from road data, features of the reference driving scene from Wang_2021 with extracting, from radar data, features from Zheng_2021, for the benefit of obtaining good feature representation for radar data (Zheng_2021 section 2.3 par 1: “In the processing of radar data, a series of research [1, 3, 13, 21, 30] explores convolution neural networks to extract features of radar data. To obtain good feature representations for radar data Capobianco et al. [3] apply a convolutional neural network to the rage Doppler signature”) because radar data is road data (Wang_2021 par 9 and par 81-85) to obtain the invention as specified in the claims.
While Wang_2021 in view of Zheng_2021 makes obvious obtaining, from road data, response data associated with a vehicle in the reference driving scene, Wang_2021 in view of Li_2021 in view of Zheng_2021 does not expressly teach that the response data is responsive to the extracted features.
Hou_2020, however, makes obvious obtaining (page 7 par 2: “In these techniques, a global structure is divided into small manageable substructures, each of which is analysed independently to obtain its designated solution”; page 11 par 1: “The identification results obtained by searching the local area around the optimum solution found by PSO were more stable and accurate than those obtained by the PSO-based algorithm”) response data (page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”; page 20 par 6: “A number of methods have been developed to alleviate the need for direct measurement of temperature variations by using the measured response data only under varying environmental conditions”) associated with a vehicle (page 4 par 10: “Kong et al. [41] used the transmissibility of vehicle responses in a vehicle-bridge coupled system to detect bridge damage”; page 23 par 2: “Siringoringo and Fujino [265] estimated the first natural frequency of a bridge using the vehicle response as the driving velocity was below 30 km/h”) and responsive to the extracted features (page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”; page 13 par 1: “For structural damage identification, the ANN is used to establish a model representing the relationship between features extracted from structural vibration data and structural model parameters through a training process”).
Additionally, Hou_2020 makes obvious calculating (page 4 par 3: “The natural frequencies and mode shapes of undamaged and damaged frames were calculated on the basis of the Wittrick–Williams algorithm and further used for damage identification”; page 21 par 9: “The degree of nonlinearity was calculated from the data of the ground motion and structural vibration based on the Hilbert transform, which indicated whether damage occurred”) a statistical measure of the response data (page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”; page 15 par 5: “Multi-layer perceptron NNs were used for the statistical modelling of the structural responses”; page 11 par 4: “Mahalanobis squared distance (MSD) is a statistical measure for outlier detection and has received wide applications because of its simplicity and computational efficiency”).
Wang_2021, Li_2021, Zheng_2021, and Hou_2020 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Wang_2021 in view of Zheng_2021 and Hou_2020. The rationale for doing so would have been that Wang_2021 teaches extracting, from road data, features of a reference driving scene and obtaining, from road data, response data associated with a vehicle in the driving scene. Hou_2020 teaches obtaining response data associated with a vehicle in the reference driving scene and responsive to the extracted features. Additionally, Hou_2020 teaches calculating a statistical measure of the response data. Therefore, it would have been obvious to one of ordinary skill in the art to combine extracting, from road data, features of the reference driving scene and obtaining, from road data, response data associated with a vehicle in the driving scene from Wang_2021 in view of Zheng_2021 with obtaining response data associated with a vehicle in the driving scene and responsive to the extracted features and calculating a statistical measure of the response data from Hou_2020 for the benefit of using a statistical measure to fit the feature extraction model and the response data (Hou_2020 page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”) to obtain the invention as specified in the claims.
Furthermore, Hou_2020 makes obvious obtaining a response prediction model to map the [input parameters] to the statistical measure of the response data (page 6 par 10: “Model updating methods modify model property matrices, such as mass, stiffness and damping matrices, to ensure that the analytical predictions of the updated model resemble experimental data as closely as possible”; page 15 par 2: “RF is an ensemble classifier that consists of a large number of decision trees [178]. The model prediction is obtained through combining the predictors of each individual tree by majority voting”; page 24 par 3: “Response surface methodology (RSM), which is a combination of mathematical and statistical techniques, can provide an approximate mathematical model mapping the input parameters of a physical system to its output responses”).
Additionally, Hou_2020 makes obvious that the prediction model is an algorithm (page 15 par 4: “An unsupervised learning algorithm only requires data from the intact state of a structure for training, which belongs to the outlier or novelty detection category. A model is trained by machine learning algorithms based on the data in the undamaged state. The trained model is then used to evaluate the structural condition when new measurement data are available”; page 18 par 8: “Unlike the previous multi-task SBL algorithm [215], the prediction error precision parameters were marginalised from hierarchical models to improve the learning robustness and characterise the posterior uncertainty”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Zheng_2021 with Hou_2020. The rationale for doing so would have been that Wang_2021 in view of Zheng_2021 teaches generating a road data response algorithm to map the extracted features of the road data. Hou_2020 teaches generating a prediction model to map the input parameters to the statistical measure of the response data and that the prediction model is an algorithm. Therefore, it would have been obvious to one of ordinary skill in the art to combine generating a road data response algorithm to map the extracted features of the road data from Wang_2021 in view of Zheng_2021 with generating a prediction model to map the input parameters to the statistical measure of the response data from Hou_2020 for the benefit of mapping the input parameters to output response data (page 24 par 3: “Response surface methodology (RSM), which is a combination of mathematical and statistical techniques, can provide an approximate mathematical model mapping the input parameters of a physical system to its output responses”), because the features are input parameters (Hou_2020 page 15 par 1: “Two types of features, namely, the parameters of the AR model and the residual errors of the statistical parameters, were extracted from the time series data”) to obtain the invention as specified in the claims.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020.
Claim 2. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 1. Therefore, Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious generating the road data response prediction model comprises generating the road data response prediction model to map the extracted features to the statistical measure of the response data.
Additionally, Hou_2020 makes obvious the additional limitation wherein the response data (page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”; page 20 par 6: “A number of methods have been developed to alleviate the need for direct measurement of temperature variations by using the measured response data only under varying environmental conditions”) includes sensing information associated with the reference driving scene (page 24 par 9: “Li et al. [280] proposed a two-phase OSP scheme based on the Fisher information matrix. The first phase was to find out the sensor locations that reconstructed accurate responses. In the second phase, the optimal sensor locations were determined based on the sensitivity analysis with respect to the elemental stiffness parameter”; page 9 par 8: “the sensitivity matrix serves as the sensing matrix and is directly related to sensor locations. Sensor placement is a typical combinatorial problem, and the global optimum is difficult to obtain using conventional techniques”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with the further teachings of Hou_2020. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches generating the road data response prediction model comprises generating the road data response prediction model to map the extracted features to the statistical measure of the response data. Hou_2020 teaches that the response data includes sensing information associated with the reference driving scene. Therefore, it would have been obvious to a person of ordinary skill in the art to combine generating the road data response prediction model to map the extracted features to the statistical measure of the response data from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with wherein the response data includes sensing information from Hou_2020 for the benefit of generating the road data response prediction model to map the extracted features to the statistical measure of the sensing information to obtain the invention as specified in the claims.
Claim 3. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 1. Zheng_2021 makes obvious the additional limitation wherein the statistical measure includes an indication of a quantity of light detection and ranging (LIDAR) data points responsive to an object in the reference driving scene (abstract: “Object detection is essential to safe autonomous or assisted driving. Previous works usually utilize RGB images or LiDAR point clouds to identify and localize multiple objects in self-driving”; section 2.2 par 3: “The second type is detecting objects based on radar data only. To effectively and efficiently collect radar object annotation, with a calibrated camera or LiDAR sensor, some annotations are automatically generated by high-accuracy object detection algorithms on these data”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with Zheng_2021. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches the computer-implemented system of claim 1. Zheng_2021 teaches the statistical measure includes an indication of a quantity of light detection and ranging (LIDAR) data points responsive to an object in the reference driving scene. Therefore, it would have been obvious to one of ordinary skill in the art to combine the computer-implemented system of claim 1 from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with the statistical measure includes an indication of a quantity of light detection and ranging (LIDAR) data points responsive to an object in the reference driving scene from Zheng_2021 for the benefit of reliable and accurate range detection (Zheng_2021 section 1 par 1: “Similar to LiDAR, millimeter-wave can function reliably and detect range accurately”) to obtain the invention as specified in the claims.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 in view of Sundaravalli_2018 (A Survey on Vehicle Classification Techniques).
Claim 4. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 3. Sundaravalli_2018 makes obvious the additional limitation wherein the extracted features include an indication of a dimension of the object (page 270 par 2: “classifiers are used to extract features and classification. The detected vehicle can be localized in boundary box by using 3D coordinates. To extract the vehicle features by using object dimension [length, width & height], volumetric feature [area, volume] and relative position [maximum height, mean height]”).
Wang_2021, Li_2021, Zheng_2021, Hou_2020, and Sundaravalli_2018 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 and Sundaravalli_2018. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches the computer-implemented system of claim 3, which includes extracted features. Sundaravalli_2018 teaches extracted features that include an indication of a dimension of the object. Therefore, it would have been obvious to one of ordinary skill in the art to combine the extracted features from the computer-implemented system from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with extracted features that include an indication of a dimension of the object from Sundaravalli_2018 for the benefit of localizing the detected vehicle in a boundary box (Sundaravalli_2018 page 270 par 2) to obtain the invention as specified in the claims.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020.
Claim 5. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 3. Additionally, Wang_2021 teaches the additional limitation wherein the extracted features include an indication of a distance from the object to the vehicle in the reference driving scene (par 20: “the millimetre wave radar is installed on the front bumper of the vehicle, for collecting the distance information data of the vehicle and the vehicle front object”; par 39: “obtaining the relative movement information of each sensor group target object according to the distance detection module”; par 86: “the distance detection module can obtain the relative distance between the target object and the collecting vehicle by the strictly calibrated radar sensor, angle and so on information”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with the further teachings of Wang_2021. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches the computer-implemented system of claim 3, which includes extracted features. Wang_2021 teaches that the extracted features include an indication of a distance from the target object to the vehicle in the reference driving scene. Therefore, it would have been obvious to one of ordinary skill in the art to combine the extracted features from the computer-implemented system of claim 3 from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with including an indication of a distance from the object to the vehicle in the reference driving scene from Wang_2021 for the benefit of maintaining a safe distance between the vehicle and object (Wang_2021 par 44: “to the target vehicle with a safe distance and continues to travel the following behaviour”) to obtain the invention as specified in the claims.
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 in view of Miyahara_2003 (TARGET VEHICLE IDENTIFICATION BASED ON THE THEORETICAL RELATIONSHIP BETWEEN THE AZIMUTH ANGLE AND THE RELATIVE VELOCITY).
Claim 6. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all of the limitations of claim 3, including extracted features of a target object in the reference driving scene.
Miyahara_2003 makes obvious the additional limitation of an indication (par 74: “the yaw rate method for determining the path of the target vehicle indicates that the target vehicle is out of the host vehicle’s path”; par 76: “the prior art method which utilizes yaw rate to determine target vehicle location indicates that the target vehicle is turning. In contrast, the algorithm or method of the present invention as represented by output line 232 does not indicate that the target vehicle is turning”) of an angle at which the [target vehicle] is located with respect to the vehicle (abstract: “A method for tracking a target vehicle through a curve in a roadway is disclosed. The method includes measuring an azimuth angle between the target vehicle and a host vehicle”; figure 2; figure 3).
Wang_2021, Li_2021, Zheng_2021, Hou_2020, and Miyahara_2003 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 and Miyahara_2003. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches extracting features of a target object in the reference driving scene. Miyahara_2003 teaches an indication of an angle at which the target vehicle is located with respect to the host vehicle. Therefore, it would have been obvious to one of ordinary skill in the art to combine the extracted features of a target object in the driving scene from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with an indication of an angle at which the target vehicle is located with respect to the host vehicle from Miyahara_2003 for the benefit of tracking a target vehicle through a curve in a roadway (Miyahara_2003 abstract) to obtain the invention as specified in the claims.
Claim 7. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 3, including extracted features of a target object in the reference driving scene.
Miyahara_2003 makes obvious the additional limitation of an indication of an angle between a path of the [target vehicle] and a path of the vehicle (abstract: “Further, the target vehicle is determined to be in the same lane or path of the host vehicle by evaluating how well the developed theoretical relationship fits the with the measured azimuth angle and calculated relative velocity. Therefore, the present invention determines the path of a target vehicle without relying on inaccurate conventional methods based on the yaw rate of the host vehicle”; summary: “In an aspect of the present invention a method for determining whether a target vehicle is in the path of the host vehicle is provided. The method includes using an azimuth angle and calculating relative velocity between a target vehicle and a host vehicle at a predefined time interval”; figure 6A; figure 6B).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with Miyahara_2003. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches extracting features of a target object in the reference driving scene. Miyahara_2003 teaches an indication of an angle between a path of the target vehicle and a path of the host vehicle. Therefore, it would have been obvious to one of ordinary skill in the art to combine extracting features from a target object in the reference driving scene from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with an indication of an angle between a path of the target vehicle and a path of the host vehicle from Miyahara_2003 for the benefit of tracking a target vehicle through a curve in a roadway (Miyahara_2003 abstract) to obtain the invention as specified in the claims.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 in view of Kronprasert_2021 (Crash Prediction Models for Horizontal Curve Segments on Two-Lane Rural Roads in Thailand).
Claim 8. Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 makes obvious all the limitations of claim 1, including the road data response prediction model. Kronprasert_2021 makes obvious the additional limitation wherein the road data (section 4.2 par 1: “The data on both road geometric conditions and conditions used in this study are classified into two categories:…traffic data gathered from the traffic rural road database”; section 6 par 2: “The datasets used in this study were collected from road data inventories of the rural road network”) response (section 3.2.1 par 1: “The Poisson safety performance function or Poisson regression model, based on the Poisson probability distribution, is the fundamental method used for modelling count response data”) prediction model is a generalized linear model (GLM) (section 1 par 4: “Crash prediction models by Safety Performance Functions (SPFs) are useful tools for describing the statistical associations between significant variables of roadway characteristics… Among statistical techniques which including Discrete-outcome Models, Data mining Techniques, Soft Computing Techniques, and Generalised Linear Models [11], Generalised Linear Models (GLM) have been broadly applied for studies conducted on the associations between significant variables”; section 3 par 1: “the crash prediction models were developed using a Generalized Linear Model (GLM)”; section 6 par 2: “This study aims to develop Safety Performance Functions (SPFs) using Generalised Linear Model (GLM) techniques as a crash prediction model”).
Wang_2021, Li_2021, Zheng_2021, Hou_2020, and Kronprasert_2021 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with Kronprasert_2021. The rationale for doing so would have been that Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 teaches the computer-implemented system of claim 1, which includes the road data response prediction model. Kronprasert_2021 teaches a generalized linear model (GLM) as a road data response prediction model. Therefore, it would have been obvious to one of ordinary skill in the art to combine the road data response prediction model from the computer-implemented system from Wang_2021 in view of Li_2021 in view of Zheng_2021 in view of Hou_2020 with the road data response prediction model that is a generalized linear model (GLM) from Kronprasert_2021 for the benefit of associating significant variables from the road data (Kronprasert_2021 section 1 par 4: “Generalised Linear Models (GLM) have been broadly applied for studies conducted on the associations between significant variables”) to obtain the invention as specified in the claims.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021.
Claim 9. Wang_2021 makes obvious determining a plurality of [characteristics] (par 31: “a positioning and track determining module”; par 32: “the locating and track determining module is used for locating and determining the track according to the target object relative motion information”; par 96: “determining the motion characteristics of various road users (including object and collecting vehicle relative position, speed, orientation angle, identifying ID, size and so on). The movement state information of other road users in the test environment can be determined by identifying the information and collecting the movement state information of the vehicle itself”) associated with a driving scene (abstract: “solving the problem that the collecting vehicle road test cost is high in the existing technology…and can widely cover the driving scene of the potential type that the automatic driving vehicle may meet, greatly reducing the number of kilometers of the road test”; par 90: “it should include important static object information in the automatic driving scene”).
Additionally, Wang_2021 makes obvious [obtaining] a road data response (par 43: “The data acquisition process of the invention collects and marks various automatic driving complex scenes defined in the (file); the response behaviour of the automatic driving vehicle in various scenes is fully recorded for different scenes; it is convenient for data analysis scene simulation and verification”) [algorithm] to the plurality of [characteristics] for the driving scene with respect to a vehicle in the driving scene (par 39: “As a further preferred technical solution, in the step (b), using the D-S evidence theory information synthesis algorithm for multi-source information fusion”; par 96: “data fusion centre processing vehicle environment data method is as follows…and determining the motion characteristics of various road users (including object and collecting vehicle relative position, speed, orientation angle, identifying ID, size and so on)”).
Wang_2021 does not, however, expressly recite a plurality of feature values, a prediction model, or use of the feature values to obtain a predicted statistical measure of a sensor response.
Hou_2020, however, makes obvious a plurality of feature values (table 1 column 1; table 2 column 2; table 3 column 2; table 4 column 4; page 11 par 4: “Mahalanobis squared distance (MSD) is a statistical measure for outlier detection and has received wide applications because of its simplicity and computational efficiency…Statistical evaluations were performed on extracted damage features for each individual sensor location…The outlier analysis was then conducted on the basis of MSD. A field experimental study on a simply supported steel truss bridge showed that the inclusion of additional parameters in the outlier analysis might lead to more sensitive features”).
Additionally, Hou_2020 makes obvious a response prediction model (page 6 par 10: “Model updating methods modify model property matrices, such as mass, stiffness and damping matrices, to ensure that the analytical predictions of the updated model resemble experimental data as closely as possible”; page 15 par 2: “RF is an ensemble classifier that consists of a large number of decision trees [178]. The model prediction is obtained through combining the predictors of each individual tree by majority voting”; page 24 par 3: “Response surface methodology (RSM), which is a combination of mathematical and statistical techniques, can provide an approximate mathematical model mapping the input parameters of a physical system to its output responses”) applied to a plurality of feature values to obtain a predicted statistical measure (page 11 par 4: “Mahalanobis squared distance (MSD) is a statistical measure for outlier detection and has received wide applications because of its simplicity and computational efficiency…Statistical evaluations were performed on extracted damage features for each individual sensor location…The outlier analysis was then conducted on the basis of MSD. A field experimental study on a simply supported steel truss bridge showed that the inclusion of additional parameters in the outlier analysis might lead to more sensitive features”) of a sensor response (page 3 par 9: “Although baseline information is necessary, the proposed approach does not require many preinstalled sensors”; page 11 par 4: “Statistical evaluations were performed on extracted damage features for each individual sensor location. A sensor with a significant variation was identified as the one closest to the damage location”; page 23 par 1: “These methods extract the dynamic properties of the bridge, such as natural frequencies, from the measured responses of a passing vehicle instrumented with sensors”).
Additionally, Hou_2020 makes obvious that feature values are characteristics (page 20 par 8: “the statistical characteristics of the operational variations on a curvature were then extracted via PCA transformation…A sensor-clustering-based ARX method was applied to the free vibration acceleration data to calculate the damage features. Multilayer ANNs were then trained using the obtained damage features resulting from different temperature scenarios. Differences between the damage features from the time series and ANN analyses were used for damage detection”; page 21 par 5: “In addition, if changes in the structural dynamic characteristics due to damage are analogous to those due to varying environmental conditions, then the effectiveness of these algorithms cannot be guaranteed”).
Additionally, Hou_2020 makes obvious that the prediction model is an algorithm (page 15 par 4: “An unsupervised learning algorithm only requires data from the intact state of a structure for training, which belongs to the outlier or novelty detection category. A model is trained by machine learning algorithms based on the data in the undamaged state. The trained model is then used to evaluate the structural condition when new measurement data are available”; page 18 par 8: “Unlike the previous multi-task SBL algorithm [215], the prediction error precision parameters were marginalised from hierarchical models to improve the learning robustness and characterise the posterior uncertainty”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 and Hou_2020. The rationale for doing so would have been that Wang_2021 teaches determining a plurality of characteristics associated with a driving scene and obtaining a road data response prediction algorithm for the plurality of characteristics for the driving scene with respect to a vehicle in the driving scene. Hou_2020 teaches obtaining a response prediction model for the plurality of feature values to obtain a predicted statistical measure of a sensor response and that the prediction model is an algorithm. Therefore, it would have been obvious to one of ordinary skill in the art to combine determining a plurality of characteristics associated with a driving scene and obtaining a road data response prediction algorithm for the plurality of characteristics for the driving scene with respect to a vehicle in the driving scene from Wang_2021 with obtaining a response prediction model for the plurality of feature values to obtain a predicted statistical measure of a sensor response from Hou_2020, because Hou_2020 teaches that feature values are characteristics and the prediction model is an algorithm, for the benefit of fitting the model to the feature values and statistical measure (Hou_2020 page 12 par 9: “Feature extraction aims to fit either a data-driven or a physics-based model to the measured structural response data by using statistical or signal processing techniques”) to obtain the invention as specified in the claims.
Wang_2021 in view of Hou_2020 does not expressly recite applying the road data response prediction model.
Zheng_2021, however, makes obvious applying a prediction model (section 2.3 par 1: “To obtain good feature representations for radar data, Capobianco et al. [3] apply a convolutional neural network to the rage Doppler signature”; section 4.5 par 3: “Finally, by applying the scene-aware learning framework, we predict each scene with the corresponding model and achieve final AP of 53.41%. We can observe that adding each component contributes to the final results without any performance degradation”).
Additionally, Zheng_2021 makes obvious obtaining a prediction model from extracted features (section 2.1 par 3: “which combine the flow information and features extracted on one frame to obtain the prediction”; section 3.1 par 2: “In each branch, we fine-tune the SLNet based on the universal model obtained in the first phase with radar snippets of the corresponding scene”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 and Zheng_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 teaches obtaining a road data response prediction model for the plurality of feature values. Zheng_2021 teaches obtaining and applying a prediction model from features. Therefore, it would have been obvious to one of ordinary skill in the art to combine obtaining a road data response prediction model from the plurality of feature values from Wang_2021 in view of Hou_2020 with obtaining and applying a prediction model from Zheng_2021 for the benefit of obtaining good feature representation (Zheng_2021 section 2.3 par 1) to obtain the invention as specified in the claims.
While Wang_2021 in view of Hou_2020 in view of Zheng_2021 makes obvious a method that could reasonably be implemented on a computer by a person of ordinary skill in the art, Wang_2021 in view of Hou_2020 in view of Zheng_2021 does not expressly recite a computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising:
Li_2021, however, makes obvious a computer-implemented system, comprising: one or more non-transitory computer-readable media storing instructions, when executed by one or more processing units, cause the one or more processing units to perform operations comprising: (claim 9: “wherein the data processing system comprises: a processor; and a memory; the memory is coupled to the processor to store instructions; when the instructions are executed by the processor, causing the processor to perform operations; the operation comprises”).
Claims 10, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021.
Claim 10. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious all the limitations of claim 9, including the plurality of feature values.
Wang_2021 makes obvious the additional limitation of information associated with an object in the driving scene (par 1: “The invention relates to a road object data collecting field, specifically to a road data collecting and simulating scene establishing integrated system and method”; par 90: “In the example, it should include important static object information in the automatic driving scene”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with Wang_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 teaches the plurality of feature values associated with a driving scene. Wang_2021 teaches information associated with an object in the driving scene. Therefore, it would have been obvious to one of ordinary skill in the art to combine the plurality of feature values associated with a driving scene from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with information associated with an object in the driving scene from Wang_2021 for the benefit of obtaining good feature representation (Zheng_2021 section 2.3 par 1) to obtain the invention as specified in the claims.
Claim 11. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious all the limitations of claim 10, including the predicted statistical measure of the sensor response.
Zheng_2021 makes obvious the additional limitation wherein the statistical measure includes an indication of a predicted number of light detection and ranging (LIDAR) data points responsive to the object (abstract: “Object detection is essential to safe autonomous or assisted driving. Previous works usually utilize RGB images or LiDAR point clouds to identify and localize multiple objects in self-driving”; section 2.2 par 3: “The second type is detecting objects based on radar data only. To effectively and efficiently collect radar object annotation, with a calibrated camera or LiDAR sensor, some annotations are automatically generated by high-accuracy object detection algorithms on these data”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with Zheng_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 teaches the predicted statistical measure of the sensor response. Zheng_2021 teaches the inclusion of a predicted number of light detection and ranging (LIDAR) data points responsive to the object. Therefore, it would have been obvious to one of ordinary skill in the art to combine the predicted statistical measure of the sensor response from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with the inclusion of a predicted number of light detection and ranging (LIDAR) data points responsive to the object from Zheng_2021 for the benefit of reliable and accurate range detection (Zheng_2021 section 1 par 1: “Similar to LiDAR, millimeter-wave can function reliably and detect range accurately”) to obtain the invention as specified in the claims.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Sundaravalli_2018.
Claim 12. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious all the limitations of claim 10, including the one or more of the plurality of feature values.
Sundaravalli_2018 makes obvious the additional limitation that one or more features are associated with a three-dimensional (3D) bounding box representing a dimension of the object in a 3D space (page 270 par 2: “classifiers are used to extract features and classification. The detected vehicle can be localized in boundary box by using 3D coordinates. To extract the vehicle features by using object dimension [length, width & height], volumetric feature [area, volume] and relative position [maximum height, mean height]”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 and Sundaravalli_2018. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 teaches the plurality of feature values. Sundaravalli_2018 teaches that one or more features are associated with a three-dimensional (3D) bounding box representing a dimension of the object in 3D space. Therefore, it would have been obvious to one of ordinary skill in the art to combine the plurality of feature values from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with features that include a three-dimensional (3D) bounding box representing a dimension of the object in 3D space from Sundaravalli_2018 for the benefit of obtaining good feature representation for radar data (Zheng_2021 section 2.3 par 1: “In the processing of radar data, a series of research [1, 3, 13, 21, 30] explores convolution neural networks to extract features of radar data. To obtain good feature representations for radar data Capobianco et al. [3] apply a convolutional neural network to the rage Doppler signature”) to obtain the invention as specified in the claims.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Miyahara_2003.
Claim 13. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious all the limitations of claim 10, including the plurality of feature values.
Wang_2021 makes obvious a distance from the object to the vehicle in the driving scene (par 20: “the millimetre wave radar is installed on the front bumper of the vehicle, for collecting the distance information data of the vehicle and the vehicle front object”; par 39: “obtaining the relative movement information of each sensor group target object according to the distance detection module”; par 86: “the distance detection module can obtain the relative distance between the target object and the collecting vehicle by the strictly calibrated radar sensor, angle and so on information”).
Miyahara_2003 makes obvious an angle of a location of the object with respect to the vehicle (abstract: “A method for tracking a target vehicle through a curve in a roadway is disclosed. The method includes measuring an azimuth angle between the target vehicle and a host vehicle”; figure 2; figure 3) and an angle of a path of the object with respect to the vehicle (abstract: “Further, the target vehicle is determined to be in the same lane or path of the host vehicle by evaluating how well the developed theoretical relationship fits the with the measured azimuth angle and calculated relative velocity. Therefore, the present invention determines the path of a target vehicle without relying on inaccurate conventional methods based on the yaw rate of the host vehicle”; summary: “In an aspect of the present invention a method for determining whether a target vehicle is in the path of the host vehicle is provided. The method includes using an azimuth angle and calculating relative velocity between a target vehicle and a host vehicle at a predefined time interval”; figure 6A; figure 6B).
Claims 14, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Ngo_2021 (A Multi-Layered Approach for Measuring the Simulation-to-Reality Gap of Radar Perception for Autonomous Driving).
Claim 14. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious all the limitations of claim 9, including the operations and the driving scene. Additionally, Wang_2021 makes obvious a simulation that simulates the driving scene (title: “Integrated System And Method For Road Data Collection And Simulation Scene Building”; abstract: “The invention relates to a road data collection and simulation scene building integrated system… the collected environment data is used for scene simulation and generating corresponding test case, for solving the problem that the collecting vehicle road test cost is high in the existing technology”).
Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 teaches extracting, from road data, features of the reference driving scene. Additionally, Wang_2021 teaches that road data is collected from a driving scene and using a simulation to simulate the driving scene. Therefore, it would have been obvious to a person of ordinary skill in the art to combine extracting, from road data, features of the reference driving scene from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with the simulation to simulate the driving scene from Wang_2021 for the benefit of testing the vehicle in a cost efficient manner (Wang_2021 abstract: “the collected environment data is used for scene simulation and generating corresponding test case, for solving the problem that the collecting vehicle road test cost is high in the existing technology”) to obtain the invention as specified in the claims.
Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 makes obvious the computer-implemented system of claim 9, wherein the operations further comprise:
[executing a sensor test] in a simulation that simulates the driving scene; and
extracting, from the simulation, the plurality of feature values associated with the driving scene.
Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 does not expressly recite executing a sensor test.
Ngo_2021, however, makes obvious executing (page 1 par 4: “Although it is straightforward to measure the execution time of a simulation run”; page 2 par 7: “Due to their simplicity and fast execution, they are suitable for early testing of perception algorithms in either ideal conditions or under the assumption that sensor errors are negligible”) a sensor test in a simulation that simulates (page 1 par 1: “With the increasing safety validation requirements for the release of a self-driving car, alternative approaches, such as simulation-based testing, are emerging in addition to conventional real-world testing. In order to rely on virtual tests the employed sensor models have to be validated”; page 2 par 2: “In this paper, we propose a multi-level testing method for measuring the overall simulation-to-reality gap for virtual testing of perception functions (see Fig. 1). Our approach consists of a combination of an explicit and implicit sensor model evaluation. The former assesses the simulated sensor data directly, while the latter refers to an indirect evaluation by assessing the output of a downstream target application”).
Wang_2021, Hou_2020, Zheng_2021, Li_2021, and Ngo_2021 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 and Ngo_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 teaches the operations further comprising a simulation that simulates the driving scene. Ngo_2021 teaches executing a sensor test in a simulation that simulates. Therefore, it would have been obvious to one of ordinary skill in the art to combine the operations further comprising a simulation that simulates the driving scene from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 with executing a sensor test in a simulation that simulates from Ngo_2021 for the benefit of testing the vehicle in a cost efficient manner (Wang_2021 abstract: “the collected environment data is used for scene simulation and generating corresponding test case, for solving the problem that the collecting vehicle road test cost is high in the existing technology”) to obtain the invention as specified in the claims.
Claim 15. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Ngo_2021 makes obvious all the limitations of claim 14, including the operations further comprise and the statistical measure of the sensor response.
Ngo_2021 makes obvious the additional limitation of determining a fidelity of the simulation (page 1 par 1: “We have shown the effectiveness of the proposed approach in terms of providing an in-depth sensor model assessment that renders existing disparities visible and enables a realistic estimation of the overall model fidelity across different scenarios”; page 3 par 1: “The number of samples is determined by a distribution conditioned on the radial distance defined by the real radar measurements”; page 5 par 3: “In order to determine a realistic overall model fidelity (simulation-to-reality gap), not only the tracking prediction but also the direct output of the model, here the radar point cloud, must be considered”) based on a comparison between a reference sensor response and the predicted sensor response (page 1 par 5: “Moreover, different approaches can be found in the literature that compare synthetically generated and real radar data qualitatively”; page 3 par 8: “After the real radar data and the simulated radar data are generated, both point clouds are compared in terms of their similarity”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Ngo_2021 and Ngo_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Ngo_2021 teaches the computer-implemented system of claim 14 and the statistical measure of the sensor response. Ngo_2021 teaches determining a fidelity of the simulation based on a comparison between a reference sensor response and the predicted sensor response. Therefore, it would have been obvious to one of ordinary skill in the art to combine the operations and statistical measure of the sensor response from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Li_2021 in view of Ngo_2021 with determining a fidelity of the simulation based on a comparison between a reference sensor response and the predicted sensor response from Ngo_2021 for the benefit of evaluating the quality of the data generated (Ngo_2021 page 2 par 1: “In addition to evaluating the radar data generated, the quality of the data must also be examined with respect to its applicability for an intended use, because the requirements on the radar simulation fidelity can vary greatly depending on the application”) to obtain the invention as specified in the claims.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 (EPSILON: An Efficient Planning System for Automated Vehicles in Highly Interactive Environments).
Claim 16. The same logic used in the rejection of claim 9 under 35 U.S.C. 103 can be used to show that Wang_2021 in view of Hou_2020 in view of Zheng_2021 makes obvious a method comprising:
determining a plurality of feature values associated with a driving scene; and
[applying] a road data response prediction model based on the plurality of feature values to obtain a predicted statistical measure of a sensor response for the driving scene with respect to a vehicle in the driving scene.
Wang_2021 in view of Hou_2020 in view of Zheng_2021 teaches the statistical measure of the sensor response.
Additionally, Wang_2021 in view of Hou_2020 in view of Zheng_2021 makes obvious that the road data response prediction model is a road environment simulation platform that includes a map module (Wang_2021 par 31: “As a further preferred technical solution, the road environment simulation platform comprises a vehicle projection to the map module”).
Zheng_2021 makes obvious a predicted number of light detection and ranging (LIDAR) data points responsive to the driving scene (abstract: “Object detection is essential to safe autonomous or assisted driving. Previous works usually utilize RGB images or LiDAR point clouds to identify and localize multiple objects in self-driving”; section 2.2 par 3: “The second type is detecting objects based on radar data only. To effectively and efficiently collect radar object annotation, with a calibrated camera or LiDAR sensor, some annotations are automatically generated by high-accuracy object detection algorithms on these data”).
Wang_2021 in view of Hou_2020 in view of Zheng_2021 recites applying a road data response prediction model, which may properly imply to one of ordinary skill in the art the notion of querying; however, Wang_2021 in view of Hou_2020 in view of Zheng_2021 does not expressly recite querying.
Ding_2021, however, makes obvious querying using a map module (page 3 par 2: “The output of perception is synchronized and fed to a semantic map manager module which is responsible for organizing the data structures and providing querying interfaces for planning modules”; page 14 par 1: “Note that the motivation for introducing the semantic-level action and forward simulation is not targeting ‘prediction accuracy’. Instead, it aims at providing a lightweight querying interface which can be coupled inside planning for realizing diverse future scenarios conditioned on the ego decision”).
Wang_2021, Hou_2020, Zheng_2021, and Ding_2021 are analogous art to the claimed invention because they are from the same field of endeavor called vehicles. Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 and Ding_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 teaches applying a road data response prediction model. Ding_2021 teaches querying using a map module. Therefore, it would have been obvious to one of ordinary skill in the art to combine applying a road data response prediction model that includes a map module from Wang_2021 in view of Hou_2020 in view of Zheng_2021 with querying using a map module from Ding_2021 for the benefit of achieving human-like driving behaviors in highly interactive traffic (Ding_2021 page 1 par 1: “We validate our planning system in both simulations and real-world dense traffic, and the experimental results show that our EPSILON achieves human-like driving behaviors in highly interactive traffic flow smoothly and safely without being over-conservative compared to the existing planning methods”) to obtain the invention as specified in the claims.
Claims 17, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Ngo_2021.
Claim 17. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 makes obvious all the limitations of claim 16.
The same logic that was used to reject claims 14 and 15 under 35 U.S.C. 103 can be used to show that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ngo_2021 makes obvious executing a sensor test in a simulation that simulates the driving scene; and determining a fidelity of the simulation based on a comparison between a statistical measure of a reference sensor response and the predicted statistical measure of the sensor response obtained from the querying,
wherein the determining the plurality of feature values is based on the simulation.
Claim 16 states that a road data response prediction model was queried to obtain a predicted statistical measure of the sensor response. This would properly imply to one of ordinary skill in the art that the predicted statistical measure of the sensor response in claim 17 was obtained from the querying because querying was the process used to obtain the predicted statistical measure of the sensor response in claim 16.
Claim 18. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Ngo_2021 makes obvious all the limitations of claim 17, including the statistical measure of the sensor response.
Ngo_2021 makes obvious the additional limitation of adjusting a parameter of the simulation (page 1 par 3: “Hartstern et al. use probabilistic sensor models to identify the optimal sensor setup solution in early development stages since they provide a wide range of modification parameters and adjustable settings”; page 4 par 4: “Thus, OSPA metric has two adjustable parameters p and c that have meaningful interpretations as outlier sensitivity and cardinality penalty”) based on a comparison between a reference sensor response and the predicted sensor response (page 1 par 5: “Moreover, different approaches can be found in the literature that compare synthetically generated and real radar data qualitatively”; page 3 par 8: “After the real radar data and the simulated radar data are generated, both point clouds are compared in terms of their similarity”).
Additionally, Ngo_2021 makes obvious wherein executing the sensor test is based on the adjustment (page 1 par 3: “Hartstern et al. use probabilistic sensor models to identify the optimal sensor setup solution in early development stages since they provide a wide range of modification parameters and adjustable settings”).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Ngo_2021 and Ngo_2021. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Ngo_2021 teaches the method of claim 17 including the statistical measure of a sensor response. Ngo_2021 teaches adjusting a parameter of the simulation based on a comparison between the reference sensor response and a previous predicted sensor response and wherein executing the sensor test is based on the adjustment. Therefore, it would have been obvious to one of ordinary skill in the art to combine the statistical measure of a sensor response from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Ngo_2021 with adjusting a parameter of the simulation based on a comparison between the reference sensor response and a previous predicted sensor response and wherein executing the sensor test is based on the adjustment from Ngo_2021 for the benefit of evaluating the quality of the data generated (Ngo_2021 page 2 par 1: “In addition to evaluating the radar data generated, the quality of the data must also be examined with respect to its applicability for an intended use, because the requirements on the radar simulation fidelity can vary greatly depending on the application”) to obtain the invention as specified in the claims.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Sundaravalli_2018.
Claim 19. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 makes obvious all the limitations of claim 16, including the plurality of feature values.
Sundaravalli_2018 makes obvious the additional limitation wherein the features include at least one of: a width of an object in the driving scene; a length of the object; a height of the object (page 270 par 2: “classifiers are used to extract features and classification. The detected vehicle can be localized in boundary box by using 3D coordinates. To extract the vehicle features by using object dimension [length, width & height], volumetric feature [area, volume] and relative position [maximum height, mean height]”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 and Sundaravalli_2018. The rationale for doing so would have been that Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 teaches the plurality of feature values. Sundaravalli_2018 teaches that features include at least one of: a width of an object in the driving scene; a length of the object; a height of the object. Therefore, it would have been obvious to one of ordinary skill in the art to combine the plurality of feature values from Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 with features that include at least one of: a width of an object in the driving scene; a length of the object; a height of the object from Sundaravalli_2018 for the benefit of localizing the detected vehicle in a boundary box (Sundaravalli_2018 page 270 par 2) to obtain the invention as specified in the claims.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 in view of Miyahara_2003.
Claim 20. Wang_2021 in view of Hou_2020 in view of Zheng_2021 in view of Ding_2021 makes obvious all the limitations of claim 16, including the plurality of feature values.
Wang_2021 makes obvious the additional limitation of a distance between an object and the vehicle in the driving scene (par 20: “the millimetre wave radar is installed on the front bumper of the vehicle, for collecting the distance information data of the vehicle and the vehicle front object”; par 39: “obtaining the relative movement information of each sensor group target object according to the distance detection module”; par 86: “the distance detection module can obtain the relative distance between the target object and the collecting vehicle by the strictly calibrated radar sensor, angle and so on information”).
Miyahara_2003 makes obvious the additional limitation of an angle of a location of the object with respect to the vehicle (abstract: “A method for tracking a target vehicle through a curve in a roadway is disclosed. The method includes measuring an azimuth angle between the target vehicle and a host vehicle”; figure 2; figure 3).
Miyahara_2003 makes obvious the additional limitation of an angle of a path of the object with respect to the vehicle (abstract: “Further, the target vehicle is determined to be in the same lane or path of the host vehicle by evaluating how well the developed theoretical relationship fits the with the measured azimuth angle and calculated relative velocity. Therefore, the present invention determines the path of a target vehicle without relying on inaccurate conventional methods based on the yaw rate of the host vehicle”; summary: “In an aspect of the present invention a method for determining whether a target vehicle is in the path of the host vehicle is provided. The method includes using an azimuth angle and calculating relative velocity between a target vehicle and a host vehicle at a predefined time interval”; figure 6A; figure 6B).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN C DONOHUE whose telephone number is (571)272-8972. The examiner can normally be reached Monday-Friday 7:30am - 5:00pm (Alternate Fridays off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente, can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.C.D./Examiner, Art Unit 2187
/BRIAN S COOK/Primary Examiner, Art Unit 2187