DETAILED ACTION
Claims 1 and 14 have been amended as of 12/05/2025. Claims 1-20 are being examined with the priority date of August 2, 2021 in accordance with applicant’s claim for foreign priority.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Examiner has corrected the previous error in the entry of the foreign priority documentation, and acknowledges the receipt of the certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/02/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Claim rejections under 35 USC 103
Applicant’s arguments, see Remarks filed 12/05/2025, with respect to the rejections of claims 1-6 and 14-18 under 35 U.S.C. 103 have been fully considered by the examiner but are not persuasive. Applicant argues that Kloeden does not teach uncertainty in detection. The examiner respectfully disagrees: Kloeden [0049] teaches that in situations where there is a high delay time in detection, such as when a pedestrian is walking, the system can detect transition states and delay times of the moving object; further, in [0058] the delay times are used in determining the confidence of detection for the detection region. The applicant’s specification notes in paragraph [0006] that the uncertainties are simply any uncertainty in the position or object type; one of ordinary skill in the art would understand that computing delay times in detection is analogous to an uncertainty in detection.
Applicant further argues that no sensor model is taught by Kloeden. The examiner respectfully disagrees. Kloeden denotes in [0066]-[0078] that the system has a BE calculation unit and two sensors, and that these sensors and the calculation unit work together to generate plausibility data and other data that is classified according to a set of classification rules. Paragraph [0006] of the applicant’s specification defines the sensor models as containing the first and second sensor results; given that Kloeden uses a first and a second rule for classifying the sensor results, these are analogous to the claimed sensor models. Therefore, for at least the reasons above, the examiner respectfully maintains the rejections under 35 U.S.C. 103.
[Image: reproduced excerpts of Kloeden, [0049] and [0058]]
[Image: reproduced excerpt of Kloeden, [0036]-[0040]]
[Images: reproduction of Kloeden figure 1 and reference symbols]
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-11 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kloeden (US 20140336866 A1) in view of Moustafa (US 20220126864 A1).
Regarding claim 1 Kloeden discloses; A method for merging sensor data, comprising: providing a sensor data set including first sensor data (Kloeden, [0006] receiving a first set of data from a signal from a first sensor);
analyzing the first sensor data (Kloeden, [0028] system analyzes the data using a calculation unit), generating a first sensor result and generating a first sensor model from a first analysis unit (Kloeden, [0037], first sensor data (first sensor result) is analyzed by the calculation unit (analysis unit), the result of this analysis is the first sensor model) executed on at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium),
the first sensor result being based on the analysis of the first sensor data and including at least one of a type of at least one detected object in the first sensor data (Kloeden, [0049] first sensor data is used to detect obstacles/objects such as pedestrians walking),
or a position of the at least one detected object, the first sensor model being associated with the first sensor result (Kloeden, [0049] the calculation unit has a sensor model associated with the sensor data and resulting data as a function of the first sensor data),
describing uncertainties in detection of the at least one detected object (Kloeden, [0049] a delay on the detected object (in this case a pedestrian) can be calculated, which would be an uncertainty in detection),
and being dependent on a first uncertainty data set (Kloeden, [0049] the calculation unit has a sensor model associated with the analyzed sensor data, which generates information data (ID) about the obstacles (uncertainty data)), the first uncertainty data set being a subset of the sensor data set (Kloeden, [0049] first sensor data is used to detect obstacles/objects such as pedestrians walking, which is a subset of the sensor data);
[wherein the uncertainties described by the first sensor model comprise a first quantitative value of a certainty for a first detected object and a second quantitative value of a certainty for a second detected object, and wherein the first sensor model automatically adopts to a current situation of a vehicle detected by the sensors;]
generating a second sensor result and generating a second sensor model from a second analysis unit (Kloeden, [0040] Second sensor is associated with a control device and a second classifier, the second classifier being the model generated from the sensor data) executed on the at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium), the second sensor model being associated with the second sensor result (Kloeden, [0040] second sensor has an associated second classifier model which is associated with the sensor results);
and merging the first sensor result and the second sensor result to form a fusion result from a fusion unit (Kloeden, [0059] fusion data is a combination of the ID data, which is associated with the first sensor, and the PD, which is associated with the second sensor) executed on the at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium), the merging being performed on the basis of the first sensor model and the second sensor model (Kloeden, [0019] combination of two different data sets (PD and ID) to form fusion data, [0065] fusion data is a function of the ID and PD data from the first and second sensors, processed by the first and second classifier models).
Kloeden does not teach; wherein the uncertainties described by the first sensor model comprise a first quantitative value of a certainty for a first detected object and a second quantitative value of a certainty for a second detected object, and wherein the first sensor model automatically adopts to a current situation of a vehicle detected by the sensors;
However, in the same field of endeavor, Moustafa teaches;
wherein the uncertainties described by the first sensor model comprise a first quantitative value of a certainty for a first detected object (Moustafa, [0276] the type of obstacle or object or hazard detected may be determined, [0297]-[0298] the vehicle may generate a confidence score based on the data received from the sensors, which includes the object detection and object position data) and a second quantitative value of a certainty for a second detected object (Moustafa, [0177] the system detects and tracks objects, the tracking will include an estimation of the object’s trajectory and movement in relation to the vehicle, [0257] the current position of the object and its velocity, [0297]-[0298] the vehicle may generate a confidence score based on the data received from the sensors, which includes the object detection and object position data), and wherein the first sensor model automatically adopts to a current situation of a vehicle detected by the sensors (Moustafa, [0217]-[0218] the system may use autonomy at varying times and may leverage the sensors and models to support higher or lower autonomy based on the situation, indicating it adjusts based on the situation);
The combination of Kloeden and Moustafa would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Kloeden teaches a system that verifies the accuracy/uncertainty of whether or not an obstacle or object is present using a sensor system on a vehicle. Moustafa teaches a system that generates a confidence that an object is correctly classified, along with the trajectory information and position data of the object. The motivation for combining the systems of Kloeden and Moustafa is that the confidence scoring capacity of Moustafa allows the system to verify the trajectory, position and classification of an object to more accurately assess the situation when the vehicle is responding. (Moustafa, [0177], [0257], [0276], and [0297]-[0299])
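For illustration of how per-object quantitative certainty values can drive a merging step of the kind mapped above, the standard inverse-variance weighting used in Bayesian sensor fusion can be sketched as follows. This is a generic textbook sketch with hypothetical sensor readings and variances; it is not code from Kloeden, Moustafa, or the present application.

```python
def fuse_estimates(x1: float, var1: float, x2: float, var2: float):
    """Fuse two scalar position estimates whose sensor models report
    per-detection variances (a larger variance means lower certainty)."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # weight each sensor by its certainty
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)  # certainty-weighted position
    fused_var = 1.0 / (w1 + w2)              # fused result is more certain than either input
    return fused, fused_var

# Hypothetical example: two detections of the same object, from a confident
# sensor (variance 0.25 m^2, reading 10.0 m) and a less certain one
# (variance 1.0 m^2, reading 11.0 m).
pos, var = fuse_estimates(10.0, 0.25, 11.0, 1.0)  # -> (10.2, 0.2)
```

The fused position lies closer to the more certain sensor's report, which is the behavior that quantitative per-object certainty values make possible in a fusion unit.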
Regarding claim 2 the combination of Kloeden and Moustafa teaches; The method according to Claim 1, wherein the first sensor data includes at least one of raw data from a first sensor or processed raw data from at least the first sensor (Kloeden, [0006] first sensor data is information data as a function of measurement from the first sensor signal).
Regarding claim 3 the combination of Kloeden and Moustafa teaches; The method according to Claim 1, wherein the sensor data set includes second sensor data (Kloeden, [0040] Raw data from a second sensor can be provided for analysis), and the method further comprises: analyzing the second sensor data (Kloeden, [0006] Plausibility data is determined as a function of the first or second sensor’s raw data, and it is analyzed using the second calculation rule), wherein the second sensor result is based on the analysis of the second sensor data (Kloeden, [0006] Plausibility data is determined as a function of the first or second sensor’s raw data, and it is analyzed using the second calculation rule), and the second sensor data includes at least one of raw data from a second sensor or processed raw data from at least the second sensor (Kloeden, [0040] Raw data from the second sensor is provided, [0006] second sensor data is raw or can be analyzed to generate plausibility data).
Regarding claim 4 the combination of Kloeden and Moustafa teaches; The method according to Claim 3, wherein at least one of the first sensor or the second sensor is from a group consisting of a camera, radar, lidar and an ultrasonic sensor (Kloeden, [0037] first sensor can be a camera, [0040] Second sensor is a vehicle sensor such as a camera).
Regarding claim 5 the combination of Kloeden and Moustafa teaches; The method according to Claim 3, wherein the first uncertainty data set is different from the first sensor data (Kloeden, [0040] and [0041] Information data, associated with the first sensor data, and Plausibility Data can be acquired from different sensors) and/or wherein the first uncertainty data set includes at least one from a group consisting of raw data from the first sensor (Kloeden, [0040] raw data from the first sensor), raw data from the second sensor (Kloeden, [0040] raw data from the second sensor, differs from the first), processed raw data from at least the first sensor (Kloeden, [0040] raw data can be from the first or second sensor, [0065] REF data is processed raw data), and processed raw data from at least the second sensor (Kloeden, [0040] raw data can be from the first or second sensor, [0065] REF data is processed raw data).
Regarding claim 6 the combination of Kloeden and Moustafa teaches; The method according to Claim 3, wherein the second sensor model is dependent on a second uncertainty data set (Kloeden, [0040], second classifier, data used in this can be provided by a second sensor), wherein the second uncertainty data set is a subset of the sensor data set (Kloeden, [0009] second calculation rule is applied to different raw data, determines plausibility data from raw data [0040] Plausibility data is a function of raw data) which is different from the second sensor data (Kloeden, [0040] raw data can be acquired from a first or second sensor), and includes at least one from a group, consisting of raw data from the first sensor (Kloeden, [0040] raw data that is analyzed through the control device can be from a first or second sensor), raw data from the second sensor (Kloeden, [0040] raw data that is analyzed through the control device can be from a first or second sensor), processed raw data from at least the first sensor (Kloeden, [0037] the calculation unit processes raw data from a first or second sensor), and processed raw data from at least the second sensor (Kloeden, [0037] the calculation unit processes raw data from a first or second sensor).
Regarding claim 7, Kloeden discloses; wherein the first sensor model ([0037], the first classifier KL1 is a Bayes classifier that determines ID data from raw data) and/or the second sensor model ([0040] second classifier KL2 relies on the Dempster-Shafer method and classifies PD as a function of the raw data) [include at least one from a group consisting of a statistical measurement uncertainty, a classification uncertainty, a detection probability and a false alarm rate.]
In the same field of endeavor Moustafa teaches; include at least one from a group consisting of a statistical measurement uncertainty, a classification uncertainty ([1521] classification accuracy is measured to determine the optimal sample ratio during training of the models), a detection probability and a false alarm rate.
Both Moustafa and Kloeden teach a vehicle with a sensor model configuration of multiple models or classifiers. However, Kloeden does not teach either model comprising the measures listed in the presently filed invention. In the same field of endeavor, Moustafa teaches that the model uses a classification accuracy measurement during training as a training quality metric. Adding the classification accuracy of a model used to process and classify sensor data would improve the overall quality of the classification performed. To one of ordinary skill in the art motivated to create an improved sensor model, this would have been obvious before the filing date of the presently claimed invention. Therefore, claim 7 is rejected over Kloeden in view of Moustafa.
Regarding claim 8 Kloeden teaches; wherein the generation of at least one of the first sensor model ([0037], the first classifier KL1 is a Bayes classifier that determines ID data from raw data) or the second sensor model ([0040] second classifier KL2 relies on the Dempster-Shafer method and classifies PD as a function of the raw data)
In the same field of endeavor Moustafa teaches; is performed by an algorithm that is dependent on at least one of the first uncertainty data set or the second uncertainty data set ([0175] multiple data sets can be utilized with the multiple machine learning models/sensor models, different models will be dependent on different data sets), respectively.
Kloeden discloses multiple classifiers that are set to generate data as a function of different sensors; however, it does not teach the sensor classifiers as dependent on sets of uncertainty data or processed data. In the same field of endeavor, Moustafa teaches multiple sensor models implemented on an AV, each model dependent on a different set of data respectively. Combining the systems of Kloeden and Moustafa would have been obvious to one of ordinary skill in the art motivated to create a set of sensor models dependent on different sets of data. Therefore, claim 8 is rejected over Kloeden in view of Moustafa.
Regarding claim 9 Kloeden does not disclose; wherein the generation of the first sensor model is performed by a trained first machine learning system, and/or the generation of the second sensor model is performed by a trained second machine learning system.
In the same field of endeavor Moustafa teaches; wherein the generation of the first sensor model is performed by a trained first machine learning system ([0125] figure 120 B, a machine learning model generates a context model, [0787] Sensor data is fed into the machine learning algorithm, algorithm outputs a context model), and/or the generation of the second sensor model is performed by a trained second machine learning system ([0783] a machine learning algorithm takes in ground truth data and outputs a model, machine learning model can be of any type).
[Images: reproduced excerpts of Moustafa [0783], emphasis added]
Kloeden does not disclose the use of machine learning models to generate the sensor models; however, in the same field of endeavor Moustafa does disclose the use of machine learning to create models from data. Given that Moustafa teaches models generated by machine learning algorithms, this would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Combination of the system of Kloeden with the model generation method of Moustafa would yield a system with the same functional capabilities as the presently claimed invention; therefore, claim 9 is rejected over Kloeden in view of Moustafa.
Regarding claim 10 Kloeden does not disclose; wherein the first machine learning system and/or the second machine learning system is from a group including a deep neural network, probabilistic graphical models, Bayesian networks and Markov fields.
In the same field of endeavor Moustafa teaches; wherein the first machine learning system and/or the second machine learning system is from a group including a deep neural network ([0174] machine learning models used in the system may be a Bayesian or a deep learning model [0787] Machine learning algorithm may be a deep neural network), probabilistic graphical models, Bayesian networks and Markov fields.
Kloeden does not teach the use of a machine learning algorithm composed of a deep neural network, a PGM, a Bayesian network or a Markov field. However, in the same field of endeavor Moustafa teaches the use of deep neural networks in model generation. It would have been obvious to one of ordinary skill in the art, before the filing date of the presently claimed invention, to use deep neural networks in order to improve the speed and quality of the generated output model. Therefore, claim 10 is rejected over Kloeden in view of Moustafa.
Regarding claim 11 Kloeden does not disclose; wherein the fusion unit is based on a Bayesian fusion method, a Kalman filter, a multi-model filter, a filter based on random finite sets or a particle filter
However, in the same field of endeavor Moustafa teaches; wherein the fusion unit is based on a Bayesian fusion method, a Kalman filter ([0777] sensor fusion algorithms such as a Kalman filter can be used in sensor data fusion, [0848] during data fusion, filtering using a Kalman filter may be performed), a multi-model filter, a filter based on random finite sets or a particle filter, and the fusion unit is used in conjunction with a data association method, a Dempster-Shafer fusion method ([0869] data fusion (fused soft targets) may be determined using Dempster-Shafer theory, fuzzy logic or Bayesian inference), fuzzy logic, or probabilistic logics.
Kloeden does not teach a combination of fusion methods such as those listed in the presently claimed invention. However, Moustafa teaches several different methods of data fusion, utilizing a combination of Kalman filtering, Bayesian fusion methods, Dempster-Shafer fusion methods and fuzzy logic. Moustafa teaches in [0869] of the specification that soft targets (a type of fusion data taught in that invention) may be fused in any suitable manner, including the methods listed above. Given this teaching, it would have been obvious to one of ordinary skill in the art before the filing date of the presently claimed invention to combine these methods of data fusion as claimed in claim 11.
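As background on the Dempster-Shafer fusion method named in the claim and cited from Moustafa [0869], Dempster's rule of combination for two mass functions can be sketched as follows. This is a generic illustration of the published rule; the frame of discernment (pedestrian vs. vehicle) and the mass values are hypothetical and not taken from either reference.

```python
def combine_dempster(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment; keys are frozensets of hypotheses, values are masses."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:      # disjoint hypotheses contribute to the conflict mass
                conflict += p * q
    k = 1.0 - conflict  # normalize out the conflicting mass
    return {h: v / k for h, v in combined.items()}

# Hypothetical example: two sensors classifying one detection.
PED, VEH = frozenset({"pedestrian"}), frozenset({"vehicle"})
EITHER = PED | VEH  # mass assigned to "unsure which"
m_camera = {PED: 0.7, EITHER: 0.3}
m_radar = {PED: 0.6, VEH: 0.1, EITHER: 0.3}
fused = combine_dempster(m_camera, m_radar)  # pedestrian mass rises to ~0.87
```

Combining the two sources concentrates mass on the hypothesis they agree on, which is the behavior that makes the method attractive for merging classifier outputs from heterogeneous sensors.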
Regarding claim 13 Kloeden does not disclose; wherein the fusion result is a vehicle environment model. However, in the same field of endeavor Moustafa teaches: wherein the fusion result is a vehicle environment model ([0184] The vehicle possesses many sensors to gather environmental data from the surrounding area, such data can be used in combination to get a better picture of the data. [0257] Sensor data fusion can be used to create an environmental model of the surrounding area around the vehicle).
[Image: reproduced excerpt of Moustafa, [0184], emphasis added]
[Images: reproduced excerpts of Moustafa, [0257], emphasis added]
Kloeden does not disclose an environmental model, however in the same field of endeavor, Moustafa teaches creating an environmental model from sensor data to gather information on the vehicle’s surroundings. Given the method of generating an environmental model taught in Moustafa, one of ordinary skill in the art would have been motivated to apply this method to the system taught in Kloeden in order to create an autonomous vehicle with better capabilities for sensing its surroundings. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention to combine the system of Kloeden with the methods of Moustafa to create a system with the same capabilities as claimed in claim 13.
Regarding claim 14 the combination of Kloeden and Moustafa teaches; A sensor system for merging sensor data, comprising a first sensor (Kloeden, Fig1, SE1, [0006] receiving a first set of data from a signal from a first sensor);
and a signal processing device comprising at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium), comprising:
a first analysis unit which configures the at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium) to analyze first sensor data (Kloeden, Figure 1, BE, [0028] system analyzes the data using a calculation unit), and to generate a first sensor result including at least one of a type of at least one detected object in the first sensor data or a position of the at least one detected object (Kloeden, [0037], first sensor data (first sensor result) is analyzed by the calculation unit (analysis unit), the result of this analysis is the first sensor model [0049] first sensor data is used to detect obstacles/objects such as pedestrians walking (detecting a type of object)) and a first sensor model associated with the first sensor result (Kloeden, [0037], first sensor data (first sensor result) is analyzed by the calculation unit (analysis unit), the result of this analysis is the first sensor model), the first sensor result describing uncertainties in detection of the at least one detected object (Kloeden, [0049] a delay on the detected object (in this case a pedestrian) can be calculated, which would be an uncertainty in detection), the first sensor model being dependent on a first uncertainty data set, which is a subset of a sensor data set (Kloeden, [0037] calculation unit contains a first sensor, and determines data as a function of this first sensor data, information data (ID) (uncertainty data) is determined as being dependent on the first sensor data);
wherein the uncertainties described by the first sensor model comprise a first quantitative value of a certainty for a first detected object (Moustafa, [0276] the type of obstacle or object or hazard detected may be determined, [0297]-[0298] the vehicle may generate a confidence score based on the data received from the sensors, which includes the object detection and object position data) and a second quantitative value of a certainty for a second detected object (Moustafa, [0177] the system detects and tracks objects, the tracking will include an estimation of the object’s trajectory and movement in relation to the vehicle, [0257] the current position of the object and its velocity, [0297]-[0298] the vehicle may generate a confidence score based on the data received from the sensors, which includes the object detection and object position data), and wherein the first sensor model automatically adopts to a current situation of a vehicle detected by the sensors (Moustafa, [0217]-[0218] the system may use autonomy at varying times and may leverage the sensors and models to support higher or lower autonomy based on the situation, indicating it adjusts based on the situation);
a second analysis unit which configures the at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium) to generate a second sensor result and a second sensor model associated with the second sensor result (Kloeden, Figure 1, KL2, second classifier/analysis, SE2 second sensor, [0040] Second sensor is associated with a control device and a second classifier);
and a fusion unit which configures the at least one processor (Kloeden, [0031] the data processing device reads code stored on a computer readable medium) to merge the first sensor result and the second sensor result, on the basis of the first sensor model and the second sensor model, to form a fusion result (Kloeden, Figure 1, SV, control device, [0041] Control device calculates fusion data, [0059] Fusion data is a combination of the ID data, which is associated with the first sensor, and the PD, which is associated with the second sensor).
The combination of Kloeden and Moustafa would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Kloeden teaches a system that verifies the accuracy/uncertainty of whether or not an obstacle or object is present using a sensor system on a vehicle. Moustafa teaches a system that generates a confidence that an object is correctly classified, along with the trajectory information and position data of the object. The motivation for combining the systems of Kloeden and Moustafa is that the confidence scoring capacity of Moustafa allows the system to verify the trajectory, position and classification of an object to more accurately assess the situation when the vehicle is responding. (Moustafa, [0177], [0257], [0276], and [0297]-[0299])
Regarding claim 15 the combination of Kloeden and Moustafa teaches; A vehicle, comprising the sensor system according to Claim 14 (Kloeden, [0011] System is designed for use on a vehicle).
Regarding claim 16 the combination of Kloeden and Moustafa teaches; The method according to Claim 2, wherein the first sensor model is based on raw data from the first sensor and raw data from a second sensor (Kloeden, [0052] the raw data provided to the system is from the first sensor and/or the second, [0056] the plausibility data is determined from the raw data (sensor model)).
Regarding claim 17 the combination of Kloeden and Moustafa teaches; The method according to Claim 16, wherein the first and second sensors are different types of sensors (Kloeden, [0040] the system has a first sensor and a second sensor which are different from one another).
Regarding claim 18 the combination of Kloeden and Moustafa teaches; The sensor system according to Claim 14, wherein the first sensor model is based on raw data from the first sensor and raw data from a second sensor (Kloeden, [0052] the raw data provided to the system is from the first sensor and/or the second, [0056] the plausibility data is determined from the raw data (sensor model)).
Regarding claim 19 the combination of Kloeden and Moustafa teaches; The sensor system according to Claim 18, wherein the first sensor comprises a camera and the second sensor comprises one of radar, lidar or an ultrasonic sensor (Kloeden, [0037] first sensor can be a camera).
Kloeden does not disclose that the second sensor comprises one of radar, lidar or an ultrasonic sensor. However, in the same field of endeavor, Moustafa teaches; the second sensor comprises one of radar, lidar or an ultrasonic sensor (Moustafa, [0184] the system may utilize a variety (plurality) of sensor data including cameras, LIDAR sensors, Radar sensors, which indicates a plurality of sensors, meaning a first and a second where they can be any combination of the previously listed sensors).
The combination of Kloeden and Moustafa would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Kloeden and Moustafa both teach systems with multiple sensors generating data. Moustafa specifically teaches that the system may use data from multiple types of sensors; therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention to use two sensors where one sensor is a camera and the other is a radar, LIDAR or ultrasonic sensor. (Moustafa, [0184])
Claims 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kloeden (US 20140336866 A1) in view of Moustafa (US 20220126864 A1) and Le Henaff (US 12258027).
Regarding claim 12 Kloeden does not disclose; wherein a first fallback sensor model is defined, which is independent of the sensor data set and if the first uncertainty data set is incorrect and/or incomplete, the first fallback sensor model is used instead of the first sensor model, and wherein the first sensor model comprises a dynamic sensor model and the first fallback sensor model is a static sensor model.
However, in the same field of endeavor Moustafa teaches; wherein a first fallback sensor model is defined, which is independent of the sensor data set and if the first uncertainty data set is incorrect and/or incomplete, the first fallback sensor model is used instead of the first sensor model (Moustafa, [0220] when a sensor (925) is detected as being faulty, an independent separate sensor’s data may be used to supplement the data from the faulty sensor; which sensor is used as the fallback is dependent on the sensor position, and the fallback data can come from one sensor or from multiple sensors independent of the one that is faulty).
[Image: reproduced excerpt of Moustafa [0220], emphasis added]
Kloeden does not disclose the use of a fallback sensor that is independent of the first sensor. However, Moustafa teaches that in the event of sensor failure, the system will utilize data from an independent sensor in a similar area of the vehicle in place of the failing sensor. Given the teachings of Moustafa, one of ordinary skill in the art would have been motivated to utilize a fallback sensor in order to prevent an autonomous vehicle system from failing in the event of a sensor malfunction. It would have been obvious to one of ordinary skill in the art to combine the method of Moustafa with the system of Kloeden to create a system with the same functional capabilities as those claimed in claim 12.
The combination of Kloeden and Moustafa fails to teach; and wherein the first sensor model comprises a dynamic sensor model and the first fallback sensor model is a static sensor model.
However, in the same field of endeavor, Le Henaff teaches: and wherein the first sensor model comprises a dynamic sensor model (Le Henaff, Column 7, Lines 50-67: sensor data is captured by the vehicle, which is being interpreted as the first sensor model; because the sensor data captures the environment continuously during driving, this is a dynamic model) and the first fallback sensor model is a static sensor model (Le Henaff, Column 7, Lines 50-67, and Column 8, Lines 1-5: if the data is determined to be faulty, the data log may be modified to compensate for a faulty sensor or component; because this modification applies to a portion of the data upon the detection of a fault, this model would be static, since the data is not being regathered continuously).
The combination of Kloeden, Moustafa, and Le Henaff would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The combination of Kloeden and Moustafa teaches a system for navigating an autonomous vehicle using multiple sensor models and a fallback sensor model to be used when one of the sensors fails. Le Henaff teaches that the first sensor model is a continuous, or dynamic, sensor model, and that the fallback sensor model is a modification of a static portion of the faulty data. The motivation for the combination would be that creating a static fallback model allows the vehicle sensor system to correct erroneous data efficiently when a fault in the data is detected. (See Le Henaff, Column 7, Lines 50-67, and Column 8, Lines 1-30)
Regarding claim 20, Kloeden does not disclose: the sensor system according to Claim 14, wherein a first fallback sensor model is defined, which is independent of the sensor data set and if the first uncertainty data set is incorrect and/or incomplete, the first fallback sensor model is used instead of the first sensor model, and wherein the first sensor model comprises a dynamic sensor model and the first fallback sensor model is a static sensor model.
However, in the same field of endeavor, Moustafa teaches: wherein a first fallback sensor model is defined, which is independent of the sensor data set and if the first uncertainty data set is incorrect and/or incomplete, the first fallback sensor model is used instead of the first sensor model (Moustafa, [0220]: when a sensor (925) is detected as faulty, data from a separate, independent sensor may be used to supplement the data from the faulty sensor; which sensor serves as the fallback depends on the sensor position, and the supplemental data may come from a single sensor or from multiple sensors independent of the faulty one).
Kloeden does not disclose the use of a fallback sensor that is independent of the first sensor. However, Moustafa teaches that in the event of sensor failure, the system will utilize data from an independent sensor in a similar area of the vehicle in place of the failing sensor. Given the teachings of Moustafa, one of ordinary skill in the art would have been motivated to utilize a fallback sensor in order to prevent an autonomous vehicle system from failing in the event of a sensor malfunction. It would have been obvious to one of ordinary skill in the art to combine the method of Moustafa with the system of Kloeden to create a system with the same functional capabilities as those claimed in claim 20.
The combination of Kloeden and Moustafa fails to teach: and wherein the first sensor model comprises a dynamic sensor model and the first fallback sensor model is a static sensor model.
However, in the same field of endeavor, Le Henaff teaches: and wherein the first sensor model comprises a dynamic sensor model (Le Henaff, Column 7, Lines 50-67: sensor data is captured by the vehicle, which is being interpreted as the first sensor model; because the sensor data captures the environment continuously during driving, this is a dynamic model) and the first fallback sensor model is a static sensor model (Le Henaff, Column 7, Lines 50-67, and Column 8, Lines 1-5: if the data is determined to be faulty, the data log may be modified to compensate for a faulty sensor or component; because this modification applies to a portion of the data upon the detection of a fault, this model would be static, since the data is not being regathered continuously).
The combination of Kloeden, Moustafa, and Le Henaff would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The combination of Kloeden and Moustafa teaches a system for navigating an autonomous vehicle using multiple sensor models and a fallback sensor model to be used when one of the sensors fails. Le Henaff teaches that the first sensor model is a continuous, or dynamic, sensor model, and that the fallback sensor model is a modification of a static portion of the faulty data. The motivation for the combination would be that creating a static fallback model allows the vehicle sensor system to correct erroneous data efficiently when a fault in the data is detected. (See Le Henaff, Column 7, Lines 50-67, and Column 8, Lines 1-30)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Timm (US 20190347488 A1), which teaches a sensor system for determining the area around an autonomous vehicle while it is traveling.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666