DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/30/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.
Examiner’s Note
To aid the reader, the examiner notes that, in this detailed action, claim language appears in bold, strikethrough limitations are those not explicitly taught by the cited reference, and language added to explain a reference mapping is set off from quotations by square brackets.
Response to Arguments
Applicant's arguments filed 12/05/2025 have been fully considered but they are not persuasive. An explanation is provided below.
Applicant alleges on p.5 “The references mention various sensors but do not select the two items in any comparison based upon precision and error rates. In other words, the references are completely silent as to these features. Since at least one claim feature is not taught or suggested by the art, the claims are allowable over the art.”
The Examiner respectfully disagrees. Klotzbeucher calibrates an angle measurement of a vehicle sensor to obtain a sequence of detections associated with a target, using ground truth values based on a minimum detected distance from the vehicle to the target, such that the calculation is more precise than the sensor measurement, as stated on p. 7: “due to calibration error, there is a spread in estimates of bearing to the detected object, which shows up as some deviation in the upper part 215 of the graph 200. It is desired to compensate for this error by calibrating the radar transceiver 110.” As such, applicant’s arguments are unpersuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 3-8 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa et al. (US 20220126864, hereinafter Moustafa) in view of Klotzbeucher et al. (EP 3761054, hereinafter Klotzbeucher).
Regarding claim 1, Moustafa teaches A method for assessing measuring uncertainties of at least one environment detection sensor of an ego vehicle, comprising (0798 “FIG. 124A illustrates an approach for learning weights for sensors under different contexts in accordance with certain embodiments. First, a model that detects objects as accurately as possible may be trained for each individual sensor, e.g., camera, LIDAR, or radar”; 0060 “FIG. 57 depicts a flow for triggering an action based on an accuracy of a linear classifier.”):
- recording an environment of an ego vehicle by means of at least one environment detection sensor of the ego vehicle (0166 “autonomous driving stacks may allow vehicles to self-control or provide driver assistance to detect roadways, navigate from one point to another, detect other vehicles and road actors (e.g., pedestrians (e.g., 135), bicyclists, etc.), detect obstacles and hazards (e.g., 120), and road conditions (e.g., traffic, road conditions, weather conditions, etc.),”);
- detecting at least one object which is located in a region in front of the ego vehicle in a direction of travel (0166 “autonomous driving stacks may allow vehicles to self-control or provide driver assistance to detect roadways, navigate from one point to another, detect other vehicles and road actors (e.g., pedestrians (e.g., 135), bicyclists, etc.), detect obstacles and hazards (e.g., 120), and road conditions (e.g., traffic, road conditions, weather conditions, etc.),”);
- specifying a sensor output of at least one environment detection sensor as a ground truth (0431 “Data scoring trainer 4928 trains models on categories and/or scores. In various embodiments, the instances of the detected objects and their associated scores and/or categories may be used as ground truth by the data scoring trainer 4928”);
- calculating a position of the object in relation to the ego vehicle at an earlier point in time based on data of a system for positioning (Abstract “Sensor data is received from a plurality of sensors”),
- comparing a sensor output at the earlier point in time with the calculated position of the object (0786 “In a particular embodiment, to assist with object tracking, when the ground truth data are available for different contexts and object position at various instants under these different contexts, the fusion weights may be determined from the training data using a combination of a machine learning algorithm that predicts context and a tracking fusion algorithm that facilitates prediction of object position.”); and
- assessing a measuring inaccuracy of the at least one environment detection sensor based on a result of the comparison (0790 “the fusion algorithm 12102 may take data (e.g., sensor data 12104) from various sensors and ground truth context info 12106 as input, fuse the data together using different weights, predict an object position using the fused data, and utilize a cost function (such as a root-mean squared error (RMSE) or the like) that minimizes the error between the predicted position and the ground truth position (e.g., corresponding location of object locations 12108).”; 0060 “FIG. 57 depicts a flow for triggering an action based on an accuracy of a linear classifier.”)
- wherein the system for positioning is an odometry system of the ego vehicle (0384 “FIG. 41 shows a variety of sensor inputs including non-line of sight, line of sight, vehicle state, and positioning.”; 0178 “A localization engine 240 may also be included within an in-vehicle processing system 210 . . . to determine a high confidence location of the vehicle and the space it occupies within a given physical space (or “environment”); 1003 “The autonomous vehicle can then use the new dimensions in its autonomous vehicle algorithms, including for example, the safe distance algorithm.””) such that the calculated position of the object at the earlier point in time is based on data from the odometry system (0846 “combining multiple images or LIDAR scans captured at slightly different times and accounting for the motion of the sensor between the two capture times. This combination of multiple images of the same scene enables improved resolution (super-resolution), noise reduction, and other forms of sensor fusion”; 0848 “In some embodiments, a time-recursive method of filtering may be used. A time-recursive image filter may use the previously filtered image at the previous time instant and combine it with image data sensed at the current time”).
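For illustration only, and not as part of the examiner's mapping, the claimed technique of propagating an earlier detection forward with odometry data and comparing it against the sensor output can be sketched as follows; all names, units, and values are hypothetical:

```python
import math

def odometry_displacement(wheel_revs, wheel_circumference_m, heading_rad):
    # Distance traveled derived from wheel revolutions (a simple odometry
    # model), resolved into x/y components along the ego vehicle's heading.
    distance_m = wheel_revs * wheel_circumference_m
    return (distance_m * math.cos(heading_rad),
            distance_m * math.sin(heading_rad))

def assess_inaccuracy(earlier_detection, ground_truth_now, ego_motion):
    # Shift the earlier detection into the current ego frame using the
    # odometry-derived ego motion, then measure the residual against the
    # near-range sensor output treated as ground truth.
    dx, dy = ego_motion
    predicted = (earlier_detection[0] - dx, earlier_detection[1] - dy)
    return math.hypot(predicted[0] - ground_truth_now[0],
                      predicted[1] - ground_truth_now[1])
```

A residual near zero would suggest the earlier sensor output was accurate; a larger residual quantifies the measuring inaccuracy at that range.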
Moustafa does not explicitly teach the strikethrough limitations. However, in a related field of endeavor, Klotzbeucher teaches
specifying a sensor output of at least one environment detection sensor as a ground truth when a distance between the ego vehicle and the detected object falls below a specifiable minimum distance between the ego vehicle and the detected object (Abstract “determining a corresponding sequence of ground truth values associated with the stationary target (140), wherein the ground truth values are determined based on a minimum detected distance from the vehicle (100) to the stationary target (140) and on a track of the vehicle”), wherein the calculation is more precise and subject to fewer errors than the sensor output (p. 7 “due to calibration error, there is a spread in estimates of bearing to the detected object, which shows up as some deviation in the upper part 215 of the graph 200. It is desired to compensate for this error by calibrating the radar transceiver 110.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the sensor calibration system and method of Klotzbeucher with the autonomous vehicle system and method of Moustafa. One would have been motivated to do so in order to advantageously reduce complexity and conserve signal processing resources (Klotzbeucher p. 5). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Klotzbeucher merely teaches that it is well-known to incorporate the particular ground truth features. Since both Moustafa and Klotzbeucher disclose similar ADAS technology, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
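As context for the fusion passage quoted from Moustafa (para. 0790) in the mapping of claim 1 above, a weighted fusion of per-sensor position estimates scored by root-mean-squared error can be sketched as follows; the code is illustrative only and uses hypothetical names, not language drawn from either reference:

```python
import math

def fused_position(weights, sensor_positions):
    # Weighted fusion of (x, y) position estimates from multiple sensors;
    # weights would ordinarily be learned per context from training data.
    wx = sum(w * p[0] for w, p in zip(weights, sensor_positions))
    wy = sum(w * p[1] for w, p in zip(weights, sensor_positions))
    total = sum(weights)
    return wx / total, wy / total

def rmse(predictions, ground_truths):
    # Root-mean-squared error between fused predictions and ground truth
    # positions, the cost to be minimized when learning the fusion weights.
    squared = [(p[0] - g[0]) ** 2 + (p[1] - g[1]) ** 2
               for p, g in zip(predictions, ground_truths)]
    return math.sqrt(sum(squared) / len(squared))
```

Minimizing this cost over candidate weight sets yields the context-dependent fusion weights described in the quoted passage.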
Regarding claim 3, Moustafa in view of Klotzbeucher teach The method according to Claim 1, wherein the object is a further road user (Moustafa fig 1), a feature of the surroundings or a landmark.
Regarding claim 4, Moustafa in view of Klotzbeucher teach The method according to Claim 1, further comprising providing the assessed measuring inaccuracy to a at least one of a sensor fusion system or a driver assistance system (Moustafa 0126 “FIG. 121 depicts a fusion algorithm to generate a fusion-context dictionary in accordance with certain embodiments.”; 0790 “the fusion algorithm 12102 may take data (e.g., sensor data 12104) from various sensors and ground truth context info 12106 as input, fuse the data together using different weights, predict an object position using the fused data”).
Regarding claim 5, Moustafa in view of Klotzbeucher teach The method according to Claim 1, further comprising establishing at least one angle between the object and ego vehicle (Klotzbeucher p. 3 “estimating relative angle to a detected object with respect to, e.g., a boresight direction of the radar transceiver is a difficult problem in general.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the sensor calibration system and method of Klotzbeucher with the autonomous vehicle system and method of Moustafa. One would have been motivated to do so in order to advantageously reduce complexity and conserve signal processing resources (Klotzbeucher p. 5). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Klotzbeucher merely teaches that it is well-known to incorporate the particular ground truth features. Since both the prior combination and Klotzbeucher disclose similar ADAS technology, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
Regarding claim 6, Moustafa in view of Klotzbeucher teach The method according to Claim 1, wherein assessing the measuring inaccuracy further comprises considering further environmental factors (Moustafa 0297 “as shown in the example of FIG. 12, internal system health information 1210 may be provided (e.g., from one or more internal sensors and/or a system diagnostics module) along with data 1215 (from integrated or extraneous sensors) describing conditions of the external environment surrounding the vehicle (e.g., weather information, road conditions, traffic conditions, etc.) or describing environmental conditions along upcoming portions of a determined path plan, among other example inputs. The machine learning model 1205 may determine one or more types of events from these inputs, such as broken or otherwise compromised sensors (e.g., 1220) and weather (e.g., 1225) events, such as discussed above, as well as communication channel characteristics (1230) (e.g., such as areas of no coverage, unreliable signal, or low bandwidth wireless channels, which may force the vehicle to collect rich or higher-fidelity data for future use using event and classification models), and road condition and traffic events (e.g., 1235)”).
Regarding claim 7, Moustafa in view of Klotzbeucher teach The method according to Claim 6, wherein the further environmental factors comprise at least one of one or more current weather conditions or a time of day (Moustafa 0297 “as shown in the example of FIG. 12, internal system health information 1210 may be provided (e.g., from one or more internal sensors and/or a system diagnostics module) along with data 1215 (from integrated or extraneous sensors) describing conditions of the external environment surrounding the vehicle (e.g., weather information, road conditions, traffic conditions, etc.) or describing environmental conditions along upcoming portions of a determined path plan, among other example inputs. The machine learning model 1205 may determine one or more types of events from these inputs, such as broken or otherwise compromised sensors (e.g., 1220) and weather (e.g., 1225) events, such as discussed above, as well as communication channel characteristics (1230) (e.g., such as areas of no coverage, unreliable signal, or low bandwidth wireless channels, which may force the vehicle to collect rich or higher-fidelity data for future use using event and classification models), and road condition and traffic events (e.g., 1235)”).
Regarding claim 8, claim 8 recites substantially the same limitations as claim 1. Therefore, claim 8 is rejected for substantially the same reasons as claim 1. Moustafa further teaches the computing device may be external (0167 “For instance, as shown in the illustrative example of FIG. 1, supporting drones 180 (e.g., ground-based and/or aerial), roadside computing devices (e.g., 140), various external (to the vehicle, or “extraneous”) sensor devices (e.g., 160, 165, 170, 175, etc.), and other devices may be provided as autonomous driving infrastructure separate from the computing systems, sensors, and logic implemented on the vehicles (e.g., 105, 110, 115) to support and improve autonomous driving results provided through the vehicles, among other examples.”).
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa et al. (US 20220126864, hereinafter Moustafa) in view of Klotzbeucher et al. (EP 3761054, hereinafter Klotzbeucher) as applied to claims 1 and 8, and further in view of Hilligardt et al. (US 20190300007, hereinafter Hilligardt).
Regarding claim 10, the cited prior art teaches The system according to Claim 8,
The cited prior art does not explicitly teach the strikethrough limitations. However, in a related field of endeavor, Hilligardt teaches
wherein the odometry system comprises a wheel speed sensor, and the data from the odometry system comprises data from the wheel speed sensor (0040 “Additionally or alternatively, the odometry sensors 316 may include one or more encoders, Hall speed sensors, and/or other measurement sensors/devices configured to measure a wheel speed, rotation, and/or number of revolutions made over time.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the autonomous vehicle system and method of Hilligardt with the cited prior art. One would have been motivated to do so in order to advantageously increase the amount of information available to an autonomous vehicle system (Hilligardt 0026). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Hilligardt merely teaches that it is well-known to incorporate the particular odometry features. Since both the prior combination and Hilligardt disclose similar ADAS technology for autonomous vehicles, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
Regarding claim 11, the cited prior art teaches The system according to Claim 8,
The cited prior art does not explicitly teach the strikethrough limitations. However, in a related field of endeavor, Hilligardt teaches
wherein the data from the odometry system comprises a number of wheel revolutions of the ego vehicle (0040 “Additionally or alternatively, the odometry sensors 316 may include one or more encoders, Hall speed sensors, and/or other measurement sensors/devices configured to measure a wheel speed, rotation, and/or number of revolutions made over time.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the autonomous vehicle system and method of Hilligardt with the cited prior art. One would have been motivated to do so in order to advantageously increase the amount of information available to an autonomous vehicle system (Hilligardt 0026). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Hilligardt merely teaches that it is well-known to incorporate the particular odometry features. Since both the prior combination and Hilligardt disclose similar ADAS technology for autonomous vehicles, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
Regarding claim 12, the cited prior art teaches The method according to Claim 1,
The cited prior art does not explicitly teach the strikethrough limitations. However, in a related field of endeavor, Hilligardt teaches
wherein the odometry system comprises a wheel speed sensor, and the data from the odometry system comprises data from the wheel speed sensor (0040 “Additionally or alternatively, the odometry sensors 316 may include one or more encoders, Hall speed sensors, and/or other measurement sensors/devices configured to measure a wheel speed, rotation, and/or number of revolutions made over time.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the autonomous vehicle system and method of Hilligardt with the cited prior art. One would have been motivated to do so in order to advantageously increase the amount of information available to an autonomous vehicle system (Hilligardt 0026). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Hilligardt merely teaches that it is well-known to incorporate the particular odometry features. Since both the prior combination and Hilligardt disclose similar ADAS technology for autonomous vehicles, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
Regarding claim 13, the cited prior art teaches The method according to Claim 1,
The cited prior art does not explicitly teach the strikethrough limitations. However, in a related field of endeavor, Hilligardt teaches
wherein the data from the odometry system comprises a number of wheel revolutions of the ego vehicle (0040 “Additionally or alternatively, the odometry sensors 316 may include one or more encoders, Hall speed sensors, and/or other measurement sensors/devices configured to measure a wheel speed, rotation, and/or number of revolutions made over time.”).
Furthermore, it would have been obvious to one of ordinary skill in the art, at the time of filing of the instant application, to include the autonomous vehicle system and method of Hilligardt with the cited prior art. One would have been motivated to do so in order to advantageously increase the amount of information available to an autonomous vehicle system (Hilligardt 0026). Further still, the Supreme Court in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) provides that combining prior art elements according to known methods to yield predictable results may render a claimed invention obvious over such combination. Here, Hilligardt merely teaches that it is well-known to incorporate the particular odometry features. Since both the prior combination and Hilligardt disclose similar ADAS technology for autonomous vehicles, one of ordinary skill in the art would recognize that the combination of elements here has previously been executed according to known methods, thereby evidencing that such combination would yield predictable results.
Conclusion
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Han et al. (US 12065147) discloses “A method of simultaneously estimating a movement and shape of a target vehicle using a preliminary distribution model of a tracklet may have high reliability in estimation performance even with a change in heading of the target vehicle (See abstract)”.
Kim et al. (US 20200189525) discloses “An active vehicle control notification method may include receiving external data from a server by a vehicle, determining whether the external data received from the server and a control driving condition of the vehicle are matched, by the vehicle, when the external data and the control driving condition are not matched, correcting the external data based on the control driving condition inside the vehicle, and uploading the corrected external data to the server (See abstract)”.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISMAAEEL A SIDDIQUEE whose telephone number is (571)272-3896. The examiner can normally be reached on Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Kelleher can be reached on (571) 272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISMAAEEL A. SIDDIQUEE/
Examiner, Art Unit 3648
/William Kelleher/Supervisory Patent Examiner, Art Unit 3648