Prosecution Insights
Last updated: April 19, 2026
Application No. 18/628,490

NEURAL NETWORKS TO USE SIMULATIONS TO ADJUST DATA

Status: Non-Final Office Action (§103)
Filed: Apr 05, 2024
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 5m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs TC avg)
Interview Lift: +36.2% (strong; allowance rate of resolved cases with an interview vs. without)
Typical Timeline: 3y 5m avg prosecution; 32 applications currently pending
Career History: 660 total applications across all art units
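The panel's headline numbers are simple ratios over the examiner's resolved cases. A minimal sketch of how they can be recomputed; the counts (383 granted, 628 resolved) are from the panel above, while the with/without-interview allowance rates are hypothetical values chosen only to illustrate how a +36.2-point lift would be derived:

```python
# Recompute the Examiner Intelligence panel's metrics from raw counts.
# `granted` and `resolved` come from the panel; the interview split below
# is a hypothetical illustration, not data from the page.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap between allowance rates with and without an interview."""
    return rate_with - rate_without

rate = allow_rate(383, 628)
print(f"Career allow rate: {rate:.1f}%")  # ~61.0%, matching the displayed 61%

# Hypothetical split: interviewed cases allowed at 97.0%, the rest at 60.8%,
# which would reproduce the displayed +36.2-point lift.
print(f"Interview lift: {interview_lift(97.0, 60.8):+.1f} pts")
```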

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Rates are compared against Tech Center average estimates • Based on career data from 628 resolved cases
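Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center average estimate; in this table every displayed delta is consistent with a single 40.0% estimate (e.g. 10.8% + 29.2% = 40.0%). A small sketch, assuming that derived estimate:

```python
# Reproduce the "vs TC avg" deltas in the statute table. The examiner's
# per-statute rates are from the table; the 40.0% Tech Center average is the
# estimate implied by the displayed deltas, not a figure stated on the page.

examiner_rates = {"101": 10.8, "103": 40.0, "102": 12.7, "112": 27.7}
tc_avg_estimate = 40.0  # implied by, e.g., 10.8% + 29.2% = 40.0%

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```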

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 - 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al. U.S. Publication No. 2023/0325983 A1 in view of Wu et al. U.S. Publication No. 2020/0293064 A1. - With regards to claim 1, Shibata et al. disclose a processor, (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) comprising: one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) to use one or more neural networks to use first sensor information from one or more simulations of a device (Shibata et al., Figs. 2A - 3B, 5, 9 & 10, Pg. 2 ¶ 0025 - 0026, 0031, 0035 - 0038 and 0040, Pg. 3 ¶ 0051, Pg. 5 ¶ 0088 - 0089, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0205 and 0212 - 0215, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 17 ¶ 0307) to adjust second sensor information of the device. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) Shibata et al. fail to disclose explicitly an autonomous device. Pertaining to analogous art, Wu et al. disclose using one or more neural networks to use first sensor information from one or more simulations of an autonomous device to adjust second sensor information of the autonomous device. (Wu et al., Abstract, Figs. 1 - 4, 7, 8 & 14A - 14D, Pg. 1 ¶ 0002 and 0004 - 0006, Pg. 2 ¶ 0027 - Pg. 3 ¶ 0033, Pg. 3 ¶ 0035, Pg. 4 ¶ 0041 and 0044, Pg. 8 ¶ 0080 - 0084, Pg. 9 ¶ 0086 - 0087, Pg. 13 ¶ 0129 - 0130, Pg. 14 ¶ 0135 and 0138, Pg. 18 ¶ 0171, Pg. 20 ¶ 0190 - 0193, Pg. 24 ¶ 0232) Shibata et al. and Wu et al. 
are combinable because they are both directed towards image processing systems that utilize neural networks to process image data captured from vehicle-mounted cameras. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Shibata et al. with the teachings of Wu et al. This modification would have been prompted in order to enhance the base device of Shibata et al. with the well-known and applicable technique Wu et al. applied to a comparable device. Adjusting sensor information of an autonomous device, as taught by Wu et al., would enhance the base device of Shibata et al. by allowing for it to improve the reliability of sensor data obtained and/or employed in a wider variety of situations and for it to be utilized in an increased number and variety of related and applicable applications and/or environments, such as autonomous driving functions, thereby improving its overall appeal, usefulness and marketability to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Shibata et al. that their teachings may be applied to sensor data acquired from sensors mounted on a vehicle, that the sensor data may be used for various types of processing related to the vehicle including processing performed using artificial intelligence, that data generated during a travel simulation of the vehicle may be utilized to generate replacement data and that neural networks may be used to generate the replacement data, see at least page 2 paragraphs 0025 - 0026 and 0030, page 6 paragraphs 0104 - 0105, page 12 paragraphs 0203 - 0204 and 0212 - 0215, page 14 paragraphs 0248 - 0256 and page 18 paragraph 0315 of Shibata et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the base device of Shibata et al. 
would be utilized in connection with sensor data acquired from an autonomous device so as to allow for the base device of Shibata et al. to be utilized in an increased number of applications and/or environments and improve the reliability of sensor data obtained and/or employed in a wider variety of situations so as to improve its overall appeal, usefulness and marketability to potential end-users. Therefore, it would have been obvious to combine Shibata et al. with Wu et al. to obtain the invention as specified in claim 1. - With regards to claim 2, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to replace one or more portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 3, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein: the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to label one or more first portions of the second sensor information as including validated data and one or more second portions of the second sensor information as including unvalidated data; (Shibata et al., Abstract, Figs. 2A - 5 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0048 - 0051 and 0055 - 0058, Pg. 5 ¶ 0081 - 0085, Pg. 8 ¶ 0137 - 0138 and 0141 - 0143, Pg. 12 ¶ 0203, 0213 and 0218, Pg. 
15 ¶ 0267 - 0273 [Noise portions of the sensor data, image, in Shibata et al. correspond to one or more second portions including unvalidated data and the remaining portions of the sensor data, image, correspond to one or more first portions including validated data.]) and the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to adjust the one or more second portions. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 4, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to identify the second sensor information as comprising corrupted data. (Shibata et al., Abstract, Figs. 2A - 5 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0048 - 0051 and 0055 - 0058, Pg. 5 ¶ 0081 - 0085, Pg. 8 ¶ 0137 - 0138 and 0141 - 0143, Pg. 12 ¶ 0203, 0213 and 0218, Pg. 15 ¶ 0267 - 0273) - With regards to claim 5, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to use one or more simulators to generate the one or more simulations based, at least in part, on the second sensor information. (Shibata et al., Figs. 2A - 3B, 5, 9, 10 & 12, Pg. 3 ¶ 0060 - 0061, Pg. 4 ¶ 0066 - 0068 and 0070 - 0074, Pg. 5 ¶ 0088 - 0089, Pg. 
6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215) - With regards to claim 6, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to adjust a smoothness of the second sensor information. (Shibata et al., Figs. 2A - 3B, 9 & 10, Pg. 2 ¶ 0032 and 0040, Pg. 4 ¶ 0070 - 0073, Pg. 5 ¶ 0081 - 0083 and 0088 - 0093, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 7 ¶ 0135 - Pg. 8 ¶ 0138, Pg. 9 ¶ 0156 - 0158, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273) In addition, analogous art Wu et al. disclose using the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to adjust a smoothness of the second sensor information. (Wu et al., Pg. 2 ¶ 0032, Pg. 3 ¶ 0038 - Pg. 4 ¶ 0041, Pg. 4 ¶ 0043, Pg. 5 ¶ 0045 - 0046, Pg. 19 ¶ 0183 - 0185) - With regards to claim 7, Shibata et al. in view of Wu et al. disclose the processor of claim 1, wherein the one or more circuits (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to modify one or more first portions of the second sensor information and maintain one or more second portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 
15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 8, Shibata et al. disclose a system, (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 1 ¶ 0008, Pg. 2 ¶ 0024 - 0025 and 0043, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 12 ¶ 0203 - 0205 and 0208 - 0211, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0285 - Pg. 17 ¶ 0295) comprising: one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) to use one or more neural networks to use first sensor information from one or more simulations of a device (Shibata et al., Figs. 2A - 3B, 5, 9 & 10, Pg. 2 ¶ 0025 - 0026, 0031, 0035 - 0038 and 0040, Pg. 3 ¶ 0051, Pg. 5 ¶ 0088 - 0089, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0205 and 0212 - 0215, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 17 ¶ 0307) to adjust second sensor information of the device. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) Shibata et al. fail to disclose explicitly an autonomous device. Pertaining to analogous art, Wu et al. disclose using one or more neural networks to use first sensor information from one or more simulations of an autonomous device to adjust second sensor information of the autonomous device. (Wu et al., Abstract, Figs. 1 - 4, 7, 8 & 14A - 14D, Pg. 1 ¶ 0002 and 0004 - 0006, Pg. 2 ¶ 0027 - Pg. 3 ¶ 0033, Pg. 3 ¶ 0035, Pg. 4 ¶ 0041 and 0044, Pg. 8 ¶ 0080 - 0084, Pg. 9 ¶ 0086 - 0087, Pg. 13 ¶ 0129 - 0130, Pg. 14 ¶ 0135 and 0138, Pg. 18 ¶ 0171, Pg. 20 ¶ 0190 - 0193, Pg. 24 ¶ 0232) Shibata et al. and Wu et al. 
are combinable because they are both directed towards image processing systems that utilize neural networks to process image data captured from vehicle-mounted cameras. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Shibata et al. with the teachings of Wu et al. This modification would have been prompted in order to enhance the base device of Shibata et al. with the well-known and applicable technique Wu et al. applied to a comparable device. Adjusting sensor information of an autonomous device, as taught by Wu et al., would enhance the base device of Shibata et al. by allowing for it to improve the reliability of sensor data obtained and/or employed in a wider variety of situations and for it to be utilized in an increased number and variety of related and applicable applications and/or environments, such as autonomous driving functions, thereby improving its overall appeal, usefulness and marketability to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Shibata et al. that their teachings may be applied to sensor data acquired from sensors mounted on a vehicle, that the sensor data may be used for various types of processing related to the vehicle including processing performed using artificial intelligence, that data generated during a travel simulation of the vehicle may be utilized to generate replacement data and that neural networks may be used to generate the replacement data, see at least page 2 paragraphs 0025 - 0026 and 0030, page 6 paragraphs 0104 - 0105, page 12 paragraphs 0203 - 0204 and 0212 - 0215, page 14 paragraphs 0248 - 0256 and page 18 paragraph 0315 of Shibata et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the base device of Shibata et al. 
would be utilized in connection with sensor data acquired from an autonomous device so as to allow for the base device of Shibata et al. to be utilized in an increased number of applications and/or environments and improve the reliability of sensor data obtained and/or employed in a wider variety of situations so as to improve its overall appeal, usefulness and marketability to potential end-users. Therefore, it would have been obvious to combine Shibata et al. with Wu et al. to obtain the invention as specified in claim 8. - With regards to claim 9, Shibata et al. in view of Wu et al. disclose the system of claim 8, wherein the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to generate data to replace one or more portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 10, Shibata et al. in view of Wu et al. disclose the system of claim 8, wherein: the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to generate a first set of labels identifying one or more first portions of the second sensor information as including validated data and a second set of labels identifying one or more second portions of the second sensor information as including unvalidated data; (Shibata et al., Abstract, Figs. 2A - 5 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0048 - 0051 and 0055 - 0058, Pg. 5 ¶ 0081 - 0085, Pg. 
8 ¶ 0137 - 0138 and 0141 - 0143, Pg. 12 ¶ 0203, 0213 and 0218, Pg. 15 ¶ 0267 - 0273 [Noise portions of the sensor data, image, in Shibata et al. correspond to one or more second portions labeled as unvalidated data and the remaining portions of the sensor data, image, correspond to one or more first portions labeled as validated data.]) and the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information based, at least in part, on the first and second sets of labels. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 11, Shibata et al. in view of Wu et al. disclose the system of claim 8, wherein: the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to identify the second sensor information as comprising corrupted data; (Shibata et al., Abstract, Figs. 2A - 5 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0048 - 0051 and 0055 - 0058, Pg. 5 ¶ 0081 - 0085, Pg. 8 ¶ 0137 - 0138 and 0141 - 0143, Pg. 12 ¶ 0203, 0213 and 0218, Pg. 15 ¶ 0267 - 0273) and the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to remove the corrupted data from the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 
4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 12, Shibata et al. in view of Wu et al. disclose the system of claim 8, wherein the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are further to use one or more simulators to generate the one or more simulations at least by simulating the second sensor information. (Shibata et al., Figs. 2A - 3B, 5, 9, 10 & 12, Pg. 3 ¶ 0060 - 0061, Pg. 4 ¶ 0066 - 0068 and 0070 - 0074, Pg. 5 ¶ 0088 - 0089, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215) - With regards to claim 13, Shibata et al. in view of Wu et al. disclose the system of claim 8, wherein the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to increase a smoothness across the second sensor information. (Shibata et al., Figs. 2A - 3B, 9 & 10, Pg. 2 ¶ 0032 and 0040, Pg. 4 ¶ 0070 - 0073, Pg. 5 ¶ 0081 - 0083 and 0088 - 0093, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 7 ¶ 0135 - Pg. 8 ¶ 0138, Pg. 9 ¶ 0156 - 0158, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273) In addition, analogous art Wu et al. disclose using the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to increase a smoothness across the second sensor information. (Wu et al., Pg. 2 ¶ 0032, Pg. 3 ¶ 0038 - Pg. 4 ¶ 0041, Pg. 4 ¶ 0043, Pg. 5 ¶ 0045 - 0046, Pg. 19 ¶ 0183 - 0185) - With regards to claim 14, Shibata et al. in view of Wu et al. 
disclose the system of claim 8, wherein the one or more processors (Shibata et al., Figs. 1, 6A, 6B, 9 & 10, Pg. 8 ¶ 0146 - Pg. 9 ¶ 0151, Pg. 16 ¶ 0285 - 0287, Pg. 16 ¶ 0290 - Pg. 17 ¶ 0295) are to use the one or more neural networks to adjust the second sensor information at least by using the one or more neural networks to modify one or more first portions of the second sensor information and maintain one or more second portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 15, Shibata et al. disclose a method, (Shibata et al., Abstract, Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0044 - 0051, Pg. 6 ¶ 0108 - 0118, Pg. 12 ¶ 0203 - 0205 and 0212 - 0215, Pg. 15 ¶ 0263 - 0273, Pg. 16 ¶ 0287 and 0294) comprising: using one or more neural networks to use first sensor information from one or more simulations of a device (Shibata et al., Figs. 2A - 3B, 5, 9 & 10, Pg. 2 ¶ 0025 - 0026, 0031, 0035 - 0038 and 0040, Pg. 3 ¶ 0051, Pg. 5 ¶ 0088 - 0089, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0205 and 0212 - 0215, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 17 ¶ 0307) to adjust second sensor information of the device. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) Shibata et al. fail to disclose explicitly an autonomous device. Pertaining to analogous art, Wu et al. 
disclose using one or more neural networks to use first sensor information from one or more simulations of an autonomous device to adjust second sensor information of the autonomous device. (Wu et al., Abstract, Figs. 1 - 4, 7, 8 & 14A - 14D, Pg. 1 ¶ 0002 and 0004 - 0006, Pg. 2 ¶ 0027 - Pg. 3 ¶ 0033, Pg. 3 ¶ 0035, Pg. 4 ¶ 0041 and 0044, Pg. 8 ¶ 0080 - 0084, Pg. 9 ¶ 0086 - 0087, Pg. 13 ¶ 0129 - 0130, Pg. 14 ¶ 0135 and 0138, Pg. 18 ¶ 0171, Pg. 20 ¶ 0190 - 0193, Pg. 24 ¶ 0232) Shibata et al. and Wu et al. are combinable because they are both directed towards image processing systems that utilize neural networks to process image data captured from vehicle-mounted cameras. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Shibata et al. with the teachings of Wu et al. This modification would have been prompted in order to enhance the base device of Shibata et al. with the well-known and applicable technique Wu et al. applied to a comparable device. Adjusting sensor information of an autonomous device, as taught by Wu et al., would enhance the base device of Shibata et al. by allowing for it to improve the reliability of sensor data obtained and/or employed in a wider variety of situations and for it to be utilized in an increased number and variety of related and applicable applications and/or environments, such as autonomous driving functions, thereby improving its overall appeal, usefulness and marketability to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Shibata et al. 
that their teachings may be applied to sensor data acquired from sensors mounted on a vehicle, that the sensor data may be used for various types of processing related to the vehicle including processing performed using artificial intelligence, that data generated during a travel simulation of the vehicle may be utilized to generate replacement data and that neural networks may be used to generate the replacement data, see at least page 2 paragraphs 0025 - 0026 and 0030, page 6 paragraphs 0104 - 0105, page 12 paragraphs 0203 - 0204 and 0212 - 0215, page 14 paragraphs 0248 - 0256 and page 18 paragraph 0315 of Shibata et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the base device of Shibata et al. would be utilized in connection with sensor data acquired from an autonomous device so as to allow for the base device of Shibata et al. to be utilized in an increased number of applications and/or environments and improve the reliability of sensor data obtained and/or employed in a wider variety of situations so as to improve its overall appeal, usefulness and marketability to potential end-users. Therefore, it would have been obvious to combine Shibata et al. with Wu et al. to obtain the invention as specified in claim 15. - With regards to claim 16, Shibata et al. in view of Wu et al. disclose the method of claim 15, wherein using the one or more neural networks to adjust the second sensor information comprises using the one or more neural networks to generate data, based, at least in part, on the first sensor information, to replace one or more portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 
15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 17, Shibata et al. in view of Wu et al. disclose the method of claim 15, further comprising labeling corrupted data in the second sensor information as unvalidated data, (Shibata et al., Abstract, Figs. 2A - 5 & 12, Pg. 1 ¶ 0008, Pg. 3 ¶ 0048 - 0051 and 0055 - 0058, Pg. 5 ¶ 0081 - 0085, Pg. 8 ¶ 0137 - 0138 and 0141 - 0143, Pg. 12 ¶ 0203, 0213 and 0218, Pg. 15 ¶ 0267 - 0273 [Noise portions of the sensor data, image, in Shibata et al. correspond to corrupted data labeled as unvalidated data.]) and wherein using the one or more neural networks to adjust the second sensor information comprises using the one or more neural networks to correct the labeled corrupted data. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315) - With regards to claim 18, Shibata et al. in view of Wu et al. disclose the method of claim 15, further comprising using one or more simulators to generate the one or more simulations at least by using the second sensor information to update one or more parameters of a simulated environment. (Shibata et al., Figs. 2A - 3B, 5, 9, 10 & 12, Pg. 2 ¶ 0025, Pg. 3 ¶ 0060 - 0061, Pg. 4 ¶ 0066 - 0068 and 0070 - 0074, Pg. 5 ¶ 0088 - 0095, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0233 - 0235, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315 [The generated replacement data corresponding to the noise portion of the sensor data corresponds to a simulated environment.]) - With regards to claim 19, Shibata et al. in view of Wu et al. 
disclose the method of claim 15, wherein: using the one or more neural networks to adjust the second sensor information comprises using the one or more neural networks to increase a smoothness of the second sensor information. (Shibata et al., Figs. 2A - 3B, 9 & 10, Pg. 2 ¶ 0032 and 0040, Pg. 4 ¶ 0070 - 0073, Pg. 5 ¶ 0081 - 0083 and 0088 - 0093, Pg. 6 ¶ 0104 - 0105 and 0115, Pg. 7 ¶ 0135 - Pg. 8 ¶ 0138, Pg. 9 ¶ 0156 - 0158, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273) Shibata et al. fail to disclose expressly wherein: the second sensor information is a time series. In addition, Shibata et al. fail to disclose explicitly increasing a smoothness of the second sensor information with respect to time. Pertaining to analogous art, Wu et al. disclose wherein: the second sensor information is a time series, (Wu et al., Abstract, Figs. 4, 9 & 10, Pg. 2 ¶ 0030, Pg. 3 ¶ 0033 and 0035 - 0038, Pg. 5 ¶ 0048 - 0049 and 0056, Pg. 6 ¶ 0058, Pg. 10 ¶ 0096 - 0098, Pg. 11 ¶ 0105 - 0106) and using the one or more neural networks to increase a smoothness of the second sensor information with respect to time. (Wu et al., Pg. 2 ¶ 0032, Pg. 3 ¶ 0038 - Pg. 4 ¶ 0041, Pg. 4 ¶ 0043, Pg. 5 ¶ 0045 - 0046, Pg. 19 ¶ 0183 - 0185) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Shibata et al. in view of Wu et al. with additional teachings of Wu et al. This modification would have been prompted in order to enhance the combined base device of Shibata et al. in view of Wu et al. with the well-known and applicable technique Wu et al. applied to a comparable device. 
Adjusting a time series of sensor information and increasing a smoothness of the sensor information with respect to time, as taught by Wu et al., would enhance the combined base device by allowing for it to improve the reliability of an increased number and variety of types of sensor information that are typically employed and utilized in connection with autonomous vehicles and/or autonomous driving functions, such as sequences of captured images, and by further improving the quality and reliability of the second sensor information so as to enhance the ability of the combined base device to make accurate and reliable decisions based upon the second sensor information. Furthermore, this modification would have been prompted by the teachings and suggestions of Shibata et al. that their teachings may be applied to sensor data acquired from sensors mounted on a vehicle, that the sensor data may be used for various types of processing related to the vehicle including processing performed using artificial intelligence, that replacement data may be generated on the basis of neighboring pixels, that an average value of relevant neighboring pixels may be used to generate the replacement data and that a value equal to an adjacent pixel value may be used to generate the replacement data, see at least page 2 paragraphs 0025 - 0026 and 0030 and page 4 paragraphs 0070 - 0073 of Shibata et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a smoothness of a time series of sensor information would be increased with respect to time so as to allow for the combined base device to improve the reliability of an increased number and variety of types of sensor information and to further improve the quality and reliability of the second sensor information so as to enhance the ability of the combined base device to make accurate and reliable decisions based upon the second sensor information. 
Therefore, it would have been obvious to combine Shibata et al. in view of Wu et al. with additional teachings of Wu et al. to obtain the invention as specified in claim 19.

With regards to claim 20, Shibata et al. in view of Wu et al. disclose the method of claim 15, wherein using the one or more neural networks to adjust the second sensor information comprises using the one or more neural networks to modify one or more first portions of the second sensor information and maintain one or more second portions of the second sensor information. (Shibata et al., Figs. 1 - 5, 9, 10 & 12, Pg. 1 ¶ 0008, Pg. 2 ¶ 0025 - 0026, 0029 - 0032 and 0040, Pg. 3 ¶ 0051, Pg. 4 ¶ 0070 - 0074, Pg. 5 ¶ 0081, Pg. 6 ¶ 0104 - 0105 and 0115 - 0117, Pg. 12 ¶ 0203 - 0204 and 0212 - 0215, Pg. 13 ¶ 0219 - 0221, Pg. 14 ¶ 0248, 0252 and 0255, Pg. 15 ¶ 0259 and 0270 - 0273, Pg. 18 ¶ 0312 - 0315)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Branson et al., U.S. Publication No. 2017/0075959 A1, directed towards a system and method for receiving and processing streaming data, wherein substitute data is generated for missing data in the received streaming data.

Kristensen et al., U.S. Publication No. 2021/0286923 A1, directed towards systems and methods for learning sensor models, wherein a sensor model is trained to predict virtual sensor data for a given scene configuration.

Laftchiev et al., U.S. Publication No. 2022/0292301 A1, directed towards a system and method for training a neural network having an autoencoder architecture to recover missing data, wherein the neural network is trained to produce a full set of sensor measurement data from an incomplete set of sensor measurement data.

Seo et al., U.S. Publication No. 2020/0090322 A1, directed towards systems and methods for detecting regions of blindness or compromised visibility in sensor data, wherein a deep neural network is trained to predict such regions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH, whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm, Monday - Friday.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC RUSH/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Apr 05, 2024
Application Filed
Mar 16, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
2y 5m to grant · Granted Mar 24, 2026
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
2y 5m to grant · Granted Feb 10, 2026
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
2y 5m to grant · Granted Feb 10, 2026
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
2y 5m to grant · Granted Feb 03, 2026
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
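The projection figures above are consistent with a simple derivation from the examiner's career statistics shown on this page (383 granted of 628 resolved, with a +36.2% interview lift). A minimal sketch of that arithmetic, assuming the lift is applied as additive percentage points; the product's actual model is not disclosed:

```python
# Hypothetical reconstruction of the headline projection numbers,
# using only the figures displayed on this page.

granted = 383    # examiner's granted cases (from the page)
resolved = 628   # examiner's resolved cases (from the page)

# Career allow rate, reported as a whole-number percentage.
grant_probability = round(granted / resolved * 100)   # -> 61

# The "+36.2%" interview lift reads as additive percentage points
# on top of the base grant probability.
interview_lift = 36.2
with_interview = round(grant_probability + interview_lift)  # -> 97

print(grant_probability, with_interview)
```

Both results match the displayed values (61% base, 97% with interview), which supports the additive-lift reading, though a multiplicative or case-subset model could coincidentally produce similar numbers.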
