Prosecution Insights
Last updated: April 19, 2026
Application No. 16/208,130

Machine Learning of Environmental Conditions to Control Positioning of Visual Sensors

Non-Final OA — §102, §103
Filed: Dec 03, 2018
Examiner: ANDERSON II, JAMES M
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 11 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 11-12
To Grant: 2y 11m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 75% (513 granted / 684 resolved) — above average, +17.0% vs TC avg; arithmetic sketched below
Interview Lift: +10.4% — moderate lift, comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 11m typical timeline (31 currently pending)
Total Applications: 715 across all art units (career history)
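
The headline figures above follow from simple arithmetic on the raw counts. A minimal sketch in plain Python, assuming the "vs TC avg" delta is expressed in percentage points:

```python
# Sketch only: reproduces the dashboard's headline rate from its raw counts.
granted, resolved = 513, 684

career_allow_rate = granted / resolved          # 513 / 684 = 0.75
print(f"Career allow rate: {career_allow_rate:.0%}")      # -> 75%

# "+17.0% vs TC avg" read as percentage points implies a TC baseline of about:
implied_tc_avg = career_allow_rate - 0.170
print(f"Implied TC average: {implied_tc_avg:.0%}")        # -> 58%
```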

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
TC averages are estimates • Based on career data from 684 resolved cases
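
As a cross-check, each statute's rate and its "vs TC avg" delta imply the same baseline. A short sketch, assuming each delta is the examiner's rate minus the Tech Center average, in percentage points:

```python
# Sketch only: recovering the TC average estimate behind each "vs TC avg" delta.
stats = {  # statute: (examiner rate %, delta vs TC avg in percentage points)
    "§101": (7.8, -32.2),
    "§103": (49.8, +9.8),
    "§102": (15.5, -24.5),
    "§112": (17.0, -23.0),
}
for statute, (rate, delta) in stats.items():
    # delta = rate - tc_avg, so the implied baseline is rate - delta
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# every row recovers the same ~40.0% baseline
```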

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/02/2026 has been entered.

Status of the Claims

Claims 1-22 are pending, with claims 1, 11, 15-16, and 19 being amended.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Peterson et al. (US 20160243988 A1) in view of Kim et al. (US 20180052457 A1).

Concerning claims 1, 11, and 15, Peterson et al. (hereinafter Peterson) teaches one or more processors, comprising circuitry to: receive one or more sensory inputs from one or more visual sensors attached to a device (figs. 1-6: camera module 20, ¶0028: camera module 20 (such as a sideward and/or rearward facing imaging sensor or camera) may provide an image output for a vehicle vision system, such as a lane departure warning system or object detection system or blind zone alert system or surround view vision system or other vehicle vision system or the like, and may utilize aspects of various imaging sensors or imaging array sensors or cameras or the like); identify, from the one or more sensory inputs, one or more objects obstructing a view of the device (¶0074: The image sensor provides image output as part of an object detection system or blind zone alert system that detects obstacles and/or objects around the environment of the vehicle); and cause an extensible apparatus upon which the one or more visual sensors are affixed to extend in a linear direction away from a side of the device to a different position relative to the device to obtain a view around the identified one or more objects obstructing the view of the device (¶0035 & ¶0065: The camera is automatically extended, retracted and/or pivoted in response to a triggering event in order to see further around an object or obstacle at or near the vehicle. Figs. 1 & 2 show camera module 20 attached to an extensible apparatus in order to extend in a linear direction away from a side of the device.).

Not explicitly taught is us[ing] one or more machine learning models to generate information indicating an adjustment of the one or more visual sensors to optimize visual sensing of the one or more visual sensors based, at least in part, on the received one or more sensory inputs, the identified one or more objects, and environmental conditions. Kim et al. (hereinafter Kim), in the same field of endeavor, teaches the use of one or more machine learning models to generate information indicating an adjustment of the one or more visual sensors to optimize visual sensing of the one or more visual sensors based, at least in part, on the received one or more sensory inputs, the identified one or more objects, and environmental conditions (fig. 10: operation 1020, ¶0084: a driving situation model or other machine or deep learning may be used in estimating the driving situation based on information from sensors, determined conditions and previous training data; ¶0083: inputs from various sensors (e.g., speed, GPS, etc.), identified objects (e.g., obstacles or obtruding vehicles) and environmental conditions (e.g., weather, road conditions, etc.) may be used in estimating the driving situation. Based on the driving situation, controller 120 determines or sets the appropriate parameter (i.e., optimized visual sensing) associated with the width of the stereo camera). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Kim to the Peterson invention in order to estimate the driving situation of the vehicle. The addition of machine learning models would allow the Peterson invention to be trained or designed based on training data in order to make decisions and/or predictions without explicit instructions.
Concerning claims 2 and 12, Kim teaches the one or more processors of claim 1, further comprising using the one or more processors to determine environmental conditions based at least in part on receiving one or more visual sensor inputs from the one or more visual sensors (¶0083: Also, the controller 120 may predict the road condition based on a result obtained by analyzing an image capturing a current driving road and/or the weather information.).

Concerning claim 3, Peterson further teaches the one or more processors of claim 2, wherein the one or more visual sensors include one or more cameras (figs. 1-6: camera module 20 & ¶0028).

Concerning claim 4, Peterson further teaches the one or more processors of claim 2, wherein the one or more visual sensor inputs include at least one of: one or more images of an environment surrounding the one or more visual sensors (figs. 13A-13B; ¶0035) or one or more video frames of an environment surrounding the one or more visual sensors (figs. 13A-13B; ¶0035).

Concerning claim 5, Kim further teaches the one or more processors of claim 2, wherein using the processor to determine the environmental conditions includes: inputting the one or more visual sensor inputs to the one or more machine learning models (¶¶0083-0084), and receiving as output from the one or more machine learning models the environmental conditions (¶¶0083-0084).

Concerning claim 6, Kim further teaches the one or more processors of claim 5, wherein the one or more machine learning models are trained using training data to learn the environmental conditions from visual sensor inputs (¶0084: previous training data and being designed based on such training data (e.g., information from such differing sensors or determined conditions and based on previous training data)).

Concerning claim 7, Kim further teaches the one or more processors of claim 2, wherein the environmental conditions include a state of an environment surrounding the one or more visual sensors (¶0083: road conditions, weather, etc.).

Concerning claim 8, Peterson further teaches the one or more processors of claim 1, wherein adjusting the one or more visual sensors comprises using a position control signal to instruct the one or more visual sensors from a current position to an extended position of a defined amount (¶0046).

Concerning claim 9, Peterson further teaches the one or more processors of claim 1, wherein causing the one or more visual sensors to be adjusted includes causing the one or more visual sensors to change position rotationally (¶0038).

Concerning claim 10, Peterson further teaches the one or more processors of claim 1, wherein causing the one or more visual sensors to be adjusted includes causing the one or more visual sensors to change position linearly (figs. 1-2: retracted and extended positions of camera module 20).

Concerning claim 13, Kim further teaches the non-transitory computer readable medium of claim 11, wherein visual information of the environmental conditions is displayed to a user (¶0059).

Concerning claim 14, Kim further teaches the non-transitory computer readable medium of claim 13, wherein the visual information is output to an automated system that makes a decision based at least in part on the visual information (¶0087).
Concerning claim 16, Peterson further teaches the method of claim 15, wherein causing the extensible apparatus to extend includes changing the one or more visual sensors from a current position to an optimal position and includes causing the one or more visual sensors to change linearly with respect to the current position (figs. 1-2: retracted and extended positions of camera module 20; ¶0046).

Concerning claim 17, Peterson further teaches the method of claim 16, wherein causing the one or more visual sensors to change from the current position to the optimal position includes causing the one or more visual sensors to change rotationally with respect to the current position (¶0038; ¶¶0046-0047).

Concerning claim 18, Kim teaches the method of claim 15, further comprising using the one or more machine learning models, by a vehicle, to determine visual information of environmental conditions surrounding the vehicle (¶0084: previous training data and being designed based on such training data (e.g., information from such differing sensors or determined conditions and based on previous training data)).

Concerning claim 19, Peterson, now incorporating the teachings of Kim, further teaches the method of claim 16, further comprising: using one or more additional machine learning models to adjust the one or more visual sensors from the optimal position to a new optimal position based at least in part on a state of an environment surrounding the one or more visual sensors (Peterson, ¶0065: the extension, retraction and/or pivoting of the camera are reactive to detections at or around the vehicle and are thus updated accordingly (e.g., detecting the blind spot region (see ¶0005)); Kim, fig. 10: operation 1020, ¶0084: a driving situation model or other machine or deep learning may be used in estimating the driving situation based on information from sensors, determined conditions and previous training data).

Concerning claim 20, Peterson further teaches the method of claim 19, wherein the optimal position is an extended position and the new optimal position is a retracted position (¶0005: a retracted position may be suitable for blind spot detection).

Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Peterson et al. (US 20160243988 A1) in view of Kim et al. (US 20180052457 A1) and Atwater et al. (US 20190340440 A1).

Concerning claim 21, Peterson in view of Kim teaches the system of claim 1. Not explicitly taught is the system, wherein output of the one or more machine learning models indicates how to adjust the one or more visual sensors. Atwater et al. (hereinafter Atwater), in a similar field of endeavor, teaches using the output of one or more machine learning models that indicates how to adjust the one or more visual sensors (¶¶0074-0075: Machine learning techniques are used to learn preferred camera positions and locations based on the training data. The monitoring system may also learn how to position cameras at certain locations and the camera settings that may be utilized to obtain quality images.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Atwater to the Peterson in view of Kim invention in order to adjust the visual sensors to obtain high quality images (Atwater, ¶0075).

Concerning claim 22, Peterson in view of Kim teaches the method of claim 15. Not explicitly taught is the method, wherein the one or more machine learning models are used to infer the adjustment.
Atwater et al. (hereinafter Atwater), in a similar field of endeavor, teaches using the one or more machine learning models to infer the adjustment (¶¶0074-0075: Machine learning techniques are used to learn preferred camera positions and locations based on the training data. The monitoring system may also learn how to position cameras at certain locations and the camera settings that may be utilized to obtain quality images.). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Atwater to the Peterson in view of Kim invention in order to adjust the visual sensors to obtain high quality images (Atwater, ¶0075).

Response to Arguments

Applicant's arguments, see pages 6-8 of the remarks, filed 02/02/2026, with respect to the rejection of claims 1-22 under 35 U.S.C. §§ 102 & 103 have been fully considered, but they are moot in view of the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M ANDERSON II, whose telephone number is (571) 270-1444. The examiner can normally be reached Monday - Friday, 10 AM - 6 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRIAN PENDLETON, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James M Anderson II/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Dec 03, 2018
Application Filed
Aug 01, 2019
Non-Final Rejection — §102, §103
Dec 06, 2019
Response Filed
Mar 16, 2020
Final Rejection — §102, §103
May 14, 2020
Interview Requested
May 19, 2020
Response after Non-Final Action
May 19, 2020
Applicant Interview (Telephonic)
May 19, 2020
Applicant Interview
May 26, 2020
Response after Non-Final Action
Jun 01, 2020
Request for Continued Examination
Jun 04, 2020
Response after Non-Final Action
Jun 22, 2020
Examiner Interview (Telephonic)
Jun 22, 2020
Non-Final Rejection — §102, §103
Sep 28, 2020
Response Filed
Sep 30, 2020
Non-Final Rejection — §102, §103
Oct 20, 2020
Interview Requested
Feb 08, 2021
Response Filed
Mar 18, 2021
Final Rejection — §102, §103
Apr 13, 2021
Interview Requested
Apr 28, 2021
Examiner Interview Summary
Apr 28, 2021
Applicant Interview (Telephonic)
Jun 01, 2021
Response after Non-Final Action
Jun 29, 2021
Response after Non-Final Action
Jun 29, 2021
Notice of Allowance
Aug 23, 2021
Response after Non-Final Action
Oct 04, 2021
Response after Non-Final Action
Nov 16, 2021
Response after Non-Final Action
Mar 16, 2022
Response after Non-Final Action
May 23, 2022
Response after Non-Final Action
May 24, 2022
Response after Non-Final Action
May 25, 2022
Response after Non-Final Action
May 25, 2022
Response after Non-Final Action
Sep 26, 2023
Response after Non-Final Action
Dec 04, 2023
Non-Final Rejection — §102, §103
Feb 20, 2024
Applicant Interview (Telephonic)
Feb 22, 2024
Examiner Interview Summary
Apr 11, 2024
Response Filed
Jul 13, 2024
Final Rejection — §102, §103
Jul 30, 2024
Interview Requested
Aug 13, 2024
Applicant Interview (Telephonic)
Aug 13, 2024
Examiner Interview Summary
Sep 19, 2024
Response after Non-Final Action
Sep 28, 2024
Final Rejection — §102, §103
Dec 30, 2024
Response after Non-Final Action
Feb 19, 2025
Non-Final Rejection — §102, §103
Mar 28, 2025
Interview Requested
Apr 03, 2025
Applicant Interview (Telephonic)
Apr 03, 2025
Examiner Interview Summary
Jul 24, 2025
Response Filed
Nov 01, 2025
Final Rejection — §102, §103
Nov 19, 2025
Interview Requested
Feb 02, 2026
Request for Continued Examination
Feb 13, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561976
COMMENT GENERATION DEVICE AND COMMENT GENERATION METHOD
Granted Feb 24, 2026 — 2y 5m to grant
Patent 12548437
SYSTEMS AND METHODS FOR POLICY CENTRIC DATA RETENTION IN TRAFFIC MONITORING
Granted Feb 10, 2026 — 2y 5m to grant
Patent 12537949
METHODS AND APPARATUS FOR KERNEL TENSOR AND TREE PARTITION BASED NEURAL NETWORK COMPRESSION FRAMEWORK
Granted Jan 27, 2026 — 2y 5m to grant
Patent 12534313
CAMERA-ENABLED LOADER SYSTEM AND METHOD
Granted Jan 27, 2026 — 2y 5m to grant
Patent 12525019
INTELLIGENT AI SYSTEM FOR RAPID WEAPON THREAT ASSESSMENT IN VIDEO STREAMS
Granted Jan 13, 2026 — 2y 5m to grant
Study what changed in these applications to get them past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 11-12
Grant Probability: 75%
With Interview (+10.4%): 85%
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 684 resolved cases by this examiner. Grant probability derived from career allow rate.
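
A minimal sketch of how the "With Interview" projection appears to combine the career allow rate with the interview lift, assuming the lift is additive in percentage points (which matches the figures shown):

```python
# Sketch only: combines the two figures from this card under an additive assumption.
base = 513 / 684                  # career allow rate -> 75%
lift = 0.104                      # interview lift, +10.4 percentage points

print(f"Base: {base:.0%}  With interview: {base + lift:.0%}")
# -> Base: 75%  With interview: 85%
```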
