Prosecution Insights
Last updated: April 19, 2026
Application No. 16/860,175

NOTIFICATIONS DETERMINED USING ONE OR MORE NEURAL NETWORKS

Non-Final OA: §101, §102, §103
Filed: Apr 28, 2020
Examiner: RUDOLPH, VINCENT M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 7 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 7-8
To Grant: 5y 1m
With Interview: 86%

Examiner Intelligence

Grants 44% of resolved cases, with a strong +42% interview lift.

Career Allow Rate: 44% (114 granted / 260 resolved; -18.2% vs TC avg)
Interview Lift: +42.0% (allow rate of resolved cases with vs. without an interview)
Avg Prosecution: 5y 1m typical timeline (37 currently pending)
Total Applications: 297 career total, across all art units
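The headline examiner figures can be reproduced from the raw counts shown above. A minimal sketch, assuming (as the numbers suggest, though the page does not say so) that "interview lift" is an absolute difference in percentage points between the allow rate with an interview and the overall career allow rate:

```python
# Reproducing the dashboard's headline examiner statistics from its raw counts.
# The inputs (114 granted, 260 resolved, 86% allow rate with interview) come
# from the page; treating "interview lift" as an absolute percentage-point
# difference is an assumption based on how the reported numbers line up.

granted = 114
resolved = 260
allow_rate_with_interview = 0.86

career_allow_rate = granted / resolved  # ~0.4385, displayed as 44%

# 86% - 44% = +42.0 percentage points, matching the reported lift
# (computed from the rounded allow rate; unrounded it is ~42.2 pp).
interview_lift_pp = (allow_rate_with_interview - round(career_allow_rate, 2)) * 100

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift: +{interview_lift_pp:.1f} pp")
```

Note the lift is not a relative multiplier: 44% × 1.42 would give about 62%, not the 86% shown.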

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

"vs TC avg" compares against the Tech Center average estimate. Based on career data from 260 resolved cases.
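As a sanity check, every statute's "vs TC avg" delta backs out the same Tech Center baseline. A short sketch; the 40.0% baseline is inferred from the page's own numbers, not stated anywhere on it:

```python
# Cross-checking the statute-specific deltas: each reported "vs TC avg"
# figure should equal the examiner's rate minus a common Tech Center
# average. Rates and deltas are taken from the page; the 40.0% baseline
# they jointly imply is an inference, not a published USPTO statistic.

examiner_rate = {"101": 12.4, "103": 56.5, "102": 17.5, "112": 10.9}  # percent
reported_delta = {"101": -27.6, "103": 16.5, "102": -22.5, "112": -29.1}

implied_tc_avg = {s: round(examiner_rate[s] - reported_delta[s], 1)
                  for s in examiner_rate}

# All four statutes imply the same baseline, so the deltas are internally
# consistent with a single TC average estimate of 40.0%.
print(implied_tc_avg)
```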

Office Action

§101 §102 §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 6-7, 13, 19, 25 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Silverstein et al (US 10832484 B1).

Regarding claim 1, Silverstein discloses one or more processors (Fig. 6 1102), comprising: circuitry to (Fig. 6 1102 and CPUs 1102A-1102D):

monitor sensor data of a physical environment surrounding a virtual reality (VR) device to identify one or more user-defined events of interest in the physical environment (col 6 lines 29-40: machine learning module 114 may include one or more artificial neural networks configured to learn from various historical risk tolerance data received by VR device 102. For example, the VR device 102 may use machine learning module 114 to analyze historical risk tolerance data for accuracy for predicting potential risks for a specific user.
The machine learning module 114 may correlate historical risk tolerance data for a specific user with crowdsourced risk tolerance data to determine patterns for predicting risks; col 8 lines 45-67 The process 300 begins by receiving event data from one or more devices communicatively coupled to a VR device. This is illustrated at step 305. In embodiments, the event data may be image data received from an externally facing camera disposed on the VR device (e.g., VR headset). In other embodiments, the event data may be received from one or more IoT devices. The event data may be generated and sent to the VR device in various formats (e.g., image data, audio data, motion sensor data, proximity data, location data, etc.) depending on the IoT device. In embodiments, the event data may include data regarding the type and location of an object relative to a user immersed within a VR simulation. For example, the event data may indicate a child is playing in an area in close proximity to the user. The process 300 continues by comparing the event data to a risk tolerance threshold for a first user. This is illustrated at step 310. For example, a risk tolerance threshold may be set to alert the user when an object is within a predetermined distance to the user. For example, the risk tolerance threshold may be set to alert the user when a child is playing within 5 ft of the user during the VR simulation. 
Based on the event data received from the communicatively coupled camera and/or the IoT devices, the VR system will determine if the event data meets the risk tolerance threshold such that a user would prefer to be warned of the risk.);

determine, using one or more neural networks, a classification of the identified one or more user-defined events of interest based, at least in part, on a history of one or more previous responses of one or more users to the identified one or more user-defined events of interest (col 3 lines 5-13: the system compares the received event data to a risk tolerance threshold for a first user. The risk tolerance threshold may be an individualized threshold generated specifically for a respective user based on analysis of historical risk tolerance data. If the risk tolerance threshold is met, the system may push a notification to the VR device (e.g., user interface (UI) on a VR headset) indicating a potential risk to the first user has been detected; col 6 lines 17-28: In the illustrated embodiment, VR device 102 includes machine learning module 114. Machine learning module 114 may comprise various machine learning engines (artificial neural network, correlation engines, natural language processing engine, reinforcement feedback learning model, supervised learning model, etc.) to analyze event data generated from various sources (e.g., linked camera, IoT devices, etc.). In embodiments, machine learning module 114 may analyze historical data 120 and/or crowdsourcing data 118 located on database 116. Historical data may be any type of data generated by the system (e.g., historical risk tolerance data, historical event data, etc.); see examples in col 8 lines 17-33, e.g.
historical risk tolerance data may indicate that user 204 does not allow their child 214 to climb on bookshelf 216; based on historical risk tolerance data, if the dog 218 is determined to be moving and/or chewing on the couch 206, the system may alert the user of the risk; col 10 lines 1-6: historical risk tolerance data comprises data indicative of actions taken by the first user in response to historical event data);

and cause, based on the classification, a notification to be presented by a user interface of the VR device to the one or more users (col 8 lines 59-67: Based on the event data received from the communicatively coupled camera and/or the IoT devices, the VR system will determine if the event data meets the risk tolerance threshold such that a user would prefer to be warned of the risk; col 9 lines 10-15: If the risk tolerance threshold has been met, “yes” at step 315, the process continues by pushing a notification to the VR device indicating a potential risk to the first user has been detected. This is illustrated at step 320), wherein the notification has been filtered based on the history of the one or more previous responses of the one or more users to the identified one or more user-defined events of interest (col 3 lines 44-56: In embodiments, the risk tolerance threshold may be generated for a user by analyzing historical risk tolerance data associated with the specific user. The historical risk tolerance data may be indicative of actions taken by the first user in response to historical event data. For example, the system may analyze historical interactions collected from IoT devices throughout an environment to determine a user's tolerance to certain risks. For example, audio and visual data generated from an event where the user tells a child not to climb on an object may be gathered by the system through an IoT camera and IoT microphone.
This data may be analyzed by machine learning to determine that the user considers a child climbing on an object a risk.).

Regarding claim(s) 7 (drawn to a system): The rejection/proposed combination of Silverstein, explained in the rejection of processor claim(s) 1, anticipates/renders obvious the steps of the system of claim(s) 7 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 1 is/are equally applicable to claim(s) 7.

Regarding claim(s) 13 (drawn to a method): The rejection/proposed combination of Silverstein, explained in the rejection of processor claim(s) 1, anticipates/renders obvious the steps of the method of claim(s) 13 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 1 is/are equally applicable to claim(s) 13.

Regarding claim(s) 19 (drawn to a CRM): The rejection/proposed combination of Silverstein, explained in the rejection of processor claim(s) 1, anticipates/renders obvious the steps of the computer readable medium of claim(s) 19 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 1 is/are equally applicable to claim(s) 19. See Silverstein col 11 lines 58-65.

Regarding claim(s) 25 (drawn to a system): The rejection/proposed combination of Silverstein, explained in the rejection of processor claim(s) 1, anticipates/renders obvious the steps of the system of claim(s) 25 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 1 is/are equally applicable to claim(s) 25. In addition, Silverstein teaches a camera to capture video data (Fig. 1 110); a microphone to capture audio data (col 3 lines 1-5); and memory for storing network parameters for the one or more neural networks (Fig. 6 1104 & col 6 lines 17-28).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-6, 8-11, 14-17, 20-23, 26-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Silverstein as applied to claim 1, 7, 13, 19, 25 above, and further in view of Naphade et al (US 20200410322).

Regarding claim 2, Silverstein discloses the one or more processors of claim 1, but fails to teach wherein the one or more neural networks include an audio anomaly detector and a video anomaly detector for providing instance data and confidence data for the identified one or more user-defined events of interest in the physical environment, the audio anomaly detector taking as input audio data captured for the physical environment of the one or more users and the video anomaly detector taking as input video data captured for the physical environment of the one or more users.
Naphade teaches wherein the one or more neural networks include an audio anomaly detector and a video anomaly detector (¶18 input data 104 is a collection of training images that are three-dimensional (3-D), and when obtained by one or more neural networks, is used to train one or more neural networks for anomaly detection in additional or new video frames that are fed to a trained network; input data 104 is audio data, such that when obtained by one or more neural networks, audio data is used to train one or more neural networks for speech recognition or speech anomaly detection purposes) for providing instance data and confidence data for the identified one or more user-defined events of interest in the physical environment (¶24 probabilistic model 114 generates an anomaly indicator 116 to indicate a likelihood of an anomalous event; ¶36 likelihood scores are indicative of whether an anomaly exists 510), the audio anomaly detector taking as input audio data captured for the physical environment of the one or more users (¶18 input data 104 is audio data, such that when obtained by one or more neural networks, audio data is used to train one or more neural networks for speech recognition or speech anomaly detection purposes) and the video anomaly detector taking as input video data captured for the physical environment of the one or more users (¶18 input data 104 is a collection of training images that are three-dimensional (3-D), and when obtained by one or more neural networks, is used to train one or more neural networks for anomaly detection in additional or new video frames that are fed to a trained network).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the one or more neural networks include an audio anomaly detector and a video anomaly detector for providing instance data and confidence data for the identified one or more user-defined events of interest in the physical environment, the audio anomaly detector taking as input audio data captured for the physical environment of the one or more users and the video anomaly detector taking as input video data captured for the physical environment of the one or more users from Naphade into the processor as disclosed by Silverstein. The motivation for doing this is to provide improvements to neural networks for inferring content.

Regarding claim 3, the combination of Silverstein and Naphade discloses the one or more processors of claim 2, wherein the one or more neural networks include an event detector for determining the classification for each of the identified one or more user-defined events of interest (Silverstein col 10 lines 20-25: the system may analyze various data generated from the IoT devices to determine what event data should be classified as potential risks. For example, the system may analyze images of a dog chewing on an arm of a couch and audio data of the user reprimanding the dog to determine that a dog chewing on the couch is a risk and the user should be alerted of the risk when immersed in the VR simulation).
Regarding claim 4, the combination of Silverstein and Naphade teaches the one or more processors of claim 3, wherein the one or more neural networks include a decision maker network for determining to cause the notification to be presented, the decision maker network using the instance data and the confidence data (Silverstein col 8 lines 45-58: the event data may be received from one or more IoT devices; col 8 lines 59-67: comparing the event data to a risk tolerance threshold for a first user), along with the classification (Silverstein col 8 lines 60-67: the risk tolerance threshold may be set to alert the user when a child is playing within 5 ft of the user during the VR simulation), to predict how likely the one or more users are to suspend use of the VR device in response to the identified one or more user-defined events of interest (Silverstein col 8 lines 59-67: Based on the event data received from the communicatively coupled camera and/or the IoT devices, the VR system will determine if the event data meets the risk tolerance threshold such that a user would prefer to be warned of the risk).

Regarding claim 5, the combination of Silverstein and Naphade teaches the one or more processors of claim 4, wherein the circuitry is further to use a speech recognition module to provide the decision maker network with text for detected speech related to the identified one or more user-defined events of interest (Naphade ¶18: train one or more neural networks for speech recognition or speech anomaly detection purposes).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the circuitry is further to use a speech recognition module to provide the decision maker network with text for detected speech related to the one or more of the anomalous changes from Naphade into the processor as disclosed by Silverstein.
The motivation for doing this is to provide improvements to neural networks for inferring content.

Regarding claim 6, Silverstein discloses the one or more processors of claim 1, but fails to teach where Naphade teaches wherein the classification comprises one or more keywords indicating a type of anomaly (¶24 probabilistic model 114 generates an anomaly indicator 116 to indicate a likelihood of an anomalous event; ¶36 likelihood scores are indicative of whether an anomaly exists 510; ¶37 In at least one embodiment, probabilistic model (e.g., GMM) indicates whether objects are being anomalous. In at least one embodiment, if a car is travelling in a wrong direction, GMM would indicate an anomaly because, from being trained, it would learn that cars travel in opposite direction. In at least one embodiment, if a car is stuck in an intersection for a specific period of time, GMM would indicate an anomaly because, from being trained, it would learn that cars don't stay in an intersection for longer than ten seconds. As indicated above, in at least one embodiment, speech data or text data instead of video data is used to identify anomalous speech or text).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the classification comprises one or more keywords indicating a type of anomaly from Naphade into the processor as disclosed by Silverstein. The motivation for doing this is to provide improvements to neural networks for inferring content.

Regarding claim 29, Silverstein discloses the one or more processors of claim 1, but fails to teach where Naphade teaches wherein the one or more user-defined events of interest can be designated as anomalies of interest (¶36 In at least one embodiment, if an anomaly is detected, an anomaly indicator is sent to a user or to a separate computing system 512.
In at least one embodiment, GMM outputs an anomaly indicator 512 with a message that allows a user to identify which event or events have been deemed as anomalous (e.g., data that is different than normal data).).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the one or more user-defined events of interest can be designated as anomalies of interest from Naphade into the processor as disclosed by Silverstein. The motivation for doing this is to provide improvements to neural networks for inferring content.

Regarding claim(s) 8-11 (drawn to a system): The rejection/proposed combination of Silverstein and Naphade, explained in the rejection of processor claim(s) 2-5, anticipates/renders obvious the steps of the system of claim(s) 8-11 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 2-5 is/are equally applicable to claim(s) 8-11.

Regarding claim(s) 14-17 (drawn to a method): The rejection/proposed combination of Silverstein and Naphade, explained in the rejection of processor claim(s) 2-5, anticipates/renders obvious the steps of the method of claim(s) 14-17 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 2-5 is/are equally applicable to claim(s) 14-17.

Regarding claim(s) 20-23 (drawn to a CRM): The rejection/proposed combination of Silverstein and Naphade, explained in the rejection of processor claim(s) 2-5, anticipates/renders obvious the steps of the computer readable medium of claim(s) 20-23 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 2-5 is/are equally applicable to claim(s) 20-23. See Silverstein col 11 lines 58-65.
Regarding claim(s) 26-28 (drawn to a system): The rejection/proposed combination of Silverstein and Naphade, explained in the rejection of processor claim(s) 2-4, anticipates/renders obvious the steps of the system of claim(s) 26-28 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 2-4 is/are equally applicable to claim(s) 26-28.

Claim 12, 18, 24 and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Silverstein and Naphade as applied to claims 10, 16, 22 and 28 above, and further in view of Kumar et al (US Patent 9536355 B1).

Regarding claim 12, Silverstein and Naphade disclose the system of claim 10, but fail to teach wherein the decision maker network is further to determine whether to take an action to reduce an immersiveness of an experience for the one or more users. Kumar teaches wherein the decision maker network is further to determine whether to take an action to reduce an immersiveness of an experience for the one or more users (col 6 lines 10-31: The AR device 101 may detect and identify a thermal anomaly based on the combination of AR device-based sensor data, user-based sensor data, physical object-based sensor data, and ambient-based sensor data; If the AR device 101 determines that one or more of the sensor data matches one or more of the preconfigured parameters, the AR device 101 notifies the user 102 by generating an audio or visual alert in the AR device 101; AR device 101 may further provide the user 102 with instructions on how to remedy or correct an operation on the physical objects 116, 118 to rectify the thermal anomaly).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the decision maker network is further to determine whether to take an action to reduce an immersiveness of an experience for the one or more users from Kumar into the system as disclosed by Silverstein and Naphade. The motivation for doing this is to provide improvements to methods for detecting a thermal anomaly in a physical environment using an augmented reality system.

Regarding claim(s) 18 (drawn to a method): The rejection/proposed combination of Silverstein, Naphade and Kumar, explained in the rejection of system claim(s) 12, anticipates/renders obvious the steps of the method of claim(s) 18 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 12 is/are equally applicable to claim(s) 18.

Regarding claim(s) 24 (drawn to a CRM): The rejection/proposed combination of Silverstein, Naphade and Kumar, explained in the rejection of system claim(s) 12, anticipates/renders obvious the steps of the computer readable medium of claim(s) 24 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 12 is/are equally applicable to claim(s) 24. See Silverstein col 11 lines 58-65.

Regarding claim(s) 30 (drawn to a system): The rejection/proposed combination of Silverstein, Naphade and Kumar, explained in the rejection of system claim(s) 12, anticipates/renders obvious the steps of the system of claim(s) 30 because these steps occur in the operation of the proposed combination as discussed above. Thus, the arguments similar to that presented above for claim(s) 12 is/are equally applicable to claim(s) 30.
Response to Arguments

Applicant's arguments filed 12/16/2025 have been fully considered but they are not persuasive.

Regarding claim 1, the applicant argues that the prior art of record does not teach “wherein the notification has been filtered based on the history of the one or more previous responses of the one or more users to the identified one or more user-defined events of interest”.

Regarding the above argument, the examiner respectfully disagrees. Silverstein teaches in col 3 lines 44-56: In embodiments, the risk tolerance threshold may be generated for a user by analyzing historical risk tolerance data associated with the specific user. The historical risk tolerance data may be indicative of actions taken by the first user in response to historical event data. For example, the system may analyze historical interactions collected from IoT devices throughout an environment to determine a user's tolerance to certain risks. For example, audio and visual data generated from an event where the user tells a child not to climb on an object may be gathered by the system through an IoT camera and IoT microphone. This data may be analyzed by machine learning to determine that the user considers a child climbing on an object a risk. Silverstein teaches in col 8 lines 59-67: Based on the event data received from the communicatively coupled camera and/or the IoT devices, the VR system will determine if the event data meets the risk tolerance threshold such that a user would prefer to be warned of the risk. That is, based on a history of previous user responses to user-defined events of interest, a notification is either presented or is not presented (e.g. filtered).

Applicant's arguments with respect to the rejection under 35 U.S.C. 101 have been fully considered and are persuasive. The rejection under 35 U.S.C. 101 of claims 1, 7, 13, 19 and 25 has been withdrawn.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY whose telephone number is (571)272-7648. The examiner can normally be reached Monday-Friday 9-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN KY/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Apr 28, 2020
Application Filed
Apr 22, 2022
Non-Final Rejection — §101, §102, §103
Oct 27, 2022
Response Filed
Jan 05, 2023
Final Rejection — §101, §102, §103
Apr 24, 2023
Examiner Interview Summary
Apr 24, 2023
Applicant Interview (Telephonic)
Jun 12, 2023
Request for Continued Examination
Jun 22, 2023
Response after Non-Final Action
Jun 30, 2023
Non-Final Rejection — §101, §102, §103
Jul 13, 2023
Applicant Interview (Telephonic)
Jul 13, 2023
Examiner Interview Summary
Jan 08, 2024
Response Filed
Apr 04, 2024
Final Rejection — §101, §102, §103
Jun 24, 2024
Examiner Interview Summary
Jun 24, 2024
Applicant Interview (Telephonic)
Oct 09, 2024
Notice of Allowance
Mar 10, 2025
Request for Continued Examination
Mar 11, 2025
Response after Non-Final Action
Apr 01, 2025
Non-Final Rejection — §101, §102, §103
Apr 21, 2025
Interview Requested
May 07, 2025
Examiner Interview Summary
May 07, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Response Filed
Sep 12, 2025
Final Rejection — §101, §102, §103
Dec 16, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Feb 13, 2026
Non-Final Rejection — §101, §102, §103
Apr 02, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12525104
SURVEILLANCE SYSTEM AND SURVEILLANCE DEVICE
2y 5m to grant • Granted Jan 13, 2026
Patent 12492533
SYSTEM AND METHOD OF CONTROLLING CONSTRUCTION MACHINERY
2y 5m to grant • Granted Dec 09, 2025
Patent 12430871
OBJECT ASSOCIATION METHOD AND APPARATUS AND ELECTRONIC DEVICE
2y 5m to grant • Granted Sep 30, 2025
Patent 12333853
FACE PARSING METHOD AND RELATED DEVICES
2y 5m to grant • Granted Jun 17, 2025
Patent 12321856
METHOD, COMPUTER PROGRAM AND DEVICE FOR EVALUATING THE ROBUSTNESS OF A NEURAL NETWORK AGAINST IMAGE DISTURBANCES
2y 5m to grant • Granted Jun 03, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 44%
With Interview: 86% (+42.0%)
Median Time to Grant: 5y 1m
PTA Risk: High
Based on 260 resolved cases by this examiner. Grant probability derived from career allow rate.
