Prosecution Insights
Last updated: April 19, 2026

Application No. 17/985,275
Human Presence Sensor for Client Devices
Status: Final Rejection (§102/§103)
Filed: Nov 11, 2022
Examiner: DUONG, JOHNNYKHOI BAO
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 3 (Final)

Predicted outcome:
Grant probability: 66% (Favorable)
Expected OA rounds: 4-5
Expected time to grant: 3y 8m
Grant probability with interview: 99%

Examiner Intelligence

Career allowance rate: 66% (37 granted / 56 resolved), +4.1% vs Tech Center average; above average.
Interview lift: +32.8% higher allowance rate among resolved cases with an interview than without; a strong effect.
Typical timeline: 3y 8m average prosecution; 10 applications currently pending.
Career history: 66 total applications across all art units.
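As a quick sanity check on the figures above, the headline rate follows directly from the raw counts (37 granted of 56 resolved). A minimal sketch, assuming the interview lift is simply the percentage-point gap between the with-interview and without-interview allowance rates; the underlying with/without split counts are not shown in the panel, so any rates passed to `interview_lift` here are hypothetical:

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate, as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allowance rate with vs. without an interview."""
    return rate_with - rate_without

# Counts from the panel above: 37 granted out of 56 resolved cases.
print(round(allowance_rate(37, 56), 1))  # 66.1
```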

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 36.3% (-3.7% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)

Tech Center averages are estimates; based on career data from 56 resolved cases.
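The per-statute deltas above are internally consistent with a single Tech Center baseline: subtracting each stated delta from the examiner's rate recovers the same estimated TC average for every statute. A short check, using only the figures shown above (the dictionary layout is illustrative, not from any external data source):

```python
# Examiner rate per statute and the stated delta vs the Tech Center average.
examiner = {"101": 5.6, "103": 50.9, "102": 36.3, "112": 4.4}
delta    = {"101": -34.4, "103": 10.9, "102": -3.7, "112": -35.6}

# Implied TC average per statute: examiner rate minus the stated delta.
implied_tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```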

Office Action (§102/§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Status

Claims 1-4, 6-12, 14, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Valko (US 10,372,191 B2, 2019).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Lattice 2019 (“Human Face Identification”, 2019) and Lattice 2021 (“User Tracking and Onlooker Detection Demonstration”, Dec 2021).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Optogate (“OPTOGATE PB-05”, July 2021).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Mukherjee (US 2023/0008255 A1, Jul 2021).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Li (US 2019/0057246 A1, 2017 foreign priority).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Choi (“‘Machine-Learning-Based Perception on a Tiny, Low-Power FPGA,’ a Presentation from Lattice Semi”, Oct 2020).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Liu (“A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach”, 2020).

Response to Arguments

Applicant’s arguments (Remarks filed 02/03/2026) have been considered but are not fully persuasive. Applicant makes the following argument (last two paragraphs of page 8 through the first five lines of page 9): [image of applicant’s argument omitted].

Upon further review of the reference and in light of applicant’s argument, the examiner respectfully disagrees, as follows. First, the Valko reference states the following in column 5, first full paragraph: [image of cited passage omitted].
“The operating system” at the end is interpreted to involve memory: one of ordinary skill in the art would understand that an operating system uses memory to run the software on its system. Second, the “user presence determination” “may be externalized and isolated from the main operation of the device”; one of ordinary skill in the art would understand that dedicated memory may be needed to externalize the user presence determination, as it otherwise could not operate its externalized computing systems. Additionally, the Valko reference states the following in column 7, second-to-last paragraph: [image of cited passage omitted]. This passage shows that the user presence determination may use machine learning models (interpreted from “machine learning based classifier or probabilistic decision system”); one of ordinary skill in the art would be put on notice that the external user presence determination may have dedicated memory to execute the one or more machine learning models, as the externalized device otherwise would not operate at all for user presence determination in the Valko reference.

The arguments regarding the claim rejections under 35 U.S.C. 103 provide no evidence and are therefore not persuasive. Accordingly, the claim rejections under 35 U.S.C. 102 and 103 are maintained.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-12, 14, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Valko (US 10,372,191 B2, 2019).

Regarding claims 1 and 10, Valko teaches A computing device (Valko, column 2, lines 11-29, reproduced in the original action: [image of cited passage omitted]. “Computing system” is being interpreted as “computing device”), comprising:

a processing module including one or more processors (Valko, see the column 2 passage above: “main processor and a presence sensor coupled to the main processor” is being interpreted as “one or more processors” that are part of a “processing module”);

memory, communicatively coupled to the processing module (Valko, column 5, lines 22-23: “Specifically, a camera based sensor 106 may be communicatively coupled with the microprocessor 104.”), configured to store data (Valko, column 14, lines 53-56: “Additionally, the combination of data from the first and second sensors may include storing the data together and/or logically or mathematically combining the data”. “Storing the data” is being interpreted as using “memory”) and instructions associated with an operating system of the computing device (Valko, see the column 2 passage above: “the main processor changes a state of the computing system” is being interpreted to include “instructions associated with an operating system of the computing device”); and

a human (Valko, column 3, lines 10-11: “As used herein, ‘user’ may generally refer to a person or persons.”, which is being interpreted as “human”) presence sensor module (Valko, see the column 2 passage above: “presence sensor”), the human presence sensor module including:

an image sensor configured to capture imagery within a field of view of the image sensor (Valko, see the column 2 passage above: “The image sensor may be configured to capture 3-D images, depth images, RGB images, grayscale images”, which is being interpreted as “capture imagery within a field of view of the image sensor”);

dedicated memory configured to store one or more machine learning models, the one or more machine learning models each being trained to identify whether one or more persons are present in the imagery (Valko, column 7, lines 58-61: “The presence determination may be based on multiple factors that are utilized in a neural network, support vector machine (SVM) or other machine learning based classifier or probabilistic decision system”, which is being interpreted as trained machine learning models); and

a dedicated processing module (Valko, column 5, lines 7-20, reproduced in the original action: [image of cited passage omitted]. “Externalized” is being interpreted as part of a “dedicated processing module” for presence determination) including at least one processing device configured to process the imagery received from the image sensor (Valko, see the column 5 passage above: “data fusion for the data coming from sensors” shows the imagery from the image sensor is processed) using the one or more machine learning models (Valko, column 7, lines 58-61, as quoted above) to determine whether one or more persons (Valko, column 3, lines 10-11, as quoted above) are present in the imagery (Valko, see the column 2 passage above: “If the processor determines that a user is present in the image”);

wherein: imagery captured by the image sensor of the human presence sensor module (Valko, see the column 5 passage above: “In some embodiments, the user presence determination and data related thereto may be externalized and isolated from the main operation of the device”. “User presence determination” is being interpreted as part of the “human presence sensor module”, as a user includes a person or persons per column 3, lines 10-11) is restricted to the human presence sensor module (Valko, see the column 5 passage above: “externalized and isolated from the main operation of the device” is being interpreted as “restricted to the human presence sensor module”); and

in response to detection (Valko, see the column 2 passage above: “an indication that a user has been determined to be present” is being interpreted as “in response to detection”) that one or more persons are present in the imagery (Valko, see the column 2 passage above: “user is present in the image” is being interpreted as “one or more persons are present in the imagery”), the human presence sensor module is configured to issue a signal to the processing module of the computing device (Valko, see the column 2 passage above: “an indication that a user has been determined to be present is sent from the processor to the main processor”. “An indication” is being interpreted as “issue a signal”; “main processor” is being interpreted as part of “the computing device”), such that the computing device responds to the signal by executing one or more instructions associated with the operating system of the computing device (Valko, see the column 2 passage above: “the main processor changes a state of the computing system based on the indication”. “Based on the indication” is being interpreted as “computing device responds to the signal”; “main processor changes a state” is being interpreted as “executing one or more instructions associated with the operating system of the computing device”).
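To make the claim 1 architecture being mapped above easier to follow (captured imagery confined to the sensor module, one or more models run on a dedicated processor, and only a wake signal crossing to the host), here is a minimal illustrative sketch. It is not code from Valko or the application, and every name in it is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PresenceSensorModule:
    """Hypothetical human presence sensor module: imagery never leaves
    the module; only a presence signal is issued to the host processor."""
    models: List[Callable[[bytes], bool]]   # stand-ins for models in dedicated memory
    signal_host: Callable[[], None]         # stand-in for the interrupt to the host

    def process(self, frame: bytes) -> None:
        # Dedicated processing: evaluate every model against the captured frame.
        if any(model(frame) for model in self.models):
            self.signal_host()              # only this signal escapes the module

# Toy usage: a "model" that looks for the byte pattern b"person" in a frame.
woken: list = []
hps = PresenceSensorModule(models=[lambda f: b"person" in f],
                           signal_host=lambda: woken.append("wake"))
hps.process(b"frame with person")
print(woken)  # ['wake']
```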
Regarding claim 2, Valko teaches The computing device of claim 1, wherein: the human presence sensor module (Valko, column 5, lines 7-35, reproduced in the original action: [image of cited passage omitted]. “User presence determination” is being interpreted as involving a “human presence sensor module”) further includes a module controller (Valko, see the column 5 passage above: “Specifically, a camera based sensor 106 may be communicatively coupled with the microprocessor 104”. “Microprocessor” is being interpreted as including a “module controller”) operatively coupled to the image sensor (Valko, same passage; the “microprocessor” serving as “module controller” is operatively coupled to the image sensor), the dedicated memory and the dedicated processing module (Valko, see the column 5 passage above: “The camera based sensor may include a full image camera 108 that provides face detection capabilities with an integrated processor 110”. “Integrated processor” is being interpreted as involving a “dedicated processing module” with “dedicated memory”, as user presence sensing data is isolated from the main central processing unit); and the module controller is configured to receive a notification (Valko, column 5, lines 55-57: “In some embodiments, the camera sensor may provide a binary output indicating that a user is or is not present”. “Binary output” is being interpreted to involve “a notification”) from the dedicated processing module about the presence of the one or more persons in the imagery (Valko, see the column 5 passage above; “face detection capabilities” is being interpreted as detecting “one or more persons in the imagery”), and to issue the signal to the processing module of the computing device (Valko, column 6, lines 5-7: “As the camera moves from a state of operation, it provides an output which may be used by the device to change the state of the device”, which is being interpreted as “issue the signal to the processing module of the computing device”).

Regarding claim 3, Valko teaches The computing device of claim 2, wherein the image sensor is further configured to: detect motion between sequential images (Valko, column 16, lines 40-42: “FIG. 9 illustrates the motion detection routine 212 as a flowchart starting by collecting multiple frames and, as such, memory may be implemented to store the multiple frames”. “Multiple frames” is being interpreted as “sequential images”); and to issue a wake on approach signal to the module controller in order to enable the module controller (Valko, column 19, lines 11-14: “For example, in one embodiment, a lowest power state may implement only a camera, an image signal processing (ISP) device and an embedded processor that may calculate a presence value in real-time.” “Presence value in real-time” is being interpreted as a “wake on approach signal” that activates the face detection in the next step) to cause one or more components of the human presence sensor module to wake up from a low power mode (Valko, column 19, lines 14-15: “In a next tier, a face detector chip and RAM may be turned on.” “Face detector chip” is being interpreted as one or more components of the human presence sensor module).

Regarding claim 4, Valko teaches The computing device of claim 2, wherein: the image sensor is further configured to detect motion between sequential images (Valko, column 16, lines 40-42, as quoted above; “multiple frames” is being interpreted as “sequential images”); and the dedicated processing module is configured to start processing the imagery (Valko, column 19, lines 14-15: “In a next tier, a face detector chip and RAM may be turned on.” “Face detector chip” is being interpreted as “start processing the imagery” using the “dedicated processing module”; Valko, column 4, lines 29-30: “Detection of movement or user presence may result in the activation of a second sensor, and so forth.”) in response to the detection of motion (Valko, column 19, lines 11-14, as quoted above; “presence value in real-time” is being interpreted as part of “detection of motion”).

Regarding claim 6, Valko teaches The computing device of claim 5, wherein the machine learning models (Valko, column 18, lines 48-51: “As may be appreciated, a neural network, support vector machine (SVM) or other classification system may be utilized in each of the forementioned routines to make a determination as to presence of a user.”; Valko, column 16, lines 35-39: “A body detector or body sensor may be configured to follow the same flow or a similar flow to that of the face detector. Moreover, the body detector may be implemented using an image sensor that has a lower resolution than that used for the face detector”, which shows body detection using machine learning models) further include a model to detect at least a portion of a human face, a model to detect a human torso, a model to detect a human arm (Valko, column 19, lines 27-36, reproduced in the original action: [image of cited passage omitted]. “Head” is being interpreted as “portion of a human face”), or a model to detect a human hand.

Regarding claim 7, Valko teaches The computing device of claim 1, wherein the signal to the processing module of the computing device is an interrupt (Valko, column 3, lines 41-43: “For example, in some embodiments, a system awake may be initiated when it is determined that a user is approaching”, which is being interpreted as using an interrupt), and the interrupt causes a process of the computing device to wake the computing device from a suspend mode or a standby mode (Valko, same passage; “system awake” is being interpreted as “wake the computing device from a suspend mode or a standby mode”).

Regarding claim 8, Valko teaches The computing device of claim 1, wherein the signal to the processing module of the computing device is an interrupt (Valko, column 7, lines 37-47, reproduced in the original action: [image of cited passage omitted]. The tiered system is being interpreted to involve an interrupt to initiate user identification), and the interrupt causes a process of the computing device to initiate face authentication (Valko, see the column 7 passage above: “identify the user (e.g., as a credentialed user)” is being interpreted to involve “face authentication”) using imagery other than the imagery obtained by the image sensor of the human presence sensor module (Valko, see the column 7 passage above: “Data from the second sensor, alone… may be used to further identify the user/person”, which is being interpreted as “imagery other than the imagery obtained by the image sensor of the human presence sensor module”).
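The tiered wake-up scheme the rejection relies on for claims 3 and 4 (a lowest-power motion tier that, on detecting approach, powers up a face-detection tier, which in turn signals the host) can be sketched as a small state machine. This is an illustrative reconstruction under the examiner's reading of Valko's tiers, not code from Valko or the application, and all names are hypothetical:

```python
from enum import Enum, auto

class Tier(Enum):
    MOTION_ONLY = auto()   # lowest power: camera + ISP + embedded processor
    FACE_DETECT = auto()   # next tier: face detector chip and RAM powered on

class TieredPresenceSensor:
    """Hypothetical sketch of a tiered low-power presence pipeline."""
    def __init__(self) -> None:
        self.tier = Tier.MOTION_ONLY
        self.host_signaled = False

    def on_frame(self, motion: bool, face: bool) -> None:
        if self.tier is Tier.MOTION_ONLY:
            if motion:                     # "wake on approach": motion wakes next tier
                self.tier = Tier.FACE_DETECT
        elif self.tier is Tier.FACE_DETECT:
            if face:                       # presence confirmed: signal the host
                self.host_signaled = True

sensor = TieredPresenceSensor()
sensor.on_frame(motion=True, face=False)   # motion wakes the face-detection tier
sensor.on_frame(motion=True, face=True)    # face detected: host is signaled
print(sensor.tier.name, sensor.host_signaled)  # FACE_DETECT True
```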
Regarding claim 9, Valko teaches The computing device of claim 1, further comprising: a display module having a display interface (Valko, column 7, lines 48-57, reproduced in the original action: [image of cited passage omitted]. “May require entry of user credentials to fully access the device” is being interpreted to involve a display interface for a display module), the display module being communicatively coupled to the processing module (Valko, see the column 7 passage above: “A state of the computing device may change… the display may awake”, which shows the computing device is communicatively coupled to the display), the display module being configured to present information to a user (Valko, see the column 7 passage above: “may require entry of user credentials to fully access the device”, which is being interpreted as presenting information to a user); wherein the signal to the processing module of the computing device is an interrupt (Valko, see the column 7 passage above: “A state of the computing device 100 may change based on the determination that a user is present… if the user is approaching the device, the display may awake”. The display was not awake, the signal that a user is present is sent (which is being interpreted to involve an interrupt), and then the display awakens), and the interrupt causes a process of the computing device to display information on the display module (Valko, see the column 7 passage above, as interpreted).

Regarding claim 11, Valko teaches The method of claim 10, further comprising, in response to detection of the presence of the one or more persons (Valko, column 7, lines 48-57, as reproduced above: “If the user is approaching the device, the display may awake”. “User” is being interpreted as “one or more persons”; “the display may awake” is being interpreted as a “response” to the user presence), causing the computing device to wake on arrival of a person within the field of view of the image sensor (Valko, same passage).

Regarding claim 12, Valko teaches The method of claim 10, further comprising, in response to detection of a person leaving the field of view of the image sensor (Valko, column 3, lines 48-53: “In some embodiments, the computing device may be configured to determine when a user moves away from the device or leaves the proximity of the device. In response, the device may enter a power saving mode, such as a display sleep mode, a system sleep mode, activation of a screen saver, and so forth”. A “user moves away from the device or leaves the proximity of the device” is being interpreted as “a person leaving the field of view of the image sensor”), causing the computing device to lock so that authentication is required to access one or more programs of the computing device (Valko, column 1, lines 38-41: “Additionally, recovery from the power saving feature/mode may take time, may even require the user to enter credentials, and generally may be a nuisance to the user.” This shows the “power saving feature/mode” may “require the user to enter credentials”, which is being interpreted as “lock so that authentication is required to access one or more programs of the computing device”).

Regarding claim 14, Valko teaches The method of claim 10, further comprising, in response to detection of the presence of at least two persons in the imagery (Valko, column 7, lines 48-57, reproduced in the original action: [image of cited passage omitted]. “Multiple users are present” is being interpreted as “at least two persons in the imagery”; “if multiple users are present… the device may be powered to a secure state” is being interpreted as a response to the multiple-user detection), performing at least one of issuing a notification to a user (Valko, see the column 7 passage above: “may require entry of user credentials” is being interpreted to involve a notification to a user. The examiner notes that since this is an “or” statement, the first limitation phrase is being considered) of the computing device or blocking one or more notifications from being presented to the user.

Regarding claim 19, Valko teaches The method of claim 10, further comprising: detecting, by the image sensor, motion between sequential images of the captured imagery (Valko, column 16, lines 40-42, as quoted above; “multiple frames” is being interpreted as “sequential images”); and causing one or more components of the human presence sensor module to wake up from a low power mode in response to detecting the motion (Valko, column 19, lines 14-15: “In a next tier, a face detector chip and RAM may be turned on.” “Face detector chip and RAM” are being interpreted as one or more components of the human presence sensor module in a low power mode; these components wake up in response to user presence based on motion detection).

Regarding claim 20, Valko teaches The method of claim 10, wherein the signal to the processing module of the computing device is an interrupt (Valko, column 7, lines 37-47, reproduced in the original action: [image of cited passage omitted]. “A second sensor may be activated” shows an interrupt, i.e., a signal that “a user is present”), and the interrupt causes a process of the computing device to initiate face authentication (Valko, see the column 7 passage above: “identify the user (e.g., as a credentialed user)” is being interpreted to involve “face authentication”) using imagery other than the imagery obtained by the image sensor of the human presence sensor module (Valko, see the column 7 passage above: “Data from the second sensor, alone… may be used to further identify the user/person”, which is being interpreted as “imagery other than the imagery obtained by the image sensor of the human presence sensor module”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Lattice 2019 (“Human Face Identification”, 2019) and Lattice 2021 (“User Tracking and Onlooker Detection Demonstration”, Dec 2021).

Regarding claim 5, Valko teaches The computing device of claim 1, wherein the one or more machine learning models (Valko, column 15, lines 46-50, reproduced in the original action: [image of cited passage omitted]. “Neural networks, support vector machines, and/or some other form of probabilistic machine learning based algorithm” is being interpreted as “one or more machine learning models” performing user presence detection) comprise… However, Valko does not appear to specifically teach a first machine learning model trained to detect the presence of a single person in the imagery, and a second machine learning model trained to detect the presence of at least two people in the imagery.

Pertaining to the same field of endeavor, Lattice 2019 teaches a first machine learning model trained to detect the presence of a single person in the imagery (Lattice 2019, pg 1, reproduced in the original action: [image of cited passage omitted]. “Identify faces” of the registered face is being interpreted as “detect the presence of a single person in the imagery”; “VGG8-like CNN” is being interpreted as a trained “first machine learning model”). However, Lattice 2019 does not appear to explicitly teach a second machine learning model trained to detect the presence of at least two people in the imagery.

Pertaining to the same field of endeavor, Lattice 2021 teaches a second machine learning model trained to detect the presence of at least two people in the imagery (Lattice 2021, pg 1, reproduced in the original action: [image of cited passage omitted]. “Tracking onlookers and their intent to look onto the user screen” is being interpreted as “detect the presence of at least two people in the imagery”; “based on Mobilenet v1 network” is being interpreted as “a second machine learning model”).

Valko, Lattice 2019 and Lattice 2021 are considered to be analogous art because they are directed to human presence detection.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the externalized method and system for human presence detection (as taught by Valko, column 5, lines 7-20) to include first and second machine learning models that detect a single person and at least two people, respectively (as taught by Lattice 2019 and Lattice 2021), because the combination makes edge devices smarter and adds spatial awareness (Lattice 2021).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Optogate (“OPTOGATE PB-05”, July 2021).

Regarding claim 13, Valko teaches The method of claim 10, further comprising, in response to detection of a person leaving the field of view of the image sensor (Valko, column 3, lines 48-53: “In some embodiments, the computing device may be configured to determine when a user moves away from the device or leaves the proximity of the device. In response, the device may enter a power saving mode, such as a display sleep mode, a system sleep mode, activation of a screen saver, and so forth”. A “user moves away from the device or leaves the proximity of the device” is being interpreted as “a person leaving the field of view of the image sensor”)… However, Valko does not appear to specifically teach muting a microphone.

Pertaining to a similar field of endeavor, Optogate teaches at least one of muting a microphone of the computing device (Optogate, pg 2, ¶1, reproduced in the original action: [image of cited passage omitted]. The Optogate mutes the microphone if the user is a predefined distance away. As this is an “or” statement, the “muting a microphone” limitation is being considered) or turning off a camera of the computing device, wherein the camera is not the image sensor of the human presence sensor module.
Valko and Optogate are considered to be analogous art because they are directed to human detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for human detection (as taught by Valko) to include muting the microphone (as taught by Optogate) because the combination provides an improvement in audio quality (Optogate, pg 2, ¶2).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Mukherjee (US 2023/0008255 A1, Jul 2021).

Regarding claim 15, Valko teaches The method of claim 10 (Valko, column 1, lines 17-20: “The present disclosure is generally related to devices having computing capabilities and, more particularly, devices and systems for sensing the presence of a user in local proximity to the device.”). Valko does not appear to specifically teach a “privacy filter”, but does teach responding to the detection of at least two persons.

Pertaining to the same field of endeavor, Mukherjee teaches, in response to detection of the presence of at least two persons in the imagery (Mukherjee, see the [0008] passage cited below: “The presence of other persons found”), enabling a privacy filter on a display of the computing device (Mukherjee, [0008], reproduced in the original action: [image of cited passage omitted]. “Obscuring or darkening” is being interpreted as a “privacy filter on a display”).
Valko and Mukherjee are considered to be analogous art because they are directed to human presence detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for human presence detection with user authentication when multiple human presences are detected (as taught by Valko) to include a privacy filter when multiple human presences are detected (as taught by Mukherjee) because the combination provides privacy protection from shoulder surfers (Mukherjee, [0002]-[0004]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Valko in view of Li (US 2019/0057246 A1, 2017 foreign priority).

Regarding claim 16, Valko teaches The method of claim 10 (Valko, column 1, lines 17-20, as quoted above). Valko does not appear to specifically teach gesture detection.

Pertaining to the same field of endeavor, Li teaches, in response to detection of the presence of one person in the imagery (Li, [0006], reproduced in the original action: [image of cited passage omitted]. “In response to detecting that the image includes the human face”; “the human face” is being interpreted as “presence of one person in the imagery”), performing gesture detection (Li, see the [0006] passage above: “performing gesture recognition”) based on additional imagery captured by the image sensor of the human presence sensor module (Li, see the [0006] passage above: “performing sequentially the gesture recognition in the plurality of detection regions”. “Plurality of detection regions” is being interpreted as “based on additional imagery captured by the image sensor” because, per [0017], “user's images detected within a predetermined time after the information of the human face is stored, the gesture recognition is performed based on the stored information of the human face”).
Valko and Li are considered to be analogous art because they are directed to human presence detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for human presence detection (as taught by Valko) to include gesture recognition (as taught by Li) because the combination provides an improvement to human-computer interaction (Li, [0003]).

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Valko, in view of Choi ("'Machine-Learning-Based Perception on a Tiny, Low-Power FPGA,' a Presentation from Lattice Semi", Oct 2020).

Regarding claim 17, Valko teaches The method of claim 10 (Valko, column 1, lines 17-20: "The present disclosure is generally related to devices having computing capabilities and, more particularly, devices and systems for sensing the presence of a user in local proximity to the device."), further comprising. However, Valko does not appear to explicitly teach gaze tracking. Pertaining to the same field of endeavor, Choi Video teaches in response to detection of the presence of one person in the imagery (Choi Video, pg 1, time stamp 9:31: "Attention tracking is basically trying to find out whether the user is looking at the camera or the user is looking at the other area. So whether the user has attention to their sensor."; "user" is being interpreted as "presence of one person in the imagery", which shows that the user is detected and, in response, the user's attention is tracked), performing gaze tracking (Choi Video, pg 1, time stamp 9:31: "Attention tracking") based on additional imagery captured by the image sensor of the human presence sensor module (Choi Video, pg 2, 9:43 caption: "If the user is looking at the sensor, then we show the green boxes"; pg 2, 9:47 caption: "If the user looking at the other area, it shows the…red boxes". Examiner recommends viewing the original video in full color. This shows additional imagery captured by the image sensor of the human presence sensor module being used to track the gaze of the user).

Valko and Choi Video are considered to be analogous art because they are directed to human presence detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for human presence detection (as taught by Valko) to include gaze tracking (as taught by Choi) because the combination provides an improvement to human-computer interaction (Valko, column 1, lines 33-41). Further, Valko teaches left and right user tracking (Valko, column 4, lines 12-16).

Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Valko, in view of Liu ("A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach", 2020).

Regarding claim 18, Valko teaches The method of claim 10 (Valko, column 1, lines 17-20: "The present disclosure is generally related to devices having computing capabilities and, more particularly, devices and systems for sensing the presence of a user in local proximity to the device."), further comprising, in response to detection of the presence of one person in the imagery (Valko, column 7, lines 48-57, reproduced as an image in the original action; "if the user is approaching the device" is being interpreted as a response to the presence of one person in the imagery). Valko does not appear to specifically teach performing dynamic beamforming to cancel background noise based on additional imagery captured by the image sensor of the human presence sensor module. Pertaining to a similar field of endeavor, Liu teaches performing dynamic beamforming to cancel background noise based on additional imagery captured by the image sensor of the human presence sensor module (Liu, Abstract, reproduced as an image in the original action; "non-stationary noise" reduction is being interpreted as including "dynamic beamforming", and "optical camera" is being interpreted as including imagery captured by the image sensor).

Valko and Liu are considered to be analogous art because they are directed to human detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for human detection (as taught by Valko) to include performing dynamic beamforming to cancel background noise based on additional imagery captured by the image sensor of the human presence sensor module (as taught by Liu) because the combination provides an improvement to noise reduction (Liu, Abstract).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhang et al. (US 2024/0086528 A1, Feb 2021 [PCT]) teaches shoulder surfer detection and a user device that automatically locks if the user walks away.

Mese et al. (US 11334138 B1, Mar 2021) teaches user presence detection and tracking that unlocks a user device based on user location using imagery.
Choi PDF Slides ("Machine Learning Based Perception on a Tiny Low-Power FPGA", Sept 2020) teaches a computing device with multiple sensors and multiple machine learning models that performs user presence detection, shoulder surfer detection, and user registration and identification using images.

Lenovo ("T14s Gen 2 and X13 Gen 2 User Guide", 2021) discloses human presence detection with privacy alert and privacy protection.

Kosugi et al. (US 2022/0366721 A1, May 2021 [Foreign Priority]) discloses shoulder surfer detection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY B DUONG, whose telephone number is (571) 272-1358. The examiner can normally be reached Monday-Thursday, 10 a.m.-9 p.m. (ET).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.B.D./
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667
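The responsive behaviors mapped in the §103 rejections of claims 15-18 share one pattern: a person count derived from the presence sensor's imagery gates a device action. As a non-authoritative sketch of that pattern only (all function and action names are hypothetical, not taken from Valko, the cited art, or the application):

```python
# Hypothetical sketch of the presence-count-gated behaviors discussed in the
# rejections of claims 15-18. Names and groupings are illustrative only.

def respond_to_presence(person_count: int) -> list[str]:
    """Map a detected person count to illustrative device actions."""
    actions: list[str] = []
    if person_count == 0:
        # cf. Zhang (pertinent art): device locks when the user walks away
        actions.append("lock_device")
    elif person_count == 1:
        actions.append("gesture_detection")      # cf. claim 16 (Li)
        actions.append("gaze_tracking")          # cf. claim 17 (Choi)
        actions.append("beamform_to_user")       # cf. claim 18 (Liu)
    else:
        # two or more persons detected
        actions.append("enable_privacy_filter")  # cf. claim 15 (Mukherjee)
    return actions
```

This is only a reading aid for the claim mappings above, not a representation of how any cited reference implements the behavior.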

Prosecution Timeline

Nov 11, 2022
Application Filed
Jul 11, 2025
Non-Final Rejection — §102, §103
Oct 10, 2025
Interview Requested
Oct 14, 2025
Response Filed
Oct 16, 2025
Applicant Interview (Telephonic)
Oct 16, 2025
Examiner Interview Summary
Oct 27, 2025
Non-Final Rejection — §102, §103
Oct 29, 2025
Applicant Interview (Telephonic)
Feb 03, 2026
Response Filed
Mar 19, 2026
Final Rejection — §102, §103
Apr 14, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586187
LESION LINKING USING ADAPTIVE SEARCH AND A SYSTEM FOR IMPLEMENTING THE SAME
2y 5m to grant Granted Mar 24, 2026
Patent 12525024
ELECTRONIC DEVICE, METHOD, AND COMPUTER READABLE STORAGE MEDIUM FOR DETECTION OF VEHICLE APPEARANCE
2y 5m to grant Granted Jan 13, 2026
Patent 12518510
MACHINE LEARNING FOR VECTOR MAP GENERATION
2y 5m to grant Granted Jan 06, 2026
Patent 12498556
Microscopy System and Method for Evaluating Image Processing Results
2y 5m to grant Granted Dec 16, 2025
Patent 12488438
DEEP LEARNING-BASED IMAGE QUALITY ENHANCEMENT OF THREE-DIMENSIONAL ANATOMY SCAN IMAGES
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
66%
Grant Probability
99%
With Interview (+32.8%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 56 resolved cases by this examiner. Grant probability derived from career allow rate.
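The projection figures above follow from simple arithmetic on the examiner's career data (37 granted of 56 resolved, +32.8 point interview lift). The exact formula the tool uses is an assumption; an additive interview lift capped at 100% reproduces the displayed numbers:

```python
# Reproduce the displayed projections from the career stats shown above.
# Assumes (not confirmed) that the tool adds the interview lift to the
# base allow rate and caps the result at 100%.

granted, resolved = 37, 56    # career: 37 granted of 56 resolved cases
interview_lift = 32.8         # percentage points, from interviewed cases

base_rate = 100 * granted / resolved            # 66.07...
with_interview = min(base_rate + interview_lift, 100.0)

print(round(base_rate))       # 66  (displayed grant probability)
print(round(with_interview))  # 99  (displayed "With Interview" figure)
```

If the real model conditions on more than the raw allow rate (art unit, statute mix, round number), the additive cap here is only a first-order approximation.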
