Prosecution Insights
Last updated: April 19, 2026
Application No. 18/488,423

ELECTRONIC DEVICE FOR TRACKING OBJECTS

Final Rejection §103
Filed: Oct 17, 2023
Examiner: NADKARNI, SARVESH J
Art Unit: 2629
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 6 (Final)
Grant Probability: 72% (Favorable)
OA Rounds: 7-8
Time to Grant: 3y 0m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 72% (354 granted / 494 resolved), +9.7% vs TC avg (above average)
Interview Lift: +13.7% (moderate), based on resolved cases with interview
Avg Prosecution (typical timeline): 3y 0m; 37 applications currently pending
Total Applications: 531 across all art units (career history)
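
These figures read as simple ratios over the examiner's resolved cases. A minimal sketch of one plausible derivation, assuming per-case records with a disposition and an interview flag (the field and function names are illustrative, not the tool's actual schema):

```python
# Hypothetical derivation of the card above; field names are assumed, not the tool's schema.

def examiner_metrics(resolved_cases):
    """resolved_cases: iterable of dicts like {"disposition": "granted", "had_interview": True}."""
    cases = list(resolved_cases)

    def grant_rate(subset):
        return sum(c["disposition"] == "granted" for c in subset) / len(subset) if subset else 0.0

    allow_rate = grant_rate(cases)                                 # e.g. 354 / 494 = 0.717, shown as 72%
    with_iv = [c for c in cases if c["had_interview"]]
    without_iv = [c for c in cases if not c["had_interview"]]
    interview_lift = grant_rate(with_iv) - grant_rate(without_iv)  # shown here as +13.7%

    return {"allow_rate": allow_rate, "interview_lift": interview_lift}
```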

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 72.6% (+32.6% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center average is an estimate • Based on career data from 494 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Arguments Applicant's arguments filed December 29, 2025 have been fully considered but they are not persuasive. At pages 12-13 of the Remarks, Applicant alleges the references do not teach or disclose "based on a determination that the wearable device is outside of the FOV of the first image sensor, reduce a frame rate of the first image sensor during the second time period" and "track the hand based on a second input during a second time period while the first image sensor of the plurality of image sensors is operating with the reduced frame rate, wherein a frame rate of the second image sensor is higher than the frame rate during the second time period." Examiner respectfully disagrees. Examiner respectfully submits the references clearly disclose these limitations. Specifically, as properly addressed below, Miura discloses based on a determination the object is outside of the FOV of the first image sensor (FIGS.1-3, [0021]-[0022] and [0039]-[0043] depth camera 120 enters a low state based on determination of whether target object is outside field of view; further at FIGS. 6A-7 and [0056]-[0064]), reduce a frame rate of the first image sensor during the second time period (FIGS.1-3, [0021]-[0022] and [0039]-[0043] depth camera 120 enters a low state being and determination of whether target object is outside field of view, second time period is inherently after determination of object not within the field of view; further at FIGS. 6A-7 and [0056]-[0064]). Furthermore, Examiner respectfully submits the combination of references discloses tracking the hand based on a second input during a second time period (Yan, Abstract, both field of view tracker and non-FOV tracking subsystems; therefore, hand and image of hand is being tracked; FIGS. 5 measurements such controller measurement data 510 (i.e. hand) and HMD data 512 all of which are non-image based data, received from non-FOV tracker 344 at [0047]-[0048] and [0062]-[0070], FIG. 6 producing an HMD controller pose at steps 614 and 616 as disclosed at least at [0080]-[0082] with motion detection model used as the input during this time, the hand is being tracked along with the hand-held controller, as would be understood by one of ordinary skill in the art) while the first image sensor of the plurality of image sensors is operating with the reduced frame rate (Miura at FIGS. 1-3 and 6A-7 and [0034]-[0043] and [0058]-[0064] noting the capturing of the object/hand by both other cameras 100 and 110 even when the depth camera 120 enters a lowered state with capture rate of 10 frames/second), wherein a frame rate of the second image sensor is higher than the reduced frame rate during the second time period (Miura at FIGS. 1-3 and 6A-7 and [0021]-[0022], [0035]-[0043] and [0058]-[0064] cameras 100 and 110 remain at 30 frames/second regardless of whether the object is within or outside of FOV, specifically describing 100 and 110 not the target of power reduction at [0034]). As such, Examiner respectfully submits the limitations in question are clearly taught by the properly combined references below. Therefore, the claims stand rejected. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-5 and 16-21, 23-25 and 33-34, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al., US 2020/0372702 A1 (hereinafter “Yan”) in view of Shinoda et al., US 2020/0108308 A1 (hereinafter “Shinoda”) further in view of Miura, US 2018/0108145 A1 (hereinafter “Miura”). Regarding claim 1, Yan discloses an apparatus (FIG. 3 describing head mounted display HMD 317 and [0043]-[0044]) comprising: at least one memory (FIGS. 3-4 with memory 304 described at least at [0043] and [0053]); and at least one processor (302) coupled to the at least one memory (FIGS. 3-4 describing processor(s) 302 described at [0043] and [0053] and memory 304), the at least one processor being configured to: obtain, from a plurality of image sensors image sensor (image capture devices 138a and 138b), an image (captured image at [0010]) including a device (114) on a hand (132) of a user (FIG. 2 with image capture devices 138a and 138b as described at [0047] for providing image data to FOV [0087] and further FIGS. 6, 12A, 12b (114) and [0080] with step 606 yes branch and 610 describing FOV tracker capable of determining the controller pose in space based on the image-based controller state data, noting that the process of FIG. 6 is periodically occurring and frame capture and processing is happening in real time, and would require time determination for proper execution of the method steps of FIG. 6 (see, for example, [0077]-[0078]), and other methods such as FIGS. 7-11 for the commonly understood benefit of determination of whether the device is or is not within view; controller is in and on the hand of the user as depicted at least at FIGS. 12A-13B), wherein the plurality of image sensors includes at least a first image sensor (image capture device 138a) and a second image sensor (image capture device 138b); receive a first input (image data 518) from one or more sensors (138A, 138B) of the device (112, HMD) during a first time period (FIGS. 2-5 [0063]-[0064] HMD state 512; various controller measurements 510 are also received, including motion sensor 306 of controller 114 providing various data as described therein including distance and radar tracking and positional information, the temporal determination is inherent in the methods as discussed above); track the hand of the user based on the image and the first input during the first time period (Yan, Abstract, both field of view tracker and non-FOV tracking subsystems; therefore, hand and image of hand is being tracked; FIGS. 
1A-5 describing rendering output images and interactions based on the received inputs as disclosed at least at [0035]-[0039] and [0040]-[0043], noting the hand is being tracked in accordance with the controller since it is a hand-held controller – see Abstract and [0025] and [0030]-[0034] and [0040]); determine, during a second time period (when not trackable during a previous display frame), that the device (114) is outside of a field of view (FOV) of the first image sensor of the plurality of image sensors (FIGS. 15A-15B described at [0102]-[0104], temporal determination is inherent in the method steps of FIGS. 6-11 as discussed above, various steps of these methods making a determination that the device is out of FOV, for example, further specifically see FIG. 6 with object out of FOV at steps 606-614) and disclosed at least at [0079]-[0082]); receive a second input from the one or more sensors (306) of the wearable device (114) during the second time period (FIG. 5 describing received input controller measurements 510 from sensors 306 from controller114 from a non-FOV tracker 344 as disclosed at [0063]-[0073]), temporal determination being inherent based on method processing of inputs); and track the hand (132) of the user based on the second input during the second time period using input from at least the second image sensor of the plurality of image sensors (Yan, Abstract, both field of view tracker and non-FOV tracking subsystems; therefore, hand and image of hand is being tracked; FIGS. 5 measurements such controller measurement data 510 (i.e. hand) and HMD data 512 all of which are non-image based data, received from non-FOV tracker 344 at [0047]-[0048] and [0062]-[0070], FIG. 6 producing an HMD controller pose at steps 614 and 616 as disclosed at least at [0080]-[0082] with motion detection model used as the input during this time, the hand is being tracked along with the hand-held controller, as would be understood by one of ordinary skill in the art). However, although Yan discloses a device that is handheld but does not explicitly disclose a wearable device and based on a determination that the wearable device is outside of the FOV of the first image sensor, reduce a frame rate of the first image sensor during the second time period, and does not explicitly disclose tracking the hand based on a second input during a second time period while the first image sensor of the plurality of image sensors is operating with the reduced frame rate, wherein a frame rate of the second image sensor is higher than the frame rate during the second time period. In the same field of endeavor, Shinoda discloses the device is a wearable device ([0063] describing tracking device that is worn on the predetermined body part such as 250 and 260 as illustrated at least at FIG. 2A ). Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the device of Yan to incorporate the wearable aspect of a device as disclosed by Shinoda because the references are within the same field of endeavor, namely, head-mounted virtual reality systems with devices usable within a point of view of the user. The motivation to combine these references would have been to improve device recognition and correspondence in the virtual space (see Shinoda at least at [0078]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success. 
However, Yan in view of Shinoda does not explicitly disclose based on a determination that the object is outside of the FOV of the first image sensor, reduce a frame rate of the first image sensor during the second time period, and does not explicitly disclose tracking the hand based on a second input during a second time period while the first image sensor of the plurality of image sensors is operating with the reduced frame rate, wherein a frame rate of the second image sensor is higher than the frame rate during the second time period. In the same field of endeavor, Miura discloses based on a determination the object is outside of the FOV of the first image sensor (FIGS.1-3, [0021]-[0022] and [0039]-[0043] depth camera 120 enters a low state based on determination of whether target object is outside field of view; further at FIGS. 6A-7 and [0056]-[0064]), reduce a frame rate of the first image sensor during the second time period (FIGS.1-3, [0021]-[0022] and [0039]-[0043] depth camera 120 enters a low state being and determination of whether target object is outside field of view, second time period is inherently after determination of object not within the field of view; further at FIGS. 6A-7 and [0056]-[0064]), and tracking the hand based on a second input during a second time period while the first image sensor of the plurality of image sensors is operating with the reduced frame rate (FIGS. 1-3 and 6A-7 and [0034]-[0043] and [0058]-[0064] noting the capturing of the object/hand by both other cameras 100 and 110 even when the depth camera 120 enters a lowered state with capture rate of 10 frames/second), wherein a frame rate of the second image sensor is higher than the reduced frame rate during the second time period (FIGS. 1-3 and 6A-7 and [0021]-[0022], [0035]-[0043] and [0058]-[0064] cameras 100 and 110 remain at 30 frames/second regardless of whether the object is within or outside of FOV, specifically describing 100 and 110 not the target of power reduction at [0034]). Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the device of Yan in view of Shinoda to incorporate the frame rate adjustment of Miura because the references are within the same field of endeavor, namely, user input devices capable of tracking gestures and inputs. The motivation to combine these references would have been to accurately detect a position of an object while reducing power consumption (Miura at [0006] and [0034]-[0035]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success. Regarding claim 2, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein, to track the hand of the user, the at least one processor is configured to track movement of the wearable device relative to the apparatus (see Yan at least [0043]-[0046] describing pose tracker 326 capable of determining a current pose of HMD 112 and controller 114 within a frame of reference of HMD 112, and constructing an artificial reality content that is updated according to the movement of HMD 112 in relation to controller 114 for the display 110 of FIGS. 2-5 [104], and further FIG.15B (P1-P3)). 
Regarding claim 3, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein, to track the movement of the wearable device, the one or more processors are configured to: determine the first position of the wearable device within a first coordinate system of the wearable device (Shinoda describing world coordinate system at least at [0059]); transform the first coordinate system of the wearable device to a second coordinate system of the apparatus (Shinoda at [0059] describing coordinate transformation of the real world coordinates to the camera coordinates system as would be understood by one of ordinary skill in the art); and determine the second position of the wearable device within the second coordinate system of the apparatus (Shinoda at [0059] describing transformation of the real world coordinates to the virtual world coordinates system; second position would be the result of the transformation as would be understood by one of ordinary skill in the art). Regarding claim 4, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 2 (see above), wherein the at least one processor is configured to: track, based on at the tracked movement of the wearable device, a location of a hand associated with the wearable device (see Yan at least FIG. 2 with image capture devices 138 and describing FOV at [0047]-[0049], further at least see Yan at least [0078]-[0080]; Shinoda at [0059] for known methodologies of coordinate systems and transformations accordingly). Regarding claim 5, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 4 (see above), wherein the at least one processor is configured to: capture the image based on a determination that the wearable device is within the FOV of at least the second image sensors (Yan at FIG. 6 generally determining at 606, 608 and 610 whether the device is within a FOV of the sensors/cameras as described at least at [0077]-[0082]; further at FIG. 7 generally at [0084]; Miura, FIGS. 1-3 and 6A-7, [0021]-[0022] and [0034]-[0043] and [0058]-[0064] depth camera 120 capturing when the object is within a FOV); and track, based on the image, the location of the hand relative to a first coordinate system of the wearable device (see Yan at least [0078]-[0080]; Shinoda at [0059] for known methodologies of coordinate systems and transformations accordingly; Miura generally at FIGS. 1-3 and 6A-7 determination of the hand within the imaging range, an X-Y coordinate system as depicted and described at [0021]-[0022], [0038]-[0043] and [0057]-[0060] tracked by 100 and 110). Regarding claim 16, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein the at least one processor is configured to: determine, based on at least one of data from the wearable device or a command from the wearable device, one or more extended reality (XR) inputs to an XR application on the apparatus (Yan at [0028]-[0029] describing mixed reality and augmented reality as illustrated in FIG. 1A and 1B). Regarding claim 17, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 16 (see above), wherein the one or more XR inputs comprise at least one of a modification of a virtual element along multiple dimensions in space (see Yan at least FIGS. 
1A and 1B with virtual element being a sword 126 used in multiple dimensions of space as described at least at [0025]-[0030]), a selection of the virtual element (see above, condition satisfied), a navigation event (see above, condition satisfied), or a request to measure a distance defined by at least one of a first position of the wearable device, a second position of the wearable device, or a movement of the wearable device (see above, condition satisfied). Regarding claim 18, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 17 (see above), wherein the virtual element comprises at least one of a virtual object rendered by the apparatus (see Yan at least FIGS. 1A and 1B with objects and environment rendered and further describing virtual object such as sword 126 at least at [0025]-[0030] and [0034]), a virtual plane in an environment rendered by the apparatus (see above, condition satisfied), or the environment rendered by the apparatus (see above, condition satisfied). Regarding claim 19, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 17 (see above), wherein the navigation event comprises at least one of scrolling rendered content or moving from a first interface element to a second interface element (see Yan at least FIGS. 1A and 1B with virtual objects 128A-C and [0028]-[0033] movement of the virtual hand toward one element or another as would be understood by one of ordinary skill in the art). Regarding claim 20, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein the wearable device comprises a watch, a bracelet, a ring, or a glove (see Yan at least FIGS. 2A illustrating ring-like bracelet device worn around the palm as illustrated). Regarding claim 21, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein the apparatus comprises a mobile device (see Yan [0002] and [0109]) or an extended reality (XR) device including a display (Yan [0028] and [0109] describing augmented and mixed reality or hybrid reality as known to one of ordinary skill in the art). Regarding claim 23, it is similar in scope to claim 1 above, the only difference being claim 24 is directed to a method (FIG. 6 of Yan). Therefore, claim 24 is similarly analyzed and rejected as claim 1. Regarding claim 24, it is similar in scope to claim 2 above; therefore, claim 24 is similarly analyzed and rejected as claim 2. Regarding claim 25, it is similar in scope to claim 5 above; therefore, claim 25 is similarly analyzed and rejected as claim 5. Regarding claim 33, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein, to track the hand of the user, the at least one processor is configured track at least one of a gesture or a finger of the user (Yan at [0030]-[0036] rendering of the hand based on the movements and tracks and detects particular motions configurations, positions, and/or orientations of the controllers 114 that are received as inputs). Regarding claim 34, it is similar in scope to claim 33 above; therefore, claim 34 is similarly analyzed and rejected as claim 33. Regarding claim 36, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above), wherein the apparatus comprises the plurality of image sensors (Yan at FIGS. 1A-2 and 138A, 138B [0039], Miura at FIGS. 1-3 [0021]-[0022] and [0039]-[0043] depth camera 120 and camera 100 and 110). 
Claims 6-15, 26-27, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Shinoda further in view of Miura as applied to claim 1, and further in view of Wong, US 2022/0223017 A1 (hereinafter “Wong”). Regarding claim 6, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above). However, Yan in view of Shinoda further in view of Miura does not explicitly disclose wherein the at least one processor is configured to: adjust, based on a determination that the wearable device is within the FOV of the first image sensor, a setting of the first image sensor, the setting comprising at least one of a power mode of the first image sensor or an operating state of the first image sensor. In the same field of endeavor, Wong discloses wherein the at least one processor is configured to: adjust, based on a determination that the wearable device is within the FOV of the first image sensor (noting determination of controller within POV is performed by Yan at FIG. 6 step 610 and discussed at [0080]), a setting of the first image sensor, the setting comprising at least one of a power mode of the first image sensor or an operating state of the first image sensor (Wong at FIGS. 1B-4 and [0019]-[0025] describing high power mode of a camera upon motion detection event). Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual reality system of Yan in view of Shinoda further in view of Miura to incorporate the camera power saving modes as disclosed by Wong because the references are within the same field of endeavor, namely, camera detection systems. The motivation to combine these references would have been to improve power consumption and functionality of a battery operated camera system (see Wong at least at [0002]-[0005]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success. Regarding claim 7, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 6 (see above), wherein, to adjust the setting of the first image sensor, the at least one processor is configured to change the power mode of the first image sensor from a first power mode to a second power mode, the second power mode being a higher power mode than the first power mode (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Regarding claim 8, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 6 (see above), wherein, to adjust the setting of the first image sensor, the at least one processor is configured to change the operating state of the first image sensor from a first operating state to a second operating state, the second operating state being a higher operating state than the first operating state (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Regarding claim 9, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 8 (see above), wherein the first operating state comprises a first frame rate, and wherein the second operating state comprises a second frame rate, the second frame rate being a higher frame rate than the first framerate (WONG at FIGS. 
1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein and further [0035] describing frame rate of the sensor and detection, frame rate increase would lead to higher resolution image, as is understood by one of ordinary skill). Regarding claim 10, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 8 (see above), wherein the first operating state comprises a first resolution, and wherein the second operating state comprises a second resolution, the second resolution being a higher resolution than the first resolution (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Regarding claim 11, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above). However, Yan in view of Shinoda further in view of Miura does not explicitly disclose wherein the at least one processor is configured to: adjust, based on the determination that the wearable device is outside of the FOV of the first image sensor apparatus, a setting of the first image sensor, the setting comprising at least one of a power mode of the first image sensor or an operating state of the first image sensor. In the same field of endeavor, Wong discloses wherein the at least one processor is configured to: adjust, based on the determination that the wearable device is outside of the FOV of the first image sensor apparatus (noting determination of controller outside of POV is performed by Yan at FIG. 6 step 606 and discussed at [0080]), a setting of the first image sensor, the setting comprising at least one of a power mode of the first image sensor or an operating state of the first image sensor (Wong at FIGS. 1B-4 and [0019]-[0025] describing high power mode of a camera upon motion detection event). Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual reality system of Yan in view of Shinoda further in view of Miura to incorporate the camera power saving modes as disclosed by Wong because the references are within the same field of endeavor, namely, camera detection systems. The motivation to combine these references would have been to improve power consumption and functionality of a battery operated camera system (see Wong at least at [0002]-[0005]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success. Regarding claim 12, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 11 (see above), wherein, to adjust the setting of the first image sensor, the at least one processor is configured to change the power mode of the first image sensor from a first power mode to a second power mode, the second power mode being a lower power mode than the first power mode (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). 
Regarding claim 13, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 11 (see above), wherein, to adjust the setting of the first image sensor, the at least one processor is configured to change the operating state of the first image sensor from a first operating state to a second operating state, the second operating state being a lower operating state than the first operating state (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Regarding claim 14, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 13 (see above), wherein the first operating state comprises a first framerate, and wherein the second operating state comprises a second framerate, the second framerate being a lower framerate than the first framerate (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein and further [0029]-[0038] describing frame rate of the sensor and detection, frame rate increase would lead to higher resolution image, as is understood by one of ordinary skill). Regarding claim 15, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the apparatus of claim 13 (see above), wherein the first operating state comprises a first resolution, and wherein the second operating state comprises a second resolution, the second resolution being a lower resolution than the first resolution (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein, the opposite would be inherently the state of the device). Regarding claim 26, it is similar in scope to claim 6 above; therefore, claim 26 is similarly analyzed and rejected as claim 6. Regarding claim 27, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the method of claim 26 (see above), wherein adjusting the setting of the first image sensor comprises at least one of: changing the power mode of the first image sensor from a first power mode to a second power mode, the second power mode being a higher power mode than the first power mode (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein); or changing the operating state of the first image sensor from a first operating state to a second operating state, the second operating state being a higher operating state than the first operating state (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Regarding claim 29, it is similar in scope to claim 11 above; therefore, claim 29 is similarly analyzed and rejected as claim 11. Regarding claim 30, Yan in view of Shinoda further in view of Miura further in view of Wong discloses the method of claim 29 (see above), wherein adjust the setting of the first image sensor comprises at least one of: changing the power mode of the first image sensor from a first power mode to a second power mode, the second power mode being a lower power mode than the first power mode (Wong at FIGS. 
1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein); or changing the operating state of the first image sensor from a first operating state to a second operating state, the second operating state being a lower operating state than the first operating state (Wong at FIGS. 1B-4 described at [0019]-[0025] describing higher power and higher resolution upon detection of an object therein). Claims 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Shinoda further in view of Miura as applied to claim 1, and further in view of Holz, US 2014/0192206 A1 (hereinafter “Holz”). Regarding claim 31, Yan in view of Shinoda further in view of Miura discloses the apparatus of claim 1 (see above). However, Yan in view of Shinoda further in view of Miura does not explicitly disclose wherein, to reduce the frame rate of the first image sensor, the at least one processor is configured to: disable the first image sensor during the second time period. In the same field of endeavor, Holz discloses wherein, to reduce the frame rate of the first image sensor, the at least one processor is configured to: disable the first image sensor during the second time period ([0003] describing a sleep mode in lieu of a lowered frame rate, further described at least at [0025] and FIG. 2). Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the device of Yan in view of Shinoda further in view of Miura to incorporate the sleep mode of Holz because the references are within the same field of endeavor, namely, devices with camera capture input for a display system and user interfacing. The motivation to combine these references would have been to reduce the processing burden (see Holz at least at [0023]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success. Regarding claim 32, it is similar in scope to claim 31 above; therefore, claim 31 is similarly analyzed and rejected as claim 32. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hamasaki, US 2024/0290042 A1; Rojas et al., US 2023/0325967 A1; Taylor et al., US 2022/0326527 A1; THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J NADKARNI whose telephone number is (571)270-7562. The examiner can normally be reached 8AM-5PM M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao can be reached at (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SARVESH J NADKARNI/Examiner, Art Unit 2621 /LUNYI LAO/Supervisory Patent Examiner, Art Unit 2621
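
Setting the legal posture aside, the limitation in dispute describes a simple control loop: while the wearable is visible to the first image sensor, the hand is tracked from images plus the wearable's sensor data; once the wearable leaves that sensor's field of view, the first sensor's frame rate is reduced and tracking continues from the wearable's data and a second image sensor running at a higher rate. A minimal sketch of that behavior, with all names, rates, and interfaces as illustrative stand-ins rather than anything taken from the claims or the cited references:

```python
# Illustrative sketch of the disputed claim-1 behavior. The sensor/tracker interfaces,
# names, and frame rates are hypothetical stand-ins, not Qualcomm's implementation.

FULL_RATE_FPS = 30
REDUCED_RATE_FPS = 10

def tracking_step(first_sensor, second_sensor, wearable, tracker):
    """One iteration of the hand-tracking loop described by claim 1."""
    frame_a = first_sensor.capture()
    frame_b = second_sensor.capture()

    if first_sensor.sees(wearable):
        # "First time period": the wearable is inside the first sensor's FOV, so the hand
        # is tracked from the images plus the wearable's sensor data (the first input).
        first_sensor.set_frame_rate(FULL_RATE_FPS)
        tracker.update(images=[frame_a, frame_b], wearable_data=wearable.read_sensors())
    else:
        # "Second time period": the wearable is outside the first sensor's FOV, so that
        # sensor's frame rate is reduced while the second sensor stays at the higher rate,
        # and the hand is tracked from the wearable's data (the second input) plus the
        # second sensor's images.
        first_sensor.set_frame_rate(REDUCED_RATE_FPS)
        second_sensor.set_frame_rate(FULL_RATE_FPS)
        tracker.update(images=[frame_b], wearable_data=wearable.read_sensors())
```

In the rejection, the examiner maps the FOV determination and in-FOV tracking to Yan, the wearable form factor to Shinoda, and the asymmetric frame rates to Miura, whose depth camera drops to 10 frames/second while its other cameras remain at 30.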

Prosecution Timeline

Oct 17, 2023: Application Filed
May 04, 2024: Non-Final Rejection — §103
Jul 11, 2024: Interview Requested
Jul 23, 2024: Applicant Interview (Telephonic)
Jul 23, 2024: Examiner Interview Summary
Jul 30, 2024: Response Filed
Sep 24, 2024: Final Rejection — §103
Oct 31, 2024: Interview Requested
Nov 08, 2024: Examiner Interview Summary
Nov 08, 2024: Applicant Interview (Telephonic)
Nov 26, 2024: Response after Non-Final Action
Dec 11, 2024: Response after Non-Final Action
Dec 18, 2024: Request for Continued Examination
Dec 19, 2024: Response after Non-Final Action
Jan 09, 2025: Non-Final Rejection — §103
Mar 21, 2025: Interview Requested
Mar 27, 2025: Examiner Interview Summary
Mar 27, 2025: Applicant Interview (Telephonic)
Apr 04, 2025: Response Filed
Jun 03, 2025: Final Rejection — §103
Jul 21, 2025: Interview Requested
Jul 28, 2025: Applicant Interview (Telephonic)
Jul 28, 2025: Examiner Interview Summary
Aug 04, 2025: Response after Non-Final Action
Aug 27, 2025: Request for Continued Examination
Aug 28, 2025: Response after Non-Final Action
Sep 29, 2025: Non-Final Rejection — §103
Dec 10, 2025: Interview Requested
Dec 16, 2025: Applicant Interview (Telephonic)
Dec 16, 2025: Examiner Interview Summary
Dec 29, 2025: Response Filed
Feb 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573325
SCAN SIGNAL DRIVER CIRCUIT, DISPLAY PANEL, DISPLAY DEVICE, AND DRIVING METHOD
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12560967
ANNULAR HOUSING FOR DETECTION DEVICE WITH FIRST AND SECOND FLEXIBLE SUBSTRATES
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554334
PERSONALIZED CALIBRATION OF USER INTERFACES
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12548519
POWER SUPPLY SYSTEM, DISPLAY DEVICE INCLUDING THE SAME, AND METHOD OF DRIVING THE SAME
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12504831
TACTILE PRESENTATION APPARATUS AND TACTILE PRESENTATION KNOB
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 72%
With Interview: 85% (+13.7%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 494 resolved cases by this examiner. Grant probability derived from career allow rate.
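
The footnote suggests these projections are direct lookups on the examiner's career data rather than a model; a tiny sketch under that reading (the additive interview adjustment and the rounding are assumptions):

```python
# Hypothetical projection from the examiner stats above; the additive interview
# adjustment is an assumption, not the tool's documented method.

def project(career_allow_rate=0.72, interview_lift=0.137):
    # 0.72 + 0.137 = 0.857 here; the page shows 85%, so its inputs are presumably less rounded.
    with_interview = min(career_allow_rate + interview_lift, 1.0)
    return {
        "grant_probability": round(career_allow_rate, 2),
        "with_interview": round(with_interview, 2),
    }

print(project())  # {'grant_probability': 0.72, 'with_interview': 0.86}
```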

Free tier: 3 strategy analyses per month