Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,948

VIDEO CAMERA OPTIMIZATIONS BASED ON ENVIRONMENTAL MAPPING

Non-Final OA (§103)
Filed: May 17, 2024
Examiner: CATTUNGAL, ROWINA J
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% — above average (393 granted / 521 resolved; +17.4% vs TC avg)
Interview Lift: +13.0% (moderate lift among resolved cases with interview)
Avg Prosecution: 2y 6m typical timeline (33 currently pending)
Total Applications: 554 across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 521 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the RCE filed 03/03/2026; claims 1-27 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/03/2026 has been entered.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/03/2026 and 03/06/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

Applicant's arguments (see pages 8-9, filed 2/27/2026) with respect to the rejections of the claims have been fully considered, but the amended claims are moot in view of the new grounds of rejection made in view of Zheng et al. (CN115016752A) (machine translation attached).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 11-12, and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached).

Regarding claim 1, Tarifa discloses a method (Fig. 5 & Para[0004], [0037] teach a method 500 for detecting a perceptible flicker) comprising: at a head-mounted device (HMD) having a processor (Fig. 2 & Para[0037] teach a virtual reality headset 132), one or more outward-facing cameras associated with one or more eye viewpoints (Para[0029] teaches the virtual reality system may continuously capture images of the real world using inside-out cameras mounted on the headset): providing pass-through video via the HMD in which video captured via the one or more outward-facing cameras is presented on one or more displays to provide an approximately live view of a physical environment (Para[0004], [0029] teach a virtual reality system may include capturing a real-world view through inside-out cameras and rendering the images for the user to view); determining an environment characteristic corresponding to an appearance of flicker associated with one or more light sources of the physical environment (Fig. 5 & Para[0035]-[0037]: at step 530, one or more peaks based on the light intensity metrics are detected, and a likelihood of perceptible flicker is determined in step 540; the perceptible flicker may be indicative of an offset between the frequency of the light sources and the frame rate of the camera. In step 550, when the likelihood of perceptible flicker exceeds a threshold, the virtual reality headset 132 may cause the one or more cameras to change frame rates to a second frame rate so as to reduce or eliminate the perceptible flicker).

Tarifa does not explicitly disclose, based on the environment characteristic, determining an exposure parameter of the pass-through video, wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker; and adjusting an exposure of the one or more outward-facing cameras based on the determined exposure parameter.
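The Tarifa flicker-detection flow mapped above (steps 530-550: detect peaks in per-frame light intensity, score the likelihood of perceptible flicker, and switch frame rates past a threshold) can be illustrated with a minimal sketch. The function names, the peak heuristic, and the threshold below are assumptions for illustration only, not code from the reference:

```python
# Illustrative sketch of peak-based flicker detection: a beat between a
# light source's flicker frequency and the capture frame rate shows up as a
# periodic swing in per-frame mean intensity, approximated here by counting
# local maxima. All names and the threshold are hypothetical.

def mean_intensities(frames):
    """Average pixel intensity of each captured frame (frame = list of pixels)."""
    return [sum(frame) / len(frame) for frame in frames]

def count_peaks(series):
    """Count local maxima in the intensity series (interior points only)."""
    return sum(
        1
        for i in range(1, len(series) - 1)
        if series[i - 1] < series[i] > series[i + 1]
    )

def choose_frame_rate(frames, current_fps, fallback_fps, peak_threshold=3):
    """Switch to fallback_fps when the peak count implies perceptible flicker."""
    peaks = count_peaks(mean_intensities(frames))
    return fallback_fps if peaks > peak_threshold else current_fps
```

Frames whose mean intensity oscillates strongly trip the threshold and trigger the frame-rate change; frames with steady intensity leave the current rate alone.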
However, Linde discloses, based on the environment characteristic, determining an exposure parameter of the pass-through video (Para[0028] teaches the local area model provides the exposure settings for each camera 115, 120, 125, 130 for different positions of the cameras within the environment and allows the exposure settings to be adjusted as the location and orientation of the HMD 100 changes within the environment), wherein the exposure parameter is an exposure length parameter (Para[0025] teaches the imaging instructions may include the exposure settings for each camera 115, 120, 125, 130 (e.g., exposure length, gain, aperture, etc.)); and adjusting an exposure of the one or more outward-facing cameras based on the determined exposure parameter (Para[0026] teaches the HMD controller synchronizes the exposure settings of each camera 115, 120, 125, 130 by centering the exposure length of each camera about a center time point. In other words, a midpoint of each exposure length is aligned at the same time point. This configuration accommodates the varying exposure lengths of the cameras 115, 120, 125, 130 and ensures that the same frame is captured by each camera. To center the exposures about the same time point, the HMD controller calculates a time delay for each camera 115, 120, 125, 130 that allows each camera to begin image capture after a certain period of time relative to a reference time point. The HMD controller determines an appropriate time delay for each camera according to the exposure length of the camera. Para[0065] teaches, in FIG. 5, once the HMD controller determines the center time point 525 and an appropriate time delay 530 for each camera 115, 120, 125, 130 based on the respective exposure frames 505, 510, 515, 520, the HMD controller sends the imaging instructions to each camera. As the HMD 100 changes location and orientation within the environment, the exposure settings of each camera may be adjusted, and thus the synchronization information is adjusted accordingly.).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Linde's local area model, which allows the engine to adjust the exposure settings of the cameras in the imaging assembly efficiently without re-analyzing the depth information and amount of light detected by the imaging assembly at each new location and/or orientation of the HMD, in order to provide content at maximum pixel resolution in the foveal region of the user's gaze and a lower pixel resolution in other regions of the electronic display, thus achieving less power consumption at the HMD and saving computing cycles of the HMD controller without compromising the visual experience of the user.

Tarifa in view of Linde does not explicitly disclose wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker. However, Zheng discloses wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker (Para[0068]-[0071] teaches S402: for each frame of real-world image, determine the exposure type of the left and right cameras based on the exposure duration of that real-world image.
Generally, depending on the exposure duration, the exposure types of the left and right cameras on VR devices include short exposure and natural exposure. The exposure duration of the short exposure type is shorter than that of the natural exposure type, and the two exposure types switch cyclically according to the set frame rate. Therefore, in S402, for each frame of real environment image acquired, the exposure type of the left and right cameras can be determined based on the exposure duration of the current frame of real environment image. S403: when the exposure type is short exposure, process the real environment image so that the natural-exposure real environment images of the two frames before and after it transition smoothly. Because the left and right cameras of VR devices are set to switch between short exposure and natural exposure modes, the brightness difference between two adjacent frames (one frame of short-exposure real environment image and one frame of natural-exposure real environment image) is large. If displayed directly, it may cause image flickering and dizziness and prevent users from properly perceiving the real world. Therefore, it is necessary to process the short-exposure real-world images to ensure a smooth transition between the two naturally exposed real-world images before and after the short-exposure real-world image, thus solving the image flickering problem).
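Zheng's S402/S403 handling quoted above (classify each frame by its exposure duration, then process short-exposure frames so adjacent natural-exposure frames transition smoothly) reduces, in the simplest form described later in this action, to discarding the short-exposure frames. A hypothetical sketch, with an assumed duration cutoff and illustrative names:

```python
# Hypothetical sketch of alternating short/natural exposure handling: frames
# whose exposure duration falls below an assumed cutoff are classified as
# short-exposure and dropped, so only natural-exposure frames are displayed
# and the stream does not alternate between bright and dark frames.

SHORT_EXPOSURE_MAX_MS = 5.0  # assumed cutoff between short and natural exposure

def exposure_type(exposure_ms):
    """Classify a frame's exposure by its duration."""
    return "short" if exposure_ms < SHORT_EXPOSURE_MAX_MS else "natural"

def frames_to_display(frames):
    """Keep only natural-exposure frames; input is (exposure_ms, image) pairs."""
    return [
        image
        for exposure_ms, image in frames
        if exposure_type(exposure_ms) == "natural"
    ]
```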
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Zheng's method, in which the short-exposure real environment image is discarded, a naturally-exposed real environment image is retained, and the naturally-exposed real environment image is processed and then displayed, in order to provide a system in which the problem of image flicker is solved and the user can perceive the real world in real time through the real environment image.

Regarding claim 2, Tarifa discloses the method of claim 1 further comprising presenting the pass-through video of the HMD or recording and storing the pass-through video (Para[0029] teaches a client system 130 or a virtual reality system may render a virtual space for display to a user on a display device).

Regarding claim 4, Tarifa discloses the method of claim 1, wherein the environment characteristic is based on an overall environment brightness of the physical environment (Fig. 3A & Para[0031]-[0034] teach a mean light intensity per camera is measured by taking an average of the light intensity of all pixels within a single frame or by averaging across multiple cameras, thus representing a measure of the environment brightness, which is then used to get the number of peaks and estimate a perceptible flicker).

Regarding claim 11, Tarifa discloses the method of claim 1, wherein the environment characteristic is based on virtual content to be provided overlaid on the video in a view provided by the wearable electronic device (Para[0029]: the virtual space may be an augmented reality space in which virtual elements are overlaid on the real world. As an example and not by way of limitation, the virtual reality system may continuously capture images of the real world (e.g., using a camera on the headset of the user) and overlay virtual objects or avatars of other users on these images, such that a user may interact simultaneously with the real world and the virtual world).

Regarding claim 12, Tarifa discloses the method of claim 1, wherein the environment characteristic is based on a user movement relative to the one or more light sources (Para[0019] teaches a virtual reality headset 132 may include sensor(s) 142, such as accelerometers, gyroscopes, and magnetometers, to generate sensor data that tracks the location of the headset device 132; Para[0036]: the virtual reality headset may account for the short-term discrepancies and use a historical likelihood of perceptible flicker to determine whether a perceptible flicker is actually detected).

Regarding claim 26, Linde discloses the method of claim 1, wherein the exposure length parameter comprises a length of exposure time, a maximum allowable length for exposure time, a minimum allowable length for exposure time, or a target length for exposure time (Para[0062]-[0066] & FIG. 5 teach each camera 115, 120, 125, 130 has a different-length exposure frame 505, 510, 515, 520, respectively. To synchronize the image capture of the cameras 115, 120, 125, 130, an HMD controller (e.g., the HMD controller 350) determines a center time point 525 of the exposure frames 505, 510, 515, 520 and a corresponding time delay 530 for each camera 115, 120, 125, 130. The HMD controller evaluates the length of each exposure frame 505, 510, 515, 520 and determines the midpoint of each exposure frame. The HMD controller then aligns the midpoint of each exposure frame at the center time point 525. This configuration ensures that the cameras 115, 120, 125, 130 capture the same frame while maintaining individual exposure settings. As each exposure frame 505, 510, 515, 520 may have a different length, the time delay for each camera 115, 120, 125, 130 varies accordingly. For example, since exposure frame 505 of camera 115 is longer than exposure frame 510 of camera 120, camera 115 has a shorter time delay than camera 120. Thus, camera 115 begins image capture before camera 120 such that the midpoints of the exposure frames 505, 510 align along the center time point 525. In the embodiment of FIG. 5, the time delay 530 is measured relative to the synchronization ("sync") pulse 535 as the reference time point.). Motivation to combine as indicated in claim 1.

Regarding claim 27, Zheng further discloses the method of claim 1, wherein providing the pass-through video comprises (Para[0004] teaches when entering the video perspective mode, the dual cameras of the VR head-mounted display device collect real-world images at the set frame rate, and the spliced content is displayed on the left and right glasses of the VR head-mounted display device, so that the user can perceive the real world through the dual cameras on the VR head-mounted display device): providing video captured via one or more left-eye outward-facing cameras presented on a left-eye display to provide the approximately live view of the physical environment from a left-eye viewpoint; and providing video captured via one or more right-eye outward-facing cameras presented on a right-eye display to provide the approximately live view of the physical environment from a right-eye viewpoint (Figs.
3A-3B teach the left-eye real environment image and the right-eye real environment image respectively collected by the left camera and the right camera are combined into one real environment image; that is, the environment images collected by the left camera and the right camera are displayed on one image), wherein: the pass-through video is modified to account for differences in camera position and eye position (Para[0048] teaches after enabling the video perspective mode, users can see the real environment through the dual cameras of the VR headset; Para[0067]-[0069] teaches when S401 is executed, after the VR device enters the video perspective mode, the left and right cameras respectively capture images of the real environment at a set frame rate and transmit them to the VR device's processor. The processor stitches the images of the real environment captured by the left and right cameras together to obtain a real environment image corresponding to the current frame. That is, the images of the real environment captured by the left and right cameras are displayed on one image. S402: for each frame of real-world image, determine the exposure type of the left and right cameras based on the exposure duration of that real-world image. Generally, depending on the exposure duration, the exposure types of the left and right cameras on VR devices include short exposure and natural exposure. The exposure duration of the short exposure type is shorter than that of the natural exposure type, and the two exposure types switch cyclically according to the set frame rate. Therefore, in S402, for each frame of real environment image acquired, the exposure type of the left and right cameras can be determined based on the exposure duration of the current frame of real environment image); the pass-through video is provided via a hardware-encoded rendering process (Para[0155] teaches, in Figure 9 of this application, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), hardware components, or any combination thereof) that combines images from the cameras with virtual content in views provided to each eye (Para[0053] teaches when the video perspective mode is entered, the dual cameras on the VR headset capture images of the real world at a set frame rate. Based on the internal and external parameters of the cameras, each frame of the image is distorted, and the images captured by the dual cameras are stitched together. The stitched content is then displayed on the left and right lenses of the VR headset. In this way, the user can perceive the real world through the dual cameras on the VR headset. Para[0057]-[0058] & Figs. 3A, 3B teach the images of the real environment captured by the left and right cameras are combined into a single real environment image, meaning that the environmental images captured by the left and right cameras are displayed on one image); and the exposure parameter is adjusted to reduce the appearance of flicker in the pass-through video (Para[0102] teaches reducing the brightness difference between adjacent frames of short-exposure and natural-exposure images by discarding the short-exposure real-environment images and retaining only the natural-exposure real-environment images. This solves the image flickering problem when the natural-exposure real-environment images are displayed after basic deformations such as distortion correction and cropping. This allows users to perceive the real-world environment through the real-environment images displayed on VR devices, thus improving the user experience.). Motivation to combine as indicated in claim 1.

Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached), in further view of Sun et al. (US 2023/0100795 A1) (IDS provided 10/14/2024).

Regarding claim 3, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on whether light sources are producing light with low or high temporal frequency. However, Sun discloses wherein the environment characteristic is based on whether light sources are producing light with low or high temporal frequency (Para[0024] teaches that a metric to prioritize the frequency of the light sources is the power-cycle function P(f), which discloses that the higher the frequency, the less likely it can affect the image quality through flickering, since more power cycles can be contained in a captured frame).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Sun's method of reducing the flicker effect of multiple light sources in an image captured with an imaging device, in order to provide a system for prioritizing the lighting frequencies, determining the exposure-time factorization sets, and adjusting the exposure time of the imaging device for additional light sources of the set of light sources to be detected, so as to improve image quality with reduced flicker relative to the light sources.

Regarding claim 16, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein determining the exposure parameter balances a flicker consideration and a motion-based blur consideration. However, Sun discloses wherein determining the exposure parameter balances a flicker consideration and a motion-based blur consideration (Para[0020], [0025] teach prioritization is necessary because each light source may convey its own lighting frequency, and in order to reduce flicker as much as possible from all the frequencies, an exposure time must be long enough to cover the greatest common divisor. However, this exposure time could be too long and thereby cause an over-exposed image. Thus, the lighting frequencies are prioritized to identify those that most affect the image quality. Such a shift can be considered as motion in image-based motion metering, which could lead to more exposure change (e.g., a shorter exposure time to reduce motion blur), further impacting image quality).
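Sun's trade-off quoted above (an exposure long enough to cover the greatest common divisor of the lighting frequencies, unless that exposure would over-expose, in which case the prioritized frequency is addressed alone) can be sketched as follows. The helper name, units, and the fallback rule are illustrative assumptions, not code from the reference:

```python
# Illustrative sketch of GCD-based exposure selection: an exposure spanning
# one period of the GCD of all lighting frequencies contains a whole number
# of power cycles of every source, averaging out their flicker. If that
# exposure is too long, fall back to the lowest (most visible) frequency.

from functools import reduce
from math import gcd

def flicker_free_exposure_ms(freqs_hz, max_exposure_ms):
    """Exposure (ms) covering the GCD period of all sources, or the dominant one."""
    common = reduce(gcd, freqs_hz)      # common frequency of all sources (Hz)
    exposure = 1000.0 / common          # one full period of the common frequency
    if exposure <= max_exposure_ms:
        return exposure
    # Over-exposure risk: lower frequencies affect the image most, so
    # address only the lowest-frequency source (assumed prioritization).
    return 1000.0 / min(freqs_hz)
```

For 100 Hz and 120 Hz sources the common frequency is 20 Hz, so a 50 ms exposure averages out both (5 and 6 full cycles respectively); if 50 ms would over-expose, only the 100 Hz source is covered with a 10 ms exposure.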
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Sun's method, in which higher-prioritized frequencies are addressed first to improve image quality over other lighting-frequency effects, in order to provide a system that prioritizes the lighting frequencies, determines the exposure-time factorization sets, and adjusts the exposure time of the imaging device for additional light sources of the set of light sources to be detected, so as to improve image quality with reduced flicker relative to the light sources.

Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached), in further view of Herman (US 2020/0389582 A1).

Regarding claim 5, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on a time of day. However, Herman discloses wherein the environment characteristic is based on a time of day (Para[0022] teaches the exposure time of the camera may vary based on the ambient light environment; for example, a camera may have short exposures during bright daylight and long exposures at twilight or at night).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Herman's method of correctly identifying flickering light sources at range, in order to provide a system in which the effects of light source flicker are detected and filtered out.

Regarding claim 7, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on a persistent digital map of the physical environment identifying flicker characteristics of each of the one or more light sources, wherein the persistent digital map is based on previously-obtained sensor data. However, Herman discloses wherein the environment characteristic is based on a persistent digital map of the physical environment identifying flicker characteristics of each of the one or more light sources, wherein the persistent digital map is based on previously-obtained sensor data (Para[0017]-[0018]: by using map information of road signage incorporating light sources with a characteristic flicker frequency (e.g., traffic lights) to assist the cameras in locating the light source for imaging purposes).
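The persistent-map idea in the claim-7 mapping (previously recorded light sources with known flicker characteristics, consulted by position instead of re-measured) might look like the following sketch. The map layout, coordinate scheme, and names are hypothetical:

```python
# Hypothetical sketch of querying a persistent light-source map: sources are
# keyed by 2D position and store a previously measured flicker frequency;
# sources near the device's current position feed the exposure decision.

def nearby_flicker_freqs(light_map, position, radius):
    """Flicker frequencies (Hz) of mapped sources within `radius` of `position`."""
    px, py = position
    return [
        freq
        for (x, y), freq in light_map.items()
        if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2
    ]
```

A device entering a mapped area can thus pick an anti-flicker exposure before its own cameras have observed the sources.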
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Herman's localization method of precisely locating the light sources using HD maps, in order to provide a system in which the effects of light source flicker are detected and filtered out at relatively wider working distances between the cameras and light source.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached), in further view of Steedly et al. (US 2020/0333878 A1).

Regarding claim 6, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on determining that a light source of the physical environment is occluded. However, Steedly discloses wherein the environment characteristic is based on determining that a light source of the physical environment is occluded (Para[0038] teaches data from the IMU on the handheld object 106 can further inform tracking, such as when the light sources might be occluded from view).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker, alerts the user and/or modifies the frame rate to remove any perceptible flicker, and allows the engine to efficiently adjust the exposure settings of the cameras in the imaging assembly, with Steedly's method of reducing the perceived user brightness of the handheld object light sources, in order to provide a head-mounted device that improves the quality of images of the background environment for HMD pose tracking and extends battery life.

Regarding claim 13, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein determining the exposure parameter comprises determining to alter a normal exposure parameter to an exposure selected to reduce flicker. However, Steedly discloses wherein determining the exposure parameter comprises determining to alter a normal exposure parameter to an exposure selected to reduce flicker (Para[0088] teaches changing the duty cycle at a sufficient rate to hide any flicker from the human eye). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker, alerts the user and/or modifies the frame rate to remove any perceptible flicker, and allows the engine to efficiently adjust the exposure settings of the cameras in the imaging assembly, with Steedly's method of reducing the perceived user brightness of the handheld object light sources, in order to provide a head-mounted device that improves the quality of images of the background environment for HMD pose tracking and extends battery life.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al.
(US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached), in further view of Norris et al. (WO 2022/066400 A1) (IDS provided 10/14/2024).

Regarding claim 8, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on 3D locations of light sources based on live or previously obtained sensor data. However, Norris discloses wherein the environment characteristic is based on 3D locations of light sources based on live or previously obtained sensor data (Para[0082]-[0083] teaches images are used to generate the 3D map of the physical environment 905 at the device 900). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Norris's method, which allows the HMD to automatically create a virtual image sensor in a physical environment or extended reality (XR) environment, in order to provide a selfie picture or stream a selfie view of the user's avatar for that application.

Regarding claim 9, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on 3D locations of surfaces of the physical environment based on live or previously-obtained sensor data. However, Norris discloses wherein the environment characteristic is based on 3D locations of surfaces of the physical environment based on live or previously-obtained sensor data (Para[0082]-[0083] teaches the XR environment is generated using Visual Inertial Odometry (VIO) or Simultaneous Localization and Mapping (SLAM) position tracking or the like at the device 900). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with Norris's method, which allows the HMD to automatically create a virtual image sensor in a physical environment or extended reality (XR) environment, in order to provide a selfie picture or stream a selfie view of the user's avatar for that application.

Claims 10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached), in further view of Feng et al. (US 2023/0388658 A1) (IDS provided 10/14/2024).

Regarding claim 10, Tarifa in view of Linde and Zheng discloses the method of claim 1 but does not explicitly disclose wherein the environment characteristic is based on data from a sensor having one or more photodiodes. However, Feng discloses wherein the environment characteristic is based on data from a sensor having one or more photodiodes (Para[0168]: a photosensitive flicker sensor (Flicker Sensor) also converts an optical image on each pixel on a photosensitive surface into an electrical signal.
However, the flicker sensor has only one pixel and is not light-filtered, so the electrical signal output by the flicker sensor is an electrical signal converted from an optical image on the only one pixel). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate using anti-flicker light pulses to remove any perceptible flicker, with the method of Feng, which adjusts the exposure time and the frame interval so that the display screen of the electronic device does not display rolling bright and dark stripes caused by the first artificial light source, in order to provide a system that improves the user experience. Regarding claim 14, Tarifa in view of Linde and Zheng discloses the method of claim 1. Tarifa in view of Linde and Zheng does not explicitly disclose wherein determining the exposure parameter comprises determining which of multiple light sources in the physical environment to use to select an exposure to reduce flicker. However, Feng discloses wherein determining the exposure parameter comprises determining which of multiple light sources in the physical environment to use to select an exposure to reduce flicker (para[0005], [00159] teaches reducing banding from a plurality of artificial light sources and a related apparatus. An electronic device may determine flicker frequencies of the plurality of artificial light sources, and select two of the flicker frequencies, denoted as F1 and F2. The electronic device determines, based on the two flicker frequencies, an exposure time and a frame interval used by the electronic device to obtain images subsequently).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate using anti-flicker light pulses to remove any perceptible flicker, with the method of Feng, which can eliminate a banding phenomenon caused by a single artificial light source and attenuate a banding phenomenon caused by other artificial light sources, in order to provide a system that avoids scrolling bright and dark streaks in images on the electronic device. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Tarifa et al. (US 2021/0058551) (IDS provided 10/14/2024) in view of Linde et al. (US 2019/0108652 A1) and Zheng et al. (CN115016752A) (machine translation attached) in further view of Feng et al. (US 2023/0388658 A1) (IDS provided 10/14/2024) and Sun et al. (US 2023/0100795 A1) (IDS provided 10/14/2024). Regarding claim 15, Tarifa in view of Linde and Zheng in further view of Feng discloses the method of claim 14. Tarifa in view of Linde and Zheng in further view of Feng does not explicitly disclose wherein determining the exposure parameter comprises determining a score for each of the multiple light sources corresponding to flicker objectionability or flicker visibility. However, Sun discloses wherein determining the exposure parameter comprises determining a score for each of the multiple light sources corresponding to flicker objectionability or flicker visibility (para[0005] teaches detecting a lighting frequency associated with each of the multiple light sources. The lighting frequencies of the light sources are then prioritized, relative to the flicker effect upon the image, to identify at least a first-prioritized lighting frequency and a second-prioritized lighting frequency.
A first exposure-time factorization set for the first-prioritized lighting frequency is determined, and a second exposure-time factorization set for the second-prioritized lighting frequency is likewise determined). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Tarifa in view of Linde and Zheng in further view of Feng, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate using anti-flicker light pulses to remove any perceptible flicker and eliminates a banding phenomenon caused by a single artificial light source, with the method of Sun, in which the lighting frequencies are prioritized to identify those that most affect the image quality, in order to provide a system in which higher-prioritized frequencies are addressed first to improve image quality over other lighting frequency effects. Claims 17 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Gruen (US 2023/0222741 A1) (US corresponding to WO 2021/257172 A1, IDS provided 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Zheng et al. (CN115016752A) (machine translation attached). Regarding claim 17, Gruen discloses a head-mounted device (HMD) comprising: a motion sensor; a left-eye display; a right-eye display (Para[0019] & FIG. 1B, the head-mounted display device 106 includes a left display 112L configured to present the left display image pixel stream to a left eye of the user 102, and a right display 112R configured to present the right display image pixel stream to a right eye of the user 102); one or more left-eye outward-facing cameras associated with a left-eye viewpoint; one or more right-eye outward-facing cameras associated with a right-eye viewpoint (Para[0015] teaches the head-mounted display device 106 includes, in this example, a pair of outward-facing stereo cameras 108 configured to image a real-world physical scene 110.
For example, the cameras 108 may be color (e.g., RGB) cameras. A left camera 108L is configured to capture a left camera image pixel stream and a right camera 108R is configured to capture a right camera image pixel stream); a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations comprising (para[0063] teaches Non-volatile storage device 606 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein); providing pass-through video via the HMD in which video captured via the one or more left-eye outward-facing cameras is presented on the left-eye display to provide an approximately live view of a physical environment from the left-eye viewpoint and video captured via the one or more right-eye outward-facing cameras is presented on the right-eye display to provide the approximately live view of the physical environment from the right-eye viewpoint (Para[0050], [0056]-[0058] & Fig. 5 teaches the method 500 may be performed repeatedly for each corresponding camera/display of the video pass-through computing system. Thus, in one example, the method 500 may be performed for a left camera corresponding to a left display, and the method 500 may be repeated for a right camera corresponding to a right display.
Video pass-through computing systems and methods are discussed in the context of left and right cameras “passing through” imagery to corresponding left and right displays); Gruen does not explicitly disclose determining an environment characteristic corresponding to an appearance of flicker associated with one or more light sources of the physical environment; based on the environment characteristic, determining an exposure parameter of the pass-through video, wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker; and adjusting an exposure of the one or more left-eye outward-facing cameras and one or more right-eye outward-facing cameras based on the determined exposure parameter. However, Steedly discloses determining an environment characteristic corresponding to an appearance of flicker associated with one or more light sources of the physical environment (Para[0088] teaches the control signal may be configured to modulate brightness of the light sources by instructing the handheld object to vary a duty cycle of power applied to each light source, at 1318. As discussed above, the logic device may instruct the handheld object to change the duty cycle at a sufficient rate to hide any flicker from the human eye); based on the environment characteristic, determining an exposure parameter of the pass-through video (para[0066]-[0067] & Fig. 10 teaches Thus, to allow light pulses of sufficient width for more certain camera/light pulse synchronization while avoiding illumination of the light sources during an environmental tracking exposure, a light pulse sequence may utilize light pulses arranged in various patterns configured to have sufficiently similar overall integral intensities to maintain a uniform perceived brightness.
Para[0091] If a light pulse 1410 of the one or more light source is not detected in the exposure 1408, the image sensing system is determined to be out of phase with the light source pulse frequency, and another handheld object tracking exposure is taken in a different portion of the time interval 1404. Likewise, when the handheld object tracking exposure 1408 is in the same portion of the time interval 1404 as the light pulse 1410, the acquisition of another exposure may be omitted. Para[0123] & Fig. 25 teaches At 2502, method 2500 comprises, for each camera in the stereo camera arrangement, receiving image data of a field of view of the camera. In some examples, the image data may comprise a sequence of handheld object tracking exposures and environmental tracking exposures in a 1:1 ratio. In other examples, the image data may comprise a greater number of handheld object tracking exposures than environmental tracking exposures, or a greater number of environmental tracking exposures than handheld object tracking exposures, depending upon whether a higher frame rate is desired for either type of exposure); and adjusting an exposure of the one or more left-eye outward-facing cameras and one or more right-eye outward-facing cameras based on the determined exposure parameter (Para[0134] & Fig. 26 teaches Method 2600 comprises, at 2602, receiving, from a first camera of the stereo camera arrangement, first image data from a perspective of the first camera, and at 2604, receiving, from a second camera of the stereo camera arrangement, second image data from a perspective of the second camera. The first image data and the second image data may be obtained by the first and second cameras at the same time, and may comprise any suitable exposure sequence and frame rate(s)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen, which adjusts the exposure timing of the cameras and of the displays presenting the image pixel streams, with the anti-flicker light pulse method of Steedly in order to provide a head-mounted device that improves the quality of the images of the background environment for HMD pose tracking and reduces the perceived brightness of the handheld object light sources. Gruen in view of Steedly does not explicitly disclose wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker. However, Zheng discloses wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker (Para[0068]-[0071] teaches S402: For each frame of real-world image, determine the exposure type of the left and right cameras based on the exposure duration of that real-world image. Generally, depending on the exposure duration, the exposure types of the left and right cameras on VR devices include short exposure and natural exposure. The exposure duration of the short exposure type is shorter than that of the natural exposure type, and the two exposure types switch cyclically according to the set frame rate. Therefore, in S402, for each frame of real environment image acquired, the exposure type of the left and right cameras can be determined based on the exposure duration of the current frame of real environment image. S403: When the exposure type is short exposure, process the real environment image to make the natural exposure real environment images of the two frames before and after the real environment image transition smoothly.
Because the left and right cameras of VR devices are set to switch between short exposure and natural exposure modes, the brightness difference between two adjacent frames (one frame of short exposure real environment image and one frame of natural exposure real environment image) is large. If displayed directly, it may cause image flickering, dizziness, and prevent users from properly perceiving the real world. Therefore, it is necessary to process the short-exposure real-world images to ensure a smooth transition between the two naturally exposed real-world images before and after the short-exposure real-world image, thus solving the image flickering problem). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Gruen in view of Steedly, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with the method of Zheng, in which a short-exposure real environment image is discarded, a naturally-exposed real environment image is reserved, and the naturally-exposed real environment image is processed and then displayed, in order to provide a system in which the problem of image flicker is solved and the user can perceive the real world in real time through the real environment image. Regarding claim 21, Gruen in view of Steedly and Zheng discloses the HMD of claim 17, and Steedly further discloses wherein the environment characteristic is based on determining that a light source of the physical environment is occluded (Para[0038] teaches data from the IMU on the handheld object 106 can further inform tracking, such as when the light sources might be occluded from view). Motivation to combine as indicated in claim 17. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Gruen (US 2023/0222741 A1) (US corresponding to WO 2021/257172 A1, IDS provided 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Zheng et al.
(CN115016752A) (machine translation attached) in further view of Sun et al. (US 2023/0100795 A1) (IDS provided 10/14/2024). Regarding claim 18, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on whether light sources are producing light with low or high temporal frequency. However, Sun discloses wherein the environment characteristic is based on whether light sources are producing light with low or high temporal frequency (Para[0024] teaches that a metric to prioritize the frequency of the light sources is the power-cycle function P(f), which discloses that the higher the frequency, the less likely it can affect the image quality through flickering, since more power cycles can be contained in a captured frame). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Sun for reducing a flicker effect of multiple light sources in an image captured with an imaging device, in order to provide a system that prioritizes the lighting frequencies, determines the exposure-time factorization sets, and adjusts the exposure time of the imaging device for additional light sources of the set of light sources to be detected, so as to improve image quality with reduced flicker relative to the light sources. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gruen (US 2023/0222741 A1) (US corresponding to WO 2021/257172 A1) (IDS provided 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Zheng et al. (CN115016752A) (machine translation attached) in further view of Tarifa et al.
(US 2021/0058551) (IDS provided 10/14/2024). Regarding claim 19, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on an overall environment brightness of the physical environment. However, Tarifa discloses the environment characteristic is based on an overall environment brightness of the physical environment (Fig. 3A, Para[0031]-[0034] teaches a mean light intensity per camera is measured by taking an average of the light intensity of all pixels within a single frame or by averaging across multiple cameras, thus representing a measure of the environment brightness, which is then used to get the number of peaks and estimate a perceptible flicker). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Tarifa, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate, in order to provide a system that adjusts to remove any perceptible flicker. Claims 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Gruen (US 2023/0222741 A1) (US corresponding to WO 2021/257172 A1) (IDS provided 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Zheng et al. (CN115016752A) (machine translation attached) in further view of Herman (US 2020/389582 A1). Regarding claim 20, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on a time of day.
However, Herman discloses wherein the environment characteristic is based on a time of day (Para[0022] teaches the exposure time of the camera may vary based on the ambient light environment. For example, a camera may have short exposures during bright daylight and long exposures at twilight or at night). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Herman, which uses a suitable localization method to precisely locate the light sources by using HD maps, in order to provide a system in which the effects of light source flicker are detected and filtered out at relatively wider working distances between the cameras and the light source. Regarding claim 22, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on a persistent digital map of the physical environment identifying flicker characteristics of each of the one or more light sources, wherein the persistent digital map is based on previously-obtained sensor data. However, Herman discloses wherein the environment characteristic is based on a persistent digital map of the physical environment identifying flicker characteristics of each of the one or more light sources, wherein the persistent digital map is based on previously-obtained sensor data (Para[0017]-[0018] teaches using map information of road signage incorporating light sources with a characteristic flicker frequency (e.g., traffic lights) to assist the cameras in locating the light source for imaging purposes).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Herman, which uses a suitable localization method to precisely locate the light sources by using HD maps, in order to provide a system in which the effects of light source flicker are detected and filtered out at relatively wider working distances between the cameras and the light source. Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Gruen (US 2023/0222741 A1, US corresponding to WO 2021/257172 A1, IDS provided 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Zheng et al. (CN115016752A) (machine translation attached) in further view of Norris et al. (WO 2022/066400 A1) (IDS provided 10/14/2024). Regarding claim 23, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on 3D locations of light sources based on live or previously obtained sensor data. However, Norris discloses wherein the environment characteristic is based on 3D locations of light sources based on live or previously obtained sensor data (Para[0082]-[0083] teaches images are used to generate the 3D map of the physical environment 905 at the device 900).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Norris, which allows the HMD to automatically create a virtual image sensor in a physical environment or extended reality (XR) environment, in order to provide a selfie picture or stream a selfie view of the user's avatar for that application. Regarding claim 24, Gruen in view of Steedly and Zheng discloses the HMD of claim 17. Gruen in view of Steedly and Zheng does not explicitly disclose wherein the environment characteristic is based on 3D locations of surfaces of the physical environment based on live or previously-obtained sensor data. However, Norris discloses wherein the environment characteristic is based on 3D locations of surfaces of the physical environment based on live or previously-obtained sensor data (Para[0082]-[0083] teaches the XR environment is generated using Visual Inertial Odometry (VIO) or Simultaneous Localization and Mapping (SLAM) position tracking or the like at the device 900). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen in view of Steedly and Zheng, which adjusts the exposure timing of the cameras and the displays presenting the image pixel streams and uses anti-flicker light pulses, with the method of Norris, which allows the HMD to automatically create a virtual image sensor in a physical environment or extended reality (XR) environment, in order to provide a selfie picture or stream a selfie view of the user's avatar for that application. Claim 25 is rejected under 35 U.S.C.
103 as being unpatentable over Gruen US 2023/0222741 A1 (US corresponding to WO 2021/257172 A1, IDS provided US 10/14/2024) in view of Steedly et al. (US 2020/0333878 A1) and Linde et al. (US 2019/0108652 A1) in further view of Zheng et al. (CN115016752A ) (machine translation attached). Regarding claim 25, Gruen discloses a non-transitory computer-readable storage medium, storing program instructions executable via one or more processors to perform operations comprising (para[0063] teaches non-volatile storage device 606 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein): providing pass-through video in which video captured via one or more left-eye outward-facing cameras is presented on the left-eye display to provide an approximately live view of a physical environment from the left-eye viewpoint and video captured via one or more right-eye outward-facing cameras is presented on the right-eye display to provide the approximately live view of the physical environment from the right-eye viewpoint (Para[0050] [0056]- [0058] & Fig. 5 teaches the method 500 may be performed repeatedly for each corresponding camera/display of the video pass-through computing system. Thus, in one example, the method 500 may be performed for a left camera corresponding to a left display, and the method 500 may be repeated for a right camera corresponding to a right display. 
Video pass-through computing systems and methods are discussed in the context of left and right cameras “passing through” imagery to corresponding left and right displays); Gruen does not explicitly disclose determining an environment characteristic corresponding to an appearance of flicker associated with one or more light sources of the physical environment; based on the environment characteristic, determining an exposure parameter of the pass-through video, wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker; and adjusting an exposure of the one or more left-eye outward-facing cameras and one or more right-eye outward-facing cameras based on the determined exposure parameter. However, Steedly discloses determining an environment characteristic corresponding to an appearance of flicker associated with one or more light sources of the physical environment (Para[0088] teaches the control signal may be configured to modulate brightness of the light sources by instructing the handheld object to vary a duty cycle of power applied to each light source, at 1318. As discussed above, the logic device may instruct the handheld object to change the duty cycle at a sufficient rate to hide any flicker from the human eye). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the video pass-through method of Gruen, which adjusts the exposure timing of the cameras and of the displays presenting the image pixel streams, with the anti-flicker light pulse method of Steedly in order to provide a head-mounted device that improves the quality of the images of the background environment for HMD pose tracking and reduces the perceived brightness of the handheld object light sources.
Gruen in view of Steedly does not explicitly disclose determining an exposure parameter of the pass-through video, wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker; and adjusting an exposure of the one or more outward-facing cameras based on the determined exposure parameter. However, Linde discloses, based on the environment characteristic, determining an exposure parameter of the pass-through video (para[0028] teaches the local area model provides the exposure settings for each camera 115, 120, 125, 130 for different positions of the cameras within the environment and allows the exposure settings to be adjusted as the location and orientation of the HMD 100 changes within the environment); wherein the exposure parameter is an exposure length parameter (Para[0025] teaches the imaging instructions may include the exposure settings for each camera 115, 120, 125, 130 (e.g., exposure length, gain, aperture, etc.)); and adjusting an exposure of the one or more outward-facing cameras based on the determined exposure parameter (Para[0026] teaches the HMD controller synchronizes the exposure settings of each camera 115, 120, 125, 130 by centering the exposure length of each camera 115, 120, 125, 130 about the center time point. In other words, a midpoint of each exposure length is aligned at the same time point. This configuration accommodates the varying exposure lengths of the cameras 115, 120, 125, 130 and ensures that the same frame is captured by each camera. To center the exposures about the same time point, the HMD controller 100 calculates a time delay for each camera 115, 120, 125, 130 that allows each camera to begin image capture after a certain period of time relative to a reference time point. The HMD controller 100 determines an appropriate time delay for each camera 115, 120, 125, 130 according to the exposure length of the camera.
Para[0065] teaches FIG. 5, once the HMD controller determines the center time point 525 and an appropriate time delay 530 for each camera 115, 120, 125, 130 based on the respective exposure frames 505, 510, 515, 520, the HMD controller sends the imaging instructions to each camera. This configuration accommodates the varying exposure lengths of the cameras 115, 120, 125, 130 and ensures that the same frame is captured by each camera. As the HMD 100 changes location and orientation within the environment of the HMD 100, the exposure settings of each camera may be adjusted, and thus, the synchronization information is adjusted accordingly). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Gruen in view of Steedly, in which a virtual reality headset detects flicker and alerts the user and/or modifies the frame rate to remove any perceptible flicker, with the local area model of Linde, which allows the engine to efficiently adjust exposure settings of the cameras in the imaging assembly so that the engine does not have to analyze the depth information and amount of light detected by the imaging assembly at each new location and/or orientation of the HMD, in order to provide the content having the maximum pixel resolution on the electronic display in a foveal region of the user's gaze while providing a lower pixel resolution in other regions of the electronic display, thus achieving less power consumption at the HMD and saving computing cycles of the HMD controller without compromising the visual experience of the user. Gruen in view of Steedly and Linde does not explicitly disclose wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker.
However, Zheng discloses wherein the exposure parameter is an exposure length parameter that is adjusted to reduce the appearance of flicker (Para[0068]-[0071] teaches S402: For each frame of real-world image, determine the exposure type of the left and right cameras based on the exposure duration of that real-world image. Generally, depending on the exposure duration, the exposure types of the left and right cameras on VR devices include short exposure and natural exposure. The exposure duration of the short exposure type is shorter than that of the natural exposure type, and the two exposure types switch cyclically according to the set frame rate. Therefore, in S402, for each frame of real environment image acquired, the exposure type of the left and right cameras can be determined based on the exposure duration of the current frame of real environment image. S403: When the exposure type is short exposure, process the real environment image to make the natural exposure real environment images of the two frames before and after the real environment image transition smoothly. Because the left and right cameras of VR devices are set to switch between short exposure and natural exposure modes, the brightness difference between two adjacent frames (one frame of short exposure real environment image and one frame of natural exposure real environment image) is large. If displayed directly, it may cause image flickering, dizziness, and prevent users from properly perceiving the real world. Therefore, it is necessary to process the short-exposure real-world images to ensure a smooth transition between the two naturally exposed real-world images before and after the short-exposure real-world image, thus solving the image flickering problem).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Gruen in view of Steedly and Linde, in which a virtual reality headset detects flicker, alerts the user, and/or modifies the frame rate to remove any perceptible flicker while efficiently adjusting exposure settings, with the method of Zheng, in which the short-exposure real environment image is discarded, a naturally-exposed real environment image is retained, and the naturally-exposed real environment image is processed and then displayed, in order to provide a system in which the problem of image flicker is solved and the user can perceive the real world in real time through the real environment image.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425
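The reply-period rules recited in the conclusion above (three-month shortened statutory period, the two-month first-reply safe harbor tied to the advisory action, and the six-month statutory cap) reduce to simple date arithmetic. The sketch below is a simplified docketing aid, not legal advice: `add_months` uses naive end-of-month clamping, and actual deadline computation should follow MPEP practice.

```python
import calendar
from datetime import date

def add_months(d, months):
    """Naive month arithmetic, clamping to the target month's last day."""
    m = d.month - 1 + months
    y, mo = d.year + m // 12, m % 12 + 1
    return date(y, mo, min(d.day, calendar.monthrange(y, mo)[1]))

def extension_fee_start(mailed, first_reply, advisory_mailed):
    """Date from which 37 CFR 1.136(a) extension fees run after a final action."""
    ssp_end = add_months(mailed, 3)  # three-month shortened statutory period
    # Safe harbor: first reply within two months, advisory mailed after the SSP.
    if first_reply <= add_months(mailed, 2) and advisory_mailed > ssp_end:
        return advisory_mailed
    return ssp_end

def absolute_deadline(mailed):
    """Statutory maximum: reply can never be due later than six months out."""
    return add_months(mailed, 6)

mailed = date(2026, 3, 19)  # mailing date of this action, from the timeline
```

For example, with a first reply two months in and an advisory action mailed after the three-month window, fees would run from the advisory mailing date rather than the SSP end.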

Prosecution Timeline

May 17, 2024
Application Filed
Jun 05, 2025
Non-Final Rejection — §103
Sep 11, 2025
Examiner Interview Summary
Sep 11, 2025
Applicant Interview (Telephonic)
Oct 03, 2025
Response Filed
Nov 26, 2025
Final Rejection — §103
Feb 27, 2026
Response after Non-Final Action
Mar 03, 2026
Request for Continued Examination
Mar 12, 2026
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604092
AUTOMATED DEVICE FOR DRILL CUTTINGS IMAGE ACQUISITION
2y 5m to grant Granted Apr 14, 2026
Patent 12604076
ENDOSCOPE SYSTEM, CONTROL METHOD, AND PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12604036
METHOD AND APPARATUS OF ENCODING/DECODING IMAGE DATA BASED ON TREE STRUCTURE-BASED BLOCK DIVISION
2y 5m to grant Granted Apr 14, 2026
Patent 12604037
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12604038
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
88%
With Interview (+13.0%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 521 resolved cases by this examiner. Grant probability derived from career allow rate.
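The headline projections can be reproduced from the career counts shown above: 393 grants out of 521 resolved cases, plus the +13.0-point interview lift. The rounding convention and the points-based (rather than multiplicative) lift are assumptions consistent with the displayed figures.

```python
granted, resolved = 393, 521
lift_points = 0.13  # observed interview lift, in percentage points

base = granted / resolved          # career allow rate
with_interview = base + lift_points

print(round(base * 100))           # → 75 (% grant probability)
print(round(with_interview * 100)) # → 88 (% with interview)
```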
