Prosecution Insights
Last updated: April 19, 2026
Application No. 18/447,132

Visible Background Rejection Techniques for Shared-Camera Hardware

Final Rejection (§103)
Filed: Aug 09, 2023
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Sim Ip Hxr LLC
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (above average; 247 granted / 334 resolved; +16.0% vs TC avg)
Interview Lift: +13.3% (moderate; among resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 334 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to amended claim 12 have been considered but are moot because the new ground of rejection does not only rely on citations of the references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over AKKAYA et al. (US 20190306386 A1) in view of Nguyen et al. (US 20120051631 A1).

Regarding claim 12. AKKAYA discloses A system (figure 1, [0016] FIG. 1 shows aspects of an example camera 100) comprising: a camera module having a liquid crystal optical shutter (figure 1, [0016] FIG. 1 shows aspects of an example camera 100. Camera 100 includes a sensor array 104 of individually addressable sensor elements 106; [0021] an electronically switchable optical filter 114 is included; [0023] Optical filter 114 includes one or more layers of liquid crystals (LC) that are used to selectively block spectral light in the spectral light sub-band) and connected to a synchronization circuit ([0053] Electronic controller machine 120 is configured to switch optical filter 114 from the reflection state to the transmission state and, synchronously address sensor elements 106 of sensor array 104 to acquire a monochrome image) and a processing circuit ([0052] Electronic controller machine 120; [0059] another processing component for additional image processing (e.g., filtering, computer vision). In some examples, the processing component may be incorporated into the camera 100. In some examples, the processing component may be incorporated into a remote computing device in communication with the camera 100); wherein when the liquid crystal optical shutter is in an open state, a first image is produced based on incident visible light and infrared light ([0018] Such operation and materials of the sensor array allows for the same sensor array to be used to measure active light across a broad spectrum (e.g., ˜400-1100 nm) including ultraviolet, visible, NIR, and IR light; [0021] Configured for visible as well as IR imaging. In implementations in which both visible and IR response is required at each sensor element, all of the color filter elements may be highly transmissive in the IR band of interest. For this purpose, in implementations in which both visible and IR imaging are provided, an electronically switchable optical filter 114 is included; [0022] In the transmission state, optical filter 114 is configured to transmit light both inside and outside the spectral light sub-band. In some implementations, optical filter 114 may be broadly transmissive in the transmission state—i.e., transmitting all of the wavelengths blocked and transmitted in the reflection state); wherein when the liquid crystal optical shutter is in a closed state, a second image is produced based on incident infrared light ([0029] [0044] when the optical filter is in the reflection state, the optical filter 114 can be used for IR/depth imaging without interference from impinging spectral light outside of the IR light sub-band (e.g. visible light); [0048] An IR illuminator 118 is configured to emit active IR light to illuminate the subject 102; [0057] In time-of-flight (ToF) implementations, the illumination source—an IR emitter—may project pulsed or otherwise modulated IR illumination towards the subject. The sensor array of the depth-imaging camera may be configured to detect the phase offset between the illumination reflected back from the subject and the modulated emission. In some implementations, the phase offset of each sensor element may be converted into a pixel-resolved time-of-flight of the pulsed illumination, from the illumination source to the subject and then to the array. ToF data may then be converted into depth; [0083] an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; [0056] The term ‘depth video’ refers herein to a time-resolved sequence of depth maps (corresponding to object tracking); [0070] At 504 of method 500, an IR illuminator of the camera is activated to illuminate a subject with active IR light; [0086] the controller machine may be configured to activate the IR illuminator to illuminate a subject with the active IR light while the optical filter is in the reflection state) to produce a second image ([0022] In the reflection state, optical filter 114 is configured to block spectral light in a spectral light sub-band (e.g., visible light sub-band) and transmit light outside the spectral light sub-band (e.g., NIR or IR sub-bands); [0070] At 506 of method 500, each of a plurality of sensors of a sensor array of the camera is addressed to measure an aspect of the active IR light emitted from the IR illuminator and reflected from the subject back to each of the sensors; [0055] In combination depth- and flat-imaging applications, both of the above addressing modes may be used in an alternating (i.e., multiplexed) manner synchronously timed with corresponding switching the state of optical filter 114). However, AKKAYA doesn’t explicitly disclose wherein the processing circuit is configured to: estimate a correction frame based on the first image; and produce a third image based on the second image and the correction frame. Nguyen discloses estimate a correction frame based on the first image (figure 3, [0007] input depth information (correction frame, including background frame) of the video image (first image)); and produce a third image based on the second image and the correction frame (figures 6-9 (third image), [0045] the background subtraction module 131 may perform depth and IR thresholding (correction frame and second image), thus segmenting colored pixels and corresponding depth information of the images into three different regions including foreground (FG), background (BG), and unclear (UC); [0055] the BG is subtracted). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of AKKAYA according to the invention of Nguyen, to perform background image subtraction, in order to better track foreground objects.

Regarding claim 13. AKKAYA discloses The system as in claim 12, wherein the synchronization circuit conducts synchronization of the infrared light with the liquid crystal optical shutter ([0053] electronic controller machine 120 is configured to switch optical filter 114 from the transmission state to the reflection state, synchronously modulate IR emitter 118, and address sensor elements 106 of sensor array 104 to acquire an IR image; [0055] In combination depth- and flat-imaging applications, both of the above addressing modes may be used in an alternating (i.e., multiplexed) manner synchronously timed with corresponding switching the state of optical filter 114).

Regarding claim 14. AKKAYA discloses The system as in claim 13, wherein the first image has first image information ([0018] Such operation and materials of the sensor array allows for the same sensor array to be used to measure active light across a broad spectrum (e.g., ˜400-1100 nm) including ultraviolet, visible, NIR, and IR light; [0021] Configured for visible as well as IR imaging. In implementations in which both visible and IR response is required at each sensor element, all of the color filter elements may be highly transmissive in the IR band of interest. For this purpose, in implementations in which both visible and IR imaging are provided, an electronically switchable optical filter 114 is included; [0022] In the transmission state, optical filter 114 is configured to transmit light both inside and outside the spectral light sub-band. In some implementations, optical filter 114 may be broadly transmissive in the transmission state—i.e., transmitting all of the wavelengths blocked and transmitted in the reflection state); wherein the second image has second image information ([0022] In the reflection state, optical filter 114 is configured to block spectral light in a spectral light sub-band (e.g., visible light sub-band) and transmit light outside the spectral light sub-band (e.g., NIR or IR sub-bands); [0070] At 506 of method 500, each of a plurality of sensors of a sensor array of the camera is addressed to measure an aspect of the active IR light emitted from the IR illuminator and reflected from the subject back to each of the sensors). However, AKKAYA doesn’t explicitly disclose subtracting the second image information from the first image information. Nguyen discloses subtracting the second image information from the first image information ([0045] the background subtraction module 131 may perform depth and IR thresholding, thus segmenting colored pixels and corresponding depth information of the images into three different regions including foreground (FG), background (BG), and unclear (UC); [0055] the BG is subtracted). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of AKKAYA according to the invention of Nguyen, to subtract the background from the first image information, in order to better track foreground objects.

Regarding claim 15. AKKAYA discloses The system as in claim 14, wherein the liquid crystal optical shutter is integrated into the camera module (figure 1, [0021] an electronically switchable optical filter 114 is included).

Regarding claim 16. AKKAYA discloses The system as in claim 14, further comprising an infrared light source directed to objects outside the camera module where some infrared light from the infrared light source is reflected from outside the camera module into the camera module (figure 1, [0048] An IR illuminator 118 is configured to emit active IR light to illuminate the subject 102; [0053] electronic controller machine 120 is configured to switch optical filter 114 from the transmission state to the reflection state, synchronously modulate IR emitter 118, and address sensor elements 106 of sensor array 104 to acquire an IR image).

Regarding claim 17. AKKAYA discloses The system as in claim 16, wherein the infrared light source produces linearly polarized infrared light ([0029] in the reflection state, IR light (e.g., 222 of FIG. 2A) in the IR light sub-band 226 is transmitted with high efficiency independent of its polarization; [0069] when the optical filter is switched to the reflection state, the optical filter may block visible light and transmit IR light regardless of polarization of the light).

Regarding claim 18. AKKAYA discloses The system as in claim 14, further comprising a distinguisher that distinguishes foreground objects from background objects ([0057] the phase offset of each sensor element may be converted into a pixel-resolved time-of-flight of the pulsed illumination, from the illumination source to the subject and then to the array. ToF data may then be converted into depth (inherently foreground and background objects are distinguished by their depth)).

Regarding claim 19. AKKAYA discloses The system as in claim 18, wherein at least one of the foreground objects is tracked ([0057] the phase offset of each sensor element may be converted into a pixel-resolved time-of-flight of the pulsed illumination, from the illumination source to the subject and then to the array. ToF data may then be converted into depth; [0056] The term ‘depth video’ refers herein to a time-resolved sequence of depth maps (corresponding to object tracking)).

Regarding claim 20. AKKAYA discloses The system as in claim 19, wherein the at least one of the foreground objects is a hand ([0074] computing system 600 may take the form of camera 100 or electronic controller machine 120 of FIG. 1; [0075] Computing system 600 may optionally include a input subsystem 608; [0083] the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Example NUI componentry may include an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition (corresponding to hand recognition)).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571)270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488
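For readers skimming the claim mapping above: independent claim 12 recites a two-frame capture followed by frame arithmetic (shutter open yields a visible-plus-IR first image, shutter closed yields an IR-only second image, a correction frame is estimated from the first image, and a third image is produced from the second image and the correction frame). The sketch below is a reading aid only, assuming a simple per-pixel subtraction estimator; the function name, the estimator, and the threshold are illustrative assumptions, not the applicant's disclosed method and not the implementations of AKKAYA or Nguyen.

```python
import numpy as np

def reject_visible_background(first: np.ndarray,
                              second: np.ndarray,
                              threshold: float = 0.05) -> np.ndarray:
    """Illustrative frame arithmetic for visible-background rejection.

    first  -- frame captured with the LC shutter open (visible + IR reach the sensor)
    second -- frame captured with the LC shutter closed (IR only)
    Returns a "third image": the IR-only frame with pixels dominated by the
    visible-light background suppressed.
    """
    first = first.astype(np.float32)
    second = second.astype(np.float32)

    # Correction frame estimated from the first image: here the visible-light
    # contribution is approximated as whatever the open-shutter frame contains
    # beyond the IR-only frame (one simple per-pixel model among many).
    correction = np.clip(first - second, 0.0, None)

    # Third image produced from the second image and the correction frame:
    # suppress pixels where the visible background dominates, keep the rest.
    peak = max(float(correction.max()), 1e-6)
    return np.where(correction > threshold * peak, 0.0, second)

# Toy usage with synthetic frames standing in for real captures.
rng = np.random.default_rng(0)
ir_only = rng.integers(0, 60, size=(480, 640)).astype(np.float32)
visible_plus_ir = ir_only + rng.integers(0, 200, size=(480, 640)).astype(np.float32)
foreground_ir = reject_visible_background(visible_plus_ir, ir_only)
print(foreground_ir.shape, float(foreground_ir.max()))
```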

Prosecution Timeline

Aug 09, 2023: Application Filed
Oct 08, 2025: Non-Final Rejection — §103
Jan 11, 2026: Response Filed
Mar 16, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315: IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586255: CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587652: IMAGE CODING DEVICE AND METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581120: Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581092: TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING (granted Mar 17, 2026; 2y 5m to grant)
Study what these applicants changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview (+13.3%): 87%
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
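One note on how the headline figures fit together: the 87% "with interview" value matches the 74% base grant probability plus the 13.3% interview lift read as percentage points, whereas a relative lift would give roughly 84%. This is an inference from the displayed values, not the tool's documented formula; a quick check:

```python
base = 0.74          # career allow rate used as the grant probability
lift_points = 0.133  # interview lift, read as percentage points

print(round((base + lift_points) * 100))      # 87 -> matches "With Interview: 87%"
print(round(base * (1 + lift_points) * 100))  # 84 -> a relative lift would not match
```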
