Prosecution Insights
Last updated: April 19, 2026
Application No. 18/372,047

RGBIR CAMERA MODULE

Non-Final OA: §102, §103, §112
Filed: Sep 22, 2023
Examiner: WU, ZHENZHEN
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 79% — above average (302 granted / 381 resolved; +17.3% vs TC avg)
Interview Lift: +13.4% — moderate (measured across resolved cases with interview)
Typical Timeline: 2y 5m avg prosecution (8 applications currently pending)
Career History: 389 total applications across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 27.6% (-12.4% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 381 resolved cases
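The per-statute deltas above all imply the same Tech Center baseline. Assuming the "vs TC avg" figures are percentage-point differences (an assumption; the dashboard does not say so explicitly), a quick check recovers the black-line estimate:

```python
# Examiner's per-statute rates and their reported deltas vs the TC average.
examiner = {"101": 2.7, "103": 51.4, "102": 27.6, "112": 9.5}
delta    = {"101": -37.3, "103": 11.4, "102": -12.4, "112": -30.5}

# Implied Tech Center baseline per statute: examiner rate minus delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute resolves to the same 40.0% baseline, consistent with a single Tech Center average line drawn across the chart.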

Office Action

Grounds: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 2 recites the limitation "the second image" in line 1. There is insufficient antecedent basis for this limitation in the claim. Claims 3 and 4 are rejected as being dependent from claim 2.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 12-13 and 15-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Linzer (US 11,574,484 B1).

As to claim 12, Linzer discloses a method (Fig. 5) comprising: capturing, with an image sensor (Fig. 1: RGB-IR sensor 124), a first frame of a user's face by reading image pixels from a pixel array of the image sensor (Fig. 5: the RGB-IR image captured in step 202 corresponds to the claimed first frame. Col. 15, lines 27-31: "the first and second image data channels may be processed by the processor 404 for 3D (e.g., depth) perception, liveness determination, 3D facial recognition, object detection, face detection, object identification, and facial recognition"); capturing, with the image sensor, a second frame by reading infrared (IR) pixels from the pixel array (Fig. 5: the IR image obtained in step 210 corresponds to the claimed second frame); and extracting the second frame from the first frame to generate a third frame of the user's face (Fig. 5; Col. 14, lines 40-44: "In the step 212, the full resolution IR image data may be subtracted from the color+IR pixel data received from the step 204 and the pixel data interpolated using a visible light interpolation technique to generate full resolution color image data.").
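The frame arithmetic the examiner maps onto claim 12 (subtract the IR-only second frame from the color+IR first frame, per Linzer's step 212) reduces to an element-wise subtraction. A minimal sketch, noting that the function name, the clamp-at-zero choice, and the 2x2 sample values are illustrative and not taken from Linzer:

```python
# Illustrative sketch only: subtract an IR-only frame from a color+IR
# frame to recover a decontaminated color frame (cf. Linzer's step 212).

def subtract_ir(first_frame, second_frame):
    """Subtract the IR frame (second) from the color+IR frame (first),
    clamping at zero so sensor noise cannot yield negative pixel values."""
    return [
        [max(c - ir, 0) for c, ir in zip(row_c, row_ir)]
        for row_c, row_ir in zip(first_frame, second_frame)
    ]

color_plus_ir = [[120, 200], [90, 255]]   # first frame: color channel + IR leakage
ir_only       = [[30, 60], [100, 45]]     # second frame: IR pixel readout

third_frame = subtract_ir(color_plus_ir, ir_only)
print(third_frame)  # [[90, 140], [0, 210]]
```

The clamp matters in practice: where the IR estimate locally exceeds the combined reading (here 100 > 90), naive subtraction would go negative.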
As to claim 13, Linzer discloses the method of claim 12, further comprising authenticating the user based at least in part on the enhanced third frame of the user's face (Fig. 6; Col. 15, lines 22-31: both the full-resolution IR image and the full-resolution RGB image are used for facial recognition).

As to claim 15, Linzer discloses the method of claim 12, wherein the second image is captured using a rolling shutter pixel architecture (Col. 9, lines 38-40: RGB-IR rolling shutter sensor).

As to claim 16, Linzer discloses the method of claim 12, wherein the second frame is captured while operating in an IR flood mode (Col. 7, lines 2-5: "the signal IR may comprise IR images containing a structured light pattern in at least a portion of the image when the IR projector is turned on").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Linzer (US 11,574,484 B1) in view of Sugano et al. (US 2023/0288622 A1).

As to claim 14, Linzer discloses the method of claim 12, but fails to disclose that the extracting is only performed outdoors. However, Sugano et al. teaches the extracting is only performed outdoors ([0119]-[0120]: when in the outdoor environment, an IR image is captured for extracting a silhouette of a subject). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Linzer with the teaching of Sugano et al. to only perform the extracting outdoors, so as to suppress flare, ghosting and halation, thereby generating a more accurate RGB image.

Claims 1 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1).

As to claim 1, Garud et al. discloses a camera module (Fig. 1: system 100) comprising: an image sensor (Fig. 1: imaging subsystem 105) comprising: a color filter array (CFA) comprising a red filter, a blue filter, a green filter and at least one infrared (IR) filter ([0021]: color filter array (CFA) with RGB-IR filters); and an image signal processor (ISP) (Fig. 1: central processing unit (CPU) 110 and image processing pipeline 115) configured to: initiate capture of a first frame by reading signal pixels from the pixel array (Fig. 5A: the remosaiced RGB pixel data 521 corresponds to the claimed first frame); initiate capture of a second frame by reading IR pixels from the pixel array (Fig. 5A: the IR upsampled pixel data 516 corresponds to the claimed second frame); align the first and second frames ([0057]: "IR subtraction function 525 can use the following equations to determine a pixel value of RGB pixel data 530 corresponding to a pixel of the same color channel and location of remosaiced RGB pixel data 521, where R′, G′, and B′ are pixels of RGB pixel data 530: R′ = R − IR_R*IR; G′ = G − IR_G*IR; B′ = B − IR_B*IR." That is, the remosaiced RGB pixel data 521, the IR upsampled pixel data 516 and the IR subtraction factors 511 are aligned); and extract the second frame from the first frame to generate a third enhanced frame ([0056]: "IR subtraction function 525 can perform IR decontamination operations on remosaiced RGB pixel data 521 using IR subtraction factors 511 and IR upsampled pixel data 516". Fig. 5A: the RGB pixel data 530 produced by the IR subtraction function 525 corresponds to the claimed third enhanced frame).

Garud et al. fails to disclose a microlens array, and a pixel array comprising pixels to convert light received through the color filter array into electrical signals. However, Wang et al. teaches a microlens array (Figs. 7A and 7B: microlens array 710) and a pixel array comprising pixels to convert light received through the color filter array into electrical signals (Figs. 7A and 7B: a plurality of photodiodes 742).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Garud et al. with the teaching of Wang et al. to include a microlens array and a pixel array comprising pixels to convert light received through the color filter array into electrical signals, so as to focus light onto the pixel areas, thereby improving light collection efficiency.

As to claim 8, Garud et al. in view of Wang et al. discloses the camera module of claim 1. The above combination fails to disclose that the second frame is captured while operating in an IR flood mode. However, Wang et al. further teaches that the second frame is captured while operating in an IR flood mode ([0111]: "the IR pixel correction component 698 may be configured to capture a first frame with the flood projector 106 enabled and capture a second frame with the flood projector 106 disabled"). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combination of Garud et al. and Wang et al. with the teaching of Wang et al. to capture the second frame while operating in an IR flood mode, so as to increase the amount of infrared light reflected from the scene, thereby improving the reliability of the captured infrared signal used for image processing.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) as applied to claim 1 above, and further in view of Kim et al. (US 2015/0287766 A1).

As to claim 2, Garud et al. in view of Wang et al. discloses the camera module of claim 1, but fails to disclose that the second image is extracted from the first frame only when the ISP determines that the camera module is being operated outdoors or indoors where lighting has IR content. However, Kim et al. teaches the second image is extracted from the first frame only when the ISP determines that the camera module is being operated outdoors or indoors where lighting has IR content ([0124]: "if external luminosity is relatively high (e.g., outdoor area or daytime), the image sensor 100 may generate a high-quality color image only based on an image signal in a visible light band by deactivating the infrared light detection pixel of the unit pixel 122. In addition, if external luminosity is relatively low (e.g., indoor area or nighttime), the image sensor 100 may generate a high-quality color image based on both an image signal in a visible light band and an image signal in an infrared light band by activating the infrared light detection pixel of the unit pixel 122". The infrared light noise is then eliminated by subtracting the infrared components from the light components detected by the visible light pixels, as shown in Fig. 21). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al. and Wang et al. with the teaching of Kim et al. to extract the second image from the first frame only when the ISP determines that the camera module is being operated outdoors or indoors where lighting has IR content, so as to reduce processing time and computing cost by performing the extraction only when necessary.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) and Kim et al. (US 2015/0287766 A1) as applied to claim 2 above, and further in view of Kim et al. (US 2023/0126806 A1, hereinafter Kim 2).

As to claim 3, Garud et al. in view of Wang et al. and Kim et al. discloses the camera module of claim 2, but fails to disclose that the ISP determines that the camera module is being operated outdoors based on a face identification receiver output or an ambient light sensor with IR channels. However, Kim 2 teaches that the ISP determines that the camera module is being operated outdoors based on a face identification receiver output (Fig. 5; [0156]: "the controller 110 may determine whether the first image may be acquired indoors or outdoors, based on processing of the first image to perform the face authentication process"). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al., Wang et al. and Kim et al. with the teaching of Kim 2 to determine that the camera module is being operated outdoors based on a face identification receiver output, so as to optimize imaging performance based on environmental lighting conditions, thereby improving reliability and image quality.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) and Kim et al. (US 2015/0287766 A1) as applied to claim 2 above, and further in view of Mahowald (US 2010/0072351 A1).

As to claim 4, Garud et al. in view of Wang et al. and Kim et al. discloses the camera module of claim 2, but fails to disclose that output of an ambient light sensor is used to identify indoor IR noise. However, Mahowald teaches that output of an ambient light sensor is used to identify indoor IR noise ([0016]: the ambient light sensor (ALS) module 10 is capable of suppressing noise from infrared sources. [0026]: "ALS module 10 may be configured to block a potentially erroneous value of electronic signal 21, such as when an IR remote control is used in close proximity to window 12, which may cause ambient light sensor 16 to overestimate the intensity of ambient light … electronic device 1 may be configured to substitute an alternate value for electronic signal 21, such as while a threshold intensity of IR signal is being detected by IR sensor 18". Therefore, the ALS module 10 detects IR noise). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al., Wang et al. and Kim et al. with the teaching of Mahowald to use the output of an ambient light sensor to identify indoor IR noise, so as to prevent IR contamination and enhance multi-spectral image quality.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) as applied to claim 1 above, and further in view of Sakamoto et al. (US 2020/0112662 A1).

As to claim 5, Garud et al. in view of Wang et al. discloses the camera module of claim 1, but fails to disclose that the image sensor is a rolling shutter image sensor. However, Sakamoto et al. teaches that the image sensor is a rolling shutter image sensor (Fig. 24 shows that different rows begin and end integration at different times within a single frame period, which suggests a rolling shutter operation). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al. and Wang et al. with the teaching of Sakamoto et al. to implement a rolling shutter image sensor, so as to provide simple pixel circuits, improve pixel sensitivity, and lower power consumption.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) as applied to claim 1 above, and further in view of Kim et al. (US 2023/0353892 A1, hereinafter Kim 3).

As to claim 6, Garud et al. in view of Wang et al. discloses the camera module of claim 1, but fails to disclose that the image sensor is running in a secondary inter-frame readout (SIFR) mode. However, Kim 3 teaches the image sensor is running in a secondary inter-frame readout (SIFR) mode (Fig. 11; [0135]: "the processor may output only the ROI image data 1110 at the first frame rate, … the processor may generate the entire region image data by combining the ROI image data 1110 and the RONI image data 1120 and output the entire region image data at the first frame rate". That is, the image sensor is operable in a secondary readout mode in which a subset of pixels is read at a different frame rate relative to a primary full-frame readout). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al. and Wang et al. with the teaching of Kim 3 to incorporate a secondary inter-frame readout (SIFR) mode, so as to enable different effective frame rates for different pixel subsets, improve readout flexibility, and reduce latency for interest-region processing.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Garud et al. (US 2024/0040268 A1) in view of Wang et al. (US 2021/0297607 A1) as applied to claim 1 above, and further in view of Sugiyama (US 2018/0367745 A1).

As to claim 7, Garud et al. in view of Wang et al. discloses the camera module of claim 1, but fails to disclose that the first frame is captured with a first exposure time and the second frame is captured with a second exposure time that is shorter than the first exposure time.
However, Sugiyama teaches the first frame is captured with a first exposure time and the second frame is captured with a second exposure time that is shorter than the first exposure time (Fig. 13; [0142]: "…it is possible to make an exposure time of the IR irradiation period short and to make an exposure time of the IR non-irradiation period long". The image captured with the exposure time of the IR irradiation period corresponds to the claimed second frame; the image captured with the exposure time of the IR non-irradiation period corresponds to the claimed first frame). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Garud et al. and Wang et al. with the teaching of Sugiyama such that the first frame is captured with a first exposure time and the second frame is captured with a second exposure time that is shorter than the first exposure time, so as to prevent saturation of the image sensor caused by infrared radiation while still allowing the IR signal to be measured for IR decontamination of the RGB image, thereby improving the visible image quality.

Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2021/0297607 A1) in view of Garud et al. (US 2024/0040268 A1).

As to claim 9, Wang et al. discloses a camera module (Fig. 6: device 604) comprising: an image sensor (Fig. 6: image sensor 614; also see Figs. 7A and 7B) comprising: a microlens array (Figs. 7A and 7B: microlens array 710); a color filter array (CFA) (Figs. 7A and 7B: spectral filter array 720) comprising a red filter, a blue filter, a green filter and at least one infrared (IR) filter (Figs. 8A, 9A, 10A, and 11A; [0091]: "The example spectral patterns 800, 900, 1000, 1100 may match the spectral pattern of the spectral filter array 720". Spectral patterns 800, 900, 1000 and 1100 comprise red filters, blue filters, green filters and infrared filters); and a pixel array (Figs. 7A and 7B: silicon layer 740) comprising pixels ([0088]: "A plurality of photodiodes 742 may be formed into the silicon layer 740") to convert light received through the color filter array into electrical signals ([0055]: "The photodiode(s) of each pixel capture metrics of the light spectrum associated with the spectral filter". [0056]: "…one or more processors of an image capture device may receive the respective spectral values (referred to herein as 'raw data') from the image sensor". The raw data corresponds to the claimed electrical signals); and an image signal processor (ISP) (Fig. 6: processing unit 620) configured to: initiate capture of a first frame by reading signal pixels from the pixel array (Fig. 6: visible light data 618a); and initiate capture of a second frame by reading IR pixels from the pixel array (Fig. 6: infrared light data 618b).

Wang et al. fails to disclose generating virtual frames to fill in missing frames during upsampling of the first frame. However, Garud et al. teaches generating virtual frames to fill in missing frames during upsampling of the first frame (Fig. 5B; [0061]: "The conversion engine next uses RGB-IR pixel data 505 and green upsampled pixel data 536 to produce upsampled pixel data 541 via red/blue upsampling function 540". The upsampled pixel data 541 corresponds to the claimed virtual frames). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang et al. with the teaching of Garud et al. to generate virtual frames to fill in missing frames during upsampling of the first frame, so as to increase the image resolution and facilitate alignment of image frames for further image processing operations.
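The per-channel decontamination the examiner cites from Garud for claim 1 (R′ = R − IR_R*IR, and likewise for G and B, applied after the sparse IR channel has been upsampled to each pixel location) reduces to a scaled subtraction. A hedged sketch; the subtraction-factor values, sample pixel, and function name are illustrative, not taken from Garud:

```python
# Illustrative sketch only: per-channel IR decontamination in the style of
# Garud's equations R' = R - IR_R*IR, G' = G - IR_G*IR, B' = B - IR_B*IR.

def decontaminate(rgb, ir, factors):
    """Subtract the scaled IR estimate from each color channel,
    clamping the result to the valid 8-bit range [0, 255]."""
    return {
        ch: max(min(rgb[ch] - factors[ch] * ir, 255), 0)
        for ch in ("R", "G", "B")
    }

factors = {"R": 0.9, "G": 0.7, "B": 0.5}  # per-channel IR subtraction factors (assumed)
pixel = {"R": 180, "G": 160, "B": 140}    # remosaiced RGB sample at one location
ir_value = 50                             # upsampled IR sample at the same location

print(decontaminate(pixel, ir_value, factors))
```

The per-channel factors capture the fact that IR contamination differs across the R, G, and B filter responses, which is why a single global subtraction is not enough.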
Method claim 17 recites substantially similar subject matter as claim 9 above; therefore, it is rejected for the same reasons.

Claims 10-11 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2021/0297607 A1) in view of Garud et al. (US 2024/0040268 A1) as applied to claim 9 above, and further in view of Wei et al. (US 2015/0002629 A1).

As to claim 10, Wang et al. in view of Garud et al. discloses the camera module of claim 9, but fails to disclose that the image sensor is running in an adaptive frame rate exposure mode when the virtual frames are generated. However, Wei et al. teaches the image sensor is running in an adaptive frame rate exposure mode ([0029]: "internal group functions, for example, group A and group B, each being programmable to include independent group settings, which can be, for example, one or more of group number, frame number, exposure time…" That is, the exposure time of the RGB frame and the exposure time of the IR frame are adjustable). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang et al. and Garud et al. with the teaching of Wei et al. such that the image sensor runs in an adaptive frame rate exposure mode when the virtual frames are generated, so as to allow the exposure parameters for visible-light and infrared imaging to be independently optimized, thereby improving signal quality for each spectral band.

As to claim 11, Wang et al. in view of Garud et al. and Wei et al. discloses the camera module of claim 10. Wei et al. further discloses that, when operating in the adaptive frame rate exposure mode, signal pixel and IR pixel data are time-multiplexed and configured to be read at different frames and exposure times (Fig. 5; [0031]-[0032]: the first time period A corresponds to the exposure time of the RGB data frame and the second time period B corresponds to the exposure time of the IR data frame. As shown in Fig. 5, the RGB frames and IR frames are captured in separate frames with different exposure parameters, which corresponds to the claimed time-multiplexed capture of signal pixel and IR pixel data).

Method claims 18-19 recite substantially similar subject matter as claims 10-11, respectively; therefore, they are rejected for the same reasons.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENZHEN WU, whose telephone number is (571) 272-2519. The examiner can normally be reached 8:30 am - 5:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SINH TRAN, can be reached at (571) 272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ZHENZHEN WU/
Examiner, Art Unit 2637

/SINH TRAN/
Supervisory Patent Examiner, Art Unit 2637

Prosecution Timeline

Sep 22, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604106: Camera Assembly and Electronic Device (granted Apr 14, 2026; 2y 5m to grant)
Patent 12587756: IMAGING DEVICE AND ELECTRONIC APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574634: METHOD OF CONNECTING CAMERA MODULES TO A CAMERA (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563308: DEMOSAICING QUAD BAYER RAW IMAGE USING CORRELATION OF COLOR CHANNELS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12452512: CAMERA MODULE, ELECTRONIC DEVICE AND VEHICLE INSTRUMENT (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 93% (+13.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 381 resolved cases by this examiner. Grant probability derived from career allow rate.
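The headline figures reconcile with the stated career data, assuming the with-interview number is simply the career allow rate plus the interview lift in percentage points (an assumption about the dashboard's method, not something it states):

```python
# Quick arithmetic check of the projection figures against the career data.
granted, resolved = 302, 381

base = granted / resolved * 100   # career allow rate, in percent
with_interview = base + 13.4      # reported interview lift, in percentage points

print(round(base))            # 79
print(round(with_interview))  # 93
```

Both rounded values match the dashboard's 79% and 93%, so the projection appears to be a direct restatement of the career allow rate rather than an application-specific model.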
