Prosecution Insights
Last updated: April 19, 2026
Application No. 18/847,887

CAMERA TRANSITION FOR IMAGE CAPTURE DEVICES WITH VARIABLE APERTURE CAPABILITY

Status: Non-Final OA (§102)
Filed: Sep 17, 2024
Examiner: PRABHAKHER, PRITHAM DAVID
Art Unit: 2638
Tech Center: 2600 (Communications)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (511 granted / 650 resolved; +16.6% vs TC avg; above average)
Interview Lift: +26.1% on resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 14 applications currently pending
Career History: 664 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 31.6% (-8.4% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 650 resolved cases.
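The per-statute deltas above imply a common Tech Center baseline. A quick sketch (assuming the "vs TC avg" figures are percentage-point differences, which the report does not state) recovers it:

```python
# Recover the implied Tech Center averages from the examiner's
# statute-specific rates and their "vs TC avg" deltas.
# Assumption: deltas are percentage-point differences.

rates = {            # statute: (examiner rate %, delta vs TC avg in points)
    "101": (4.3, -35.7),
    "103": (41.6, +1.6),
    "102": (31.6, -8.4),
    "112": (16.3, -23.7),
}

# TC average = examiner rate minus the delta
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)
```

Under that assumption, every statute backs out to the same 40.0% baseline, consistent with a single Tech Center average estimate being used for all four bars.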

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

2.) Claims 1-3, 11-13, 21-23, and 26-28 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kim et al. (US Pub. No. 2022/0166936 A1).

Regarding Claim 1, Kim et al.
disclose a method (Method of a camera module capturing an image, Paragraphs 0151-0168; Figures 19-20), comprising: receiving first image data from a first image sensor of a first camera (Camera module 1100a has an image sensor that senses an image of a target, Paragraphs 0151-0168; Figures 19-20); receiving second image data from a second image sensor of a second camera (Camera module 1100c has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20), wherein at least one of the first image data or the second image data is received through a variable aperture having an adjusted aperture setting applied by the first camera or the second camera (To adjust the shutter speeds and/or aperture values of the image sensors, the main processor may control mechanical devices included in the image sensors or may control pixels included in the image sensors, Paragraphs 0093-0094, 0151-0168; Figures 19-20. Each of the camera modules includes an actuator 1130, which may move the OPFE 1110 or an optical lens to a specific location. For example, the actuator 1130 may adjust a location of an optical lens such that an image sensor 1142 is placed at a focal length of the optical lens for accurate sensing, Paragraphs 0151-0168; Figures 19-20); determining a first output frame based on the first image data and the second image data by adjusting the second image data to match a characteristic of the first image data (merging of image data) (One camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraph 0168; Figures 19-20).

Regarding Claim 2, Kim et al. disclose the method of claim 1, wherein adjusting the second image data comprises adjusting the second image data to match a depth of focus (DOF) of the first image data (As mentioned above, one camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraphs 0168-0177; Figures 19-20).

Regarding Claim 3, Kim et al. disclose the method of claim 2, wherein adjusting the second image data further comprises warping (merging of image data) the second image data to match a field of view (FOV) of the first image data (At least two camera modules (e.g., 1100a and 1100b) among the plurality of camera modules 1100a, 1100b, and 1100c may have different FOVs from each other. In this case, the at least two camera modules (e.g., 1100a and 1100b) may include different optical lenses, but are not limited thereto. The image generator 1214 may generate the output image by merging at least a portion of the image data respectively generated from the camera modules 1100a, 1100b, and 1100c having different FOVs, depending on the image generating information or the mode signal. Also, the image generator 1214 may generate the output image by selecting one of the image data respectively generated from the camera modules 1100a, 1100b, and/or 1100c having different FOVs, depending on the image generating information or the mode signal, Paragraphs 0168-0177; Figures 19-20).

Regarding Claim 11, Kim et al. disclose an apparatus (Electronic device with camera module group, Paragraphs 0151-0168; Figures 19-20), comprising: a first camera comprising a first image sensor (Camera module 1100a has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20); a second camera comprising a second image sensor (Camera module 1100c has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20); a variable aperture, wherein the variable aperture is included in one of the first camera or the second camera (To adjust the shutter speeds and/or aperture values of the image sensors, the main processor may control mechanical devices included in the image sensors or may control pixels included in the image sensors, Paragraphs 0093-0094, 0151-0168; Figures 19-20); a memory storing processor-readable code (Internal memory, Paragraphs 0151-0168, 0172; Figures 19-20); and at least one processor coupled to the memory, to the first camera, and to the second camera, wherein the at least one processor is configured to execute the processor-readable code to cause the at least one processor to perform operations (Application processor 1200, Paragraphs 0151-0168, 0172, 0193; Figures 19-20) including: receiving first image data from the first image sensor (Camera module 1100a has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20); receiving second image data from the second image sensor (Camera module 1100c has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20), wherein at least one of the first image data or the second image data is received through the variable aperture according to an adjusted aperture setting applied by the first camera or the second camera (Each of the camera modules includes an actuator 1130, which may move the OPFE 1110 or an optical lens to a specific location. For example, the actuator 1130 may adjust a location of an optical lens such that an image sensor 1142 is placed at a focal length of the optical lens for accurate sensing, Paragraphs 0151-0168; Figures 19-20); and determining a first output frame based on the first image data and the second image data by adjusting the second image data to match a characteristic of the first image data (merging of image data) (One camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraph 0168; Figures 19-20).

Regarding Claim 12, Kim et al. disclose the apparatus of claim 11, wherein the adjusting the second image data comprises adjusting the second image data to match a depth of focus (DOF) of the first image data (As mentioned above, one camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraphs 0168-0177; Figures 19-20).
Regarding Claim 13, Kim et al. disclose the apparatus of claim 12, wherein the adjusting the second image data further comprises warping (merging) the second image data to match a field of view (FOV) of the first image data (At least two camera modules (e.g., 1100a and 1100b) among the plurality of camera modules 1100a, 1100b, and 1100c may have different FOVs from each other. In this case, the at least two camera modules (e.g., 1100a and 1100b) may include different optical lenses, but are not limited thereto. The image generator 1214 may generate the output image by merging at least a portion of the image data respectively generated from the camera modules 1100a, 1100b, and 1100c having different FOVs, depending on the image generating information or the mode signal. Also, the image generator 1214 may generate the output image by selecting one of the image data respectively generated from the camera modules 1100a, 1100b, and/or 1100c having different FOVs, depending on the image generating information or the mode signal, Paragraphs 0168-0177; Figures 19-20).

With regard to computer program storage Claims 21-23, these claims correspond to apparatus claims 11-13 and are rejected as discussed in the above rejections of apparatus claims 11-13 (see also Paragraphs 0193-0195).

Regarding Claim 26, Kim et al.
disclose an apparatus (Electronic device with camera module group, Paragraphs 0151-0168; Figures 19-20), comprising: a memory storing processor-readable code (Internal memory, Paragraphs 0151-0168, 0172; Figures 19-20); and at least one processor coupled to the memory, to the first camera, and to the second camera, wherein the at least one processor is configured to execute the processor-readable code to cause the at least one processor to perform operations (Application processor 1200, Paragraphs 0151-0168, 0172, 0193; Figures 19-20) including: receiving first image data from a first image sensor of a first camera (Camera module 1100a has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20); receiving second image data from a second image sensor of a second camera (Camera module 1100c has an image sensor that senses and receives an image of a target, Paragraphs 0151-0168; Figures 19-20), wherein at least one of the first image data or the second image data is received through a variable aperture (VA) having an adjusted aperture setting applied by the first camera or the second camera (Each of the camera modules includes an actuator 1130, which may move the OPFE 1110 or an optical lens to a specific location. For example, the actuator 1130 may adjust a location of an optical lens such that an image sensor 1142 is placed at a focal length of the optical lens for accurate sensing. To adjust the shutter speeds and/or aperture values of the image sensors, the main processor may control mechanical devices included in the image sensors or may control pixels included in the image sensors, Paragraphs 0093-0094, 0151-0168; Figures 19-20); and determining a first output frame based on the first image data and the second image data by adjusting the second image data to match a characteristic of the first image data (merging of image data) (One camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraph 0168; Figures 19-20).

Regarding Claim 27, Kim et al. disclose the apparatus of claim 26, wherein the adjusting the second image data comprises adjusting the second image data to match a depth of focus (DOF) of the first image data (As mentioned above, one camera module (e.g., 1100c) among the plurality of camera modules 1100a, 1100b, and 1100c may be, for example, a vertical-shape depth camera extracting depth information by using an infrared ray (IR). In this case, the application processor 1200 may merge image data provided from the depth camera and image data provided from any other camera module (e.g., 1100a or 1100b) and may generate a three-dimensional (3D) depth image, Paragraphs 0168-0177; Figures 19-20).

Regarding Claim 28, Kim et al. disclose the apparatus of claim 27, wherein the adjusting the second image data further comprises warping (merging) the second image data to match a field of view (FOV) of the first image data (At least two camera modules (e.g., 1100a and 1100b) among the plurality of camera modules 1100a, 1100b, and 1100c may have different FOVs from each other. In this case, the at least two camera modules (e.g., 1100a and 1100b) may include different optical lenses, but are not limited thereto. The image generator 1214 may generate the output image by merging at least a portion of the image data respectively generated from the camera modules 1100a, 1100b, and 1100c having different FOVs, depending on the image generating information or the mode signal. Also, the image generator 1214 may generate the output image by selecting one of the image data respectively generated from the camera modules 1100a, 1100b, and/or 1100c having different FOVs, depending on the image generating information or the mode signal, Paragraphs 0168-0177; Figures 19-20).

3.) Allowable Subject Matter

Claims 4-10, 14-20, 24-25, and 29-30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRITHAM DAVID PRABHAKHER, whose telephone number is (571) 270-1128. The examiner can normally be reached Monday to Friday, 8:00 am to 5:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at (571) 272-7372. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

Pritham David Prabhakher
Patent Examiner
Pritham.Prabhakher@uspto.gov
/PRITHAM D PRABHAKHER/
Primary Examiner, Art Unit 2638

Prosecution Timeline

Sep 17, 2024: Application Filed
Jan 22, 2026: Examiner Interview (Telephonic)
Feb 02, 2026: Non-Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598386: MEMS-based Imaging Devices (2y 5m to grant; granted Apr 07, 2026)
Patent 12598373: VIDEO RECORDING METHOD AND APPARATUS, AND ELECTRONIC DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12593122: IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12593151: ANALOG-TO-DIGITAL CONVERTING CIRCUIT FOR OPTIMIZING DUAL CONVERSION GAIN OPERATION AND OPERATION METHOD THEREOF (2y 5m to grant; granted Mar 31, 2026)
Patent 12593129: APPARATUS AND METHODS FOR ADJUSTING ZOOM OF A PTZ CAMERA (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.1%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 650 resolved cases by this examiner. Grant probability derived from career allow rate.
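The headline figures can be reproduced from the career data shown above. The sketch below assumes the grant probability is simply the rounded career allow rate and that the interview-adjusted figure adds the +26.1-point lift with a cap at 99%; the tool's actual model is not disclosed, so both assumptions are illustrative only.

```python
# Reproduce the dashboard's derived statistics from the raw career data.
# Assumptions (not stated by the tool): grant probability = rounded
# career allow rate; interview lift is additive in percentage points,
# capped at 99%.

granted, resolved = 511, 650
allow_rate = 100 * granted / resolved        # ~78.6%
interview_lift = 26.1                        # percentage points

grant_prob = round(allow_rate)
with_interview = min(round(allow_rate + interview_lift), 99)

print(grant_prob, with_interview)
```

Both values match the displayed 79% and 99%, which suggests the "With Interview" figure is an additive lift truncated at a 99% ceiling rather than a separately measured rate.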
