Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,613

IMAGE ACQUISITION APPARATUS AND METHOD EMPLOYING LENS ARRAY

Non-Final OA (§103)
Filed: May 20, 2024
Examiner: DANIELS, ANTHONY J
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% — above average (658 granted / 828 resolved; +17.5% vs TC avg)
Interview Lift: +17.1% on resolved cases with interview
Typical Timeline: 2y 7m avg prosecution; 26 currently pending
Career History: 854 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 21.4% (-18.6% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 828 resolved cases

Office Action

§103
DETAILED ACTION

I. Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

II. Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is also acknowledged of certified copies of papers required by 37 CFR 1.55.

III. Claim Interpretation

The limitation “upsampling unit” of claim 17 does not invoke interpretation under 35 U.S.C. 112(f). While the limitation recites a generic placeholder (“unit”) and is associated with the function of generating a plurality of upsampled images, the specification limits the unit to structure. That is, on p. 8, para. [0052], the specification states that “some embodiments are described in the accompanying drawings with respect to functional blocks, units and/or modules. Those skilled in the art will appreciate that these blocks, units and/or modules are physically implemented by logic circuits, discrete components, microprocessors, hard wire circuits, memory devices, wire connections, and other electronic circuits.” (Emphasis added.)

IV. Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A. Claims 1-4, 6-9, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchita (US 2013/0222553 A1) in view of Yokokawa et al. (US 2022/0159205 A1) and further in view of Demandolx et al. (US 9,083,935 B2).

As to claim 1, Tsuchita teaches an image acquisition apparatus (Fig. 4, image pickup apparatus “10”) comprising: an image sensor (Fig. 4, image pickup device “1”) comprising a microlens (Fig. 1, microlens “L2”) and a plurality of pixels arranged adjacent to each other and sharing the microlens ([0046], lines 2-4); and a processor (Fig. 4, digital signal processing unit “24”) configured to process an input image acquired by the image sensor ([0069]), wherein the processor is further configured to: generate a plurality of parallax images by extracting and combining sub-images having a same parallax from the input image (Figs. 5A and 5B; [0085], lines 1-3); and generate an output image by summing the plurality of parallax images (Fig. 5B, “2D IMAGE”; [0083], lines 1-5).
Claim 1 differs from Tsuchita in that it further requires (1) that the processor generates RGB demosaic images by interpolating a color value of an empty pixel for each of the plurality of parallax images by using color values of surrounding pixels, (2) that the processor generates a plurality of upsampled images by upsampling each of the RGB demosaic images to an input image size, and (3) that the summing be weighted summing of one reference upsampled image selected from the upsampled images and the other upsampled images based on a similarity therebetween.

However, in the same field of endeavor as the instant application, Yokokawa et al. teaches an imaging system (Fig. 10) that performs a superresolution image creation process ([0042]) that includes the step of adding same color pixels under a single microlens to produce a plurality of low-resolution color images ([0043]). Before combining the plurality of low-resolution images to create the superresolution image, the system performs demosaicking on the input low-resolution images (1) ([0048]) then upsamples the demosaicked images (2) ([0049], lines 1-4). Although Yokokawa et al. performs addition of the low-resolution images for a different reason than Tsuchita, the examiner nevertheless submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to demosaic each of Tsuchita’s A, B, C, and D parallax images then upsample them before synthesis to produce the 2D image. One of ordinary skill in the art would recognize that since the parallax images have a mosaic color pattern and have a reduced resolution compared to the originally-captured image, demosaicking and upsampling the parallax images before addition would produce a full-color 2D image with increased resolution.

Further in the same field of endeavor as the instant application, Demandolx et al. teaches an imaging system (Fig. 2) that combines a plurality of images (Fig.
1) captured under different exposure conditions (col. 6, line 66 – col. 7, line 1). In combining the images, the system performs a weighted sum of the images based on a mismatch bias that informs a weighting coefficient and that represents a difference between a reference image and the remaining images to be combined (3) (Fig. 5, step “506”; col. 10, lines 34-38). The combining process also accounts for the gradient in image regions. That is, regions with high-frequency detail receive a higher weighting coefficient, while blurry or low-resolution regions receive a lower weight (col. 7, lines 32-34).

Although Demandolx et al. combines images for a different reason than Tsuchita, the examiner nevertheless submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to add Tsuchita’s A, B, C, and D parallax images in the manner of Demandolx et al., where a reference image is selected, histograms of the remaining images are aligned with the reference image, coefficients are assigned to the remaining images based on a mismatch between the remaining images and the reference image, and the coefficients are also selected to account for high-frequency regions. As Demandolx et al. notes that parallax presents itself due to interframe motion (col. 10, lines 38-46), the parallax differences in Tsuchita’s images can also be suppressed by weighting corresponding regions more heavily and weighting non-corresponding regions less while also de-emphasizing noisy image regions in the synthesized image.

As to claims 2, 6, and 7, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 1. Although it is not stated expressly in Yokokawa et al.
and Demandolx et al., the examiner takes official notice of color demosaicking by linear interpolation, image upsampling by bilinear interpolation, and image region similarity determination by assessing a sum of absolute differences as well known in the art. One of ordinary skill in the art would have been motivated to use these methods to perform the demosaicking, upsampling, and similarity determination in Tsuchita, as modified by Yokokawa et al. and Demandolx et al., because they are comparatively simple processes that yield robust results.

As to claim 3, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 1, wherein the image sensor comprises: a quad Bayer pattern array in which pixels arranged in a 2×2 matrix comprise respective color filters having a same color (see Tsuchita, Fig. 5); or a quad square Bayer pattern array in which pixels arranged in a 4×4 matrix comprise respective color filters having a same color.

As to claim 4, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 3, wherein the processor is further configured to generate, as the plurality of parallax images, a parallax A image, a parallax B image, a parallax C image, and a parallax D image from the input image (see Tsuchita, Figs. 5A and 5B).

As to claim 8, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 1, wherein the processor is further configured to detect an aliasing region of the upsampled images (see Demandolx et al., col. 7, lines 32-34, “…high gradient areas…”).

As to claim 9, Tsuchita, as modified by Yokokawa et al.
and Demandolx et al., teaches the image acquisition apparatus of claim 8, wherein the processor is further configured to increase a weight of the similarity in a case that the similarity between the reference upsampled image and the other upsampled images is lower than a preset value in the aliasing region (see Demandolx et al., col. 7, lines 32-34).

Claim 17 is a method claim reciting steps substantially similar to the processor functions of claim 1 and also recites an upsampling unit not recited in claim 1. However, the processor of claim 1 can be construed as an upsampling unit. Also, Yokokawa et al. expressly discloses an upsampling unit (Fig. 5, upsampling unit “362”). Claims 18-20 are method claims reciting features or steps recited, or recited as processor functions, in claims 2, 8, and 9, respectively. Therefore, they are rejected as detailed above.

B. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Tsuchita (US 2013/0222553 A1) in view of Yokokawa et al. (US 2022/0159205 A1) and further in view of Demandolx et al. (US 9,083,935 B2) and further in view of Gan et al. (US 2022/0108424 A1).

As to claim 5, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 4. The claim differs from Tsuchita, as modified by Yokokawa et al. and Demandolx et al., in that it requires that the processor comprises a plurality of logics configured to simultaneously interpolate each of the parallax A image, the parallax B image, the parallax C image, and the parallax D image, or a single logic configured to sequentially interpolate the parallax A image, the parallax B image, the parallax C image, and the parallax D image.

In the same field of endeavor as the instant application, Gan et al. teaches an imaging system that produces a plurality of parallax images from pixels under a microlens and that uses a programmable logic device to sequentially demosaic the parallax images (Figs.
1, 4B, and 5; [0064] and [0073], “…demosaic network…”; {A single demosaic network would only permit sequential demosaicking.}).

In light of the teaching of Gan et al., the examiner submits that it would have been obvious to one of ordinary skill in the art to use a programmable logic device to sequentially perform the demosaicking of Tsuchita, as modified by Yokokawa et al. and Demandolx et al., as these devices can be implemented at low cost with reliable operation.

C. Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchita (US 2013/0222553 A1) in view of Yokokawa et al. (US 2022/0159205 A1) in view of Demandolx et al. (US 9,083,935 B2) and further in view of Nakano (US 2016/0132998 A1).

As to claim 10, Tsuchita, as modified by Yokokawa et al. and Demandolx et al., teaches the image acquisition apparatus of claim 8. The claim differs from Tsuchita, as modified by Yokokawa et al. and Demandolx et al., in that it requires that the processor is further configured to output edge result values by using directional filters, output an edge absolute value by performing absolute value processing of the edge result values, and generate an edge map by using an input value determined based on the edge absolute value and a control signal.

In the same field of endeavor, Nakano teaches an image processing apparatus (Fig. 4) that develops an edge map based on application of a plurality of directional filters designed to detect edges along vertical, horizontal, and diagonal directions (e.g., Fig. 3; Eqs. (3)-(6) and (9)-(12)). Each filter outputs an absolute value of a gradient in a 3x3 image area, and the ultimate outputs of edge information for the 3x3 image area are the maxima of the absolute values of filter outputs ([0089] and [0090]).
In light of the teaching of Nakano, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to use Nakano’s method of edge map development when determining high-gradient areas in the parallax images of Tsuchita, as modified by Yokokawa et al. and Demandolx et al. One of ordinary skill in the art would recognize that this would allow for precise edge detection, while preventing overinclusion of potentially erroneous edge regions (see Nakano, e.g., [0008] and [0009]).

As to claim 11, Tsuchita, as modified by Yokokawa et al., Demandolx et al., and Nakano, teaches the image acquisition apparatus of claim 10, wherein the directional filters comprise a horizontal directional filter, a vertical directional filter, a 45-degree directional filter, and a 135-degree directional filter (see Nakano, Fig. 3).

As to claim 12, Tsuchita, as modified by Yokokawa et al., Demandolx et al., and Nakano, teaches the image acquisition apparatus of claim 11, wherein the control signal comprises one of: a first control signal to generate the edge map by using, as the input value, a maximum value among an edge absolute value of the horizontal directional filter, an edge absolute value of the vertical directional filter, an edge absolute value of the 45-degree directional filter, and an edge absolute value of the 135-degree directional filter (see Nakano, [0089] and [0090]); a second control signal to generate the edge map by using, as the input value, an average value of the edge absolute value of the horizontal directional filter, the edge absolute value of the vertical directional filter, the edge absolute value of the 45-degree directional filter, and the edge absolute value of the 135-degree directional filter; and a third control signal to generate the edge map by using, as the input value, a maximum value of a weighted average value of the edge absolute value of the horizontal directional
filter and the edge absolute value of the vertical directional filter and a weighted average value of the edge absolute value of the 45-degree directional filter and the edge absolute value of the 135-degree directional filter.

D. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Tsuchita (US 2013/0222553 A1) in view of Yokokawa et al. (US 2022/0159205 A1) in view of Demandolx et al. (US 9,083,935 B2) in view of Nakano (US 2016/0132998 A1) and further in view of Wang et al. (US 2017/0053379 A1).

As to claim 13, Tsuchita, as modified by Yokokawa et al., Demandolx et al., and Nakano, teaches the image acquisition apparatus of claim 10. The claim differs from Tsuchita, as modified by Yokokawa et al., Demandolx et al., and Nakano, in that it requires that the processor is further configured to determine that the input value is not an edge based on the input value being less than a first threshold value, determine that the input value is an edge based on the input value being greater than a second threshold value, and perform linear interpolation based on determining that the input value is between the first threshold value and the second threshold value.

In the same field of endeavor as the instant application, Wang et al. teaches an imaging system (Fig. 1) that performs a demosaicking process by performing an edge-based linear interpolation based on whether a horizontal or vertical gradient is greater than a first threshold and less than a second threshold ([0028], lines 1-10; {The claimed first threshold can be Wang’s second threshold, where the gradient is not an edge if it is lower than Wang’s second threshold and lower than Wang’s first threshold, and the claimed second threshold can be Wang’s first threshold.}).
In light of the teaching of Wang et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to demosaic the parallax images of Tsuchita, as modified by Yokokawa et al. and Demandolx et al., when gradients exhibit values disclosed by Wang et al. because this would ensure quality demosaicking when scene details exhibit complex texture and stark color differences.

V. Allowable Subject Matter

Claims 14-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

As to claim 14, the prior art fails to disclose or suggest performing white balance on an image before grouping pixels of the image into a plurality of parallax images. Hayasaka et al. (US 2010/0128152 A1) teaches an image capturing apparatus that groups pixels of a captured image into a plurality of parallax images. Before creating the parallax images, the apparatus performs defect detection and clamp processing. However, white balance is performed after parallax image creation (Fig. 5). Claims 15 and 16 are allowable because they depend on claim 14.

VI. Additional Pertinent Prior Art

Ichihara et al. (US 2016/0344952 A1) discloses a camera that produces a plurality of parallax images from a captured image having blocks of four pixels positioned under a single microlens and that performs a weighted synthesis of the parallax images based on pixel gradients. Ahn (US 2024/0193801 A1) discloses an imaging system that synthesizes parallax images based on respective depths of the parallax images.

VII. Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J DANIELS whose telephone number is (571)272-7362.
The examiner can normally be reached M-F 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANTHONY J DANIELS/
Primary Examiner, Art Unit 2637
2/6/2026
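For orientation, the combining scheme the rejection attributes to the Tsuchita/Yokokawa/Demandolx combination (demosaic each parallax image, upsample, then similarity-weighted summing against a reference image) can be sketched as below. This is an illustrative sketch only, not the applicant's claimed implementation or any cited reference's actual algorithm; in particular, the per-pixel absolute difference and the inverse weight are stand-ins for the sum-of-absolute-differences similarity and mismatch-based coefficients discussed in the action.

```python
import numpy as np

def weighted_synthesis(reference, others):
    """Similarity-weighted sum of upsampled parallax images (sketch).

    Regions of the other images that agree with the reference get a
    weight near 1 and contribute fully; regions that disagree (e.g.
    parallax mismatch) are down-weighted toward the reference, roughly
    as the office action characterizes the Demandolx-style combining.
    """
    acc = reference.astype(np.float64)
    total_w = np.ones_like(acc)               # reference carries weight 1
    for img in others:
        img = img.astype(np.float64)
        diff = np.abs(img - reference)        # crude per-pixel stand-in for SAD
        w = 1.0 / (1.0 + diff)                # similar regions -> weight near 1
        acc += w * img
        total_w += w
    return acc / total_w                      # weighted average
```

With identical inputs the output equals the inputs; where another image departs sharply from the reference, its contribution shrinks and the output stays close to the reference.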
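Likewise, the edge-map scheme the action draws from Nakano and Wang (four directional filters, absolute values, a per-pixel maximum, and a two-threshold decision with linear interpolation between the thresholds) can be sketched roughly as follows. The difference kernels and threshold semantics here are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

def edge_map(img, t_low, t_high):
    """Directional edge map with a two-threshold decision (sketch).

    Four simple difference operators (horizontal, vertical, 45- and
    135-degree) approximate the directional filters; the per-pixel
    maximum of their absolute values is the edge strength. Strength
    below t_low maps to 0 (no edge), above t_high to 1 (edge), with
    linear interpolation in between.
    """
    f = img.astype(np.float64)
    h = np.zeros_like(f); h[:, 1:] = f[:, 1:] - f[:, :-1]             # horizontal
    v = np.zeros_like(f); v[1:, :] = f[1:, :] - f[:-1, :]             # vertical
    d45 = np.zeros_like(f); d45[1:, 1:] = f[1:, 1:] - f[:-1, :-1]     # 45 degrees
    d135 = np.zeros_like(f); d135[1:, :-1] = f[1:, :-1] - f[:-1, 1:]  # 135 degrees
    strength = np.abs(np.stack([h, v, d45, d135])).max(axis=0)
    return np.clip((strength - t_low) / (t_high - t_low), 0.0, 1.0)
```

A flat region yields 0 everywhere, a strong step yields 1 along the step, and intermediate gradients land proportionally between the two thresholds.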

Prosecution Timeline

May 20, 2024: Application Filed
Feb 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604094
CAMERA MODULE
2y 5m to grant • Granted Apr 14, 2026
Patent 12604105
SIGNAL PROCESSING DEVICE AND METHOD, AND PROGRAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12593140
Automatic White-Balance (AWB) for a Camera System
2y 5m to grant • Granted Mar 31, 2026
Patent 12581757
MULTIRESOLUTION IMAGER FOR NIGHT VISION
2y 5m to grant • Granted Mar 17, 2026
Patent 12574643
PRECISE FIELD-OF-VIEW TRANSITIONS WITH AUTOFOCUS FOR VARIABLE OPTICAL ZOOM SYSTEMS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 97% (+17.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 828 resolved cases by this examiner. Grant probability derived from career allow rate.
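The headline grant probability can be reproduced from the career counts shown in the examiner profile; the with-interview figure is taken as given from the page, since the underlying interview-subset counts are not shown.

```python
# Career counts from the examiner profile above.
granted, resolved = 658, 828
career_allow_rate = granted / resolved   # ~0.795, displayed as 80%

# With-interview rate as shown on the page (subset counts not provided).
with_interview = 0.97
interview_lift = with_interview - career_allow_rate   # ~0.175
```

The computed difference of about 17.5 points only approximates the +17.1% lift reported, which is presumably derived from the exact with-interview and without-interview subset rates rather than the rounded headline figures.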
