Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,903

IMAGE SIGNAL PROCESSOR AND DEPTH MAP GENERATION METHOD

Non-Final OA (§102, §103)
Filed: Mar 04, 2024
Examiner: TRUONG, NGUYEN T
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: SK Hynix Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 82% (462 granted / 561 resolved; +24.4% vs Tech Center average — above average)
Interview Lift: +8.3% (moderate; based on resolved cases with an interview)
Avg Prosecution: 2y 4m; 16 applications currently pending
Career History: 577 total applications across all art units

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 561 resolved cases.
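As a sanity check, the headline figures above can be reproduced from the raw counts. The sketch below is illustrative only; the variable names and the implied Tech Center baselines are assumptions inferred from the deltas displayed on the dashboard, not values it reports directly.

```python
# Reproduce the dashboard's headline statistics from the raw counts shown above.
granted, resolved = 462, 561

allow_rate = granted / resolved                      # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")        # ~82.4%, displayed as 82%

# Each per-statute rate minus its "vs TC avg" delta recovers the implied
# Tech Center baseline estimate, e.g. 48.4 - 8.4 = 40.0 for section 103.
statute_rate  = {"101": 6.5, "102": 27.0, "103": 48.4, "112": 5.1}
statute_delta = {"101": -33.5, "102": -13.0, "103": 8.4, "112": -34.9}
tc_baseline = {s: statute_rate[s] - statute_delta[s] for s in statute_rate}

# "With interview" figure = base allow rate plus the reported +8.3% lift.
with_interview = allow_rate * 100 + 8.3              # ~90.7%, displayed as 91%
```

Note that all four statute deltas imply the same 40.0% baseline, consistent with a single Tech Center average estimate rather than per-statute baselines.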

Office Action

Grounds of rejection: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's Communication received 04 March 2024 for application number 18/594,903. The Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, Claims. Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/4/24 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 16 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Siddiqui et al. (US 2016/0012567).
Regarding claim 16, Siddiqui discloses an image system (fig. 11) comprising: an image sensor including a plurality of pixels and configured to generate an image including pixel values corresponding to each of the plurality of pixels; and an image signal processor configured to perform, based on target resolution of an output image, a sampling operation on an image received from the image sensor and perform an image processing operation based on an intermediate image on which the sampling operation is performed; wherein the image processing operation includes a noise removal operation, and wherein the image signal processor includes a noise remover that performs a noise removal operation by guide filtering using the image received from the image sensor as a guide image (pars. 8, 9; figs. 10, 11).

Regarding claim 20, Siddiqui discloses an image system (fig. 11) comprising: an image sensor including a plurality of pixels and configured to generate an input image; and a processor and memory configured to: generate a downsized left image and a downsized right image based on the input image; generate a depth map image indicating depth values based on the downsized left image and the downsized right image; remove noise by guide filtering the depth map image using the input image as a guide image; and generate an output image based on the guide-filtered depth map image (pars. 8, 9, 37, 39, 62).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Siddiqui et al. (US 2016/0012567) in view of Javidnia et al. (US 2018/0027224).

Regarding claim 1, Siddiqui discloses an image signal processor (fig. 11) comprising: a depth map generator configured to generate, based on a target resolution of an output image, a downsized left image and a downsized right image based on an input image received from an external device, and generate a depth map image indicating depth values corresponding to pixels included in the output image based on the downsized left image and the downsized right image (pars. 8, 9, 37, 39, 62); and a noise remover configured to perform guide filtering on the depth map image using the input image as a guide image and generate the output image (pars. 9, 27, 66). Siddiqui does not explicitly disclose receiving from an external device. In the same field of endeavor, Javidnia discloses receiving from an external device (fig. 13, #230, #232). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Siddiqui to include the teachings of Javidnia, with motivation to obtain a depth map from images taken at different viewpoints (par. 43).

Regarding claim 2, see teachings of claim 1.
Siddiqui further discloses wherein the depth map generator calculates at least one disparity value between the left image and the right image based on similarity of the left image and the right image and calculates the depth values based on the at least one disparity value (Siddiqui, pars. 8, 37).

Regarding claim 3, see teachings of claims 1 and 2. Siddiqui and Javidnia further disclose wherein the depth map generator determines the similarity based on a pixel value difference of a pixel included in the left image and a pixel included in the right image (Javidnia, pars. 43-45).

Regarding claim 4, see teachings of claims 1 and 2. Siddiqui and Javidnia further disclose wherein the depth map generator determines the similarity based on a correlation between a pixel included in the left image and a pixel included in the right image (Javidnia, pars. 43-45).

Regarding claim 5, see teachings of claim 1. Siddiqui and Javidnia further disclose wherein the noise remover sets a filtering area in the guide image corresponding to a target pixel included in the depth map image and determines a pixel value for the target pixel based on pixel values included in the filtering area (Siddiqui, pars. 8, 9).

Regarding claim 6, see teachings of claims 1 and 5. Siddiqui and Javidnia further disclose wherein the noise remover sets the filtering area as an area centered on the target pixel and having a predetermined size (Javidnia, par. 45, "a window of pixels surrounding the given pixel P"; fig. 10, "window size").

Regarding claim 7, see teachings of claims 1, 5, and 6. Siddiqui and Javidnia further disclose wherein the noise remover determines the pixel value for the target pixel based on an average pixel value of the pixel values included in the filtering area (Javidnia, pars. 45, 68).

Regarding claim 8, see teachings of claims 1 and 5. Siddiqui and Javidnia further disclose wherein the noise remover sets the entire area of the guide image as the filtering area and calculates at least one weight based on a distance of the target pixel and each of the pixels included in the filtering area (Javidnia, pars. 93, 133).

Regarding claim 9, see teachings of claims 1, 5, and 8. Siddiqui and Javidnia further disclose wherein the noise remover determines the pixel value of the target pixel as a weighted average value of the pixel values included in the filtering area calculated based on the at least one weight (Javidnia, pars. 129-133, "weighted median").

Regarding claim 10, Siddiqui discloses a method of operating an image signal processor, the method comprising: receiving a left image and a right image (pars. 8, 9). Siddiqui does not explicitly disclose receiving from an external device. In the same field of endeavor, Javidnia discloses receiving from an external device (fig. 13, #230, #232). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Siddiqui to include the teachings of Javidnia, with motivation to obtain a depth map from images taken at different viewpoints (par. 43). Siddiqui and Javidnia further disclose determining at least one similarity value between a left pixel sampled from the left image according to a ratio determined based on target resolution of an output image and right pixels sampled from the right image according to the ratio (Siddiqui, pars. 8, 45; Javidnia, pars. 43-45); calculating a disparity value between the left pixel and a comparison pixel determined based on the at least one similarity value for the right pixels (Siddiqui, pars. 8, 37; Javidnia, pars. 43-45); generating, based on the disparity value, a depth map image indicating depth values corresponding to pixels included in the output image (Siddiqui, pars. 8, 9, 37, 39, 62; Javidnia, pars. 43-45); and performing a guide filtering operation on the depth map image using the left image as a guide image (Siddiqui, par. 66).

Regarding claim 11, the claim is interpreted and rejected for the same reason as set forth in claim 3. Regarding claim 12, the claim is interpreted and rejected for the same reason as set forth in claim 4. Regarding claim 14, the claim is interpreted and rejected for the same reason as set forth in claim 5. Regarding claim 15, the claim is interpreted and rejected for the same reason as set forth in claims 6-9.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Siddiqui et al. (US 2016/0012567) in view of Javidnia et al. (US 2018/0027224), and further in view of Shimizu et al. (US 2014/0192163).

Regarding claim 13, see teachings of claim 10. Siddiqui and Javidnia disclose wherein calculating the disparity value comprises determining a distance between the left pixel and the comparison pixel as the disparity value (Siddiqui, pars. 32-34; Javidnia, pars. 57, 165). However, Siddiqui does not explicitly disclose determining, as the comparison pixel, a pixel among the right pixels with a highest similarity to the left pixel. In the same field of endeavor, Shimizu discloses determining, as the comparison pixel, a pixel among the right pixels with a highest similarity to the left pixel (par. 3). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Siddiqui and Javidnia to include the teachings of Shimizu, with motivation to obtain pixel matching (Shimizu, par. 3).

Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Siddiqui et al. (US 2016/0012567) in view of Seo et al. (US 2016/0182896).

Regarding claim 17, see teachings of claim 16.
Although Siddiqui further discloses wherein the image processing operation is a depth map generation operation, and wherein the image signal processor further comprises a depth map generator that generates a left image and a right image based on the intermediate image and generates a depth map image, based on the left image and right image, indicating depth values corresponding to pixels included in the output image (pars. 8, 9; figs. 10, 11), Siddiqui does not explicitly disclose wherein the image sensor includes a plurality of phase difference pixels, wherein at least two of the plurality of phase difference pixels receive light from a same micro lens. In the same field of endeavor, Seo discloses wherein the image sensor includes a plurality of phase difference pixels, wherein at least two of the plurality of phase difference pixels receive light from a same micro lens (par. 67). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Siddiqui to include the teachings of Seo, with motivation to detect a depth value (Seo, par. 9).

Regarding claim 18, the claim is interpreted and rejected for the same reason as set forth in claim 2. Regarding claim 19, the claim is interpreted and rejected for the same reason as set forth in claim 5.

Prior art not relied upon: Please refer to the references listed in the attached PTO-892; although not relied upon for the claim rejections, these references are pertinent to the disclosure.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGUYEN T TRUONG, whose telephone number is (571) 272-5262. The examiner can normally be reached Mon - Fri, 6AM - 2PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE ATALA, can be reached at (571) 272-7384. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NGUYEN T TRUONG/
Primary Examiner, Art Unit 2486
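For orientation on the technology at issue, the rejected claims describe a two-step pipeline: a depth map is generated from disparities between downsized left/right images, then denoised by guided filtering with the input image as the guide. The sketch below illustrates those two standard operations only; it is not the applicant's or Siddiqui's implementation, and the function names, window radius, and the pinhole-stereo relation depth = focal_length × baseline / disparity are illustrative assumptions.

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1) x (2r+1) window via 2-D cumulative sums (edge-padded)."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge").cumsum(axis=0)
    p = np.vstack([np.zeros((1, p.shape[1])), p])
    rows = p[k:] - p[:-k]                         # sums over k consecutive rows
    rows = np.hstack([np.zeros((rows.shape[0], 1)), rows.cumsum(axis=1)])
    return (rows[:, k:] - rows[:, :-k]) / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Edge-preserving smoothing of `src` steered by `guide` (He et al.-style)."""
    m_g, m_s = box(guide, r), box(src, r)
    cov = box(guide * src, r) - m_g * m_s         # guide/source covariance
    var = box(guide * guide, r) - m_g * m_g       # guide variance
    a = cov / (var + eps)                         # per-window linear coefficients
    b = m_s - a * m_g
    return box(a, r) * guide + box(b, r)          # q = mean(a) * guide + mean(b)

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo relation depth = f * B / d, with d clamped to avoid /0."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

# Toy pipeline: disparity map -> depth map -> guided-filter denoising,
# using a random image as a stand-in for the full-resolution guide.
rng = np.random.default_rng(0)
guide_img = rng.random((16, 16))
disparity = 2.0 + rng.random((16, 16))            # disparities in [2, 3) pixels
depth = depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.05)
smoothed = guided_filter(guide_img, depth)
```

A design note relevant to the claims: because the guide image retains full resolution while the depth map comes from downsized images, the guided filter transfers the guide's edges back into the (coarser) depth map, which is why the claims use the input image, rather than the depth map itself, as the guide.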

Prosecution Timeline

Mar 04, 2024
Application Filed
Nov 15, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598300: CODING MODE DEPENDENT SELECTION OF TRANSFORM SKIP MODE (2y 5m to grant; granted Apr 07, 2026)
Patent 12598292: INTRA MODE CANDIDATE CONFIGURATION METHOD AND VIDEO DECODING APPARATUS (2y 5m to grant; granted Apr 07, 2026)
Patent 12598302: IMAGE DECODING METHOD AND DEVICE FOR CODING CHROMA QUANTIZATION PARAMETER OFFSET-RELATED INFORMATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12593075: PICTURE DATA ENCODING METHOD AND APPARATUS AND PICTURE DATA DECODING METHOD AND APPARATUS (2y 5m to grant; granted Mar 31, 2026)
Patent 12593135: Panorama Camera Configuration Method and Panorama Camera Configuration System (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get these cases past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 91% (+8.3%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 561 resolved cases by this examiner. Grant probability derived from career allow rate.
