Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,799

IMAGE PROCESSING BASED ON OBJECT CATEGORIZATION

Final Rejection — §102, §103, §112

Filed: Dec 19, 2023
Examiner: CHIU, WESLEY JASON
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 4 (Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 2y 6m
With Interview: 90%

Examiner Intelligence

Grants 61% of resolved cases.

Career Allow Rate: 61% (288 granted / 469 resolved; -0.6% vs TC avg)
Interview Lift: +28.2% for resolved cases with an interview (strong)
Typical Timeline: 2y 6m avg prosecution; 32 applications currently pending
Career History: 501 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 469 resolved cases.
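The headline figures above can be reproduced from the raw counts on the examiner card. A minimal sketch; the only assumption beyond the reported numbers is that the interview lift is additive on top of the career allow rate:

```python
# Reproduce the examiner stats shown above from the raw counts.
granted, resolved = 288, 469          # from the Career Allow Rate card
interview_lift = 0.282                # reported +28.2% interview lift

allow_rate = granted / resolved
with_interview = allow_rate + interview_lift   # assumption: lift is additive

print(f"Career allow rate: {allow_rate:.1%}")      # 61.4%, shown as 61%
print(f"With interview:    {with_interview:.1%}")  # 89.6%, shown as 90%
```

This confirms the dashboard's rounding: 288/469 ≈ 61.4% displays as 61%, and 61.4% + 28.2% ≈ 89.6% displays as 90%.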

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/27/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Amendments

Acknowledgment of receiving amendments to the claims, which were received by the Office on 02/27/2026.

Response to Arguments

Applicant's arguments filed 02/27/2026 have been fully considered but they are not persuasive. In the remarks, applicant argues in substance:

Applicant argues: “For instance, Applicant submits that Nachlieli fails to teach or suggest at least a "second confidence region" such that "the first category region and the second confidence region intersect at a second intersection region of the image" and "the second category region and the second confidence region intersect at a third intersection region of the image" as recited in currently amended claim 1. Applicant further submits that Nachlieli fails to teach or suggest "process the first intersection region of the image using a first image processing setting, the second intersection region of the image using a second image processing setting, and the third intersection region of the image using a third image processing setting, to generate a processed image" as recited in currently amended claim 1.”

Examiner’s Response: Examiner respectfully disagrees. A new interpretation of Nachlieli is seen to disclose the amendments to the claims. See the full rejection below for details.
Claim Objections

Claims 9, 13 and 19 are objected to because of the following informalities:

In claim 9, change: “wherein the plurality of image processing settings includes the first image processing setting and the second image processing setting and the third image processing setting, wherein the plurality of intersection regions includes the first intersection region and the second intersection region and the third intersection region; and wherein, to process the first intersection region using the first image processing setting and the second intersection region using the second image processing setting and the third intersection region using the third image processing setting…” to: “wherein the plurality of image processing settings includes the first image processing setting, the second image processing setting, and the third image processing setting, wherein the plurality of intersection regions includes the first intersection region, the second intersection region, and the third intersection region; and wherein, to process the first intersection region using the first image processing setting, the second intersection region using the second image processing setting, and the third intersection region using the third image processing setting…”

In claim 13, change: “to process the first intersection region of the image using the first image processing setting and the second intersection region of the image using the second image processing setting and the third intersection region of the image using the third image processing setting, the at least one processor is configured to use an image signal processor (ISP) to process the first intersection region in the raw image data using the first image processing setting and to process the second intersection region in the raw image data using the second image processing setting and to process the third intersection region in the raw image data using the third image processing setting.” to: “to process the first intersection region of the image using the first image processing setting, the second intersection region of the image using the second image processing setting, and the third intersection region of the image using the third image processing setting, the at least one processor is configured to use an image signal processor (ISP) to process the first intersection region in the raw image data using the first image processing setting, to process the second intersection region in the raw image data using the second image processing setting, and to process the third intersection region in the raw image data using the third image processing setting.”

In claim 19, change: “wherein the plurality of image processing settings includes the first image processing setting and the second image processing setting and the third image processing setting, wherein the plurality of intersection regions includes the first intersection region and the second intersection region and the third intersection region, and wherein processing the first intersection region using the first image processing setting and the second intersection region using the second image processing setting and the third intersection region using the third image processing setting includes processing the plurality of intersection regions of the image using respective image processing settings of the plurality of image processing settings.” to: “wherein the plurality of image processing settings includes the first image processing setting, the second image processing setting, and the third image processing setting, wherein the plurality of intersection regions includes the first intersection region, the second intersection region, and the third intersection region, and wherein processing the first intersection region using the first image processing setting, the second intersection region using the second image processing setting, and the third intersection region using the third image processing setting includes processing the plurality of intersection regions of the image using respective image processing settings of the plurality of image processing settings.”

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-5 and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 3 recites “the first modifier” in line 3. It is unclear if “the first modifier” is referring to “a first modifier” recited in claim 2 (line 3), or if it refers to “a first modifier” recited in claim 3 (line 3). Claim 3 recites “the first deviation” in lines 4-5. It is unclear if “the first deviation” is referring to “a first deviation” recited in claim 2 (line 4), or if it refers to “a first deviation” recited in claim 3 (line 4). Claim 3 recites “the second modifier” in line 8. It is unclear if “the second modifier” is referring to “a second modifier” recited in claim 2 (line 8), or if it refers to “a second modifier” recited in claim 3 (line 7). Claim 3 recites “the second deviation” in line 10. It is unclear if “the second deviation” is referring to “a second deviation” recited in claim 2 (line 9), or if it refers to “a second deviation” recited in claim 3 (line 8). Claim 3 recites “the third modifier” in line 12. It is unclear if “the third modifier” is referring to “a third modifier” recited in claim 2 (line 12), or if it refers to “a third modifier” recited in claim 3 (line 11). Claim 3 recites “the third deviation” in lines 13-14. It is unclear if “the third deviation” is referring to “a third deviation” recited in claim 2 (line 13), or if it refers to “a third deviation” recited in claim 3 (line 12).

Claims 4-5 and 7 are rejected as being dependent on claim 3.

Claim 5 recites “the first deviation” in line 3. It is unclear if “the first deviation” is referring to “a first deviation” recited in claim 2 (line 4), or if it refers to “a first deviation” recited in claim 3 (line 4). Claim 5 recites “the second deviation” in line 5.
It is unclear if “the second deviation” is referring to “a second deviation” recited in claim 2 (line 9), or if it refers to “a second deviation” recited in claim 3 (line 8).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-11, 14-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nachlieli et al. (US 2008/0298704 A1).

Regarding claim 1, Nachlieli et al. (hereafter referred to as Nach) teaches an apparatus for image processing (Nach, Fig. 1), the apparatus comprising: at least one memory (Nach, Paragraph 0090); and at least one processor (Nach, Paragraph 0089) coupled to the at least one memory, the at least one processor configured to: categorize a first category region of an image and a second category region of the image to identify that a first object is depicted in the first category region of the image and that a second object is depicted in the second category region of the image (Nach, Figs. 1, 3 and 6, Face map module 14, Paragraphs 0031 and 0053; the first category region is the region of a first face (Fig. 6, left dark region), and the second category region is the region of a second face (Fig. 6, right dark region)); associate a first confidence region of the image with a first confidence level associated with the categorization, wherein the first category region and the first confidence region intersect at a first intersection region of the image (Nach, Fig. 1, Skin map module 16, Paragraph 0032; Figs. 2 and 8, Block 36, Paragraphs 0055-0056 and 0059; a first confidence region is an area where skin probability is low, the first intersection region may be considered to be the region of the hair of the left person of Figure 8, and probability is considered to be the confidence level); associate a second confidence region of the image with a second confidence level associated with the categorization, wherein the first category region and the second confidence region intersect at a second intersection region of the image (Nach, Fig. 1, Skin map module 16, Paragraph 0032; Figs. 2 and 8, Block 36, Paragraphs 0055-0056 and 0059; a second confidence region is an area where the skin probability is high, and the second intersection region may be considered to be the region of the face of the left person of Figure 8); and wherein the second category region and the second confidence region intersect at a third intersection region of the image (Nach, Fig. 8; the third intersection region may be considered to be a region with high skin probability and low face probability of the right person of Figure 8); and process the first intersection region of the image using a first image processing setting (Nach, Paragraph 0066, high sharpening level range at locations with a low skin probability), the second intersection region of the image using a second image processing setting (Nach, Paragraph 0067, low sharpening level range at locations with a high skin probability), and the third intersection region of the image using a third image processing setting (Nach, Paragraph 0068, intermediate sharpening level range at locations with a high skin probability range and a low face probability range), to generate a processed image (Nach, Fig. 1, image enhancement module 22, Paragraph 0028; Fig. 2, Block 38, Paragraphs 0034 and 0066-0068).

Claim 18 is rejected for the same reasons as claim 1.

Regarding claim 2, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the at least one processor is configured to: generate a first modifier associated with the first intersection region of the image, wherein the first modifier identifies a first deviation from a first default image processing setting associated with the first object, and wherein the first image processing setting is based on application of the first deviation to the first default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a low skin probability value is a first modifier and deviates the image processing from the global sharpening parameter value (default processing setting)); generate a second modifier associated with the second intersection region of the image, wherein the second modifier identifies a second deviation from the first default image processing setting, and wherein the second image processing setting is based on application of the second deviation to the first default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a high skin probability value is a second modifier and deviates the image processing from the global sharpening parameter value (default processing setting)); and generate a third modifier associated with the third intersection region of the image, wherein the third modifier identifies a third deviation from a second default image processing setting associated with the second object, and wherein the third image processing setting is based on application of the third deviation to the second default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a high skin probability range and low face probability range is a third modifier and deviates the image processing from the global sharpening parameter value (default processing setting)).

Regarding claim 3, Nach teaches the apparatus of claim 2 (see claim 2 analysis), wherein the at least one processor is configured to: generate a first modifier associated with the first intersection region of the image, wherein the first modifier identifies a first deviation from a default image processing setting, and wherein the first image processing setting is based on application of the first deviation to the default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a low skin probability value is a first modifier and deviates the image processing from the global sharpening parameter value (default processing setting)); generate a second modifier associated with the second intersection region of the image, wherein the second modifier identifies a second deviation from the default image processing setting, and wherein the second image processing setting is based on application of the second deviation to the default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a high skin probability value is a second modifier and deviates the image processing from the global sharpening parameter value (default processing setting)); and generate a third modifier associated with the third intersection region of the image, wherein the third modifier identifies a third deviation from the default image processing setting, and wherein the third image processing setting is based on application of the third deviation to the default image processing setting (Nach, Figs. 8-10, Paragraphs 0066-0069 and 0073-0074; the sharpening factor according to a high skin probability range and low face probability range is a third modifier and deviates the image processing from the global sharpening parameter value (default processing setting)).

Regarding claim 4, Nach teaches the apparatus of claim 3 (see claim 3 analysis), wherein the default image processing setting is a default associated with at least one of the image or an image capture device (Nach, Figs. 8-10, Paragraphs 0063 and 0073-0074; the global control parameter value is considered to be the default associated with the image), the image captured using the image capture device (Nach, Paragraph 0028).
Regarding claim 5, Nach teaches the apparatus of claim 3 (see claim 3 analysis), wherein the default image processing setting identifies a default strength at which to apply a specified image processing function, wherein the first deviation from the default image processing setting includes a first difference from the default strength at which to apply the specified image processing function, and wherein the second deviation from the default image processing setting includes a second difference from the default strength at which to apply the specified image processing function (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074; the sharpening factors according to high/low probability values are modifiers and deviate the image processing from the global sharpening parameter value (default processing setting)).

Regarding claim 6, Nach teaches the apparatus of claim 2 (see claim 2 analysis), wherein the first modifier includes at least one of an offset or a multiplier to apply to the first default image processing setting (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074; the sharpening factor offsets the image processing setting from the default image processing setting).

Regarding claim 7, Nach teaches the apparatus of claim 3 (see claim 3 analysis), wherein the first modifier includes at least one of an offset or a multiplier to apply to the default image processing setting (Nach, Paragraph 0077; the modifier is a multiplier).

Regarding claim 8, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the at least one processor is configured to: categorize a plurality of category regions of the image to identify a plurality of predetermined objects depicted across the plurality of category regions of the image, wherein the plurality of category regions includes the first category region and the second category region (Nach, Figs. 3 and 6; a plurality of faces are identified, and the predetermined object is faces); and associate a plurality of confidence regions of the image with a plurality of confidence levels associated with the categorization, wherein the plurality of confidence regions includes the first confidence region and the second confidence region (Nach, Figs. 3 and 8; the plurality of confidence regions (different ranges of skin probability) are associated with the categorization).

Regarding claim 9, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the at least one processor is configured to: generate a categorization map that maps a plurality of predetermined objects to a plurality of category regions of the image, wherein the plurality of category regions includes the first category region and the second category region (Nach, Figs. 3 and 6; a plurality of faces are identified, and the predetermined object is faces); generate a confidence map that maps a plurality of confidence levels to a plurality of confidence regions of the image, wherein the plurality of confidence regions includes the first confidence region and the second confidence region (Nach, Figs. 3 and 8; the plurality of confidence regions (different ranges of skin probability) are mapped to the image); and combine the categorization map and the confidence map to generate a combined map that maps information indicative of a plurality of image processing settings to a plurality of intersection regions of the image, wherein the plurality of image processing settings includes the first image processing setting and the second image processing setting and the third image processing setting, wherein the plurality of intersection regions includes the first intersection region and the second intersection region and the third intersection region (Nach, Fig. 10, Paragraphs 0066-0069 and 0073-0074); and wherein, to process the first intersection region using the first image processing setting and the second intersection region using the second image processing setting and the third intersection region using the third image processing setting, the at least one processor is configured to process the plurality of intersection regions of the image using respective image processing settings of the plurality of image processing settings (Nach, Fig. 10, Paragraphs 0066-0069 and 0073-0074).

Claim 19 is rejected for the same reasons as claim 9.

Regarding claim 10, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein a plurality of modifiers are associated with a plurality of intersection regions of the image, wherein the plurality of modifiers identify a plurality of deviations from a default image processing setting, wherein the plurality of image processing settings are based on application of the plurality of deviations to the default image processing setting, and wherein the plurality of intersection regions include at least the first intersection region, the second intersection region, and the third intersection region (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074; Nach discloses three sharpness modifiers, and the modifiers deviate the image processing from the global sharpening parameter value (default processing setting)).

Regarding claim 11, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the at least one processor is configured to: filter region data using at least one of a low-pass filter, a Gaussian filter, an average filter, a box blur filter, a lens blur filter, a radial blur filter, a motion blur filter, a smart blur filter, a surface blur filter, a blur filter, a rescaling filter, or a resampling filter, wherein the region data is indicative of the first intersection region, the second intersection region, and the third intersection region (Nach, Paragraphs 0074-0075).
Regarding claim 14, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the image processing setting is associated with at least one of noise reduction, sharpening (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074), color saturation, color mapping, color processing, or tone mapping.

Regarding claim 15, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein the image processing setting is associated with at least one of a lens position, a flash, a focus, an exposure, a white balance, an aperture size, a shutter speed, an ISO, an analog gain, a digital gain, a denoising, a sharpening (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074), a tone mapping, a color saturation, a demosaicking, a color space conversion, a shading, an edge enhancement, an image combining for high dynamic range (HDR), a special effect, an artificial noise addition, an edge-directed upscaling, an upscaling, a downscaling, and an electronic image stabilization.

Regarding claim 17, Nach teaches the apparatus of claim 1 (see claim 1 analysis), further comprising: a display configured to display the processed image (Nach, Paragraph 0094).

Regarding claim 20, Nach teaches the method of claim 18 (see claim 18 analysis), wherein the first image processing setting is associated with at least one of noise reduction, sharpening (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074), color saturation, color mapping, color processing, tone mapping, lens position, flash, focus, exposure, white balance, aperture size, shutter speed, ISO, analog gain, digital gain, demosaicking, color space conversion, shading, edge enhancement, high dynamic range (HDR), a special effect, artificial noise addition, edge-directed upscaling, upscaling, downscaling, or electronic image stabilization.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Nachlieli et al. (US 2008/0298704 A1) in view of Ho et al. (US 2021/0319536 A1).

Regarding claim 12, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein region data is indicative of the first intersection region, the second intersection region, and the third intersection region (Nach, Figs. 6, 8 and 10; region data may be considered to be any of Figures 6, 8 or 10). However, Nach does not teach wherein the at least one processor is configured to: upscale the region data using an upscaling algorithm modified using spatial weight filtering.

In reference to Ho et al. (hereafter referred to as Ho), Ho teaches downscaling an image to generate region data (Ho, Fig. 6, Paragraphs 0081-0083; the content map is considered to be region data); and upscaling the region data using an upscaling algorithm modified using spatial weight filtering (Ho, Paragraphs 0083 and 0092; interpolating content factors using content factors proximate to a pixel location is considered to be spatial weight filtering). These arts are analogous since they are both related to image segmentation.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Nach with the downscaling and upscaling method as seen in Ho to decrease the processing time for processing image data (Ho, Paragraph 0082).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Nachlieli et al. (US 2008/0298704 A1) in view of Kim (US 2021/0304366 A1).

Regarding claim 13, Nach teaches the apparatus of claim 1 (see claim 1 analysis), wherein, to process the first intersection region of the image using the first image processing setting, the second intersection region of the image using the second image processing setting, and the third intersection region of the image using the third image processing setting, the at least one processor is configured to use an image signal processor (ISP) to process the first intersection region in the image data using the first image processing setting, to process the second intersection region in the image data using the second image processing setting, and to process the third intersection region in the image data using the third image processing setting (Nach, Figs. 8-10, Paragraphs 0068-0069 and 0073-0074). However, Nach does not explicitly state wherein the image data is raw image data.

In reference to Kim (hereafter referred to as Kim2), Kim2 teaches one or more processors configured to: receive image data captured by an image sensor, wherein the image data is raw image data (Kim2, Fig. 1, Paragraph 0028). These arts are analogous since they are both related to imaging devices and performing image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Nach with the explicit teaching of receiving raw image data and processing raw image data from the image sensor as seen in Kim2, since it is known for an image sensor to output raw data to a processor to be processed and would provide similar and expected results for receiving data at the processor. Therefore, the limitation “the at least one processor is configured to use an image signal processor (ISP) to process the first intersection region in the raw image data using the first image processing setting and to process the second intersection region in the raw image data using the second image processing setting” is met.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Nachlieli et al. (US 2008/0298704 A1) in view of Tangeland et al. (US 2016/0134838 A1).

Regarding claim 16, Nach teaches the apparatus of claim 1 (see claim 1 analysis). However, Nach does not teach wherein, to categorize the first category region of the image and a second category region of the image to identify that the first object is depicted in the first category region of the image and that the second object is depicted in the second category region of the image, the at least one processor is configured to: identify that a first predetermined material is depicted in the first category region, and that a second predetermined material is depicted in the second category region.

In reference to Tangeland et al. (hereafter referred to as Tangeland), Tangeland teaches wherein, to categorize a category region of an image to identify that an object is depicted in the category region of the image, a processor is configured to: identify that a predetermined material is depicted in the category region (Tangeland, Fig. 3, Face detector 352, Paragraphs 0025 and 0056; identifying hair is used to detect faces, and hair is considered to be the predetermined material). These arts are analogous since they are both related to detecting faces.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Nach with the teaching of detecting facial features to detect faces as seen in Tangeland, since it is a known method of face detection and would provide similar and expected results for face detection. Further, identifying hair for a first face and identifying hair for a second face is considered to teach identifying “that a first predetermined material is depicted in the first category region, and that a second predetermined material is depicted in the second category region”. That is, the first predetermined material and the second predetermined material are the same predetermined material.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY JASON CHIU whose telephone number is (571) 270-1312. The examiner can normally be reached Mon-Fri, 8am-4pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins, can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESLEY J CHIU/
Examiner, Art Unit 2639

/TWYLER L HASKINS/
Supervisory Patent Examiner, Art Unit 2639
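The §102 dispute above turns on how Nachlieli's face and skin-probability maps line up with claim 1's category/confidence intersection regions. As a reading aid only, here is a minimal sketch of that scheme under the examiner's interpretation; the threshold value, setting names, and map representation are all hypothetical and appear in neither the claims nor Nachlieli:

```python
# Illustrative sketch of claim 1's region scheme under the examiner's reading of
# Nachlieli: skin probability plays the role of "confidence", and each
# category/confidence intersection gets its own sharpening setting.
# The 0.5 threshold and the setting names are hypothetical.

def setting_for_pixel(in_face_region: bool, skin_probability: float) -> str:
    """Pick a sharpening setting for one pixel.

    - low skin probability        -> first setting (high sharpening, Nach ¶0066)
    - face + high skin prob       -> second setting (low sharpening, Nach ¶0067)
    - non-face + high skin prob   -> third setting (intermediate, Nach ¶0068)
    """
    high_skin = skin_probability >= 0.5   # hypothetical confidence threshold
    if not high_skin:
        return "high_sharpen"             # first intersection region
    if in_face_region:
        return "low_sharpen"              # second intersection region
    return "mid_sharpen"                  # third intersection region

def process(image, face_map, skin_map):
    """Tag each pixel of a 2-D image (nested lists) with its per-region setting,
    mimicking the combined categorization/confidence map of claim 9."""
    return [
        [(px, setting_for_pixel(face_map[r][c], skin_map[r][c]))
         for c, px in enumerate(row)]
        for r, row in enumerate(image)
    ]
```

The point of the sketch is the structure the applicant argues over: three distinct settings keyed to the intersections of a category map and a confidence map, rather than a single per-map adjustment.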

Prosecution Timeline

Dec 19, 2023
Application Filed
Apr 24, 2025
Non-Final Rejection — §102, §103, §112
Jun 20, 2025
Interview Requested
Jun 27, 2025
Examiner Interview Summary
Jun 27, 2025
Applicant Interview (Telephonic)
Jul 07, 2025
Response Filed
Jul 17, 2025
Final Rejection — §102, §103, §112
Aug 29, 2025
Interview Requested
Sep 10, 2025
Examiner Interview Summary
Sep 10, 2025
Applicant Interview (Telephonic)
Sep 18, 2025
Response after Non-Final Action
Oct 16, 2025
Request for Continued Examination
Oct 23, 2025
Response after Non-Final Action
Dec 02, 2025
Non-Final Rejection — §102, §103, §112
Feb 09, 2026
Interview Requested
Feb 18, 2026
Examiner Interview Summary
Feb 18, 2026
Applicant Interview (Telephonic)
Feb 27, 2026
Response Filed
Mar 18, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593139 — IMAGE SIGNAL PROCESSOR AND METHOD FOR PROCESSING IMAGE SIGNAL — granted Mar 31, 2026 (2y 5m to grant)
Patent 12581211 — IMAGING CIRCUIT AND IMAGING DEVICE — granted Mar 17, 2026 (2y 5m to grant)
Patent 12581179 — CAMERA MODULE AND VEHICLE COMPRISING SAME — granted Mar 17, 2026 (2y 5m to grant)
Patent 12568319 — Image device capable of switching between global shutter mode and dynamic vision sensor mode — granted Mar 03, 2026 (2y 5m to grant)
Patent 12563313 — IMAGE SENSING DEVICE — granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 61%
With Interview: 90% (+28.2%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 469 resolved cases by this examiner. Grant probability is derived from the career allow rate.
