Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,585

IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS FOR EXTRACTING IMAGE DEFECT PORTION FROM TEST IMAGE READ FROM SHEET OUTPUT BY UNIFORM DENSITY IMAGE FORMATION

Final Rejection — §102, §112
Filed
Apr 12, 2023
Examiner
ORANGE, DAVID BENJAMIN
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Kyocera Document Solutions Inc.
OA Round
2 (Final)
34%
Grant Probability
At Risk
3-4
OA Rounds
3y 7m
To Grant
63%
With Interview

Examiner Intelligence

Grants only 34% of cases
34%
Career Allow Rate
51 granted / 151 resolved
-28.2% vs TC avg
Strong +29% interview lift
+29.4%
Interview Lift
resolved cases with interview
Typical timeline
3y 7m
Avg Prosecution
51 currently pending
Career history
202
Total Applications
across all art units

Statute-Specific Performance

§101
13.1%
-26.9% vs TC avg
§103
29.0%
-11.0% vs TC avg
§102
20.2%
-19.8% vs TC avg
§112
32.0%
-8.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 151 resolved cases

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Note

The examiner has mapped art to the claims based on the technology described in the interview (as the examiner understood it) because the claims are not definite enough to apply art. One example of the level of confusion is that the examiner, after the interview, believed that this application was directed to detecting mistakes in printing. However, the remarks sound as though this invention is directed to correcting images. This level of confusion generally prevents the examiner from applying art. However, in this situation, the examiner hopes that applying art will convey his understanding of the invention and thus lead toward clearer claims.

To add context to the mapping below, the examiner believes that the claims are directed to finding three types of errors in an image of printed paper, namely, vertical streaks, horizontal streaks, and "noise points" (misplaced dots or blotches). The streaks are identified by finding a line of black between adjacent areas of white. The Shankar reference (applied below in the §102 rejection) teaches detecting the same type of streaks (referred to as "ductor streaks"). Shankar teaches two detection methods, and both match the examiner's best guess of the present invention. Shankar's method 1 teaches comparison with adjacent pixels (§3.1.1.1, equations 1-6), and Shankar's method 2 teaches use of a sliding window (akin to the claimed adjacent areas); see §3.1.1.2, Fig. 8.

Response to Arguments

Applicant's arguments and amendment have persuasively overcome some of the §112 rejections. The remaining issues are addressed below.

Examiner's Note

Applicant argues: Applicant understands that Kim was not used in rejecting any of the claims.

Examiner responds: That is correct. Kim was provided to inform Applicant of how the examiner understood the art.
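The detection scheme the examiner describes (difference amplification along a processing direction, then sorting streaks from noise points by which preprocessed image a pixel survives in) can be sketched in code. This is a hypothetical illustration of the examiner's reading only, not the applicant's disclosed method or Shankar's published algorithm; the zero-sum kernel, window width `w`, and threshold are all assumptions.

```python
import numpy as np

def streak_response(img, axis, w=3):
    """Amplify the difference between a first area around each pixel and the
    two adjacent second areas along the processing direction (axis).
    The kernel weights the center area +1/w and each flanking area -1/(2w),
    so it sums to zero and uniform regions map to zero response.
    Window width w is an illustrative assumption."""
    k = np.concatenate([np.full(w, -0.5 / w), np.full(w, 1.0 / w), np.full(w, -0.5 / w)])
    conv = lambda v: np.convolve(v, k, mode="same")
    return np.abs(np.apply_along_axis(conv, axis, img.astype(float)))

def classify_defects(img, thresh=30.0):
    """Three-way classification mirroring the claimed extraction:
    pixels flagged only by the horizontal-direction pass -> vertical streak,
    only by the vertical-direction pass -> horizontal streak,
    by both passes -> noise point. Threshold is an assumption."""
    first = streak_response(img, axis=1)   # horizontal processing direction
    second = streak_response(img, axis=0)  # vertical processing direction
    m1, m2 = first > thresh, second > thresh
    return {
        "vertical_streak": m1 & ~m2,    # present in first image only
        "horizontal_streak": m2 & ~m1,  # present in second image only
        "noise_point": m1 & m2,         # common to both images
    }

# Toy test image: uniform (zero) background with a vertical streak at
# column 10, a horizontal streak at row 5, and one noise point at (15, 15).
img = np.zeros((20, 20))
img[:, 10] = 255
img[5, :] = 255
img[15, 15] = 255
d = classify_defects(img)
```

In this toy setup the vertical streak survives only the horizontal-direction pass, the horizontal streak only the vertical-direction pass, and the isolated dot survives both — which mirrors the "not common" / "common to both" language of claim 1 as the examiner reads it.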
112(f)

Applicant argues: The term "image forming device" is not in the form of "means or step plus function." MPEP § 2181.

Examiner responds: "Device" is a nonce word.

Applicant argues: Moreover, the term "image forming" connotes sufficient structure to a person skilled in the art.

Examiner responds: "Image forming" is a function.

Selected §112 Issues

(Many of Applicant's remarks are explaining the amendments. Some are persuasive; others are addressed below.)

Applicant argues: "to highlight the portions (images) where the difference is large."

Examiner responds: The examiner would appreciate further explanation of a "large" difference. Is the idea that the colors are very different (like black versus white), or is the idea that there is an analysis of shapes (such as the size of the mark)? Are there normal situations in which the non-streak parts of the image are not entirely white?

Applicant argues: This results in a processed result (image) that is essentially equivalent to the one from which the portions (images) where the difference is relatively small have been removed.

Examiner responds: This sounds like the invention takes an incorrect printing and corrects the image, but that conflicts with specification [0007], which says "it is preferable to determine the causes of the image defects in the test image for each type of the image defect in order to simplify the determination process and improve the determination accuracy."

Applicant argues: Applicant notes that in image processing, sequentially selecting a "pixel of interest" (or "feature pixel") from a plurality of pixels constituting an image is a common practice.

Examiner responds: "Feature pixel" is a term of art and is clearer than "pixel of interest." While the phrase "feature pixel" does not appear in the specification, it may be that a certified translation of the priority document shows support for "feature pixel."

Applicant argues: In other words, the singular part is the image defect portion where a significant density difference has occurred.

Examiner responds: Specification [0002] identifies density difference as distinct from streaks and noise points, such that correcting density differences is a different invention than what is presently claimed.

Applicant argues: Accordingly, "corresponding" of each pixel between a plurality of images clearly means "corresponding at an absolute position."

Examiner responds: The examiner is still confused. How might a pixel correspond to a location other than being at that location? Is the recitation of "absolute" an effort to distinguish from a relative position?

Applicant argues: Therefore, in the case of "horizontal edge intensity map data" and "vertical edge intensity map data" in claim 2, the number of pixels in the map data is not reduced; rather, the difference is calculated only for one of the two second areas.

Examiner responds: The examiner does not understand the implication. The examiner believes that the number of pixels is determined by the image's resolution and area, but the examiner has not read anything about changes to resolution or area. Perhaps "density defects" refers to low resolution rather than splotches of ink?

Applicant argues: Note that claim 2 has been amended to clarify that the pixels of interest in the first area used for generation of each of the "horizontal edge intensity map data" and "vertical edge intensity map data" are also sequentially selected from the test image, as in the generation of the first and second map data.

Examiner responds: Because Applicant highlighted this feature, it sounds more significant than the examiner's initial impression. Is there a reason that one would take the pixels sequentially instead of in parallel?

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The current title is not accepted because it uses too many terms that are indefinite (see the rejections below).

The abstract of the disclosure is objected to because it does not "enable the Office and the public generally to determine quickly from a cursory inspection the nature and gist of the technical disclosure." 37 CFR 1.72(b). See the indefiniteness rejections below. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 twice recites "are not common to the first preprocessed image and the second preprocessed image," but it appears that the intent is that the pixels do not appear in the [first/second] preprocessed image. Appropriate correction is required.

Claim 20 references claim 1, but does not properly depend from claim 1 because the processor/instructions can exist without performance of any of the method steps. Here, claim 1 is a method but claim 20 is an apparatus, and the apparatus claim can be met without necessarily practicing the method. MPEP 608.01(n)(III) addresses the "test for proper dependency." MPEP 607(III) states:

Any claim which is in dependent form but which is so worded that it, in fact, is not a proper dependent claim, as for example it does not include every limitation of the claim on which it depends, will be required to be canceled as not being a proper dependent claim; and cancellation of any further claim depending on such a dependent claim will be similarly required.
The applicant may thereupon amend the claims to place them in proper dependent form, or may redraft them as independent claims, upon payment of any necessary additional fee.

Claim 20 is such a claim because it is directed to an apparatus rather than a method as in referenced claim 1. MPEP 608.01(n)(III). While, in the interest of compact prosecution, claim 20 has been examined, claim 20 is required to be cancelled.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "image forming device" in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 2, 7-9, and 20 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The specification asserts that it detects image defects with a "simple" process. See, e.g., Specification, [0008] and [0009]. In particular, the simple process has a small calculation load. Specification, [0224]. In contrast, Specification [0060]-[0062] state that pattern recognition uses a very large amount of calculation (which the examiner understands as being the opposite of simple). The specification asserts that the problem is solved with the technique of Specification [0231], that is, training with specific types of data.

However, none of claims 1-20 capture this solution, and therefore there is not written description support for claims 1-20. In contrast, claim 15 recites using a trained model, and this reads on the pattern recognition of Specification [0060]-[0062] that uses a very large amount of calculation.

Claims 1, 2, 7-9, and 20 (all claims) are rejected as a formality because the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, language in these claims does not have sufficient structure in the specification. This rejection matches the indefiniteness rejection below for the same language. Once that rejection is overcome, this one will be as well.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, 7-9, and 20 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1, 2, 7-9, and 20 (all claims) are rejected because it is not clear what this invention is for. As above, it is not clear to the examiner if this invention is for detecting defects in printed images (Specification, [0001]) or if it is for correcting images (as per the remarks, discussed above). This is not a utility rejection because the examiner does not have reason to question the utility of these inventions.
Claim 1 recites "by uniform density image formation," but this is new terminology. MPEP 2173.05(a).

Claim 1 recites "pixel of interest," but this is subjective. MPEP 2173.05(b)(IV).

Claim 1 also recites "pixel of interest that is a pixel," but it is unclear whether the claim reads on a pixel or only a pixel of interest. MPEP 2173.05(d).

Claim 1 recites "pixel of interest that is a pixel sequentially selected from … ." It is unclear if the "each" refers to "is a pixel" or "sequentially selected." If the intent is "sequentially selected," what structural properties identify a pixel as having been selected sequentially?

Claim 1 recites "converting a pixel value … into a converted value," but it is not clear what this means. Is a converted value not a pixel value?

Claim 1 recites "a converted value by a process of … ." The converted value is best understood as a product-by-process (MPEP 2113) because the steps are not part of the claim itself, but it is not clear what the implied structure would be. If the intent is that this process is required, it should be written with active verbs, such as "processing … ."

Claim 1 recites "amplifying a difference between pixel values," but it is not clear what the process of "amplifying" is (e.g., does this mean identifying, multiplying, etc.?). Further, it is unclear what is meant by "difference" because there is a three-way comparison (i.e., one first and two second areas).

Claim 1 recites "the pixel of interest," but this lacks sufficient antecedent basis because the earlier recitation uses "each pixel of interest," and thus it is unclear which is meant. MPEP 2173.05(e).

Claim 1 recites "singular parts," at the third line from the end of claim 1, but this is new terminology. MPEP 2173.05(a). The other references to "singular part" were removed by amendment.

Claim 1 recites "the conversion process," but this lacks sufficient antecedent basis. MPEP 2173.05(e). Here, the claim recites that this instance of the conversion process uses a different direction, which conflicts with the recitation of "the."

Claim 1 recites "noise point," but this is new terminology. MPEP 2173.05(a).

Claim 1 recites "a reference range," but this is subjective because there is not an objective standard to determine the range. MPEP 2173.05(b)(IV). Further, it is not clear what the range is of (e.g., color values, locations).

Claim 2 recites "correcting each pixel value of the [first/second] map data with a corresponding pixel value of the [horizontal/vertical] edge intensity map data at an absolute position." Here, it appears that the map data includes all of the pixels in the image, but the edge intensity map only includes an area of interest and one of two adjacent areas. It is not clear how to correct the pixels that do not correspond to pixels in the edge intensity map data.

Claim 2 recites "generating, by the processor, the [first/second] preprocessed image by … ," and then a different set of steps than were claimed for the generating in claim 1. This creates a conflict with the antecedent basis "the." Perhaps the intent is to modify, rather than generate, the preprocessed images?

Claim 8 recites "representative value," but this is subjective because there is not an objective standard to determine the value. MPEP 2173.05(b)(IV).

Claim 9 recites "after aggregation," but it is unclear what this refers to, i.e., aggregation of what? Dependent claims are likewise rejected. Claim 20, while an independent claim, incorporates the limitations of claim 1 and is thus similarly rejected.

Claim 9 recites "the aggregated [first/second] preprocessed image," but this lacks sufficient antecedent basis. MPEP 2173.05(e).

Claim 20 recites "a processor for executing," but this appears to be an intended use (as opposed to a processor coupled to memory, where the memory stores instructions). See also In re Blue Buffalo (Fed. Cir. January 14, 2026, non-precedential, slip opinion retrieved from https://www.cafc.uscourts.gov/opinions-orders/24-1611.OPINION.1-14-2026_2632686.pdf) as to whether a processor is interpreted as "capable of."

Claims 1, 2, 7-9, and 20 (all claims) recite the claim elements identified in the 112(f) claim interpretation section (above) that are limitations that invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to clearly link the corresponding structure, material, or acts for the claimed function. MPEP 2181 states: "In cases involving a special purpose computer-implemented means-plus-function limitation, the Federal Circuit has consistently required that the structure be more than simply a general purpose computer or microprocessor and that the specification must disclose an algorithm for performing the claimed function. See, e.g., Noah Systems Inc. v. Intuit Inc., 675 F.3d 1302, 1312, 102 USPQ2d 1410, 1417 (Fed. Cir. 2012); Aristocrat, 521 F.3d at 1333, 86 USPQ2d at 1239." Therefore, each of these claim limitations is indefinite. Dependent claims are likewise rejected. Claim 20 is also rejected as per claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 7-9, and 20 (all claims) are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shankar, N.G., Ravi, N., and Zhong, Z.W., "A real-time print-defect detection system for web offset printing," Measurement, vol. 42, no. 5, June 2009, pp. 645-652, retrieved from https://www.sciencedirect.com/science/article/pii/S0263224108001838 ("Shankar").

1.
(Currently amended) An image processing method in which a processor extracts an image defect portion in a test image obtained through an image reading process on a sheet output by uniform density image formation by an image forming device, the method comprising: (Shankar, title, "A real-time print-defect detection system for web offset printing")

generating, by the processor, a first preprocessed image by executing, with a horizontal direction of the test image used as a processing direction, a conversion process of converting a pixel value of each pixel of interest that is a pixel sequentially selected from the test image into a converted value by a process of amplifying a difference between pixel values of a first area including the pixel of interest and pixel values of two second areas adjacent to the first area on both sides in the processing direction; (Shankar, §3.1.1, ROI [region of interest] detection, see either Method 1, equations 1 and 2, or Method 2, Figs. 8 and 9)

generating, by the processor, a second preprocessed image by executing the conversion process with a vertical direction of the test image used as the processing direction; and (Shankar, §3.1.1, ROI [region of interest] detection, see either Method 1, equations 1 and 2, or Method 2, Figs. 8 and 9)

executing, by the processor, an extraction process of extracting, as the image defect portion, an image of a vertical streak including a plurality of pixels that are present in the first preprocessed image and are not common to the first preprocessed image and the second preprocessed image, (Shankar, §4.1.2, Ductor streaks, "The identification of the streak faults is the same as the structure faults, except that the fault structures are analyzed, to check whether they represent any lines in the printing directions.")

an image of a horizontal streak including a plurality of pixels that are present in the second preprocessed image and are not common to the first preprocessed image and the second preprocessed image, and (Shankar, §4.1.2, Ductor streaks, "The identification of the streak faults is the same as the structure faults, except that the fault structures are analyzed, to check whether they represent any lines in the printing directions." See also Fig. 15.)

an image of a noise point including one or more pixels that are common to the first preprocessed image and the second preprocessed image, among singular parts each consisting of one or more pixels having pixel values outside a reference range in the first preprocessed image and the second preprocessed image. (Shankar, §4.1.1, Color splashes and structural faults, Step 1; see also Fig. 14.)

2.
(Currently amended) The image processing method according to claim 1, comprising:

generating, by the processor, first map data by executing the main filter conversion process with the horizontal direction used as the processing direction; (See the mappings for claim 1)

generating, by the processor, horizontal edge intensity map data by executing an edge enhancement filter process for the first area including the pixel of interest sequentially selected from the test image, on the test image targeting the first area of interest and one of the two adjacent second areas with the horizontal direction used as the processing direction; and (See the mappings for claim 1)

generating, by the processor, the first preprocessed image by correcting each pixel value of the first map data with a corresponding pixel value of the horizontal edge intensity map data at an absolute position, and (Shankar, Table 1, "Correct %")

the second preprocessing includes:

generating, by the processor, second map data by executing the main filter conversion process with the vertical direction used as the processing direction; (See the mappings for claim 1)

generating, by the processor, vertical edge intensity map data by executing the edge enhancement filter process on the test image for the first area including the pixel of interest sequentially selected from the test image, targeting the first area of interest and one of the two adjacent second areas with the vertical direction used as the processing direction; and (See the mappings for claim 1)

generating, by the processor, the second preprocessed image by correcting each pixel value of the second [[main ]]map data with a corresponding pixel value of the vertical edge intensity map data at the absolute position. (Shankar, Table 1, "Correct %")

3. - 6. (Canceled)

7. (Currently amended) The image processing method according to claim 1, wherein the processor generates a plurality of the first preprocessed images and a plurality of the second preprocessed images by executing a plurality of times of the first preprocessing conversion processes with the horizontal direction used as the processing direction and a plurality of times of the second preprocessing conversion processes with the vertical direction used as the processing direction, with different sizes of the first area of interest and the adjacent second areas on the test image, and (See the mappings for claim 1)

the processor further extracts the vertical streak first singular part, the horizontal streak second singular part, and the third singular part noise point by the singular part extraction process based on the plurality of first preprocessed images and the plurality of second preprocessed images. (See the mappings for claim 1)

8. (Currently amended) The image processing method according to claim 7, wherein the processor extracts a plurality of candidates of each of the image of the vertical streak first singular part, the image of the horizontal streak second singular part, and the image of the noise point third singular part corresponding to a plurality of test images by executing the singular part extraction process on each of the plurality of first preprocessed images and the plurality of second preprocessed images, and (See the mappings for claim 1)

sets a representative value of pixel values of each pixel in the plurality of candidates as a pixel value of each pixel of the image of the vertical streak, the image of the horizontal streak, and the image of the noise point extracts the first singular part, the second singular part, and the third singular part by aggregating the plurality of candidates. (See the mappings for claim 1)

9.
(Currently amended) The image processing method according to claim 7, wherein the processor aggregates sets a representative value of pixel values of each pixel of the plurality of first preprocessed images and a representative value of pixel values of each pixel of the plurality of second preprocessed images as a value of each pixel of the first preprocessed image and a value of each pixel of the second preprocessed image after aggregation into one image, and (See the mappings for claim 1)

extracts the vertical streak first singular part, the horizontal streak second singular part, and the noise point third singular part by executing the singular part extraction process on the aggregated first preprocessed image and the aggregated second preprocessed image. (See the mappings for claim 1)

10. - 19. (Canceled)

20. (Original) An image processing apparatus comprising a processor for executing the processes of the image processing method according to claim 1. (See the mappings for claim 1)

Conclusion

Pertinent prior art is not provided because the level of confusion is too high.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE, whose telephone number is (571) 270-1799. The examiner can normally be reached Mon-Fri, 9-5.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID ORANGE/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Apr 12, 2023
Application Filed
Aug 18, 2025
Examiner Interview Summary
Aug 18, 2025
Examiner Interview (Telephonic)
Aug 29, 2025
Non-Final Rejection — §102, §112
Nov 25, 2025
Response Filed
Dec 16, 2025
Applicant Interview (Telephonic)
Dec 17, 2025
Examiner Interview Summary
Feb 06, 2026
Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567126
INFRASTRUCTURE-SUPPORTED PERCEPTION SYSTEM FOR CONNECTED VEHICLE APPLICATIONS
2y 5m to grant Granted Mar 03, 2026
Patent 11300964
METHOD AND SYSTEM FOR UPDATING OCCUPANCY MAP FOR A ROBOTIC SYSTEM
2y 5m to grant Granted Apr 12, 2022
Patent 10816794
METHOD FOR DESIGNING ILLUMINATION SYSTEM WITH FREEFORM SURFACE
2y 5m to grant Granted Oct 27, 2020
Patent 10433126
METHOD AND APPARATUS FOR SUPPORTING PUBLIC TRANSPORTATION BY USING V2X SERVICES IN A WIRELESS ACCESS SYSTEM
2y 5m to grant Granted Oct 01, 2019
Patent 10285010
ADAPTIVE TRIGGERING OF RTT RANGING FOR ENHANCED POSITION ACCURACY
2y 5m to grant Granted May 07, 2019
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
34%
Grant Probability
63%
With Interview (+29.4%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
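The projection figures above follow from simple arithmetic on the examiner's career statistics. A minimal sketch, assuming the displayed interview lift is applied additively in percentage points (the page does not state the exact model):

```python
# Hypothetical reconstruction of the displayed projections; the additive
# treatment of the interview lift is an assumption, not a documented model.
granted, resolved = 51, 151            # from "51 granted / 151 resolved"
base_rate = granted / resolved * 100   # career allow rate, in percent
interview_lift = 29.4                  # displayed lift, in percentage points
with_interview = base_rate + interview_lift

print(round(base_rate))       # 34  (shown as "Grant Probability")
print(round(with_interview))  # 63  (shown as "With Interview")
```

This matches the displayed 34% and 63% after rounding, suggesting the page derives both numbers directly from the 51/151 career record.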
