Prosecution Insights
Last updated: April 19, 2026
Application No. 18/531,480

IMAGE PROCESSING APPARATUS, LEARNING METHOD OF FEATURE EXTRACTOR, UPDATING METHOD OF IDENTIFIER, AND IMAGE PROCESSING METHOD

Non-Final OA (§102, §103)
Filed: Dec 06, 2023
Examiner: KEUP, AIDAN JAMES
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Wakayama University
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 80%, above average (48 granted / 60 resolved; +18.0% vs TC avg)
Interview Lift: +12.0% (moderate), based on resolved cases with vs. without an interview
Typical Timeline: 3y 3m average prosecution; 22 applications currently pending
Career History: 82 total applications across all art units

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center average estimates shown for comparison. Based on career data from 60 resolved cases.

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

The status of claims 1-10 is: Claims 1-10 are pending.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are:

“image processing apparatus” in claims 1-6
“feature extractor” in claims 1, 3, 6, and 8-9
“identifier” in claims 1, 4, and 8-9

Because these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-6, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nogami et al. (U.S. Patent Publication No. 2020/0118263, hereinafter “Nogami”).
Regarding claim 1, Nogami discloses an image processing apparatus that estimates each region type of plural types of regions included in an image (Nogami Abstract: “A defect detecting unit detects a defect of an object in an input image”, defect region and non-defect region), comprising: a feature extractor configured to output a feature vector corresponding to each of pixels of an image that is input, as intermediate output data (Nogami [0120]: “In a third embodiment, the defect feature amount is extracted from a feature map generated using a CNN (Convolutional Neural Network)”); and an identifier configured to output output data in which region types of the respective pixels of the image are estimated, on the basis of the intermediate output data output from the feature extractor (Nogami [0127]: “Although the defect determination unit 112 is described here as generating the score map using a sigmoid function, which is often used at the output layer of a neural network, the score map can be generated through a different method instead. For example, the defect determination unit 112 may calculate the score for each pixel by inputting the feature vector v.sub.i,j of each pixel into an SVM”; Nogami [0128]: “Using the score map calculated by the defect determination unit 112, the defect separating unit 113 and the region setting unit 114 can carry out the process of specifying one defect at a time and the process of setting the ROI for each defect, in the same manner as in the first embodiment”), wherein the region types include at least two types of a first region and a second region (Nogami [0127], quoted above; defect region and non-defect region), a region where the closest known feature vector is a first feature vector corresponding to a known pixel belonging to the first region is a domain of the first feature vector in a feature space (Nogami [0127], quoted above), a region where the closest known feature vector is a second feature vector corresponding to a known pixel belonging to the second region is a domain of the second feature vector in the feature space (Nogami [0127], quoted above), when the feature vector output from the feature extractor belongs to the domain of the first feature vector, the identifier estimates that a pixel corresponding to the feature vector belongs to the first region (Nogami [0128], quoted above), and when the feature vector output from the feature extractor belongs to the domain of the second feature vector, the identifier estimates that a pixel corresponding to the feature vector belongs to the second region (Nogami [0128], quoted above).
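To make the claim 1 mapping concrete, here is a minimal sketch of the kind of per-pixel nearest-neighbor identification the claim recites: each pixel's feature vector is assigned the region type of the closest known feature vector, so each known vector's "domain" is its Voronoi cell in feature space. This assumes NumPy; the names, feature dimensions, and exemplar values are illustrative and do not come from the application or from Nogami.

```python
import numpy as np

# Known (labeled) feature vectors and their region types
# (0 = first region, e.g. defect; 1 = second region, e.g. non-defect).
known_vecs = np.array([[0.9, 0.1, 0.2],   # first-region exemplar
                       [0.1, 0.8, 0.7]])  # second-region exemplar
known_types = np.array([0, 1])

def estimate_region_types(feature_map: np.ndarray) -> np.ndarray:
    """feature_map: (H, W, D) per-pixel feature vectors from the extractor.
    Returns an (H, W) map of estimated region types."""
    h, w, d = feature_map.shape
    flat = feature_map.reshape(-1, d)                            # (H*W, D)
    # Squared Euclidean distance from every pixel to every known vector.
    dists = ((flat[:, None, :] - known_vecs[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)         # index of the closest known vector
    return known_types[nearest].reshape(h, w)

# Example: a tiny 2x2 "feature map".
fm = np.array([[[0.85, 0.15, 0.25], [0.2, 0.75, 0.6]],
               [[0.95, 0.05, 0.1],  [0.0, 0.9, 0.8]]])
print(estimate_region_types(fm))  # [[0 1], [0 1]]
```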
Regarding claim 10, it is rejected under the same analysis as claim 1 above.

Regarding claim 2, Nogami discloses the apparatus, wherein the feature space is three-dimensional (Nogami Fig. 11: shows the feature space is three-dimensional).

Regarding claim 3, Nogami discloses the apparatus, wherein the feature extractor is a machine learning model (Nogami [0120], quoted above).

Regarding claim 5, Nogami discloses the apparatus, wherein the image is a captured image of an object, and the first region is a defect region indicating a defect of the object (Nogami [0128], quoted above).

Regarding claim 6, Nogami discloses a learning method of the extractor in the apparatus, comprising the steps of: preparing a learning image including the first region and the second region (Nogami [0123]: “On the other hand, the process for detecting the defect can be carried out in the same manner as in the first embodiment, and a CNN can be used to improve the efficiency of the processing”; Nogami [0090]: “Such training data can be prepared as follows. First, a human views the crack image indicated in FIG. 7A, and enters information of the position and width of the crack. FIG. 7B is a diagram illustrating a method for entering this information. As illustrated in FIG. 7B, the human enters the position of the crack indicated in FIG. 7A. For example, the creator of the data can specify the pixels at which a single crack is located, and can specify the pixels at which a different crack is located”); defining the first region that is known and the second region that is known in the learning image (Nogami [0123] and [0090], quoted above); inputting the first region that is known and the second region that is known, to the feature extractor, and outputting the feature vector from the feature extractor (Nogami [0091]: “Then, on the basis of this data, a set including an image feature amount and a class label is prepared for a single crack, and the classifier F is trained using this set so as to determine a crack width. The image feature amount for the single crack can be extracted in the same manner as in steps S205 and S206”); and adjusting a parameter of the feature extractor such that the feature vector corresponding to the first region and the feature vector corresponding to the second region are separated from each other in the feature space (Nogami [0107]: “Additionally, in this case, the accuracy at which the attributes are determined can be improved by changing the parameters x (FIG. 6 or FIG. 7B), which are used to determine the range of the ROI, in accordance with the resolution of the input image. In the above-described example, the parameters x express the number of pixels with the defect on their center, and thus it is necessary to change the parameters x in accordance with the resolution in order to use the same part of the detection target as the ROI when the resolution of the input image changes”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Nogami in view of Mendes Rodrigues et al. (U.S. Patent Publication No. 2019/0188855, hereinafter “Mendes”).

Regarding claim 4, Nogami does not explicitly disclose the apparatus, wherein the identifier is a nearest neighbor identifier. (However, Nogami does disclose, at [0127]: “Although the defect determination unit 112 is described here as generating the score map using a sigmoid function, which is often used at the output layer of a neural network, the score map can be generated through a different method instead”.) However, Mendes teaches the apparatus, wherein the identifier is a nearest neighbor identifier (Mendes [0068]: “For example, a simple classifier such as kNN (k-Nearest Neighbour) could be applied to determine whether a patch from a new scan falls into the defect class or the non-defect class”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the nearest neighbor identifier as taught by Mendes with the apparatus of Nogami because it would be a simple substitution (as suggested by Nogami [0127]). This motivation for the combination of Nogami and Mendes is supported by KSR exemplary rationale (B): simple substitution of one known element for another known element to obtain predictable results.
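For reference, here is a minimal sketch of the patch-level kNN classification Mendes [0068] describes, using scikit-learn's KNeighborsClassifier. The feature values, feature dimensionality, and k=3 are illustrative assumptions, not taken from Mendes.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Feature vectors for labeled training patches (1 = defect, 0 = non-defect).
train_feats = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]])
train_labels = np.array([1, 1, 0, 0])

# k-Nearest Neighbour classifier over the labeled patches.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(train_feats, train_labels)

# Classify a patch from a new scan as defect or non-defect.
new_patch = np.array([[0.85, 0.25]])
print(knn.predict(new_patch))  # [1] -> defect class
```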
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Nogami in view of Theverapperuma et al. (U.S. Patent Publication No. 2022/0024485, hereinafter “Theverapperuma”).

Regarding claim 7, Nogami does not explicitly disclose the method, wherein the step d) includes adjusting the parameter such that a loss function decreases, and the loss function is a function in which attractive force acts between feature vectors corresponding to the same region type among the region types and repulsive force acts between feature vectors corresponding to different region types among the region types. However, Theverapperuma teaches the method, wherein the step d) includes adjusting the parameter such that a loss function decreases (Theverapperuma [0097]: “The depth images generated as a result of processing the training images can then be compared to corresponding ground truth depth information (e.g., the correct depth value for each pixel in a training image) to adjust the CNN by changing weights and/or bias values for one or more layers of the CNN such that a loss function is minimized”); and the loss function is a function in which attractive force acts between feature vectors corresponding to the same region type among the region types and repulsive force acts between feature vectors corresponding to different region types among the region types (Theverapperuma [0097], quoted above; the loss function is implied to act this way if it is to improve the CNN). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the loss function of Theverapperuma with the method of Nogami because implementing a loss function would improve the accuracy of the method by further refining the parameters of the extractor, which would make its extractions more accurate. This motivation for the combination of Nogami and Theverapperuma is supported by KSR exemplary rationale (D): applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
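To illustrate the kind of loss claim 7 recites, here is a minimal NumPy sketch of a Hadsell-style contrastive loss: the attractive term penalizes distance between feature vectors of the same region type, and the repulsive term penalizes closeness (within a margin) between feature vectors of different region types. The margin and example vectors are assumptions; this is not asserted to be the applicant's or Theverapperuma's exact formulation.

```python
import numpy as np

def contrastive_loss(v1: np.ndarray, v2: np.ndarray,
                     same_type: bool, margin: float = 1.0) -> float:
    """Pairwise contrastive loss over two feature vectors."""
    d = np.linalg.norm(v1 - v2)
    if same_type:
        return d ** 2                    # attractive: penalize any separation
    return max(0.0, margin - d) ** 2     # repulsive: penalize closeness within margin

a = np.array([0.9, 0.1])  # first-region feature vector
b = np.array([0.8, 0.2])  # first-region feature vector
c = np.array([0.1, 0.9])  # second-region feature vector

print(contrastive_loss(a, b, same_type=True))   # ~0.02: same type, already close
print(contrastive_loss(a, c, same_type=False))  # 0.0: different types, farther than margin
```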
Allowable Subject Matter

Claims 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP whose telephone number is (703) 756-4578. The examiner can normally be reached Monday - Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AIDAN KEUP/
Examiner, Art Unit 2666

/Molly Wilburn/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Dec 06, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602774
Regional Pulmonary V/Q via image registration and Multi-Energy CT
2y 5m to grant Granted Apr 14, 2026
Patent 12597140
METHOD, SYSTEM AND DEVICE OF IMAGE SEGMENTATION
2y 5m to grant Granted Apr 07, 2026
Patent 12597168
METHOD FOR CONVERTING NEAR INFRARED IMAGE TO RGB IMAGE AND APPARATUS FOR SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12592082
DEVICE AND METHOD FOR PROVIDING INFORMATION FOR VEHICLE USING ROAD SURFACE
2y 5m to grant Granted Mar 31, 2026
Patent 12586182
Multi-Prong Multitask Convolutional Neural Network for Biomedical Image Inference
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 92% (+12.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
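The displayed figures are internally consistent: the 92% interview figure equals the 80% career allow rate plus the +12.0% interview lift. A minimal sketch of that apparent derivation, assuming a simple additive lift capped at 100% (the function name and cap are assumptions, not the dashboard's documented method):

```python
# Assumed reconstruction of how the projection tiles combine:
# base career allow rate plus interview lift, capped at 100%.
def project_grant_probability(career_allow_rate: float, interview_lift: float) -> float:
    return min(career_allow_rate + interview_lift, 100.0)

print(project_grant_probability(80.0, 12.0))  # 92.0, matching the tile above
```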
