Prosecution Insights
Last updated: April 19, 2026
Application No. 18/689,637

DEFECT CLASSIFICATION SYSTEM

Status: Non-Final OA (§103)
Filed: Mar 06, 2024
Examiner: MOYER, ANDREW M
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Maintech Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 76% (326 granted / 427 resolved; +14.3% vs TC avg, above average)
Interview Lift: +12.8% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 7m avg prosecution; 8 applications currently pending
Career History: 435 total applications across all art units

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 22.8% (-17.2% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 427 resolved cases.
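The headline probabilities elsewhere in this report can be reproduced from the raw career counts quoted above. A minimal sketch, assuming (as the projections footnote suggests) that grant probability is simply the career allow rate and that the interview figure is the stated lift added on top; the additive combination is an assumption about the dashboard's methodology, not a documented formula:

```python
# Reproduce the headline figures from the stated raw counts:
# 326 granted / 427 resolved, +12.8-point interview lift.
granted, resolved = 326, 427

allow_rate = 100 * granted / resolved          # career allowance rate, percent
interview_lift = 12.8                          # stated lift, percentage points
with_interview = allow_rate + interview_lift   # naive additive estimate

print(f"Career allow rate: {allow_rate:.0f}%")
print(f"With interview: {with_interview:.0f}%")
```

Running this yields 76% and 89%, matching the dashboard's rounded figures.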

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. 
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 6-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Koyama et al., JP-H10-302049 A (English machine translation provided and serving as the Official translation and hereinafter referred to as “Koyama”) in view of Ramakrishnan et al., US 2019/0010005 A1 (hereinafter referred to as “Ramakrishnan”), Takehara, US 2020/0387756 A1 (hereinafter referred to as “Takehara”), and Morimoto, JP 2019-212073 A (English machine translation provided and serving as the Official translation and hereinafter referred to as “Morimoto”). Regarding claim 6, Koyama discloses a defect classification system for classifying defect information based on a defect in paper that has passed through a papermaking process into a corresponding defect cause item among a plurality of defect cause items respectively based on causes of defects previously set (see Koyama pg. 
5, where a “computer” is disclosed, and “[t]his image identification device is used for identification of various images, but here, in the manufacturing process of paper or cloth, whether or not there is a defect such as adhesion of a foreign substance such as an insect, a hole or oil bleeding on an object in a manufacturing process such as paper or cloth”), the defect classification system comprising: imaging means for causing an imaging device to image the paper that has passed through and acquiring image data obtained by the imaging (see Koyama pg. 19, where “[t]he image detection and identification device also includes an image pickup device 2 including a CCD video camera and a line sensor for picking up an image of an object to be processed, and a defect image such as the attachment of a foreign substance and a hole from the image picked up by the image pickup device 2”); detection means for detecting the defect in the paper in the image data (see Koyama pg. 6, where “[t]he exception detection unit 22 detects a defect included in the image and detects no defect”); extraction means for extracting a feature amount of the defect (see Koyama pgs. 6 and 7, where “[a]nd a feature amount extracting unit 23 that extracts a feature amount for identifying an image and outputs the extracted feature amount as a feature amount signal” and also “[t]hat is, the feature amount extracting unit 23 is configured by an in-image feature amount extracting unit that extracts a feature amount based on the in-defect image (based on the image of the defect portion that is the identification target image)”); calculation means for causing a classification model to calculate a certainty factor in the defect cause item on the basis of the feature amount of the defect (see Koyama pgs. 
9-12, where “[t]he judgment conditions in the judgment result output unit 323 are as follows: the certainty degree of each defect name calculated by the white defect certainty degree calculation unit 3221 or the black defect certainty degree calculation unit 3222 and two criteria for evaluating them” and “[t]he extracted feature amounts (geometric shape feature amount, density pattern feature amount within defect) are output to the identification unit 31 as feature amount signals (geometric shape signal i231, density pattern signal within defect i233)” and “[w]hen the identification signals i311 to i31n are input to the certainty factor calculation unit 322 (step S107B; Connection), the certainty factor is calculated by either the white defect certainty factor calculation unit 3221 or the black defect certainty factor calculation unit 3222 and is output to the determination result output unit 323 (step S108B)”); and display means for displaying (see Koyama pg. 12, where “[t]his output result is displayed on the monitor 62 and recorded as needed”), wherein the classification model is caused to learn using machine learning from a relationship between respective feature amounts of defects previously stored and the plurality of defect cause items (see Koyama pgs. 6-10, where “[i]n the case where “0” functions as a learning unit, an image (reference image) obtained by capturing a defect that has been known in advance is captured by the image capturing unit 1” and “the image data is processed by the image processing unit 20, and learning is performed by the judgment processing unit 30 based on the result” and “[w]hen functioning as learning means, the identification section 31 performs a predetermined learning operation on the basis of the characteristic amount signals i231 and i233 output from the characteristic amount extraction section 23, and identifies from one of the output terminals” and “. . . 
(a) a function of manually setting a defect name and the like when the identification data 41 is stored in the data holding unit 40 by learning in advance”). Koyama does not explicitly disclose a dry part in a papermaking process; a classification model in which a reference feature amount is previously set; and displaying the certainty factor, wherein the classification model is caused to learn the reference feature amount. However, Ramakrishnan discloses a defect classification system for classifying defect information based on a defect in paper that has passed through a dry part in a papermaking process into a corresponding defect cause item, the defect classification system comprising: imaging means for causing an imaging device to image the paper that has passed through the dry part and acquiring image data obtained by the imaging (see Ramakrishnan Fig. 3, and paras. 0002, 0042, and 0043, where “[d]rying section 134 includes camera 156 that is positioned between rolls 114 and 116 . . .” and “[t]he diagnostic patterns are compared with operational web defect patterns of the corresponding components to determine the source(s) of web defects that are detected”). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the teaching and suggestion by Ramakrishnan to mount a camera in the drying section of paper manufacturing for detecting defects, by using the defect detection technique of Koyama, because it is predictable that performing the defect detection simultaneously with drying would increase efficiency instead of performing them serially. 
Furthermore, Takehara discloses a classification model in which a reference feature amount is previously set to calculate a certainty factor in the cause item on the basis of the feature amount, wherein the classification model is caused to learn the reference feature amount using machine learning from a relationship between respective feature amounts previously stored and the plurality of cause items (see Takehara Figs. 1-4, and paras. 0024-0040, where “[t]hat is, in the learned model, if, for example, classification into two candidate labels is to be performed, as illustrated in FIG. 2, it is determined whether the object image P is classified as either one of the candidate labels based on whether the extracted feature amounts of the object image P are equal to or larger than a set boundary line L or equal to or smaller than the set boundary line L” and “[i]n the present embodiment, the classification evaluation unit 36 evaluates (analyzes) the object image P based on the learned model, and calculates reliability for each of the candidate labels”). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the machine learning of Takehara to learn in advance the feature amount reference boundary lines for the classification model of Koyama, as previously modified by Ramakrishnan, because it is predictable that learning these feature amount reference boundary lines in advance would save time by precomputing beforehand the defect cause and reliability/certainty for each possible feature amount. Furthermore, Morimoto discloses display means for displaying the certainty factor (see Morimoto pg. 7, where “[f]or example, for the discrimination target located on the left side of the screen area a1, the discrimination result of the label name “label 1” and the discrimination accuracy “50%” is displayed together with an image showing the discrimination target in a highlighted display.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date to use the certainty factor display technique of Morimoto on the certainty factors of Koyama, as previously modified by Ramakrishnan and Takehara, because it is predictable that doing so would make it easier for users to verify, explain, and/or confirm the decisions made by the classifier by both knowing the output defect cause, but also the level of certainty and/or reliability assigned to the decision. Regarding claim 7, Koyama discloses wherein the certainty factor of the defect cause item is calculated in the calculation means (see Koyama pgs. 9-12, where “[t]he judgment conditions in the judgment result output unit 323 are as follows: the certainty degree of each defect name calculated by the white defect certainty degree calculation unit 3221 or the black defect certainty degree calculation unit 3222 and two criteria for evaluating them” and “[t]he extracted feature amounts (geometric shape feature amount, density pattern feature amount within defect) are output to the identification unit 31 as feature amount signals (geometric shape signal i231, density pattern signal within defect i233)” and “[w]hen the identification signals i311 to i31n are input to the certainty factor calculation unit 322 (step S107B; Connection), the certainty factor is calculated by either the white defect certainty factor calculation unit 3221 or the black defect certainty factor calculation unit 3222 and is output to the determination result output unit 323 (step S108B)”), and further comprising classification means for classifying the defect information into the defect cause item (see Koyama pg. 
5, where “[t]his image identification device is used for identification of various images, but here, in the manufacturing process of paper or cloth, whether or not there is a defect such as adhesion of a foreign substance such as an insect, a hole or oil bleeding on an object in a manufacturing process such as paper or cloth”). Koyama does not explicitly disclose wherein the certainty factor for each of the defect cause items is calculated, and the certainty factor having a maximum value among the plurality of certainty factors. However, Takehara discloses wherein the certainty factor for each of the cause items is calculated in the calculation means, and further comprising classification means for classifying the information into the cause item as the certainty factor having a maximum value among the plurality of certainty factors (see Takehara Figs. 1-4, and para. 0039, where “[s]pecifically, the classification determination unit 38 extracts the candidate label with the highest reliability from among the multiple candidate labels, and determines whether the object image P is classified as the extracted candidate label”). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the maximum certainty/reliability selection technique of Takehara to pick the defect cause of Koyama, because it is predictable that defect cause classification accuracy would be maximized by calculating the certainty/reliability of each possible defect cause and picking the one that is most certain and/or most reliable. Regarding claim 8, Koyama discloses wherein the classification model is caused to perform learning using machine learning from a relationship between the feature amount of the defect and the defect cause item into which the defect is classified (see Koyama pgs. 
6-10, where “[i]n the case where “0” functions as a learning unit, an image (reference image) obtained by capturing a defect that has been known in advance is captured by the image capturing unit 1” and “the image data is processed by the image processing unit 20, and learning is performed by the judgment processing unit 30 based on the result” and “[w]hen functioning as learning means, the identification section 31 performs a predetermined learning operation on the basis of the characteristic amount signals i231 and i233 output from the characteristic amount extraction section 23, and identifies from one of the output terminals” and “. . . (a) a function of manually setting a defect name and the like when the identification data 41 is stored in the data holding unit 40 by learning in advance”). Koyama does not explicitly disclose to further perform learning; and the certainty factor having the maximum value of which is a previously set predetermined value or less. However, Takehara discloses the certainty factor having the maximum value of which is a previously set predetermined value or less (see Takehara Figs. 1-4, and para. 0040, where “[a]s illustrated in FIG. 4, if the maximum reliability is equal to or larger than a first threshold K1, the classification determination unit 38 determines that the object image P is classified as the candidate label with the maximum reliability”). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the maximum certainty/reliability selection technique of Takehara to pick the defect cause of Koyama, because it is predictable that defect cause classification accuracy would be maximized by calculating the certainty/reliability of each possible defect cause and picking the one that is most certain and/or most reliable. 
Furthermore, Morimoto discloses wherein the classification model is caused to further perform learning using machine learning from a relationship between the feature amount of the defect the certainty factor of which is a previously set predetermined value or less and the defect cause item into which the defect is classified (see Morimoto pgs. 2 and 7, where “[a] teacher data acquisition unit that adds and stores, characterized in that it comprises a learning unit for reconstructing the classifier to relearn using the new training data stored in the storage unit” and “Next, when there is a label whose discrimination accuracy is less than or equal to the threshold (step S3: YES), the determination unit 13 proceeds to a teacher data collection process by the teacher data collection unit 15 (step S4)” and “On the other hand, when there is no label whose discrimination accuracy is equal to or less than the threshold (step S3: NO), the determination unit 13 includes a label whose number of teacher data is equal to or less than the threshold among the labels of the discrimination results”). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the retraining detection technique of Morimoto on the classification model of Koyama, as previously modified by the consideration of the maximum certainty/reliability of Takehara, because it is predictable that additional training of the classification model with additional training data would improve the classification model’s accuracy and ensure new scenarios or environments are learned by the classification model as scenarios and/or environments change over time. Regarding claims 9-11, Koyama discloses wherein the defect information about the defect includes coordinate data of the defect in a case where the paper is provided with coordinates in addition to the feature amount of the defect (see Koyama pg. 
7, where “The area S, the perimeter L, the circumscribed rectangle (Ox, Oy), position of center of gravity of figure (Gx, Gy), maximum diameter MAXP. Note that the circumscribed rectangles (Ox, Oy) are, as shown in FIG. 5, the length in the X-axis direction and the length in the Y-axis direction of a rectangle whose sides are in contact with and enclose the defect. The graphic centroid position (Gx, Gy) is the XY coordinate of the defect centroid position as shown in FIG. The maximum diameter MAXP is, as shown in FIG. 5, the diameter of the largest circle that touches the defect.”). Regarding claims 12-15, Koyama discloses wherein the defect cause items include at least the item in which adhesion of a foreign substance is a cause of the defect (see Koyama pg. 5, where “[t]his image identification device is used for identification of various images, but here, in the manufacturing process of paper or cloth, whether or not there is a defect such as adhesion of a foreign substance such as an insect, a hole or oil bleeding on an object in a manufacturing process such as paper or cloth”). Koyama does not explicitly disclose in the dry part. However, Ramakrishnan discloses in the dry part (see Ramakrishnan Fig. 3, and paras. 0002, 0042, and 0043, where “[d]rying section 134 includes camera 156 that is positioned between rolls 114 and 116 . . .” and “[t]he diagnostic patterns are compared with operational web defect patterns of the corresponding components to determine the source(s) of web defects that are detected”).

Conclusion

Pertinent prior art: Hirai et al., US 2022/0292317 A1, discloses defect classification based on reliability (see Hirai Abstract); and Soltwedel et al., US 2021/0287353 A1, discloses using machine learning for printed image inspection (see Soltwedel Abstract). Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW M MOYER whose telephone number is (571)272-9523. 
The examiner can normally be reached Monday-Friday 9-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW M MOYER/ Supervisory Patent Examiner, Art Unit 2675
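The rejection above walks through a pipeline: extract feature amounts of a detected defect (area S, perimeter L, circumscribed rectangle (Ox, Oy), centroid (Gx, Gy)), compute a certainty factor per defect cause item, classify to the cause with the maximum certainty, and flag the sample for relearning when even that maximum falls at or below a preset threshold. The sketch below is a minimal illustration of that scheme, not the implementation of Koyama, Takehara, or Morimoto; the function names, the 4-neighbour perimeter approximation, and the 0.6 threshold are all hypothetical:

```python
import numpy as np

def geometric_features(mask: np.ndarray) -> dict:
    """Feature amounts of a binary defect mask, in the spirit of the
    cited area S, perimeter L, circumscribed rectangle (Ox, Oy), and
    centroid (Gx, Gy). Perimeter is approximated as the count of defect
    pixels with at least one non-defect 4-neighbour."""
    ys, xs = np.nonzero(mask)
    padded = np.pad(mask, 1)                      # zero border for shifts
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return {
        "area": int(mask.sum()),
        "ox": int(xs.max() - xs.min() + 1),       # bounding-box width
        "oy": int(ys.max() - ys.min() + 1),       # bounding-box height
        "gx": float(xs.mean()),                   # centroid x
        "gy": float(ys.mean()),                   # centroid y
        "perimeter": int((mask & ~interior).sum()),
    }

def classify(certainties: dict, threshold: float = 0.6) -> tuple:
    """Pick the defect-cause item with the maximum certainty factor;
    if even that maximum is at or below the preset threshold, flag the
    sample for relearning (the claim 7/8 behaviour described above)."""
    cause = max(certainties, key=certainties.get)
    return cause, certainties[cause] <= threshold

# A 3x3 square defect in an 8x8 image: one interior pixel, 8 boundary pixels.
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:6] = True
feats = geometric_features(mask)
cause, needs_relearning = classify(
    {"foreign substance": 0.82, "hole": 0.11, "oil bleeding": 0.07})
```

For this toy mask the features come out as area 9, a 3x3 circumscribed rectangle, centroid (4.0, 3.0), and perimeter 8, and the sample classifies as "foreign substance" without triggering relearning, since 0.82 exceeds the placeholder threshold.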

Prosecution Timeline

Mar 06, 2024: Application Filed
Jan 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580065: IMAGE QUALITY RELATIVE TO MACHINE LEARNING DATA
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12551121: BLOOD PRESSURE PREDICTION METHOD AND DEVICE FUSING NOMINAL PHOTOPLETHYSMOGRAPHY (PPG) SIGNAL DATA
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12437205: FOCUSED HYPERPARAMETER TUNING USING ATTRIBUTION
Granted Oct 07, 2025 (2y 5m to grant)

Patent 12236635: DIGITAL PERSON TRAINING METHOD AND SYSTEM, AND DIGITAL PERSON DRIVING SYSTEM
Granted Feb 25, 2025 (2y 5m to grant)

Patent 12223693: OBJECT DETECTION METHOD, OBJECT DETECTION APPARATUS, AND OBJECT DETECTION SYSTEM
Granted Feb 11, 2025 (2y 5m to grant)

Based on the 5 most recent grants; study what changed to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 89% (+12.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
