Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,545

STORAGE MEDIUM STORING IMAGE EVALUATION PROGRAM AND IMAGE EVALUATION METHOD

Non-Final OA: §102, §103
Filed: Jan 24, 2024
Examiner: LEE, BENEDICT E
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Brother Kogyo Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 92 granted / 106 resolved; +24.8% vs TC avg)
Interview Lift: +14.8% (moderate, roughly +15%; measured on resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 16 currently pending
Career History: 122 total applications across all art units
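The headline allow rate is just the grant count over resolved cases from the panel above; a minimal sketch of the arithmetic (the helper name is illustrative, not part of any real API):

```python
# Career allow rate as displayed in the dashboard: granted / resolved.
# Counts (92 granted, 106 resolved) come from the examiner panel above.
def allow_rate(granted: int, resolved: int) -> float:
    """Return the allow rate as a percentage, rounded to one decimal."""
    return round(granted / resolved * 100, 1)

rate = allow_rate(92, 106)
print(f"{rate}% (displayed as {round(rate)}%)")  # 86.8% (displayed as 87%)
```

The exact ratio is 86.8%, which the dashboard rounds up to the 87% headline figure.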

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 31.8% (-8.2% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Based on career data from 106 resolved cases; Tech Center averages are estimates.
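A quick consistency check on the statute figures above: subtracting each quoted delta from the examiner's per-statute rate recovers the implied Tech Center baseline, and every statute implies the same 40.0% figure, which suggests the dashboard compares against a single TC-wide average rather than per-statute averages. A sketch (dictionary layout is illustrative):

```python
# Examiner's per-statute overcome/allow rates and quoted deltas vs TC average,
# taken from the Statute-Specific Performance panel above.
examiner = {"101": 7.6, "103": 50.7, "102": 31.8, "112": 7.3}
delta_vs_tc = {"101": -32.4, "103": 10.7, "102": -8.2, "112": -32.7}

# Implied TC baseline per statute: examiner rate minus the quoted delta.
implied_tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(implied_tc_avg)  # every entry comes out to 40.0
```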

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed in parent Application No. JP2023-017639, filed on 02/08/2023.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “controlling the display to display” in claims 1–2, 13–15, and 19; and “an estimated processing time for processing” in claim 11.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification (e.g., Applicant’s CPU controlling the display, ¶0022; and the estimated processing time acquired by a terminal apparatus, ¶0055) as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–2, 5, 11, 13–14 and 19 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Katsuyama et al. (U.S. 10,262,202 B2).

Regarding claim 1, Katsuyama discloses a non-transitory computer-readable storage medium storing an image evaluation program including a set of instructions for an image evaluation apparatus comprising a display and a controller, the set of instructions, when executed by the controller, causing the image evaluation apparatus to perform: acquiring image data representing an image; (Per Fig. 1, Katsuyama’s form image reception unit 101 analyzes image data. Katsuyama col. 5 lines 31–35. The form image reception unit 101 receives an input of image data of a form which is a recognition target.) acquiring an evaluation condition for determining whether the image data is suitable as data serving as a source for generating processing data used for processing a sheet-like workpiece (Under broadest reasonable interpretation (BRI), Examiner construes a sheet-like workpiece as a type of form—i.e., a recognition target.) by a processing apparatus, (Per Fig. 1, Katsuyama’s candidate extraction unit 102 discloses a feature extraction of the recognition target such that its form is revealed to be evaluated. Id. col. 5 lines 36–57. The candidate extraction unit 102 extracts a corresponding line segment pair based on correspondence between a line segment (ruled line) in the input form image and a line segment in a form to which a form identifier extracted as a candidate is imparted.) the evaluation condition being a condition for determining whether the image data is suitable from a viewpoint¹ of at least ease of processing when the workpiece is processed in accordance with the processing data generated based on the image data or reproducibility of the image; (Per Fig. 3, Katsuyama discloses a layout of the form 4, in which line segments constitute correlating the form. Id. col. 7 lines 17–41. [r]egistered under each line segment ID imparted to each of other line segments LS2 to LS10 in the line segment information correlated with the form ID of the form 4 in the detailed identification dictionary 121.) acquiring a feature amount of the image data, the feature amount corresponding to the evaluation condition; and (Per Fig. 8B at step S15, Katsuyama’s registration unit 504 discloses a feature amount regarding the segmentation information of the form. Id. col. 10 line 62 – col. 11 line 3. [t]he registration unit 504 calculates the coordinates (u, v) representing the relationship (feature amount) between the line segment having the number i and the line segment having the number j using,) controlling the display to display a value corresponding to the feature amount of the image data. (Per Fig. 15B at step S311, Katsuyama’s form ID specifying unit 103 outputs a value related to the form of the input image. Id. col. 24 lines 4–33. [t]he form ID specifying unit 103 outputs the value Fmax representing the determination) result as the form ID of the input image (Step S311) and ends the detailed identification processing (return).
See also his Fig. 6. Katsuyama’s registration unit 504 applies a classification dictionary table 111, which is related to the form image, to his display 30. Id. col. 9 lines 42–62. The registration unit 504 outputs, for example, the form image of the form newly registered in the coarse classification dictionary table 111, the imparted form ID, or the like to the display device 30.)

Regarding claim 19, Katsuyama discloses an image evaluation method performed by a controller of an image evaluation apparatus, the image evaluation method comprising: acquiring image data representing an image; (Per Fig. 1, Katsuyama’s form image reception unit 101 analyzes image data. Katsuyama col. 5 lines 31–35. The form image reception unit 101 receives an input of image data of a form which is a recognition target.) acquiring an evaluation condition for determining whether the image data is suitable as data serving as a source for generating processing data used for processing a sheet-like workpiece by a processing apparatus, (Per Fig. 1, Katsuyama’s candidate extraction unit 102 discloses a feature extraction of the recognition target such that its form is revealed to be evaluated. Id. col. 5 lines 36–57. The candidate extraction unit 102 extracts a corresponding line segment pair based on correspondence between a line segment (ruled line) in the input form image and a line segment in a form to which a form identifier extracted as a candidate is imparted.) the evaluation condition being a condition for determining whether the image data is suitable from a viewpoint of at least ease of processing when the workpiece is processed in accordance with the processing data generated based on the image data or reproducibility of the image; (Per Fig. 3, Katsuyama discloses a layout of the form 4, in which line segments constitute correlating the form. Id. col. 7 lines 17–41.
[r]egistered under each line segment ID imparted to each of other line segments LS2 to LS10 in the line segment information correlated with the form ID of the form 4 in the detailed identification dictionary 121.) acquiring a feature amount of the image data, the feature amount corresponding to the evaluation condition; and (Per Fig. 8B at step S15, Katsuyama’s registration unit 504 discloses feature amount regarding the segmentation information of the form. Id. col. 10 line 62 – col. 11 line 3. [t]he registration unit 504 calculates the coordinates (u, v) representing the relationship (feature amount) between the line segment having the number i and the line segment having the number j using,) controlling the display to display a value corresponding to the feature amount of the image data. (Per Fig. 15B at step S311, Katsuyama’s form ID specifying unit 103 outputs a value related to the form of the input image. Id. col. 24 lines 4–33. [t]he form ID specifying unit 103 outputs the value Fmax representing the determination) result as the form ID of the input image (Step S311) and ends the detailed identification processing (return). See also his Fig. 6. Katsuyama’s registration unit 504 applies a classification dictionary table 111, which is related to the form image, to his display 30. Id. col. 9 lines 42–62. The registration unit 504 outputs, for example, the form image of the form newly registered in the coarse classification dictionary table 111, the imparted form ID, or the like to the display device 30.) Regarding claim 2, Katsuyama discloses the non-transitory computer-readable storage medium, wherein the image data includes a gradation value of each of a plurality of pixels forming the image; (Per Fig. 25A at step S25, Katsuyama’s information processing device 5 discloses RGB values corresponding to a plurality of pixels. Katsuyama col. 31 lines 30–47. 
[t]he information processing device 5 calculates, for example, an average value of RGB values in each of a plurality of pixels representing a single line segment, as the color of the line segment.) wherein the acquiring the feature amount includes acquiring the feature amount of the image data corresponding to the evaluation condition based on the gradation value of each of the plurality of pixels; (Katsuyama discloses line segments related to pixel values. Id.) wherein the set of instructions, when executed by the controller, causes the image evaluation apparatus to perform: setting an evaluation result to the image data in accordance with the feature amount, the evaluation result indicating whether the image data is suitable as the data serving as the source for generating the processing data; and (Per Fig. 8B at step S15, Katsuyama’s registration unit 504 discloses feature amount regarding the segmentation information of the form. Id. col. 10 line 62 – col. 11 line 3. [t]he registration unit 504 calculates the coordinates (u, v) representing the relationship (feature amount) between the line segment having the number i and the line segment having the number j using,) wherein the controlling the display includes controlling the display to display the evaluation result as the value corresponding to the feature amount. (Per Fig. 15B at step S311, Katsuyama’s form ID specifying unit 103 outputs a value related to the form of the input image. Id. col. 24 lines 4–33. [t]he form ID specifying unit 103 outputs the value Fmax representing the determination) result as the form ID of the input image (Step S311) and ends the detailed identification processing (return). See also his Fig. 6. Katsuyama’s registration unit 504 discloses a classification dictionary table 111, which is related to the form image, to his display 30. Id. col. 9 lines 42–62. 
The registration unit 504 outputs, for example, the form image of the form newly registered in the coarse classification dictionary table 111, the imparted form ID, or the like to the display device 30.) Regarding claim 5, it has been rejected in the same manner as claim 2. Regarding claim 11, Katsuyama discloses the non-transitory computer-readable storage medium, wherein the acquiring the feature amount includes acquiring, as the feature amount, an estimated processing time for processing the workpiece by the processing apparatus in accordance with the processing data generated based on the image data. (Per Fig. 1, Katsuyama’s candidate extraction unit 102 discloses a feature extraction of the recognition target such that its form reveals to be evaluated. Katsuyama col. 5 lines 36–57. The candidate extraction unit 102 extracts a corresponding line segment pair based on correspondence between a line segment (ruled line) in the input form image and a line segment in a form to which a form identifier extracted as a candidate is imparted.) Regarding claim 13, Katsuyama discloses the non-transitory computer-readable storage medium, wherein the controlling the display includes controlling the display to display the plurality of sets of the image and the evaluation result in association with each other in a screen. (Per Fig. 15B at step S311, Katsuyama’s form ID specifying unit 103 outputs a value related to the form of the input image. Katsuyama col. 24 lines 4–33. [t]he form ID specifying unit 103 outputs the value Fmax representing the determination) result as the form ID of the input image (Step S311) and ends the detailed identification processing (return). See also his Fig. 6. Katsuyama’s registration unit 504 discloses a classification dictionary table 111, which is related to the form image, to his display 30. Id. col. 9 lines 42–62. 
The registration unit 504 outputs, for example, the form image of the form newly registered in the coarse classification dictionary table 111, the imparted form ID, or the like to the display device 30.) Regarding claim 14, it has been rejected in the same manner as claim 13.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 3 is rejected under 35 U.S.C. § 103 as being unpatentable over Katsuyama in view of Kamihira et al. (U.S. 10,662,563 B2).
Regarding claim 3, Katsuyama fails to specifically disclose the non-transitory computer-readable storage medium, wherein the processing apparatus is an embroidery sewing machine configured to perform embroidery sewing on a sewing object; and wherein the processing data is stitch data that is used by the embroidery sewing machine.

In related art, Kamihira discloses the non-transitory computer-readable storage medium, wherein the processing apparatus is an embroidery sewing machine configured to perform embroidery sewing on a sewing object; (Per Fig. 4, Kamihira’s control portion 6 discloses a sewing area. Kamihira col. 7 lines 9–30. The control portion 6 acquires the sewing area (step S5).) and wherein the processing data is stitch data that is used by the embroidery sewing machine. (Per Fig. 4, Kamihira discloses that the area corresponds to an embroidery frame 45. Id. [a]nd acquires the sewing area corresponding to the identified type of the embroidery frame 45.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Kamihira into the teachings of Katsuyama to provide a more natural embroidery finish. Id. col. 1 lines 31–45.

Claim 16 is rejected under 35 U.S.C. § 103 as being unpatentable over Katsuyama in view of Shinohara (U.S. 11,032,446 B2).
Regarding claim 16, Katsuyama fails to specifically disclose the non-transitory computer-readable storage medium, wherein the set of instructions, when executed by the controller, causes the image evaluation apparatus to perform: acquiring a condition of changing the gradation value of each of the plurality of pixels; and changing the gradation value of each of the plurality of pixels in accordance with the condition; and wherein the acquiring the feature amount includes acquiring the feature amount of the image data corresponding to the evaluation condition based on the gradation value of each of the plurality of pixels after changing the gradation value. In related art, Shinohara discloses the non-transitory computer-readable storage medium, wherein the set of instructions, when executed by the controller, causes the image evaluation apparatus to perform: acquiring a condition of changing the gradation value of each of the plurality of pixels; and (Per Fig. 4, Shinohara’s evaluation value obtaining part 57 discloses plural pixels in an image. Shinohara col. 5 lines 46–56. [o]n the basis of the color distribution of a noted area 72 including a target pixel 71 that is any one of the plural pixels included in the image.) changing the gradation value of each of the plurality of pixels in accordance with the condition; and (Per Fig. 3, Shinohara discloses gradation area. Id. col. 4 line 59 – col. 5 line 2. [t]he image includes a gradation area having the banding phenomenon generated therein and a non-gradation area 81 that is not any gradation.) wherein the acquiring the feature amount includes acquiring the feature amount of the image data corresponding to the evaluation condition based on the gradation value of each of the plurality of pixels after changing the gradation value. (Per Fig. 9, Shinohara’s evaluation value obtaining part 57 discloses neighboring pixels 73 are corrected to prevent degradation thereof. Id. col. 7 lines 18–33. 
The calculation amount can be reduced, preventing any degradation of the quality of the correction of the color, by limiting the neighboring pixels 73 to be processed in the correction to pixels that are a part of the noted area 72 and that are each distant from each other.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Shinohara into the teachings of Katsuyama to limit any variation of grayscale such that the original pixels remain present in the image. Id. col. 1 lines 27–35.

Allowable Subject Matter

Claims 4, 6–10, 12, 15 and 17–18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wang et al. (U.S. 2015/0347855 A1) discloses clothing stripe detection.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENEDICT LEE whose telephone number is (571) 270-0390. The examiner can normally be reached 10:00-16:00 (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R. Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENEDICT E LEE/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

¹ Applicant discloses the viewpoint as a line segment from which stitch data is constituted. See ¶0026.

Prosecution Timeline

Jan 24, 2024 — Application Filed
Mar 24, 2026 — Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567243 — METHOD FOR OPTIMIZING DATA TO BE USED TO TRAIN OBJECT RECOGNITION MODEL, METHOD FOR BUILDING OBJECT RECOGNITION MODEL, AND METHOD FOR RECOGNIZING AN OBJECT (Granted Mar 03, 2026; 2y 5m to grant)
Patent 12561958 — METHOD OF TRAINING SEMICONDUCTOR PROCESS IMAGE GENERATOR (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12561215 — GRAPH MACHINE LEARNING FOR CASE SIMILARITY (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12548170 — METHOD, DEVICE AND SYSTEM FOR REAL-TIME MULTI-CAMERA TRACKING OF A TARGET OBJECT (Granted Feb 10, 2026; 2y 5m to grant)
Patent 12541999 — METHOD FOR EMOTION RECOGNITION BASED ON HUMAN-OBJECT TIME-SPACE INTERACTION BEHAVIOR (Granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87% (99% with interview, +14.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 106 resolved cases by this examiner. Grant probability derived from career allow rate.
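The 99% with-interview figure is consistent with adding the +14.8-point interview lift to the 87% base and capping the result at a 99% ceiling. This is an assumption about the dashboard's arithmetic, not a documented formula; a minimal sketch:

```python
# One plausible reading of "99% With Interview (+14.8%)": add the lift
# (in percentage points) to the base grant probability, capped at 99%.
# This cap-and-add formula is a guess, not a documented calculation.
def with_interview(base_pct: float, lift_pts: float, cap: float = 99.0) -> float:
    return min(base_pct + lift_pts, cap)

print(with_interview(87.0, 14.8))  # 101.8 before the cap -> 99.0
```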
