Prosecution Insights
Last updated: April 19, 2026
Application No. 18/426,907

STORAGE MEDIUM STORING COMPUTER PROGRAM, GENERATION APPARATUS, AND GENERATION METHOD

Non-Final OA: §101, §102, §103, §112
Filed: Jan 30, 2024
Examiner: JONES, ANDREW B
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Brother Kogyo Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 90%

Examiner Intelligence

Grants 72% — above average.

Career Allow Rate: 72% (53 granted / 74 resolved; +9.6% vs TC avg)
Interview Lift: +18.9% — strong (resolved cases with interview vs. without)
Typical Timeline: 3y 2m avg prosecution; 25 currently pending
Career History: 99 total applications across all art units
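The headline figures above are simple ratios of the counts shown. A minimal sketch (counts taken from the dashboard above; variable names are mine) reproduces them:

```python
# Reproduce the dashboard's headline examiner statistics
# from the underlying counts shown above.

granted = 53      # applications allowed
resolved = 74     # resolved dispositions (allowed + abandoned)

allow_rate = granted / resolved              # career allow rate
tc_delta = 9.6                               # percentage points vs TC average
tc_average = round(allow_rate * 100 - tc_delta, 1)

print(f"Career allow rate: {allow_rate:.0%}")   # -> 72%
print(f"Implied TC average: {tc_average}%")     # -> 62.0%
```

53/74 rounds to the displayed 72%, and removing the +9.6 point delta backs out an implied Tech Center average of about 62% for allowance overall.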

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 17.6% (-22.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 74 resolved cases.
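The per-statute deltas can be sanity-checked the same way. A short sketch (figures from the chart data above, assuming each delta is measured against the Tech Center average estimate) backs out the baseline each rate was compared to:

```python
# Derive the implied Tech Center average for each statute from the
# examiner's rate and the displayed delta ("vs TC avg").

stats = {                 # statute: (examiner rate %, delta vs TC avg %)
    "101": (9.7, -30.3),
    "103": (49.3, +9.3),
    "102": (18.3, -21.7),
    "112": (17.6, -22.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)   # rate = TC avg + delta
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
```

Each statute backs out the same 40.0% figure, consistent with a single Tech Center average estimate being used as the baseline for all four statutes.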

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was filed on March 16, 2024.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on January 30, 2024 and May 1, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities:

Line 8 states “has no abnormality in visual”; this reads as a grammatical error and should be rewritten to make clear what “in visual” means. As written, the examiner is interpreting it as “has no visual abnormalities”.

Line 6 states “(K is an integer larger than or equal to one)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “detecting K object regions corresponding to K objects by using a trained object detection model, where K is an integer larger than or equal to one”.

Line 14 states “(L is an integer larger than or equal to one and smaller than or equal to K)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “the inspection condition being among L inspection conditions, where L is an integer larger than or equal to one and smaller than or equal to K”.

Appropriate correction is required.

Claim 9 is objected to because of the following informalities:

Line 6 states “has no abnormality in visual”; this reads as a grammatical error and should be rewritten to make clear what “in visual” means. As written, the examiner is interpreting it as “has no visual abnormalities”.

Line 4 states “(T is an integer larger than or equal to one)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “detecting T object regions corresponding to T objects by using a trained object detection model, where T is an integer larger than or equal to one”.

Line 12 states “(U is an integer larger than or equal to one and smaller than or equal to T)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “the inspection condition being among U inspection conditions, where U is an integer larger than or equal to one and smaller than or equal to T”.

Appropriate correction is required.

Claim 11 is objected to because of the following informalities:

Line 8 states “has no abnormality in visual”; this reads as a grammatical error and should be rewritten to make clear what “in visual” means. As written, the examiner is interpreting it as “has no visual abnormalities”.

Line 6 states “(K is an integer larger than or equal to one)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “detecting K object regions corresponding to K objects by using a trained object detection model, where K is an integer larger than or equal to one”.

Line 14 states “(L is an integer larger than or equal to one and smaller than or equal to K)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “the inspection condition being among L inspection conditions, where L is an integer larger than or equal to one and smaller than or equal to K”.

Appropriate correction is required.

Claim 19 is objected to because of the following informalities:

Line 6 states “has no abnormality in visual”; this reads as a grammatical error and should be rewritten to make clear what “in visual” means. As written, the examiner is interpreting it as “has no visual abnormalities”.

Line 4 states “(T is an integer larger than or equal to one)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “detecting T object regions corresponding to T objects by using a trained object detection model, where T is an integer larger than or equal to one”.

Line 12 states “(U is an integer larger than or equal to one and smaller than or equal to T)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “the inspection condition being among U inspection conditions, where U is an integer larger than or equal to one and smaller than or equal to T”.

Appropriate correction is required.

Claim 20 is objected to because of the following informalities:

Line 6 states “has no abnormality in visual”; this reads as a grammatical error and should be rewritten to make clear what “in visual” means. As written, the examiner is interpreting it as “has no visual abnormalities”.

Line 4 states “(K is an integer larger than or equal to one)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “detecting K object regions corresponding to K objects by using a trained object detection model, where K is an integer larger than or equal to one”.

Line 12 states “(L is an integer larger than or equal to one and smaller than or equal to K)”; the text within the parentheses should be plainly written in the limitation. The Examiner suggests rewriting this limitation to read as “the inspection condition being among L inspection conditions, where L is an integer larger than or equal to one and smaller than or equal to K”.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-7, 9, 11-13, 15-17, 19, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claim 1, and based upon consideration of all of the relevant factors with respect to the claim as a whole, claims 1, 11, and 20 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner will analyze claim 1; similar rationale applies to independent claims 11 and 20. The rationale, under MPEP § 2106, for this finding is explained below.

The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.

Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter? When examining the claim under 35 U.S.C. 101, the Examiner interprets that the claim is related to a process, since the claim is directed to a non-transitory computer-readable storage medium storing a set of program instructions for a computer that generates data for inspecting visual of an inspection target, the set of program instructions, when executed by a controller of a computer, causing the computer to perform…

Step 2a, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception? The Examiner interprets that the judicial exception applies, since the claim 1 limitations of “detecting K object regions…”, “generating first correspondence data indicating K correspondences”, and “storing the first correspondence data” are directed to an abstract idea. The claim is related to mental processes, performing steps which could feasibly be performed by the human mind. If the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a), a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two.

Step 2a, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? The Examiner interprets that the claim 1 limitations do not provide additional elements, or a combination of additional elements, amounting to a practical application, since the claim adds no more than insignificant extra-solution activity to the judicial exception; see MPEP § 2106.05(g) and MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"); see also Genetic Techs. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."). For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection (see MPEP § 2106.07 for more information on formulating a rejection for lack of eligibility), it is a best practice for the examiner to recommend an amendment, if possible, that would resolve eligibility of the claim.

Step 2b: If the claim does not integrate the judicial exception into a practical application, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception. The Examiner interprets that the claims do not amount to significantly more, since the claims use machine learning models for object detection, which is well understood, routine, and conventional (YOLO is cited as known prior art in the applicant’s specification); storing data in memory is conventional; and no inventive concept is identified. Furthermore, the generic computer components of the controller of the computer, recited as performing generic computer functions that are well-understood, routine, and conventional activities, amount to no more than implementing the abstract idea with a computerized system.
Claims 2-10 and 11-19, depending on the independent claims, include all the limitations of the independent claims. The Examiner finds that claims 2, 3, 5-7, 9, 12, 13, 15-17, and 19 do not state significantly more, since these claims only recite:

“wherein the generating the first correspondence data includes determining the type of the object region by analyzing an image of the object region based on a predetermined rule” in claims 2 and 12;

“wherein the generating the first correspondence data includes determining the type of the object region by using the trained object detection model or a classification model trained to classify types of the object regions” in claims 3 and 13;

“wherein the type of the object region is one of L types including a first type, the first type including a mark provided based on a standard or a law” and “wherein a criterion indicated by the inspection condition associated with the first type is a criterion that is most difficult to satisfy among L criteria indicated by the L inspection conditions associated with the L types” in claims 5 and 15;

“wherein the type of the object region is one of L types including a second type including a photograph” and “wherein a criterion indicated by the inspection condition associated with the second type is a criterion that is easiest to satisfy among L criteria indicated by the L inspection conditions associated with the L types” in claims 6 and 16;

“wherein the type of the object region is one of L types including a third type and a fourth type, the third type including a character, the fourth type including at least illustration or photograph” and “wherein a criterion indicated by the inspection condition associated with the third type is more difficult to satisfy than a criterion indicated by the inspection condition associated with the fourth type” in claims 7 and 17; and

“based on second captured image data indicating a second captured image of a second inspection target detecting T object regions corresponding to T (T is an integer larger than or equal to one) objects by using the trained object detection model, the second inspection target including the T objects and having no abnormality in visual”, “generating second correspondence data indicating T correspondences corresponding to respective ones of the T object regions, each of the T correspondences indicating a correspondence between object region information and condition information, the object region information being information specifying an object region in the second captured image of the second inspection target, the condition information indicating an inspection condition associated with a type of the object region, the inspection condition being among U (U is an integer larger than or equal to one and smaller than or equal to T) inspection conditions”, and “storing the second correspondence data in the memory” in claims 9 and 19.

Thus, claims 2, 3, 5-7, 9, 12, 13, 15-17, and 19 recite the same abstract idea and therefore are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more. Therefore, the Examiner interprets that the claims are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-8 and 14-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 4 and 14, the term “visual of an object represented by a target object image is normal” in claims 4 and 14 uses a relative term which renders the claim indefinite. The term “normal” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. ¶ 0039 of the applicant’s specification describes the following: “Objects may have various defects, such as missing parts in image, deformations, stains, and so on. In a case where the defects are minor, the object is determined to have a normal visual.” It is not made clear what constitutes “minor defects”. Additionally, a visual being “normal” could mean nearly anything; there is no context provided which determines a definite description of what minor defects are and what a normal image is supposed to be. It could be an image which is determined to have no flaws, or an image with one or two small flaws. As such, it is not clear what the scope of the term “normal” is supposed to cover in this claim.

Regarding claims 5 and 15, the term “a criterion that is most difficult to satisfy” in claims 5 and 15 uses a relative term which renders the claim indefinite. The term “most difficult” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. ¶ 0127 of the applicant’s specification and Figure 9 describe setting reference values (Va1-Va5) to a value associated with an object type. In the example given in ¶ 0127, the smallest reference value is set for the first image type. The specification describes that this smallest value is the most difficult value for an object type to be classified as. It is not described in the specification why this is the most difficult value to be classified as, or what that means. The classification model in ¶ 0158 appears to classify the types of object regions, but it is not made clear by the figures or the specification what it means for a criterion to be the most difficult to satisfy.

Regarding claims 6 and 16, the term “a criterion that is easiest to satisfy” in claims 6 and 16 uses a relative term which renders the claim indefinite. The term “easiest to satisfy” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. ¶ 0164 of the applicant’s specification and Figure 9 describe setting reference values (Va1-Va5) to a value associated with an object type. In the example given in ¶ 0164, the largest reference value is set for the first image type. The specification describes that this largest value is the easiest value for an object type to be classified as. It is not described in the specification why this is the easiest value to be classified as, or what that means. The classification model in ¶ 0158 appears to classify the types of object regions, but it is not made clear by the figures or the specification what it means for a criterion to be the easiest to satisfy.

Regarding claims 7 and 17, the term “a criterion that is more difficult to satisfy” in claims 7 and 17 uses a relative term which renders the claim indefinite. The term “more difficult to satisfy” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. ¶ 0092 of the applicant’s specification and Figure 7 describe setting reference values (Va1-Va3) to a value associated with an object type. The specification describes that a value Va3 is more difficult to satisfy than Va2. It is not described in the specification why this is a more difficult value to be classified as, or what that means. The classification model in ¶ 0158 appears to classify the types of object regions, but it is not made clear by the figures or the specification what it means for a criterion to be more difficult to satisfy.

Claims 8 and 18 recite the limitation "the object region information specifying the second object region" in line 8. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for a patent.

(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1-3, 9, 11-13, 19, and 20 are rejected under pre-AIA 35 U.S.C. 102(a)(2) as being anticipated by Arroyo et al (U.S. Patent Publication No. 2022/0114821 A1, hereinafter “Arroyo”).
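Before the claim-by-claim mapping, the K-regions / L-conditions relationship that the rejections repeatedly turn on can be made concrete. The sketch below is purely illustrative: the region types, bounding boxes, and thresholds are invented and come from neither the application nor Arroyo; only the data-flow shape recited in claim 1 (detect K object regions, pair each region's information with condition information chosen by its type, store the correspondences) is mirrored.

```python
# Illustrative sketch (hypothetical data, not from the application) of the
# correspondence scheme recited in claim 1: K detected object regions, each
# paired with an inspection condition drawn from L condition types, 1 <= L <= K.

# Stand-in for the output of the claimed trained object detection model.
detected_regions = [
    {"bbox": (10, 10, 50, 50), "type": "mark"},
    {"bbox": (60, 10, 120, 40), "type": "character"},
    {"bbox": (10, 60, 90, 140), "type": "photograph"},
    {"bbox": (100, 60, 150, 140), "type": "mark"},
]

# Hypothetical per-type inspection conditions (the "condition information").
inspection_conditions = {
    "mark": {"max_diff": 0.01},        # hardest criterion to satisfy
    "character": {"max_diff": 0.05},
    "photograph": {"max_diff": 0.20},  # easiest criterion to satisfy
}

# "First correspondence data": one entry per region, pairing the object
# region information with the condition information for its type.
correspondence_data = [
    {"region": r["bbox"], "condition": inspection_conditions[r["type"]]}
    for r in detected_regions
]

K = len(detected_regions)                        # number of detected regions
L = len({r["type"] for r in detected_regions})   # distinct condition types used
assert 1 <= L <= K   # the relationship the Examiner's note below relies on
print(K, L)          # here: 4 regions, 3 condition types
```

Because each region gets exactly one condition and several regions can share a type, the number of distinct conditions used can never exceed the number of regions, which is the pigeonhole observation in the Examiner's note on the "L is an integer larger than or equal to one and smaller than or equal to K" limitation.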
Regarding claim 1, Arroyo teaches a non-transitory computer-readable storage medium storing a set of program instructions for a computer that generates data for inspecting visual of an inspection target, the set of program instructions, when executed by a controller of the computer (¶ 0046: The program(s) may be embodied in software stored on one or more non-transitory computer readable storage media… associated with processor circuitry located in one or more hardware devices, but the entire program and/or parts thereof could alternatively be executed by one or more hardware devices other than the processor circuitry and/or embodied in firmware or dedicated hardware), causing the computer to perform:

based on first captured image data indicating a first captured image of a first inspection target (¶ 0025: As shown in FIG. 1, the system 100 may include a first image 102A, a second image 102B, and/or and third image 102C for categorization.), detecting K object regions corresponding to K (K is an integer larger than or equal to one) objects by using a trained object detection model (¶ 0023: For example, an R-CNN may be employed to determine regions of text likely to be a description of an associated product in a banner image.), the first inspection target including the K objects and having no abnormality in visual (¶ 0026: As shown by the system 100, an example categorization circuitry 110 may be used to perform the categorization of the images 102A-102C including respective portions of text 106A-106C.);

generating first correspondence data indicating K correspondences corresponding to respective ones of the K object regions (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908)), each of the K correspondences indicating a correspondence between object region information and condition information (¶ 0039: In particular, the example category identification circuitry 224 classifies the text into one or more categories.), the object region information being information specifying an object region in the first captured image of the first inspection target (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908); Examiner’s note: A bounding box is known to acquire and store the coordinate information of the bounding box.), the condition information indicating an inspection condition associated with a type of the object region, the inspection condition being among L (L is an integer larger than or equal to one and smaller than or equal to K) inspection conditions (¶ 0040: The trained model is used to perform inference of the categories related to each product description, and the inference output gives a vector with probabilities for each available category of interest.; ¶ 0054: The example text classification circuitry 114 applies the trained text classification model (block 916) for the purpose of classifying text into one or more categories (block 918).; Examiner’s note: Regarding the L inspection conditions, as this invention applies one inspection condition to each object, it is understood that the number of inspection conditions which are applied to the plurality of objects cannot be greater than the number of objects. Therefore, any art which teaches labelling a detected object with a single inspection condition would read on the limitation of “L is an integer larger than or equal to one and smaller than or equal to K”. For each object, there must be at least one inspection condition; however, multiple objects could be labelled with the same inspection condition, therefore L must be less than or equal to K, but greater than or equal to 1.);

and storing the first correspondence data in a memory (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908)…; ¶ 0039: In particular, the example category identification circuitry 224 classifies the text into one or more categories.).

Regarding claim 2, Arroyo teaches the non-transitory computer-readable storage medium according to claim 1. Additionally, Arroyo teaches wherein the generating the first correspondence data includes determining the type of the object region by analyzing an image of the object region based on a predetermined rule (¶ 0027: FIG. 2 illustrated the example text classification circuitry 114 as shown in FIG. 1. In some examples, the text classification circuitry 114 includes an example region detection model training circuitry 202 for detecting one or more regions of an image 102A-102C related to text or textual descriptions of each product… For example, in the examples disclosed herein, GT (Ground Truth) information is employed about generated bounding boxes and associated classes of information to teach one or more neural networks to localize and classify objects of interest. In particular, once bounding boxes are localized in a particular image, the R-CNN classifies the bounding boxes based on the GT information provided.).

Regarding claim 3, Arroyo teaches the non-transitory computer-readable storage medium according to claim 1.
Additionally, Arroyo teaches wherein the generating the first correspondence data includes determining the type of the object region by using the trained object detection model or a classification model trained to classify types of the object regions (¶ 0023: For example, an R-CNN may be employed to determine regions of text likely to be a description of an associated product in a banner image.; ¶ 0034: In some examples, the region detection model training circuitry 202 employs R-CNN architectures with ground truth information corresponding to bounding boxes and associated classes so that the network is trained to localize and classify one or more objects of interest.). Regarding claim 9, Arroyo teaches the non-transitory computer-readable storage medium according to claim 1. Additionally, Arroyo teaches wherein the set of program instructions, when executed by the controller, causing the computer to perform: based on second captured image data indicating a second captured image of a second inspection target (¶ 0025: As shown in FIG. 1, the system 100 may include a first image 102A, a second image 102B, and/or and third image 102C for categorization. In some examples, the images 102A, 102B, and/or 102C are banner images, whereas in other examples the images 102A, 102B, and/or 102C are a different type of image. 
In some examples, each image 102A-102C includes one or more product representations ( e.g., an image, a graphic depicting the product, etc.), respectively.; Examiner’s note: Arroyo teaches a process of inputting multiple images to perform the text detection and classification, therefore the limitations of claim 9 are the same as the independent claim, only described on subsequent images as disclosed in Arroyo ¶ 0025), detecting T object regions corresponding to T (T is an integer larger than or equal to one) objects by using the trained object detection model (¶ 0023: For example, an R-CNN may be employed to determine regions of text likely to be a description of an associated product in a banner image.), the second inspection target including the T objects and having no abnormality in visual (¶ 0026: As shown by the system 100, an example categorization circuitry 110 may be used to perform the categorization of the images 102A-102C including respective portions of text 106A-106C.); generating second correspondence data indicating T correspondences corresponding to respective ones of the T object regions (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908)), each of the T correspondences indicating a correspondence between object region information and condition information (¶ 0039: In particular, the example category identification circuitry 224 classifies the text into one or more categories.), the object region information being information specifying an object region in the second captured image of the second inspection target (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908); Examiner’s note: A bounding box is known to acquire and store the coordinate information of the bounding box.), the condition information indicating an inspection condition associated with a type of the object region, the inspection 
condition being among U (U is an integer larger than or equal to one and smaller than or equal to T) inspection conditions ((¶ 0040: The trained model is used to perform inference of the categories related to each product description, and the inference output gives a vector with probabilities for each available category of interest.; ¶ 0054: The example text classification circuitry 114 applies the trained text classification model (block 916) for the purpose of classifying text into one or more categories (block 918).)); and storing the second correspondence data in the memory (¶ 0054: The example region detection model training circuitry 202 generates bounding boxes around detected regions (block 908)…; ¶ 0039: In particular, the example category identification circuitry 224 classifies the text into one or more categories.). Regarding claim 11, claim 11 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above as well as in accordance with Arroyo’s further teaching on: A controller (¶ 0046: The program(s) may be embodied in software stored on one or more non-transitory computer readable storage media… associated with processor circuitry located in one or more hardware devices, but the entire program and/or parts thereof could alternatively be executed by one or more hardware devices other than the processor circuitry and/or embodied in firmware or dedicated hardware); and A memory storing instructions, the instructions, when executed by a controller, causing the generation apparatus to perform (¶ 0046: The program(s) may be embodied in software stored on one or more non-transitory computer readable storage media… associated with processor circuitry located in one or more hardware devices, but the entire program and/or parts thereof could alternatively be executed by one or more hardware devices other than the processor circuitry and/or embodied in firmware or dedicated hardware): Regarding claim 12, claim 12 has 
been analyzed with regard to respective claim 2 and is rejected for the same reasons of obviousness as used above. Regarding claim 13, claim 13 has been analyzed with regard to respective claim 3 and is rejected for the same reasons of obviousness as used above. Regarding claim 19, claim 19 has been analyzed with regard to respective claim 9 and is rejected for the same reasons of obviousness as used above. Regarding claim 20, claim 20 has been analyzed with regard to respective claim 1 and is rejected for the same reasons of obviousness as used above. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Arroyo et al. (U.S. Patent Publication No. 2022/0114821 A1, hereinafter “Arroyo”) in view of Okazaki et al. (U.S. Patent Publication No. 2024/0161271 A1, hereinafter “Okazaki”). Regarding claim 4, Arroyo teaches the non-transitory computer-readable storage medium according to claim 1. 
Arroyo does not explicitly teach wherein the integer L is larger than or equal to two; wherein each of the L inspection conditions is a condition for determining that a difference indicates that visual of an object represented by a target object image is normal, the difference being a difference between the target object image and a reference object image, the target object image being an image of a region indicated by the object region information in a captured image for inspection, the reference object image being an image of an object that is preliminary associated with the object region information and that has no abnormality; and wherein the L inspection conditions includes a plurality of inspection conditions indicating different criteria for determining that the difference indicates that the visual is normal. However, Okazaki teaches wherein the integer L is larger than or equal to two (¶ 0068: The abnormality score map is a diagram in which the score of the abnormality degree corresponding to the magnitude of difference between the image 210 and the reconstructed image 230 is indicated by, for example, color, brightness, density, or the like in units of pixels.); wherein each of the L inspection conditions is a condition for determining that a difference indicates that visual of an object represented by a target object image is normal, the difference being a difference between the target object image and a reference object image (¶ 0068: The calculation section 113 may calculate, as an abnormality score map indicating the abnormality degree, the difference in the inspection target region 212 between the image 210 and the reconstructed image 230.), the target object image being an image of a region indicated by the object region information in a captured image for inspection, the reference object image being an image of an object that is preliminary associated with the object region information and that has no abnormality (Figure 7; ¶ 0054: The identification 
section 112 can identify the inspection target region 212 in the image 210 as follows. The identification section 112 uses a template image 240 of the inspection target region 212 of the input normal product. Then, the identification section 112 can identify the inspection target region 212 by template matching between the template image 240 and the image 210. The template image 240 constitutes a predetermined reference image.); and wherein the L inspection conditions includes a plurality of inspection conditions indicating different criteria for determining that the difference indicates that the visual is normal (¶ 0068: The calculation section 113 may calculate, as an abnormality score map indicating the abnormality degree, the difference in the inspection target region 212 between the image 210 and the reconstructed image 230. The abnormality score map is a diagram in which the score of the abnormality degree corresponding to the magnitude of the difference between the image 210 and the reconstructed image 230 is indicated by, for example, color, brightness, density, or the like in units of pixels. In the abnormality score map, a portion where the abnormality degree of the target 220 is high can be emphasized. The score of the abnormality degree may be the magnitude itself of the difference between the image 210 and the reconstructed image 230 (e.g., an absolute value difference between pixel values).). Arroyo and Okazaki are considered to be analogous art as both pertain to image region detection and processing. Therefore, it would have been obvious to one of ordinary skill in the art to combine the method of categorizing image text (as taught by Arroyo) and the information processing apparatus (as taught by Okazaki) before the effective filing date of the claimed invention. 
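As context for the cited passages, the abnormality-score computation Okazaki describes in ¶ 0068 — a per-pixel absolute difference between the captured image and a reconstructed image, evaluated within the template-matched inspection region — can be sketched as follows. This is an illustrative sketch only, not language from either reference: the function names, the `(y0, y1, x0, x1)` region format, and the thresholding step are assumptions added for clarity.

```python
import numpy as np

def abnormality_score_map(image, reconstructed, region=None):
    """Per-pixel abnormality score: the absolute difference between the
    captured image and its reconstruction (cf. Okazaki ¶ 0068).
    `region`, if given, is an assumed (y0, y1, x0, x1) bounding box,
    e.g. one produced by template matching (cf. Okazaki ¶ 0054)."""
    diff = np.abs(image.astype(np.float32) - reconstructed.astype(np.float32))
    if region is not None:
        y0, y1, x0, x1 = region
        diff = diff[y0:y1, x0:x1]  # compare only the detected region
    return diff

def is_normal(score_map, threshold):
    """One possible inspection condition: the visual is 'normal' when no
    pixel's abnormality score exceeds this criterion's threshold."""
    return float(score_map.max()) <= threshold
```

Under this reading, each of the claimed L inspection conditions could correspond to a different criterion (e.g., a different `threshold` or score statistic) applied to the same score map, which is how the examiner appears to map ¶ 0068 to the "plurality of inspection conditions indicating different criteria" limitation.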
The motivation for this combination of references would be that the method of Okazaki compares differences only for the image region detected within the images; thus, erroneous detection of abnormality is reduced even if areas of the image outside the detected region are included in the image or the appearance of the object varies because of the nature of the object. (See ¶ 0082). This motivation for the combination of Arroyo and Okazaki is supported by KSR exemplary rationale (G): “Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention.” MPEP 2141(III). Allowable Subject Matter Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES whose telephone number is (703)756-4573. The examiner can normally be reached Monday - Friday 8:00-5:00 EST, off every other Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW B. JONES/Examiner, Art Unit 2667 /MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Jan 30, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599285
ANALYSIS OF IN-VIVO IMAGES USING CONNECTED GRAPH COMPONENTS
2y 5m to grant Granted Apr 14, 2026
Patent 12587607
CORRECTION OF COLOR TINTED PIXELS CAPTURED IN LOW-LIGHT CONDITIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12586201
ORAL IMAGE PROCESSING DEVICE AND ORAL IMAGE PROCESSING METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12586213
METHOD AND SYSTEM FOR SUPPORTING MOVEMENT OF MOBILE OBJECT
2y 5m to grant Granted Mar 24, 2026
Patent 12573222
DETECTING RELIABILITY USING AUGMENTED REALITY
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
90%
With Interview (+18.9%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
