Prosecution Insights
Last updated: April 19, 2026
Application No. 18/169,830

ADDITIONAL TRAINING APPARATUS, ADDITIONAL TRAINING METHOD, AND STORAGE MEDIUM

Status: Non-Final OA (§101, §103, §112)
Filed: Feb 15, 2023
Examiner: MISIR, DAYWAYSHWAR D
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kabushiki Kaisha Toshiba
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (451 granted / 538 resolved; +28.8% vs Tech Center average)
Interview Lift: +47.8% (resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 11
Total Applications: 549 (across all art units)
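The career allow rate above is simple arithmetic over the examiner's resolved cases; a quick check of the displayed figures (counts taken from this page):

```python
# 451 granted of 538 resolved cases, as reported above.
granted, resolved = 451, 538
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints "83.8%", shown rounded to 84% in the summary
```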

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 32.5% (-7.5% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 538 resolved cases.
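The "vs TC avg" deltas are internally consistent: assuming each delta is (examiner rate - Tech Center average), an assumption about how the page computes them, every statute implies the same baseline:

```python
# Statute: (examiner rate %, delta vs TC average %) as shown above.
stats = {"101": (22.1, -17.9), "103": (32.5, -7.5),
         "102": (11.8, -28.2), "112": (22.5, -17.5)}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # each statute implies a TC average of 40.0%
```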

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1, line 7, recites the limitation "the respective pieces of existing training data," which lacks antecedent basis (this should probably be: "respective pieces of the existing training data"). The same rejection is made for independent Claims 9 and 10. Dependent claims are rejected accordingly.

Additionally, Claim 3, line 4, recites the limitation "labels with higher accuracy," a relative term that renders the claim indefinite. The term "higher accuracy" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: All claims are directed to a method, an apparatus, or a non-transitory computer-readable storage medium, and thus satisfy Step 1 as falling into one of the statutory categories.

Step 2A, Prong One: Independent Claim 1 recites (the same analysis applies to similar independent Claims 9 and 10): "extract, based on the cluster data, a plurality of pieces of first existing training data from the pieces of existing training data in accordance with a size of each of the clusters." This limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of extracting data based on a size of cluster data using observation, evaluation, and judgment.

Step 2A, Prong Two: Claim 1 recites the additional elements of (the same analysis applies to similar independent Claims 9 and 10): "store, in a memory, a plurality of pieces of existing training data in which existing data is input data and a classification of defects according to the existing data is output data, and cluster data representing clusters to which the respective pieces of existing training data belong; acquire a plurality of pieces of new training data in which new data is input data and a classification of defects according to the new data is output data; and store in the memory a plurality of pieces of additional training data that are based on the pieces of first existing training data and the pieces of new training data."
These limitations are considered as adding insignificant extra-solution activity (acquiring and storing data) to the judicial exception - see MPEP 2106.05(g). The further additional elements of "processing circuitry" and/or "processor" as recited in these independent claims are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.

Step 2B: The claims do not include additional elements sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding insignificant extra-solution activity (acquiring and storing data) to the judicial exception and are therefore considered as appending well-understood, routine, conventional activities, previously known to the industry and specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d). The further additional elements of "processing circuitry" and/or "processor" as recited in these independent claims amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.
Dependent Claim 2 is also considered as adding insignificant extra-solution activity (storing data) to the judicial exception - see MPEP 2106.05(g) - under Step 2A, Prong Two, as well as well-understood, routine, conventional activity of storing information under Step 2B; and as using the "training model" as a tool to perform the abstract idea, which also includes its training - see MPEP 2106.05(f).

Dependent Claim 3, first limitation, is also considered as directed to the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of labeling data with any degree of accuracy using observation, evaluation, and judgment. The second limitation is considered as adding insignificant extra-solution activity (storing data) to the judicial exception - see MPEP 2106.05(g) - under Step 2A, Prong Two, as well as well-understood, routine, conventional activity of storing information under Step 2B.

Dependent Claim 4, first and second limitations, are also considered as directed to the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of clustering and extracting data with any degree of accuracy using observation, evaluation, and judgment. The third limitation is considered as adding insignificant extra-solution activity (storing data) to the judicial exception - see MPEP 2106.05(g) - under Step 2A, Prong Two, as well as well-understood, routine, conventional activity of storing information under Step 2B.

Dependent Claims 5-8 are also considered as directed to the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of selecting and extracting data using observation, evaluation, and judgment.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchiya, US 2021/0312235 A1, in view of Chen, US 11514321 B1.

Regarding Claim 1, Tsuchiya teaches: An additional training apparatus comprising processing circuitry configured to: store, in a memory, a plurality of pieces of existing training data in which existing data is input data and a classification of defects according to the existing data is output data (paragraph 52: "the training data 140 stored in the storage 130 may be used for the supervised training"; and paragraph 30: "The image determination system 1 according to the present embodiment analyzes an input image generated by capturing an image of an examination target using a training model included in an image determination device 100 and performs examination of a defect and classification of the target. The training model may be trained in advance to classify the presence or absence of a defect with respect to an examination target and the type of examination target by using training data"), and cluster data representing clusters to which the respective pieces of existing training data belong (paragraph 15: "the division part may divide the training data into the plurality of pieces of sub-training data by clustering the training data according to a predetermined standard and extracting one or a plurality of pieces of representative data from a plurality of clusters"); acquire a plurality of pieces of new training data in which new data is input data and a classification of defects according to the new data is output data (paragraphs 66, 68: "In a case where a change in the accuracy of determination satisfies a predetermined condition (S15: YES), the image determination device 100 executes the division of training data (S12), the training of a training model using sub-training data and the measurement of the accuracy of determination (S13), and the selection of sub-training data (S14) again using the selected sub-training data 142 as new training data"; and: "Thereafter, an image to be examined is captured by the camera 102 (S17). Then, the image determination device 100 inputs a newly captured image to one or a plurality of feature extractors and inputs output feature data to a determiner generated by additional training to determine the image using output data indicating a determination result related to the image"); and store in the memory a plurality of pieces of additional training data that are based on the pieces of first existing training data and the pieces of new training data (paragraph 45: "The storage 130 may further store a training model 136, an input image 138 acquired from the camera 102, and training data 140 which is used to train the training model 136. Meanwhile, the training data 140 may be acquired from external equipment such as the database device 12 through the high-order network 8 or may be temporarily stored in the storage." The training data obtained from the external equipment can be the new training data).

Tsuchiya may not have explicitly taught the following; however, Chen shows: extract, based on the cluster data, a plurality of pieces of first existing training data from the pieces of existing training data in accordance with a size of each of the clusters (C3, L17-22: "a subset of the previously-generated clusters which meet an expected intra-cluster divergence criterion (e.g., clusters which are smaller than a threshold size, and thus may be less likely to contain mismatched entity records) may be selected as candidates for generating attribute-level training data sets").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Chen with those of Tsuchiya for extracting, based on the cluster data, a plurality of pieces of first existing training data from the pieces of existing training data in accordance with a size of each of the clusters. The ordinary artisan would have been motivated to modify Tsuchiya in the manner set forth above for the purpose of learning from examples in clusters in which, even though there may be differences in attribute values for critical attributes among entity record pairs, the underlying real-world entities represented by the entity records are the same [Chen: C3, L26-30].
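The extraction step at the center of the §101 and §103 analysis, selecting pieces of existing training data from each cluster in accordance with cluster size, can be illustrated with a short sketch. The function name and the proportional-sampling rule are assumptions for illustration only; the claims do not specify a particular sampling scheme:

```python
import random
from collections import defaultdict

def extract_by_cluster_size(items, cluster_ids, fraction=0.2, seed=0):
    """Group training examples by cluster, then sample from each cluster a
    number of examples proportional to that cluster's size (at least one)."""
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for item, cid in zip(items, cluster_ids):
        clusters[cid].append(item)
    extracted = []
    for members in clusters.values():
        k = max(1, round(len(members) * fraction))
        extracted.extend(rng.sample(members, k))
    return extracted
```

With 10 items in one cluster and 5 in another at fraction=0.2, the sketch draws 2 and 1 examples respectively, so larger clusters contribute more of the "first existing training data."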
Regarding Claim 2, Tsuchiya further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to: store in the memory, as a plurality of pieces of advance training data, a plurality of pieces of second existing training data that are different from the pieces of first existing training data, from among the pieces of existing training data (paragraph 7: “a division part which divides the training data into a plurality of pieces of sub-training data”. And, paragraph 11: “the division part may divide the training data into the plurality of pieces of sub-training data that do not overlap each other”. The non-overlapping training data being representative of training data that are different); generate an advance training model by applying training to a training model, based on the pieces of advance training data (paragraph 5: “a training model may be generated through supervised training to output a correct determination result in a case where an image is input to a training model using training data including the image and the correct determination result”); and apply additional training to the advance training model, based on the pieces of additional training data (paragraph 8: “it is possible to perform additional training of a training model using a smaller number of pieces of sub-training data than the number of pieces of training data”). 
Regarding Claim 3, Tsuchiya further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to: label the pieces of first existing training data by labels with higher accuracy (paragraph 7: "a training part which causes the training model to train to output the output data indicating label data associated with a training image in a case where the training image is input to the training model using training data including the training image and the label data"); and store in the memory the pieces of additional training data that are based on the pieces of first existing training data that are labeled, and the pieces of new training data (paragraph 52: "the training data 140 stored in the storage 130 may be used for the supervised training").

Regarding Claim 4, Tsuchiya further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to: cluster the pieces of new training data that are acquired, and compute cluster data representing clusters to which the respective pieces of new training data belong (paragraph 15: "the division part may divide the training data into the plurality of pieces of sub-training data by clustering the training data according to a predetermined standard and extracting one or a plurality of pieces of representative data from a plurality of clusters"); and store in the memory the pieces of additional training data by using the extracted pieces of new data for additional training as the pieces of new training data (paragraph 52: "the training data 140 stored in the storage 130 may be used for the supervised training").
And Chen further teaches: extract, based on the cluster data, a plurality of pieces of new data for additional training from the pieces of new training data, while maintaining features of the pieces of new training data (C3, L17-30: "a subset of the previously-generated clusters which meet an expected intra-cluster divergence criterion (e.g., clusters which are smaller than a threshold size, and thus may be less likely to contain mismatched entity records) may be selected as candidates for generating attribute-level training data sets"; and: "Such clusters, which may be termed training-data-set (TDS) clusters, may be chosen because one of the objectives of the intra-cluster analysis is to learn from examples in which, even though there may be differences in attribute values for critical attributes among entity record pairs, the underlying real-world entities represented by the entity records are the same." The entity records remaining the same is indicative of maintaining features of the training data).

Regarding Claim 5, Tsuchiya further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to randomly select and extract the pieces of first existing training data (paragraph 56: "the division part 152 may divide the training data 140 into a plurality of pieces of sub-training data 142 by repeating processing for clustering the training data 140 according to features of a training image included in the training data 140 and randomly extracting one piece of data from a plurality of clusters").

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuchiya, US 2021/0312235 A1, in view of Chen, US 11514321 B1, and further in view of Nakano, US 2021/0365813 A1.
Regarding Claim 6, with Tsuchiya and Chen teaching those limitations of the claim as previously pointed out, neither Tsuchiya nor Chen may have taught all of the following; however, Nakano shows: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to select and extract the pieces of first existing training data in accordance with a distance between the pieces of existing training data (paragraphs 70, 83: "In the determination of whether or not the distributions are equivalent to each other, if the difference, ratio, or distance between the predetermined statistical indices of the distributions of the training data and the retraining data is equal to or smaller than a certain value, both distributions are considered to be equivalent to each other").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Nakano with those of Tsuchiya and Chen for extracting the pieces of first existing training data in accordance with a distance between the pieces of existing training data. The ordinary artisan would have been motivated to modify Tsuchiya and Chen in the manner set forth above for the purpose of having a feature representing a relationship between predetermined statistical indices of the training data and the retraining data [Nakano: paragraph 83].
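The distance criterion the examiner reads onto Claim 6 from Nakano's paragraph 83 compares statistical indices of the training data and the retraining data against a threshold. A toy sketch, using the mean as the statistical index (the function name and the choice of index are illustrative assumptions, not taken from either reference):

```python
def statistically_close(training, retraining, threshold):
    """Treat two data sets as equivalent when the distance between their
    statistical indices (here, the mean) is at or below a threshold."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(training) - mean(retraining)) <= threshold
```

Under this reading, data whose index distance stays within the threshold would be retained as "first existing training data," and data outside it would not.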
Regarding Claim 7, Nakano further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to select and extract the pieces of first existing training data in accordance with a distribution of each of the pieces of existing training data (paragraphs 70, 83: "In the determination of whether or not the distributions are equivalent to each other, if the difference, ratio, or distance between the predetermined statistical indices of the distributions of the training data and the retraining data is equal to or smaller than a certain value, both distributions are considered to be equivalent to each other").

Regarding Claim 8, Nakano further teaches: The additional training apparatus of Claim 1, wherein the processing circuitry is further configured to select and extract the pieces of first existing training data in accordance with a label ratio of the pieces of existing training data in the clusters (paragraphs 70, 83: same passage as cited for Claim 7).

Claims 9 and 10 are similar to Claim 1 and are rejected under the same rationale as stated above for that claim.

Examiner's Note: The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages and the additional related prior art made of record, which is considered pertinent to applicant's disclosure and further shows the general state of the art. The Examiner's interpretations in parentheses are provided with the cited references to help the applicant better understand how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 for the relevant prior art, where, for example, Yokoyama, US 2022/0405605 A1, teaches a selection unit that selects data to be added as training data from among pieces of training candidate data on the basis of distance.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached M-Th 8-5, with some hours on Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at 571-270-3169. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVE MISIR/
Primary Examiner, Art Unit 2127

Prosecution Timeline

Feb 15, 2023
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602619
MACHINE LEARNING SYSTEM AND MACHINE LEARNING METHOD
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12585991
DIGITAL RIGHTS MANAGEMENT OF MACHINE LEARNING MODELS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579475
ARTIFICIAL INTELLIGENCE MODEL GENERATED USING AGENTIC WORKFLOW SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE MODEL ALIGNED WITH DOMAIN-SPECIFIC PRINCIPLES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572802
METHODS AND DEVICES IN PERFORMING A VISION TESTING PROCEDURE ON A PERSON
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12562242
DATA DRIVEN FEATURIZATION AND MODELING
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+47.8%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
