Prosecution Insights
Last updated: April 19, 2026
Application No. 17/897,394

MANAGEMENT DEVICE, MANAGEMENT METHOD, AND STORAGE MEDIUM

Non-Final OA (§101, §103)
Filed
Aug 29, 2022
Examiner
PENG, HUAWEN A
Art Unit
2169
Tech Center
2100 — Computer Architecture & Software
Assignee
Kabushiki Kaisha Toshiba
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (586 granted / 712 resolved; +27.3% vs TC avg)
Interview Lift: +20.1% (strong), measured on resolved cases with interview
Typical Timeline: 3y 3m avg prosecution; 14 currently pending
Career History: 726 total applications across all art units

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 712 resolved cases.

Office Action

DETAILED ACTION

Claims 1-12 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

3. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. The Foreign Priority Documents have been electronically retrieved by USPTO on 10/7/2022.

Claim Interpretation

4. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

5. The claims (claims 1-10) in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

6. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a data processor configured to …; a data manager configured to …; and an evaluator configured to … in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

7. 35 U.S.C. 
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

8. Claims 1, 11 and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite “performing preprocessing operation of creating a training dataset; saving the created training dataset; evaluating a model created using the created training dataset and determining whether or not to permanently save the created training dataset on the basis of an evaluation result of the model”, which is an abstract idea under mental process. This judicial exception is not integrated into a practical application because the additional computer elements, which are recited as a processor and memory, do not add meaningful limitations to the abstract idea, and they simply implement the abstract idea on a computing device. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because creating a training dataset and determining whether to permanently save the created training dataset based on an evaluation result of a model are general computer functions which are well-understood, routine, and conventional activities.

Claims 2-10 are dependent on claim 1 and include all the limitations of claim 1. Therefore, claims 2-10 recite the same abstract idea. The additional limitations recited in claims 2-10, for example determining a difference between the training datasets, do not amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

12. Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Trim et al. (US 2021/0064929), hereinafter Trim.

In claim 1, Trim discloses “A management device comprising: a data processor configured to perform at least one preprocessing operation of creating a training dataset ([0021] The data pre-processor 130 is a component of the data evaluation system 100 configured to preprocess the new dataset 110. 
The data pre-processor 130 is further configured to perform data analytics on the new dataset 110 and the existing dataset 120 to determine the variances between the datasets [0022] The machine learning model 140 is a component of the data evaluation system 100 configured to learn from training data and assign labels to unlabeled datasets once trained [0029] An output value of the new dataset 110 is determined by the data pre-processor 130 (“output value of the new dataset” equivalent to “creating a training dataset”));

a data manager configured to perform a process of saving the created training dataset ([0021] The data pre-processor 130 can also transform raw data into a usable format. Transformation can include cleaning raw data of missing values, smoothing noisy data, and resolving inconsistencies. The transformation can also include data integration, transformation, reduction, and discretization. The data pre-processor 130 can also perform a sentiment analysis and a toxicity analysis on the data [0022] The machine learning model 140 is a component of the data evaluation system 100 configured to learn from training data and assign labels to unlabeled datasets once trained. The machine learning model 140 is further configured to adjust parameters and weights of features during the training cycle);

and an evaluator configured to evaluate a model created using the created training dataset ([0030] The output value of the new dataset 110 is compared to the baseline of variation of the existing dataset 110. This is illustrated at step 240. The distribution of the output value compared to the baseline of variation can be measured to determine whether the output value is within an acceptable range of the baseline of variation. In some embodiments, the data pre-processor 130 compares the baseline of variation by calculating the standard deviation of the existing dataset 120 [0040] a process 400 of evaluating a model for unwanted behavior, the model tester 150 applies a probability density function to the machine learning model 140. The probability density function can be configured to provide the probability of a given behavior of the machine learning model 140 (“evaluating a model/the model tester” equivalent to “evaluation model”)),

wherein the data manager is configured to: temporarily save the created training dataset; and determine whether or not to permanently save the created training dataset on the basis of an evaluation result of the model by the evaluator ([0031] a determination is made as to whether the variance between the output value and the baseline of variation is within an acceptable range. For example, if the output value is of significant difference, or two standard deviations away from the mean, then the new dataset 110 is rejected and not integrated into the existing dataset 120. A variance within the average, or less than one standard deviation, can be accepted by the data evaluation system 100 and integrated into the existing dataset 120)”.

Trim does not appear to explicitly disclose “temporarily/permanently save the created training dataset”; however, it is reasonable for one of ordinary skill in the art to interpret “the new dataset is rejected and not integrated into the existing dataset” as “temporarily save the created training dataset” and “accepted by the data evaluation system and integrated into the existing dataset” as “permanently save the created training dataset”. 
In claim 2, Trim teaches The management device according to claim 1, wherein the data manager is configured to determine whether or not there is a difference between a training dataset newly created in a preprocessing operation by the data processor and the saved preprocessed training dataset ([0031] if the output value is of significant difference, or two standard deviations away from the mean, then the new dataset 110 is rejected and not integrated into the existing dataset 120, a variance of moderate difference, or one standard deviation away from the mean, is further evaluated to determine whether the data will lead to a heightened significant difference or not. The determination as to how to handle variances of moderate difference can be changed by policy as an administrator sees fit).

In claim 3, Trim teaches The management device according to claim 1, wherein the at least one preprocessing operation includes first preprocessing and second preprocessing, the data processor is configured to create a second preprocessed dataset by performing the second preprocessing on the temporarily saved first preprocessed training dataset, and the data manager is configured to temporarily save the second preprocessed dataset ([0034] Raw datasets can be processed, or preprocessed, in several different ways. For example, multiple characteristics can be applied to text data that can transform that data into a structured format. A few of these characteristics can be, word or phrase count, special character count, relative length of text, type of topics, and character count. However, other forms of processes occur, such as correcting inconsistent data, replacing missing data, and removing noisy data [0035] The raw dataset can also go through a series of preprocessing procedures such as data cleaning, integration, transformation, reduction, and discretization. 
In data cleaning, the raw dataset is adjusted through processes such as filling in missing values, smoothing noisy data, and resolving inconsistencies within the data. Integration can involve resolving conflicts between data points with different representations. Transformation can involve a process of normalizing, aggregating, and generalizing the raw dataset. Reduction can reduce the amount of data to consolidate the representation of the raw dataset. Discretization can involve reducing number values of continuous attributes. Once processed, the raw dataset is transformed in a processed dataset ready for annotation).

In claim 4, Trim teaches The management device according to claim 3, wherein the data manager is configured to determine whether or not there is a difference between the first preprocessed dataset and the second preprocessed dataset ([0035] In data cleaning, the raw dataset is adjusted through processes such as filling in missing values, smoothing noisy data, and resolving inconsistencies within the data. Integration can involve resolving conflicts between data points with different representations).

In claim 5, Trim teaches The management device according to claim 4, wherein the data manager is configured to temporarily save the second preprocessed dataset in a case where it is determined that there is a difference between the first preprocessed dataset and the second preprocessed dataset ([0035] In data cleaning, the raw dataset is adjusted through processes such as filling in missing values, smoothing noisy data, and resolving inconsistencies within the data. Integration can involve resolving conflicts between data points with different representations. Once processed, the raw dataset is transformed in a processed dataset ready for annotation). 
In claim 6, Trim teaches The management device according to claim 1, wherein the data manager is configured to determine to permanently save the created training dataset in a case where the evaluation result of the model is acceptable ([0031] a determination is made as to whether the variance between the output value and the baseline of variation is within an acceptable range. For example, a variance within the average, or less than one standard deviation, can be accepted by the data evaluation system 100 and integrated into the existing dataset 120).

In claim 7, Trim teaches The management device according to claim 6, wherein the data manager is configured to determine not to permanently save the created training dataset in a case where the evaluation result of the model is unacceptable and discard the temporarily saved training dataset ([0031] a determination is made as to whether the variance between the output value and the baseline of variation is within an acceptable range. For example, if the output value is of significant difference, or two standard deviations away from the mean, then the new dataset 110 is rejected and not integrated into the existing dataset 120).

In claim 8, Trim teaches The management device according to claim 1, wherein the data manager is configured to create a branch for performing version management and temporarily save the created training dataset in the branch ([0036] A distributed ledger is instantiated to require a consensus on annotations applied to the processed dataset. This is illustrated at step 330. A distributed ledger is a database that can exist across several locations or among multiple participants. Records, or annotations, can only be accepted when there is a consensus among participants performing the annotation process. Once a consensus has been determined, all participants are updated with the same updated ledger [0037] Participants are invited to annotate the processed dataset through the distributed ledger. 
This is illustrated at step 340. Each participant can perform the annotation process of labeling the processed dataset. Once a participant has completed their annotations for the processed dataset, they can then submit those annotations to the distributed ledger).

In claim 9, Trim teaches The management device according to claim 1, wherein the data manager is configured to temporarily save the created training dataset every time each of a plurality of preprocessing operations is completed ([0037] Participants are invited to annotate the processed dataset through the distributed ledger. This is illustrated at step 340. Each participant can perform the annotation process of labeling the processed dataset. Once a participant has completed their annotations for the processed dataset, they can then submit those annotations to the distributed ledger).

In claim 10, Trim teaches The management device according to claim 1, wherein the data manager is configured to save metadata related to a training process of the model ([0042] The machine learning model 140 is trained with the new dataset 110. This is illustrated at step 420. In order to determine whether the machine learning model 140 behaves differently with the new dataset 110, it must first be trained with the new data. Training can occur through supervised learning techniques. The machine learning model 140 may use a variety of algorithms during the training cycle. For example, support vector machines, linear regression, logistic regression, decision trees, as well as various other algorithms may be used).

Claim 11 is essentially the same as claim 1 except that it recites the claimed invention as a method and is rejected for the same reasons as applied hereinabove. Claim 12 is essentially the same as claim 1 except that it recites the claimed invention as a computer-readable non-transitory storage medium and is rejected for the same reasons as applied hereinabove.

Conclusion

13. 
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the PTO-892 form.

Examiner’s Note: Examiner has cited particular figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. The applicant is respectfully requested, in preparing the responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUAWEN A PENG whose telephone number is (571)270-5215. The examiner can normally be reached Mon thru Fri 9 am to 5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUAWEN A PENG/
Primary Examiner, Art Unit 2169
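The dataset-acceptance rule the examiner maps from Trim [0031] (reject at two or more standard deviations from the mean, accept within one, further evaluate in between) can be sketched in a few lines. This is a minimal illustration of the cited logic, not code from Trim; the function name and sample data are hypothetical:

```python
import statistics

def evaluate_new_dataset(existing_outputs, new_output):
    """Variance check sketched from Trim [0031]: compare a new dataset's
    output value against the baseline of variation of the existing dataset."""
    mean = statistics.mean(existing_outputs)
    sd = statistics.stdev(existing_outputs)
    deviation = abs(new_output - mean)
    if deviation >= 2 * sd:
        return "reject"            # significant difference: not integrated
    if deviation < sd:
        return "accept"            # within the average: integrated
    return "further-evaluate"      # moderate difference: handled per policy

# Hypothetical baseline of output values from the existing dataset
baseline = [10, 11, 9, 10, 12, 8, 10, 11]
```

Note that the "further-evaluate" branch mirrors Trim's statement that moderate differences are handled "by policy as an administrator sees fit", which is a useful distinction when arguing that the claimed binary temporary/permanent save decision is not the same mechanism.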

Prosecution Timeline

Aug 29, 2022
Application Filed
Nov 13, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602367: DATA INTEGRITY CHECKS (2y 5m to grant; granted Apr 14, 2026)
Patent 12602625: SYSTEMS AND METHODS FOR CREATING A RICH SOCIAL MEDIA PROFILE (2y 5m to grant; granted Apr 14, 2026)
Patent 12598135: TECHNIQUES TO BALANCE LOG STRUCTURED MERGE TREES (2y 5m to grant; granted Apr 07, 2026)
Patent 12579160: SYSTEMS, METHODS, AND APPARATUSES FOR GENERATING, EXTRACTING, CLASSIFYING, AND FORMATTING OBJECT METADATA USING NATURAL LANGUAGE PROCESSING IN AN ELECTRONIC NETWORK (2y 5m to grant; granted Mar 17, 2026)
Patent 12567274: GEOGRAPHIC MANAGEMENT OF DOCUMENT CONTENT (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+20.1%): 99%
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 712 resolved cases by this examiner. Grant probability derived from career allow rate.
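The footnote's derivation can be reproduced with simple arithmetic. The 99% ceiling is an assumption made to match the displayed figure, since the raw allow rate plus the interview lift would exceed 100%:

```python
granted, resolved = 586, 712
career_allow_rate = granted / resolved   # about 0.823, displayed as 82%
interview_lift = 0.201                   # +20.1% observed with interviews

# Assumed cap at 99%: 82.3% + 20.1% would otherwise exceed 100%
with_interview = min(career_allow_rate + interview_lift, 0.99)
```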
