Prosecution Insights
Last updated: April 17, 2026
Application No. 17/976,532

INFORMATION PROCESSING METHOD, STORAGE MEDIUM, AND INFORMATION PROCESSING APPARATUS

Non-Final OA (§103, §112)

Filed: Oct 28, 2022
Examiner: BALDWIN, RANDALL KERN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average; 185 granted / 232 resolved; +24.7% vs TC avg)
Interview Lift: +26.9% (strong), among resolved cases with interview
Typical Timeline: 3y 5m avg prosecution; 12 currently pending
Career History: 244 total applications across all art units

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§103: 43.2% (+3.2% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 26.6% (-13.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 232 resolved cases.
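As a quick sanity check, the headline 80% figure follows directly from the career counts reported above (illustrative arithmetic only, not output of the analytics tool):

```python
# Career allow rate from the reported counts: 185 granted of 232 resolved.
granted, resolved = 185, 232
allow_rate = granted / resolved   # ~0.797, shown above rounded to 80%
```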

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed 10/28/2022 and the response to the notice to file missing parts filed 12/29/2022. Claims 1-9 are pending and have been examined. Claims 1-9 are rejected.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The present application claims foreign priority based on Japanese Patent Application No. JP2021-176197, filed 10/28/2021. The examiner notes that a certified copy (in Japanese) of the above-noted application was retrieved on 2/1/2023. Although a certified copy of the foreign priority application was retrieved, a translation of said application has not yet been made of record in accordance with 37 CFR 1.55. See MPEP §§ 215 and 216. Applicant is reminded of the requirements set forth in 37 CFR 1.55(g)(3)-(4), Claim for foreign priority: “(3) An English language translation of a non-English language foreign application is not required except: (i) When the application is involved in an interference (see § 41.202 of this chapter) or derivation (see part 42 of this chapter) proceeding; (ii) When necessary to overcome the date of a reference relied upon by the examiner; or (iii) When specifically required by the examiner. (4) If an English language translation of a non-English language foreign application is required, it must be filed together with a statement that the translation of the certified copy is accurate” (emphasis added). Since an English language translation of Application No. JP2021-176197 has not been made of record to date, the Examiner notes that prior art references with a filing date or a publication date prior to the instant Application’s filing date of 10/28/2022 are considered applicable prior art references.
Information Disclosure Statement

Acknowledgment is made of the information disclosure statement filed 10/28/2022, which complies with 37 CFR 1.97. As such, the information disclosure statement has been placed in the application file and the information referred to therein has been considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. In particular, the title of the invention is “INFORMATION PROCESSING METHOD, STORAGE MEDIUM, AND INFORMATION PROCESSING APPARATUS”. However, this broad and generic title does not describe or reflect the subject matter that is recited in the claims. As such, the examiner believes that the title of the invention is imprecise. A descriptive title indicative of the invention will help in proper indexing, classifying, searching, etc. See MPEP § 606.01. The title of the invention should be limited to 500 characters. The examiner suggests including the aspect(s) of the claims which Applicant believes to be novel or nonobvious over the prior art.

Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure.
If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

The abstract of the disclosure is objected to because it appears to be unrelated to the claimed subject matter. In particular, the abstract recites “According to an embodiment, an arithmetic device configured to execute an operation related to a neural network approximately calculates similarities between a first vector and a plurality of second vectors. 
Further, the arithmetic device selects, among the plurality of second vectors, a plurality of third vectors whose similarities are equal to or greater than a threshold. Furthermore, the arithmetic device also calculates similarities between the first vector and the selected plurality of third vectors.” However, applicant’s disclosure (specification, original claims and drawings) does not mention or disclose any vectors, much less approximating or calculating any similarities between any vectors. Additionally, none of the pending claims recite any vector or vectors, let alone approximating or calculating any similarities between any vectors. Further, none of the pending claims recite any “arithmetic device”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. The claims are generally narrative and indefinite, failing to conform with current U.S. practice. They appear to be a literal translation into English from a foreign document and are replete with grammatical and idiomatic errors. As such, the claims are profoundly unclear, with grammatical errors and antecedent basis issues throughout. Thus, the issues that the examiner has identified herein should be seen as an illustrative list, not an exhaustive one. The examiner respectfully recommends that Applicant rewrite the entire claim set to clarify the subject matter and to conform to the rules of idiomatic English. 
Independent claims 1 and 7 both recite “inputting, to the learning model, each item of the expanded data generated by stepwise changing a weight of the coupled function to implement learning” (see, lines 10-12 of claims 1 and 7). Similarly, independent claim 4 recites “input, to the learning model, each item of the expanded data generated by stepwise changing a weight of the coupled function to implement learning” (see, lines 8-9). These recitations are grammatically incorrect and unclear. In particular, the recitations of “each item of the expanded data generated by stepwise changing a weight of the coupled function” are grammatically incorrect and appear to be missing one or more words (i.e., missing the article “a” between “by” and “stepwise” and missing the word “of” between “changing” and “a weight”). Also, the recitations of “inputting [input – claim 4], to the learning model, each item of the expanded data … to implement learning” are unclear because applicant previously introduced “a learning model that performs predetermined learning” (see, lines 4-5 of claims 1 and 7, and lines 3-4 of claim 4). As such, it is unclear if the subsequent recitations of “the learning model … to implement learning” refer to the previously-introduced “predetermined learning” performed by the same “learning model”, or to some other “learning” (i.e., a non-predetermined training or learning?).

For the purposes of determining patent eligibility and comparison with the prior art, the Examiner is interpreting “inputting, to the learning model, each item of the expanded data generated by stepwise changing a weight of the coupled function to implement learning” as inputting each item of the expanded data generated by iterative, step-wise, step-by-step changing of a weight or parameter of the coupled function into the learning model in order to implement any learning or training, including, but not limited to, the previously-introduced “predetermined learning”. 
Appropriate correction is required.

Claims 1, 4 and 7 recite, using respective similar language, “acquiring expanded data resulting from expansion of target data using an optional data expansion algorithm including a coupled function obtained by coupling together a plurality of data expandable functions by using weights” (see, lines 6-9 of claims 1 and 7 and lines 5-7 of claim 4). These recitations are unclear. In particular, it is unclear how the recited “coupled function” is “obtained by coupling together a plurality of data expandable functions by using weights”. That is, it is unclear how the recited “weights” are used to join, concatenate, merge, or otherwise couple “together a plurality of data expandable functions”. Applicant’s specification merely repeats the claim language in paragraphs 6, 43 and 81 and mentions examples wherein “mathematized functions are linearly coupled using weights” and “expansion unit 15 may also change parameters of the function of the data expansion algorithm” in paragraphs 15 and 41 without disclosing or defining how “a coupled function” is “obtained by coupling together a plurality of data expandable functions by using weights”.

For examination purposes, the examiner is interpreting “acquiring expanded data resulting from expansion of target data using an optional data expansion algorithm including a coupled function obtained by coupling together a plurality of data expandable functions by using weights” as obtaining or acquiring expanded data that resulted from expanding target data, wherein the expanding optionally uses a data expansion algorithm that includes a coupled, joined, or merged set of data expandable functions, wherein the coupling, joining, or merging is based at least in part on parameters or weights. Appropriate correction is required.

The examiner respectfully recommends that Applicant rewrite the entire claim set to clarify the subject matter and to conform to the rules of idiomatic English. 
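The examiner's two interpretations can be pictured as a weighted linear coupling of expansion functions whose weight is swept stepwise, with each expanded item then fed to the model. The following is a minimal, hypothetical sketch of that reading only; the functions, weight grid, and the accumulate-style "training" step are illustrative assumptions, not the applicant's disclosed method:

```python
# Hypothetical illustration of the examiner's interpretation: two
# data-expandable functions are linearly coupled using a weight w, the
# weight is changed stepwise, and the data expanded at each step is
# input to the learning model. All names and values are assumptions.

def scale(x):  return x * 1.5          # data-expandable function 1
def shift(x):  return x + 0.5          # data-expandable function 2

def coupled(x, w):
    """Coupled function: weighted linear combination of the two."""
    return w * scale(x) + (1.0 - w) * shift(x)

def train_step(state, item):
    """Stand-in for one learning update (here it just accumulates)."""
    return state + item

target_data = [1.0, 2.0, 3.0]
model_state = 0.0
for step in range(5):                  # stepwise change of the weight
    w = step / 4.0                     # w swept 0.00 -> 1.00
    expanded = [coupled(x, w) for x in target_data]
    for item in expanded:              # input each item of expanded data
        model_state = train_step(model_state, item)
```

This mirrors only the interpretation quoted above (a weighted combination plus an iterative weight sweep), not any particular embodiment of the application or of the cited references.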
Also, claims 2-3, 5-6 and 8-9, which each depend directly or indirectly from claims 1, 4 and 7, respectively, are rejected under 35 U.S.C. 112(b) as being indefinite under the same rationale as claims 1, 4 and 7.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Vasudevan et al. (U.S. Patent Application Pub. No. 2024/0242125 A1, hereinafter “Vasudevan”) in view of non-patent literature Yao et al. 
("Boosting for transfer learning with multiple sources." 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE, 2010, hereinafter “Yao”). Vasudevan is a continuation of U.S. Patent Application No. 17/061,103, filed on 10/1/2020, which is a continuation of U.S. Patent Application No. 16/417,133, filed on 5/20/2019, and both of these dates are before the earliest possible effective filing date of this application, i.e., 10/28/2021. Therefore, Vasudevan constitutes prior art under 35 U.S.C. 102(a)(2). Further, Vasudevan claims priority to U.S. Provisional Application No. 62/673,777, filed on 5/18/2018, which is also before the earliest possible effective filing date of this application, i.e., 10/28/2021.

With respect to claim 1, Vasudevan discloses the invention as claimed including an information processing method in an information processing device including a memory and one or a plurality of processors (see, e.g., Abstract, “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model.” and paragraphs 106-111, “Embodiments … can be implemented in … computer software or firmware, in computer hardware”, “The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory”, “Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. 
… a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.” [i.e., an information processing method in a device including a memory and a processor/central processing unit/CPU]), the method comprising: the memory storing therein a learning model that performs predetermined learning by using a neural network (see, e.g., Abstract, “computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model.” and paragraphs 14-15, “In some implementations, the machine learning model is a neural network … and adjusting the current values of the machine learning model parameters based on the augmented batch of training data includes … using the augmented batch of training data; and adjusting the current values of the machine learning model parameters”, “neural network is trained by reinforcement learning techniques” [i.e., storage medium/memory stores a model that performs predetermined learning/training using a neural network]); the one or plurality of processors acquiring expanded data resulting from expansion of target data using an optional data expansion algorithm (see, e.g., paragraphs 39, “A data augmentation policy can be used to increase the quantity … of the training inputs used in training the machine learning model”, 53, “A data augmentation policy is defined by a set of parameters (referred to in this document as "data augmentation policy parameters") that specify a procedure for transforming training inputs (i.e., included in training examples) before the training inputs are used to train the machine learning model.”, 96, “The training data includes multiple training examples, each of which specifies a training input and a corresponding target output.” and 99, “the system may adjust the target outputs in the current batch of training examples” [i.e., acquire augmented/expanded quantity of training data from 
augmentation/expansion of target training data using a data augmentation/expansion algorithm/policy with parameters]) including a coupled function obtained by coupling together a plurality of data expandable functions by using weights (see, e.g., paragraphs 69, “the system 100 may combine … data augmentation policies generated by the training system with the highest quality scores to generate the final data augmentation policy”, 90-91, “the data augmentation policy would have a total of 5x2x2x16=320 parameters”, “multiple data augmentation policies can be combined by aggregating their respective sub-policies into a single, combined data augmentation policy.” and 102, “the system may generate the final data augmentation policy by combining a predetermined number of data augmentation policies generated … that have the highest quality scores.” [i.e., a final coupled data augmentation/expansion algorithm/policy generated/obtained by combining/coupling data augmentation/expansion functions/policies by using parameters/scores/weights]); the one or plurality of processors inputting, to the learning model, each item of the expanded data generated (see, e.g., paragraphs 62, “a machine learning model can be trained using a data augmentation policy by transforming the training inputs of existing training examples to generate "new" training examples, and using the new training examples (instead of or in addition to the existing training examples) to train the machine learning model.” and 99, “At each training iteration, the system selects a current "batch" (i.e., set) of one or more training examples, and then determines an "augmented" batch of training examples by transforming the training inputs in the current batch of training examples using the current data augmentation policy.” [i.e., inputting into the machine learning model, each generated new item of augmented/expanded training data examples]) by stepwise changing a weight of the coupled function to implement 
learning (see, e.g., paragraphs 60, “At each of multiple iterations, referred to in this specification as "time steps", the policy generation engine 114 generates one or more "current" data augmentation policies 116. … the system 100 uses the training engine 112 to train a machine learning model 104 using the current data augmentation policy and thereafter determines a quality measure 110 of the current data augmentation policy. The policy generation engine 114 uses the quality measures 110 of the current data augmentation policies 116 to improve the expected quality measures of the data augmentation policies generated for the next time step.”, 82, “at a given time step, the parameter update engine 204 updates the current values of the policy network parameters” and 102, “the system may generate the final data augmentation policy by combining a predetermined number of data augmentation policies generated during steps 704-708 that have the highest quality scores.” [i.e., implement training/learning with data generated by step-wise iterations changing a quality measure/parameter/weight of the combined/coupled function]).

Although Vasudevan substantially discloses the claimed invention and Vasudevan discloses “output may estimate the coordinates of bounding boxes that enclose respective objects” and “the target output corresponding to a training input may specify coordinates of a bounding box that encloses an object depicted in the image of the training input. 
… a translation … of the training input would require applying the same translation operation to the bounding box coordinates specified by the target output.” [i.e., a bounding box for an object/target data and a target output/intended result] (see, e.g., paragraphs 45 and 64), Vasudevan is not relied on for explicitly disclosing the one or plurality of processors specifying a boundary weight with which a learning result of the learning indicates an intended result and associating the boundary weight with information related to the target data. In the same field, analogous art Yao teaches the one or plurality of processors specifying a boundary weight with which a learning result of the learning indicates an intended result and associating the boundary weight with information related to the target data (see, e.g., FIG. 2 – depicting “Decision boundaries … between the positive and negative samples in the target domain” and pages 1855, Sect. 1, “The algorithms are general, and have the potential for significantly improving the performance of several computer vision applications.” [i.e., computer has one or more processors], 1856-1857, Sects. 2 and 4, “Support vector machines (SVM) have been modified for transfer learning. In [27] an SVM is derived by adjusting existing classifiers according to the target data. [14] derived more adaptable decision boundaries by training a target SVM with the help of weighted support vectors learned”, “AdaBoost at every iteration [i.e., step-wise] increases the accuracy of the … classifier by carefully adjusting the weights of the training instances” and 1859-1860, Sects. 
5-6, “the update of the weights of the target training instances drives the search for the transfer of the next sub-task that is needed the most for boosting the target classifier [i.e., specify/update weights associated with target data] … Figure 2 shows a data distribution, and … various learning algorithms … Figure 2(c) shows how MultiSource-TrAdaBoost improves the decision boundaries. Each source separately combines with the target … the boundary parts more closely related to the target are transferred to produce tighter target decision boundaries.” [i.e., specify/update decision boundary weights where a training/learning result indicates an intended classification result and associating the boundary weights with support vector information related to the target data]). Vasudevan and Yao are analogous art because they are both directed to techniques for training (i.e., learning) machine learning models and classifiers using training data (see, e.g., Vasudevan, Abstract and Yao, Abstract and page 1858). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosed system of Vasudevan to incorporate the teachings of Yao to provide “new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost” “for transferring knowledge from multiple sources” where “MultiSource-TrAdaBoost, extends the TrAdaBoost framework for handling multiple sources” (see, e.g., Yao, Abstract and page 1855). 
One of ordinary skill in the art would have been motivated to combine the system of Vasudevan with the algorithms of Yao because “MultiSource-TrAdaBoost improves the decision boundaries” where “the boundary parts more closely related to the target are transferred to produce tighter target decision boundaries” by “grab[bing] the most useful pieces of the dashed boundaries to build the tight target decision boundaries.” and “By incorporating the ability to transfer knowledge from multiple individual domains, MultiSource-TrAdaBoost and TaskTrAdaBoost demonstrate a significant improvement in recognition accuracy … and the corresponding standard deviations decrease, indicating an improved performance in both accuracy and consistency.”, as suggested by Yao (see, e.g., Yao, pages 1859-1860).

With respect to independent claim 4, claim 4 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 4 is a computer-readable non-transitory recording medium claim that stores a learning model that performs operations corresponding to the method steps of claim 1. 
In addition, Vasudevan further discloses a computer-readable non-transitory recording medium recording thereon a program that causes one or a plurality of processors included in an information processing device having a memory storing therein a learning model that performs predetermined learning by using a neural network (see, e.g., Abstract, “apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model.” and paragraphs 14, “In some implementations, the machine learning model is a neural network” and 106, “Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device”). With respect to independent claim 7, claim 7 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 7 is an information processing device claim with a memory that stores a learning model that performs operations corresponding to the method steps of claim 1. 
In addition, Vasudevan further discloses an information processing device comprising: a memory; and one or a plurality of processors, the memory storing therein a learning model that performs predetermined learning by using a neural network (see, e.g., Abstract, “apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model.” and paragraphs 14, “In some implementations, the machine learning model is a neural network” and 111, “Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.”). Regarding claims 2, 5, and 8, as discussed above, Vasudevan in view of Yao teaches the method of claim 1, the computer-readable non-transitory recording medium of claim 4, and the device of claim 7. Vasudevan further teaches wherein, when the learning result of the learning indicates the intended result, the one or plurality of processors assign, to the expanded data, the same label as a label assigned to the target data (see, e.g., paragraphs 43, “the machine learning model is configured to process an image to generate a classification output that includes a respective score corresponding to each of multiple categories. The score for a category indicates a likelihood that the image belongs to the category. 
… the categories may be classes of objects (e.g., dog, cat, person, and the like), and the image may belong to a category if it depicts an object included in the object class corresponding to the category.”, 50, “training data 106 is composed of multiple training examples, where each training example specifies a training input and a corresponding target output. The training input includes an image. The target output represents the output that should be generated by the machine learning model by processing the training input. For example, the target output may be a classification output that specifies a category (e.g., object class) corresponding to the input” and 55-56, “The quality measure of a data augmentation policy characterizes the performance (e.g., prediction accuracy) of a machine learning model trained using the data augmentation”, “system 100 may determine the quality measure of a data augmentation policy by evaluating the performance of a machine learning model trained using the data augmentation policy … e.g., an F1 score … (in the case of a classification task)” [i.e., when training/learning result indicates accurate, intended result/target output, assign same quality score/label to the final augmented/expanded training data as the quality score/label assigned to target data]). Regarding claims 3, 6, and 9, as discussed above, Vasudevan in view of Yao teaches the method of claim 1, the computer-readable non-transitory recording medium of claim 4, and the device of claim 7. 
Vasudevan further teaches wherein, when the predetermined learning is learning of a classification problem and the learning result indicates a classification result, the association includes specifying, as the … weight, a weight when a result of the classification changes from a first result to a second result (see, e.g., paragraphs 43, “the machine learning model is configured to process an image to generate a classification output that includes a respective score corresponding to each of multiple categories” [i.e., including 1st and 2nd classification results/category scores], 50, “The target output represents the output that should be generated by the machine learning model by processing the training input. For example, the target output may be a classification output that specifies a category (e.g., object class) corresponding to the input”, 61, “Training a machine learning model refers to determining trained values of the parameters of the machine learning model from initial values of the parameters of the machine learning model” and 63, “the training input of a training example can be transformed (e.g., in accordance with a data augmentation policy) while maintain the same corresponding target output. For example, for an image classification task” [i.e., the learning/training is for classification and the result indicates a classification result, and the association includes specifying the model parameter/weight when a classification result/output/score changes from a 1st result to a 2nd result]). Although Vasudevan substantially discloses the claimed invention and Vasudevan discloses “output may estimate the coordinates of bounding boxes that enclose respective objects” and “the target output corresponding to a training input may specify coordinates of a bounding box that encloses an object depicted in the image of the training input. 
… a translation … of the training input would require applying the same translation operation to the bounding box coordinates specified by the target output.” [i.e., a bounding box for an object/target data and a target output/intended result] (see, e.g., paragraphs 45 and 64), Vasudevan is not relied on for explicitly disclosing specifying, as the boundary weight, a weight when a result of the classification changes from a first result to a second result. In the same field, analogous art Yao teaches specifying, as the boundary weight, a weight when a result of the classification changes from a first result to a second result (see, e.g., FIG. 2 – depicting “Decision boundaries … between the positive and negative samples in the target domain” and pages 1856-1857, Sects. 2 and 4, “Support vector machines (SVM) have been modified for transfer learning. In [27] an SVM is derived by adjusting existing classifiers according to the target data. [14] derived more adaptable decision boundaries by training a target SVM with the help of weighted support vectors learned”, “AdaBoost at every iteration [i.e., 1st and 2nd iterations/results] increases the accuracy of the … classifier by carefully adjusting the weights of the training instances. In particular, it gives more importance to misclassified instances because they are believed to be the “most informative” for the next selection. … at every iteration the source training instances are given less importance when they are misclassified.” [i.e., a 1st classification result, misclassified instance, and next, 2nd result in next iteration] and 1859-1860, Sects. 
5-6, “the update of the weights of the target training instances drives the search for the transfer of the next sub-task that is needed the most for boosting the target classifier [i.e., specifying/updating a weight when a classification result changes from a 1st result to a 2nd result] … Figure 2 shows a data distribution, and … various learning algorithms … Figure 2(c) shows how MultiSource-TrAdaBoost improves the decision boundaries. Each source separately combines with the target … the boundary parts more closely related to the target are transferred to produce tighter target decision boundaries.” [i.e., specifying/updating weight as the decision boundary weight]).

Vasudevan and Yao are analogous art because they are both directed to techniques for training (i.e., learning) machine learning models and classifiers using training data (see, e.g., Vasudevan, Abstract and Yao, Abstract and page 1858).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosed system of Vasudevan to incorporate the teachings of Yao to provide “new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost” “for transferring knowledge from multiple sources” where “MultiSource-TrAdaBoost, extends the TrAdaBoost framework for handling multiple sources” (see, e.g., Yao, Abstract and page 1855). 
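The re-weighting behaviour Yao describes (give more importance to misclassified target instances, as in AdaBoost, while demoting misclassified source-domain instances) can be sketched as one boosting round. The function name and the simplified multiplicative factors are assumptions for illustration, not Yao's exact TrAdaBoost update rule:

```python
def update_weights(weights, misclassified, is_source, beta_src, beta_tgt):
    """One boosting round of instance re-weighting.

    weights       -- current instance weights (one per training instance)
    misclassified -- True where this round's weak learner erred
    is_source     -- True for source-domain instances, False for target
    beta_src < 1  -- down-weighting factor for misclassified source data
    beta_tgt > 1  -- up-weighting factor for misclassified target data
    """
    new = []
    for w, miss, src in zip(weights, misclassified, is_source):
        if not miss:
            new.append(w)             # correctly classified: unchanged
        elif src:
            new.append(w * beta_src)  # misclassified source: demote
        else:
            new.append(w * beta_tgt)  # misclassified target: promote
    total = sum(new)
    return [w / total for w in new]   # renormalise to sum to 1
```

After normalisation, misclassified target instances carry relatively more weight in the next iteration and misclassified source instances relatively less, which is the "1st result, then 2nd result in the next iteration" behaviour the examiner maps onto the claim language.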
One of ordinary skill in the art would have been motivated to combine the system of Vasudevan with the algorithms of Yao because “MultiSource-TrAdaBoost improves the decision boundaries” where “the boundary parts more closely related to the target are transferred to produce tighter target decision boundaries” by “grab[bing] the most useful pieces of the dashed boundaries to build the tight target decision boundaries.” and “By incorporating the ability to transfer knowledge from multiple individual domains, MultiSource-TrAdaBoost and TaskTrAdaBoost demonstrate a significant improvement in recognition accuracy … and the corresponding standard deviations decrease, indicating an improved performance in both accuracy and consistency.”, as suggested by Yao (see, e.g., Yao, pages 1859-1860).

Conclusion

The prior art made of record, listed on form PTO-892, and not relied upon, is considered pertinent to applicant's disclosure. For example, Brueckner et al. (U.S. Patent Application Pub. No. 2016/0078361 A1, hereinafter “Brueckner”) and Lee et al. (U.S. Patent Application Pub. No. 2015/0379429 A1, hereinafter “Lee”) both disclose that “an optimization may be used in some implementations to find an approximate boundary weight for the selected fraction (i.e., the weight Wk such that approximately 10% of the features have smaller absolute weights and the remaining approximately 90% have higher absolute weights), without sorting the weights or copying the weights.” (see Brueckner, paragraph 214, and Lee, paragraph 265).

The examiner requests, in response to this office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. 
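The Brueckner/Lee passage describes finding a boundary weight Wk for a selected fraction of feature weights without fully sorting them. A rough quickselect-based sketch of that idea follows; unlike the cited optimization, which also avoids copying, this illustrative version does build temporary sublists, so it captures only the "no full sort" aspect:

```python
def boundary_weight(weights, fraction):
    """Return Wk: the absolute weight at rank floor(fraction * n),
    so that roughly `fraction` of the weights have smaller |value|.
    Uses quickselect (expected O(n)) instead of a full O(n log n) sort.
    """
    k = int(fraction * len(weights))
    vals = [abs(w) for w in weights]

    def select(lst, k):
        pivot = lst[len(lst) // 2]
        lows = [v for v in lst if v < pivot]
        highs = [v for v in lst if v > pivot]
        pivots = [v for v in lst if v == pivot]
        if k < len(lows):
            return select(lows, k)          # k-th value is below pivot
        if k < len(lows) + len(pivots):
            return pivot                    # k-th value is the pivot
        return select(highs, k - len(lows) - len(pivots))

    return select(vals, k)
```

For example, with ten weights and `fraction=0.1`, the returned Wk leaves one weight (about 10%) with a smaller absolute value and the remaining nine at or above it.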
When responding to this office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RANDY K BALDWIN whose telephone number is (571) 270-5222. The examiner can normally be reached Mon - Fri 9:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RANDALL K. BALDWIN/
Primary Examiner, Art Unit 2125

/KAMRAN AFSHAR/
Supervisory Patent Examiner, Art Unit 2125

Footnotes:
1. The examiner notes that the original specification, claims, drawings and abstract filed 10/28/2022, like the certified copy of the foreign priority application discussed herein, are in Japanese. The examiner further notes that no English language translation of the non-English language original specification, claims and drawings was filed until the response to the notice to file missing parts was filed on 12/29/2022.
2. As discussed in the priority section above, since an English language translation of Application No. JP2021-176197 has not been made of record to date, the Examiner notes that prior art references with a filing date or a publication date prior to the instant Application's filing date of 10/28/2022 are considered applicable prior art references.
3. As indicated above in the section 112(b) rejection of this claim, “acquiring expanded data resulting from expansion of target data using an optional data expansion algorithm including a coupled function obtained by coupling together a plurality of data expandable functions by using weights” has been interpreted as obtaining or acquiring expanded data that resulted from expanding target data, wherein the expanding optionally uses a data expansion algorithm that includes a coupled, joined, or merged set of data expandable functions, wherein the coupling, joining, or merging is based at least in part on parameters or weights.
4. As indicated above in the section 112(b) rejection of this claim, “inputting, to the learning model, each item of the expanded data generated by stepwise changing a weight of the coupled function to implement learning” has been interpreted as inputting each item of the expanded data generated by an iterative, step-wise, step-by-step changing of a weight or parameter of the coupled function into the learning model in order to implement any learning or training, including, but not limited to, the previously-introduced “predetermined learning”.
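Reading footnotes 3 and 4 together with claims 3, 6, and 9, the examiner's interpretation is: couple multiple data-expansion functions by a weight, sweep that weight stepwise, feed each expanded item to a classifier, and record the weight at which the predicted class flips from a first result to a second. A minimal toy sketch of that reading follows; the linear coupling, the function names, and the fixed step grid are all assumptions for illustration, not the application's disclosed algorithm:

```python
def coupled(x, w, f, g):
    """Couple two data-expansion functions f and g by weight w in [0, 1]."""
    return (1 - w) * f(x) + w * g(x)

def find_boundary_weight(x, classify, f, g, steps=100):
    """Sweep w stepwise from 0 to 1; return the first weight at which
    the classifier's output on the expanded item changes from its
    initial (first) result to a different (second) result."""
    first = classify(coupled(x, 0.0, f, g))
    for i in range(1, steps + 1):
        w = i / steps
        if classify(coupled(x, w, f, g)) != first:
            return w                 # the "boundary weight"
    return None                      # classification never changed
```

In this toy setting the returned value plays the role of the claimed boundary weight: the smallest swept weight for which the classification result differs from the result at w = 0.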

Prosecution Timeline

Oct 28, 2022
Application Filed
Nov 05, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602573
NEURAL NETWORK ROBUSTNESS VIA BINARY ACTIVATION
2y 5m to grant · Granted Apr 14, 2026
Patent 12596918
ACCELERATOR FOR DEEP NEURAL NETWORKS
2y 5m to grant · Granted Apr 07, 2026
Patent 12579000
SCHEDULING METHOD FOR A MULTI-LAYER CONVOLUTIONAL NEURAL NETWORK, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 17, 2026
Patent 12574477
DISTRIBUTED DEEP LEARNING USING A DISTRIBUTED DEEP NEURAL NETWORK
2y 5m to grant · Granted Mar 10, 2026
Patent 12572789
BLOCKWISE FACTORIZATION OF HYPERVECTORS
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+26.9%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
