Prosecution Insights
Last updated: April 19, 2026
Application No. 17/351,628

MODEL-AWARE DATA TRANSFER AND STORAGE

Final Rejection (§101, §103, §112)
Filed: Jun 18, 2021
Examiner: FACCENDA, GISEL GABRIELA
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)
Grant Probability: 56% (Moderate)
OA Rounds: 5-6
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 56% (9 granted / 16 resolved; +1.3% vs TC avg)
Interview Lift: +49.2% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 11m (typical timeline; 24 applications currently pending)
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Based on career data from 16 resolved cases; comparisons are against Tech Center average estimates.

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office action is responsive to the amendment filed on 12/19/2025. As directed by the amendments, claims 1, 10, and 11 are amended. Claims 1, 3-11, and 13-20 are pending for examination.

Response to Arguments

Regarding the 35 U.S.C. § 101 Rejection: Applicant's further arguments, see pp. 8-10, filed 12/19/2025, have been fully considered but are not persuasive.

APPLICANT ARGUMENT: Applicant argues, "Present claim 1 is not an abstract idea at least because the claim incorporates a practical application in the form of improving the functioning of a computer." In addition, applicant argues that claim 1 as presented "improves the functioning of a computer by 'receiving' and 'storing' a reduced training dataset, where the 'reduced training dataset [is] smaller than the training data.' 'Receiving' and 'storing' the reduced training dataset is an improvement to a computer technology because the reduced training dataset 'decreases storage of extraneous data as compared to the training data'" and reduces the amount of time taken to receive the data from the request (Present Specification ¶¶ [0001]-[0003]). Moreover, applicant argues that claim 1 as presented reflects an improvement similar to that addressed in the December 2025 USPTO Memo regarding SME and is therefore "reciting patent eligible subject matter." Accordingly, applicant requests reconsideration and withdrawal of the present rejection.

EXAMINER RESPONSE: Examiner respectfully disagrees; applicant's argument is not persuasive. Amended claim 1 as presented does not integrate the judicial exception into a practical application under the second prong of the two-prong analysis, since the claimed invention does not improve the functioning of a computer or improve another technology or technical field.
Rather, the claim recites the additional elements of: transmitting a first request for training data, the first request including information about types of data in the training data, types of function to be performed, type of data reduction, and information about a neural network model; receiving a reduced training dataset that includes minimal viable data, responsive to the first request, the reduced training dataset being smaller than the training data, which decreases storage of extraneous data as compared to the training data; storing the reduced training dataset; and training the model using the reconstructed dataset. These elements merely recite the words "apply it" (or an equivalent) with the judicial exception, as discussed in MPEP § 2106.05(f); add insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and generally link the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h). The courts have identified that such limitations do not integrate a judicial exception into a practical application (see MPEP 2106.04(d)(I)). Furthermore, amended claim 1 does not recite "an improvement in the functioning of a computer, or an improvement to other technology or technical field" (see MPEP 2106.05(a)). The claimed invention as presented does not disclose any technical details or features of an improvement to the technology or technological field; rather, the asserted improvement is recited as a mental process of reducing the size of the data in a generic manner, which is an abstract idea. Therefore, the claimed invention fails to disclose any technical improvement and instead presents an abstract idea as the improvement or inventive concept. In addition, the claim fails to disclose any practical application, as it recites, in a generic manner, reducing the size of data without any detail.
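For readers tracing the dispute, the additional elements at issue can be sketched as a data-flow. The following is purely an illustrative sketch of the recited steps; all names (build_request, receive_reduced_dataset, the metadata keys, the server stand-in) are hypothetical and appear nowhere in the application or the cited references.

```python
# Hypothetical sketch of the additional elements recited in claim 1:
# transmit a metadata-bearing request, receive a smaller dataset, store it.
import numpy as np

def build_request():
    # The first request carries metadata about the data and model, not data itself.
    return {
        "data_types": ["image"],            # types of data in the training data
        "function": "classification",       # type of function to be performed
        "reduction": "subsample",           # type of data reduction
        "model_info": {"arch": "cnn", "input_shape": (32, 32, 3)},
    }

def receive_reduced_dataset(request, full_dataset):
    # Stand-in for the responding side: return a dataset smaller than the original.
    rng = np.random.default_rng(0)
    keep = rng.choice(len(full_dataset), size=len(full_dataset) // 4, replace=False)
    return full_dataset[keep]

full = np.zeros((1000, 32 * 32 * 3))       # invented placeholder training data
request = build_request()                  # "transmitting a first request"
reduced = receive_reduced_dataset(request, full)   # "receiving a reduced training dataset"
stored = reduced.copy()                    # "storing the reduced training dataset"
assert len(stored) < len(full)             # smaller than the training data
```

The sketch only mirrors the claim language at a high level of generality, which is consistent with the examiner's characterization of these steps as generic data transmission and storage.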
Moreover, as stated in the USPTO Memo "Advance notice of change to the MPEP in light of Ex Parte Desjardins" of December 5, 2025 (hereinafter referred to as the "USPTO Memo"), the "updates are not intended to announce any new USPTO practice or procedure and are meant to be consistent with existing USPTO guidance." Regarding applicant's argument, applicant does not provide detail as to how the claims of the instant application are analogous to those addressed in the USPTO Memo, other than stating "The method recited in present claim 1 reflect similar improvement and is consequently also reciting patent eligible subject matter." Nonetheless, the claims presented for examination are not analogous to Ex Parte Desjardins, Appeal No. 2024-000567 (hereinafter Ex Parte Desjardins), as the claims do not recite "an improvement in the functioning of a computer, or an improvement to other technology or technical field," and the improvement is neither specified in nor disclosed by the claims. The claims as presented for examination do not specify the technical features or details of how the computer functionality is being improved. In Ex Parte Desjardins, the claims were found to be patent eligible because, when evaluating the claim as a whole, the limitations of independent claim 1 constituted an improvement to how the machine learning model itself operated, and not, for example, to the identified mathematical calculation. The claims of the instant application do not reflect such an improvement to the machine learning model; instead, independent claims 1, 10, and 11 recite claim language in a generic manner that appears to be an improvement of an abstract idea, such as a mental process of reducing the size of the data in a generic manner.
The following paragraph from the USPTO Memo regarding Ex Parte Desjardins recites: "The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of 'catastrophic forgetting' encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning at least the limitation 'adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task' reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception instead into a practical application at Step 2A Prong Two, and therefore the claims were deemed to be outside any specific, enumerated judicial exception (Step 2A: NO)." (Emphasis added; see MPEP § 2106.04(d), subsection III.) By contrast, independent claim 1, when viewed as a whole, does not appear to recite features that provide concrete benefits; for example, the claim does not provide details as to how the machine learning model is being optimized, how deployment times are being decreased, or how lower computational cost for training in the target domain is being achieved.
Further, applicant is reminded that while "the claim itself does not need to explicitly recite the improvement described in the specification (e.g., 'thereby increasing the bandwidth of the channel')," if the specification sets forth an improvement in technology or a technical field, the claims must include "the components or steps of the invention that provide the improvement described in the specification" (see MPEP § 2106.04(d)(1)). Therefore, for the above reasons, claims 1, 3-11, and 13-20 are not directed to patent-eligible subject matter under 35 U.S.C. § 101.

Regarding the 35 U.S.C. § 103 Rejection: Applicant's further arguments, see pp. 10-13, filed 12/19/2025, have been fully considered but are not persuasive.

APPLICANT ARGUMENT: Applicant argues that none of the cited references teaches or suggests every element of amended claim 1. In particular, applicant argues that the cited reference Ando fails to teach the limitation "transmitting a first request for training data, the first request including information about types of data in the training data, types of function to be performed, type of data reduction, and information about a neural network model," since Ando "fails to consider 'type of data reduction' as a input to the requester. In other words, Ando presumes the types of data reduction are already known or otherwise fails to teach or suggest that the data reduction type can vary. Therefore, Ando fails to teach or suggest" the request including the type of data reduction. Accordingly, Applicant respectfully requests reconsideration and withdrawal of the present rejection.

EXAMINER RESPONSE: Examiner respectfully disagrees. Regarding amended independent claim 1, and the similar recitations of independent claims 10 and 11, Shimazu, Ando, and Iwase in combination do teach the newly added limitations.
To clarify, Shimazu teaches a method for training a neural network ([0181]-[0182]), teaches obtaining a training set of data ([0293]), and teaches the training dataset comprising "minimal viable data," allowing the model to maintain overall performance/accuracy or to be trained with reduced training data (Abstract). Moreover, Shimazu teaches generating a reconstructed training dataset from the reduced training dataset; specifically, Shimazu para. [0294] teaches a method that comprises randomizing part of the reduced training set of data, which can be viewed as a method of reconstructing training datasets, as the training set is being reduced into a plurality of "randomized reduced training sets of data." Further, applicant's specification does not provide a definition for what constitutes "minimal viable data"; under the broadest reasonable interpretation, examiner interprets "minimal viable data" as data that is reduced while allowing the model to maintain overall performance/accuracy or allowing the model to be trained with reduced training data. Having said that, the Shimazu Abstract teaches minimal viable data; specifically, it teaches "wherein the reduced training set of data is suitable to be used in a classification algorithm for determining a relationship between the at least one label and the features of the electronic items, thereby reducing computation complexity when processing the reduced training set of data, compared to processing the training set of data," and the Abstract also teaches storing the reduced training dataset. However, Shimazu does not teach transmitting a first request for training data, the request including information about types of data in the training data, types of function to be performed, and information about a neural network model; nor wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model.
Ando and Iwase overcome these deficiencies. Ando Fig. 1 explicitly teaches transmitting a request that includes the necessary information for training. To be specific, Ando [0056] teaches a requester that provides a request in the form of "learning request information" to a learning service providing system to perform machine learning. In particular, Ando [0103] teaches that the data/information to be handled includes various types of data, and [0132] teaches that the information input by the requester includes the type and specification of input data (i.e., types of data in the training data and type of data reduction), a learning objective (i.e., type of function to be performed), and information for specifying the target model (i.e., information about a neural network model), such that all pieces of necessary information regarding the learning model are input. Therefore, Ando does teach or suggest that the request includes the type of data, types of function, and information about the neural network model. Lastly, neither Shimazu nor Ando teaches or suggests "...wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model." Iwase overcomes this deficiency and teaches generating a padded image (i.e., reconstructed dataset) that is used as input to the image quality improving engine, as described in paragraphs [0188], [0193], and [0208]. Therefore, Shimazu, Ando, and Iwase, which are analogous art, teach or suggest amended claim 1. Independent claims 10 and 11 recite features similar to those of claim 1; therefore, the same rationale applies. Claims 3-9 and 13-20 depend from independent claims 1, 10, and 11; therefore, the rejection of claims 1, 10, and 11 applies. Accordingly, claims 1, 3-11, and 13-20 are rejected under 35 U.S.C. § 103.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-11, and 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the term "minimal viable data," which is a relative term that renders the claim indefinite. The term "minimal viable data" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For purposes of examination, this limitation will be interpreted as data that is reduced while allowing the machine learning model to maintain overall accuracy/performance or allowing the machine learning model to be trained with reduced training data. Independent claims 10 and 11 recite limitations similar to claim 1; therefore, the rejection of claim 1 is incorporated. Claim 3 recites the limitation "the group" in line 1. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, this limitation will be interpreted as "a group." Claims 3-9 and 13-20 depend from independent claims 1 and 11 and thus are rejected for the reasons set forth in the rejection of claims 1 and 11.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-11, and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

STEP 1: Claims 1 and 3-9 are method claims. Claim 10 is a computer program product claim. Claims 11 and 13-20 are system claims. Therefore, claims 1, 3-11, and 13-20 are directed to a process, machine, manufacture, or composition of matter.

Regarding claim 1:

2A Prong 1: generating a reconstructed training dataset from the stored reduced training dataset, wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model; and (mental process: generating a reconstructed training dataset from the reduced training dataset and generating filler data for the training dataset can be performed in the human mind with the aid of pen and paper (e.g., judgment)).

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: transmitting a first request for training data, the first request including information about types of data in the training data, types of function to be performed, type of data reduction, and information about a neural network model (this is understood to be insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)); receiving a reduced training dataset that includes minimal viable data, responsive to the first request,... (this is understood to be insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)).
...the reduced training dataset being smaller than the training data, which decreases storage of extraneous data as compared to the training data (the specification of data to be stored is understood to be a field of use limitation; see MPEP 2106.05(h)); storing the reduced training dataset (this is understood to be insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)); training the model using the reconstructed dataset (this is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: transmitting a first request for training data, the first request including information about types of data in the training data, types of function to be performed, type of data reduction, and information about a neural network model (this is directed to the well-understood, routine activity of receiving or transmitting data over a network; see MPEP 2106.05(d)(II)); receiving a reduced training dataset that includes minimal viable data, responsive to the first request,... (this is directed to the well-understood, routine activity of receiving or transmitting data over a network; see MPEP 2106.05(d)(II)); ...the reduced training dataset being smaller than the training data, which decreases storage of extraneous data as compared to the training data (the specification of data to be stored is understood to be a field of use limitation; see MPEP 2106.05(h));
storing the reduced training dataset (this is directed to the well-understood, routine activity of storing and retrieving information in memory; see MPEP 2106.05(d)(II)); training the model using the reconstructed dataset (this is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 3:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein information about the model comprises at least one element selected from the group consisting of a function of the model and a structure of the model (this is directed to restricting the abstract idea to a particular technological environment; see MPEP 2106.05(h)).

Regarding claim 4:

2A Prong 1: further comprising testing the trained model to determine whether the trained model has an accuracy that is within a predetermined threshold from an expected accuracy of the model as trained on a full dataset (mental process: an evaluation can be made with the aid of pen and paper to test a trained model to determine accuracy).

2A Prong 2 and 2B: None.

Regarding claim 5:

2A Prong 1: further comprising (mental process: an evaluation can be made to request training data after training a model).

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: further comprising transmitting a second request...
(this is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g); further, it is directed to the well-understood, routine activity of receiving or transmitting data over a network, see MPEP 2106.05(d)(II)). The additional elements as disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 6:

2A Prong 1: generating a second reconstructed dataset from the second reduced training dataset; and (mental process: a judgment can be made to generate data from a dataset).

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: receiving a second reduced training dataset, responsive to the second request (this is understood to be insignificant extra-solution activity to the judicial exception, see MPEP 2106.05(g); further, it is directed to the well-understood, routine activity of receiving or transmitting data over a network, see MPEP 2106.05(d)(II)); retraining the model using the second reconstructed dataset (this is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above.

Regarding claim 7:

2A Prong 1: None.
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: further comprising deploying the trained model to an edge device for implementation (this is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f)).

Regarding claim 8:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the reduced training dataset does not include at least some information from an original dataset that does not affect the efficacy of the model (this is directed to restricting the abstract idea to a particular technological environment; see MPEP 2106.05(h)).

Regarding claim 9:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the minimal viable data comprises motion vector information (this is directed to restricting the abstract idea to a particular technological environment; see MPEP 2106.05(h)).

Regarding claim 10: Claim 10 is rejected under the same rationale as claim 1. Claim 10 only recites the additional elements of "A computer program product for training a neural network model, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being executable by a hardware processor to cause the hardware processor to...," which is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f).

Regarding claim 11: Claim 11 is rejected under the same rationale as claim 1.
Claim 11 only recites the additional elements of "A system for training a neural network, comprising: a hardware processor; and a memory that stores a computer program product, which, when executed by the hardware processor, causes the hardware processor to...," which is directed to using computers or other machinery merely as a tool to perform an existing process; see MPEP 2106.05(f).

Regarding claims 13-19: These system claims have limitations similar to those of method claims 3-9 and are therefore rejected under the same rationale.

Regarding claim 20:

2A Prong 1: None.

2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the minimal viable data comprises region of interest information (this is directed to restricting the abstract idea to a particular technological environment; see MPEP 2106.05(h)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 8, 10-11, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shimazu et al., US 2021/0056444 A1 (hereinafter Shimazu), in view of Ando, US 2018/0349757 A1, and further in view of Iwase et al., US 2021/0224957 A1 (hereinafter Iwase).

Regarding Claim 1: Shimazu teaches: A method for training a neural network, comprising: (Shimazu [0181] "The method can comprise (operation 810) building at least one model based on this reduced training set of data, using a classification algorithm. This model establishes a relationship between features and at least one label" and [0182] "An example of a classification algorithm that can be used is random forest model. This is however not limitative, and other algorithms, such as ... neural network model"). receiving a reduced training dataset that includes minimal viable data, responsive to the first request, the reduced training dataset being smaller than the training data which decreases storage of extraneous data as compared to the training data; (Shimazu [0293] "The method comprises obtaining (operation 1700) the reduced training set of data (see above various methods for building this reduced training set of data" and [0292] "FIG. 17, which depicts a method of determining a relationship between the at least one label and the features of the electronic items, and using this relationship in prediction and/or analysis of the electronic items".
To further clarify, applicant's specification does not provide a definition for what constitutes "minimal viable data"; under the broadest reasonable interpretation, "minimal viable data" is data that is reduced while allowing the model to maintain overall performance/accuracy or allowing the model to be trained with reduced training data. Having said that, the Shimazu Abstract teaches "wherein the reduced training set of data is suitable to be used in a classification algorithm for determining a relationship between the at least one label and the features of the electronic items, thereby reducing computation complexity when processing the reduced training set of data, compared to processing the training set of data". Furthermore, a person skilled in the relevant art will recognize that a "reduced training set of data" is inherently smaller than the training data, as it is a smaller representative set of a larger set of data for training a machine learning model, and therefore decreases storage of extraneous data); storing the reduced training dataset; (the Shimazu Abstract teaches storing the reduced training set of data). generating a reconstructed training dataset from the stored reduced training dataset, training the neural network model using the reconstructed training dataset (Shimazu [0296] "If a plurality of different labels are present, the model reflects the relationship between features and the different labels" and [0297] "a model comprising of a plurality of decision trees can be obtained ... (based on randomized reduced training sets of data)"). Shimazu does not teach transmitting a first request for training data, the request including information about types of data in the training data, types of function to be performed, and information about a neural network model; nor wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model.
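The broadest reasonable interpretation applied above — a reduced dataset on which a model still maintains overall accuracy — can be illustrated numerically. The following is a minimal sketch under invented assumptions (synthetic two-class data, a toy nearest-centroid classifier, a 5% subsample); none of it comes from Shimazu or the application.

```python
# Toy illustration of the examiner's interpretation of "minimal viable data":
# a heavily reduced training subset that maintains the model's overall accuracy.
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian classes (invented data).
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    # Classify by distance to each class centroid computed from training data.
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

full_acc = nearest_centroid_accuracy(X, y, X, y)

# Keep only 5% of the samples: the "reduced training set of data".
idx = rng.choice(len(X), size=50, replace=False)
reduced_acc = nearest_centroid_accuracy(X[idx], y[idx], X, y)

# With well-separated classes, the reduced set maintains overall accuracy.
assert abs(full_acc - reduced_acc) < 0.05
```

The point of the sketch is only that "smaller dataset, comparable accuracy" is a measurable property, which is the sense in which the examiner construes the term for examination.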
However, Ando teaches: transmitting a first request for training data, the request including information about types of data in the training data, types of function to be performed, type of data reduction, and information about a neural network model; (To clarify, Ando [0056] teaches a requester that provides a request in the form of "learning request information" to a learning service providing system to perform machine learning. In particular, [0133] teaches that the "learning request information" includes information for specifying the training information (i.e., information necessary for performing machine learning, including learning model information, target apparatus model information, target model information, learning simulator model information, and ability providing data generation model information (see [0099])) corresponding to the selected learning model, and information input by the requester (i.e., a type and specification of input data, a learning objective, a type and specification of output data, information for specifying the target apparatus or the target apparatus model, and information for specifying the target or the target model (see [0132])). Moreover, applicant's specification paragraph [0031] teaches that the "request may specify a type of data, a type of model it will be used for, a training process, and/or any other information which may help in performing the data reduction," and [0033] teaches that "the data reduction may be determined automatically, based on information regarding the type of data that is requested and a type of function performed by the machine learning model" (emphasis added); thus, the "type of data" to be used implies a type of data reduction.
Accordingly, Ando [0103] teaches that the data/information to be handled includes various types of data, and [0132] teaches that the information input by the requester includes a type and specification of input data (i.e., type of data reduction), such that all pieces of necessary information regarding the learning model are input in the request). Ando is also in the same field of endeavor as Shimazu (machine learning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of a request for training information, as disclosed and taught by Ando, in the system taught by Shimazu, to yield the predictable result of “improving the efficiency of development operations in which machine learning is used” (see Ando [0002]).

Shimazu and Ando do not disclose ...wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model. Nevertheless, Iwase teaches the following: ...wherein generating the reconstructed training dataset includes adding filler data to the reduced training dataset to match an input data format for the neural network model (Iwase [0188] teaches an image quality improving unit that “reduces an input image” so that the size of the input image becomes an input data format, such as an image size, that the image quality improving engine is capable of handling, and [0193] teaches the image quality improving unit generates a “transformed image” (reduced training dataset). In addition, [0208] teaches “the image quality improving unit performs padding with respect to the transformed image so that the transformed image becomes a certain image size set for the training data”, therefore generating a padded image (reconstructed dataset) that is used as input to the image quality improving engine).
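As an illustrative aside, the padding operation described in Iwase [0208], where zero-valued “filler data” is appended to a reduced sample so its shape matches the input format the model expects, can be sketched as below. This assumes zero filler and a simple 2-D shape; it is not Iwase's actual image quality improving engine:

```python
import numpy as np

def add_filler(sample: np.ndarray, target_shape: tuple) -> np.ndarray:
    """Pad a reduced sample with zero 'filler data' so it matches the
    input data format (shape) expected by the neural network model."""
    pad = [(0, t - s) for s, t in zip(sample.shape, target_shape)]
    return np.pad(sample, pad, mode="constant", constant_values=0)

reduced_image = np.ones((2, 3))            # the reduced ("transformed") sample
reconstructed = add_filler(reduced_image, (4, 4))
print(reconstructed.shape)                 # (4, 4)
```

The original values survive unchanged in the top-left corner; only the filler region is new, mirroring the claim's "reconstructed training dataset" that matches the model's input format.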
Iwase is also in the same field of endeavor as Shimazu and Ando (machine learning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of padding, as disclosed and taught by Iwase, in the system taught by Shimazu and Ando, to yield the predictable result of generating “an image that is more suitable for image diagnosis than in the conventional technology”, thereby generating a more effective, high-quality image (Iwase [0009] & [0478]).

Regarding Claim 3: Shimazu, Ando and Iwase teach The method of claim 1. Ando specifically teaches wherein information about the model comprises at least one element selected from the group consisting of a function of the model and a structure of the model (Ando [0022] teaches a request acceptance unit may display a plurality of options for each parametric item to the requester, and allow the requester to select information to be input from the plurality of options. Moreover, Ando [0132] teaches the information input by the requester includes a learning objective (function of the model) and information for specifying the target or the target model (structure of the model)).

Regarding Claim 8: Shimazu, Ando and Iwase teach The method of claim 1.
Shimazu specifically teaches wherein the reduced training dataset does not include at least some information from an original dataset that does not affect the efficacy of the model (Shimazu [0011] “wherein for a plurality of the groups which comprise a plurality of sets of data, a number of aggregated set of data is less than a number of the sets of data of the group, wherein the reduced training set of data is suitable to be used in a classification algorithm implementing one or more decision trees for determining a relationship between the at least one label and the features of the electronic items, thereby reducing computation complexity when processing the reduced training set of data by the classification algorithm, compared to processing the training set of data” and [0042] “According to some embodiments, the proposed solution allows building one or more models which reflect a relationship between features and label(s) with less computational requirements while maintaining accuracy of the models”). It would have been obvious for a PHOSITA to combine the method taught by Shimazu, Ando and Iwase with Shimazu's omission of information from the original dataset that has no effect on the efficacy of the model. The motivation to do so is to have a reduced dataset, with any added filler data, that neither increases nor decreases the model's accuracy or efficacy.

Regarding claim 10: Claim 10 is rejected under the same rationale as claim 1.
Claim 10 only recites the additional elements of A computer program product for training a neural network model, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being executable by a hardware processor to cause the hardware processor to, for which Shimazu [0027] teaches “In accordance with certain aspects of the presently disclosed subject matter, there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform the method described above” and [0029] “In accordance with certain aspects of the presently disclosed subject matter, there is provided a system comprising a processing unit and a memory, configured to”.

Regarding claim 11: Claim 11 is rejected under the same rationale as claim 1. Claim 11 only recites the additional elements of A system for training a neural network, comprising: a hardware processor; and a memory that stores a computer program product, which, when executed by the hardware processor, causes the hardware processor to, for which Shimazu [0027] teaches “In accordance with certain aspects of the presently disclosed subject matter, there is provided a non-transitory storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform the method described above” and [0029] “In accordance with certain aspects of the presently disclosed subject matter, there is provided a system comprising a processing unit and a memory, configured to”.

Regarding Claims 13 and 18: Claims 13 and 18 are the system claims of method claims 3 and 8, and thus the claim mapping is identical.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Shimazu, Ando and Iwase in further view of Chen et al., US 11,301,762 B1 (hereinafter Chen).

Regarding Claim 7: Shimazu, Ando and Iwase teach The method of claim 1.
Neither Shimazu, Ando, nor Iwase teaches further comprising deploying the trained model to an edge device for implementation. Nonetheless, Chen specifically teaches the following: further comprising deploying the trained model to an edge device for implementation (Chen, page 12, column 2, lines 15-17, “In some embodiments, the ML models can be simply deployed to different edge devices” and page 13, column 3, lines 12-14, “Data can be collected in a secure manner from devices and used to train new models, which can be re-deployed to devices”). It would have been obvious for a PHOSITA to combine the method taught by Shimazu, Ando and Iwase with Chen's deployment of a trained model to an edge device. The motivation to do so is to deploy a model, trained using the reconstructed dataset, to an edge device for implementation.

Regarding claim 17: Claim 17 is rejected under the same rationale as claim 7. Claim 17 only recites the additional elements of The system..., for which Shimazu FIG. 1 illustrates an architecture of a system.

Claims 4-6 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Shimazu, Ando and Iwase in further view of Mirzasoleiman et al., “Coresets for Data-efficient Training of Machine Learning Models” (hereinafter Mirzasoleiman).

Regarding Claim 4: Shimazu, Ando and Iwase teach The method of claim 1. However, Shimazu, Ando and Iwase do not teach whether the trained model has an accuracy that is within a predetermined threshold from an expected accuracy of the model as trained on a full dataset. Nevertheless, Mirzasoleiman teaches: whether the trained model has an accuracy that is within a predetermined threshold from an expected accuracy of the model as trained on a full dataset (Mirzasoleiman, page 9, section 6, right column, paragraph 1: “We developed a method, CRAIG, for selecting a subset (coreset) of data points with their corresponding per-element step sizes to speed up iterative gradient (IG) methods.
In particular, we showed that weighted subsets that minimize the upper-bound on the estimation error of the full gradient”). It would have been obvious for a PHOSITA to combine the method taught by Shimazu, Ando and Iwase with Mirzasoleiman's determination that the accuracy of the trained model is within a predetermined threshold. The motivation to do so is to compare the accuracy of the model trained on the reduced dataset to that of the model trained on the full dataset, to see whether the model trained on the reduced training dataset is effective enough.

Regarding Claim 5: Shimazu, Ando, Iwase and Mirzasoleiman teach The method of claim 4. Mirzasoleiman specifically teaches further comprising transmitting a second request for training data, responsive to a determination that the trained model has an accuracy that is outside the predetermined threshold from the expected accuracy, the second request comprising a relaxed data reduction parameter as compared to the first request (Mirzasoleiman, page 4, section 3.2, left column, paragraph 4: “At every step, the greedy algorithm selects an element that reduces the upper bound of the estimation error the most” and “Notice that CRAIG creates subset S incrementally one element at a time, which produces a natural order to the elements in S. Adding the element with largest marginal gain j ∈ argmax_{e∈V} F(e | S_{i-1}) improves our estimation from the full gradient by an amount bounded by the marginal gain.” Algorithm 1, see below). [Image of Algorithm 1 omitted.] It would have been obvious for a PHOSITA to combine the method of transmitting a request for training data taught by Shimazu, Ando and Iwase with Mirzasoleiman's algorithm that selects the next data element from a reduced threshold value. The motivation to do so is to be able to request data that is almost as accurate as the first set of data requested, in order to have a larger dataset or another dataset.

Regarding Claim 6: Shimazu, Ando, Iwase and Mirzasoleiman teach The method of claim 5.
Shimazu specifically teaches further comprising: receiving a second reduced training dataset, responsive to the second request (Shimazu [0293] “The method comprises obtaining (operation 1700) the reduced training set of data” (see above for the various methods of building this reduced training set of data) and [0292] “Attention is now drawn to FIG. 17, which depicts a method of determining a relationship between the at least one label and the features of the electronic items, and using this relationship in prediction and/or analysis of the electronic items”); generating a second reconstructed dataset from the second reduced training dataset (Shimazu [0294] “The method can comprise randomizing at least part of the reduced training set of data into a plurality of randomized reduced training sets of data”); and retraining the model using the second reconstructed dataset (Shimazu [0296] “If a plurality of different labels are present, the model reflects the relationship between features and the different labels” and [0297] “a model comprising of a plurality of decision trees can be obtained … (based on randomized reduced training sets of data)”). It would have been obvious for a PHOSITA to combine the method of transmitting a second request for training data taught by Shimazu, Ando, Iwase and Mirzasoleiman with Shimazu's method of receiving a reduced training dataset that is reconstructed, and using said dataset to retrain a model. The motivation to do so is to be able to train a second model with nearly the same accuracy as the full-dataset model while training only on a reduced dataset.

Regarding Claims 14-16: Claims 14, 15 and 16 are the system claims of method claims 4, 5 and 6, and thus the claim mapping is identical.

Claims 9, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shimazu, Ando and Iwase in further view of Liao et al., US 2019/0327486 A1 (hereinafter Liao).

Regarding Claim 9: Shimazu, Ando and Iwase teach The method of claim 1.
Neither Shimazu, Ando, nor Iwase discloses wherein the minimal viable data comprises motion vector information. Nevertheless, Liao teaches the following: wherein the minimal viable data comprises motion vector information (Liao [0081] “For example, as explained above, object detection can be performed on a compressed video frame using the motion vectors encoded in the compressed frame along with the object bounding boxes detected in a preceding frame. Thus, in the illustrated example, the motion vectors encoded in a compressed frame and the object bounding boxes detected in a preceding frame are supplied as input”). It would have been obvious for a PHOSITA to combine the method taught by Shimazu, Ando and Iwase with Liao's data type including motion vectors in a compressed frame. The motivation to do so is to have minimal viable data that comprises motion vector information, in order to be able to train a neural network model that can track video objects.

Regarding Claim 19: Claim 19 is rejected under the same rationale as claim 9. Claim 19 only recites the additional elements of The system..., for which Shimazu FIG. 1 illustrates an architecture of a system.

Regarding Claim 20: Shimazu, Ando and Iwase teach The method of claim 11. Neither Shimazu, Ando, nor Iwase discloses wherein the minimal viable data comprises region of interest information. Nevertheless, Liao teaches: wherein the minimal viable data comprises region of interest information (Liao [0081] “For example, as explained above, object detection can be performed on a compressed video frame using the motion vectors encoded in the compressed frame along with the object bounding boxes detected in a preceding frame. Thus, in the illustrated example, the motion vectors encoded in a compressed frame and the object bounding boxes detected in a preceding frame are supplied as input”).
It would have been obvious for a PHOSITA to combine the method taught by Shimazu, Ando and Iwase with Liao's data type including bounding boxes in a compressed frame. The motivation to do so is to have minimal viable data that comprises object bounding boxes, which contain regions of interest by the definition of a bounding box.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GISEL G FACCENDA, whose telephone number is (703) 756-1919. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/G.G.F./
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

Jun 18, 2021: Application Filed
Oct 09, 2024: Non-Final Rejection — §101, §103, §112
Dec 30, 2024: Interview Requested
Jan 08, 2025: Examiner Interview Summary
Jan 08, 2025: Applicant Interview (Telephonic)
Jan 17, 2025: Response Filed
Apr 16, 2025: Final Rejection — §101, §103, §112
Jun 10, 2025: Interview Requested
Jun 16, 2025: Response after Non-Final Action
Jul 21, 2025: Request for Continued Examination
Jul 28, 2025: Response after Non-Final Action
Sep 23, 2025: Non-Final Rejection — §101, §103, §112
Dec 03, 2025: Interview Requested
Dec 11, 2025: Examiner Interview Summary
Dec 19, 2025: Response Filed
Jan 28, 2026: Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12511538: HYBRID GRAPH-BASED PREDICTION MACHINE LEARNING FRAMEWORKS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12450489: METHOD, SYSTEM AND APPARATUS FOR FEDERATED LEARNING (granted Oct 21, 2025; 2y 5m to grant)
Patent 12393863: Distributed Training Method and System, Device and Storage Medium (granted Aug 19, 2025; 2y 5m to grant)
Patent 12314852: METHOD FOR RECOMMENDING OBJECT, COMPUTING DEVICE AND COMPUTER-READABLE STORAGE MEDIUM (granted May 27, 2025; 2y 5m to grant)
Patent 12242970: Incremental cluster validity index-based offline clustering for machine learning (granted Mar 04, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56%
With Interview: 99% (+49.2%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
