DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
The status of the claims as of the response filed 10/10/2025 is as follows: Claims 9, 12-21, 23, and 25 are cancelled, and all previously given rejections for these claims are considered moot. Claims 1, 7, 10, and 24 are currently amended. Claims 5-6, 11, and 22 are as previously presented. Claims 2-4 and 8 are original. Claims 1-8, 10-11, 22, and 24 are currently pending in the application and have been considered below.
Response to Amendment
Rejection Under 35 USC 101
Claim 24 has been amended such that it is now directed to statutory subject matter and the corresponding 35 USC 101 “software per se” rejection is withdrawn.
Although the claims have been amended, the amendments do not overcome the 35 USC 101 eligibility rejections, which are upheld for claims 1-8, 10-11, 22, and 24.
Rejection Under 35 USC 102/103
The amendments made to the claims introduce limitations that are not fully addressed in the previous office action, and thus the corresponding 35 USC 102/103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.
Response to Arguments
Rejection Under 35 USC 101
On page 6 of the response filed 10/10/2025, Applicant argues that amending claim 1 to include the subject matter of now-cancelled claim 9, as well as limitations directed to receiving and aggregating parameter gradients from the computing nodes and transmitting parameter updates back to the computing nodes until a convergence condition is reached, renders the claims patent eligible. Applicant’s arguments are fully considered, but are not persuasive. Examiner notes that Applicant has provided no specific arguments or explanation about why the newly-added subject matter would confer eligibility. Examiner submits that the newly-added limitations still describe abstract functions like receiving and aggregating data, making determinations about the data, and sharing updates about the data, which fit into the abstract idea grouping of certain methods of organizing human activity, such as managing personal behavior, relationships, or interactions between people. The use of computing nodes as tools with which to implement these functions amounts to mere instructions to “apply” the otherwise-abstract features in a digitized environment and does not confer eligibility. Accordingly, the 35 USC 101 rejections are upheld for claims 1-8, 10-11, 22, and 24, as explained in more detail below.
Rejection Under 35 USC 102/103
On page 6 of the response Applicant presents arguments that fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Examiner respectfully disagrees with Applicant’s assertions, and submits that the previously cited Patil and Anwar references do teach the amended claims when considered in combination, as explained in more detail in the updated 35 USC 103 rejections below.
Claim Interpretation
Claim 1 and its dependent claims describe the various iterations of the model with the descriptor “medical validation,” but do not positively claim any iteration of the model as performing any medical validation functions or producing any specific medical validation outputs. The only positively recited aspects of claim 1 are directed to exchanging various information about the various iterations of the model between master and computing nodes to perform a federated learning process until a convergence condition is reached. That is, specifying the type of model as a “medical validation” model has no functional bearing on the positively recited claim limitations, because the structure and function of the invention would remain unchanged if the model were instead described as a “diagnostic” model, a “treatment recommendation” model, or with any other descriptor. Because the model being a “medical validation” model has no specific functional or structural impact on the claims, this language is considered non-functional descriptive language and is not considered patentably limiting in this case. See MPEP 2111.05. Accordingly, a prior art reference disclosing the federated training of any type of model in the manner recited by claim 1 will be considered to meet the claim language for purposes of analyzing patentability over the prior art.
Claim 2 recites “distributing, by the master node, the final medical validation model to at least one of the plurality of computing nodes or at least one further computing node for use in medical validation.” The words “for use in medical validation” are not given patentable weight because they merely reflect an intended use or intended result of the model distribution step and do not positively affect the structure or function of the invention; no step for actually executing the model to achieve a medical validation function is claimed. The only positively recited function of claim 2 is “distributing, by the master node, the final medical validation model to at least one of the plurality of computing nodes or at least one further computing node,” so the claim language will be considered to be met if the prior art teaches that the final model is distributed to a computing node for any purpose. See MPEP 2111.04.
Claim 3 recites “wherein the respective local training datasets comprise historical medical data generated in medical tests and labeling information indicating local validation categories of the historical data.” This limitation merely further describes the content of the training data, and does not positively impact or affect the structure or function of the invention such that it also amounts to non-functional descriptive language, as explained for claim 1 above. Accordingly, a prior art reference disclosing local training datasets comprising historical data and labeling information of any type will be considered to meet the claim language for purposes of analyzing patentability over the prior art.
Claim 4 recites “wherein the definition information indicates unified item names in medical data input to the initial medical validation model, and unified validation categories output from the initial medical validation model, the unified validation categories indicating a plurality of predetermined validation actions to be performed on the medical data; and wherein the respective local training datasets are processed by mapping local item names used in the historical medical data to the unified item names, and mapping the local validation categories to the unified validation categories.” This first limitation merely further describes the content of the definition information as being unified “item names in medical data” and unified “validation categories”; these specific types of unified data do not positively impact or affect the structure or function of the invention such that they also amount to non-functional descriptive language, as explained for claim 1 above. Accordingly, a prior art reference disclosing definition information indicating unified or standard features/names for any type of inputs and outputs represented in the local training datasets that are then used to map local input and output features to the unified input and output features will be considered to meet the claim language for purposes of analyzing patentability over the prior art.
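For context only, the harmonization described in the interpretation of claim 4 above (mapping local item names and local validation categories to the unified definitions supplied by the master node) could be sketched as follows. All field names, category labels, and mappings below are hypothetical illustrations, not drawn from the claims or the cited references:

```python
# Illustrative sketch only: hypothetical field names and category labels
# showing how a local training record might be mapped to unified definitions.

def harmonize_record(record, item_name_map, category_map):
    """Rename local data items and remap the local validation category
    to the unified vocabulary defined in the definition information."""
    unified = {item_name_map.get(name, name): value
               for name, value in record.items()
               if name != "category"}
    unified["category"] = category_map[record["category"]]
    return unified

# Hypothetical site-specific mappings to the unified vocabulary.
ITEM_NAMES = {"hgb": "hemoglobin", "wbc": "white_blood_cell_count"}
CATEGORIES = {"flag_for_review": "MANUAL_REVIEW", "auto_ok": "AUTO_RELEASE"}

local_record = {"hgb": 13.5, "wbc": 6.2, "category": "flag_for_review"}
print(harmonize_record(local_record, ITEM_NAMES, CATEGORIES))
# → {'hemoglobin': 13.5, 'white_blood_cell_count': 6.2, 'category': 'MANUAL_REVIEW'}
```

As the interpretation above notes, any scheme that maps local input and output features to unified features in this general manner would be considered to meet the claim language.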
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8, 10-11, 22, and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
In the instant case, claims 1-8 and 10-11 are directed to a method (i.e. a process), claim 22 is directed to an electronic device (i.e. a system), and claim 24 is directed to a non-transitory computer readable medium (i.e. a manufacture). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.
Step 2A – Prong 1
Independent claim 1 recites steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people. Specifically, claim 1 (as representative) recites:
transmitting, by a master node to a plurality of computing nodes, definition information about an initial medical validation model;
performing, by the master node, a federated learning process together with the plurality of computing nodes, to jointly train the initial medical validation model using respective processed local training datasets available at the plurality of computing nodes; and
determining, by the master node, a final medical validation model based on a result of the federated learning process,
wherein the federated learning process comprises:
obtaining, by the master node, a trained medical validation model from the result of the federated learning process;
distributing the trained medical validation model to the plurality of computing nodes;
receiving feedback from the plurality of computing nodes, the feedback indicating respective performance metrics of the trained medical validation model determined by the computing nodes using respective local validation datasets; and
determining the final medical validation model based on the received feedback; and
wherein determining the trained medical validation model from the results of the federated learning process by the master node comprises iteratively performing steps of:
receiving, from the plurality of computing nodes, parameter gradients generated by the plurality of computing nodes based on the respective processed local training datasets;
aggregating the received parameter gradients to determine parameter updates; and
transmitting the parameter updates to the plurality of computing nodes to update intermediate initial medical validation models of the plurality of computing nodes until a convergence condition for the federated learning process is reached to obtain the trained medical validation model.
But for the recitation of generic computer components like a computer and computing nodes, the above-recited functions, when considered as a whole, describe a distributed medical model fitting and updating operation that could be achieved by a human actor such as a researcher managing their personal behavior and/or interactions with others. For example, a lead researcher could define constraints for preparing data to train/fit a medical validation model and communicate the constraints to colleagues at other institutions/facilities (e.g. in writing, verbally, etc.). The lead researcher could then coordinate a distributed learning process wherein the researchers at each institution train/fit a model with their own local datasets that they have each prepared in accordance with the defined constraints, and communicate with each of the institution-specific researchers to obtain the results of the institution-specific model fitting (including features like parameter gradients) to aggregate or combine the resulting model parameters into a final or updated medical validation model. The lead researcher could then distribute the final or updated model (e.g. in writing, verbally, etc.) back to the plurality of institution-specific researchers, who could each then use their own respective validation datasets to evaluate the model’s performance and communicate these performance metrics back to the lead researcher. The lead researcher could then aggregate the performance metrics from all of the institutions, and further determine whether the fitting/learning process is complete when they judge that a convergence condition is satisfied. Accordingly, claim 1 recites an abstract idea in the form of a certain method of organizing human activity.
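For context only, the iterative receive/aggregate/transmit steps recited in claim 1 follow the general pattern of a conventional federated gradient-averaging loop, which might be sketched as below. The toy model, learning rate, datasets, and convergence tolerance are illustrative assumptions, not Applicant's implementation or that of the cited references:

```python
# Illustrative sketch only: a minimal federated loop in which a master
# aggregates parameter gradients from computing nodes and broadcasts
# parameter updates until a convergence condition is met.

def local_gradient(params, dataset):
    # Hypothetical per-node gradient for a 1-D least-squares model y = w * x.
    w = params["w"]
    return {"w": sum(2 * (w * x - y) * x for x, y in dataset) / len(dataset)}

def federated_round(params, node_datasets, lr=0.05):
    grads = [local_gradient(params, d) for d in node_datasets]  # receive gradients
    avg = sum(g["w"] for g in grads) / len(grads)               # aggregate
    return {"w": params["w"] - lr * avg}                        # parameter update

def train(node_datasets, tol=1e-6, max_rounds=1000):
    params = {"w": 0.0}
    for _ in range(max_rounds):
        new_params = federated_round(params, node_datasets)
        if abs(new_params["w"] - params["w"]) < tol:  # convergence condition
            return new_params
        params = new_params
    return params

nodes = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # each node: local (x, y) pairs
final = train(nodes)
print(round(final["w"], 3))  # → 2.0
```

The loop mirrors the recited iteration: gradients flow from nodes to master, an aggregate update flows back, and iteration stops when the update falls below a tolerance.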
Dependent claims 2-8, 10-11, 22, and 24 inherit the limitations that recite an abstract idea from their dependence on claim 1, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. In addition, claims 2-8 and 10-11 recite additional limitations that further describe the abstract idea identified in the independent claims.
Specifically, claim 2 recites distributing the final medical validation model to at least one of the plurality of computing nodes or at least one further computing node for use in medical validation, which could be accomplished by the lead researcher communicating the final model back to their colleagues at each institution (e.g. in writing, verbally, etc.) for use/deployment of the model at that institution.
Claim 3 describes the content of the local training datasets, which are types of information that researchers would be capable of obtaining and preparing for use in training/fitting a model.
Claim 4 specifies that the definition information indicates unified definitions and types of input and outputs of the training datasets, and mapping the local inputs and outputs to the unified input and output definitions. A lead researcher would be capable of providing these types of data formatting/naming standards for these types of data in a training dataset, and an institution-specific researcher would be capable of viewing such definitions and preparing/standardizing training data in a manner that complies with the unified definitions.
Claim 5 specifies that the definition information indicates a scaled value range for a medical data item of the training datasets, and mapping the local medical items to the scaled value range. A lead researcher would be capable of providing this type of data scaling standard for this type of data in a training dataset, and an institution-specific researcher would be capable of viewing such a definition and preparing/scaling training data in a manner that complies with the value range.
Claim 6 specifies that the definition information indicates a unified red flag rule for medical data of the training datasets, and filtering out historical medical data satisfying the unified red flag rule in the training data. A lead researcher would be capable of providing this type of data exclusion standard for this type of data in a training dataset, and an institution-specific researcher would be capable of viewing such a definition and removing excluded training data types from the training dataset.
Claims 7-8 specify that the definition information indicates an item that should be input to the model and a predetermined value to be filled in for that item when its value is unavailable in the training dataset. A lead researcher would be capable of providing this type of data inclusion and imputation standard for this type of data in a training dataset, and an institution-specific researcher would be capable of viewing such a definition and imputing missing training data with a predetermined value, such as an average or median computed from other values in their site-specific data.
Claim 10 recites determining the final model by determining whether the performance metrics meet or fail a model release criterion. A lead researcher would be capable of evaluating the aggregated performance metrics from all of the institutions to determine whether they meet or fail a model release criterion (e.g. by comparing a performance score with a threshold) and deciding whether the model is sufficient to be the final model or whether it requires adjustments to become the final model.
Claim 11 specifies that the master node is communicatively connected with the plurality of nodes in a star topology network, which could describe the shape/arrangement of a lead researcher collaborating with a plurality of researchers at different institutions such that the lead researcher acts as a hub coordinating all of the research efforts.
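For context only, the data preparation operations described above for claims 5-8 (scaling a data item to a unified value range, filtering out records satisfying a red-flag rule, and filling in a predetermined value for a missing item) could be sketched as follows. The field name, thresholds, and fill value are hypothetical illustrations:

```python
# Illustrative sketch only: local preprocessing of the kinds described for
# claims 5-8, using a hypothetical "glucose" item and hypothetical limits.

def preprocess(records, lo, hi, red_flag, fill_value):
    out = []
    for rec in records:
        if red_flag(rec):                       # claim 6: exclude flagged data
            continue
        value = rec.get("glucose", fill_value)  # claims 7-8: impute missing item
        scaled = (value - lo) / (hi - lo)       # claim 5: map to unified [0, 1] range
        out.append({"glucose": scaled})
    return out

records = [
    {"glucose": 90.0},
    {"glucose": 400.0},  # removed by the red-flag rule below
    {},                  # missing item, imputed with the predetermined value
]
print(preprocess(records, lo=50.0, hi=150.0,
                 red_flag=lambda r: r.get("glucose", 0) > 300,
                 fill_value=100.0))
# → [{'glucose': 0.4}, {'glucose': 0.5}]
```

Each operation is a routine data-standardization step that, as explained above, an institution-specific researcher could equally perform by hand under the lead researcher's definitions.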
However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea.
Step 2A – Prong 2
The judicial exception is not integrated into a practical application. In particular, independent claim 1 does not include additional elements that integrate the abstract idea into a practical application. The additional elements of claim 1 include that the method is computer-implemented, and use of a plurality of computing nodes. Though the term “master node” does not inherently denote any specific hardware or structure, Examiner will also consider the “master node” to be a type of computing node for the purposes of this analysis. These additional elements, when considered in the context of the claim as a whole, merely serve to automate interactions that could occur between human actors (as described above), and thus amount to instructions to “apply” the abstract idea using generic computer components (see MPEP 2106.05(f)). For example, various researchers can interact to share data preparation definitions and model fitting constraints as well as the results of localized model training/fitting steps so that model updates or other performance determinations may be made for redistribution, and specifying that these steps are “computer-implemented” via a master node transmitting and receiving data to and from a plurality of computing nodes merely serves to digitize and/or automate the otherwise-abstract functions such that the communications and data sharing occur digitally (i.e. merely using computers as tools with which to implement otherwise-abstract interactions between humans). Accordingly, claim 1 as a whole is directed to an abstract idea without integration into a practical application.
The judicial exception recited in dependent claims 2-8, 10-11, 22, and 24 is also not integrated into a practical application under a similar analysis as above. Claims 22 and 24 introduce further high-level computing elements like a processor and memory executing computer readable instructions to achieve the functions of the method, which also amount to mere instructions to apply the abstract idea with generic computing components in the same manner as explained above for the computer implementation and computing node elements of claim 1. The functions of claims 2-8 and 10-11 are performed with the same additional elements introduced in the independent claim, without introducing any new additional elements of their own, and accordingly also amount to mere instructions to apply the abstract idea on these same additional elements.
Accordingly, the additional elements of claims 1-8, 10-11, 22, and 24 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-8, 10-11, 22, and 24 are directed to an abstract idea.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of computer implementation and computing nodes used for performing the transmitting, performing, determining, obtaining, distributing, receiving, aggregating, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components. As evidence of the generic nature of the above-recited additional elements, Examiner notes paras. [00117]-[00121] of Applicant’s specification, where the computer elements are disclosed in terms of known, general-purpose elements like processors, memory, networks, etc. Examiner also notes that the computer functions of receiving or transmitting data over a network and performing repetitive calculations are examples of well-understood, routine, and conventional activities previously known to the industry, as outlined in MPEP 2106.05(d)(II).
Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the computer implementation and computing nodes in combination is to digitize and/or automate a distributed medical model fitting operation that could otherwise be achieved as a certain method of organizing human activity. Thus, when considered as a whole and in combination, claims 1-8, 10-11, 22, and 24 are not patent eligible.
Claim Rejections - 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6, 10-11, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Patil et al. (US 20230351204 A1) in view of Anwar et al. (US 20220292392 A1).
Claim 1
Patil teaches a computer-implemented method (Patil abstract), comprising:
transmitting, by a master node to a plurality of computing nodes, definition information about an initial medical validation model (Patil [0004], [0082], [0085], noting central server 102 (i.e. a master node) sends information about a global model such as parameters, selected training data, and/or clinical requirements (i.e. definition information) to each clinical site (i.e. a plurality of computing nodes) so that a local copy of the model can be created at each clinical site; see also [0035], noting the model may be trained to perform any type of task that may be performed on medical data by a model, considered sufficient to meet the interpretation of the “medical validation” model outlined in para. 7 above);
performing, by the master node, a federated learning process together with the plurality of computing nodes, to jointly train the initial medical validation model using respective processed local training datasets available at the plurality of computing nodes (Patil [0004], [0082], [0085], [0090]-[0093], noting central server 102 (i.e. a master node) sends information about a global model such as parameters, selected training data, and/or clinical requirements (i.e. definition information) to each clinical site (i.e. a plurality of computing nodes) so that a local copy of the model can be created at each clinical site via a federated learning process with the selected training data meeting the clinical requirements (i.e. with local training datasets processed by the plurality of computing nodes based on the definition information)); and
determining, by the master node, a final medical validation model based on a result of the federated learning process (Patil [0004], [0083], noting the central server receives results of the federated learning process and creates or updates a global (i.e. final) copy of the model),
wherein the federated learning process comprises: obtaining, by the master node, a trained medical validation model from the result of the federated learning process; distributing the trained medical validation model to the plurality of computing nodes; receiving feedback; and determining the final medical validation model based on the received feedback (Patil [0092]-[0095], noting the central server receives the trained models from each clinical site as a result of the federated training process, generates or updates the global model, and evaluates performance metrics (e.g. accuracy) of the global model using validation data to determine when training should end. [0095] further notes that the trained model may be distributed back to the plurality of computing nodes for retraining); and
wherein determining the trained medical validation model from the results of the federated learning process by the master node comprises iteratively performing steps of: receiving, from the plurality of computing nodes, parameter gradients generated by the plurality of computing nodes based on the respective processed local training datasets; aggregating the received parameter gradients to determine parameter updates; and transmitting the parameter updates to the plurality of computing nodes to update intermediate initial medical validation models of the plurality of computing nodes until a convergence condition for the federated learning process is reached to obtain the trained medical validation model (Patil Fig. 6, [0093]-[0095], noting the central server receives the local parameter gradients generated by each clinical site as a result of the federated training process, considers them together (i.e. aggregates them) to generate or update weights of the global model, and evaluates a validation condition to determine whether training should end or whether the updated weights should be distributed back to the plurality of computing nodes for another iteration of retraining. Because Applicant’s specification provides no definition of a “convergence condition” beyond that it is a condition triggering the end of an iterative federated learning process as in [0088], the validation condition of Patil is considered functionally equivalent to the convergence condition of the instant claim).
In summary, Patil teaches a method for performing federated learning on local datasets, combining the results of the federated learning into a final model at the central server, and using validation data at the central server to determine performance metric feedback of the model and determine whether a condition is met or whether the weights of the global model should be distributed back to the clinical sites for additional epochs of training. This disclosed validation method relies on a single validation operation that takes place at the central server, rather than a distributed validation operation taking place with local validation datasets at each computing node as required by the claim. Accordingly, Patil fails to explicitly disclose receiving feedback from the plurality of computing nodes, the feedback indicating respective performance metrics of the trained medical validation model determined by the computing nodes using respective local validation datasets.
However, Anwar teaches an analogous federated learning architecture wherein the local nodes each receive a copy of the global model, evaluate the global model’s performance with respect to a local data set, and send the performance feedback to the central server as an indicator of the global model’s bias for or against the local node’s dataset (Anwar [0059], [0072]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the model validation operation of Patil such that it takes place in a distributed manner as in Anwar in order to determine how much the global model is biased for or against the local data of each computing node (as suggested by Anwar [0059]), thereby allowing for improved global model evaluation and selection.
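For context only, the distributed validation arrangement of the proposed combination (each computing node scoring the trained model on its own local validation dataset and reporting the metric to the master node, which applies a release criterion as also discussed for claim 10 below) might be sketched as follows. The toy classifier, datasets, and threshold are illustrative assumptions rather than the references' implementations:

```python
# Illustrative sketch only: nodes compute local validation metrics and the
# master node aggregates them against a hypothetical model release criterion.

def node_accuracy(model, validation_set):
    """Per-node feedback: accuracy of the trained model on local validation data."""
    correct = sum(1 for x, label in validation_set if model(x) == label)
    return correct / len(validation_set)

def master_decision(model, node_validation_sets, release_threshold=0.8):
    metrics = [node_accuracy(model, vs) for vs in node_validation_sets]  # feedback
    mean_metric = sum(metrics) / len(metrics)                            # aggregate
    return "release" if mean_metric >= release_threshold else "retrain"

trained_model = lambda x: x >= 0.5  # toy threshold classifier
sets = [
    [(0.9, True), (0.1, False)],                # node 1 validation data
    [(0.6, True), (0.4, False), (0.2, True)],   # node 2 validation data
]
print(master_decision(trained_model, sets))  # → release
```

This mirrors the combination's rationale: per-node metrics expose how the global model performs against each node's local data before the master decides whether the model is final.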
Claim 2
Patil in view of Anwar teaches the method of claim 1, and the combination further teaches: distributing, by the master node, the final medical validation model to at least one of the plurality of computing nodes or at least one further computing node for use in medical validation (Patil [0095], noting the global model may be distributed from the central server back to the clinical sites (i.e. computing nodes) for further training; this meets the interpretation of the claim language as outlined in para. 8 above).
Claim 3
Patil in view of Anwar teaches the method of claim 1, and the combination further teaches wherein the respective local training datasets comprise historical medical data generated in medical tests and labeling information indicating local validation categories of the historical medical data (Patil [0037], [0058], noting the models learn a mapping between example inputs (i.e. historical data) and ground truth labels (i.e. labeling information) in the local training data; this meets the interpretation of the claim language as outlined in para. 9 above).
Claim 6
Patil in view of Anwar teaches the method of claim 2, and the combination further teaches wherein the definition information further indicates a unified red flag rule for medical data prevented from being input to the initial medical validation model, and wherein the respective local training datasets are processed by filtering out historical medical data satisfying the unified red flag rule (Patil [0089], noting each clinical site checks for local training data meeting the clinical requirements and may use a sample eliminator-augmenter to remove unnecessary/irrelevant samples, considered equivalent to the clinical requirements (i.e. definition information) indicating a unified red flag rule for medical data that should be filtered out of the local training datasets such that it is prevented from being input to the model).
Claim 10
Patil in view of Anwar teaches the method of claim 1, and the combination further teaches wherein determining the final medical validation model based on the received feedback comprises: in response to the respective performance metrics meeting a model release criterion, determining the trained medical validation model as the final medical validation model; and in response to the respective performance metrics failing to meet the model release criterion, adjusting the trained medical validation model to generate the final medical validation model (Patil [0095], noting that if the model meets a predetermined accuracy requirement (i.e. a model release criterion) the training ends (i.e. the model is determined to be the final model), but if the criterion is not met further retraining occurs; when considered in the context of the combination with Anwar, the performance/accuracy metric being evaluated as in [0095] of Patil would be based on the respective performance metrics provided by each local computing node’s distributed validation procedure as in Anwar).
Claim 11
Patil in view of Anwar teaches the method of claim 1, and the combination further teaches wherein the master node is communicatively connected with the plurality of computing nodes in a star topology network (Patil Fig. 1, showing central server 102 as a central hub communicating with clinical sites 104 to 112 in a star topology).
Claim 22
Patil in view of Anwar teaches an electronic device comprising: at least one processor; and at least one memory comprising computer readable instructions which, when executed by the at least one processor of the electronic device, cause the electronic device to perform the steps of claim 1 (Patil [0098]-[0101]).
Claim 24
Patil in view of Anwar teaches a non-transitory computer readable medium having stored thereon a computer program product comprising instructions which, when executed by a processor of an apparatus, cause the apparatus to perform the steps of claim 1 (Patil [0098]-[0101]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Patil and Anwar as applied to claims 1 and 3 above, and further in view of Calcutt et al. (US 20200311300 A1).
Claim 4
Patil in view of Anwar teaches the method of claim 3, but the present combination fails to explicitly disclose wherein the definition information indicates unified item names in medical data input to the initial medical validation model, and unified validation categories output from the initial medical validation model, the unified validation categories indicating a plurality of predetermined validation actions to be performed on the medical data; and wherein the respective local training datasets are processed by mapping local item names used in the historical medical data to the unified item names, and mapping the local validation categories to the unified validation categories.
However, Calcutt teaches an analogous federated learning method that includes defining data formatting and nomenclature standardization constraints for training data input to the model and harmonizing local training datasets in accordance with the defined constraints (Calcutt [0069], [0077], [0103], considered to meet the claim interpretation outlined in para. 10 above). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the model definition information and training process of the combination to include defining unified feature names/formats for the local training datasets and performing mapping of the local training datasets to the unified feature names/formats as in Calcutt in order to improve the training process by ensuring standardization and cohesion of the local training datasets while still maintaining privacy of the local datasets (as suggested by Calcutt [0077] & [0103]).
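As a minimal illustration of the harmonization concept discussed above (the mapping table and item names below are hypothetical, not taken from Calcutt), mapping local item names in a record to unified item names might look like:

```python
# Hypothetical mapping from each site's local item names to unified names
UNIFIED_NAMES = {"hb": "hemoglobin", "HGB": "hemoglobin", "wbc_count": "wbc"}

def harmonize(record):
    """Rename local item names in one data record to the unified item names;
    names already unified (or unknown) pass through unchanged."""
    return {UNIFIED_NAMES.get(k, k): v for k, v in record.items()}

harmonize({"hb": 13.5, "wbc_count": 6.2})
# -> {"hemoglobin": 13.5, "wbc": 6.2}
```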
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Patil and Anwar as applied to claims 1 and 3 above, and further in view of Fang et al. (Reference U on the PTO-892 mailed 7/15/2025).
Claim 5
Patil in view of Anwar teaches the method of claim 3, but the present combination fails to explicitly disclose wherein the definition information further indicates a scaled value range for an item in medical data input to the initial medical validation model, and wherein the respective local training datasets are processed by mapping values of the item in the historical medical data into values within the scaled value range.
However, Fang teaches an analogous federated learning method that includes defining data value scaling ranges for training data input to the model and transforming local training datasets in accordance with the defined scaling ranges (Fang Fig. 5, third paragraph of “5 Experiment” on Pg 8, noting features are transformed via normalization scaling to values between [-k, k] or [-1, 1] in federated learning training datasets in accordance with defined transformations). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the model definition information and training process of the combination to include defining scaled value ranges for items of the local training datasets and performing normalization transformations to scale the features as in Fang because appropriate transformations help scale the features and significantly improve the performance of learning models (as suggested by Fang first paragraph of “1 Introduction” on Pg 1).
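For illustration only (a generic min-max sketch, not Fang's specific transformation), mapping a feature's values into a defined scaled range such as [-1, 1] can be written as:

```python
def scale_to_range(values, lo=-1.0, hi=1.0):
    """Linearly map a feature's values into the scaled range [lo, hi]."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [(lo + hi) / 2.0] * len(values)   # constant feature
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

scale_to_range([0.0, 5.0, 10.0])   # -> [-1.0, 0.0, 1.0]
```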
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Patil and Anwar as applied to claims 1 and 2 above, and further in view of Krishnapuram et al. (US 20140088989 A1).
Claim 7
Patil in view of Anwar teaches the method of claim 2, and the combination further teaches wherein the definition information indicates an item in medical data input to the initial medical validation model, a value of the indicated item being unavailable from historical medical data in a local training dataset, and wherein the local training datasets are processed by filling in a value of the indicated item (Patil [0089], noting each clinical site checks for local training data meeting the clinical requirements and may use a sample eliminator-augmenter to add additional data, considered equivalent to the clinical requirements (i.e. definition information) indicating an item in medical data for filling in).
In summary, the combination teaches a method for performing federated learning on local datasets based on clinical requirements, as well as augmenting the training data to add additional data. However, it fails to explicitly disclose that the additional filled-in value is a predetermined value for an indicated item of medical data. Krishnapuram teaches an analogous federated learning method that includes site-specific data imputation to augment the training data when certain medical item values are missing by filling in a predetermined value (Krishnapuram [0091]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the training data augmentation methods of the combination to include data imputation of a predetermined value for a certain type of missing data as in Krishnapuram in order to leverage the localized data distributions of medical items specific to each clinical site when filling in missing data (as suggested by Krishnapuram [0091]).
Claim 8
Patil in view of Anwar and Krishnapuram teaches the method of claim 7, and the combination further teaches wherein the predetermined value comprises either one of an average value of a reference value range of the indicated item and a median value of available values of the indicated item in historical medical data generated in other medical tests (Krishnapuram [0091], noting the imputed value may be an average or median value known from the site-specific localized dataset).
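As an illustrative sketch of the imputation concept addressed above (generic mean/median imputation, not Krishnapuram's site-specific implementation), filling missing values of an item with a predetermined average or median of the available values can be expressed as:

```python
import statistics

def impute(values, strategy="median"):
    """Fill missing entries (None) with a predetermined value computed from
    the available values of the item: their mean or their median."""
    present = [v for v in values if v is not None]
    fill = statistics.mean(present) if strategy == "mean" else statistics.median(present)
    return [fill if v is None else v for v in values]

impute([7.0, None, 9.0])   # -> [7.0, 8.0, 9.0]
```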
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chu et al. (US 20210073678 A1), Zhu et al. (US 20220114475 A1), and Chang et al. (US 20230186098 A1) describe iterative federated learning processes involving parameter gradient exchanges between nodes that continue until a convergence condition has been satisfied.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAREN A HRANEK whose telephone number is (571)272-1679. The examiner can normally be reached M-F 8:00-4:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAREN A HRANEK/ Primary Examiner, Art Unit 3684