DETAILED ACTION
This Office Action is sent in response to Applicant’s Communication received 1/25/2023 for application number 18/159,246.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
With respect to subject matter eligibility under 35 U.S.C. § 101, the Examiner notes that although the independent claims recite a mental process (obtaining synthetic data and classifying models into two groups, steps that can be performed mentally), the claims integrate the abstract idea into a practical application. In particular, the abstract idea in combination with additional elements in the claims (models of local data sources are obtained on systems that do not have access to each other's local data sources, and federated learning is performed with a first group of models on the first systems that host the first group of models) provides a technical improvement to federated machine learning. These additional elements are also not mere data gathering or mere instructions to apply the exception.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi et al. (US 2022/0201021 A1) in view of Szeto et al. (US 2018/0018590 A1).
In reference to claim 1, Joshi teaches a method for providing computer implemented services using a final inference model (method for building an ML model using federated learning, para. 0017), the method comprising: obtaining inference models using local data sources, the inference models being obtained by data processing systems that have access to respective portions of the local data sources (local models are trained using local data, para. 0029), and each of the data processing systems not having access to more than one of the respective portions of the local data sources (see para. 0013-16: in the disclosed federated learning, each client keeps its own local data private); … classifying, using … data, the inference models into a first group and a second group (using ground truth data, a local model is classified as either satisfying an accuracy threshold or as falling below the accuracy threshold, para. 0064-65, fig. 9); performing, using the first group of the inference models, federated learning across a portion of the data processing systems that host the first group of the inference models to obtain the final inference model (federated learning is used to create the global model only with models that meet the accuracy threshold, para. 0064); and using the final inference model to provide the computer implemented services (global model can be deployed for use, para. 0069).
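For illustration only, the flow mapped from Joshi (classify local models by an accuracy threshold, then aggregate only the passing group into a global model) can be sketched as follows. This is a minimal sketch, not code from either reference; the accuracy evaluation function and the FedAvg-style element-wise weight averaging are assumptions made for the illustration.

```python
import numpy as np

def classify_models(models, eval_fn, threshold):
    """Split local models into a first (passing) group and a second (failing)
    group based on whether eval_fn(model) meets an accuracy threshold."""
    first, second = [], []
    for m in models:
        (first if eval_fn(m) >= threshold else second).append(m)
    return first, second

def federated_average(weight_sets):
    """FedAvg-style aggregation: element-wise mean of the layer weights of
    the first-group models to produce the final (global) model."""
    return [np.mean(layer, axis=0) for layer in zip(*weight_sets)]

# Hypothetical local models: each carries an accuracy score and one weight layer.
models = [{"acc": 0.9, "w": [np.array([1.0, 2.0])]},
          {"acc": 0.5, "w": [np.array([3.0, 4.0])]},
          {"acc": 0.8, "w": [np.array([3.0, 6.0])]}]
first, second = classify_models(models, lambda m: m["acc"], 0.7)
final_model = federated_average([m["w"] for m in first])
```

Under this sketch the 0.5-accuracy model is excluded, consistent with Joshi's para. 0064-65 as mapped above, and only the remaining models contribute to the global weights.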
However, Joshi does not explicitly teach obtaining, based on the local data sources, synthetic data that is representative of features of the local data sources but cannot be used to obtain the local data sources.
Szeto teaches obtaining, based on the local data sources, synthetic data that is representative of features of the local data sources but cannot be used to obtain the local data sources (synthetic proxy data can be created for use in training on non-private computer, para. 0018, 0045-50).
It would have been obvious to one of ordinary skill in the art, having the teachings of Joshi and Szeto before the earliest effective filing date, to modify the classification as disclosed by Joshi to include synthetic data as taught by Szeto.
One of ordinary skill in the art would have been motivated to modify the classification of Joshi to include the synthetic data of Szeto because it can help better share data while respecting privacy (Szeto, para. 0005) and further allow for verification when there is not pre-labeled ground-truth data available.
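For illustration only, one way to obtain data that is "representative of features of the local data sources but cannot be used to obtain the local data sources" (the limitation mapped to Szeto's synthetic proxy data) is to sample fresh records from per-feature summary statistics of the local data. The sketch below assumes Gaussian per-feature marginals; it is not the specific technique of Szeto, only a hedged example of the concept.

```python
import numpy as np

def synthesize(local_data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic records from per-feature Gaussian marginals fitted to
    the local data. The output reflects feature-level statistics (mean and
    spread) but contains no original record, so the local rows cannot be
    recovered from it."""
    rng = np.random.default_rng(seed)
    mu = local_data.mean(axis=0)
    sigma = local_data.std(axis=0)
    return rng.normal(mu, sigma, size=(n_samples, local_data.shape[1]))

# Hypothetical private local data: 3 records, 2 features.
local = np.array([[0.0, 10.0], [2.0, 14.0], [4.0, 18.0]])
synth = synthesize(local, 2000, seed=1)
```

The synthetic set can then be shared for the classification step without exposing any client's private records, consistent with the privacy motivation cited from Szeto, para. 0005.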
In reference to claim 2, Joshi teaches the method of claim 1, wherein the final inference model is not based on the second group of the inference models (models below accuracy threshold are not used, para. 0065).
In reference to claim 3, Joshi teaches the method of claim 2, wherein obtaining the inference models using the local data sources comprises: by a data processing system of the data processing systems: obtaining training data using a respective portion of the local data sources; and training, using the training data, an inference model of the inference models (local models are trained using local data, para. 0029).
In reference to claim 4, Joshi teaches the method of claim 3, wherein classifying, using the synthetic data, the inference models into the first group and the second group comprises: by the data processing system of the data processing systems: ingesting, by the inference model, a feature of a record of the [verification] data to obtain an output; making a comparison between the output to an average output to identify a level of difference, the average output being based on outputs generated by the inference models from ingestion of the feature of the record; and placing the data processing system in the first group or the second group based on the level of the difference (an accuracy metric is used to classify, para. 0065; the metric can be mean square error, para. 0027, which is a level of difference from the average output).
However, Joshi does not explicitly teach synthetic data.
Szeto teaches synthetic data (synthetic proxy data can be created for use in training on non-private computer, para. 0018, 0045-50).
It would have been obvious to one of ordinary skill in the art, having the teachings of Joshi and Szeto before the earliest effective filing date, to modify the classification as disclosed by Joshi to include synthetic data as taught by Szeto.
One of ordinary skill in the art would have been motivated to modify the classification of Joshi to include the synthetic data of Szeto because it can help better share data while respecting privacy (Szeto, para. 0005) and further allow for verification when there is not pre-labeled ground-truth data available.
In reference to claim 5, Joshi teaches the method of claim 4, wherein placing the data processing system in the first group or the second group based on the level of the difference comprises: making a determination regarding whether the level of difference exceeds a difference threshold; in a first instance of the determination where the level of difference exceeds the threshold: placing the data processing system in the first group; and in a second instance of the determination where the level of difference is within the threshold: placing the data processing system in the second group (if accuracy metric meets threshold, the model is put into the first group, otherwise the model is put in second group, para. 0064-65, fig. 9).
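For illustration only, the classification read onto claims 4-5 (each model ingests the same record, its output is compared to the average output across all models, and the level of difference is tested against a difference threshold) can be sketched as below. The squared-deviation metric is an assumption drawn from Joshi's mention of mean square error; the "exceeds threshold goes to the first group" placement follows the claim language as quoted above.

```python
import numpy as np

def place_in_groups(outputs, threshold):
    """Compare each model's output on a shared record to the average output;
    place a model (by index) in the first group if its squared deviation
    from the average exceeds the difference threshold, otherwise in the
    second group, per claim 5 as quoted."""
    outputs = np.asarray(outputs, dtype=float)
    avg = outputs.mean()
    first, second = [], []
    for i, out in enumerate(outputs):
        diff = (out - avg) ** 2  # level of difference from the average output
        (first if diff > threshold else second).append(i)
    return first, second

# Hypothetical outputs from three models on the same record.
first, second = place_in_groups([1.0, 1.0, 4.0], 2.0)
```

Here the average output is 2.0; the third model's squared deviation (4.0) exceeds the threshold while the others' (1.0 each) do not.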
In reference to claim 6, Joshi does not explicitly teach the method of claim 5, wherein performing the federated learning comprises: exchanging learning data with the portion of the data processing systems; and obtaining the final inference model using the learning data.
Szeto teaches the method of claim 5, wherein performing the federated learning comprises: exchanging learning data with the portion of the data processing systems; and obtaining the final inference model using the learning data (federated learning can be performed with shared training data, para. 0018, 0045-50).
It would have been obvious to one of ordinary skill in the art, having the teachings of Joshi and Szeto before the earliest effective filing date, to modify the federated learning as disclosed by Joshi to include the exchange of learning data as taught by Szeto.
One of ordinary skill in the art would have been motivated to modify the federated learning of Joshi to include the shared learning data of Szeto because it can help better share data while respecting privacy (Szeto, para. 0005).
In reference to claim 7, Joshi teaches the method of claim 6, wherein using the final inference model to provide the computer implemented services comprises: distributing the final inference model to the data processing systems; and generating, using copies of the final inference model that are local to the data processing systems, inference using new data from the respective portions of the local data sources (global model can be deployed for use, which would entail clients using the model to make inferences on new data, para. 0069).
In reference to claim 8, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 1 and is therefore rejected under a similar rationale.
In reference to claim 9, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 2 and is therefore rejected under a similar rationale.
In reference to claim 10, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 3 and is therefore rejected under a similar rationale.
In reference to claim 11, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 4 and is therefore rejected under a similar rationale.
In reference to claim 12, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 5 and is therefore rejected under a similar rationale.
In reference to claim 13, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 6 and is therefore rejected under a similar rationale.
In reference to claim 14, this claim is directed to a non-transitory machine-readable medium associated with the method claimed in claim 7 and is therefore rejected under a similar rationale.
In reference to claim 15, this claim is directed to a system associated with the method claimed in claim 1 and is therefore rejected under a similar rationale.
In reference to claim 16, this claim is directed to a system associated with the method claimed in claim 2 and is therefore rejected under a similar rationale.
In reference to claim 17, this claim is directed to a system associated with the method claimed in claim 3 and is therefore rejected under a similar rationale.
In reference to claim 18, this claim is directed to a system associated with the method claimed in claim 4 and is therefore rejected under a similar rationale.
In reference to claim 19, this claim is directed to a system associated with the method claimed in claim 5 and is therefore rejected under a similar rationale.
In reference to claim 20, this claim is directed to a system associated with the method claimed in claim 6 and is therefore rejected under a similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. References C–G and U–W generally teach background information on federated learning and privacy.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T. Chiusano whose telephone number is (571)272-5231. The examiner can normally be reached M-F, 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW T CHIUSANO/Primary Examiner, Art Unit 2144