DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 12/24/2025 have been entered.
Claims 1-10 remain pending in the application.
The amendments filed 12/24/2025 are sufficient to overcome each and every objection previously set forth in the Non-Final Office Action mailed 09/28/2025. The objections have been withdrawn.
The amendments filed 12/24/2025 are sufficient to overcome the 112 rejections previously set forth in the Non-Final Office Action mailed 09/28/2025. The rejections have been withdrawn.
Claim Objections
Claim 4 is objected to because of the following informalities: “The machine learning apparatus according to claim 1, the n machine learning models…the common model is trained…” should be “The machine learning apparatus according to claim 1, wherein the n machine learning models…wherein the common model is trained…”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Nasr et al. ("Machine Learning with Membership Privacy using Adversarial Regularization") (Jul. 2018), hereafter Nasr, in view of Shokri et al. ("Membership Inference Attacks Against Machine Learning Models"), hereafter Shokri.
Regarding claim 1, Nasr discloses:
A machine learning apparatus comprising: at least one processor; and memory storing instructions, wherein the instructions, when executed by the at least one processor, cause the machine learning apparatus to (Nasr, page 6, left column, penultimate paragraph, lines 1-3 “we apply our method to several different classification tasks using various neural network structures. We implemented our method using Pytorch”),
classify input data with a classification value (Nasr, Table 1 and Figures 1-2 teach a classifier f that classifies input data with an output classification value f(x)),
train … machine learning models using training data determined based on the classification value of the input data (Nasr, Table 2 and Figures 1-2 teach inference model h as the machine learning model trained using training data determined based on the classification value of the input data),
perform inference using the … machine learning models based on the classification value of the input data (Nasr, Figures 1-2 teach the inference model to perform inference based on the classification value of the input data, i.e., f(x) and/or f(x’)),
wherein the instructions, when executed by the at least one processor, cause the machine learning apparatus to: perform inference based on the input data using a first machine learning model … when the input data is classified as a first classification value (Nasr, Tables 1-2, Figure 1, and page 2, right column, paragraph 3, lines 9-13 “The output reflects how f classifies each input into different classes. Each element of an output y ∈ Y is a score vector that shows the relative association of any input to different classes. All elements of a vector y are in range [0, 1]” and page 4, left column, paragraph 2, lines 1-2 “Let h be the inference model h … For any data point (x, y) and the model’s prediction vector f (x), it outputs…” teaches the inference model to perform inference based on the input data when the input data is classified as a first classification value, i.e. a class prediction of the input),
train, using the input data as the training data, … machine learning models … when the input data is classified as being the first classification value (Nasr, page 5, right column, paragraph 1, lines 1-3 “In each epoch of training, the two models f and h are alternatively trained to find their best responses against each other through solving the nested optimizations in (7).” teaches training the machine learning models using the input data as the training data when the input data is classified as being the first classification value).
While Nasr discloses training a machine learning model using training data determined based on the classification value of the input data and performing inference based on the input data using a first machine learning model, Nasr does not disclose n (more than two) such models.
Shokri discloses:
train n (n is an integer greater than 2) machine learning models using training data… (Shokri, Fig. 2 teaches training shadow models as n machine learning models trained using training data),
perform inference … using a first machine learning model of the n machine learning models … (Shokri, Fig. 3 teaches performing inference using a first model from the plurality of shadow models).
Nasr and Shokri are analogous art because they are from the same field of endeavor, adversarial learning and machine learning models.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include train n (n is an integer greater than 2) machine learning models using training data, and perform inference … using a first machine learning model of the n machine learning models …, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5).
While Nasr discloses performing inference using the … machine learning models based on the classification value of the input data, Nasr does not disclose performing inference using the n machine learning models.
Shokri discloses:
perform inference using the n machine learning models (Shokri, Fig. 3 teaches performing inference using the n shadow models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include perform inference using the n machine learning models, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5).
While Nasr teaches training, using the input data as the training data, … machine learning models … when the input data is classified as being the first classification value, Nasr does not teach training a plurality of the n machine learning models other than the first machine learning model.
Shokri discloses:
train, using … input data … a plurality of the n machine learning models other than the first machine learning model when the input data is classified … (Shokri, Fig. 2 and page 5, right column, paragraph 2, lines 1-4 “(1) search, using a hill-climbing algorithm, the space of possible data records to find inputs that are classified by the target model with high confidence; (2) sample synthetic data from these records.” teaches training shadow models as a plurality of models other than a first model, using sampled training data which have been classified with high confidence).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include train, using the input data, … a plurality of the n machine learning models other than the first machine learning model when the input data is classified, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5).
Regarding claim 2, Nasr, in view of Shokri, discloses the machine learning apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated). Nasr further discloses:
wherein the classification value is deterministic with respect to the input data (Nasr, page 2, right column, paragraph 3 [equation reproduced as an image in the original record] teaches the classification value to be deterministic with respect to the input data).
Regarding claim 3, Nasr, in view of Shokri, discloses the machine learning apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated). Nasr further discloses:
wherein the input data is classified into … classification values, and the … classification values each appear with substantially the same probability (Nasr, page 6, left column, paragraph 4, lines 2-5 “This means that for a fixed classification loss and with enough learning capacity for model f, the training algorithm minimizes the privacy loss by making the two distributions Pf* and P’f* indistinguishable” teaches the classifier to output classification values which appear with substantially the same probability as each other).
While Nasr discloses wherein the input data is classified into … classification values, and the … classification values each appear with substantially the same probability, Nasr does not disclose that there are n such values.
Shokri discloses:
input data is classified into n classification values, and the n classification values each appear with substantially the same probability (Shokri, Fig. 3, prediction sets 1-k teach that input data is classified into k classification values that each appear with substantially the same probabilities as they add up to 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include input data is classified into n classification values, and the n classification values each appear with substantially the same probability, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5).
Regarding claim 4, Nasr, in view of Shokri, discloses the machine learning apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated). Nasr further discloses:
… common model is trained using the input data as the training data when the input data is classified as being the first classification value (Nasr, Figures 1-2 and Algorithm 1 teach jointly training the model as a common model, using the input data as the training data when the input data is classified as the first classification value).
While Nasr discloses … common model is trained using the input data as the training data when the input data is classified as being the first classification value, Nasr does not disclose that the n machine learning models include a common model having common parameters among the n machine learning models.
Shokri discloses:
the n machine learning models includes a common model having common parameter among the n machine learning models (Shokri, Fig. 2 and Fig. 2 caption “Training shadow models using the same machine learning platform as was used to train the target model” teaches the shadow models to have a common model having common parameters),
the common model is trained using … input data … (Shokri, Fig. 2 teaches training the target model using input data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include the n machine learning models includes a common model having common parameter among the n machine learning models … the common model is trained using … input data …, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5).
Claims 5-7 and 8-10 are substantially similar to claims 1-3, respectively, and are thus rejected on the same basis as claims 1-3.
Response to Arguments
Applicant's arguments filed 12/24/2025 have been fully considered with regards to the 35 U.S.C. 101 rejection, and they are persuasive. The rejection has been withdrawn.
Applicant's arguments filed 12/24/2025 have been fully considered with regards to the 35 U.S.C. 102/103 rejection, but they are not persuasive.
The applicant asserts on page 13 of the remarks: “Nasr and Shokri, alone or in combination, fail to disclose the above features of claim 1. For example, the cited references fail to disclose ‘perform inference based on the input data using a first machine learning model of the n machine learning models when the input data is classified as a first classification value, and train, using the input data as the training data, a plurality of the n machine learning models other than the first machine learning model when the input data is classified as being the first classification value.’” The Examiner respectfully disagrees. Tables 1-2, Figures 1-2, and Algorithm 1 of Nasr disclose performing inference based on the input data using a first machine learning model (inference model h) … when the input data is classified as a first classification value (f(x)), and training (Algorithm 1), using the input data as the training data, … machine learning models … when the input data is classified as being the first classification value. Under the broadest reasonable interpretation (BRI), the first classification value, as recited in the claims, is interpreted as any classified value of the input data. Nasr teaches the first classification value as any output prediction value determined by classifier f, and thus these values are used by the inference model h in the manner disclosed above. However, Nasr does not disclose training a plurality of the n machine learning models other than the first machine learning model. Shokri discloses training, using … input data …, a plurality of the n machine learning models other than the first machine learning model when the input data is classified … (Fig. 2 and page 5, right column, paragraph 2, lines 1-4 “(1) search, using a hill-climbing algorithm, the space of possible data records to find inputs that are classified by the target model with high confidence; (2) sample synthetic data from these records.”) by training shadow models as a plurality of models other than a first model (Fig. 2), using sampled training data which have been classified with high confidence. The Examiner notes that while the sampled data is referred to as synthetic data, it is still obtained from classified input data; Shokri’s step (2) further processes this input data before it is used as a training dataset for the shadow models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Nasr to include training, using … input data …, a plurality of the n machine learning models other than the first machine learning model when the input data is classified …, based on the teachings of Shokri. One of ordinary skill in the art would have been motivated to make this modification in order to generalize the model and improve its predictive power, as suggested by Shokri (page 14, left column, paragraph 4, lines 4-5). Thus, Nasr, in view of Shokri, discloses the inferencing and training steps recited in the amended claims.
Claims 5 and 8 are substantially similar to claim 1, and thus are rejected on the same basis.
The claims that depend from the rejected independent claims do not overcome the deficiencies of those independent claims and are rejected for at least the same reasons.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141