Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This action is in response to amendments filed November 28, 2025, in which Claims 21, 23-29, 31-34, and 36-38 have been amended. Claims 22, 39, and 40 have been cancelled. The amendments have been entered, and Claims 21 and 23-38 are currently pending.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are “communication unit for performing communication” and “storage unit configured to store data” in Claim 21 and its dependent claims.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 27 and 33 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Specifically, the claims recite “wherein a label list of the basic intelligence model is selected based on similarity.” This limitation appears nowhere in the specification, nor in the claims as originally filed; the disclosed invention only ever selects the basic intelligence model based on the similarities of the label lists, but never directly selects a label list.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 21 and 23-38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitations “communication unit” and “storage unit” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. No hardware structure is linked to the units in the specification; while [00225] states that the system may comprise processors, there is no indication of any specific structure for the recited units, merely descriptions of their functions (see [0026]-[0035] and [00210]-[00221]). Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim 31 recites the limitation “the task identifier.” There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the claim will be interpreted as if it had read “a task identifier.”
Claim 37 recites the limitation “the task identifier.” There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the claim will be interpreted as if it had read “a task identifier.”
Claim 37 also recites the limitations “the number thereof” and “the number of the label list.” There is insufficient antecedent basis for these limitations in the claim.
Dependent claims are rejected for inheriting and not curing the indefiniteness of a parent claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21, 23, 26-30, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Atrey et al., “Preserving Privacy in Personalized Models for Distributed Model Services,” in view of Ra et al., “A Federated Framework for Fine-Grained Cloud Access Control for Intelligent Big Data Analytic by Service Providers,” and further in view of Dirac, US PG Pub 2015/0379424, You et al., “LogME: Practical Assessment of Pre-trained Models for Transfer Learning,” and Klyutsev, US PG Pub 2021/0374635.
Regarding Claim 21, Atrey teaches an intermediate server comprising: a communication unit for performing communication with … an upper-level server (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device” where “cloud” denotes the upper-level server and “device” denotes an intermediate server, i.e., between the user and the cloud; and “downloaded” requires a communication unit performing communication); a storage unit configured to store data for generating an intelligence model (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “once a general ML model has been trained, the next phase personalizes this model for each user … using a small amount of training data for each new user …. Retaining all private data on local user-owned devices enhances privacy” where “retaining” requires a storage unit); … and one or more processors configured to generate an intelligence model (Atrey, pg. 1, 2nd column, “we present Pelican, an end-to-end system for training and deploying personalized ML models … performing sensitive personalized training on a user’s device” such as pg. 1, 1st column, introduction, “smartphones”), to retrieve a basic intelligence model corresponding to the intelligence model and … to adjust a received intelligence model using private raw data (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”).
Atrey further teaches prohibition of transmitting private raw data to an upper-level server (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “Retaining all private data on local user-owned devices enhances privacy”) as well as permission to transmit raw data to an upper-level server (Atrey, pg. 4, 1st column, 2nd paragraph, “the general model takes as input the trajectories of many users” & 2nd column, 4th paragraph, “users who allow their data to be used to train a multi-user ML model”).
While Atrey teaches that data can be designated by particular users to be public or private, Atrey does not specifically teach a data comment including a data disclosure scope, i.e., labels by which users denote which of those two categories their data falls into. Ra, however, teaches a data comment including a data disclosure scope which allows a user to designate the disclosure scope of their own data (Ra, Abstract, “data owners manage the access privilege of service providers over their … data” & Ra, pg. 47085, 2nd column, last paragraph, “users control the use of data by service providers by labeling the scope of use of information”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to label the data disclosure scope of the personal data of Atrey (as local or global) as Ra labels the disclosure scope of their data. The motivation to do so is “to support fine-grained access control for a federated outsourcing cloud” (Ra, Abstract), such as the one of Atrey, which requires data from different devices in order to train the general model (Atrey, pg. 4, 1st column, 2nd paragraph, “the general model takes as input the trajectories of many users”) – that is, to allow users to control access to their personal data.
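For purposes of illustrating the above mapping only, the following is a minimal Python sketch of raw data accompanied by a data comment whose disclosure scope, labeled “local” or “global” by the data owner, gates transmission to an upper-level server. All identifiers are hypothetical; this is not drawn from any cited reference’s actual implementation.

    from dataclasses import dataclass

    @dataclass
    class DataComment:
        # Disclosure scope labeled by the data owner, per the Ra-style
        # labeling applied to Atrey's public/private split: "local"
        # prohibits transmission to the upper-level server, "global"
        # permits it.
        disclosure_scope: str  # "local" or "global"

    @dataclass
    class RawDataRecord:
        payload: bytes
        comment: DataComment

    def records_permitted_for_upload(records):
        # Only records the user has scoped as "global" may leave the
        # intermediate server for the upper-level server.
        return [r for r in records if r.comment.disclosure_scope == "global"]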
While Atrey teaches generating an intelligence model by adjusting a general/basic intelligence model, Atrey does not teach doing so based on an intelligence requirement profile, particularly not a profile comprising the recited features. However, Dirac teaches an intelligence requirement profile and wherein the communication unit is configured to receive, from a user terminal, the intelligence requirement profile (Dirac, Fig. 9a, element 901 “Receive a request from a client via a MLS programmatic interface to perform an operation (e.g. create …) a machine learning entity of a specified type (e.g. a … model)” where “of a specified type” denotes a requirement and the “programmatic interface” on the client device/intermediate server provides a user terminal, also see Fig. 2, element 204, “MLS Software Development Kit” on the client/intermediate server) wherein the intelligence requirement profile includes a task specification corresponding to an intelligence model generation request (Dirac, Fig. 9a, element 901 “Receive a request from a client”) and raw data used to train the intelligence model (Dirac, [0062], “The creation request may specify an address or location from which data records can be retrieved” & [0026], “data sources … location or objects from which input records for machine learning can be obtained …. Input data for training models”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the MLS service request features of Dirac in the personalized training system of Atrey/Ra. The motivation to do so is so that clients can request and specify the model that they need (Dirac, [0038], “requests 210 may be submitted”).
The Atrey/Ra/Dirac combination does not explicitly teach an answer corresponding to the raw data nor a task identifier indicating a task performed by the intelligence model, but You teaches these limitations (You, pg. 5, 2nd column, Algorithm 1, “Target dataset D” including the input data and the desired labels y, where predicting the correct answer y for a particular x is indicating a task performed by the intelligence model). You further teaches to retrieve, based on the task identifier, a basic intelligence model from the storage unit (You, Abstract, “assessing pre-trained models for the target task and selecting best ones from the model zoo” using Algorithm 1, that is, based on the task identifier). It would have been obvious to one of ordinary skill in the art to use You’s method of pre-trained model selection, to select a model to be fine-tuned/transfer-learned using Atrey’s personal device data. The motivation to do so is “compared with brute-force fine-tuning, LogME brings … speedup … and requires only 1% memory footprint. It outperforms prior methods by a large margin” (You, Abstract) – that is, by selecting an appropriate model before performing transfer learning, better personalized models can be found, and be found more efficiently.
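As a schematic illustration of the model-zoo selection mapped above (a sketch only: the scoring function below is a simple least-squares stand-in for You’s LogME log-evidence computation, and the zoo interface is hypothetical):

    import numpy as np

    def fit_score(features, labels):
        # Stand-in for LogME: fit one-hot labels y on the frozen features
        # f_i by least squares and return the negated residual error.
        # You's actual algorithm evaluates a log marginal evidence in
        # closed form rather than this simple fit.
        onehot = np.eye(int(labels.max()) + 1)[labels]
        pred = features @ np.linalg.lstsq(features, onehot, rcond=None)[0]
        return -float(np.sum((pred - onehot) ** 2))

    def select_basic_model(zoo, inputs, labels):
        # zoo maps a model name to a frozen feature extractor; the
        # pre-trained model whose features best predict the target labels
        # is the one retrieved for subsequent transfer learning.
        scores = {name: fit_score(extract(inputs), labels)
                  for name, extract in zoo.items()}
        return max(scores, key=scores.get)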
Finally, the Atrey/Ra/Dirac/You combination assumes that all of the general basic models are available to be fine-tuned, and thus does not teach to control the communication unit to send, to the upper-level server, a request for retrieving the basic intelligence model when the retrieval (from the data storage unit) fails, wherein the communication unit is configured to receive an intelligence model generated based on the intelligence requirement profile from the upper-level server in the case when the local device does not already have such a model. However, Klyutsev, in combination with Atrey/Ra/Dirac/You, teaches these limitations (Klyutsev, Fig. 5 & [0039-0040], “the device 104-3, which lacks the model 124 and repository 120, send a request of the first type to the server 128, which responds with … copy of the model 124” that is, the local devices use either “a local copy, or a server copy” of the model – a local copy when it is available, and a server copy when the local copy is not available). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Klyutsev (of using a local model when it is available, or a downloaded model from a server, when it is not) in the Atrey/Ra/Dirac/You combination, for obtaining a basic model to adjust using the local private data. The motivation to do so is so that the local devices can use the appropriate selected basic model, whether that particular model is already on that local device or not.
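The local-copy/server-copy behavior relied upon above can be summarized by the following sketch (hypothetical interfaces; illustrative of the mapping, not of Klyutsev’s actual code):

    def retrieve_basic_model(model_id, local_store, upper_level_server):
        # Use the local copy when the storage unit already holds the
        # selected basic model; when that retrieval fails, request a copy
        # from the upper-level server and cache it for later use.
        model = local_store.get(model_id)
        if model is None:
            model = upper_level_server.fetch(model_id)
            local_store[model_id] = model
        return model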
Regarding Claim 23, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 21 teaches the intermediate server of Claim 21 (and thus the rejection of Claim 21 is incorporated). Atrey further teaches public raw data to be transmitted to at least one upper-level server (Atrey, pg. 4, 1st column, 2nd paragraph, “the general model takes as input the trajectories of many users”) and private raw data not to be transmitted to the upper-level server (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “Retaining all private data on local user-owned devices enhances privacy”). However, Atrey does not teach using the data disclosure scope to divide the raw data into these two kinds of levels of privacy. Ra does teach this limitation (Ra, pg. 47085, 2nd column, last paragraph, “users control the use of data by service providers by labeling the scope of use of information”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to label the data disclosure scope of the personal data of the Atrey/Dirac combination, including allowing uploading or not. The motivation to do so is “to support fine-grained access control” (Ra, Abstract) – that is, to allow users to control access to their personal data.
Regarding Claim 26, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 21 teaches the intermediate server of Claim 21 (and thus the rejection of Claim 21 is incorporated). Atrey further teaches to generate the intelligence model based on the basic intelligence model (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”).
Regarding Claim 27, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 26 teaches the intermediate server of Claim 26 (and thus the rejection of Claim 26 is incorporated). The combination, via You’s selection of a basic model already incorporated into the combination, further teaches wherein a label list of the basic intelligence model is selected based on similarity to a target label list included in the intelligence model profile (You, Algorithm 1, where loss is a similarity between the target label list of y’s and the label list of the basic intelligence model of f_i; see pg. 4, 2nd column, 3rd paragraph, “estimating the compatibility of the features f_i and labels”).
Regarding Claim 28, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 26 teaches the intermediate server of Claim 26 (and thus the rejection of Claim 26 is incorporated). The combination, via You, has already been shown to teach a target label list included in the intelligence requirement profile (You, pg. 5, 2nd column, Algorithm 1, “Target dataset D” including the input data and the desired labels y). The combination, via Atrey, thus further teaches to: modify a label list of the basic intelligence model to correspond to a target label list (Atrey, pg. 3, 2nd column, last paragraph, “the domain of the multi-user model can differ from the domain of the single user data for next location prediction. For instance, a general mobility prediction model that is trained for New York City will have a different domain from a user who lives in Boston. In this work, we assume that the target single-user domain is a subset of the source multi-user domain … Prior to applying transfer learning, we transform the target data by extending the domain … This simplifies the transfer learning process by equalizing the source and target domains”) and perform training of the modified intelligence model (Atrey, pg. 4, 1st column, 3rd paragraph, “In our work, we re-train and update parameters of the second LSTM layer and linear layer using single user data”).
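The domain equalization relied upon above can be pictured with the following sketch (hypothetical names; a simplified rendering of Atrey’s extension of the single-user label domain to the multi-user source domain before the top layers are re-trained):

    def equalize_domains(source_labels, user_data):
        # Extend the user's (target) label list to match the source
        # domain: labels the user never produced are kept with empty
        # example lists, so source and target domains coincide prior to
        # transfer learning of the top layers.
        return {label: user_data.get(label, []) for label in source_labels}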
Regarding Claim 29, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 28 teaches the intermediate server of Claim 28 (and thus the rejection of Claim 28 is incorporated). Atrey further teaches firstly training using a previously stored dataset; and secondly training using raw data included in the intelligence requirement profile (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “once a general ML model has been trained, the next phase personalizes this model for each user … using a small amount of training data for each new user …. Retaining all private data on local user-owned devices”).
Regarding Claim 30, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 29 teaches the intermediate server of Claim 29 (and thus the rejection of Claim 29 is incorporated). Atrey further teaches wherein the previously stored dataset is generated by selecting data corresponding to the target label list from the storage unit (Atrey, pg. 3, 2nd column, last paragraph, “the domain of the multi-user model can differ from the domain of the single user data for next location prediction. For instance, a general mobility prediction model that is trained for New York City will have a different domain from a user who lives in Boston. In this work, we assume that the target single-user domain is a subset of the source multi-user domain. Assume the … target domain is D_t … Prior to applying transfer learning, we transform the target data by extending the domain … This simplifies the transfer learning process by equalizing the source and target domains” & pg. 3, 2nd column, 3rd paragraph, “historical training data from a single user are used to train each model”).
Regarding Claim 38, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 28 teaches the intermediate server of Claim 28 (and thus the rejection of Claim 28 is incorporated). The combination has not yet been shown to teach, but You teaches, wherein output nodes of the modified intelligence model are generated such that the number thereof is more than the number of the label list of the basic intelligence model (You, pg. 5, 2nd column, 1st paragraph, “If the target problem is a classification task with K classes” & 2nd paragraph indicates K is approximately 1000, while the numbers of labels of the basic models, given on pg. 13, Table 5, are less than 100, e.g. ResNet-50, ResNet-101, etc.; this shows that the output nodes of the modified intelligence model/one-hot classes can be more than the number of the label list/number of classes of the basic intelligence model in the system of You already incorporated into the combination). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to allow the numbers of classes to differ between the basic and personalized models of the invention, as does You. The motivation to do so is that sometimes matching models with different numbers of classes provides better performance (You, pg. 13, Table 5).
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Atrey, in view of Ra, Dirac, You, and Klyutsev, and further in view of Liu et al., “FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models.”
Regarding Claim 24, the Atrey/Ra/Dirac/You/Klyutsev combination of Claim 23 teaches the intermediate server of Claim 23 (and thus the rejection of Claim 23 is incorporated). The combination has not been shown to teach, but Liu teaches, to: delete the private raw data and a data comment corresponding to the private raw data from the intelligence requirement profile (Liu, pg. 1, 2nd column, 3rd paragraph, “Directly delete the target data” including its label/comment); and transmit the intelligence requirement profile to the upper-level server (Liu, pg. 3, 2nd column, Algorithm 1, last line, “Return [update] to the central server”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Liu’s FedEraser method to allow clients to remove the portion of training a global model attributed to their private data, in response to determining that the data should not be shared, in the global-model-with-privacy-designation-capability invention of the Atrey/Dirac/Ra combination. The motivation to do so is to allow “the ‘right to be forgotten’ and countering data poisoning attacks” (Liu, Abstract), that is, allowing data from a client to be scrubbed from the model when it is no longer desired to include training based on that particular data in the model.
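For illustration of the deletion being mapped (a sketch under the assumption that the intelligence requirement profile pairs each raw-data record with its data comment; all field names are hypothetical):

    def scrub_private_data(profile):
        # Delete each private raw-data record together with its
        # corresponding data comment before the intelligence requirement
        # profile is transmitted to the upper-level server.
        kept = [(record, comment) for record, comment in profile["data"]
                if comment["disclosure_scope"] != "local"]
        return {**profile, "data": kept}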
Regarding Claim 25, the Atrey/Ra/Dirac/You/Klyutsev/Liu combination of Claim 24 teaches the intermediate server of Claim 24 (and thus the rejection of Claim 24 is incorporated). Atrey further teaches wherein the communication unit is configured to receive the generated intelligence model based on the intelligence requirement profile from the upper-level server, and adjusts the generated intelligence model using the private raw data (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”).
Claims 31-33 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Dirac, in view of Ra, and further in view of Atrey, You, and Klyutsev.
Regarding Claim 31, Dirac teaches a method for generating an intelligence model, the method being performed by a server (Dirac, title, “Machine Learning Service” & Fig. 1, everything to the right of “MLS programmatic interfaces 161”) and comprising: receiving, from a user terminal, an intelligence model generation request including an intelligence requirement profile (Dirac, Fig. 9a, element 901 “Receive a request from a client via a MLS programmatic interface to perform an operation (e.g. create …) a machine learning entity of a specified type (e.g. a … model)”); generating an intelligence model corresponding to the intelligence requirement profile (Dirac, Fig. 9a, element 910, “Perform operation” with element 901, “perform an operation (e.g. create …) a machine learning entity of a specified type (e.g. a … model)”) wherein the intelligence requirement profile includes a task specification corresponding to an intelligence model request (Dirac, Fig. 9a, element 901 “Receive a request from a client”) and raw data used to train the intelligence model (Dirac, [0062], “The creation request may specify an address or location from which data records can be retrieved” & [0026], “data sources … location or objects from which input records for machine learning can be obtained …. Input data for training models”).
Dirac does not teach explicitly a data comment including a data disclosure scope corresponding to the raw data, but Ra teaches this limitation (Ra, Abstract, “data owners manage the access privilege of service providers over their … data” & Ra, pg. 47085, 2nd column, last paragraph, “users control the use of data by service providers by labeling the scope of use of information”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to label the data disclosure scope of the personal data of Dirac, as does Ra. The motivation to do so is “to support fine-grained access control for a federated outsourcing cloud” (Ra, Abstract), such as the one of Dirac, which already has different security containers for differently-privileged data (Dirac, [0025], “To meet an MLS client’s data security needs, selected data sets … may be restricted to security containers” & [0045], “clients may indicate to the MLS control-plane that they only wish to use resources within a given availability container or a given security container”).
Further, the Dirac/Ra combination is not explicit about what limits the data disclosure scope may place upon the user data. Thus, Dirac/Ra does not explicitly teach, but Atrey teaches, a local data disclosure scope including prohibition of transmitting private raw data to an upper-level server (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “Retaining all private data on local user-owned devices enhances privacy”) as well as a global data disclosure scope including permission to transmit raw data to an upper-level server (Atrey, pg. 4, 1st column, 2nd paragraph, “the general model takes as input the trajectories of many users” & 2nd column, 4th paragraph, “users who allow their data to be used to train a multi-user ML model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use this particular two-class data disclosure scope of Atrey to describe data in the Dirac/Ra combination. The motivation to do so is to differentiate between these two situations, that is, to provide a method allowing users to indicate whether their data should be shared, or not shared, as Atrey implies they can do (Atrey, pg. 4, 2nd column, 4th paragraph, “users who allow their data to be used to train a multi-user ML model”).
Atrey further teaches retrieving a basic intelligence model corresponding to the intelligence model from a storage unit of the server and adjusting the received intelligence model using the private raw data (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adjust a general model with private data specific to that user, as does Atrey, in the combination of Dirac/Ra/Atrey. The motivation to do so is to personalize the model for each user (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “once a general ML model has been trained, the next phase personalizes this model for each user … using a small amount of training data for each new user…. Retaining all private data on local user-owned devices enhances privacy”).
While the Dirac/Ra/Atrey combination teaches generating a model by personalizing a general model on private raw data, the combination does not explicitly teach an answer corresponding to the raw data nor a task identifier indicating a task performed by the intelligence model, but You teaches these limitations (You, pg. 5, 2nd column, Algorithm 1, “Target dataset D” including the input data and the desired labels y, where predicting the correct answer y for a particular x is indicating a task performed by the intelligence model). You further teaches to retrieve, based on the task identifier, a basic intelligence model from the storage unit (You, Abstract, “assessing pre-trained models for the target task and selecting best ones from the model zoo” using Algorithm 1, that is, based on the task identifier). It would have been obvious to one of ordinary skill in the art to use You’s method of pre-trained model selection, to select a model to be fine-tuned/transfer-learned using Atrey’s personal device data in the Dirac/Ra/Atrey combination. The motivation to do so is “compared with brute-force fine-tuning, LogME brings … speedup … and requires only 1% memory footprint. It outperforms prior methods by a large margin” (You, Abstract) – that is, by selecting an appropriate model before performing transfer learning, better personalized models can be found, and be found more efficiently.
Finally, the Dirac/Ra/Atrey/You combination assumes that all of the general basic models are available to be fine-tuned, and thus does not teach to control the communication unit to send, to the upper-level server, a request for retrieving the basic intelligence model when the retrieval (from the data storage unit) fails, wherein the communication unit is configured to receive an intelligence model generated based on the intelligence requirement profile from the upper level server in the case when the local device does not already have such a model. However, Klyutsev, in combination with Dirac/Ra/Atrey/You, teaches these limitations (Klyutsev, Fig. 5 & [0039-0040], “the device 104-3, which lacks the model 124 and repository 120, send a request of the first type to the server 128, which responds with … copy of the model 124” that is, the local devices use either “a local copy, or a server copy” of the model – a local copy when it is available, and a server copy when the local copy is not available). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Klyutsev (of using a local model when it is available, or a downloaded model from a server, when it is not) in the Dirac/Ra/Atrey/You combination, for obtaining a basic model to adjust using the local private data. The motivation to do so is so that the local devices can use the appropriate selected basic model, whether that particular model is already on that local device or not.
Regarding Claim 32, the Dirac/Ra/Atrey/You/Klyutsev combination of Claim 31 teaches the method of Claim 31 (and thus the rejection of Claim 31 is incorporated). The combination has already been shown to teach, via Atrey, to generate the intelligence model based on the basic intelligence model (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”).
Regarding Claim 33, the Dirac/Ra/Atrey/You/Klyutsev combination of Claim 32 teaches the method of Claim 32 (and thus the rejection of Claim 32 is incorporated). The combination, via You, has already been shown to teach a target label list included in the intelligence requirement profile (You, pg. 5, 2nd column, Algorithm 1, “Target dataset D” including the input data and the desired labels y). The combination, via You’s selection of a basic model already incorporated into the combination, further teaches wherein a label list of the basic intelligence model is selected based on similarity to a target label list included in the intelligence model profile (You, Algorithm 1, where loss is a similarity between the target label list of y’s and the label list of the basic intelligence model of f_i; see pg. 4, 2nd column, 3rd paragraph, “estimating the compatibility of the features f_i and labels”).
Regarding Claim 37, Dirac teaches a method for utilizing an intelligence model (Dirac, title, “Machine Learning Service”), the method being performed by a user terminal (Dirac, Fig. 1, “Clients 164”) and comprising: requesting a server to generate an intelligence model corresponding to an intelligence requirement profile (Dirac, Fig. 9a, element 901 “Receive a request from a client via a MLS programmatic interface to perform an operation (e.g. create …) a machine learning entity of a specified type (e.g. a … model)” where “of a specified type” denotes a requirement); receiving the intelligence model; and performing a service using the intelligence model (Dirac, [0065], “In local mode, client may receive executable representations of a specified model that has been trained and validated at the MLS, and the clients may run the model on computing devices of their choice (e.g. at devices located in client networks…”) wherein the intelligence requirement profile includes a task specification corresponding to an intelligence model request (Dirac, Fig. 9a, element 901 “Receive a request from a client”) and raw data used to train the intelligence model (Dirac, [0062], “The creation request may specify an address or location from which data records can be retrieved” & [0026], “data sources … location or objects from which input records for machine learning can be obtained …. Input data for training models”).
Dirac does not teach explicitly a data comment including a data disclosure scope corresponding to the raw data, but Ra teaches this limitation (Ra, Abstract, “data owners manage the access privilege of service providers over their … data” & Ra, pg. 47085, 2nd column, last paragraph, “users control the use of data by service providers by labeling the scope of use of information”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to label the data disclosure scope of the personal data of Dirac, as does Ra. The motivation to do so is “to support fine-grained access control for a federated outsourcing cloud” (Ra, Abstract), such as the one of Dirac, which already has different security containers for differently-privileged data (Dirac, [0025], “To meet an MLS client’s data security needs, selected data sets … may be restricted to security containers” & [0045], “clients may indicate to the MLS control-plane that they only wish to use resources within a given availability container or a given security container”).
Further, the Dirac/Ra combination is not explicit about what limits the data disclosure scope may place upon the user data. Thus, Dirac/Ra does not explicitly teach, but Atrey teaches, a local data disclosure scope including prohibition of transmitting private raw data to an upper-level server (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “Retaining all private data on local user-owned devices enhances privacy”) as well as a global data disclosure scope including permission to transmit raw data to an upper-level server (Atrey, pg. 4, 1st column, 2nd paragraph, “the general model takes as input the trajectories of many users” & 2nd column, 4th paragraph, “users who allow their data to be used to train a multi-user ML model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use this particular two-class data disclosure scope of Atrey to describe data in the Dirac/Ra combination. The motivation to do so is to differentiate between these two situations, that is, to provide a method allowing users to indicate whether their data should be shared, or not shared, as Atrey implies they can do (Atrey, pg. 4, 2nd column, 4th paragraph, “users who allow their data to be used to train a multi-user ML model”).
Atrey further teaches where a basic intelligence model corresponding to the intelligence model from a storage unit of the server is retrieved and wherein the intelligence model is adjusted by using private raw data not transmitted to the upper-level server (Atrey, pg. 7, 2nd column, last paragraph, “the general model is downloaded from the cloud to the device and transfer learning is performed on the device using personal training data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adjust a general model with private data specific to that user, as does Atrey, in the combination of Dirac/Ra/Atrey. The motivation to do so is to personalize the model for each user (Atrey, pg. 7, 2nd column, 2nd-to-last paragraph, “once a general ML model has been trained, the next phase personalizes this model for each user … using a small amount of training data for each new user…. Retaining all private data on local user-owned devices enhances privacy”).
While the Dirac/Ra/Atrey combination teaches generating a model by personalizing a general model on private raw data, the combination does not explicitly teach an answer corresponding to the raw data nor a task identifier, but You teaches these limitations (You, pg. 5, 2nd column, Algorithm 1, “Target dataset D” including the input data and the desired labels y, where predicting the correct answer y for a particular x is indicating a task performed by the intelligence model). You further teaches to retrieve the basic intelligence model based on the task identifier (You, Abstract, “assessing pre-trained models for the target task and selecting best ones from the model zoo” using Algorithm 1, that is, based on the task identifier). It would have been obvious to one of ordinary skill in the art to use You’s method of pre-trained model selection, to select a model to be fine-tuned/transfer-learned using Atrey’s personal device data in the Dirac/Ra/Atrey combination. The motivation to do so is “compared with brute-force fine-tuning, LogME brings … speedup … and requires only 1% memory footprint. It outperforms prior methods by a large margin” (You, Abstract) – that is, by selecting an appropriate model before performing transfer learning, better personalized models can be found, and be found more efficiently.
Finally, the Dirac/Ra/Atrey/You combination assumes that all of the general basic models are available to be fine-tuned, and thus does not teach that when the retrieval (from the data storage unit) fails, i.e., when the local device does not already have such a model, a request for retrieving the basic intelligence model is sent to the upper-level server. However, Klyutsev, in combination with Dirac/Ra/Atrey/You, teaches these limitations (Klyutsev, Fig. 5 & [0039-0040], “the device 104-3, which lacks the model 124 and repository 120, send a request of the first type to the server 128, which responds with … copy of the model 124” that is, the local devices use either “a local copy, or a server copy” of the model – a local copy when it is available, and a server copy when the local copy is not available). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Klyutsev (of using a local model when it is available, or a downloaded model from a server, when it is not) in the Dirac/Ra/Atrey/You combination, for obtaining a basic model to adjust using the local private data. The motivation to do so is so that the local devices can use the appropriate selected basic model, whether that particular model is already on that local device or not.
Claims 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Dirac, in view of Ra, Atrey, You, and Klyutsev, and further in view of Liu.
Regarding Claim 34, the Dirac/Ra/Atrey/You/Klyutsev combination of Claim 32 teaches the method of Claim 32 (and thus the rejection of Claim 32 is incorporated). The combination does not teach, but Liu teaches, wherein the storage unit stores intelligence model metadata corresponding to each intelligence model in the storage unit, and the intelligence model metadata includes a training history (Liu, pg. 2, 2nd-to-last paragraph, “Specifically, during the training process of the global model, the central server retains the updates of the clients, at intervals of regular rounds”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use Liu’s client-data-forgetting machine learning system in the invention of Dirac. The motivation to do so is to allow “the ‘right to be forgotten’ and countering data poisoning attacks” (Liu, Abstract), that is, allowing data from a client to be scrubbed from the model when it is no longer desired to include training based on that particular data in the model.
Regarding Claim 35, the Dirac/Ra/Atrey/You/Klyutsev/Liu combination of Claim 34 teaches the method of Claim 34 (and thus the rejection of Claim 34 is incorporated). The combination has already been shown to teach wherein the training history includes changes in parameter values for training (Liu, pg. 2, 2nd-to-last paragraph, “Specifically, during the training process of the global model, the central server retains the updates of the clients, at intervals of regular rounds”).
Regarding Claim 36, the Dirac/Ra/Atrey/You/Klyutsev/Liu combination of Claim 34 teaches the method of Claim 34 (and thus the rejection of Claim 34 is incorporated). The combination has already been shown to teach, via You, wherein the basic intelligence model is retrieved based on the task identifier and intelligence model metadata (You, Abstract, “assessing pre-trained models for the target task and selecting best ones from the model zoo” where any information about the pre-trained models is intelligence model metadata and the target task is a task identifier).
Response to Arguments
Applicant’s arguments filed November 24, 2025 have been fully considered, but are not fully persuasive.
Applicant’s arguments regarding the 35 U.S.C. 112(f) interpretation of certain claim elements and related 112(b) rejections of Claim 21 and its dependents have been fully considered, but are not fully persuasive. While the “model generation unit” and “adjustment unit” have been amended out of the claim language, the “communication unit” and “storage unit” remain interpreted under 35 U.S.C. 112(f), and thus Claim 21 and its dependents remain rejected.
Applicant’s arguments regarding the 35 U.S.C. 112(a) rejections of Claims 24 and 25 have been fully considered, and, due to amendment, the rejections have been withdrawn. However, the amendments have necessitated new 35 U.S.C. 112(a) rejections of Claims 27 and 33, set forth above.
Applicant’s amendments have corrected other 35 U.S.C. 112(b) issues noted in the previous rejection, but have also introduced additional antecedent basis issues which have required new rejections.
Applicant’s amendments and arguments have overcome the 35 U.S.C. 101 rejections of the previous Office action, and those rejections have been withdrawn.
Applicant’s arguments regarding the prior art rejections of the previous Office action have been fully considered, and are either unpersuasive or moot.
Applicant first argues, on pg. 11 of the response, final paragraph, regarding the limitation “the data disclosure scope is local, indicating prohibition of transmitting the raw data to the upper level server, or global, indicating permission to transmit the raw data to the upper level server” (i.e., the first key feature). Applicant argues with respect to Ra and Dirac, but fails to note that it is Atrey that is relied upon in teaching the specific data disclosure scope of the claims. Atrey teaches that some data is private and should not be transmitted to an upper-level server, and that some users can consent to transmit their data to an upper-level server. Atrey does not teach a data comment indicating the data disclosure scope, that is, Atrey does not teach the specific recited method of allowing a user to label their data as local or global. Ra, however, teaches specifically allowing a user to label their data with the appropriate scope, so Ra and Atrey, taken in combination, clearly teach the recited first key feature.
Applicant’s arguments regarding the second key feature are moot, as newly cited reference Klyutsev teaches an arrangement in which an intermediate server, lacking the correct machine learning model for the task, downloads that model from an upper-level server.
Applicant’s argument regarding the third key feature is unpersuasive, as Atrey explicitly teaches receiving an intelligence model from an upper-level server and adjusting the received model using the private raw data.
Applicant also argues that “the cited references do not disclose a system configuration composed of a user device, an intermediate server, and a higher-level server”. However, the claims never explicitly recite three computing devices – at most, the claims recite a user terminal, i.e., a user interface on a client device; an intermediate server, i.e., the processor on the client device; and an upper-level server, i.e., a remote cloud server, all of which are taught by Atrey as well as Dirac.
Conclusion
Applicant’s amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/BRIAN M SMITH/ Primary Examiner, Art Unit 2122