Prosecution Insights
Last updated: April 19, 2026
Application No. 17/451,434

System For Collaboration And Optimization Of Edge Machines Based On Federated Learning

Non-Final OA: §102, §103, §112
Filed: Oct 19, 2021
Examiner: PHUNG, STEVEN HUYNH
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tsinghua University
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (28 granted / 38 resolved; +18.7% vs TC average)
Interview Lift: +26.2% higher allowance among resolved cases with an interview
Typical Timeline: 4y 6m average prosecution; 20 applications currently pending
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 33.6% (-6.4% vs TC avg)
§103: 34.6% (-5.4% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 20.6% (-19.4% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 38 resolved cases.
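
The headline figures above reduce to simple ratios, although the page does not publish its formulas. The sketch below reconstructs the most plausible reading; the definitions of the TC delta and the interview lift are assumptions for illustration, not the tool's documented method.

```python
granted, resolved = 28, 38
career_allow_rate = granted / resolved            # 0.7368... displayed as 74%

# "+18.7% vs TC avg" presumably means allow rate minus the Tech Center
# average, which would put the TC 2100 average near 55% (an assumption).
tc_2100_avg = career_allow_rate - 0.187           # ~0.55

# "+26.2% interview lift" presumably compares allowance among resolved
# cases with vs. without an interview; 99% with an interview would then
# imply roughly 72.8% without (again, a reading, not a published formula).
with_interview = 0.99
without_interview = with_interview - 0.262        # ~0.728

print(f"career allow rate: {career_allow_rate:.0%}, TC avg ≈ {tc_2100_avg:.0%}")
```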

Office Action

Grounds: §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The language should be clear and concise and should not repeat information given in the title. It should avoid phrases that can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc.

The abstract of the disclosure is objected to because it contains language that repeats information given in the title and uses phrases which can be implied: “A system for collaboration and optimization of edge machines based on federated learning is provided.” A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The use of the term “TensorFlow” in para. [0005], which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ® following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Objections

Claims 1-10 are objected to because of the following informalities:
In claim 1, “a model parameter assignment unit, configured to” should read “a model parameter assignment unit configured to”.
In claim 1, “the model training and optimization units…and configured to train” should read “the model training and optimization units…are configured to train”.
Claims 2-10 are objected to for inheriting the deficiencies of their respective base claims.
In claim 2, “scenario feature model optimizing units, arranged in the T_i specific edge machines, and configured to” should read “scenario feature model optimizing units, arranged in the T_i specific edge machines, are configured to”.
Claim 3 is objected to for inheriting the deficiencies of claim 2.
In claim 3, “shorter than predetermined value” should read “shorter than a predetermined value”.
In claim 6, “a machine selection unit, configured to” should read “a machine selection unit configured to”.
In claim 6, “a task model parameter assignment unit, configured to” should read “a task model parameter assignment unit configured to”.
In claim 6, “the task model training and optimizing units…and configured to train” should read “the task model training and optimizing units…are configured to train”.
In claim 10, “a data acquisition module, configured to” should read “a data acquisition module configured to”.
In claim 10, “a storage unit, configured to” should read “a storage unit configured to”.
In claim 10, “an other part” should read “another part”.
Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are in claims 1-2, 6-7, and 10:

Claim 1, “a model parameter assignment unit, configured to”: “assign initial parameters for federated learning to the M_i edge machines in the i-th federated learning system”; “receive intermediate model parameters transmitted by model training and optimizing units”; “aggregate and update the received intermediate model parameters to obtain new model parameters”.
Claim 1, “the model and optimizing units, arranged in the M_i edge machines respectively, [are] configured to”: “train, on the basis of the initial parameters assigned by the model parameter assignment unit and respective operating data, local operating models”; “transmit the intermediate model parameters obtained after training the model parameter assignment unit”; “obtain a new system collaborative operating model of the i-th federated learning system according to the new model parameters”.
Claim 2, “scenario feature model optimizing units, arranged in the T_i specific edge machines, [are] configured to carry out, on the basis of the system collaborative operating model and working scenario features of the T_i specific edge machines, model optimization”.
Claim 6, “a machine selection unit, configured to select edge machines with performance scores of executing a target task higher than a predetermined value in each of the R federated learning systems to obtain a task training alliance”.
Claim 6, “a task model parameter assignment unit, configured to”: “assign task initial parameters to the edge machines in the task training alliance”; “receive task model intermediate parameters transmitted by the task model training and optimizing units”; “aggregate and update the received task model intermediate parameters to obtain new task model parameters”.
Claim 6, “the task model training and optimizing units, arranged in the edge machines in the task training alliance respectively, [are] configured to”: “train, on the basis of the task initial parameters assigned by the task model parameter assignment unit and respective operating data, local operating models for the target task”; “encrypt the task model intermediate parameters obtained after training”; “transmit the encrypted task model intermediate parameters to the task model parameter assignment unit”; “obtain a system collaborative execution task model of the task training alliance according to the new task model parameters”.
Claim 7, “wherein the model parameter assignment unit is further configured for recording and making statistics on activity data in the federated learning systems”.
Claim 10, “a data acquisition module, configured to acquire an image, a movement track, operating data and environment responding data”.
Claim 10, “a storage unit, configured to store the operating data for model training”.
Claim 10, “a computing unit, of which one part is configured to execute a predetermined working task and an other part is configured to execute a task of the federating learning”.
Claim 10, “a communication module, which supports wired communication and wireless communication”.

The examiner is interpreting the claimed functions to be implemented on a generic processor or computer because the applicant’s disclosure does not provide details about how they are implemented or how they function aside from merely reciting the claim language.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claims 1-10: As described above, the disclosure does not provide adequate structure to perform the claimed functions of the limitations identified in the Claim Interpretation section: the claim 1 “model parameter assignment unit” (assigning initial parameters, receiving intermediate model parameters, and aggregating and updating them to obtain new model parameters) and “model training and optimizing units” (training local operating models on the basis of the assigned initial parameters and respective operating data, transmitting the intermediate model parameters obtained after training, and obtaining a new system collaborative operating model according to the new model parameters); the claim 2 “scenario feature model optimizing units”; the claim 6 “machine selection unit,” “task model parameter assignment unit,” and “task model training and optimizing units” (including the encryption and transmission of task model intermediate parameters); the claim 7 recording and statistics function of the model parameter assignment unit; and the claim 10 “data acquisition module,” “storage unit,” “computing unit,” and “communication module.”

Claims 2-10 are rejected for inheriting the deficiencies of claim 1. Claim 3 is further rejected for inheriting the deficiencies of claim 2.

The specification merely recites the claim language, and no algorithms are disclosed for performing the claimed functions.
Therefore, the specification does not demonstrate that the applicant has made an invention that achieves the claimed functions, because the invention is not described with sufficient detail that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claims 1-10: Claims 1-2, 6-7, and 10 include claim limitations that invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, namely the limitations identified in the Claim Interpretation section above. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.

The specification is devoid of adequate structure to perform the claimed functions; it merely recites the claim language. There is no disclosure of any particular structure, either explicitly or inherently, to perform the claimed functions. As would be recognized by those of ordinary skill in the art, the claimed functions can be performed on a generically recited computer or processor, and the specification does not provide sufficient details such that one of ordinary skill in the art would understand which structure or structures perform the claimed functions. Therefore, claims 1-10 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Additionally, claims 2-10 are rejected for inheriting the deficiencies of their respective base claims.

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xie (“Multi-Center Federated Learning to Cluster Clients with non-IID data”).

Regarding Claim 1: Xie discloses:

A system for collaboration and optimization of edge machines based on federated learning, comprising:
Xie, pg. 37: “Federated Learning (FL)…is a decentralized machine learning framework that learns models collaboratively using the training data distributed on remote devices to boost communication efficiency.” Xie discloses federated learning, a framework that works collaboratively and efficiently with remote devices [collaboration and optimization of edge machines].

R federated learning systems, wherein R ≥ 1, an i-th federated learning system in the R federated learning systems comprises M_i edge machines with uneven operating experience distribution, M_i ≥ 2, i = 1, 2, …, R:
Xie, pg. 42: “we propose a novel model aggregation method with multiple centers, each associating with a global model W̃(k) updated by aggregating a cluster of user’s models with nearly IID data. In particular, all the local models will be grouped to K clusters, denoted as C_1, ⋯, C_K, each covering a subset of local models with parameters {W_j}_{j=1}^{m_k}. An intuitive comparison between the vanilla FL and our multi-center FL is illustrated in Fig. 3.1.” Pg. 17: “The insight to FL is such: under these setups, the performances of the training process may vary significantly according to the degree of unbalanced local data samples.” [Image: Xie Fig. 3.1, comparing vanilla FL with multi-center FL.] On pg. 42, Xie discloses multi-center federated learning, depicted in Fig. 3.1 (pg. 41) [R federated learning systems, wherein R ≥ 1]. Xie further details that each cluster k of the multiple clusters [an i-th federated learning system in the R federated learning systems] is an aggregate of local models C_1, ⋯, C_K [comprises M_i edge machines…M_i ≥ 2, i = 1, 2, …, R]. Lastly, referring to pg. 17, Xie discloses that in federated learning the performance of training varies due to unbalanced local data samples [edge machines with uneven operating experience distribution].

a model parameter assignment unit, configured to:
Xie, pg. 22: “To begin with, workers send their loss to centralised server, center machine estimates the cluster identities of each worker machine by running k-means on the collection of workers local models.
Then with the cluster identity estimations, the center machine runs any federated learning algorithm…” Xie discloses a centralized server (also described as a center machine or cluster center) [a model parameter assignment unit]. The center is interpreted as the claimed model parameter assignment unit because both perform the claimed functions.

assign initial parameters for federated learning to the M_i edge machines in the i-th federated learning system:
Xie, pg. 45: “to update the local models, the global model’s parameters W̃(k) are sent to each device in cluster k to update its local model, and then we can fine-tune the local model’s parameters W_i using a supervised learning algorithm on its own private training data…to update the local model, we need to fine-tune the local model by implementing Algorithm 2.” [Image: Xie pg. 46, Algorithm 2 (local model fine-tuning).] On pg. 45, Xie discloses using the cluster center’s global model parameters [assign initial parameters for federated learning] to initialize the local models in the cluster [to the M_i edge machines in the i-th federated learning system]. This is further depicted in the initialization step of Algorithm 2.

receive intermediate model parameters transmitted by model training and optimizing units:
In Algorithm 2, Xie discloses using the local models [training and optimizing units] to fine-tune parameters [intermediate model parameters transmitted] and update the cluster center. The last line of Algorithm 2, cited above, further discloses that the local parameters are transmitted to the server/cluster center. The edge devices with local models correspond to training and optimizing units because both refer to local models that perform the training and loss optimization locally on the edge devices.

aggregate and update the received intermediate model parameters to obtain new model parameters:
Xie, pg. 45: “for the M-Step, we update the cluster center W̃(k) according to the W_i and r_i(k)”. Xie discloses using each of the local W_i [aggregate and update the received intermediate model parameters] to update the cluster center’s parameters W̃(k) [obtain new model parameters].

the model training and optimizing units, arranged in the M_i edge machines respectively, and configured to:
Xie, pg. 45: “to update the local model, we need to fine-tune the local model by implementing Algorithm 2.” Xie discloses using Algorithm 2 to train and optimize the local models [the model training and optimizing units]. Furthermore, the algorithm is implemented by the local models [arranged in the M_i edge machines respectively]. The algorithm is interpreted as the claimed model training and optimizing units because both perform the claimed functions.

train, on the basis of the initial parameters assigned by the model parameter assignment unit and respective operating data, local operating models:
[Image: Xie pg. 46, Algorithm 2.] Xie discloses training local models [train…local operating models] using the model parameters from the server [on the basis of the initial parameters assigned by the model parameter assignment unit] and the local training data [respective operating data].

transmit the intermediate model parameters obtained after training to the model parameter assignment unit:
As seen above in Algorithm 2, the locally updated parameters W_i [the intermediate model parameters obtained after training] are returned to the server [transmit…to the model parameter assignment unit].
obtain a system collaborative operating model of the i-th federated learning system according to the new model parameters:
Xie, pg. 44: “we sequentially conduct: 1) E-step – updating cluster assignment r_i(k) with fixed W_i, 2) M-step – updating cluster centers W̃(k), and 3) updating local models by providing new initialization W̃(k).” Pg. 45: “Lastly, we repeat the three stochastic updating steps above until convergence.” Xie discloses repeating the above steps. Therefore, after updating the global model [a system collaborative operating model], the updated global model parameters are sent to the local models [obtain a system collaborative operating model…according to the new model parameters] for each cluster [of the i-th federated learning system].

wherein the local operating models are models in response to different operating environments:
Xie, pg. 4: “FL builds a joint model using the data located at different sites, where each party contributes some data to train the model. The devices can be owned by different individuals or organizations, and can be of different types (e.g., smartphones, sensors, vehicles, etc.).” Xie discloses that the local models come from devices that can belong to different individuals or organizations [models in response to different operating environments].

Regarding Claim 9: As discussed above, Xie teaches the system according to claim 1, and Xie further discloses:

wherein a cloud server or an edge server capable of communicating with the M_i edge machines serves as the model parameter assignment unit:
Xie, pg. 22: “To begin with, workers send their loss to centralised server, center machine estimates the cluster identities of each worker machine by running k-means on the collection of workers local models. Then with the cluster identity estimations, the center machine runs any federated learning algorithm…” Xie discloses a centralized server (also described as a center machine or cluster center) capable of communicating with workers or edge machines [an edge server capable of communicating with the M_i edge machines serves as the model parameter assignment unit].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Xie in view of Pang et al. (“Realizing the Heterogeneity: A Self-Organized Federated Learning Framework for IoT”), hereinafter Pang.
Regarding Claim 2: As discussed above, Xie teaches the system according to claim 1, but does not explicitly disclose:

wherein the M_i edge machines comprise T_i specific edge machines with operating experience not meeting predetermined requirements, 1 ≤ T_i < M_i, and the system further comprises: scenario feature model optimizing units, arranged in the T_i specific edge machines, and configured to carry out, on the basis of the system collaborative operating model and working scenario features of the T_i specific edge machines, model optimization to increase single machine intelligence and improve capabilities of the T_i specific edge machines to respond to environments, in which the T_i specific edge machines are located, and to execute tasks.

However, in the same field, analogous art Pang teaches: wherein the M_i edge machines comprise T_i specific edge machines with operating experience not meeting predetermined requirements, 1 ≤
Read full office action
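
For orientation on the §102 rejection, the scheme the examiner maps to claim 1 is Xie's EM-style multi-center federated learning: an E-step assigns each local model to its nearest cluster center, an M-step recomputes each center W̃(k) from its members' parameters W_i, and each device then fine-tunes from its center's parameters (Algorithm 2). The sketch below is a minimal illustration of that loop, not code from Xie or the application; the helper names (local_update, multi_center_round) and the least-squares local objective are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(center_params, local_data, lr=0.1, steps=5):
    # Stand-in for Xie's Algorithm 2: initialize from the cluster center's
    # parameters, then fine-tune on the device's private data (here, a few
    # gradient steps on a least-squares objective, purely for illustration).
    w = center_params.copy()
    X, y = local_data
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w  # the "intermediate model parameters" sent back to the server

def multi_center_round(centers, local_params):
    # E-step: cluster assignment r_i(k) -- each local model joins its
    # nearest center. M-step: each center W~(k) becomes the mean of the
    # local parameters assigned to it (the "aggregate and update" step).
    assign = np.array([
        np.argmin([np.linalg.norm(w - c) for c in centers])
        for w in local_params
    ])
    new_centers = np.stack([
        np.mean([w for w, a in zip(local_params, assign) if a == k], axis=0)
        if np.any(assign == k) else centers[k]
        for k in range(len(centers))
    ])
    return new_centers, assign

# Toy run: 6 edge devices with non-IID data, K = 2 cluster centers.
d = 3
datasets = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(6)]
centers = rng.normal(size=(2, d))
assign = rng.integers(0, 2, size=len(datasets))  # initial assignment
for _ in range(10):  # Xie repeats the three updating steps until convergence
    local_params = [local_update(centers[assign[i]], datasets[i])
                    for i in range(len(datasets))]
    centers, assign = multi_center_round(centers, local_params)
print("final cluster assignment:", assign)
```

Under the examiner's reading, the server side of this loop plays the claimed "model parameter assignment unit" (assign, receive, aggregate and update), while local_update plays the "model training and optimizing units"; the sketch is only meant to make that correspondence concrete.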

Prosecution Timeline

Oct 19, 2021: Application Filed
Nov 14, 2025: Non-Final Rejection under §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596941: REALISTIC COUNTERFACTUAL EXPLANATION OF MACHINE LEARNING PREDICTIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12576990: PREDICTIVE MAINTENANCE MODEL DESIGN SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572844: Probing Model Signal Awareness (granted Mar 10, 2026; 2y 5m to grant)
Patent 12554979: ADAPTING AI MODELS FROM ONE DOMAIN TO ANOTHER (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554997: Deep Multi-View Network Embedding on Incomplete Data (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+26.2%)
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
