Prosecution Insights
Last updated: April 19, 2026
Application No. 17/828,613

FEDERATED LEARNING USING SECURE CENTERS OF CLIENT DEVICE EMBEDDINGS

Non-Final OA: §101, §102, §103, §112
Filed: May 31, 2022
Examiner: ROY, SANCHITA
Art Unit: 2146
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 228 granted / 316 resolved; +17.2% vs TC avg)
Interview Lift: +46.0% (resolved cases with interview vs. without)
Avg Prosecution: 3y 3m typical timeline; 19 applications currently pending
Total Applications: 335 across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 45.4% (+5.4% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 27.3% (-12.7% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 316 resolved cases

Office Action

Rejections: §101, §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-30 are presented for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim(s) 1-3, 5, 9-12, 15-19, 20-22, 24-27 and 30 recite(s) “secure center”. The term “secure center” is not well understood in the art of machine learning, and one of ordinary skill in the art would not be able to definitely ascertain what constitutes a secure center in light of the disclosure, rendering the claim(s) indefinite.

Claim(s) 1 and 24 recite(s) “generating, by the local device, a secure center different from the local center based on information about secure centers shared by a plurality of other devices participating in a federated learning scheme”. It is unclear, in light of the specification, how the first secure center would be generated in the absence of other secure centers, rendering the claim(s) indefinite.

Claim(s) 5, 9 and 15-17 each depend on claim 1 and recite(s) “the secure center”. Claim 1 recites “a secure center different from the local center” and “secure centers shared by a plurality of other devices”. It is unclear whether “the secure center” in each of claims 5, 9 and 15-17 refers to “a secure center different from the local center” or to “secure centers shared by a plurality of other devices”, rendering the claim(s) indefinite. For examination purposes the examiner has interpreted “the secure center” in each of claims 5, 9 and 15-17 to be “a secure center different from the local center” in claim 1.

Claim(s) 2-17, 19-23 and 25-29 do not contain claim limitations that cure the indefiniteness of claim(s) 1, 18 and 24, respectively, and therefore are also indefinite under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-30 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim(s) 1-23 is/are method type claims. Claim(s) 24-30 is/are system type claim(s). Therefore, claims 1-30 is/are directed to either a process, machine, manufacture or composition of matter.

Independent claims:

Step 2A Prong 1: Regarding claim(s) 1 and 24, this/these claim(s) recite(s) generating ... a local center ... based on embeddings generated from ... data (mental process and math: mathematical calculation of a center of data) and generating ... a secure center different from the local center based, at least in part, on information about secure centers (mental process and math: mathematical calculation of a center based on multiple centers). Regarding claim(s) 18 and 30, this/these claim(s) recite(s) selecting a set of client devices to use in training a machine learning model (mental process of evaluation: a user can manually select models to use).
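For orientation on the “mathematical calculation” characterization above, the two center computations recited in claims 1 and 24 reduce to a few lines of arithmetic. The sketch below is illustrative only, not the applicant's disclosed algorithm: the function names and the single weighting factor `alpha` are assumptions, though claim 6 does recite an average over embeddings and claims 9-10 recite a sum of a scaled local center with a scaled average of shared secure centers.

```python
import numpy as np

def local_center(embeddings: np.ndarray) -> np.ndarray:
    """Local center as an average over the local embeddings (cf. claim 6)."""
    return embeddings.mean(axis=0)

def secure_center(local: np.ndarray, shared: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Secure center as a scaled local center plus a scaled average of the
    secure centers shared by other devices (cf. claims 9-10). The single
    weight alpha is a hypothetical hyperparameter."""
    return alpha * local + (1.0 - alpha) * shared.mean(axis=0)

# Toy usage: 32 local embeddings, secure centers from 3 other devices
rng = np.random.default_rng(0)
emb = rng.normal(size=(32, 8))
others = rng.normal(size=(3, 8))
c_secure = secure_center(local_center(emb), others)
```

Only the secure center, never the local center, would be transmitted, which is the privacy point the claims turn on.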
Step 2A Prong 2: Regarding claim(s) 1 and 24, and 18 and 30, this judicial exception is not integrated into a practical application.

Additional elements: Regarding claim(s) 24 and 30, this/these claim(s) recite(s) a processor and memory to perform the steps of generating and selecting, and generic devices and server (mere instructions stored in a generic memory component to apply the exception using generic computer components).

Regarding claim(s) 1 and 24, this/these claim(s) further recite(s): receiving, at a local device from a server, information shared by a plurality of other devices participating in a federated learning scheme; transmitting, by the local device to the server, information about the local version of the machine learning model and information about the secure center (these steps are understood to be merely receiving or transmitting information and are understood to be insignificant extra-solution activity; see MPEP § 2106.05(d)); generating, by the local device, a local version of the machine learning model ... based on embeddings generated from local data at a client device and the global version of the machine learning model (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f) – Examiner’s note: high-level recitation of generating a machine learning model with a previously determined model and data embeddings); and information defining a global version of a machine learning model, information about secure centers, information about the local version of the machine learning model and information about the secure center (these limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h)).

Regarding claim(s) 18 and 30, this/these claim(s) further recite(s): transmitting, to each respective client device in the selected set of client devices, a request to update the machine learning model; receiving, from each respective client device in the selected set of client devices (these steps are understood to be merely receiving or transmitting information and are understood to be insignificant extra-solution activity; see MPEP § 2106.05(d)); updating the machine learning model based on the updates (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f) – Examiner’s note: high-level application of updating a machine learning model based on received information); and updates to the machine learning model, information about a secure center for the respective client device, and information about the secure center received from each respective client device in the selected set of client devices (these limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h)).

The additional elements identified above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, the claim(s) is/are directed to an abstract idea.
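The receive/generate/transmit sequence of claims 1 and 24 that the analysis above walks through maps onto one client-side round of a federated scheme. A minimal sketch, with the local “training” stubbed out and every name hypothetical (the claims recite the data flow, not this implementation):

```python
import numpy as np

def client_round(global_weights, shared_secure_centers, local_data, alpha=0.5, seed=0):
    """One client round per claims 1/24: receive global model info, generate a
    local model version and a local center from embeddings of local data,
    derive a secure center from centers shared by other devices, and
    transmit the local model info and secure center (the local center
    itself is never sent)."""
    rng = np.random.default_rng(seed)
    # (1) generate a local version of the model from the global version (stub update)
    local_weights = global_weights + 0.01 * rng.normal(size=global_weights.shape)
    # (2) embeddings of local data under the local model (a linear map as a stand-in)
    embeddings = local_data @ local_weights
    local_center = embeddings.mean(axis=0)
    # (3) secure center: scaled local center + scaled average of shared centers
    secure = alpha * local_center + (1 - alpha) * shared_secure_centers.mean(axis=0)
    # (4) values transmitted back to the server
    return local_weights, secure
```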
Step 2B: Regarding claim(s) 1 and 24, and 18 and 30, this/these claim(s) do/does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: Regarding claim(s) ****, this/these claim(s) recite(s) a processor and memory to perform the steps of generating and selecting, and generic devices and server (mere instructions stored in a generic memory component to apply the exception using a generic computer component).

Regarding claim(s) 1 and 24, this/these claim(s) further recite(s): receiving, at a local device from a server, information shared by a plurality of other devices participating in a federated learning scheme; transmitting, by the local device to the server, information about the local version of the machine learning model and information about the secure center (these steps are understood to be merely receiving or transmitting information and are understood to be insignificant extra-solution activity that is well-understood, routine, and conventional when claimed in a generic manner; see MPEP § 2106.05(d)); generating, by the local device, a local version of the machine learning model ... based on embeddings generated from local data at a client device and the global version of the machine learning model (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f) – Examiner’s note: high-level recitation of generating a machine learning model with a previously determined model and data embeddings); and information defining a global version of a machine learning model, information about secure centers, information about the local version of the machine learning model and information about the secure center (these limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h)).

Regarding claim(s) 18 and 30, this/these claim(s) further recite(s): transmitting, to each respective client device in the selected set of client devices, a request to update the machine learning model; receiving, from each respective client device in the selected set of client devices (these steps are understood to be merely receiving or transmitting information and are understood to be insignificant extra-solution activity that is well-understood, routine, and conventional when claimed in a generic manner; see MPEP § 2106.05(d)); updating the machine learning model based on the updates (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f) – Examiner’s note: high-level application of updating a machine learning model based on received information); and updates to the machine learning model, information about a secure center for the respective client device, and information about the secure center received from each respective client device in the selected set of client devices (these limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h)).

The additional elements identified above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea identified above. Therefore, the claim(s) is/are not patent eligible.
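On the server side, the claim 18/30 steps characterized above (select clients, request updates, receive updates plus secure centers, update by averaging per claim 23) can be sketched as follows. This is a hedged illustration, not the applicant's method: the unweighted average that includes the current model tracks claim 23's “average over the machine learning model and the updates”, and everything else (names, shapes, client selection omitted) is an assumption.

```python
import numpy as np

def server_round(global_weights, client_updates, client_secure_centers):
    """Claims 18/23/30 server side: average the current model together with
    the received client updates, and collect the clients' secure centers
    for the next broadcast."""
    # average over the machine learning model and the updates (cf. claim 23)
    new_weights = np.stack([global_weights, *client_updates]).mean(axis=0)
    # secure centers retained to be shared with clients next round
    centers = np.stack(client_secure_centers)
    return new_weights, centers
```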
Step 2A Prong 1, Dependent claims:

Regarding claim(s) 3, this/these claim(s) recite(s) wherein generating the local version of the machine learning model comprises generating a local hypersphere defined by the local center and a local measurement relative to the local center by: minimizing a positive loss function for embeddings within the local hypersphere; and maximizing a negative loss function relative to each secure center of the plurality of secure centers with orthogonal regularization.

Regarding claim(s) 4, this/these claim(s) recite(s) the local measurement relative to the local center comprises a local radius of the local hypersphere, and the method further comprises calculating the local radius of the local hypersphere by identifying a maximum distance between the local center and each of the embeddings generated from local data at the client device and the global version of the machine learning model.

Regarding claim(s) 5, this/these claim(s) recite(s) generating a secure hypersphere defined by the secure center, the local center of the local hypersphere, a measurement relative to the local center, and a proximity to a plurality of hyperspheres in the global version of the machine learning model, the measurement relative to the local center comprises a distance from the local center, and the distance from the local center comprises a distance greater than a local radius of the local hypersphere such that the local hypersphere is contained within the secure hypersphere.

Regarding claim(s) 6, this/these claim(s) recite(s) calculating the local center comprises calculating an average over the embeddings generated from the local data at the client device and the global version of the machine learning model.

Regarding claim(s) 7, this/these claim(s) recite(s) calculating a moving average of embedding vectors used in calculating a loss function for a local hypersphere defined by the local center.

Regarding claim(s) 8, this/these claim(s) recite(s) updating the local center from a randomly initialized initial position.

Regarding claim(s) 9 and 25, this/these claim(s) recite(s) wherein the secure center is defined as a sum of a scaled value of the local center and a scaled average of secure centers shared by one or more devices of the plurality of other devices participating in the federated learning scheme.

Regarding claim(s) 10 and 26, this/these claim(s) recite(s) the scaled average of the secure centers shared by the one or more devices of the plurality of other devices participating in the federated learning scheme is scaled based on a scaling factor associated with a weight assigned to the local center.

Regarding claim(s) 11, this/these claim(s) recite(s) ... a number of hyperspheres closest to a local hypersphere defined by the local center.

Regarding claim(s) 12 and 27, this/these claim(s) recite(s) optimizing a loss function including a positive loss function associated with embeddings on a surface of a local hypersphere defined by the local center and a negative loss function associated with each secure center of a plurality of secure centers.

Regarding claim(s) 13 and 28, this/these claim(s) recite(s) optimizing the loss function comprises optimizing the positive loss function to minimize intra-class variation for a batch of local data such that embeddings for the batch of local data is located on a surface of a local hypersphere.

Regarding claim(s) 14 and 29, this/these claim(s) recite(s) optimizing the loss function comprises optimizing the negative loss function to maximize inter-class variation such that data different from a batch of local data is located away from a surface of a local hypersphere.

Regarding claim(s) 15, this/these claim(s) recite(s) a value of the secure center and a sum of a local radius of a local hypersphere defined by the local center and a measurement from the local center.

Regarding claim(s) 23, this/these claim(s) recite(s) generating an average over the machine learning model and the updates. (These limitations appear to be directed to steps that may be performed as mental processes and math.)

Step 2A Prong 2, Dependent claims:

Regarding claim(s) 2, this/these claim(s) recite(s) wherein the information defining the global version of the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers, and information defining a radius of a secure hypersphere associated with each of the plurality of secure centers or an angle measured relative to an axis passing through each of the plurality of secure centers.

Regarding claim(s) 16, this/these claim(s) recite(s) wherein the information about the secure center comprises a value of the secure center and an angle relative to an axis passing through the secure center exceeding an angle associated with a local hypersphere defined by the local center.

Regarding claim(s) 17, this/these claim(s) recite(s) wherein the secure center comprises a proxy for the local center such that sharing the secure center minimizes exposure of the local data at the client device.

Regarding claim(s) 19, this/these claim(s) recite(s) wherein the request to update the machine learning model includes information defining the machine learning model.

Regarding claim(s) 20, this/these claim(s) recite(s) the information defining the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers associated with devices participating in a federated learning scheme, and one or more measurements associated with each secure center of the plurality of secure centers.

Regarding claim(s) 21, this/these claim(s) recite(s) information defining a radius of a secure hypersphere associated with each secure center of the plurality of secure centers or an angle relative to an axis passing through each secure center of the plurality of secure centers.

Regarding claim(s) 22, this/these claim(s) recite(s) wherein the updates to the machine learning model and information about the
secure center for the respective client device comprise an updated model, a value of the secure center of a secure hypersphere for the respective client device defined by the secure center, and a radius of the secure hypersphere or an angle relative to an axis passing through the secure center.

Regarding claim(s) 23, this/these claim(s) recite(s) the updates received from each respective client device in the selected set of client devices. (These limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h).)

Step 2B, Dependent claims:

Regarding claim(s) 2, this/these claim(s) recite(s) wherein the information defining the global version of the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers, and information defining a radius of a secure hypersphere associated with each of the plurality of secure centers or an angle measured relative to an axis passing through each of the plurality of secure centers.

Regarding claim(s) 16, this/these claim(s) recite(s) wherein the information about the secure center comprises a value of the secure center and an angle relative to an axis passing through the secure center exceeding an angle associated with a local hypersphere defined by the local center.

Regarding claim(s) 17, this/these claim(s) recite(s) wherein the secure center comprises a proxy for the local center such that sharing the secure center minimizes exposure of the local data at the client device.

Regarding claim(s) 19, this/these claim(s) recite(s) wherein the request to update the machine learning model includes information defining the machine learning model.

Regarding claim(s) 20, this/these claim(s) recite(s) the information defining the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers associated with devices participating in a federated learning scheme, and one or more measurements associated with each secure center of the plurality of secure centers.

Regarding claim(s) 21, this/these claim(s) recite(s) information defining a radius of a secure hypersphere associated with each secure center of the plurality of secure centers or an angle relative to an axis passing through each secure center of the plurality of secure centers.

Regarding claim(s) 22, this/these claim(s) recite(s) wherein the updates to the machine learning model and information about the secure center for the respective client device comprise an updated model, a value of the secure center of a secure hypersphere for the respective client device defined by the secure center, and a radius of the secure hypersphere or an angle relative to an axis passing through the secure center.

Regarding claim(s) 23, this/these claim(s) recite(s) the updates received from each respective client device in the selected set of client devices. (These limitations appear to be directed to the specification of data to be received and transmitted, and are understood to be generally linking the use of the judicial exception to a particular technological environment or field of use, which is not indicative of integration into a practical application; MPEP 2106.05(h).)

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 18-20, 23 and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Qian (US 20210374605 A1).

Regarding claim 18, Qian teaches a method for distributed training of a machine learning model across client devices, comprising (Qian [95, 104, 114]: method for system with processor and memory; processor executes instructions stored in memory): selecting a set of client devices to use in training a machine learning model; transmitting, to each respective client device in the selected set of client devices, a request to update the machine learning model (Qian [35, 52], Fig. 6: server sends current central model (request) to a fraction of clients (so they may send updates to server)); receiving, from each respective client device in the selected set of client devices, updates to the machine learning model and information about a secure center for the respective client device; and updating the machine learning model based on the updates and information about the secure center received from each respective client device in the selected set of client devices (Qian [35]: each client may send back an update to the central model to be updated; Qian [46-48, 54, 56, 91]: secure (randomized) center and model weight information for each client may be sent back to the server, and the central model may be updated based on client information).

Regarding claim 19, Qian teaches the invention as claimed in claim 18 above. Qian further teaches wherein the request to update the machine learning model includes information defining the machine learning model (Qian [35, 52], Fig. 6: server sends current central model (request) to a fraction of clients (so they may send updates to server)).

Regarding claim 20, Qian teaches the invention as claimed in claim 19 above. Qian further teaches wherein the information defining the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers associated with devices participating in a federated learning scheme, and one or more measurements associated with each secure center of the plurality of secure centers (Qian [35, 52], Fig. 6: server sends current central model (request) to a fraction of clients (so they may send updates to server); Qian [46-48]: current updated model is based on information sent from client devices previously; Qian [36, 41, 46-48, 54, 56, 57, 91]: information sent to clients may include current model information and hyperparameters and information regarding centers and distance to center for other clients).

Regarding claim 23, Qian teaches the invention as claimed in claim 18 above. Qian further teaches wherein updating the machine learning model comprises generating an average over the machine learning model and the updates received from each respective client device in the selected set of client devices (Qian [35]: central model may be updated by averaging information from clients).

Claim 30 is directed towards a system executing instructions similar in scope to the instructions performed by the method of claim 18, and is rejected under the same rationale. Qian further teaches a processing system, comprising: a memory comprising computer-executable instructions; and one or more processors configured to execute the computer-executable instructions and cause the processing system (Qian [95, 104, 114]: method for system with processor and memory; processor executes instructions stored in memory).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 8, 17, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (US 20210374605 A1), in view of Law (US 20220391667 A1).

Regarding claim 21, Qian teaches the invention as claimed in claim 20 above. Qian further teaches wherein the one or more measurements comprise information defining a radius ... associated with each secure center of the plurality of secure centers (Qian [41, 46-48, 54, 56, 57, 61]: distance may be a radius). Qian does not specifically teach a radius of a secure hypersphere. However, Law teaches wherein the information regarding a secure center may be a radius of a secure hypersphere associated with each secure center of the plurality of secure centers (Law [2, 36-39]: center of data may be determined based on a hypersphere with a radius; using a hypersphere can provide better data representation than a conventional latent space or graph).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Law of a radius of a secure hypersphere associated with each secure center of the plurality of secure centers into the invention suggested by Qian, since both inventions are directed towards using center-of-local-data information in machine learning, and incorporating the teaching of Law into the invention suggested by Qian would provide the added advantage of better data representation than a conventional latent space or graph, and the combination would perform with a reasonable expectation of success (Law [2, 36-39]).

Regarding claim 22, Qian teaches the invention as claimed in claim 18 above. Qian further teaches wherein the updates to the machine learning model and information about the secure center for the respective client device comprise an updated model, a value of the secure center ... for the respective client device defined by the secure center, and a radius ... (Qian [46-48, 54, 56, 61, 91]: secure (randomized) center, radius, and model weight information for each client may be sent back to the server, and the central model may be updated based on client information). Qian does not specifically teach the secure center of a secure hypersphere. However, Law teaches a value of the secure center of a secure hypersphere for the respective client device defined by the secure center (Law [2, 36-39]: center of data may be determined based on a hypersphere representation of data; using a hypersphere can provide better data representation than a conventional latent space or graph).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the concept taught by Law of a value of the secure center of a secure hypersphere for the respective client device defined by the secure center into the invention suggested by Qian, since both inventions are directed towards using center-of-local-data information in machine learning, and incorporating the teaching of Law into the invention suggested by Qian would provide the added advantage of better data representation than a conventional latent space or graph, and the combination would perform with a reasonable expectation of success (Law [2, 36-39]).

Regarding claim 1, Qian teaches a computer-implemented method for training a machine learning model, comprising (Qian [95, 104, 114]: method for system with processor and memory; processor executes instructions stored in memory): receiving, at a local device from a server, information defining a global version of a machine learning model (Qian [35, 52], Fig. 6: server sends current central model (request) to a fraction of clients); generating, by the local device, a local version of the machine learning model and a local center associated with the local version of the machine learning model based on ... local data at a client device and the global version of the machine learning model (Qian [35, 52]: local model may be based on the local model and the current central model; Qian [41, 46, 54]: based on the local model, a local data center may be generated); generating, by the local device, a secure center different from the local center based, at least in part, on information about secure centers shared by a plurality of other devices participating in a federated learning scheme (Qian [54, 91]: secure center may be generated based on the local center and centers from other devices (“secure” is a broad term)); and transmitting, by the local device to the server, information about the local version of the machine learning model and information about the secure center (Qian [38, 56]: local model updates and secure center information may be sent to the server). Qian does not specifically teach a local center associated with the local version of the machine learning model based on embeddings generated from local data at a client device. However, Law teaches a local center associated with the local version of the machine learning model based on embeddings generated from local data at a client device (Law [2, 28, 36-39]: data embeddings may be used to determine the center of local data; using embeddings allows data to be represented in space).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have incorporated the concept taught by Law of a local center associated with the local version of the machine learning model based on embeddings generated from local data at a client device, into the invention suggested by Qian; since both inventions are directed towards using center of local data information in machine learning, and incorporating the teaching of Law into the invention suggested by Qian would provide the added advantage of allowing data to be represented in space by using embeddings, and the combination would perform with a reasonable expectation of success (Law [2, 28, 36-39]). Regarding claim 2, Qian and Law teach the invention as claimed in claim 1 above. Qian further teaches wherein the information defining the global version of the machine learning model comprises a plurality of hyperparameters, a plurality of secure centers (Qian [35, 52] Fig. 6, server sends current central model (request) to a fraction of clients (so they may send updates to server), Qian [46-48] current updated model is based on information sent from client devices previously, Qian [36, 41, 46-48, 54, 56, 57, 91] information sent to clients may include current model information and hyperparameters and information regarding centers and distance to center for other clients). Qian does not specifically teach information defining a radius of a secure hypersphere associated with each of the plurality of secure centers or an angle measured relative to an axis passing through each of the plurality of secure centers. However, Law teaches information defining a radius of a secure hypersphere associated with each of the plurality of secure centers (Law [2, 36-39] center of data may be determined based on hypersphere with radius, using hypersphere can provide better data representation than conventional latent space or graph). 
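The client-side flow the examiner maps to claims 1-2 — compute a local center from embeddings of the client's local data, then derive a distinct "secure" center using centers shared by peer devices, so that only the proxy is transmitted — can be sketched roughly as follows. The function names, the mean-based center, and the blending rule are illustrative assumptions for this sketch, not taken from Qian or Law.

```python
# Hypothetical sketch of the claimed client-side flow. The mean-based
# local center and the alpha-blend "secure" center are assumptions,
# not the method of either cited reference.

def local_center(embeddings):
    """Mean of the local embedding vectors (the center-of-data idea)."""
    dim, n = len(embeddings[0]), len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(dim)]

def secure_center(local, peer_secure_centers, alpha=0.5):
    """Blend the local center with the average of peer secure centers,
    yielding a proxy that differs from the true local center."""
    dim, m = len(local), len(peer_secure_centers)
    peer_avg = [sum(c[i] for c in peer_secure_centers) / m for i in range(dim)]
    return [alpha * local[i] + (1 - alpha) * peer_avg[i] for i in range(dim)]

embeddings = [[1.0, 2.0], [3.0, 4.0]]
center = local_center(embeddings)                          # [2.0, 3.0]
secure = secure_center(center, [[0.0, 0.0], [8.0, 2.0]])   # [3.0, 2.0]
# Only `secure` (plus model updates) would be sent to the server.
```

Under these assumptions the transmitted center differs from the true local center, which is the property the examiner reads onto "secure center ... different from the local center."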
Regarding claim 6, Qian and Law teach the invention as claimed in claim 1 above. Claim 1 further recites generating ... a local center associated with the local version of the machine learning model based on ... information ... generated from local data at a client device ... wherein the information generated from local data at a client device comprises embeddings generated from local data at a client device. Qian further teaches calculating the local center comprises calculating an average over the ... information ... generated from the local data at the client device and the global version of the machine learning model (Qian [32] center may be generated based on average over local data, using current central model information). Regarding claim 8, Qian and Law teach the invention as claimed in claim 1 above. Qian further teaches wherein calculating the local center comprises updating the local center from a randomly initialized initial position (Qian [39, 58, 41, 46, 54] based on local model, local data center may be randomly initialized and then generated). Regarding claim 17, Qian and Law teach the invention as claimed in claim 1 above. Qian further teaches wherein the secure center comprises a proxy for the local center such that sharing the secure center minimizes exposure of the local data at the client device (Qian [38, 54, 91] secure center may be generated based on local center and centers from other devices, using other centers and perturbation of data prevents local data from being exposed). Claim 24 is directed towards a system executing instructions similar in scope to the instructions performed by the method of claim 1, and is rejected under the same rationale. 
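The claim 6/8 limitations as characterized above — a local center computed as an average over the embeddings, updated from a randomly initialized position — admit a simple sketch. The step rule and step size below are assumptions for illustration only, not drawn from Qian.

```python
import random

# Illustrative sketch: the local center starts at a random position
# (claim 8) and is iteratively pulled toward the average of the local
# embeddings (claim 6). The 0.5 step fraction is an assumption.

def update_center(center, embeddings, step=0.5):
    """Move the center a fraction of the way toward the embedding mean."""
    dim = len(center)
    mean = [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]
    return [c + step * (m - c) for c, m in zip(center, mean)]

random.seed(0)
center = [random.uniform(-1, 1) for _ in range(2)]  # random initialization
for _ in range(20):
    center = update_center(center, [[1.0, 2.0], [3.0, 4.0]])
# center converges toward the embedding mean [2.0, 3.0]
```

Each iteration halves the remaining distance to the mean, so after a handful of updates the randomly initialized center coincides with the average over the embeddings to numerical precision.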
Qian further teaches a processing system, comprising: a memory comprising computer-executable instructions; and one or more processors configured to execute the computer-executable instructions and cause the processing system (Qian [95, 104, 114] method for system with processor and memory, processor executes instructions stored in memory). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Qian (US 20210374605 A1), in view of Law (US 20220391667 A1), and further in view of Nakayama (US 20210406782 A1). Regarding claim 7, Qian and Law teach the invention as claimed in claim 1 above. Qian does not specifically teach wherein calculating the local center comprises calculating a moving average of embedding vectors used in calculating a loss function for a local hypersphere defined by the local center. However, Law teaches wherein calculating the local center comprises calculating ... information based on ... embedding vectors used in calculating a loss function for a local hypersphere defined by the local center (Law [21, 27, 28] center may be calculated based on minimizing a loss function for embedding vectors of a hypersphere). Qian and Law do not specifically teach calculating the local center comprises calculating a moving average of embedding vectors. However, Nakayama teaches calculating the local center comprises calculating a moving average of ... local vectors ... (Nakayama [125, 137, 145] center may be calculated using a sliding-window average, using average of a past window can provide a baseline). 
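The sliding-window "moving average" the examiner attributes to Nakayama for claim 7 can be sketched as follows. The class name, window size, and update interface are illustrative assumptions, not details of the reference.

```python
from collections import deque

# Minimal sketch of a sliding-window moving average of embedding
# vectors: the center is the mean of the most recent vectors in a
# fixed-size window. All names and parameters here are assumptions.

class MovingAverageCenter:
    def __init__(self, window=3):
        self.window = deque(maxlen=window)  # old vectors drop out automatically

    def update(self, embedding):
        """Add one embedding and return the current windowed center."""
        self.window.append(embedding)
        n, dim = len(self.window), len(embedding)
        return [sum(e[i] for e in self.window) / n for i in range(dim)]

mac = MovingAverageCenter(window=2)
mac.update([1.0, 1.0])
center = mac.update([3.0, 5.0])  # mean of the last two embeddings
# center == [2.0, 3.0]
```

Because the window is bounded, the center tracks recent embeddings rather than the full history — the "baseline based on average of a past window" advantage cited in the motivation to combine.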
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have incorporated the concept taught by Nakayama of calculating the local center comprises calculating a moving average of ... local vectors ..., into the invention suggested by Qian and Law; since both inventions are directed towards generating a center for vectors for data, and incorporating the teaching of Nakayama into the invention suggested by Qian and Law would provide the added advantage of using a baseline based on average of a past window, and the combination would perform with a reasonable expectation of success (Nakayama [125, 137, 145]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANCHITA ROY whose telephone number is (571)272-5310. The examiner can normally be reached Monday-Friday 12-8. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. SANCHITA ROY Primary Examiner Art Unit 2146 /SANCHITA ROY/Primary Examiner, Art Unit 2146

Prosecution Timeline

May 31, 2022
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599476
AI-BASED VIDEO ANALYSIS OF CATARACT SURGERY FOR DYNAMIC ANOMALY RECOGNITION AND CORRECTION
2y 5m to grant Granted Apr 14, 2026
Patent 12585966
INTELLIGENT DEVICE SELECTION USING HISTORICAL INTERACTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12585870
READER MODE-OPTIMIZED ATTENTION APPLICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579656
MACHINE LEARNING DENTAL SEGMENTATION SYSTEM AND METHODS USING GRAPH-BASED APPROACHES
2y 5m to grant Granted Mar 17, 2026
Patent 12562275
INTERACTIVE SUBGROUP DISCOVERY
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants by this examiner.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+46.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 316 resolved cases by this examiner. Grant probability derived from career allow rate.
