DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 01/21/2026 have been fully considered but they are not persuasive:
Regarding the 112(b) rejection, applicant argues on page 15 that ¶ 176 and ¶ 182 recite the corresponding structure. The examiner disagrees. As applicant acknowledges, ¶ 176 and ¶ 182 use the word “may”, e.g., “paragraph [0176] recites that the "means for determining" of claim 57 may correspond to "the one or more WWAN transceivers 310, the one or more short-range wireless transceivers 320, the one or more processors 332, memory 340, and/or positioning component 342."” Because the disclosure is permissive rather than definitive, the corresponding structure is not clearly linked to the claimed function, and the 112(b) rejection is maintained.
Regarding the 103 prior art rejection, applicant argues the “receiving…one or more selection criteria for determining…” and the “determining whether the UE satisfies…” limitations.
The examiner disagrees. Under the broadest reasonable interpretation, the argued limitations do not require the UE to receive an explicit listing of the selection criteria or to independently compute them. Wu et al. discloses that the server applies similarity-based selection criteria and communicates the selection results to the UE, as stated on pages 10641, 10647, and 10649 of Wu et al., which reflects the claimed criteria. By receiving the selection results and conditionally participating in training, the UE determines whether it satisfies the criteria as claimed. Furthermore, transmitting updated parameters only when selected teaches transmission based on that determination. For at least the above reasons, Wu teaches the argued receiving and determining limitations, and the 103 rejection is therefore maintained.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) do not recite sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:
“means for receiving, from a network entity, one or more selection…” (claim 57).
“means for determining whether the UE satisfies the one or more selection…” (claim 57).
“means for transmitting, to the network entity, after a second period…” (claim 57).
“means for transmitting, to a set of user equipments (UEs),” (claim 58).
“means for transmitting the machine learning model…” (claim 58).
“means for receiving updated parameters…” (claim 58).
“means for updating the machine learning model…” (claim 58).
For an analysis of the structure, material, or acts corresponding to the claimed functions, see rejection under 35 USC § 112(b) infra.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 57-58 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
This application includes one or more claim limitations that use the word “means” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) do not recite sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:
“means for determining whether the UE satisfies the one or more selection…” (claim 57).
“means for updating the machine learning model…” (claim 58).
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
The closest disclosure is ¶ 89: “The processors 332, 384, and 394 may therefore provide means for processing, such as means for determining, means for calculating, means for receiving, means for transmitting, means for indicating, etc.” The permissive word “may” does not definitively identify the corresponding structure.
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. For the purpose of examination, any computer capable of performing the claimed functions reads on the claims.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-60 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (“Prediction Based Semi-Supervised Online Personalized Federated Learning for Indoor Localization”, IEEE SENSORS JOURNAL, VOL. 22, NO. 11, JUNE 1, 2022, pages 10640-10654) in view of Li et al. (“SmartPC: Hierarchical Pace Control in Real-Time Federated Learning System”, 2019 IEEE Real-Time Systems Symposium (RTSS)).
Regarding claim 1.
Wu teaches a method of training a machine learning model performed by a user equipment (UE), comprising: receiving one or more selection criteria for determining whether the UE is to participate in training the machine learning model (see page 10641, right column, “machine learning model is trained to predict the data distribution of RU in the next round, and the server can select proper FL participants in advance according to the distribution similarity”; also see page 10647, Step 2 of the workflow: "RU then distributes the predicted result to all the FL clients after Homomorphic Encryption or the Peer-to-Peer (P2P) encryption protocol”, i.e., the selection criteria are implicitly communicated to the UE (client) when it is informed of the RU identity, wherein the criterion is the (predicted) similarity of the UE with respect to a requesting user (RU));
determining whether the UE satisfies the one or more selection criteria during a first period of time (see page 10649, Algorithm 2, steps 5-10; the determination is concluded when the network entity (server) sends the selection results to the UE in step 10); and
transmitting, to the network entity (Figure 6: "Cloud Server"), after a second period of time (Algorithm 2, steps 12-16), updated parameters for the machine learning model (Figure 6; Algorithm 2, step 17), wherein the machine learning model is updated during the second period of time (Algorithm 2, steps 12-16) based on a determination that the UE satisfies the one or more selection criteria (Algorithm 2, step 13: "for each selected FL client").
Wu does not specifically teach from where the selection criteria are received, nor does Wu specifically teach the second period of time.
Li teaches receiving, from a network entity, one or more selection criteria, and transmitting, to the network entity, after a second period of time (see page 408 and Figure 1, “Figure 1 represents the workflow of a Federated Learning system which contains the following main steps: 1) At the beginning of each training round, the central server selects a set of online devices to participate in the training process. 2) The selected devices download the current global model state (e.g., current model parameters(wt)). 3) Each mobile device performs local training based on the global model state and its local training dataset for a specific number of training epochs. 4) After completing the local training process, each mobile device sends the model updates (e.g., Δw) back to the central server. 5) After receiving the model updates from all the mobile devices, the central server aggregates these gradient updates and generates the updated global model. Then, the system enters a new training round. 6) The whole process iterates until the global model converges”).
Both Wu and Li pertain to the problem of Federated Learning and are thus analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wu and Li to teach the above limitations. The motivation for doing so would be “This paper proposes SmartPC, a hierarchical online pace control framework for Federated Learning that balances the training time and model accuracy in an energy-efficient manner. SmartPC consists of two layers of pace control: global and local. Prior to every training round, the global controller first oversees the status (e.g., connectivity, availability, and energy/resource remained) of every participating device, then selects qualified devices and assigns them a well-estimated virtual deadline for task completion. Within such virtual deadline, a statistically significant proportion (e.g., ≥60%) of the devices are expected to complete one round of their local training and model updates, while the overall progress of multi-round training procedure is kept up adaptively. On each device, a local pace controller then dynamically adjusts device settings such as CPU frequency so that the learning task is able to meet the deadline with the least amount of energy consumption. We performed extensive experiments to evaluate SmartPC on both Android smartphones and simulation platforms using well-known datasets. The experiment results show that SmartPC reduces up to 32.8% energy consumption on mobile devices and achieves a speedup of 2.27 in training time without model accuracy degradation.” (see Li, Abstract).
Regarding claim 2.
Wu and Li teach the method of claim 1,
Wu further teaches further comprising: transmitting, to the network entity, an indication that the UE satisfies the one or more selection criteria (see page 10647, Algorithm 2, step 9; also see pages 10647-10648, Step 3: "All the clients then upload the similarities to the cloud server").
Regarding claim 3.
Wu and Li teach the method of claim 1,
Wu further teaches further comprising: receiving, from the network entity, a configuration to report whether the UE satisfies the one or more selection criteria before the machine learning model is updated; or receiving, from the network entity, a configuration to update the machine learning model (see page 10647, Algorithm 2, steps 10 and 14) based on a determination that the UE satisfies the one or more selection criteria and without reporting whether the UE satisfies the one or more selection criteria (Algorithm 2, steps 9 and 13; also see pages 10649-10650, selection scenarios).
Regarding claim 4.
Wu and Li teach the method of claim 1,
Wu further teaches further comprising: receiving, from the network entity, a configuration of the first period of time, the second period of time, or both (see page 10647, Algorithm 2, steps 10 and 14, i.e., the second period of time corresponds to the model weight configuration).
Regarding claim 5.
Wu and Li teach the method of claim 1,
Wu further teaches further comprising: receiving the machine learning model from the network entity (see page 10647, Algorithm 2, steps 10 and 14; Figure 3 (3)).
Regarding claim 6.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the machine learning model is received:
before the first period of time (see page 10647, Algorithm 2, step 14), or after the first period of time (see page 10647, Algorithm 2, steps 1-10) and before the machine learning model is updated (see page 10647, Algorithm 2, steps 15 and 18).
Regarding claim 7.
Wu and Li teach the method of claim 1,
Wu further teaches further comprising: performing multiple repetitions of determining whether the UE satisfies the one or more selection criteria and updating the machine learning model (see page 10647, Algorithm 2, step 2).
Regarding claim 8.
Wu and Li teach the method of claim 7,
Wu further teaches further comprising: receiving a new machine learning model from the network entity for each repetition of the multiple repetitions; or receiving a single machine learning model from the network entity for the multiple repetitions, wherein the machine learning model is the single machine learning model (see page 10647, Algorithm 2, steps 2 and 14).
Regarding claim 9.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the machine learning model is trained based on training data collected by the UE during the second period of time (see page 10644, "Continuously arriving RSS data are collected and stored in their mobile devices for local training in each round."; i.e., a training round (see page 10647, Algorithm 2, steps 3-18) comprises the second period of time (see page 10647, Algorithm 2, steps 12-16)). Also see Li regarding the second period of time and the rationale stated for claim 1.
Regarding claim 10.
Wu and Li teach the method of claim 9,
Wu further teaches further comprising: receiving, from the network entity, a configuration of types of the training data to collect (see page 10648, “In Scenario I, clients’ local training data are partially regarded as the unlabeled data and they are collected continuously during the whole FL process. In Scenario II, we utilize a subset of the clients’ data to construct the dataset of requesting user (RU) following the specific moving pattern, which is used to train the prediction model and evaluate the performance of the proposed method for Scenario II.”, i.e., the RU identity configures the type of training data to collect, namely data similar to the RU's data).
Regarding claim 11.
Wu and Li teach the method of claim 1,
Li further teaches wherein the second period of time comprises: one or more update iterations to the machine learning model, or a time window (see page 408, “The whole process iterates until the global model converges.”; also see page 410, Section III.A: "the global pace controller estimates a virtual deadline of the upcoming training round to balance the overall training progress and model accuracy. The controller then broadcasts the virtual deadline to all the participants”). The motivation utilized in the combination of claim 1, supra, applies equally to claim 11.
Regarding claim 12.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the updated parameters comprise updated weights of the machine learning model, updated gradients of the machine learning model, or both (see page 10647, Algorithm 2, step 17).
Regarding claim 13.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the one or more selection criteria comprise:
an area identifier criterion, a covered area criterion, a local dataset size criterion,
a training load balancing criterion, a UE training processing capabilities criterion, a communication channel conditions criterion, a test set performance criterion, or
any combination thereof (see page 10641, Section I: "Scenario II focusing on the personalized localization of users with high mobility and dynamical data distribution is considered [ ... ] the data distribution of one Requesting User (RU) changes during FL because of his high mobility. [ ... ] Prediction based client selection strategy is thus proposed and it can make aggregated FL model provide better service for RU.").
Regarding claim 14.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the machine learning model is a radio frequency fingerprinting (RFFP)-based machine learning model (see page 10640, Introduction: "collect the received signal strength (RSS) measurements as fingerprints from Access Points (APs) at different Reference Points (RPs), and match them with the corresponding physical locations").
Regarding claim 15.
Wu and Li teach the method of claim 1,
Wu further teaches wherein the network entity is a location server, an edge server, or a model repository server (see page 10645, Figure 3; the network entity is a cloud server storing the model and is thus considered a model repository server).
Regarding claim 16.
Wu teaches a method of training a machine learning model performed by a network entity, comprising: transmitting, to a set of user equipments (UEs), one or more selection criteria for determining whether the set of UEs are to participate in training the machine learning model (see page 10641, right column, “machine learning model is trained to predict the data distribution of RU in the next round, and the server can select proper FL participants in advance according to the distribution similarity”; also see page 10647, Step 2 of the workflow: "RU then distributes the predicted result to all the FL clients after Homomorphic Encryption or the Peer-to-Peer (P2P) encryption protocol”, i.e., the selection criteria are implicitly communicated to the UE (client) when it is informed of the RU identity, wherein the criterion is the (predicted) similarity of the UE with respect to a requesting user (RU));
transmitting the machine learning model to at least a subset of UEs of the set of UEs (see page 10649, Algorithm 2, steps 5-10; the determination is concluded when the network entity (server) sends the selection results to the UE in step 10);
receiving updated parameters for the machine learning model from each UE of the subset of UEs (Figure 6; Algorithm 2, steps 13 and 17, "for each selected FL client"); and
updating the machine learning model based on the updated parameters received from each UE of the subset of UEs (Figure 6; Algorithm 2, steps 13 and 17, "for each selected FL client"; also see page 10648, “Step 4: Local training stage of selected FL participants. Practical FL participants selected by the server in the previous round train their local localization models via local datasets. The updated local models parameters then are uploaded to the cloud server”).
Wu does not specifically teach transmitting the machine learning model to at least a subset of UEs of the set of UEs; Wu teaches transmitting to all clients rather than only a subset.
Li teaches transmitting the machine learning model to at least a subset of UEs of the set of UEs (see page 407, “At the beginning of each training round, the central server selects a set of online devices to participate in the training process.” and also see figure 1 on page 408).
Both Wu and Li pertain to the problem of Federated Learning and are thus analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wu and Li to teach the above limitations. The motivation for doing so would be “This paper proposes SmartPC, a hierarchical online pace control framework for Federated Learning that balances the training time and model accuracy in an energy-efficient manner. SmartPC consists of two layers of pace control: global and local. Prior to every training round, the global controller first oversees the status (e.g., connectivity, availability, and energy/resource remained) of every participating device, then selects qualified devices and assigns them a well-estimated virtual deadline for task completion. Within such virtual deadline, a statistically significant proportion (e.g., ≥60%) of the devices are expected to complete one round of their local training and model updates, while the overall progress of multi-round training procedure is kept up adaptively. On each device, a local pace controller then dynamically adjusts device settings such as CPU frequency so that the learning task is able to meet the deadline with the least amount of energy consumption. We performed extensive experiments to evaluate SmartPC on both Android smartphones and simulation platforms using well-known datasets. The experiment results show that SmartPC reduces up to 32.8% energy consumption on mobile devices and achieves a speedup of 2.27 in training time without model accuracy degradation.” (see Li, Abstract).
Regarding claim 17.
Wu and Li teach the method of claim 16,
Wu further teaches further comprising:
receiving, from each UE of the subset of UEs, an indication that the UE satisfies the one or more selection criteria (see page 10647, Algorithm 2, step 9; Section IV.C (Step 3): "All the clients then upload the similarities to the cloud server"). Li further clarifies the subset of UEs based on the rationale of claim 16.
Regarding claim 18.
Wu and Li teach the method of claim 17,
Wu further teaches wherein the machine learning model is transmitted to the UE in response to reception of the indication that the UE satisfies the one or more selection criteria (see page 10647, Algorithm 2, step 9; also see pages 10647-10648, Step 3: "All the clients then upload the similarities to the cloud server").
Regarding claim 19.
Wu and Li teach the method of claim 16,
Wu further teaches further comprising: transmitting, to each UE of the set of UEs, a configuration to report whether the UE satisfies the one or more selection criteria before training the machine learning model; or transmitting, to each UE of the set of UEs, a configuration to train the machine learning model (see page 10647, Algorithm 2, steps 10 and 14) based on a determination that the UE satisfies the one or more selection criteria and without reporting whether the UE satisfies the one or more selection criteria (see page 10647, Algorithm 2, steps 9 and 13; also see pages 10649-10650, selection scenarios).
Regarding claim 20.
Wu and Li teach the method of claim 16,
Wu teaches that the period starts when the RU applies for the localization service and ends when the RU (or the UE) exits the system; however, Li also teaches monitoring. Therefore,
Li further teaches further comprising: transmitting, to each UE of the set of UEs, a configuration of a period of time during which to monitor values of the one or more selection criteria to determine whether the UE satisfies the one or more selection criteria (see page 411, “For each training round j, the local pace control located on each mobile device monitors the starting time t^j_{i,start} and the ending time t^j_{i,end} of the training process. Moreover, it monitors the number of data objects S^j_i that have been processed in each round. We define the training speed of mobile device i in round j as follows…”; also see page 410, “1) At the initialization step, the global controller first oversees the status (e.g., connectivity, availability, and energy/resource remained) of every participating device, then selects qualified devices. After that, each selected mobile device sends the following local information to the central server: 1) hardware information (e.g., available CPU frequency range) and 2) size of local training data”). The motivation utilized in the combination of claim 16, supra, applies equally to claim 20.
Regarding claim 21.
Wu and Li teach the method of claim 16,
Wu further teaches further comprising: transmitting, to each UE of the subset of UEs, a configuration of a period of time during which to train the machine learning model (see page 10647, Algorithm 2, steps 1-10, 14-15, and 18).
Regarding claim 22.
Wu and Li teach the method of claim 21,
Wu further teaches wherein the machine learning model is trained based on training data collected by the subset of UEs during the period of time (see page 10644, “Continuously arriving RSS data are collected and stored in their mobile devices for local training in each round."; in Wu, a training round (Algorithm 2, steps 3-18) comprises the claimed period of time (Algorithm 2, steps 12-16)).
Regarding claim 23.
Wu and Li teach the method of claim 22,
Wu further teaches further comprising: transmitting, to each UE of the subset of UEs, a configuration of types of the training data to collect (see page 10648, “In Scenario I, clients’ local training data are partially regarded as the unlabeled data and they are collected continuously during the whole FL process. In Scenario II, we utilize a subset of the clients’ data to construct the dataset of requesting user (RU) following the specific moving pattern, which is used to train the prediction model and evaluate the performance of the proposed method for Scenario II.”, i.e., the RU identity configures the type of training data to collect, namely data similar to the RU's data).
Regarding claim 24.
Wu and Li teach the method of claim 21,
Li further teaches wherein the period of time comprises:
one or more update iterations to the machine learning model, or a time window (see page 408, “The whole process iterates until the global model converges.”; also see page 410, Section III.A: "the global pace controller estimates a virtual deadline of the upcoming training round to balance the overall training progress and model accuracy. The controller then broadcasts the virtual deadline to all the participants”). The motivation utilized in the combination of claim 16, supra, applies equally to claim 24.
Regarding claim 25.
Wu and Li teach the method of claim 16,
Wu further teaches wherein the updated parameters comprise updated weights of the machine learning model, updated gradients of the machine learning model, or both (see page 10647, Algorithm 2, step 17).
Regarding claim 26.
Wu and Li teach the method of claim 16,
Wu further teaches wherein the one or more selection criteria comprise:
an area identifier criterion, a covered area criterion, a local dataset size criterion, a training load balancing criterion, a UE training processing capabilities criterion, a communication channel conditions criterion, a test set performance criterion, or any combination thereof (see page 10641, Section I: "Scenario II focusing on the personalized localization of users with high mobility and dynamical data distribution is considered [...] the data distribution of one Requesting User (RU) changes during FL because of his high mobility. [...] Prediction based client selection strategy is thus proposed and it can make aggregated FL model provide better service for RU.").
Regarding claim 27.
Wu and Li teach the method of claim 16,
Wu further teaches wherein the machine learning model is a radio frequency fingerprinting (RFFP)-based machine learning model (see page 10640, introduction, "collect the received signal strength (RSS) measurements as fingerprints from Access Points (APs) at different Reference Points (RPs), and match them with the corresponding physical locations").
Regarding claim 28.
Wu and Li teach the method of claim 16,
Wu further teaches wherein the network entity is a location server, an edge server, or a model repository server (see page 10645, Figure 3; the network entity is a cloud server storing the model and is thus considered a model repository server).
Claims 29-43 recite a user equipment (UE), comprising: a memory; at least one transceiver; and at least one processor communicatively coupled to the memory and the at least one transceiver to perform the method recited in claims 1-15. Therefore, the rejection of claims 1-15 above applies equally here. Li also teaches the additional elements of claim 29 not recited in claim 1, comprising a user equipment (UE), comprising: one or more memories; one or more transceivers; and one or more processors communicatively coupled to the one or more memories and the one or more transceivers (see page 413, “We build a prototype on-device Federated Learning system using Android smartphones with heterogeneous hardware configurations, as listed in Table I. The devices have different Android versions (i.e., 5.0.5-8.0), different number of CPU cores (i.e., 4, 6, 8) and different sets of CPU frequencies. The local training process is implemented based on the DL4J [20]. It runs as an asynctask in the background and has no user interface. Communications between the devices and the central server (PaddlePaddle based parameter server) are based on the client-server model as shown in Figure 1.”). The same rationale utilized in claim 1, supra, applies to claim 29.
Claims 44-56 recite a network entity, comprising: a memory; at least one transceiver; and at least one processor communicatively coupled to the memory and the at least one transceiver to perform the method recited in claims 16-28. Therefore, the rejection of claims 16-28 above applies equally here. Li also teaches the additional elements of claim 44 not recited in claim 16, comprising a network entity, comprising: one or more memories; one or more transceivers; and one or more processors communicatively coupled to the one or more memories and the one or more transceivers (see page 413, “We build a prototype on-device Federated Learning system using Android smartphones with heterogeneous hardware configurations, as listed in Table I. The devices have different Android versions (i.e., 5.0.5-8.0), different number of CPU cores (i.e., 4, 6, 8) and different sets of CPU frequencies. The local training process is implemented based on the DL4J [20]. It runs as an asynctask in the background and has no user interface. Communications between the devices and the central server (PaddlePaddle based parameter server) are based on the client-server model as shown in Figure 1.”). The same rationale utilized in claim 16, supra, applies to claim 44.
Claim 57 recites a user equipment (UE) to perform the method recited in claim 1. Therefore the rejection of claim 1 above applies equally here.
Claim 58 recites a network entity to perform the method recited in claim 16. Therefore the rejection of claim 16 above applies equally here.
Claim 59 recites a non-transitory computer-readable medium storing computer-executable instructions to perform the method recited in claim 1. Therefore, the rejection of claim 1 above applies equally here. Li also teaches the additional elements of claim 59 not recited in claim 1, comprising a non-transitory computer-readable medium storing computer-executable instructions (see page 413, “We build a prototype on-device Federated Learning system using Android smartphones with heterogeneous hardware configurations, as listed in Table I. The devices have different Android versions (i.e., 5.0.5-8.0), different number of CPU cores (i.e., 4, 6, 8) and different sets of CPU frequencies. The local training process is implemented based on the DL4J [20]. It runs as an asynctask in the background and has no user interface. Communications between the devices and the central server (PaddlePaddle based parameter server) are based on the client-server model as shown in Figure 1.”). The same rationale utilized in claim 1, supra, applies to claim 59.
Claim 60 recites a non-transitory computer-readable medium storing computer-executable instructions to perform the method recited in claim 16. Therefore, the rejection of claim 16 above applies equally here. Li also teaches the additional elements of claim 60 not recited in claim 16, comprising a non-transitory computer-readable medium storing computer-executable instructions (see page 413, “We build a prototype on-device Federated Learning system using Android smartphones with heterogeneous hardware configurations, as listed in Table I. The devices have different Android versions (i.e., 5.0.5-8.0), different number of CPU cores (i.e., 4, 6, 8) and different sets of CPU frequencies. The local training process is implemented based on the DL4J [20]. It runs as an asynctask in the background and has no user interface. Communications between the devices and the central server (PaddlePaddle based parameter server) are based on the client-server model as shown in Figure 1.”). The same rationale utilized in claim 16, supra, applies to claim 60.
Related arts:
Saxena et al. (US 20220245903 A1) teaches, ¶ 42, “The processors 332, 384, and 394 may therefore provide means for processing, such as means for determining, means for calculating, means for receiving, means for transmitting, means for indicating, etc.”
Xiao et al. (US 20190012575 A1) teaches ¶ 61, “after receiving the new training data set sent by the client, the electronic device may select training data satisfying a preset condition from the new training data set to generate a first training data set.”
Cheng et al. (US 20220137948 A1) teaches ¶ 52, “Candidate device identifier 231 may identify the set of candidate devices using a selection criteria that includes one or more device attributes. In some implementations, the device attributes may include a device property (e.g., a serial number, a device type, device class, etc.), a client identifier (e.g., a unique identifier associated with the device owner or client), a device location (e.g., IP address, GPS location, mailing address of the device owner, etc.), a device usage value (e.g., the number of times a device has been used over a particular period of time)”.
Lappetelainen et al. (US 20150334523 A1) teaches estimating a number of people within a location. The estimation includes obtaining a plurality of estimates of the number of mobile transmitters, and respective estimates of the number of people, within a first location during a first period of time. A mapping function is then determined that maps an estimate of the number of mobile transmitters at a location to an estimate of the number of people at that location, on the basis of the plurality of estimates of the number of mobile transmitters and the respective plurality of estimates of the number of people. The mapping function is used to determine a second estimate of the number of people within a second location during a second period of time, on the basis of a second estimate of the number of mobile transmitters obtained at the second location during the second period of time.
Pittman et al. (US 10521822 B2) teaches geolocation- and time-based advertising. The platform may include receiving, using a communication interface, a first geolocation from a client device. Further, the platform may include receiving, using the communication interface, advertisement content from the client device. Additionally, the platform may include creating, using the processor, an association between the first geolocation and the advertisement content. Further, the platform may include storing, using a storage device, each of the first geolocation, the advertisement content, and the association.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571)272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IMAD KASSIM/Primary Examiner, Art Unit 2129