DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The instant application is a National Stage entry of PCT/EP2021/058938, International Filing Date: 04/06/2021.
Preliminary Amendments
The preliminary amendments received on 11/01/2022 have been considered and entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 8-11, 13-19, 25, 27-28 and 34 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
As to claim 1, the claim recites the limitation “m_k (t)” in line 6. It is unclear what the variable “t” represents, rendering the claim indefinite.
As to claim 8, the claim further recites the phrase “in terms of” in line 4. This phrase renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
As to claims 8, 25 and 28, the claims are also rejected under 35 U.S.C. 112(b) for the same reason as claim 1.
As to the claims that depend on claim 1, 8, 25 or 28, the dependent claims are also rejected under 35 U.S.C. 112(b) for the same reasons as their respective base claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 8-11, 13-17, 25, 27-28 and 34 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pezeshki et al. (Pub. No.: US 20220124779 A1).
As to claim 1, Pezeshki teaches a method for configuring agent entities with a reporting condition for reporting computational results during an iterative learning process, the method being performed by a server entity (paragraph [0056], “the base station 305 may transmit a machine learning component to each of the UEs 310”, “UEs” teaches agent entities, “base station” teaches a server entity), the method comprising: configuring the agent entities with a computational task and a reporting condition (paragraph [0072], “…the base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component…”), wherein the agent entities are to contend for channel access to report computational results of the computational task to the server entity only when an importance metric m_k (t) satisfies the reporting condition (paragraph [0082], “The configured grant configuration may indicate that, if the update condition is satisfied, the UE 405 is to transmit the machine learning component update” and paragraph [0075]); and
performing the iterative learning process with the agent entities until a termination criterion is met (paragraph [0067], “The second communication manager 325 may update the global machine learning component using multiple rounds of updates from the UEs 310 until a global loss function is minimized (which may be referred to as “convergence” of the machine learning component)”, “convergence” teaches a termination criterion).
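Examiner's note (illustrative only, not part of the claim mapping): the claimed server-side flow may be summarized by the following Python sketch. The interfaces agent.configure and agent.maybe_report, the learning-rate update rule, and the convergence test are hypothetical assumptions and are not drawn from the claims or from Pezeshki.

    # Hypothetical sketch of the claimed server-side method; not Pezeshki's code.
    def run_server(agents, initial_theta, reporting_condition,
                   learning_rate=0.1, max_rounds=100, tol=1e-4):
        # Configure the agent entities with the computational task and the
        # reporting condition (the agents self-gate their own reporting).
        for agent in agents:
            agent.configure(reporting_condition)

        theta = initial_theta
        for t in range(max_rounds):
            # Each agent contends for channel access and reports only when its
            # importance metric m_k(t) satisfies the reporting condition; an
            # agent that does not report is represented here by None.
            reports = [a.maybe_report(theta, t) for a in agents]
            reports = [r for r in reports if r is not None]
            if not reports:
                continue
            aggregate = sum(reports) / len(reports)      # average the updates
            theta = theta - learning_rate * aggregate    # gradient-style step
            if abs(aggregate) < tol:                     # termination criterion
                break                                    # ("convergence")
        return theta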
As to claim 2, Pezeshki teaches wherein the server entity during each iteration of the iterative learning process:
provides a parameter vector of the computational task to the agent entities (paragraph [0084], “the base station 410 may transmit, and the UE 405 may receive, a global update associated with the machine learning component”);
obtains, in accordance with the reporting condition, computational results as a function of the parameter vector from the agent entities (paragraph [0084], “the base station 410 may transmit, and the UE 405 may receive, a global update associated with the machine learning component”); and
updates the parameter vector as a function of an aggregate of the obtained computational results when the aggregate of the obtained computational results for the iteration fails to satisfy the termination criterion (paragraph [0066], “…the base station 305 (e.g., using the second communication manager 325) may aggregate the updates received from the UEs 310 corresponding to a round of federated learning. For example, the second communication manager 325 may average the received gradients to determine an aggregated update…”).
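Examiner's note (illustrative only): the per-iteration server behavior mapped above, read on Pezeshki's averaging of received gradients ([0066]), may be sketched as follows; the use of numpy and the specific learning-rate update rule are assumptions, not disclosures of the reference.

    import numpy as np

    def server_iteration(theta, reported_gradients, learning_rate=0.1):
        # Aggregate the computational results obtained in accordance with the
        # reporting condition: average the received gradients (cf. [0066]).
        aggregate = np.mean(np.stack(reported_gradients), axis=0)
        # Update the parameter vector as a function of the aggregate.
        return theta - learning_rate * aggregate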
As to claim 3, Pezeshki teaches wherein the importance metric m_k (t) satisfies the reporting condition when the importance metric m_k (t) exceeds a threshold value t_k (paragraph [0082], “…the configured grant configuration may indicate an update condition corresponding to the configured grant. The update condition may include any number of different rules, thresholds, and/or ranges, among other examples…”).
As to claim 4, Pezeshki teaches wherein, according to the configuring, the agent entities are to contend for channel access by performing a random access procedure (paragraph [0079], “…The configured grant configuration may be carried in a random access channel (RACH) message during a RACH procedure…”).
As to claim 8, Pezeshki teaches a method, performed by an agent entity, for being configured by a server entity with a reporting condition for reporting computational results during an iterative learning process (paragraph [0056], “the base station 305 may transmit a machine learning component to each of the UEs 310”, “UEs” teaches agent entities, “base station” teaches a server entity), the method comprising: obtaining configuring in terms of a computational task and a reporting condition from the server entity (paragraph [0072], “…the base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component…”), wherein the agent entity is to contend for channel access to report computational results of the computational task to the server entity only when an importance metric m_k (t) satisfies the reporting condition (paragraph [0082], “The configured grant configuration may indicate that, if the update condition is satisfied, the UE 405 is to transmit the machine learning component update” and paragraph [0075]); and
performing the iterative learning process with the server entity until a termination criterion is met, wherein, as part of the learning process (paragraph [0067], “The second communication manager 325 may update the global machine learning component using multiple rounds of updates from the UEs 310 until a global loss function is minimized (which may be referred to as “convergence” of the machine learning component)”, “convergence” teaches a termination criterion), the agent entity contends for channel access to report a computational result for an iteration of the learning process to the server entity only when the importance metric m_k (t) satisfies the reporting criterion (paragraph [0082], “The configured grant configuration may indicate that, if the update condition is satisfied, the UE 405 is to transmit the machine learning component update” and paragraph [0075]).
As to claim 9, Pezeshki teaches wherein the agent entity during each iteration of the iterative learning process:
obtains a parameter vector of the computational task from the server entity (paragraph [0084], “the base station 410 may transmit, and the UE 405 may receive, a global update associated with the machine learning component”);
determines the computational result of the computational task as a function of the obtained parameter vector for the iteration and of data locally obtained by the agent entity (paragraph [0084], “the base station 410 may transmit, and the UE 405 may receive, a global update associated with the machine learning component”); and
contends for channel access to report the computational result for the iteration to the server entity only when the importance metric m_k (t) satisfies the reporting criterion (paragraph [0082], “The configured grant configuration may indicate that, if the update condition is satisfied, the UE 405 is to transmit the machine learning component update” and paragraph [0075]).
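Examiner's note (illustrative only): a minimal Python sketch of the agent-side iteration mapped above. The least-squares task, the importance_metric callable, and the contend_and_report stub are hypothetical placeholders, not features of Pezeshki.

    import numpy as np

    def contend_and_report(result):
        # Hypothetical stub for random-access contention and reporting.
        pass

    def agent_iteration(theta, local_X, local_y, importance_metric, t_k, t):
        # Determine the computational result from the obtained parameter
        # vector and locally obtained data (hypothetical least-squares task).
        residual = local_X @ theta - local_y
        gradient = 2.0 * local_X.T @ residual / len(local_y)
        # Contend for channel access only when m_k(t) satisfies the reporting
        # condition; otherwise stay silent for this iteration.
        if importance_metric(gradient, t) > t_k:
            contend_and_report(gradient)
        return gradient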
As to claim 10, Pezeshki teaches wherein the importance metric m_k (t) satisfies the reporting condition when the importance metric m_k (t) exceeds a threshold value t_k (paragraph [0082], “…the configured grant configuration may indicate an update condition corresponding to the configured grant. The update condition may include any number of different rules, thresholds, and/or ranges, among other examples…”).
As to claim 11, Pezeshki teaches wherein the importance metric m_k (t) is a function of a gradient update |∇_k(t)| computed by the agent entity as part of determining the computational result for iteration t of the iterative learning process, and wherein contention for channel access is made only when the gradient update for iteration t exceeds a threshold value t_k (paragraph [0064], “…a UE 310 may transmit a compressed set of gradients…”).
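Examiner's note (illustrative only): a gradient-norm reading of the importance metric of claim 11; the choice of the L2 norm as the function of the gradient update is an assumption, not a disclosure of Pezeshki.

    import numpy as np

    def satisfies_gradient_condition(gradient, t_k):
        # m_k(t) = |∇_k(t)|, the norm of the local gradient update for
        # iteration t; contend only when it exceeds the threshold t_k.
        return np.linalg.norm(gradient) > t_k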
As to claim 13, Pezeshki teaches wherein the importance metric m_k (t) is a function of number of iterations n since recent-most contention for channel access was made, and wherein contention for channel access is made only when the number of iterations n of the iterative learning process exceeds the threshold value t_k (paragraph [0063], “By repeating this process of training the machine learning component to determine the gradients g.sub.k.sup.(n) a number of times, the first communication manager 320 may determine an update corresponding to the machine learning component. Each repetition, by the first communication manager 320, of the training procedure described above may be referred to as an epoch”).
As to claim 14, Pezeshki teaches wherein the importance metric m_k (t) is a function of number of iterations n of the iterative learning process since recent-most reporting of the computational result was made, and wherein contention for channel access is made only when the number of iterations n exceeds the threshold value t_k (paragraph [0063], “By repeating this process of training the machine learning component to determine the gradients g.sub.k.sup.(n) a number of times, the first communication manager 320 may determine an update corresponding to the machine learning component. Each repetition, by the first communication manager 320, of the training procedure described above may be referred to as an epoch”).
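Examiner's note (illustrative only): an iteration-count reading of the importance metric of claims 13-14, assuming a simple counter of iterations since the recent-most contention or report.

    def satisfies_staleness_condition(t, last_report_t, t_k):
        # m_k(t) = n, the number of iterations since the recent-most report;
        # contend only when n exceeds the threshold t_k.
        n = t - last_report_t
        return n > t_k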
As to claim 15, Pezeshki teaches wherein the importance metric m_k (t) is a function of channel variation, over at least two iterations of the iterative learning process, of a radio propagation channel over which the computational result is to be reported, and wherein contention for channel access is made only when the channel variation exceeds the threshold value t_k (paragraph [0063], “By repeating this process of training the machine learning component to determine the gradients g.sub.k.sup.(n) a number of times, the first communication manager 320 may determine an update corresponding to the machine learning component. Each repetition, by the first communication manager 320, of the training procedure described above may be referred to as an epoch” and paragraph [0038]).
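Examiner's note (illustrative only): a channel-variation reading of the importance metric of claim 15; taking the variation as the absolute difference of the channel quality over two consecutive iterations is an assumption.

    def satisfies_channel_variation_condition(h_curr, h_prev, t_k):
        # m_k(t) is the variation, over at least two iterations, of the radio
        # propagation channel over which the result is to be reported, here
        # |h_k(t) - h_k(t-1)|; contend only when it exceeds the threshold t_k.
        return abs(h_curr - h_prev) > t_k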
As to claim 16, Pezeshki teaches wherein the importance metric m_k (t) is a function of a local parameter updated as part of performing a recent-most iteration of the iterative learning process, and wherein contention for channel access is made only when the local parameter exceeds the threshold value t_k (paragraph [0064], “the UEs 310 may transmit their respective local updates (shown as “local update 1, . . . , local update k, . . . , local update K”) to the base station 305”).
As to claim 17, Pezeshki teaches wherein the importance metric m_k (t) is mapped onto an access probability value p, and wherein contention for channel access is made only when p>x, where x is a uniformly distributed random variable in an interval [0,1] and defines the threshold value t_k (paragraph [0079], “…The configured grant configuration may be carried in a random access channel (RACH) message during a RACH procedure. The RACH procedure may include a four-step RACH procedure or a two-step RACH procedure…”).
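Examiner's note (illustrative only): a probabilistic-gating sketch of claim 17, in which the importance metric is mapped onto an access probability p and compared against a uniform random draw x; the mapping function is a hypothetical placeholder.

    import random

    def satisfies_probabilistic_condition(m_k, map_to_probability):
        # Map the importance metric m_k(t) onto an access probability p, then
        # draw x uniformly from [0, 1]; x plays the role of the threshold t_k,
        # and contention occurs only when p > x.
        p = map_to_probability(m_k)
        x = random.uniform(0.0, 1.0)
        return p > x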
As to claims 25 and 27, the limitations of these claims are substantially similar to those of claims 1 and 2, respectively. Please refer to each respective claim above.
As to claims 28 and 34, the limitations of these claims are substantially similar to those of claims 8 and 9, respectively. Please refer to each respective claim above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (Pub. No.: US 20220124779 A1) in view of Wang et al. (Pub. No.: US 20230259789 A1).
As to claim 12, Pezeshki does not explicitly teach, but in the same field of endeavor (federated learning) Wang teaches wherein the importance metric m_k (t) is a function of a channel quality value h_k (t), as valid for iteration t of the iterative learning process, for a radio propagation channel over which the computational result is to be reported, and wherein contention for channel access for iteration t is made only when the channel quality as valid for iteration t exceeds the threshold value t_k (paragraph [0110], “the trigger event comprises the first signal or link quality parameter changing by more than the second threshold value, and wherein the first signal or link quality parameter comprises: received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator”).
Based on Pezeshki in view of Wang, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate channel quality metrics for reporting results (taught by Wang) with reporting computational results (taught by Pezeshki) in order to prevent data loss.
As to claim 18, Pezeshki does not explicitly teach, but in the same field of endeavor (federated learning) Wang teaches wherein the importance metric m_k (t) is a function of an attainable quality of service, QoS, value as attainable when reporting over the radio propagation channel, wherein the threshold value t_k is mapped onto a required QoS value as required for reporting the computational result, and wherein contention for channel access for iteration t is made only when the attainable QoS value exceeds the required QoS value (paragraph [0110], “the trigger event comprises the first signal or link quality parameter changing by more than the second threshold value, and wherein the first signal or link quality parameter comprises: received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator”).
Based on Pezeshki in view of Wang, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate quality of service metrics for reporting results (taught by Wang) with reporting computational results (taught by Pezeshki) in order to prevent data loss.
As to claim 19, Wang further teaches wherein the attainable QoS value is determined from a channel quality value h_k (t), as valid for iteration t, for a radio propagation channel over which the computational result is to be reported (paragraph [0110], “the trigger event comprises the first signal or link quality parameter changing by more than the second threshold value, and wherein the first signal or link quality parameter comprises: received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator”). The limitations of claim 19 are rejected in view of the analysis of claim 18 above, and the rationale to combine, as discussed in claim 18, applies here as well.
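Examiner's note (illustrative only): a QoS-gating sketch covering the conditions of claims 12 and 18-19, assuming a hypothetical mapping from the channel quality h_k(t) to an attainable QoS value; neither the mapping nor the comparison form is drawn from Wang.

    def satisfies_qos_condition(h_k_t, attainable_qos_from, required_qos):
        # Derive the attainable QoS from the channel quality h_k(t) valid for
        # iteration t (e.g., a hypothetical SINR-to-rate mapping), and contend
        # only when the attainable QoS exceeds the required QoS.
        return attainable_qos_from(h_k_t) > required_qos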
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDULKADER M ALRIYASHI whose telephone number is (313)446-6551. The examiner can normally be reached Monday - Friday, 8AM - 5PM (alternating Fridays), EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOON HWANG can be reached at (571)272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Abdulkader M Alriyashi/
Primary Examiner, Art Unit 2447
1/24/2026