Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The claims are directed to either a system or a method, each of which is one of the statutory categories of invention. (Step 1: YES)

The examiner has identified system Claim 1 as the claim that represents the claimed invention for analysis; it is similar to Claims 14, 27, and 29. Claim 1 recites the limitations of (additional elements emphasized in bold and considered to be parsed from the remaining abstract idea): “A first network device for wireless communications, the first network device comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to: output, for transmission to one or more second network devices, configuration information associated with a trained machine learning model; receive, from the one or more second network devices, information associated with a first fine-tuned machine learning model based on adaptation of parameters of the trained machine learning model; and output, for transmission to one or more third network devices, configuration information associated with the first fine-tuned machine learning model.
” This is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a mental process (a concept performed in the human mind) and a mathematical concept (mathematical relationships, formulas or equations, or calculations): dynamic adjustment of neural network parameters by using mathematical approximation/forecasting (e.g., via partial differential equations) (see, e.g., Figs. 5A-5B and ¶¶22-28).

If a claim limitation, under its broadest reasonable interpretation (BRI), covers performance of the limitation as a fundamental economic practice, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Similarly, if a claim limitation, under its BRI, covers performance of the limitation in the human mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. (Claims can recite a mental process even if they are claimed as being performed on a computer. Gottschalk v. Benson, 409 U.S. 63; "Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).)

Accordingly, the claim recites an abstract idea. (Step 2A-Prong 1: YES. The claims are abstract)

This judicial exception is not integrated into a practical application.
Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

Claims 1-2 do not recite any hardware components and have been read as including said generic components. Claims 14 and 29 do not recite any generic computer components in their current form; however, in order to advance compact prosecution, they will be assumed to apply the same circuitry as applied in Claim (Applicant should fix in the next action). Each step should be positively recited to include “the processor” performing each step.

The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 1 is directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Mere instructions to implement an abstract idea on or with the use of generic computer components cannot provide an inventive concept, rendering the claim patent ineligible. Thus, claim 1 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more)

The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements (the dependent claims discuss different modes of data collected and updates to the neural network’s parameters which, broadly read, are all generic computer components that further implement the abstract idea) that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination: the dependent claims recite further steps that can be performed in the human mind. Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6, 8-9, 14-16, 19, and 21-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al., US 20220038349 (“Li”). Li teaches:

Re 1: A first network device for wireless communications (abstract: gNB-DU, gNB-CU, or LMF), the first network device comprising: at least one memory (Fig. 2; ¶46: teaching various types of memory); and at least one processor (Fig. 2; ¶45: CPUs and GPUs taught) coupled to the at least one memory and configured to: output, for transmission to one or more second network devices, configuration information associated with a trained machine learning model (abstract; ¶48: teaching transmitting the AI/ML model to UEs); receive, from the one or more second network devices, information associated with a first fine-tuned machine learning model based on adaptation of parameters of the trained machine learning model (abstract: the UE reports updated parameters to the central server/first network device); and output, for transmission to one or more third network devices, configuration information associated with the first fine-tuned machine learning model (abstract: aggregates parameters and updates the AI/ML model; claim 1: see decoding portion).

Re 2: wherein the at least one processor is configured to: receive, from the one or more third network devices, information associated with a second fine-tuned machine learning model based on adaptation of parameters of the first fine-tuned machine learning model (¶¶66-70: see Operations 2 and 4, which can be repeatedly performed).
Re 3: wherein the at least one processor is configured to: output, for transmission to one or more fourth network devices, configuration information associated with the second fine-tuned machine learning model (¶¶66-70: teaching updating any number of UEs with new model parameters).

Re 6: wherein the one or more second network devices includes a single network device (abstract: teaching sending to a UE, which includes being a single network-connected device).

Re 8: wherein the one or more second network devices includes a plurality of network devices (¶¶39-41: teaching a plurality of client devices).

Re 9: wherein the information associated with the first fine-tuned machine learning model received from the one or more second network devices comprises training data (abstract: receives updated parameters from the UE), and wherein the at least one processor is configured to: update the trained machine learning model based on the training data to generate the first fine-tuned machine learning model (abstract: updates the AI/ML model based on the received parameters).

Re 14: A method of wireless communications at a first network device (abstract: gNB-DU, gNB-CU, or LMF), the method comprising: transmitting, to one or more second network devices, configuration information associated with a trained machine learning model (abstract; ¶48: teaching transmitting the AI/ML model to UEs); receiving, from the one or more second network devices, information associated with a first fine-tuned machine learning model based on adaptation of parameters of the trained machine learning model (abstract: the UE reports updated parameters to the central server/first network device); and transmitting, to one or more third network devices, configuration information associated with the first fine-tuned machine learning model (abstract: aggregates parameters and updates the AI/ML model; claim 1: see decoding portion).
Re 15: receiving, from the one or more third network devices, information associated with a second fine-tuned machine learning model based on adaptation of parameters of the first fine-tuned machine learning model (¶¶66-70: see Operations 2 and 4, which can be repeatedly performed).

Re 16: transmitting, to one or more fourth network devices, configuration information associated with the second fine-tuned machine learning model (¶¶66-70: teaching updating any number of UEs with new model parameters).

Re 19: wherein the one or more second network devices includes a single network device (abstract: teaching sending to a UE, which includes being a single network-connected device).

Re 21: wherein the one or more second network devices includes a plurality of network devices (¶¶39-41: teaching a plurality of client devices).

Re 22: wherein the information associated with the first fine-tuned machine learning model received from the one or more second network devices comprises training data (abstract: receives updated parameters from the UE), and wherein the method further comprises: updating the trained machine learning model based on the training data to generate the first fine-tuned machine learning model (abstract: updates the AI/ML model based on the received parameters).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-5, 7, 13, 17-18, 20, 26-27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of LASKARIDIS, US 20220245459 (“LASKARIDIS”).

Re 4-5, 7, 17-18, and 20: Li teaches (¶¶66-70; abstract; claim 1; Figs. 1-3):

Claims 4 and 17: wherein the configuration information associated with the trained machine learning model includes one or more parameters of the trained machine learning model.

Claims 5 and 18: wherein the configuration information associated with the first fine-tuned machine learning model includes one or more parameters of the first fine-tuned machine learning model.

Claims 7 and 20: wherein the information associated with the first fine-tuned machine learning model received from the one or more second network devices is the configuration information associated with the first fine-tuned machine learning model, and wherein the configuration information associated with the first fine-tuned machine learning model comprises one or more parameters of the first fine-tuned machine learning model.

Li does not explicitly teach, while LASKARIDIS teaches (¶¶43-45):

Claims 4 and 17: wherein the configuration information associated with the trained machine learning model includes architecture information for the trained machine learning model.

Claims 5 and 18: wherein the configuration information associated with the first fine-tuned machine learning model includes architecture information for the first fine-tuned machine learning model.

Claims 7 and 20: wherein the configuration information associated with the first fine-tuned machine learning model comprises architecture information for the first fine-tuned machine learning model.

LASKARIDIS teaches that the model is a super-model with nested submodels and corresponding weights, as well as receiving gradients/weights corresponding to submodels. In short, it teaches model structure/architecture plus parameters.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with LASKARIDIS's teachings in order to more fully update UEs with the full spectrum of the updated model, thereby better adapting individual devices to the feedback received by the first network device.

Re 13 and 26: Li does not explicitly teach, while LASKARIDIS teaches (abstract; ¶¶10, 41, 44-45, 54; claims 4-6: all discussing the capabilities of each UE):

Claim 13: wherein the at least one processor is configured to: receive capability information from the one or more second network devices, the capability information being associated with capability of the one or more second network devices to fine-tune one or more trained machine learning models; and output the configuration information based on the received capability information.

Claim 26: further comprising: receiving capability information from the one or more second network devices, the capability information being associated with capability of the one or more second network devices to fine-tune one or more trained machine learning models; and transmitting the configuration information based on the received capability information.

LASKARIDIS explicitly recognizes that the differing capabilities across UEs “means that it can be difficult to implement federated learning of a global model in a reasonable time frame.” As such, receiving capability information allows for greater precision in updating these models as a function of such capabilities, to ensure their operation is optimized. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with LASKARIDIS's teachings in order to optimize the operation of each UE based on its capabilities.

Re 27: Li teaches: A first network device for wireless communications (abstract: gNB-DU, gNB-CU, or LMF), the first network device comprising: output, for transmission to the second network device, information associated with a fine-tuned machine learning model based on adaptation of parameters of the trained machine learning model (abstract; claim 1).

Li does not explicitly teach, while LASKARIDIS teaches (abstract; ¶¶10, 41, 44-45, 54; claims 4-6: all discussing the capabilities of each UE): output, for transmission to a second network device, capability information associated with capability of the first network device to fine-tune one or more trained machine learning models; receive, from the second network device based on the capability information, configuration information associated with a trained machine learning model; and

LASKARIDIS explicitly recognizes that the differing capabilities across UEs “means that it can be difficult to implement federated learning of a global model in a reasonable time frame.” As such, receiving capability information allows for greater precision in updating these models as a function of such capabilities, to ensure their operation is optimized. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with LASKARIDIS's teachings in order to optimize the operation of each UE based on its capabilities.
Re 29: Li teaches: A method of wireless communications at a first network device (abstract: gNB-DU, gNB-CU, or LMF), the method comprising: transmitting, to the second network device, information associated with a fine-tuned machine learning model based on adaptation of parameters of the trained machine learning model (abstract; claim 1).

Li does not explicitly teach, while LASKARIDIS teaches (abstract; ¶¶10, 41, 44-45, 54; claims 4-6: all discussing the capabilities of each UE): transmitting, to a second network device, capability information associated with capability of the first network device to fine-tune one or more trained machine learning models; receiving, from the second network device based on the capability information, configuration information associated with a trained machine learning model; and

LASKARIDIS explicitly recognizes that the differing capabilities across UEs “means that it can be difficult to implement federated learning of a global model in a reasonable time frame.” As such, receiving capability information allows for greater precision in updating these models as a function of such capabilities, to ensure their operation is optimized. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with LASKARIDIS's teachings in order to optimize the operation of each UE based on its capabilities.

Claims 10-11 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Hsu et al., US 12,423,618 (“Hsu”).

Re 10-11 and 23-24: Li does not explicitly teach, while Hsu teaches (Fig. 6; col. 6, ll. 62-67 & col. 7, ll. 1-9):

Claims 10 and 23: wherein the training data comprises gradient data.
Claims 11 and 24: wherein the gradient data comprises at least first gradient data from a first of the plurality of network devices and second gradient data from a second of the plurality of network devices.

Hsu teaches that each network device's parameters (such as gradient data) are transmitted to the moderator and then aggregated to update the general model. As such, each UE's model can contribute what it has learned via interaction with the user or the general computing environment to further refine the general model, thereby flowing the improvements across all UEs. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with Hsu's teachings in order to more precisely refine a general model based on interactions across all UEs.

Claims 12 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Feng et al., US 20250023795 (“Feng”).

Re 12 and 25: Li does not explicitly teach, while Feng teaches (¶¶169-170):

Claim 12: wherein the at least one processor is configured to: output, for transmission to the one or more second network devices, deadline information indicating a deadline for adapting the parameters of the trained machine learning model.

Claim 25: further comprising: transmitting, to the one or more second network devices, deadline information indicating a deadline for adapting the parameters of the trained machine learning model.

Imposing a deadline for the update of parameters allows for time discrimination in the receipt of parameters, allowing general/central models to ensure parameters reflect certain conditions. For instance, it allows for models to be “rolled back” or only updated after certain critical events take place.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li with Feng's teachings in order to allow for selective updating of the model to reflect time-sensitive criteria.

Claims 28 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Li and LASKARIDIS as applied to claims 27 and 29 (respectively) above, and further in view of Feng.

Re 28 and 30: Li and LASKARIDIS do not explicitly disclose, while Feng teaches (¶¶169-170): wherein the at least one processor is configured to: receive, from the second network device, deadline information indicating a deadline for adapting the parameters of the trained machine learning model.

Imposing a deadline for the update of parameters allows for time discrimination in the receipt of parameters, allowing general/central models to ensure parameters reflect certain conditions. For instance, it allows for models to be “rolled back” or only updated after certain critical events take place. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine Li and LASKARIDIS with Feng's teachings in order to allow for selective updating of the model to reflect time-sensitive criteria.

Conclusion

Relevant prior art considered: US 20220182802, teaching that a base station may transmit, to a user equipment (UE), a federated learning configuration that indicates one or more parameters of a federated learning procedure associated with a machine learning component.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD J SUFLETA II, whose telephone number is (571) 272-4279. The examiner can normally be reached M-F 9AM-6PM EDT/EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ABDULMAJEED AZIZ, can be reached at (571) 270-5046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

GERALD J. SUFLETA II
Primary Examiner
Art Unit 2875

/GERALD J SUFLETA II/
Primary Examiner, Art Unit 2875