Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/26/2025 has been entered.
Claims 1, 2, 4, 5, 7, 9-17, 21-23 & 26-28 are pending and presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/12/2025 was filed after the mailing date of the Final Rejection on 10/27/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
Claims 3, 24 & 25 have been cancelled. Claims 6, 8 & 18-20 were previously cancelled.
Claims 1, 2, 4, 17, 22 & 23 have been amended.
Claims 26-28 have been added.
Response to Arguments
Applicant’s arguments, see “Remarks”, filed 12/26/2025, with respect to the objection to claim 17 have been fully considered and are persuasive. Therefore, the objection has been withdrawn.
Applicant’s arguments, see “Remarks”, filed 12/26/2025, with respect to the rejection of claim 17 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Therefore, the rejection of claim 17 under 35 U.S.C. 112(b) has been withdrawn.
Applicant's arguments filed 12/26/2025 have been fully considered but they are not persuasive.
Regarding claim 1, applicant submits that this claim is patentable because Shen, Narayanan and Anderson individually or in combination fail to disclose all the limitations in amended claim 1. Examiner respectfully disagrees noting that, per 35 U.S.C. 103, a patent for a claimed invention may not be obtained if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains (see MPEP § 2141).
Applicant argues that Shen does not disclose activating a first training process as recited in claim 1. Examiner agrees but submits that this argument is moot since examiner relies on Narayanan to teach activating a first training process.
Applicant argues that Shen, Narayanan and Anderson fail to disclose or teach iterative model training in a first training process, or iteratively sending, to the network device using a semi-persistent resource, first parameter information of a model obtained from performing iterative model training on the model, or that iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device. Examiner respectfully disagrees noting that [0119] & [0135] of Narayanan teach that training may occur over a number of epochs, where each epoch is a full training pass over an entire training dataset, and during each iteration of training learnable parameters may be affected. [0106]-[0107] of Narayanan teach that a WTRU may be configured to report reconstruction loss (i.e. first parameter information) to the network semi-persistently using PUCCH resources. [0012] of Narayanan discloses that the reconstruction loss is part of an iterative training process where a first node (i.e. a WTRU) updates learnable parameters to reduce the reconstruction loss value. A broadest reasonable interpretation is that the WTRU iteratively and semi-persistently sends the reconstruction loss value at each iteration of the training process. [0126] of Narayanan teaches that a MAC CE/DCI may be signaled (i.e. by a network device) to the WTRU to deactivate a training configuration set.
Applicant argues that Narayanan only discloses that a MAC CE/DCI activates or deactivates a configuration and activating the configuration may trigger the WTRU to perform online training, and that Narayanan says nothing about a DCI to deactivate an iterative training process that continues until the DCI is received. Examiner respectfully disagrees noting that a broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, activating and deactivating a configuration set through a DCI may be interpreted as activating and deactivating a training process through a DCI. As discussed above, Narayanan also teaches that the training process may be an iterative training process. Thus, Narayanan teaches that an iterative training process may be activated through a first DCI and deactivated through a second DCI. A broadest reasonable interpretation is that a WTRU would begin the iterative training process upon receiving the first DCI to activate the training process, and would continue the iterative training process until the WTRU receives the second DCI indicating to deactivate the training process.
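For illustration only, the broadest reasonable interpretation discussed above may be summarized as the following terminal-side control flow. This is a hypothetical sketch, not code from Shen, Narayanan, Anderson, or the instant application; all names (Dci, train_step, send_semi_persistent, and the action strings) are assumed stand-ins.

from dataclasses import dataclass
from queue import Empty, Queue


@dataclass
class Dci:
    action: str       # e.g. "ACTIVATE_TRAINING" or "DEACTIVATE_TRAINING" (assumed)
    process_id: int   # identifier of the training process


def run_first_training_process(dci_queue: Queue, train_step, send_semi_persistent) -> None:
    """Iteratively train between an activating DCI and a deactivating DCI.

    train_step() performs one training iteration and returns parameter
    information (e.g. a reconstruction-loss value); send_semi_persistent()
    stands in for reporting on a configured semi-persistent uplink resource.
    """
    active = False
    while True:
        try:
            dci = dci_queue.get_nowait()              # poll for control signaling
        except Empty:
            dci = None
        if dci is not None and dci.action == "ACTIVATE_TRAINING":
            active = True                             # first DCI activates the process
        elif dci is not None and dci.action == "DEACTIVATE_TRAINING":
            return                                    # training stops only on the second DCI
        if active:
            loss = train_step()                       # one iteration of model training
            send_semi_persistent(loss)                # report at every iteration

Under this reading, the loop body (training plus per-iteration reporting) runs continuously from the activating DCI until the deactivating DCI arrives, which is the behavior the examiner maps to the claimed first and second DCI.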
Based on the above discussion, examiner maintains the rejection of claim 1 under 35 U.S.C. 103 based on Shen in view of Narayanan and Anderson.
Regarding claim 22, applicant submits that this claim is patentable based on similar amendments and arguments as made for claim 1. Examiner respectfully disagrees and, for the same reasons as discussed above, maintains the rejection of claim 22 under 35 U.S.C. 103 based on Shen in view of Narayanan and Anderson.
Regarding claims 2, 4, 5, 7, 9-17, 21, 23 & 26-28, applicant submits that these claims are patentable based on the amendments and arguments made for claims 1 & 22 and on their dependence from claim 1 or claim 22. Examiner respectfully disagrees and, for the same reasons as discussed above, maintains or introduces rejections of these claims under 35 U.S.C. 103 based on Shen in view of Narayanan and Anderson and further in view of other references cited in this Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4, 5, 7, 9, 21, 22 & 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (WO 2021/035724)(hereinafter “Shen”) in view of Narayanan et al. (US 2023/0409963)(hereinafter “Narayanan”) and further in view of Anderson et al. (US 2022/0020142)(hereinafter “Anderson”).
Regarding Claim 1, Shen discloses a communication method comprising: receiving, by a terminal device, first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information (Fig 2 & [0038] disclose wireless communication between a base station 110 (i.e. a network device) and a terminal 120 where the terminal receives a DCI from the base station. [0045] discloses a method wherein the DCI received by the terminal indicates a resource to be used for transmitting a training set. [0046] discloses the training set is used to train a machine learning model. [0051] discloses a target bit field included in the DCI to determine whether the resource indicated in the DCI is used to transmit the training set. [0063] discloses that when it is determined that the resource is used for transmitting a training set, the target bit field may also indicate use of the training set to train the machine learning model (i.e. an indication to activate a training process). [0047]-[0048] discloses the training set includes at least one known target configuration including scheduling parameters (i.e. target information).).
Shen fails to disclose wherein the first DCI is for activating a first training process; in response to receiving the first DCI, activating the first training process and iteratively performing model training on a first model in the first training process, the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from performing iterative model training on the first model, and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device; receiving, by the terminal device, second DCI from the network device, wherein the second DCI is for deactivating the first training process; and in response to receiving the second DCI, deactivating, by the terminal device, the first training process.
However, Narayanan teaches wherein the first DCI is for activating a first training process ([0126] discloses a DCI may be used to activate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. activate a specific training process).);
in response to receiving the first DCI, activating, by the terminal device, the first training process and iteratively performing model training on a first model in the first training process ([0126] discloses a WTRU receiving a DCI that may be used to activate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. activate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, activating a specific configuration set may be interpreted as activating a specific training process. The DCI that activates the specific configuration set may also trigger the WTRU to perform online training. [0119] & [0135] disclose that training may occur over a number of epochs, where each epoch is a full training pass over an entire training dataset, and during each iteration of training learnable parameters may be affected.),
the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from performing iterative model training on the first model ([0106]-[0107] disclose that a WTRU may be configured to report reconstruction loss (i.e. first parameter information) to the network semi-persistently using PUCCH resources. [0012] discloses that the reconstruction loss is part of an iterative training process where a first node (i.e. a WTRU) updates learnable parameters to reduce the reconstruction loss value. A broadest reasonable interpretation is that the WTRU iteratively and semi-persistently sends the reconstruction loss value at each iteration of the training process.), and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device ([0126] discloses that a MAC CE/DCI may be signaled (i.e. by a network device) to the WTRU to deactivate a training configuration set.);
receiving, by the terminal device, second DCI from the network device, wherein the second DCI is for deactivating the first training process ([0126] discloses a DCI may be signaled (i.e. received) by a WTRU to deactivate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. deactivate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, deactivating a specific configuration set may be interpreted as deactivating a specific training process.); and
in response to receiving the second DCI, deactivating, by the terminal device, the first training process ([0126] discloses a DCI may be signaled to deactivate, by the WTRU, a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. deactivate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, a WTRU deactivating a specific configuration set in response to receiving a DCI indicating to deactivate a specific configuration set may be interpreted as the WTRU deactivating a specific training process in response to receiving a DCI indicating to deactivate a specific configuration set.).
Therefore, it would have been obvious to someone having ordinary skill in the art prior to the effective filing date of the claimed invention to have a communication method comprising: receiving, by a terminal device, first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information, as disclosed by Shen, wherein the first DCI is for activating a first training process; in response to receiving the first DCI, activating, by the terminal device, the first training process and iteratively performing model training on a first model in the first training process, the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from performing iterative model training on the first model, and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device; receiving, by the terminal device, second DCI from the network device, wherein the second DCI is for deactivating the first training process; and in response to receiving the second DCI, deactivating, by the terminal device, the first training process, as taught by Narayanan. The motivation to do so would be to have a communication method where a WTRU can receive a first DCI that provides information for a plurality of training sets (or training processes) for a training model and indicates that a specific training set is activated and training should be performed; in response to receiving the first DCI, the WTRU iteratively performs training using the specific training set and at each iteration sends a reconstruction loss value to the network using semi-persistent PUCCH resources, so that the network can determine when the WTRU has accurately reconstructed the training model; and the network can then send a second DCI indicating that the specific training set is deactivated, in response to which the WTRU deactivates the specific training process. This improves performance of the WTRU while minimizing processing time in the WTRU by activating, using well-defined standards-based protocols (i.e., DCI), specific training sets for the training model only while the WTRU is iteratively performing training to reduce reconstruction loss to a point where the training model is accurate.
Shen fails to disclose wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model.
However, Anderson further teaches wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model ([0012] discloses applying a first trained model to input data to obtain first output data and applying a second trained model to the input data to obtain a second output, wherein the first and second trained models are dependent on a hierarchical relationship between the first and second outputs. Both the first and second trained models are part of the same training process, with a hierarchical dependency between their outputs, corresponding to the same clinical-data target information.).
Therefore, it would have been obvious to someone having ordinary skill in the art prior to the effective filing date of the claimed invention to have a communication method comprising: receiving first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information, as disclosed by Shen, wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model, as further taught by Anderson. The motivation to do so would be to have a communication method where a UE can receive a DCI that activates a specific training set (or training process) of a plurality of training sets and initiates training at the UE using two training models with a hierarchical dependency between their outputs, in order to improve training accuracy compared to using a single training model.
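For illustration only, the two-sided arrangement mapped above — a first model trained at the terminal and a second model trained at the network device within a single training process on the same target information — may be sketched as follows. The toy data, the linear models, and all names are assumptions and do not come from Shen, Narayanan, or Anderson.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
y = x @ np.ones(4)                       # shared target information (toy data)

w_first = np.zeros(4)                    # first model, trained at the terminal
w_second = np.zeros(4)                   # second model, trained at the network device

for _ in range(200):                     # iterations of the single training process
    # Terminal side: one iteration of training on the first model.
    err_first = x @ w_first - y
    w_first -= 0.05 * (2 * x.T @ err_first / len(y))
    loss = float(np.mean(err_first ** 2))   # analogue of "first parameter information",
                                            # reported semi-persistently in the mapping
    # Network side: one iteration of training on the second model,
    # using the same target information in the same training process.
    err_second = x @ w_second - y
    w_second -= 0.05 * (2 * x.T @ err_second / len(y))

print(f"terminal-side loss after training: {loss:.4f}")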
Regarding Claims 4 & 26, Shen in view of Narayanan and further in view of Anderson disclose the method according to claim 1 and the apparatus according to claim 22.
Shen fails to disclose wherein the first DCI is further for activating or indicating the semi-persistent resource.
However, Narayanan teaches wherein the first DCI is further for activating or indicating the semi-persistent resource ([0107] discloses that a PUCCH resource may be used to semi-persistently report reconstruction loss, and that a DCI may be used to request (i.e. activate) the aperiodic reporting of reconstruction loss.).
Therefore, it would have been obvious to someone having ordinary skill in the art prior to the effective filing date of the claimed invention to have the method according to claim 1 or the apparatus according to claim 22, as disclosed by Shen in view of Narayanan and Anderson, wherein the first DCI is further for activating or indicating the semi-persistent resource, as taught by Narayanan. The motivation to do so would be to have a WTRU, or a communication method for a WTRU, that can receive a DCI configuring semi-persistent PUCCH resources for the WTRU to iteratively report reconstruction loss for a training model to a network, so that the network can determine when the WTRU has accurately reconstructed the training model and can then configure the WTRU to deactivate the training model to save processing power at the WTRU.
Regarding Claims 5 & 27, Shen in view of Narayanan and further in view of Anderson disclose the method according to claim 1 and the apparatus according to claim 22.
Shen discloses wherein the first DCI comprises a first indicator field ([0051] & [0055] disclose a first DCI comprising a first target field (i.e. first indicator field).).
Shen fails to disclose wherein the first indicator field indicates that the first DCI is for activating the first training process.
However, Narayanan teaches wherein the first indicator field indicates that the first DCI is for activating the first training process ([0126] discloses a DCI for activating a specific configuration for online training (i.e. a first training process).).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 1, or the apparatus of claim 22, wherein the first DCI comprises a first indicator field, as disclosed by Shen in view of Narayanan and Anderson, wherein the first indicator field indicates that the first DCI is for activating the first training process, as taught by Narayanan. The motivation to do so would be to have a WTRU, or a communication method for a WTRU, that can receive: a first DCI providing information for a plurality of training sets (or training processes) for a training model that also indicates a specific training set is activated, in order to simplify the process of activating and deactivating specific training sets for a training model at the UE by using well defined standards based protocols (i.e. DCI).
Regarding Claims 7 & 28, Shen in view of Narayanan and further in view of Anderson disclose the method according to claim 1 and the apparatus according to claim 22.
Shen discloses wherein both the first DCI and the second DCI indicate an identifier of the first training process (Table 5 & [0085] disclose multiple DCI formats. A first DCI format 1_2 identifier indicates the scheduling of a PDSCH carrying a downlink training set and a second DCI format 0_2 identifier indicates the scheduling of a PUSCH carrying an uplink training set.).
Regarding Claim 9, Shen in view of Narayanan and further in view of Anderson disclose the method according to claim 1.
Shen discloses wherein the second DCI comprises a second indicator field (Table 5 & [0085] disclose multiple DCI formats for a DCI. A first DCI format 1_2 is an indicator field scheduling a PDSCH carrying a downlink training set and a second DCI format 0_2 is an indicator field scheduling a PUSCH carrying an uplink training set.).
Shen fails to disclose wherein the second indicator field indicates that the second DCI is for deactivating the first training process.
However, Narayanan teaches wherein the second indicator field indicates that the second DCI is for deactivating the first training process ([0126] discloses a first DCI to activate online training and a second DCI to deactivate online training.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of claim 1, wherein the second DCI comprises a second indicator field, as disclosed by Shen in view of Narayanan and Anderson, wherein the second indicator field indicates that the second DCI is for deactivating the first training process, as taught by Narayanan. The motivation to do so would be to provide a method for disabling training at a receiving module in response to an indication, from the receiving module, of reconstruction loss or unavailable status of an AI component.
Regarding claim 21, Shen in view of Narayanan and Anderson discloses the method according to claim 1, wherein both the first DCI and the second DCI are associated with the first training process.
Shen discloses wherein both the first DCI and the second DCI are associated with a first radio network temporary identifier (RNTI) that is associated with the first training process (Table 5 & [0085] disclose multiple DCI formats. A first DCI format 1_2 identifier indicates the scheduling of a PDSCH carrying a downlink training set and a second DCI format 0_2 identifier indicates the scheduling of a PUSCH carrying an uplink training set. [0090] discloses that the resources indicated by the first DCI format 1_2 and the second DCI format 0_2 may be used to transmit a training set according to a target RNTI adopted by the first DCI format 1_2 and the second DCI format 0_2. Thus, disclosed are a first DCI identifier and a second DCI identifier associated with an RNTI that is associated with a training process.).
Regarding claim 22, Shen discloses an apparatus (Fig 7 & [0195] disclose a terminal), comprising:
at least one processor (Fig 7 & [0195] disclose the terminal includes a processor 91); and
non-transitory computer readable storage storing executable instructions for execution by the at least one processor (Fig 7 & [0201] disclose a computer-readable storage medium storing at least one instruction that can be executed by the processor.), wherein execution of the instructions causes the apparatus to:
receive first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information (Fig 2 & [0038] disclose wireless communication between a base station 110 (i.e. a network device) and a terminal 120 where the terminal receives a DCI from the base station. [0045] discloses a method wherein the DCI received by the terminal indicates a resource to be used for transmitting a training set. [0046] discloses the training set is used to train a machine learning model. [0051] discloses a target bit field included in the DCI to determine whether the resource indicated in the DCI is used to transmit the training set. [0063] discloses that when it is determined that the resource is used for transmitting a training set, the target bit field may also indicate use of the training set to train the machine learning model (i.e. an indication to activate a training process). [0047]-[0048] discloses the training set includes at least one known target configuration including scheduling parameters (i.e. target information).);
Shen fails to disclose wherein the first DCI is for activating a first training process; in response to receiving the first DCI, activate the first training process and iteratively perform model training on a first model in the first training process, the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from iteratively performing model training on the first model in the first training process, and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device; receive second DCI from the network device, wherein the second DCI is for deactivating the first training process; and in response to receiving the second DCI, deactivate the first training process.
However, Narayanan teaches wherein the first DCI is for activating a first training process ([0126] discloses a DCI may be used to activate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. activate a specific training process). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, activating a specific configuration set may be interpreted as activating a specific training process.);
in response to receiving the first DCI, activate the first training process and perform model training on a first model in the first training process ([0126] discloses a DCI may be used to activate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. activate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, activating a specific configuration set may be interpreted as activating a specific training process. The DCI that activates the specific configuration may also trigger a WTRU to perform online training.);
the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from iteratively performing model training on the first model in the first training process ([0106]-[0107] disclose that a WTRU may be configured to report reconstruction loss (i.e. first parameter information) to the network semi-persistently using PUCCH resources. [0012] discloses that the reconstruction loss is part of an iterative training process where a first node (i.e. a WTRU) updates learnable parameters to reduce the reconstruction loss value. A broadest reasonable interpretation is that the WTRU iteratively and semi-persistently sends the reconstruction loss value at each iteration of the training process.), and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device ([0126] discloses that a MAC CE/DCI may be signaled (i.e. by a network device) to the WTRU to deactivate a training configuration set.);
receive second DCI from the network device, wherein the second DCI is for deactivating the first training process ([0126] discloses a DCI may be used to deactivate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. deactivate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, deactivating a specific configuration set may be interpreted as deactivating a specific training process.); and
in response to receiving the second DCI, deactivate the first training process ([0126] discloses a DCI may be used to deactivate a specific configuration set, where a configuration set consists of learning parameters for online training (i.e. deactivate a specific training process for a first model). A broadest reasonable interpretation is that a configuration set consisting of learning parameters for online training may be interpreted as a training process. Thus, deactivating a specific configuration set may be interpreted as deactivating a specific training process.).
Therefore, it would have been obvious to someone having ordinary skill in the art prior to the effective filing date of the claimed invention to have an apparatus comprising: at least one processor; and non-transitory computer readable storage storing executable instructions for execution by the at least one processor, wherein execution of the instructions causes the apparatus to: receive first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information, as disclosed by Shen, wherein the first DCI is for activating a first training process; in response to receiving the first DCI, activate the first training process and perform model training on a first model in the first training process, the iteratively performing model training on the first model in the first training process comprises iteratively sending, to the network device using a semi-persistent resource, first parameter information of the first model obtained from iteratively performing model training on the first model in the first training process, and the iteratively performing model training on the first model in the first training process continues until the first training process is deactivated by the network device; receive second DCI from the network device, wherein the second DCI is for deactivating the first training process; and in response to receiving the second DCI, deactivate the first training process, as taught by Narayanan. The motivation to do so would be to have a WTRU, comprising a processor and memory with instructions that can be executed by the processor, that can receive a first DCI that provides information for a plurality of training sets (or training processes) for a training model and indicates that a specific training set is activated and training should be performed; in response to receiving the first DCI, the WTRU iteratively performs training using the specific training set and at each iteration sends a reconstruction loss value to the network using semi-persistent PUCCH resources, so that the network can determine when the WTRU has accurately reconstructed the training model; and the network can then send a second DCI indicating that the specific training set is deactivated, in response to which the WTRU deactivates the specific training process. This improves performance of the WTRU while minimizing processing time in the WTRU by activating, using well-defined standards-based protocols (i.e., DCI), specific training sets for the training model only while the WTRU is iteratively performing training to reduce reconstruction loss to a point where the training model is accurate.
Shen fails to disclose wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model.
However, Anderson further teaches wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model ([0012] discloses applying a first trained model to input data to obtain first output data and applying a second trained model to the input data to obtain a second output, wherein the first and second trained models are dependent on a hierarchical relationship between the first and second outputs. Both the first and second trained models are part of the same training process, with a hierarchical dependency between their outputs, corresponding to the same clinical-data target information.).
Therefore, it would have been obvious to someone having ordinary skill in the art prior to the effective filing date of the claimed invention to have an apparatus that: receives first downlink control information (DCI) from a network device, wherein the first DCI is for a first training process, and wherein the first training process is for training a model corresponding to target information, as disclosed by Shen, wherein the network device performs model training on a second model in the first training process, and wherein the model corresponding to the target information comprises the first model and the second model, as further taught by Anderson. The motivation to do so would be to have a UE that can receive a DCI that activates a specific training set (or training process) of a plurality of training sets and initiates training at the UE using two training models with a hierarchical dependency between their outputs, in order to improve training accuracy compared to using a single training model.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (WO 2021/035724)(hereinafter “Shen”) in view of Narayanan et al. (US 2023/0409963)(hereinafter “Narayanan”) and Anderson et al. (US 2022/0020142)(hereinafter “Anderson”), as applied to claim 1, and further in view of Zilka et al. (US 11,741,191)(hereinafter “Zilka”).
Regarding Claim 2, Shen in view of Narayanan and Anderson discloses the method according to Claim 1.
Shen fails to disclose wherein the first training process further comprises: receiving first data from the network device; and wherein iteratively performing model training on the first model comprises: iteratively training the first model based on the first data in order to obtain the first parameter information of the first model.
However, Zilka teaches wherein the first training process further comprises: receiving first data from the network device (Col 2, lines 55-67 & col 3, lines 1-3 disclose receiving first data from a communication network.); and wherein iteratively performing model training on the first model comprises:
iteratively training the first model based on the first data in order to obtain the first parameter information of the first model (Col 2, lines 55-67 & col 3, lines 1-6 and col 4, lines 58-67 & col 5, lines 1-5 disclose iteratively training a first machine learning model based on the first data received and obtaining a first parameter update for the first machine learning model.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of claim 1, as disclosed by Shen in view of Narayanan and Anderson, wherein the first training process further comprises: receiving first data from the network device; and wherein iteratively performing model training on the first model comprises: iteratively training the first model based on the first data in order to obtain the first parameter information of the first model, as further taught by Zilka. The motivation to do so would be to improve a machine learning network model corresponding to search results from search queries through iterative parameter feedback and model training based on first data received by the machine learning model.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (WO 2021/035724)(hereinafter “Shen”) in view of Narayanan et al. (US 2023/0409963)(hereinafter “Narayanan”) and Anderson et al. (US 2022/0020142)(hereinafter “Anderson”), as applied to claim 22, and further in view of Zilka et al. (US 11,741,191)(hereinafter “Zilka”).
Regarding claim 23, Shen in view of Narayanan and Anderson disclose the apparatus according to claim 22.
Shen fails to disclose wherein the first training process further comprises: receiving first data from the network device; wherein performing model training on the first model comprises: training the first model based on the first data in order to obtain first parameter information of the first model; and wherein the first training process further comprises: sending the first parameter information to the network device.
However, Zilka further teaches wherein the first training process further comprises: receiving first data from the network device (Col 2, lines 55-67 & col 3, lines 1-3 disclose receiving a first data from a communication network.); wherein performing model training on the first model comprises:
training the first model based on the first data in order to obtain first parameter information of the first model (Col 2, lines 55-67 & col 3, lines 1-3 disclose training a first machine learning model based on the first data received and obtaining a first parameter update for the first machine learning model.); and
wherein the first training process further comprises: sending the first parameter information to the network device (Col 2, lines 55-67 & col 3, lines 1-3 disclose transmitting the first parameter update to the communication network.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the apparatus of claim 22, as disclosed by Shen in view of Narayanan and Anderson, wherein the first training process further comprises: receiving first data from the network device; wherein performing model training on the first model comprises: training the first model based on the first data in order to obtain first parameter information of the first model; and wherein the first training process further comprises: sending the first parameter information to the network device, as further taught by Zilka. The motivation to do so would be to have a UE assist in improving a machine learning network model corresponding to search results from search queries through parameter feedback based on training the machine learning model with first data.
Claims 10-17 are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (WO 2021/035724)(hereinafter “Shen”) in view of Narayanan et al. (US 2023/0409963)(hereinafter “Narayanan”) and Anderson et al. (US 2022/0020142)(hereinafter “Anderson”), as applied to claim 1, and further in view of Bai et al. (WO 2022/000365)(hereinafter “Bai”).
Regarding Claim 10, Shen in view of Narayanan and Anderson discloses the method according to Claim 1.
Shen discloses receiving a second DCI from a base station (Fig 2 & [0038] disclose wireless communication between a base station 110 (i.e. a network device) and a terminal 120 where the terminal receives a DCI from the base station. Table 5 & [0085] further disclose multiple DCI formats for the DCI received from the base station. A first DCI format 1_2 is received when scheduling a PDSCH carrying a downlink training set and a second DCI format 0_2 is received when scheduling a PUSCH carrying an uplink training set. Thus, further disclosed is the receiving of a second DCI from a base station.).
Shen fails to disclose further comprising receiving third DCI from the network device.
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a third DCI received from a network device, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI formats for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the third DCI is for activating a prediction process, and the prediction process comprises a process of predicting the target information by using the model corresponding to the target information.
However, Bai further teaches wherein the third DCI is for activating a prediction process, and the prediction process comprises a process of predicting the target information by using the model corresponding to the target information ([00104]-[00105] discloses delivering and activating a machine learning model structure through a DCI. [0059] discloses that a SOC in a UE may receive the machine learning model from a base station through a DCI, including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method according to Claim 1 further comprising receiving third DCI from the network device, as disclosed by Shen in view of Narayanan and Anderson, wherein the third DCI is for activating a prediction process, and the prediction process comprises a process of predicting the target information by using the model corresponding to the target information, as further taught by Bai. The motivation to do so would be to provide a method for a base station to indicate to a UE, through a DCI, to activate future downlink data prediction based on a machine learning model if the base station is detecting poor downlink data channel performance at the UE without use of the machine learning model to predict future downlink data channel state.
Regarding Claim 11, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to Claim 10.
Shen discloses a second DCI comprising a second indicator field (Table 5 & [0085] disclose multiple DCI formats for a DCI. A first DCI format 1_2 is an indicator field scheduling a PDSCH carrying a downlink training set and a second DCI format 0_2 is an indicator field scheduling a PUSCH carrying an uplink training set. Thus, disclosed is a second DCI comprising a second indicator field.).
Shen fails to disclose wherein the third DCI comprises a third indicator field.
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a third indicator field, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI indicator fields for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the third indicator field indicates that the third DCI is for activating the prediction process.
However, Bai further teaches wherein the third indicator field indicates that the third DCI is for activating the prediction process ([00104]-[00105] discloses delivering and activating a machine learning model structure through a DCI (that would include an indicator field). [0059] discloses that a SOC in a UE may receive the machine learning model from a base station through a DCI (that would include an indicator field), including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method according to Claim 10 wherein the third DCI comprises a third indicator field, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, and wherein the third indicator field indicates that the third DCI is for activating the prediction process, as further taught by Bai. The motivation to do so would be to provide a method for a base station to indicate to a UE, through an indicator field in a DCI, to activate future downlink data prediction based on a machine learning model if the base station is detecting poor downlink data channel performance at the UE without use of the machine learning model to predict future downlink data channel state.
Regarding Claim 12, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to Claim 10.
Shen discloses receiving a second DCI from a base station (Fig 2 & [0038] disclose wireless communication between a base station 110 (i.e. a network device) and a terminal 120 where the terminal receives a DCI from the base station. Table 5 & [0085] further disclose multiple DCI formats for the DCI received from the base station. A first DCI format 1_2 is received when scheduling a PDSCH carrying a downlink training set and a second DCI format 0_2 is received when scheduling a PUSCH carrying an uplink training set. Thus, further disclosed is the receiving of a second DCI from a base station.).
Shen fails to disclose further comprising receiving fourth DCI from the network device.
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a fourth DCI received from a network device, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI formats for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the fourth DCI is for deactivating a process.
However, Narayanan teaches wherein the fourth DCI is for deactivating a process ([0126] discloses a first DCI to activate online training and a second DCI to deactivate online training.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 10 further comprising receiving fourth DCI from the network device, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein the fourth DCI is for deactivating a process, as taught by Narayanan. The motivation to do so would be to provide a method for a base station to disable a process, such as a training process, at a terminal in response to an indication from the terminal of reconstruction loss or unavailable status of an AI component.
Shen fails to disclose wherein the process is a prediction process.
However, Bai further teaches wherein the process is a prediction process ([0059] discloses that a SOC in a UE may receive a machine learning model including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 10 wherein the fourth DCI is for deactivating a process, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, and wherein the process is a prediction process, as further taught by Bai. The motivation to do so would be to provide a method for a base station to indicate to a UE, through an indicator field in a DCI, to deactivate future downlink data prediction based on a machine learning model when the base station detects good downlink data channel performance at the UE, in order to save battery power at the UE.
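For illustration only, extending the hypothetical sketch given after the Response to Arguments above: the third and fourth DCI mapped in claims 10 and 12 may be viewed as two further actions in the same terminal-side dispatch, alongside the first and second DCI of claim 1. All action names remain assumed stand-ins, not signaling defined by any cited reference.

def handle_dci(dci, state: dict) -> None:
    """Update terminal state for the four DCI actions discussed above."""
    if dci.action == "ACTIVATE_TRAINING":        # first DCI (claim 1)
        state["training"] = True
    elif dci.action == "DEACTIVATE_TRAINING":    # second DCI (claim 1)
        state["training"] = False
    elif dci.action == "ACTIVATE_PREDICTION":    # third DCI (claim 10 mapping)
        state["predicting"] = True               # predict the target information
                                                 # using the trained model
    elif dci.action == "DEACTIVATE_PREDICTION":  # fourth DCI (claim 12 mapping)
        state["predicting"] = False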
Regarding Claim 13, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to Claim 12.
Shen discloses a first DCI identifier and a second DCI identifier associated with an RNTI (Table 5 & [0085] disclose multiple DCI formats. A first DCI format 1_2 identifier indicates the scheduling of a PDSCH carrying a downlink training set and a second DCI format 0_2 identifier indicates the scheduling of a PUSCH carrying an uplink training set. [0090] discloses that the resources indicated by the first DCI format 1_2 and the second DCI format 0_2 may be used to transmit a training set according to a target RNTI adopted by the first DCI format 1_2 and the second DCI format 0_2. Thus, disclosed are a first DCI identifier and a second DCI identifier associated with an RNTI.).
Shen fails to disclose wherein both the third DCI and the fourth DCI indicate an identifier of the process, and/or both the third DCI and the fourth DCI are associated with a second radio network temporary identifier (RNTI).
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a third DCI and a fourth DCI indicate an identifier of a process, and/or both the third DCI and the fourth DCI are associated with a second RNTI, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI formats for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the process is a prediction process.
However, Bai further teaches wherein the process is a prediction process ([0059] discloses that a SOC in a UE may receive a machine learning model including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 12, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein both the third DCI and the fourth DCI indicate an identifier of a process, and/or both the third DCI and the fourth DCI are associated with a second RNTI, and wherein the process is a prediction process, as further taught by Bai. The motivation to do so would be to define multiple RNTIs for scrambling and masking the CRC bits of multiple DCI messages, allowing individual devices or groups of devices to distinguish DCIs intended to provide different training set information for different machine learning processes.
Regarding Claim 14, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to claim 13.
Shen discloses wherein the second RNTI is one of the following RNTIs: an artificial intelligence RNTI, a prediction process RNTI, a cell RNTI, a prediction RNTI, or a semi-persistent scheduling RNTI ([0090]-[0092] disclose an RNTI may be a new training process RNTI, for example ML-RNTI, for scheduling resources used to transmit a training set, or can be a C-RNTI.).
Regarding Claim 15, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to Claim 12.
Shen discloses a second DCI comprising a second indicator field (Table 5 & [0085] disclose multiple DCI formats for a DCI. A first DCI format 1_2 is an indicator field scheduling a PDSCH carrying a downlink training set and a second DCI format 0_2 is an indicator field scheduling a PUSCH carrying an uplink training set. Thus, disclosed is a second DCI comprising a second indicator field.).
Shen fails to disclose wherein the fourth DCI comprises a fourth indicator field.
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a fourth indicator field, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI indicator fields for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the fourth indicator field indicates that the fourth DCI is for deactivating a process.
However, Narayanan teaches wherein the fourth indicator field indicates that the fourth DCI is for deactivating a process ([0126] discloses a DCI to activate online training and a separate DCI to deactivate online training.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 12, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein the fourth indicator field indicates that the fourth DCI is for deactivating a process, as taught by Narayanan. The motivation to do so would be to provide a method for disabling training at a receiving module in response to an indication, from the receiving module, of reconstruction loss or unavailable status of an AI component.
Shen fails to disclose wherein the process is a prediction process.
However, Bai further teaches wherein the process is a prediction process ([0059] discloses that a SOC in a UE may receive a machine learning model including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 12, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein the process is a prediction process, as further taught by Bai. The motivation to do so would be to provide a method for a base station to indicate to a UE, through an indicator field in a DCI, to deactivate future downlink data prediction based on a machine learning model if the base station detects superior downlink data channel performance at the UE, in order to save battery power at the UE.
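To make the claimed signaling concrete, the following is a hypothetical Python sketch (the field width, names, and encodings are all assumed rather than taken from the references) of a UE decoding an indicator field that tells it which process a DCI activates or deactivates:

    # Hypothetical encoding of a 2-bit process-control indicator field.
    from enum import Enum

    class Action(Enum):
        ACTIVATE_TRAINING = 0b00
        DEACTIVATE_TRAINING = 0b01
        ACTIVATE_PREDICTION = 0b10
        DEACTIVATE_PREDICTION = 0b11

    def decode_indicator(field: int) -> Action:
        # Map the raw field bits to a process-control action; under this
        # assumed encoding, a fourth DCI carrying 0b11 would deactivate
        # the prediction process.
        return Action(field & 0b11)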
Regarding Claim 16, Shen in view of Narayanan and Anderson discloses the method according to Claim 1.
Shen fails to disclose wherein the second DCI is for deactivating the first training process.
However, Narayanan teaches wherein the second DCI is for deactivating the first training process ([0126] discloses a first DCI to activate online training and a second DCI to deactivate online training.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 1, as disclosed by Shen in view of Narayanan and Anderson, wherein the second DCI is for deactivating the first training process, as taught by Narayanan. The motivation to do so would be to provide a method for a base station to disable training at a terminal in response to an indication from the terminal of reconstruction loss or an unavailable status of an AI component.
Shen fails to disclose wherein the second DCI is for activating a prediction process, and wherein the prediction process comprises a process of predicting the target information by using the model corresponding to the target information.
However, Bai further teaches wherein the second DCI is for activating a prediction process, and the prediction process comprises a process of predicting the target information by using the model corresponding to the target information ([00104]-[00105] disclose delivering and activating a machine learning model structure through a DCI. [0059] discloses that a SOC in a UE may receive the machine learning model from a base station through a DCI, including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 1, as disclosed by Shen in view of Narayanan and Anderson, wherein the second DCI is for activating a prediction process, and the prediction process comprises a process of predicting the target information by using the model corresponding to the target information, as further taught by Bai. The motivation to do so would be to reduce signaling overhead by having a single DCI indicate both the disabling of training and the activation of a prediction process at a terminal in response to an indication from the terminal of reconstruction loss or an unavailable status of an AI component.
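A hypothetical sketch of the combined behavior argued here, with all class and method names assumed: a single DCI moves the terminal from iterative training to prediction in one step, which is the signaling-overhead saving identified above.

    # Hypothetical state machine: one DCI both deactivates the first
    # training process and activates the prediction process.
    class UEModelState:
        def __init__(self) -> None:
            self.training_active = True
            self.prediction_active = False

        def on_switch_dci(self) -> None:
            # Stop iterative training and begin predicting the target
            # information (e.g. a future downlink data channel) with
            # the trained model.
            self.training_active = False
            self.prediction_active = True

    ue = UEModelState()
    ue.on_switch_dci()
    assert ue.prediction_active and not ue.training_active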
Regarding Claim 17, Shen in view of Narayanan and Anderson and further in view of Bai disclose the method according to Claim 16.
Shen discloses a second DCI comprising a second indicator field (Table 5 & [0085] disclose multiple DCI formats for a DCI. A first DCI format 1_2 serves as an indicator field scheduling a PDSCH carrying a downlink training set, and a second DCI format 0_2 serves as an indicator field scheduling a PUSCH carrying an uplink training set. Thus, disclosed is a second DCI comprising a second indicator field.).
Shen fails to disclose wherein the second DCI comprises a fifth indicator field.
However, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have a fifth indicator field, since it has been held that mere duplication of parts that does not produce new or unexpected results involves only routine skill in the art and thus has no patentable significance (see MPEP Section 2144.04, subsection VI.B). The motivation to do so would be to define a plurality of DCI indicator fields for sending multiple DCIs to indicate various uses of resources, activation of processes or other control messages from a base station.
Shen fails to disclose wherein the fifth indicator field indicates that the second DCI is for deactivating the first training process.
However, Narayanan teaches wherein the fifth indicator field indicates that the second DCI is for deactivating the first training process and activating a process ([0126] discloses a DCI to activate online training and a separate DCI to deactivate online training.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 16, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein the fifth indicator field indicates that the second DCI is for deactivating the first training process and activating a process, as taught by Narayanan. The motivation to do so would be to provide a method for disabling training at a receiving module in response to an indication, from the receiving module, of reconstruction loss or an unavailable status of an AI component.
Shen fails to disclose wherein the process is a prediction process and wherein the fifth indicator field indicates that the second DCI is for activating the prediction process.
However, Bai further teaches wherein the process is a prediction process and wherein the fifth indicator field indicates that the second DCI is for activating the prediction process ([0059] discloses that a SOC in a UE may receive a machine learning model including code to predict a future downlink data channel (i.e. target information) based on using the machine learning model corresponding to estimating and predicting the downlink data channel. [00104]-[00105] disclose delivering and activating a machine learning model structure through a DCI.).
Therefore, it would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have the method of Claim 16, as disclosed by Shen in view of Narayanan and Anderson and further in view of Bai, wherein the process is a prediction process and wherein the fifth indicator field indicates that the second DCI is for activating the prediction process, as further taught by Bai. The motivation to do so would be to provide a method for a base station to indicate to a UE, through an indicator field in a DCI, to deactivate future downlink data prediction based on a machine learning model if the base station detects superior downlink data channel performance at the UE, thereby saving battery power at the UE, and to activate a prediction process at a terminal in response to an indication from the terminal of reconstruction loss or an unavailable status of an AI component. Using a single indicator field to indicate that a DCI is for both disabling a training process and enabling a prediction process would further reduce signaling overhead.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES P SEYMOUR whose telephone number is (571)272-7654. The examiner can normally be reached M-F 8-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant Divecha, can be reached at 571-270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES P SEYMOUR/Examiner, Art Unit 2419
/Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419