DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is made final.
Claims 1-32 are pending. Claims 1, 9, 15, 21 and 27 are independent claims.
Response to Arguments
Applicant's arguments filed 8/25/2025 regarding 35 U.S.C. 101 have been fully considered but they are not persuasive. Because the claims have been amended, the updated 35 U.S.C. 101 rejections will be explained below.
Applicant's arguments filed 8/25/2025 regarding the 35 U.S.C. 102 rejections of the amended claims have been fully considered and are persuasive. However, because the claims have been amended, new 35 U.S.C. 103 rejections for claims 1, 2, 4, 6, 9, 10, 12, 13, 15-17, 21, 22, 25, 27, 28 and 30 have been made and will be explained below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1,
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 is directed to an apparatus (Step 1: YES).
Step 2A Prong 1: Does the claim recite a judicial exception? Claim 1 recites: using the one or more capabilities, to partition training of the neural network between the first device and second device. Partitioning the training of a neural network between two devices is a mental process – i.e., a person could mentally decide how to split the training between the devices based on their available memory or processing power (Step 2A Prong 1: YES).
Step 2A Prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application? Claim 1 recites: One or more processors, comprising: circuitry to cause a first device to indicate to a second device one or more capabilities of the first device to train a neural network, to cause the second device…. Specifying one or more processors comprising circuitry, a first device, and a second device is recited at a high level of generality, i.e., as a generic computer performing generic computer functions. Specifying that the capabilities are for training a neural network is an additional element that merely specifies a field of use without significantly more. Specifying that a device indicates information to another device is also recited at a high level of generality and is considered well-understood, routine, conventional activity (see MPEP 2106.05(d)(II)(i)) without significantly more (Step 2A Prong 2: NO).
Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they only amount to “apply it” using generic computer components (MPEP 2106.05(f)), merely limit the field of use without significantly more (MPEP 2106.05(h)), or constitute well-understood, routine, conventional activity (MPEP 2106.05(d)). These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.
Regarding claims 9, 15 and 27, they are apparatus claims similar to claim 1 and are rejected on the same grounds – see above.
Regarding claim 21, it is a process similar to the one performed by the apparatus of claim 1 and is rejected on the same grounds – see above.
Regarding claims 2-8, 10-14, 16-20, 22-26 and 28-32, they recite limitations which further narrow the abstract idea without significantly more:
Claim 2: specifying that the capability indication includes a type of training supported by a UE device is specifying a field of use without significantly more.
Claim 3: specifying that the capability indication includes channel state information autoencoder training capabilities of a UE device is specifying a field of use without significantly more.
Claim 4: specifying that the capability indication includes computational capabilities of a UE device to perform training is specifying a field of use without significantly more.
Claim 5: specifying that the capability indication includes training latency of a UE device is specifying a field of use without significantly more.
Claim 6: specifying that the capability indication includes memory storage of a UE device is specifying a field of use without significantly more.
Claim 7: specifying that the capability indication includes input types of a UE device is specifying a field of use without significantly more.
Claim 8: specifying that the capability indication includes quantization types of a UE device is specifying a field of use without significantly more.
Claim 10: specifying that the capability indication includes a type of training supported by a UE device is specifying a field of use without significantly more.
Claim 11: specifying that the capability indication includes autoencoder training capabilities of a UE device is specifying a field of use without significantly more.
Claim 13: specifying that the capability indication includes memory storage and computational capability of a UE device is specifying a field of use without significantly more.
Claim 14: specifying that the capability indication includes the types of autoencoder training supported by a UE device is specifying a field of use without significantly more.
Claim 16: specifying that the capability indication includes a type of training supported by a UE device is specifying a field of use without significantly more.
Claim 17: specifying that the capability indication includes neural network training capabilities of a UE device is specifying a field of use without significantly more.
Claim 18: specifying that the capability indication includes channel state information autoencoder training capabilities, and specifying that the devices are a UE device and a wireless radio network base station, are specifying a field of use without significantly more.
Claim 19: specifying that the capability indication includes autoencoder training capabilities of a UE device is specifying a field of use without significantly more, and causing the UE device to train at least a portion of an autoencoder is recited at a high level of generality and provides nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)).
Claim 20: specifying that the capability indication includes training capabilities of a UE is specifying a field of use without significantly more, and causing the UE device to deploy an encoder that includes a portion of the neural network is recited at a high level of generality and provides nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)).
Claim 22: specifying that the capability indication includes a type of training supported by a UE device is specifying a field of use without significantly more.
Claim 23: specifying that the capability indication includes autoencoder training capabilities of a UE device is specifying a field of use without significantly more.
Claim 24: specifying that the capability indication includes autoencoder input types supported by a UE device is specifying a field of use without significantly more.
Claim 25: specifying that the capability indication includes sending a signal is well-understood, routine, conventional activity of data transmitting (see MPEP 2106.05(d)(II)(i)) or insignificant extra-solution activity that does not add a meaningful limitation to the capability-indicating process, and specifying that the devices are a UE device and a base station is specifying a field of use without significantly more.
Claim 26: specifying that the capability indication includes neural network size information supported by a UE device is specifying a field of use without significantly more.
Claim 28: specifying that the capability indication includes a type of training supported by a UE device is specifying a field of use without significantly more.
Claim 29: specifying that the capability indication includes channel state information autoencoder training capabilities supported by a UE device is specifying a field of use without significantly more.
Claim 30: specifying that the capability indication includes computational capabilities, memory storage and training latency of a UE device is specifying a field of use without significantly more.
Claim 31: specifying that the capability indication includes estimated downlink channel as a type of supported input for the UE device is specifying a field of use without significantly more.
Claim 32: specifying that the capability indication includes different quantization types supported by a UE device is specifying a field of use without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 4, 6, 9, 10, 12, 13, 15, 16, 17, 21, 22, 25, 27, 28 and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20230259789 A1, INCLUDED IN IDS), herein Wang ‘789, in view of Wang et al. (US 20210158151 A1), herein Wang ‘151.
Regarding claim 1, Wang ‘789 teaches: One or more processors, comprising: circuitry (¶26, The UE 110 also includes processor(s) 210 and computer-readable storage media 212) to cause a first device to indicate to a second device one or more capabilities of the first device to train a neural network (¶68, In aspects, the base station receives a UE capability information message (not illustrated) from each UE – and – ¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures…)).
Wang ‘789 fails to teach: to cause the second device, using the one or more capabilities, to partition training of the neural network between the first device and second device.
However, in the same field of endeavor, Wang ‘151 teaches: to cause the second device, using the one or more capabilities, to partition training of the neural network between the first device and second device (¶146, For example, the E2E ML controller 318 determines a first partition of the E2E ML configuration that corresponds to processing information at the UE 110, a second partition of the E2E ML configuration that corresponds to processing information at the base station 120, and a third partition of the E2E ML configuration that corresponds to processing information at the core network server 302, where determining the partitions can be based on any combination of the capabilities, wireless network resource partitioning, the operating parameters, the current operating environment, and so forth).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to partition training between devices based on capabilities as disclosed by Wang ‘151 in the apparatus disclosed by Wang ‘789 to create more device-appropriate training processes (¶54, Thus, UEs or access points with less processing resources relative to a core network server or base station receive NN formation configurations optimized for the available processing resources).
Regarding claim 2, Wang ‘789 further teaches: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating a type of training supported by a user equipment (UE) device (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures…)).
Regarding claim 4, Wang ‘789 further teaches: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating a computational capability of a user equipment (UE) device to perform training (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures…)).
Regarding claim 6, Wang ‘789 further teaches: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating a memory storage of a user equipment (UE) device to perform training (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: …memory/storage capabilities)).
Regarding claim 9, Wang ‘789 teaches: A system, comprising: one or more processors (¶26, The UE 110 also includes processor(s) 210) to cause a first device to indicate to a second device one or more capabilities of the first device to train a neural network (¶68, In aspects, the base station receives a UE capability information message (not illustrated) from each UE – and – ¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures…))… and one or more memories to store at least a portion of the neural network (¶26, The UE 110 also includes… computer-readable storage media 212).
Wang ‘789 fails to teach: to cause the second device, using the one or more capabilities, to partition training of the neural network between the first device and second device.
However, in the same field of endeavor, Wang ‘151 teaches: to cause the second device, using the one or more capabilities, to partition training of the neural network between the first device and second device (¶146, For example, the E2E ML controller 318 determines a first partition of the E2E ML configuration that corresponds to processing information at the UE 110, a second partition of the E2E ML configuration that corresponds to processing information at the base station 120, and a third partition of the E2E ML configuration that corresponds to processing information at the core network server 302, where determining the partitions can be based on any combination of the capabilities, wireless network resource partitioning, the operating parameters, the current operating environment, and so forth).
Regarding claim 10, it recites similar limitations to claim 2 and is rejected on the same grounds – see above.
Regarding claim 12, Wang ‘789 further teaches: The system of claim 9, wherein the one or more processors are to indicate the one or more capabilities by at least causing a signal to be sent from a user equipment (UE) device to a base station (¶79, the UEs 111, 112, and 113 transmit a message that indicates updated ML information to the base station).
Regarding claim 13, Wang ‘789 further teaches: The system of claim 9, wherein the one or more processors are to indicate the one or more capabilities by at least indicating one or more of a computational capability and a memory storage of a user equipment (UE) device to perform training (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities…)).
Regarding claim 15, it recites similar limitations to claim 1 and is rejected on the same grounds – see above.
Regarding claim 16, it recites similar limitations to claim 2 and is rejected on the same grounds – see above.
Regarding claim 17, Wang ‘789 further teaches: The non-transitory machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to at least indicate the one or more capabilities by at least indicating one or more neural network training capabilities of a user equipment (UE) device (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures…)).
Regarding claim 21, it is a method similar to the one performed by the apparatus of claim 1 and is rejected on the same grounds – see above.
Regarding claim 22, it recites similar limitations to claim 2 and is rejected on the same grounds – see above.
Regarding claim 25, Wang ‘789 further teaches: The method of claim 21, wherein indicating the one or more capabilities includes causing a signal to be sent from a user equipment (UE) device to a base station (¶79, the UEs 111, 112, and 113 transmit a message that indicates updated ML information to the base station).
Regarding claim 27, it is an apparatus that recites similar limitations to claim 1 and is rejected on the same grounds – see above.
Regarding claim 28, it recites similar limitations to claim 2 and is rejected on the same grounds – see above.
Regarding claim 30, Wang ‘789 further teaches: The user equipment device of claim 27, wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more of a computational capability, a training latency, and a memory storage of the user equipment device to perform training (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities…); because the claim recites the capabilities in the alternative, indicating a computational capability and a memory storage meets the limitation).
Claim(s) 3, 11, 18, 19, 20, 23 and 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang ‘789 in view of Wang ‘151 as applied to claims 1, 9, 15, 21 and 27, and further in view of Namgoong.
Regarding claim 3, Wang ‘789 in view of Wang ‘151 fails to teach: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more channel state information autoencoder training capabilities of a user equipment (UE) device.
However, in the same field of endeavor, Namgoong teaches: wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more channel state information autoencoder training capabilities of a user equipment (UE) device (¶91, For example, in some aspects, autoencoders may be used for compressing CSF for feeding back CSI to a server – and – ¶92, Information about the environment of the client may include information about the client (e.g., device information, configuration information, capability information, and/or the like). Also see Fig. 8, environmental vectors 850 being sent to the server from the clients).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate channel state information autoencoder training capabilities as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Regarding claim 11, Wang ‘789 in view of Wang ’151 fails to teach: The system of claim 9, wherein the one or more processors are to indicate the one or more capabilities by at least indicating one or more autoencoder training capabilities of a user equipment (UE) device.
However, in the same field of endeavor, Namgoong teaches: wherein the one or more processors are to indicate the one or more capabilities by at least indicating one or more autoencoder training capabilities of a user equipment (UE) device (¶92, Information about the environment of the client may include information about the client (e.g., device information, configuration information, capability information, and/or the like). Also see Fig. 8, environmental vectors 850 being sent to the server from the clients).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate autoencoder training capabilities as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Regarding claim 18, Wang ‘789 in view of Wang ’151 fails to teach: The non-transitory machine-readable medium of claim 15, wherein the one or more capabilities include one or more channel state information autoencoder training capabilities of a user equipment (UE) device and the set of instructions, which if performed by the one or more processors, cause a representation of the one or more capabilities to be sent from the UE device to a wireless radio network base station.
However, in the same field of endeavor, Namgoong teaches: wherein the one or more capabilities include one or more channel state information autoencoder training capabilities of a user equipment (UE) device and the set of instructions, which if performed by the one or more processors, cause a representation of the one or more capabilities to be sent from the UE device to a wireless radio network base station (¶91, For example, in some aspects, autoencoders may be used for compressing CSF for feeding back CSI to a server – and – ¶92, Information about the environment of the client may include information about the client (e.g., device information, configuration information, capability information, and/or the like). Also see Fig. 8, environmental vectors 850 being sent to the server from the clients).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate channel state information autoencoder training capabilities as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Regarding claim 19, Wang ‘789 in view of Wang ’151 fails to teach: The non-transitory machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to indicate the one or more capabilities by at least indicating one or more autoencoder training capabilities of a user equipment (UE) device, and cause the UE device to train at least a portion of an autoencoder.
However, in the same field of endeavor, Namgoong teaches: wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to indicate the one or more capabilities by at least indicating one or more autoencoder training capabilities of a user equipment (UE) device, and cause the UE device to train at least a portion of an autoencoder (¶92, Information about the environment of the client may include information about the client (e.g., device information, configuration information, capability information, and/or the like) – and – ¶35, According to aspects of the techniques and apparatuses described herein, a client is configured with a classifier and a set of associated autoencoders… During the training, the autoencoders and the classifier are collaboratively learned using the federated learning techniques).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate autoencoder training capabilities and cause the UE device to train at least a portion of an autoencoder as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Regarding claim 20, Wang ‘789 in view of Wang ’151 fails to teach: The non-transitory machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to indicate the one or more capabilities by at least indicating one or more training capabilities of a user equipment (UE) device, and cause the UE device to deploy an encoder that includes at least a portion of the neural network.
However, in the same field of endeavor, Namgoong teaches: wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to indicate the one or more capabilities by at least indicating one or more training capabilities of a user equipment (UE) device, and cause the UE device to deploy an encoder that includes at least a portion of the neural network (¶92, Information about the environment of the client may include information about the client (e.g., device information, configuration information, capability information, and/or the like) – and – ¶38, As shown, the client autoencoder 310 may include an encoder 314 configured to receive an observed wireless communication vector, x, as input and to provide a latent vector, h, as output).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate training capabilities and deploy an encoder on the UE as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Regarding claim 23, it recites similar limitations to claim 11 and is rejected on the same grounds – see above.
Regarding claim 29, it recites similar limitations to claim 3 and is rejected on the same grounds – see above.
Claim(s) 5, 7, 8, 24 and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang ‘789 in view of Wang ‘151, as applied to claims 1 and 21, and further in view of Narayanan et al. (US 20240187127 A1), herein Narayanan.
Regarding claim 5, Wang ‘789 in view of Wang ‘151 fails to teach: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating a training latency of a user equipment (UE) device.
However, in the same field of endeavor, Narayanan teaches: wherein the circuitry is to indicate the one or more capabilities by at least indicating a training latency of a user equipment (UE) device (¶182, For example, the aspect associated with the WTRU capability may include one or more of the following: …an inference latency, a training latency, etc. – and – ¶263, The WTRU may be configured with different event configurations to monitor and/or report change(s) in the context).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate a training latency of UE as disclosed by Narayanan in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better model performance (¶68, triggered based on a change in context, for example, to improve AI model performance).
Regarding claim 7, Wang ‘789 in view of Wang ‘151 fails to teach: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more types of input supported by a user equipment (UE) device.
However, in the same field of endeavor, Narayanan teaches: wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more types of input supported by a user equipment (UE) device (¶182, For example, the aspect associated with the WTRU capability may include one or more of the following: processing (e.g., the number of operations that can be executed in a time period, for example, per second, and/or supported by GPU, NPU, or TPU), a size of a neural network (NN) supported, quantization levels, maximum input and/or output dimensions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate input types supported by a UE device as disclosed by Narayanan in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better model performance (¶68, triggered based on a change in context, for example, to improve AI model performance).
Regarding claim 8, Wang ‘789 in view of Wang ‘151 fails to teach: The one or more processors of claim 1, wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more quantization types supported by a user equipment (UE) device.
However, in the same field of endeavor, Narayanan teaches: wherein the circuitry is to indicate the one or more capabilities by at least indicating one or more quantization types supported by a user equipment (UE) device (¶182, For example, the aspect associated with the WTRU capability may include one or more of the following: … quantization levels).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate quantization types supported by a UE device as disclosed by Narayanan in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better model performance (¶68, triggered based on a change in context, for example, to improve AI model performance).
Regarding claim 24, Wang ‘789 in view of Wang ‘151 fails to teach: The method of claim 21, wherein indicating one or more capabilities includes indicating one or more types of autoencoder input supported by a user equipment (UE) device.
However, in the same field of endeavor, Narayanan teaches: wherein indicating one or more capabilities includes indicating one or more types of autoencoder input supported by a user equipment (UE) device (¶182, For example, the aspect associated with the WTRU capability may include… maximum input and/or output dimensions – the WTRU may contain an autoencoder, as described in ¶129, In examples, the AI component at the WTRU may correspond to an encoder function… the encoder and decoder herein may be coupled to form an autoencoder architecture).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate autoencoder input types supported by a UE device as disclosed by Narayanan in the method disclosed by Wang ‘789 in view of Wang ‘151 to achieve better model performance (¶68, triggered based on a change in context, for example, to improve AI model performance).
Regarding claim 26, Wang ‘789 further teaches: The method of claim 21, wherein indicating one or more capabilities includes indicating one or more of a maximum number of neural network layers… (¶68, As one example, the UE capability information message includes ML capabilities (e.g., any one or more of… supported number of layers)).
Wang ‘789 in view of Wang ‘151 fails to teach: indicating a maximum number of neurons in a layer, and a maximum number of neurons across layers.
However, in the same field of endeavor, Narayanan teaches: indicating a maximum number of neurons in a layer, and a maximum number of neurons across layers (¶182, For example, the aspect associated with the WTRU capability may include one or more of the following: processing (e.g., the number of operations that can be executed in a time period, for example, per second, and/or supported by GPU, NPU, or TPU), a size of a neural network (NN) supported, quantization levels, maximum input and/or output dimensions – the size of a neural network and input/output dimensions encompass the maximum number of neurons in a layer and across layers).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate aspects of a neural network size supported by a UE device as disclosed by Narayanan in the method disclosed by Wang ‘789 in view of Wang ‘151 to achieve better model performance (¶68, triggered based on a change in context, for example, to improve AI model performance).
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang ‘789 in view of Wang ‘151, as applied to claim 9, and further in view of Namgoong and O’Shea (US 20180322388 A1).
Regarding claim 14, Wang ‘789 in view of Wang ‘151 fails to teach: The system of claim 9, wherein the one or more processors are to indicate the one or more capabilities by at least indicating one or more of a capability of a user equipment (UE) device to train a complete autoencoder, train a local autoencoder in federated training, and train an encoder of an autoencoder in split training.
However, in the same field of endeavor, Namgoong teaches: wherein the one or more processors are to indicate the one or more capabilities by at least indicating one or more of a capability of a user equipment (UE) device to… train a local autoencoder in federated training.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to indicate UE capability to train a local autoencoder as disclosed by Namgoong in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve better performance (¶35, aspects may facilitate better physical layer link performance).
Wang ‘789 in view of Wang ‘151 and Namgoong fails to teach: train a complete autoencoder… and train an encoder of an autoencoder in split training.
However, in the same field of endeavor, O’Shea teaches: train a complete autoencoder… and train an encoder of an autoencoder in split training (¶71, For example, the encoder network 302 and the decoder network 304 may be jointly trained as an auto-encoder... In some implementations, the encoder network 302 and decoder network 304 may be separately trained).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to train the complete autoencoder or train in a split fashion as disclosed by O’Shea in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to achieve more flexibility (¶30, thus providing advantages in adapting to different types of wireless system requirements, and in some cases improving the throughput, error rate, complexity, and power consumption performance of such systems).
Claim(s) 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang ‘789 in view of Wang ‘151, as applied to claim 27, and further in view of Ren et al. (US 20240121621 A1), herein Ren.
Regarding claim 31, Wang ‘789 in view of Wang ‘151 fails to teach: The user equipment device of claim 27, wherein the circuitry is to indicate the one or more capabilities by at least indicating the user equipment device supports estimated downlink channel as a type of supported input.
However, in the same field of endeavor, Ren teaches: wherein the circuitry is to indicate the one or more capabilities by at least indicating the user equipment device supports estimated downlink channel as a type of supported input (¶78, For example, with reference to the diagram 800 of FIG. 8A, in instances where the estimated downlink channel 804 is the input for the model 802 in the UE, the model 802 may output the compressed and quantized channel information 806 (e.g., one quantized channel set index)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use estimated downlink channel as a type of supported input as disclosed by Ren in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to create a reduced data size (¶78, The size of the extracted latent data may be very small, for example, in comparison to the raw data. The latent data may still keep most of the features of the raw data).
Claim(s) 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang ‘789 in view of Wang ‘151, as applied to claim 27, and further in view of Wang et al. (US 20190385050 A1), herein Wang ‘050, Cheng et al. (US 20200021373 A1), herein Cheng, and Lee et al. (US 20230155889 A1), herein Lee.
Regarding claim 32, Wang ‘789 further teaches: The user equipment device of claim 27, wherein the circuitry is to indicate the one or more capabilities (¶68, the base station receives a UE capability information message... from each UE and selects the initial ML configuration).
Wang ‘789 in view of Wang ‘151 fails to teach: by at least indicating the user equipment device supports one or more of uniform quantization, non-uniform quantization, symmetric quantization, asymmetric quantization.
However, in the same field of endeavor, Wang ‘050 teaches: by at least indicating the user equipment device supports one or more of uniform quantization, non-uniform quantization, symmetric quantization, asymmetric quantization (¶53, For instance, the quantization setting component 302 can set or select quantization to be symmetric and/or uniform, or neither symmetric or uniform…).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add symmetric, asymmetric, uniform, and non-uniform quantization as disclosed by Wang ‘050 in the apparatus disclosed by Wang ‘789 in view of Wang ‘151 to increase model training speed and performance (¶19, it can reduce the memory footprint and communication overhead for transferring data between layers).
Wang ‘789 in view of Wang ‘151 and Wang ‘050 fails to teach: by at least indicating the user equipment device supports... static quantization, dynamic quantization...
However, in the same field of endeavor, Cheng teaches: by at least indicating the user equipment device supports... static quantization, dynamic quantization... (¶64, in one implementation, the step size for uniform quantization may be fixed and stored in the UE. In one implementation, the step size for uniform quantization may be dynamically configured).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add dynamic and static quantization as disclosed by Cheng in the user equipment device disclosed by Wang ‘789 in view of Wang ‘151 and Wang ‘050 to increase model performance (¶82, the performance gain may be higher than 97% when the step size is greater than 4 dB).
Wang ‘789 in view of Wang ‘151, Wang ‘050 and Cheng fails to teach: by at least indicating the user equipment device supports... stochastic quantization.
However, in the same field of endeavor, Lee teaches: by at least indicating the user equipment device supports... stochastic quantization (¶95, a stochastic quantization activation is developed).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add stochastic quantization as disclosed by Lee in the apparatus disclosed by Wang ‘789 in view of Wang ‘151, Wang ‘050 and Cheng to mitigate the vanishing gradient issue in the neural network training process (¶94, at DNN outputs exhibits gradient vanishing issues ending up with the training failure).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON CHAN YOUNG KIM whose telephone number is (571)272-0713. The examiner can normally be reached Monday - Thursday 10:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CESAR PAULA can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HARRISON C KIM/Examiner, Art Unit 2145
/CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145