DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings filed on 6/8/2023 are accepted.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8-10, 15, 18-25, 29 and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. (US 2021/0182658; hereinafter, Wang).
Regarding claim 1, Wang teaches an apparatus for wireless communication at a user equipment (UE) (Fig. 2 [200]), comprising:
one or more memories; (Fig. 2 [212])
one or more processors (Fig. 2 [210]), coupled to the one or more memories, which, individually or in any combination, are operable to cause the UE to:
obtain generalization information associated with a model, a model structure (MS), or a parameter set (PS) associated with the model; (Fig. 7 [715] and Page 11 [0104] “the base station transmits a message that includes neural network parameter configurations (e.g., weight values, coefficient values, number of filters)”)
initiate a connection to a network node; (Pages 10-11 [0100] “At times, the base station forwards the UE capabilities to a core network server (e.g., the core network server 302)”)
filter the model, the MS, or the PS based at least in part on the generalization information; (Pages 3-4 [0039] “UE neural network manager 218 accesses the neural network table 216, such as by way of an index value, and forms a DNN using the NN formation configuration elements specified by a NN formation configuration. In implementations, UE neural network manager forms multiple DNNs to process wireless communications (e.g., downlink communications and/or uplink communications exchanged with the base station 120).”) and
transmit UE capability information to the network node, based at least in part on filtering the model, the MS or the PS, that indicates whether the model, the MS, or the PS is applicable, available, or supported by the UE. (Fig. 7 [705] and Pages 10-11 [0100-0101])
Regarding claim 2, Wang teaches wherein filtering the model, the MS or the PS based at least in part on the generalization information comprises filtering the model, the MS, or the PS based at least in part on a scope associated with the model, the MS, or the PS, wherein the scope is indicated in a model structure identifier or a model descriptor associated with the model, the MS, or the PS. (Page 11 [0103] “analyzes multiple neural network formation configurations and/or multiple neural network formation configuration elements included in a neural network table, and determines the neural network formation configuration by selects and/or creates a neural network formation configuration that aligns with current channel conditions, such as by matching the channel type, transmission medium properties, etc., to input characteristics as further described”)
Regarding claim 3, Wang teaches wherein the one or more processors, to obtain the generalization information, are configured to receive the generalization information from a server (Page 11 [0103] “the base station 120 (and/or the core network server 302) selects the neural network formation configuration from multiple neural network formation configurations.”), and wherein the one or more processors, to receive the generalization information from the server, are configured to receive a software update from the server that includes the generalization information or to receive the generalization information from the server during a deployment of the model, the MS, or the PS. (Page 5 [0051] “[t]he core network neural network manager 312 then communicates the selected NN formation configuration to the base stations 120 and/or the UE 110”)
Regarding claim 4, Wang teaches wherein the generalization information includes at least one of area information, location information, network node configuration information, UE configuration information, antenna information, carrier frequency information, band information, sub-carrier spacing information, time division duplexing information, frequency division duplexing information, speed information, range information, Doppler information, or delay spread information. (Page 11 [0101] “a channel type being processed by the deep neural network (e.g., downlink, uplink, data, control, etc.), transmission medium properties (e.g., power measurements, signal-to-interference-plus-noise ratio (SINR) measurements, channel quality indicator (CQI) measurements), encoding schemes, UE capabilities, BS capabilities, and so forth”)
Regarding claim 5, Wang teaches wherein the one or more processors are further configured to transmit an indication of model, MS, or PS identifiers that are supported by the UE (Pages 10-11 [0100] “supported ML architectures”), wherein the indication is transmitted per cell, per network node, per radio access network area code, per tracking area, or per public land mobile network. (Pages 14-15 [0129] “ML capabilities of intermediary devices (e.g., the base station 120, the core network server 302), a current operating environment (e.g., channel conditions, UE location)”)
Regarding claim 6, Wang teaches wherein the one or more processors are further configured to prioritize a filtering of model, MS, or PS identifiers that are supported in a particular cell, network node, radio access network area code, tracking area, or public land mobile network. (Page 14 [0127] “E2E ML configuration, some implementations of the E2E ML controller partition the E2E ML configuration based on the device(s) participating in the E2E communication” and “an end-to-end machine-learning controller (E2E ML controller) obtains capabilities of device(s) associated with end-to-end communications in a wireless network, such as machine-learning (ML) capabilities of device(s) participating in the E2E communication, and determines an E2E ML configuration based on the ML capabilities (e.g., supported ML architectures, supported number of layers, available processing power, memory limitation, available power budget, fixed-point processing vs. floating point processing, maximum kernel size capability, computation capability) of the device(s)”)
Regarding claim 8, Wang teaches wherein the one or more processors are further configured to transmit an indication of model, MS, or PS identifiers that are supported by the UE (Pages 10-11 [0100] “supported ML architectures”) per cell, per network node, per radio access network area code, per tracking area, or per public land mobile network (Pages 14-15 [0129] “ML capabilities of intermediary devices (e.g., the base station 120, the core network server 302), a current operating environment (e.g., channel conditions, UE location)”), wherein the indication includes model descriptor information or model assignment information. (Page 11 [0104] “the base station 120 specifies a purpose and/or processing assignment in the message” & “the base station can communicate a processing assignment with a neural network formation configuration”)
Regarding claim 9, Wang teaches wherein the UE capability information comprises:
model, MS, or PS identifiers that are supported by the UE; (Pages 10-11 [0100] “supported ML architectures”)
model, MS, or PS identifiers that are available to the UE;
UE vendor information; or
information that indicates whether a model, MS, or PS identifier is cell specific, network node specific, radio access network area code specific, tracking area specific, or public land mobile network specific.
Regarding claim 10, Wang teaches wherein the one or more processors are further configured to receive, from the network node, a group identifier associated with a plurality of network nodes (Fig. 2 [216] “a number of nodes utilized by the neural network” and Page 3 [0038]), wherein the group identifier is used by the UE to identify infra-vendor information or network node grouping information associated with a training of the model, the MS, or the PS. (Note: this limitation is a recitation of intended use and not an active step.)
Regarding claim 15, Wang teaches wherein a machine learning function, feature, or feature group name and a model, or an MS identifier is indicated per configuration. (Page 11 [0104] “the base station 120 specifies a purpose and/or processing assignment in the message”)
Regarding claim 18, Wang teaches an apparatus for wireless communication at a network node (Fig. 7 [700]), comprising:
one or more memories; (Fig. 2 [262])
one or more processors (Fig. 2 [260]), coupled to the one or more memories, which, individually or in any combination, are operable to cause the network node to:
obtain generalization information associated with a model, a model structure (MS), or a parameter set (PS) associated with the model; (Page 11 [0101] “At 710 the base station 120 determines a neural network formation configuration. In determining the neural network formation configuration, the base station analyzes any combination of information, such as a channel type being processed by the deep neural network (e.g., downlink, uplink, data, control, etc.), transmission medium properties (e.g., power measurements, signal-to-interference-plus-noise ratio (SINR) measurements, channel quality indicator (CQI) measurements), encoding schemes, UE capabilities, BS capabilities, and so forth”)
receive user equipment (UE) capability information associated with a UE; (Fig. 7 [705] and Pages 10-11 [0100])
filter the model, the MS, or the PS based at least in part on the generalization information and the UE capability information; (Fig. 7 [710] and Page 11 [0101-0103] “[t]he base station analyzes any combination of information, such as a channel type being processed by the deep neural network (e.g., downlink, uplink, data, control, etc.), transmission medium properties (e.g., power measurements, signal-to-interference-plus-noise ratio (SINR) measurements, channel quality indicator (CQI) measurements), encoding schemes, UE capabilities, BS capabilities, and so forth”) and
transmit an indication, to the UE, that indicates whether the model, the MS or the PS is to be activated, deactivated, or switched by the UE. (Fig. 7 [715] and Page 11 [0104-0106] “the base station 120 communicates the neural network formation configuration to the UE 110” and “the UE 110 extracts neural network architecture and/or parameter configurations from the message. The UE 110 then forms the neural network using the neural network formation configuration, the extracted architecture and/or parameter configurations, etc”)
Regarding claim 19, Wang teaches wherein the one or more processors, to filter the model, the MS, or the PS, are configured to filter model, MS, or PS identifiers associated with the model, the MS or the PS. (Page 11 [0103] “the base station 120 (and/or the core network server 302) selects the neural network formation configuration by selecting a subset of neural network architecture formation elements in a neural network table.”)
Regarding claim 20, Wang teaches wherein the one or more processors, to obtain the generalization information, are configured to receive the generalization information from a server, and wherein the one or more processors, to receive the generalization information from the server, are configured to receive the generalization information from the server during a deployment of the model, the MS or the PS. (Page 11 [0101] “In some implementations, the core network server 302 determines the neural network formation configuration in manner(s) similar to that described with respect to the base station, and communicates the determined neural network formation configuration to the base station”)
Regarding claim 21, Wang teaches wherein the generalization information includes at least one of area information, location information, network node configuration information, UE configuration information, antenna information, carrier frequency information, band information, sub-carrier spacing information, time division duplexing information, frequency division duplexing information, speed information, range information, Doppler information, or delay spread information. (Page 11 [0101] “the base station analyzes any combination of information, such as a channel type being processed by the deep neural network (e.g., downlink, uplink, data, control, etc.), transmission medium properties (e.g., power measurements, signal-to-interference-plus-noise ratio (SINR) measurements, channel quality indicator (CQI) measurements), encoding schemes, UE capabilities, BS capabilities, and so forth”)
Regarding claim 22, the limitations of claim 22 are rejected for the same reasons as set forth above with respect to claim 5.
Regarding claim 23, the limitations of claim 23 are rejected for the same reasons as set forth above with respect to claim 8.
Regarding claim 24, Wang teaches wherein the UE capability information comprises:
model, MS or PS identifiers that are supported by the UE; (Pages 10-11 [0100] “the UE capabilities include ML-related capabilities, such as a maximum kernel size capability, a memory limitation, a computation capability, supported ML architectures”)
Regarding claim 25, the limitations of claim 25 are rejected for the same reasons as set forth above with respect to claim 10.
Regarding claim 29, the limitations of claim 29 are rejected for the same reasons as set forth above with respect to claim 1.
Regarding claim 30, the limitations of claim 30 are rejected for the same reasons as set forth above with respect to claim 18.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7, 13, 17 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Soykan et al. (US 2025/0119296; hereinafter, Soykan).
Regarding claim 7, Wang teaches the limitations of claim 1 above, but differs from the claimed invention by not explicitly reciting refraining from transmitting an indication of a local model, a local MS, or a local PS identifier based at least in part on the UE being configured with a global model, the MS, or the PS.
In an analogous art, Soykan teaches a method and apparatus for performing federated learning (Abstract and Page 1 [0002-0007]) that includes refraining from using a local model, a local MS, or a local PS identifier based at least in part on the UE being configured with a global model, the MS or the PS. (Page 7 [0122])
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the invention of Wang to incorporate Soykan's use of the configured global model rather than the local model, since doing so enables the generation of an updated global model (Soykan, Page 7 [0125]), which can be used to construct an updated global ML model. (Soykan, Page 1 [0007])
Regarding claim 13, Wang in view of Soykan teaches wherein the one or more processors are further configured to receive an indication of a global identifier associated with the model, the MS, or the PS (Soykan Page 3 [0047], as opposed to the local model identifier, see Page 3 [0049]), wherein the global identifier is received in accordance with a hierarchical assignment of the global identifier. (Non-Functional Descriptive Material1: the data is not recited as being used in a step; it is simply received.)
Regarding claim 17, Wang in view of Soykan teaches wherein the one or more processors are further configured to receive, from the network node, an indication of one or more configuration identifiers associated with one or more configurations of the model that are retained, modified, or released. (Soykan Page 2 [0015] key & model signature)
Regarding claim 27, the limitations of claim 27 are rejected as being the same reasons set forth above in claim 13.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Mu et al. (WO 2022/236831; hereinafter, Mu).
Regarding claim 16, Wang teaches the limitations of claim 1 above, but differs from the claimed invention by not explicitly reciting wherein one or more configurations of the model are retained during a handover operation.
In an analogous art, Mu teaches a model learning method and apparatus (Abstract) that includes the ability to retain one or more configurations of the model during a handover operation. (Page 16 of translation “In one embodiment, the macro base station decides that the terminal continues to participate in the model training task of the source micro base station, then the target micro base station (that is, the micro base station accessed by the terminal after handover) will be responsible for forwarding the first model training between the terminal and the source micro base station As a result, the source Femtocell continues to keep the terminal in the training task list and reassigns the model training task to it. The target micro base station sends the task arrangement result of the terminal to the terminal, and the terminal retains the training information in the source micro base station, and continues to participate in the federated learning of the source micro base station”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the invention of Wang to incorporate Mu's ability to retain and continue training a model during a handover operation, since doing so enables the training to continue based upon the type of training task that is occurring. (Mu, Page 16)
Allowable Subject Matter
Claims 11, 12, 14, 26 and 28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The Examiner was unable to find, in the prior art, the combinations of limitations recited in claims 1+10+(11 or 12), 1+13+14, 18+25+26, or 18+27+28.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW C SAMS whose telephone number is (571)272-8099. The examiner can normally be reached M-F 8:30-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Anderson can be reached at (571)272-4177. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Matthew C Sams/Primary Examiner, Art Unit 2646
1 The Applicant is reminded that structure defines how an apparatus differs from prior art apparatuses and when no difference in structure is defined, the assumption is made that the prior art structure meets the limitations.
The Examiner will not give patentable weight to descriptive material absent a new and unobvious functional relationship between the descriptive material and the substrate. See In re Lowry, 32 F.3d 1579, 1582-1583 (Fed. Cir. 1994); In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004) (nonfunctional descriptive material cannot render nonobvious an invention that would have otherwise been obvious). See also Ex parte Mathias, 84 USPQ2d 1276 (BPAI 2005) (nonprecedential), aff'd, 191 Fed. Appx. 959 (Fed. Cir. 2006). “Claim limitations directed to printed matter are not entitled to patentable weight unless the printed matter is functionally related to the substrate on which the printed matter is applied.” Praxair Distribution, Inc. v. Mallinckrodt Hosp. Prods. IP Ltd., 890 F.3d 1024, 1031 (Fed. Cir. 2018) (emphasis added). This printed matter doctrine is not strictly limited to “printed” materials. Mallinckrodt, 890 F.3d at 1032. More specifically, “a claim limitation is directed to printed matter ‘if it claims the content of information.’” Mallinckrodt, 890 F.3d at 1032 (quoting In re Distefano, 808 F.3d 845, 848 (Fed. Cir. 2015)).
In method cases, the relevant inquiry is whether a new and unobvious functional relationship with the known method exists. See In re Kao, 639 F.3d 1057, 1072-73, 98 USPQ2d 1799, 1811-12 (Fed. Cir. 2011); King Pharmaceuticals Inc. v. Eon Labs Inc., 616 F.3d 1267, 1279, 95 USPQ2d 1833, 1842 (Fed. Cir. 2010).