DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7-9, and 11-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Otterstein et al. (US Pub. 20210345134).
Referring to claim 1, Otterstein discloses A configuration method for an artificial intelligence (AI) network parameter, comprising:
obtaining, by a first communication device [figs. 1 and 2, wireless device 120], an AI network parameter in at least one of the following manners: predefinition, receiving from a second communication device [figs. 1 and 2, radio network node 110; pars. 66, 85-88, 205-211, 223; wireless communication network 100 comprises radio network nodes, including radio network node 110 (i.e., a BS) and wireless device 120 (i.e., a UE); the BS exchanges information about a machine learning model (e.g., model capabilities, node capabilities, training data types, training objectives, training data, quality level indicators, model descriptions, model parameters, model metadata, model updates) with the UE], or real-time training; and
processing, by the first communication device, a target service according to the AI network parameter [pars. 86, 101-103, 205-211, and 223; the exchanged information is used to configure or update the machine learning model to generate a prediction about a future performance of one or more of the radio network nodes].
Referring to claim 2, Otterstein discloses The method according to claim 1, wherein the processing, by the first communication device, a target service according to the AI network parameter comprises: performing, by the first communication device, at least one of following according to the AI network parameter: signal processing, channel transmission, acquisition of channel state information, beam management, channel prediction, interference suppression, positioning, prediction of a high-layer service or parameter, or management of a high layer service or parameter [pars. 101-103; generating the prediction involves processing input parameters such as received signal strength (i.e., signal processing), angle of arrival (i.e., beam management), measured or estimated UE speed (i.e., prediction/management of a high-layer service or parameter), or target block or bit error rates (i.e., prediction/management of a high-layer service or parameter)].
Referring to claim 3, Otterstein discloses The method according to claim 1, wherein the AI network parameter comprises at least one of following: a structure of an AI network, a multiplicative coefficient of a neuron in an AI network, an additive coefficient of a neuron in an AI network, or an activation function of a neuron in an AI network [pars. 66, 85-88, 170, 205-211, 223; the exchanged information includes model descriptions (i.e., model types, structure description), where the machine learning model comprises an input layer, an output layer, and one or more hidden layers; each layer comprises one or more artificial neurons, and each artificial neuron has an activation function, weighting coefficients (i.e., multiplicative coefficients), and a bias (i.e., additive coefficients)].
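For illustration only (not drawn from Otterstein or the claims; all values are hypothetical), the neuron described in the mapping above can be sketched as a weighted sum of inputs (multiplicative coefficients), plus a bias (additive coefficient), passed through an activation function:

```python
# Hypothetical sketch of a single artificial neuron as characterized above:
# weighting coefficients (multiplicative), a bias (additive), and an
# activation function (here, a sigmoid).
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0], [0.8, 0.3], 0.1)  # z = 0.2, output ≈ 0.55
```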
Referring to claim 4, Otterstein discloses The method according to claim 3, wherein the structure of the AI network comprises at least one of following: a fully connected neural network [pars. 170 and 233; the machine learning model may be a feedforward neural network (i.e., a neural network where layers are typically fully connected)], a convolutional neural network, a recurrent neural network [pars. 170 and 233; the machine learning model may be a recurrent neural network], or a residual network; a combination manner of a plurality of sub-networks comprised in the AI network [par. 88; the exchanged information may aid in combining the machine learning model with other machine learning models]; a quantity of hidden layers of the AI network [par. 170; note the hidden layers]; a connection manner between an input layer and a hidden layer of the AI network [par. 170; note the activation function]; a connection manner between a plurality of hidden layers of the AI network [par. 170; note the activation function]; a connection manner between a hidden layer and an output layer of the AI network [par. 170; note the activation function]; a quantity of neurons at each layer of the AI network [par. 170; note the artificial neurons included in each layer]; or an activation function of the AI network [par. 170; note the activation function].
Referring to claim 5, Otterstein discloses The method according to claim 3, wherein activation functions used by a plurality of neurons in the AI network are the same [par. 233; note the RNN]; and/or a neuron at an output layer of the AI network does not comprise an activation function.
Referring to claim 7, Otterstein discloses The method according to claim 3, wherein the AI network comprises a recurrent neural network; and the AI network parameter further comprises a multiplicative weighting coefficient of a recurrent unit and an additive weighting coefficient of the recurrent unit [pars. 170 and 233; note the RNN, the weighting coefficients and the bias].
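For illustration only (hypothetical values, not drawn from the cited references), the recurrent unit addressed above can be sketched as a hidden state that combines the current input and the previous state through multiplicative weighting coefficients, plus an additive weighting coefficient (bias):

```python
# Hypothetical sketch of a simple recurrent unit: the new hidden state is
# an activation of (weight * input) + (weight * previous state) + bias,
# matching the multiplicative and additive weighting coefficients recited.
import math

def rnn_step(x, h_prev, w_x, w_h, bias):
    # tanh activation of weighted input plus weighted previous hidden state plus bias.
    return math.tanh(w_x * x + w_h * h_prev + bias)

h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = rnn_step(x, h, w_x=0.6, w_h=0.4, bias=0.05)
```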
Referring to claim 8, Otterstein discloses The method according to claim 1, wherein the AI network parameter is predefined, and the method further comprises one of following: reporting, by the first communication device, an AI network parameter supported by the first communication device to the second communication device; reporting, by the first communication device, an AI network parameter that is selected to use by the first communication device to the second communication device; or receiving, by the first communication device, first indication information from the second communication device, wherein the first indication information is used for indicating an AI network parameter used by the first communication device [pars. 66, 85-88, 205-211, 223; note the exchanged information, and transmitting of the exchanged information via messages (i.e., indications)].
Referring to claim 9, Otterstein discloses The method according to claim 8, wherein the first indication information is used for indicating a plurality of AI network parameters, and the method further comprises one of following: receiving, by the first communication device, second indication information from the second communication device, wherein the second indication information is used for indicating the AI network parameter used by the first communication device among the plurality of AI network parameters indicated by the first indication information; or reporting, by the first communication device, the AI network parameter that is selected to be used to the second communication device, wherein the AI network parameter that is selected to be used is comprised in the plurality of AI network parameters indicated by the first indication information [pars. 66, 85-88, 205-211, 223; note the exchanged information, and transmitting of the exchanged information via messages (i.e., indications)].
Referring to claim 11, Otterstein discloses The method according to claim 8, wherein the AI network parameter is obtained according to at least one of following: hardware configurations of the first communication device and the second communication device; channel environments of the first communication device and the second communication device; or quality of service required by the first communication device [pars. 205-211 and 215-228; the exchanged information is provided according to existing or presented protocols (e.g., Intelligent RAN, 3GPP) based on ML/node capabilities (e.g., collection and storage capabilities, resource utilization limitations) and objective functions (e.g., related to data rates, acceptable latencies, error rates)].
Referring to claim 12, Otterstein discloses The method according to claim 1, wherein the AI network parameter is received from the second communication device, and the AI network parameter is transmitted according to at least one of following sequences: according to a sequence of a layer at which the AI network parameter is located; according to a sequence of a neuron at a layer at which the AI network parameter is located; or according to a sequence of a multiplicative coefficient and an additive coefficient of an AI network [pars. 205-211 and 215-228; the exchanged information is provided according to objective functions related to data rates (e.g., acceptable latencies, error rates) and objective functions related to ML objectives (e.g., error function, training stopping criteria)].
Referring to claim 13, Otterstein discloses The method according to claim 1, wherein a condition under which an AI network corresponding to the AI network parameter is available comprises at least one of following: performance of the AI network meets a requirement of target performance; the AI network is trained for a target quantity of times; or a target latency expires [pars. 205-211 and 215-228; the exchanged information is provided according to objective functions related to data rates (e.g., acceptable latencies, error rates) and objective functions related to ML objectives (e.g., error function, training stopping criteria)].
Referring to claim 14, see at least the rejection for claim 1. Otterstein further discloses A first communication device, comprising a processor, a memory, and a program or an instruction stored in the memory and executable on the processor, wherein the program or the instruction, when executed by the processor, causes the first communication device to perform the claimed steps [fig. 12, processing circuitries 3318/3338, applications 3312/3332].
Referring to claim 15, see the rejection for claim 2.
Referring to claim 16, see the rejection for claim 3.
Referring to claim 17, see the rejection for claim 8.
Referring to claim 18, see the rejection for claim 12.
Referring to claim 19, see at least the rejection for claim 1. Otterstein further discloses A non-transitory readable storage medium, storing a program or an instruction, wherein the program or the instruction, when executed by a processor of a first communication device, causes the first communication device to perform the claimed steps [fig. 12, processing circuitries 3318/3338, applications 3312/3332].
Referring to claim 20, see the rejection for claim 2.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Otterstein in view of O’Shea (US Pub. 20180308013).
Referring to claim 3, Otterstein discloses The method according to claim 1, wherein the AI network parameter comprises at least one of following: a structure of an AI network, a multiplicative coefficient of a neuron in an AI network, an additive coefficient of a neuron in an AI network, or an activation function of a neuron in an AI network [pars. 66, 85-88, 170, 205-211, 223; the exchanged information includes model descriptions (i.e., model types, structure description), where the machine learning model comprises an input layer, an output layer, and one or more hidden layers; each layer comprises one or more artificial neurons, and each artificial neuron has an activation function, weighting coefficients (i.e., multiplicative coefficients), and a bias (i.e., additive coefficients); see also O’Shea, par. 48; the machine learning network includes one or more collections of multiplications, divisions, and summations of inputs].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model taught by Otterstein so that the artificial neural network can be designed using various types of parameters and specific structures as taught by O’Shea, with a reasonable expectation of success. The motivation for doing so would have been to design for the best performance [O’Shea, par. 48].
Referring to claim 4, Otterstein discloses The method according to claim 3, wherein the structure of the AI network comprises at least one of following: a fully connected neural network [pars. 170 and 233; the machine learning model may be a feedforward neural network (i.e., a neural network where layers are typically fully connected); see also O’Shea, par. 48 disclosing a fully connected neural network], a convolutional neural network [see also O’Shea, par. 48 disclosing a convolutional neural network], a recurrent neural network [pars. 170 and 233; the machine learning model may be a recurrent neural network], or a residual network [see also O’Shea, par. 48 disclosing a residual network]; a combination manner of a plurality of sub-networks comprised in the AI network [par. 88; the exchanged information may aid in combining the machine learning model with other machine learning models]; a quantity of hidden layers of the AI network [par. 170; note the hidden layers]; a connection manner between an input layer and a hidden layer of the AI network [par. 170; note the activation function]; a connection manner between a plurality of hidden layers of the AI network [par. 170; note the activation function]; a connection manner between a hidden layer and an output layer of the AI network [par. 170; note the activation function]; a quantity of neurons at each layer of the AI network [par. 170; note the artificial neurons included in each layer]; or an activation function of the AI network [par. 170; note the activation function].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model taught by Otterstein so that the artificial neural network can be designed using various types of parameters and specific structures as taught by O’Shea, with a reasonable expectation of success. The motivation for doing so would have been to design for the best performance [O’Shea, par. 48].
Referring to claim 5, Otterstein discloses The method according to claim 3, wherein activation functions used by a plurality of neurons in the AI network are the same [par. 233; note the RNN]; and/or a neuron at an output layer of the AI network does not comprise an activation function [see also O’Shea, par. 53; activation functions are applied to intermediate layers].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model taught by Otterstein so that the artificial neural network can be designed using various types of parameters and specific structures as taught by O’Shea, with a reasonable expectation of success. The motivation for doing so would have been to design for the best performance [O’Shea, par. 48].
Referring to claim 6, Otterstein discloses The method according to claim 3, ...the AI network meets at least one of following: ...the multiplicative coefficient of the neuron in the AI network comprises a weight coefficient [par. 170; note the weighting coefficients; see also O’Shea, par. 48 disclosing that a machine learning network includes one or more collections of multiplications, divisions, and summations of inputs]; or the additive coefficient of the neuron in the AI network comprises a bias [par. 170; note the bias; see also O’Shea, par. 48 disclosing that a machine learning network includes one or more collections of multiplications, divisions, and summations of inputs].
Otterstein does not appear to explicitly disclose wherein the AI network comprises a convolutional neural network; and the AI network meets at least one of following: the neuron in the AI network comprises a convolution kernel.
However, O’Shea discloses wherein the AI network comprises a convolutional neural network; and the AI network meets at least one of following: the neuron in the AI network comprises a convolution kernel [pars. 42 and 48-52; a machine learning network is implemented using an artificial neural network comprising one or more layers that generate output from received input based on a set of parameters; the artificial neural network may include a convolutional neural network with convolutional layers with one or more filters (i.e., kernels)].
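For illustration only (hypothetical values, not drawn from O’Shea), the convolution kernel (filter) discussed in the mapping above can be sketched as a 1-D convolution in which the kernel slides across the input and each output is a weighted sum:

```python
# Hypothetical sketch of a 1-D convolution: a kernel (filter) is applied
# at each position of the input, producing a weighted sum per position,
# as in the convolutional layers described above.
def conv1d(signal, kernel):
    k = len(kernel)
    # Slide the kernel over the signal (no padding, stride 1).
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

out = conv1d([1, 2, 3, 4], [1, 0, -1])  # → [-2, -2]
```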
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model taught by Otterstein so that the artificial neural network can be designed using various types of parameters and specific structures as taught by O’Shea, with a reasonable expectation of success. The motivation for doing so would have been to design for the best performance [O’Shea, par. 48].
Referring to claim 7, Otterstein discloses The method according to claim 3, wherein the AI network comprises a recurrent neural network; and the AI network parameter further comprises a multiplicative weighting coefficient of a recurrent unit and an additive weighting coefficient of the recurrent unit [pars. 170 and 233; note the RNN, the weighting coefficients and the bias; see also O’Shea, par. 48 disclosing that a machine learning network includes one or more collections of multiplications, divisions, and summations of inputs].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model taught by Otterstein so that the artificial neural network can be designed using various types of parameters and specific structures as taught by O’Shea, with a reasonable expectation of success. The motivation for doing so would have been to design for the best performance [O’Shea, par. 48].
Referring to claim 16, see the rejection for claim 3.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE PARK whose telephone number is (571)270-7727. The examiner can normally be reached M-F 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TAMARA KYLE can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Grace Park/Primary Examiner, Art Unit 2144