CTNF 18/329,706 CTNF 92539

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Orhan et al. [US 2022/0124543 A1].
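For orientation before the element-by-element mapping, the dataset-construction steps recited in claim 1 (converting a distributed-training code to a graph, then extracting an adjacency matrix and a feature matrix as GNN inputs) can be sketched as follows. This is an illustrative sketch only: the graph topology, node features, shapes, and weights are invented for this example and are taken from neither the application nor the Orhan reference.

```python
import numpy as np

# Toy graph standing in for a distributed-training (DT) code converted to a
# graph: nodes might be two workers, a parameter server, and a data source.
edges = [(0, 2), (1, 2), (3, 0), (3, 1)]
num_nodes = 4

# Extract the adjacency matrix A (symmetric, self-loops added as is common
# for graph convolutional networks).
A = np.zeros((num_nodes, num_nodes))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(num_nodes)

# Extract the feature matrix X: one row of invented features per node
# (e.g., [GPU count, memory in GB, is-parameter-server flag]).
X = np.array([[1.0, 16.0, 0.0],
              [1.0, 16.0, 0.0],
              [0.0, 32.0, 1.0],
              [0.0, 8.0, 0.0]])

# One GCN-style layer (H = ReLU(D^-1/2 A D^-1/2 X W)), a mean readout, and a
# linear head, loosely mirroring the graph-layer / readout / MLP structure
# recited in claim 4.
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))       # graph-layer weights (untrained)
H = np.maximum(A_hat @ X @ W, 0.0)    # node embeddings after ReLU
g = H.mean(axis=0)                    # graph readout (mean pooling)
prediction = float(g @ rng.standard_normal(8))  # e.g., a predicted utilization

print(A.shape, X.shape, H.shape)
```

The adjacency and feature matrices produced here are the two artifacts the claim names as the training-dataset inputs; result data (GPU utilization, throughput, etc., per claim 2) would supply the labels.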
Regarding claim 1, Orhan teaches “A prediction model generation method performed by a computing device comprising at least one processor, the prediction model generation method comprising:” as “The conn-event processor 503 performs the GNN inferences based on the conn-events 506 pulled from the queue 507 (or pushed from the queue 507 to the conn-event processor 503).” [¶0106]

“constructing a training dataset; and” as “the term “ML model” or “model” may describe the output of an ML algorithm that is trained with training data.” [¶0205]

“generating a prediction model by training a graph neural network (GNN),” as “the term “ML model” or “model” may describe the output of an ML algorithm that is trained with training data. After training, an ML model may be used to make predictions on new datasets.” [¶0205]

“wherein the training dataset includes input data and result data, and” as “Supervised learning is an ML task that aims to learn a mapping function from the input to the output, given a labeled data set.” [¶0206]

“the constructing of the training dataset comprises: converting a distributed deep learning training code (distributed training (DT) code) to a graph; and” as “At operation 1503, the CMF 136 obtains a graph and a set of feature vectors, and at operation 1504, the CMF 136 sets an RL-state (or current state s.sub.t) as a cell-cell connectivity graph with some cell-UE pairings.” [¶0230]

“extracting an adjacency matrix and a feature matrix from the graph.” as “For each conn-event 406 a local cell-cell adjacency matrix is obtained (410). Additionally, reshuffled UE iterations T are identified based on the UE connectivity information 404 and the local cell-cell adjacency matrix (411), and cell-UE pairings are selected based on the UE connectivity information 404 (412).” [¶0094]

Regarding claim 2, Orhan teaches “wherein the result data includes at least one of graphics processing unit (GPU) utilization, GPU memory utilization, network transmission (TX) throughput, network reception (RX) throughput, a burst time of a GPU, a burst time of a GPU memory, a burst time of a network TX, a burst time of a network RX, an idle time of the GPU, an idle time of the GPU memory, an idle time of the network TX, and an idle time of the network RX.” as “Edge compute nodes may partition resources (e.g., memory, CPU, GPU, interrupt controller, I/O controller, memory controller, bus controller, network connections or sessions, etc.) where respective partitionings may contain security and/or integrity protection capabilities.” [¶0124]

Regarding claim 3, Orhan teaches “wherein the GNN is a graph convolutional network (GCN), a graph isomorphism network (GIN), or a graph attention network (GAN).” as “each interface is represented as an edge in the graph/GNN. Representing such a network as a graph allows relevant features to be extracted from network logical entities using GNN tools such as graph convolutional neural network (CNN), spatial-temporal neural network, and/or the like.” [¶0021]

Regarding claim 4, Orhan teaches “wherein the GNN includes a plurality of graph layers, a graph readout layer, and a multilayer perceptron (MLP) layer.” as “The CUs 132 are network (logical) nodes hosting higher/upper layers of a network protocol functional split.” [¶0025]

Regarding claim 5, Orhan teaches “wherein each of the graph layers includes a gated recurrent unit (GRU).” as “Examples of NNs include deep NN (DNN), feed forward NN (FFN), a deep FNN (DFF), convolutional NN (CNN), deep CNN (DCN), deconvolutional NN (DNN), a deep belief NN, a perception NN, recurrent NN (RNN) (e.g., including Long Short Term Memory (LSTM) algorithm, gated recurrent unit (GRU), etc.), deep stacking network (DSN).” [¶0217]

Regarding claim 10, Orhan teaches “A prediction method
using a prediction model generated by a prediction model generation method according to claim 1, the prediction method comprising: generating input data to be predicted; and” as “The artificial neurons can be aggregated or grouped into one or more layers where different layers may perform different transformations on their inputs.” [¶0486]

“performing prediction by inputting the input data to be predicted to the prediction model.” as “Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.” [¶0486]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Orhan et al. [US 2022/0124543 A1] in view of Patel et al. [US 10,915,818 B1].

Claim 6 is rejected over Orhan and Patel.
Orhan does not explicitly teach further comprising performing transfer learning (TL) on the prediction model after generating the prediction model.

However, Patel teaches “further comprising performing transfer learning (TL) on the prediction model after generating the prediction model.” as “The prediction model 118 uses transfer learning (e.g., ingredient vocabulary embeddings) from the representation model 116 to encode and decode the sets of ingredients.” [Col 4, lines 37-40]

Orhan and Patel are analogous arts because both teach artificial intelligence and neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Orhan and Patel before him/her, to modify the teachings of Orhan to include the teachings of Patel with the motivation of providing a representation model, a prediction model, a quantity solver, a formula searcher, and an optional optimization layer or optimizer. [Patel, Col 4, lines 4-7]

Claim 7 is rejected over Orhan and Patel.

Orhan does not explicitly teach wherein the performing of the transfer learning is performed using a second training dataset, the training dataset is a dataset corresponding to a first DT setting, the second training dataset is a dataset corresponding to a second DT setting, and the first DT setting and the second DT setting are different in at least one type of GPU that performs distributed deep learning, the number of parameter servers (PSs), and the number of worker nodes.
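The transfer-learning arrangement at issue in claims 6-9 (fine-tuning a pretrained prediction model on a second training dataset from a different DT setting, updating only the latter half of the graph layers plus the MLP head) can be sketched briefly. This is illustrative only: plain linear layers stand in for GNN message passing, and every name and value below is invented rather than drawn from the application, Orhan, or Patel.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
# Pretrained model: four linear "graph layers" plus an MLP head.
graph_layers = [rng.standard_normal((dim, dim)) for _ in range(4)]
mlp_head = rng.standard_normal(dim)

# One sample from a hypothetical second training dataset, i.e. a second DT
# setting (different GPU type, parameter-server count, or worker count).
x2 = rng.standard_normal(dim)
y2 = 0.7  # e.g., observed GPU utilization in the new setting

# Forward pass, keeping activations for the backward pass.
acts = [x2]
for W in graph_layers:
    acts.append(acts[-1] @ W)
y_hat = acts[-1] @ mlp_head
err = 2.0 * (y_hat - y2)  # d(squared-error loss)/d(y_hat)

# Backward pass through the linear stack (chain rule).
grads = []
upstream = mlp_head  # gradient of y_hat w.r.t. the output of layer i
for i in range(len(graph_layers) - 1, -1, -1):
    grads.insert(0, err * np.outer(acts[i], upstream))
    upstream = graph_layers[i] @ upstream

# Transfer-learning step: update ONLY the latter half of the graph layers and
# the MLP head; the first-half layers stay frozen.
lr = 0.01
half = len(graph_layers) // 2
before = [W.copy() for W in graph_layers]
for i in range(half, len(graph_layers)):
    graph_layers[i] -= lr * grads[i]
mlp_head -= lr * err * acts[-1]

changed = [not np.allclose(b, W) for b, W in zip(before, graph_layers)]
print(changed)  # first-half layers remain frozen; latter-half layers changed
```

The partition between frozen and updated parameters is the feature claims 8 and 9 add on top of claim 6's general transfer-learning step.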
However, Patel teaches “wherein the performing of the transfer learning is performed using a second training dataset,” as “The method also comprises collecting second digital data representing a set of recipes from a recipes database, extracting third digital data representing recipe ingredient from each recipe in the set of recipes, representing each recipe ingredients for each recipe in the set of recipes as a digitally stored vector to result in groups of vectors that are associated with the set of recipes, creating a second training set for use in training a second neural network, the second training set comprising the groups of vectors that are associated with the set of recipes” [Col 3, lines 5-15]

“the training dataset is a dataset corresponding to a first DT setting,” as “In an embodiment, a computer-implemented method to generate a candidate formula of a plant-based food item using a set of ingredients to mimic a target food item that is not plant-based, comprises collecting first digital data representing a plurality of ingredients from an ingredients database,” [Col 2, lines 59-64]

“the second training dataset is a dataset corresponding to a second DT setting, and” as “creating a second training set for use in training a second neural network,” [Col 3, lines 11-14]

“the first DT setting and the second DT setting are different in at least one type of GPU that performs distributed deep learning, the number of parameter servers (PSs), and the number of worker nodes.” as “The prediction model 118 may generate as many candidate sets of ingredients as needed or desired by sampling different points in the encoded search space R.” [Col 4, lines 40-43]

Claim 8 is rejected over Orhan and Patel.
Orhan teaches “wherein the transfer learning updates at least one of parameters of at least some graph layers among a plurality of graph layers included in the prediction model and parameters of an MLP layer included in the prediction model.” as “The inputs to hidden units 1410 of the hidden layers L.sub.a, L.sub.b, and L.sub.c may be based on the outputs of other neurons 1410.” [¶0224]

Claim 9 is rejected over Orhan and Patel.

Orhan teaches “wherein the transfer learning updates parameters of the latter half layers of a plurality of graph layers included in the prediction model and parameters of an MLP layer.” as “At operation 1509, the CMF 136 perform L-layer GNN computations to compute a Q-score for each state-action pair in the current graph custom-character (see equation (18)). At operation 1510, the CMF 136 updates the parameters for the deployment scenario (e.g., the parameters set at operation 1502) such as the weight parameters, number of deployment scenarios I, episodes K and/or RL-steps T.” [¶0231]

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MASUD K KHAN whose telephone number is (571) 270-0606. The examiner can normally be reached Monday-Friday (8am-5pm).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUD K KHAN/
Primary Examiner, Art Unit 2132

Application/Control Number: 18/329,706
Art Unit: 2132