Prosecution Insights
Last updated: April 19, 2026
Application No. 17/759,168

COMMUNICATION SYSTEM BASED ON NEURAL NETWORK MODEL, AND CONFIGURATION METHOD THEREFOR

Final Rejection: §101, §103, §112
Filed: Jul 20, 2022
Examiner: AGRAWAL, SHISHIR
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: NTT Docomo Inc.
OA Round: 2 (Final)

Grant probability: 0% (At Risk)
OA rounds: 3-4
To grant: 3y 3m
With interview: 0%

Examiner Intelligence

Career allow rate: 0% (0 granted / 13 resolved; -55.0% vs Tech Center average)
Interview lift: +0.0% (minimal), based on resolved cases with interview
Typical timeline: 3y 3m average prosecution; 31 applications currently pending
Career history: 44 total applications across all art units

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 29.9% (-10.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 13 resolved cases.

Office Action

Rejections: §101, §103, §112

DETAILED ACTION

Status of Claims

This Office action is responsive to communications filed on 2025-12-17. Claims 8 and 22 were cancelled. Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are pending and are examined herein. Claims 1-2, 4-5, 7, 9, 11-13, and 23 are objected to. Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are rejected under 35 USC 112(b). Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are rejected under 35 USC 101. Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are rejected under 35 USC 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after 2013-03-16, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding the objections for informalities, the applicant's amendments resolve the issues raised in the previous Office action, but they also introduce further issues as described below.

Regarding the rejections under 35 USC 112(b), the applicant's amendments do not adequately resolve the substance of the issues raised in the previous Office action (e.g., the addition of "in historical time" to claims 13 and 27 does not clarify the intended antecedent of "the occurrence times"). The amendments also raise further issues of indefiniteness. The issues in the pending claims are described below.

Regarding the rejections under 35 USC 101, the applicant's arguments have been fully considered, but they are not persuasive. The applicant indicates that the amended claim "recites a configuration method for a communication system comprising a master node and a plurality of child nodes connected with the master node, where the method is performed by a processor and a receiver" and goes on to assert that "the amended claims are not directed towards an abstract idea but are directed towards practical applications in the telecommunication technology" [remarks, page 13]. This assertion is not persuasive because a mere description of generic computing equipment (a processor and a receiver) or of a particular technological environment (a master node connected with a plurality of child nodes) does not prevent the claims from being directed to an abstract idea. The examiner notes that, though the claims do recite a distributed technological environment, the machine learning itself does not appear to be distributed.

The applicant asserts that the pending claims provide an improvement [remarks, pages 16-19]. The examiner respectfully disagrees and maintains that the claims recite abstract ideas and generic additional elements, none of which a person of ordinary skill in the art would recognize as providing an improvement. As best understood by the examiner, the "thrust" of the applicant's remarks appears to be: quoting purported advantages of the invention described in the specification [remarks, pages 16-17], quoting limitations added to the amended claims [remarks, page 17], and then immediately asserting that "[t]herefore… the claimed invention is more than an abstract idea" [remarks, page 18]. This is not persuasive: the applicant has provided no substantive link between the claim elements and the purported advantages recited in the specification (i.e., no explanation as to which specific claim elements provide which specific improvement).
The applicant is reminded that MPEP 2106.05(a) explicitly indicates that "reflect[ing] the disclosed improvement in technology" means that "the claim must include the components or steps of the invention that provide the improvement described in the specification". If the applicant believes that specific claim elements included in the pending claims provide an improvement, the applicant is invited to indicate specifically which elements provide which improvement. (The examiner notes further that some of the claim limitations quoted by the applicant are abstract ideas, not additional elements. MPEP 2106.05(a) indicates that one of the requirements of the improvements analysis is that a "judicial exception alone cannot provide the improvement", which means that the applicant must identify limitations which are additional elements, not abstract ideas, and explain why they provide an improvement.)

Regarding the rejections under 35 USC 102/103, the applicant's arguments have been fully considered. The substance of the applicant's amendments to the independent claims amounts to an incorporation of the limitations previously found in dependent claims 8 and 22, and the applicant's arguments regarding the rejection under Wang in view of Vankalaya are unpersuasive.

The applicant asserts that Wang in view of Vankalaya "fails to teach 'dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly'" [remarks, page 23]. This remark is unpersuasive: as noted in the previous Office action, Wang discloses a neural network associated with each BS, and Wang in view of Vankalaya discloses grouping the BSs. It would be clear to a person of ordinary skill in the art that a grouping of BSs necessarily results in a grouping of the neural networks operating in the BSs.

The applicant asserts that Wang in view of Vankalaya "fails to teach 'training a child node neural network model for a plurality of categories to obtain an updated child node neural network model by using the characteristic information'" [remarks, page 23]. This argument is not persuasive. As noted in the previous Office action, Wang already discloses that the core network "analyzes various parameters" and that neural networks are configured "based on these parameters" [Wang, 0059]. The parameters analyzed by the core network, based on which neural networks are configured, have been mapped to the "characteristic information" of the claim. In fact, similar disclosures are also visible in Vankalaya (cf. the rejection of claim 9 in the previous Office action and below): it discloses receiving feedback from each BS in a group [Vankalaya, figure 12 step 1210] and updating the neural network of a group of BSs based on the received feedback [Vankalaya, figure 12 step 1212], and it indicates that the updating can involve (re)learning the model [Vankalaya, 0153]. Consequently, both Wang and Vankalaya individually disclose configuring/training child node neural networks "using the characteristic information" recited by the pending claims, so the combination certainly discloses this feature.

The applicant asserts that "a person of ordinary skill in the art would have had no motivation to supply the missing elements without the benefit of Applicant's own disclosure as a guide" [remarks, page 24]. This is unpersuasive: the motivation to combine as given in the previous Office action (and again below) is entirely independent of the applicant's disclosure.

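For orientation, the flow in dispute can be summarized as: group the child nodes (BSs) by their characteristic information, train one model per group, and push each group's updated model back to its member nodes. The following is a minimal illustrative sketch of that flow in Python, assuming a scikit-learn-style clustering step; the helper names (train_category_model, configure_child_nodes) and the centroid stand-in for "training" are hypothetical and are not drawn from the application or the cited references.

    # Minimal sketch (hypothetical): divide child nodes into categories by
    # characteristic information, train one model per category, and update
    # every child node with its category's model.
    import numpy as np
    from sklearn.cluster import KMeans

    def train_category_model(features: np.ndarray) -> np.ndarray:
        # Stand-in for per-category training: here, simply the centroid of
        # the members' characteristic information.
        return features.mean(axis=0)

    def configure_child_nodes(characteristics: np.ndarray, n_categories: int) -> dict:
        # Divide the plurality of child nodes into categories based on the
        # characteristic information (one feature row per child node).
        categories = KMeans(n_clusters=n_categories, n_init=10).fit_predict(characteristics)
        # Train an updated model per category, then update each child node's
        # model with the updated model of its category.
        updated = {c: train_category_model(characteristics[categories == c])
                   for c in range(n_categories)}
        return {node: updated[int(c)] for node, c in enumerate(categories)}

Under this reading, a grouping of the nodes induces a grouping of their models, which is the point the examiner presses in the response above.
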
The prior art mappings, updated in view of the applicant's amendments, are given below.

Examiner's Remarks

The claims include a number of recitations of configuring and configuration. These words are interpreted broadly in accordance with the plain meaning of "configure" as referring to any act of setting up or arranging something so that it is ready for a particular purpose or so that it is to someone's liking. The examiner notes that this interpretation is in keeping with the applicant's specification, which indicates, for example, that "configuring" a neural network can refer to a wide variety of actions including at least initializing, training, or updating the neural network (see, e.g., [specification, 0059]).

Claim Objections

Claims 1-2, 4-5, 7, 9, 11-13, and 23 are objected to because of the following informalities:

Claim 1 recites "dynamically configuring each child node neural network model based on the acquired characteristic information by a processor, comprises:" [emphasis added], but the underlined word is dangling and renders the claim ungrammatical. The examiner suggests "dynamically configuring each child node neural network model based on the acquired characteristic information by a processor, wherein the dynamically configuring comprises:" for grammaticality and for consistency with the language of the cancelled dependent claim whose limitations have been incorporated into the independent claim. Dependent claims 2, 4-5, 7, 9, and 11-13 inherit the objection. (For consistency of language between claim groups, the examiner further suggests amending claim 15 to read: "wherein the dynamically configuring comprises: dividing the plurality of child nodes…")

Claim 1 recites "and wherein the communication system configuration method comprises" [emphasis added], but this should be "and wherein the

Claims 9 and 23 are indefinite because they recite "dividing/es the plurality of child nodes into a plurality of categories based on the characteristic information by the processor". The parent claim already introduces "a plurality of categories", so the repetition of nomenclature results in terminology having ambiguous antecedent basis. In fact, the parent claim already recites this limitation verbatim, and accordingly, the examiner suggests removing this limitation from these dependent claims.

Appropriate correction is required.

Claim Rejections - 35 USC 112(b)

The following is a quotation of 35 USC 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 USC 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are rejected under 35 USC 112(b) or 35 USC 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 USC 112, the applicant) regards as the invention.

Claims 1 and 15 are indefinite for at least the following reasons:

They recite "dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly" [emphasis added], but this language does not adequately clarify what the division of child node neural network models is intended to be according to. MPEP 2173.05(b)(II) indicates that a claim is "indefinite when a limitation of the claim is defined by reference to an object and the relationship between the limitation and the object is not sufficiently defined". In the pending independent claims, the object with respect to which a relationship appears to be defined has been left undefined, thereby rendering the claim indefinite. For the purpose of compact prosecution, the claims are interpreted broadly as encompassing any division of the child node neural network models into the plurality of categories (e.g., a division of child node neural network models that is in accordance with the division of the child nodes).

They recite "using the characteristic information, training the child node neural network model for the plurality of categories to obtain an updated child node neural network model by the processor; and updating the child node neural network models of the plurality of child nodes by using updated child node neural network model by the processor" [emphasis added]. The first underlined phrase lacks antecedent basis (there is not a unique "child node neural network model for the plurality of categories"), and the third is ungrammatical due to a missing article. Moreover, the specification appears to describe an updated child node neural network model for each category [specification, 0089], not a singular "updated child node neural network model" which is used to update all of the child node neural network models, as appears to be presently recited by the claim. MPEP 2173.03 indicates that a claim is "indefinite when a conflict or inconsistency between the claimed subject matter and the specification disclosure renders the scope of the claim uncertain". Consequently, the pending independent claims are indefinite. In accordance with the interpretation suggested by the specification, the examiner suggests amending the claim to "using the characteristic information, training the child node neural network models for the plurality of categories to obtain an updated child node neural network model for each category of the plurality of categories by the processor; and updating the child node neural network models of the plurality of child nodes by using the updated child node neural network models by the processor". For the purpose of compact prosecution, the claims are interpreted broadly herein as encompassing at least this interpretation.

Dependent claims 2, 4-5, 7, 9, 11-13, 16, 18-19, 21, 23, and 25-27 inherit the rejections.

Claims 13 and 27 recite "determining a weight of each historical optimal beam by using the occurrence times of each historical optimal beam in the historical optimal beam set" [emphasis added], but the underlined phrase lacks antecedent basis. The intended referent of this phrase is moreover unclear from the relevant portion of the specification [specification, 0106-0108]. For the purpose of compact prosecution, the limitation is interpreted broadly as encompassing any act of determining a weight associated with each historical optimal beam.

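To make this compact-prosecution reading of claims 13 and 27 concrete: treat "occurrence times" as occurrence counts, normalize the counts into per-beam weights, and scale a standard loss by those weights. The sketch below (Python) illustrates that interpretation only; the function names and the normalization choice are hypothetical and are not taken from the specification.

    # Minimal sketch (hypothetical) of claims 13/27 under the examiner's
    # broad interpretation: weight each historical optimal beam by its
    # occurrence count, then use the weights in a weighted loss.
    from collections import Counter
    import numpy as np

    def beam_weights(historical_optimal_beams: list) -> dict:
        # "Occurrence times" read as occurrence counts, normalized to weights.
        counts = Counter(historical_optimal_beams)
        total = sum(counts.values())
        return {beam: n / total for beam, n in counts.items()}

    def weighted_loss(probs: np.ndarray, labels: np.ndarray, weights: dict) -> float:
        # Weighted negative log-likelihood over predicted beam probabilities:
        # each training example is scaled by the weight of its optimal beam.
        w = np.array([weights.get(int(b), 0.0) for b in labels])
        nll = -np.log(probs[np.arange(len(labels)), labels])
        return float((w * nll).mean())
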
Claim Rejections - 35 USC 101

35 USC 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-5, 7, 9, 11-13, 15-16, 18-19, 21, 23, and 25-27 are rejected under 35 USC 101 because the claimed inventions are directed to abstract ideas without significantly more.

Claim 1

Step 1. The claim and its dependents 2, 4-5, 7, 9, and 11-13 fall under the statutory category of methods. An analysis of step 2 for each of these claims follows.

Step 2A Prong 1. The claim recites the following abstract ideas:

- "A configuration method for a communication system" (This encompasses mental processes that can be performed in the human mind or by a human using pen and paper, since a human being can mentally or manually perform actions which fall under the broadest reasonable interpretation of "configuring" a communication system (cf. examiner's remarks and next). See MPEP 2106.04(a)(2)(III).)
- "and dynamically configuring each child node neural network model based on the acquired characteristic information [by a processor], comprises:" (This encompasses mental processes that can be performed in the human mind or by a human using pen and paper. A human mind can perform actions which fall under the broadest reasonable interpretation of "configuring" neural networks. For example, a human mind can make a decision about the architecture to be used in a neural network, which is a form of "configuring" a neural network (cf. examiner's remarks regarding "configuring"). See MPEP 2106.04(a)(2)(III).)
- "dividing the plurality of child nodes into a plurality of categories based on the characteristic information," (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "and dividing the child node neural network models of the plurality of child nodes into a plurality of categories accordingly" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- "[A configuration method for a communication system] based on neural network models," (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "wherein the communication system comprises at least one master node and a plurality of child nodes communicatively connected with the at least one master node, and a child node neural network model is configured in each of the plurality of child nodes," (This recites a general link between an abstract idea and a particular field of use or technological environment. See MPEP 2106.05(h).)
- "and wherein the communication system configuration method comprises: acquiring characteristic information of the plurality of child nodes by a receiver;" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)
- "by a processor… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "using the characteristic information," (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)
- "training the child node neural network model for the plurality of categories to obtain an updated child node neural network model" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updating the child node neural network model of the plurality of child nodes by using the child node neural network model" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- "[A configuration method for a communication system] based on neural network models," (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "wherein the communication system comprises at least one master node and a plurality of child nodes communicatively connected with the at least one master node, and a child node neural network model is configured in each of the plurality of child nodes," (This recites a general link between an abstract idea and a particular field of use or technological environment. See MPEP 2106.05(h).)
- "and wherein the communication system configuration method comprises: acquiring characteristic information of the plurality of child nodes by a receiver;" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "by a processor… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "using the characteristic information," (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)
- "training the child node neural network model for the plurality of categories to obtain an updated child node neural network model" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updating the child node neural network model of the plurality of child nodes by using the child node neural network model" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data storage. See MPEP 2106.05(d)(II), "Electronic recordkeeping" and/or "Storing and retrieving information in memory".)

Claim 2

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "and predicting the characteristic information of the one child node [by the processor] based on the initial information." (This recites a mental process that can be performed in the human mind or by a human using pen and paper, since a human being can mentally or manually make predictions. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from one child node of the plurality of child nodes by the receiver," (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)
- "or receiving initial information transmitted from one child node of the plurality of child nodes by the receiver," (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)
- "by the processor" (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from one child node of the plurality of child nodes by the receiver," (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "or receiving initial information transmitted from one child node of the plurality of child nodes by the receiver," (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "by the processor" (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Claim 4

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "[The configuration method of claim 2, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] selecting one neural network model from a plurality of predetermined neural network models based on the characteristic information" (This recites a mental process that can be performed in the human mind or by a human using pen and paper, since a human being can mentally or manually make selections. See MPEP 2106.04(a)(2)(III).)
- "and configuring the child node neural network model of the one child node by using the selected one neural network model" (This encompasses mental processes that can be performed in the human mind or by a human using pen and paper. A human mind can perform actions which fall under the broadest reasonable interpretation of "configuring" neural networks. A human mind can make a decision to use a selected model. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Claim 5

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "[The configuration method of claim 2, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] selecting a matching child node that matches the one child node from the plurality of child nodes based on the characteristic information" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "and configuring the child node neural network model of the one child node by using the child node neural network model of the matching child node" (This encompasses mental processes that can be performed in the human mind or by a human using pen and paper. A human mind can perform actions which fall under the broadest reasonable interpretation of "configuring" neural networks. A human mind can make a decision to use a particular model. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor;" (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "receiving the child node neural network model of the matching child node from the matching child node by the receiver;" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor;" (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "receiving the child node neural network model of the matching child node from the matching child node by the receiver;" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)

Claim 7

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from each of the plurality of child nodes by the receiver." (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from each of the plurality of child nodes by the receiver." (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "by the receiver." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Claim 9

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "[The configuration method of claim 7, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] dividing the plurality of child nodes into a plurality of categories based on the characteristic information" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "notifying the characteristic information of the child nodes belonging to a same category among the plurality of categories to the child nodes of the same category according to the plurality of categories" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)
- "and training the child nodes of the same category by using the characteristic information of the child nodes of the same category," (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updating the child node neural network model of the child nodes of the same category." (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "notifying the characteristic information of the child nodes belonging to a same category among the plurality of categories to the child nodes of the same category according to the plurality of categories;" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "and training the child nodes of the same category by using the characteristic information of the child nodes of the same category," (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updating the child node neural network model of the child nodes of the same category." (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data storage. See MPEP 2106.05(d)(II), "Electronic recordkeeping" and/or "Storing and retrieving information in memory".)

Claim 11

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "[The configuration method of claim 1, wherein the dynamically configuring each child node neural network model comprises one of:] establishing indexes of a plurality of neural network models, and using the indexes to indicate that the child node neural network model is one of the plurality of neural network models" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "indicating the child node neural network model by using a model weight of the one of the plurality of neural network models" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "indicating the child node neural network model by using a model weight variation of the one of the plurality of neural network models" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "and indicating the child node neural network model by using a semantic representation of the one of the plurality of neural network models" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Claim 12

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 7, wherein] the characteristic information is a historical optimal beam set of a user equipment corresponding to one of the plurality of child nodes, and wherein the historical optimal beam set comprises a difference sequence between a plurality of optimal beams at a plurality of consecutive time points and an optimal beam at a latest time point; or a difference sequence between the optimal beams of two adjacent time points in the plurality of consecutive time points." (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "[The configuration method of claim 7, wherein] the characteristic information is a historical optimal beam set of a user equipment corresponding to one of the plurality of child nodes, and wherein the historical optimal beam set comprises a difference sequence between a plurality of optimal beams at a plurality of consecutive time points and an optimal beam at a latest time point; or a difference sequence between the optimal beams of two adjacent time points in the plurality of consecutive time points." (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)

Claim 13

Step 2A Prong 1. The claim recites the following abstract ideas:

- The abstract idea(s) in the parent claim(s).
- "[The configuration method of claim 12, wherein the dynamically configuring the child node neural network model of the one of the plurality of child nodes by using the characteristic information comprises:] determining a weight of each historical optimal beam by using the occurrence times of each historical optimal beam in the historical optimal beam set in historical time" (This recites a mathematical concept and/or a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)
- "and according to the weight of each historical optimal beam and the historical optimal beam set, constructing a weighted loss function to perform training to update the child node neural network model" (This recites a mathematical concept and/or a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(I, III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- The additional element(s) in the parent claim(s).
- "by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)

Claim 15

Step 1. The claim and its dependents 16, 18-19, 21, 23, and 25-27 fall under the statutory category of machines.

Step 2A Prong 1. The claim recites the following abstract ideas:

- "and dynamically configures each child node neural network model based on the acquired characteristic information" (This encompasses mental processes that can be performed in the human mind or by a human using pen and paper. A human mind can perform actions which fall under the broadest reasonable interpretation of "configuring" neural networks. For example, a human mind can make a decision about the architecture to be used in a neural network, which is a form of "configuring" (cf. examiner's remarks regarding "configuring"). See MPEP 2106.04(a)(2)(III).)
- "divides the plurality of child nodes into a plurality of categories based on the characteristic information," (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)
- "and divides the child node neural network models of the plurality of child nodes into a plurality of categories accordingly" (This recites a mental process that can be performed in the human mind or by a human using pen and paper. See MPEP 2106.04(a)(2)(III).)

Step 2A Prong 2. The claim recites the following additional elements which, considered individually and as an ordered combination, do not integrate the abstract idea into a practical application:

- "A communication system [based on neural network models,] comprising: at least one master node; a plurality of child nodes, which are communicatively connected with the at least one master node, and a child node neural network model is configured in each of the plurality of child nodes," (This recites a general link between an abstract idea and a particular field of use or technological environment. See MPEP 2106.05(h).)
- "based on neural network models" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "wherein the at least one master node acquires the characteristic information of the plurality of child nodes by a receiver;" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)
- "by a processor… wherein the at least one master node [divides]… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "using the characteristic information," (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)
- "trains the child node neural network model for the plurality of categories to obtain an updated child node neural network model" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updates the child node neural network model of the plurality of child nodes by using the child node neural network model" (This recites insignificant extra-solution activity. See MPEP 2106.05(g).)

Step 2B. The claim recites the following additional elements which, considered individually and as an ordered combination, do not amount to significantly more than the abstract idea:

- "A communication system [based on neural network models,] comprising: at least one master node; a plurality of child nodes, which are communicatively connected with the at least one master node, and a child node neural network model is configured in each of the plurality of child nodes," (This recites a general link between an abstract idea and a particular field of use or technological environment. See MPEP 2106.05(h).)
- "based on neural network models" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "wherein the at least one master node acquires the characteristic information of the plurality of child nodes by a receiver;" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data transfer. See MPEP 2106.05(d)(II), "Receiving or transmitting data over a network" and/or "Storing and retrieving information in memory".)
- "by a processor… wherein the at least one master node [divides]… by the processor… by the processor… by the processor." (This recites generic computing components for performing an abstract idea. See MPEP 2106.05(f)(2).)
- "using the characteristic information," (This recites data of a particular type or source, merely linking an abstract idea to a particular field of use. See MPEP 2106.05(h).)
- "trains the child node neural network model for the plurality of categories to obtain an updated child node neural network model" (This recites merely applying (or equivalent) an abstract idea, or implementing an abstract idea on a computer, or using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).)
- "and updates the child node neural network model of the plurality of child nodes by using the child node neural network model" (This insignificant extra-solution activity is well-understood, routine, and conventional, as it is mere data storage. See MPEP 2106.05(d)(II), "Electronic recordkeeping" and/or "Storing and retrieving information in memory".)

Claims 16, 18-19, 21, 23, and 25-27 inherit limitations from claim 15 and recite additional limitations which are substantially similar to those recited by claims 2, 4-5, 7, 9, and 11-13, respectively, so they are rejected by the same rationale.

Claim Rejections - 35 USC 103

The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 USC 102(b)(2)(C) for any potential 35 USC 102(a)(2) prior art against the later invention.

Claims 1-2, 4-5, 7, 9, 11, 15-16, 18-19, 21, 23, and 25 are rejected under 35 USC 103 as being unpatentable over Jibing WANG et al. (US20230004864A1, effectively filed 2019-10-28; hereafter, "Wang") in view of Satya VANKALAYA et al. (US20220278728A1, effectively filed 2019-11-22; hereafter, "Vankalaya").

Claim 1 Wang discloses: A configuration method for a communication system based on neural network models, wherein the communication system comprises at least one master node and a plurality of child nodes communicatively connected with the at least one master node, ([Wang, figure 1 and 0040-0042, 0138]: Wang discloses an environment 100 in which a core network 150 communicates wirelessly with base stations 120 which communicate wirelessly with user equipment devices (UEs) 110 [Wang, figure 1 and 0040-0042] and explicitly indicates that there are “multiple UEs” in the wireless communication system [Wang, 0138]. The core network and base stations map, respectively, to the “at least one master node” and the “plurality of child nodes” of the claim. The examiner notes that mapping base stations to “child nodes” is consistent with the broadest reasonable interpretation of the term in view of the applicant’s specification (“the child node 11 is, for example, a base station…” [specification, 0099]).) and a child node neural network model is configured in each of the plurality of child nodes, ([Wang, 0051, 0053-0054, and 0266]: Wang discloses that a “base station neural network manager 268 selects the NN formation configurations utilized by the base station” [Wang, 0051] from a neural network table 272 which stores NN formation configurations [Wang, 0053-0054] in order to configure a deep neural network (DNN) [Wang, 0051]. See also: [Wang, 0266]. This DNN maps to a “child node neural network model” of the claim.) and wherein the communication system configuration method comprises: acquiring characteristic information of the plurality of child nodes by a receiver; ([Wang, 0059, 0066, and figure 3]: Wang discloses that the core network “analyzes various parameters such as current signal channel conditions (e.g., as reported by base stations 120, as reported by other wireless access points, as reported by UEs 110 (via base stations or other wireless access points)), capabilities at base stations 120 (e.g., antenna configurations, cell configurations, MIMO capabilities, radio capabilities, processing capabilities), capabilities of UE 110 (e.g., antenna configurations, MIMO capabilities, radio capabilities, processing capabilities), and so forth. For example, the base stations 120 obtain the various parameters during the communications with the UE and forward the parameters to the core network neural network manager 312” [Wang, 0059]. The parameters analyzed by the core network neural network manager map to the “characteristic information” of the claim. The core network interface 320 [Wang, figure 3] is “for communication of user-plane, control-plane, and other information with the other functions or entities in the core network 150, base stations 120, or UE 110” [Wang, 0066] and thus maps to the “receiver” of the claim.) and dynamically configuring each child node neural network model based on the acquired characteristic information by a processor, comprises: ([Wang, 0053-0054, 0059, 0062, and figures 2-3]: Wang further discloses that the “core network neural network manager selects, based on these parameters, a NN formation configuration… [and] then communicates the selected NN formation configuration to the base stations 120” [Wang, 0059; emphasis added]. 
The base station training modules 270 generate NN formation configurations which are “streamlined … relative to those generated by the training module 314 [of the core network]” [Wang, 0062], and which are stored in the neural network tables 272 [Wang, 0053-0054]. In other words, the configuring of each “child node neural network model” as mapped above is “based on the acquired information” as recited by the claim, with the “acquired information” being as mapped above. The core network processor 304 [Wang, figure 3] maps to the “processor” of the claim. Alternatively, all of the processors 206 and 304 taken together [Wang, figures 2-3] can be mapped to the “processor” of the claim.) Wang discusses “neural network formation configuration used by multiple base stations” (as opposed to neural network formation configurations which are “specific to a respective base station”) [Wang, 0128]. Nonetheless, it may be argued that it does not distinctly disclose a step of grouping base stations. In other words, Wang may not distinctly disclose: dividing the plurality of child nodes into a plurality of categories based on the characteristic information, and dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly by the processor; using the characteristic information, training the child node neural network model for the plurality of categories to obtain an updated child node neural network model by the processor; and updating the child node neural network models of the plurality of child nodes by using updated child node neural network model by the processor. Vankalaya is in the field of communications networks, discussing base stations (BSs) in communication with UEs [Vankalaya, figure 5; see also, 0087 and figure 8]. Moreover, Wang in view of Vankalaya discloses: dividing the plurality of child nodes into a plurality of categories based on the characteristic information, and dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly by the processor; ([Vankalaya, 0111 and figure 12]: Vankalaya discloses “classify[ing]… each BS (504) into a group of BSs based on similar working environment of the BSs, for example, a frequency bandwidth of the BSs, a user load associated with each BS classified in the group of BSs” [Vankalaya, 0111; see also, figure 12 step 1202]. In the combination, the base stations of Wang are classified into groups as in Vankalaya, and the resulting groups of BSs map to the “plurality of categories” of the claim. A division of the BSs into categories results in a division of the “child node neural network models” associated with the BSs into categories.) using the characteristic information, training the child node neural network model for the plurality of categories to obtain an updated child node neural network model by the processor; and updating the child node neural network models of the plurality of child nodes by using updated child node neural network model by the processor. ([Wang, 0128; Vankalaya, figure 12]: Vankalaya discloses determining a neural network “to be deployed at each BS of the group of BSs” and sending it “to each of BS of the group of BSs” [Vankalaya, figure 12 elements 1206-1208]. As noted above, Wang also discloses “neural network formation configuration used by multiple base stations” [Wang, 0128]. 
In the combination, the multiple base stations which share a neural network formation configuration, as in Wang, are taken to be the neural networks in a group, as in Vankalaya. As noted above, the configuration/training of neural networks is based on the parameters which are mapped to the “characteristic information” of the claim, so the training is “using the characteristic information” as recited. The applicant is also invited to consult the rejection of claim 9 below.) Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the communication system of Wang with that of Vankalaya because it “intelligently predict[s] a channel quality status (CQS) in a wireless communication network” and “reduce[s] the computational burden at the UE related to reporting the CQS information to the base station” [Vankalaya, 0045-0046], so the combination would be more efficient overall. Claim 2 Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses: [The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from one child node of the plurality of child nodes by the receiver, or receiving initial information transmitted from one child node of the plurality of child nodes by the receiver, and predicting the characteristic information of the one child node by the processor based on the initial information. ([Wang, 0059]: The claim recites two limitations in the alternative (“receiving the characteristic information” and “receiving initial information… and predicting the characteristic information”) so its broadest reasonable interpretation necessitates only one of the two. As noted above, the parameters analyzed by the core network (i.e., the “characteristic information” of the claim) are reported by (i.e., “receiv[ed]… from”) the base stations [Wang, 0059].) The same motivation to combine applies. Claim 4 Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses: [The configuration method of claim 2, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] selecting one neural network model from a plurality of predetermined neural network models based on the characteristic information by the processor; ([Wang, 0059]: As noted above, Wang discloses that the core network “selects, based on these parameters, a NN formation configuration… [and] then communications the selected NN formation configuration to the base stations 120” [Wang, 0059]. The selected NN formation configuration is the “one neural network model” of the claim, and the space of all NN formation configurations is the “plurality of predetermined neural network models” of the claim (cf. [Wang, 0046, 0075-0081]).) and configuring the child node neural network model of the one child node by using the selected one neural network model by the processor. ([Wang, 0051, 0053, 0062, and 0266]: As noted above, Wang discloses that the base station receives NN formation configurations from the core network [Wang, 0059] and generates streamlined NN formation configurations [Wang, 0062] which are stored in a neural network table 272 [Wang, 0053-0054], from which the base station neural network manager 268 selects formation configurations for use in a DNN [Wang, 0051]. See also: [Wang, 0266]. 
In other words, configuring the “child node neural network model” as mapped above “us[es] the selected one neural network model” as recited by the claim, where the “selected one neural network model” is as mapped above.) The same motivation to combine applies. Claim 5 Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses: [The configuration method of claim 2, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] selecting a matching child node that matches the one child node from the plurality of child nodes ([Wang, figure 20 and 0226]: Wang depicts [Wang, figure 20] a communication system which “exchanges information… through the base station 2004 and base station 2006” via a “single direction E2E communication” 2010 [Wang, 0226]. Base station 2006 maps to the “one child node” of the claim, and base station 2004 maps to the “matching child node” of the claim.) based on the characteristic information ([Vankalaya, 0111 and figure 12]: Vankalaya discloses “classify[ing]… each BS (504) into a group of BSs based on similar working environment of the BSs, for example, a frequency bandwidth of the BSs, a user load associated with each BS classified in the group of BSs” [Vankalaya, 0111; see also, figure 12 step 1202]. In the combination, the mapping suggested above is refined so that one of the BSs in the same group maps to the “matching child node” of the claim.) by the processor; ([Wang, figure 3]: As noted above, the processor of the core network maps to the “processor” of the claim.) receiving the child node neural network model of the matching child node from the matching child node by the receiver; ([Wang, figure 20]: The neural network 2018 of base station 2004 (i.e., the “child node neural network of the matching child node” of the claim) is transmitted from base station 2004 (i.e., “from the matching child node” as recited by the claim).) and configuring the child node neural network model of the one child node by using the child node neural network model of the matching child node by the processor. ([Wang, figure 20]: The neural network 2022 of base station 2006 (i.e., the “child node neural network model of the one child node” of the claim) is based on the neural network 2018 of base station 2004 (i.e., the “child node neural network of the matching child node” of the claim).) The same motivation to combine applies. Claim 7 Wang discloses the elements of the parent claim(s). It also discloses: [The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from each of the plurality of child nodes by the receiver. ([Wang, 0059]: As noted above, the parameters analyzed by the core network (mapped to the “characteristic information” of the claim) are reported to the core network by base stations [Wang, 0059].) The same motivation to combine applies. Claim 9 Wang in view of Vankalaya discloses the elements of the parent claim(s). 
Claim 7

Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses:

[The configuration method of claim 1, wherein the acquiring characteristic information of the plurality of child nodes comprises:] receiving the characteristic information transmitted from each of the plurality of child nodes by the receiver. ([Wang, 0059]: As noted above, the parameters analyzed by the core network (mapped to the “characteristic information” of the claim) are reported to the core network by the base stations [Wang, 0059].)

The same motivation to combine applies.

Claim 9

Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses:

[The configuration method of claim 7, wherein the dynamically configuring each child node neural network model based on the acquired characteristic information comprises:] dividing the plurality of child nodes into a plurality of categories based on the characteristic information by the processor; ([Vankalaya, 0111 and figure 12]: This is a repetition of a limitation found in the parent claim and is disclosed in the same way as described above.)

notifying the characteristic information of the child nodes belonging to a same category among the plurality of categories to the child nodes of the same category according to the plurality of categories by the processor; ([Vankalaya, figure 12]: Vankalaya discloses receiving feedback from each BS in a group [Vankalaya, figure 12 step 1210]. This feedback received from each BS in a group maps to the “characteristic information of the child nodes belonging to a same category” of the claim.)

and training the child node neural network models of the child nodes of the same category by using the characteristic information of the child nodes of the same category, and updating the child node neural network model of the child nodes of the same category by the processor. ([Vankalaya, figure 12 and 0153]: Vankalaya discloses updating the neural network of a group of BSs based on the received feedback [Vankalaya, figure 12 step 1212], and indicates that the updating can involve (re)learning the model [Vankalaya, 0153]. In other words, the updating step of Vankalaya maps to the “training” and “updating” steps of the claim.)

The same motivation to combine applies.

Claim 11

Wang in view of Vankalaya discloses the elements of the parent claim(s). It also discloses:

[The configuration method of claim 1, wherein the dynamically configuring each child node neural network model comprises one of:] establishing indexes of a plurality of neural network models, and using the indexes to indicate that the child node neural network model is one of the plurality of the neural network models by the processor; indicating the child node neural network model by using a model weight of the one of the plurality of neural network models by the processor; indicating the child node neural network model by using a model weight variation of the one of the plurality of neural network models by the processor; and indicating the child node neural network model by using a semantic representation of the one of the plurality of neural network models by the processor. ([Wang, 0051, 0053-0054]: The claim recites four limitations in the alternative, so its broadest reasonable interpretation requires only one of the four. As noted above, Wang discloses a base station neural network manager which configures a DNN by selecting NN formation configurations from a neural network table [Wang, 0051]. The “neural network table 272 stores multiple different NN formation configuration elements and/or NN formation configurations” and a “single index value of the neural network table 272 maps to a single NN formation configuration” [Wang, 0054] (where “configurations include any combination of information that defines the behavior of a neural network, such as node connections, coefficients, active layers, weights, biases, pooling, etc” [Wang, 0053]). The index values of the neural network table 272 map to the “indexes of a plurality of neural network models” of the claim, and the NN formation configurations the table stores map to the “plurality of neural network models” of the claim. This mapping ensures that the “indexes” are used “to indicate that the child node neural network model is one of the plurality of the neural network models” as recited by the claim.)

The same motivation to combine applies.
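A minimal sketch of this index-to-configuration signaling follows. The table entries are invented placeholders; only the pattern (both ends hold the same table, so a single index value identifies one configuration, avoiding transmission of full model weights) reflects the mapping above.

    # Hypothetical sketch of indicating a model by table index rather than
    # by full weights. Entries are placeholders, not Wang's table contents.

    NEURAL_NETWORK_TABLE = {
        0: {"layers": 2, "activation": "relu", "weights_id": "cfg_a"},
        1: {"layers": 4, "activation": "tanh", "weights_id": "cfg_b"},
        2: {"layers": 8, "activation": "relu", "weights_id": "cfg_c"},
    }

    def indicate_model(index: int) -> int:
        """Master node signals only the index of the chosen configuration."""
        assert index in NEURAL_NETWORK_TABLE
        return index

    def configure_from_index(index: int) -> dict:
        """Child node resolves the index against its copy of the table."""
        return NEURAL_NETWORK_TABLE[index]

    print(configure_from_index(indicate_model(1)))  # -> the 4-layer configuration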
Claim 15

Wang discloses:

A communication system based on neural network models, comprising: at least one master node; a plurality of child nodes, which are communicatively connected with the at least one master node, ([Wang, figure 1 and 0040-0042, 0138]: Wang discloses an environment 100 in which a core network 150 communicates wirelessly with base stations 120 which communicate wirelessly with user equipment devices (UEs) 110 [Wang, figure 1 and 0040-0042] and explicitly indicates that there are “multiple UEs” in the wireless communication system [Wang, 0138]. The core network and base stations map, respectively, to the “at least one master node” and the “plurality of child nodes” of the claim. The examiner notes that mapping base stations to “child nodes” is consistent with the broadest reasonable interpretation of the term in view of the applicant’s specification (“the child node 11 is, for example, a base station…” [specification, 0099]).)

and a child node neural network model is configured in each of the plurality of child nodes, ([Wang, 0051, 0053-0054, and 0266]: Wang discloses that a “base station neural network manager 268 selects the NN formation configurations utilized by the base station” [Wang, 0051] from a neural network table 272 which stores NN formation configurations [Wang, 0053-0054] in order to configure a deep neural network (DNN) [Wang, 0051]. See also: [Wang, 0266]. This DNN maps to a “child node neural network model” of the claim.)

wherein the at least one master node acquires the characteristic information of the plurality of child nodes by a receiver; ([Wang, 0059, 0066, and figure 3]: Wang discloses that the core network “analyzes various parameters such as current signal channel conditions (e.g., as reported by base stations 120, as reported by other wireless access points, as reported by UEs 110 (via base stations or other wireless access points)), capabilities at base stations 120 (e.g., antenna configurations, cell configurations, MIMO capabilities, radio capabilities, processing capabilities), capabilities of UE 110 (e.g., antenna configurations, MIMO capabilities, radio capabilities, processing capabilities), and so forth. For example, the base stations 120 obtain the various parameters during the communications with the UE and forward the parameters to the core network neural network manager 312” [Wang, 0059]. The parameters analyzed by the core network neural network manager map to the “characteristic information” of the claim. The core network interface 320 [Wang, figure 3] is “for communication of user-plane, control-plane, and other information with the other functions or entities in the core network 150, base stations 120, or UE 110” [Wang, 0066] and thus maps to the “receiver” of the claim.)

and dynamically configures each child node neural network model based on the acquired characteristic information by a processor, ([Wang, 0053-0054, 0059, 0062, and figures 2-3]: Wang further discloses that the “core network neural network manager selects, based on these parameters, a NN formation configuration… [and] then communicates the selected NN formation configuration to the base stations 120” [Wang, 0059; emphasis added].
The base station training modules 270 generate NN formation configurations which are “streamlined … relative to those generated by the training module 314 [of the core network]” [Wang, 0062], and which are stored in the neural network tables 272 [Wang, 0053-0054]. In other words, the configuring of each “child node neural network model” as mapped above is “based on the acquired characteristic information” as recited by the claim, with the “acquired characteristic information” being as mapped above. The core network processor 304 [Wang, figure 3] maps to the “processor” of the claim. Alternatively, all of the processors 206 and 304 taken together [Wang, figures 2-3] can be mapped to the “processor” of the claim.)

Wang discusses a “neural network formation configuration used by multiple base stations” (as opposed to neural network formation configurations which are “specific to a respective base station”) [Wang, 0128]. Nonetheless, it may be argued that Wang does not distinctly disclose a step of grouping base stations. In other words, Wang may not distinctly disclose:

wherein the at least one master node divides the plurality of child nodes into a plurality of categories based on the characteristic information, and dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly by the processor; using the characteristic information, trains the child node neural network model for the plurality of categories to obtain an updated child node neural network model by the processor; and updates the child node neural network models of the plurality of child nodes by using updated child node neural network model by the processor.

Vankalaya is in the field of communications networks, discussing base stations (BSs) in communication with UEs [Vankalaya, figure 5; see also, 0087 and figure 8]. Moreover, Wang in view of Vankalaya discloses:

wherein the at least one master node divides the plurality of child nodes into a plurality of categories based on the characteristic information, and dividing the child node neural network models of the plurality of child nodes into the plurality of categories accordingly by the processor; ([Vankalaya, 0111 and figure 12]: Vankalaya discloses “classify[ing]… each BS (504) into a group of BSs based on similar working environment of the BSs, for example, a frequency bandwidth of the BSs, a user load associated with each BS classified in the group of BSs” [Vankalaya, 0111; see also, figure 12 step 1202]. In the combination, the base stations of Wang are classified into groups as in Vankalaya, and the resulting groups of BSs map to the “plurality of categories” of the claim. A division of the BSs into categories results in a division of the “child node neural network models” associated with the BSs into categories.)

using the characteristic information, trains the child node neural network model for the plurality of categories to obtain an updated child node neural network model by the processor; and updates the child node neural network models of the plurality of child nodes by using updated child node neural network model by the processor. ([Wang, 0128; Vankalaya, figure 12]: Vankalaya discloses determining a neural network “to be deployed at each BS of the group of BSs” and sending it “to each of BS of the group of BSs” [Vankalaya, figure 12 elements 1206-1208]. As noted above, Wang also discloses a “neural network formation configuration used by multiple base stations” [Wang, 0128]. In the combination, the multiple base stations which share a neural network formation configuration, as in Wang, are taken to be the base stations in a group, as in Vankalaya. As noted above, the configuration/training of neural networks is based on the parameters which are mapped to the “characteristic information” of the claim, so the training is “using the characteristic information” as recited. The applicant is also invited to consult the rejection of claim 9 above.)
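A compact sketch of this grouping step follows. The two characteristics used (frequency bandwidth and user load) are the examples Vankalaya names; the coarse bucketing rule itself is an invented simplification, not Vankalaya's actual classification method.

    # Hypothetical sketch of "divid[ing] the plurality of child nodes into a
    # plurality of categories based on the characteristic information".
    # The bucketing rule is an invented simplification.
    from collections import defaultdict

    def categorize(child_nodes: list[dict]) -> dict:
        """Group nodes whose working environments are similar."""
        groups = defaultdict(list)
        for node in child_nodes:
            key = (node["bandwidth_mhz"], node["user_load"] // 25)  # coarse load bucket
            groups[key].append(node["id"])
        return dict(groups)

    print(categorize([
        {"id": "bs_1", "bandwidth_mhz": 20, "user_load": 10},
        {"id": "bs_2", "bandwidth_mhz": 20, "user_load": 15},
        {"id": "bs_3", "bandwidth_mhz": 100, "user_load": 80},
    ]))  # bs_1 and bs_2 land in the same category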
Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the communication system of Wang with that of Vankalaya because Vankalaya’s system “intelligently predict[s] a channel quality status (CQS) in a wireless communication network” and “reduce[s] the computational burden at the UE related to reporting the CQS information to the base station” [Vankalaya, 0045-0046], so the combination would be more efficient overall.

Claims 16, 18-19, 21, 23, and 25 inherit limitations from claim 15 and recite additional limitations which are substantially similar to those recited by claims 2, 4-5, 7, 9, and 11, respectively, so they are rejected by the same rationale.

Claim(s) 12 and 26 is/are rejected under 35 USC 103 as being unpatentable over Wang in view of Vankalaya, further in view of Yicheng LIN et al. (US20200374863A1, effectively filed 2019-05-24; hereafter “Lin”) and Min QI et al. (Trend Time-Series Modeling and Forecasting With Neural Networks, published 2008-05; hereafter “Qi”).

Claim 12

Wang in view of Vankalaya discloses the elements of the parent claims. It also discloses:

a user equipment corresponding to one of the plurality of child nodes ([Wang, figure 1 and 0119]: As noted under the parent claim, Wang discloses user equipment devices (UEs) 110 in communication with a base station [Wang, figure 1; see also, 0040-0042, and 0138]. Any one of the BSs maps to “one of the plurality of child nodes” of the claim, and any UE in communication with that BS maps to a “user equipment corresponding to [the] one of the plurality of child nodes” of the claim.)

While the DNNs of Wang are indicated as being able to “perform any high-level and/or low-level operation found within the transmitter processing chain” [Wang, 0096], Wang in view of Vankalaya does not distinctly disclose DNNs performing beam selection. In other words, Wang in view of Vankalaya does not distinctly disclose:

[The configuration method of claim 7, wherein] the characteristic information is a historical optimal beam set of [a user equipment corresponding to one of the plurality of child nodes] and wherein the historical optimal beam set comprises a difference sequence between a plurality of optimal beams at a plurality of consecutive time points and an optimal beam at a latest time point; or a difference sequence between the optimal beams of two adjacent time points in the plurality of consecutive time points.

Lin is in the field of communication systems and discloses a system in which a core network 530 communicates with base stations 520 which communicate with user equipment devices (UEs) 510 (also called electronic devices (EDs)) [Lin, figure 5 and 0103]. Moreover, Wang in view of Vankalaya and Lin discloses:

[The configuration method of claim 7, wherein] the characteristic information is a historical optimal beam set of [a user equipment corresponding to one of the plurality of child nodes] ([Lin, 0032-0037]: Lin discusses the use of neural networks for beam selection.
More precisely, Lin discloses that a “BS trains an ML module, such as a neural network, at the BS side using the collected data samples” [Lin, 0037] so that the neural network “learns BS/UE beam directions based directly on UE location” [Lin, 0033], where the “data samples [used for training] include UE locations and corresponding optimal BS and/or UE beam directions” [Lin, 0035; see also, 0059-0060]. In the combination, the neural networks at base stations as in Wang are taken to be neural networks performing beam selection as in Lin. The training data disclosed in Lin is then the “historical optimal beam set” of the claim.)

and wherein the historical optimal beam set comprises… a plurality of optimal beams at a plurality of consecutive time points… the optimal beams of two adjacent time points in the plurality of consecutive time points. ([Lin, 0035, 0053]: Lin discloses that the training data may be arranged in a time series. For example, in the context of discussing “UE location information [which] is signaled to the BS in… the training stage”, Lin notes that this location information can be an “incremental value” (i.e., a “quantized offset indicating a new UE location relative to a previous location”) [Lin, 0053]. Since the training data is arranged in a time series, the times represented in the data set map to the “plurality of consecutive time points” of the claim, any two adjacent times in the training data (e.g., the time corresponding to the “new UE location” and the time corresponding to the “previous location” [Lin, 0053]) map to the “two adjacent time points” of the claim, and the optimal beams in the training data map to the “plurality of optimal beams at a plurality of consecutive time points” of the claim.)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the communication system of Wang in view of Vankalaya with the beam selection method described in Lin because the “optimal beam direction, for mm Wave directional beamforming for example, is UE location-specific” [Lin, 0032] and the method involves “train[ing] a neural network with UE location/beam direction samples, such that the neural network learns BS/UE beam directions based directly on UE location” [Lin, 0033], thereby resulting in an effective system.

Wang in view of Vankalaya and Lin does not distinctly disclose:

[and wherein the historical optimal beam set comprises] a difference sequence between [a plurality of optimal beams at a plurality of consecutive time points] and an optimal beam at a latest time point; or a difference sequence between [the optimal beams of two adjacent time points in the plurality of consecutive time points.]

Qi is in the field of machine learning and discusses neural networks for time series modeling. Moreover, Wang in view of Vankalaya, Lin, and Qi discloses:

[and wherein the historical optimal beam set comprises] a difference sequence between [a plurality of optimal beams at a plurality of consecutive time points] and an optimal beam at a latest time point; or a difference sequence between [the optimal beams of two adjacent time points in the plurality of consecutive time points.] ([Qi, section III.C]: The limitation recites two limitations in the alternative, so its broadest reasonable interpretation requires only one of the two. Qi discusses “modeling with differenced data” in which each observation y_t is replaced with Δy_t = y_t − y_{t-1} [Qi, section III.C item 4)].
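To make the differencing transform concrete, here is a small sketch applying Δy_t = y_t − y_{t-1} to a sequence of optimal-beam indices. The beam sequence is invented for illustration; it is not data from Lin or Qi.

    # Hypothetical sketch of "modeling with differenced data": each
    # observation y_t is replaced with dy_t = y_t - y_{t-1}.

    def difference(series: list[int]) -> list[int]:
        """Return the first-difference sequence of consecutive observations."""
        return [series[t] - series[t - 1] for t in range(1, len(series))]

    optimal_beams = [12, 12, 13, 15, 14]  # optimal beam index at consecutive time points
    print(difference(optimal_beams))       # -> [0, 1, 2, -1]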
In the combination, differencing is applied to the optimal beams in the training data as disclosed in Lin. The differenced data is then the “difference sequence between the optimal beams of two adjacent time points in a plurality of consecutive time points” of the claim.)

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the communication system of Wang in view of Vankalaya and Lin with the use of differencing as described in Qi because “modeling with differenced data consistently outperforms other modeling approaches” [Qi, section IV second paragraph].

Claim 26 inherits limitations from claim 15 and recites additional limitations which are substantially similar to those recited by claim 12, so it is rejected by the same rationale.

Claim(s) 13 and 27 is/are rejected under 35 USC 103 as being unpatentable over Wang in view of Vankalaya, Lin, and Qi, further in view of Daniel MUTHUKRISHNA et al. (RAPID: Early Classification of Explosive Transients Using Deep Learning, published 2018; hereafter “Muthukrishna”).

Claim 13

Wang in view of Vankalaya, Lin, and Qi discloses the elements of the parent claim(s). It also discloses the use of cross-entropy loss [Lin, 0161], but does not disclose weighted cross-entropy loss. In other words, it does not distinctly disclose:

[The configuration method of claim 12, wherein the dynamically configuring the child node neural network model of the one of the plurality of child nodes by using the characteristic information comprises:] determining a weight of each historical optimal beam by using the occurrence times of each historical optimal beam in the historical optimal beam set in historical time by the processor; and according to the weight of each historical optimal beam and the historical optimal beam set, constructing a weighted loss function to perform training to update the child node neural network model by the processor.

Muthukrishna is in the field of machine learning and discusses training neural networks [Muthukrishna, section 3.1]. Moreover, Wang in view of Vankalaya, Lin, Qi, and Muthukrishna discloses:

[The configuration method of claim 12, wherein the dynamically configuring the child node neural network model of the one of the plurality of child nodes by using the characteristic information comprises:] determining a weight of each historical optimal beam by using the occurrence times of each historical optimal beam in the historical optimal beam set in historical time by the processor; and according to the weight of each historical optimal beam and the historical optimal beam set, constructing a weighted loss function to perform training to update the child node neural network model by the processor. ([Muthukrishna, section 3.1]: Muthukrishna discloses the use of a weighted cross-entropy loss H_w [Muthukrishna, section 3.1 equation (6)]. In the combination, the use of cross-entropy loss disclosed in Wang in view of Vankalaya, Lin, and Qi is replaced with weighted cross-entropy loss as in Muthukrishna. In other words, H_w maps to the “weighted loss function” of the claim. The weights w_c map to the “weight of each historical optimal beam” of the claim as best understood by the examiner in view of the 112(b) rejections.)
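A brief sketch of a weighted cross-entropy of this general shape follows, with per-class weights derived from occurrence counts. The inverse-frequency weighting rule is an illustrative assumption, not Muthukrishna's exact equation (6); the beam history is invented.

    # Hypothetical sketch of a weighted cross-entropy loss with per-class
    # weights derived from class occurrence counts.
    import math

    def class_weights(labels: list[int]) -> dict:
        """Weight each class inversely to how often it occurs."""
        counts = {c: labels.count(c) for c in set(labels)}
        return {c: len(labels) / (len(counts) * n) for c, n in counts.items()}

    def weighted_cross_entropy(y_true: int, probs: dict, weights: dict) -> float:
        """H_w for a single example: -w_c * log(p_c) for the true class c."""
        return -weights[y_true] * math.log(probs[y_true])

    history = [3, 3, 3, 3, 7]          # historical optimal beam indices
    w = class_weights(history)          # the rare beam 7 gets a larger weight
    print(weighted_cross_entropy(7, {3: 0.6, 7: 0.4}, w))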
Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to replace cross-entropy loss as disclosed in the communication system of Wang in view of Vankalaya, Lin, and Qi with weighted cross-entropy loss as in Muthukrishna because weighting helps “counteract imbalances in the distribution of classes in the data set which may cause more abundant classes to dominate in the optimization” [Muthukrishna, section 3.1 paragraph beginning “If weights”], so the combination would be more effective overall.

Claim 27 inherits limitations from claim 15 and recites additional limitations which are substantially similar to those recited by claim 13, so it is rejected by the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shishir AGRAWAL, whose telephone number is +1 703-756-1183. The examiner can normally be reached Monday through Thursday, 08:30-14:30 Pacific Time. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey SHMATOV, can be reached at +1 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is +1 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at +1 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call +1 800-786-9199 (IN USA OR CANADA) or +1 571-272-1000.

/S.A./
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

Jul 20, 2022
Application Filed
Sep 11, 2025
Non-Final Rejection — §101, §103, §112
Dec 17, 2025
Response Filed
Feb 02, 2026
Final Rejection — §101, §103, §112 (current)

Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
