Prosecution Insights
Last updated: April 19, 2026
Application No. 17/799,697

TRANSMITTING NODE INSTRUCTIONS

Non-Final OA: §101, §102, §103
Filed: Aug 15, 2022
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 48% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% of resolved cases (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: +44.7% (strong) for resolved cases with an interview
Typical Timeline: 3y 2m average prosecution; 28 applications currently pending
Career History: 304 total applications across all art units
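
These headline figures reduce to simple arithmetic on the case counts. A minimal Python sketch, assuming the "-7.5% vs TC avg" delta is expressed in percentage points (the with/without-interview base rates behind the +44.7% lift are not shown above, so they are not recomputed here):

# Recompute the examiner's headline stats from the raw counts above.
granted, resolved = 131, 276
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 47.5%, displayed rounded as 48%

# Assumption: the -7.5% delta is in percentage points, so the implied
# Tech Center average allow rate would be roughly:
print(f"Implied TC average: {allow_rate + 0.075:.1%}")  # ~55.0%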

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 276 resolved cases.
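
The "vs TC avg" deltas above are internally consistent: subtracting each delta from its statute-specific rate recovers the same Tech Center average estimate. A quick Python check, again assuming the deltas are percentage points:

# Statute-specific rates and their deltas vs the Tech Center average, from the table above.
stats = {"§101": (10.2, -29.8), "§103": (49.1, +9.1),
         "§102": (13.5, -26.5), "§112": (17.1, -22.9)}
for statute, (rate, delta) in stats.items():
    # rate - delta yields 40.0% for every statute, the implied TC average.
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")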

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on August 15, 2022. Claims 1-15 are pending in the case. Claims 1, 8, and 12 are the independent claims. This action is non-final.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 and 12-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims are directed to software per se; i.e., claims 1 and 12 recite a device/system comprising a logical node and a logical edge, which are essentially virtual/logical partitions that can be implemented entirely in software, and therefore the claimed invention is not a machine, an article of manufacture, a process, or a composition of matter as contemplated by 35 U.S.C. 101. Claims 2-7 and 13 inherit this deficiency by virtue of their respective dependency upon claims 1 and 12. Examiner respectfully suggests amending the claims to clarify that the claimed invention is not purely software, i.e., that the structural components of claims 1-7 and 12-15 are not complete software embodiments or realized entirely by software.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental steps) without significantly more. This judicial exception is not integrated into a practical application because any additional elements amount to implementing the abstract idea on a generic computer, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding independent claims 1, 8, and 12, and relying on the evaluation flowchart in MPEP 2106:

Step 1 (Is the claim to a process, machine, manufacture, or composition of matter?): Yes. Claim 1 is a device (machine). Claim 8 is a machine readable medium (manufacture). Claim 12 is a system (machine).

Step 2a Prong One (Does the claim recite an abstract idea?): Yes. Claim 1 recites: generate an output change value based on the received first node instruction (a mental process of determination, i.e. regarding a change value based on/corresponding to a received first node instruction); generate a second node instruction based on the output change value and the first node instruction (a mental process of determination, i.e. regarding a second node instruction based on/corresponding to the received first node instruction and the output change value). Under the broadest reasonable interpretation, these steps may be performed mentally by a human performing observations, making determinations, and/or performing mathematical calculations, including with a physical aid such as pen and paper, and therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
Claim 1 additionally recites: a logical node of a neural network to, a logical edge of the neural network to (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); receive a first node instruction (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the output change value and the first node instruction to a logical edge (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); receive the output change value and the first node instruction from the logical node (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the second node instruction (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)). Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No. Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:…Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f));…Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception...; Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g);…), claim 1 does not recite any additional elements that amount to significantly more than the abstract idea. As discussed above, claim 1 recites: the logical edge (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f));
a logical node of a neural network to, a logical edge of the neural network to (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); receive a first node instruction (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the output change value and the first node instruction to a logical edge (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); receive the output change value and the first node instruction from the logical node (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the second node instruction (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity in combination with generic computer functions and components used to implement the abstract idea.

Step 2a Prong One (Does the claim recite an abstract idea?): Yes. Claim 8 recites: generate an output change value based on the first node instruction (a mental process of determination, i.e. regarding a change value based on/corresponding to a received first node instruction); generate a second node instruction based on the output change value and the first node instruction (a mental process of determination, i.e. regarding a second node instruction based on/corresponding to the received first node instruction and the output change value). Under the broadest reasonable interpretation, these steps may be performed mentally by a human performing observations, making determinations, and/or performing mathematical calculations, including with a physical aid such as pen and paper, and therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
Claim 8 additionally recites: a non-transitory machine readable medium storing instructions executable by a processing resource to, a logical node device of a neural network, a logical edge of the neural network, the logical node device and logical edge device performing the generate steps discussed above and the additional steps below (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); transmit a first node instruction to a logical node device of a neural network, wherein the first node instruction includes a node operation code (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the output change value and the first node instruction to a logical edge device of the neural network (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); receive the output change value and the first node instruction from the first device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the second node instruction to one of the logical node device or an external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)). Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No. Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:…Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f));…Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception...; Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g);…), claim 8 does not recite any additional elements that amount to significantly more than the abstract idea. As discussed above, claim 8 recites: a non-transitory machine readable medium storing instructions executable by a processing resource to, a logical node device of a neural network, a logical edge of the neural network, the logical node device and logical edge device performing the generate steps discussed above and the additional steps below (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f));
transmit a first node instruction to a logical node device of a neural network, wherein the first node instruction includes a node operation code (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the output change value and the first node instruction to a logical edge device of the neural network (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); receive the output change value and the first node instruction from the first device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the second node instruction to one of the logical node device or an external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity in combination with generic computer functions and components used to implement the abstract idea.

Step 2a Prong One (Does the claim recite an abstract idea?): Yes. Claim 12 recites: generate an output change value based on the received node instruction (a mental process of determination, i.e. regarding a change value based on/corresponding to a received first node instruction); generate a sequence of node instructions based on the output change value and the node instruction (a mental process of determination, i.e. regarding a second node instruction based on/corresponding to the received first node instruction and the output change value). Under the broadest reasonable interpretation, these steps may be performed mentally by a human performing observations, making determinations, and/or performing mathematical calculations, including with a physical aid such as pen and paper, and therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
Claim 12 additionally recites: a system, comprising: a logical node device of a neural network to, a logical edge device, the logical edge device of the neural network to (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); receive a node instruction, wherein the node instruction includes a node operation code, a node input change value, and a node index (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the output change value and the node instruction to a logical edge device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); receive the output change value and the node instruction from the logical device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); transmit the sequence of node instructions to the first logical node device in response to a node index of the sequence of node instructions being associated with the first logical node device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)); and transmit the sequence of node instructions to an external device based on a node index of the sequence of node instructions being associated with the external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)). Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No. Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:…Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f));…Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception...; Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g);…), claim 12 does not recite any additional elements that amount to significantly more than the abstract idea. As discussed above, claim 12 recites: a system, comprising: a logical node device of a neural network to, a logical edge device, the logical edge device of the neural network to (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f));
receive a node instruction, wherein the node instruction includes a node operation code, a node input change value, and a node index (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the output change value and the node instruction to a logical edge device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); receive the output change value and the node instruction from the logical device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); transmit the sequence of node instructions to the first logical node device in response to a node index of the sequence of node instructions being associated with the first logical node device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362); and transmit the sequence of node instructions to an external device based on a node index of the sequence of node instructions being associated with the external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity in combination with generic computer functions and components used to implement the abstract idea.

Regarding dependent claim 2:
Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the logical node is a logical node device and the logical edge is a logical edge device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the logical node is a logical node device and the logical edge is a logical edge device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 3:
Step 2a Prong One: incorporates the rejection of claim 1. Claim 3 additionally recites: determine the logical edge is affected by the output change value (a mental process of determination, i.e. that an output change value affects the logical edge).
Step 2a Prong Two: the claims additionally recite the logical edge (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
Step 2b: the claims additionally recite the logical edge (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 4:
Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the first node instruction and the second node instruction include a node operation code, a node input change value, and a node index (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the first node instruction and the second node instruction include a node operation code, a node input change value, and a node index (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 5:
Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the logical edge is to transmit the second node instruction to the logical node in response to a node index of the second node instruction being associated with the logical node (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)).
Step 2b: the claims additionally recite wherein the logical edge is to transmit the second node instruction to the logical node in response to a node index of the second node instruction being associated with the logical node (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).

Regarding dependent claim 6:
Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the logical edge transmits the second node instruction to an external device based on a node index of the second node instruction being associated with the external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)).
Step 2b: the claims additionally recite wherein the logical edge transmits the second node instruction to an external device based on a node index of the second node instruction being associated with the external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).

Regarding dependent claim 7:
Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the logical node receives the first node instruction from an external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)).
Step 2b: the claims additionally recite wherein the logical node receives the first node instruction from an external device (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).

Regarding dependent claim 9:
Step 2a Prong One: incorporates the rejection of claim 8.
Step 2a Prong Two: the claims additionally recite wherein the first node instruction includes a node index, wherein the second node instruction is based on the node index and the output change value (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the first node instruction includes a node index, wherein the second node instruction is based on the node index and the output change value (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 10:
Step 2a Prong One: incorporates the rejection of claim 8.
Step 2a Prong Two: the claims additionally recite wherein the second node instruction is transmitted to one of the logical node device and an external device based on the node operation code (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)).
Step 2b: the claims additionally recite wherein the second node instruction is transmitted to one of the logical node device and an external device based on the node operation code (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).

Regarding dependent claim 11:
Step 2a Prong One: incorporates the rejection of claim 8.
Step 2a Prong Two: the claims additionally recite wherein the neural network generates a set of node instructions simultaneously (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the neural network generates a set of node instructions simultaneously (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 13:
Step 2a Prong One: incorporates the rejection of claim 12.
Step 2a Prong Two: the claims additionally recite wherein the logical node device includes a node memory map to instruct the logical node device to transmit the node instruction to the logical edge device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the logical node device includes a node memory map to instruct the logical node device to transmit the node instruction to the logical edge device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 14:
Step 2a Prong One: incorporates the rejection of claim 12.
Step 2a Prong Two: the claims additionally recite wherein the logical edge device includes a node edge map to instruct the logical edge device to transmit the sequence of node instructions to the external device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).
Step 2b: the claims additionally recite wherein the logical edge device includes a node edge map to instruct the logical edge device to transmit the sequence of node instructions to the external device (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 15:
Step 2a Prong One: incorporates the rejection of claim 12.
Step 2a Prong Two: the claims additionally recite wherein the logical edge device encodes a set of partial node instructions including the node operation code and the node index (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g)).
Step 2b: the claims additionally recite wherein the logical edge device encodes a set of partial node instructions including the node operation code and the node index (insignificant extra-solution activity of data gathering and outputting as discussed in MPEP 2106.05(g), which can further be re-evaluated in Step 2B as a well-understood, routine, and conventional activity per MPEP 2106.05(d) of electronic recordkeeping and/or storing and retrieving information in memory).
Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as recited in the dependent claims discussed above alone or in combination do not integrate the judicial exception into a practical application as they are mere insignificant extra solution activity, combined with implementing the abstract idea using generic computer components, and limitations describing a field of use or technological environment. The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception as they are well, understood, routine and conventional activity as disclosed in combination with generic computer functions and components used to implement the abstract idea, and limitations describing a field of use or technological environment. Claim Rejections – 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim 1-5, 8, 9, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sridharan et al. (US 20180322387 A1). With respect to claim 1, Sridharan teaches a device, comprising: a logical node of a neural network (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no nodes between edges within each layer; Fig. 10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0179, Fig. 12, describing model parallelism implementation 1202 in which the neural network is split such that each layer is assigned to a respective computational node; Figs. 16A-C, showing nodes 1610/1620; paragraph 0227, Fig. 17A, nodes of a multi-node compute system; paragraph 0233, Fig. 19B, multi-node system 1920 including producer node 1930 and consumer node 1940; i.e. as can be seen in Figs. 10 and 12, where a layer containing at least a single neural network node is assigned/implemented on a single physical computational node, the computational node to which the layer (and therefore the constituent node) is assigned can be considered to be a physical device implementing the layer/node, forming a logical node of the neural network) to: receive a first node instruction (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig. 
14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraph 0222-0223, describing internode communication using point to point primitives, used to forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other node); generate an output change value based on the received first node instruction (e.g. paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon communication of completion of activation data; communicating activation data output from first node 1610 is activation data generated by first layer and used as input data for second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); and transmit the output change value and the first node instruction to a logical edge (e.g. paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon communication of completion of activation data; communicating activation data output from first node 1610 is activation data generated by first layer and used as input data for second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); and the logical edge of the neural network (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no nodes between edges within each layer; Fig. 
10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0209, Fig. 15B, describing MLSL architecture 1511 as including machine learning specific abstractions including layer-to-layer communication abstractions for implementing communication patterns for layers/parallelisms, where communications for (i.e. between) layers are enabled via communication modules 1517, messaging library 1519, and high performance communications fabric 1521, and also enable intelligent messaging scheduling across neural network layers; paragraph 0210, communication module 1517 includes logic to drive underlying messaging library 1519 enabling transmitting data between various compute nodes; logic to optimize network bandwidth and enable low latency communications, specified processor resources tasked with managing distributed communication; compute/communication resources; paragraph 0211, communication module 1517 enabling communication between processing nodes; paragraph 0212, messaging library 1519 using communication routines to transmit data over high performance communications fabric 1521; Figs. 16A-C, showing links/connections between nodes 1610/1620; paragraph 0222, Fig. 16C, receive buffer receiving set of weight deltas generated by nodes; summation unit 1636 (shown in Fig. 16 as being positioned between, for purposes of receiving weight delta, etc., nodes 1631) may be a separate control node which transmits new/updated set of weights to each node; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0227, Fig. 17A, interconnects provided via links 816, 1716, 1708; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0233, Fig. 19B, showing shared memory 1950 on communication path between nodes 1930 and 1940, which may be as distributed and shared virtual address space mapped across multiple nodes; i.e. the combined hardware and software functionalities providing communication capabilities between nodes (both nodes of the neural network and their corresponding compute nodes), including the shared memory implementation, communications fabric, and associated processing resources (such as summation units shared by nodes, etc.) and library routines for implementing communications, collectively provide the logical edges between the nodes; compare with specification of the instant application at paragraph 0012, indicating that “a logical node, logical edge, and/or logical device can include a logical partition of memory resources and/or processing resources….the processing resource can be a virtual processing resource….”) to: receive the output change value and the first node instruction from the logical node (e.g. paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function based on coefficients/weights associated with edges connecting layers (and therefore nodes); paragraph 0222, Fig. 16C, receive buffer of summation unit 1636 receiving set of weight deltas generated by nodes; paragraph 0233, tensor data written to memory 1950 (i.e. by a producer node)); generate a second node instruction based on the output change value and the first node instruction (e.g. 
paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)); and transmit the second node instruction (e.g. paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)). With respect to claim 8, Sridharan teaches a non-transitory machine readable medium storing instructions executable by a processing resource to: transmit a first node instruction to a logical node device of a neural network, wherein the first node instruction includes a node operation code (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig. 14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraph 0222-0223, describing internode communication using point to point primitives, used to forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other node; paragraph 0228, node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address; see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a command operation code/opcode 3104); generate, by the logical node device, an output change value based on the first node instruction (e.g. 
paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon communication of completion of activation data; communicating activation data output from first node 1610 is activation data generated by first layer and used as input data for second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); transmit, by the logical node device, the output change value and the first node instruction to a logical edge device of the neural network (e.g. paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon communication of completion of activation data; communicating activation data output from first node 1610 is activation data generated by first layer and used as input data for second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); receive, at the logical edge device, the output change value and the first node instruction from the first device (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig. 14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraph 0222-0223, describing internode communication using point to point primitives, used to forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other node); generate, by the logical edge device, a second node instruction based on the output change value and the first node instruction (e.g. 
paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)); and transmit, by the logical edge device, the second node instruction to one of the logical node device or an external device (e.g. paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)). With respect to claim 2, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the logical node is a logical node device (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no nodes between edges within each layer; Fig. 10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0179, Fig. 12, describing model parallelism implementation 1202 in which the neural network is split such that each layer is assigned to a respective computational node; Figs. 16A-C, showing nodes 1610/1620; paragraph 0227, Fig. 17A, nodes of a multi-node compute system; paragraph 0233, Fig. 19B, multi-node system 1920 including producer node 1930 and consumer node 1940; i.e. as can be seen in Figs. 10 and 12, where a layer containing at least a single neural network node is assigned/implemented on a single physical computational node, the computational node to which the layer (and therefore the constituent node) is assigned can be considered to be a physical device implementing the layer/node, forming a logical node device of the neural network) and the logical edge is a logical edge device (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no nodes between edges within each layer; Fig. 10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0209, Fig. 15B, describing MLSL architecture 1511 as including machine learning specific abstractions including layer-to-layer communication abstractions for implementing communication patterns for layers/parallelisms, where communications for (i.e. 
between) layers are enabled via communication modules 1517, messaging library 1519, and high performance communications fabric 1521, and also enable intelligent messaging scheduling across neural network layers; paragraph 0210, communication module 1517 includes logic to drive underlying messaging library 1519 enabling transmitting data between various compute nodes; logic to optimize network bandwidth and enable low latency communications, specified processor resources tasked with managing distributed communication; compute/communication resources; paragraph 0211, communication module 1517 enabling communication between processing nodes; paragraph 0212, messaging library 1519 using communication routines to transmit data over high performance communications fabric 1521; Figs. 16A-C, showing links/connections between nodes 1610/1620; paragraph 0222, Fig. 16C, receive buffer receiving set of weight deltas generated by nodes; summation unit 1636 (shown in Fig. 16 as being positioned between, for purposes of receiving weight delta, etc., nodes 1631) may be a separate control node which transmits new/updated set of weights to each node; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0227, Fig. 17A, interconnects provided via links 816, 1716, 1708; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0233, Fig. 19B, showing shared memory 1950 on communication path between nodes 1930 and 1940, which may be as distributed and shared virtual address space mapped across multiple nodes; i.e. the combined hardware and software functionalities providing communication capabilities between nodes (both nodes of the neural network and their corresponding compute nodes), including the shared memory implementation, communications fabric, and associated processing resources (such as summation units shared by nodes, etc.) and library routines for implementing communications, collectively provide at least one logical edge device between the nodes). With respect to claim 3, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the logical node is to determine the logical edge is affected by the output change value (e.g. paragraph 0222, indicating that summation unit positioned between nodes may be implemented as a separate node such as a control node; paragraph 0228, associating memory address with each node with virtual address space; writing/exchanging data within virtual address space based on corresponding messages; determining, based on write address of message, that message is destined for a given node; fabric interface, based on destination address, determining message is intended for given node; paragraph 0233, Fig. 19B, showing shared memory 1950, which may be a distributed and shared virtual address space as an edge between two nodes; i.e. based on the instruction/command/message, it may be determined that specified data is to be provided/routed to a specified node or virtual memory address space; where the specified node (such as a control node) or virtual address space is embodied as a logical edge between two nodes, this determination amounts to determining that the logical edge is to be affected by the specified data (i.e. output change value)). 
With respect to claim 4, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the first node instruction and the second node instruction include a node operation code, a node input change value, and a node index (e.g. paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command).

With respect to claim 9, Sridharan teaches all of the limitations of claim 8 as previously discussed, and further teaches wherein the first node instruction includes a node index, wherein the second node instruction is based on the node index and the output change value (e.g. paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e.
the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command).

With respect to claim 5, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the logical edge is to transmit the second node instruction to the logical node in response to a node index of the second node instruction being associated with the logical node (e.g. paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command).
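The field-level mapping the rejection draws (opcode, target client, and data fields of the command format read onto the claimed node operation code, node index, and node input change value, with delivery gated on the node index) can be pictured with a short sketch. This is illustrative only; the class and field names are hypothetical and appear in neither the claims nor the cited references.

```python
# Hedged sketch of the rejection's mapping: opcode / target-client / data
# fields of a command format read as the claimed node operation code /
# node index / node input change value. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class NodeInstruction:
    opcode: str          # "node operation code", e.g. "write" or "read"
    node_index: int      # "node index": target identifier used for routing
    change_value: float  # "node input change value", e.g. a weight delta


class LogicalNode:
    def __init__(self, index: int) -> None:
        self.index = index
        self.weight = 0.0

    def maybe_accept(self, instr: NodeInstruction) -> bool:
        # Claim 5's reading: deliver only when the instruction's node index
        # is associated with this logical node.
        if instr.node_index != self.index:
            return False
        if instr.opcode == "write":
            self.weight += instr.change_value
        return True


node = LogicalNode(index=3)
assert node.maybe_accept(NodeInstruction("write", 3, 0.25))      # accepted
assert not node.maybe_accept(NodeInstruction("write", 1, 0.10))  # index mismatch
```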
With respect to claim 11, Sridharan teaches all of the limitations of claim 8 as previously discussed, and further teaches wherein the neural network generates a set of node instructions simultaneously (e.g. paragraph 0197, implementing model parallelism, in which different portions of the model’s (neural network’s) computations are performed simultaneously on different nodes).

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6, 7, 10, and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sridharan in view of Bequet et al. (US 20180349508 A1).

With respect to claim 6, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the logical edge transmits the second node instruction to a device based on a node index of the second node instruction being associated with the device (e.g. paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig.
31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command).

Sridharan does not explicitly disclose that the device is an external device. However, Bequet teaches that the device is an external device (e.g. paragraph 0163, node determining how data should be routed (such as which node should receive the data); paragraph 0165, grid computing system including control and worker nodes; control nodes transmitting and receiving information from one another; paragraph 0166, each worker node connected to control node, receiving and transmitting from/to the control nodes, and between each other; paragraph 0167, control node connected with external device; control node receiving data from external device; paragraph 0170, control node and external device connected; paragraph 0192, control node transmitting data with client device; query transmitted to control node; paragraph 0193, control node transmitting results of analysis; paragraph 0381, neural network defined by weights and biases applied to set of emulated neurons interconnected as nodes in a network; paragraph 0404, Fig. 24C, describing artificial neuron implementing architecture of neural network in which neurons 2577 in input layer receiving external inputs; paragraph 0406, indicating that artificial neurons 2577 incorporated into output layer provide external outputs of the neural network; i.e. nodes (implementing neurons) within the system make determinations regarding a destination for routing their output, including to another internal node, or to an external system/device).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Sridharan and Bequet in front of them, to have modified the teachings of Sridharan (directed to hardware implemented point to point communication primitives for machine learning) to incorporate the teachings of Bequet (directed to automated transfer of neural network definitions among federated areas) to include the capability to transmit instructions to another device (i.e. based on an identifier, memory address, etc. of the device as taught by Sridharan), including to and from an external device (as taught by Bequet). One of ordinary skill would have been motivated to perform such a modification in order to improve accountability, reproducibility, and ease of access in use of pooled data as described in Bequet (paragraph 0079).

With respect to claim 7, Sridharan teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the logical node receives the first node instruction from a device (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig.
14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraphs 0222-0223, describing internode communication using point to point primitives, used for forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other node).

Sridharan does not explicitly disclose that the device is an external device. However, Bequet teaches that the device is an external device (e.g. paragraph 0163, node determining how data should be routed (such as which node should receive the data); paragraph 0165, grid computing system including control and worker nodes; control nodes transmitting and receiving information from one another; paragraph 0166, each worker node connected to control node, receiving and transmitting from/to the control nodes, and between each other; paragraph 0167, control node connected with external device; control node receiving data from external device; paragraph 0170, control node and external device connected; paragraph 0192, control node transmitting data with client device; query transmitted to control node; paragraph 0193, control node transmitting results of analysis; paragraph 0381, neural network defined by weights and biases applied to set of emulated neurons interconnected as nodes in a network; paragraph 0404, Fig. 24C, describing artificial neuron implementing architecture of neural network in which neurons 2577 in input layer receiving external inputs; paragraph 0406, indicating that artificial neurons 2577 incorporated into output layer provide external outputs of the neural network; i.e. nodes (implementing neurons) within the system make determinations regarding a destination for routing their output, including to another internal node, or to an external system/device).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Sridharan and Bequet in front of them, to have modified the teachings of Sridharan (directed to hardware implemented point to point communication primitives for machine learning) to incorporate the teachings of Bequet (directed to automated transfer of neural network definitions among federated areas) to include the capability to transmit instructions to another device (i.e. based on an identifier, memory address, etc. of the device as taught by Sridharan), including to and from an external device (as taught by Bequet). One of ordinary skill would have been motivated to perform such a modification in order to improve accountability, reproducibility, and ease of access in use of pooled data as described in Bequet (paragraph 0079).

With respect to claim 10, Sridharan teaches all of the limitations of claim 8 as previously discussed, and further teaches wherein the second node instruction is transmitted to one of the logical node device and a device based on the node operation code (e.g.
paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command).

Sridharan does not explicitly disclose that the device is an external device. However, Bequet teaches that the device is an external device (e.g. paragraph 0163, node determining how data should be routed (such as which node should receive the data); paragraph 0165, grid computing system including control and worker nodes; control nodes transmitting and receiving information from one another; paragraph 0166, each worker node connected to control node, receiving and transmitting from/to the control nodes, and between each other; paragraph 0167, control node connected with external device; control node receiving data from external device; paragraph 0170, control node and external device connected; paragraph 0192, control node transmitting data with client device; query transmitted to control node; paragraph 0193, control node transmitting results of analysis; paragraph 0381, neural network defined by weights and biases applied to set of emulated neurons interconnected as nodes in a network; paragraph 0404, Fig. 24C, describing artificial neuron implementing architecture of neural network in which neurons 2577 in input layer receiving external inputs; paragraph 0406, indicating that artificial neurons 2577 incorporated into output layer provide external outputs of the neural network; i.e. nodes (implementing neurons) within the system make determinations regarding a destination for routing their output, including to another internal node, or to an external system/device).
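Before the motivation statement that follows, the asserted combination can be seen in miniature: Sridharan's index/address-based routing to internal nodes, extended per Bequet so that the same dispatch step can also target an external device. The sketch below is purely illustrative; the destination tables and function name are hypothetical and come from neither reference.

```python
# Illustrative sketch of the asserted Sridharan/Bequet combination: dispatch
# by node index either to an internal logical node or to an external device.
# The destination tables below are hypothetical stand-ins.
INTERNAL_NODES = {1, 2, 3}        # indices of logical node devices
EXTERNAL_DEVICES = {"client-a"}   # identifiers of devices outside the system


def dispatch(destination, payload: str) -> str:
    """Relay payload to an internal node or transmit it to an external device."""
    if destination in INTERNAL_NODES:
        return f"fabric: relay {payload!r} to node {destination}"
    if destination in EXTERNAL_DEVICES:
        return f"uplink: transmit {payload!r} to external device {destination}"
    raise KeyError(f"unknown destination {destination!r}")


print(dispatch(2, "updated-weights"))
print(dispatch("client-a", "analysis-results"))
```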
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Sridharan and Bequet in front of them, to have modified the teachings of Sridharan (directed to hardware implemented point to point communication primitives for machine learning) to incorporate the teachings of Bequet (directed to automated transfer of neural network definitions among federated areas) to include the capability to transmit instructions to another device (i.e. based on an identifier, memory address, etc. of the device as taught by Sridharan), including to and from an external device (as taught by Bequet). One of ordinary skill would have been motivated to perform such a modification in order to improve accountability, reproducibility, and ease of access in use of pooled data as described in Bequet (paragraph 0079).

With respect to claim 12, Sridharan teaches a system, comprising: a logical node device of a neural network (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no edges between nodes within each layer; Fig. 10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0179, Fig. 12, describing model parallelism implementation 1202 in which the neural network is split such that each layer is assigned to a respective computational node; Figs. 16A-C, showing nodes 1610/1620; paragraph 0227, Fig. 17A, nodes of a multi-node compute system; paragraph 0233, Fig. 19B, multi-node system 1920 including producer node 1930 and consumer node 1940; i.e. as can be seen in Figs. 10 and 12, where a layer containing at least a single neural network node is assigned/implemented on a single physical computational node, the computational node to which the layer (and therefore the constituent node) is assigned can be considered to be a physical device implementing the layer/node, forming a logical node of the neural network) to: receive a node instruction, wherein the node instruction includes a node operation code, a node input change value, and a node index (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig.
14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraphs 0222-0223, describing internode communication using point to point primitives, used for forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other/specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e. the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command); generate an output change value based on the received node instruction (e.g.
paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon completion of communication of the activation data; the communicated activation data output from first node 1610 is activation data generated by the first layer and used as input data for the second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); and transmit the output change value and the node instruction to a logical edge device (e.g. paragraph 0143, data at node propagated/fed forward to other nodes of another layer/output layer via activation function; paragraph 0200, allreduce operations performed between layers to update weights of each layer; paragraph 0217, waiting to finish receiving communication of activation data that will be used as input data for forward compute; forward compute automatically begins upon completion of communication of the activation data; the communicated activation data output from first node 1610 is activation data generated by the first layer and used as input data for the second layer/second node; paragraph 0218, describing use of backward computes, including communication of computed activation gradients, transmission of weight gradients, updated weights, etc.; paragraph 0222, each node generating weight deltas; weight deltas received in receive buffer; paragraph 0223, node sending requested block of data to other node as soon as dependencies are satisfied, such as by performing remote write; paragraph 0233, producer node produces data that will be consumed by consumer node via shared memory 1950); and the logical edge device of the neural network (e.g. paragraph 0143, describing neural networks including nodes arranged in layers via edges which are fully connected to nodes in adjacent layers, but where there are no edges between nodes within each layer; Fig. 10, illustrating an example neural network including layers 1002, 1004, 1006 each including at least one node; paragraph 0209, Fig. 15B, describing MLSL architecture 1511 as including machine learning specific abstractions including layer-to-layer communication abstractions for implementing communication patterns for layers/parallelisms, where communications for (i.e.
between) layers are enabled via communication modules 1517, messaging library 1519, and high performance communications fabric 1521, and also enable intelligent messaging scheduling across neural network layers; paragraph 0210, communication module 1517 includes logic to drive underlying messaging library 1519 enabling transmitting data between various compute nodes; logic to optimize network bandwidth and enable low latency communications, specified processor resources tasked with managing distributed communication; compute/communication resources; paragraph 0211, communication module 1517 enabling communication between processing nodes; paragraph 0212, messaging library 1519 using communication routines to transmit data over high performance communications fabric 1521; Figs. 16A-C, showing links/connections between nodes 1610/1620; paragraph 0222, Fig. 16C, receive buffer receiving set of weight deltas generated by nodes; summation unit 1636 (shown in Fig. 16 as being positioned between, for purposes of receiving weight delta, etc., nodes 1631) may be a separate control node which transmits new/updated set of weights to each node; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0227, Fig. 17A, interconnects provided via links 816, 1716, 1708; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0233, Fig. 19B, showing shared memory 1950 on communication path between nodes 1930 and 1940, which may be a distributed and shared virtual address space mapped across multiple nodes; i.e. the combined hardware and software functionalities providing communication capabilities between nodes (both nodes of the neural network and their corresponding compute nodes), including the shared memory implementation, communications fabric, and associated processing resources (such as summation units shared by nodes, etc.) and library routines for implementing communications, collectively provide the logical edges between the nodes) to: receive the output change value and the node instruction from the logical node device (e.g. paragraph 0143, data received at nodes; paragraph 0200, Fig. 14E, describing communication operations for enabling data transfers for weight and activation data within neural network; alltoall communication operation used to transfer activation data from first layer to successive layer; alltoall operation transfers distinct data from compute nodes that generate activations to receivers; reduce scatter operation used to transfer data to final layers; paragraph 0217, first MLSL API command configuring forward compute operation for first layer at first node 1610; paragraph 0218, describing use of backward computes; paragraphs 0222-0223, describing internode communication using point to point primitives, used for forward and backward propagation operations to exchange data between nodes; paragraph 0223, issuing request to node to send specific block of data to other node); generate a sequence of node instructions based on the output change value and the node instruction (e.g.
paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; paragraph 0226, fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)); transmit the sequence of node instructions to the first logical node device in response to a node index of the sequence of node instructions being associated with the first logical node device (e.g. paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; routing/directing messages to corresponding destination; fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; determining, based on address, which node the message is intended for; fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)); and transmit the sequence of node instructions to a device based on a node index of the sequence of node instructions being associated with the external device (e.g.
paragraph 0222, SGD 1638 of summation unit 1636 generating new set of weights, which are then transmitted to each node; paragraph 0223, block of data sent to node via transmit buffer; request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; routing/directing messages to corresponding destination; fabric interconnect logic routing data based on target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; determining, based on address, which node the message is intended for; fabric interface determining which node message is intended for and relaying the message to the corresponding node; paragraph 0234, consumer node notified when monitored addresses written (and therefore reads the written data intended for it, resulting in transmission of the data from the memory to the consumer node)).

Sridharan does not explicitly disclose that the device is an external device. However, Bequet teaches that the device is an external device (e.g. paragraph 0163, node determining how data should be routed (such as which node should receive the data); paragraph 0165, grid computing system including control and worker nodes; control nodes transmitting and receiving information from one another; paragraph 0166, each worker node connected to control node, receiving and transmitting from/to the control nodes, and between each other; paragraph 0167, control node connected with external device; control node receiving data from external device; paragraph 0170, control node and external device connected; paragraph 0192, control node transmitting data with client device; query transmitted to control node; paragraph 0193, control node transmitting results of analysis; paragraph 0381, neural network defined by weights and biases applied to set of emulated neurons interconnected as nodes in a network; paragraph 0404, Fig. 24C, describing artificial neuron implementing architecture of neural network in which neurons 2577 in input layer receiving external inputs; paragraph 0406, indicating that artificial neurons 2577 incorporated into output layer provide external outputs of the neural network; i.e. nodes (implementing neurons) within the system make determinations regarding a destination for routing their output, including to another internal node, or to an external system/device).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Sridharan and Bequet in front of them, to have modified the teachings of Sridharan (directed to hardware implemented point to point communication primitives for machine learning) to incorporate the teachings of Bequet (directed to automated transfer of neural network definitions among federated areas) to include the capability to transmit instructions to another device (i.e. based on an identifier, memory address, etc.
of the device as taught by Sridharan), including to and from an external device (as taught by Bequet). One of ordinary skill would have been motivated to perform such a modification in order to improve accountability, reproducibility, and ease of access in use of pooled data as described in Bequet (paragraph 0079).

With respect to claim 13, Sridharan in view of Bequet teaches all of the limitations of claim 12 as previously discussed, and Sridharan further teaches wherein the logical node device includes a node memory map to instruct the logical node device to transmit the node instruction to the logical edge device (e.g. paragraph 0228, Fig. 17B, memory addresses within each node associated with virtual addresses within distributed virtual address space 1730; specific physical address range in each node mapped to virtual addresses associated with the node; distributed virtual address mapping exchanged between nodes such that each node is aware of the address range for each other node; paragraph 0233, Fig. 19B, shared memory 1950, shown as providing an edge between nodes 1930 and 1940, and which may be a distributed and shared virtual address space mapped across multiple nodes; i.e. where each node may include data indicating the memory mapping of nodes to the virtual address space, and where the virtual address space itself is implemented as a logical edge between nodes, such that nodes transmit instructions, data, etc. to each other via the logical edge).

With respect to claim 14, Sridharan in view of Bequet teaches all of the limitations of claim 12 as previously discussed, and Sridharan further teaches wherein the logical edge device includes a node edge map to instruct the logical edge device to transmit the sequence of node instructions to a device (e.g. paragraph 0228, Fig. 17B, memory addresses within each node associated with virtual addresses within distributed virtual address space 1730; specific physical address range in each node mapped to virtual addresses associated with the node; distributed virtual address mapping exchanged between nodes such that each node is aware of the address range for each other node; fabric interface determining how to route/relay message based on destination address; paragraph 0233, Fig. 19B, shared memory 1950, shown as providing an edge between nodes 1930 and 1940, and which may be a distributed and shared virtual address space mapped across multiple nodes; i.e. where the virtual address space itself is implemented as a logical edge between nodes including a mapping to the respective nodes, such that the logical edge is also instructed, based on the mapping, of how/where to transmit corresponding data/instructions).

Sridharan does not explicitly disclose that the device is an external device. However, Bequet teaches that the device is an external device (e.g.
paragraph 0163, node determining how data should be routed (such as which node should receive the data); paragraph 0165, grid computing system including control and worker nodes; control nodes transmitting and receiving information from one another; paragraph 0166, each worker node connected to control node, receiving and transmitting from/to the control nodes, and between each other; paragraph 0167, control node connected with external device; control node receiving data from external device; paragraph 0170, control node and external device connected; paragraph 0192, control node transmitting data with client device; query transmitted to control node; paragraph 0193, control node transmitting results of analysis; paragraph 0381, neural network defined by weights and biases applied to set of emulated neurons interconnected as nodes in a network; paragraph 0404, Fig. 24C, describing artificial neuron implementing architecture of neural network in which neurons 2577 in input layer receiving external inputs; paragraph 0406, indicating that artificial neurons 2577 incorporated into output layer provide external outputs of the neural network; i.e. nodes (implementing neurons) within the system make determinations regarding a destination for routing their output, including to another internal node, or to an external system/device).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Sridharan and Bequet in front of them, to have modified the teachings of Sridharan (directed to hardware implemented point to point communication primitives for machine learning) to incorporate the teachings of Bequet (directed to automated transfer of neural network definitions among federated areas) to include the capability to transmit instructions to another device (i.e. based on an identifier, memory address, etc. of the device as taught by Sridharan), including to and from an external device (as taught by Bequet). One of ordinary skill would have been motivated to perform such a modification in order to improve accountability, reproducibility, and ease of access in use of pooled data as described in Bequet (paragraph 0079).

With respect to claim 15, Sridharan in view of Bequet teaches all of the limitations of claim 12 as previously discussed, and Sridharan further teaches wherein the logical edge device encodes a set of partial node instructions including the node operation code and the node index (e.g. paragraph 0223, request to send specific block of data to specific node; paragraph 0226, node identifier or target memory address associated with message, write, or packet to be routed; paragraph 0228, physical address ranges of nodes mapped to virtual addresses, virtual address mapping exchanged between nodes such that each node is aware of address range of other nodes; node 1 requesting data from node 3 by issuing a request for data along with providing an address within node 1’s address range; node 1 requesting synchronized write to receive buffer at node 3, requesting a read of data at address within node 3’s space; destination address; i.e.
the message includes a request for specific data (analogous to a node input change value, such as weight change/update/delta data), a request for a particular operation to be performed (analogous to a node operation code, such as a request to write, or permit reading, of data at a particular address), and an identification of the related nodes (such as a node identifier, or a target memory address which is known to be associated with a particular node); see also paragraphs 0303-0304, describing instruction formats including at least an opcode 2912 defining an operation that an execution unit is to perform, along with portions related to a destination 2918, sources 2920-2924, and access/address mode 2926; see also Fig. 31A, described in paragraphs 0322-0323, showing command format 3100 including data fields identifying a target client 3102 of the command, a command operation code/opcode 3104, and relevant data for the command 3106, where the target client field (i.e. node index) is used to route command data to the appropriate unit, the opcode fields (i.e. node operation code) are used to determine the operation to perform, and the information in the data field (i.e. node input change value) is used to perform the command; it is noted that paragraph 0303 indicates that the instruction format may include a compacted instruction format 2930, which appears to be analogous to a partial set of instructions and includes opcode 2912, index 2913, destination, source, etc. information).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469) 295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR.
Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127

Prosecution Timeline

Aug 15, 2022
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
