DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-8 and 10-25 are presented for examination.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on June 9, 2022 and September 21, 2022 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner. However, the extremely large number of references cited precludes more than a perfunctory consideration of each reference. If there are any references that Applicant believes are particularly relevant, Applicant is advised to point them out in the response to this Office action.
Drawings
The drawings are objected to because:
(a) in Figure 1, reference character 115, “Brian” should be “brain”;
(b) reference characters 140 (Figures 1-3) and 240 (Figure 2) appear in the drawings but not in the specification;
(c) in Figure 12, “elemenary” and “ognitive” should be “elementary” and “cognitive”, respectively;
(d) in Figure 14, “elemenary” should be “elementary”;
(e) in Figure 15, “hierachicial” should be “hierarchical”; and
(f) reference character 1500 appears in the specification (p. 45 as originally filed) but not in the drawings.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
Examiner objects to the specification for containing various grammatical informalities. Examiner has attached a marked-up copy of the specification indicating where the errors occur. To the extent that the markings are not self-explanatory and are not corrected, Examiner will enumerate the remaining objections in a subsequent Office action.
Claim Objections
Claim 16 is objected to because of the following informalities: “which if” should be “which of” and “includes a time window in which two nodes connected” should be “include a time window in which two nodes are connected”.
Claims 22-23 are objected to because of the following informalities: “elements comprising” should be “elements comprises”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “small” in claim 22 is a relative term which renders the claim indefinite. The term “small” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The specification does not define “small”, and Examiner is unaware of any definition of “small” in the art that would provide a dividing line between a “small” fraction and a “large” fraction. For purposes of examination, any fraction will be deemed to read on the claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 and 10-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 1
Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites, inter alia:
[I]dentifying one or more relatively complex root topological elements that each comprises a subset of the nodes and edges in the artificial recurrent neural network: This limitation could encompass the mental identification of a complex set of nodes and edges in the network.
[I]dentifying a plurality of relatively simpler topological elements that each comprises a subset of the nodes and edges in the artificial recurrent neural network, wherein the identified relatively simpler topological elements stand in a hierarchical relationship to at least one of the relatively complex root topological elements: This limitation could encompass the mental identification of a subset of the previously identified nodes and edges that stand in a hierarchical relationship to the original neural network.
[G]enerating a collection of digits, wherein each of the digits represents whether a respective of the relatively complex root topological elements and the relatively simpler topological elements is active during a window: This limitation could encompass the mental generation of a collection of digits representing the activity of the nodes and edges.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “outputting the collection of digits.” This limitation is directed to the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “outputting the collection of digits.” This limitation is directed to the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). As an ordered whole, the claim is directed to a mentally performable method of analyzing the output of a neural network. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
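For illustration only, the following Python sketch models the kind of computation the claim language encompasses. It is not drawn from the claims or the cited art; every identifier and value in it is hypothetical. Topological elements are represented as subsets of edges, and one binary digit is emitted per element according to whether that element is active during a window.

```python
# Illustrative sketch only; the element definitions and activity data are hypothetical.
from typing import Dict, List, Set, Tuple

Edge = Tuple[int, int]  # (source node, target node)

def collection_of_digits(
    elements: Dict[str, Set[Edge]],      # topological elements, each a subset of edges
    active_edges_in_window: Set[Edge],   # edges observed to be active during the window
) -> List[int]:
    """Emit one binary digit per element: 1 if every edge in the element
    was active during the window, else 0."""
    return [
        1 if edges <= active_edges_in_window else 0
        for _, edges in sorted(elements.items())
    ]

# Example: a relatively complex "root" element and two simpler elements
# standing in a hierarchical relationship to it.
elements = {
    "root":    {(0, 1), (1, 2), (2, 0)},
    "simple1": {(0, 1)},
    "simple2": {(1, 2)},
}
print(collection_of_digits(elements, {(0, 1), (1, 2)}))  # [0, 1, 1]
```

On this reading, each step reduces to set comparisons that could equally be carried out mentally or with pen and paper, which is the basis of the Prong 1 analysis above.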
Claim 2
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “determining that the relatively complex root topological elements are active when the recurrent neural network is responding to an input.” This limitation could encompass visual inspection of the RNN and mentally determining that the elements are active when the RNN is responding to an input.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 3
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, “determining that either activity or inactivity of the relatively simpler topological elements is correlated with activity of the relatively complex root topological elements.” This limitation could encompass the mental determination that the activity is correlated with the complex elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “inputting a dataset of inputs into the recurrent neural network.” This limitation recites the insignificant extra-solution activity of mere data gathering. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “inputting a dataset of inputs into the recurrent neural network.” This limitation recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).
Claim 4
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “defining criteria for determining if a topological element is active, wherein the criteria for determining if the topological element is active are based on activity of the nodes or edges included in the topological element.” This limitation could encompass the mental definition of criteria for determining whether an element is active.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 5
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “defining criteria for determining if edges in the artificial recurrent neural network are active.” This limitation could encompass mentally defining the criteria.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 6
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “decomposing the relatively complex root topological elements into a collection of topological elements.” This limitation could encompass the mental decomposition of the complex elements into simpler elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 7
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
[F]orming a list of topological elements into which the relatively complex root topological elements decompose: This limitation could encompass mentally forming the list of elements.
[S]orting the list from the most complex of the topological elements to the least complex of the topological elements: This limitation could encompass mentally sorting the list.
[S]tarting at the most complex of the topological elements, selecting the relatively simpler topological elements from the list for representation in the collection of digits based on the information content regarding the relatively complex root topological elements: This limitation could encompass mentally selecting simpler elements based on information content of the complex elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 6 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 6 analysis.
Claim 8
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
[D]etermining whether the relatively simpler topological elements selected from the list suffice to determine the relatively complex root topological elements: This limitation could encompass mentally determining whether the simpler elements determine the complex elements.
[I]n response to determining that the relatively simpler topological elements selected from the list suffice to determine the relatively complex root topological elements, selecting no further relatively simpler topological elements from the list: This limitation could encompass mentally deciding not to select any more simpler elements if they determine the more complex elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 7 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 7 analysis.
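For illustration only (the complexity measure and the sufficiency test below are hypothetical stand-ins, not taken from the claims or the specification), a Python sketch of the form-sort-select procedure recited in claims 7-8: the decomposition is sorted from most to least complex, and simpler elements are selected until those selected suffice to determine the root element, at which point no further elements are selected.

```python
# Illustrative sketch only. "Complexity" is hypothetically edge count, and the
# selected elements "suffice" hypothetically when they cover the root's edges.
from typing import List, Set, Tuple

Edge = Tuple[int, int]

def select_elements(root: Set[Edge], decomposition: List[Set[Edge]]) -> List[Set[Edge]]:
    ordered = sorted(decomposition, key=len, reverse=True)  # most to least complex
    selected: List[Set[Edge]] = []
    covered: Set[Edge] = set()
    for element in ordered:
        if covered >= root:      # the selections suffice; select no more (claim 8)
            break
        selected.append(element)
        covered |= element
    return selected

root = {(0, 1), (1, 2), (2, 0)}
parts = [{(0, 1), (1, 2)}, {(2, 0)}, {(1, 2)}]
print(select_elements(root, parts))  # the first two parts already cover the root
```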
Claim 10
Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites, inter alia:
[D]efining computational results to be read from the artificial recurrent neural network, wherein defining the computational results comprises defining criteria for determining if the edges in the artificial recurrent neural network are active: This limitation could encompass mentally defining criteria for determining if edges in an RNN are active.
[D]efining a plurality of topological elements that each comprise a proper subset of the edges in the artificial recurrent neural network: This limitation could encompass mentally defining a subset of edges in the network.
[D]efining criteria for determining if each of the defined topological elements is active, wherein the criteria for determining if each of the defined topological elements is active are based on activity of the edges included in the respective of the defined topological elements, wherein an active topological element indicates that a corresponding computational result has been completed: This limitation could encompass mentally defining criteria for determining if elements of the network are active by observing the activity of the edges.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements that would integrate the judicial exception into a practical application.
Step 2B: The claim does not contain significantly more than the judicial exception. There are no additional elements that would amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to a mentally performable process of analyzing the output of an RNN. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 11
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “reading the completed computational results from the artificial recurrent neural network.” This limitation could encompass mentally reading the results.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 12
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “reading incomplete computational results from the artificial recurrent neural network, wherein reading an incomplete computational result comprises reading activity of the edges that are included in a corresponding of the topological elements, wherein the activity of the edges does not satisfy the criteria for determining that the corresponding of the topological elements is active.” This limitation could include mentally reading the incomplete computational results by visual inspection.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 11 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 11 analysis.
Claim 13
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “estimating a percent completion of a computational result, wherein estimating the percent completion comprises determining an active fraction of the edges that are included in a corresponding of the topological elements.” This limitation could encompass mentally estimating a percent completion of the result by determining an active fraction of the edges included in the elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 11 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 11 analysis.
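A worked example of the recited estimate, using hypothetical figures: if a topological element includes eight edges and six of them are active, the active fraction is 6/8, and the computational result is estimated to be 75% complete. A minimal sketch:

```python
# Hypothetical figures for illustration: an element with 8 edges, 6 of them active.
def percent_completion(active_edges: int, total_edges: int) -> float:
    return 100.0 * active_edges / total_edges

print(percent_completion(6, 8))  # 75.0
```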
Claim 14
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “the criteria for determining if the edges in the artificial recurrent neural network are active include requiring, for a given edge, that: a spike is generated by a node connected to that edge; the spike is transmitted by the edge to a receiving node; and the receiving node generates a response to the transmitted spike.” Defining the criteria remains mentally performable given these further assumptions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 15
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “the criteria for determining if the edges in the artificial recurrent neural network are active includes a time window in which the spike is to be generated and transmitted and the receiving node is to generate the response.” Defining the criteria remains mentally performable given these further assumptions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 14 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 14 analysis.
Claim 16
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “the criteria for determining if the edges in the artificial recurrent neural network are active includes a time window in which two nodes connected by the edge spike, regardless of which [of] the two nodes spikes first.” Defining the criteria remains mentally performable under these further assumptions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 17
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “different criteria for determining if the edges in the artificial recurrent neural network are active are applied to different of the edges.” This limitation could encompass visually observing the edges and determining different criteria for activity of the edges based thereon.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 18
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
[C]onstructing functional graphs of the artificial recurrent neural network: This limitation could encompass drawing the graphs with a pen and paper.
[D]efining a collection of time bins: This limitation could encompass mentally defining the time bins.
[C]reating a plurality of functional graphs of the artificial recurrent neural network, wherein each functional graph includes only nodes that are active within a respective of the time bins: This limitation could encompass drawing the graphs including only nodes active during a given time period with a pen and paper.
[D]efining the plurality of topological elements based on the active of the edges in the functional graphs of the artificial recurrent neural network: This limitation could encompass defining the elements based on which edges are active.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 19
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “combining a first topological element that is defined in a first of the functional graphs with a second topological element that is defined in a second of the functional graphs, wherein the first and the second of the functional graphs include nodes that are active within different of the time bins.” This limitation could encompass mentally combining elements that are defined in different graphs and that are active at different time periods.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 18 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 18 analysis.
Claim 20
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “including one or more global graph metrics or meta information in the computational results.” This limitation could encompass writing down results that include graph metrics.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 18 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 18 analysis.
Claim 21
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “selecting a proper subset of the plurality of topological elements to be read from the artificial recurrent neural network based on a number of times that each topological element is active during the processing of a single input and across a dataset of inputs.” This limitation could encompass mentally selecting the subset of elements based on how active each element is.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 22
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “selecting the proper subset of the plurality of topological elements compris[es] selecting a first of the topological elements that is active for only a small fraction of the dataset of inputs and designating the first of the topological elements as indicative of an anomaly.” This limitation could encompass mentally selecting a subset of elements that is active for a fraction of the inputs and mentally designating these elements as anomalous.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 21 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 21 analysis.
Claim 23
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “selecting the proper subset of the plurality of topological elements compris[es] selecting topological elements to insure that the proper subset includes a predefined distribution of topological elements that are active for different fractions of the dataset of inputs.” This limitation could encompass mentally selecting the subset of elements to ensure that it includes elements that are active for different parts of the input.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 21 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 21 analysis.
Claim 24
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “selecting a proper subset of the plurality of topological elements to be read from the artificial recurrent neural network based on a hierarchical arrangement of the topological elements, wherein a first of the topological elements is identified as a root topological element and topological elements that contribute to the root topological element are selecting for the proper subset.” This limitation could encompass mentally selecting the subset of elements by mentally organizing the elements into a hierarchy.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.
Claim 25
Step 1: A process, as above.
Step 2A Prong 1: The claim recites “identifying a plurality of root topological elements and selecting topological elements that contribute to the root topological elements for the proper subset.” This limitation could encompass mentally identifying root elements and mentally selecting those elements.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 24 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 24 analysis.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 10-11 and 14-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hart et al. (US 20150346302) (“Hart”).
Regarding claim 10, Hart discloses “[a] method of reading the output of an artificial recurrent neural network that comprises a plurality of nodes and edges forming connections between the nodes, the method comprising:
defining computational results to be read from the artificial recurrent neural network (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike – Hart, paragraph 34 [computational result = determining that the energy level is over a threshold]; see also paragraph 6, Fig. 3 (disclosing that the spike trains are fed back to the spiking neural network, i.e., the network is recurrent)), wherein defining the computational results comprises
defining criteria for determining if the edges in the artificial recurrent neural network are active (spiking neural network outputs spike trains indicative of activities of the neurons; spike trains are input to another spiking neural network or may be fed back to the spiking neural network [criterion = spike; note that the spike travels from neuron to neuron and therefore travels through an edge connecting them] – Hart, paragraph 6), and
defining a plurality of topological elements that each comprise a proper subset of the edges in the artificial recurrent neural network (spiking neural network is used to learn correlation between regions; neurons in the spiking neural network are connected with synapses [edges] that possess particular time delays in conduction – Hart, paragraph 28 [note that each pair of neurons and the edge connecting them may be regarded as a topological element, which is a proper subset when there are more than two neurons and more than one synapse]), and
defining criteria for determining if each of the defined topological elements is active, wherein the criteria for determining if each of the defined topological elements is active are based on activity of the edges included in the respective of the defined topological elements (spiking neural network outputs spike trains indicative of activities of the neurons; spike trains are input to another spiking neural network or may be fed back to the spiking neural network [criterion = spike; note that the spike travels from neuron to neuron and therefore travels through an edge connecting them] – Hart, paragraph 6),
wherein an active topological element indicates that a corresponding computational result has been completed (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike – Hart, paragraph 34 [computational result = determining that the energy level is over a threshold]).”
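For clarity of record, a minimal Python sketch of the mechanism described in Hart at paragraph 34, as the examiner reads it: a neuron accumulates energy and outputs a binary 1 when the energy level exceeds a firing threshold and a 0 otherwise, yielding a spike train over time. The decay, threshold, and input values below are hypothetical and do not appear in Hart.

```python
import numpy as np

# Sketch of a leaky integrate-and-fire style neuron consistent with the cited
# description: output 1 when the accumulated energy exceeds the firing
# threshold, otherwise 0. All parameter values are hypothetical.
def spike_train(inputs: np.ndarray, threshold: float = 1.0, decay: float = 0.9) -> np.ndarray:
    energy, spikes = 0.0, []
    for x in inputs:
        energy = decay * energy + x
        if energy > threshold:
            spikes.append(1)
            energy = 0.0          # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

print(spike_train(np.array([0.5, 0.6, 0.2, 0.9, 0.4])))  # [0 1 0 1 0]
```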
Regarding claim 11, Hart discloses “reading the completed computational results from the artificial recurrent neural network (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike – Hart, paragraph 34 [computational result = determining that the energy level is over a threshold; note that this result is read by the receiving neuron]).”
Regarding claim 14, Hart discloses that “the criteria for determining if the edges in the artificial recurrent neural network are active include requiring, for a given edge, that:
a spike is generated by a node connected to that edge (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike; thus, each neuron outputs a spike train over time – Hart, paragraph 34);
the spike is transmitted by the edge to a receiving node (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike; thus, each neuron outputs [transmits] a spike train over time – Hart, paragraph 34); and
the receiving node generates a response to the transmitted spike (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike; thus, each neuron outputs a spike train over time; based on the spike trains, activities can be classified [i.e., responses can be generated] – Hart, paragraph 34).”
Regarding claim 15, Hart discloses that “the criteria for determining if the edges in the artificial recurrent neural network are active includes a time window in which the spike is to be generated and transmitted and the receiving node is to generate the response (weight can be updated based on activities of the neurons; for example, neuron 1 has a spike at time t1 and neuron 2 has a spike at time t2; when the time difference between t1 and t2 [window in which the spikes are generated and transmitted] is smaller than a threshold, the weight is incremented; based on the weight matrix, coordination features of the regions of interest can be classified [classification = response] – Hart, paragraph 36).”
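A minimal sketch of the timing-window rule Hart describes at paragraph 36, as the examiner reads it: when two connected neurons spike within a threshold time difference of one another, the connecting weight is incremented. The window, increment, and spike times below are hypothetical.

```python
# Sketch of the cited rule; all numeric values are hypothetical.
def update_weight(w: float, t1: float, t2: float,
                  window: float = 5.0, increment: float = 0.1) -> float:
    if abs(t1 - t2) < window:     # order-independent: only the difference matters
        return w + increment
    return w

print(update_weight(0.5, t1=12.0, t2=14.0))  # 0.6 (spikes 2 units apart, within window)
print(update_weight(0.5, t1=12.0, t2=30.0))  # 0.5 (outside window, unchanged)
```

Because only the absolute time difference is tested, the rule applies regardless of which of the two neurons spikes first, which also bears on the claim 16 mapping below.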
Regarding claim 16, Hart discloses that “the criteria for determining if the edges in the artificial recurrent neural network are active include[] a time window in which two nodes [are] connected by the edge spike, regardless of which [of] the two nodes spikes first (when a neuron energy level is over a firing threshold, the neuron outputs a spike; otherwise the neuron outputs nothing; the output of the neuron can be represented by a binary value, for example 1 is indicative of a spike and 0 indicates no spike; thus, each neuron outputs a spike train over time [window] – Hart, paragraph 34; see also Fig. 3 (showing that the spike trains are fed back to the neurons, i.e., the spikes connect neurons to other neurons)).”
Regarding claim 17, Hart discloses that “different criteria for determining if the edges in the artificial recurrent neural network are active are applied to different of the edges (spiking neural network outputs spike trains indicative of activities of the neurons; spike trains are input to another spiking neural network or may be fed back to the spiking neural network [criterion = spike, so that the criterion for determining whether an edge linking neuron A to neuron B is active is whether a spike has been transmitted from A to B and the criterion for determining whether an edge linking neuron A to neuron C is active is whether a spike has been transmitted from A to C, i.e., different criteria] – Hart, paragraph 6).”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Harada (US 20200202212) (“Harada”) in view of Hart.
Regarding claim 1, Harada discloses “[a] method of reading the output of an artificial recurrent neural network that comprises a plurality of nodes and edges connecting the nodes, the method comprising:
identifying one or more relatively complex root topological elements that each comprises a subset of the nodes and edges in the artificial recurrent neural network (processing by a learning device is illustrated; learning device performs learning by using a hierarchical recurrent network, which is formed of a lower-layer RNN that is divided into predetermined units in a time-series direction, and an upper-layer RNN that aggregates these predetermined units in the time-series direction – Harada, paragraph 61 [upper-layer RNN = relatively complex root topological elements]; see also Fig. 1 (showing that the upper-layer RNN comprises nodes 30-0 to 30-m and edges h0 to hn)); [and]
identifying a plurality of relatively simpler topological elements that each comprises a subset of the nodes and edges in the artificial recurrent neural network, wherein the identified relatively simpler topological elements stand in a hierarchical relationship to at least one of the relatively complex root topological elements (processing by a learning device is illustrated; learning device performs learning by using a hierarchical recurrent network, which is formed of a lower-layer RNN that is divided into predetermined units in a time-series direction, and an upper-layer RNN that aggregates these predetermined units in the time-series direction – Harada, paragraph 61 [lower-layer RNN = relatively simpler topological elements]; see also Fig. 1 (showing that the lower-layer RNN comprises nodes 20-0 to 20-n and input edges x(0) to x(n))) ….”
Harada appears not to disclose explicitly the further limitations of the claim. However, Hart discloses “generating a collection of digits, wherein each of the digits represents whether a respective of the … topological elements is active during a window (memory space is allocated to store vectors; when a spiking neural network is used, the vectors correspond to spike trains output from neurons in the spiking neural network; when a neuron fires at a time [window], the neuron outputs a binary value 1, and when the neuron does not fire at a time, the neuron outputs a binary value 0 [vector of binary values = collection of digits] – Hart, paragraph 23); and
outputting the collection of digits (memory space is allocated to store vectors; when a spiking neural network is used, the vectors correspond to spike trains output from neurons in the spiking neural network; when a neuron fires at a time [window], the neuron outputs a binary value 1, and when the neuron does not fire at a time, the neuron outputs a binary value 0 [vector of binary values = collection of digits] – Hart, paragraph 23).”
Hart and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Harada to generate and output values corresponding to whether a neuron is active at a given time, as disclosed by Hart, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to keep track of when neurons are active and inactive, thereby increasing the efficiency of the system. See Hart, paragraph 23.
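For clarity of record, a schematic Python sketch of the hierarchical arrangement Harada describes at paragraph 61: a lower-layer RNN processes the time series in predetermined units, and an upper-layer RNN aggregates one hidden state per unit. The tanh cells, dimensions, and random weights below are hypothetical simplifications, not Harada's disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # hypothetical hidden/input dimension
W_low, W_up = rng.normal(size=(d, 2 * d)), rng.normal(size=(d, 2 * d))

def rnn_step(W: np.ndarray, h: np.ndarray, x: np.ndarray) -> np.ndarray:
    return np.tanh(W @ np.concatenate([h, x]))   # simple tanh cell (hypothetical)

def hierarchical_rnn(x_seq: np.ndarray, chunk: int = 4) -> np.ndarray:
    h_up = np.zeros(d)
    for start in range(0, len(x_seq), chunk):
        h_low = np.zeros(d)
        for x in x_seq[start:start + chunk]:     # lower-layer RNN over one unit
            h_low = rnn_step(W_low, h_low, x)
        h_up = rnn_step(W_up, h_up, h_low)       # upper-layer RNN aggregates the unit
    return h_up

print(hierarchical_rnn(rng.normal(size=(8, d))))  # final upper-layer hidden state
```

In this arrangement the upper-layer RNN corresponds to the relatively complex root topological elements and the per-unit lower-layer RNNs to the relatively simpler elements, as mapped above.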
Regarding claim 2, Harada, as modified by Hart, discloses that “the identifying the relatively complex root topological elements comprises:
determining that the relatively complex root topological elements are active when the recurrent neural network is responding to an input (by performing calculation based on a parameter and hidden state vectors h0 to h3, the RNN 30-0 finds another hidden state vector Y0; by performing calculation based on Y0, the hidden state vectors h4 to h7, and the parameter, the RNN 30-1 finds a hidden state vector Y1, etc. [i.e., the system determines that the nodes of RNN 30 are active when they are processing the input hidden state vectors] – Harada, paragraph 69).”
Regarding claim 3, Harada, as modified by Hart, discloses that “identifying the relatively simpler topological elements that stand in a hierarchical relationship to the relatively complex root topological elements comprises:
inputting a dataset of inputs into the recurrent neural network (Harada Fig. 1 shows inputs x(0) to x(n) being input to the RNN 20); and
determining that either activity or inactivity of the relatively simpler topological elements is correlated with activity of the relatively complex root topological elements (by performing calculation based on a parameter and hidden state vectors h0 to h3, the RNN 30-0 finds another hidden state vector Y0; by performing calculation based on Y0, the hidden state vectors h4 to h7, and the parameter, the RNN 30-1 finds a hidden state vector Y1, etc. – Harada, paragraph 69; see also Fig. 1 (showing that the activity of the RNN 30 [complex elements] is dependent on [correlated with] the hidden state vectors sent by the RNN 20 [simpler elements])).”
Regarding claim 4, Harada, as modified by Hart, discloses “defining criteria for determining if a topological element is active, wherein the criteria for determining if the topological element is active are based on activity of the nodes or edges included in the topological element (by performing calculation based on a parameter and hidden state vectors h0 to h3, the RNN 30-0 finds another hidden state vector Y0; by performing calculation based on Y0, the hidden state vectors h4 to h7, and the parameter, the RNN 30-1 finds a hidden state vector Y1, etc. [criterion for activity = either or both of the hidden state vectors are being sent to the node or the node is processing those vectors] – Harada, paragraph 69).”
Regarding claim 5, Harada, as modified by Hart, discloses “defining criteria for determining if edges in the artificial recurrent neural network are active (by performing calculation based on a parameter and hidden state vectors h0 to h3, the RNN 30-0 finds another hidden state vector Y0; by performing calculation based on Y0, the hidden state vectors h4 to h7, and the parameter, the RNN 30-1 finds a hidden state vector Y1, etc. [criterion = the hidden state vectors are being sent through the edges to the nodes of the network] – Harada, paragraph 69).”
Regarding claim 6, Harada, as modified by Hart, discloses that “identifying the relatively simpler topological elements that stand in a hierarchical relationship to the relatively complex root topological elements comprises decomposing the relatively complex root topological elements into a collection of topological elements (processing by a learning device is illustrated; learning device performs learning by using a hierarchical recurrent network, which is formed of a lower-layer RNN that is divided into predetermined units in a time-series direction, and an upper-layer RNN that aggregates these predetermined units in the time-series direction – Harada, paragraph 61; see also Fig. 1 (showing that the nodes 20-0 to 20-3 [collection of topological elements] constitute the decomposition of the node 30-0, the nodes 20-4 to 20-7 constitute the decomposition of the node 30-1, etc.)).”
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hart in view of Wang et al., “Topological Recurrent Neural Network for Diffusion Prediction,” in arXiv preprint arXiv:1711.10162 (2017) (“Wang”).
Regarding claim 18, Hart appears not to disclose explicitly the further limitations of the claim. However, Wang discloses “defining computational results to be read from the artificial recurrent neural network comprises constructing functional graphs of the artificial recurrent neural network (Wang Fig. 2 depicts mapping a diffusion topology comprising a directed acyclic graph [functional graph] to an LSTM [recurrent neural network]), including:
defining a collection of time bins (Wang, paragraph spanning pp. 1-2 and Fig. 1 disclose diffusion cascade modeling over a plurality of time stamps t [time bins]);
creating a plurality of functional graphs of the artificial recurrent neural network, wherein each functional graph includes only nodes that are active within a respective of the time bins (Wang Fig. 1 discloses diffusion cascade modeling in which each directed graph [plurality of functional graphs] represents a cascade status up to time t; the solid circles represent active nodes and the dotted circles represent inactive nodes [i.e., the graph only considers/includes the active nodes at a given time step]);
defining the plurality of topological elements based on the active of the edges in the functional graphs of the artificial recurrent neural network (Wang Fig. 1 discloses diffusion cascade modeling in which each directed graph [plurality of functional graphs] represents a cascade status up to time t; the solid circles represent active nodes and the dotted circles represent inactive nodes [so, for instance, in Fig. 1(a), node A and the edges connecting node A to nodes C, B, and F constitute one topological element, in Fig. 1(b), node B and the edges connecting node B to nodes C and E constitute another topological element, etc.]).”
Wang and the instant application both relate to directed graph representations of recurrent neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hart to define the elements as part of a graph at various time steps, as disclosed by Wang, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would provide a simple-to-understand graphical way to model the diffusion of the signals among the nodes of the network. See Wang, p. 2, last full paragraph.
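For clarity of record, a minimal sketch of per-time-bin functional graphs in the spirit of Wang's Figure 1, in which each graph retains only the edges whose endpoint nodes are active by the corresponding time. The node names, activation times, and edge list below are hypothetical, not Wang's data.

```python
# Illustrative sketch only; activation times and edges are hypothetical.
from typing import Dict, List, Set, Tuple

activation_time = {"A": 0, "B": 1, "C": 1, "E": 2, "F": 2}
edges = [("A", "B"), ("A", "C"), ("A", "F"), ("B", "C"), ("B", "E")]

def functional_graphs(time_bins: List[int]) -> Dict[int, List[Tuple[str, str]]]:
    graphs: Dict[int, List[Tuple[str, str]]] = {}
    for t in time_bins:
        active: Set[str] = {n for n, at in activation_time.items() if at <= t}
        graphs[t] = [(u, v) for u, v in edges if u in active and v in active]
    return graphs

for t, g in functional_graphs([0, 1, 2]).items():
    print(t, g)
# 0 []                                    (only node A is active)
# 1 [('A', 'B'), ('A', 'C'), ('B', 'C')]
# 2 all five edges
```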
Regarding claim 19, Hart, as modified by Wang, discloses “combining a first topological element that is defined in a first of the functional graphs with a second topological element that is defined in a second of the functional graphs, wherein the first and the second of the functional graphs include nodes that are active within different of the time bins (Wang Fig. 1 discloses diffusion cascade modeling in which each directed graph [plurality of functional graphs] represents a cascade status up to time t; the solid circles represent active nodes and the dotted circles represent inactive nodes [so, for instance, in Fig. 1(a), node A and the edges connecting node A to nodes C, B, and F constitute one topological element, in Fig. 1(b), node B and the edges connecting node B to nodes C and E constitute another topological element, etc., and that the graph depicted in Fig. 1(b) is the combination of these two elements; note that node B is active at the timestep of Fig. 1(b) but not in that of Fig. 1(a)]).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hart to combine topological elements comprising nodes that are active within different time bins, as disclosed by Wang, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would provide a simple-to-understand graphical way to model the diffusion of the signals among the nodes of the network. See Wang, p. 2, last full paragraph.
Regarding claim 20, Hart, as modified by Wang, discloses “including one or more global graph metrics or meta information in the computational results (predicting the next active node can be viewed as a retrieval problem due to the large number of potential targets; for evaluation, two widely adopted ranking metrics are used: the rate of the top-k ranked nodes containing the next active node and the classical mean average precision measure [meta information] – Wang, sec. IV, subsection entitled “Evaluation metrics”).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hart to include graph metrics and/or meta information in the results, as disclosed by Wang, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would provide a simple-to-understand graphical way to model the diffusion of the signals among the nodes of the network. See Wang, p. 2, last full paragraph.
Claims 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Hart in view of Harada.
Regarding claim 24, Hart appears not to disclose explicitly the further limitations of the claim. However, Harada discloses that “defining the computational results to be read from the artificial recurrent neural network comprises:
selecting a proper subset of the plurality of topological elements to be read from the artificial recurrent neural network based on a hierarchical arrangement of the topological elements, wherein a first of the topological elements is identified as a root topological element and topological elements that contribute to the root topological element are selecting for the proper subset (Harada Fig. 3 and paragraphs 61 and 69 disclose a hierarchical RNN in which each node of the upper-layer RNN receives hidden state vectors from a proper subset of the lower-layer RNN and a hidden state vector from a previous timestep of the upper-layer RNN; for example, RNN 30-1 [root topological element] receives contributions from nodes 20-4 to 20-7 [proper subset selected for contribution to the root topological element]).”
Harada and the instant application both relate to recurrent neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hart to select a proper subset of elements to contribute to a root higher-level element, as disclosed by Harada, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the efficiency of learning by ensuring that not all of the data have to be processed at once. See Harada, paragraphs 8-9.
Regarding claim 25, Hart, as modified by Harada, discloses “identifying a plurality of root topological elements and selecting topological elements that contribute to the root topological elements for the proper subset (Harada Fig. 3 and paragraphs 61 and 69 disclose a hierarchical RNN in which each node of the upper-layer RNN receives hidden state vectors from a proper subset of the lower-layer RNN and a hidden state vector from a previous timestep of the upper-layer RNN; for example, RNN 30-1 [root topological element] receives contributions from nodes 20-4 to 20-7 [proper subset selected for contribution to the root topological element], and RNN 30-0 receives contributions from nodes 20-0 to 20-3 [i.e., there is a plurality of root topological elements]).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hart to select a proper subset of elements to contribute to a root higher-level element, as disclosed by Harada, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the efficiency of learning by ensuring that not all of the data have to be processed at once. See Harada, paragraphs 8-9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571)272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN C VAUGHN/ Primary Examiner, Art Unit 2125