Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
Acknowledgment is made of the Information Disclosure Statements dated 03/09/2023 and 12/05/2025. All of the cited references have been considered.
Drawings
The drawings were received on 03/09/2023. These drawings are accepted.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.
Claim 1 recites “extracting, by an analysis computer, a plurality of first datasets from a plurality of graph snapshots using a graph structural learning module”. It is unclear whether the extracting of the plurality of first datasets is performed by the analysis computer or by the graph structural learning module. For purposes of examination, Examiner will interpret the graph structural learning module as a neural network used to extract a plurality of first datasets from a plurality of graph snapshots.
Dependent claims 2-12 do not resolve the issue and are rejected with the same rationale.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims recite “a computer readable medium”. The specification does not define “a computer readable medium” to exclude signals per se; therefore, the broadest reasonable interpretation of “a computer readable medium” includes signals per se, which are not statutory.
Regarding Claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“extracting, [by an analysis computer,] a plurality of first datasets from a plurality of graph snapshots [using a graph structural learning module;]”
“extracting, [by the analysis computer,] a plurality of second datasets from the plurality of first datasets [using a temporal convolution module] across the plurality of first datasets;”
“performing, [by the analysis computer,] graph context prediction based on the plurality of second datasets; and”
“performing, [by the analysis computer,] an action based on the graph context prediction.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting and performing. The above limitations in the context of this claim encompass, inter alia, extracting datasets, performing graph context prediction, and performing an action. Examiner notes that, per paragraph [0184], an action can include a determining step (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“[extracting,] by an analysis computer, [a plurality of first datasets from a plurality of graph snapshots] using a graph structural learning module;”
“[extracting,] by the analysis computer, [a plurality of second datasets from the plurality of first datasets] using a temporal convolution module [across the plurality of first datasets;]”
“[performing,] by the analysis computer, [graph context prediction based on the plurality of second datasets; and]”
“[performing,] by the analysis computer, [an action based on the graph context prediction.]”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an analysis computer and a module (e.g., by using these elements as tools).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“[extracting,] by an analysis computer, [a plurality of first datasets from a plurality of graph snapshots] using a graph structural learning module;”
“[extracting,] by the analysis computer, [a plurality of second datasets from the plurality of first datasets] using a temporal convolution module [across the plurality of first datasets;]”
“[performing,] by the analysis computer, [graph context prediction based on the plurality of second datasets; and]”
“[performing,] by the analysis computer, [an action based on the graph context prediction.]”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an analysis computer and a module (e.g., by using these elements as tools).
The claim is not patent eligible.
Regarding Claim 2,
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 2 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein each graph snapshot of the plurality of graph snapshots comprises a plurality of nodes that represent entities and a plurality of edges that represent interactions between the entities, each node of the plurality of nodes connected to neighboring nodes of the plurality of nodes by one or more edges of the plurality of edges.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting. The above limitations in the context of this claim encompass, inter alia, extracting datasets (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 3,
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 3 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the plurality of first datasets includes intermediate vector representations for each node for each snapshot of the plurality of graph snapshots, the intermediate vector representations each including a first plurality of feature values corresponding to a plurality of feature dimensions.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting. The above limitations in the context of this claim encompass, inter alia, extracting datasets (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 4,
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 4 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of feature values corresponding to the plurality of feature dimensions, wherein the intermediate vector representations and the final vector representations are embeddings of each node in a vector space representative of characteristics of the plurality of nodes.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting. The above limitations in the context of this claim encompass, inter alia, extracting datasets (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 5,
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 5 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining a plurality of convolution kernels, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions; and”
“performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels to produce the final vector representations.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
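For context only (this sketch is not part of the record of the application), the Examiner's interpretation of temporal convolution above can be illustrated as applying a filter along the time axis, separately for each node and for each feature dimension. All shapes and kernel values below are hypothetical:

```python
import numpy as np

# Illustrative only: temporal convolution as interpreted above, i.e., applying
# a filter along the time axis, separately for each node and for each feature
# dimension. All shapes and kernel values are hypothetical.
T, N, D = 5, 3, 4                          # snapshots (timestamps), nodes, feature dimensions
rng = np.random.default_rng(0)
intermediate = rng.normal(size=(T, N, D))  # one intermediate vector per node per snapshot
kernels = rng.normal(size=(D, 2))          # one short kernel per feature dimension

K = kernels.shape[1]                       # kernel length = size of the timestamp window
final = np.empty((T - K + 1, N, D))
for d in range(D):                         # each feature dimension convolved separately
    for n in range(N):                     # each node convolved separately
        # Correlate the feature's time series with its kernel over windows of
        # K consecutive timestamps (kernel reversed so np.convolve correlates).
        final[:, n, d] = np.convolve(intermediate[:, n, d], kernels[d][::-1], mode="valid")

print(final.shape)  # (4, 3, 4): one output per window of consecutive timestamps
```

Each output value depends only on a window of consecutive snapshots, which is the sense in which such a module captures temporal relationships.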
Regarding Claim 6,
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein each graph snapshot of the plurality of graph snapshots includes graph data associated with a timestamp.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting. The above limitations in the context of this claim encompass, inter alia, extracting datasets (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 7,
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 7 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein each of the plurality of nodes are temporal convoluted separately, and each feature dimension of each node are temporal convoluted separately.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 8,
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 8 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein performing temporal convolution includes, for each feature dimension of each node, applying a corresponding convolution kernel from the plurality of convolution kernels to a subset of first feature values of the feature dimension, the subset of first feature values corresponding to a subset of consecutive timestamps.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 9,
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 9 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein applying the corresponding convolution kernel provides a result, and the result is used as a second feature value of the feature dimension at a last timestamp from the subset of consecutive timestamps.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 10,
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 10 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein each convolution kernel has a predefined length, and wherein a number of first feature values in the subset of first feature values is equal to the predefined length of the convolution kernel.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 11,
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 11 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the temporal convolution module utilizes depthwise convolution or lightweight convolution.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels and performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships) (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
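For context only (not part of the record of the application), the two convolution types named in claim 11 can be contrasted in a brief sketch. The “lightweight” variant below follows a common literature description, depthwise convolution with a kernel shared across feature dimensions and softmax-normalized over the temporal window, which is an assumption rather than the application's own definition; all shapes and values are illustrative:

```python
import numpy as np

# Illustrative only: contrasting the two convolution types named in the claim.
# "Lightweight" here is assumed to mean depthwise convolution with a kernel
# shared across feature dimensions and softmax-normalized over the temporal
# window. All shapes and values are hypothetical.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, D, K = 6, 4, 3
x = np.arange(T * D, dtype=float).reshape(T, D)  # one node's features over T timestamps
raw = np.ones((D, K))                            # per-dimension kernels (depthwise case)

# Depthwise: each feature dimension d is convolved with its own kernel raw[d].
depthwise = np.stack(
    [np.convolve(x[:, d], raw[d][::-1], mode="valid") for d in range(D)], axis=1
)

# Lightweight: one shared kernel, softmax-normalized so each output is a
# weighted average over K consecutive timestamps.
shared = softmax(raw[0])                         # weights sum to 1 over the window
lightweight = np.stack(
    [np.convolve(x[:, d], shared[::-1], mode="valid") for d in range(D)], axis=1
)

print(depthwise.shape, lightweight.shape)  # (4, 4) (4, 4)
```

The two variants differ only in how kernels are parameterized and shared; both slide over windows of consecutive timestamps.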
Regarding Claim 12,
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 12 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each graph snapshot of the plurality of graph snapshots, determining an intermediate vector representation for each node based on learned coefficients and intermediate vector representations corresponding to neighboring nodes.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining. The above limitations in the context of this claim encompass, inter alia, determining an intermediate vector representation (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 13,
Claim 13 recites a computer for performing steps similar to those of claim 1 and is rejected under the same rationale, mutatis mutandis, in view of the following additional elements, which, considered individually and as an ordered combination with the additional elements identified above, fail to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
“a processor; and”
“a computer readable medium coupled to the processor, the computer readable medium comprising code, executable by the processor, for implementing a method comprising:”
This is a recitation of generic computing components to be used in performing the abstract idea, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).
Regarding Claim 14,
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 14 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: Please see the corresponding analysis of Claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“the graph structural learning module coupled to the processor; and”
“the temporal convolution module coupled to the processor.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using modules coupled to the processor (e.g., by using these elements as tools).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“the graph structural learning module coupled to the processor; and”
“the temporal convolution module coupled to the processor.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using modules coupled to the processor (e.g., by using these elements as tools).
The claim is not patent eligible.
Regarding Claim 15,
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 15 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining a prediction based on at least performing graph context prediction based on the plurality of second datasets; and”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining. The above limitations in the context of this claim encompass, inter alia, determining a prediction (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“receiving a prediction request from a requesting client;”
“transmitting, to the requesting client, a prediction response comprising the prediction.”
As drafted, these amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional elements of “receiving a prediction request” and “transmitting a prediction response” amount to mere data gathering and mere data transmission, respectively, which are insignificant extra-solution activities that do not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, transmitting, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network ... iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 17,
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 17 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the graph context prediction is performed using the plurality of second datasets [and the machine learning model.]”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., predicting. The above limitations in the context of this claim encompass, inter alia, determining a prediction (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“[wherein the graph context prediction is performed using the plurality of second datasets] and the machine learning model.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a machine learning model (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“[wherein the graph context prediction is performed using the plurality of second datasets] and the machine learning model.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a machine learning model (e.g., by using this element as a tool).
The claim is not patent eligible.
Regarding Claim 18,
Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 18 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: Please see the corresponding analysis of Claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“wherein the machine learning model is an SVM or a neural network.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a machine learning model (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“wherein the machine learning model is an SVM or a neural network.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a machine learning model (e.g., by using this element as a tool).
The claim is not patent eligible.
Regarding Claim 19,
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 19 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein each graph snapshot of the plurality of graph snapshots comprises a plurality of nodes that represent entities, wherein the plurality of first datasets includes intermediate vector representations for each node for each snapshot of the plurality of graph snapshots, the intermediate vector representations each including a first plurality of values corresponding to a plurality of feature dimensions, wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of values corresponding to the plurality of feature dimensions.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., extracting. The above limitations in the context of this claim encompass, inter alia, extracting datasets (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 20,
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 20 is directed to a computer, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining a plurality of convolution kernels based on the intermediate vector representations, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions;”
“performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels; and”
“determining the final vector representations based on the temporal convolution.”
As drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., determining and performing. The above limitations in the context of this claim encompass, inter alia, determining a plurality of convolution kernels, performing temporal convolution (Examiner is interpreting temporal convolution as applying filters to data to capture temporal relationships), and determining the final vector representations (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz et al. (US20180189634A1); hereinafter Abdelaziz; in view of Wu et al. (Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks); hereinafter Wu.
Claim 1 is rejected over Abdelaziz and Wu.
Regarding claim 1, Abdelaziz teaches a method comprising:
extracting, by an analysis computer, a plurality of first datasets from a plurality of graph snapshots using a graph structural learning module; (Abdelaziz [0004] and Figures 1-3: “traversing a knowledge graph that including a plurality of nodes connected by a plurality of edges, each edge of the plurality of edges representing embedded semantic information … receiving the knowledge graph at a deep neural network (DNN), capturing the embedded semantic information by the DNN”)
performing, by the analysis computer, an action based on the graph context prediction. (Abdelaziz [0004]: “receiving a path query at the DNN specifying a starting node of the knowledge graph and a termination node of the knowledge graph, determining a context for the received path query, and traversing the knowledge graph, based on the context and the embedded semantic information, in response to said receiving the path query at the DNN.”; and [0018]: “Path-based training determines or predicts whether or not a path exists between a starting node and a terminating node where the path includes at least one intervening node between the starting node and the terminating node.”)
Abdelaziz does not appear to explicitly teach extracting, by the analysis computer, a plurality of second datasets from the plurality of first datasets using a temporal convolution module across the plurality of first datasets;
performing, by the analysis computer, graph context prediction based on the plurality of second datasets; and
However, Wu teaches extracting, by the analysis computer, a plurality of second datasets from the plurality of first datasets using a temporal convolution module across the plurality of first datasets; (Wu [Section 4.1 Model Architecture]: “To discover hidden associations among nodes, a graph learning layer computes a graph adjacency matrix, which is later used as an input to all graph convolution modules. Graph convolution modules are interleaved with temporal convolution modules to capture spatial and temporal dependencies respectively. Figure 3 gives a demonstration of how a temporal convolution module and a graph convolution module collaborate with each other.”; and [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: The extracted high-level temporal features are the second datasets)
performing, by the analysis computer, graph context prediction based on the plurality of second datasets; and (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: See Figure 1 of Wu: after the temporal convolution extracts the second datasets, the results are forecasted. Examiner is interpreting “graph context prediction” as any suitable prediction based on graph data as shown in paragraph [0031] of the Specification.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 2 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 2, Abdelaziz teaches wherein each graph snapshot of the plurality of graph snapshots comprises a plurality of nodes that represent entities and a plurality of edges that represent interactions between the entities, each node of the plurality of nodes connected to neighboring nodes of the plurality of nodes by one or more edges of the plurality of edges. (See Figure 2 of Abdelaziz to see that the knowledge graph includes a plurality of nodes connected by a plurality of edges (equivalent to where each graph snapshot in the plurality of graph snapshots includes a plurality of nodes representing entities and a plurality of edges representing interactions between the entities, each node in the plurality of nodes is connected to an adjacent node in the plurality of nodes by one or more edges in the plurality of edges).)
Claim 3 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 3, Abdelaziz teaches wherein the plurality of first datasets includes intermediate vector representations for each node for each snapshot of the plurality of graph snapshots, the intermediate vector representations each including a first plurality of feature values corresponding to a plurality of feature dimensions (Abdelaziz [0018]: “Path-based training determines or predicts whether or not a path exists between a starting node and a terminating node where the path includes at least one intervening node between the starting node and the terminating node”; and [0031]: “Each node in the set of nodes from the knowledge graph 200 can be represented by a graph or textual embedding using a vector space model.”)
Claim 4 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 4, Abdelaziz does not appear to explicitly teach wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of feature values corresponding to the plurality of feature dimensions, wherein the intermediate vector representations and the final vector representations are embeddings of each node in a vector space representative of characteristics of the plurality of nodes.
However, Wu teaches wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of feature values corresponding to the plurality of feature dimensions, wherein the intermediate vector representations and the final vector representations are embeddings of each node in a vector space representative of characteristics of the plurality of nodes. (Wu [page 2]: “we propose a novel graph learning layer, which extracts a sparse graph adjacency matrix adaptively based on data. Furthermore, we develop a graph convolution module to address the spatial dependencies among variables, given the adjacency matrix computed by the graph learning layer.”; and [Section 4.1 Model Architecture]: “To discover hidden associations among nodes, a graph learning layer computes a graph adjacency matrix, which is later used as an input to all graph convolution modules. Graph convolution modules are interleaved with temporal convolution modules to capture spatial and temporal dependencies respectively. Figure 3 gives a demonstration of how a temporal convolution module and a graph convolution module collaborate with each other.”; and [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: The extracted high-level temporal features are the second datasets which are the final vector representation. The graph structure learning module shown in Figure 1 contains the intermediate vector representations.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 5 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 5, Abdelaziz does not appear to explicitly teach wherein extracting the plurality of second datasets further comprises:
determining a plurality of convolution kernels, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions; and
performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels to produce the final vector representations.
However, Wu teaches wherein extracting the plurality of second datasets further comprises:
determining a plurality of convolution kernels, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions; and (Wu [Section 4.4 Temporal Convolution Module]: “we propose a temporal inception layer consisting of four filter sizes, viz. 1 × 2, 1 × 3, 1 × 6, and 1 × 7 … For example, to represent the period 12, a model can pass the inputs through a 1 × 7 filter from the first temporal inception layer followed by a 1 × 6 filter from the second temporal inception layer.”)
performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels to produce the final vector representations. (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: The extracted high-level temporal features are the final vector representations.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
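For illustration only (a hypothetical sketch, not part of the record), the Wu passage quoted above for claim 5 describes stacking filters of different sizes; passing the inputs through a 1 × 7 filter followed by a 1 × 6 filter yields a receptive field of 7 + 6 − 1 = 12 timestamps, matching Wu's "period 12" example:

```python
# Hypothetical illustration of Wu's temporal inception layer example:
# the receptive field of stacked (non-dilated) 1D convolutions grows by
# (kernel_size - 1) per layer.

def receptive_field(kernel_sizes):
    """Receptive field, in timestamps, of stacked 1D convolutions."""
    field = 1
    for k in kernel_sizes:
        field += k - 1
    return field

# A 1x7 filter followed by a 1x6 filter covers 12 consecutive timestamps.
print(receptive_field([7, 6]))  # 12
```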
Claim 6 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 6, Abdelaziz does not appear to explicitly teach wherein each graph snapshot of the plurality of graph snapshots includes graph data associated with a timestamp.
However, Wu teaches wherein each graph snapshot of the plurality of graph snapshots includes graph data associated with a timestamp. (Wu [page 2]: “The most suitable type of graph neural networks for multivariate time series is spatial-temporal graph neural networks. Spatial temporal graph neural networks take multivariate time series and an external graph structure as inputs, and they aim to predict future values or labels of multivariate time series”; Note: The time steps of the time series are the timestamps.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 7 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 7, Abdelaziz does not appear to explicitly teach wherein each of the plurality of nodes are temporal convoluted separately, and each feature dimension of each node are temporal convoluted separately.
However, Wu teaches wherein each of the plurality of nodes are temporal convoluted separately, and each feature dimension of each node are temporal convoluted separately. (Wu [Section 4.3 Graph Convolution Module]: “The graph convolution module consists of two mix hop propagation layers to process inflow and outflow information passed through each node separately.”; and [Figure 3]: “A demonstration of how a temporal convolution module and a graph convolution module collaborate with each other. A temporal convolution module filters the inputs by sliding a 1D window over the time and node axes, as denoted by the red. A graph convolution module filters the inputs at each step, denoted by the blue.”)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 8 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 8, Abdelaziz does not appear to explicitly teach wherein performing temporal convolution includes, for each feature dimension of each node, applying a corresponding convolution kernel from the plurality of convolution kernels to a subset of first feature values of the feature dimension, the subset of first feature values corresponding to a subset of consecutive timestamps.
However, Wu teaches wherein performing temporal convolution includes, for each feature dimension of each node, applying a corresponding convolution kernel from the plurality of convolution kernels to a subset of first feature values of the feature dimension, the subset of first feature values corresponding to a subset of consecutive timestamps. (Wu [Figure 3]: “A demonstration of how a temporal convolution module and a graph convolution module collaborate with each other. A temporal convolution module filters the inputs by sliding a 1D window over the time and node axes, as denoted by the red. A graph convolution module filters the inputs at each step, denoted by the blue.”; Note: A filter is a kernel and the time steps are timestamps.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 9 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 9, Abdelaziz does not appear to explicitly teach wherein applying the corresponding convolution kernel provides a result, and the result is used as a second feature value of the feature dimension at a last timestamp from the subset of consecutive timestamps.
However, Wu teaches wherein applying the corresponding convolution kernel provides a result, and the result is used as a second feature value of the feature dimension at a last timestamp from the subset of consecutive timestamps. (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; and [page 2]: “The most suitable type of graph neural networks for multivariate time series is spatial-temporal graph neural networks. Spatial temporal graph neural networks take multivariate time series and an external graph structure as inputs, and they aim to predict future values or labels of multivariate time series”; Note: The time steps of the time series are the timestamps. The extracted high-level temporal features are the second datasets which are the final vector representation. The graph structure learning module shown in Figure 1 contains the intermediate vector representations.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
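For illustration only (a hypothetical sketch, not part of the record), the claim 9 mapping above corresponds to a causal temporal convolution, in which the value produced from a window of consecutive timestamps is aligned with the last timestamp of that window:

```python
# Hypothetical illustration of the claim 9 mapping: in a causal temporal
# convolution, the result computed from a window of consecutive
# timestamps becomes the "second feature value" at the last timestamp
# in that window.

def causal_convolution(values, kernel):
    """Map each timestamp t >= k-1 to the weighted sum of values[t-k+1..t]."""
    k = len(kernel)
    out = {}
    for t in range(k - 1, len(values)):
        window = values[t - k + 1 : t + 1]
        out[t] = sum(w * v for w, v in zip(kernel, window))
    return out

# Hypothetical numbers: the result for the window (t=0, t=1) lands at t=1.
print(causal_convolution([1.0, 2.0, 3.0], [0.5, 0.5]))  # {1: 1.5, 2: 2.5}
```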
Claim 10 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 10, Abdelaziz does not appear to explicitly teach wherein each convolution kernel has a predefined length, and wherein a number of first feature values in the subset of first feature values is equal to the predefined length of the convolution kernel.
However, Wu teaches wherein each convolution kernel has a predefined length, and wherein a number of first feature values in the subset of first feature values is equal to the predefined length of the convolution kernel. (Wu [Section 4.4 Temporal Convolution Module]: “we propose a temporal inception layer consisting of four filter sizes, viz. 1 × 2, 1 × 3, 1 × 6, and 1 × 7.”; Note: The application of the filter sizes will have exactly that many feature values.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 11 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 11, Abdelaziz does not appear to explicitly teach wherein the temporal convolution module utilizes depthwise convolution or lightweight convolution.
However, Wu teaches wherein the temporal convolution module utilizes depthwise convolution or lightweight convolution. (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
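For illustration only (a hypothetical sketch, not part of the record), the "depthwise convolution" recited in claim 11 refers to convolving each feature dimension separately with its own kernel, rather than mixing channels:

```python
# Hypothetical illustration of depthwise temporal convolution as recited
# in claim 11: each feature dimension is convolved separately with its
# own kernel; no mixing occurs across dimensions.

def depthwise_temporal_convolution(series, kernels):
    """series: feature dimension -> list of values over consecutive timestamps.
    kernels: feature dimension -> 1D kernel for that dimension."""
    out = {}
    for dim, values in series.items():
        kernel = kernels[dim]
        k = len(kernel)
        out[dim] = [
            sum(kernel[j] * values[i + j] for j in range(k))
            for i in range(len(values) - k + 1)
        ]
    return out

# Hypothetical two-dimension example, one kernel per feature dimension.
series = {"dim0": [1.0, 2.0, 3.0], "dim1": [4.0, 5.0, 6.0]}
kernels = {"dim0": [1.0, -1.0], "dim1": [0.5, 0.5]}
print(depthwise_temporal_convolution(series, kernels))
# {'dim0': [-1.0, -1.0], 'dim1': [4.5, 5.5]}
```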
Claim 12 is rejected over Abdelaziz and Wu with the incorporation of claim 1.
Regarding claim 12, Abdelaziz teaches for each graph snapshot of the plurality of graph snapshots, determining an intermediate vector representation for each node based on learned coefficients and intermediate vector representations corresponding to neighboring nodes. (Abdelaziz [0018]: “Path-based training determines or predicts whether or not a path exists between a starting node and a terminating node where the path includes at least one intervening node between the starting node and the terminating node. For example, if the starting node is the second node 203 (Nicotine), and the terminating node is fifth node 209 (Sarcoma). A path exists between the second node 203 and the fifth node 209, e.g., including second edge 213, third node 205, third edge 215, fourth node 207, and fourth edge 217. Thus, the path-based training process uses vector space models from the knowledge graph 200 to implement, produce, or generate a traversal operator.”)
Claim 13 is rejected over Abdelaziz and Wu.
Regarding claim 13, Abdelaziz teaches an analysis computer comprising:
a processor; and
a computer readable medium coupled to the processor, the computer readable medium comprising code, executable by the processor, for implementing a method comprising: (Abdelaziz [0041]: “The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.”)
The remainder of claim 13 recites the method of claim 1 in the form of an analysis computer and is rejected for the same reasons as stated above for claim 1.
Claim 14 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 14, Abdelaziz teaches further comprising:
the graph structural learning module coupled to the processor; and (Abdelaziz [0004] and Figures 1-3: “traversing a knowledge graph that including a plurality of nodes connected by a plurality of edges, each edge of the plurality of edges representing embedded semantic information … receiving the knowledge graph at a deep neural network (DNN), capturing the embedded semantic information by the DNN”; and [0035]: “The processor 12 may include a module 10 that performs the methods described herein. The module 10 may be programmed into the integrated circuits of the processor 12, or loaded from memory 16, storage device 18, or network 24 or combinations thereof.”)
Abdelaziz does not appear to explicitly teach the temporal convolution module coupled to the processor.
However, Wu teaches the temporal convolution module coupled to the processor. (Wu [page 2]: “As demonstrated by Figure 1, our framework consists of three core components- the graph learning layer, the graph convolution module, and the temporal convolution module.”)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 16 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 16, Abdelaziz does not appear to explicitly teach training a machine learning model using at least the plurality of second datasets.
However, Wu teaches training a machine learning model using at least the plurality of second datasets. (See page 2, Figures 1-2 of Wu to see that the space-time graph neural network takes as input a plurality of datasets and a graph structure for outputting predicted values (equivalent to training a machine learning model using at least the second plurality of datasets, graph context prediction being performed using the second plurality of datasets and the machine learning model, the machine learning model being a neural network).)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 17 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 17, Abdelaziz does not appear to explicitly teach wherein the graph context prediction is performed using the plurality of second datasets and the machine learning model.
However, Wu teaches wherein the graph context prediction is performed using the plurality of second datasets and the machine learning model. (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; and [page 2]: “The most suitable type of graph neural networks for multivariate time series is spatial-temporal graph neural networks. Spatial temporal graph neural networks take multivariate time series and an external graph structure as inputs, and they aim to predict future values or labels of multivariate time series.”; Note: See Figure 1 of Wu: after the temporal convolution extracts the second datasets, the results are forecasted. Examiner is interpreting “graph context prediction” as any suitable prediction based on graph data as shown in paragraph [0031] of the Specification.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 18 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 18, Abdelaziz does not appear to explicitly teach wherein the machine learning model is an SVM or a neural network.
However, Wu teaches wherein the machine learning model is an SVM or a neural network. (Wu [page 2]: “The most suitable type of graph neural networks for multivariate time series is spatial-temporal graph neural networks. Spatial temporal graph neural networks take multivariate time series and an external graph structure as inputs, and they aim to predict future values or labels of multivariate time series”)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 19 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 19, Abdelaziz teaches wherein each graph snapshot of the plurality of graph snapshots comprises a plurality of nodes that represent entities, (See Figure 2 of Abdelaziz to see that the knowledge graph includes a plurality of nodes connected by a plurality of edges (equivalent to where each graph snapshot in the plurality of graph snapshots includes a plurality of nodes representing entities and a plurality of edges representing interactions between the entities, each node in the plurality of nodes is connected to an adjacent node in the plurality of nodes by one or more edges in the plurality of edges).)
wherein the plurality of first datasets includes intermediate vector representations for each node for each snapshot of the plurality of graph snapshots, the intermediate vector representations each including a first plurality of values corresponding to a plurality of feature dimensions, (Abdelaziz [0018]: “Path-based training determines or predicts whether or not a path exists between a starting node and a terminating node where the path includes at least one intervening node between the starting node and the terminating node”; and [0031]: “Each node in the set of nodes from the knowledge graph 200 can be represented by a graph or textual embedding using a vector space model.”)
Abdelaziz does not appear to explicitly teach wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of values corresponding to the plurality of feature dimensions.
However, Wu teaches wherein the plurality of second datasets include final vector representations for each node for each graph snapshot of the plurality of graph snapshots, the final vector representations each including a second plurality of values corresponding to the plurality of feature dimensions. (Wu [page 2]: “we propose a novel graph learning layer, which extracts a sparse graph adjacency matrix adaptively based on data. Furthermore, we develop a graph convolution module to address the spatial dependencies among variables, given the adjacency matrix computed by the graph learning layer.”; and [Section 4.1 Model Architecture]: “To discover hidden associations among nodes, a graph learning layer computes a graph adjacency matrix, which is later used as an input to all graph convolution modules. Graph convolution modules are interleaved with temporal convolution modules to capture spatial and temporal dependencies respectively. Figure 3 gives a demonstration of how a temporal convolution module and a graph convolution module collaborate with each other.”; and [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: The extracted high-level temporal features are the second datasets which are the final vector representation. The graph structure learning module shown in Figure 1 contains the intermediate vector representations.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Claim 20 is rejected over Abdelaziz and Wu with the incorporation of claim 13.
Regarding claim 20, Abdelaziz does not appear to explicitly teach wherein extracting the plurality of second datasets further comprises:
determining a plurality of convolution kernels based on the intermediate vector representations, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions;
performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels; and
determining the final vector representations based on the temporal convolution.
However, Wu teaches wherein extracting the plurality of second datasets further comprises:
determining a plurality of convolution kernels based on the intermediate vector representations, each of the plurality of convolution kernels corresponding to at least one feature dimension of the plurality of feature dimensions; (Wu [Section 4.4 Temporal Convolution Module]: “we propose a temporal inception layer consisting of four filter sizes, viz. 1 × 2, 1 × 3, 1 × 6, and 1 × 7.”; and [page 2]: “we propose a novel graph learning layer, which extracts a sparse graph adjacency matrix adaptively based on data. Furthermore, we develop a graph convolution module to address the spatial dependencies among variables, given the adjacency matrix computed by the graph learning layer.”; and [Section 4.1 Model Architecture]: “To discover hidden associations among nodes, a graph learning layer computes a graph adjacency matrix, which is later used as an input to all graph convolution modules. Graph convolution modules are interleaved with temporal convolution modules to capture spatial and temporal dependencies respectively. Figure 3 gives a demonstration of how a temporal convolution module and a graph convolution module collaborate with each other.”; and [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: The extracted high-level temporal features are the second datasets which are the final vector representation. The graph structure learning module shown in Figure 1 contains the intermediate vector representations.)
performing temporal convolution on each of the intermediate vector representations using the plurality of convolution kernels; and (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: See Figure 1 of Wu: after the temporal convolution extracts the second datasets, the results are forecasted. Examiner is interpreting “graph context prediction” as any suitable prediction based on graph data as shown in paragraph [0031] of the Specification.)
determining the final vector representations based on the temporal convolution. (Note: The extracted high-level temporal features are the second datasets which are the final vector representation. The graph structure learning module shown in Figure 1 contains the intermediate vector representations)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
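For illustration only, the temporal inception layer quoted from Wu above can be sketched as follows. The filter sizes (1 × 2, 1 × 3, 1 × 6, and 1 × 7) come from the cited passage; the function names, the averaging kernels, and the toy input sequence are hypothetical and are not part of Wu's disclosure.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1D convolution of sequence x with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of the dilated kernel
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

def temporal_inception(x, kernel_sizes=(2, 3, 6, 7), dilation=1):
    """Apply one filter per kernel size and truncate outputs to a common length,
    mirroring the multi-size filter bank described in Wu, Section 4.4."""
    outs = [dilated_conv1d(x, np.ones(k) / k, dilation) for k in kernel_sizes]
    min_len = min(len(o) for o in outs)
    # Keep the trailing time steps so all branches align on the most recent inputs.
    return np.stack([o[-min_len:] for o in outs])

x = np.arange(10, dtype=float)             # toy intermediate vector representation
features = temporal_inception(x)           # high-level temporal features
print(features.shape)                      # (4, 4): four filters, common length 4
```

Each row of the output corresponds to one of the four filter sizes, consistent with the characterization above of the extracted high-level temporal features as the second datasets.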
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz and Wu, and further in view of Mann et al. (US 2015/0170049 A1), hereinafter Mann.
Claim 15 is rejected over Abdelaziz, Wu, and Mann, incorporating the rejection of claim 13 set forth above.
Regarding claim 15, Abdelaziz does not appear to explicitly teach receiving a prediction request from a requesting client;
transmitting, to the requesting client, a prediction response comprising the prediction.
However, Mann teaches receiving a prediction request from a requesting client; (Mann [0040]: “If the client computing system 202 desires to access the trained model 218 to receive a predictive output, the client computing system 202 can transmit to the URL a request that includes the input data. The predictive modeling server system 206 receives the input data and prediction request from the client computing system 202 (Step 414).”)
transmitting, to the requesting client, a prediction response comprising the prediction. (Mann [0040]: “In response, the input data is input to the trained model 218 and a predictive output generated by the trained model (Step 416). The predictive output is provided; it can be provided to the client computing system (Step 418).”)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the transmission of prediction requests of Mann to improve training and prediction operations (Mann, [0040]). Abdelaziz and Mann are analogous art because they both concern prediction.
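For illustration only, the request/response exchange quoted from Mann [0040] can be sketched as follows. The step numbers (414, 416, 418) come from the cited passage; the function names and the stand-in model are hypothetical and are not part of Mann's disclosure.

```python
def trained_model(input_data):
    # Hypothetical stand-in for Mann's trained model 218: here, a simple mean.
    return sum(input_data) / len(input_data)

def handle_prediction_request(request):
    """Server side: receive the input data and prediction request (Step 414),
    run the trained model on the input data (Step 416), and provide the
    predictive output to the client computing system (Step 418)."""
    prediction = trained_model(request["input_data"])
    return {"prediction": prediction}

# Client side: transmit a request containing the input data, receive the response.
response = handle_prediction_request({"input_data": [1.0, 2.0, 3.0]})
print(response)   # {'prediction': 2.0}
```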
Abdelaziz does not appear to explicitly teach determining a prediction based on at least performing graph context prediction based on the plurality of second datasets; and
However, Wu teaches determining a prediction based on at least performing graph context prediction based on the plurality of second datasets; and (Wu [Section 4.4 Temporal Convolution Module]: “The temporal convolution module applies a set of standard dilated 1D convolution filters to extract high-level temporal features.”; Note: See Figure 1 of Wu, which shows that the temporal convolution is performed after the extraction of the second datasets and that the results are then forecasted. Examiner is interpreting “graph context prediction” as any suitable prediction based on graph data as shown in paragraph [0031] of the Specification.)
It would have been obvious before the effective filing date to combine the context-aware knowledge graph traversal of Abdelaziz with the temporal convolution module of Wu to improve the accuracy of long-term predictions (Wu, [Section 4.6 Proposed Learning Algorithm]). Abdelaziz and Wu are analogous art because they both concern training with graphs.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN whose telephone number is (703)756-1525. The examiner can normally be reached M-F 9:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H TRAN/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147