DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made final.
This office action is in response to the amendments filed on January 13, 2026.
Claims 1, 3, 4, 9, 10, 12, 14, 15, 17, 19, and 20 have been amended.
Claims 2, 7, 13, and 18 have been cancelled.
Response to Amendment
The amendment filed January 13, 2026 has been entered. Claims 1, 3-6, 8-12, 14-17, 19, and 20 remain pending in the application.
Response to Arguments
Response to 35 U.S.C. § 101 Arguments
Applicant asserts that the amended claims do not recite an abstract idea under the mental-process grouping because the claims are not capable of practical performance in the human mind or with the aid of pen and paper, and, in the alternative, that the claims provide a practical application and/or significantly more, constituting an improvement based on the specification’s disclosure of decoupling properties via peripheral nodes.
Claim 1, as amended, continues to recite operations that amount to generating and manipulating a graph representation of information and using a model to produce predicted states (an information processing concept), which falls within the mental process/abstract idea grouping even if performed on a computer and even if impractical to perform mentally at the claimed scale.
Further, claim 1 does not integrate the recited abstract idea into a practical application and does not add significantly more, because the additional elements recite generic computing/machine-learning functionality (applying a model, predicting states, training on graphs) without reciting a specific improvement to computer technology itself or another meaningful technological limitation. Applicant’s citation to the specification is not commensurate with the scope of the claim, which does not recite any particular technical mechanism beyond generic graph/model operations. Accordingly, the rejection of claim 1 under 35 U.S.C. § 101 is maintained, and the dependent claims fall therewith.
Response to 35 U.S.C. § 103 Arguments
Applicant asserts that Sankar and Tang do not teach the amended limitations, including “decoupling …,” “speculative transient attributes,” “predicting a second chronological state,” and “training … on training graphs … to learn temporal evolutions.” However, the rejection of claim 1 as amended relies on the applied combination, including De La Torre, Sun, and Knuff, which collectively teach these additional limitations as set forth in the Office Action’s findings. Applicant’s remarks do not address the specific disclosures relied upon from De La Torre, Sun, and Knuff, but instead make a conclusory assertion that the references “do not cure the deficiencies.”
Decoupling: De La Torre teaches representing patient-associated information (diagnoses/treatments/risks) as separate graph entities linked to a central patient node. Representing such properties as discrete linked nodes rather than embedded attributes corresponds to the claimed “decoupling.”
Speculative transient attributes / reapplying / second chronological state: De La Torre teaches accessing potential treatment attributes and applying them with patient data to generate updated predictive outputs, corresponding to applying speculative attributes and predicting a second chronological state.
Training on a set of training graphs to learn temporal evolutions: Sun teaches training over ordered sequences of graph snapshots to capture temporal evolution of graph states. Knuff teaches training a conditional generative model. The combination teaches training a conditional generative model on training graphs to learn temporal evolution.
Applicant does not specifically rebut these teachings or the articulated motivations to combine. Rather, Applicant provides a conclusory statement that the references do not cure the alleged deficiencies. Such unsupported assertions are insufficient to overcome the prima facie case of obviousness. Accordingly, the rejection of claim 1 under 35 U.S.C. § 103 is maintained, and the dependent claims fall therewith.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
To determine if a claim is directed to patent ineligible subject matter, the Court has guided the Office to apply the Alice/Mayo test, which requires:
Step 1: Determining if the claim falls within a statutory category.
Step 2A: Determining if the claim is directed to a patent ineligible judicial exception consisting of a law of nature, a natural phenomenon, or an abstract idea. Step 2A is a two-prong inquiry. MPEP 2106.04(II)(A). Under the first prong, examiners evaluate whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. Abstract ideas include mathematical concepts, certain methods of organizing human activity, and mental processes. MPEP 2106.04(a)(2). The second prong is an inquiry into whether the claim integrates a judicial exception into a practical application. MPEP 2106.04(d).
Step 2B: If the claim is directed to a judicial exception, determining if the claim recites limitations or elements that amount to significantly more than the judicial exception. (See MPEP 2106).
Claims 1, 3-6, 8-12, 14-17, 19, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1, 3-6, and 8-11 are directed to a method (a process), claims 12 and 14-16 are directed to a system (a machine), and claims 17 and 19-20 are directed to a computer readable storage medium (a manufacture). Therefore, claims 1, 3-6, 8-12, 14-17, 19, and 20 each fall within a statutory category (a process, machine, or manufacture).
Regarding claim 1
Step 2A Prong 1
Claim 1 recites the following mental processes, each of which, under the broadest reasonable interpretation, covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(III)].
“generating a test graph corresponding to an entity” (e.g., a human can organize information into a structured graph to create a graph)
“predicting a first chronological state for the test graph” (e.g., a human can predict future states from a structured graph)
“predicting a second chronological state for the test graph” (e.g., a human can predict future states from a structured graph)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a “conditional generative model” and “graph neural network,” which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). In particular, the recited “conditional generative model” is merely a generic computer component, because it is recited as performing the function of processing the “test graph,” and the claims do not recite any particular structure for how such “conditional generative model” is implemented. The examiner notes that this language is used throughout the claims, and each recitation is rejected on the same basis.
Regarding the “the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex” limitation, this additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure” limitation, this additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “accessing one or more speculative transient attributes” limitation, this additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “applying the one or more speculative transient attributes and the test graph to the conditional generative model” limitation, this additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “training the conditional generative model on the set of training graphs to learn the temporal evolutions of chronological states of the training graphs.” Training a machine-learning model is well-understood, routine, and conventional, as evidenced at least by:
• US 20170140753 A1 (Jaitly et al. – filing date 11/11/2016) at para. 0072: “conventional machine learning training technique to train the layers of the RNN system”
• US 20180342050 A1 (Fitzgerald et al. – filing date 02/19/2018) at para. 0022: “Those of skill in the relevant arts understand that the neural network may be trained with any desired or conventional training methodology such as backpropagation.”
• US 20160329044 A1 (Cao et al. – filing date 04/05/2015) at para. 0061: “In many embodiments, conventional approaches (for example, stochastic gradient descent) for training neural networks are used.”
• US 20210295979 A1 (Abraham et al. – effective filing date 12/2/2019) at para. 0109: conventional machine learning model training system performs multiple iterations using stochastic gradient descent with backpropagation.
Such training is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).
Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of a “processor,” “conditional generative model,” and “graph neural network” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex” limitation, this additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure” limitation, this additional element is recited at a high level of generality and amounts to extra-solution activity, i.e. pre-solution activity of gathering data. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
Regarding the “accessing one or more speculative transient attributes” limitation, as discussed above, this additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (See MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
Regarding the “applying the one or more speculative transient attributes and the test graph to the conditional generative model” limitation, this additional element is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception (See MPEP 2106.05(f)).
Regarding the additional element of “training the conditional generative model on the set of training graphs to learn the temporal evolutions of chronological states of the training graphs,” as discussed above, training a machine-learning model comprising neural networks is well-understood, routine, and conventional (see the evidence cited above). Moreover, such training is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.
Regarding Claim 2 (Cancelled)
Regarding Claim 3
Step 2A Prong 1
Claim 3 inherits the same abstract ideas as claim 1 and further recites the following mathematical concept, which, under the broadest reasonable interpretation, involves mathematical relationships, formulas, calculations, or algorithms implemented using generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(I)].
“predicting a second chronological vertex associated with at least one of the one or more speculative transient attributes” (e.g., inference reflecting algorithmic data modeling and transformation)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “predicting a second chronological vertex associated with at least one of the one or more speculative transient attributes.” This additional element is recited at a high level of generality and amounts to extra-solution activity of outputting refined data, i.e. post-solution activity of outputting data for use in the claimed process (see MPEP 2106.05(g)).
Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “a second chronological vertex associated with at least one of the one or more speculative transient attributes” is recited at a high level of generality and amounts to extra-solution activity of outputting refined data, i.e. post-solution activity of outputting data. The courts have found limitations directed to receiving or transmitting information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.
Regarding Claim 4
Step 2A Prong 1
Claim 4 recites the same abstract ideas as claim 1, and further recites the following mental process, which, under the broadest reasonable interpretation, covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(III)].
“determining the second chronological state of the test graph based on the modified version of the one or more speculative transient attributes” (e.g., a human can evaluate differences between modified and unmodified data)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “accessing a modified version of the one or more speculative transient attributes.” This additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of selecting a particular data source or type of data to be manipulated for use in the claimed process (see MPEP 2106.05(g)).
Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “accessing a modified version of the one or more speculative transient attributes” is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of selecting a particular data source or type of data to be manipulated for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (See MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.
Regarding Claim 5
Step 2A Prong 1
Claim 5 inherits the same abstract ideas as claim 4 and further recites the following judicial exceptions:
“analyzing the second chronological state” (e.g., a human can examine a timeline of conditions and analyze outcomes which is a mental process)
“determining an outcome for the second chronological state, based on the analysis of the second chronological state” (e.g., a human can evaluate an outcome and make a judgment about its outcomes which is a mental process)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 6
Step 2A Prong 1
Claim 6 inherits the same abstract ideas as claim 1 and further recites the following mathematical concepts, each of which, under the broadest reasonable interpretation, involves mathematical relationships, formulas, calculations, or algorithms implemented using generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(I)].
“adding one or more actual chronological vertices associated with transient actual properties” (e.g., modifying a graph via mathematical modeling)
“ordering the added vertices chronologically via oriented chronological edges, with respect to a most recent one of the chronological vertices of the test graph.” (e.g., algorithmic transformation of a graph)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 7 (Cancelled)
Regarding Claim 8
Step 2A Prong 1
Claim 8 inherits the same abstract ideas as claim 6 and further recites the following mathematical concept, which, under the broadest reasonable interpretation, involves mathematical relationships, formulas, calculations, or algorithms implemented using generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(I)].
“imposing a loss function to a predetermined minimal smoothness of time variations for the predicted states” (e.g., regulating model outputs using a loss function)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 9
Step 2A Prong 1
Claim 9 does not introduce a new judicial exception, but it inherits the same abstract ideas as claim 1.
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “the conditional generative model is implemented as a variational autoencoder by the graph neural network,” which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). In particular, the recited “variational autoencoder” is merely a generic computer component, because it is recited as performing the function of implementing the “conditional generative model,” and the claims do not recite any particular structure for how such “variational autoencoder” is implemented.
Regarding the “the variational autoencoder including two input channels consisting of a first input channel and a second input channel” limitation, this additional element is recited at a high level of generality and amounts to extra-solution activity of feeding data into the graph neural network, i.e. pre-solution activity of data gathering for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “each graph of the training graphs spans a full chronological sequence decomposing into two contiguous chronological sequences, including a first sequence followed by a second sequence” limitation, this additional element is recited at a high level of generality and amounts to extra-solution activity of data preparation over time, i.e. pre-solution activity of selecting a particular data source or type of data to be manipulated for use in the claimed process (see MPEP 2106.05(g)).
Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of “the conditional generative model is implemented as a variational autoencoder by the graph neural network” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “the variational autoencoder including two input channels consisting of a first input channel and a second input channel” limitation, as described above, this additional element is recited at a high level of generality and amounts to extra-solution activity of feeding data into different parts of the model, i.e. pre-solution activity of data gathering for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
Regarding the “each graph of the training graphs spans a full chronological sequence decomposing into two contiguous chronological sequences, including a first sequence followed by a second sequence” limitation, as discussed above, this additional element is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of selecting a particular data source or type of data to be manipulated for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (See MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.
Regarding Claim 10
Step 2A Prong 1
Claim 10 does not add any additional judicial exceptions, but it inherits the same abstract ideas as claim 9.
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of the “extracting a first data set from the chronological vertices and associated transient attributes corresponding to the first sequence” and “extracting a second data set from sole transient attributes associated with the chronological vertices corresponding to the second sequence” limitations. These additional elements are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “inputting the first data into a first input channel and the second data into a second input channel extracted, respectively, for the variational autoencoder to learn to reconstruct a representation of said each graph” limitation, this additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of a “processor” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “extracting a first data set from the chronological vertices and associated transient attributes corresponding to the first sequence” and “extracting a second data set from sole transient attributes associated with the chronological vertices corresponding to the second sequence” limitations, these additional elements are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e. pre-solution activity of data gathering for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
Regarding the “inputting the first data into a first input channel and the second data into a second input channel extracted, respectively, for the variational autoencoder to learn to reconstruct a representation of said each graph” limitation, this additional element is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.
Regarding Claim 11
Step 2A Prong 1
Claim 11 inherits the same abstract ideas as claim 10 and further recites the following mathematical concept, which, under the broadest reasonable interpretation, involves mathematical relationships, formulas, calculations, or algorithms implemented using generic computer components (e.g., “conditional generative model,” “graph neural network”) [see MPEP 2106.04(a)(2)(I)].
“the variational autoencoder includes an encoder and a decoder; the encoder is designed to encode input data in a latent space representation in an inner layer block, while the decoder is designed to decode data from the inner layer block; and the first input channel connects to the encoder, while the second input channel connects to the inner layer block.” (e.g., mathematical operations performed on input data, including latent vector representation and transformation)
Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.
Step 2A Prong 2
In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding claims 12, 14, and 15
Claims 12, 14, and 15 recite a system. Each of these claims corresponds directly to the method steps of claims 1, 3, and 4, respectively, with the addition of generic hardware components such as a memory and a processor, which are insufficient to render the claimed subject matter eligible for the same reasons described above.
Regarding claim 16
Claim 16 recites a system. It corresponds directly to the method steps of claim 6, with the addition of generic hardware components such as a memory and a processor, which are insufficient to render the claimed subject matter eligible for the same reasons described above.
Regarding claims 17-20
Claims 17, 19, and 20 recite a computer readable storage medium. Each of these claims corresponds directly to the method steps of claims 1, 3, and 4, respectively, with the addition of generic hardware components such as a computer readable storage medium and a processor, which are insufficient to render the claimed subject matter eligible for the same reasons described above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-6, 8-12, 14-17, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sankar et al. (US 20210326389 A1, referred to as Sankar), in view of Tang et al. (US 11496493 B2, referred to as Tang), in view of De La Torre et al. (US 10885150 B2, referred to as De La Torre), in view of Sun et al. (US 20230351215 A1, referred to as Sun), in view of Knuff et al. (US 20220188654 A1, referred to as Knuff).
Regarding claim 1, Sankar teaches a computer-implemented method comprising:
generating a test graph corresponding to an entity ([0037-0041]: Describes that the method is executed on a computer based system comprising a processor.; [0133-0134]: Describes a first graph snapshot that the analysis computer feeds to DySAT; each node represents an email address of a user, the snapshot corresponds to a test graph and the email address corresponds to an entity.);
applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure ([0130-0132]: Describes stacked structural and temporal self-attention layers forming a graph neural network that conditions on the current snapshot and produces embeddings used for downstream generation and/or prediction); and
predicting a first chronological state for the test graph based on the application of the test graph to the conditional generative model. ([0165]: Describes that in single-step forecasting “… the latest embeddings at time step t to predict the links at t+1.” Those predicted links correspond to a first chronological state of the same test graph.)
Although Sankar teaches a computer-implemented method comprising generating a test graph corresponding to an entity, applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure, and predicting a first chronological state for the test graph based on the application of the test graph to the conditional generative model, it does not teach that the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex.
Tang teaches the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex (Col. 6, lines 22-41: Describes separate static and dynamic graph components, corresponding to a hybrid graph construction.; Col. 8, lines 27-45: Describes a focal node, which is a reference/anchor vertex associated with the entity node. The “nodes to whom ego is directly connected to (for example, “alters”) plus the ties, if any, among the alters” are a plurality of peripheral nodes connected to that entity/reference node. The static one-hop/multi-hop features are computed from this topology, i.e., the static graph side. Dynamic features are computed per time snapshot around the same focal/anchor node, corresponding to the dynamic graph being attached via the same reference node and including another set of neighbors (peripheral nodes) observable in the time-varying snapshots.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sankar to incorporate the graph structure of Tang. Doing so would enhance prediction accuracy by leveraging both static structural knowledge and dynamic state updates, enhancing predictive capabilities in graph-based systems.
Although Sankar in view of Tang teaches a computer-implemented method comprising generating a test graph corresponding to an entity, wherein the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex, applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure, and predicting a first chronological state for the test graph based on the applying, it does not teach decoupling various properties of the first plurality of peripheral nodes and the second plurality of peripheral nodes from the entity node and the reference vertex.
De La Torre teaches decoupling various properties of the first plurality of peripheral nodes and the second plurality of peripheral nodes from the entity node and the reference vertex (Col. 1, lines 66-67 cont. Col. 2, lines 1-24: Describes generating a healthcare risk knowledge graph storing entities and links between the entities.; Col. 10, lines 6-18: Describes a Patient Clinical Object (PCO) provided as a graph centered on the patient, with information about the patient linked by categories such as diagnosis, symptom, treatment, hospital visit, and prescription.; Col. 3, lines 42-48, Col. 4, lines 22-28, and Col. 10, lines 36-58: Describes matching entities in the knowledge graph and expanding a subgraph to include nodes in the shortest path and adjacent nodes. These show properties associated with the entity (risks, diagnoses, treatments) are represented as discrete graph nodes connected via links, rather than embedded within the entity node itself. These correspond to representing properties as separate peripheral nodes connected via graph links.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the test graph structure of Sankar in view of Tang to incorporate the graph-based representation of entity properties of De La Torre. Doing so would represent properties as separate nodes in a graph, allowing improved relational modeling, flexible subgraph expansion, and enhanced propagation of information across connected nodes.
De La Torre further teaches accessing one or more speculative transient attributes;
applying the one or more speculative transient attributes and the test graph to the conditional generative model (Col. 8, lines 26-55: Describes accessing a potential treatment (speculative transient attribute) and applying it together with a patient’s risk data to the predictive model. This corresponds to applying the attributes together with the graph to the model); and
predicting a second chronological state for the test graph (Col. 12, lines 44-49: Describes generating a predicted subgraph representing the patient’s future risk state, corresponding to a second chronological state.).
Although Sankar in view of Tang in view of De La Torre teaches a computer-implemented method comprising generating a test graph corresponding to an entity, wherein the test graph has a hybrid structure comprising a static graph that includes a reference vertex associated with an entity node and a dynamic graph connected to the entity node by a reference vertex, and wherein the static graph comprises a first plurality of peripheral nodes connected to the entity node, and the dynamic graph has a second plurality of peripheral nodes connected to the reference vertex, and wherein the generating further comprises decoupling various properties of the first plurality of peripheral nodes and the second plurality of peripheral nodes from the entity node and the reference vertex, respectively, applying the test graph to a conditional generative model, wherein the conditional generative model has a graph neural network structure, predicting a first chronological state for the test graph based on the applying, accessing one or more speculative transient attributes, applying the one or more speculative transient attributes and the test graph to the conditional generative model, and predicting a second chronological state for the test graph, it does not teach training the conditional generative model on a set of training graphs to learn temporal evolutions of chronological states of the set.
Knuff teaches training the conditional generative model (FIG. 20 and [0160-0161]: Describes a variational autoencoder (VAE) that learns a latent distribution and provides a decoder which can construct new graphs.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the conditional generative model of Knuff with the graph-structured node representations of Sankar, Tang, and De La Torre. Doing so would enable the system to improve predictive capability over graph data, allowing for faster and more efficient processing.
Although Knuff teaches training the conditional generative model, it does not teach training on the set of training graphs to learn the temporal evolutions of chronological states of the training graphs.
Sun teaches training on the set of training graphs to learn the temporal evolutions of chronological states of the training graphs ([0075]: Describes that the model is trained over “an ordered sequence of graph snapshots,” which capture temporal evolutions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the conditional generative model of Knuff with the temporal evolutions of Sun. Doing so would combine the deep learning techniques and would yield the predictable benefit of graph sequence generation, improving chronological prediction.
Regarding claim 2 (Cancelled)
Regarding claim 3, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 1.
De La Torre further teaches wherein predicting a second chronological state further comprises:
predicting a second chronological vertex associated with at least one of the one or more speculative transient attributes (Col. 12, lines 44-49: Describes predicting specific future risk nodes that arise only if a potential treatment is applied, corresponding to a second chronological vertex associated with a speculative transient attribute.).
Regarding claim 4, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 1.
De La Torre further teaches accessing a modified version of the one or more speculative transient attributes (Col. 12, lines 44-56: Describes comparing different potential treatments for the same patient, corresponding to modifying attributes to test alternative scenarios); and
determining the second chronological state of the test graph based on the modified version of the one or more speculative transient attributes (Col. 12, lines 44-56: Describes generating a new predicted risk subgraph for each treatment scenario, which corresponds to a second state being determined based on modified input.).
Regarding claim 5, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 4.
De La Torre further teaches analyzing the second chronological state (Col. 12, lines 44-56: Describes examining the predicted risk subgraph generated after applying a speculative treatment to assess whether any newly identified risks would impact the patient’s existing condition.); and determining an outcome for the second chronological state, based on the analyzing (Col. 12, lines 44-56: Describes using the results of the risk analysis to determine how the identified risks would affect the patient, thereby providing a specific outcome derived from the second state’s evaluation.).
Regarding claim 6, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 1.
Tang further teaches adding one or more actual chronological vertices associated with transient actual properties (Col. 6, lines 1-21, and Col. 7, lines 12-67, cont. Col. 8, lines 1-19: Describes per-time instances of the node’s states, i.e., chronological vertices with transient properties.); and
ordering the added vertices chronologically via oriented chronological edges, with respect to a most recent one of the chronological vertices of the test graph (Col. 6, lines 65-67, cont. Col. 7, lines 1-11: Describes directed edges (oriented in-going and out-going edges).; Col. 7, lines 12-67, cont. Col. 8, lines 1-19: Describes computing next/current month and year ratios, requiring chronological ordering anchored at the most recent snapshot, corresponding to comparing t and t-1 in an ordered series.).
Regarding claim 7 (Cancelled)
Regarding claim 8, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 6.
Sankar further teaches wherein training further comprises: imposing a loss function to a predetermined minimal smoothness of time variations for the predicted states ([0046-0047]: Describes that dynamic-graph techniques “impose a temporal regularizer to enforce smoothness of the node representations from adjacent snapshots,” expressly teaching a smoothness-of-time loss.).
Regarding claim 9, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 1.
Sun further teaches each graph of the training graphs spans a full chronological sequence decomposing into two contiguous chronological sequences, including a first sequence followed by a second sequence ([0213]: Describes that the training procedure uses snapshots of the dynamic graph as input, corresponding to a first chronological subsequence.; [0235]: Describes an evaluation loop that “trains on the first t snapshots” of every dynamic graph and treats the remaining snapshots as the next contiguous block, thereby splitting every full timeline into a second chronological subsequence).
Although Sun teaches each graph of the training graphs spans a full chronological sequence decomposing into two contiguous chronological sequences, including a first sequence followed by a second sequence, it does not teach the conditional generative model is implemented as a variational autoencoder by the graph neural network, the latter including two input channels consisting of a first input channel and a second input channel.
Knuff further teaches the conditional generative model is implemented as a variational autoencoder by the graph neural network, the latter including two input channels consisting of a first input channel and a second input channel (FIG. 20, [0160-0161]: “The exemplary model described herein is a variational autoencoder”; these describe a VAE whose encoder/decoder blocks are graph neural network components.; [0143]: Describes that the VAE takes a node-feature matrix and an edge-feature tensor as two distinct matrices, corresponding to two separate input channels.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the temporal sequence training of Sun using the two-channel variational autoencoder of Knuff. Doing so would produce clearer temporal conditioning with routine data-pipeline wiring.
Regarding claim 10, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 9.
Sun further teaches wherein training the model on the set of training graphs further comprises: extracting a first data set from the chronological vertices and the associated transient attributes corresponding to the first sequence (Sun [0088]: Describes each snapshot as having time-varying weights/attributes and states that the first database of intermediate vectors is pulled from those vertices across the first sequence).
Although Sun teaches extracting a first data set from the chronological vertices and the associated transient attributes corresponding to the first sequence, it does not teach extracting a second data set from the sole transient attributes associated with the chronological vertices corresponding to the second sequence, or inputting the first data into a first input channel and the second data into a second input channel for the variational autoencoder to learn to reconstruct a representation of said each graph.
Knuff further teaches extracting a second data from the sole transient attributes associated with the chronological vertices corresponding to the second sequence ([0142]: Describes the stored attributes are in a node feature matrix N even when link structure is omitted, corresponding to passing only transient attributes without edges.); and inputting the first data into a first input channel and the second data into a second input channel extracted, respectively, for the variational autoencoder to learn to reconstruct a representation of said each graph ([0143]: Describes that two matrices N and A being supplied to the VAE.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sun’s vertex and attribute data with the attribute-only data of Knuff. Doing so would allow for selection of readily available data for pre-processing decision making.
Regarding claim 11, Sankar, in view of Tang, in view of De La Torre, in view of Knuff, in view of Sun teaches the computer-implemented method of claim 10.
Knuff further teaches wherein: the variational autoencoder includes an encoder and a decoder ([0160]: Describes how the VAE incorporates an MPNN encoder and is also used to decode.); the encoder is designed to encode input data in a latent space representation in an inner layer block, while the decoder is designed to decode data from the inner layer block; and the first input channel connects to the encoder, while the second channel connects to the inner layer block ([0143]: Describes that the node feature matrix N is input to the encoder during message passing.; [0163]: Describes latent samples being “passed through a sequence of dense layers 2303a-n and subsequently processed via two different matrices” including the edge feature tensor, placing the second matrix/channel at the latent/decoder stage of the VAE.).
Regarding claims 12, 14, and 15, these claims recite substantially the same limitations as claims 1, 3, and 4, respectively. Claims 12, 14, and 15 further recite a computer system (Sankar [0039-0041]: Describes components incorporated into a computer system.) to perform the method steps of claims 1, 3, and 4, respectively, and are therefore rejected on the same premise.
Regarding claim 16, this claim recites substantially the same limitations as claim 6. Claim 16 further recites a computer system (Sankar [0039-0041]: Describes components incorporated into a computer system.) to perform the method steps of claim 6, and is therefore rejected on the same premise.
Regarding claims 17, 19, and 20, these claims recite substantially the same limitations as claims 1, 3, and 4, respectively. Claims 17, 19, and 20 further recite a computer program product (Sankar [0039-0041]: Describes a computer system to store information that is run by a computer program to execute instructions.) to perform the method steps of claims 1, 3, and 4, respectively, and are therefore rejected on the same premise.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONALD T RODEN whose telephone number is (571)272-6441. The examiner can normally be reached Mon-Thur 8:00-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.T.R./Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128