Prosecution Insights
Last updated: April 19, 2026
Application No. 18/421,812

GENERATING QUERY OUTCOMES USING DOMAIN-BASED CAUSAL GRAPHS AND LARGE GENERATIVE MODELS

Status: Final Rejection (§103)
Filed: Jan 24, 2024
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved; +21.3% vs TC avg; above average)
Interview Lift: +33.3% higher allowance among resolved cases with an interview than without
Avg Prosecution: 2y 8m typical timeline (37 applications currently pending)
Total Applications: 43 across all art units

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Based on career data from 6 resolved cases; Tech Center averages are estimates.
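The examiner metrics above reduce to simple arithmetic over the resolved-case history. The sketch below reproduces the headline figures; note that the per-case records and the Tech Center average used here are hypothetical stand-ins, chosen only to be consistent with the "5 granted / 6 resolved," "+33.3% interview lift," and "+21.3% vs TC avg" values shown.

```python
# Illustrative reconstruction of the dashboard arithmetic.
# The case list and tc_avg below are ASSUMED values consistent
# with the displayed figures, not actual PAIR/PatEx data.

resolved_cases = [
    # (granted, had_interview) for the examiner's 6 resolved cases
    (True, True), (True, True), (True, True),
    (True, False), (True, False), (False, False),
]

granted = sum(g for g, _ in resolved_cases)
allow_rate = granted / len(resolved_cases)            # 5/6, shown as 83%

with_iv = [g for g, iv in resolved_cases if iv]       # allowance with interview
without_iv = [g for g, iv in resolved_cases if not iv]
lift = sum(with_iv) / len(with_iv) - sum(without_iv) / len(without_iv)  # +33.3%

tc_avg = 0.62                                         # hypothetical TC 2600 average
delta_vs_tc = allow_rate - tc_avg                     # +21.3 percentage points
```

With only six resolved cases, these point estimates carry wide error bars, which is why the dashboard pairs them with the larger pending-application counts.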

Office Action

§103
DETAILED ACTION

This Office Action is responsive to the amendments and arguments filed on February 2, 2026. Claims 1, 17, and 20 are amended; claims 1-20 are pending and have been examined. Accordingly, this action is made FINAL. All objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on January 24, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendments and Arguments

With regard to the rejections made under 35 U.S.C. 101, Applicant states, "As discussed in the interview, the claims include communications between multiple are directed toward [sic] patent-eligible subject matter" (page 11 of Remarks). The claims as amended recite patent-eligible subject matter. Accordingly, the rejections under 35 U.S.C. 101 are withdrawn.

With regard to the rejections made under 35 U.S.C. 103, Applicant argues, "As the Examiner appeared to agree in the interview, Croutwater, whether considered alone or in combination with the other cited references, does not appear to teach or suggest the subject matter of the currently amended independent claims. For example, Croutwater, whether considered alone or in combination with the other cited references, does not appear to teach or suggest [all elements of the amended claims]" (page 12 of Remarks). Applicant's argument is not persuasive. Upon further consideration, Croutwater and Xu teach the elements of the claims in such a way that their combination would have been obvious to a person having ordinary skill in the art.
For example, Croutwater teaches a method of generating knowledge graphs for mapping and answering questions with entity relations, and Xu teaches using an artificial intelligence model to generate graph queries. Accordingly, the rejections under 35 U.S.C. 103 are maintained. Further details are provided below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-9, 11-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent 11,521,078 to Croutwater et al. (hereinafter, "Croutwater") in view of U.S. Patent 12,332,896 to Xu et al. (hereinafter, "Xu").

Regarding claim 1, Croutwater teaches A computer-implemented method for generating a causal query output using a domain-based causal graph and one or more large generative models, comprising: receiving a dataset for a target domain and a causal query corresponding to the target domain, wherein the causal query corresponds to a cause-and-effect relationship between variables in the target domain (column 2, lines 14-16, "An approach is provided that receives a question at a question-answering (QA) system.
A number of passages are identified that are relevant to the received question," and column 3, lines 40-44, "When a question is submitted, the approach extracts entities and relations from the question text and create a KG like data structure that includes the entity or relation that is missing from the question."); mapping data values from the dataset to nodes in a target domain causal graph to generate a mapped causal graph, wherein the target domain causal graph is a directional causal graph with the nodes representing variables of the target domain and directional edges encoding cause-and-effect relationships between the variables (column 2, lines 16-20, "A question knowledge graph is generated that corresponds to the question and a set of passage knowledge graphs are also generated with each passage knowledge graph corresponding to one of the identified passages," and column 3, lines 35-40, "In more detail, the Candidate Answer Generator phase first creates a knowledge graph database from a corpus, such as an online encyclopedia or other external knowledge base. In the created knowledge graph database, each node represents entities and the edges between nodes represent relations between two nodes/entities."); determining a node with missing values in the mapped causal graph (column 2, lines 62-66, "FIGS. 1-7 describe an approach that leverages entity-relation data from knowledge graphs (KGs) and computes similarity scores to find missing information for entities and also boost scores of candidate answers in order to better rank correct answers (with a reasonable/trustful answer)."); populating the missing values with data obtained from a data resource to generate a modified causal graph (column 10, lines 5-14, "While the initial question knowledge graph (320) and the initial passage knowledge graphs (360) can be analyzed and used to identify candidate answers based on the knowledge graphs, in one embodiment, the knowledge graphs are 'expanded' using known, reliable data, such as an online encyclopedia that is depicted being retrieved from external data store 330. The expanded knowledge graphs are used to identify additional entities and relations that might not be readily found in the question and passage data.").

Croutwater does not explicitly teach "providing a causal script prompt, the causal query, and the modified causal graph to a large generative model that generates a causal query script," "providing the causal query script and the modified causal graph to a causal analysis model that generates a raw answer to the causal query," nor "providing a plain language response based on the raw answer in response to receiving the causal query," and thus Xu is introduced.

Xu teaches providing a causal script prompt, the causal query, and the modified causal graph to a large generative model that generates a causal query script (column 11, lines 37-48, "For instance, in some embodiments, the graph query generation and path extraction component 126 passes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to a generative artificial intelligence model with instructions to generate the graph query 128 based on the query intent 116, the entity 118, the first node 121, and the subgraph 122, and to execute the graph query on the graph 120 to identify the second node 129 and extract the path 130, and then the graph query generation and path extraction component 126 receives the second node 129 and the path 130 from the generative artificial intelligence model."); providing the causal query script and the modified causal graph to a causal analysis model that generates a raw answer to the causal query, wherein the causal analysis model is separate from the large generative model (column 11, lines 37-48, "For instance, in some embodiments, the graph query generation and path extraction component 126 passes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to a generative artificial intelligence model with instructions to generate the graph query 128 based on the query intent 116, the entity 118, the first node 121, and the subgraph 122, and to execute the graph query on the graph 120 to identify the second node 129 and extract the path 130, and then the graph query generation and path extraction component 126 receives the second node 129 and the path 130 from the generative artificial intelligence model."); and providing a plain language response based on the raw answer in response to receiving the causal query (column 11, lines 49-56, "Response generation component 132 converts the path 130 to a response 134.
The response 134 is configured for output to the device 104, e.g., for display to the user 102 via the app 105. In the example of FIG. 1A, the path 130 is converted to a natural language description of the entity 118 and the intent 116; that is, the response 134 is responsive to the intent 116 ("follow these steps") and identifies the issue identified by the entity 118.").

Croutwater and Xu are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater by including the graph query generation model of Xu for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 2, Xu further teaches The computer-implemented method of claim 1, further comprising: in response to receiving the causal query, generating a causal query prompt to provide the large generative model based on the causal query (column 25, lines 28-36, "In some embodiments, the bidirectional communications 442 include the input understanding component 404 passing the query to a large language model with an instruction that causes the large language model to function as intent/entity extractor 418. That is, the instruction is or is included in a prompt that is configured for input to the large language model.
The query is passed to the large language model along with the instruction, e.g., as an argument of the prompt."); and providing the causal query prompt and a system prompt to the large generative model with the causal query prompt, wherein the system prompt provides a set of data resources for obtaining missing dataset values for the target domain (column 29, lines 50-59, "The node-based content retrieval component 410 uses the path generated by the graph path identification component 408 to identify and extract relevant portions of content from the document set. For example, the node-based retrieval component 410 uses keys 434 to identify the relevant portions of content based on the portions of content that are associated with at least the intent node and the entity node of the path. In some embodiments, these node-level content portions are used to formulate a prompt for input to a large language model.").

Regarding claim 3, Xu further teaches The computer-implemented method of claim 2, wherein: the causal query prompt includes instructions to determine an intent of the causal query (column 9, lines 34-39, "Intent understanding component 110 reads the query 106 and identifies a first query portion 108 of the query 106 as corresponding to a canonical intent label. Intent understanding component 110 interprets or translates the first query portion 108 into a query intent 116."); the intent of the causal query is used to determine the data resource (column 10, lines 17-22, "The canonical labels referenced by intent understanding component 110 and entity extraction component 114 are pre-defined according to the requirements of a particular design or implementation of the generative graph-enhanced retrieval system and stored by, e.g., a graph template, taxonomy, ontology, or vocabulary."); and the intent of the causal query is used to generate the causal script prompt (column 11, lines 22-28, "Graph query generation and path extraction component 126 receives as input and processes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to generate a graph query 128, execute the graph query 128 on the subgraph 122, identify a second node 129 as corresponding to the query intent 116, and create and extract a path 130 from the subgraph 122.").

Regarding claim 4, Croutwater teaches The computer-implemented method of claim 2, wherein obtaining the missing values from the data resource comprises: identifying the data resource from the set of data resources using the large generative model (column 9, lines 51-54, "In addition, the traditional QA pipeline identifies passages of text that are relevant to the question with these passages being stored in memory area 350."); and requesting the missing dataset values from the data resource (column 12, lines 11-15, "At step 510, the process retrieves an external data source, such as an online encyclopedia, etc. In one embodiment, an external data source is chosen that is relevant to the subject matter of the submitted question and resulting passages.").

Xu and Croutwater are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Xu with the teachings of Croutwater for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 5, Croutwater further teaches The computer-implemented method of claim 1, wherein the data resource comprises a database of domain-based information (column 1, lines 11-14, "A QA implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base, or 'corpus.'").

Regarding claim 8, Croutwater further teaches The computer-implemented method of claim 1, wherein mapping the data values from the dataset to the nodes in the target domain causal graph includes associating one or more data values from a dataset category with the node that corresponds to the dataset category (column 3, lines 10-14, "This process expands the graph by adding neighbors to existing entities using common relations with the neighbors being added from the external data used to expand the graph, such as an online encyclopedia.").
Regarding claim 9, Croutwater further teaches The computer-implemented method of claim 1, wherein: the target domain causal graph includes nodes corresponding to categories in the target domain (column 1, lines 61-66, "Current approaches analyze relations between entities to provide supporting features and generate candidate answers provided by the QA system often relying heavily on entity disambiguation, which includes PERSON, GPE, ORGANIZATION, etc., based on the type of QA system that is being used."); various nodes in the target domain causal graph are connected with edges to indicate a relationship between connected nodes (column 3, lines 38-40, "In the created knowledge graph database, each node represents entities and the edges between nodes represent relations between two nodes/entities."); and the target domain causal graph is not associated with the data values before being mapped to the dataset (column 3, lines 35-37, "In more detail, the Candidate Answer Generator phase first creates a knowledge graph database from a corpus, such as an online encyclopedia or other external knowledge base.").
Regarding claim 11, Xu teaches The computer-implemented method of claim 1, further comprising: generating the causal script prompt that causes the large generative model to generate the causal query script that instructs the causal analysis model to return the raw answer to the causal query based on the modified causal graph (column 28, lines 40-47, "In some embodiments, graph path identification component 408 passes the identified nodes and the identified query intent to the large language model with an instruction to generate a graph query, receives the graph query from the large language model, and execute the graph query generated by the large language model on the identified subgraph(s) using the graph querying mechanism of the graph database management system 428."); and receiving the raw answer to the causal query (column 28, lines 47-60, "In some embodiments, graph query generation and execution are executed as a single step path extraction process. In other embodiments, graph query generation and execution are executed in an iterative process in which the large language model generates a graph query in a language that can be executed on the identified subgraph(s) by the graph database management system 428, the graph database management system 428 executes the large language model-generated graph query and passes the resulting path back to the large language model, the large language model evaluates the path and either regenerates the graph query or passes the resulting path back to the graph path identification component 408.").
Regarding claim 12, Xu teaches The computer-implemented method of claim 1, wherein providing the plain language response based on the raw answer includes: generating a plain language answer prompt for the large generative model to generate a refined answer based on the causal query and the raw answer (column 29, line 64, through column 30, line 2, "In some embodiments, the output generation component 412 formulates a prompt for input to a large language model based on the portions of content identified by the node-based retrieval component 410 as associated with the path, and receives the formatted output from the large language model in response to the prompt."); providing the plain language answer prompt to the large generative model (column 29, line 64, through column 30, line 2, "In some embodiments, the output generation component 412 formulates a prompt for input to a large language model based on the portions of content identified by the node-based retrieval component 410 as associated with the path, and receives the formatted output from the large language model in response to the prompt."); and receiving the refined answer from the large generative model to provide to a client device that requested the causal query (column 11, lines 49-52, "Response generation component 132 converts the path 130 to a response 134. The response 134 is configured for output to the device 104, e.g., for display to the user 102 via the app 105.").
Regarding claim 13, Croutwater teaches The computer-implemented method of claim 1, further comprising: generating the target domain causal graph using the large generative model based on categories in the target domain with knowledgebase documents from the target domain (column 1, lines 11-16, "A QA implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base, or 'corpus.' QA systems can ingest data from an unstructured collections of natural language documents, such as documents found on the Internet.").

Regarding claim 14, Croutwater further teaches The computer-implemented method of claim 13, further comprising: providing a causal graph creation prompt that causes the large generative model to generate the target domain causal graph showing relationships between the categories in the target domain based on the knowledgebase documents from the target domain (column 3, lines 35-40, "In more detail, the Candidate Answer Generator phase first creates a knowledge graph database from a corpus, such as an online encyclopedia or other external knowledge base. In the created knowledge graph database, each node represents entities and the edges between nodes represent relations between two nodes/entities.").

Regarding claim 16, Xu teaches The computer-implemented method of claim 1, wherein the causal query script includes instructions written in a computer programming language to cause the causal analysis model to execute the instructions to determine the raw answer based on the modified causal graph (column 44, lines 43-54, "The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.").

Regarding claim 17, Croutwater teaches A computer-implemented method comprising: receiving a query corresponding to a target domain (column 7, lines 14-18, "The IBM Watson™ knowledge manager system may receive an input question which it then parses to extract the major features of the question, that in turn are then used to formulate queries that are applied to the corpus of data."); mapping, in response to receiving data values from a dataset for the target domain, the data values to nodes in a target domain causal graph to generate a mapped causal graph, wherein the target domain causal graph is a directional causal graph with the nodes representing variables of the target domain and directional edges encoding cause-and-effect relationships between the variables (column 2, lines 16-20, "A question knowledge graph is generated that corresponds to the question and a set of passage knowledge graphs are also generated with each passage knowledge graph corresponding to one of the identified passages," and column 3, lines 35-40, "In more detail, the Candidate Answer Generator phase first creates a knowledge graph database from a corpus, such as an online encyclopedia or other external knowledge base.
In the created knowledge graph database, each node represents entities and the edges between nodes represent relations between two nodes/entities."); based on determining a node with missing values in the mapped causal graph, populating the missing values with data obtained from a data resource to generate a modified causal graph (column 10, lines 5-14, "While the initial question knowledge graph (320) and the initial passage knowledge graphs (360) can be analyzed and used to identify candidate answers based on the knowledge graphs, in one embodiment, the knowledge graphs are 'expanded' using known, reliable data, such as an online encyclopedia that is depicted being retrieved from external data store 330. The expanded knowledge graphs are used to identify additional entities and relations that might not be readily found in the question and passage data.").

Croutwater does not explicitly teach "providing a causal script prompt, the query, and the modified causal graph to a large generative model to generate a query script," "receiving a raw answer to the query from a causal analysis model based on providing the query script to the causal analysis model," nor "providing a plain language response to the query based on using the large generative model to generate the plain language response from the raw answer," and thus, Xu is referenced.
Xu teaches providing a causal script prompt, the query, and the modified causal graph to a large generative model to generate a query script (column 11, lines 37-48, "For instance, in some embodiments, the graph query generation and path extraction component 126 passes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to a generative artificial intelligence model with instructions to generate the graph query 128 based on the query intent 116, the entity 118, the first node 121, and the subgraph 122, and to execute the graph query on the graph 120 to identify the second node 129 and extract the path 130, and then the graph query generation and path extraction component 126 receives the second node 129 and the path 130 from the generative artificial intelligence model."); receiving a raw answer to the query from a causal analysis model based on providing the query script to the causal analysis model, wherein the causal analysis model is separate from the large generative model (column 29, line 64, through column 30, line 2, "In some embodiments, the output generation component 412 formulates a prompt for input to a large language model based on the portions of content identified by the node-based retrieval component 410 as associated with the path, and receives the formatted output from the large language model in response to the prompt."); and providing a plain language response to the query based on using the large generative model to generate the plain language response from the raw answer (column 11, lines 49-56, "Response generation component 132 converts the path 130 to a response 134. The response 134 is configured for output to the device 104, e.g., for display to the user 102 via the app 105. In the example of FIG.
1A, the path 130 is converted to a natural language description of the entity 118 and the intent 116; that is, the response 134 is responsive to the intent 116 ("follow these steps") and identifies the issue identified by the entity 118.").

Croutwater and Xu are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater by including the graph query generation model of Xu for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 18, Croutwater teaches The computer-implemented method of claim 17, further comprising: receiving the dataset for the target domain and the query from a client device (column 6, lines 31-40, "QA system 100 may be configured to receive inputs from various sources. For example, QA system 100 may receive input from the network 102, a corpus of electronic documents 107 or other data, a content creator, content users, and other possible sources of input. In one embodiment, some or all of the inputs to QA system 100 may be routed through the network 102. The various computing devices on the network 102 may include access points for content creators and content users. Some of the computing devices may include devices for a database storing the corpus of data").

Croutwater does not explicitly teach "providing the plain language response answering the query to the client device," however, Xu teaches providing the plain language response answering the query to the client device (column 11, lines 49-52, "Response generation component 132 converts the path 130 to a response 134. The response 134 is configured for output to the device 104, e.g., for display to the user 102 via the app 105.").

Croutwater and Xu are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater with the teachings of Xu for the purpose of improving question answer usability. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 19, Xu further teaches The computer-implemented method of claim 18, further comprising receiving the dataset from the client device along with the query (column 9, lines 4-8, "For example, some or all of generative graph-enhanced retrieval system is implemented directly on the user's electronic device in some implementations, thereby avoiding the need to communicate with servers over a network such as the Internet."). By teaching a method that does not rely on external information sources, Xu teaches a graph-enhanced retrieval system wherein all elements, including the dataset, are implemented directly on a user's device.

Regarding claim 20, Croutwater teaches A system for generating a causal query output using a domain-based knowledge graph and one or more large generative models, comprising: a processing system; and a computer memory comprising instructions (column 4, lines 19-24, "The present invention may be a system, a method, and/or a computer program product.
The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.") that, when executed by the processing system, cause the system to perform operations of:receiving a dataset for a target domain and a causal query corresponding to the target domain, wherein the causal query corresponds to a cause-and-effect relationship between variables in the target domain (column 2, lines 14-16, "An approach is provided that receives a question at a question-answering (QA) system. A number of passages are identified that are relevant to the received question," and column 3, lines 40-44, "When a question is submitted, the approach extracts entities and relations from the question text and create a KG like data structure that includes the entity or relation that is missing from the question.");mapping data values from the dataset to nodes in a target domain causal graph to generate a mapped causal graph, wherein the target domain causal graph is a directional causal graph with the nodes representing variables of the target domain and directional edges encoding cause-and-effect relationships between the variables (column 2, lines 16-20, "A question knowledge graph is generated that corresponds to the question and a set of passage knowledge graphs are also generated with each passage knowledge graph corresponding to one of the identified passages," and column 3, lines, 35-40, "In more detail, the Candidate Answer Generator phase first creates a knowledge graph database from a corpus, such as an online encyclopedia or other external knowledge base. In the created knowledge graph database, each node represents entities and the edges between nodes represent relations between two nodes/entities.");determining a node with missing values in the mapped causal graph (column 2, lines 62-66, "FIGS. 
1-7 describe an approach that leverages entity-relation data from knowledge graphs (KGs) and computes similarity scores to find missing information for entities and also boost scores of candidate answers in order to better rank correct answers (with a reasonable/trustful answer).");populate the missing values with data obtained from a data resource to generate a modified causal graph (column 10, lines 5-14, "While the initial question knowledge graph (320) and the initial passage knowledge graphs (360) can be analyzed and used to identify candidate answers based on the knowledge graphs, in one embodiment, the knowledge graphs are 'expanded' using known, reliable data, such as an online encyclopedia that is depicted being retrieved from external data store 330. The expanded knowledge graphs are used to identify additional entities and relations that might not be readily found in the question and passage data."). Croutwater does not explicitly teach “providing a causal script prompt, the causal query, and the modified causal graph to a large generative model that generates a causal query script,” “providing the causal query script and the modified causal graph to a causal analysis model that generates a raw answer to the causal query,” “generating a plain language answer prompt for the large generative model to generate a refined answer based on the causal query and the raw answer,” nor “providing a plain language response in response to the causal query,” and thus, Xu is referenced. 
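Purely as an illustration of the claim limitations discussed above, and not drawn from either cited reference, the "mapping" and "determining a node with missing values" steps might be sketched as follows; the graph, variable names, data values, and threshold are all hypothetical:

```python
# Illustrative sketch only: a directed causal graph as an adjacency map
# (node -> nodes it causally influences). All names and values are
# hypothetical and not taken from either cited reference.
causal_graph = {
    "rainfall": ["crop_yield"],
    "fertilizer": ["crop_yield"],
    "crop_yield": [],
}

# Observed dataset, keyed by variable name; "crop_yield" has no data.
dataset = {"rainfall": [30.1, 42.5], "fertilizer": [1.2, 0.9]}

def map_to_graph(graph, data):
    """Map dataset values onto each node to form a 'mapped causal graph'."""
    return {node: data.get(node, []) for node in graph}

def nodes_with_missing_values(mapped, minimum=1):
    """Identify nodes associated with fewer than `minimum` data values."""
    return [node for node, values in mapped.items() if len(values) < minimum]

mapped = map_to_graph(causal_graph, dataset)
print(nodes_with_missing_values(mapped))  # -> ['crop_yield']
```

The sketch simply attaches observed values to each graph node and flags any node whose value count falls below a threshold; it is a minimal reading of the limitation, not a characterization of how Croutwater or Xu implement it.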
Xu teaches providing a causal script prompt, the causal query, and the modified causal graph to a large generative model that generates a causal query script (column 11, lines 37-48, "For instance, in some embodiments, the graph query generation and path extraction component 126 passes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to a generative artificial intelligence model with instructions to generate the graph query 128 based on the query intent 116, the entity 118, the first node 121, and the subgraph 122, and to execute the graph query on the graph 120 to identify the second node 129 and extract the path 130, and then the graph query generation and path extraction component 126 receives the second node 129 and the path 130 from the generative artificial intelligence model.");

providing the causal query script and the modified causal graph to a causal analysis model that generates a raw answer to the causal query, wherein the causal analysis model is separate from the large generative model (column 11, lines 37-48, "For instance, in some embodiments, the graph query generation and path extraction component 126 passes the query intent 116, the entity 118, the first node 121, and the subgraph 122 to a generative artificial intelligence model with instructions to generate the graph query 128 based on the query intent 116, the entity 118, the first node 121, and the subgraph 122, and to execute the graph query on the graph 120 to identify the second node 129 and extract the path 130, and then the graph query generation and path extraction component 126 receives the second node 129 and the path 130 from the generative artificial intelligence model.");

generating a plain language answer prompt for the large generative model to generate a refined answer based on the causal query and the raw answer (column 29, line 64, through column 30, line 2, "In some embodiments, the output generation component 412 formulates a prompt for input to a large language model based on the portions of content identified by the node-based retrieval component 410 as associated with the path, and receives the formatted output from the large language model in response to the prompt."); and

providing a plain language response in response to the causal query (column 29, line 64, through column 30, line 2, "In some embodiments, the output generation component 412 formulates a prompt for input to a large language model based on the portions of content identified by the node-based retrieval component 410 as associated with the path, and receives the formatted output from the large language model in response to the prompt.").

Croutwater and Xu are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater by including the graph query generation model of Xu for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Croutwater and Xu as applied to claim 1 above, and further in view of U.S. Patent Application Publication 2023/0351213 to Zhao et al.
Regarding claim 6, the combination of Croutwater and Xu does not teach "The computer-implemented method of claim 1, wherein the data resource comprises a machine-learning simulator model that generates the missing values for the node"; however, Zhao teaches the data resource comprises a machine-learning simulator model that generates the missing values for the node (paragraph [0127], "A GPU processor deployed with the disclosed method could be applied to any application scenario based on knowledge graph, for example, e-commerce graph, finance graph and so on, which, during operation, could automatically generate missing relations in the original graph. For example, in an e-commerce knowledge graph, the production place of the entity 'fridge' is 'Hubei Province', 'Hubei Province' belongs to 'China', as model could learn great number of instances to obtain a matching rule, it could automatically deduce that the fridge is made in China, so as to complement the missing information.").

Croutwater, Xu and Zhao are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater and Xu with the teachings of Zhao for the purpose of improving answer generation efficiency. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
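For illustration only, the "machine-learning simulator model" limitation at issue in claim 6 can be pictured as a function that imputes a node's missing values from its parents' observed data; the function name, weighting rule, and all values below are hypothetical inventions, not taken from Zhao or the claims:

```python
# Hypothetical stand-in for a trained machine-learning simulator model:
# it imputes a missing child value from the observed values of the
# node's parents in the mapped causal graph. The scaling rule and all
# names here are illustrative, not drawn from any cited reference.
def simulate_missing(parent_values, weight=10.0):
    """Predict a missing node value as a scaled mean of parent values."""
    flat = [v for values in parent_values.values() for v in values]
    if not flat:
        return None  # nothing observed upstream; cannot impute
    return weight * sum(flat) / len(flat)

# Parents of the node with missing values, with their observed data.
parents = {"rainfall": [30.0, 40.0], "fertilizer": [1.0, 1.0]}
print(simulate_missing(parents))  # -> 180.0
```

A real simulator model would be a trained predictor rather than a fixed formula; the point of the sketch is only that the imputed value is derived from the data values already present in the mapped causal graph, which is the distinction claim 7 turns on.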
Regarding claim 7, Zhao further teaches The computer-implemented method of claim 6, wherein the machine-learning simulator model generates the missing values for the node based on the data values from the mapped causal graph (paragraph [0127], "A GPU processor deployed with the disclosed method could be applied to any application scenario based on knowledge graph, for example, e-commerce graph, finance graph and so on, which, during operation, could automatically generate missing relations in the original graph. For example, in an e-commerce knowledge graph, the production place of the entity 'fridge' is 'Hubei Province', 'Hubei Province' belongs to 'China', as model could learn great number of instances to obtain a matching rule, it could automatically deduce that the fridge is made in China, so as to complement the missing information.").

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Croutwater and Xu as applied to claim 1 above, and further in view of U.S. Patent 12,406,142 to Bhamidipaty et al.

Regarding claim 10, the combination of Croutwater and Xu does not teach "The computer-implemented method of claim 1, wherein determining the node with the missing values in the mapped causal graph includes identifying the node in the mapped causal graph that is associated with less than a minimum number of data values"; however, Bhamidipaty teaches determining the node with the missing values in the mapped causal graph includes identifying the node in the mapped causal graph that is associated with less than a minimum number of data values (column 6, lines 45-48, "To identify possibly missing relationships, the semantic model 150 (the knowledge graph) is analyzed for disconnected components of the graph, i.e., isolated clusters that share no edges with nodes outside the cluster.").

Croutwater, Xu and Bhamidipaty are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater and Xu with the teachings of Bhamidipaty for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Croutwater and Xu as applied to claim 1 above, and further in view of U.S. Patent Application Publication 2023/0316096 to Birru et al.

Regarding claim 15, the combination of Croutwater and Xu does not teach "The computer-implemented method of claim 13, wherein the knowledgebase documents include academic papers from the target domain and questionnaire responses corresponding to the categories in the target domain"; however, Birru teaches the knowledgebase documents include academic papers from the target domain and questionnaire responses corresponding to the categories in the target domain (paragraph [0060], "A first step is to collect scientific literature data from various sources such as academic databases, digital libraries, or repositories. This data can include research papers, patents, reports, and conference proceedings. In an example embodiment the knowledge source is the scholarly articles about COVID-19 and variants," and paragraph [0062], "Once the relevant information is extracted, it needs to be integrated into a Knowledge Graph. In an example embodiment, graph data is generated by the CORD Knowledge Graph ML models, in the form of nodes & edges or in the form of triplets (consisting of a subject, object and relation). This data is curated into a suitable format for the graph library which is used to render the graph for use in the application.").

Croutwater, Xu and Birru are considered analogous because they are each concerned with processing relational graph data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Croutwater and Xu with the teachings of Birru for the purpose of improving question answer accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. Patent Application Publication 2018/0075359 to Brennan et al.
U.S. Patent Application Publication 2022/0398271 to Wu et al.
U.S. Patent Application Publication 2023/0169361 to Mitra et al.
U.S. Patent Application Publication 2023/0342629 to Panda et al.
U.S. Patent Application Publication 2024/0061739 to Chen et al.
U.S. Patent Application Publication 2024/0428787 to Namazifar et al.
U.S. Patent 10,423,652 to Zhai et al.
U.S. Patent 11,811,707 to De et al.
U.S. Patent 12,417,569 to Zadrozny et al.
China Invention Application CN 110727802 to Sun and Chen.
China Invention Application CN 113590824 to Zhang.
China Invention Application CN 116881470 to Tang et al.
European Patent Application 4202789 to Huang.
WIPO Publication WO 2018/111699 to Kreutzer et al.
"Causal modelling agents: Causal graph discovery through synergising metadata- and data-driven reasoning" by Abdulaal et al.
"Toward unique and unbiased causal effect estimation from data with hidden variables" by Cheng et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Jan 24, 2024: Application Filed
Sep 29, 2025: Non-Final Rejection — §103
Dec 23, 2025: Interview Requested
Jan 09, 2026: Applicant Interview (Telephonic)
Jan 09, 2026: Examiner Interview Summary
Feb 02, 2026: Response Filed
Feb 23, 2026: Final Rejection — §103
Apr 02, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602540
LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12530534
SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT
Granted Jan 20, 2026 (2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
