Prosecution Insights
Last updated: April 19, 2026
Application No. 18/659,524

Learning Techniques for Causal Discovery

Final Rejection (§101, §103)
Filed: May 09, 2024
Examiner: DIVELBISS, MATTHEW H
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: ServiceNow Inc.
OA Round: 2 (Final)
Grant Probability: 23% (At Risk)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 23% (83 granted / 367 resolved; -29.4% vs TC avg)
Interview Lift: +23.4% (resolved cases with interview)
Avg Prosecution: 4y 1m (50 applications currently pending)
Total Applications: 417 (across all art units)

Statute-Specific Performance

§101: 37.0% (-3.0% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 367 resolved cases.
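For readers who want to sanity-check the tiles above, the headline numbers follow from the raw counts. A minimal sketch; note the Tech Center average is not reported directly and is inferred here from the stated -29.4% delta:

```python
# Reconstruct the examiner's headline statistics from the raw career counts
# shown above (83 allowances out of 367 resolved cases).
granted, resolved = 83, 367

career_allow_rate = granted / resolved            # ~0.226 -> displayed as "23%"

# The dashboard shows a -29.4% gap to the Tech Center average, which implies
# a TC 3600 average allowance rate of roughly 52% (inferred, not reported).
implied_tc_average = career_allow_rate + 0.294

print(f"career allow rate:  {career_allow_rate:.1%}")   # 22.6%
print(f"implied TC average: {implied_tc_average:.1%}")  # 52.0%
```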

Office Action

§101 §103
DETAILED ACTION The following is a Final Office action. In response to Examiner’s communication of 11/24/25, Applicant, on 1/28/26, amended claims 1, 2, 5, 12, 13, 15, 18, and 20, cancelled claims 14 and 19, and added new claims 21 and 22. Claims 1-13, 15-18, and 20-22 are now pending and have been rejected as indicated below. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendments Applicant’s amendments are acknowledged. The 35 USC 101 rejection of claims 1-13, 15-18, and 20-22 in regard to abstract ideas has been maintained in light of Applicant’s amendments and explanations. The 35 USC § 103 rejections of claims 1-13, 15-18, and 20-22 are maintained in light of Applicant’s amendments explanations. Claim Rejections - 35 USC§ 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-13, 15-18, and 20-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under considerations of the broadest reasonable interpretation of the claimed invention, Examiner finds that the Applicant invented a method and system for determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model. Examiner formulates an abstract idea analysis, following the framework described in the MPEP as follows: Step 1: The claims are directed to a statutory category, namely a "method" (claims 1-13) and "system" (claims 15-18, and 21-22). 
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1: A method comprising: obtaining static data operated on by a process and dynamic data from event logs of the process; generating, from the static data and the dynamic data, a causal graph of dependencies between features of the process; providing, to a generative natural language model, representations of the causal graph, causalities between features in the causal graph, and a prompt requesting that the generative natural language model identify inefficiencies in the process; obtaining, from the generative natural language model, indications of an inefficiency in the process; and in response to receiving the indications of the inefficiency in the process, automatically modifying a type or amount of hardware or software that performs the process. Independent claims 15 and 20 recite substantially similar claim language. Dependent claims 2-13, 16-18, and 20-22 recite the same or similar abstract idea(s) as independent claims 1, 15, and 20 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea. 
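For orientation only, the pipeline the Examiner recites from claim 1 can be caricatured in code. This is a hedged illustration, not anything from the application or the record: every function, data structure, and the stand-in "model" below is invented, and the claim itself specifies no implementation.

```python
# Hypothetical sketch of the claim 1 pipeline. All names are invented for
# illustration; the toy "LLM" is a stand-in for a generative language model.

def build_causal_graph(static_data, dynamic_data):
    # Toy causal graph: add an edge (a, b) whenever feature a precedes
    # feature b in an event-log trace.
    edges = set()
    for trace in dynamic_data:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                edges.add((a, b))
    features = sorted(set(static_data) | {f for e in edges for f in e})
    return {"features": features, "edges": sorted(edges)}

def analyze_process(static_data, event_logs, llm):
    # Obtain static data and dynamic data, then build the causal graph.
    graph = build_causal_graph(static_data, event_logs)
    # Prompt the model with the graph and request inefficiencies.
    prompt = f"Causal graph: {graph}. Identify inefficiencies in the process."
    indications = llm(prompt)
    # "Automatically modifying ... hardware or software" is reduced here to
    # returning a resource-change plan keyed on the model's findings.
    return {finding: "scale-up" for finding in indications}

# Stand-in model: flags a bottleneck feature if it appears in the prompt.
def toy_llm(prompt):
    return ["triage_queue"] if "triage_queue" in prompt else []

plan = analyze_process(
    static_data=["priority", "category"],
    event_logs=[["opened", "triage_queue", "resolved"],
                ["opened", "triage_queue", "closed"]],
    llm=toy_llm,
)
print(plan)   # {'triage_queue': 'scale-up'}
```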
The limitations in claims 1-13, 15-18, and 20-22 above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the categories of:

"Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)", as the limitations identified above are directed to determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model and thus constitute a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or

"Mental processes - concepts performed in the human mind (including an observation, evaluation, judgement, opinion)", as the limitations identified above include mere data observations, evaluations, judgements, and/or opinions, e.g. including user observation and evaluation of determining inefficiencies in a data model by analyzing a causal graph, which is capable of being performed mentally and/or using pen and paper. 
Step 2A - Prong 2: Claims 1-13, 15-18, and 20-22 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application; specifically, the claims recite the additional elements of: "providing, to a generative natural language model / A non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations comprising / A system comprising: one or more processors; and memory, containing program instructions that, upon execution by the one or more processors, cause the system to perform operations comprising:" (claims 1, 15, and 20); “wherein the generative natural language model is a large language model,” (claim 12). However, the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application. Furthermore, the high-level recitation of receiving data from a generic "natural language model" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 
2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)), and thus further fails to integrate the abstract idea into a practical application.

Step 2B: Claims 1-13, 15-18, and 20-22 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of KPI analysis of "data" via a "natural language model", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, similarly, the combination and arrangement of the above identified additional elements when analyzed under Step 2B also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model. Claims 1-13, 15-18, and 20-22 are accordingly rejected under 35 USC § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea(s)) without significantly more.

Note: The analysis above applies to all statutory categories of invention. 
As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis. For further authority and guidance, see MPEP § 2106: https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-10, 12, 13, 15-18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2021/0019309 to Yadav et al. (hereafter referred to as Yadav), in view of U.S. Patent Application Publication Number 2022/0358005 to Saha et al. (hereafter referred to as Saha), in further view of U.S. Patent Application Publication Number 2022/0237383 to Park et al. (hereafter referred to as Park), and in even further view of U.S. 
Patent Application Publication Number 2020/0218988 to Boxwell et al. (hereafter referred to as Boxwell). As per claim 1, Yadav teaches: A method comprising: obtaining static data operated on by a process and dynamic data from event logs of the process (Paragraph Number [0099] teaches a goal of the system for providing a database interface may be to, given a string (full or partial natural language question), produce the following: (1) Context sensitive suggestions; (2) Possible N translations of the question into a database query (e.g., possibly generating a formula and/or query on query workflows); (3) Confidence Score around how confident the system is that the question has properly translated to a database query; and (4) Clarifying questions that provide a way to take input from the user about what they mean. For example, the question may be “top product by profit” but the data set may not contain profit column and contain an “item” column instead. Or, the profit column may not be defined and may require someone to define it as revenue−cost. Paragraph Number [0527] teaches patterns (e.g., static patterns or dynamic patterns) may be applied to the string and/or the set of tokens to modify the query graph. The technique 2400 includes determining 2420 that the string and the set of tokens matches a pattern. For example, the set of tokens or a subset of the set of tokens may satisfy a set token type constraints associated with (e.g., stored with) a pattern. For example, the string fragments of the string matched to tokens of the set of tokens may satisfy a set of text constraints (e.g., a regular expression) associated with the pattern). generating, from the static data and the dynamic data, a causal graph of dependencies between features of the process (Paragraph Number [0515] teaches the technique 2300 includes generating 2330 a query graph for the set of tokens using a finite state machine representing a query grammar. 
Nodes of the finite state machine represent token types. Directed edges of the finite state machine represent valid transitions between token types in the query grammar. An example of a finite state machine 2100 is depicted in the state diagram of FIG. 21. Vertices of the query graph correspond to respective tokens of the set of tokens. Directed edges of the query graph represent a transition between two tokens in a sequencing of the tokens. An example, of a query graph 2200 is depicted in FIG. 22. The query graph may be used to select a sequence tokens in the set of tokens as a database query that makes sense under the query grammar. For example, in forming a database query as a sequence of tokens, a directed edge of the query graph may be selected, which would cause the token of the vertex at the source of the selected directed edge to immediately precede the token of the vertex at the destination of the selected directed edge in the resulting sequence of tokens). providing, to a ... natural language model, representations of the causal graph (Paragraph Number [0368] teaches some implementations include translation of a string (e.g., including natural language text) into a database query. For example, in order to generate a database query from a natural language question or command multiple stages of transformation may be implemented. The following diagram describes an example workflow for query translation. Paragraph Number [0500] teaches given a finite state machine (FSM) G which represents the underlying query grammar of a database syntax and a string (e.g., a natural language string) from a user, extract a query graph S from the finite state machine G consisting of tokens matched to fragments (e.g., words or sequences of words) of the string (e.g., a natural language string) from a user. The query graph S may have nodes corresponding to the words in the user string and directed edges which are derived from the finite state machine G. 
In some implementations, the query graph S may be used to find a desired (e.g., an optimal) arrangement of tokens (e.g., corresponding to user words) which are compatible with the underlying query grammar by first removing cycles in the query graph using a greedy heuristic (e.g., the Eades algorithm, described in Eades, Peter, et al., “A Fast & Effective Heuristic for the Feedback Arc Set Problem,” Information Processing Letters, Volume 47, Issue 6, 18 Oct. 1993, pp. 319-323) and then finding a topological sort of the nodes in S on which a modified Dijkstra algorithm is applied to find the sequence of words (i.e., a graph tour T where each node is visited exactly once) with the maximum score). and a prompt requesting that the ...natural language model (Paragraph Number [0257] teaches the technique 700 includes receiving 710, via a user interface (e.g., a webpage), an indication that a string entered via the user interface matches a database query. In some implementations, the indication includes receiving the string via the user interface in response to a prompt presented in the user interface, wherein the prompt requested a natural language translation of the database query. Paragraph Number [0271] teaches there are several sources from which inferences about the dataset can be derived. For example, the set of sources may include: (1.) Refinements: The refinements can come from different sources, such as, (a) natural language string to database query translation refinements (e.g., captured through interactions with the like icon 170, or through answers to questions posed to the user in a prompt of the user interface)). 
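The quoted Yadav passages rely on finding a topological sort of the query-graph nodes after cycles are removed. As a rough, self-contained illustration of just that one step (the token graph is invented; the Eades cycle-removal and modified-Dijkstra scoring steps are omitted), Kahn's algorithm orders a small acyclic token graph:

```python
# Toy illustration of topologically sorting a query graph's tokens using
# Kahn's algorithm. The graph below is an invented example, not Yadav's data.
from collections import deque

def topological_order(nodes, edges):
    # Count incoming edges for each node.
    indegree = {n: 0 for n in nodes}
    for _, dst in edges:
        indegree[dst] += 1
    # Repeatedly emit a node with no remaining incoming edges.
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for src, dst in edges:
            if src == n:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    queue.append(dst)
    return order

tokens = ["top", "product", "by", "profit"]
edges = [("top", "product"), ("product", "by"), ("by", "profit")]
print(topological_order(tokens, edges))   # ['top', 'product', 'by', 'profit']
```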
Yadav teaches determining metrics in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: causalities between features in the causal graph (Paragraph Number [0047] teaches a simplified diagram of a method 400 for generating a causal knowledge graph, according to some embodiments. One or more of the processes of method 400 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the processes. In some embodiments, method 400 corresponds to the operation of RCA module 130 (FIG. 1) to perform the task of generating the causal knowledge graph for root cause analysis. As illustrated, the method 400 includes a number of enumerated steps, but aspects of the method 400 may include additional steps before, after, and in between the enumerated steps. In some respects, one or more of the enumerated steps may be omitted or performed in a different order). identify inefficiencies in the process (Paragraph Number [0024] teaches the statistics of the original PRB documents as well as the extracted information is shown in FIG. 6. In other words, when a new incident is detected, the RCA analysis would often need to repeat the processes 102-112 to identify the root cause, which largely compromise the system efficiency. Paragraph Number [0028] teaches a block diagram illustrating a retrieval-based RCA pipeline 200 using PRB, according to embodiments described herein. In view of a lack of automated RCA pipeline that streamlines the RCA process, an AI-driven pipeline 200 of ICM to extract crisp causal information from the unstructured PRB documents (e.g., the highlighted snippets in FIG. 
1) and construct a causal knowledge graph from incident symptoms, root causes and resolutions. The pipeline 200 includes neural natural language processing, information extraction and knowledge mining components. The core of ICM uses rich distributional semantics of natural language to compactly represent prior knowledge and efficiently retrieve and reuse the appropriate information from them to avoid the cold-start problem when investigating repeating or similar issues). obtaining, from the generative natural language model, indications of an inefficiency in the process (Paragraph Number [0024] teaches the statistics of the original PRB documents as well as the extracted information is shown in FIG. 6. In other words, when a new incident is detected, the RCA analysis would often need to repeat the processes 102-112 to identify the root cause, which largely compromise the system efficiency. Paragraph Number [0028] teaches a block diagram illustrating a retrieval-based RCA pipeline 200 using PRB, according to embodiments described herein. In view of a lack of automated RCA pipeline that streamlines the RCA process, an AI-driven pipeline 200 of ICM to extract crisp causal information from the unstructured PRB documents (e.g., the highlighted snippets in FIG. 1) and construct a causal knowledge graph from incident symptoms, root causes and resolutions. The pipeline 200 includes neural natural language processing, information extraction and knowledge mining components. The core of ICM uses rich distributional semantics of natural language to compactly represent prior knowledge and efficiently retrieve and reuse the appropriate information from them to avoid the cold-start problem when investigating repeating or similar issues). Both Yadav and Saha are directed to natural language modelling. Yadav discloses determining metrics in a data model by analyzing a causal graph through application of a natural language model. 
Saha improves upon Yadav by disclosing determining causal relationships for inefficiencies. One of ordinary skill in the art would be motivated to further include determining causal relationships for inefficiencies, to efficiently utilize relationships in a causal graph to determine where changes can be made in a process to increase effectiveness and efficiencies in a process. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining metrics in a data model by analyzing a causal graph through application of a natural language model in Yadav to further utilize determining causal relationships for inefficiencies as disclosed in Saha, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Yadav teaches determining metrics in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining the specific efficiency as a threshold of limits or time as described by the following citations from Park: in response to receiving the indications of the inefficiency in the process, automatically modifying a type or amount of hardware or software that performs the process. (Paragraph Number [0073] teaches it should be appreciated that the cloud-based platform 20 discussed above provides an example architecture that may utilize NLU technologies. In particular, the cloud-based platform 20 may include or store a large corpus of source data that can be mined, to facilitate the generation of a number of outputs, including an intent/entity model. 
For example, the cloud-based platform 20 may include ticketing source data having requests for changes or repairs to particular systems, dialog between the requester and a service technician or an administrator attempting to address an issue, a description of how the ticket was eventually resolved, and so forth. Then, the generated intent/entity model can serve as a basis for classifying intents in future requests, and can be used to generate and improve a conversational model to support a virtual agent that can automatically address future issues within the cloud-based platform 20 based on natural language requests from users. As such, in certain embodiments described herein, the disclosed agent automation framework is incorporated into the cloud-based platform 20, while in other embodiments, the agent automation framework may be hosted and executed (separately from the cloud-based platform 20) by a suitable system that is communicatively coupled to the cloud-based platform 20 to process utterances, as discussed below). Both the combination of Yadav and Saha and Park are directed to natural language modelling. The combination of Yadav and Saha discloses determining metrics in a data model by analyzing a causal graph through application of a natural language model. Park improves upon the combination of Yadav and Saha by disclosing determining the specific efficiency as a threshold of limits or time. One of ordinary skill in the art would be motivated to further include determining the specific efficiency as a threshold of limits or time, to efficiently determine specific forms of efficiency that are particular to the situation needed. 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining metrics in a data model by analyzing a causal graph through application of a natural language model in the combination of Yadav and Saha to further utilize determining the specific efficiency as a threshold of limits or time as disclosed in Park, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Yadav teaches determining metrics in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach utilizing a generative natural language model as described by the following citations from Boxwell: generative natural language model (Paragraph Number [0052] teaches during a training of the QA supplement system 120, each edge of the knowledge graph stored in the QA element data structure 126 is analyzed by a Generative Language Model (GLM) engine 129 to identify relationships between the linked entities and to train a corresponding generative language model 132 to generate one or more natural language passages representing the relationships between the syntactic entities corresponding to a particular edge of the analyzed knowledge graph. Advantageously, as noted above, generative language model 132 may employ an n-best generation scheme described below). Both the combination of Yadav, Saha, and Park and Boxwell are directed to natural language modelling. The combination of Yadav, Saha, and Park discloses determining metrics in a data model by analyzing a causal graph through application of a natural language model. 
Boxwell improves upon the combination of Yadav, Saha, and Park by disclosing utilizing a generative natural language model. One of ordinary skill in the art would be motivated to further include utilizing a generative natural language model, to efficiently identify relationships between the linked entities and to train a corresponding generative language model to generate one or more natural language passages representing the relationships. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining metrics in a data model by analyzing a causal graph through application of a natural language model in the combination of Yadav, Saha, and Park to further utilize a generative natural language model as disclosed in Boxwell, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 15, Yadav teaches: A non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations comprising (Paragraph Number [0436] teaches the user device 1820 may be a computing device, such as the computing device 1900 shown in FIG. 19. Although one user device 1820 is shown for simplicity, multiple user devices may be used. A user may use the user device 1820 to access the database analysis server 1830. The user device 1820 may comprise a personal computer, computer terminal, mobile device, smart phone, electronic notebook, or the like, or any combination thereof. 
The user device 1820 may communicate with the database analysis server 1830 via an electronic communication medium 1822, which may be a wired or wireless electronic communication medium. For example, the electronic communication medium 1822 may include a LAN, a WAN, a fiber channel network, the Internet, or a combination thereof). The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1. As per claim 20, Yadav teaches: A system comprising: one or more processors; and memory, containing program instructions that, upon execution by the one or more processors, cause the system to perform operations comprising (Paragraph Number [0436] teaches the user device 1820 may be a computing device, such as the computing device 1900 shown in FIG. 19. Although one user device 1820 is shown for simplicity, multiple user devices may be used. A user may use the user device 1820 to access the database analysis server 1830. The user device 1820 may comprise a personal computer, computer terminal, mobile device, smart phone, electronic notebook, or the like, or any combination thereof. The user device 1820 may communicate with the database analysis server 1830 via an electronic communication medium 1822, which may be a wired or wireless electronic communication medium. For example, the electronic communication medium 1822 may include a LAN, a WAN, a fiber channel network, the Internet, or a combination thereof). The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1. As per claim 2, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. 
Yadav teaches determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: wherein the process is an incident management workflow (Paragraph Number [0029] teaches anomaly incident detection 202 may be performed through various multivariate time-series analysis 203 of the key performance indices (e.g., APT). At stage 204, different hand-crafted static workflows may be auto-triggered to analyze related performance metrics via traffic/database/CPU analysis 104a-c, targeted at detecting the incident symptom 205. The generated symptom 205 is then sent to a searchable index database 219 as input query for searching the past incidents with the detected symptom description). and wherein the event logs record changes to incidents as they progress through the incident management workflow (Paragraph Number [0032] teaches the infrastructure of the auto-pipeline 200 may yield a holistic RCA engine over multimodal multi-source data like log data, memory dumps, execution traces and time series. By unifying the causal knowledge from multiple data sources, the pipeline 200 thus can yield a richer incident representation and model cross-modal causality to potentially discover unknown root causes). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. As per claim 3, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. In addition, Yadav teaches: wherein the features are represented as nodes in the causal graph (Paragraph Number [0178] teaches a probabilistic graphical model may be implemented where a node is built from each match found in the matching 510 phase. These nodes are assigned some prior probabilities based on the relative match scores. 
In some implementations, the scores are normalized so that they add up to 1.0 and can be treated as probabilities. For example, the probabilistic graphical model 1200 of FIG. 12 may be used for ranking 550). As per claims 4 and 21, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claims 1 and 15 respectively. In addition, Yadav teaches: generating the features from the static data and the dynamic data (Paragraph Number [0099] teaches a goal of the system for providing a database interface may be to, given a string (full or partial natural language question), produce the following: (1) Context sensitive suggestions; (2) Possible N translations of the question into a database query (e.g., possibly generating a formula and/or query on query workflows); (3) Confidence Score around how confident the system is that the question has properly translated to a database query; and (4) Clarifying questions that provide a way to take input from the user about what they mean. For example, the question may be “top product by profit” but the data set may not contain profit column and contain an “item” column instead. Or, the profit column may not be defined and may require someone to define it as revenue−cost. Paragraph Number [0527] teaches patterns (e.g., static patterns or dynamic patterns) may be applied to the string and/or the set of tokens to modify the query graph. The technique 2400 includes determining 2420 that the string and the set of tokens matches a pattern. For example, the set of tokens or a subset of the set of tokens may satisfy a set token type constraints associated with (e.g., stored with) a pattern. For example, the string fragments of the string matched to tokens of the set of tokens may satisfy a set of text constraints (e.g., a regular expression) associated with the pattern). As per claims 5 and 22, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claims 1 and 15. 
In addition, Yadav teaches: wherein the features include representations of: time that the work items spend in various states of the process, classes or categories of the work items, whether particular types of database entries are attached to the work items, or cycles exhibited by the work items as they progress through the process (Paragraph Number [0174] teaches at the match level: Matches are first sorted by their type; that is exact, prefix, suffix, substring, approximate_prefix, approximate and within each type, sorted by their score. In some implementations, the match type may be a categorical feature and a weight associated with a match type may be learned independently. For example, the match type may be weighted as a feature. (Examiner asserts that this section teaches at least the alternative of particular types of database entries are attached to the work items)). As per claims 6 and 16, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claims 1 and 15 respectively. Yadav teaches determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: wherein generating the causal graph comprises: classifying at least some of the features as either coarse-grained because their values were known when an associated work item was created, fine-grained because their values became known during performance of the process on the associated work item, or target because their values are observable outcomes of the process (Paragraph Number [0028] teaches a block diagram illustrating a retrieval-based RCA pipeline 200 using PRB, according to embodiments described herein. 
In view of a lack of automated RCA pipeline that streamlines the RCA process, an AI-driven pipeline 200 of ICM to extract crisp causal information from the unstructured PRB documents (e.g., the highlighted snippets in FIG. 1) and construct a causal knowledge graph from incident symptoms, root causes and resolutions. The pipeline 200 includes neural natural language processing, information extraction and knowledge mining components. The core of ICM uses rich distributional semantics of natural language to compactly represent prior knowledge and efficiently retrieve and reuse the appropriate information from them to avoid the cold-start problem when investigating repeating or similar issues. Paragraph Number [0060] teaches a few examples of the root cause and remedial actions respectively extracted from the raw Post Action Review and Immediate Resolution data field of various past incidents. Some of the salient observations about the model predictions are i) the unsupervised QA models are able to extract relevant root cause or resolution spans, despite the long unstructured nature of the document having abundant unseen technical jargon ii) the model does not show any undesirable stark bias towards passage location in span selection iii) the selected spans are well-formed crisp phrases that are short yet self-explanatory. In fact, this performance is particularly appreciable as the use case only targets “cause/effect” seeking questions that rarely occur in the pretraining QA datasets. (Examiner asserts that this section teaches at least the alternative of target because their values are observable outcomes of the process)). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. As per claims 7, and 17, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claims 1 and 6 and 15 and 16 respectively. 
Yadav teaches determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: wherein generating the causal graph further comprises: applying prior constraints to structure of the causal graph (Paragraph Number [0014] teaches artificial intelligence (AI) models-powered methods and systems for the generation a causal knowledge graph for root cause analysis of incidents disrupting cloud services, and the generation of an incident report of a service disrupting incident based on said causal knowledge graph. [0046] Thus, the retrieval-based RCA may generate a query specific causal knowledge subgraph 210, which gives an interactive visualization of subgraph over the symptoms, root causes and resolutions associated with the top-K retrieved search results. With this, users can get an extensive global view of the causal structure underlying the past similar incidents and arbitrarily navigate to other related nodes in the overall graph. For example, the subgraph 210 may be used to generate an output of a distribution of the suggested resolutions 322 and the detected root causes 324). wherein the prior constraints include: fine-grained features not causing coarse-grained features, the coarse-grained features not causing other coarse-grained features, and target features not causing either the fine-grained features or the coarse-grained features (Paragraph Number [0028] teaches a block diagram illustrating a retrieval-based RCA pipeline 200 using PRB, according to embodiments described herein. In view of a lack of automated RCA pipeline that streamlines the RCA process, an AI-driven pipeline 200 of ICM to extract crisp causal information from the unstructured PRB documents (e.g., the highlighted snippets in FIG. 
1) and construct a causal knowledge graph from incident symptoms, root causes and resolutions. The pipeline 200 includes neural natural language processing, information extraction and knowledge mining components. The core of ICM uses rich distributional semantics of natural language to compactly represent prior knowledge and efficiently retrieve and reuse the appropriate information from them to avoid the cold-start problem when investigating repeating or similar issues. Paragraph Number [0060] teaches a few examples of the root cause and remedial actions respectively extracted from the raw Post Action Review and Immediate Resolution data field of various past incidents. Some of the salient observations about the model predictions are i) the unsupervised QA models are able to extract relevant root cause or resolution spans, despite the long unstructured nature of the document having abundant unseen technical jargon ii) the model does not show any undesirable stark bias towards passage location in span selection iii) the selected spans are well-formed crisp phrases that are short yet self-explanatory. In fact, this performance is particularly appreciable as the use case only targets “cause/effect” seeking questions that rarely occur in the pretraining QA datasets. (Examiner asserts that this section teaches at least the alternative of target because their values are observable outcomes of the process)). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. As per claim 8, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. 
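The tiered causal-ordering constraints recited for claims 7 and 17 (coarse-grained, fine-grained, and target features) amount to a pre-filter on candidate edges before any structure learning runs. The sketch below is purely illustrative: the tier labels, variable names, and helper functions are invented here and appear in neither the claims nor the cited references.

```python
# Illustrative sketch (not from any cited reference): enforce the tiered
# causal ordering from the claim language as a filter on candidate edges.
#   coarse = values known when the work item was created
#   fine   = values learned while the process ran
#   target = observable outcomes of the process
# Constraints: fine-grained features may not cause coarse-grained ones,
# coarse-grained features may not cause each other, and target features
# may not cause anything.

def edge_allowed(cause_tier: str, effect_tier: str) -> bool:
    """Return True if a cause -> effect edge respects the prior constraints."""
    if cause_tier == "target":
        return False                 # targets cause nothing upstream
    if effect_tier == "coarse":
        return False                 # no tier is permitted to cause a coarse feature
    return True

def filter_candidate_edges(edges, tiers):
    """Drop candidate (cause, effect) pairs that violate the tier constraints."""
    return [(u, v) for (u, v) in edges if edge_allowed(tiers[u], tiers[v])]
```

On hypothetical incident-style features, this keeps a priority (coarse) -> reassignment-count (fine) edge while discarding the reverse, which is the asymmetry the claim language imposes.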
In addition, Yadav teaches: wherein generating the causal graph comprises: applying expert-derived constraints to structure of the causal graph (Paragraph Number [0500] teaches given a finite state machine (FSM) G which represents the underlying query grammar of a database syntax and a string (e.g., a natural language string) from a user, extract a query graph S from the finite state machine G consisting of tokens matched to fragments (e.g., words or sequences of words) of the string (e.g., a natural language string) from a user. The query graph S may have nodes corresponding to the words in the user string and directed edges which are derived from the finite state machine G. In some implementations, the query graph S may be used to find a desired (e.g., an optimal) arrangement of tokens (e.g., corresponding to user words) which are compatible with the underlying query grammar by first removing cycles in the query graph using a greedy heuristic (e.g., the Eades algorithm, described in Eades, Peter, et al., “A Fast & Effective Heuristic for the Feedback Arc Set Problem,” Information Processing Letters, Volume 47, Issue 6, 18 Oct. 1993, pp. 319-323) and then finding a topological sort of the nodes in S on which a modified Dijkstra algorithm is applied to find the sequence of words (i.e., a graph tour T where each node is visited exactly once) with the maximum score). As per claim 9, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. In addition, Yadav teaches: wherein the dependencies are conditional probabilities (Paragraph Number [0381] teaches this probabilistic graphical model approach may be used to build a node from each match that we find in the matching phase of a database query determination for a string. These nodes are assigned some prior probabilities based on the relative match scores. In some implementations, scores are normalized so that they add up to 1.0 and they may be treated as probabilities. 
Then edges may be added between the nodes. For example, an edge may be added from node S to node T iff, the input text that matches T immediately comes after the input text that matches S. In some implementations, conditional probabilities may be assigned to the edges representing state transitions, which says that given that we are at State S, what is the probability that the next node will be T. So, all the probabilities across all nodes going out from S may add up to 1). As per claim 10, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. In addition, Yadav teaches: performing a causal inference technique on the causal graph (Paragraph Number [0303] teaches a flowchart illustrating an example of a technique 900 for learning inferences from a string and an associated query. The technique 900 includes extracting 910 an inference pattern from a string and a database query; classifying 920 the inference pattern to determine an inference type; determining 930 a resolution, wherein the resolution includes one or more tokens of a database syntax; identifying 940 a set of context features for the inference pattern, wherein the set of context features includes words from the string and tokens from the database query; determining 950 a confidence score based on the set of context features; and storing 960 an inference record in an inference store, wherein the inference record includes the set of context features, the resolution, and the confidence score. For example, the technique 900 may be implemented by the database analysis server 1830 of FIG. 18. For example, the technique 900 may be implemented using the computing device 1900 of FIG. 19). 
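The Yadav passages quoted above (Paragraph Numbers [0178] and [0381]) describe two normalization steps: raw match scores are scaled to sum to 1.0 and treated as node priors, and each node's outgoing edges carry conditional probabilities P(next = T | current = S) that likewise sum to 1. A minimal sketch of that bookkeeping follows; the function names and example values are ours, not Yadav's.

```python
# Illustrative sketch of the quoted normalization: match scores become node
# priors, and per-source edge weights become conditional transition
# probabilities. All names and numbers here are hypothetical.

def normalize_scores(scores):
    """Scale raw match scores so they sum to 1.0 and can act as priors."""
    total = sum(scores.values())
    return {node: s / total for node, s in scores.items()}

def normalize_transitions(edges):
    """For each source node, scale outgoing edge weights to sum to 1.0,
    yielding conditional probabilities P(next = T | current = S)."""
    out = {}
    for (src, dst), weight in edges.items():
        out.setdefault(src, {})[dst] = weight
    return {src: {dst: w / sum(dsts.values()) for dst, w in dsts.items()}
            for src, dsts in out.items()}

priors = normalize_scores({"exact": 3.0, "prefix": 1.0})
transitions = normalize_transitions({("S", "T"): 2.0, ("S", "U"): 2.0})
```

With these toy inputs the priors come out as 0.75 and 0.25, and the two edges leaving S each carry probability 0.5, matching the "add up to 1" property the reference describes.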
Yadav teaches determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: to simulate effects of making changes to the process (Paragraph Number [0033] teaches an example block diagram 300 illustrating a framework for incident causation mining pipeline with downstream incident search and retrieval-based RCA over PRB data, according to one embodiment described herein. Diagram 400 shows the ICM pipeline over the repository of PRB documents, serving as an isolated Incident Search tool specialized for RCA. An example use case starting with detection of an ongoing incident and its symptom through time-series analysis and rule-based workflows. This auto-triggers the Incident Search and Retrieval based RCA for the query symptom and outputs an auto-generated Incident Report consisting of i) detected symptom ii) relevant past incidents matching the symptom iii) distribution of the most likely root causes and remedial actions to temporarily resolve the issue. In some embodiments, an incident report may be generated for the anomalous incident listing some or all of the retrieved root causes as the root causes of the anomalous incident). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. As per claim 12, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 1. 
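The "simulate effects of making changes to the process" limitation of claim 10, as Examiner maps it to Saha, corresponds to forward simulation on a fitted causal graph: fix a feature at a hypothetical value, then recompute its descendants. The toy sketch below assumes linear structural equations; every variable name, intercept, and coefficient is invented for illustration and comes from no cited reference.

```python
# Toy sketch: forward-simulate a hypothetical process change on a small
# causal DAG with assumed linear structural equations. All names, intercepts,
# and coefficients are invented for illustration.
INTERCEPT = {"queue_time": 10.0, "resolution_time": 2.0}
EFFECTS = {                         # (cause, effect) -> linear coefficient
    ("staffing", "queue_time"): -2.0,
    ("queue_time", "resolution_time"): 1.5,
}
ORDER = ["staffing", "queue_time", "resolution_time"]  # topological order

def simulate(baseline, changes):
    """Fix the changed features, then recompute descendants in topological order."""
    values = dict(baseline)
    values.update(changes)          # the hypothetical change being simulated
    for node in ORDER:
        if node in changes or node not in INTERCEPT:
            continue                # changed and exogenous nodes keep their values
        values[node] = INTERCEPT[node] + sum(
            coef * values[cause]
            for (cause, effect), coef in EFFECTS.items() if effect == node)
    return values
```

Comparing `simulate(baseline, {})` against `simulate(baseline, {"staffing": 3.0})` then shows the downstream effect of the change on the outcome variable, which is the kind of what-if analysis the limitation recites.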
Yadav teaches determining inefficiencies in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships for inefficiencies as described by the following citations from Saha: wherein the natural language model is a large language model (Paragraph Number [0042] teaches the generated document level information from modules 305, 307, 309 (including 309a-c) may then be sent to search index database 219. When a new incident occurs, a core Incident Management task is to efficiently search over the past related incidents and promptly detect the likely root causes based on the past similar investigations. Hence, a specialized Neural Search and Retrieval system 315 is built over PRB data, that support any open-ended Natural Language Query. Neural search functions by representing documents as dense, high-dimensional real-valued vectors and constructing a searchable index over these representations, which will allow fast retrieval of the most relevant documents for any open-ended query. Large-scale pretrained neural language models make it possible to represent general linguistic semantics and match domain specific text even without any in-domain training). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. As per claims 13 and 18, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claims 1 and 15 respectively. 
Yadav teaches determining metrics in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining the specific efficiency as a threshold of limits or time as described by the following citations from Park: wherein the inefficiency in the process is an outcome of the process taking more than a threshold amount of time to achieve, time spent in a state of the process being more than a further threshold amount of time, or the work items cycling between states of the process (Paragraph Number [0199] teaches the process 1300 continues with the scoring weight optimization subsystem 1092 deciding whether the current value of the objective function is greater than or equal to a predefined threshold or if any limits have been reached (decision block 1314). For example, the scoring weight optimization subsystem 1092 may retrieve threshold values and/or limit values from a configuration of the scoring weight optimization subsystem 1092 or the lookup source system 1016, or may receive these values as user-provided inputs to the process 1300 along with the training data 1302. The threshold value dictates the value of the objective function that should be reached or exceeded to indicate that the scoring weight values have been sufficiently optimized. In certain embodiments, a default value may be used (e.g., 90%). The limit values may be other constraints applied to the process 1300, such as a time limit, a memory size limit, a number of iterations limit, and so forth). A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 1. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2021/0019309 to Yadav et al. (hereafter referred to as Yadav) in view of U.S. Patent Application Publication Number 2022/0358005 to Saha et al. (hereafter referred to as Saha) in further view of U.S. 
Patent Application Publication Number 2022/0237383 to Park et al. (hereafter referred to as Park), in even further view of U.S. Patent Application Publication Number 2020/0218988 to Boxwell et al. (hereafter referred to as Boxwell) and in even further view of U.S. Patent Application Publication Number 2019/0236464 to Feinson et al. (hereafter referred to as Feinson). As per claim 11, the combination of Yadav, Saha, Park, and Boxwell teaches each of the limitations of claim 10. Yadav teaches determining metrics in a data model by analyzing a causal graph through application of a natural language model but does not explicitly teach determining causal relationships using do-calculus as described by the following citations from Feinson: wherein the causal inference technique comprises do-calculus (Paragraph Number [0175] teaches one or more databases (or portions thereof) may include one or more graph databases (e.g., a directed graph concept and data structure). In some embodiments, a graph associated with the AI entity (described herein) may include information from knowledge database 134 and affective concepts database 138 (and/or growth/decay factor database 136 or other databases), and the AI entity may query the graph (also referred to herein as “the ontology-affect graph”) to process inputs, generate responses, or perform other operations. In some embodiments, ontological categories and entries (e.g., from knowledge database 134 or other sources) may be grounded by semantically meaningful visceral and affect information. In some embodiments, the graph may additionally be augmented with probabilistic information (e.g., similar to probabilistic information provided in a Bayesian factor graph) as well as casual information (e.g., via Judea Perl's do calculus and/or operator)). Both the combination of Yadav, Saha, Park, and Boxwell and Feinson are directed to natural language modelling. 
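The "do calculus" cited from Feinson is Judea Pearl's do-operator, under which an intervention do(X = x) is modeled by surgery on the causal graph: every edge pointing into X is deleted, so X no longer responds to its usual causes. A minimal graph-surgery sketch, with made-up node names:

```python
# Minimal sketch of Pearl-style graph surgery: the intervention do(X = x)
# is modeled by deleting all edges into X, so X's value is set exogenously
# rather than by its usual causes. Node names are illustrative only.

def do_surgery(edges, target):
    """Return the mutilated graph for an intervention on `target`:
    every (parent -> target) edge is removed; all other edges survive."""
    return [(u, v) for (u, v) in edges if v != target]

edges = [("load", "latency"), ("latency", "errors"), ("deploy", "errors")]
mutilated = do_surgery(edges, "errors")   # only ("load", "latency") remains
```

Effects are then read off the mutilated graph rather than the original one, which is what lets the technique separate direct and indirect effects as the motivation-to-combine rationale suggests.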
The combination of Yadav, Saha, Park, and Boxwell discloses determining metrics in a data model by analyzing a causal graph through application of a natural language model. Feinson improves upon the combination of Yadav, Saha, Park, and Boxwell by disclosing determining causal relationships using do-calculus. One of ordinary skill in the art would be motivated to further include determining causal relationships using do-calculus, to efficiently determine both direct and indirect effects from the causal graph. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining metrics in a data model by analyzing a causal graph through application of a natural language model in the combination of Yadav, Saha, Park, and Boxwell to further utilize determining causal relationships using do-calculus as disclosed in Feinson, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Response to Arguments Applicant’s arguments filed 1/28/2026 have been fully considered but they are not fully persuasive. Applicant argues that the claims are eligible under 35 USC 101. (See Applicant’s Remarks, 1/28/2026, pgs. 8). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulate the abstract concepts are recited in the above 35 USC 101. Implementing a knowledge graph and improving its functionality are abstract concepts. Being able to understand, add to, and manipulate a knowledge graph is additionally an abstract concept. 
Other than storing the knowledge graph in a computer database, the knowledge graph and its associated manipulations are wholly independent from computer technology. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations and provide for direction for either a person or machine to follow at some future time. The claims do not recite any particular use for these determinations and directions that improve upon the underlying computer technology (in this instance the computer software, processor, and memory). Instead, Examiner asserts that the additional elements in the claim language are only used as implementation of the abstract concepts utilizing technology. The concepts described in the limitations when taken both as a whole and individually are not meaningfully different than those found by the courts to be abstract ideas and are similarly considered to be certain methods of organizing human activity such as managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. The steps are then encapsulated into a particular technological environment by executing these steps upon a computer processor and utilizing features such as a computer interface or sending and receiving data over a network or displaying information via a computerized graphical user interface. However, sending and receiving of information over a network and execution of algorithms on a computer are utilized only to facilitate the abstract concepts (i.e. selecting data on an interface, publishing/displaying information, etc.). 
As such, Examiner asserts that the implementation of the abstract concepts recited by the claims utilize computer technology in a way that is considered to be generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims. Applicant argues that the previously cited reference does not teach the newly amended portions including the new limitations recited by the independent claims. (See Applicant’s Remarks, 1/28/2026, pgs. 8-9). Examiner respectfully disagrees. Examiner notes that new citations from the previously cited references have been applied to the newly presented claim limitations as indicated in the above in the new 35 USC 103 rejection. Additionally, the new Boxwell reference was added. Examiner has added and emphasized specific portions of the Yadav, Saha, Park, and Boxwell references to read on the new independent claims. As such, Applicant’s arguments directed towards the previous rejection are moot. In response to Applicant’s arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 35 USC 103 rejection presented above. Conclusion Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H. DIVELBISS whose telephone number is (571) 270-0166. The fax phone number is 571-483-7110. The examiner can normally be reached on M-Th, 7:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.H.D/Examiner, Art Unit 3624 /Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

May 09, 2024
Application Filed
Nov 19, 2025
Non-Final Rejection — §101, §103
Jan 20, 2026
Interview Requested
Jan 26, 2026
Examiner Interview Summary
Jan 26, 2026
Applicant Interview (Telephonic)
Jan 28, 2026
Response Filed
Feb 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572889
Optimization of Large-scale Industrial Value Chains
2y 5m to grant • Granted Mar 10, 2026
Patent 12503000
OPTIMIZATION PROCEDURE FOR THE ENERGY MANAGEMENT OF A SOLAR ENERGY INSTALLATION WITH STORAGE MEANS IN COMBINATION WITH THE CHARGING OF AN ELECTRIC VEHICLE AND SYSTEM
2y 5m to grant • Granted Dec 23, 2025
Patent 12493860
WASTE MANAGEMENT SYSTEM AND METHOD
2y 5m to grant • Granted Dec 09, 2025
Patent 12482011
FAMILIARITY DEGREE ESTIMATION APPARATUS, FAMILIARITY DEGREE ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant • Granted Nov 25, 2025
Patent 12450574
METHOD FOR WASTE MANAGEMENT UTILIZING ARTIFICAL NEURAL NETWORK SYSTEM
2y 5m to grant • Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
23%
Grant Probability
46%
With Interview (+23.4%)
4y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
