Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in response to the amendments filed 11/13/2025. Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 have been amended; claims 3, 6, 11, 14, and 19 have been cancelled. Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 are currently pending.
Response to Arguments
Claims 3, 6, 11, 14, and 19 have been cancelled; therefore, the rejections of claims 3, 6, 11, 14, and 19 no longer stand.
Applicant’s arguments regarding the 101 rejection have been fully considered but are not persuasive. Applicant argues that the limitations directed to “receiving an uncertain knowledge base”, “generating an acyclic directed trigger graph from the uncertain knowledge base by executing the plurality of rules to derive one or more new facts”, and “storing the computed probabilities in the trigger graph” could not practically be performed in the human mind. Examiner notes that the limitations directed to “receiving an uncertain knowledge base” and “storing the computed probabilities in the trigger graph” were not interpreted as mental processes, but rather as additional elements. Examiner respectfully disagrees that the limitation directed to “generating an acyclic directed trigger graph from the uncertain knowledge base by executing the plurality of rules to derive one or more new facts” could not practically be performed in the human mind. The claim does not recite specific technical or computer-implemented steps that distinguish this process from the way a person could generate an acyclic directed graph from an observed uncertain knowledge base in their mind, potentially assisted by pen and paper (see MPEP 2106.04(a)(2)(III)), by mentally executing rules to derive one or more new facts.
Applicant also argues that any claimed judicial exceptions are integrated into a practical application “by providing a solution for executing a probabilistic program”, and further cites paragraph [0051] of the specification to argue that the claimed subject matter provides a solution “for deriving knowledge from the uncertain KB 211, which may reduce execution time and memory consumption”, thereby providing improvements over other artificial intelligence and machine learning technologies. Examiner respectfully disagrees and notes that Applicant’s alleged improvement is to the process of “deriving knowledge from an uncertain knowledge base”, which was interpreted as a judicial exception directed to a mental step. As per MPEP 2106.05(a), “the judicial exception alone cannot provide the improvement”. Applicant has not shown in the remarks, and the claims do not reflect, how an improvement is provided by any additional elements or by a combination of judicial exceptions and additional elements as required by the MPEP. The 101 rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.
Applicant’s arguments regarding the prior art rejection have been fully considered but are moot in view of the new ground(s) of rejection. Applicant argues that the prior art references do not teach generating an “acyclic directed” trigger graph, wherein the derivation history comprises “at least one derivation tree comprising a root node corresponding to the one or more new facts and a plurality of leaf nodes corresponding to one or more of the plurality of probabilistic facts”, or computing probabilities of facts for the derivation tree “by calculating a probability for the root node based on associated probabilities of the plurality of leaf nodes”. Examiner notes that the Goldberg reference has been applied to teach these limitations in combination with the previously applied Urbani and Block references. The prior art rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 101. Claims 1-2, 4-5, and 7-8 are directed to a method, claims 9-10, 12-13, and 15-16 are directed to a system, and claims 17-18 and 20 are directed to a non-transitory data carrier; therefore, claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). However, claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 fall within the judicial exception of an abstract idea, specifically the abstract ideas of “Mental Processes” (including observation, evaluation, and opinion) and “Mathematical Concepts” (including mathematical calculations and relationships).
Claim 1:
Claim 1 is directed to a method; therefore, the claim does fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Claim 1 recites the following abstract ideas:
generating an acyclic directed trigger graph from the uncertain knowledge base by executing the plurality of rules to derive one or more new facts, wherein each node of the trigger graph is associated with a rule of the plurality of rules, and wherein each node of the trigger graph stores a derivation history of the node; the derivation history comprising at least one derivation tree comprising a root node corresponding to one of the one or more new facts and a plurality of leaf nodes corresponding to one or more of the plurality of probabilistic facts (mental step directed to observation, evaluation – a person could generate an acyclic directed trigger graph in their mind, potentially assisted by pen and paper (see MPEP 2106.04(a)(2)(III)), from an observed uncertain knowledge base, observed or determined rules, and an observed or determined derivation history comprising at least one root node and a plurality of leaf nodes corresponding to probabilistic facts);
and computing probabilities of the one or more new facts for the at least one derivation tree by calculating a probability for the root node based on associated probabilities of the plurality of leaf nodes (mental step directed to evaluation – a person could compute probabilities of observed or determined derived new facts for at least one derivation tree by mentally calculating a probability for a root node based on observed or determined probabilities of leaf nodes).
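For illustration only (this sketch is not drawn from the claims or the cited references; the data structures and the independence assumption for leaf facts are hypothetical), the root-from-leaves probability computation described above can be expressed as:

```python
# Illustrative sketch: computing a derivation-tree root probability from
# leaf probabilities, assuming the leaf facts are independent and the
# derived fact requires all leaf facts to hold (conjunction).
from __future__ import annotations
from dataclasses import dataclass, field
from functools import reduce

@dataclass
class Node:
    fact: str
    prob: float | None = None           # known for leaves, computed for the root
    children: list[Node] = field(default_factory=list)

def root_probability(root: Node) -> float:
    """Multiply leaf probabilities up the tree (independence assumption)."""
    if not root.children:               # leaf: probabilistic fact from the KB
        return root.prob
    probs = [root_probability(c) for c in root.children]
    root.prob = reduce(lambda a, b: a * b, probs, 1.0)
    return root.prob

# Two leaf facts with probabilities 0.9 and 0.8 derive a new fact.
tree = Node("derivedFact", children=[Node("factA", 0.9), Node("factB", 0.8)])
print(round(root_probability(tree), 4))  # 0.72
```

The conjunction-under-independence rule is only one possible semantics; a disjunctive derivation would instead combine leaf probabilities as 1 - Π(1 - pᵢ).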
Claim 1 recites the following additional elements:
receiving an uncertain knowledge base, the uncertain knowledge base comprising a plurality of probabilistic facts, each probabilistic fact having an associated probability, wherein the uncertain knowledge base is a graph knowledge base, the probabilistic facts are relationships represented by edges linking nodes representing entities, and the associated probability is the weight of an edge; receiving a plurality of rules, the plurality of rules for deriving new facts from the plurality of probabilistic facts; storing the computed probabilities in the trigger graph in association with corresponding nodes of the trigger graph; receiving a user query; and providing an answer to the user query based on the derived one or more new facts.
Receiving an uncertain graph knowledge base, receiving a plurality of rules, receiving a query, and providing an answer are all interpreted as transmitting and receiving data over a network. The limitations specifying that the uncertain knowledge base comprises a plurality of probabilistic facts, each probabilistic fact having an associated probability, that the uncertain knowledge base is a graph knowledge base, that the probabilistic facts are relationships represented by edges linking nodes representing entities, and that the associated probability is the weight of an edge are interpreted as further description of the kind of uncertain graph knowledge base that can be received over a network. Storing the computed probabilities in the trigger graph is interpreted as storing information in memory. These additional elements do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (see MPEP 2106.05(d)(II)).
Claim 9 is a system claim and its limitation is included in claim 1. The only difference is that claim 9 requires a system. Therefore, claim 9 is rejected for the same reasons as claim 1.
Claim 17 is a non-transitory data carrier claim and its limitation is included in claim 1. The only difference is that claim 17 requires a non-transitory data carrier. Therefore, claim 17 is rejected for the same reasons as claim 1.
The independent claims are not patent eligible.
Dependent claims 2, 4-5, 7-8, 10, 12-13, 15-16, 18, and 20 when analyzed as a whole are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea, as they recite further embellishment of the judicial exception.
Claim 2 recites wherein generating the trigger graph comprises incrementally generating the trigger graph, a round k of generating the trigger graph comprising: constructing a trigger graph of depth k by adding nodes to a trigger graph of round k-1; executing the rules associated with the nodes present in the trigger graph at depth k, and storing the derivation history of the knowledge in the trigger graph at depth k (mental step directed to observation, evaluation – a person could generate a trigger graph incrementally in their mind, potentially assisted by pen and paper (see MPEP 2106.04(a)(2)(III)), by adding nodes to the graph and executing rules associated with the nodes of the graph in their mind). Storing the derivation history of the knowledge in the trigger graph is interpreted as an additional element directed to storing information in memory, which does not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas (see MPEP 2106.05(d)(II)).
Claim 4 recites wherein a probabilistic fact of the probabilistic facts comprises a likelihood that a first person detected in an image is carrying out an activity; the rules comprise rules for determining whether a second person detected in an image is also carrying out the activity; and the one or more new facts include the likelihood that the second person detected in the image is also carrying out the activity (these limitations are interpreted as further description of the kind of data used to build the trigger graph in the limitations interpreted as mental steps in claim 1, as a person could mentally detect a person in an image and mentally determine the likelihood that the detected person is carrying out a given activity). These limitations are interpreted as a field of use of the trigger graph and do not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas (see MPEP 2106.05(h)).
Claim 5 recites wherein a probabilistic fact of the probabilistic facts comprises a likelihood that a first object detected in an image has a first label; the rules comprise rules for determining that a second object detected in the image has a second label; and the one or more new facts include the likelihood that the second object has the second label (these limitations are interpreted as further description of the kind of data used to build the trigger graph in the limitations interpreted as mental steps in claim 1, as a person could mentally detect an object in an image and mentally determine the likelihood that a detected object has a given label). These limitations are interpreted as a field of use of the trigger graph and do not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas (see MPEP 2106.05(h)).
Claim 7 recites wherein providing the answer to the user query comprises selecting a part of the uncertain knowledge base relevant to the user query, and wherein generating the trigger graph comprises generating the trigger graph based on the selected part of the uncertain knowledge base (mental step directed to observation, judgment – a person could select a relevant part of an observed knowledge base in their mind to provide an answer to an observed user query and generate a trigger graph in their mind, potentially assisted by pen and paper, based on the selected part of the knowledge base).
Claim 8 recites wherein the user query and the answer relate to an input image (this limitation is interpreted as a further description of the kind of data being transmitted and received over a network). This limitation is interpreted as a field of use of the trigger graph and does not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas (see MPEP 2106.05(h)).
Claim 10 is a system claim and its limitation is included in claim 2. Claim 10 is rejected for the same reasons as claim 2.
Claim 12 is a system claim and its limitation is included in claim 4. Claim 12 is rejected for the same reasons as claim 4.
Claim 13 is a system claim and its limitation is included in claim 5. Claim 13 is rejected for the same reasons as claim 5.
Claim 15 is a system claim and its limitation is included in claim 7. Claim 15 is rejected for the same reasons as claim 7.
Claim 16 is a system claim and its limitation is included in claim 8. Claim 16 is rejected for the same reasons as claim 8.
Claim 18 is a non-transitory data carrier claim and its limitation is included in claim 2. Claim 18 is rejected for the same reasons as claim 2.
Claim 20 is a non-transitory data carrier claim and its limitation is included in claim 4. Claim 20 is rejected for the same reasons as claim 4.
Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Urbani et al. (“Column-Oriented Datalog Materialization for Large Knowledge Graphs”, herein Urbani) in view of Block (US 20190163982 A1, herein Block), and further in view of Goldberg et al. (US 20060212279 A1, herein Goldberg).
Regarding claim 1, Urbani teaches a computer-implemented method for executing a probabilistic program comprising: [receiving an uncertain knowledge base], the uncertain knowledge base comprising a plurality of probabilistic facts, each probabilistic fact having an associated probability (pg. 258 left column para. 1 recites “Knowledge graphs (KGs) are widely used in industry and academia to represent large collections of structured knowledge”. Pg. 259 left column para. 5 recites “Knowledge graphs are often encoded in the RDF data model, which represents labelled graphs as sets of triples of the form subject, property, object. The simplest encoding of RDF data for Datalog is to use a ternary EDB (i.e., an extensional predicate) predicate triple to represent triples”. Pg. 259 left column para. 2 recites “An atom is an expression p(t) with p ∈ P and |t| = ar(p). A fact is a variable-free atom. A database instance is a finite set I of facts”. Pg. 259 right column para. 2 recites “consider a database I = {triple(a, hP,b), triple(b, hP,c), triple(hP, iO, pO)}. Iteratively applying rules (2)–(6) to I, we obtain the following new derivations in each step” (i.e., an uncertain knowledge base, or probabilistic database. Examiner notes that an “uncertain” knowledge base is interpreted as a knowledge base comprising probabilistic facts, or information and corresponding probabilities (see page 6 of Applicant’s specification))),
[receiving] a plurality of rules, the plurality of rules for deriving new facts from the plurality of probabilistic facts (pg. 259 left column para. 2-3 recite “A rule r is an expression of the form H ← B1, . . . , Bn where H and B1, . . . , Bn are head and body atoms, respectively. We assume rules to be safe: every variable in H must also occur in some Bi. A program is a finite set P of rules. Predicates that occur in the head of a rule are called intensional (IDB) predicates; all other predicates are extensional (EDB). IDB predicates must not appear in databases. Rules with at most one IDB predicate in their body are linear” (i.e., receiving a plurality of rules used to generate new facts));
generating an [acyclic directed] trigger graph from the uncertain knowledge base by executing the plurality of rules to derive one or more new facts, wherein each node of the trigger graph is associated with a rule of the plurality of rules (pg. 259 left column para. 2 recites “We define Datalog in the usual way; we assume a fixed signature consisting of an infinite set C of constant symbols, an infinite set P of predicate symbols, and an infinite set V of variable symbols. Each predicate p ∈ P is associated with an arity ar(p) ≥ 0. A term is a variable x ∈ V or a constant c ∈ C. We use symbols s, t for terms; x, y, z, v, w for variables; and a, b, c for constants. Expressions like t, x, and a denote finite lists of such entities. An atom is an expression p(t) with p ∈ P and |t| = ar(p). A fact is a variable-free atom. A database instance is a finite set I of facts. A rule r is an expression of the form H ← B1, . . . , Bn where H and B1, . . . , Bn are head and body atoms, respectively”. Pg. 259 left column para. 5 recites “Knowledge graphs are often encoded in the RDF data model, which represents labelled graphs as sets of triples of the form subject, property, object. The simplest encoding of RDF data for Datalog is to use a ternary EDB (i.e., an extensional predicate) predicate triple to represent triples” (i.e., a probabilistic, or uncertain knowledge base can be represented by a graph wherein nodes of the graph are associated with a plurality of rules. Examiner notes that the broadest reasonable interpretation of a “trigger graph” includes a graph, or a flowchart, associating nodes with rules or operations required to execute the rules (see page 8 of Applicant’s specification)));
and wherein each node of the trigger graph stores a derivation history of the node (fig. 1 and pg. 260 para. 7 recite “we store each of the sets of inferences Δip that are produced during the derivation in a separate column-oriented table. The table for Δip is created when applying rule[i] in step i and never modified thereafter. We store the data for each rule application (step number, rule, and table) in one block, and keep a separate list of blocks for each IDB predicate” (i.e., storing the data for each rule application separately in the graph. Examiner notes that a “derivation history” is interpreted as information related to the rules that were executed related to a given node (see page 9 of Applicant’s specification)));
storing the computed probabilities in the trigger graph in association with corresponding nodes of the trigger graph (pg. 259 right column para. 4 recites “In each step of the algorithm, we apply one rule r ∈ P to the facts derived so far. We do this fairly, so that each rule will be applied arbitrarily often. This differs from standard SNE where all rules are applied in parallel in each step. We write rule[i] for the rule applied in step i, and Δip for the set of new facts with predicate p derived in step i”. Pg. 260 left column para. 1 recites “We call this procedure the one-rule-per-step variant of SNE. The procedure terminates if all rules in P have been applied in the last steps |P| without deriving any new facts”. Pg. 260 para. 7 recites “we store each of the sets of inferences Δip that are produced during the derivation in a separate column-oriented table. The table for Δip is created when applying rule[i] in step i and never modified thereafter. We store the data for each rule application (step number, rule, and table) in one block, and keep a separate list of blocks for each IDB predicate” (i.e., storing the derived, or computed probabilities associated with the facts in the graph using the stored rule application data)).
However, while Urbani teaches utilization of an uncertain knowledge base and a plurality of rules (see at least page 259), Urbani does not explicitly teach receiving an uncertain knowledge base and receiving a plurality of rules; wherein the uncertain knowledge base is a graph knowledge base, wherein the probabilistic facts are relationships represented by edges linking nodes representing entities, and the associated probability is a weight of an edge; receiving a user query, and providing an answer to the query based on the derived new facts.
Block teaches receiving an uncertain knowledge base and receiving a plurality of rules (para. [0090] recites “The scene feature extractor and feature classifier system may further access feature instance data for feature types 609 from a scene feature extractor and feature classifier system database. The feature instance data 609 is used to compare feature vector nodes for previous observations of a feature type with a currently observed feature vector node. The multifeatured graph may be used with association edges and feature vector node values to estimate likelihood of a label applying to an observation of an object in the currently observed scene of interest”. Para. [0091] recites “previous semantic scene graphs accessed for previously observed visual scenes 610, including one or more currently observed objects, may be accessed via a scene feature extractor and feature classifier system database. Previous semantic scene graphs 610 may indicate known or expected relationships or affinities among objects such as persons, that are candidates to be labeled or identify currently observed objects in the scene of interest. Relationship edges between currently observed objects or attributes of a visual scene may further contribute to determination of likelihood of a label or a plurality of labels applying within a multi-feature graph for an object that was built by the recognition pipeline” (i.e., Examiner notes that the broadest reasonable interpretation of an “uncertain” knowledge base as a knowledge base comprising information and corresponding probabilities includes the scene graph, or collection of observed objects and likelihoods, from Block. Given this interpretation, receiving a scene graph falls under the broadest reasonable interpretation of receiving an uncertain knowledge base and associated rules));
wherein the uncertain knowledge base is a graph knowledge base, wherein the probabilistic facts are relationships represented by edges linking nodes representing entities, and the associated probability is a weight of an edge (para. [0079] and [0080] recite “Returning to 430 of FIG. 4, the multiple feature graph is built and a probabilistic graph algorithm is applied to the association edges as well as the feature spaces to determine a likelihood that one or more labels apply to the new observation. Probabilistic graphical models (PGM's) are used to provide a framework for encoding probability distributions over graphs, which are themselves made up of groups of nodes connected by edges. For the purposes of discussion, a simplified way to view PGM's is described as representing objects and their relations to one another, with the additional feature that certain aspects of the objects (nodes) and relations (edges) can be further represented by likelihoods or beliefs relating to hypotheses about the objects or relations that interact with each other in a way that respects the graph's structure”. Para. [0095] recites “The accuracy weighting or confidence level determination may include assessments of the individual object node multi-feature graphs for the observed objects and sub-components as well as the association edge relationships to other object nodes, subcomponent nodes, and attribute nodes associated with the observed visual scene” (i.e., a graph knowledge base, wherein relationships are represented by edges which connect nodes and the edges can be weighted));
receiving a user query, and providing an answer to the query based on the derived one or more new facts (para. [0105] recites “the scene feature extractor and feature classifier system according to embodiments herein may be queried as to particular nodes or edges to determine options for objects (e.g., particular nodes) observed in a captured visual scene, relationships (e.g., one or more edges) within the visual scene or additional overall assessments from multiple layers that may comprise one or more integrated multi-factor semantic scene graphs. An output may be provided by the scene feature extractor and feature classifier system according to various output devices herein including visual outputs of queried values or probabilities via a display device, transmission of responses to authorized users querying via remote information handling systems” (i.e., a user query and corresponding answer can be related to the derived new facts based on the output from the probabilistic scene graph)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these teachings by applying the probabilistic knowledge graph from Urbani to the semantic scene graph from Block to more explicitly define rules which describe associations between nodes, or features of an image. Urbani and Block are both directed to use of probabilistic graph models to map associations between objects and rules, or associations. As Urbani teaches that its knowledge graph model may be applied to image data on at least page 262, one of ordinary skill would find it obvious to improve the rules, or associations, between objects in an image from Block using the rule evaluation method from Urbani to better identify those objects and any additional context in the image.
However, the combination of Urbani and Block does not explicitly teach an acyclic directed [trigger] graph; the derivation history comprising at least one derivation tree comprising a root node corresponding to one of the one or more new facts and a plurality of leaf nodes corresponding to one or more of the plurality of probabilistic facts; and computing probabilities of one or more new facts for the at least one derivation tree by calculating a probability for the root node based on associated probabilities of the plurality of leaf nodes.
Goldberg teaches an acyclic directed [trigger] graph (para. [0068] recites “The structure is encoded by a directed acyclic graph with the nodes corresponding to the variables in the modeled data set (in this case, to the positions in solution strings) and the edges corresponding to conditional dependencies”); the derivation history comprising at least one derivation tree comprising a root node corresponding to one of the one or more new facts and a plurality of leaf nodes corresponding to one or more of the plurality of probabilistic facts (para. [0073] recites “Each path in the decision tree for p(xi|Πi) that starts in the root of the tree and ends in a leaf encodes a set of constraints on the values of variables in Πi. Each leaf stores the value of a conditional probability of xi = 1 given the condition specified by the path from the root of the tree to the leaf. A decision tree can encode the full conditional probability table for a variable with k parents if it splits to 2k leaves, each corresponding to a unique condition. However, a decision tree enables more efficient and flexible representation of local conditional distributions. See FIG. 2(a) for an example decision tree for the conditional probability table” (i.e., a tree used to derive, or compute probabilities of root and leaf nodes associated with probabilistic variables, or facts));
and computing probabilities of one or more new facts for the at least one derivation tree by calculating a probability for the root node based on associated probabilities of the plurality of leaf nodes (para. [0066] recites “FIG. 2 is a representative conditional probability table for p(X1 | X2, X3, X4) using traditional representation (a) as well as local structures (b and c)”. Para. [0070] recites “The parameters are represented by a set of conditional probability tables (CPTs) specifying a conditional probability for each variable given any instance of the variables that the variable depends on. Local structures, in the form of decision trees or decision graphs, can also be used in place of full conditional probability tables to enable more efficient representation of local conditional probability” (i.e., computing probabilities of variables, or facts, represented in the derivation tree based on the probability of the root node and associated leaf nodes)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these teachings by modifying the column-based knowledge graph storage from Urbani (as modified by Block) with the tree structure from Goldberg. Goldberg and Urbani are both directed to representing and optimizing knowledge representations and associated rules. As Goldberg states in at least paragraph [0073] “a decision tree enables more efficient and flexible representation of local conditional distributions”, one of ordinary skill in the art would be motivated to use the tree structure from Goldberg to more efficiently and flexibly represent the knowledge representations from Urbani.
Regarding claim 2, the combination of Urbani, Block, and Goldberg teaches the method of claim 1, wherein generating the trigger graph comprises incrementally generating the trigger graph, a round k of generating the trigger graph comprising: constructing a trigger graph of depth k by adding nodes to a trigger graph of round k-1 (Urbani pg. 259 right column para. 2 recites “consider a database I = {triple(a, hP,b), triple(b, hP,c), triple(hP, iO, pO)}. Iteratively applying rules (2)–(6) to I, we obtain the following new derivations in each step”. Block fig. 6 step 635 recites “Add relevant nodes and edges meeting threshold criteria to semantic Scene Graph and label according to observed behavior” (i.e., adding nodes to the graph incrementally, or iteratively constructing the graph));
executing the rules associated with the nodes present in the trigger graph at depth k, and storing the derivation history of the knowledge in the trigger graph at depth k (Urbani fig. 1 and pg. 260 para. 7 recite “we store each of the sets of inferences Δip that are produced during the derivation in a separate column-oriented table. The table for Δip is created when applying rule[i] in step i and never modified thereafter. We store the data for each rule application (step number, rule, and table) in one block, and keep a separate list of blocks for each IDB predicate” (i.e., executing the rules in the graph for a given iteration and storing the data for each applied rule, or derivation history, separately in the graph)).
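For illustration only (the data structures and the toy transitivity rule below are hypothetical and are not the implementations of Urbani, Block, or Goldberg), the incremental round-by-round construction mapped above can be sketched as:

```python
# Illustrative sketch: incrementally building a trigger graph, where round k
# executes each rule against the facts known after round k-1 and records,
# for each new fact, which rule and premises produced it (its derivation
# history). Iteration stops at a fixpoint, i.e., a round deriving no new facts.

def build_trigger_graph(facts, rules, max_rounds=10):
    """facts: set of ground fact tuples; rules: callables mapping a fact set
    to an iterable of (new_fact, premises) pairs. Returns a derivation
    history: new_fact -> (round, rule_name, premises)."""
    history = {}
    known = set(facts)
    for k in range(1, max_rounds + 1):
        derived = set()
        for rule in rules:
            for new_fact, premises in rule(known):
                if new_fact not in known and new_fact not in derived:
                    derived.add(new_fact)
                    history[new_fact] = (k, rule.__name__, premises)
        if not derived:          # fixpoint: no new facts in this round
            break
        known |= derived         # the depth-k graph extends the depth-(k-1) graph
    return history

# Toy rule: transitivity over "edge" facts.
def transitivity(known):
    for (p, a, b) in known:
        for (q, c, d) in known:
            if p == q == "edge" and b == c:
                yield ("edge", a, d), [(p, a, b), (q, c, d)]

hist = build_trigger_graph({("edge", "x", "y"), ("edge", "y", "z")},
                           [transitivity])
```

With the two input facts shown, the rule derives ("edge", "x", "z") in round 1 and the loop terminates in round 2 once no further facts appear.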
Regarding claim 4, the combination of Urbani, Block, and Goldberg teaches the method of claim 1, wherein: a probabilistic fact of the probabilistic facts comprises a likelihood that a first person detected in an image is carrying out an activity (Urbani pg. 262 right column para. 3 recites "All of these rules operate on a Datalog translation of the input graph, e.g., a triple <entity:5593, rdf:type, a3:Image> might be represented by a <fact a3:Image(entity:5593)>" (i.e., the probabilistic graph can be applied to image data). Block para. [0045] recites "The real time scene of interest 300 depicted in FIG. 3 is an example only and depicts several aspects of one type of predicted behavior assessed by the scene feature extractor and feature classifier system according to embodiments herein. The scene of interest 300 depicts several objects 302-305. Objects 302-305 are discussed in the present embodiment. As a first aspect, persons 302 and 304 are captured in the real time scene of interest 300 (i.e., a first person is detected in an image). Indication of persons 302 and 304 sitting in chairs 303 and 305 nearby to one another may be information suggesting a meeting is taking place" (i.e., a first person is detected in an image carrying out an activity));
the rules comprise rules for determining whether a second person detected in an image is also carrying out the activity; and the one or more new facts include the likelihood that the second person detected in the image is also carrying out the activity (Block para. [0045] recites "The real time scene of interest 300 depicted in FIG. 3 is an example only and depicts several aspects of one type of predicted behavior assessed by the scene feature extractor and feature classifier system according to embodiments herein. The scene of interest 300 depicts several objects 302-305. Objects 302-305 are discussed in the present embodiment. As a first aspect, persons 302 and 304 are captured in the real time scene of interest 300 (i.e., a first person is detected in an image). Indication of persons 302 and 304 sitting in chairs 303 and 305 nearby to one another may be information suggesting a meeting is taking place" (i.e., a second person is detected in an image carrying out the same activity as the first person). Block para. [0047] recites "the scene feature extractor and feature classifier system may be attempting to determine the identity of person 304. As shown, several features may be assessed with respect to feature spaces, association edges, and the like described further herein to make an assessment of person 304. The match of person 304 may be provided on a likelihood level based on a semantic scene graph generation of scene of interest 300 such that either "Sam" or "Harry" may be ultimately weighed as candidates for person 304 according to an embodiment" (i.e., the rules from the probabilistic scene graph are used to derive one or more new facts to determine that the second detected person is also carrying out the same activity as the first detected person)).
Regarding claim 5, the combination of Urbani, Block, and Goldberg teaches the method of claim 1, wherein: a probabilistic fact of the probabilistic facts comprises a likelihood that a first object detected in an image has a first label (Urbani pg. 262 right column para. 3 recites "All of these rules operate on a Datalog translation of the input graph, e.g., a triple <entity:5593, rdf:type, a3:Image> might be represented by a <fact a3:Image(entity:5593)>" (i.e., the probabilistic graph can be applied to image data). Block para. [0013] recites "Objects identified within visual scenes may also include persons as well as other objects identified within scenes of interest for purposes herein and at times may refer to both persons, living creatures, or other objects such as vehicles, weapons, buildings, or other items that may be identified in a visual scene" (i.e., an object in an image can be a person). Block para. [0045] recites "The real time scene of interest 300 depicted in FIG. 3 is an example only and depicts several aspects of one type of predicted behavior assessed by the scene feature extractor and feature classifier system according to embodiments herein. The scene of interest 300 depicts several objects 302-305. Objects 302-305 are discussed in the present embodiment. As a first aspect, persons 302 and 304 are captured in the real time scene of interest 300 (i.e., a first person is detected in an image). Indication of persons 302 and 304 sitting in chairs 303 and 305 nearby to one another may be information suggesting a meeting is taking place" (i.e., a first object is detected and labeled as a chair));
the rules comprise rules for determining that a second object detected in the image has a second label; and the one or more new facts include the likelihood that the second object has the second label (Block para. [0045] recites "The real time scene of interest 300 depicted in FIG. 3 is an example only and depicts several aspects of one type of predicted behavior assessed by the scene feature extractor and feature classifier system according to embodiments herein. The scene of interest 300 depicts several objects 302-305. Objects 302-305 are discussed in the present embodiment. As a first aspect, persons 302 and 304 are captured in the real time scene of interest 300 (i.e., a first person is detected in an image). Indication of persons 302 and 304 sitting in chairs 303 and 305 nearby to one another may be information suggesting a meeting is taking place". Block para. [0047] recites "the scene feature extractor and feature classifier system may be attempting to determine the identity of person 304. As shown, several features may be assessed with respect to feature spaces, association edges, and the like described further herein to make an assessment of person 304. The match of person 304 may be provided on a likelihood level based on a semantic scene graph generation of scene of interest 300 such that either "Sam" or "Harry" may be ultimately weighed as candidates for person 304 according to an embodiment" (i.e., the rules from the probabilistic scene graph are used to derive one or more new facts to determine that a second detected object is a person)).
Regarding claim 7, the combination of Urbani, Block, and Goldberg teaches the method of claim 1, wherein providing the answer to the user query comprises selecting a part of the uncertain knowledge base relevant to the user query, and wherein generating the trigger graph comprises generating the trigger graph based on the selected part of the uncertain knowledge base (Block para. [0081] recites "probabilistic inference is used to make assessments for an overall semantic graph network or any portions of the semantic graph network for behavioral prediction of a captured scene or series of captured scenes". Block fig. 6 step 635 recites "Add relevant nodes and edges meeting threshold criteria to semantic Scene Graph and label according to observed behavior" (i.e., selecting a relevant part of the graph generated, or built from the selected part of the uncertain knowledge base)).
Regarding claim 8, the combination of Urbani, Block, and Goldberg teaches the method of claim 1, wherein the user query and the answer relate to an input image (Urbani pg. 262 right column para. 3 recites “All of these rules operate on a Datalog translation of the input graph, e.g., a triple <entity:5593, rdf:type, a3:Image> might be represented by a <fact a3:Image(entity:5593)>”. Block para. [0105] recites “the scene feature extractor and feature classifier system according to embodiments herein may be queried as to particular nodes or edges to determine options for objects (e.g., particular nodes) observed in a captured visual scene, relationships (e.g., one or more edges) within the visual scene or additional overall assessments from multiple layers that may comprise one or more integrated multi-factor semantic scene graphs. An output may be provided by the scene feature extractor and feature classifier system according to various output devices herein including visual outputs of queried values or probabilities via a display device, transmission of responses to authorized users querying via remote information handling systems” (i.e., the user query and answer can be related to image data)).
Claim 9 is a system claim and its limitations are included in claim 1. The only difference is that claim 9 requires a system. Therefore, claim 9 is rejected for the same reasons as claim 1.
Claim 10 is a system claim and its limitations are included in claim 2. Claim 10 is rejected for the same reasons as claim 2.
Claim 12 is a system claim and its limitations are included in claim 4. Claim 12 is rejected for the same reasons as claim 4.
Claim 13 is a system claim and its limitations are included in claim 5. Claim 13 is rejected for the same reasons as claim 5.
Claim 15 is a system claim and its limitations are included in claim 7. Claim 15 is rejected for the same reasons as claim 7.
Claim 16 is a system claim and its limitations are included in claim 8. Claim 16 is rejected for the same reasons as claim 8.
Claim 17 is a non-transitory data carrier claim and its limitations are included in claim 1. The only difference is that claim 17 requires a non-transitory data carrier. Therefore, claim 17 is rejected for the same reasons as claim 1.
Claim 18 is a non-transitory data carrier claim and its limitations are included in claim 2. Claim 18 is rejected for the same reasons as claim 2.
Claim 20 is a non-transitory data carrier claim and its limitations are included in claim 4. Claim 20 is rejected for the same reasons as claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
"Acyclicity Notions for Existential Rules and Their Application to Query Answering in Ontologies" (Cuenca Grau et al.) teaches acyclicity notions called model-faithful acyclicity (MFA) and model-summarizing acyclicity (MSA) for knowledge base representation.
"Association rule mining using FPTree as directed acyclic graph" (Rao et al.) teaches a method for scanning a database and generating frequent pattern (FP) trees as directed acyclic graphs (DAGs) so that frequent patterns can be generated directly using the DAG, without generating conditional FP-trees.
US 20160063390 A1 (Mytkowicz et al.) teaches a method for evaluating probabilistic assertions using a Bayesian network of nodes representing distributions.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEAH M FEITL whose telephone number is (571) 272-8350. The examiner can normally be reached on M-F 0900-1700 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.M.F./ Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147