DETAILED ACTION
Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/06/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 2-7, 9-14 and 16-20 are objected to because of the following informalities:
In claims 2-7, “A system according to…” should be “The system according to…”
In claims 9-14, “A method according to…” should be “The method according to…”
In claims 16-20, “A medium…” should be “The non-transitory medium…”
(Claims 2-7, 9-14 and 16-20 are dependent claims.)
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claims 1, 8 and 15 recite the limitation “the first set of the plurality of features.” There is insufficient antecedent basis for “the plurality of features” in the claim. For examination purposes, the examiner has interpreted “the first set of the plurality of features” to be “the first set of the first plurality of features.”
Claims 1, 8 and 15 recite the limitation “determine a reward based on the performance and the interpretability.” There is insufficient antecedent basis for “the performance and the interpretability” in the claim. For examination purposes, the examiner has interpreted “the performance and the interpretability” to be “the performance of the model and the interpretability of the first plurality of features.”
Claims 5, 12 and 19 recite the limitation “the first set of the plurality of features and a second set of the plurality of features.” There is insufficient antecedent basis for “the plurality of features” in the claim. For examination purposes, the examiner has interpreted “the first set of the plurality of features and a second set of the plurality of features” to be “the first set of the first plurality of features and a second set of the first plurality of features.”
Claims 2-4, 6-7, 9-11, 13-14, 16-18 and 20 are also rejected due to their dependency on a rejected claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-7 recite a system. Claims 8-14 recite a method. Claims 15-20 recite a non-transitory medium. Therefore, claims 1-7 are directed to a machine, claims 8-14 are directed to a process, and claims 15-20 are directed to a manufacture.
With respect to claims 1, 8 and 15:
2A Prong 1: The claim recites a judicial exception.
determine an interpretability of each of the first plurality of features based on a domain ontology and on symbolic rules associated with entities of the domain ontology (mental process – evaluation or judgement, a user can manually determine an interpretability of each of the features based on a domain ontology and on symbolic rules)
determine a first set of the first plurality of features which were determined as interpretable (mental process – evaluation or judgement, a user can manually determine a first set of the features which were determined as interpretable)
determine a performance of a model trained using the first set of the plurality of features (mental process – evaluation or judgement, a user can manually determine/evaluate a performance of a model, e.g. evaluating the result from a model)
determine a reward based on the performance and the interpretability (mental process – evaluation or judgement, a user can manually determine a reward based on the performance and the interpretability)
2A Prong 2: The judicial exception is not integrated into a practical application.
(claim 1) A system comprising: a memory storing processor-executable program code; and at least one processing unit to execute the processor-executable program code to cause the system to: (claim 15) A non-transitory medium storing executable program code executable by at least one processing unit of a computing system to cause the computing system to: (mere instructions to apply an exception, (2) Whether the claim invokes computers - MPEP 2106.05(f); generic computer components)
generate a first plurality of features using a learning network (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a learning network to generate features)
generate a second plurality of features using the learning network based on the reward (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the learning network based on the reward to generate features)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
(claim 1) A system comprising: a memory storing processor-executable program code; and at least one processing unit to execute the processor-executable program code to cause the system to: (claim 15) A non-transitory medium storing executable program code executable by at least one processing unit of a computing system to cause the computing system to: (mere instructions to apply an exception, (2) Whether the claim invokes computers - MPEP 2106.05(f); generic computer components)
generate a first plurality of features using a learning network (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a learning network to generate features)
generate a second plurality of features using the learning network based on the reward (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the learning network based on the reward to generate features)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claims 2, 9 and 16:
2A Prong 1: The claim recites a judicial exception.
wherein determination of an interpretability of each of the first plurality of features comprises (mental process – evaluation or judgement, a user can manually determine an interpretability of each of the first plurality of features)
annotation of each of the first plurality of features based on the entities of the domain ontology (mental process – evaluation or judgement, a user can manually annotate each of the first plurality of features based on the entities of the domain ontology)
With respect to claims 3, 10 and 17:
2A Prong 1: The claim recites a judicial exception.
wherein determination of an interpretability of each of the first plurality of features comprises (mental process – evaluation or judgement, a user can manually determine an interpretability of each of the first plurality of features)
2A Prong 2: The judicial exception is not integrated into a practical application.
executing symbolic reasoning by applying the symbolic rules to the annotated first plurality of features (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; applying the rules to the features)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
executing symbolic reasoning by applying the symbolic rules to the annotated first plurality of features (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; applying the rules to the features)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claims 4, 11 and 18:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the symbolic reasoning comprises subsumption and instance checking (a particular technological environment or field of use – MPEP 2106.05(h))
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the symbolic reasoning comprises subsumption and instance checking (a particular technological environment or field of use – MPEP 2106.05(h))
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claims 5, 12 and 19:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the model is trained using the first set of the plurality of features and a second set of the plurality of features which were not determined as non-interpretable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of training a model using features)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the model is trained using the first set of the plurality of features and a second set of the plurality of features which were not determined as non-interpretable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of training a model using features)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claims 6 and 13:
2A Prong 1: The claim recites a judicial exception.
wherein determination of an interpretability of each of the first plurality of features comprises (mental process – evaluation or judgement, a user can manually determine an interpretability of each of the first plurality of features)
2A Prong 2: The judicial exception is not integrated into a practical application.
executing symbolic reasoning by applying the symbolic rules to the first plurality of features (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; applying the rules to the features)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
executing symbolic reasoning by applying the symbolic rules to the first plurality of features (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; applying the rules to the features)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claims 7 and 14:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the symbolic reasoning comprises subsumption and instance checking (a particular technological environment or field of use – MPEP 2106.05(h))
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the symbolic reasoning comprises subsumption and instance checking (a particular technological environment or field of use – MPEP 2106.05(h))
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 20:
2A Prong 1: The claim recites a judicial exception.
wherein determination of an interpretability of each of the first plurality of features comprises (mental process – evaluation or judgement, a user can manually determine an interpretability of each of the first plurality of features)
2A Prong 2: The judicial exception is not integrated into a practical application.
execution of subsumption and instance checking on the first plurality of features using the symbolic rules (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; execution of subsumption and instance checking on the features)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
execution of subsumption and instance checking on the first plurality of features using the symbolic rules (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; execution of subsumption and instance checking on the features)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-6, 8-10, 12-13, 15-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ma ("Learning Symbolic Rules for Interpretable Deep Reinforcement Learning" 20210316) in view of Dash (US 20240135205 A1, filed on 2022-10-11).
In regard to claims 1, 8 and 15, Ma teaches: generate a first plurality of features using a learning network; (Ma, p. 2, 1 Introduction "this framework features a reasoning module based on neural attention networks, [a learning network] which performs relational reasoning on symbolic states and induces the RL policy."; p. 2, 3.1 First Order Logic "Entities are constants (e.g., objects) while a predicate can be seen as a relation between entities"; p. 3, 4.1 System Framework "the symbolic states from the environment are firstly transformed into a matrix P... matrix P and the attention weights are sent to the reasoning module to perform reasoning on existing symbolic knowledge... we denote the predicate matrix at each step as P(1), P(2), P(3), P(4), which are the results of the multiplications of Sφ and the symbolic matrix P."; 4.2 Reasoning Module "Every predicate or relation Pk is represented as a binary matrix Mk in {0, 1}^(|K|×|K|), whose entry (i, j) is 1 if Pk(xi, xj) holds, i.e., entity xi and xj are connected [all entries or entities and their relation in P (e.g. xi and xj) are a first set of the plurality of features] by edge Pk in the knowledge graph. Set K contains the objects of the problem..."; entities (or objects) xi and xj [features]; in light of spec. [0044] 'each of the new features may be mapped to one or more logical entities of a domain ontology to connect the features')
determine an interpretability of each of the first plurality of features (Ma, p. 2, 3 Preliminary "Interpretable rules [an interpretability of each of the first plurality of features] described by First-Order Logic are first introduced, then the basics of Reinforcement Learning (RL) are briefly recalled... A rule also called clause can be written as follows: α ← α1 ^ α2,... ^ αn"; p. 3, 4.1 System Framework "Then, we sequentially multiply these matrices to generate logical rules of different lengths."; p. 3, 4.2 Reasoning Module "Multi-hop reasoning on such a graph mainly focuses on searching chain-like logical rules [symbolic rules] of the following form: query (x, x') ← R1 (x, z1) ^ R2 (z1, z2)... ^ Rn (z_n-1, x'). (1) [an interpretability of each of the first plurality of features]... Every predicate or relation Pk is represented as a binary matrix Mk in {0, 1}^(|K|×|K|), whose entry (i, j) is 1 if Pk(xi, xj) holds, i.e., entity xi and xj are connected [e.g. (xi, xj) = 1, the connected (interpretable) entities are the first set of the plurality of features] by edge Pk in the knowledge graph. Set K contains the objects of the problem...") based on a domain ontology and on symbolic rules associated with entities of the domain ontology; (Ma, p. 3, 4.2 Reasoning Module "Consider a knowledge graph, [ontology] where objects are represented as nodes and relations are edges. [entities of the domain ontology] Multi-hop reasoning on such a graph mainly focuses on searching chain-like logical rules [symbolic rules] of the following form: query (x, x') ← R1 (x, z1) ^ R2 (z1, z2)... ^ Rn (z_n-1, x'). (1)"; p. 5, 5 Experiments "we evaluate our approach on two domains, i.e., Montezuma’s Revenge and BlocksWorld Manipulation"; p. 8 "Table 4... Domain... Blocks World... Montezuma’s Revenge"; in light of spec. [0034] 'Domain ontology 160 may be considered a knowledge base defining a hierarchy of n logical entities' and [0046])
determine a first set of the first plurality of features which were determined as interpretable; (Ma, p. 1, 1 Introduction "an action atom with higher confidence of being true is selected after performing some reasoning steps."; p. 3, 4.2 Reasoning Module "After T steps reasoning, the score of the query for one path is computed as follows: score (x, x')... (4) ... Considering all the predicate matrices at each step and relational paths of different lengths, the final score can be rewritten with soft attention as below:... score (x, x')... (6) [score (x, x'), the statistical importance (a first set: interpretable)]"; also see claim 5; in light of [0053] 'interpretable features (or not non-interpretable) determined at S240 and on the statistical importance of these features.'; the score of the path (x, x') with higher confidence)
determine a performance of a model trained using the first set of the plurality of features; (Ma, p. 5, 4.4 Policy Module "we introduce a multi-layer perceptron MLPa to the output of the reasoning module to induce the state-action value of action atom Acta (x, x'): Q(S, Act (x, x')) = … (13) [a performance of a model]")
determine a reward based on the performance and the interpretability; and (Ma, p. 1, 1 Introduction "an action atom with higher confidence of being true is selected after performing some reasoning steps."; p. 5, 4.4 Policy Module "To learn a deterministic policy, we update the policy module by minimizing the loss function [a reward based on the performance Q of the model, and Act (x, x') which is selected based on the interpretability of the features] described below where D is a replay buffer. L(θ) = E… [(r + γmax Q(s', a'; θ) - Q(s, a; θ))^2]"; TD error = (target calculation - predicted), where the target calculation is r + γmax Q(...))
generate a second plurality of features using the learning network based on the reward. (Ma, p. 2, 1 Introduction "NSRL can extract the logical rules selected by the attention modules... while providing improved interpretability by extracting the most relevant relational paths."; p. 5, 4.4 Policy Module "we update the policy module by minimizing the loss function [the reward]… We denote an action atom by Act (x, x') where Act is an action predicate from Pa... Then, we can train NSRL with any deep RL algorithms such as DQN..."; p. 8, 5.3 Interpretable Policy "The predicate KeyToDoor that we introduced to embed the knowledge that a key is important to open a door also improves the interpretability of the learned rules, i.e., rule 7, which chooses a door as a task. [rule with features/entities]"; training means multiple iterations to generate/extract the improved or the most relevant paths/rules with entities/features using the learning network [generate a second plurality of features using the learning network])
Ma does not teach, but Dash teaches: A system comprising: a memory storing processor-executable program code; and at least one processing unit to execute the processor-executable program code to cause the system to: (Dash, [0116] "Processor set 510 includes one, or more, computer processors of any type now known or to be developed in the future... Cache 521 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 510.")
Claims 8 and 15 recite substantially the same limitations as claim 1; therefore, the rejection applied to claim 1 also applies to claims 8 and 15. In addition, Dash teaches: (claim 15) A non-transitory medium storing executable program code executable by at least one processing unit of a computing system to cause the computing system to: (Dash, [0116] "Processor set 510 includes one, or more, computer processors of any type now known or to be developed in the future... Cache 521 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 510.")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Ma to incorporate the teachings of Dash by including the hardware implementation and the labeling function with the logic rules. Doing so would make it possible to infer missing facts or to quickly identify new rules. (Dash, [0005] "One approach for KGC is to learn first-order logic rules that use known facts to imply other known facts, and then use these to infer missing facts."; [0051] "The rule generation is made efficient and feasible... to quickly identify, through a linear programming problem solving process of iteratively adding new rule clauses and corresponding rules.")
In regard to claims 2, 9 and 16, Ma teaches: wherein determination of an interpretability of each of the first plurality of features comprises: (Ma, p. 2, 3 Preliminary "Interpretable rules [an interpretability of each of the first plurality of features] described by First-Order Logic are first introduced, then the basics of Reinforcement Learning (RL) are briefly recalled... A rule also called clause can be written as follows: α ← α1 ^ α2,... ^ αn"; p. 3, 4.2 Reasoning Module "Multi-hop reasoning on such a graph mainly focuses on searching chain-like logical rules [symbolic reasoning, symbolic rules] of the following form: query (x, x') ← R1 (x, z1) ^ R2 (z1, z2)... ^ Rn (z_n-1, x'). (1) [an interpretability of x and x' each of the first plurality of features, applying the symbolic rules to x and x']"; entities (or objects) xi and xj (x and x') are [features])
Ma does not teach, but Dash teaches: annotation of each of the first plurality of features based on the entities of the domain ontology. (Dash, [0004] "A 'fact' in the knowledge graph [the domain ontology] may be represented as a tuple data structure, such as a triplet of the form (a, r, b) where a and b are nodes, and r is a binary relation labeling a directed edge from a to b indicating that r(a, b) is true. [annotation of each of the first plurality of features (a and b)] As an example, consider a KG where the nodes correspond to distinct cities, states, and countries and the relations are one of capital_of, shares_border_with, or part_of. A fact (a, part_of, b) in such a graph represents a directed edge from a to b labeled by part_of, implying that a is part of b. [annotation]"; [0087] "a set of n binary relations R defined over the domain V"; [0052])
The rationale for combining the teachings of Ma and Dash is the same as set forth in the rejection of claim 1.
In regard to claims 3, 10 and 17, Ma teaches: wherein determination of an interpretability of each of the first plurality of features comprises: executing symbolic reasoning by applying the symbolic rules to the annotated first plurality of features. (Ma, p. 2, 3 Preliminary "Interpretable rules [an interpretability of each of the first plurality of features] described by First-Order Logic are first introduced, then the basics of Reinforcement Learning (RL) are briefly recalled... A rule also called clause can be written as follows: α ← α1 ^ α2,... ^ αn"; p. 3, 4.2 Reasoning Module "Multi-hop reasoning on such a graph mainly focuses on searching chain-like logical rules [symbolic reasoning, symbolic rules] of the following form: query (x, x') ← R1 (x, z1) ^ R2 (z1, z2)... ^ Rn (z_n-1, x'). (1) [an interpretability of x and x' each of the first plurality of features, applying the symbolic rules to x and x']... entry (i, j) is 1 if Pk(xi, xj) holds, i.e., entity xi and xj are connected by edge Pk [annotated, e.g. ^ (conjunction)] in the knowledge graph"; entities (or objects) xi and xj (x and x') are [features])
In regard to claims 5, 12 and 19, Ma teaches: wherein the model is trained using the first set of the plurality of features and a second set of the plurality of features which were not determined as non-interpretable. (Ma, p. 1, 1 Introduction "an action atom with higher confidence of being true is selected after performing some reasoning steps."; p. 3, 4.2 Reasoning Module "After T steps reasoning, the score of the query for one path is computed as follows: score (x, x')... (4) ... Considering all the predicate matrices at each step and relational paths of different lengths, the final score can be rewritten with soft attention as below:... score (x, x')... (6) [score (x, x'), the statistical importance (the first set: interpretable; a second set: not non-interpretable) of entities]"; p. 5, 4.4 Policy Module "We denote an action atom by Act (x, x') where Act is an action predicate from Pa... Then, we can train NSRL with any deep RL algorithms such as DQN..."; in light of [0053] 'interpretable features (or not non-interpretable) determined at S240 and on the statistical importance of these features.'; training the model using (the first set) the paths with higher confidence and (a second set) other not non-interpretable paths; e.g., through training and reasoning steps, new relations or missing relations might be discovered from the existing paths)
In regard to claims 6 and 13, Ma teaches: wherein determination of an interpretability of each of the first plurality of features comprises: executing symbolic reasoning by applying the symbolic rules to the first plurality of features. (Ma, p. 2, 3 Preliminary "Interpretable rules [an interpretability of each of the first plurality of features] described by First-Order Logic are first introduced, then the basics of Reinforcement Learning (RL) are briefly recalled... A rule also called clause can be written as follows: α ← α1 ^ α2,... ^ αn"; p. 3, 4.2 Reasoning Module "Multi-hop reasoning on such a graph mainly focuses on searching chain-like logical rules [symbolic reasoning, symbolic rules] of the following form: query (x, x') ← R1 (x, z1) ^ R2 (z1, z2)... ^ Rn (z_n-1, x'). (1) [an interpretability of x and x' each of the first plurality of features, applying the symbolic rules to x and x']"; entities (or objects) xi and xj (x and x') are [features])
Claims 4, 7, 11, 14, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma and Dash as applied to claims 1, 8 and 15 above, and further in view of Russell ("Inference in First-Order Logic" 20021114).
In regard to claims 4, 11 and 18, Ma and Dash do not teach, but Russell teaches: wherein the symbolic reasoning comprises subsumption and instance checking. (Russell, p. 273, Inference rules for quantifiers "Let us begin with universal quantifiers. Suppose our knowledge base contains the standard folkloric axiom stating that all greedy kings are evil: for all x King(x) ^ Greedy(x) => Evil(x)… The rule of Universal Instantiation [instance checking] (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable. 1 To write out the inference rule formally, we use the notion of substitutions [subsumption] introduced in Section 8.3... The corresponding Existential Instantiation rule [instance checking] for the existential quantifier is slightly more complicated...."; p. 4, A first-order inference rule, "The inference that John is evil works like this: find some x such that x is a king and x is greedy, and then infer that this x is evil. More generally, if there is some substitution θ that makes the premise of the implication identical to sentences already in the knowledge base, then we can assert the conclusion of the implication, after applying θ. In this case, the substitution {x/John} achieves that aim... for any sentence p (whose variables are assumed to be universally quantified) and for any substitution θ, p |= SUBST(θ; p). [subsumption]")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Ma and Dash to incorporate the teachings of Russell by including first-order logic inference rules. Doing so would allow use of the most efficient method that can accommodate the facts and axioms that need to be expressed. (Russell, p. 272 "Section 9.1 introduces inference rules for quantifiers and shows how to reduce first-order inference to propositional inference, albeit at great expense... In general, one tries to use the most efficient method that can accommodate the facts and axioms that need to be expressed.")
In regard to claims 7 and 14, Ma and Dash do not teach, but Russell teaches: wherein the symbolic reasoning comprises subsumption and instance checking. (Russell, p. 273, Inference rules for quantifiers "Let us begin with universal quantifiers. Suppose our knowledge base contains the standard folkloric axiom stating that all greedy kings are evil: for all x King(x) ^ Greedy(x) => Evil(x)… The rule of Universal Instantiation [instance checking] (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable. 1 To write out the inference rule formally, we use the notion of substitutions [subsumption] introduced in Section 8.3... The corresponding Existential Instantiation rule [instance checking] for the existential quantifier is slightly more complicated...."; p. 4, A first-order inference rule, "The inference that John is evil works like this: find some x such that x is a king and x is greedy, and then infer that this x is evil. More generally, if there is some substitution θ that makes the premise of the implication identical to sentences already in the knowledge base, then we can assert the conclusion of the implication, after applying θ. In this case, the substitution {x/John} achieves that aim... for any sentence p (whose variables are assumed to be universally quantified) and for any substitution θ, p |= SUBST(θ; p). [subsumption]")
The rationale for combining the teachings of Ma, Dash and Russell is the same as set forth in the rejection of claim 4.
In regard to claim 20, Ma and Dash do not teach, but Russell teaches: execution of subsumption and instance checking on the first plurality of features using the symbolic rules. (Russell, p. 273, Inference rules for quantifiers "Let us begin with universal quantifiers. Suppose our knowledge base contains the standard folkloric axiom stating that all greedy kings are evil: for all x King(x) ^ Greedy(x) => Evil(x)… The rule of Universal Instantiation [instance checking] (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable. 1 To write out the inference rule formally, we use the notion of substitutions [subsumption] introduced in Section 8.3... The corresponding Existential Instantiation rule [instance checking] for the existential quantifier is slightly more complicated...."; p. 4, A first-order inference rule, "The inference that John is evil works like this: find some x such that x is a king and x is greedy, and then infer that this x is evil. More generally, if there is some substitution θ that makes the premise of the implication identical to sentences already in the knowledge base, then we can assert the conclusion of the implication, after applying θ. In this case, the substitution {x/John} achieves that aim... for any sentence p (whose variables are assumed to be universally quantified) and for any substitution θ, p |= SUBST(θ; p). [subsumption]")
The rationale for combining the teachings of Ma, Dash and Russell is the same as set forth in the rejection of claim 4.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU-TING CHUANG whose telephone number is (408)918-7519. The examiner can normally be reached Monday - Thursday 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SU-TING CHUANG/Examiner, Art Unit 2146