Prosecution Insights
Last updated: April 19, 2026
Application No. 17/537,168

GENERATING ANSWERS TO MULTI-HOP CONSTRAINT-BASED QUESTIONS FROM KNOWLEDGE GRAPHS

Status: Non-Final OA (§103)
Filed: Nov 29, 2021
Examiner: FIGUEROA, KEVIN W
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Accenture Global Solutions Limited
OA Round: 3 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 70% (252 granted / 362 resolved; +14.6% vs TC avg; above average)
Interview Lift: +21.0% (strong; measured on resolved cases with interview)
Typical Timeline: 4y 0m avg prosecution; 20 currently pending
Career History: 382 total applications across all art units
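As a quick arithmetic check, the headline figures on these cards are mutually consistent. The sketch below recomputes them from the raw counts; the round-to-whole-percent convention is an assumption about how the dashboard displays the rate.

```python
# Recompute the examiner-card figures from the raw counts shown above;
# rounding to a whole percent is an assumed display convention.
granted = 252
resolved = 362
total_applications = 382

allow_rate = granted / resolved                     # ~0.696
currently_pending = total_applications - resolved

print(f"Career allow rate: {allow_rate:.0%}")       # 70%
print(f"Currently pending: {currently_pending}")    # 20
```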

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 362 resolved cases
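All four printed deltas are consistent with a single Tech Center average of 40.0%, which is presumably the "black line" estimate; the 40.0% value is inferred from the deltas, not stated in the chart. This sketch reproduces the table under that assumption.

```python
# A single Tech Center average of 40.0% reproduces all four printed
# deltas exactly; the 40.0% figure is inferred, not stated in the chart.
TC_AVG = 40.0  # assumed "black line" estimate

statute_allow_rate = {"§101": 24.4, "§103": 52.0, "§102": 9.1, "§112": 7.1}

for statute, rate in statute_allow_rate.items():
    delta = round(rate - TC_AVG, 1)
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```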

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/06/2026 has been entered.

Response to Arguments

Applicant's arguments have been fully considered. Applicant has rolled up dependent claim 5 into independent claim 1, as well as the other corresponding independent/dependent claims. The arguments, however, are respectfully moot in light of a new rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena, Apoorv, Aditay Tripathi, and Partha Talukdar, "Improving multi-hop question answering over knowledge graphs using knowledge base embeddings," in view of Zhang, Wenzheng, Wenyue Hua, and Karl Stratos, "EntQA: Entity linking as question answering," further in view of Bao, Junwei, et al., "Constraint-based question answering with knowledge graph."

Regarding claims 1, 8, and 15, Saxena teaches "a computer-implemented method of generating answers to multi-hop constraint-based questions by a question-and-answer system (QAS) using knowledge graphs, comprising: accessing, by a QAS, a knowledge graph" (abstract: "Knowledge Graphs (KG) are multi-relational graphs consisting of entities as nodes and relations among them as typed edges. Goal of the Question Answering over KG (KGQA) task is to answer natural language queries posed over the KG"); "generating, at the QAS, knowledge graph embeddings based on the knowledge graph via a knowledge graph embedding model" (abstract: "In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction. Such KG embedding methods, even though highly relevant, have not been explored for multi-hop KGQA so far. We fill this gap in this paper and propose EmbedKGQA. EmbedKGQA is particularly effective in performing multi-hop KGQA over sparse KGs." — i.e., generating knowledge graph embeddings; and §3.3).

While Saxena generally teaches linking, Zhang more specifically teaches "further comprising: learning, at the QAS, a linking model based on training data and an ontology for the training data" (Zhang abstract: "We present a new model that does not suffer from this limitation called EntQA, which stands for Entity linking as Question Answering. EntQA first proposes candidate entities with a fast retrieval module, and then scrutinizes the document to find mentions of each candidate with a powerful reader module [...] and capitalizes on pretrained models for dense entity retrieval and reading comprehension"); "processing, at the QAS, the knowledge graph embeddings via the linking model" (Zhang pg. 3, figure [image omitted]: as shown, the model takes the embeddings and processes them); "generating, at the QAS, a new knowledge graph embedding based on output of the linking model" (Zhang pg. 2, figure [image omitted]).

It would have been obvious to one having ordinary skill in the art at the time that the invention was effectively filed to combine the teachings of Saxena with that of Zhang since "We analyze EntQA and find that its retrieval performance is extremely strong (over 98 top-100 recall on the validation set of AIDA), verifying our hypothesis that finding relevant entities without knowing their mentions is easy" (Zhang pg. 2). This shows that by combining the techniques, one would have a better question-answering system.

Saxena further teaches "adding the new knowledge graph embedding to the set of knowledge graph embeddings" (Saxena abstract: "KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction" — generating embeddings is the same functionally regardless of where the data comes from); "receiving, at the QAS and from a user device, a natural language query question having one or more words" (abstract: "Goal of the Question Answering over KG (KGQA) task is to answer natural language queries posed over the KG"); "transforming, at the QAS, the one or more words into one or more question embeddings, each question embedding being a vector representing a corresponding word" (pg. 4 §4.3: "This module embeds the natural language question q to a fixed dimension vector e_q ∈ ℂ^d").

While the references generally teach constraints, Bao more explicitly teaches "identifying, at the QAS, at least a first constraint" (Bao fig. 1 shows the constraint identified in the second question). It would have been obvious to one having ordinary skill in the art at the time that the invention was made to combine the teachings of Saxena and Zhang with that of Bao since "we propose a novel systematic KBQA approach to solve multi-constraint questions. Compared to state-of-the-art methods, our approach not only obtains comparable results on the two existing benchmark data-sets, but also achieves significant improvements on the ComplexQuestions" (Bao abstract). Therefore, by combining the two, the QA system is more robust to complex questions.

Saxena further teaches "and a topic entity in the question embeddings" (pg. 4 §4.3: "Given a question q, topic entity h ∈ E and set of answer entities A ⊆ E, it learns the question embedding"); "identifying, at the QAS, a plurality of core relation paths in the knowledge graph embeddings, each path linking the topic entity to a different ungrounded entity" (pg. 1, right col., last ¶: "In multi-hop KGQA, the system needs to perform reasoning over multiple edges of the KG to infer the right answer. KGs are often incomplete, which creates additional challenges for KGQA systems, especially in case of multi-hop KGQA"; and pg. 6, above §5.3.2: "EmbedKGQA does not limit itself to a sub-graph and utilizing the link prediction properties the KG embeddings, EmbedKGQA is able to infer the relation on missing links"); "ranking, at the QAS, the plurality of query graphs using a CNN-based similarity scoring model" (Saxena pg. 3, ¶ above §3: "ConvE (Dettmers et al., 2018) utilizes Convolutional Neural Networks to learn a scoring function between the head entity, tail entity and relation. InteractE (Vashishth et al., 2019) improves upon ConvE by increasing feature interaction."; and left col.: "Methods like (Dai et al., 2016; Dong et al., 2015; Hao et al., 2017; Lukovnikov et al., 2017; Yin et al., 2016) utilize neural networks to learn a scoring functions to rank the candidate answers" — which shows scoring to rank; see also Bao pg. 7 fig. 4).

Bao further teaches "associating, at the QAS, the first constraint with each of the core relation paths to generate a plurality of query graphs including at least a first query graph, each query graph being based on a combination of the question embeddings and the knowledge graph embeddings" (Bao pg. 4: "MulCG A MulCG is constructed based on a basic query graph B of a question and an ordered constraint sequence C = {C1, ..., CN} by the following operations: (1) Treat the basic query graph B of the given question as G0; (2) Iteratively add Ci to Gi−1 to generate Gi, by linking the variable vertex of Ci to a v" — i.e., generating paths); "determining, based on a top-ranked query graph and at the QAS, an answer to the query" (Bao pg. 4 into pg. 5: "For each MulCG G ∈ H(Q), a feature vector F(Q, G) is extracted and the one with the highest ranking score is selected. Finally, by executing the MulCG, we get the answers A."); and "presenting, via the user device and from the QAS, the answer" (presenting the answer to a user is inherent to any question answering system).

Note that independent claims 8 and 15 recite the same substantial subject matter as independent claim 1, only differing in embodiment. The different embodiments, a system and a non-transitory computer readable medium, are obvious variations of one another and inherent to any computing system such as the ones of Saxena and Bao. Therefore the claims are subject to the same rejection.

Regarding claims 2, 9, and 16, the Saxena, Zhang, and Bao references have been addressed above. Bao further teaches "further comprising extending a first core relation path by linking the topic entity to an ungrounded entity, thereby generating the first query graph" (Bao pg. 4: "MulCG A MulCG is constructed based on a basic query graph B of a question and an ordered constraint sequence C = {C1, ..., CN} by the following operations: (1) Treat the basic query graph B of the given question as G0; (2) Iteratively add Ci to Gi−1 to generate Gi, by linking the variable vertex of Ci to a v").

Regarding claims 3, 10, and 17, the Saxena, Zhang, and Bao references have been addressed above. Saxena further teaches "further comprising connecting a first grounded entity of to either a lambda variable or an existential variable" (fig. 2 shows connecting to the variable crime, which is ungrounded/unknown, along with fig. 1).

Regarding claims 4, 11, and 18, the Saxena, Zhang, and Bao references have been addressed above. Bao further teaches "further comprising: mapping the first constraint to an aggregation function" (Bao table 1 shows mapping the constraints to an aggregation function, as it shows multiple types (aggregated) of constraints); and "attaching the mapped first constraint to either a lambda variable or an existential variable connected to the lambda variable, thereby generating the first query graph" (Bao pg. 2 ¶1: "Motivated by this issue, this work contributes to QA research in the following two aspects: (1) We propose a novel systematic KBQA approach to solve multi-constraint questions by translating a multiconstraint question (MulCQ) to a multi-constraint query graph (MulCG);" — generating a graph).

Regarding claims 6, 13, and 20, the Saxena, Zhang, and Bao references have been addressed above. Saxena further teaches "wherein the set of knowledge graph embeddings includes entity embeddings and relation embeddings" (fig. 2, which shows the multiple embeddings from questions and knowledge graphs, and pg. 2 right col. ¶1: "KG embedding methods learn high-dimensional embeddings for entities and relations in the KG").

Regarding claims 7 and 14, the Saxena, Zhang, and Bao references have been addressed above. Bao further teaches "further comprising: extracting superlative linking in the question embeddings" (Bao fig. 1 shows superlative linking, i.e., the highest rated answer Even Money is selected); and "mapping the superlative linking to an aggregation function" (previous citations; the answer is compared to the constraint/aggregation functions to ensure it is correct).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN W FIGUEROA whose telephone number is (571) 272-4623. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MIRANDA HUANG, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

KEVIN W FIGUEROA
Primary Examiner, Art Unit 2124
/Kevin W Figueroa/
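For readers less familiar with the cited techniques, the claim-1 pipeline the examiner maps across the three references (embed the question, find core relation paths from the topic entity, attach a constraint to each path to form query graphs, rank, answer) can be sketched as a toy. Every entity name, the hand-set two-dimensional "embeddings", and the dot-product stand-in for the CNN-based scorer are illustrative assumptions, not the actual methods or APIs of Saxena, Zhang, or Bao.

```python
# Toy sketch of the claim-1 pipeline: topic entity -> core relation
# paths -> constraint-augmented query graphs -> similarity ranking ->
# answer. All data and the dot-product scorer are illustrative stand-ins.

triples = [  # toy KG: (head, relation, tail)
    ("FilmA", "directed_by", "DirectorX"),
    ("FilmA", "released_in", "Year1999"),
]

emb = {  # hand-set 2-D stand-ins for learned KG embeddings
    "FilmA": (0.5, 0.5),
    "DirectorX": (1.0, 0.0),
    "Year1999": (0.0, 1.0),
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def core_relation_paths(topic, kg):
    # each path links the topic entity to a different ungrounded entity
    return [(rel, tail) for head, rel, tail in kg if head == topic]

def attach_constraint(paths, constraint):
    # associate the constraint with every core relation path -> query graphs
    return [{"path": p, "constraint": constraint} for p in paths]

def rank(question_emb, query_graphs):
    # stand-in for the CNN-based similarity scoring model
    return max(query_graphs, key=lambda g: dot(question_emb, emb[g["path"][1]]))

question_emb = (0.9, 0.1)  # toy embedding of "who directed FilmA?"
graphs = attach_constraint(core_relation_paths("FilmA", triples),
                           "answer is a Person")
best = rank(question_emb, graphs)
print(best["path"][1])  # -> DirectorX
```

The structure mirrors the claim language only loosely; it is meant to make the Saxena/Bao division of labor (embedding-based scoring vs. constraint-based query-graph construction) concrete, not to reproduce either reference.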

Prosecution Timeline

Nov 29, 2021: Application Filed
Aug 09, 2025: Non-Final Rejection — §103
Oct 07, 2025: Response Filed
Jan 06, 2026: Final Rejection — §103
Mar 06, 2026: Request for Continued Examination
Mar 14, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)
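The elapsed intervals are easy to recompute from the docketed dates above; a minimal sketch using Python's standard datetime arithmetic:

```python
from datetime import date

# Elapsed time between docketed events (dates from the timeline above).
filed = date(2021, 11, 29)
final_rejection = date(2026, 1, 6)
rce = date(2026, 3, 6)
current_oa = date(2026, 3, 21)

print((current_oa - filed).days, "days from filing to current OA")       # 1573
print((rce - final_rejection).days, "days from final rejection to RCE")  # 59
```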

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586093
SYSTEMS AND METHODS FOR FACILITATING NETWORK CONTENT GENERATION VIA A DYNAMIC MULTI-MODEL APPROACH
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573477
MOLECULAR STRUCTURE ACQUISITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12570281
METHOD FOR EVALUATING DRIVING RISK LEVEL IN TUNNEL BASED ON VEHICLE BUS DATA AND SYSTEM THEREFOR
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12554964
CIRCUIT FOR HANDLING PROCESSING WITH OUTLIERS
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12547873
METHOD AND APPARATUS WITH NEURAL NETWORK INFERENCE OPTIMIZATION IMPLEMENTATION
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 91% (+21.0%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
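The 91% figure appears to be the base grant probability plus the interview lift taken as percentage points; assuming that is how the dashboard combines the two numbers:

```python
# The "with interview" projection equals the base grant probability plus
# the interview lift as percentage points (an assumption about how the
# dashboard combines the two figures shown above).
base_grant_probability = 70.0  # %, career allow rate
interview_lift = 21.0          # percentage points

with_interview = base_grant_probability + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 91%
```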
