Prosecution Insights
Last updated: April 19, 2026
Application No. 17/986,782

SYSTEM AND METHOD OF GENERATING KNOWLEDGE GRAPH AND SYSTEM AND METHOD OF USING THEREOF

Non-Final OA §102
Filed: Nov 14, 2022
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: National Cheng Kung University
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% (above average; +13.9% vs TC avg)
491 granted / 647 resolved

Interview Lift: +10.3% (moderate lift; based on resolved cases with interview)

Typical Timeline: 2y 10m average prosecution; 31 applications currently pending

Career History: 678 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 647 resolved cases.
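As a sanity check, the four deltas can be inverted to recover the implied Tech Center averages. A minimal sketch, assuming each delta is a simple difference between the examiner's per-statute rate and the TC average:

```python
# Per-statute rates and deltas from the table above; the only assumption
# is that delta = examiner_rate - tc_average (in percentage points).
rates = {"101": (7.6, -32.4), "102": (29.5, -10.5),
         "103": (49.2, +9.2), "112": (3.5, -36.5)}

tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)  # every statute implies the same TC average of 40.0%
```

Under that assumption, all four statutes point to the same implied TC average, which suggests the deltas were computed against a single baseline figure.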

Office Action

§102
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

3. Claims 3-4 and 8 have been amended; claims 1-2, 5-7, and 10 have been cancelled.

Response to Arguments

4. Applicant's arguments regarding the claim objections, § 101, and § 112 have been reviewed and are accepted based on the amendments. Regarding § 101, the claims present limitations that transform the abstract idea into patent-eligible subject matter. The claims require at least obtaining documents, performing complex linguistic analysis on the document to generate and store a knowledge graph, and performing natural language understanding on received input questions to linguistically analyze the question and compare it to a stored knowledge graph to find a match and output a reply, which integrates the abstract idea into a practical application and is patent-eligible subject matter. Applicant's arguments regarding § 102 have been considered but are moot based on the new grounds of rejection responsive to the amendments (see rejection below).

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 3-4 and 8-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chang et al. (US 2016/0132572).

Regarding claim 3, Chang teaches:

A method of using a knowledge graph (figs 4, 8), performed by a first processing device (fig 7; 0092), comprising: performing a method of generating a knowledge graph (0032: generating and updating the knowledge dataset), wherein the method of generating a knowledge graph comprises:

obtaining a knowledge document by executing web crawler (0030: a computing device 110 to search information from multiple information sources 140A-N (which may be collectively referred to as "information sources 140"). The computing device 110 interacts with a server 120, or some other computing device, over a network to access the information. 0044: the information sources 140 are provided as input to the data knowledge system 204. In other words, the data knowledge system may access the information sources 140 over, for example, a network and retrieve content therefrom. The content includes data in different formats. Generally, the information sources 140 may be categorized in multiple categories such as structured input 202A, semi-structured input 202B, and unstructured input 202C. The structured input 202A represents source files storing structured data, such as databases, tables, and other source files. Semi-structured input 202B represents source files storing semi-structured data such as XML, HTML, JSON, and other source files. The unstructured input 202C represents source files storing unstructured data, such as books, journals, documents, metadata, word-processor documents, a TXT file, a PDF file, and other source files. 0045: Furthermore, content (e.g., data) of the information sources 140 typically includes text, such as natural language text. However, other types of content can be available. For example, the information sources 140 can include images, audio, video, or other multimedia content. Non-text content can be translated into text using various techniques. For example, optical character recognition, image recognition, machine learning, captioning, tagging, speech-to-text, and other techniques are available to convert the non-text content into text);

performing word segmentation and part-of-speech tagging on the knowledge document to generate a plurality of tagged words (0052: In an embodiment, if the input includes natural language text (e.g., a string), the triple extractor 304 can apply a segmentation, tokenization, and parsing process to detect words and apply part-of-speech tagging and a noun/verb/adjective expression tagger to the words);

obtaining a plurality of sentences from the tagged words according to a default sentence pattern, wherein each of the sentences comprises a subject, an adverb, a verb and an object, and the adverb corresponding to an adverb type (0036: the triple includes three elements: a subject, a predicate, and an object (S,P,O). The predicate can indicate the association or relationship between the subject and the object. 0066: a triple extractor of the data knowledge system can receive and process the stratified data. If natural language text is processed, the triple extractor can tokenize, parse, and speech tag the text to determine sentences, words within the sentences, word types (e.g., noun, verb, adjective, etc.), and phrase expressions (e.g., noun phrases containing a noun and a proximate such as a consecutive word). The triple extractor can consider a set of words or phrase expressions (e.g., within a sentence) and apply a set of rules to generate a triple (or a number of triples). The triple can include a subject, predicate, and object based on the applied rules. 0053: The grammar covers rules for typical noun-verb-noun relationships and noun-verb-adjective relationships. In addition, the grammar accounts for various prepositions, described nouns (e.g., blue cat), adverb relationships (e.g., runs fast) and possessive nouns (e.g., John's car));

for each of the sentences, performing: using the subject as a first entity of a triple (0040: referring back to the "triple 1" and "triple 2" examples, the mention dictionary would include {Subject: {John: triple 1; Susan: triple 2}; Object: {raw fish: triple 1; cooked fish: triple 2}}. In this example, the mention dictionary lists the different phrases used in the subjects and objects of the triples and identifies, for each phrase, the associated triple); using the object as a second entity of the triple (0040); and using the adverb type and the verb as a relation in the triple (0036; 0053; 0066);

forming a knowledge graph using the triple corresponding to each of the sentences (0047: the knowledge dataset 130 includes triples 206A);

tagging the knowledge graph with a field header according to a knowledge field of the knowledge graph (0023: multiple information sources that include a number of text files, PDF documents, web pages, and database tables storing content about different topics related to cities in the U.S. By analyzing the associations between the words found in the content of the different sources, various topics represented by the associations can be identified, such as population sizes, ethnicities, age groups, occupations, etc. The dataset can store information related to the associations);

obtaining an input question (0042: In an illustrative use case, the user of the computing device 110 may input at the personal assistant application the question of "what is the size of the population of San Diego?");

performing a natural language understanding procedure on the input question to obtain a question set, wherein the question set comprises a question subject, a question object and a question relation of the input question (fig 6; 0031; 0084-0085; 0060: Furthermore, the data knowledge system 204 also implements an attribute query resolver 308. The attribute query resolver 308 can be configured to return triples from the knowledge dataset 130 in response to queries; triple of (John, eats, fish). Fish, the object in this triple, can be a subject in another triple such as (fish, can be, raw). Thus, the attribute query resolver 308 can expand the queries by considering both triples through this object-to-subject transition);

searching for a target knowledge graph matching the question subject from a plurality of candidate knowledge graphs generated according to the method of generating a knowledge graph (figs 6, 8-10; 0024: in this way, triples can capture associations between words from the content of the different sources and can support queries to the dataset. For example, a query that includes San Diego as an attribute can be matched to a triple that uses San Diego as a subject. 0047; 0060);

determining a first target entity in the target knowledge graph matching the question subject, and a second target entity in the target knowledge graph matching the question object (0041: By storing the triples (e.g., in an RDF format), the entity dictionary, and the mention dictionary, information contained therein can be retrieved via a query language (SQL or NoSQL statements). As such, the knowledge dataset 130 can include a searchable version of knowledge about the data 142 from the various information sources 140, regardless of the type of the data 142 and the type and number of the information sources 140. Querying the knowledge dataset 130 can include returning, in response to a query keyword, triples. These triples are found not only based on matching the query keyword, but also based on exploring the associations or relationships from the entity and mention dictionaries. claim 5: identifying a first triple of the dataset by matching the natural language phrase of the query with the identified first triple's subject or object; identifying a second triple of the dataset based on the identified first triple's subject or object matching the identified second triple's subject or object; and returning a query result comprising data from the source files based on the identified first triple and the identified second triple);

determining a target relation connecting the first target entity and the second target entity (0041; claim 5); and

outputting a question reply according to the first target entity, the second target entity and the target relation (fig 6; 0060: response to queries; 0089: At operation 620, a query result is returned; claim 5).

Regarding claim 4, Chang teaches:

The method of using a knowledge graph according to claim 3, wherein the question set further comprises a question intention, and outputting the question reply according to the first target entity, the second target entity and the target relation comprises: matching the question intention with the first target entity, the second target entity and the target relation to form an initial reply (fig 6; 0060; 0084-0085; 0089; claim 5); and performing a natural language generation procedure on the initial reply to generate the question reply (0042: In addition, based on stored associations in the triples and the dictionaries, the server 120 may construct a number of other answers and suggestions related to the elements of the question. For example, the server 120 may respond with a question asking the user "are you interested in the percentage increase of the population of San Diego over the last three decades?" or any other relevant topic to "San Diego," "cities in the U.S.", or "population sizes.").

Regarding claim 8, Chang teaches:

A system of using knowledge graph, comprising: a memory storing a plurality of candidate knowledge graphs generated according to a method of generating a knowledge graph (fig 7: memory; 0030; 0092); a user interface configured to obtain an input question and present a question reply corresponding to the input question (figs 1, 7); and a first processing device connected to the memory and the user interface (figs 1, 7; 0092), and configured to perform: performing a method of generating a knowledge graph, wherein the method of generating a knowledge graph comprises: obtaining a knowledge document by executing web crawler; performing word segmentation and part-of-speech tagging on the knowledge document to generate a plurality of tagged words; obtaining a plurality of sentences from the tagged words according to a default sentence pattern, wherein each of the sentences comprises a subject, an adverb, a verb and an object, and the adverb corresponding to an adverb type; for each of the sentences, performing: using the subject as a first entity of a triple; using the object as a second entity of the triple; and using the adverb type and the verb as a relation in the triple; forming a knowledge graph using the triple corresponding to each of the sentences; tagging the knowledge graph with a field header according to a knowledge field of the knowledge graph; performing a natural language understanding procedure on the input question to obtain a question set, wherein the question set comprises a question subject, a question object and a question relation of the input question; searching for a target knowledge graph matching the question subject from the plurality of candidate knowledge graphs; determining a first target entity in the target knowledge graph matching the question subject, and a second target entity in the target knowledge graph matching the question object; determining a target relation connecting the first target entity and the second target entity; and outputting the question reply according to the first target entity, the second target entity and the target relation.

Claim 8 recites limitations similar to claim 3 and is rejected for similar rationale and reasoning. Claim 9 recites limitations similar to claim 4 and is rejected for similar rationale and reasoning.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS, whose telephone number is (571) 270-7541. The examiner can normally be reached Monday-Friday, 9-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655
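The pipeline anticipated in claim 3 (word segmentation and POS tagging, matching a default subject-adverb-verb-object sentence pattern, folding the adverb type and the verb into the triple's relation, then answering questions by matching entities in the graph) can be sketched roughly as follows. This is a minimal illustration only: the toy lexicon, the pattern encoding, and all function names are hypothetical stand-ins, not the applicant's or Chang's implementation.

```python
# A tiny lexicon stands in for real word segmentation and POS tagging.
POS = {
    "John": "NOUN", "fish": "NOUN",
    "eats": "VERB", "swims": "VERB",
    "often": "ADV", "quickly": "ADV",
}

def tag(sentence):
    """Word segmentation + part-of-speech tagging (toy stand-in)."""
    return [(w, POS.get(w, "X")) for w in sentence.split()]

def extract_triple(tagged):
    """Apply the default subject-adverb-verb-object sentence pattern:
    subject -> first entity, object -> second entity,
    adverb type + verb -> relation."""
    if [t for _, t in tagged] == ["NOUN", "ADV", "VERB", "NOUN"]:
        subj, adv, verb, obj = (w for w, _ in tagged)
        return (subj, f"{adv}:{verb}", obj)
    return None  # sentence does not match the default pattern

def answer(graph, q_subject, q_object):
    """Match the question subject/object to entities, find the connecting
    relation, and form a reply."""
    for s, rel, o in graph:
        if s == q_subject and o == q_object:
            adv_type, verb = rel.split(":")
            return f"{s} {verb} {o} ({adv_type})"
    return None

docs = ["John often eats fish", "John swims"]  # toy "crawled" documents
graph = [t for t in (extract_triple(tag(s)) for s in docs) if t]

print(graph)                          # [('John', 'often:eats', 'fish')]
print(answer(graph, "John", "fish"))  # John eats fish (often)
```

Note how only the first toy sentence matches the four-slot pattern; the second is silently skipped, mirroring the claim's reliance on a fixed default sentence pattern rather than full parsing.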

Prosecution Timeline

Nov 14, 2022: Application Filed
Feb 07, 2025: Non-Final Rejection (§102)
May 13, 2025: Examiner Interview Summary
May 13, 2025: Applicant Interview (Telephonic)
Jun 10, 2025: Response Filed
Sep 27, 2025: Final Rejection (§102)
Dec 16, 2025: Request for Continued Examination
Jan 13, 2026: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection (§102) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573376: Dynamic Language and Command Recognition
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 647 resolved cases by this examiner. Grant probability derived from career allow rate.
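The headline figures compose from the career stats above. A short sketch of the assumed arithmetic (the page states only that grant probability derives from the career allow rate; treating the interview lift as additive in percentage points is this sketch's assumption):

```python
# Assumed derivation of the dashboard's headline numbers.
granted, resolved = 491, 647
allow_rate = round(100 * granted / resolved, 1)  # career allow rate, in %
interview_lift = 10.3                            # percentage points (assumed additive)

print(allow_rate)                          # 75.9, displayed as 76%
print(round(allow_rate + interview_lift))  # 86, the "With Interview" figure
```

Both displayed figures are consistent with this simple composition, though the tool may apply a more sophisticated model internally.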
