Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is responsive to the Amendment filed 02/10/2026.
Claims 1-12 and 15-20 are pending in this application. This action is made Final.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-12 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hakkani-Tur et al. (US Pub. No. 2015/0178273) in view of Tonkin et al. (US Pub. No. 2017/0061001).
As to claims 1, 19, 20, Hakkani-Tur teaches a computer-implemented method comprising:
receiving data, via a user input provided by a user, indicating a configuration for data crawler (i.e. FIG. 2 illustrates an example of relation detection based on the semantic ontology provided by a knowledge graph. Relation detection aims to determine which relations in the part of the knowledge graph related to the conversational input domain have been invoked in the user conversational inputs. The example shows two sample conversational inputs. The first conversational input 202 a seeks information about movies by Roberto Benigni, the director. The second conversational input 202 b seeks to identify the director of “Life is Beautiful,” a specific movie. Both conversational inputs invoke the Director relation in the knowledge graph, but from different starting points as represented by the different graphical representations of the Director triple store 204 a, 204 b, [0029]);
wherein the data comprises a uniform resource locator (URL) or a path to a folder, the folder that includes programmatic specification documents (i.e. Training examples for a relation are mined from the web by searching documents 316 (e.g., web pages) for content containing the entities in the two nodes linked by that relation (i.e., the entity pair). The training examples are taken or derived from the document snippets 318 returned in the search results. Additional training examples are mined from the query click logs. The query click logs contain a list of search queries 320 associated with the uniform resource locators (URLs) 322 returned by the search query and clicked on by the user. Queries 324 containing at least one entity from the entity pair and associated with a URL of a web page containing entities from the entity pair may be selected as training examples, [0032]);
extracting, from a repository of programmatic specifications, by the data crawler, representations of a subset of programmatic specifications (i.e. Next, an entity pair extraction operation 404 extracts all possible entity pairs in a given domain that are connected with a relation from the knowledge graph. Embodiments of the relation detection model training method operate on each relation separately. In other words, all possible entity pairs connected with a specific relation may be extracted from the knowledge graph and processed into annotated training data before moving on to the next relation in the knowledge graph. Extracting each entity pair connected by a relation provides the maximum amount of training examples for use in training the relation detection model; however, fewer than all possible entity pairs may be extracted for further processing, [0037]);
wherein the subset of programmatic specifications includes specification files from service folders, type definitions, and properties of the type definitions (i.e. An entity pair search operation 406 uses the extracted entity pairs to mine patterns used in natural language realization of the relation by performing a search of the general content of the web and/or a domain specific knowledge store, such as a backend knowledge base, [0039]; A snippet gathering operation 408 collects the snippets from documents that are returned by the entity pair search and contain both entities from the entity pair, [0040]; An initial parsing operation 412 that parses the returned snippets into parse trees using a natural language parser ... The initially parsed snippet 602 returned from the entity pair search based on the Director (Titanic, James Cameron) triple with the separate constituent elements isolated in separate boxes is shown in FIG. 6, [0043]);
generating a knowledge graph model of the subset of the programmatic specifications (i.e. A dependency conversion operation 414 converts the parse trees into dependency trees, [0044]; A snippet fragment selection operation 416 retains the word sequence from the smallest dependency sub-tree that includes both related entities, [0045]; A candidate pattern 608 substituting the tokens (i.e., “Director-name” and “Movie-name”) from the Director(Director-name, Movie-name) triple for the corresponding entities (i.e., “James Cameron” and “Titanic”) is shown in FIG. 6, [0046]; Some snippets may invoke more than one relation because some entities are connected with more than one relation, and some entities are related to other entities as well, [0047]);
refining the knowledge graph model by applying a set of classifiers and a set of refiners to the knowledge graph model, wherein nodes in the knowledge graph model are classified using each of the set of classifiers and the set of refiners to obtain a refined knowledge graph model (i.e. A full property refinement operation 420 implements one of two algorithms used to refine the annotations of multi-relation snippets. The property retrieval operation 422 retrieves all associated properties (i.e., relations and entities) for the searched entity from the knowledge base. Using the RDF segment from FIG. 1 as an example, the resulting property list includes “Roberto Benigni” as Cast, “Drama” as Genre, “1997” as Release Year, and “Oscar, Best actor” as “Award.” A property comparison operation 424 compares the properties from the list against the multi-relation snippets. If a match is found, the multi-relation snippet is labeled with the matching relation, [0048]);
iteratively performing a series of steps on the knowledge graph model, each step of the series of steps includes a classification sub-step and a refinement sub-step (i.e. A bootstrap refinement operation 426 implements the second algorithm used to refine the annotations of multi-relation snippets. A classifier training operation 428 trains a relation classifier with the mined data and their annotations. In a classifier labeling operation 430, the relation classifier is used to label the multi-relation snippets with additional relations. Only relations r with a high probability of appearance in the conversational input u are included, which optimizes a threshold t for finding the relation r with the probability of being the most probable relation given the conversational input P(r|u) according to the classifier on a development data set. The bootstrap refinement operation may be iteratively performed to find more relations in multi-relation snippets, [0049]);
determining similarity between:
a refined knowledge graph model output by a final step of the series of steps and a knowledge graph model output by a final step of the series of steps in the immediately previous iteration, by comparing the refined knowledge graph model output by the final step of the series of steps and the knowledge graph model output by the final step of the series of steps in the immediately previous iteration (i.e. Models including the “1 iteration” designation used a single iteration of the bootstrap algorithm to refine and extend the labels of training examples. In other words, the training set is labeled with first model and then re-trained, [0058]);
determining stability of the knowledge graph model based on the determined similarity between the refined knowledge graph model output by the final step of the series of steps and the knowledge graph model output by the final step of the series of steps in the immediately previous iteration (i.e. The relation detection model may include combinations of different types of training data and/or the results obtained using previously-trained relation detection models. The training data may be extended with additional annotations using one or more iterations of bootstrap refinement, [0054]); and
generating an ontology based on determined stability of the knowledge graph, wherein the ontology comprises a formal conceptual data model of resources available in programming specifications such as application programming interfaces (APIs) of a service provider (i.e. The high relevance query identification operation examines the URLs of the snippets that contain the two entities that appear in the search results for the related entity pairs Mab to identify and selects related queries, [0051]; A query annotation operation 434 labels the selected queries from the link-based query matching operation with the relation to use as training examples for the relation detection model. Once the training examples of the desired types are collected and labeled, a model building operation 436 builds a statistical relation detection model from the labeled training data, [0054]).
Hakkani-Tur does not teach "comparing the refined knowledge graph model output by the final step of the series of steps and the knowledge graph model output by the final step of the series of steps in the immediately previous iteration".
Tonkin teaches "comparing the refined knowledge graph model output by the final step of the series of steps and the knowledge graph model output by the final step of the series of steps in the immediately previous iteration" (i.e. As part of this process, the plurality of selected ontology terms can include a first ontology term from a first ontology and a second ontology term from a second ontology. In this case, the electronic process device adds the first and second ontology terms to respective first and second groups and progressively adds ontology terms from the first ontology to the first group and from the second ontology to the second group until the first and second group include aligned ontology terms. Thus, the process includes creating groups and then merging the groups based on alignments between ontology terms in the groups. This allows a pruned ontology to be created that spans two or more different ontologies, [0237]; The aligned ontology terms can be determined in any suitable manner. For example, this can be performed by comparing ontology terms in the first and second groups to identify aligned ontology terms or by determining aligned ontology terms in accordance with user input commands. Thus, this could be done automatically based on a similarity of the ontology terms, for example using an alignment module, as will be described in more detail below, or alternatively could be performed manually, [0238]; Ontology Aligner module 1340. The Aligner module takes two or more ontologies and uses a number of techniques to align the concepts in the various ontologies, either with each other or with a specified target ontology. The techniques utilize the indexes created by the indexer module to find concepts which are semantically similar. Each data property and concept is compared using the semantic matcher module. It refines the matching based upon the ontology structure and the data properties, [0326]).
It would have been obvious to one of ordinary skill in the art, having the teachings of Hakkani-Tur and Tonkin before the effective filing date of the claimed invention, to modify the system of Hakkani-Tur to include the limitations as taught by Tonkin. One of ordinary skill in the art would be motivated to make this combination in order to compare ontology terms in the first and second groups to identify aligned ontology terms, in view of Tonkin ([0237]), as doing so would give the added benefit of effectively merging the groups based on alignments between ontology terms in the groups, as taught by Tonkin ([0237]).
As per claim 2, Hakkani-Tur teaches the method of claim 1, wherein:
the knowledge graph model includes nodes and edges (i.e. the knowledge base includes the triple Director(Life is Beautiful, Roberto Benigni) formed by the movie title node 310, the director name node 312, and the Director relation 314 between the two nodes, [0031]);
a first node represents a first object type (i.e. each node contains an entity and has one or more links to the documents (e.g., web pages) from which the node is populated, [0025]);
a second node represents a second object type (i.e. The RDF segment 100 centers on the title node 102 for the movie Life is Beautiful. The related nodes 104 show that Life is Beautiful is a drama directed by Roberto Benigni in 1997, along with other related information, [0025]);
attributes of the first node represent attributes of the first object type (i.e. the knowledge base includes the triple Director(Life is Beautiful, Roberto Benigni) formed by the movie title node 310, the director name node 312, and the Director relation 314 between the two nodes, [0031]); and
an edge between the first node and a second node represents an attribute of the first object type that references the second object type (i.e.for a pair of related entities, one can enhance the link of the relation in the knowledge graph with a set of natural language patterns that are commonly used to refer to that relation, [0034]).
As per claim 3, Hakkani-Tur teaches the method of claim 1, wherein:
classifying the nodes in the knowledge graph model comprises classifying a node as matching a category (i.e. A classifier training operation 428 trains a relation classifier with the mined data and their annotations. In a classifier labeling operation 430, the relation classifier is used to label the multi-relation snippets with additional relations, [0049]); and
refining the knowledge graph model comprises (i.e. A full property refinement operation 420 implements one of two algorithms used to refine the annotations of multi-relation snippets. The property retrieval operation 422 retrieves all associated properties (i.e., relations and entities) for the searched entity from the knowledge base. Using the RDF segment from FIG. 1 as an example, the resulting property list includes “Roberto Benigni” as Cast, “Drama” as Genre, “1997” as Release Year, and “Oscar, Best actor” as “Award.” A property comparison operation 424 compares the properties from the list against the multi-relation snippets. If a match is found, the multi-relation snippet is labeled with the matching relation, [0048]):
in response to classifying the node as matching the category, applying refinement policy for the category (i.e. A bootstrap refinement operation 426 implements the second algorithm used to refine the annotations of multi-relation snippets. A classifier training operation 428 trains a relation classifier with the mined data and their annotations. In a classifier labeling operation 430, the relation classifier is used to label the multi-relation snippets with additional relations. Only relations r with a high probability of appearance in the conversational input u are included, which optimizes a threshold t for finding the relation r with the probability of being the most probable relation given the conversational input P(r|u) according to the classifier on a development data set. The bootstrap refinement operation may be iteratively performed to find more relations in multi-relation snippets, [0049]).
As per claim 4, Tonkin teaches the method of claim 3, wherein applying the refinement policy for the category comprises removing the node from the knowledge graph model (i.e. This can be performed in any appropriate manner but typically involves having the electronic processing device use the ontology structure, to identify ontology terms that are related to ontology terms in the group. These can then be filtered to remove related ontology terms that are not of interest, [0219]; The combined ontology would contain many irrelevant concepts which would need to be removed, [0458]).
As per claim 5, Tonkin teaches the method of claim 3, wherein applying the refinement policy for the category comprises collapsing the node into another node of the knowledge graph model (i.e. At step 1020, this can be used to refine the alignments, allowing these to be stored to represent the alignment between the source and target ontologies at step 1025. This can be in the form of a merged ontology, or alternatively an alignment index, [0293]).
As per claim 6, Tonkin teaches the method of claim 5, wherein collapsing the node into the another node of the knowledge graph model comprises collapsing attributes of the node into another node (i.e. The first step is to load the configuration file specifying parameters to be used in the alignment and merge. There are a number of metadata parameters which can be set, [0809]).
As per claim 7, Tonkin teaches the method of claim 5, wherein collapsing the node into the another node of the knowledge graph model comprises connecting edges of the node to the another node (i.e. Once the merge parameters have been determined then it is a simple matter to merge the Classes, Data Properties and Object Properties of the two ontologies, [0806]).
As per claim 8, Hakkani-Tur teaches the method of claim 1, wherein classifying the nodes in the knowledge graph model comprises evaluating the knowledge graph model using the set of classifiers, each classifier of the set of classifiers being associated with a type category (i.e. A property comparison operation 424 compares the properties from the list against the multi-relation snippets. If a match is found, the multi-relation snippet is labeled with the matching relation, [0048]; A bootstrap refinement operation 426 implements the second algorithm used to refine the annotations of multi-relation snippets. A classifier training operation 428 trains a relation classifier with the mined data and their annotations. In a classifier labeling operation 430, the relation classifier is used to label the multi-relation snippets with additional relations. Only relations r with a high probability of appearance in the conversational input u are included, which optimizes a threshold t for finding the relation r with the probability of being the most probable relation given the conversational input P(r|u) according to the classifier on a development data set. The bootstrap refinement operation may be iteratively performed to find more relations in multi-relation snippets, [0049]).
As per claim 9, Hakkani-Tur teaches the method of claim 8, comprising:
receiving policy data identifying the set of classifiers for evaluating the knowledge graph model (i.e. The relation detection model may include combinations of different types of training data and/or the results obtained using previously-trained relation detection models. The training data may be extended with additional annotations using one or more iterations of bootstrap refinement. Because each conversational input can invoke more than one relation, relation detection may be considered a multi-class, multi-label classification problem and a classifier is used to train the relation detection model from the labeled training data using word unigrams, bigrams and trigrams as features, [0054]).
As per claim 10, Hakkani-Tur teaches the method of claim 9, wherein the policy data is received as user input (i.e. A number of unsupervised models, described below, were trained using the training data set with various embodiments of the relation detection model training solution. The Supervised model was trained using 2,334 patterns manually-labeled with one of the seven relations, [0055]).
As per claim 11, Tonkin teaches the method of claim 8, wherein refining the knowledge graph model comprises:
evaluating the knowledge graph model using a first classifier of the set of classifiers (i.e. Though the desired ontology derivative was generally based on a subset extraction such as those above, it was then often further manipulated to better suit the needs of the application (i.e. classes added, classes removed, properties removed, properties added, etc.), [0451]);
based on the evaluation using the first classifier, removing nodes of the knowledge graph model to obtain a first refined knowledge graph model (i.e. This can be performed in any appropriate manner but typically involves having the electronic processing device use the ontology structure, to identify ontology terms that are related to ontology terms in the group. These can then be filtered to remove related ontology terms that are not of interest, [0219]; The combined ontology would contain many irrelevant concepts which would need to be removed, [0458]);
evaluating the knowledge graph model using a second classifier of the set of classifiers (i.e. Though the desired ontology derivative was generally based on a subset extraction such as those above, it was then often further manipulated to better suit the needs of the application (i.e. classes added, classes removed, properties removed, properties added, etc.), [0451]); and
based on the evaluation using the second classifier, removing nodes of the knowledge graph model to obtain a second refined knowledge graph model (i.e. This can be performed in any appropriate manner but typically involves having the electronic processing device use the ontology structure, to identify ontology terms that are related to ontology terms in the group. These can then be filtered to remove related ontology terms that are not of interest, [0219]; The combined ontology would contain many irrelevant concepts which would need to be removed, [0458]).
As per claim 12, Hakkani-Tur teaches the method of claim 1, wherein refining the knowledge graph model comprises:
iteratively classifying nodes of the knowledge graph model and refining the knowledge graph model based on the classifications of the nodes to obtain the refined knowledge graph model (i.e. The training data may be extended with additional annotations using one or more iterations of bootstrap refinement. Because each conversational input can invoke more than one relation, relation detection may be considered a multi-class, multi-label classification problem and a classifier is used to train the relation detection model from the labeled training data using word unigrams, bigrams and trigrams as features, [0054]; A bootstrap refinement operation 426 implements the second algorithms used to refine the annotations of multi-relation snippets. A classifier training operation 428 trains a relation classifier with the mined data and their annotations, [0049]).
As per claim 15, Hakkani-Tur teaches the method of claim 1, wherein the programmatic specifications comprise application programming interface (API) specification (i.e. FIG. 2 illustrates an example of relation detection based on the semantic ontology provided by a knowledge graph. Relation detection aims to determine which relations in the part of the knowledge graph related to the conversational input domain have been invoked in the user conversational inputs. The example shows two sample conversational inputs. The first conversational input 202 a seeks information about movies by Roberto Benigni, the director. The second conversational input 202 b seeks to identify the director of “Life is Beautiful,” a specific movie. Both conversational inputs invoke the Director relation in the knowledge graph, but from different starting points as represented by the different graphical representations of the Director triple store 204 a, 204 b, [0029]).
As per claim 16, Hakkani-Tur teaches the method of claim 1, wherein the programmatic specifications comprise a database of tables (i.e. As stated above, a number of program modules and data files may be stored in the system memory 704. While executing on the processing unit 702, the software applications 720 may perform processes including, but not limited to, one or more of the stages of the relation detection model training method 400. Other program modules that may be used in accordance with embodiments of the present invention may include electronic mail and contacts applications, word processing applications, spreadsheet applications, database applications, slide presentation applications, drawing or computer-aided application programs, etc., [0067]).
As per claim 17, Hakkani-Tur teaches the method of claim 1, comprising a visual representation of the ontology on a user interface (i.e. FIG. 2 illustrates an example of relation detection based on the semantic ontology provided by a knowledge graph. Relation detection aims to determine which relations in the part of the knowledge graph related to the conversational input domain have been invoked in the user conversational inputs. The example shows two sample conversational inputs. The first conversational input 202 a seeks information about movies by Roberto Benigni, the director. The second conversational input 202 b seeks to identify the director of “Life is Beautiful,” a specific movie. Both conversational inputs invoke the Director relation in the knowledge graph, but from different starting points as represented by the different graphical representations of the Director triple store 204 a, 204 b, [0029]).
As per claim 18, Hakkani-Tur teaches the method of claim 1, wherein the data indicating the configuration for the data crawler is received as user input (i.e. Training examples for a relation are mined from the web by searching documents 316 (e.g., web pages) for content containing the entities in the two nodes linked by that relation (i.e., the entity pair). The training examples are taken or derived from the document snippets 318 returned in the search results. Additional training examples are mined from the query click logs. The query click logs contain a list of search queries 320 associated with the uniform resource locators (URLs) 322 returned by the search query and clicked on by the user. Queries 324 containing at least one entity from the entity pair and associated with a URL of a web page containing entities from the entity pair may be selected as training examples, [0032]).
Response to Arguments
Applicant's arguments with respect to claims 1-12 and 15-20 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIRANDA LE whose telephone number is (571)272-4112. The examiner can normally be reached M-F 7AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached on 571-272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MIRANDA LE/ Primary Examiner, Art Unit 2153