DETAILED ACTION
Receipt of Applicant’s Amendment, filed February 19, 2026, is acknowledged.
Claims 1, 9, and 17 were amended.
Claims 4, 6, 14, and 15 were cancelled.
Claims 1-3, 5, 7-13, and 16-20 are pending in this Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 7-9, 11-13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shaw [9183323] in view of Schneider [A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English].
With regard to claim 1, Shaw teaches A computer-implemented method for generating transcription insight data, the method comprising:
receiving an initial set of search results (Shaw, Column 6, line 52 “a group of search results 2005”; Figure 2, 2005; Column 7, lines 45-47 “The system receives a search result for a query, where the search result includes a link… to a resource (step 3010)”; Figure 3, 3010), the initial set of search results generated based upon (Shaw, Column 7, lines 62-64 “the search results 2005 are received corresponding to one or more resources (e.g., web pages) that contain information relevant to the query 2010”) a query parameter (Shaw, Column 7, lines 41-42 “the system performs the process 3000 after the user submits the query 2010 and requests that the search system 1014 conduct a search”; Figure 2, 2010 see query: “are gm foods dangerous to eat”);
receiving a set of sentences (Shaw, Column 8, lines 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”) from the initial set of search results as the resources (Id);
generating a set [[of parse trees]] as parsing (Shaw, Column 9, lines 45-58 “Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text. The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject) as well as relation paths (e.g., the relation path between a main verb and an adjective of the main noun of the object of a sentence). A dependency parser is described, for example, in Gerold Schneider, "A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English," Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003.”) for each sentence in the set of sentences (Shaw, Column 8, lines 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”), wherein [[a parse tree comprising a plurality of terms]] as the terms in the query or clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
generating one or more dependency graphs for [[one or more parse trees from the set of parse trees]] (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), wherein a dependency graphs comprises:
a keyword as keywords in the query 2010 (Shaw, Column 7, lines 54-56 “the search engine 1030 can use inverted-index posting lists for keywords in the query 2010 to find suitable search results.”) associated with the query parameter as query, e.g. query 2010 input by the user (Shaw, Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and a clause”; Column 10, lines 6-11 “the suggested query phrase “GM food are not just potentially dangerous to individuals - They are also a threat to food diversity” is one example of a clause that might have high similarity score relative to the query 2010 “are gm foods dangerous to eat”) [[as a root node]] as the root node in the dependency parser applied to the query (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”); and
[[a plurality of nodes]] as the nodes in the dependency parser applied to the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”) associated with the plurality of terms as the words in the identified clause (Shaw, Column 9, lines 4-18; Column 9, lines 20-22) from [[an associated parse tree]] as the tree for the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”);
determining, for the one or more dependency graphs (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), a plurality of scores as the similarity measure (Shaw, Column 9, lines 4-18; Column 9, lines 20-22 “The system calculates a similarity measure for each identified clause, where the similarity measure is a measure of the similarity between the identified clause and the query”), wherein a score as a similarity score between the query and a specific clause (Id) from the plurality of scores as the similarity measures generated for each clause (Id) is associated with [[a node from the plurality of nodes in the dependency graph]] (Shaw, Column 9, lines 43-48 “the similarity measure is calculated using a function that evaluates the linguistic relations between words. Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text”), and wherein the score as the similarity measure (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) represents a relationship as the relations (Shaw, Column 9, lines 48-52 “The parser can identify linguistic relations… as well as relation paths”) between a term as words in the clause (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) [[associated with the node and the query parameter]] as the words of the query (Id);
identifying similar as the linguistic relations, e.g. similarity measure between the query and the clause (Shaw, Column 9, lines 48-51 “The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject”; Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause.”; Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and the clause”) nodes in the one or more dependency graphs as the parsed dependency structure (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
aggregating scores for the similar nodes as the sum of the weights (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”);
generating, based upon the plurality of scores as the similarity measure (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”), insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) for the initial set of search results as the resource (Shaw, Column 10, lines 36-38 “The system then identifies a section of contiguous text from the resource that includes the suggested query phrase”), wherein the insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) provides contextual information for the initial set of search results (Shaw, Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource”), and wherein the insight data comprises a ranked list of node values (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) based upon the aggregated scores (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”); and
displaying, in a user interface (Shaw, Figure 2), at least a subset of the initial set of search results as the search results (Shaw, Figure 2, 2005; Column 6, lines 51-55) and the insight data as the suggested alternative query phrases (Shaw, Figure 2, 2050; Column 6, lines 66-67), wherein the user interface comprises:
a search results area as the group of search results 2005 (Shaw, Figure 2, 2005; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”) of the user interface (Shaw, Figure 2), the search results area displaying the subset of initial set of search results as the three search results displayed (Shaw, Figure 2, 2005; Column 6, lines 51-52 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”); and
an insight data area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) of the user interface (Shaw, Figure 2), wherein the insight data as the suggested alternative query 2050 (Column 7, lines 17-22 “For example, in snippet 2040, the suggested alternative query phrase, "There is no evidence that GM foods are dangerous: There is no evidence that GM foods are safe." 2050 is presented in bold font to distinguish the suggested alternative query phrase 2050 from the rest of the respective snippet 2040.”) area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) is displayed in a first area as section of the display where the alternative query phrase is distinguished from the rest of the respective snippet (Shaw, Column 7, lines 15-17 “a suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query.”) proximate to as provided in context of (Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource (e.g., a snippet of content from the”), but separate from as making the alternative query phrase distinct from the snippet of the result, such as the suggested phrase being a separate interface element or appended at the end (Shaw, Column 7, lines 15-17 “a 
suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query. For example, the server system 1014 can provide the search results 2005 as HTML code or in other conventional representations that describe the web page 2000, including the URL link 2060, which allows a user to invoke the suggested query phrase 2050 as a new query.”; Column 10, lines 59-63 “Alternatively, the system can identify a suggested query phrase from analyzing all the text of the resource (as described above) and can append the identified suggested query phrase to the section of contiguous text included with the received search result.”), a second area displaying the search results area as the group of search results 2005 (Shaw, Figure 2, 2020, 2030, 2040; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005. A search result can include, for each of a number of resources, a title 2020 for the resource, a selectable link 2030 to the resource, and a snippet 2040 of content from the resource.”), the insight data area displaying the insight data as the suggested alternative query phrases (Shaw, Figure 2, 2050; Column 6, lines 66-67).
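As an illustrative aside only (not part of the record of this action): the symmetric similarity measure Shaw describes at Column 10, lines 20-23, the sum of weights of the features in the intersection of the two feature groups divided by the sum of weights of the features in their union, is a weighted Jaccard coefficient. A minimal sketch follows; the feature labels and weights are hypothetical and are not taken from Shaw.

```python
def symmetric_similarity(query_feats: dict, clause_feats: dict) -> float:
    """Weighted Jaccard: sum of weights of features shared by both groups,
    divided by the sum of weights of features in either group
    (cf. Shaw, Column 10, lines 20-23)."""
    keys = query_feats.keys() | clause_feats.keys()
    # A feature contributes to the intersection only with the weight it
    # carries in both groups; min() is 0 when it appears in just one group.
    intersection = sum(min(query_feats.get(k, 0.0), clause_feats.get(k, 0.0))
                       for k in keys)
    union = sum(max(query_feats.get(k, 0.0), clause_feats.get(k, 0.0))
                for k in keys)
    return intersection / union if union else 0.0

# Hypothetical linguistic-relation features for the query
# "are gm foods dangerous to eat" and one candidate clause;
# labels and weights are invented for illustration.
query_features = {"subj(dangerous, foods)": 1.0, "amod(foods, gm)": 0.5}
clause_features = {"subj(dangerous, foods)": 1.0, "amod(foods, gm)": 0.5,
                   "pp(dangerous, individuals)": 0.5}
print(symmetric_similarity(query_features, clause_features))  # 0.75
```

Consistent with Shaw, Column 9, lines 58-63, the score increases as more linguistic-relation features co-occur in the query and the clause.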
Shaw does not explicitly teach generating a set of parse trees for each sentence …, wherein a parse tree comprising a plurality of terms; generating one or more dependency graphs for one or more parse trees from the set of parse trees, wherein a dependency graphs comprises: the … as a root node; and a plurality of nodes associated … from an associated parse tree; determining, for the one or more dependency graphs, … is associated with a node from the plurality of nodes in the dependency graph, and… associated with the node.
Schneider teaches generating a set of parse trees (Schneider, Page 32, Figure 1. The top structure is a parse tree of the sentence “the man that came eats bananas with a fork”) for each sentence as the sentence (Id) …, wherein a parse tree comprising a plurality of terms as the words in the sentence (Id);
generating one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph) for one or more parse trees as the top structure (Id) from the set of parse trees as this is done for each sentence (Schneider, Page 31 “The parser has been trained, developed and tested on a large collection of syntactically analyzed sentences”), wherein a dependency graphs comprises:
the query parameter as a root node (Schneider, Page 32, Figure 1 see the root node “eats/V” in the dependency graph); and
a plurality of nodes associated with the plurality of terms from an associated parse tree (Schneider, page 32 Figure 1 see the terms in the nodes of the dependency graph that are from the parse tree);
determining, for the one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph), a… is associated with a node from the plurality of nodes in the dependency graph (Schneider, page 32 Figure 1 see the nodes of the dependency graph), and… a relationship between a term associated with the node (Schneider, Page 32, see the relationship links in the parse tree and the dependency graph);
identifying similar nodes as relationships (Schneider, Page 32, Section 2.3 “a hierarchy of syntactic relations between lexical heads which serves as a bridgehead to semantics.”) in the one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph);…
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have implemented the natural language parser taught by Shaw using the dependency parser taught by Schneider, as Shaw explicitly lists this dependency parser as an example of a natural language parser that is usable by the disclosed system (Shaw, Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”). The proposed combination would yield the expected results of providing a means of evaluating the linguistic relations between words and providing a means of calculating similarity (Shaw, Column 9, lines 43-63).
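For illustration only: the dependency structure relied upon above (Schneider, Page 32, Figure 1) can be represented as a map from each head word to its (relation, dependent) edges, with the main verb as the root node, matching the root node “eats/V” cited in the mapping of the root-node limitation. The relation labels in this sketch are approximations supplied for the example, not quotations from Schneider.

```python
# Head -> (relation, dependent) edges approximating Schneider's Figure 1
# analysis of "the man that came eats bananas with a fork".
# Relation labels are illustrative, not quoted from the reference.
dependency_edges = {
    "eats": [("subj", "man"), ("obj", "bananas"), ("prep", "with")],
    "man":  [("det", "the"), ("modrel", "came")],
    "came": [("rel", "that")],
    "with": [("pobj", "fork")],
    "fork": [("det", "a")],
}

def root_node(edges: dict) -> str:
    """The root is the only head that is not itself a dependent of any head."""
    dependents = {dep for relations in edges.values() for _, dep in relations}
    roots = [head for head in edges if head not in dependents]
    assert len(roots) == 1, "a well-formed dependency graph has one root"
    return roots[0]

print(root_node(dependency_edges))  # eats
```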
While the broadest reasonable interpretation of the claim language “wherein the insight data area is displayed in a first area proximate to, but separate from, a second area displaying the search results area” is addressed by Shaw as depicted above, for the sake of compact prosecution the following modification of Shaw in view of typical search systems is put forth:
Shaw teaches …, wherein the insight data area as the suggested alternative query phrases presented at the bottom (Shaw, Column 7, lines 1-4 “In typical search systems, the suggested alternative query phrases are presented in proximity to the query search results (e.g., at the bottom of a web page of search results).”) proximate to as in proximity to (Id), but separate from as at the bottom of the web page (Id), a second area displaying the search results area as the search results web page, e.g. the top of the page (Id).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have implemented the UI presented within the proposed combination using the typical UI arrangement, as it yields the predictable results of providing the search results and the suggested query phrase to the user (Shaw, Column 8, lines 40-42 “The system provides the search result, the suggested query phrase, and a user interface object for presentation to a user (step 3040).”), per KSR rationale (B), simple substitution of one known element for another to obtain predictable results (MPEP 2141(III)). The proposed combination would yield the predictable results of presenting the desired data elements, and would solve the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18) by providing the user with suggested alternative queries (Shaw, Column 1, lines 22-24 “Some search engines provide to a user suggested alternative queries that the search engine identifies as being related to the user's query.”; Column 11, lines 27-34 “The search engine 1030 returns the search results for the suggested query phrase as it would for any other query. That is, without explicitly entering a new query into a search text field of the web page 2000 or even highlighting the new query, a user can receive search results for the suggested alternative query phrase 2050 by simply selecting the corresponding URL link 2060 displayed on the web page 2000.”).
Furthermore, the specific location in which the data is presented on the UI would be recognized by one of ordinary skill in the art as a mere arrangement of parts amounting to an obvious matter of design choice (MPEP 2144.04(VI)(C)). As long as the suggested alternative queries are presented to the user, the system would achieve the result of solving the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18). As long as the suggested query phrase is associated with a search result (Shaw, Column 10, lines 40-53; Column 10, lines 59-63; Column 12, lines 43-51), the system will achieve the result of displaying the relationship between the result and the suggested query (Shaw, Column 1, lines 29-32).
With regard to claims 2, 12, and 18, the proposed combination further teaches wherein the plurality of nodes comprise terms that are either nouns or adjectives (Schneider, Page 32 “verbs to nouns and adjectives”; Shaw Column 9, lines 50-52 “relation paths (e.g., the relation path between a main verb and an adjective of the main noun of the object of a sentence”).
With regard to claims 3, 13, and 19 the proposed combination further teaches wherein the plurality of nodes further comprise terms (Schneider, Page 32 Figure 1, see the terms in the nodes of the graph) that are connected to the query parameter (Shaw, Column 9, lines 59-61 “calculate the similarity measure between a query and a clause identified in a resource”) in an associated parse tree (Schneider, Page 32 Figure 1, see the parse tree of the sentence).
With regard to claim 5, the proposed combination further teaches wherein the similar nodes are associated with an identical term as the terms “GM foods” and “dangerous” (Shaw, Column 10, lines 6-11 “the suggested query phrase “GM food are not just potentially dangerous to individuals - They are also a threat to food diversity” is one example of a clause that might have high similarity score relative to the query 2010 “are gm foods dangerous to eat”).
With regard to claim 7, the proposed combination further teaches wherein the insight data is displayed as one or more instances of terms as the suggested alternative query phrases (Shaw, Figure 2, 2050; Column 6, lines 66-67), wherein each instance of the one or more instances is an activatable user interface element (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a checkbox) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”).
With regard to claim 8, the proposed combination further teaches receiving, via activation as a click (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”) of an instance as the hyperlink (Id) of insight data as the suggested query phrase (Id), a request to refine as invoking the search (Id) the initial set of search results (Shaw, Column 6, line 52 “a group of search results 2005”; Figure 2, 2005);
generating a refined set of search results as processing the query and retrieving the results (Shaw, Column 11, lines 19-25 “the user can select the URL link 2060 to invoke the suggested alternative query phrase 2050 in snippet 2040 as a new query. In response to the user input, the client device 1004 submits to the search engine 1030 the suggested query phrase (step 5030). The search engine 1030 processes the suggested query phrase as a new query. The search system displays one or more different search results received from the search engine 1030 for the suggested query phrase (step 5040)”); and
displaying the refined set of search results as displaying the one or more different search results (Id).
With regard to claim 9, Shaw teaches A system comprising:
at least one processor (Shaw, Column 13, line 41 “Processors”); and
memory encoding computer executable instructions that, when executed by the at least one processor (Shaw, Column 13, lines 41-46 “Processors suitable for the execution of a computer program … a processor will receive instructions and data from a read-only memory or a random access memory or both”), perform a method comprising:
receiving an initial set of search results (Shaw, Column 6, line 52 “a group of search results 2005”; Figure 2, 2005; Column 7, lines 45-47 “The system receives a search result for a query, where the search result includes a link… to a resource (step 3010)”; Figure 3, 3010), the initial set of search results generated based upon (Shaw, Column 7, lines 62-64 “the search results 2005 are received corresponding to one or more resources (e.g., web pages) that contain information relevant to the query 2010”) a query parameter (Shaw, Column 7, lines 41-42 “the system performs the process 3000 after the user submits the query 2010 and requests that the search system 1014 conduct a search”; Figure 2, 2010 see query: “are gm foods dangerous to eat”);
receiving a set of sentences (Shaw, Column 8, lines 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”) from the initial set of search results as the resources (Id);
generating a set [[of parse trees]] as parsing (Shaw, Column 9, lines 45-58 “Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text. The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject) as well as relation paths (e.g., the relation path between a main verb and an adjective of the main noun of the object of a sentence). A dependency parser is described, for example, in Gerold Schneider, "A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English," Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003.”) for each sentence in the set of sentences (Shaw, Column 8, lines 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”), wherein [[a parse tree comprising a plurality of terms]] as the terms in the query or clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
generating one or more dependency graphs for [[one or more parse trees from the set of parse trees]] (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), wherein a dependency graphs comprises:
a keyword as keywords in the query 2010 (Shaw, Column 7, lines 54-56 “the search engine 1030 can use inverted-index posting lists for keywords in the query 2010 to find suitable search results.”) associated with the query parameter as query, e.g. query 2010 input by the user (Shaw, Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and a clause”; Column 10, lines 6-11 “the suggested query phrase “GM food are not just potentially dangerous to individuals - They are also a threat to food diversity” is one example of a clause that might have high similarity score relative to the query 2010 “are gm foods dangerous to eat”) [[as a root node]] as the root node in the dependency parser applied to the query (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”); and
[[a plurality of nodes]] as the nodes in the dependency parser applied to the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”) associated with the plurality of terms as the words in the identified clause (Shaw, Column 9, lines 4-18; Column 9, lines 20-22) from [[an associated parse tree]] as the tree for the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”);
determining, for the one or more dependency graphs (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), a plurality of scores as the similarity measure (Shaw, Column 9, lines 4-18; Column 9, lines 20-22 “The system calculates a similarity measure for each identified clause, where the similarity measure is a measure of the similarity between the identified clause and the query”), wherein a score as a similarity score between the query and a specific clause (Id) from the plurality of scores as the similarity measures generated for each clause (Id) is associated with [[ (Shaw, Column 9, lines 43-48 “the similarity measure is calculated using a function that evaluates the linguistic relations between words. 
Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text”), and wherein the score as the similarity measure (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) represents a relationship as the relations (Shaw, Column 9, lines 48-52 “The parser can identify linguistic relations… as well as relation paths”) between a term as words in the clause (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) … as the words of the query (Id);
identifying similar as the linguistic relations, e.g. similarity measure between the query and the clause (Shaw, Column 9, lines 48-51 “The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject”; Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause.”; Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and the clause”) nodes in the one or more dependency graphs as the parsed dependency structure (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
aggregating scores for the similar nodes as the sum of the weights (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”);
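For clarity of the record, the symmetric similarity measure Shaw describes (the sum of the weights of the features in the intersection of the two groups, divided by the sum of the weights of the features in the union) can be sketched as a weighted Jaccard score. The sketch below is purely illustrative; the feature tuples, weights, and sets are hypothetical examples, not drawn from the reference:

```python
# Illustrative sketch of Shaw's symmetric similarity measure (Column 10,
# lines 20-23): sum of weights of features shared by the query and the
# clause, divided by the sum of weights of features in their union.
# All feature names and weights below are hypothetical.

def symmetric_similarity(query_features, clause_features, weights):
    """Weighted-Jaccard similarity between two feature groups."""
    intersection = query_features & clause_features
    union = query_features | clause_features
    union_weight = sum(weights.get(f, 0.0) for f in union)
    if union_weight == 0.0:
        return 0.0
    return sum(weights.get(f, 0.0) for f in intersection) / union_weight

# Hypothetical linguistic-relation features (e.g., verb/subject pairs).
weights = {("subj", "foods", "are"): 2.0,
           ("amod", "dangerous", "foods"): 1.5,
           ("obj", "eat", "foods"): 1.0}
query = {("subj", "foods", "are"), ("amod", "dangerous", "foods"),
         ("obj", "eat", "foods")}
clause = {("subj", "foods", "are"), ("amod", "dangerous", "foods")}

score = symmetric_similarity(query, clause, weights)  # 3.5 / 4.5
```

Under Shaw's step 4030, the clause with the highest such score would then be identified as the suggested query phrase.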
generating, based upon the plurality of scores as the similarity measure (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”), insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) for the initial set of search results as the resource (Shaw, Column 10, lines 36-38 “The system then identifies a section of contiguous text from the resource that includes the suggested query phrase”), wherein the insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) provides contextual information for the initial set of search results (Shaw, Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource”), and wherein the insight data comprises a ranked list of node values (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) based upon the aggregated scores (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”); and
providing the initial set of search results as the search results (Shaw, Figure 2, 2005; Column 6, lines 51-55) and the insight data as the suggested alternative query 2050 (Column 7, lines 17-22 “For example, in snippet 2040, the suggested alternative query phrase, "There is no evidence that GM foods are dangerous: There is no evidence that GM foods are safe." 2050 is presented in bold font to distinguish the suggested alternative query phrase 2050 from the rest of the respective snippet 2040.”), wherein providing the initial set of search results and the insight data comprises displaying, in a user interface (Shaw, Figure 2), at least a subset of the initial set of search results as the three search results displayed (Shaw, Figure 2, 2005; Column 6, lines 51-52 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”) in a search results area as the group of search results 2005 (Shaw, Figure 2, 2005; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.") of the user interface (Shaw, Figure 2) and the insight data as the suggested alternative query 2050 (Column 7, lines 17-22 “For example, in snippet 2040, the suggested alternative query phrase, "There is no evidence that GM foods are dangerous: There is no evidence that GM foods are safe." 
2050 is presented in bold font to distinguish the suggested alternative query phrase 2050 from the rest of the respective snippet 2040.”) in an insight data area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) of the user interface (Shaw, Figure 2), wherein the insight data as the suggested alternative query 2050 (Column 7, lines 17-22 “For example, in snippet 2040, the suggested alternative query phrase, "There is no evidence that GM foods are dangerous: There is no evidence that GM foods are safe." 2050 is presented in bold font to distinguish the suggested alternative query phrase 2050 from the rest of the respective snippet 2040.”) area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) is displayed in a first area as section of the display where the alternative query phrase is distinguished from the rest of the respective snippet (Shaw, Column 7, lines 15-17 “a suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query.”) proximate to as provided in context of (Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource (e.g., a snippet of content from the”), but separate from as making the alternative query phrase distinct from the 
snippet of the result, such as the suggested phrase being a separate interface element or appended at the end (Shaw, Column 7, lines 15-17 “a suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query. For example, the server system 1014 can provide the search results 2005 as HTML code or in other conventional representations that describe the web page 2000, including the URL link 2060, which allows a user to invoke the suggested query phrase 2050 as a new query.”; Column 10, lines 59-63 “Alternatively, the system can identify a suggested query phrase from analyzing all the text of the resource (as described above) and can append the identified suggested query phrase to the section of contiguous text included with the received search result.”), a second area displaying the search results area as the group of search results 2005 (Shaw, Figure 2, 2020, 2030, 2040; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005. A search result can include, for each of a number of resources, a title 2020 for the resource, a selectable link 2030 to the resource, and a snippet 2040 of content from the resource.”), wherein the insight data is displayed as one or more instances of terms (Shaw, Figure 2, 2050 “There is no evidence that GM foods are safe”; Column 6, lines 51-55), wherein each instance of the one or more instances being an activatable user interface element (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or check box) that the user can select to invoke (i.e. 
submit to the search engine) the suggested query phrase as a new query.”);
receiving a request as a click (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”) to refine as invoking the alternative (Id) the initial set of search results (Shaw, Column 6, line 52 “a group of search results 2005”; Figure 2, 2005), wherein the request is based upon an instance of the insight data as the clickable link being the suggested query phrase (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a checkbox) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”); and
in response to receiving the request as the click (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”), generating a refined set of search results as processing the query and retrieving the results (Shaw, Column 11, lines 19-25 “the user can select the URL link 2060 to invoke the suggested alternative query phrase 2050 in snippet 2040 as a new query. In response to the user input, the client device 1004 submits to the search engine 1030 the suggested query phrase (step 5030). The search engine 1030 processes the suggested query phrase as a new query. The search system displays one or more different search results received from the search engine 1030 for the suggested query phrase (5040)”).
Shaw does not explicitly teach generating a set of parse trees for each sentence …, wherein a parse tree comprising a plurality of terms; generating one or more dependency graphs for one or more parse trees from the set of parse trees, wherein a dependency graphs comprises: the … as a root node; and a plurality of nodes associated … from an associated parse tree; determining, for the one or more dependency graphs, … is associated with a node from the plurality of nodes in the dependency graph, and… associated with the node.
Schneider teaches generating a set of parse trees (Schneider, Page 32 Figure 1. The top structure is a parse tree of the sentence “the man that came eats bananas with a fork”) for each sentence as the sentence (Id) …, wherein a parse tree comprising a plurality of terms as the words in the sentence (Id);
generating one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph) for one or more parse trees as the top structure (Id) from the set of parse trees as this is done for each sentence (Schneider, Page 31 “The parser has been trained, developed and tested on a large collection of syntactically analyzed sentences”), wherein a dependency graphs comprises:
the query parameter as a root node (Schneider, Page 32, Figure 1 see the root node “eats/V” in the dependency graph); and
a plurality of nodes associated with the plurality of terms from an associated parse tree (Schneider, page 32 Figure 1 see the terms in the nodes of the dependency graph that are from the parse tree);
determining, for the one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph), a… is associated with a node from the plurality of nodes in the dependency graph (Schneider, page 32 Figure 1 see the nodes of the dependency graph), and… a relationship between a term associated with the node (Schneider, Page 32, see the relationship links in the parse tree and the dependency graph);
identifying similar nodes as relationships (Schneider, Page 32, Section 2.3 “a hierarchy of syntactic relations between lexical heads which serves as a bridgehead to semantics.”) in the one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph);…
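As an illustrative sketch only, a dependency graph of the kind shown in Schneider's Figure 1 can be represented as head-to-dependent links with a single root node (the main verb). The edge list below is hypothetical, loosely patterned on Schneider's example sentence; the relation labels and attachment choices are assumptions, not taken verbatim from the paper:

```python
# Hypothetical sketch of a dependency graph for a sentence like
# "the man that came eats bananas with a fork": each edge links a
# head word to a dependent, and the root is the node that is never
# a dependent (here, the main verb "eats/V").

from collections import defaultdict

def build_graph(edges):
    """Map each head to its (relation, dependent) pairs and find the root."""
    graph = defaultdict(list)
    dependents = set()
    nodes = set()
    for head, relation, dep in edges:
        graph[head].append((relation, dep))
        dependents.add(dep)
        nodes.update((head, dep))
    roots = nodes - dependents
    assert len(roots) == 1, "a well-formed dependency tree has one root"
    return roots.pop(), dict(graph)

# Hypothetical edges (relation labels are illustrative assumptions).
edges = [("eats/V", "subj", "man/N"),
         ("eats/V", "obj", "bananas/N"),
         ("eats/V", "pobj", "fork/N"),
         ("man/N", "modrel", "came/V")]

root, graph = build_graph(edges)  # root is "eats/V"
```

The point of the sketch is only that the terms of the parsed sentence become the nodes of the graph, with the main verb serving as the root node, consistent with the mapping above.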
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have implemented the natural language parser taught by Shaw using the dependency parser taught by Schneider as Shaw explicitly lists this dependency parser as an example of a natural language parser that is usable by the disclosed system. (Shaw, Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”). The proposed combination would yield the expected results of providing a means of evaluating the linguistic relations between words and providing a means of calculating similarity (Shaw, Column 9, lines 43-63).
While the broadest reasonable interpretation of the claim language “wherein the insight data area is displayed in a first area proximate to, but separate from, a second area displaying the search results area” is addressed by Shaw as depicted above, for the sake of compact prosecution the following modification of Shaw in view of typical search systems is put forth:
Shaw teaches …, wherein the insight data area as the suggested alternative query phrases presented at the bottom (Shaw, Column 7, lines 1-4 “In typical search systems, the suggested alternative query phrases are presented in proximity to the query search results (e.g., at the bottom of a web page of search results).”) proximate to as in proximity to (Id), but separate from as at the bottom of the web page (Id), a second area displaying the search results area as the search results web page, e.g. the top of the page (Id).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time in which the invention was filed to have implemented the UI presented within the proposed combination using the typical UI arrangement, as it yields the predictable results of providing the search results and the suggested query phrase to the user (Shaw, Column 8, lines 40-42 “The system provides the search result, the suggested query phrase, and a user interface object for presentation to a user (step 3040).”). This follows KSR rationale (B), simple substitution of one known element for another to obtain predictable results (MPEP 2141 (III)). The proposed combination would yield the predictable results of presenting the desired data elements, and would solve the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18) by providing the user with suggested alternative queries (Shaw, Column 1, lines 22-24 “Some search engines provide to a user suggested alternative queries that the search engine identifies as being related to the user's query.”; Column 11, lines 27-34 “The search engine 1030 returns the search results for the suggested query phrase as it would for any other query. That is, without explicitly entering a new query into a search text field of the web page 2000 or even highlighting the new query, a user can receive search results for the suggested alternative query phrase 2050 by simply selecting the corresponding URL link 2060 displayed on the web page 2000.”).
Furthermore, the specific location in which the data is presented on the UI would be recognized by one of ordinary skill in the art as a mere arrangement of parts amounting to an obvious matter of design choice (MPEP 2144.04 (VI)(C)). As long as the alternative suggested queries are presented to the user, the system would achieve the result of solving the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18). As long as the suggested query phrase is associated with a search result (Shaw, Column 10, lines 40-53; Column 10, lines 59-63; Column 12, lines 43-51) the system will achieve the results of displaying the relationship between the result and the suggested query (Shaw, Column 1, lines 29-32).
With regard to claim 11 the proposed combination further teaches wherein generating the refined set of search results (Shaw, Column 11, lines 19-25 “the user can select the URL link 2060 to invoke the suggested alternative query phrase 2050 in snippet 2040 as a new query. In response to the user input, the client device 1004 submits to the search engine 1030 the suggested query phrase (step 5030). The search engine 1030 processes the suggested query phrase as a new query. The search system displays one or more different search results received from the search engine 1030 for the suggested query phrase (5040)”) comprises executing a second query comprising the query parameter and the instance of the insight data (Id).
With regard to claim 16 the proposed combination further teaches wherein receiving the request to refine as a click (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”) the initial set of search results as the three search results displayed (Shaw, Figure 2, 2005; Column 6, lines 51-52 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”) comprises:
receiving activation as a click (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”) of a user interface element as the hyperlink (Id) associated with an instance of insight data as the suggested query phrase (Shaw, Column 8, lines 45-48 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a checkbox) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query”).
With regard to claim 17 Shaw teaches A computer storage medium comprising computer-executable instructions that, when executed by at least one processor (Shaw, Column 12, line 63 through Column 13, line 5), perform a method comprising:
receiving an initial set of search results (Shaw, Column 6, line 52 “a group of search results 2005”; Figure 2, 2005; Column 7 lines 45-47 “The system receives a search result for a query, where the search result includes a link… to a resource (step 3010)”; Figure 3, 310), the initial set of search results generated based upon (Shaw, Column 7, lines 62-64 “the search results 2005 are received corresponding to one or more resources (e.g., web pages) that contain information relevant to the query 2010”) a query parameter (Shaw, Column 7, lines 41-42 “the system performs the process 3000 after the user submits the query 210 and requests that the search system 1014 conduct a search”; Figure 2, 2010 see query: “are gm foods dangerous to eat”);
receiving a set of sentences (Shaw, Column 8, line 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”) from the initial set of search results as the resources (Id);
generating a set … as parsing (Shaw, Column 9, lines 45-58 “Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text. The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject) as well as relation paths (e.g., the relation path between a main verb and an adjective of the main noun of the object of a sentence). A dependency parser is described, for example, in Gerold Schneider, "A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English," Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003.”) for each sentence in the set of sentences (Shaw, Column 8, line 35-36 “the system identifies a clause or sentence in the textual contents of the resource that includes words with high frequencies according to the histogram as the suggested query phrase”; Column 9, lines 4-6 “The system identifies multiple clauses in the text of the resource (step 4010). In some implementations, instead of clauses, the system identifies multiple sentences in the text of the resource”), wherein … as the terms in the query or clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
generating one or more dependency graphs for … (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), wherein a dependency graphs comprises:
a keyword as keywords in the query 2010 (Shaw, Column 7, lines 54-56 “the search engine 1030 can use inverted-index posting lists for keywords in the query 2010 to find suitable search results.”) associated with the query parameter as query, e.g. query 2010 input by the user (Shaw, Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and a clause”; Column 10, lines 6-11 “the suggested query phrase “GM food are not just potentially dangerous to individuals - They are also a threat to food diversity” is one example of a clause that might have high similarity score relative to the query 2010 “are gm foods dangerous to eat”) … as the root node in the dependency parser applied to the query (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”); and
… as the nodes in the dependency parser applied to the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”) associated with the plurality of terms as the words in the identified clause (Shaw, Column 9, lines 4-18; Column 9, lines 20-22) from … as the tree for the identified clause (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”);
determining, for the one or more dependency graphs (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”), a plurality of scores as the similarity measure (Shaw, Column 9, lines 4-18; Column 9, lines 20-22 “The system calculates a similarity measure for each identified clause, where the similarity measure is a measure of the similarity between the identified clause and the query”), wherein a score as a similarity score between the query and a specific clause (Id) from the plurality of scores as the similarity measures generated for each clause (Id) is associated with … (Shaw, Column 9, lines 43-48 “the similarity measure is calculated using a function that evaluates the linguistic relations between words. 
Linguistic relation features for a segment of text, for example, a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text”), and wherein the score as the similarity measure (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) represents a relationship as the relations (Shaw, Column 9, lines 48-52 “The parser can identify linguistic relations… as well as relation paths”) between a term as words in the clause (Shaw, Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause”) … as the words of the query (Id);
identifying similar as the linguistic relations, e.g. similarity measure between the query and the clause (Shaw, Column 9, lines 48-51 “The parser can identify linguistic relations (e.g., the relation between a verb and the main noun of the subject”; Column 9, lines 58-63 “When linguistic relation features are used to calculate the similarity measure between a query and a clause identified in a resource, the similarity measure increases with an increase in the co-occurrence of the linguistic relation features in the query and in the clause.”; Column 9, lines 64-65 “A function used for calculating the similarity measure between the query and the clause”) nodes in the one or more dependency graphs as the parsed dependency structure (Shaw, Column 9, lines 47-48 “a query or a clause identified in a resource, can be identified by applying a natural language parser (e.g., a dependency parser) to the text.”; Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”);
aggregating scores for the similar nodes as the sum of the weights (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”);
generating, based upon the plurality of scores as the similarity measure (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”), insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) for the initial set of search results as the resource (Shaw, Column 10, lines 36-38 “The system then identifies a section of contiguous text from the resource that includes the suggested query phrase”), wherein the insight data as the suggested query phrase (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) provides contextual information for the initial set of search results (Shaw, Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource”), and wherein the insight data comprises a ranked list of node values (Shaw, Column 10, lines 35-36 “the system identifies a clause with a highest similarity measure as the suggested query phrase (step 4030)”) based upon the aggregated scores (Shaw, Column 10, lines 20-23 “A symmetric similarity measure can be calculated by dividing the sum of weights of the features in the intersection of the two groups by the sum of weights of the features in the union of the two groups”); and
displaying, in a user interface (Shaw, Figure 2), at least a subset of the initial set of search results as the search results (Shaw, Figure 2, 2005; Column 6, lines 51-55) and the insight data as the suggested alternative query phrases (Shaw, Figure 2, 2050; Column 6, lines 66-67), wherein the user interface comprises:
a search results area as the group of search results 2005 (Shaw, Figure 2, 2005; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”) of the user interface (Shaw, Figure 2), the search results area displaying the subset of initial set of search results as the three search results displayed (Shaw, Figure 2, 2005; Column 6, lines 51-52 “In response to the query 2010, the search engine 1030 returns a group of search results 2005.”); and
an insight data area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) of the user interface (Shaw, Figure 2), wherein the insight data as the suggested alternative query 2050 (Column 7, lines 17-22 “For example, in snippet 2040, the suggested alternative query phrase, "There is no evidence that GM foods are dangerous: There is no evidence that GM foods are safe." 2050 is presented in bold font to distinguish the suggested alternative query phrase 2050 from the rest of the respective snippet 2040.”) area as the area where the alternative query is displayed (Shaw, Figure 2, 2050; Column 6, line 66 through Column 7, line 1 “The search system 1014 can provide to the user one or more suggested alternative query phrases 2050 as alternatives for the query 2010”) is displayed in a first area as section of the display where the alternative query phrase is distinguished from the rest of the respective snippet (Shaw, Column 7, lines 15-17 “a suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query.”) proximate to as provided in context of (Column 8, lines 42-45 “The suggested query phrase and the user interface object are provided in context in a section of contiguous text from the resource (e.g., a snippet of content from the”), but separate from as making the alternative query phrase distinct from the snippet of the result, such as the suggested phrase being a separate interface element or appended at the end (Shaw, Column 7, lines 15-17 “a 
suggested alternative query phrase is emphasized to distinguish the suggested alternative query phrase from the rest of the respective snippet.”; Column 8 lines 45-53 “The user interface object can be any user interface element (e.g., a hyperlink, a button, or a check box) that the user can select to invoke (i.e., submit to the search engine) the suggested query phrase as a new query. For example, the server system 1014 can provide the search results 2005 as HTML code or in other conventional representations that describe the web page 2000, including the URL link 2060, which allows a user to invoke the suggested query phrase 2050 as a new query.”; Column 10, lines 59-63 “Alternatively, the system can identify a suggested query phrase from analyzing all the text of the resource (as described above) and can append the identified suggested query phrase to the section of contiguous text included with the received search result.”), a second area displaying the search results area as the group of search results 2005 (Shaw, Figure 2, 2020, 2030, 2040; Column 6, lines 51-55 “In response to the query 2010, the search engine 1030 returns a group of search results 2005. A search result can include, for each of a number of resources, a title 2020 for the resource, a selectable link 2030 to the resource, and a snippet 2040 of content from the resource.”), the insight data area displaying the insight data as the suggested alternative query phrases (Shaw, Figure 2, 2050; Column 6, lines 66-67).
Shaw does not explicitly teach generating a set of parse trees for each sentence …, wherein a parse tree comprising a plurality of terms; generating one or more dependency graphs for one or more parse trees from the set of parse trees, wherein a dependency graph comprises: the … as a root node; and a plurality of nodes associated … from an associated parse tree; determining, for the one or more dependency graphs, … is associated with a node from the plurality of nodes in the dependency graph, and… associated with the node.
Schneider teaches generating a set of parse trees (Schneider, Page 32, Figure 1; the top structure is a parse tree of the sentence “the man that came eats bananas with a fork”) for each sentence as the sentence (Id) …, wherein a parse tree comprising a plurality of terms as the words in the sentence (Id);
generating one or more dependency graphs (Schneider, Page 32, Figure 1; the bottom structure is a dependency graph) for one or more parse trees as the top structure (Id) from the set of parse trees as this is done for each sentence (Schneider, Page 31 “The parser has been trained, developed and tested on a large collection of syntactically analyzed sentences”), wherein a dependency graph comprises:
the query parameter as a root node (Schneider, Page 32, Figure 1, see the root node “eats/V” in the dependency graph); and
a plurality of nodes associated with the plurality of terms from an associated parse tree (Schneider, page 32 Figure 1 see the terms in the nodes of the dependency graph that are from the parse tree);
determining, for the one or more dependency graphs (Schneider, Page 32 Figure 1, The bottom structure is a dependency graph), a… is associated with a node from the plurality of nodes in the dependency graph (Schneider, page 32 Figure 1 see the nodes of the dependency graph), and… a relationship between a term associated with the node (Schneider, Page 32, see the relationship links in the parse tree and the dependency graph);
identifying similar nodes as relationships (Schneider, Page 32, Section 2.3 “a hierarchy of syntactic relations between lexical heads which serves as a bridgehead to semantics.”) in the one or more dependency graphs (Schneider, Page 32, Figure 1, the bottom structure is a dependency graph);…
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have implemented the natural language parser taught by Shaw using the dependency parser taught by Schneider, as Shaw explicitly lists this dependency parser as an example of a natural language parser that is usable by the disclosed system (Shaw, Column 9, lines 52-58 “A dependency parser is described, for example, in Gerold Schneider, “A Low-Complexity, Broad-Coverage Probabilistic Dependency Parser for English,” Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Edmonton, Canada, pages 31-36, May-June 2003”). The proposed combination would yield the expected results of providing a means of evaluating the linguistic relations between words and a means of calculating similarity (Shaw, Column 9, lines 43-63).
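For purposes of illustration only, and forming no part of the grounds of rejection, the parse-tree/dependency-graph distinction relied upon from Schneider's Figure 1 can be sketched as follows. The relation labels (`subj`, `det`, `obj`, `pobj`) and the simplified sentence "the man eats bananas with a fork" are illustrative assumptions, not Schneider's actual parser output:

```python
# Illustrative sketch of a dependency graph of the kind shown in
# Schneider's Figure 1: the main verb "eats" is the root node, and every
# other term attaches to a head word via a labeled relation.
from collections import namedtuple

Edge = namedtuple("Edge", ["head", "relation", "dependent"])

# Simplified, assumed edges for "the man eats bananas with a fork".
edges = [
    Edge("eats", "subj", "man"),
    Edge("man", "det", "the"),
    Edge("eats", "obj", "bananas"),
    Edge("eats", "pobj", "fork"),
    Edge("fork", "det", "a"),
]

def dependents(word):
    """Return the terms directly governed by `word` in the graph."""
    return [e.dependent for e in edges if e.head == word]
```

Under this sketch, `dependents("eats")` recovers the terms attached to the root node, mirroring the claimed "plurality of nodes associated with the plurality of terms" hanging from a root.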
While the broadest reasonable interpretation of the claim language “wherein the insight data area is displayed in a first area proximate to, but separate from, a second area displaying the search results area” is addressed by Shaw as depicted above, for the sake of compact prosecution the following modification of Shaw in view of typical search systems is put forth:
Shaw teaches …, wherein the insight data area as the suggested alternative query phrases presented at the bottom (Shaw, Column 7, lines 1-4 “In typical search systems, the suggested alternative query phrases are presented in proximity to the query search results (e.g., at the bottom of a web page of search results).”) is displayed in a first area proximate to as in proximity to (Id), but separate from as at the bottom of the web page (Id), a second area displaying the search results area as the search results web page, e.g., the top of the page (Id).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have implemented the UI presented within the proposed combination using the typical UI arrangement, as it yields the predictable results of providing the search results and the suggested query phrase to the user (Shaw, Column 8, lines 40-42 “The system provides the search result, the suggested query phrase, and a user interface object for presentation to a user (step 3040).”), per KSR rationale (B): simple substitution of one known element for another to obtain predictable results (MPEP 2141 (III)). The proposed combination would yield the predictable results of presenting the desired data elements, and would solve the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18) by providing the user with suggested alternative queries (Shaw, Column 1, lines 22-24 “Some search engines provide to a user suggested alternative queries that the search engine identifies as being related to the user's query.”; Column 11, lines 27-34 “The search engine 1030 returns the search results for the suggested query phrase as it would for any other query. That is, without explicitly entering a new query into a search text field of the web page 2000 or even highlighting the new query, a user can receive search results for the suggested alternative query phrase 2050 by simply selecting the corresponding URL link 2060 displayed on the web page 2000.”).
Furthermore, the specific location in which the data is presented on the UI would be recognized by one of ordinary skill in the art as a mere arrangement of parts amounting to an obvious matter of design choice (MPEP 2144.04 (VI)(C)). As long as the alternative suggested queries are presented to the user, the system would achieve the result of solving the problem of the user’s query not aligning well with the user’s intention (Shaw, Column 1, lines 14-18). As long as the suggested query phrase is associated with a search result (Shaw, Column 10, lines 40-53; Column 10, lines 59-63; Column 12, lines 43-51), the system will achieve the result of displaying the relationship between the result and the suggested query (Shaw, Column 1, lines 29-32).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Shaw in view of Schneider and Dimassimo [2013/0024440].
With regard to claim 10 the proposed combination further teaches wherein generating the refined set of search results (Shaw, Column 11, lines 19-25 “the user can select the URL link 2060 to invoke the suggested alternative query phrase 2050 in snippet 2040 as a new query. In response to the user input, the client device 1004 submits to the search engine 1030 the suggested query phrase (step 5030). The search engine 1030 processes the suggested query phrase as a new query. The search system displays one or more different search results received from the search engine 1030 for the suggested query phrase (step 5040)”) comprises [[]].
Dimassimo teaches generating the refined set of search results (Dimassimo, ¶54 “faceted search results are presented in the search engine interface in response to a query”) comprises filtering (Dimassimo, ¶54 “The facets (themes, organizations, locations, and document types) each can trigger faceted search results if the user adds them as criteria to navigate, refine or filter the current result set”) the initial set of search results as the current result set (Id) based upon the instance of the insight data as the facets (Id).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have modified the proposed device to filter the search results as taught by Dimassimo, as it improves the overall precision of the facets and search results, and when done at query time provides the added benefit that content does not need to be re-classified (Dimassimo, ¶55). Please note that within the device taught by Dimassimo, the “facets” are additional search terms used to subsequently refine an original search query (Dimassimo, ¶17), wherein the UI presents the original search result with the facets, and enables the user to select facets to either execute a second search or to filter the first search (Dimassimo, ¶52-¶53).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Shaw in view of Schneider and Vasudevan [2015/0082263].
With regard to claim 20 the proposed combination further teaches wherein determining a plurality of scores (Shaw, Column 7, lines 65-67 “a weighted sum of the ranking function and a similarity function can be used to sort the received search results”) for the one or more dependency graphs (Shaw, Column 9, lines 45-62) comprises executing a [[ as the ranking function (Shaw, Column 7, lines 65-67) on the one or more dependency graphs as the similarity function generated using the dependency graph (Id).
Shaw does not explicitly teach a PageRank algorithm. Vasudevan teaches a PageRank algorithm (Vasudevan, ¶39 “PageRank is a seminal graph-ranking algorithm used by Google to rank web search results… PageRank analyzes the hyperlink structure of the World Wide Web graph to compute an importance score for each web page… The PageRank algorithm may be adapted to work for an RTL variable dependency graph instead of the web graph”; ¶77).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have implemented the ranking function taught by the proposed combination as a PageRank algorithm as taught by Vasudevan, as it yields the predictable results of computing an importance score for each web page using a known ranking algorithm (Vasudevan, ¶39). Furthermore, Vasudevan explicitly highlights that the PageRank algorithm can be adapted to work for a variable dependency graph instead of the web graph (Vasudevan, ¶77-¶79). The use of the PageRank algorithm captures the importance of a variable with respect to a target variable (Id).
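For purposes of illustration only, and forming no part of the grounds of rejection, PageRank by power iteration over a small directed graph, of the kind Vasudevan describes adapting from the web graph to a dependency graph, can be sketched as follows. The node names, damping factor, and iteration count are illustrative assumptions:

```python
# Illustrative sketch of the PageRank algorithm by power iteration.
# `graph` maps each node to the list of nodes it links to.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}  # start with a uniform distribution
    for _ in range(iterations):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                # each node splits its damped rank among its out-links
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:
                # dangling node: distribute its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank
```

In a toy graph such as `{"a": ["b"], "b": ["a", "c"], "c": []}`, the node receiving the most in-link weight ("b") converges to the highest importance score, illustrating how link structure alone produces a ranking.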
Response to Arguments
With regard to the prior art, Applicant's arguments filed February 19, 2026 have been fully considered but they are not persuasive.
With regard to claim 1, Applicant argues that Shaw does not teach a ranked list of node values. Specifically, Applicant argues that Shaw teaches a relationship between a verb and a noun and does not identify similar nodes.
In response, the ‘verb’ and ‘noun’ being discussed are nodes within the dependency graph. Shaw explicitly teaches that the purpose is to calculate the similarity between the words in the query (e.g., mapped to the claimed ‘keyword’) and the words in the clause (e.g., mapped to the claimed ‘term’). The claims recite that the nodes are associated with the plurality of terms, but do not specify what the nodes entail. One of ordinary skill in the art may reasonably identify the ‘verb’ and ‘noun’ within Shaw as being nodes within the dependency graph. The description of the determining of the score within the claims refers to these generic nodes. One of ordinary skill in the art would read the similarity calculation taught by Shaw, between the words in the query and the words in the clause, as reading on the claim language. It is suggested that the claims be amended to better define the scope of the ‘nodes’ and to better define how the similarity is calculated. Applicant’s arguments suggest an intended meaning that is not captured or required by the claim language.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA WILLIS whose telephone number is (571)270-7691. The examiner can normally be reached Monday-Friday 8am-2pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMANDA L WILLIS/ Primary Examiner, Art Unit 2156