DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 8 and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite
determination of an “entity relationship” among entities in a “to-be-recognized text” (Sp. ¶ 0056 S1: “may be a historical work experience information text of company employees”; “For example, if a relationship between an employee and a company is to be recognized”). The method begins by “performing sentence segmentation on the to-be-recognized text” (Sp. ¶ 0069 S1: “the recognized text statement needs to contain entities of at least two target entity types”). It next “perform[s] entity recognition on the statement texts” in order “to obtain a target statement text” (“determining the statement text containing all the target entity types in all the recognized entity types as the target statement text” (claim 4 limitation 2)); that is, a “target statement” is one that contains all of the identified entity “types”.
It next determines “a to-be-recognized fuzzy text” by “replacing” all the “entities” “with a predetermined identifier” (Sp. ¶ 0080 lines 1+: “the target statement” “Li XX is an employee of Company A”, then “Li XX” gets replaced by “identifier” “[T]”; Sp. ¶ 0074 last 5 lines: “remov[ing] the personalized information in the target statement text and achieve the purpose of reserving the main structure information of the statement”). It next obtains a “corresponding example sentence fuzzy text”. This is done by using sample “entity relationship example sentence text[s]” (i.e., example sentences) which are each associated with an “optional entity relationship” and also “replacing” their “entities” “with the predefined identifier” as well to obtain “a corresponding example sentence fuzzy text”. Based on a “text similarity” calculation between the “to-be-recognized fuzzy text” and “all” the “corresponding example sentence fuzzy text[s]”, one “corresponding” “example sentence” is “determined” and its associated “optional entity relationship” is determined as “an entity relationship recognition result of the to-be-recognized text”.
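For illustration only, the fuzzification step characterized above can be sketched in a few lines of Python (the function name is hypothetical; the sentence and the “[T]” identifier are taken from the specification’s own example, and the second example sentence is invented for illustration):

```python
# Hypothetical sketch of the claimed "fuzzification": entities of the
# target types are replaced with the predetermined identifier "[T]",
# removing personalized information while reserving the main structure
# of the statement.
def fuzzify(statement: str, entities: list[str], identifier: str = "[T]") -> str:
    for entity in entities:
        statement = statement.replace(entity, identifier)
    return statement

target = fuzzify("Li XX is an employee of Company A", ["Li XX", "Company A"])
example = fuzzify("Zhang YY is an employee of Company B", ["Zhang YY", "Company B"])
# Both sentences collapse to "[T] is an employee of [T]", so their text
# similarity is maximal and the example sentence's associated optional
# entity relationship ("employee") would be returned as the result.
```

Because the two fuzzy texts coincide exactly, any reasonable similarity measure would select this example sentence and hence its associated relationship.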
Other than recitation of “at least one processor” (claim 8) and “a processor” (claim 15), everything recited can be done by a human using pen and paper and/or through a set of preset rules; e.g., the human could take any “to-be-recognized text”, identify all entities and their types, and segment the text between occurrences of the “entities” appearing in it. The use of “optional entity relationships” associated with “entity relationship example sentence text” to probe the “entity relationship” of the “to-be-recognized text” traces to the human’s past experience. For example, if there are two entities in the “to-be-recognized text”, one associated with an employee name and another associated with a company, the human, drawing on similar sentences encountered in the past (which amounts to a similarity calculation performed mentally, by the human’s brain, against previously encountered sentences), could determine the relationship to be an employee relationship between the two entities. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claims 8 and 15 recite only one additional element: using a “processor” to perform the “acquiring …”, “performing entity recognition …”, “replacing entities …”, “acquiring all optional entity relationships …”, “calculating a similarity …”, and “screening all entity relationship example sentences …” steps. The “processor” is therefore recited at a high level of generality, without any specific constraints pertaining to any of the aforementioned processes, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are thus directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform all the above quoted limitations amounts to no more than mere instructions to apply the exception using a generic computer component and cannot therefore provide an inventive concept (Sp. ¶ 0111: “The processor 10 is a Control Unit of the electronic device, which connects all components of the electronic device with various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (such as entity relationship recognition programs) stored in the memory 11 and calling data stored in the memory 11”). The claims are thus not patent eligible.
Regarding claims 2-3, 9-10 and 16, a composite sentence comprising a plurality of subject nouns, each associated with a distinct topic, would naturally be segmented by a user between every two adjacent nouns. That requires only that the human possess basic knowledge of vocabulary and grammar in a language.
Regarding claims 4, 11, and 17, recognition of entities of different types (e.g., a person’s name versus a place name, etc.) in a sentence requires nothing beyond basic knowledge of vocabulary in a language.
Regarding claims 5, 12, and 18, they appear to recite “reserving” (filtering) a sentence in stages while recognizing its entity relationship (i.e., its core intent). In general, one could discern the intent of a sentence from a few of its words, something any human is capable of without requiring any particular machine and/or software.
Regarding claims 6, 13, and 19, transforming sentences into predetermined vectors in order to perform specific calculations does not require any particular machine, and is a routine practice.
Regarding claims 7 and 14, a human reading the “to-be-recognized text” could readily determine all the entity relationships in said text, summarize them, and record them on a sheet of paper.
Claims 15-20 recite an embodiment of the applicants’ invention directed towards a “non-volatile computer readable storage medium storing a computer program”. It is noted, however, that the recitation of the “medium” (even qualified as “non-volatile”) in the specification (¶ 0126) does not exclude non-statutory medium types, as no specific and limiting definition of “non-volatile computer readable storage medium” is provided. Thus, under the broadest reasonable interpretation, the full claim scope of “non-volatile computer readable storage medium” would include non-statutory mediums such as carrier waves.
As per the USPTO notice signed by Director David Kappos on January 26, 2010: “The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2.”
The scope of “computer-readable storage medium” therefore includes signal-based mediums. A signal does not fall within one of the four statutory categories of invention (i.e., process, machine, manufacture, or composition of matter) because it is an ephemeral, transient signal and thus is non-statutory. Since the scope of claims 15-20 includes these non-statutory instances, claims 15-20 are directed to non-statutory subject matter.
The examiner suggests replacing the phrase “computer readable storage medium” with “non-transitory computer readable storage medium”, which would exclude signal-type embodiments and thereby overcome the 35 U.S.C. 101 rejection of said claims.
Claim Objections
Claim 10 is objected to because of the following informalities: “quantity of the statement identifier” appears to be a misspelling of “quantity of the statement identifiers”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 stand rejected as follows:
Claims 1, 8 and 15 recite the limitation “the statement text” in the second limitation, line 1. There is insufficient antecedent basis for this limitation in the claims.
Claims 1, 8 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The words “type” (in “entity types” or “target entity types”), “optional” (“optional entity relationships”) and “example” (“example sentence text”, “entity relationship example sentences”) “extend[] the scope of the expression so as to render it indefinite” (see MPEP 2173.05(b) under the section “TERMS OF DEGREE”).
Claims 5, 12 and 18 recite the limitation "the target words not contained in the sentence trunk" in limitation 2, “the target word” in limitation 3, and “the entity type of an entity of a non-target word” in limitation 4. There is insufficient antecedent basis for these limitations in the claims.
Claim 15 recites the limitation "the replaced target statement text" in limitation 2. There is insufficient antecedent basis for this limitation in the claim.
Regarding claims 2-7 (dependent on claim 1), 9-14 (dependent on claim 8), and 16-20 (dependent on claim 15): as they do not obviate the indefiniteness of their respective parent claims, they are rejected under similar rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 8, 13, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG LINGLING et al. (CN 112445899 A) in view of ZHANG KUN et al. (CN 110175334 A).
Regarding claim 1, ZHANG LINGLING et al. do teach A method for entity relationship recognition (Abstract line 3: “generating candidate attributes” (relationship) “according to entities” (for entities)),
the method comprising:
acquiring a to-be-recognized text and target entity types requiring entity relationship recognition, and performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts (¶ n0018 lines 3-4: “segment” (perform sentence segmentation) “the question” (a to be recognized text) into “words” (one or more statement texts, e.g., ¶ n0008 line 3: “segment the question and the candidate attributes into words”, or ¶ n0018 line 6: “the nth word after segmentation of the question”), e.g., ¶ n0003 S1, S3: “The purpose of attribute matching is to determine which attribute” (recognize a relationship) “of an entity” (of an entity of any target type) “the question is asking about”; “For example, the question “the person in charge of Kadena Air Base” is associated with the “attribute” “commander”);
performing entity recognition on the statement text, and performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text (¶ n0018 S1: “Generate candidate attributes by querying the knowledge base based on the identified entities” (performing entity recognition on all the words (statement texts), which results in the “question” (the to-be-recognized text) having recognized entities of all entity types and thus becoming a target statement text); e.g., in the example “the person in charge of Kadena Air Base”, there are entities associated with a person’s name as well as a place name);
replacing entities corresponding to the target entity types in the target statement text with a predetermined identifier (¶ n0018 line 2: “replace” (replacing) “the text of the corresponding entity” (entities of all types) “in the question” (in the target statement text) “with a label” (with a predetermined identifier)),
and performing […] to obtain a to-be-recognized fuzzy text (replacing “Kadena Air Base” with the tag “<e>” changes the question to “How is the person in charge of <e>?”; a fuzzy text is obtained by accepting replacement of the entity “Kadena Air Base” with the predetermined identifier “<e>” in the target statement text);
acquiring all optional entity relationships among the entities of the target entity types and an entity relationship example sentence text corresponding to each of the optional entity relationships (¶ n0028 lines 11+: “Simultaneously, candidate attributes” (all optional entity relationships among the entities of all types) “for Kadena Air Base are generated” (are acquired in “How is the person in charge of Kadena Air Base” (an example sentence text))),
replacing the entities of the target entity types in the entity relationship example sentence text with the predetermined identifier, and performing […] to obtain a corresponding example sentence fuzzy text (¶ n0028: “Simultaneously” (applying an acceptation on the resulting replaced entity relationship example) “candidate attributes” (a corresponding example sentence fuzzy text) “for Kadena Air Base are generated”);
calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity (¶ n0028 lines 12-13: “Then a similarity” (a similarity) “between the question” (between the to be recognized fuzzy text) “and each attribute” (and the example sentence fuzzy text) “is calculated” (is obtained)).
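The label replacement and similarity matching mapped above can be sketched as follows (the token-overlap similarity is an assumption made purely for illustration; the reference itself computes an embedding-based similarity, and the candidate example sentences below are invented):

```python
def replace_entity(question: str, entity: str, tag: str = "<e>") -> str:
    # The cited step: replace the text of the corresponding entity
    # in the question with a label (predetermined identifier).
    return question.replace(entity, tag)

def token_overlap(a: str, b: str) -> float:
    # Illustrative stand-in similarity (Jaccard over tokens); the
    # reference uses a word-embedding similarity instead.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

question = replace_entity("How is the person in charge of Kadena Air Base",
                          "Kadena Air Base")
# Hypothetical example sentences, one per candidate attribute.
candidates = {
    "commander": replace_entity("Who is the commander of Kadena Air Base",
                                "Kadena Air Base"),
    "location": replace_entity("Where is Kadena Air Base located",
                               "Kadena Air Base"),
}
# The candidate attribute whose fuzzified example sentence is most
# similar to the fuzzified question is selected.
best = max(candidates, key=lambda attr: token_overlap(question, candidates[attr]))
```

Under this toy measure, the “commander” example sentence shares the most structure with the fuzzified question and is therefore selected.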
ZHANG LINGLING et al. do not specifically disclose:
Performing part of speech tagging to the to be recognized fuzzy text and/or example sentence text;
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to a screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
Performing part of speech tagging to the to be recognized fuzzy text and/or example sentence text (¶ 0010 page 5 lines 6+: “An entity relationship extraction algorithm performs part of speech analysis” (performing part of speech) “and semantic role labeling” (and tagging) “on the entities” (on e.g. a to be recognized) “extracted from the text” (fuzzy text and/or example sentence text));
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to a screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” (using text similarity) “and the accuracy” (to determine to screen) “of the relationships” (an optional entity relationship which corresponds to entity relationship example sentence or recognition result) “between entities” (for the entities of e.g. a to be recognized text)).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “knowledge structure evaluation algorithm” for “similarity” and “relationship between entities” of ZHANG KUN et al. into the “attribute matching method in knowledge base” of ZHANG LINGLING et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. to determine not only “attributes” (relationships) of specific “entities” but also the “relationship between entities”, as disclosed in ZHANG KUN et al. ¶ 0010, last sentence.
Regarding claim 6, ZHANG LINGLING et al. do teach the method for entity relationship recognition according to claim 1, wherein the calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity comprises:
transforming the to-be-recognized fuzzy text to a to-be-recognized fuzzy text vector (¶ n0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” (obtaining a to be recognized fuzzy text vector from transforming the “question” (to be recognized fuzzy text)), and
transforming the example sentence fuzzy text to an example sentence fuzzy text vector (¶ n0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” “and the candidate attribute word vector” (obtaining an example sentence fuzzy text vector from the “candidate attributes” (the example sentence fuzzy text))); and
calculating a vector similarity between the to-be-recognized fuzzy text vector and the example sentence fuzzy text vector to obtain the text similarity (¶ n0010 line 1: “calculate” (calculating) “the cosine similarity” (a vector or the text similarity) “of the word vectors” (between the to be recognized fuzzy text vector) “corresponding to each word in the question and candidate attributes” (and the example sentence fuzzy text vector))).
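The claim 6 mapping above (transforming texts to vectors and calculating a cosine similarity) can be illustrated with a toy sketch; the bag-of-words count vectors below are an assumption standing in for the reference’s word embedding layer:

```python
import math
from collections import Counter

def to_vector(text: str) -> Counter:
    # Toy stand-in for the word embedding layer: a bag-of-words
    # count vector keyed by lowercased tokens.
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    # The cited calculation: cosine similarity between the two vectors.
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

question_vec = to_vector("how is the person in charge of <e>")
attribute_vec = to_vector("who is the commander of <e>")
text_similarity = cosine(question_vec, attribute_vec)  # a value in [0, 1]
```

Identical fuzzy texts yield a cosine similarity of 1, and unrelated texts yield values near 0, so the highest-scoring example sentence determines the recognized relationship.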
Regarding claim 8, ZHANG LINGLING et al. do teach an electronic device configured to execute the following steps:
acquiring a to-be-recognized text and target entity types requiring entity relationship recognition, and performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts (¶ n0018 lines 3-4: “segment” (perform sentence segmentation) “the question” (a to be recognized text) into “words” (one or more statement texts, e.g., ¶ n0008 line 3: “segment the question and the candidate attributes into words”, or ¶ n0018 line 6: “the nth word after segmentation of the question”), e.g., ¶ n0003 S1, S3: “The purpose of attribute matching is to determine which attribute” (recognize a relationship) “of an entity” (of an entity of any target type) “the question is asking about”; “For example, the question “the person in charge of Kadena Air Base” is associated with the “attribute” “commander”);
performing entity recognition on the statement text, and performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text (¶ n0018 S1: “Generate candidate attributes by querying the knowledge base based on the identified entities” (performing entity recognition on all the words (statement texts), which results in the “question” (the to-be-recognized text) having recognized entities of all entity types and thus becoming a target statement text); e.g., in the example “the person in charge of Kadena Air Base”, there are entities associated with a person’s name as well as a place name);
replacing entities corresponding to the target entity types in the target statement text with a predetermined identifier (¶ n0018 line 2: “replace” (replacing) “the text of the corresponding entity” (entities of all types) “in the question” (in the target statement text) “with a label” (with a predetermined identifier)),
and performing […] to obtain a to-be-recognized fuzzy text (replacing “Kadena Air Base” with the tag “<e>” changes the question to “How is the person in charge of <e>?”; a fuzzy text is obtained by accepting replacement of the entity “Kadena Air Base” with the predetermined identifier “<e>” in the target statement text);
acquiring all optional entity relationships among the entities of the target entity types and an entity relationship example sentence text corresponding to each of the optional entity relationships (¶ n0028 lines 11+: “Simultaneously, candidate attributes” (all optional entity relationships among the entities of all types) “for Kadena Air Base are generated” (are acquired in “How is the person in charge of Kadena Air Base” (an example sentence text))),
replacing the entities of the target entity types in the entity relationship example sentence text with the predetermined identifier, and performing […] to obtain a corresponding example sentence fuzzy text (¶ n0028: “Simultaneously” (applying an acceptation on the resulting replaced entity relationship example) “candidate attributes” (a corresponding example sentence fuzzy text) “for Kadena Air Base are generated”);
calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity (¶ n0028 lines 12-13: “Then a similarity” (a similarity) “between the question” (between the to be recognized fuzzy text) “and each attribute” (and the example sentence fuzzy text) “is calculated” (is obtained)).
ZHANG LINGLING et al. do not specifically disclose:
The electronic device comprising:
At least one processor; and
A memory in communication connection with the at least one processor;
Wherein, the memory stores a computer program that can be executed by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to execute the following steps:
Performing part of speech tagging to the to be recognized fuzzy text and/or example sentence text;
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to a screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
At least one processor; and a memory in communication connection with the at least one processor; Wherein, the memory stores a computer program that can be executed by the at least one processor, and the computer program is executed by the at least one processor (¶ 0056: “A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the methods described above”)
to enable the at least one processor to execute the following steps:
Performing part of speech tagging to the to be recognized fuzzy text (replaced target statement text) and/or example sentence text (¶ 0010 page 5 lines 6+: “An entity relationship extraction algorithm performs part of speech analysis” (performing part of speech) “and semantic role labeling” (and tagging) “on the entities” (on e.g. a to be recognized) “extracted from the text” (fuzzy text and/or example sentence text));
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to a screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” (using text similarity) “and the accuracy” (to determine to screen) “of the relationships” (an optional entity relationship which corresponds to entity relationship example sentence or recognition result) “between entities” (for the entities of e.g. a to be recognized text)).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “knowledge structure evaluation algorithm” for “similarity” and “relationship between entities” of ZHANG KUN et al. into the “attribute matching method in knowledge base” of ZHANG LINGLING et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. to determine not only “attributes” (relationships) of specific “entities” but also the “relationship between entities”, as disclosed in ZHANG KUN et al. ¶ 0010, last sentence.
Regarding claim 13, ZHANG LINGLING et al. do teach the electronic device according to claim 8, wherein the calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity comprises:
transforming the to-be-recognized fuzzy text to a to-be-recognized fuzzy text vector (¶ n0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” (obtaining a to be recognized fuzzy text vector from transforming the “question” (to be recognized fuzzy text)), and
transforming the example sentence fuzzy text to an example sentence fuzzy text vector (¶ n0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” “and the candidate attribute word vector” (obtaining an example sentence fuzzy text vector from the “candidate attributes” (the example sentence fuzzy text))); and
calculating a vector similarity between the to-be-recognized fuzzy text vector and the example sentence fuzzy text vector to obtain the text similarity (¶ n0010 line 1: “calculate” (calculating) “the cosine similarity” (a vector or the text similarity) “of the word vectors” (between the to be recognized fuzzy text vector) “corresponding to each word in the question and candidate attributes” (and the example sentence fuzzy text vector))).
Regarding claim 15, ZHANG LINGLING et al. do teach:
a method for entity relationship recognition (Abstract line 3: “generating candidate attributes” (relationship) “according to entities” (for entities)),
the method comprising:
acquiring a to-be-recognized text and target entity types requiring entity relationship recognition, performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts (¶ n0018 lines 3-4: “segment” (perform sentence segmentation) “the question” (a to be recognized text) into “words” (one or more statement texts, e.g., ¶ n0008 line 3: “segment the question and the candidate attributes into words”, or ¶ n0018 line 6: “the nth word after segmentation of the question”), e.g., ¶ n0003 S1, S3: “The purpose of attribute matching is to determine which attribute” (recognize a relationship) “of an entity” (of an entity of any target type) “the question is asking about”; “For example, the question “the person in charge of Kadena Air Base” is associated with the “attribute” “commander”);
performing entity recognition on the statement text, and performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text (¶ n0018 S1: “Generate candidate attributes by querying the knowledge base based on the identified entities” (performing entity recognition on all the words (statement texts), which results in the “question” (the to-be-recognized text) having recognized entities of all entity types and thus becoming a target statement text); e.g., in the example “the person in charge of Kadena Air Base”, there are entities associated with a person’s name as well as a place name);
replacing entities corresponding to the target entity types in the target statement text with a predetermined identifier (¶ n0018 line 2: “replace” (replacing) “the text of the corresponding entity” (entities of all types) “in the question” (in the target statement text) “with a label” (with a predetermined identifier)),
and performing […] to obtain a to-be-recognized fuzzy text (replacing “Kadena Air Base” with the tag “<e>” changes the question to “How is the person in charge of <e>?”; a fuzzy text is obtained by accepting replacement of the entity “Kadena Air Base” with the predetermined identifier “<e>” in the target statement text);
acquiring all optional entity relationships among the entities of the target entity types and an entity relationship example sentence text corresponding to each of the optional entity relationships (¶ n0028 lines 11+: “Simultaneously, candidate attributes” (all optional entity relationships among the entities of all types) “for Kadena Air Base are generated” (are acquired in “How is the person in charge of Kadena Air Base” (an example sentence text))),
replacing the entities of the target entity types in the entity relationship example sentence text with the predetermined identifier, and performing […] to obtain a corresponding example sentence fuzzy text (¶ n0028: “Simultaneously” (applying an acceptation on the resulting replaced entity relationship example) “candidate attributes” (to obtain a corresponding example sentence fuzzy text) “for Kadena Air Base are generated”),
calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity (¶ n0028 lines 12-13: “Then a similarity” (a similarity) “between the question” (between the to be recognized fuzzy text) “and each attribute” (and the example sentence fuzzy text) “is calculated” (is obtained)).
ZHANG LINGLING et al. do not specifically disclose:
Non-volatile computer readable storage medium storing a computer program, the computer program implementing following steps when being executed by a processor:
Performing part of speech tagging to the to be recognized fuzzy text and/or example sentence text;
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to a screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
Non-volatile computer readable storage medium storing a computer program, the computer program implementing following steps when being executed by a processor (¶ 0057: “A computer-readable storage medium having a computer program stored thereon that, when executed by a processor, implements the steps of any of the methods described herein”):
Performing part of speech tagging to the to be recognized fuzzy text (replaced target statement text) and/or example sentence text (¶ 0010 page 5 lines 6+: “An entity relationship extraction algorithm performs part of speech analysis” (performing part of speech) “and semantic role labeling” (and tagging) “on the entities” (on e.g. a to be recognized) “extracted from the text” (fuzzy text and/or example sentence text));
and
screening all entity relationship example sentences based on the text similarity, and determining the optional entity relationship corresponding to the screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” (using text similarity) “and the accuracy” (to determine to screen) “of the relationships” (an optional entity relationship which corresponds to entity relationship example sentence or recognition result) “between entities” (for the entities of e.g. a to be recognized text)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “knowledge structure evaluation algorithm” for “similarity” and “relationship between entities” of ZHANG KUN et al. into the “attribute matching method in knowledge base” of ZHANG LINGLING et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. to determine not only “attributes” (relationships) of specific “entities” but also the “relationship between entities,” as disclosed in ZHANG KUN et al. ¶ 0010, last sentence.
Regarding claim 19, ZHANG LINGLING et al. do teach the non-volatile computer readable storage medium according to claim 15, wherein the calculating a similarity between the to-be-recognized fuzzy text and the example sentence fuzzy text to obtain a text similarity comprises:
transforming the to-be-recognized fuzzy text to a to-be-recognized fuzzy text vector (¶ 0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” (obtaining a to-be-recognized fuzzy text vector by transforming the “question” (the to-be-recognized fuzzy text))), and
transforming the example sentence fuzzy text to an example sentence fuzzy text vector (¶ 0008 lines 3-5: “segment the question and the candidate attributes into words, and send them to the word embedding layer to obtain the question word vector” “and the candidate attribute word vector” (obtaining an example sentence fuzzy text vector from the “candidate attributes” (the example sentence fuzzy text))); and
calculating a vector similarity between the to-be-recognized fuzzy text vector and the example sentence fuzzy text vector to obtain the text similarity (¶ 0010 line 1: “calculate” (calculating) “the cosine similarity” (a vector or the text similarity) “of the word vectors” (between the to-be-recognized fuzzy text vector) “corresponding to each word in the question and candidate attributes” (and the example sentence fuzzy text vector)).
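For context only, the vector-similarity computation mapped above can be illustrated with a minimal sketch. The cited reference uses a learned word embedding layer; the bag-of-words count vectors, the `[T]` identifier, and the sample texts below are hypothetical stand-ins, not taken from the references:

```python
from collections import Counter
from math import sqrt

def to_vector(text):
    """Transform a fuzzy text into a simple bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(vec_a[t] * vec_b[t] for t in vec_a)
    norm_a = sqrt(sum(v * v for v in vec_a.values()))
    norm_b = sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# To-be-recognized fuzzy text vs. one example sentence fuzzy text,
# both with entities already replaced by the identifier "[T]".
recognized = to_vector("[T] is an employee of [T]")
example = to_vector("[T] works as an employee of [T]")
print(cosine_similarity(recognized, example))
```

The text similarity of the claim is then simply this cosine value computed between the to-be-recognized fuzzy text vector and each example sentence fuzzy text vector.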
Claim(s) 2-4, 9-11, 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG LINGLING et al. in view of ZHANG KUN et al., and further in view of XU SHIFENG (CN 116304680).
Regarding claim 2, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the method for entity relationship recognition according to claim 1, wherein the performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts comprises:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text;
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text; and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text.
XU SHIFENG does teach:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text (¶ 0075 lines 2+: “[a] process” “to solve the problems of sentence” (a to-be-recognized or to-be-segmented text) “segmentation” (segmented) “entity label” (by replacing preset entity identifiers) “contains both sentence segmentation labels” (with preset segmentation symbols, e.g., “B-KEY,” defined as a “keyword entity beginning tag” (a preset segmentation symbol, ¶ 0069 line 10), and “O,” which “represents a non-entity” (¶ 0067 line 11)));
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text (e.g., see ¶ 0072 S1 and last S respectively: “the text data is” (a to-be-recognized text) “AB TV not bad at all” and “Annotating the above text data” (replacing words in that text with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O …”, where “B-KEY” (defined as a “keyword entity beginning tag” (the preset segmentation symbol), ¶ 0069 lines 9-10) corresponds to a segmentation symbol added before everything else, including “O” (a first “non-entity” character of the initial to-be-segmented text)); and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text (¶ 0072 last S: “Annotating the above text data” (replacing words with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY…”, which shows that between two adjacent “B-KEY” (the preset segmentation symbols) there are other preset symbols representing all the other characters in the statement text; these symbols therefore segment all the characters of the “text data” between their occurrences).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” “which is beneficial for accurately identifying user intent,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
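For illustration only, the three claimed segmentation steps recited above (replace the statement identifiers with a segmentation symbol, add the symbol before the first character, then take the characters between adjacent symbols) can be sketched as follows; the particular segmentation symbol and identifier set are hypothetical choices, not drawn from the claims or the references:

```python
import re

SEGMENTATION_SYMBOL = "|"          # hypothetical preset segmentation symbol
STATEMENT_IDENTIFIERS = r"[.!?;]"  # hypothetical preset statement identifier type

def segment_statements(text):
    # Step 1: replace all statement identifiers of the preset type
    # with the preset segmentation symbol (initial to-be-segmented text).
    initial = re.sub(STATEMENT_IDENTIFIERS, SEGMENTATION_SYMBOL, text)
    # Step 2: add the segmentation symbol before the first character
    # (target to-be-segmented text).
    target = SEGMENTATION_SYMBOL + initial
    # Step 3: all characters between every two adjacent segmentation
    # symbols form one statement text.
    parts = target.split(SEGMENTATION_SYMBOL)
    return [p.strip() for p in parts if p.strip()]

print(segment_statements(
    "Li XX works at Company A. Wang YY works at Company B."))
```

The sketch yields one statement text per sentence of the input, which is the granularity on which the later entity recognition and screening steps operate.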
Regarding claim 3, ZHANG LINGLING et al. do teach the method for entity relationship recognition according to claim 1, wherein the performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts comprises:
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is smaller than or equal to 1, determining the to-be-recognized text as the statement text (¶ 0028 lines 9-11: “the entity” in “How is the person in charge of Kadena Air Base?” “is Kadena Air Base. Replacing Kadena Air Base with the tag <e> changes the question to ‘How is the person in charge of <e>?’” (i.e., here there is only one statement identifier, “e,” a quantity of one, and therefore this statement with the identifier “e” replacing an entity has resulted in the “question” (statement text))).
ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose:
judging whether a quantity of the statement identifier of a preset type contained in the to-be-recognized text is larger than 1;
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is larger than 1, replacing all statement identifiers of the preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text;
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text;
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text.
XU SHIFENG does teach:
judging whether a quantity of the statement identifier of a preset type contained in the to-be-recognized text is larger than 1 (¶ 0072 line 1: “AB TV, not bad at all”, “where A and B represent two different Chinese characters” (i.e., there are three entities, that is, more than one entity identifier: “A,” “B,” and “TV”));
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is larger than 1, replacing all statement identifiers of the preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text (¶ 0075 lines 2+: “[a] process” “to solve the problems of sentence” (the to-be-recognized or to-be-segmented text) “segmentation” (segmented) “entity label” (by replacing preset entity identifiers) “contains both sentence segmentation labels” (with preset segmentation symbols, e.g., “B-KEY,” defined as a “keyword entity beginning tag” (a preset segmentation symbol, ¶ 0069 line 10), and “O,” which “represents a non-entity” (¶ 0067 line 11)));
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text (e.g., see ¶ 0072 S1 and last S respectively: “the text data is” (a to-be-recognized text) “AB TV not bad at all” and “Annotating the above text data” (replacing words in that text with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O …”, where “B-KEY” (defined as a “keyword entity beginning tag” (the preset segmentation symbol), ¶ 0069 lines 9-10) corresponds to a segmentation symbol added before everything else, including “O” (a first “non-entity” character of the initial to-be-segmented text));
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text (¶ 0072 last S: “Annotating the above text data” (replacing words with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY…”, which shows that between two adjacent “B-KEY” (the preset segmentation symbols) there are other preset symbols representing all the other characters in the statement text; these symbols therefore segment all the characters of the “text data” between their occurrences).
For the motivation to combine ZHANG LINGLING et al. in view of ZHANG KUN et al. with XU SHIFENG, see the rejection of claim 2 above.
Regarding claim 4, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the method for entity relationship recognition according to claim 1, wherein the performing entity recognition on the statement text, and the performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text comprise:
recognizing entity types of all entities in the statement text; and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text.
XU SHIFENG does teach:
recognizing entity types of all entities in the statement text (¶ 0069 lines 7+: “based on these three entity types” “the following 11 types” (entity types of all entities in “AB TV not bad at all” (¶ 0072 S1, a statement text)) “can be obtained” (are recognized)); and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text (¶ 0072 last S: “Annotating” (determining all the target entity types) “the above text data” (for the statement text) “yields the following entity tags: B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY E-KEY B-SEN E-SEN O O B-KEY-SEP E-KEY B-SEN E-SEN” (to determine a target statement text); “O” “represents a non-entity” (¶ 0067 line 11)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
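For illustration only, the screening of claim 4 (keeping each statement text whose recognized entity types cover all the target entity types) can be sketched as below; the lexicon-based recognizer, its entity labels, and the sample statements are hypothetical, not taken from the claims or the references:

```python
def screen_target_statements(statements, target_entity_types, recognize_entities):
    """Keep only the statement texts whose recognized entity types
    contain all of the target entity types."""
    targets = []
    for statement in statements:
        recognized_types = {etype for _, etype in recognize_entities(statement)}
        if set(target_entity_types) <= recognized_types:
            targets.append(statement)
    return targets

# Toy entity recognizer over a hypothetical lexicon (for illustration only).
LEXICON = {"Li XX": "PERSON", "Company A": "ORG", "2020": "DATE"}

def toy_recognizer(statement):
    return [(e, t) for e, t in LEXICON.items() if e in statement]

print(screen_target_statements(
    ["Li XX is an employee of Company A", "Li XX was born in 2020"],
    ["PERSON", "ORG"],
    toy_recognizer))
```

Only the first sample statement survives the screen, since it is the only one containing both a PERSON-type and an ORG-type entity.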
Regarding claim 9, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the electronic device according to claim 8, wherein the performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts comprises:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text;
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text; and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text.
XU SHIFENG does teach:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text (¶ 0075 lines 2+: “[a] process” “to solve the problems of sentence” (a to-be-recognized or to-be-segmented text) “segmentation” (segmented) “entity label” (by replacing preset entity identifiers) “contains both sentence segmentation labels” (with preset segmentation symbols, e.g., “B-KEY,” defined as a “keyword entity beginning tag” (a preset segmentation symbol, ¶ 0069 line 10), and “O,” which “represents a non-entity” (¶ 0067 line 11)));
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text (e.g., see ¶ 0072 S1 and last S respectively: “the text data is” (a to-be-recognized text) “AB TV not bad at all” and “Annotating the above text data” (replacing words in that text with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O …”, where “B-KEY” (defined as a “keyword entity beginning tag” (the preset segmentation symbol), ¶ 0069 lines 9-10) corresponds to a segmentation symbol added before everything else, including “O” (a first “non-entity” character of the initial to-be-segmented text)); and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text (¶ 0072 last S: “Annotating the above text data” (replacing words with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY…”, which shows that between two adjacent “B-KEY” (the preset segmentation symbols) there are other preset symbols representing all the other characters in the statement text; these symbols therefore segment all the characters of the “text data” between their occurrences).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” “which is beneficial for accurately identifying user intent,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
Regarding claim 10, ZHANG LINGLING et al. do teach the electronic device according to claim 8, wherein the performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts comprises:
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is smaller than or equal to 1, determining the to-be-recognized text as the statement text (¶ 0028 lines 9-11: “the entity” in “How is the person in charge of Kadena Air Base?” “is Kadena Air Base. Replacing Kadena Air Base with the tag <e> changes the question to ‘How is the person in charge of <e>?’” (i.e., here there is only one statement identifier, “e,” a quantity of one, and therefore this statement with the identifier “e” replacing an entity has resulted in the “question” (statement text))).
ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose:
judging whether a quantity of the statement identifier of a preset type contained in the to-be-recognized text is larger than 1;
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is larger than 1, replacing all statement identifiers of the preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text;
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text;
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text.
XU SHIFENG does teach:
judging whether a quantity of the statement identifier of a preset type contained in the to-be-recognized text is larger than 1 (¶ 0072 line 1: “AB TV, not bad at all”, “where A and B represent two different Chinese characters” (i.e., there are three entities, that is, more than one entity identifier: “A,” “B,” and “TV”));
when the quantity of the statement identifiers of the preset type contained in the to-be-recognized text is larger than 1, replacing all statement identifiers of the preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text (¶ 0075 lines 2+: “[a] process” “to solve the problems of sentence” (the to-be-recognized or to-be-segmented text) “segmentation” (segmented) “entity label” (by replacing preset entity identifiers) “contains both sentence segmentation labels” (with preset segmentation symbols, e.g., “B-KEY,” defined as a “keyword entity beginning tag” (a preset segmentation symbol, ¶ 0069 line 10), and “O,” which “represents a non-entity” (¶ 0067 line 11)));
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text (e.g., see ¶ 0072 S1 and last S respectively: “the text data is” (a to-be-recognized text) “AB TV not bad at all” and “Annotating the above text data” (replacing words in that text with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O …”, where “B-KEY” (defined as a “keyword entity beginning tag” (the preset segmentation symbol), ¶ 0069 lines 9-10) corresponds to a segmentation symbol added before everything else, including “O” (a first “non-entity” character of the initial to-be-segmented text));
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text (¶ 0072 last S: “Annotating the above text data” (replacing words with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY…”, which shows that between two adjacent “B-KEY” (the preset segmentation symbols) there are other preset symbols representing all the other characters in the statement text; these symbols therefore segment all the characters of the “text data” between their occurrences).
For the motivation to combine ZHANG LINGLING et al. in view of ZHANG KUN et al. with XU SHIFENG, see the rejection of claim 2 above.
Regarding claim 11, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the electronic device according to claim 8, wherein the performing entity recognition on the statement text, and the performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text comprise:
recognizing entity types of all entities in the statement text; and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text.
XU SHIFENG does teach:
recognizing entity types of all entities in the statement text (¶ 0069 lines 7+: “based on these three entity types” “the following 11 types” (entity types of all entities in “AB TV not bad at all” (¶ 0072 S1, a statement text)) “can be obtained” (are recognized)); and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text (¶ 0072 last S: “Annotating” (determining all the target entity types) “the above text data” (for the statement text) “yields the following entity tags: B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY E-KEY B-SEN E-SEN O O B-KEY-SEP E-KEY B-SEN E-SEN” (to determine a target statement text); “O” “represents a non-entity” (¶ 0067 line 11)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
Regarding claim 16, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the non-volatile computer readable storage medium according to claim 15, wherein the performing sentence segmentation on the to-be-recognized text to obtain one or more statement texts comprises:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text;
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text; and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text.
XU SHIFENG does teach:
replacing all statement identifiers of a preset type contained in the to-be-recognized text with a preset segmentation symbol to obtain an initial to-be-segmented text (¶ 0075 lines 2+: “[a] process” “to solve the problems of sentence” (a to-be-recognized or to-be-segmented text) “segmentation” (segmented) “entity label” (by replacing preset entity identifiers) “contains both sentence segmentation labels” (with preset segmentation symbols, e.g., “B-KEY,” defined as a “keyword entity beginning tag” (a preset segmentation symbol, ¶ 0069 line 10), and “O,” which “represents a non-entity” (¶ 0067 line 11)));
adding the segmentation symbol before a first character of the initial to-be-segmented text to obtain a target to-be-segmented text (e.g., see ¶ 0072 S1 and last S respectively: “the text data is” (a to-be-recognized text) “AB TV not bad at all” and “Annotating the above text data” (replacing words in that text with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O …”, where “B-KEY” (defined as a “keyword entity beginning tag” (the preset segmentation symbol), ¶ 0069 lines 9-10) corresponds to a segmentation symbol added before everything else, including “O” (a first “non-entity” character of the initial to-be-segmented text)); and
segmenting all characters between every two adjacent segmentation symbols in the target to-be-segmented text to obtain the statement text (¶ 0072 last S: “Annotating the above text data” (replacing words with preset symbols) “yields” (results in) “B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY…”, which shows that between two adjacent “B-KEY” (the preset segmentation symbols) there are other preset symbols representing all the other characters in the statement text; these symbols therefore segment all the characters of the “text data” between their occurrences).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” “which is beneficial for accurately identifying user intent,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
Regarding claim 17, ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose the computer readable storage medium according to claim 15, wherein the performing entity recognition on the statement text, and the performing entity type screening on all statement texts based on an entity recognition result and the target entity types to obtain a target statement text comprise:
recognizing entity types of all entities in the statement text; and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text.
XU SHIFENG does teach:
recognizing entity types of all entities in the statement text (¶ 0069 lines 7+: “based on these three entity types” “the following 11 types” (entity types of all entities in “AB TV not bad at all” (¶ 0072 S1, a statement text)) “can be obtained” (are recognized)); and
determining the statement text containing all the target entity types in all the recognized entity types as the target statement text (¶ 0072 last S: “Annotating” (determining all the target entity types) “the above text data” (for the statement text) “yields the following entity tags: B-KEY I-KEY I-KEY E-KEY O O B-SEN E-SEN O S-SEP B-KEY E-KEY B-SEN E-SEN O O B-KEY-SEP E-KEY B-SEN E-SEN” (to determine a target statement text); “O” “represents a non-entity” (¶ 0067 line 11)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity” “type” and resulting “entity”-to-“entity label” or “entity” “tag” mapping techniques of XU SHIFENG into the overall “replacing” “entities” techniques of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further provide “more accurate entity label prediction results for text,” as disclosed in XU SHIFENG ¶ 0004, last sentence.
Claim(s) 7, 14, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG LINGLING et al. in view of ZHANG KUN et al., and further in view of ZOU MENG et al. (CN 114610903).
Regarding claim 7, ZHANG LINGLING et al. do teach the method for entity relationship recognition according to claim 1, wherein the screening all entity relationship example sentences based on the text similarity, and the determining the optional entity relationship corresponding to the screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text comprise:
determining the example sentence fuzzy text corresponding to an example sentence fuzzy text vector corresponding to a maximum text similarity as a target example sentence fuzzy text (¶ 0013 page 8 lines 1+: “After obtaining the similarity between the question and all candidate attributes, select the attribute” (determining an example sentence fuzzy text vector as a target example sentence fuzzy text) “with the highest similarity” (corresponding to a maximum text similarity));
determining the entity relationship example sentence corresponding to the target example sentence fuzzy text as a target entity relationship example sentence (¶ 0013 page 8 lines 2-3: “If the similarity exceeds a set threshold, add the candidate attribute” (determining a target entity relationship example sentence corresponding to the “candidate attribute” (the target example sentence fuzzy text))).
ZHANG LINGLING et al. do not specifically disclose:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text;
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” “and the accuracy” “of the relationships” (the optional entity relationship is determined to correspond to the entity relationship example sentence in the target statement) “between entities” (for the entities of, e.g., a to-be-recognized text)).
For obviousness to combine ZHANG LINGLING et al. and ZHANG KUN et al. see claim 1.
ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZOU MENG et al. do teach:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text (¶ 0019 lines 6+: “full text” (a statement or to-be-recognized text) “input into the intra-sentence relation extraction model and the full-text relation extraction model to obtain the first entity relation information and the second entity relation information” (to extract all entity relationship recognition results) “The first entity relation and the second entity relation are overlaid and summarized” (and summarizing them) “to obtain the text relation extraction result” (to obtain the “full-text” (to-be-recognized text) entity relationship result)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity relation” and “text relation extraction result” determination techniques of ZOU MENG et al. into the “entity” “relationship” calculation of ZHANG KUN et al. in ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. in view of ZHANG KUN et al. to “improve” their “text relation extraction accuracy” (the overall entity relationship of a target statement), as disclosed in ZOU MENG et al., Abstract, last sentence.
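For illustration only, the selection-and-summarization flow of claim 7 (select the example sentence whose fuzzy text has the maximum text similarity, take its associated optional entity relationship as the per-statement result, and summarize the results over all target statements) can be sketched as follows; the token-overlap similarity, the `[T]` identifier, and the example data are hypothetical stand-ins for the references' similarity computation:

```python
def recognize_relationships(target_statements, example_sentences, similarity):
    """For each target statement's fuzzy text, pick the example sentence
    fuzzy text with the maximum similarity and take its optional entity
    relationship; collect the per-statement results."""
    results = []
    for fuzzy_text in target_statements:
        best_example = max(
            example_sentences,
            key=lambda ex: similarity(fuzzy_text, ex["fuzzy_text"]))
        results.append(best_example["relationship"])
    # The summarized list is the recognition result for the whole text.
    return results

# Hypothetical example sentences, each tied to an optional entity relationship.
EXAMPLES = [
    {"fuzzy_text": "[T] is an employee of [T]", "relationship": "employment"},
    {"fuzzy_text": "[T] is a subsidiary of [T]", "relationship": "ownership"},
]

def overlap(a, b):
    # Crude token-overlap similarity, standing in for cosine similarity.
    return len(set(a.split()) & set(b.split()))

print(recognize_relationships(
    ["[T] works as an employee of [T]"], EXAMPLES, overlap))
```

Any similarity function with the same two-argument shape (such as the cosine similarity of word vectors cited from ZHANG LINGLING et al.) could be passed in place of the token-overlap stand-in.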
Regarding claim 14, ZHANG LINGLING et al. do teach the electronic device according to claim 8, wherein the screening all entity relationship example sentences based on the text similarity, and the determining the optional entity relationship corresponding to the screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text comprise:
determining the example sentence fuzzy text corresponding to an example sentence fuzzy text vector corresponding to a maximum text similarity as a target example sentence fuzzy text (¶ 0013 page 8 lines 1+: “After obtaining the similarity between the question and all candidate attributes, select the attribute” (determining an example sentence fuzzy text vector as a target example sentence fuzzy text) “with the highest similarity” (corresponding to a maximum text similarity));
determining the entity relationship example sentence corresponding to the target example sentence fuzzy text as a target entity relationship example sentence (¶ 0013 page 8 lines 2-3: “If the similarity exceeds a set threshold, add the candidate attribute” (determining a target entity relationship example sentence corresponding to the “candidate attribute” (the target example sentence fuzzy text))).
ZHANG LINGLING et al. do not specifically disclose:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text;
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” “and the accuracy” “of the relationships” (the optional entity relationship is determined to correspond to the entity relationship example sentence in the target statement text) “between entities” (for the entities of, e.g., a to-be-recognized text)).
For obviousness to combine ZHANG LINGLING et al. and ZHANG KUN et al. see claim 8.
ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZOU MENG et al. do teach:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text (¶ 0019 lines 6+: “full text” (a statement or to-be-recognized text) “input into the intra-sentence relation extraction model and the full-text relation extraction model to obtain the first entity relation information and the second entity relation information” (to extract all entity relationship recognition results) “The first entity relation and the second entity relation are overlaid and summarized” (and summarizing them) “to obtain the text relation extraction result” (to obtain the “full-text” (to-be-recognized text) entity relationship result)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity relation” and “text relation extraction result” determination techniques of ZOU MENG et al. into the “entity” “relationship” calculation of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. in view of ZHANG KUN et al. to “improve” their “text relation extraction accuracy” (the overall entity relationship of a target statement), as disclosed in the last sentence of the ZOU MENG et al. Abstract.
Regarding claim 20, ZHANG LINGLING et al. do teach the non-volatile computer readable storage medium according to claim 15, wherein the screening all entity relationship example sentences based on the text similarity, and the determining the optional entity relationship corresponding to the screened entity relationship example sentence as an entity relationship recognition result of the to-be-recognized text comprise:
determining the example sentence fuzzy text corresponding to an example sentence fuzzy text vector corresponding to a maximum text similarity as the target example sentence fuzzy text (¶ 0013 page 8 lines 1+: “After obtaining the similarity between the question and all candidate attributes, select the attribute” (determine an example sentence fuzzy text vector as a target example sentence fuzzy text) “with the highest similarity” (corresponding to a maximum text similarity));
determining the entity relationship example sentence corresponding to the target example sentence fuzzy text as the target entity relationship example sentence (¶ 0013 page 8 lines 2-3: “If the similarity exceeds a set threshold, add the candidate attribute” (determine the target entity relationship example sentence corresponding to the “candidate attribute” (the target example sentence fuzzy text))).
ZHANG LINGLING et al. do not specifically disclose:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text;
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZHANG KUN et al. do teach:
determining the optional entity relationship corresponding to the target entity relationship example sentence as the entity relationship recognition result in the target statement text (¶ 0010 last S: “A knowledge structure evaluation algorithm evaluates the similarity matching” “and the accuracy” “of the relationships” (the optional entity relationship is determined to correspond to the entity relationship example sentence in the target statement text) “between entities” (for the entities of, e.g., a to-be-recognized text)).
For obviousness to combine ZHANG LINGLING et al. and ZHANG KUN et al. see claim 15.
ZHANG LINGLING et al. in view of ZHANG KUN et al. do not specifically disclose:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text.
ZOU MENG et al. do teach:
summarizing all entity relationship recognition results in the target statement text to obtain the entity relationship recognition result of the to-be-recognized text (¶ 0019 lines 6+: “full text” (a statement or to-be-recognized text) “input into the intra-sentence relation extraction model and the full-text relation extraction model to obtain the first entity relation information and the second entity relation information” (to extract all entity relationship recognition results) “The first entity relation and the second entity relation are overlaid and summarized” (and summarizing them) “to obtain the text relation extraction result” (to obtain the “full-text” (to-be-recognized text) entity relationship result)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “entity relation” and “text relation extraction result” determination techniques of ZOU MENG et al. into the “entity” “relationship” calculation of ZHANG LINGLING et al. in view of ZHANG KUN et al. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable ZHANG LINGLING et al. in view of ZHANG KUN et al. to “improve” their “text relation extraction accuracy” (the overall entity relationship of a target statement), as disclosed in the last sentence of the ZOU MENG et al. Abstract.
Allowable Subject Matter
Claims 5, 12 and 18 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD whose telephone number is (571)270-5860. The examiner can normally be reached 10:30 am to 11:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Farzad Kazeminezhad/
Art Unit 2653
February 4, 2026.