Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the correspondence filed 02/18/26 regarding application 18/651,183, in which, in response to a requirement for restriction/election, Applicant elected Group I, claims 1-10, 17, and 19. Claims 1-20 are pending in the application, with non-elected claims 11-16, 18, and 20 withdrawn. Claims 1-10, 17, and 19 have been considered.
Election/Restrictions
Applicant’s election of Group I, claims 1-10, 17, and 19, in the reply filed on 02/18/26 is acknowledged. In the response to the restriction requirement, Applicant states “Applicant traverses the requirement to the extent of requesting that claims in Group II and Group III that correspond to allowable Group I claims be reinstated for allowance.”
In response, MPEP 818.01(a) requires that a traverse to a requirement for restriction must be complete as required by 37 CFR 1.111(b). Under this rule, the Applicant is required to specifically point out the reason(s) on which they base their conclusion(s) that a requirement to restrict is in error. Because Applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)).
With regard to Applicant’s request that claims in Group II and Group III that correspond to allowable Group I claims be reinstated for allowance, should the claims in Group I be found in condition for allowance, the nonelected inventions will be considered for rejoinder (MPEP 821.04).
Foreign Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Method of Generating Relational Triplets from Text, Training Method, and Electronic Device.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10, 17, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites “encoding a text to be processed to obtain a feature information; identifying a plurality of entity information from the text, based on the feature information; generating a word relation tensor based on the feature information; and determining a relation between the plurality of entity information by using the word relation tensor, so as to generate a plurality of relational triplets related to the text.”
The limitation of encoding a text to be processed to obtain a feature information, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, “encoding a text to be processed to obtain a feature information” in the context of this claim encompasses viewing text and writing down a code that represents, e.g., word length on a sheet of paper.
Similarly, the limitation of “identifying a plurality of entity information from the text, based on the feature information”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, “identifying a plurality of entity information from the text, based on the feature information” in the context of this claim encompasses mentally identifying a plurality of entity information from the text, based on the feature information.
Similarly, the limitation of “generating a word relation tensor based on the feature information”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, “generating a word relation tensor based on the feature information” in the context of this claim encompasses writing down a word relation tensor based on the feature information on the sheet of paper.
Similarly, the limitation of “determining a relation between the plurality of entity information by using the word relation tensor, so as to generate a plurality of relational triplets related to the text”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, “determining a relation between the plurality of entity information by using the word relation tensor, so as to generate a plurality of relational triplets related to the text” in the context of this claim encompasses mentally determining a relation between the plurality of entity information by using the word relation tensor, and then writing down a plurality of relational triplets related to the text on the sheet of paper.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not recite any additional elements that would integrate the abstract idea into a practical application by imposing any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Specifically with respect to Step 2A, Prong Two, of the Alice/Mayo test, the judicial exception is not integrated into a practical application. Claim 1 does not recite any limitations that are not mental steps.
Specifically with respect to Step 2B of the Alice/Mayo test, “the claim as a whole does not amount to significantly more than the exception itself (there is no inventive concept in the claim)”. MPEP 2106.05 II. There are no limitations in claim 1 outside of the judicial exception. The claim as a whole does not appear to contain any inventive concept. As discussed above, claim 1 pertains to the mental process of extracting relational triplets from text, which can be performed entirely by a human with physical aids.
Dependent claims 2-10 depend from claim 1, do not remedy any of the deficiencies of claim 1, and therefore are rejected on the same grounds as claim 1 above.
Generally, claims 2-10 merely recite additional steps for extracting relational triplets from text, all of which could be performed mentally or by writing down relationships with a pen and paper, and do not amount to anything more than substantially the same abstract idea as explained with respect to claim 1.
Specifically:
Claim 2 recites “the encoding a text to be processed to obtain a feature information comprises: extracting a plurality of word objects from the text; determining a plurality of word features for the plurality of word objects respectively; and encoding the plurality of word features to obtain the feature information” which could be performed by copying a plurality of word objects from the text to a sheet of paper, mentally determining a plurality of word features for the plurality of word objects respectively, and writing down a code encoding the plurality of word features to obtain the feature information.
Claim 3 recites “the encoding the plurality of word features of the plurality of word objects to obtain the feature information comprises: encoding the plurality of word features to obtain a plurality of context information of the plurality of word objects; determining a plurality of hidden features for the plurality of word objects respectively, according to the plurality of context information; and generating the feature information according to the plurality of hidden features” which could be performed by writing down codes for the plurality of word features to obtain a plurality of context information of the plurality of word objects, mentally determining a plurality of hidden features for the plurality of word objects respectively, according to the plurality of context information (e.g., by mentally guessing relationships between the words based on the features), and writing down the feature information according to the plurality of hidden features.
Claim 4 recites “the identifying a plurality of entity information from the text, based on the feature information comprises: determining, from a plurality of preset label sequences, a target label sequence corresponding to the feature information; annotating an entity type for each of a plurality of word objects in the text, according to the target label sequence; determining an entity scope according to the entity types of the plurality of word objects, wherein the entity scope indicates a number of word objects comprised in an entity and position information of the word objects comprised in the entity in the text; and determining the plurality of entity information of the text according to the entity scope” which could be performed by mentally determining, from a plurality of preset label sequences, a target label sequence corresponding to the feature information; writing down an entity type for each of a plurality of word objects in the text, according to the target label sequence; mentally determining an entity scope according to the entity types of the plurality of word objects, wherein the entity scope indicates a number of word objects comprised in an entity and position information of the word objects comprised in the entity in the text; and mentally determining the plurality of entity information of the text according to the entity scope.
Claim 5 recites “the determining, from a plurality of preset label sequences, a target label sequence corresponding to the feature information comprises: determining an evaluation value matrix related to labels according to the feature information; determining a plurality of evaluation value functions for the plurality of preset label sequences respectively, according to evaluation value matrices of a plurality of preset labels comprised in the plurality of preset label sequences; determining a probability of each preset label sequence among the plurality of preset label sequences being selected as the target label sequence, according to the plurality of evaluation value functions; and determining the target label sequence from the plurality of preset label sequences according to the probability” which could be performed by mentally determining an evaluation value matrix related to labels according to the feature information; mentally determining a plurality of evaluation value functions for the plurality of preset label sequences respectively, according to evaluation value matrices of a plurality of preset labels comprised in the plurality of preset label sequences; mentally determining a probability of each preset label sequence among the plurality of preset label sequences being selected as the target label sequence, according to the plurality of evaluation value functions; and mentally determining the target label sequence from the plurality of preset label sequences according to the probability.
Claim 6 recites “the determining a probability of each preset label sequence among the plurality of preset label sequences being selected as the target label sequence, according to the plurality of evaluation value functions comprises: determining an expected value of the plurality of preset label sequences related to the text, according to the plurality of evaluation value functions; and determining the probability according to a ratio between each of the plurality of evaluation value functions and the expected value” which could be performed by mentally determining an expected value of the plurality of preset label sequences related to the text, according to the plurality of evaluation value functions; and mentally determining the probability according to a ratio between each of the plurality of evaluation value functions and the expected value.
Claim 7 recites “the generating a word relation tensor based on the feature information comprises: determining a word feature matrix according to the feature information; and generating the word relation tensor according to the word feature matrix” which could be performed by mentally determining a word feature matrix according to the feature information; and writing down the word relation tensor according to the word feature matrix.
Claim 8 recites “the determining the word relation tensor according to the word feature matrix comprises: constructing a relation core tensor, wherein the relation core tensor comprises a plurality of relational basis matrices; generating a relation feature matrix according to a modular product between a preset relational weight matrix and the relation core tensor; and generating the word relation tensor according to the relation feature matrix and the word feature matrix” which could be performed by mentally constructing and writing down a relation core tensor, wherein the relation core tensor comprises a plurality of relational basis matrices; writing down a relation feature matrix according to a modular product between a preset relational weight matrix and the relation core tensor; and writing down the word relation tensor according to the relation feature matrix and the word feature matrix.
Claim 9 recites “the determining a relation between the plurality of entity information according to the plurality of word relation tensors, so as to obtain a plurality of relational triplets comprises: generating a plurality of entity pairs according to the plurality of entity information, wherein the entity pair comprises two entities indicated by any two entity information among the plurality of entity information; generating, in a case that a correlation between the plurality of entity pairs and a plurality of relations comprised in the word relation tensor meets a preset condition, the plurality of relational triplets according to the plurality of entity pairs and corresponding relations” which could be performed by writing down a plurality of entity pairs according to the plurality of entity information, wherein the entity pair comprises two entities indicated by any two entity information among the plurality of entity information, and mentally generating and writing down, in a case that a correlation between the plurality of entity pairs and a plurality of relations comprised in the word relation tensor meets a preset condition, the plurality of relational triplets according to the plurality of entity pairs and corresponding relations.
Claim 10 recites “the generating, in a case that a correlation between the plurality of entity pairs and a plurality of relations comprised in the word relation tensor meets a preset condition, the plurality of relational triplets according to the plurality of entity pairs and corresponding relations comprises: for each entity pair, acquiring two entity length information of two entities in the entity pair; determining a plurality of correlation values between the entity pair and the plurality of relations, according to the two entity length information and the word relation tensor; and generating, in a case that at least one correlation value among the plurality of correlation values is greater than or equal to a preset threshold, at least one relational triplet according to at least one relation corresponding to the at least one correlation value and the entity pair” which could be performed by for each entity pair, visually acquiring two entity length information of two entities in the entity pair; mentally determining a plurality of correlation values between the entity pair and the plurality of relations, according to the two entity length information and the word relation tensor; and mentally generating and writing down, in a case that at least one correlation value among the plurality of correlation values is greater than or equal to a preset threshold, at least one relational triplet according to at least one relation corresponding to the at least one correlation value and the entity pair.
In sum, claims 2-10 depend from claim 1 and further recite mental processes as explained above. None of the additional limitations recited in claims 2-10 amount to anything more than the same or a similar abstract idea as recited in claim 1. Nor do any limitations in claims 2-10 (a) integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or (b) amount to significantly more than the judicial exception. Claims 2-10 are not patent eligible.
Claim 17 is directed to an electronic device that corresponds to the method of claim 1 and is therefore rejected for the same reasons set forth above with respect to claim 1. While claim 17 recites generic computer components (at least one processor, memory, instructions), such generic computing components are recited at a high level of generality (i.e., as a generic processor and memory performing generic computer instructions) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Claim 17 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Claim 17 is not patent eligible.
Claim 19 is directed to a non-transitory computer-readable storage medium that corresponds to the method of claim 1 and is therefore rejected for the same reasons set forth above with respect to claim 1. While claim 19 recites generic computer components (instructions, computer), such generic computing components are recited at a high level of generality (i.e., as a computer performing generic computer instructions) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Claim 19 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Claim 19 is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3 and 7-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al. (“TDRE: A Tensor Decomposition Based Approach for Relation Extraction”. arXiv:2010.07533v1 [cs.AI] 15 Oct 2020).
Consider claim 1, Zhao discloses a method of processing a text, comprising:
encoding a text to be processed to obtain a feature information (text such as from the NYT10 corpus is encoded to generate word embeddings, page 3, section 3, page 7, section 4.1);
identifying a plurality of entity information from the text, based on the feature information (entity recognition using the embeddings, page 4, section 3.2);
generating a word relation tensor based on the feature information (tensor extraction representation X is extracted from word embedding matrix with elements indicating triplet related word pairs, page 5, section 3.4, page 3, Section 3.1); and
determining a relation between the plurality of entity information by using the word relation tensor, so as to generate a plurality of relational triplets related to the text (tensor decomposition determines a relation between represented entities resulting in decoded triplets, page 6, Algorithm 1).
Consider claim 2, Zhao discloses the encoding a text to be processed to obtain a feature information comprises:
extracting a plurality of word objects from the text (each word, page 3, section 3.1);
determining a plurality of word features for the plurality of word objects respectively (word-char embeddings, page 3, section 3.1); and
encoding the plurality of word features to obtain the feature information (a representation matrix for the given sentence encodes hidden states and word embeddings, page 3, Section 3.1).
Consider claim 3, Zhao discloses the encoding the plurality of word features of the plurality of word objects to obtain the feature information comprises:
encoding the plurality of word features to obtain a plurality of context information of the plurality of word objects (bidirectional contextual information, page 3, section 3.1);
determining a plurality of hidden features for the plurality of word objects respectively, according to the plurality of context information (forward and backward hidden states, page 3, section 3.1); and
generating the feature information according to the plurality of hidden features (representation matrix A, page 3, section 3.1).
Consider claim 7, Zhao discloses the generating a word relation tensor based on the feature information comprises:
determining a word feature matrix according to the feature information (from feature vector embeddings we get a final representation matrix A for the given sentence, Section 3.1, page 3); and
generating the word relation tensor according to the word feature matrix (word relation tensors are extracted from the matrix, page 5, Section 3.4).
Consider claim 8, Zhao discloses the determining the word relation tensor according to the word feature matrix comprises:
constructing a relation core tensor, wherein the relation core tensor comprises a plurality of relational basis matrices (decomposition into target tensor using diagonal matrices Dk ∈ Rdh× dh with entry (Dk)ii indicating the participation of ith latent component at relation k and i = 1,2,...,dh, page 5, Section 3.4);
generating a relation feature matrix according to a modular product between a preset relational weight matrix and the relation core tensor (final decomposition algorithm using factor of D in the structure of the tensor decomposition, page 5, Section 3.4); and
generating the word relation tensor according to the relation feature matrix and the word feature matrix (equation 11, page 5, Section 3.4, see above, algorithm 1, page 6).
Consider claim 9, Zhao discloses the determining a relation between the plurality of entity information according to the plurality of word relation tensors, so as to obtain a plurality of relational triplets comprises:
generating a plurality of entity pairs according to the plurality of entity information, wherein the entity pair comprises two entities indicated by any two entity information among the plurality of entity information (the two entities in the extracted triplet, Section 3.4, page 5);
generating, in a case that a correlation between the plurality of entity pairs and a plurality of relations comprised in the word relation tensor meets a preset condition, the plurality of relational triplets according to the plurality of entity pairs and corresponding relations (the final triplet decode procession judging whether the triplet is true for each triplet, equation 13, algorithm 1, page 6).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 5, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (“TDRE: A Tensor Decomposition Based Approach for Relation Extraction”. arXiv:2010.07533v1 [cs.AI] 15 Oct 2020) in view of Kirch et al. (US 20240193368).
Consider claim 4, Zhao discloses the identifying a plurality of entity information from the text, based on the feature information comprises:
determining, from a plurality of preset label sequences, a target label sequence corresponding to the feature information (we regard entity recognition task as a sequence labeling problem with a given entity tag set, i.e. preset label sequences, a probabilistic score for a sequence of entity tags, Section 3.2, page 4);
annotating an entity type for each of a plurality of word objects in the text, according to the target label sequence (with a tag from the set of entity tags, Section 3.2, page 4);
determining an entity scope according to the entity types of the plurality of word objects (the number of consecutive positions i assigned the entity tag yi defines the “scope” of the entity, Section 3.2, page 4); and
determining the plurality of entity information of the text according to the entity scope (extracting the triplets from the specified entity spans, section 3.4, page 5).
Zhao does not specifically mention wherein the entity scope indicates a number of word objects comprised in an entity and position information of the word objects comprised in the entity in the text.
Kirch discloses an entity scope indicates a number of word objects comprised in an entity and position information of the word objects comprised in the entity in the text (span length, position, Fig. 4, [0045]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhao such that the entity scope explicitly indicates a number of word objects comprised in an entity and position information of the word objects comprised in the entity in the text in order to avoid losing information such as location information in nested entities, as suggested by Kirch ([0005]). Doing so would have predictably improved NER by considering the full context of the candidate named entities, as suggested by Kirch ([0006]).
Consider claim 5, Zhao discloses the determining, from a plurality of preset label sequences, a target label sequence corresponding to the feature information comprises:
determining an evaluation value matrix related to labels according to the feature information (the word-to-word pair mapping can be modeled as a matrix X ∈ Rn×n, where the element xij in the matrix X indicates whether the ith word and the jth word can form a triplet related word pair, Section 3.4, page 5);
determining a plurality of evaluation value functions for the plurality of preset label sequences respectively, according to evaluation value matrices of a plurality of preset labels comprised in the plurality of preset label sequences (We set Dr as a zero matrix, thus Xr turns into a zero matrix after calculation, which means no triplet in rth relation component, equation 10, Section 3.4, page 5);
determining a probability of each preset label sequence among the plurality of preset label sequences being selected as the target label sequence, according to the plurality of evaluation value functions (conditional probability where p represents probability distribution, xi, xj respectively denotes the i-th and j-th word representation in the sentence S and rk denotes the k-th relation type, Section 3.4, page 5); and
determining the target label sequence from the plurality of preset label sequences according to the probability (Xijk is the predicted triplet tensor, Section 3.4, page 6). The references cited are analogous art in the same field of natural language processing.
Consider claim 17, Zhao discloses the method of claim 1 (see claim 1).
Zhao does not specifically mention an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement.
Kirch discloses an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement (processors and memory on computer executing code, [0045]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhao by including an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement in order to practice the method in a more modular way, as suggested by Kirch, [0023], predictably improving flexibility.
Consider claim 19, Zhao discloses the method of claim 1 (see claim 1).
Zhao does not specifically mention a non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement.
Kirch discloses a non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement (processors and memory on computer executing code, [0045]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhao by including a non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to implement for reasons similar to those for claim 17.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (“TDRE: A Tensor Decomposition Based Approach for Relation Extraction”. arXiv:2010.07533v1 [cs.AI] 15 Oct 2020) in view of Kirch et al. (US 20240193368), and further in view of Ma et al. (“End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF”. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1064–1074, Berlin, Germany, August 7-12, 2016).
Consider claim 6, Zhao discloses the determining a probability of each preset label sequence among the plurality of preset label sequences being selected as the target label sequence, according to the plurality of evaluation value functions (conditional probability where p represents a probability distribution, xi and xj respectively denote the i-th and j-th word representations in the sentence S, and rk denotes the k-th relation type, Section 3.4, page 5).
Zhao and Kirch do not specifically mention: determining an expected value of the plurality of preset label sequences related to the text, according to the plurality of evaluation value functions; and determining the probability according to a ratio between each of the plurality of evaluation value functions and the expected value.
Ma discloses: determining an expected value of the plurality of preset label sequences related to the text, according to the plurality of evaluation value functions (z denotes a generic input sequence and y a possible label sequence for z, Section 2.3, page 1066); and determining the probability according to a ratio between each of the plurality of evaluation value functions and the expected value (probabilistic model for sequence labeling: the CRF defines a family of conditional probabilities over all possible label sequences, see equation for p(y|z;W,b), Section 2.3, page 1066).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhao and Kirch by determining an expected value of the plurality of preset label sequences related to the text, according to the plurality of evaluation value functions; and determining the probability according to a ratio between each of the plurality of evaluation value functions and the expected value in order to increase the range of sequence labeling tasks for which the model is effective, as suggested by Ma (Section 1, page 1065). Doing so would have led to predictable results of increasing NER accuracy in various fields, as suggested by Ma (Section 1, page 1065). The references cited are analogous art in the same field of natural language processing.
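For clarity of record, the CRF formulation cited from Ma (Section 2.3, page 1066) is the standard linear-chain CRF, in which the conditional probability of a label sequence y given an input sequence z is the ratio of the product of per-position potential functions to the sum of such products over all candidate label sequences. The correspondence of this ratio to the claimed “evaluation value functions” and “expected value” is the examiner’s reading of the claim language, not Ma’s terminology:

```latex
p(y \mid z; W, b) =
  \frac{\prod_{i=1}^{n} \psi_i(y_{i-1}, y_i, z)}
       {\sum_{y' \in \mathcal{Y}(z)} \prod_{i=1}^{n} \psi_i(y'_{i-1}, y'_i, z)},
\qquad
\psi_i(y', y, z) = \exp\!\left(W_{y', y}^{\top} z_i + b_{y', y}\right)
```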
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (“TDRE: A Tensor Decomposition Based Approach for Relation Extraction”. arXiv:2010.07533v1 [cs.AI] 15 Oct 2020) in view of Wadden et al. (“Entity, Relation, and Event Extraction with Contextualized Span Representations”. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5784–5789, Hong Kong, China, November 3–7, 2019).
Consider claim 10, Zhao discloses generating, in a case that a correlation between the plurality of entity pairs and a plurality of relations comprised in the word relation tensor meets a preset condition, the plurality of relational triplets according to the plurality of entity pairs and corresponding relations (the final triplet decode process, judging whether the triplet is true for each triplet, equation 13, algorithm 1, page 6); and generating, in a case that at least one correlation value among the plurality of correlation values is greater than or equal to a preset threshold, at least one relational triplet according to at least one relation corresponding to the at least one correlation value and the entity pair (final triplet decode process, judging whether the triplet is true for each triplet, where γ2 represents the threshold for judging whether a triplet is true, equation 13, algorithm 1, page 6).
Zhao does not specifically mention for each entity pair, acquiring two entity length information of two entities in the entity pair; determining a plurality of correlation values between the entity pair and the plurality of relations, according to the two entity length information and the word relation tensor.
Wadden discloses for each entity pair, acquiring two entity length information of two entities in the entity pair (predicting entity type labels and relations for span pairs, Section 2.1, page 5785); determining a plurality of correlation values between the entity pair and the plurality of relations, according to the two entity length information and a word relation tensor (measures of similarity between spans i and j under task x, Section 2.1, page 5785).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhao by for each entity pair, acquiring two entity length information of two entities in the entity pair; determining a plurality of correlation values between the entity pair and the plurality of relations, according to the two entity length information and the word relation tensor in order to enable the model to disambiguate challenging entity mentions, as suggested by Wadden (Abstract, page 5784), with predictable benefits in information extraction tasks, as suggested by Wadden (Section 1, page 5784). The references cited are analogous art in the same field of natural language processing.
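For clarity of record, the thresholded decode step cited from Zhao (equation 13, algorithm 1, page 6) may be paraphrased as the following decision rule, where Xijk is the predicted triplet tensor value for entity pair (ei, ej) and relation rk; this is a paraphrase of the cited rule, not Zhao’s exact notation:

```latex
\text{output } (e_i, r_k, e_j) \text{ as a relational triplet} \iff X_{ijk} \ge \gamma_2
```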
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wang et al. (“TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking”. Proceedings of the 28th International Conference on Computational Linguistics, pages 1572–1582, Barcelona, Spain (Online), December 8-13, 2020) discloses a one-stage joint neural extraction model for entities and relations from text.
US 20190236492 Saha discloses initial learning of an adaptive deterministic classifier for entity and relation extraction.
US 11423072 Chen discloses employing multimodal learning for analyzing entity record relationships.
US 20160098645 Sharma discloses a high-precision limited supervision relationship extractor.
US 11977569 Potter discloses autonomous open schema construction from unstructured text.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jesse Pullias whose telephone number is 571/270-5135. The examiner can normally be reached on M-F 8:00 AM - 4:30 PM. The examiner’s fax number is 571/270-6135.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders can be reached on 571/272-7516.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jesse S Pullias/
Primary Examiner, Art Unit 2655 03/17/26