Prosecution Insights
Last updated: April 19, 2026
Application No. 18/063,089

METHOD FOR SAMPLE AUGMENTATION

Status: Non-Final OA (§103)
Filed: Dec 08, 2022
Examiner: YOUNG, CAMERON KENNETH
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 11m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 70% — above average (14 granted / 20 resolved; +8.0% vs TC avg)
Interview Lift: +12.5% — moderate lift, measured over resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline)
Career History: 43 total applications across all art units; 23 currently pending

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 58.9% (+18.9% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)

Tech Center averages are estimates; based on career data from 20 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/03/2025 has been entered.

Response to Amendment

Applicant's amendment, filed 12/03/2025, has been entered. Claims 12 and 20 have been cancelled. Claims 1, 3 – 11, and 14 – 19 remain pending in the application. The cancellation of claim 20 renders the previously set forth 35 U.S.C. § 101 rejections moot. As such, the 35 U.S.C. § 101 rejections have been withdrawn.

Response to Arguments

Applicant's arguments filed 12/03/2025 have been fully considered but they are not persuasive. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., dynamic interaction, feedback optimization, a dynamic active learning loop, the comparison of prediction outputs with real data of the training corpus) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In further detail, Applicant argues, on page 13 of Applicant's Response, that the cited prior art fails to teach each of three limitations, tabulated as:

1.1: "selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training"
1.2: "generating a target triplet information extraction network, by adjusting the triplet information extraction network based on the labeled triplet information of the training corpus and the prediction triplet information"
1.3: "determining the third triplet information based on a voting mechanism from the pieces of candidate triplet information."

Particularly, Applicant argues, on pages 13 – 15 of Applicant's Response, that Perera does not expressly teach "based on prediction results of each training corpus in the batch of training corpora after each training." Examiner respectfully disagrees. Most notably, it is not Perera alone that teaches such a limitation. Perera was indeed indicated by the examiner to contain some elements of these limitations, but it is Perera in view of Kotnis' teachings of prediction of models for training that teaches limitation 1.1. Applicant acknowledges this is the core of the argument, on page 13 of Applicant's Response, but does not provide an analysis that treats the teachings as such a combination, beyond the general characterization that the Office Action and Advisory Action, dated 09/03/2025 and 11/24/2025 respectively, do not provide sufficient evidence for a combination by a person of ordinary skill in the art. Examiner disagrees.
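For orientation only: the selection step recited in limitation 1.1 corresponds to a conventional active-learning pattern, in which the corpora the model is least confident about after a training pass are the ones routed out for labeling. The following is a minimal sketch of that pattern, not code from the application or the cited references; the function and parameter names are hypothetical, and the least-confidence ranking is one assumed scoring choice among several.

```python
from typing import Callable, List, Sequence, Tuple

def select_corpora_to_label(
    batch: Sequence[str],
    predict: Callable[[str], List[float]],  # per-corpus prediction confidence scores
    k: int = 10,
) -> List[str]:
    """After a training pass, rank each training corpus in the batch by its
    least-confident prediction and return the k corpora whose predictions
    are least certain -- the candidates to send out for manual labeling."""
    scored: List[Tuple[float, str]] = []
    for corpus in batch:
        scores = predict(corpus)                         # prediction results for this corpus
        scored.append((min(scores, default=0.0), corpus))
    scored.sort(key=lambda pair: pair[0])                # least confident first
    return [corpus for _, corpus in scored[:k]]
```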
In the office action dated 09/03/2025, Examiner laid out: "It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun with the teachings of Perera to provide the limitations of claim 1. Doing so would have reduced the cost and time consumed generating training sets and increased the semantic diversity of the sets as recognized by Perera at ¶¶ [0009] – [0010]." This statement, along with the cited portion of Perera, codifies that Perera's approach can reduce the cost and time consumed by generating training sets and increase the semantic diversity of the training sets. Further still, Perera, Kotnis, and Sun all exist within very similar fields of endeavor, which a person of ordinary skill in the art would have been motivated to combine because of their similar fields and related processes. It is not significant effort to combine similar art achieving similar goals while performing similar processes.

Further still, Applicant alleges that the combination would not obtain a "dynamic active learning loop defined in limitation 1.1." Examiner notes that the claim does not refer to, nor make clear, any sort of "dynamic active learning loop." The limitation itself, under its broadest reasonable interpretation, is generally selecting a training corpus to be labeled, based on prediction results of trainings for a batch of training corpora. There is no reference to dynamic interaction and feedback optimization present within the claims. As such, the 35 U.S.C. § 103 rejections of claims 1, 3 – 11, and 14 – 19 are maintained for at least the reasons laid out above.

Further, Applicant argues that there is a conceptual confusion regarding Kotnis because Kotnis allegedly does not describe adjusting the network during the training phase by comparing the model's prediction outputs for the training data with real data of the training corpus. Examiner notes that this concept is not present within the claims. Instead, limitation 1.2 only refers to "generating a target triplet information extraction network by adjusting the triplet information extraction network based on the labeled triplet information of the training corpus and the prediction triplet information." Basing such a generation of a target triplet information network on labeled triplet information and prediction triplet information is not, under its broadest reasonable interpretation, "comparing prediction outputs for the training data with real data of the training corpus." As such, the 35 U.S.C. § 103 rejections of claims 1, 3 – 11, and 14 – 19 are maintained for at least the reasons laid out above.

Further still, Applicant alleges the prior art fails to teach or suggest limitation 1.3. Particularly, Applicant alleges that Kotnis does not disclose the step of "selecting the high confidence triplet from multiple extracted triplets" nor the use of a "voting mechanism." Examiner disagrees. As laid out in the 35 U.S.C. § 103 rejections below, Kotnis-Sun-Perera teaches limitation 1.3. Particularly, Kotnis' teaching of selecting high-confidence triplets through an iterative process, as recognized by Applicant on page 17 of Applicant's Response, demonstrates that the high-confidence triplets are extracted by feeding a sentence back into the model iteratively. Therefore, given that the sentence is fed through the model multiple times, the triplet extracted by the model will be extracted multiple times until a high-confidence triplet is achieved. This is, in effect, a voting mechanism. Further, the sentence is fed through the model and the triplet is extracted multiple times, resulting in multiple extracted triplets which are used to formulate the high-confidence triplet. The high-confidence triplet is selected from multiple extracted triplets because the high-confidence triplet is only high-confidence through an iterative process of extracting triplets multiple times. See Kotnis at ¶¶ [0024] – [0037] and Figs. 5 and 7. As such, Kotnis teaches limitation 1.3, and the 35 U.S.C. § 103 rejections of claims 1, 3 – 11, and 14 – 19 are maintained for at least the reasons laid out above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 14, 15, and 18 – 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2022/0309254 A1 to Bhushan Kotnis et al. (hereinafter Kotnis) in view of Korean Patent Application Publication No. 20200071877 A to Choi Key Sun et al. (hereinafter Sun) and further in view of U.S. Patent Application Publication No. 2020/0394461 A1 to Pathirage D. S. U. Perera et al. (hereinafter Perera).

Regarding claim 1, Kotnis teaches a computer-implemented method for sample augmentation, applied to a knowledge graph and natural language processing, comprising: (Kotnis teaches a method for extracting machine-readable data from unstructured text by augmenting training data in a corpus. Kotnis at ¶ [0014] and Fig. 8.) acquiring a second sample corpus and second triplet information of the second sample corpus, by performing data augmentation on a first sample corpus labeled with first triplet information; (Kotnis teaches obtaining a corpus including subject-predicate-object triples that is augmented to provide a corpus of augmented data structures (i.e., a second corpus acquired by performing data augmentation on a first corpus). Kotnis at Fig. 8 and ¶¶ [0043] - [0052].) generating a set of training corpora for training a triplet information extraction network, based on the first sample corpus and the first triplet information, the second sample corpus and the second triplet information, as well as the third sample corpus and the third triplet information. (Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus (i.e., the model is trained on both corpora). Kotnis at ¶ [0024].)
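Before continuing the mapping of claim 1, a brief technical aside on the "voting mechanism" discussed in the Response to Arguments above: agreement across multiple extraction runs (e.g., different prediction orders or models) can be tallied as votes, with triplets reaching a vote threshold kept as high-confidence. A minimal illustrative sketch follows; it is not code from the application or from Kotnis, and the names and the two-vote threshold are assumptions.

```python
from collections import Counter
from typing import Iterable, List, Tuple

Triplet = Tuple[str, str, str]  # (subject, predicate, object)

def vote_high_confidence(candidate_runs: Iterable[List[Triplet]],
                         min_votes: int = 2) -> List[Triplet]:
    """Majority-vote selection over candidate triplets: each extraction run
    casts one vote per distinct triplet it produces, and triplets reaching
    min_votes are kept as high-confidence."""
    votes: Counter = Counter()
    for run in candidate_runs:
        for triplet in set(run):   # one vote per run, even if repeated within it
            votes[triplet] += 1
    return [t for t, n in votes.items() if n >= min_votes]

# Three runs agree on one triplet, so only it survives the vote.
runs = [
    [("Paris", "capital_of", "France"), ("Paris", "located_in", "Texas")],
    [("Paris", "capital_of", "France")],
    [("Paris", "capital_of", "France"), ("Seine", "flows_through", "Paris")],
]
assert vote_high_confidence(runs) == [("Paris", "capital_of", "France")]
```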
wherein iteratively training the triplet information extraction network comprises: acquiring tokens of each training corpus in the batch of training corpora by segmenting the training corpus, and acquiring a word coding of each of the tokens; (Kotnis teaches acquiring token vectors by mapping each word of an input sentence to an embedding vector (i.e., tokens are acquired by segmenting the corpus into input sentences which are then tokenized into specific word vectors for each sentence (i.e., word codings)). Kotnis at ¶¶ [0036] - [0037].) outputting a semantic representation vector of each of the tokens by inputting the word coding of each of the tokens into a pre-trained language model in the triplet information extraction network for context association; (Kotnis teaches inputting the embedding vectors into a self-attention layer (i.e., pretrained language model) which yields a vector representation for each word. Kotnis at ¶¶ [0036] - [0037].) outputting prediction triplet information of the training corpus, by inputting the semantic representation vector of each of the tokens into a multi-pointer classification model for entity category prediction; (Kotnis teaches feeding the vector representations into a multi-head token classification model where each of the vector representations are predicted to be of a certain category. Kotnis at ¶¶ [0036] - [0037].) and generating a target triplet information extraction network, by adjusting the triplet information extraction network based on the labeled triplet information of the training corpus and the prediction triplet information. (Kotnis teaches inputting the results of the predictions back into the model to continue training. Kotnis at ¶¶ [0036] - [0037]. Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus. Kotnis at ¶ [0024] and Fig. 5. Particularly, Fig. 5 demonstrates that the training process is cyclical, repeatedly feeding the information from the process back into the model to continuously refine the results.) wherein performing the semi-supervised learning on the third sample corpus that is not labeled with triplet information comprises: training a plurality of first triplet prediction models with a plurality of categories based on the first sample corpus and the second sample corpus; (Kotnis teaches performing triplet extraction using prediction orders to extract triplets. The triplets are extracted using a first prediction order, then subsequently, the triplets are extracted using multiple different prediction orders. As such, the triplets predicted by multiple prediction orders are marked as high confidence (i.e., a voting mechanism determines they are high confidence triples. Triples that have not been labeled as high confidence are candidates for being high confidence.) Kotnis at ¶ [0035].) predicting pieces of candidate triplet information corresponding to the third sample corpus by inputting the third sample corpus into each of the first triplet prediction models; (Kotnis teaches performing multiple different prediction orders (i.e., inputting the candidate triplet information into each of the prediction models). Kotnis at ¶ [0035].) and determining the third triplet information based on a voting mechanism from the pieces of candidate triplet information. 
(Kotnis teaches marking triplets predicted by multiple prediction orders as high confidence (i.e., these triplets have received multiple "votes" marking them as high confidence. Hence a voting mechanism is employed to mark triplets as high confidence.) Kotnis at ¶ [0035].) Kotnis, however, does not alone teach acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. In a similar field of endeavor (e.g., information extraction using augmented iterative learning), Sun teaches acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information; (Sun teaches performing iterative learning using distant supervision (i.e., semi-supervised learning) to refine the process of extracting information from unstructured text (i.e., a sample corpus without triplet information) and automatically generating a knowledge base at each stage such that new knowledge can be combined with existing learning data and used as learning data for the next stage. Sun at ¶ [0029].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis with the teachings of Sun to provide acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. Doing so would have improved the performance of the information extraction system as recognized by Sun at ¶ [0053]. Kotnis in view of Sun (hereinafter Kotnis-Sun), however, does not teach the further limitations of iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; acquiring labeled triplet information for the training corpus to be labeled; and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training. In a similar field of endeavor (e.g., generating training sets for machine learning models), Perera teaches the method of claim 1, further comprising: iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; (Perera teaches iteratively training a machine learning model to generate training sets (i.e., training corpora) wherein the training comprises selecting specific documents of the training sets iteratively (i.e., a document is a corpus of sentences or phrases, so selecting a specific document of a training set is akin to selecting a corpus from a set of corpora). Perera at ¶¶ [0018], [0028], and [0035].) selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; (Perera teaches clustering documents using a selection module to label specific documents selected by the selection module. Further, Perera teaches the training module trains a machine learning module to map inputs and outputs based on relationships between labels and contents of documents in the training sets. Perera at ¶¶ [0028] - [0029].)
acquiring labeled triplet information for the training corpus to be labeled; (Perera teaches the selection module labels documents and the training module uses the relationships between labels and contents of documents for training (i.e., the labeled information is acquired by the training module). Perera at ¶¶ [0028] - [0029].) and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training. (Perera teaches the selected documents are added to the training set and in the next iteration, more selected documents are added to the training set. Perera at ¶ [0028].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun with the teachings of Perera to provide the limitations of claim 1. Doing so would have reduced the cost and time consumed generating training sets and increased the semantic diversity of the sets as recognized by Perera at ¶¶ [0009] – [0010].

Regarding claim 14, Kotnis-Sun-Perera teaches all the limitations of claim 1 as laid out above. Further, Kotnis teaches the method of claim 1, wherein outputting the prediction triplet information of the training corpus comprises: acquiring first candidate entities predicted as a first entity category in the training corpus, and second candidate entities predicted as a second entity category; (Kotnis teaches token classification heads for each of a plurality of categories (i.e., a head for subject, predicate, and object). Each iteration of the process takes input tokens and predicts their label (subject, object, or predicate). As such, one iteration is a first candidate predicted as a subject, and another iteration is a second candidate predicted as an object, etc. Kotnis at ¶¶ [0034] - [0037].) selecting an entity with a prediction probability greater than a first set threshold from the first candidate entities, as a target first entity; (Kotnis teaches the predictions of the entities may be ordered by entropy (i.e., confidence in the prediction). Kotnis at ¶¶ [0034] - [0037]. As such, because the predictions are ordered by entropy and the high-confidence triples are marked (i.e., those triples with high confidence of subject, predicate, and object), the entity of a particular category (e.g., subject) is selected with a high confidence for that triple (i.e., the confidence is above what is considered "high" and is, therefore, at or exceeding the high confidence threshold).) selecting an entity with the prediction probability greater than a second set threshold from the second candidate entities, as a target second entity; (Kotnis teaches the predictions of the entities may be ordered by entropy (i.e., confidence in the prediction). Kotnis at ¶¶ [0034] - [0037]. As such, because the predictions are ordered by entropy and the high-confidence triples are marked (i.e., those triples with high confidence of subject, predicate, and object), the entity of a particular category (e.g., object) is selected with a high confidence for that triple (i.e., the confidence is above what is considered "high" and is, therefore, at or exceeding the high confidence threshold).) and generating the prediction triplet information of the training corpus based on the target first entity and the target second entity.
(Kotnis teaches generating predictions for input tokens representing a first entity (i.e., the subject token in a first iteration) and a second entity (i.e., the object token in a second iteration). Kotnis at ¶¶ [0034] - [0037].)

Regarding claim 15, Kotnis-Sun-Perera teaches all the limitations of claim 14 as laid out above. Further, Kotnis teaches the method of claim 14, wherein generating the prediction triplet information of the training corpus based on the target first entity and the target second entity, comprises: determining a first entity pair by combining the target first entity and the target second entity; (Kotnis teaches predicting triplets of a subject, predicate, and object wherein the predicted subject, predicate, and object form an entity-entity relationship between the subject and object wherein the predicate indicates the relationship. Kotnis at ¶¶ [0035] - [0037]. Further, the subject is a single entity and the object is a second entity determined through multiple iterations of the labeling process. As such, each subject is a target first entity and each object is a target second entity, which are combined with the predicate to form a predicted triplet.) and generating the prediction triplet information of the training corpus based on the first entity pair and an entity relationship in the first entity pair. (Kotnis teaches generating predictions for each element of a triplet (i.e., prediction triplet information) based on a subject, predicate, and object wherein the subject and object are related by the predicate (therefore the subject and object are an entity-entity pair and the predicate is an entity relationship). Kotnis at ¶¶ [0034] - [0037].)

Regarding claim 18, Kotnis teaches an electronic device, comprising: at least one processor; and a memory stored with instructions executable by the at least one processor, (Kotnis teaches a system for extracting machine-readable data from unstructured text comprising a processor and computer readable medium storing instructions for performing a method. Kotnis at ¶¶ [0031] – [0032].) wherein when the instructions are performed by the at least one processor, the at least one processor is caused to perform a method for sample augmentation, the method comprising: (Kotnis teaches a method for extracting machine-readable data from unstructured text by augmenting training data in a corpus. Kotnis at ¶ [0014] and Fig. 8.) acquiring a second sample corpus and second triplet information of the second sample corpus, by performing data augmentation on a first sample corpus labeled with first triplet information; (Kotnis teaches obtaining a corpus including subject-predicate-object triples that is augmented to provide a corpus of augmented data structures (i.e., a second corpus acquired by performing data augmentation on a first corpus). Kotnis at Fig. 8 and ¶¶ [0043] - [0052].) generating a set of training corpora for training a triplet information extraction network, based on the first sample corpus and the first triplet information, the second sample corpus and the second triplet information, as well as the third sample corpus and the third triplet information. (Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus (i.e., the model is trained on both corpora). Kotnis at ¶ [0024].)
wherein iteratively training the triplet information extraction network comprises: acquiring tokens of each training corpus in the batch of training corpora by segmenting the training corpus, and acquiring a word coding of each of the tokens; (Kotnis teaches acquiring token vectors by mapping each word of an input sentence to an embedding vector (i.e., tokens are acquired by segmenting the corpus into input sentences which are then tokenized into specific word vectors for each sentence (i.e., word codings)). Kotnis at ¶¶ [0036] - [0037].) outputting a semantic representation vector of each of the tokens by inputting the word coding of each of the tokens into a pre-trained language model in the triplet information extraction network for context association; (Kotnis teaches inputting the embedding vectors into a self-attention layer (i.e., pretrained language model) which yields a vector representation for each word. Kotnis at ¶¶ [0036] - [0037].) outputting prediction triplet information of the training corpus, by inputting the semantic representation vector of each of the tokens into a multi-pointer classification model for entity category prediction; (Kotnis teaches feeding the vector representations into a multi-head token classification model where each of the vector representations are predicted to be of a certain category. Kotnis at ¶¶ [0036] - [0037].) and generating a target triplet information extraction network, by adjusting the triplet information extraction network based on the labeled triplet information of the training corpus and the prediction triplet information. (Kotnis teaches inputting the results of the predictions back into the model to continue training. Kotnis at ¶¶ [0036] - [0037]. Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus. Kotnis at ¶ [0024] and Fig. 5. Particularly, Fig. 5 demonstrates that the training process is cyclical, repeatedly feeding the information from the process back into the model to continuously refine the results.) wherein performing the semi-supervised learning on the third sample corpus that is not labeled with triplet information comprises: training a plurality of first triplet prediction models with a plurality of categories based on the first sample corpus and the second sample corpus; (Kotnis teaches performing triplet extraction using prediction orders to extract triplets. The triplets are extracted using a first prediction order, then subsequently, the triplets are extracted using multiple different prediction orders. As such, the triplets predicted by multiple prediction orders are marked as high confidence (i.e., a voting mechanism determines they are high confidence triples. Triples that have not been labeled as high confidence are candidates for being high confidence.) Kotnis at ¶ [0035].) predicting pieces of candidate triplet information corresponding to the third sample corpus by inputting the third sample corpus into each of the first triplet prediction models; (Kotnis teaches performing multiple different prediction orders (i.e., inputting the candidate triplet information into each of the prediction models). Kotnis at ¶ [0035].) and determining the third triplet information based on a voting mechanism from the pieces of candidate triplet information. 
(Kotnis teaches marking triplets predicted by multiple prediction orders as high confidence (i.e., these triplets have received multiple "votes" marking them as high confidence. Hence a voting mechanism is employed to mark triplets as high confidence.) Kotnis at ¶ [0035].) Kotnis, however, does not alone teach acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. In a similar field of endeavor (e.g., information extraction using augmented iterative learning), Sun teaches acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information; (Sun teaches performing iterative learning using distant supervision (i.e., semi-supervised learning) to refine the process of extracting information from unstructured text (i.e., a sample corpus without triplet information) and automatically generating a knowledge base at each stage such that new knowledge can be combined with existing learning data and used as learning data for the next stage. Sun at ¶ [0029].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis with the teachings of Sun to provide acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. Doing so would have improved the performance of the information extraction system as recognized by Sun at ¶ [0053]. Kotnis-Sun, however, does not teach the further limitations of iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; acquiring labeled triplet information for the training corpus to be labeled; and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training. In a similar field of endeavor (e.g., generating training sets for machine learning models), Perera teaches the method of claim 1, further comprising: iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; (Perera teaches iteratively training a machine learning model to generate training sets (i.e., training corpora) wherein the training comprises selecting specific documents of the training sets iteratively (i.e., a document is a corpus of sentences or phrases, so selecting a specific document of a training set is akin to selecting a corpus from a set of corpora). Perera at ¶¶ [0018], [0028], and [0035].) selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; (Perera teaches clustering documents using a selection module to label specific documents selected by the selection module. Further, Perera teaches the training module trains a machine learning module to map inputs and outputs based on relationships between labels and contents of documents in the training sets. Perera at ¶¶ [0028] - [0029].)
acquiring labeled triplet information for the training corpus to be labeled; (Perera teaches the selection module labels documents and the training module uses the relationships between labels and contents of documents for training (i.e., the labeled information is acquired by the training module). Perera at ¶¶ [0028] - [0029].) and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training. (Perera teaches the selected documents are added to the training set and in the next iteration, more selected documents are added to the training set. Perera at ¶ [0028].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun with the teachings of Perera to provide the limitations of claim 18. Doing so would have reduced the cost and time consumed generating training sets and increased the semantic diversity of the sets as recognized by Perera at ¶¶ [0009] – [0010].

Regarding claim 19, Kotnis teaches a non-transitory computer-readable storage medium having computer instructions stored thereon, (Kotnis teaches a system for extracting machine-readable data from unstructured text comprising a processor and computer readable medium storing instructions for performing a method. Kotnis at ¶¶ [0031] – [0032].) wherein the computer instructions are configured to cause a computer to perform a method for sample augmentation, the method comprising: (Kotnis teaches a method for extracting machine-readable data from unstructured text by augmenting training data in a corpus. Kotnis at ¶ [0014] and Fig. 8.) acquiring a second sample corpus and second triplet information of the second sample corpus, by performing data augmentation on a first sample corpus labeled with first triplet information; (Kotnis teaches obtaining a corpus including subject-predicate-object triples that is augmented to provide a corpus of augmented data structures (i.e., a second corpus acquired by performing data augmentation on a first corpus). Kotnis at Fig. 8 and ¶¶ [0043] - [0052].) and generating a set of training corpora for training a triplet information extraction network, based on the first sample corpus and the first triplet information, the second sample corpus and the second triplet information, as well as the third sample corpus and the third triplet information. (Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus (i.e., the model is trained on both corpora). Kotnis at ¶ [0024].) wherein iteratively training the triplet information extraction network comprises: acquiring tokens of each training corpus in the batch of training corpora by segmenting the training corpus, and acquiring a word coding of each of the tokens; (Kotnis teaches acquiring token vectors by mapping each word of an input sentence to an embedding vector (i.e., tokens are acquired by segmenting the corpus into input sentences which are then tokenized into specific word vectors for each sentence (i.e., word codings)). Kotnis at ¶¶ [0036] - [0037].)
outputting a semantic representation vector of each of the tokens by inputting the word coding of each of the tokens into a pre-trained language model in the triplet information extraction network for context association; (Kotnis teaches inputting the embedding vectors into a self-attention layer (i.e., pretrained language model) which yields a vector representation for each word. Kotnis at ¶¶ [0036] - [0037].) outputting prediction triplet information of the training corpus, by inputting the semantic representation vector of each of the tokens into a multi-pointer classification model for entity category prediction; (Kotnis teaches feeding the vector representations into a multi-head token classification model where each of the vector representations are predicted to be of a certain category. Kotnis at ¶¶ [0036] - [0037].) and generating a target triplet information extraction network, by adjusting the triplet information extraction network based on the labeled triplet information of the training corpus and the prediction triplet information. (Kotnis teaches inputting the results of the predictions back into the model to continue training. Kotnis at ¶¶ [0036] - [0037]. Kotnis further teaches training a multi-head self-attention transformer model to perform triplet extraction using a filtered corpus which includes the original corpus and the data of the augmented corpus. Kotnis at ¶ [0024] and Fig. 5. Particularly, Fig. 5 demonstrates that the training process is cyclical, repeatedly feeding the information from the process back into the model to continuously refine the results.) wherein performing the semi-supervised learning on the third sample corpus that is not labeled with triplet information comprises: training a plurality of first triplet prediction models with a plurality of categories based on the first sample corpus and the second sample corpus; (Kotnis teaches performing triplet extraction using prediction orders to extract triplets. The triplets are extracted using a first prediction order, then subsequently, the triplets are extracted using multiple different prediction orders. As such, the triplets predicted by multiple prediction orders are marked as high confidence (i.e., a voting mechanism determines they are high confidence triples. Triples that have not been labeled as high confidence are candidates for being high confidence.) Kotnis at ¶ [0035].) predicting pieces of candidate triplet information corresponding to the third sample corpus by inputting the third sample corpus into each of the first triplet prediction models; (Kotnis teaches performing multiple different prediction orders (i.e., inputting the candidate triplet information into each of the prediction models). Kotnis at ¶ [0035].) and determining the third triplet information based on a voting mechanism from the pieces of candidate triplet information. (Kotnis teaches marking triplets predicted by multiple prediction orders as high confidence (i.e., these triplets have received multiple "votes" marking them as high confidence. Hence a voting mechanism is employed to mark triplets as high confidence.) Kotnis at ¶ [0035].) Kotnis, however, does not alone teach acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. 
In a similar field of endeavor (e.g., information extraction using augmented iterative learning), Sun teaches acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information; (Sun teaches performing iterative learning using distant supervision (i.e., semi-supervised learning) to refine the process of extracting information from unstructured text (i.e., a sample corpus without triplet information) and automatically generating a knowledge base at each stage such that new knowledge can be combined with existing learning data and used as learning data for the next stage. Sun at ¶ [0029].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis with the teachings of Sun to provide acquiring third triplet information of a third sample corpus, by performing semi-supervised learning on the third sample corpus that is not labeled with triplet information. Doing so would have improved the performance of the information extraction system as recognized by Sun at ¶ [0053]. Kotnis-Sun, however, does not teach the further limitations of iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; acquiring labeled triplet information for the training corpus to be labeled; and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training. In a similar field of endeavor (e.g., generating training sets for machine learning models), Perera teaches the method of claim 1, further comprising: iteratively training the triplet information extraction network based on a batch of training corpora in the set of training corpora; (Perera teaches iteratively training a machine learning model to generate training sets (i.e., training corpora) wherein the training comprises selecting specific documents of the training sets iteratively (i.e., a document is a corpus of sentences or phrases, so selecting a specific document of a training set is akin to selecting a corpus from a set of corpora). Perera at ¶¶ [0018], [0028], and [0035].) selecting a training corpus to be labeled from the batch of training corpora based on prediction results of each training corpus in the batch of training corpora after each training; (Perera teaches clustering documents using a selection module to label specific documents selected by the selection module. Further, Perera teaches the training module trains a machine learning module to map inputs and outputs based on relationships between labels and contents of documents in the training sets. Perera at ¶¶ [0028] - [0029].) acquiring labeled triplet information for the training corpus to be labeled; (Perera teaches the selection module labels documents and the training module uses the relationships between labels and contents of documents for training (i.e., the labeled information is acquired by the training module). Perera at ¶¶ [0028] - [0029].) and adding the training corpus to be labeled and the labeled triplet information to the set of training corpora and continuing a next training.
(Perera teaches the selected documents are added to the training set and in the next iteration, more selected documents are added to the training set. Perera at ¶ [0028].) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun with the teachings of Perera to provide the limitations of claim 19. Doing so would have reduced the cost and time consumed generating training sets and increased the semantic diversity of the sets as recognized by Perera at ¶¶ [0009] – [0010].

Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Kotnis-Sun-Perera as applied to claim 1 above, and further in view of Non-Patent Literature "Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification" to Aleksandra Edwards et al. (hereinafter Edwards).

Regarding claim 3, Kotnis-Sun-Perera teaches all the limitations of claim 1 as laid out above. Kotnis-Sun-Perera alone, however, does not teach all the limitations of claim 3. In a similar field of endeavor (e.g., performing data augmentation on datasets using language models), Edwards teaches the method of claim 1, wherein acquiring the second sample corpus and second triplet information of the second sample corpus comprises: acquiring the second sample corpus and the second triplet information, by performing data augmentation on the first sample corpus based on at least one data augmentation operation of: entity replacement, synonym replacement, token replacement of the same entity category and back translation. (Edwards teaches performing data augmentation including a plurality of different methods, including word replacement (i.e., entity replacement), synonym replacement, sentence replacement, back translation, etc. Edwards at section 4.4, section 2, and tables 2 and 10.) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera with the teachings of Edwards to provide the limitations of claim 3. Doing so would have improved classification performance using data augmentation as recognized by Edwards at page 1, section 1, column 2.

Regarding claim 4, Kotnis-Sun-Perera in view of Edwards (hereinafter Kotnis-Sun-Perera-Edwards) teaches all the limitations of claim 3 as laid out above. Further, Kotnis teaches the method of claim 3, wherein acquiring the second sample corpus and the second triplet information comprises: generating the second triplet information by performing entity replacement on each entity in the first triplet information; (Kotnis teaches augmenting data (i.e., generating the second sample corpus) by replacing entities within the corpus with entities from OpenIE triples. Kotnis at ¶ [0042]. The text of the subject and the provenance sentences are replaced with text relating to the entity (i.e., the text occupies a specific place in the corpus; therefore, when the text is replaced, that position must be determined in order to replace it. Replacement of the specific text would not be possible without determining its position in the corpus.) Kotnis at ¶ [0042].)
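A brief aside before the remaining limitations of claim 4: the entity-replacement augmentation mapped above is mechanically simple — swap each entity in the labeled triplet for a same-category replacement drawn from a dictionary, and apply the same swap at the entity's position in the corpus text, so the label and the text stay consistent. The following is a minimal sketch, not code from the application or from Kotnis; the names and the single-occurrence replacement are assumptions.

```python
from typing import Dict, Tuple

Triplet = Tuple[str, str, str]  # (subject, predicate, object)

def augment_by_entity_replacement(
    corpus: str,
    triplet: Triplet,
    entity_dictionary: Dict[str, str],  # entity -> same-category replacement
) -> Tuple[str, Triplet]:
    """Generate a second sample corpus and second triplet information by
    replacing each entity of the first triplet, both in the triplet itself
    and at the position where the entity occurs in the corpus text."""
    subj, pred, obj = triplet
    new_subj = entity_dictionary.get(subj, subj)
    new_obj = entity_dictionary.get(obj, obj)
    new_corpus = corpus
    for old, new in ((subj, new_subj), (obj, new_obj)):
        pos = new_corpus.find(old)  # determine the entity's position in the corpus
        if pos != -1:
            new_corpus = new_corpus[:pos] + new + new_corpus[pos + len(old):]
    return new_corpus, (new_subj, pred, new_obj)

# Example: the second corpus and second triplet stay mutually consistent.
text, triple = "Paris is the capital of France.", ("Paris", "capital_of", "France")
print(augment_by_entity_replacement(text, triple, {"Paris": "Berlin", "France": "Germany"}))
# -> ('Berlin is the capital of Germany.', ('Berlin', 'capital_of', 'Germany'))
```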
determining a position where each entity in the first triplet information is located in the first sample corpus; (Kotnis teaches replacing entities within the corpus with entities from OpenIE triples. Kotnis at ¶ [0042]. As such, because the entities are replaced within the corpus, the location of the entities is determined when the entity is replaced.) and generating the second sample corpus by replacing the entity at the determined position with an entity in the second triplet information. (Kotnis teaches augmenting data (i.e., generating the second sample corpus) by replacing entities within the corpus with entities from OpenIE triples. Kotnis at ¶ [0042].)

Claims 5 – 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kotnis-Sun-Perera-Edwards as applied to claims 1 and 4 above, and further in view of U.S. Patent Application Publication No. 2019/0205463 A1 to Navaneethan Santhanam et al. (hereinafter Santhanam).

Regarding claim 5, Kotnis-Sun-Perera-Edwards teaches all the limitations of claim 4 as laid out above. Further, Kotnis teaches the method of claim 4, wherein generating the second triplet information by performing entity replacement on the entity in the first triplet information, comprises: … determining a target entity dictionary for entity replacement…; (Kotnis teaches replacing text relating to an entity within a specific space of text using OpenIE triples (i.e., a target entity dictionary). Kotnis at ¶ [0042]. As such, a target entity dictionary is determined and used to replace the entity.) and generating the second triplet information by performing entity replacement on each entity in the first triplet information based on the target entity dictionary. (Kotnis teaches augmenting data using data from OpenIE; as such, the second triplet information (i.e., augmented data) is generated using data augmentation. Kotnis at ¶ [0042].) Kotnis-Sun-Perera-Edwards alone, however, does not teach recognizing whether there is an overlapping relationship between entities in the first triplet information. In a similar field of endeavor (i.e., recognizing entities from unstructured text), Santhanam teaches recognizing whether there is an overlapping relationship between entities in the first triplet information; (Santhanam teaches recognizing there are overlapping entities within the results of text processing and results from a database. Santhanam at ¶¶ [0027] - [0028]. As such, in view of Kotnis-Sun-Perera-Edwards' augmentation of data from a corpus and replacement of text within a corpus, a person of ordinary skill in the art would have found it reasonable to locate overlapping entities in the corpus.) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera-Edwards with the teachings of Santhanam to provide the limitations of claim 5. Doing so would have yielded improvements in named entity recognition as recognized by Santhanam at ¶ [0034]. As such, using named entity recognition in the context of generating an augmented corpus would reasonably allow for the removal of duplicate entities, which would further refine the corpus as recognized by Santhanam at ¶ [0028].

Regarding claim 6, Kotnis-Sun-Perera-Edwards in view of Santhanam (hereinafter Kotnis-Sun-Perera-Edwards-Santhanam) teaches all the limitations of claim 5 as laid out above.
Further, Kotnis teaches the method of claim 5, wherein determining the target entity dictionary for entity replacement based on the recognition result, comprises: acquiring a category of each entity in the first triplet information in response to the recognition result indicating that there is no overlapping relationship between the entities; (Kotnis teaches determining type data for each triple and performing type substitution on the triples to yield augmented data wherein the triples have been type-switched by substituting entities of the same type (i.e., a category is determined for each triple). Kotnis at ¶¶ [0014] - [0018]. As such, whether there is an overlapping relationship or not, Kotnis performs type-switching of the entities. Therefore, were no overlapping relationship to be determined, the type-switching would be performed, and the category of each entity would be determined as a result.) and determining an entity dictionary corresponding to the category of each entity as the target entity dictionary. (Further, Kotnis teaches retrieving the type substitutes from OpenIE wherein OpenIE comprises databases of items such as typed entities. Kotnis at ¶ [0027].)

Regarding claim 7, Kotnis-Sun-Perera-Edwards-Santhanam teaches all the limitations of claim 5 as laid out above. Further, Santhanam teaches the method of claim 5, wherein determining the target entity dictionary for entity replacement based on the recognition result, comprises: acquiring an overlapping entity dictionary as the target entity dictionary, in response to the recognition result indicating that there is an overlapping relationship between the entities, wherein the overlapping entity dictionary comprises entity pairs with an overlapping relationship. (Santhanam teaches recognizing there are overlapping entities within the results of text processing and results from a database. Santhanam at ¶¶ [0027] - [0028]. As such, in view of Kotnis-Sun-Perera-Edwards' augmentation of data from a corpus and replacement of text within a corpus, a person of ordinary skill in the art would have found it reasonable to locate overlapping entities in the corpus. Further, Kotnis teaches using triple information etc. from OpenIE to replace entities within a corpus. Kotnis at ¶ [0027]. As such, OpenIE is a target entity dictionary for replacing entities, overlapping or otherwise.)

Regarding claim 8, Kotnis-Sun-Perera-Edwards-Santhanam teaches all the limitations of claim 7 as laid out above. Further, Kotnis teaches the method of claim 7, wherein performing entity replacement on each entity in the first triplet information based on the target entity dictionary, comprises: acquiring an entity pair with the overlapping relationship in the first triplet information; (Kotnis teaches performing type substitution of triples (i.e., entity pairs) within a corpus (i.e., an entity pair with an overlapping relationship in the first triplet information). Kotnis at ¶¶ [0014] - [0018].) acquiring a replacement entity pair matching the entity pair in the first triplet information from the overlapping entity dictionary; (Kotnis teaches replacing entities within the corpus using entities from OpenIE (i.e., replacement entity pairs from a target dictionary). Kotnis at ¶ [0027].) and generating the second triplet information by performing entity replacement on the entity pair with the overlapping relationship based on the replacement entity pair. (Kotnis teaches replacing entities within the corpus using entities from OpenIE of the same type.
Further, Kotnis teaches replacing both entities from a triple to form a new triple with an overlapping relationship (i.e., the relationship between the entities remains the same). Kotnis at ¶¶ [0014] - [0018] and [0027].)

Claims 9 – 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kotnis-Sun-Perera-Edwards as applied to claims 1 and 3 above, and further in view of U.S. Patent Application Publication No. 2021/0383066 A1 to Henghui Zhu et al. (hereinafter Zhu).

Regarding claim 9, Kotnis-Sun-Perera-Edwards teaches all the limitations of claim 3 as laid out above. Kotnis-Sun-Perera-Edwards alone, however, does not teach the limitations of claim 9. In a similar field of endeavor (e.g., generating a training corpus using natural language processing), Zhu teaches the method of claim 3, wherein acquiring the second sample corpus and the second triplet information comprises: acquiring candidate tokens by segmenting the first sample corpus; (Zhu teaches generating a tokenized corpus of documents from various sources (i.e., segmenting the first sample corpus into candidate tokens by tokenizing the corpus). Zhu at ¶¶ [0024] - [0031].) Further, Kotnis teaches generating the second sample corpus by performing synonym replacement on a token other than tokens belonging to the entity in the first sample corpus, wherein the second triplet information is the same as the first triplet information. (Kotnis teaches performing type and code substitutions of entities within the corpus wherein the substitutions are entities of the same type (i.e., synonyms). Kotnis at ¶¶ [0014] - [0018]. As such, the tokenization of a corpus would be an obvious method of extracting information from the corpus in order to perform entity and type substitutions that provide synonym replacements of such entities.) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera-Edwards with the teachings of Zhu to provide the limitations of claim 9. Doing so would have yielded improved corpora of training data as recognized by Zhu at ¶ [0024].

Regarding claim 10, Kotnis-Sun-Perera-Edwards teaches all the limitations of claim 3 as laid out above. Kotnis-Sun-Perera-Edwards alone, however, does not teach all the limitations of claim 10. In a similar field of endeavor (e.g., generating a training corpus using natural language processing), Zhu teaches the method of claim 3, wherein acquiring the second sample corpus and the second triplet information comprises: acquiring candidate tokens by segmenting the first sample corpus; (Zhu teaches generating a tokenized corpus of documents from various sources (i.e., segmenting the first sample corpus into candidate tokens by tokenizing the corpus). Zhu at ¶¶ [0024] - [0031].) Further, Kotnis teaches selecting a token labeled with an entity category from the candidate tokens, as a target token; (Kotnis teaches obtaining entity types for all entities within a corpus and performing type substitution of the entity (i.e., a token of a certain category is selected as a target token). Kotnis at ¶¶ [0014] - [0018].) acquiring a replacement token of the same entity category to which the target token belongs; (Kotnis teaches performing type substitution on entities within the corpus (i.e., the entity/token is replaced with an entity/token of the same type/category; in order for this to be achieved, a replacement token must be acquired). Kotnis at ¶¶ [0014] - [0018].)
generating the second sample corpus by replacing the target token in the first sample corpus with the replacement token; (Kotnis teaches performing type substitution on entities within the corpus (i.e., the entity/token is replaced with an entity/token of the same type/category). Kotnis at ¶¶ [0014] - [0018].) and generating the second triplet information by updating the first triplet information based on the replacement token. (Kotnis teaches obtaining a corpus including subject-predicate-object triples that is augmented to provide a corpus of augmented data structures (i.e., a second corpus acquired by performing data augmentation on a first corpus). Kotnis at Fig. 8 and ¶¶ [0043] - [0052]. As such, the type substitution of specific entities as used to generate a second corpus includes generating second triplet information by substituting the entities within a triplet, which inherently yields a second, alternate triplet based on the replaced entities.) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera-Edwards with the teachings of Zhu to provide the limitations of claim 10. Doing so would have yielded improved corpora of training data as recognized by Zhu at ¶ [0024].

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kotnis-Sun-Perera-Edwards as applied to claims 1 and 3 above, and further in view of U.S. Patent Application Publication No. 2022/0215209 A1 to Thang Mingh Luong et al. (hereinafter Luong).

Regarding claim 11, Kotnis-Sun-Perera-Edwards teaches all the limitations of claim 3 as laid out above. Kotnis-Sun-Perera-Edwards alone, however, does not teach all the limitations of claim 11. In a similar field of endeavor (e.g., training machine learning models with augmented data), Luong teaches the method of claim 3, wherein acquiring the second sample corpus and the second triplet information comprises: obtaining a replaced first sample corpus by replacing an entity in the first sample corpus with a target symbol; (Luong teaches back translating a text by randomly selecting specific words from an example and then translating those words back into the original language. Luong at ¶ [0038]. As such, using this process, the words not randomly selected are, in effect, replaced with a token, translated, and then replaced with the original text. This results in preserving words of the original text, as is achieved by Luong by only randomly selecting some words of the text for translating.) generating an intermediate sample corpus by translating the replaced first sample corpus; (Luong teaches generating an intermediate sample when the text is translated into a first language and then back into the original language (the first translation of the text is an intermediary between the original text and the back-translated text). Luong at ¶ [0038].) and acquiring the second sample corpus, by back translating the intermediate sample corpus and replacing the target symbol in the back-translated sample corpus with the entity, wherein the second triplet information is the same as the first triplet information. (Luong's back translation, along with Kotnis' generation of a second sample corpus and Edwards' back translation to perform data augmentation, yields the result of back translating the intermediate sample corpus and replacing the target symbol in the back-translated sample corpus with the entity, wherein the second triplet information is the same as the first triplet information.)
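As a technical aside on the claim 11 limitations just mapped: the shield-translate-restore pattern keeps a labeled entity out of the translation round trip so the paraphrased corpus retains the original triplet labels. The following is a minimal sketch of that pattern under stated assumptions; it is not code from the application or from Luong, the `translate` callable is a stand-in for any machine-translation service, and all names and the placeholder token are hypothetical.

```python
from typing import Callable

def back_translate_preserving_entity(
    sentence: str,
    entity: str,
    translate: Callable[[str, str, str], str],  # (text, src_lang, tgt_lang) -> text
    placeholder: str = "__ENT0__",
) -> str:
    """Replace a labeled entity with a target symbol, round-trip the sentence
    through another language, then restore the entity, so the augmented
    corpus keeps the same triplet information as the original."""
    masked = sentence.replace(entity, placeholder)
    pivot = translate(masked, "en", "de")         # forward translation
    restored = translate(pivot, "de", "en")       # back translation
    return restored.replace(placeholder, entity)

# With a stand-in "translator" that echoes its input, the entity (and hence
# the triplet labels referring to it) survives the round trip unchanged.
echo = lambda text, src, tgt: text
assert back_translate_preserving_entity(
    "Paris is the capital of France.", "Paris", echo
) == "Paris is the capital of France."
```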
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kotnis-Sun-Perera as applied to claims 1 and 14 above, and further in view of U.S. Patent Application Publication No. 2019/0005026 A1 to Zhenzhong Zhang et al. (hereinafter Zhang).

Regarding claim 16, Kotnis-Sun-Perera teaches all the limitations of claim 14 as laid out above. Kotnis-Sun-Perera alone, however, does not teach the limitations of claim 16. In a similar field of endeavor (e.g., extracting information using semantic relationships and entity pairs), Zhang teaches the method of claim 14, wherein generating the prediction triplet information of the training corpus based on the target first entity and the target second entity comprises: acquiring a distance between the target first entity and the target second entity, and determining a second entity pair based on the distance; (Zhang teaches screening semantic relationships by calculating the cosine similarity between two semantic relationships (i.e., calculating the distance between the semantic relationships (e.g., entity pairs)). Zhang at ¶¶ [0074] - [0077].)

Further, Kotnis teaches generating the prediction triplet information of the training corpus based on the second entity pair and an entity relationship in the second entity pair. (Kotnis teaches replacing entities of the same type to augment the data of a corpus by generating new triples with replaced types. Kotnis at ¶¶ [0014] - [0018] and [0027].) As such, calculating the similarity of the semantic relationships by computing the cosine similarity (i.e., computing distances between entities) would allow the entities to be replaced as Kotnis does when replacing entities of the same type.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera with the teachings of Zhang to provide acquiring a distance between a target first entity and a target second entity and determining a second entity pair based on the distance. Doing so would have provided a high-efficiency, low-calculation process that yields quick, convenient results, as recognized by Zhang at ¶¶ [0074] – [0077].

Regarding claim 17, Kotnis-Sun-Perera teaches all the limitations of claim 14 as laid out above. Kotnis-Sun-Perera alone, however, does not teach the limitations of claim 17. In a similar field of endeavor (e.g., extracting information using semantic relationships and entity pairs), Zhang teaches the method of claim 14, wherein generating the prediction triplet information of the training corpus based on the target first entity and the target second entity comprises: acquiring a distance between the target first entity and the target second entity; (Zhang teaches screening semantic relationships by calculating the cosine similarity between two semantic relationships (i.e., calculating the distance between the semantic relationships (e.g., entity pairs)). Zhang at ¶¶ [0074] - [0077].)

Further, Kotnis teaches determining a third entity pair based on the distance and positions of the target first entity and the target second entity located in the training corpus; (Kotnis teaches replacing entities of the same type to augment the data of a corpus by generating new triples with replaced types. Kotnis at ¶¶ [0014] - [0018] and [0027]. As such, generating type-replaced triples based on the original entity pair is determining a third entity pair to replace the original.) and generating the prediction triplet information of the training corpus based on the third entity pair and an entity relationship in the third entity pair. (Kotnis teaches replacing entities of the same type to augment the data of a corpus by generating new triples with replaced types. Kotnis at ¶¶ [0014] - [0018] and [0027]. Therefore, in the process of determining and generating a replacement triple, a separate entity pair and relationship is generated that replaces the original.)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Kotnis-Sun-Perera with the teachings of Zhang to provide acquiring a distance between the target first entity and the target second entity. Doing so would have provided a high-efficiency, low-calculation process that yields quick, convenient results, as recognized by Zhang at ¶¶ [0074] – [0077].
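For orientation only: the distance step in claims 16 and 17 reduces to scoring candidate entity pairs by cosine similarity over some vector representation. The Office Action does not say how those vectors are produced, so the toy vectors and candidate table below are assumptions, and claim 17's additional use of token positions is omitted from the sketch.

    # Sketch of the cosine-similarity screening cited from Zhang: treat
    # similarity as an inverse distance and keep the closest candidate as
    # the second entity pair.
    import math

    def cosine_similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms if norms else 0.0

    def determine_second_pair(target_entity, target_vec, candidates):
        """Pick the candidate entity nearest the target first entity."""
        best = max(candidates, key=lambda name: cosine_similarity(target_vec, candidates[name]))
        return (target_entity, best)

    vectors = {"Globex": [0.8, 0.2, 0.4], "Initech": [0.1, 0.9, 0.2]}
    pair = determine_second_pair("Acme", [0.9, 0.1, 0.3], vectors)
    # pair == ("Acme", "Globex"): the higher-similarity, i.e. shorter-distance, candidate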
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAMERON KENNETH YOUNG, whose telephone number is (703) 756-1527. The examiner can normally be reached Mon - Fri, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAMERON KENNETH YOUNG/
Examiner, Art Unit 2655

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655

Prosecution Timeline

Dec 08, 2022
Application Filed
Mar 21, 2025
Non-Final Rejection — §103
Jun 26, 2025
Response Filed
Aug 22, 2025
Final Rejection — §103
Oct 30, 2025
Response after Non-Final Action
Dec 03, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602409
INFORMATION SEARCH SYSTEM
2y 5m to grant · Granted Apr 14, 2026
Patent 12592230
RECOGNITION OR SYNTHESIS OF HUMAN-UTTERED HARMONIC SOUNDS
2y 5m to grant · Granted Mar 31, 2026
Patent 12567429
VOICE CALL CONTROL METHOD AND APPARATUS, COMPUTER-READABLE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 03, 2026
Patent 12525250
Cascade Architecture for Noise-Robust Keyword Spotting
2y 5m to grant · Granted Jan 13, 2026
Patent 12493748
LARGE LANGUAGE MODEL UTTERANCE AUGMENTATION
2y 5m to grant · Granted Dec 09, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
82%
With Interview (+12.5%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
