Prosecution Insights
Last updated: April 19, 2026
Application No. 17/504,374

Systems and Methods for Updating Predictive Coding Based on a Confidence Threshold

Non-Final OA: §103, §112, Double Patenting
Filed: Oct 18, 2021
Examiner: SITIRICHE, LUIS A
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Open Text Holdings Inc.
OA Round: 3 (Non-Final)

Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (363 granted / 468 resolved; +22.6% vs TC avg; above average)
Interview Lift: +22.1% in resolved cases with interview (strong)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 24
Total Applications: 492 (career history, across all art units)
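The headline rates above are simple ratios of the raw counts shown. A quick check (plain Python, using only the figures shown, and assuming the "+22.6% vs TC avg" delta is in absolute percentage points) reproduces them:

```python
# Recompute the examiner's headline statistics from the raw counts above.
granted = 363
resolved = 468

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 77.6%, shown rounded to 78%

# If the +22.6% figure is an absolute percentage-point gap, the implied
# Tech Center average allow rate would be roughly:
implied_tc_avg = allow_rate - 0.226
print(f"Implied TC average: {implied_tc_avg:.1%}")   # about 55.0%
```

The interview-lift figure cannot be recomputed from the data shown, since the with-interview and without-interview case counts are not broken out.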

Statute-Specific Performance

§101: 24.2% (-15.8% vs TC avg)
§103: 39.1% (-0.9% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 468 resolved cases.

Office Action

§103 · §112 · Double Patenting
DETAILED ACTION

This Office Action is in response to the request for continued examination entered on 10/20/2025. Claims 1, 3, 9, 11, 13 are amended. Claims 1-9, 11-21 are pending.

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/20/2025 has been entered.

Response to Arguments

The Applicant's arguments regarding the rejection of the above claims have been fully considered.

In reference to Applicant's arguments about the claim rejections under 112(b): Rejections are withdrawn in view of the amendments.

In reference to Applicant's arguments about Double Patenting: Rejections are maintained.

In reference to Applicant's arguments about the 35 USC 103 rejections: Applicant asserts the prior art of record does not maintain the claimed distinction between a governed control set and a separate additional document set. Examiner understands that Applicant is referring to the amended limitation "applying the updated coded control set to another set of additional documents to automatically code the another set of additional documents, wherein the coded control set is distinct from the another set of additional documents". These arguments have been fully considered but are not persuasive.
First, Examiner maintains that Davis implicitly teaches the coded control set being distinct from the additional documents: Davis describes the categorization of documents once the training set has been retuned, and since this categorization happens after the retuning, it is reasonable to interpret these further documents as new or distinct from the ones the classifier was trained on. However, after further consideration and search, Examiner reviewed the Owens reference (cited as pertinent art in the Non-Final dated 04/18/2025) and found that it explicitly teaches this feature, as can be seen at [0037]: "adapting to human feedback to periodically re-train portions of the system to more accurately categorize new documents". Owens's classification of new documents is therefore interpreted as the additional documents being distinct from the documents used for re-training the classifier.

Applicant further asserts that the claimed invention requires dual-stage validation, where the lifecycle terminates only when confidence threshold validation is satisfied on both the updated automatically coded control set and the automatically coded additional documents, and that Johnson fails to teach this feature as well as the feature of batch size determination driven by confidence threshold validation as claimed. Examiner respectfully disagrees.

First, the manner in which the limitation was amended renders the claim unclear and indefinite, as explained below in this Office Action. The limitation, as amended, recites a confidence threshold validation being applied concurrently to two sets of documents, the coded control set and the coded another set of additional documents, and it is unclear when exactly this confidence threshold validation is performed.
It seems to be done concurrently (interpreted as happening at the exact same time, i.e., simultaneously); however, the order of the steps according to the claims is as follows: first the coded control set is developed, and then it is used to further code additional documents. Furthermore, it is unclear whether the same confidence threshold validation is applied to both the coded control set and the coded set of additional documents; whether these confidence threshold validations are different but applied simultaneously; or both.

Nevertheless, Johnson's machine learning algorithm assigns a confidence level to each annotation instance of the learned annotators for training and categorization of documents. This threshold is used for retuning the algorithm, as any documents not meeting the threshold are presented for review and correction, and it is further used for automatic acceptance or rejection of annotation instances in the classification of documents (see Johnson at [0017], [0112], Claims 2 and 11). Therefore, in view of the lack of clarity of the amended claim limitation, Examiner understands that there is no patentable distinction between the claims and the prior art of record.

In addition, as explained below in this Office Action, Examiner could not find support in the instant application's specification for the amended limitation of Claim 1 reciting "a confidence threshold validation concurrently applied to the updated automatically coded control set and the automatically coded another set of additional documents". Said limitation is considered to incorporate new matter into the claim which does not have support in the original disclosure. For these reasons, Examiner maintains the 35 USC 103 rejection for claims 1-9, 11-21.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-9, 21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claim 1 recites the limitation (as amended) "terminating the adaptive identification life cycle based on a confidence threshold validation concurrently applied to the updated automatically coded control set and the automatically coded another set of additional documents". Said limitation is considered to incorporate new matter into the claim which does not have support in the original disclosure.
Therefore, Examiner cannot find support in the specification for these limitations, and cannot conclude whether the same confidence threshold validation is applied to both the coded control set and the coded set of additional documents; whether these confidence threshold validations are different but applied simultaneously; or both.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9, 11-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Independent claim 1 recites the limitation (as amended) "terminating the adaptive identification life cycle based on a confidence threshold validation concurrently applied to the updated automatically coded control set and the automatically coded another set of additional documents", and this limitation is considered unclear and indefinite. The limitation recites a confidence threshold validation being applied concurrently to two sets of documents, the coded control set and the coded another set of additional documents.
First, it is unclear when exactly this confidence threshold validation is performed: it seems to be done concurrently (interpreted as happening simultaneously); however, the coded control set is developed first in order to further code additional documents after. Second, it is unclear whether the same confidence threshold validation is applied to both the coded control set and the coded set of additional documents; whether these confidence threshold validations are different but applied simultaneously; or both. Clarification is required.

Independent claim 11 recites the limitations: "update a coded control set with the relevant document having the hard coding correction, the update of the coded control set comprising an adaptive identification life cycle that receives the hard coding correction and updates the coded control set with the relevant document, and the adaptive identification life cycle terminates based on a confidence threshold validation applied to the updated automatically coded control set and the automatically coded another set of additional documents; apply the updated coded control set to another set of additional documents to automatically code the additional documents; terminate the adaptive identification life cycle based on a confidence threshold validation applied to the updated automatically coded control set and the automatically coded another set of additional documents", and these limitations are considered unclear and indefinite. The first and third of these limitations seem to recite the exact same context; however, there is a step between them (apply the updated coded control set to another set of additional documents to automatically code the additional documents) that renders it unclear whether the termination actually happens before or after this application of the updated coded control set to another set of additional documents.
Furthermore, it is unclear whether the same confidence threshold validation is applied to both the coded control set and the coded set of additional documents, or whether these confidence threshold validations are different since they may be applied at different stages of the process. Clarification is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 11,023,828.
Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 9,595,005. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims and the claims that appear in the Patent are both directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-18 of U.S. Patent No. 8,489,538. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2-4, 6, 8-18 of U.S. Patent No. 8,554,716.
Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 7,933,859. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,282,000. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 17/684,186.
Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the copending application, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,572,857. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claims 1-9, 11-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-8, 10-18, 20-22 of copending Application No. 17/220,445. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims and the claims that appear in the copending application are both directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claims 1-9, 11-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-21 of U.S. Patent No. 12,547,944. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims that appear in the Patent, and both are directed to document review by incorporating user input to identify a subject or category and coding documents accordingly; therefore, the inventive concept is the same and they are not patentably distinct from each other.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3.
Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-5, 7-9, 11-15, 17-21 are rejected under 35 U.S.C. 103(a) as being unpatentable over Davis et al. (US 7,089,238, as submitted in the IDS dated 4/22/2022; hereinafter Davis) in view of Johnson et al. (US 2005/0027664, as submitted in the IDS dated 4/22/2022; hereinafter Johnson), and further in view of Owens (US Pub. 2010/0257127, cited as pertinent prior art in the Non-Final Office Action dated 04/18/2025; hereinafter Owens).

Referring to Claim 1, Davis teaches a method comprising: receiving a hard coding correction to at least a portion of an automated coding of a relevant document, the hard coding correction being produced by a human reviewer (see Davis at Column 2, line 51 - Column 3, line 3; Davis teaches the process of editorial review for quality control of either random documents or documents that did not meet a confidence threshold.
Further, Davis teaches that this editorial review is done by a human user, wherein the corrected classified document is flagged as being reviewed by the human); updating a coded control set with the relevant document having the hard coding correction (see Davis at Column 2, lines 54-56; Davis teaches that documents verified by editorial review are collected in a verified documents set 214 and used for incremental updating of the training set 223); applying the updated coded control set to another set of additional documents to automatically code the another set of additional documents (see Davis at Abstract; Davis teaches methods for incrementally updating the accuracy provided by documents in a training set used for automatic categorization. Furthermore, at Column 1, lines 49-51, Davis teaches that once the training set has been retuned, it can be used for categorization of documents. Therefore, the updated training set is used for automatic categorization/coding of additional documents).

However, Davis fails to teach: the updating the coded control set comprising an adaptive identification life cycle that receives the hard coding correction and updates the coded control set with the relevant document; wherein the coded control set is distinct from the another set of additional documents; terminating the adaptive identification life cycle based on a confidence threshold validation concurrently applied to the updated automatically coded control set and the automatically coded another set of additional documents; identifying contextually similar documents to an initial set of relevant documents utilizing machine learning to automatically detect concepts within the initial set of relevant documents using a statistical analysis; supplementing the initial set of relevant documents with the contextually similar documents; and determining a batch size of relevant documents based on the confidence threshold validation.
Johnson teaches, in an analogous system: the updating the coded control set comprising an adaptive identification life cycle that receives the hard coding correction and updates the coded control set with the relevant document (see Johnson at Abstract: "Through iterative interactive training sessions with a user the system trains annotators, and these are in turn used to discover more annotations in the text data. Once all of the text data or a sufficient amount of the text data is annotated, at the user's discretion, the system learns a final annotator or annotators, which are exported and available to annotate new textual data". Further, see Claim 1: "iteratively learning annotators for the at least one named entity or class using a machine learning algorithm; applying the learned annotators to text data resulting in the annotation of at least one named entity or class annotation instance; and selectively presenting for review and correction, if determined, representations of the at least one named entity or class annotation instance identified by the applying of the learned annotators". This iterative process corresponds to the claimed adaptive identification life cycle);

terminating the adaptive identification life cycle based on a confidence threshold validation concurrently applied to the updated automatically coded control set and the automatically coded another set of additional documents (see Johnson at Claim 1: "iteratively learning annotators for the at least one named entity or class using a machine learning algorithm; applying the learned annotators to text data resulting in the annotation of at least one named entity or class annotation instance; and selectively presenting for review and correction, if determined, representations of the at least one named entity or class annotation instance identified by the applying of the learned annotators". This iterative process corresponds to the claimed adaptive identification life cycle.
Also, see Claim 2: "annotations instances are selectively presented for review and correction, if determined, based on a predetermined threshold value of a confidence level"; this corresponds to the claimed "confidence threshold". Further, see [0016]: "At the end of each iteration, any annotation, generated from the learned annotators, having a confidence level within a confidence level range is corrected based on feedback");

determining a batch size of relevant documents based on the confidence threshold validation (see Johnson at Abstract: "Through iterative interactive training sessions with a user the system trains annotators, and these are in turn used to discover more annotations in the text data. Once all of the text data or a sufficient amount of the text data is annotated, at the user's discretion, the system learns a final annotator or annotators, which are exported and available to annotate new textual data". Further, at [0109]: "then in subsequent step 220, the system generates and exports runtime annotators for general use in applications. In this way the system and method on the basis of unnannotated text data and seeds, iteratively learns, with user review and correction as needed, accurate annotators for named entities or classes in an efficient and effective manner". Therefore, this final amount of annotators after the iterative learning corresponds to the claimed "batch size of relevant documents").

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teachings of Davis with the above teachings of Johnson by receiving training data for generating a coded control set of data, wherein the training data includes computer-suggested documents from machine learning, as taught by Davis, and comprising an adaptive identification lifecycle, as taught by Johnson.
The modification would have been obvious because one of ordinary skill in the art would be motivated to incrementally improve the system's ability to assign annotations correctly, and to provide mechanisms for selectively presenting results and guiding the user in the evaluation and correction process (as suggested by Johnson at [0114]).

Owens teaches, in an analogous system: applying the updated coded control set to another set of additional documents to automatically code the another set of additional documents, wherein the coded control set is distinct from the another set of additional documents (even though Davis implicitly teaches that the coded control set is distinct from the additional documents, as Davis teaches categorization of documents (new or distinct documents) once the training set has been retuned, Owens explicitly teaches it, as can be seen at [0037]: "adapting to human feedback to periodically re-train portions of the system to more accurately categorize new documents". Therefore, Owens's classification of new documents is interpreted as the additional documents being different from the documents used for re-training the classifier);

identifying contextually similar documents to an initial set of relevant documents utilizing machine learning to automatically detect concepts within the initial set of relevant documents using a statistical analysis (see Owens at [0068]: "THE CLASSIFICATION MODULE: which examines a document and determines the relative degrees of probability that the document belongs to the set of categories that the core entity is configured to work with.
The classification module can use any technique deemed suitable for the particular implementation however it is expected that the most commonly used technique will be some variation of a naive Bayesian categorization method");

and supplementing the initial set of relevant documents with the contextually similar documents (see Owens at [0082]: "THE TRAINING MONITOR: this is a software monitor that keeps track of documents as they progress through the approval process, folder monitors can register and update documents that they manipulate with the training monitor using event message architecture, or an API call. The training monitor responds to two three events: the Detected Event, the Moved Event, the Removed Event and the Accepted Event. The primary function of the Training Monitor is to decide which, if any, of the documents that are accepted (e.g. identified as fitting within a category) should be copied into the Exemplar Folders for the purpose of updating the Training Folders at regularly scheduled intervals").

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Davis and Johnson with the above teachings of Owens by receiving a hard coding correction to at least a portion of an automated coding of documents comprising an adaptive identification lifecycle, as taught by Davis and Johnson, and used for further classification of additional, different documents, as taught by Owens. The modification would have been obvious because one of ordinary skill in the art would be motivated to classify new documents that match the categories in the entity's domain of interest with a measurable degree of accuracy (as suggested by Owens at Claim 1).
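For orientation, the loop that the Davis-Johnson-Owens combination is mapped onto above (automatically code additional documents, route low-confidence documents to a human for hard coding correction, fold the corrections back into the coded control set, and terminate once the confidence threshold validation passes) can be sketched in a few lines. Everything here is a hypothetical toy: the function names, the keyword-overlap scoring, and the 0.8 threshold are invented for illustration and come from neither the claims nor the cited references.

```python
CONFIDENCE_THRESHOLD = 0.8  # invented value; the claims leave the threshold open

def auto_code(control_set, doc):
    """Toy predictive coder: score a document by keyword overlap with the
    'relevant' documents in the coded control set. Returns (label, confidence)."""
    keywords = {w for text, label in control_set if label == "relevant"
                for w in text.split()}
    hits = sum(1 for k in keywords if k in doc.split())
    confidence = hits / len(keywords) if keywords else 0.0
    return ("relevant" if confidence >= 0.5 else "not_relevant", confidence)

def human_review(doc):
    """Stand-in for the human reviewer producing a hard coding correction."""
    return "relevant" if "contract" in doc else "not_relevant"

def adaptive_identification_lifecycle(control_set, additional_docs):
    """Iterate until every automatically coded document clears the threshold."""
    corrected = {}                                   # doc -> hard coding correction
    while True:
        coded = {doc: (corrected[doc], 1.0) if doc in corrected  # human-corrected
                 else auto_code(control_set, doc)                # docs are final
                 for doc in additional_docs}
        low = [d for d, (_, conf) in coded.items() if conf < CONFIDENCE_THRESHOLD]
        if not low:                                  # threshold validation satisfied
            return coded, control_set
        for doc in low:                              # receive hard coding corrections
            corrected[doc] = human_review(doc)
            control_set = control_set + [(doc, corrected[doc])]  # update control set
```

With a seed control set of `[("contract clause", "relevant"), ("weather report", "not_relevant")]` and additional documents `["contract dispute", "quarterly weather summary"]`, both additional documents fall below the threshold on the first pass, receive corrections, and the loop terminates on the second pass with the control set grown from two to four entries.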
Referring to Claim 2, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising receiving training data that comprises an initial set of relevant documents including computer-suggested documents from machine learning (see Davis at Column 2: lines 23-30; Davis teaches an initial set of documents, which are used for creating a training set. This corresponds to the 'initial set of relevant documents').

Referring to Claim 3, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising generating the coded control set of data based on training data, using coding determinations for the coded control set (see Davis at Column 2: lines 34-53; Davis teaches addition of documents by receiving documents from multiple feeds and selecting a portion of them to add to the training set used in production for automatic classification of incoming documents. A categorization engine 211 is used to identify nearest neighbors and calculate similarity and category scores. The category score is higher or lower, corresponding to a degree of confidence in assignment of a particular document to a particular category. Therefore, since the initial training data is used for training the categorization engine, this corresponds to 'generating a coded control set based on the training data'), the coding determinations comprising automated coding of the relevant documents performed by a predictive coding system (see Davis at Column 2: lines 34-53; as above, the categorization engine 211 identifies nearest neighbors and calculates similarity and category scores, corresponding to a degree of confidence in assignment of a particular document to a particular category. Therefore, since the initial training data is used for training the categorization engine for automatic classification, this corresponds to 'automated coding of the relevant documents by a predictive coding system').

Referring to Claim 4, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising automatically coding the another set of additional documents with the coded control set using a predictive coding system (see Davis at Column 2: lines 34-53; Davis teaches a categorization engine 211 used to identify nearest neighbors and calculate similarity and category scores. The category score is higher or lower, corresponding to a degree of confidence in assignment of a particular document to a particular category. Therefore, since the initial training data is used for training the categorization engine for automatic classification, this corresponds to 'automatic coding of additional relevant documents').

Referring to Claim 5, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising presenting the relevant document from a set of additional relevant documents to a human reviewer, the relevant document having the automated coding from a predictive coding system (see Davis at Column 2: line 51 - Column 3: line 3; Davis teaches the process of editorial review, for quality control, of either random documents or documents that did not meet a confidence threshold, and teaches that this editorial review is done by a human user. Therefore, the automatically coded document that does not meet the threshold, and is therefore forwarded/presented to the reviewer, corresponds to the relevant document).
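As an aside on mechanism: Davis's categorization engine 211 is described only as identifying nearest neighbors and computing similarity and category scores. A minimal sketch of that kind of nearest-neighbor category scoring (function names, bag-of-words cosine similarity, and the choice of k are the editor's assumptions, not Davis's disclosure) could look like:

```python
from collections import Counter, defaultdict
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def category_scores(new_doc, training_set, k=3):
    """Score categories for new_doc from its k nearest training neighbors.

    training_set: list of (category, text) pairs. A higher category score
    corresponds to a higher degree of confidence that new_doc belongs to
    that category, in the spirit of Davis's engine 211.
    """
    new_bag = Counter(new_doc.lower().split())
    # rank training documents by similarity and keep the k nearest
    sims = sorted(
        ((cosine(new_bag, Counter(t.lower().split())), cat)
         for cat, t in training_set),
        reverse=True)[:k]
    scores = defaultdict(float)
    for sim, cat in sims:
        scores[cat] += sim
    return dict(scores)
```

The category score here is simply the summed neighbor similarity per category, so a document far from every training example yields uniformly low scores, which is exactly the "low confidence" case that gets routed to editorial review.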
Referring to Claim 7, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising applying coding determinations of the coded control set to contextually similar data in a corpus of documents (see Owens at [0082]: "THE TRAINING MONITOR: this is a software monitor that keeps track of documents as they progress through the approval process; folder monitors can register and update documents that they manipulate with the training monitor using event message architecture, or an API call. The training monitor responds to four events: the Detected Event, the Moved Event, the Removed Event and the Accepted Event. The primary function of the Training Monitor is to decide which, if any, of the documents that are accepted (e.g., identified as fitting within a category) should be copied into the Exemplar Folders for the purpose of updating the Training Folders at regularly scheduled intervals").

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Davis and Johnson with the above teachings of Owens by receiving a hard coding correction to at least a portion of an automated coding of documents comprising an adaptive identification lifecycle, as taught by Davis and Johnson, and applying coding determinations of the coded control set to contextually similar data in a corpus of documents, as taught by Owens. The modification would have been obvious because one of ordinary skill in the art would be motivated to classify new documents that match the categories in the entity's domain of interest with a measurable degree of accuracy (as suggested by Owens at [Claim 1]).
Referring to Claim 8, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising receiving a review of each document in the updated coded control set to ensure that proper coding has been implemented (see Davis at Column 4: lines 42-49; Davis teaches that the status column provides information regarding confidence in coding of a document: "Okay" may be used to indicate that a document has been correctly categorized; "missing" may be used to indicate that a document with a high score has not been assigned to a topic; and "suspicious" may indicate that a document with a low score has been assigned to the topic. This is interpreted as ensuring proper coding).

Referring to Claim 9, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising: generating the coded control set of data based on a plurality of uncoded documents from a corpus, the plurality of uncoded documents comprising documents that are relevant or selected randomly (see Davis at Column 2: lines 28-30; Davis teaches "[u]ncoded documents 101 are loaded and registered 102 into a workfile. A user codes the documents to create a training set". Moreover, at Column 2: lines 41-47: "[t]he documents 201 may be coded or uncoded. An input queue 202 may be used to organize addition of documents 201 to the training set, for instance, when a news dissemination service is receiving documents from multiple feeds and selecting a portion of them to add to the training set used in production for automatic classification of incoming documents". Therefore, the coded control set is based on uncoded documents from a corpus, and since the documents are selected as a portion from multiple feeds, this corresponds to the documents being relevant or randomly selected), wherein the portion of the documents are randomly sampled from an un-reviewed document population (see Davis at Column 2: lines 56-62; Davis teaches that editorial review, for quality control or other purposes, may also include a random sample 212 of documents that were above a confidence threshold during coding. Selection of a random sample 212 for editorial review balances addition to the training set of difficult cases, with low confidence scores, and easier cases, with higher confidence scores. Therefore, this corresponds to 'selecting a portion of documents that are selected randomly'); and receiving a determination if the document coded using the coded control set was miscoded, prior to the step of receiving the hard coding correction to the document (see Davis at Column 4: lines 42-49; as noted above, "missing" and "suspicious" documents may be referred to a human for editorial review; therefore, the miscoded determination occurs prior to the hard coding correction by the editorial reviewer).

Referring to independent Claim 11, it is rejected on the same basis as independent claim 1, mutatis mutandis, since they are analogous claims. Referring to dependent Claim 12, it is rejected on the same basis as dependent claim 2, mutatis mutandis, since they are analogous claims. Referring to dependent Claim 13, it is rejected on the same basis as dependent claim 3, mutatis mutandis, since they are analogous claims.
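Davis's selection mechanism for editorial review, all documents below the confidence threshold plus a random sample of documents above it, can be sketched as follows. The function name, parameter names, and default values are purely illustrative assumptions; Davis discloses the policy, not this code.

```python
import random

def route_for_review(coded_docs, threshold=0.75, sample_rate=0.1, rng=None):
    """Split auto-coded documents into a review queue and an accepted set.

    coded_docs: list of (doc_id, confidence) pairs. Documents below the
    confidence threshold always go to editorial review; a random sample of
    above-threshold documents is added as well, balancing difficult cases
    (low confidence) and easier cases (high confidence) in the training-set
    updates, as Davis describes.
    """
    rng = rng or random.Random()
    review, accepted = [], []
    for doc_id, confidence in coded_docs:
        if confidence < threshold or rng.random() < sample_rate:
            review.append(doc_id)
        else:
            accepted.append(doc_id)
    return review, accepted

docs = [("a", 0.9), ("b", 0.5), ("c", 0.8)]
print(route_for_review(docs, threshold=0.75, sample_rate=0.0))
# prints (['b'], ['a', 'c'])
```

With `sample_rate` above zero, some high-confidence documents are also routed to review, which is the "random sample 212" that keeps easy cases represented in the updated training set.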
Referring to dependent Claim 14, it is rejected on the same basis as dependent claim 4, mutatis mutandis, since they are analogous claims. Referring to dependent Claim 15, it is rejected on the same basis as dependent claim 5, mutatis mutandis, since they are analogous claims. Referring to dependent Claim 17, it is rejected on the same basis as dependent claim 7, mutatis mutandis, since they are analogous claims. Referring to dependent Claim 18, it is rejected on the same basis as dependent claim 8, mutatis mutandis, since they are analogous claims. Referring to dependent Claims 19 and 20, they are rejected on the same basis as dependent claim 9, mutatis mutandis, since they are analogous claims.

Referring to Claim 21, the combination of Davis, Johnson and Owens teaches the system according to claim 1, wherein the initial set of relevant documents are generated by a user (see Johnson at [0033]: "To start the iterative mode of learning process, a user provides directly or indirectly via at least one of several optional means, a sample of text with selective portions of the text annotated, which includes using an editor to bracket and label named entity instances in the text, providing a list or lists of named entities (dictionaries or glossaries), or providing a pattern or patterns in the system provided pattern language". Therefore, this sample of text with selective portions of the text annotated in order to start the learning process, provided by the user, is interpreted as the claimed 'initial set of relevant documents').
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teachings of Davis with the above teachings of Johnson by receiving training data for generating a coded control set of data wherein the training data includes computer-suggested documents from machine learning, as taught by Davis, and comprising an adaptive identification lifecycle with an initial seed set provided by the user, as taught by Johnson. The modification would have been obvious because one of ordinary skill in the art would be motivated to incrementally improve the system's ability to assign annotations correctly and to provide mechanisms for selectively presenting results and guiding the user in the evaluation and correction process (as suggested by Johnson at [0114]).

Claims 6 and 16 are rejected under 35 U.S.C. 103(a) as being unpatentable over Davis in view of Johnson and Owens, and further in view of Goodwin et al. (US 2003/0135818, as submitted in the IDS dated 4/22/2022; hereinafter Goodwin).

Referring to Claim 6, the combination of Davis, Johnson and Owens teaches the method according to claim 1, further comprising allowing a human reviewer to correct at least a portion of the automated coding by performing a hard coding correction, the hard coding correction comprising changing of the at least a portion of the automated coding from a first coding to a second coding (see Davis at Column 2: lines 54-56: "[d]ocuments verified by editorial review are collected in a verified documents set 214 and used for incremental updating of the training set 223".
Therefore, this is interpreted as the human reviewer performing a correction in the editorial review for quality control). Even though Davis implicitly teaches that the correction comprises changing the coding from a first coding to a second one, Goodwin further teaches the hard coding correction comprising changing of the at least a portion of the automated coding from a first coding to a second coding (see Goodwin at [0026]; Goodwin teaches "categories may be changed or other categories may be assigned to the document by, for example, a system administrator or other user, after the document's creation" and "[a]n actions record may be maintained, step 106. The actions record may identify actions that have been performed on a document by a user, time of action, duration of action, and other criteria". Moreover, at [0034]: "[c]ategory assigning module 302 may enable one or more categories to be assigned to a document. The categories may be, for example, assigned when the document is created. Alternatively, a system administrator and/or one or more users may be granted rights for assigning, changing, or deleting categories assigned to one or more documents". Therefore, since Goodwin teaches that a system administrator or user can change the previous category of a document, this corresponds to the claimed hard coding correction comprising a change of one code to another).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Davis, Johnson and Owens with the above teachings of Goodwin by receiving training data for generating a coded control set of data wherein the training data includes computer-suggested documents from machine learning, as taught by Davis, Johnson and Owens, and allowing a human reviewer to correct the coding by changing the coding from a first coding to a second coding, as taught by Goodwin.
The modification would have been obvious because one of ordinary skill in the art would be motivated to change the category of a previously classified document to a new one, correcting and updating the accuracy provided by documents in a training set used for automatic categorization (as suggested by Davis at Abstract), and to maintain a record of user changes to the documents' classifications (as suggested by Goodwin at [0026]).

Referring to dependent Claim 16, it is rejected on the same basis as dependent claim 6 since they are analogous claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: NPL: "Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review" - this reference discloses a system for automatic document categorization addressing the problem of the time and costs of manual user reviews, and proposes methods that may be useful to reduce the expense and time needed to conduct electronic discovery.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS A SITIRICHE, whose telephone number is (571) 270-1316. The examiner can normally be reached M-F, 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUIS A SITIRICHE/
Primary Examiner, Art Unit 2126

Prosecution Timeline

Oct 18, 2021: Application Filed
Apr 15, 2025: Non-Final Rejection — §103, §112, §DP
Jul 08, 2025: Response Filed
Sep 11, 2025: Final Rejection — §103, §112, §DP
Oct 20, 2025: Request for Continued Examination
Oct 23, 2025: Response after Non-Final Action
Mar 12, 2026: Non-Final Rejection — §103, §112, §DP
Apr 06, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology (based on the 5 most recent grants):

Patent 12585947: MODIFYING COMPUTATIONAL GRAPHS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579476: ADAPTIVE LEARNING FOR IMAGE CLASSIFICATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579445: MODELS FOR PREDICTING RESISTANCE TRENDS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572791: METHOD, DEVICE AND COMPUTER PROGRAM FOR PREDICTING A SUITABLE CONFIGURATION OF A MACHINE LEARNING SYSTEM FOR A TRAINING DATA SET (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572857: Adaptive Probabilistic Latent Semantic Analysis System For Automated Document Coding And Review In Electronic Discovery (granted Mar 10, 2026; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+22.1%)
Median Time to Grant: 3y 7m
PTA Risk: High

Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
