Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,312

METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR CONTEXTUALLY AWARE DEBIASING

Final Rejection: §101, §103
Filed: Nov 29, 2023
Examiner: TSUI, WILSON W
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Optum Inc.
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (365 granted / 593 resolved; +6.6% vs TC avg)
Interview Lift: +58.1% (resolved cases with an interview vs. without)
Avg Prosecution: 4y 0m (typical timeline; 44 applications currently pending)
Total Applications: 637 (career history, across all art units)
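The headline figures in the panel above are simple ratios over the examiner's resolved cases. A quick sketch of the arithmetic (the counts come from the panel; treating the "+6.6% vs TC avg" delta as a simple subtraction is an assumption about how the tool computes it):

```python
# Recompute the headline examiner statistics from the raw counts above.
granted = 365
resolved = 593

career_allow_rate = granted / resolved  # fraction of resolved cases granted
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~61.6%, shown as 62%

# If the "+6.6% vs TC avg" delta is a simple difference, the implied
# Tech Center average allowance rate is:
tc_delta = 0.066
implied_tc_avg = career_allow_rate - tc_delta
print(f"Implied TC average: {implied_tc_avg:.1%}")
```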

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 593 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The following rejections are withdrawn in view of new grounds of rejection necessitated by applicant's amendments:

Claims 1, 2, 6-9 and 13-16 rejected under 35 U.S.C. 103 as being unpatentable over Sahayaraj et al (US Application: US 2022/0180068, published: Jun. 9, 2022, filed: Dec. 7, 2020) in view of Gaur et al (US Application: US 20180341637, published: Nov. 29, 2018, filed: May 24, 2017).

Claims 3-5, 10-12, and 17-20 rejected under 35 U.S.C. 103 as being unpatentable over Sahayaraj et al (US Application: US 2022/0180068, published: Jun. 9, 2022, filed: Dec. 7, 2020) in view of Gaur et al (US Application: US 20180341637, published: Nov. 29, 2018, filed: May 24, 2017), further in view of Faal et al ("Domain Adaptation Multi-task Deep Neural Network for Mitigating Unintended Bias in Toxic Language Detection", pages 932-940, publisher: Science and Technology Publications, published: 2021).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1: Step 1: Claim 1 falls within a statutory category.
Step 2A, Prong One: With regards to claim 1, the claim recites the following, of which the limitations that are bolded recite a judicial exception for a mental process:

A computer-implemented method comprising: generating, by one or more processors, one or more document segments that each comprise a sequence of terms from a syntactic debiased document; identifying, by the one or more processors, one or more candidate semantic bias terms from a document segment of the one or more document segments based on a semantic bias corpus; in response to the identification of the one or more candidate semantic bias terms, generating, by the one or more processors and using a classification model, a bias classification for the document segment; and in response to a positive bias classification, providing, by the one or more processors and using a semantic debiasing model, one or more replacement tokens for the one or more candidate semantic bias terms.

A computer-implemented method comprising: identifying, by one or more processors and within a syntactic debiased document, a document segment that comprises a sequence of terms; identifying, by the one or more processors, a candidate semantic bias term from the document segment based on a semantic bias corpus; in response to the identifying of the candidate semantic bias term, generating, by the one or more processors and using a classification model, a bias classification for the document segment, wherein the classification model comprises a first machine learning model that is trained …; and in response to the bias classification indicating a positive bias classification, providing, by the one or more processors and using a semantic debiasing model, a replacement token for the candidate semantic bias term, wherein the semantic debiasing model comprises a second machine learning model configured to identify the replacement token for the candidate semantic bias term based on a position of a masked token corresponding to the candidate semantic bias term within a tokenized subset of the syntactic debiased document.

Of note, the examiner first points out the following citations from MPEP 2106.04(a)(2), Section III: "the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." "The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed 'conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally,' i.e., 'as a person would do it by head and hand.'); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1139, 120 USPQ2d 1473, 1474 (Fed. Cir. 2016) (holding that claims to a mental process of 'translating a functional description of a logic circuit into a hardware component description of the logic circuit' are directed to an abstract idea, because the claims 'read on an individual performing the claimed steps mentally or with pencil and paper')."
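For orientation, the flow recited in the claim above (segment the document, screen terms against a semantic bias corpus, classify the segment, then substitute replacement tokens) can be sketched as follows. The corpus, replacement table, and the trivial any-hit classification rule below are invented placeholders; the claim itself recites trained machine learning models for the classification and replacement steps.

```python
# Hedged sketch of the claimed debiasing flow. The lexicons and the
# rule-based "classifier" below are illustrative stand-ins; the claim
# recites trained machine learning models for these steps.
SEMANTIC_BIAS_CORPUS = {"bossy", "aggressive"}                 # hypothetical corpus
REPLACEMENTS = {"bossy": "assertive", "aggressive": "direct"}  # hypothetical table

def debias_segment(segment: str) -> str:
    terms = segment.lower().split()
    # Step 1: identify candidate semantic bias terms against the corpus.
    candidates = [t for t in terms if t in SEMANTIC_BIAS_CORPUS]
    if not candidates:
        return segment
    # Step 2: generate a bias classification for the segment
    # (stand-in rule: any corpus hit => positive classification).
    positive = bool(candidates)
    if not positive:
        return segment
    # Step 3: provide replacement tokens for the candidate terms.
    return " ".join(REPLACEMENTS.get(t, t) for t in terms)

print(debias_segment("she is bossy"))  # -> "she is assertive"
```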
Accordingly, with respect to the above bolded limitations for "a method comprising: … one or more document segments that each comprise a sequence of terms from a syntactic debiased document; identifying, … one or more candidate semantic bias terms from a document segment of the one or more document segments based on a semantic bias corpus; in response to the identification of the one or more candidate semantic bias terms, … a bias classification for the document segment; and in response to a positive bias classification, providing, … one or more replacement tokens for the one or more candidate semantic bias terms", these limitations are directed to a mental process because of at least the following:

A person can evaluate document segments that comprise a sequence of terms from a manually evaluated semantic debiased document. A person can mentally evaluate a document segment based upon a manually evaluated semantic bias corpus and subsequently make a judgment to identify one or more candidate semantic bias terms based on the evaluation(s). A person can make a judgment to identify one or more candidate semantic bias terms in response to a manually evaluated bias classification and manually evaluated training data. A person can make a judgment to identify one or more replacement tokens based upon evaluating the position of a masked token corresponding to the candidate semantic bias term.

Step 2A, Prong Two: The claim recites the following additional elements: "A computer-implemented method comprising: generating, by one or more processors, …"; "identifying, by the one or more processors"; "generating, by the one or more processors and using a classification model … wherein the classification model comprises"; "providing, by the one or more processors and using a semantic debiasing model, … wherein the semantic debiasing model comprises a second machine learning model configured to identify".
These additional element(s) is/are considered merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The courts have identified these types of limitations to be insufficient to integrate a judicial exception into a practical application.

C) "… using a classification model … the classification model comprises a first machine learning model that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment"; D) "… using a semantic debiasing model … wherein the semantic debiasing model comprises a second machine learning model". These additional elements are also considered generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h). The courts have identified this type of limitation to be insufficient to integrate a judicial exception into a practical application.
Step 2B: As discussed in Step 2A, Prong Two, there are additional elements of: "A computer-implemented method comprising: generating, by one or more processors and using a classification model … wherein the classification model comprises …"; "identifying, by the one or more processors …"; "generating, by the one or more processors and using a classification model, …"; "providing, by the one or more processors and using a semantic debiasing model, … comprises a second machine learning model configured to identify". These additional element(s) is/are considered merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. The courts have found this type of limitation to be insufficient to be 'significantly more' when recited in a claim with a judicial exception (see also Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984).

C) "… using a classification model … the classification model comprises a first machine learning model that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment"; D) "… using a semantic debiasing model … wherein the semantic debiasing model comprises a second machine learning model". These additional elements are considered generally linking the use of a judicial exception to a particular technological environment or field of use. The courts have found this type of limitation to be insufficient to be 'significantly more' when recited in a claim with a judicial exception (see also Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978)) (MPEP § 2106.05(h)). Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept.

Claims 2-7: These claims recite further operations directed to mental process(es) (such as evaluating a grammar corrected document, evaluating syntactic debiasing criteria, making judgments for a syntactic debiased document, evaluating semantic bias criteria, evaluating candidate replacement tokens, and evaluating context information) and do not recite additional elements that would result in integrating their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s), since the additional elements recited merely apply a computer (insignificant) and/or generally link the use of a judicial exception to a particular technological environment (using a model).

Claim 8: With regards to claim 8, it is rejected under similar rationale as claim 1. It is noted that it additionally recites a memory, which is considered merely using a computer/computer's components as a tool to perform an abstract idea.
The courts have identified these types of limitations to be insufficient to integrate a judicial exception into a practical application, and the courts have also found these types of limitations to be insufficient to be 'significantly more' when recited in a claim with a judicial exception.

Claims 9-14: These claims recite further operations directed to mental process(es) (such as evaluating a grammar corrected document, evaluating syntactic debiasing criteria, making judgments for a syntactic debiased document, evaluating semantic bias criteria, evaluating candidate replacement tokens, and evaluating context information) and do not recite additional elements that would result in integrating their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s), since the additional elements recited merely apply a computer (as a tool / insignificant) and/or generally link the use of a judicial exception to a particular technological environment (using a model).

Claim 15: With regards to claim 15, it is rejected under similar rationale as claim 1. It is noted that it additionally recites a non-transitory memory, which is considered merely using a computer/computer's components as a tool to perform an abstract idea. The courts have identified these types of limitations to be insufficient to integrate a judicial exception into a practical application, and the courts have also found these types of limitations to be insufficient to be 'significantly more' when recited in a claim with a judicial exception.

Claims 16-20: These claims recite further operations directed to mental process(es) (such as evaluating a grammar corrected document, evaluating syntactic debiasing criteria, making judgments for a syntactic debiased document, evaluating semantic bias criteria, evaluating candidate replacement tokens, and evaluating context information) and do not recite additional elements that would result in integrating their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s), since the additional elements recited merely apply a computer (as a tool / insignificant) and/or generally link the use of a judicial exception to a particular technological environment (using a model).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sahayaraj et al (US Application: US 2022/0180068, published: Jun. 9, 2022, filed: Dec.
7, 2020) in view of Ahmed et al (US Application: US 20240126995, published: Apr. 18, 2024, filed: Oct. 12, 2022).

With regards to claim 1, Sahayaraj et al teaches a computer-implemented method comprising: identifying, by one or more processors and within a syntactic debiased document (Fig. 2, Fig. 5: a processor and memory are implemented to perform bias processing and correction steps, and the document could have undergone at least one round of debiasing of portion(s) of document text and can undergo subsequent iterations of debiasing), one or more document segments that each comprise a sequence of terms from a … document (paragraph 0033: document word segments are generated in the form of vectors); identifying, by the one or more processors, candidate semantic bias terms from a document segment of the one or more document segments based on a semantic bias corpus (paragraphs 0033-0036, 0041-0043: a 'first word embedding' contains data about how a word (the 'first word') relates to other words in the document, and the embedding is compared to secondary embedding(s) to identify whether the 'first word' is considered a candidate bias term via a model and a determined bias metric); in response to the identifying of the candidate semantic bias terms, generating, by the one or more processors and using a classification model, a bias classification for the document segment (paragraphs 0041, 0043: the segment can be classified/labeled as being above a bias threshold and as requiring 'debiasing' while accounting for context of other terms within the document, and the segment(s)/subset(s) of text is/are highlighted); and in response to a positive bias classification, providing, by the one or more processors and using a semantic debiasing model, one or more replacement tokens for the one or more candidate semantic bias terms (paragraphs 0029, 0034, 0039, 0041, 0043, 0044, 0049: a positive bias indicating a degree/magnitude of bias includes a number and an 'X'. A candidate term/text can be suggested to the user as highlighted, and option(s) for substitution of the presented identified/highlighted subset of text are provided (based upon syntactic and semantic processing)); … identify the replacement token for the candidate semantic bias term based on a position of a masked token corresponding to the candidate semantic bias term within a tokenized subset of the syntactic debiased document (Fig. 2: masked text (position of characters associated with token-text) can be updated with a replacement token for the candidate bias term).

However, although Sahayaraj et al teaches identifying … a document segment, Sahayaraj et al does not explicitly teach: generating … using a classification model, … wherein the classification model comprises a first machine learning model that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment; … wherein the semantic debiasing model comprises a second machine learning model configured to identify the replacement token for the candidate semantic bias term within a tokenized subset of the syntactic debiased document.
Yet Ahmed et al teaches identifying … a document segment; … generating … using a classification model, … wherein the classification model comprises a first machine learning model (Ahmed et al, paragraphs 0011, 0046, 0047, 0058 and 0059: classification can be implemented using a BERT learning model, which can take in text tokenized and represented in vector format) that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment (Ahmed et al, paragraphs 0042 and 0045: training data (document(s)) for the classification model includes particular terms that are associated with semantic labels of positive bias or non-bias (negative for bias identification) based on how term(s) are used within the context of an area/domain of corporate communication document(s). The examiner notes that interpretation of negative bias in the manner of the explanation/citation above is consistent with the instant application's explanation of negative bias in paragraph 0121 of the instant application); … wherein the semantic debiasing model comprises a second machine learning model configured to identify the replacement token for the candidate semantic bias term within a tokenized subset of the syntactic debiased document (Ahmed et al, paragraphs 0039 and 0042: a second machine learning engine can be implemented to identify a replacement token for a term of a plurality of terms from the document, to replace a bias term with a non-bias term (based on score/weighting of the replacement terms amongst other terms)).
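The second-model mechanism described above (masking the flagged term and predicting a substitute from its position in the tokenized segment) is, in spirit, how BERT-style fill-mask models operate. A self-contained sketch follows, with a toy score table standing in for a trained model's predictions; the table, the candidate list, and the scoring rule are invented for illustration and are not the applicants' or Ahmed's actual implementation.

```python
# Toy stand-in for a masked-language-model scorer: given a tokenized
# segment with one [MASK] position, rank candidate replacement tokens.
# A real system would obtain these scores from a trained model (e.g. a
# BERT-style fill-mask head); the score table here is fabricated.
TOY_SCORES = {"assertive": 0.72, "confident": 0.55, "bossy": 0.10}

def fill_mask(tokens: list[str], candidates: list[str]) -> str:
    mask_pos = tokens.index("[MASK]")  # position of the masked bias term
    # Rank candidates by the stand-in score; a real model conditions on
    # the surrounding tokens and the mask position.
    best = max(candidates, key=lambda c: TOY_SCORES.get(c, 0.0))
    filled = tokens.copy()
    filled[mask_pos] = best
    return " ".join(filled)

tokens = ["she", "is", "[MASK]", "and", "organized"]
print(fill_mask(tokens, ["assertive", "confident"]))
# -> "she is assertive and organized"
```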
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sahayaraj et al's ability to generate segments from a document to perform document processing/correction to debias terms in the document (via highlighting of candidate terms and selectable alternative/replacement terms, using semantic and syntactic processing/analysis), such that the document could have undergone initial iterative document processing debiasing and then further implementation of first and second machine learning models for classification and replacement, as taught by Ahmed et al. The combination would have allowed Sahayaraj et al to have implemented an effective and efficient way to recognize and control unconscious bias (Ahmed et al, paragraph 0002).

With regards to claim 3, which depends on claim 1, the combination of Sahayaraj and Ahmed et al teaches wherein the first machine learning model is previously trained based on semantic bias criteria defining the semantic context, as explained in the rejection of claim 1, which is repeated here for ease of reference: "Ahmed, paragraphs 0042 and 0045: training data (document(s)) for the classification model includes particular terms that are associated with semantic labels of positive bias or non-bias (negative for bias identification) based on how term(s) are used within the context of an area/domain of corporate communication document(s). The examiner notes that interpretation of negative bias in the manner of the explanation/citation above is consistent with the instant application's explanation of negative bias in paragraph 0121 of the instant application."
With regards to claim 4, which depends on claim 3, the combination of Sahayaraj and Ahmed et al teaches wherein the semantic bias criteria defines the positive bias classification and a negative bias classification, as similarly explained in the rejection of claim 3 above, and is rejected under similar rationale.

With regards to claim 5 (the computer-implemented method of claim 4), the combination of Sahayaraj et al and Ahmed et al teaches wherein providing, using the semantic debiasing model, the replacement token for the candidate semantic bias term comprises: identifying a subset of document segments within the syntactic debiased document; and generating, using a tokenizer model, semantic debiasing model, the one or more replacement tokens for the one or more candidate semantic bias terms based on the subset of document segments (as similarly explained in the rejection of claim 1, the teachings of Sahayaraj et al and Ahmed et al were combined to address these limitations, which allow highlighting of candidate terms and selection of one or more alternative/replacement terms for the candidate terms), and is rejected under similar rationale.

With regards to claim 6 (the computer-implemented method of claim 1), Sahayaraj teaches wherein the replacement token is selected from one or more candidate replacement tokens based on comparing the one or more candidate replacement tokens with the semantic bias corpus (paragraphs 0041, 0043, 0044, 0049: a positive bias indicating a degree/magnitude of bias includes a number and an 'X'. A candidate term/text can be suggested to the user as highlighted, and option(s) for substitution of the presented identified/highlighted subset of text are provided. It is noted, as explained in paragraphs 0039 and 0043, that bias graphs are compared to determine bias scores/degrees, and a second bias graph (semantic bias corpus data) is further referenced to produce better alternative text suggestions). With regards to claim 7.
The computer-implemented method of claim 6: Sahayaraj et al and Ahmed et al teach wherein generating the one or more candidate replacement tokens comprises: assigning a relevancy score to a candidate replacement token from the one or more candidate replacement tokens for the one or more candidate semantic bias terms (as similarly explained in the rejection of claim 1, paragraph 0039 of Ahmed was explained to show that the candidate is selected based upon weightings amongst candidates), and is rejected under similar rationale.

With regards to claim 8, the combination of Sahayaraj et al and Ahmed et al teaches a computing system comprising: one or more processors; and one or more memories storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: identifying, within a syntactic debiased document, a document segment that comprises a sequence of terms; identifying a candidate semantic bias term from the document segment based on a semantic bias corpus; in response to the identifying of the candidate semantic bias term, generating, using a classification model, a bias classification for the document segment, wherein the classification model comprises a first machine learning model that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment; and in response to the bias classification indicating a positive bias classification, providing, using a semantic debiasing model, a replacement token for the candidate semantic bias term, wherein the semantic debiasing model comprises a second machine learning model configured to identify the replacement token for the candidate semantic bias term based on a position of a masked token corresponding to the candidate semantic bias term within a tokenized subset of the syntactic debiased document, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.

With regards to claim 10, which depends on claim 8, the combination of Sahayaraj and Ahmed et al teaches wherein the first machine learning model is previously trained based on semantic bias criteria defining the semantic context, as explained in the rejection of claim 3, and is rejected under similar rationale.

With regards to claim 11, which depends on claim 10, the combination of Sahayaraj and Ahmed et al teaches wherein the semantic bias criteria defines the positive bias classification and a negative bias classification, as similarly explained in the rejection of claim 4 above, and is rejected under similar rationale.

With regards to claim 12 (the computing system of claim 11), the combination of Sahayaraj et al and Ahmed et al teaches wherein providing, using the semantic debiasing model, the replacement token for the candidate semantic bias term comprises: identifying a subset of document segments within the syntactic debiased document; and generating, using a tokenizer model, semantic debiasing model, the one or more replacement tokens for the one or more candidate semantic bias terms based on the subset of document segments, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.

With regards to claim 13 (the computing system of claim 8), the combination of Sahayaraj et al and Ahmed et al teaches wherein the replacement token is selected from one or more candidate replacement tokens based on comparing the one or more candidate replacement tokens with the semantic bias corpus, as similarly explained in the rejection of claim 6, and is rejected under similar rationale. With regards to claim 14.
The computing system of claim 13: the combination of Sahayaraj et al and Ahmed et al teaches wherein generating the one or more candidate replacement tokens comprises: assigning a relevancy score to a candidate replacement token from the one or more candidate replacement tokens, as similarly explained in the rejection of claim 7, and is rejected under similar rationale.

With regards to claim 15, the combination of Sahayaraj et al and Ahmed et al teaches one or more non-transitory computer-readable media storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: identifying, within a syntactic debiased document, a document segment that comprises a sequence of terms; identifying a candidate semantic bias term from the document segment based on a semantic bias corpus; in response to the identifying of the candidate semantic bias term, generating, using a classification model, a bias classification for the document segment, wherein the classification model comprises a first machine learning model that is trained using a training dataset that includes a training document segment assigned with a semantic context label based on a semantic context corresponding to context of use of at least one term within the training document segment; and in response to the bias classification indicating a positive bias classification, providing, using a semantic debiasing model, a replacement token for the candidate semantic bias term, wherein the semantic debiasing model comprises a second machine learning model configured to identify the replacement token for the candidate semantic bias term based on a position of a masked token corresponding to the candidate semantic bias term within a tokenized subset of the syntactic debiased document, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 17, which depends on claim 15, the combination of Sahayaraj and Ahmed et al teaches wherein the first machine learning model is previously trained based on semantic bias criteria defining the semantic context, as explained in the rejection of claim 3, and is rejected under similar rationale.

With regards to claim 18, which depends on claim 17, the combination of Sahayaraj and Ahmed et al teaches wherein the semantic bias criteria defines the positive bias classification and a negative bias classification, as similarly explained in the rejection of claim 4 above, and is rejected under similar rationale.

With regards to claim 19 (the computer-readable media of claim 18), the combination of Sahayaraj et al and Ahmed et al teaches wherein providing, using the semantic debiasing model, the replacement token for the candidate semantic bias term comprises: identifying a subset of document segments within the syntactic debiased document; and generating, using a tokenizer model, semantic debiasing model, the one or more replacement tokens for the one or more candidate semantic bias terms based on the subset of document segments, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.

With regards to claim 20 (the computer-readable media of claim 19), the combination of Sahayaraj et al and Ahmed et al teaches wherein the replacement token is selected from one or more candidate replacement tokens based on comparing the one or more candidate replacement tokens with the semantic bias corpus, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.

Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sahayaraj et al (US Application: US 2022/0180068, published: Jun. 9, 2022, filed: Dec. 7, 2020) in view of Ahmed et al (US Application: US 20240126995, published: Apr. 18, 2024, filed: Oct.
12, 2022) in view of Gaur et al (US Application: US 20180341637, published: Nov. 29, 2018, filed: May 24, 2017).

With regards to claim 2 (the computer-implemented method of claim 1), the combination of Sahayaraj et al and Ahmed et al teaches wherein the syntactic debiased document is previously generated using syntactic debiasing criteria by: …; generating a corresponding non-bias term for the syntactic bias term based on the syntactic debiasing criteria; and generating the syntactic debiased document by replacing the syntactic bias term with the corresponding non-bias term (as similarly explained in the rejection of claim 1, the combination of Sahayaraj et al and Ahmed et al teaches that a non-bias term is generated for replacement of a bias term via semantic and syntactic processing/analysis, and is rejected under similar rationale).

However, the combination of Sahayaraj et al and Ahmed et al does not explicitly teach identifying a syntactic bias term in a grammar corrected document, or generating the syntactic debiased document by replacing the syntactic bias term with the corresponding non-bias term within the grammar corrected document.

Yet Gaur et al teaches identifying a syntactic bias term in a grammar corrected document, and generating the syntactic debiased document by replacing the syntactic bias term with the corresponding non-bias term within the grammar corrected document (Fig. 1B, Fig. 2, paragraphs 0014 and 0015 of Gaur et al teach that a pronoun term 'he' is identified in a document (that could have undergone grammar processing/correction first) based on selection of 'Uncon Bias' as a follow-up proofread option/action. Non-bias alternative terms are provided, such as 'he/she', 'they' and 'the selected candidate', and the document can be updated/debiased when the user selects one of the alternate terms).
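The Gaur mechanism described above (flag a gendered pronoun, offer neutral alternatives, substitute on user selection) amounts to a lexicon-driven find-and-suggest pass. A minimal sketch follows; the alternative terms mirror those quoted from Gaur above, but the lexicon itself and both helper functions are invented for illustration.

```python
# Minimal sketch of the flag-and-replace pass described for Gaur:
# flag gendered pronouns and offer neutral alternatives. The lexicon
# below is illustrative, not Gaur's actual term list.
ALTERNATIVES = {"he": ["he/she", "they", "the selected candidate"],
                "she": ["he/she", "they", "the selected candidate"]}

def suggest(text: str) -> dict[str, list[str]]:
    """Map each flagged pronoun in `text` to its suggested alternatives."""
    return {w: ALTERNATIVES[w] for w in text.lower().split() if w in ALTERNATIVES}

def apply_choice(text: str, term: str, choice: str) -> str:
    """Replace a flagged term with the user's selected alternative."""
    return " ".join(choice if w.lower() == term else w for w in text.split())

print(suggest("He should update his resume"))
print(apply_choice("He should update his resume", "he", "they"))
```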
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sahayaraj et al and Ahmed et al's ability to generate segments from a document and to perform document processing/correction to debias terms in the document (via highlighting of candidate terms and selectable alternative/replacement terms, using semantic and syntactic processing/analysis), such that the document could have undergone initial grammar correction prior to replacement of a bias term, as taught by Gaur et al. The combination would have allowed Sahayaraj et al and Ahmed et al to have detected instances of unconscious bias and brought [them] to the user's attention and presented alternative suggestions to the user to avoid the unconscious bias (Gaur et al, paragraph 0007).

With regards to claim 9, which depends on claim 8, the combination of Sahayaraj et al, Ahmed et al and Gaur et al teaches wherein the syntactic debiased document is previously generated using syntactic debiasing criteria by: identifying a syntactic bias term in a grammar corrected document; generating a corresponding non-bias term for the syntactic bias term based on the syntactic debiasing criteria; and generating the syntactic debiased document by replacing the syntactic bias term with the corresponding non-bias term within the grammar corrected document, as similarly explained in the rejection of claim 2, and is rejected under similar rationale.

With regards to claim 16.
The one or more non-transitory computer-readable media of claim 15, the combination of Sahayaraj et al, Ahmed et al and Gaur et al teaches wherein the syntactic debiased document is previously generated using syntactic debiasing criteria by: identifying a syntactic bias term in a grammar corrected document; generating a corresponding non-bias term for the syntactic bias term based on the syntactic debiasing criteria; and generating the syntactic debiased document by replacing the syntactic bias term with the corresponding non-bias term within the grammar corrected document, as similarly explained in the rejection of claim 2, and is rejected under similar rationale.

Response to Arguments

Applicant's arguments filed 10/30/2025 have been fully considered but they are not persuasive.

With regards to claim 1 and its corresponding 35 USC 101 rejection, the applicant argues that the training technique and prediction technique recited for configuring their respective models cannot be practically performed in the human mind. In response, the examiner respectfully points out that portions of the newly amended training and prediction techniques for performing classification and identification of replacement tokens can still be mental processes, as explained in the updated 35 USC 101 rejection of claim 1 above. The portions that are not addressed under Step 2A, Prong One are addressed under Step 2A, Prong Two and Step 2B of the 35 USC 101 rejection above.

With regards to Step 2A, Prong Two, the applicant argues that the amendments to claim 1 constitute an improvement in a technical field (machine learning) and in computer functionality (e.g. machine learning models and debiasing engines).
More specifically, the applicant argues that machine learning technology is improved by leveraging semantic context to convert unlabeled historical data into training a machine learning model for a binary classification task, and that 'the machine learning methodologies of claim 1 allows a machine learning model to consider context of use of a term during the classification task, which improves performance of the machine learning model and debiasing engines incorporating the machine learning model.' However, this argument is not persuasive, since the data consumed/referenced by the machine learning model is considered a field of use.

With regards to applicant's arguments concerning 'provide contextually relevant replacement tokens in a manner that reduces computing expense with respect to the machine learning model and the debiasing engine', no evidence is provided for how computing expense is reduced, nor for which limitations would be responsible for effectuating and enabling the alleged improvement. Thus these arguments are not persuasive.

With regards to applicant's arguments for claim 1 and the Step 2B analysis, the applicant argues 'the added elements cannot be considered to be well-understood, routine, or known within the industry at least because they do not appear to be taught by the prior art of record'. In response to these arguments, the examiner directs the applicant to how the amended/added elements are addressed and shown to be taught by the prior art within the 35 USC 103 rejection above.

With regards to claim 1 and its corresponding amended language, the applicant argues that Sahayaraj does not expressly teach the amended limitations. The examiner notes that the amendments have necessitated a new ground of rejection, and Sahayaraj is now combined with Ahmed et al to teach the amended limitations.
It is further noted that Fig. 2 of Sahayaraj includes teachings of masked text (positions of characters associated with token-text) being updated with a replacement token for the candidate bias term. Thus, applicant's argument that Sahayaraj is silent as to a masked token is not persuasive (additionally, Sahayaraj is combined with Ahmed et al to further teach how masked token text is updated with replacement token(s)).

The applicant argues claims 8 and 15 are allowable for the reasons provided by the applicant for claim 1. However, claim 1 has been shown above to be rejected, and thus applicant's argument is not persuasive.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI, whose telephone number is (571) 272-7596. The examiner can normally be reached Monday-Friday, 9 am-6 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILSON W TSUI/
Primary Examiner, Art Unit 2172

Prosecution Timeline

Nov 29, 2023
Application Filed
Oct 07, 2024
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §101, §103
Sep 24, 2025
Examiner Interview Summary
Sep 24, 2025
Examiner Interview (Telephonic)
Oct 30, 2025
Response Filed
Feb 27, 2026
Final Rejection — §101, §103
Apr 02, 2026
Examiner Interview (Telephonic)
Apr 02, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602535
COMMENT DISPLAY METHOD AND APPARATUS OF A DOCUMENT, AND DEVICE AND MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12589766
AUTONOMOUS DRIVING SYSTEM AND METHOD OF CONTROLLING SAME
2y 5m to grant Granted Mar 31, 2026
Patent 12570284
AUTONOMOUS DRIVING METHOD AND DEVICE FOR A MOTORIZED LAND VEHICLE
2y 5m to grant Granted Mar 10, 2026
Patent 12552376
VEHICLE CONTROL APPARATUS
2y 5m to grant Granted Feb 17, 2026
Patent 12511993
SYSTEMS AND METHODS FOR CONFIGURING A HIERARCHICAL TRAFFIC MANAGEMENT SYSTEM
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+58.1%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
