Prosecution Insights
Last updated: April 19, 2026
Application No. 17/685,106

SYSTEM AND METHOD FOR ANONYMIZING MEDICAL RECORDS

Final Rejection §103

Filed: Mar 02, 2022
Examiner: ERICKSON, BENNETT S
Art Unit: 3683
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Claritrics Inc. d/b/a Buddi AI
OA Round: 4 (Final)

Grant Probability: 38% (At Risk)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (53 granted / 141 resolved; -14.4% vs TC avg)
Interview Lift: +45.9% (resolved cases with vs. without interview)
Avg Prosecution: 3y 7m (typical timeline); 47 currently pending
Total Applications: 188 (career history, across all art units)

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 141 resolved cases.
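The headline figures above can be reproduced with simple arithmetic. A minimal sketch follows; the granted/resolved counts come from the panels, while the Tech Center average and the with/without-interview allowance rates are hypothetical values backed out of the displayed -14.4% delta and +45.9% lift, since the underlying per-case data is not shown here.

```python
# Illustrative recomputation of the examiner metrics shown above.
# The 53/141 counts are from the panel; tc_avg and the two interview
# allowance rates are HYPOTHETICAL, chosen to match the displayed deltas.

granted, resolved = 53, 141
career_allow_rate = granted / resolved           # ~0.376, displayed as 38%

tc_avg = 0.52                                    # hypothetical TC average
delta_vs_tc = career_allow_rate - tc_avg         # ~-0.144, i.e. -14.4%

allow_with_interview = 0.84                      # hypothetical
allow_without_interview = 0.381                  # hypothetical
interview_lift = allow_with_interview - allow_without_interview

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Delta vs TC avg:  {delta_vs_tc:+.1%}")
print(f"Interview lift:   {interview_lift:+.1%}")
```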

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In the amendment filed on October 15, 2025, the following has occurred: claim(s) 1, 10, 19 have been amended. Now, claim(s) 1-3, 5-6, 8-12, 14-15, 17-20 are pending.

Notice to Applicant

The Examiner has not applied 35 U.S.C. 101 rejection(s) to claims 1-3, 5-6, 8-12, 14-15, 17-20 as the claimed limitations are primarily directed to performing tokenization on medical records, generating templatized sentences, and using a trained model to identify one or more protected health information (PHI) containing sentences, which does not fall into one of the categories of abstract ideas.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C.
103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 5-6, 8-12, 14-15, 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ardhanari et al. (U.S. Patent Pre-Grant Publication No. 2021/0248268) in view of Hachey (U.S. Patent Pre-Grant Publication No. 2021/0256160).

As per independent claim 1, Ardhanari discloses a method of anonymizing medical records, comprising: generating a plurality of templatized sentences by performing templatization on the tokenized data (See Paragraphs [0190]-[0194]: After dividing the corpus into sentences, each unique sentence may be mapped to a syntax template that includes tokens associated with alphabetic, numeric, and alphanumeric characters, the system may then create a list of the sentences within the corpus that align to each unique template generated using the tokens above and applies a statistical NER model to determine the quantity of sentences within the corpus that align to a selected template, using this token-based approach tokenizes every word, number, and alphanumeric character in the sentence, which the Examiner is interpreting the mapping to encompass templatization, the token-based approach to tokenize every word, number, and alphanumeric character in the sentence to encompass tokenized data, and the unique template to encompass a plurality of templatized sentences), wherein performing the templatization comprises replacing one or more known patterns in the tokenized data with one or more predefined patterns (See Paragraphs [0190]-[0194]: The system may then create a list of the sentences within the corpus that align to each unique template generated using the tokens, and the syntax tokens in the pattern template
directly translate to standard syntactical pattern matching methods, which the Examiner is interpreting the pattern template to encompass a predefined pattern); generating a set of Protected Health Information (PHI) containing sentences from the plurality of templatized sentences by processing the plurality of templatized sentences using a PHI sentence classifier (See Fig. 13 and Paragraphs [0195], [0199]-[0201]: As part of the pre-processing, the system may deconstruct the batch of notes and assign sentence identifiers (“sentence IDs”) to the individual sentences such that each individual sentence from each of the notes can be processed individually, retaining a record of the sentence IDs for later compiling, and a whitelist tagger may be employed prior to introducing the sentences to tagger models, which the Examiner is interpreting the sentence IDs when utilized with the tagging models to encompass a PHI sentence classifier as it can be identified if PHI is not present (whitelist tagger), and the classification may also allow an operator to identify the average number of PHI elements per record based on any one of the classification metrics and by identifying the prevalence of PHI based on classification, an operator may then prioritize note types (also referred to as PHI-enriched note types) and model training to focus on the records that contain high amounts of PHI data, which the Examiner is interpreting PHI-enriched note types to encompass a set of Protected Health Information (PHI) containing sentences from the plurality of templatized sentences), wherein the PHI sentence classifier comprises a trained deep-learning based PHI classifier and a rule-based PHI classifier (See Paragraphs [0195], [0201]-[0202]: Some of the tagger models may be trained using different training approaches such as rule-based approach, deep-learning models and pattern-based models), and wherein generating the set of PHI containing sentences from the plurality of templatized 
sentences comprises: processing the plurality of templatized sentences using the trained deep-learning based PHI classifier for classifying the plurality of templatized sentences into PHI containing templatized sentences and non-PHI containing templatized sentences (See Fig. 13 and Paragraphs [0199]-[0202], [0204], [0206]: As part of the pre-processing, the system may deconstruct the batch of notes and assign sentence identifiers (“sentence IDs”) to the individual sentences such that each individual sentence from each of the notes can be processed individually, retaining a record of the sentence IDs for later compiling, which the Examiner is interpreting sentence identifiers to encompass a PHI sentence classifier as it can be identified if PHI is not present (whitelist tagger), and the identification of whitelisted entries that do not contain PHI sentences to be removed from the data to be processed and routed to aggregator for later compiling to encompass classifying the plurality of templatized sentences into PHI containing templatized sentences and non-PHI containing templatized sentences); processing the non-PHI containing templatized sentences using the rule-based PHI classifier to determine whether the non-PHI containing templatized sentences comprises one or more PHI containing templatized sentences that were missed by the trained deep-learning based PHI classifier (See Paragraphs [0203]-[0204]: Dreg filters may serve as a final processing filter against specific entity types to ensure that individual entities missed in the previous processing steps are not produced as output to users that should not have access to PHI, the dreg filters may use rule-based templates based on PHI-enriched note types to filter out additional PHI that was not identified by the tagger models, the rule-base templates may be tailored to individual sentence structures within data records to best identify PHI data, which the Examiner is interpreting dreg filters may use rule-based 
templates based on PHI-enriched note types to filter out additional PHI that was not identified by the tagger models to encompass one or more PHI containing templatized sentences that were missed by the trained deep-learning based PHI classifier); and generating the set of PHI containing sentences which includes the PHI containing templatized sentences classified by the trained deep-learning based PHI classifier and the one or more missed out PHI containing templatized sentences identified by the rule-based PHI classifier (See Paragraphs [0203]-[0204]: The rule-base templates may be tailored to individual sentence structures within data records to best identify PHI data, each dreg filter cascade may be directed to a different entity type and include a plurality of rule-based templates, dreg filters may also employ pattern-matching filters or similar approaches to identify PHI data, the system may also compile each of the notes from the batch data set using the sentence IDs stored prior to the tagging carried out by tagging models, which the Examiner is interpreting the system may also compile each of the notes from the batch data set using the sentence IDs stored prior to the tagging carried out by tagging models to encompass generating the set of PHI sentences which includes the PHI containing templatized sentences classified by the trained deep-learning based PHI classifier and the one or more missed out PHI containing templatized sentences identified by the rule-based PHI classifier); identifying one or more PHI entities in the input medical record by processing the generated set of PHI containing sentences using a trained model (See Paragraphs [0225]-[0227], [0234], [0290]-[0294]: Neural network model 1560 includes a configuration 1562, which defines a plurality of layers of neural network model 1560 and the relationships among the layers, illustrative examples of layers include input layers, output layers, convolutional layers, densely connected layers, merge 
layers, and the like, and relationships among the aggregated data may be identified using a neural network model (e.g., neural network model 1560) or other information retrieval methods configured to mine relationships from the aggregated data, which the Examiner is interpreting the neural network to encompass the trained model, and interpreting the curated set of health data may expand the set of entities that the neural network is able to recognize ([0291]) to encompass identifying one or more PHI entities in the input medical record by processing the generated set of PHI containing sentences), wherein the trained model comprises at least a word embedding layer, a character embedding layer, a sequential representation layer (See Paragraphs [0225]-[0226], [0240]: Neural network model 1560 is trained to make predictions (e.g., inferences) based on input data, neural network model 1560 includes a configuration 1562, which defines a plurality of layers of neural network model 1560 and the relationships among the layers, illustrative examples of layers include input layers, output layers, convolutional layers, densely connected layers, merge layers, and the like. 
In some embodiments, neural network model 1560 may be configured as a deep neural network with at least one hidden layer between the input and output layers, and the NLP pipeline may include NLP primitives (e.g., tokenization, embedding, named entity recognition, etc.)), and an attention mechanism or transformer-based architecture for identifying PHI entities, wherein the attention mechanism is configured to attend to contextual relationships within concatenated word and character representations to identify PHI entities in PHI-containing sentences (See Paragraphs [0185]-[0188]: In a preferred embodiment of the disclosure each tagging model is an attention based model, e.g., the BERT model, which the Examiner is interpreting an attention based model to encompass an attention mechanism, and an entity type can include PHI elements ([0185]), and the BERT model understands word context by looking at text from both directions (left and right), which the Examiner is interpreting to encompass attend to contextual relationships within concatenated word and character representations to identify PHI entities in PHI-containing sentences as PHI context can be taken into account ([0190]-[0191], [0253])) and wherein identifying the one or more PHI entities in the input medical record comprises: generating, using the word embedding layer, word level representations for each PHI containing templatized sentence of the generated set of PHI containing sentences (See Paragraphs [0191], [0225]-[0226], [0240], [0279]-[0280], [0294]: The NLP pipeline may include NLP primitives (e.g., tokenization, embedding, named entity recognition, etc.), and the neural network model includes a configuration which defines a plurality of layers of neural network model and the relationships among the layers); generating, using the character embedding layer, character level representations for each character of a set of tokenized sentences of the plurality of tokenized sentences (See Paragraphs [0191], 
[0225]-[0226], [0240], [0279]-[0280], [0294]: The NLP pipeline may include NLP primitives (e.g., tokenization, embedding, named entity recognition, etc.), and the neural network model includes a configuration which defines a plurality of layers of neural network model and the relationships among the layers), wherein the set of tokenized sentences corresponds to the set of PHI containing sentences (See Paragraphs [0293]-[0294]: The uncurated set of health records may be tokenized, e.g., broken into sentences, words, or other text fragments and, the size of the uncurated set of health records may be larger than the curated set of health records (e.g., it may include a greater number of records, a greater overall amount of patient data, or both)); concatenating the word level representations and the character level representations to generate final representations corresponding to each PHI containing sentence of the set of PHI containing sentences (See Paragraphs [0191], [0225]-[0226], [0240], [0245], [0294]: The plurality of identifiers may be stored in an appropriate data structure, such as a triplet that identifies the array index of the occurrence of the token with a contiguous array of concatenated documents, the document identifier of the occurrence, and the offset within the identified document to the occurrence, which the Examiner is interpreting the subset of corpus corresponds to a concatenation of each document in the subcorpus to encompass concatenating the word level representations and the character level representations to generate final representations); identifying, using the sequential representation layer, the one or more PHI entities in the input medical record by processing the final representations (See Paragraphs [0172]-[0173], [0191], [0225]-[0226], [0240], [0245], [0294]: Computations can be broken into sub-tasks that are then processed in pipelines, either concurrently or sequentially or both based on the arrangement of the pipelines, and the 
neural network model includes a configuration which defines a plurality of layers of neural network model and the relationships among the layers); and transmitting the anonymized medical record to an external entity (See Figure 1A and Paragraphs [0074]-[0075], [0110], [0173]-[0175]: The chain of trust extends to a fourth entity who receives the results of the processing, which the Examiner is interpreting the fourth entity to encompass an external entity and the results of the processing to encompass the anonymized medical record.) While Ardhanari teaches the method as described above, Ardhanari may not explicitly teach performing tokenization on an input medical record comprising a plurality of sentences to generate tokenized data, wherein the tokenized data comprises a plurality of tokenized sentences; and generating an anonymized medical record by anonymizing the identified one or more PHI entities in the input medical record. Hachey teaches a method for performing tokenization on an input medical record comprising a plurality of sentences to generate tokenized data, wherein the tokenized data comprises a plurality of tokenized sentences (See Paragraphs [0027]-[0030]: The medical service provider may transmit related electronic health records (EHR) to the system to inform the recognition component of clinical report details including patient name, doctor name and other PHI which may need to be removed, and the tokenising component may receive the parsed text or raw optical character recognition (OCR) text, and the tokenising component may generate tokens from the informational items of words, characters, or groups of words or characters (e.g., sentences)); and generating an anonymized medical record by anonymizing the identified one or more PHI entities in the input medical record (See Paragraphs [0077], [0081], [0085]: The clinical report is analysed by the symbolic AI component and PHI labels are identified, and removed to form an anonymised report, which the 
Examiner is interpreting the anonymised report to encompass an anonymized medical record.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ardhanari to include performing tokenization on an input medical record comprising a plurality of sentences to generate tokenized data, wherein the tokenized data comprises a plurality of tokenized sentences; and generating an anonymized medical record by anonymizing the identified one or more PHI entities in the input medical record as taught by Hachey. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Ardhanari with Hachey with the motivation of anonymising or de-identifying protected health information for use in training a machine learning model (See Background of Invention of Hachey in Paragraph [0005]).

Claim(s) 10 and 19 mirror claim 1 only within different statutory categories, and are rejected for similar reasons as claim 1. Claim 10 includes the additional limitations of "a memory storing computer executable instructions; and at least one processor in electronic communication with the memory and configured to:" which are encompassed by Ardhanari in Paragraphs [0309]-[0311] that a processor will receive instructions and data from a read only memory or a random access memory or both. Claim 19 includes the additional limitation of "one or more instructions executable by at least one processor, the one or more instructions comprising:" which are encompassed by Ardhanari in Paragraphs [0309]-[0311] that computer program instructions can be executed by a processor.

As per claim 2, Ardhanari/Hachey discloses the method of claim 1 as described above.
Ardhanari may not explicitly teach further comprising: performing pre-processing on the input medical record prior to performing the tokenization, wherein performing the pre-processing comprises cleaning up the input medical record by performing one or more operations comprising: removing stop words, removing special characters, removing punctuations, stemming, lemmatization, removing extra white spaces, and converting whole medical record in lowercase letters.

Hachey teaches a method further comprising: performing pre-processing on the input medical record prior to performing the tokenization, wherein performing the pre-processing comprises cleaning up the input medical record by performing one or more operations comprising: removing stop words, removing special characters, removing punctuations, stemming, lemmatization, removing extra white spaces, and converting whole medical record in lowercase letters (See Paragraphs [0065]-[0066]: The document format and tokenisation processes may include software components such as tokenizer that cleans text to remove extra whitespace, which the Examiner is interpreting to encompass performing pre-processing on the input medical record prior to performing the tokenization, wherein performing the pre-processing comprises cleaning up the input medical record by performing the operation comprising removing extra white spaces.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ardhanari to include performing pre-processing on the input medical record prior to performing the tokenization, wherein performing the pre-processing comprises cleaning up the input medical record by performing the operation comprising removing extra white spaces as taught by Hachey.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Ardhanari with Hachey with the motivation of anonymising or de-identifying protected health information for use in training a machine learning model (See Background of Invention of Hachey in Paragraph [0005]).

Claim(s) 11 mirrors claim 2 only within a different statutory category, and is rejected for similar reasons as claim 2.

As per claim 3, Ardhanari/Hachey discloses the method of claims 1-2 as described above. Ardhanari further teaches wherein performing pre-processing on the input medical record further comprises: merging a sentence of the input medical record with previous and/or next sentences using a deep learning based context merger classifier (See Paragraphs [0189]-[0190]: One or more short sentences may be combined, which may provide additional context to the machine learning models and may improve the NER tagging, and the pre-processing can be completed by natural language processing tools, which the Examiner is interpreting natural language processing tools to encompass a deep learning based context merger classifier.)

Claim(s) 12 mirrors claim 3 only within a different statutory category, and is rejected for similar reasons as claim 3.

As per claim 5, Ardhanari/Hachey discloses the method of claim 1 as described above. Ardhanari further teaches wherein the trained deep-learning based PHI classifier is a sequential deep learning classifier (See Paragraphs [0188], [0240], [0291]: A neural network is trained using the curated set of health records, which the Examiner is interpreting the neural network to encompass a sequential deep learning classifier as the tagging model can be a sequence model (e.g., LSTMs) or recurrent neural networks could be used for tagging entities.)

Claim(s) 14 mirrors claim 5 only within a different statutory category, and is rejected for similar reasons as claim 5.
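The two-stage sentence classification recited in claim 1 (a trained classifier makes a first pass, then a rule-based classifier re-checks the sentences the model cleared) can be sketched generically as follows. This is an illustrative stand-in, not the implementation of Ardhanari or Hachey: the "model" here is a toy keyword heuristic substituting for the deep-learning classifier, and the regex rules are invented examples.

```python
import re

# Rule patterns standing in for the rule-based PHI classifier's second pass.
# These specific patterns are hypothetical examples, not from the references.
RULE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like identifier
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),        # date (e.g., date of birth)
    re.compile(r"\bMRN[:\s]*\d+\b", re.I),       # medical record number
]

def model_predicts_phi(sentence):
    """Toy stand-in for the trained deep-learning PHI sentence classifier."""
    return any(k in sentence.lower() for k in ("patient", "dr.", "name"))

def classify_sentences(sentences):
    phi, non_phi = [], []
    for s in sentences:
        (phi if model_predicts_phi(s) else non_phi).append(s)
    # Second stage: rules inspect the model's non-PHI bucket for misses.
    missed = [s for s in non_phi if any(p.search(s) for p in RULE_PATTERNS)]
    return phi + missed  # combined set of PHI-containing sentences

notes = [
    "Patient was admitted for observation.",
    "DOB 04/12/1987, follow up in two weeks.",
    "Vitals were stable overnight.",
]
print(classify_sentences(notes))
```

The second sentence illustrates the point of the arrangement: the keyword "model" misses it, but the date rule in the second pass recovers it.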
As per claim 6, Ardhanari/Hachey discloses the method of claim 1 as described above. Ardhanari further teaches further comprising: identifying one or more missed out PHI entities in the input medical record by generated set of PHI containing sentences using a rule based parser (See Paragraphs [0203]-[0204]: Dreg filters may serve as a final processing filter against specific entity types to ensure that individual entities missed in the previous processing steps are not produced as output to users that should not have access to PHI, the dreg filters may use rule-based templates based on PHI-enriched note types to filter out additional PHI that was not identified by the tagger models, the rule-base templates may be tailored to individual sentence structures within data records to best identify PHI data, which the Examiner is interpreting dreg filters may use rule-based templates based on PHI-enriched note types to filter out additional PHI that was not identified by the tagger models to encompass identifying one or more missed out PHI entities in the input medical record by generated set of PHI containing sentences using a rule based parser).

Claim(s) 15 and 20 mirror claim 6 only within different statutory categories, and are rejected for similar reasons as claim 6.

As per claim 8, Ardhanari/Hachey discloses the method of claim 1 as described above.
Ardhanari further teaches wherein generating the anonymized medical record comprises: generating the anonymized medical record by replacing the one or more identified PHI entities with one or more character strings, wherein the one or more character strings comprise random character strings or one or more PHI strings equivalent to the one or more identified PHI entities (See Paragraphs [0305], [0324]: The information in the text sequence comprises changing a value of one or more tagged entities to a randomized value, which the Examiner is interpreting a randomized value to encompass random character strings or one or more PHI strings equivalent to the one or more identified PHI entities.)

Claim(s) 17 mirrors claim 8 only within a different statutory category, and is rejected for similar reasons as claim 8.

As per claim 9, Ardhanari/Hachey discloses the method of claims 1 and 8 as described above. Ardhanari further teaches further comprising: storing a mapping between the identified one or more PHI entities and the one or more character strings in an encrypted file (See Paragraphs [0110], [0287], [0325]: The data inside an enclave is in encrypted form and is decrypted before processing and it may be re-encrypted before the results are shared with external entities and the rule-based template may be created by mapping each of the one or more portions of the text sequence); and converting the anonymized medical record back into the input medical record by replacing the one or more character strings with the one or more PHI entities based on the mapping stored in the encrypted file (See Paragraphs [0053]-[0057], [0110], [0325]: Encrypted data once injected into a secure enclave could then be decrypted within the secure enclave, processed, and the results could be encrypted in preparation for output, which the Examiner is interpreting the decrypting to encompass converting the anonymized medical record back into the input medical record.)
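The replace-and-restore scheme recited in claims 8-9 can be sketched as follows: each identified PHI entity is replaced with a random character string, and the entity-to-string mapping is retained so the record can be restored. This is a hedged, minimal sketch of the claimed arrangement, not the references' implementation; the claims store the mapping in an encrypted file, and encryption is omitted here to keep the sketch self-contained.

```python
import secrets

def anonymize(record, phi_entities):
    """Replace each PHI entity with a random character string; keep a mapping."""
    mapping = {}
    for entity in phi_entities:
        token = "X" + secrets.token_hex(4).upper()  # random character string
        mapping[entity] = token
        record = record.replace(entity, token)
    return record, mapping

def restore(anonymized, mapping):
    """Invert the substitution using the stored mapping (claim 9's round trip).
    A production version would also guard against token collisions."""
    for entity, token in mapping.items():
        anonymized = anonymized.replace(token, entity)
    return anonymized

record = "John Doe, DOB 04/12/1987, presented with chest pain."
anon, mapping = anonymize(record, ["John Doe", "04/12/1987"])
assert "John Doe" not in anon
assert restore(anon, mapping) == record
```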
Claim(s) 18 mirrors claim 9 only within a different statutory category, and is rejected for similar reasons as claim 9.

Response to Arguments

In the Remarks filed on October 15, 2025, the Applicant argues that the newly amended and/or added claims overcome the 35 U.S.C. 103 rejection(s). The Examiner does not acknowledge that the newly added and/or amended claims overcome the 35 U.S.C. 103 rejection(s).

The Applicant argues that: (1) Ardhanari and Hachey both generally discuss the use of neural networks for natural language processing tasks, including PHI identification. However, the entirety of Ardhanari's disclosure is directed to traditional sequential models such as LSTM, Bi-LSTM, CRF, and RNN architectures (see, e.g., Ardhanari, [0188], [0190], [0240], [0291]). While Ardhanari generically references "neural network models" and "layers," there is no disclosure or suggestion of a transformer-based architecture or an attention mechanism as required by the amended claim. Similarly, Hachey is directed to symbolic AI pipelines, rule-based approaches, and conventional machine learning models for tokenization and PHI detection (see, e.g., Hachey, [0027]-[0030], [0065]-[0066]). Hachey does not disclose or suggest the use of transformer models (e.g., BERT, GPT, or similar architectures) or any attention mechanism for PHI entity identification; (2) amended claim 1 requires not only the presence of an attention mechanism or transformer-based architecture, but also that the attention mechanism is "configured to attend to contextual relationships within concatenated word and character representations to identify PHI entities in PHI-containing sentences." This is a specific technical improvement over traditional sequential models, as transformer-based architectures leverage self-attention to capture long-range dependencies and contextual relationships in text, which is fundamentally different from the operation of LSTM, CRF, or RNN models.
Neither Ardhanari nor Hachey teaches or suggests concatenating word and character representations and then processing those concatenated representations using an attention mechanism or transformer-based architecture for PHI identification. Ardhanari's disclosure of "embedding layers" and "sequential layers" is limited to conventional neural network architectures and does not extend to the use of self-attention or transformer blocks. Hachey is even further removed, as it does not discuss deep learning architectures for PHI entity recognition at all; (3) there is no motivation or suggestion in Ardhanari or Hachey to modify the disclosed systems to incorporate a transformer-based architecture or an attention mechanism for the purpose of identifying PHI entities. The cited references do not recognize the advantages of attention mechanisms or transformer models for this task, nor do they provide any teaching or suggestion that would lead a person of ordinary skill in the art to implement such an architecture in the context of PHI entity identification. Accordingly, the cited references, either individually or in combination, fail to teach or suggest the amended limitation requiring an attention mechanism or transformer-based architecture for identifying PHI entities, wherein the attention mechanism is configured to attend to contextual relationships within concatenated word and character representations to identify PHI entities in PHI-containing sentences. The Applicant respectfully submits that the rejection should be withdrawn. Therefore, Ardhanari and Hachey, either alone or in combination, fail to teach or suggest the above-mentioned features of amended independent claim 1. Accordingly, the Applicant respectfully requests that the rejection of independent claims 1, 10, and 19 under 35 U.S.C. 
§ 103 be withdrawn; (4) amended independent claims 10 and 19, though different in scope, recite subject matter analogous to the subject matter recited in amended independent claim 1. Therefore, the remarks presented above for amended independent claim 1 are equally applicable to amended independent claims 10 and 19. Accordingly, the Applicant respectfully requests that the rejection of independent claims 1, 10, and 19 under 35 U.S.C. § 103 be withdrawn. Dependent claims 2-3, 5-6, 8-9, 11-12, 14-15, 17-18, and 20 are also believed to overcome this outstanding rejection by virtue of their respective dependencies on amended independent claims 1, 10, and 19, which have been shown to overcome the outstanding rejection, as well as for their additional claimed features. Accordingly, the Applicant respectfully requests that the rejection of claims 1-3, 5-6, 8-12, 14-15, and 17-20 under 35 U.S.C. § 103 be withdrawn.

In response to argument (1), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that Ardhanari’s disclosure in Paragraph [0188] describes “In a preferred embodiment of the disclosure each tagging model is an attention based model, e.g., the BERT model described in Devlin, et al., “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, which is incorporated by reference herein in its entirety. However other models, such as sequence models (e.g., long short-term memory networks (LSTMs), LSTM with a conditional random field (LSTM-CRFs), or recurrent neural networks (RNNs)) could be used for tagging entities.”, and the BERT model takes word context by looking at text from the left and the right. The BERT model is considered an attention mechanism. The 35 U.S.C. 103 rejection(s) stand.

In response to argument (2), the Examiner does not find the Applicant’s argument(s) persuasive.
The BERT model understands word context by looking at text from both directions (left and right), which the Examiner is interpreting to encompass attend to contextual relationships within concatenated word and character representations to identify PHI entities in PHI-containing sentences as PHI context can be taken into account ([0190]-[0191], [0253]). The 35 U.S.C. 103 rejection(s) stand. In response to argument (3), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that one of ordinary skill in the art would have been motivated to modify Ardhanari with Hachey with the motivation of anonymising or de-identifying protected health information for use in training a machine learning model (See Background of Invention of Hachey in Paragraph [0005]). Ardhanari discloses the identification of PHI elements ([0194]-[0195], [0200], [0204]), and Ardhanari describes in Paragraph [0188] that “a preferred embodiment of the disclosure each tagging model is an attention based model, e.g., the BERT model”. Hachey describes detecting PHI ([0027]-[0028], [0032]-[0034]), and utilizing AI model training components, and a zoning model ([0035]). The Examiner maintains that the combination of Ardhanari/Hachey encompasses the newly amended claims. The 35 U.S.C. 103 rejection(s) stand. In response to argument (4), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that amended independent claims 10 and 19 are similarly rejected as independent claim 1. Dependent claims 2-3, 5-6, 8-9, 11-12, 14-15, 17-18, and 20 are rejected individually as described above in the 35 U.S.C. 103 rejection(s), and are rejected due to their dependency to independent claims 1, 10, and 19. The 35 U.S.C. 103 rejection(s) stand. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bennett S Erickson whose telephone number is (571) 270-3690. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Bennett Stephen Erickson/
Primary Examiner, Art Unit 3683
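For readers outside the art, the disputed limitation can be made concrete with a short sketch. The numpy snippet below illustrates, under toy assumptions, the general technique the amended claims recite: each token's word-level embedding is concatenated with a character-level summary, and a single scaled dot-product self-attention layer then lets every token attend to every other token in the sentence, in both directions, before any PHI tagging head would run. All names, dimensions, vocabulary, and weights here are illustrative placeholders; nothing is drawn from the application, Ardhanari, or Hachey.

```python
# Toy sketch of the claimed pipeline: concatenated word + character
# representations processed by a bidirectional self-attention layer.
# Weights are random stand-ins for learned parameters.
import numpy as np

WORD_DIM, CHAR_DIM = 8, 4          # illustrative embedding sizes
MODEL_DIM = WORD_DIM + CHAR_DIM    # size of the concatenated representation

rng = np.random.default_rng(0)
# Lookup tables standing in for learned word and character embeddings.
word_table = {w: rng.standard_normal(WORD_DIM)
              for w in ["patient", "john", "doe", "was", "admitted"]}
char_table = rng.standard_normal((128, CHAR_DIM)) * 0.1

def token_representation(token):
    """Concatenate the word embedding with a character-level summary."""
    word_vec = word_table[token]
    char_vec = char_table[[ord(c) % 128 for c in token]].mean(axis=0)
    return np.concatenate([word_vec, char_vec])   # shape (MODEL_DIM,)

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over one sentence.

    Each output row is a context-aware mixture of ALL token
    representations, so every token attends to tokens on its left
    and its right (the bidirectionality attributed to BERT above).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ v, weights

sentence = ["patient", "john", "doe", "was", "admitted"]
x = np.stack([token_representation(t) for t in sentence])   # (5, MODEL_DIM)

w_q, w_k, w_v = (rng.standard_normal((MODEL_DIM, MODEL_DIM)) * 0.1
                 for _ in range(3))
contextual, attn = self_attention(x, w_q, w_k, w_v)

print(contextual.shape)                       # (5, 12): one context vector per token
print(np.allclose(attn.sum(axis=-1), 1.0))    # True: each row is a distribution
```

In a real system the `contextual` vectors would feed a tagging head (e.g., a softmax or CRF layer) that labels tokens such as "john" and "doe" as PHI; that head is omitted here. Note that unrestricted self-attention is inherently bidirectional, which is the property the Examiner relies on when equating BERT's left-and-right context with the claimed attention over contextual relationships.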

Prosecution Timeline

Mar 02, 2022
Application Filed
Feb 20, 2024
Non-Final Rejection — §103
Aug 23, 2024
Response Filed
Nov 19, 2024
Final Rejection — §103
May 23, 2025
Request for Continued Examination
May 27, 2025
Response after Non-Final Action
Jun 13, 2025
Non-Final Rejection — §103
Oct 15, 2025
Response Filed
Jan 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597518
INCORPORATING CLINICAL AND ECONOMIC OBJECTIVES FOR MEDICAL AI DEPLOYMENT IN CLINICAL DECISION MAKING
2y 5m to grant Granted Apr 07, 2026
Patent 12580069
AUTOMATIC SETTING OF IMAGING PARAMETERS
2y 5m to grant Granted Mar 17, 2026
Patent 12580061
System and Method for Virtual Verification in Pharmacy Workflow
2y 5m to grant Granted Mar 17, 2026
Patent 12567501
STABILITY ESTIMATION OF A POINT SET REGISTRATION
2y 5m to grant Granted Mar 03, 2026
Patent 12499978
METHODS, SYSTEMS, AND DEVICES FOR DETERMINING MUTLI-PARTY COLOCATION
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+45.9%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
