DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 5/2/2024 and 10/22/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-12, and 14-20 stand rejected:
The independent claims 1, 10, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite, respectively, a “method”, a “non-transitory computer-readable storage medium”, and a “system” which, at a high level, determine a “normalized named entity” (e.g., the actual name of, e.g., a person) from a “transaction record” (Sp. ¶ 0041 lines 7+: “For example, for the transaction record, “TRIA * CIRC14-21348-882-821NK”) which comprises a “non-normalized” version of the name, in this case “CIRC”. To do so, the claimed invention transforms the “transaction record” into a “latent” “vector” (also called a “first embedding” representation), compares the resulting “first embedding” (associated with the “transaction record”) with “second embeddings” associated with “historical transactions” (i.e., “stored by the computing server” (Sp. ¶ 0043 S2)), and performs a “similarity” comparison (i.e., determines the “distance between [their associated] vectors in the latent space” (Sp. ¶ 0060 lines 13-14)), wherein the “smaller” the distance, the greater the similarity. This latter part is done by “a second transformer model” (e.g., an “open source large language model” (Sp. ¶ 0062 S2)). The “second transformer model” then “classifies” (essentially, identifies the “normalized” version of) the “non-normalized” version of the “named entity”, which for the example above is “CIRCLE”, after consulting a “list” in the “historical transactions” during its comparisons. If the “normalized” version is determined, the invention “associates” the “transaction record” with the determined “normalized named entity”.
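For illustration only, the latent-space similarity comparison summarized above can be sketched in a few lines of Python. The character-frequency “embedding,” the record strings, and the candidate names below are hypothetical stand-ins for the claimed transformer embeddings, not the applicant’s or the references’ actual implementation:

```python
import math

def embed(text: str, dim: int = 26) -> list[float]:
    """Toy character-frequency vector standing in for a transformer
    embedding model (hypothetical; real latent vectors come from a model)."""
    vec = [0.0] * dim
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance in the latent space; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical transaction records; "CIRC" is the non-normalized name.
first_embedding = embed("TRIA * CIRC14-21348-882-821NK")
historical = {"CIRCLE": embed("TRIA * CIRCLE 14-21348"),
              "SQUARE": embed("ACME * SQUARE 99-00001")}
# The historical record whose embedding is closest is the best candidate.
best = min(historical, key=lambda name: distance(first_embedding, historical[name]))
print(best)  # → CIRCLE
```

The sketch only illustrates the “smaller distance, greater similarity” reasoning; it does not reproduce the claimed transformer models.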
Almost everything here can be done mentally. For example, but for the recitation of the “one or more processors” (in claims 10 and 19), a human who receives a receipt of a transaction (a transaction record) in which names are not in their regular format, or in which some characters are illegible or lost, can compare the receipt with past similar receipts, find a receipt bearing a similar name (even if not in regular format) that uses mostly the same letters, and determine that the two are associated with the same named entity. If the historical receipt has a record of the full name (normalized named entity) of the party involved, the human can then classify the searched receipt with that “full name” (“normalized named entity”). If a claim limitation or limitations, under their broadest reasonable interpretation, cover performance of the limitation(s) in the mind but for the recitation of generic computer components, then they fall within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
The judicial exception is not integrated into a practical application. In particular, claim 1 recites a “first transformer model” and a “second transformer model”, while claims 10 and 19 additionally recite one more element, i.e., “one or more processors”, to perform all the claim limitations. The “one or more processors” as well as the “first transformer model” and the “second transformer model” in all the said limitations are recited at a high level of generality (i.e., as a generic processor and/or software performing generic computer functions associated with the claim limitations outlined above) such that they amount to no more than mere instructions to apply the exception using generic computer components. These additional elements are defined in “[0080] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software engines, alone or in combination with other devices. In some embodiments, a software engine is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described”; ¶ 0060 S2: “The first transformer model may be an embedding model”; ¶ 0062 S2: “In some embodiments, the second transformer model is an open-source large language model that is fine-tuned to perform the classification of the non-normalized merchant name”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are thus directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a “processor” and “first” and “second transformer model[s]” to perform all the limitations outlined above amount to no more than mere instructions to apply the exception using generic computer components or software. Mere instructions to apply an exception using generic computer components or software cannot provide an inventive concept. The claims are thus not patent eligible.
Regarding claims 2, 11, and 20, if the human could not determine a proper name associated with the receipt’s poorly legible name under search, he could use other means, such as guessing the correct characters that are illegible in the name by relying on knowledge of common names in his language, and, upon validation, simply add the transaction record with the valid name to a list for future use.
Regarding claims 3 and 12, the human could use any dictionary of names in a language.
Regarding claims 5 and 14, the human could rely on his own personal knowledge of vocabulary in his language and/or a friend’s knowledge.
Regarding claims 6 and 15, the human would intuitively use semantic considerations (such as a name consistent with the language of the transaction) in guessing the correction to the illegible name of the transaction record.
Regarding claims 7 and 16, the past receipts (set of similar transactions) would all possess the actual names (candidate normalized named entities) that can be associated with the receipt under investigation.
Regarding claims 8 and 17, the human could use any additional information that he may deem necessary in determining the actual names (candidate normalized named entities) of the receipt under investigation.
Regarding claims 9 and 18, the receipt could correspond to a bank statement or bank-related financial activity such as a monetary withdrawal.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-8, 10-12, 14-17, 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Subramanian (US 2023/0185796), and further in view of Goravar et al. (US 2024/0177818).
Regarding claim 1, Subramanian does teach a computer-implemented method (¶ 0010 S1: “A computer-implemented method includes memory hardware configured to store a predictive analyzer module, a fallout transaction history record database, and computer-executable instructions”),
comprising:
receiving a transaction record, the transaction record comprising a text string that includes a non-normalized version of a name of a named entity (Fig. 9 steps “928” “934” “936” and “940” respectively: “scan document” (receive, e.g., see ¶ 0025, a “prescription” “document” (a transaction record), also defined in ¶ 0232 last S. as “transaction records (e.g., records for prescription fill request transactions)”) “to obtain text data” (comprising a text string) “obtain entity field information” (which includes an entity) “Determine whether any identified entity fields have missing values” “Any field values missing?” “Y” (is a non-normalized version of a name as it has “missed” characters));
generating a first embedding of the text string (the “missing data” (a non-normalized version of a name) “referred to as an embedding” (also defined as a first embedding) “may be used” (is generated));
identifying a set of similar transactions using the first embedding, wherein identifying the set of similar transactions comprises comparing the first embedding to
second embeddings representing historical transactions, the first embedding and the second embeddings in the latent space of the first transformer model (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” “The machine learning model data 412 may include historical feature vector inputs” (identifying a set of similar transactions or second embeddings representing historical transactions in “vector” (latent space)) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field in a document” “for example, when a document includes one or more entity fields that are missing data” (to be compared with a latent representation corresponding to an associated entity or the first embedding which was not identified due to lack of recognition of some of its characters when the “document” (e.g., the “prescription” (transaction record)) was scanned with optical character recognition));
inputting the text string of the transaction record and the set of similar transactions in natural language into a second transformer model to request the second transformer model to determine whether the non-normalized version of the name of the named entity is classifiable to a normalized named entity in a list of candidate normalized named entities (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” (a second transformer model) “The machine learning model data 412” “may include” (is inputted) “historical feature vector inputs” (the set of similar transactions in natural language which define a list of candidate normalized named entities) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” (to determine whether a corresponding normalized named entity) “in a document” “for example, when a document includes one or more entity fields that are missing data” (corresponding to an input text string of the transaction record can be classified or obtained) “or were not identified when scanning the document with optical character recognition”; ¶ 0103 last S: “historical feature vector inputs” “such as historical prescription fill request documents that were received and successfully processed to fill a prescription” (i.e., the “historical” (historical) “feature vector inputs” (second embeddings) represent “historical prescription[s]” (of list of transaction records used to determine “missing data” (text string))));
receiving an output from the second transformer model (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” (the second transformer model) “The machine learning model data 412” “may include” (is inputted) “historical feature vector inputs” “that are used to train one or more machine learning models to generate a prediction output” (generates an output));
determining that the output indicates that the non-normalized version of the name in the transaction record is classifiable to one of the normalized named entities in the list (¶ 0103 S2: “The machine learning model data 412” “may include” “historical feature vector inputs” (using the list to determine) “that are used to train one or more machine learning models to generate a prediction output” (to determine the output) “such as a prediction of a correct entity field” (as the “correct” (normalized) named entity for the non-normalized version of the name)); and
associating the transaction record with a classified normalized named entity (¶ 0103 last S: the “correct entity field” (the classified normalized named entity) helps “successfully” “to fill a prescription” (associates with the transaction record)).
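For illustration only, the “inputting the text string of the transaction record and the set of similar transactions in natural language” limitation mapped above can be sketched as follows. The prompt wording, the field names (`text`, `normalized`), and the sample records are hypothetical; the second transformer model itself is not implemented here:

```python
def build_prompt(record: str, similar: list[dict]) -> str:
    """Render a transaction record and its similar historical transactions
    in natural language, as input for a second (classifier) model.
    Field names are hypothetical stand-ins, not the claimed schema."""
    lines = [f"Transaction record: {record}", "Similar historical transactions:"]
    for s in similar:
        lines.append(f"- {s['text']} -> normalized name: {s['normalized']}")
    # Candidate normalized named entities come from the similar transactions.
    candidates = sorted({s["normalized"] for s in similar})
    lines.append(f"Candidates: {', '.join(candidates)}")
    lines.append("Which candidate, if any, does the record's name belong to?")
    return "\n".join(lines)

similar = [{"text": "TRIA * CIRCLE 14-21348", "normalized": "CIRCLE"},
           {"text": "TRIA * CIRCLE 14-99999", "normalized": "CIRCLE"}]
prompt = build_prompt("TRIA * CIRC14-21348-882-821NK", similar)
print(prompt)
```

The sketch shows only the natural-language packaging of the record and the candidate list; classification by the second transformer model is outside its scope.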
Subramanian does not specifically disclose that its word embedding into vectors is performed by a first transformer model.
Goravar et al. do teach a transformer model responsible for generating embedding vectors in application to named entity recognition (¶ 0067 S2: “entity recognition model comprises: tokenizing the medical report into a plurality of tokens at operation 602, encoding each of the plurality of tokens into a corresponding embedding vector” (an embedding including vectors) “e.g., by using a transformer model” (obtained by a first transformer model)).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “transformer model” of Goravar et al. into Subramanian. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Subramanian to benefit from a systematic approach to generating its “embedding” “vectors” by virtue of using a model.
Regarding claim 2, Subramanian does teach the method of claim 1, further comprising:
determining that the output indicates that the non-normalized version of the name in the transaction record is not classifiable to one of the normalized named entities in the list (Fig. 16 steps “1616”, “1620” respectively: “Perform validity checks against existing database records”, “Request valid?” “N” (i.e., it is determined that the output failed a “validity” “check” (is not classifiable to a normalized named entity) in the “existing database” (list));
generating a normalized version of the name in the transaction record (Fig. 16 step “1632” followed by Figs. 17 and 18 steps “1728” “1816” “1828” “1832” respectively: “Transmit prescription fill request to fallout processing module (See FIG. 17)” “Transmit fallout prescription fill request to predictive analyzer module (See FIG. 18)” “DOB matching used?” “Y” “Prediction successful?” “Y” (generating a normalized version of the name in the “prescription” (transaction record))); and
storing the generated normalized version of the name (step “1840” and steps “1944” “1948” respectively: “Create invoice for prescription fill request and generate processing workflow” “Return successful predicted prescription information to predictive analyzer module” “Store new record in fall out transaction history data” (store the “new record” (comprising the “valid” normalized name))).
Regarding claim 3, Subramanian does teach the method of claim 1, wherein the second transformer model is an open-source large language model that is fine-tuned to perform classification of the non-normalized version of the name of the named entity (¶ 0024: “FIG. 8 is a graphical representation of layers of an example long short-term memory (LSTM) machine learning model” (i.e., the “machine learning model” (the second transformer model) is an “LSTM” (an open-source large language model) used for the steps in Fig. 8 quoted above, tailored to “successful” “prediction” (classification) of “entity fields that are missing data” (non-normalized) to normalized names)).
Regarding claim 5, Subramanian does not specifically disclose the method of claim 1, wherein the first transformer model is an off-the-shelf embedding model.
Goravar et al. do teach the method of claim 1, wherein the first transformer model is an off-the-shelf embedding model (¶ 0067 S2: “entity recognition model comprises: tokenizing the medical report into a plurality of tokens at operation 602, encoding each of the plurality of tokens into a corresponding embedding vector (e.g., by using a transformer model)” (a “transformer model” specifically tailored to “medical reports” (i.e., an off-the-shelf model))).
For the motivation to combine Subramanian and Goravar et al., see the discussion of claim 1 above.
Regarding claim 6, Subramanian does teach the method of claim 1, wherein the list of candidate normalized named entities comprises normalized named entities with high semantic similarity to the text string of the transaction record (¶ 0211: “matching threshold value” “e.g., a similarity score that sufficiently indicates a correct match” (semantic similarity) “between the predicted missing name field value” (between the text string of normalized named entities) “and the document scan candidate name field value” (and the text string of the transaction record) “based on the” “thresholds may include, but are not limited to, 0.75 (where 0 is no match at all and 1 is an exact match), 0.85, 0.9” (e.g., 90 percent; see “TABLE 2” in ¶ 0213, which shows how semantically close the two sets are)).
Regarding claim 7, Subramanian does teach the method of claim 1, wherein the list of candidate normalized named entities comprises normalized named entities associated with the transactions in the set of similar transactions (¶ 0103 S2+: “The machine learning model data 412” “may include” “historical feature vector inputs” “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” (the normalized named entities are associated with) “in a document” “for example, when a document includes one or more entity fields that are missing data” “or were not identified when scanning the document with optical character recognition” “historical feature vector inputs” “such as historical” (similar) “prescription fill” (transactions) “request documents that were received and successfully processed to fill a prescription”).
Regarding claim 8, Subramanian does teach the method of claim 1, wherein inputting the text string of the transaction record and the set of similar transactions in natural language into the second transformer model further comprises inputting additional information about the transaction record into the second transformer model (¶ 0103 S2+: “The machine learning model data 412” (the second transformer model) “may include” (comprises) “historical feature vector inputs” (second embeddings representing historical transactions) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” “in a document” “for example, when a document includes one or more entity fields that are missing data” “or were not identified when scanning the document with optical character recognition” “The historical feature vector inputs may include the historical data structures which are specific to multiple historical database entities” “such as historical” “prescription fill” “request documents” (and additional information about a “prescription refill” (the transaction record)) “that were received and successfully processed to fill a prescription”).
Regarding claim 10, Subramanian does teach a non-transitory computer-readable storage medium configured to store computer code comprising instructions, wherein the instructions, when executed by one or more processors (¶ 0057 S1: “The order processing device 114 may include circuitry, a processor, a memory to store data and instructions, and communication functionality”; ¶ 0282 sentence before last: “Similarly, one or more instructions stored in a non-transitory computer-readable medium may be executed in different order (or concurrently) without altering the principles of the present disclosure”),
cause the one or more processors to:
receive a transaction record, the transaction record comprising a text string that includes a non-normalized version of a name of a named entity (Fig. 9 steps “928” “934” “936” and “940” respectively: “scan document” (receive, e.g., see ¶ 0025, a “prescription” “document” (a transaction record), also defined in ¶ 0232 last S. as “transaction records (e.g., records for prescription fill request transactions)”) “to obtain text data” (comprising a text string) “obtain entity field information” (which includes an entity) “Determine whether any identified entity fields have missing values” “Any field values missing?” “Y” (is a non-normalized version of a name as it has “missed” characters));
generate a first embedding of the text string using a first transformer model;
identify a set of similar transactions using the first embedding, wherein identifying the set of similar transactions comprises comparing the first embedding to
second embeddings representing historical transactions, the first embedding and the second embeddings in the latent space of the first transformer model (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” “The machine learning model data 412 may include historical feature vector inputs” (identifying a set of similar transactions or second embeddings representing historical transactions in “vector” (latent space)) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field in a document” “for example, when a document includes one or more entity fields that are missing data” (to be compared with a latent representation corresponding to an associated entity or the first embedding which was not identified due to lack of recognition of some of its characters when the “document” (e.g., the “prescription” (transaction record)) was scanned with optical character recognition));
input the text string of the transaction record and the set of similar transactions in natural language into a second transformer model to request the second transformer model to determine whether the non-normalized version of the name of the named entity is classifiable to a normalized named entity in a list of candidate normalized named entities (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” (the second transformer model) “The machine learning model data 412” “may include” (is inputted) “historical feature vector inputs” (the set of similar transactions in natural language which define a list of candidate normalized named entities) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” (to determine whether a corresponding normalized named entity) “in a document” “for example, when a document includes one or more entity fields that are missing data” (corresponding to an input text string of the transaction record can be classified or obtained) “or were not identified when scanning the document with optical character recognition”; ¶ 0103 last S: “historical feature vector inputs” “such as historical prescription fill request documents that were received and successfully processed to fill a prescription” (i.e., the “historical” (historical) “feature vector inputs” (second embeddings) represent “historical prescription[s]” (of list of transaction records used to determine “missing data” (text string))));
receive an output from the second transformer model (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” (the second transformer model) “The machine learning model data 412” “may include” (is inputted) “historical feature vector inputs” “that are used to train one or more machine learning models to generate a prediction output” (generates an output));
determine that the output indicates that the non-normalized version of the name in the transaction record is classifiable to one of the normalized named entities in the list (¶ 0103 S2: “The machine learning model data 412” “may include” “historical feature vector inputs” (using the list to determine) “that are used to train one or more machine learning models to generate a prediction output” (to determine the output) “such as a prediction of a correct entity field” (as the “correct” (normalized) named entity for the non-normalized version of the name)); and
associate the transaction record with a classified normalized named entity (¶ 0103 last S: the “correct entity field” (the classified normalized named entity) helps “successfully” “to fill a prescription” (associates with the transaction record)).
Subramanian does not specifically disclose that its word embedding into vectors is performed by a first transformer model.
Goravar et al. do teach a transformer model responsible for generating embedding vectors in application to named entity recognition (¶ 0067 S2: “entity recognition model comprises: tokenizing the medical report into a plurality of tokens at operation 602, encoding each of the plurality of tokens into a corresponding embedding vector” (an embedding including vectors) “e.g., by using a transformer model” (obtained by a first transformer model)).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “transformer model” of Goravar et al. into Subramanian. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Subramanian to benefit from a systematic approach to generating its “embedding” “vectors” by virtue of using a model.
Regarding claim 11, Subramanian does teach the non-transitory computer-readable storage medium of claim 10, further comprising instructions that, when executed by the one or more processors, cause the one or more processors to:
determine that the output indicates that the non-normalized version of the name in the transaction record is not classifiable to one of the normalized named entities in the list (Fig. 16 steps “1616”, “1620” respectively: “Perform validity checks against existing database records”, “Request valid?” “N” (i.e., it is determined that the output failed a “validity” “check” (is not classifiable to a normalized named entity) in the “existing database” (list));
generate a normalized version of the name in the transaction record (Fig. 16 step “1632” followed by Figs. 17 and 18 steps “1728” “1816” “1828” “1832” respectively: “Transmit prescription fill request to fallout processing module (See FIG. 17)” “Transmit fallout prescription fill request to predictive analyzer module (See FIG. 18)” “DOB matching used?” “Y” “Prediction successful?” “Y” (generating a normalized version of the name in the “prescription” (transaction record))); and
store the generated normalized version of the name (step “1840” and steps “1944” “1948” respectively: “Create invoice for prescription fill request and generate processing workflow” “Return successful predicted prescription information to predictive analyzer module” “Store new record in fall out transaction history data” (store the “new record” (comprising the “valid” normalized name))).
Regarding claim 12, Subramanian does teach the non-transitory computer-readable storage medium of claim 10, wherein the second transformer model is an open-source large language model that is fine-tuned to perform classification of the non-normalized version of the name of the named entity (¶ 0024: “FIG. 8 is a graphical representation of layers of an example long short-term memory (LSTM) machine learning model” (i.e., the “machine learning model” (the second transformer model) is an “LSTM” (an open-source large language model) used for the steps in Fig. 8 quoted above, tailored to “successful” “prediction” (classification) of “entity fields that are missing data” (non-normalized) to normalized names)).
Regarding claim 14, Subramanian does not specifically disclose the non-transitory computer-readable storage medium of claim 10, wherein the first transformer model is an off-the-shelf embedding model.
Goravar et al. do teach the non-transitory computer-readable storage medium of claim 10, wherein the first transformer model is an off-the-shelf embedding model (¶ 0067 S2: “entity recognition model comprises: tokenizing the medical report into a plurality of tokens at operation 602, encoding each of the plurality of tokens into a corresponding embedding vector (e.g., by using a transformer model)” (a “transformer model” specifically tailored to “medical reports” (i.e., an off-the-shelf model))).
For the motivation to combine Subramanian and Goravar et al., see the discussion of claim 1 above.
Regarding claim 15, Subramanian does teach the non-transitory computer-readable storage medium of claim 10, wherein the list of candidate normalized named entities comprises normalized named entities with high semantic similarity to the text string of the transaction record (¶ 0211: “matching threshold value” “e.g., a similarity score that sufficiently indicates a correct match” (semantic similarity) “between the predicted missing name field value” (between the text string of normalized named entities) “and the document scan candidate name field value” (and the text string of the transaction record) “based on the” “thresholds may include, but are not limited to, 0.75 (where 0 is no match at all and 1 is an exact match), 0.85, 0.9” (e.g., 90 percent; see “TABLE 2” in ¶ 0213, which shows how semantically close the two sets are)).
Regarding claim 16, Subramanian does teach the non-transitory computer-readable storage medium of claim 10, wherein the list of candidate normalized named entities comprises normalized named entities associated with the transactions in the set of similar transactions (¶ 0103 S2+: “The machine learning model data 412” “may include” “historical feature vector inputs” “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” (the normalized named entities are associated with) “in a document” “for example, when a document includes one or more entity fields that are missing data” “or were not identified when scanning the document with optical character recognition” “historical feature vector inputs” “such as historical” (similar) “prescription fill” (transactions) “request documents that were received and successfully processed to fill a prescription”).
Regarding claim 17, Subramanian does teach the non-transitory computer-readable storage medium of claim 10, wherein the instruction for inputting the text string of the transaction record and the set of similar transactions in natural language into the second transformer model further comprises instructions that, when executed by the one or more processors, cause the one or more processors to input additional information about the transaction record into the second transformer model (¶ 0103 S2+: “The machine learning model data 412” (the second transformer model) “may include” (comprises) “historical feature vector inputs” (second embeddings representing historical transactions) “that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field” “in a document” “for example, when a document includes one or more entity fields that are missing data” “or were not identified when scanning the document with optical character recognition” “The historical feature vector inputs may include the historical data structures which are specific to multiple historical database entities” “such as historical” “prescription fill” “request documents” (and additional information about a “prescription refill” (the transaction record)) “that were received and successfully processed to fill a prescription”).
Regarding claim 19, Subramanian does teach a system, comprising: one or more processors and memory, the memory configured to store instructions (¶ 0057 S1: “The order processing device 114 may include circuitry, a processor, a memory to store data and instructions, and communication functionality”),
wherein the instructions, when executed by the one or more processors, cause the one or more processors to:
receive a transaction record, the transaction record comprising a text string that includes a non-normalized version of a name of a named entity (Fig. 9 steps "928" "934" "936" and "940" respectively: "scan document" (receive e.g. see ¶ 0025 a "prescription" "document" (a transaction record) also defined in ¶ 0232 last S. as "transaction records (e.g., records for prescription fill request transactions)") "to obtain text data" (comprising a text string) "obtain entity field information" (which includes an entity) "Determine whether any identified entity fields have missing values" "Any field values missing?" "Y" (is a non-normalized version of a name as it has missing characters));
generate a first embedding of the text string using a first transformer model (this limitation is addressed in view of Goravar et al. below);
identify a set of similar transactions using the first embedding, wherein identifying the set of similar transactions comprises comparing the first embedding to second embeddings representing historical transactions, the first embedding and the second embeddings in the latent space of the first transformer model (¶ 0103 S1+S2: "The machine learning model data 412 may include" "one or more machine learning models" (a second transformer model): "The machine learning model data 412 may include historical feature vector inputs" (identifying a set of similar transactions or second embeddings representing historical transactions in "vector" (latent space)) "that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field in a document" "for example, when a document includes one or more entity fields that are missing data" (to be compared with a latent representation corresponding to an associated entity or the first embedding which was not identified due to lack of recognition of some of its characters when the "document" (e.g. the "prescription" (transaction record)) was scanned with optical character recognition));
input the text string of the transaction record and the set of similar transactions in natural language into a second transformer model to request the second transformer model to determine whether the non-normalized version of the name of the named entity is classifiable to a normalized named entity in a list of candidate normalized named entities (¶ 0103 S1+S2: "The machine learning model data 412 may include" "one or more machine learning models" (the second transformer model) "The machine learning model data 412" "may include" (is inputted) "historical feature vector inputs" (the set of similar transactions in natural language which define a list of candidate normalized named entities) "that are used to train one or more machine learning models to generate a prediction output, such as a prediction of a correct entity field" (to determine whether a corresponding normalized named entity) "in a document" "for example, when a document includes one or more entity fields that are missing data" (corresponding to an input text string of the transaction record can be classified or obtained) "or were not identified when scanning the document with optical character recognition"; ¶ 0103 last S: "historical feature vector inputs" "such as historical prescription fill request documents that were received and successfully processed to fill a prescription" (i.e., the "historical" "feature vector inputs" (second embeddings) represent "historical prescription[s]" (a list of transaction records used to determine "missing data" (text string))));
receive an output from the second transformer model (¶ 0103 S1+S2: “The machine learning model data 412 may include” “one or more machine learning models” (the second transformer model) “The machine learning model data 412” “may include” (is inputted) “historical feature vector inputs” “that are used to train one or more machine learning models to generate a prediction output” (generates an output));
determine that the output indicates that the non-normalized version of the name in the transaction record is classifiable to one of the normalized named entities in the list (¶ 0103 S2: “The machine learning model data 412” “may include” “historical feature vector inputs” (using the list to determine) “that are used to train one or more machine learning models to generate a prediction output” (to determine the output) “such as a prediction of a correct entity field” (as the “correct” (normalized) named entity for the non-normalized version of the name)); and
associate the transaction record with a classified normalized named entity (¶ 0103 last S: the “correct entity field” (the classified normalized named entity) helps “successfully” “to fill a prescription” (associates with the transaction record)).
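For illustration only (outside the prosecution record), the flow recited in claim 19 — embed the text string, find the nearest historical transactions in the latent space, ask a second model to classify the non-normalized fragment against the candidate list, and associate the record with the result — can be sketched as toy code. Every function, vector, and data value below is hypothetical; the character-count "embedding" and prefix-matching "classifier" are stand-ins for the claimed transformer models, not code from Subramanian, Goravar et al., or the application.

```python
# Illustration-only toy sketch of the claim 19 pipeline; all names and
# heuristics are hypothetical stand-ins, not from the cited references.
import math

def embed(text):
    # Stand-in for the "first transformer model": a 26-dimensional letter
    # frequency vector serves as a toy latent representation.
    vec = [0.0] * 26
    for ch in text.upper():
        if "A" <= ch <= "Z":
            vec[ord(ch) - ord("A")] += 1.0
    return vec

def distance(a, b):
    # Smaller distance in the latent space means greater similarity.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_transactions(first_embedding, historical, k=2):
    # Compare the first embedding against stored "second embeddings".
    return sorted(historical, key=lambda t: distance(first_embedding, t["embedding"]))[:k]

def classify(fragment, candidates):
    # Stand-in for the "second transformer model": map a non-normalized
    # fragment (e.g., "CIRC") to a candidate normalized named entity.
    for name in candidates:
        if name.upper().startswith(fragment.upper()):
            return name
    return None

# Toy run mirroring the specification's example (Sp. ¶ 0041: "CIRC" -> "CIRCLE").
record = {"text": "TRIA * CIRC14-21348-882-821NK"}
history = [
    {"text": "TRIA * CIRCLE 14-20001", "embedding": embed("TRIA * CIRCLE 14-20001"),
     "normalized": "CIRCLE"},
    {"text": "ACME PAYROLL 9-113", "embedding": embed("ACME PAYROLL 9-113"),
     "normalized": "ACME"},
]
record["embedding"] = embed(record["text"])
neighbors = similar_transactions(record["embedding"], history, k=1)
candidates = [t["normalized"] for t in neighbors]
record["named_entity"] = classify("CIRC", candidates)  # associate the record
```

In this toy run the nearest historical transaction supplies "CIRCLE" as the candidate, and the prefix match classifies "CIRC" to it, so the record is associated with "CIRCLE".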
Subramanian does not specifically disclose that its embedding of words into vectors is attributable to a first transformer model.
Goravar et al. do teach a transformer model responsible for generating embedding vectors as applied to named entity recognition (¶ 0067 S2: "entity recognition model comprises: tokenizing the medical report into a plurality of tokens at operation 602, encoding each of the plurality of tokens into a corresponding embedding vector" (an embedding comprising vectors) "e.g., by using a transformer model" (obtained by a first transformer model)).
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "transformer model" of Goravar et al. into Subramanian, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately and would further enable Subramanian to benefit from a systematic approach to generating its "embedding" "vectors" by virtue of using such a model.
Regarding claim 20, Subramanian does teach the system of claim 19, further comprising instructions that, when executed by the one or more processors, cause the one or more processors to:
determine that the output indicates that the non-normalized version of the name in the transaction record is not classifiable to one of the normalized named entities in the list (Fig. 16 steps "1616", "1620" respectively: "Perform validity checks against existing database records", "Request valid?" "N" (i.e., it is determined that the output failed a "validity" "check" (is not classifiable to a normalized named entity) in the "existing database" (list)));
generate a normalized version of the name in the transaction record (Fig. 16 step "1632" followed by Figs. 17 and 18 steps "1728" "1816" "1828" "1832" respectively: "Transmit prescription fill request to fallout processing module (See FIG. 17)" "Transmit fallout prescription fill request to predictive analyzer module (See FIG. 18)" "DOB matching used?" "Y" "Prediction successful?" "Y" (generating a normalized version of the name in the "prescription" (transaction record))); and
store the generated normalized version of the name (step "1840" and steps "1944" "1948" respectively: "Create invoice for prescription fill request and generate processing workflow" "Return successful predicted prescription information to predictive analyzer module" "Store new record in fall out transaction history data" (store the "new record" (comprising the "valid" normalized name))).
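For illustration only (outside the prosecution record), the claim 20 fallback path — determining that the output is not classifiable to any listed entity, generating a normalized version of the name, and storing it — can be sketched as toy code. The prefix-matching classifier and the suffix-stripping "generation" heuristic are hypothetical stand-ins, not from Subramanian.

```python
# Illustration-only toy sketch of the claim 20 fallback; all names and the
# normalization heuristic are hypothetical, not from the cited references.
stored_names = []

def classify(fragment, candidates):
    # Returns None when the fragment is not classifiable to any candidate.
    for name in candidates:
        if name.upper().startswith(fragment.upper()):
            return name
    return None

def generate_normalized(fragment):
    # Toy "generation" step: strip trailing digits and punctuation,
    # then upper-case what remains.
    return fragment.rstrip("0123456789-* ").upper()

fragment = "ZENITH-8821"
match = classify(fragment, ["CIRCLE", "ACME"])
if match is None:                                # output: not classifiable
    generated = generate_normalized(fragment)    # generate a normalized version
    stored_names.append(generated)               # store the generated version
```

Here no candidate in the list matches, so a normalized version ("ZENITH") is generated from the fragment and stored for later use.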
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Subramanian in view of Goravar et al., and further in view of KR20140115078A.
Regarding claim 9, Subramanian in view of Goravar et al. do not specifically disclose the method of claim 1, wherein the transaction record is a bank transfer payment record and the set of similar transactions is a set of similar bank transfer payment records.
KR20140115078 does teach the method of claim 1, wherein the transaction record is a bank transfer payment record and the set of similar transactions is a set of similar bank transfer payment records ("Description" S2: "The electronic tax invoice" (a historical transaction) "and the cash receipt transaction details of the bank receipt" (and a bank transfer payment record) "transaction history are compared" "with each other, and when the matching business name" (to detect a normalized named entity) "is detected").
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "matching business name" techniques of KR20140115078 into the "prescription document" management of Subramanian in view of Goravar et al. for "predicting" "correct entity fields", as doing so would enable the combined systems and their associated methods to perform in combination as they do separately and would further enable Subramanian to manage "prescription documents" comprising "the amount of money received by the pharmacy" (Subramanian ¶ 0048) by enabling it to predict e.g. their associated "matching business name", such as insurance companies, on the fly as discussed in KR20140115078.
Regarding claim 18, Subramanian does not specifically disclose the non-transitory computer-readable storage medium of claim 10, wherein the transaction record is a bank transfer payment record and the set of similar transactions is a set of similar bank transfer payment records.
KR20140115078 does teach the non-transitory computer-readable storage medium of claim 10, wherein the transaction record is a bank transfer payment record and the set of similar transactions is a set of similar bank transfer payment records ("Description" S2: "The electronic tax invoice" (a historical transaction) "and the cash receipt transaction details of the bank receipt" (and a bank transfer payment record) "transaction history are compared" "with each other, and when the matching business name" (to detect a normalized named entity) "is detected").
It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "matching business name" techniques of KR20140115078 into the "prescription document" management of Subramanian for "predicting" "correct entity fields", as doing so would enable the combined systems and their associated methods to perform in combination as they do separately and would further enable Subramanian to manage "prescription documents" comprising "the amount of money received by the pharmacy" (Subramanian ¶ 0048) by enabling it to predict e.g. their associated "matching business name", such as insurance companies, on the fly as discussed in KR20140115078.
Allowable Subject Matter
Claims 4 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD whose telephone number is (571)270-5860. The examiner can normally be reached 10:30 am to 11:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Farzad Kazeminezhad/
Art Unit 2653
March 5, 2026.