Prosecution Insights
Last updated: April 19, 2026
Application No. 17/972,637

TRANSACTION CLASSIFYING

Status: Non-Final OA (§103)
Filed: Oct 25, 2022
Examiner: PHAM, KHANH B
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 72% (604 granted / 835 resolved; +17.3% vs Tech Center average — above average)
Interview Lift: +15.2% allow rate in resolved cases with an interview
Typical Timeline: 3y 5m average prosecution
Currently Pending: 34 applications
Total Applications: 869 (across all art units)

Statute-Specific Performance

Allow rate by rejection statute (vs Tech Center average):
§101: 10.3% (-29.7% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 30.7% (-9.3% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 835 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/31/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh (US 2023/0195931 A1), hereinafter "Horesh1", in view of Horesh et al. (US 2021/0150129 A1), hereinafter "Horesh2".
As per claim 1, Horesh1 teaches an intelligent transaction classifying computer program product, comprising executable instructions that, when executed by a processor on a computer system: "receive a training set of transaction data comprising two or more training transactions, each training transaction comprising: a business name; a short description; and a defined transaction category code; store the training set in a non-transitory memory; preprocess the training set" at [0017]-[0018], [0024]-[0025], [0035]. (Horesh1 teaches receiving a set of transaction records, each of which includes a business name, a transaction type, and a transaction description, and storing the transaction records in a transaction data repository. Horesh1 teaches that the categorization model is a pretrained model trained on categorizations of transaction records from multiple users.)

"analyze, through one or more artificial intelligence/machine learning (AI/ML) algorithms, the training set to determine one or more unique characteristics of each training transaction correlating to the transaction category code, wherein each of the one or more unique characteristics comprises a training unigram" at [0047]-[0048] and Fig. 3. (Horesh1 teaches that the Personal and Global categorization models analyze the transaction records to determine that the transaction records include unique characteristics comprising unigrams such as 'Pharma', 'house', and 'home'.)

"generate a document term matrix wherein each row comprises one or more training unigrams". (Horesh1 teaches at Fig. 3 the Personal Categorization Model Training Data and Global Categorization Model Training Data, which include training unigrams such as 'Home' and an associated transaction category code such as 'Shopping'.)

"receive a test set of transaction data comprising two or more test transactions, each test transaction comprising: a business name; a short description; and an other transaction category code; store the test set in the non-transitory memory; pre-process the test set" at [0047]-[0049]. (Horesh1 teaches receiving a new transaction record with the description "FL VAC PHARMA HOUSE", which has not been processed by the categorization model and is not assigned a defined category (i.e., an "other transaction category code"). The personal categorization model has not processed "Pharma" stores yet, so it issues "shopping" as the personal assigned category because of the similarity between 'house' and 'home'.)

"analyzing, through one or more artificial intelligence/machine learning algorithms, the test set to determine one or more unique characteristics of each test transaction, wherein each of the one or more unique characteristics comprise a test unigram" at [0047]-[0048] and Fig. 3. (Horesh1 teaches that the personal categorization model analyzes the transaction records to determine that they include unique characteristics comprising the test unigrams 'Pharma', 'house', and 'home'.)

"compare, through one or more comparison artificial intelligence/machine learning (cAI/ML) algorithms, each test unigram to the document term matrix to determine a predicted transaction category code for each test transaction" at [0047] and Fig. 3. (Horesh1 teaches that the personal categorization model has not processed "Pharma" stores yet, so it issues "shopping" as the personal assigned category (i.e., "predicted transaction category code") because of the similarity between 'house' and 'home' in a single data point. Horesh1's Fig. 3 shows the personal categorization model training data, which includes the unigram 'Home' associated with the 'Shopping' category.)

"iterate the comparison and determination to a pre-determined threshold level of confidence in the predicted transaction category code" at [0048]. (Horesh1 teaches that because the personal assigned categorization confidence value is low at 0.2, below a threshold, the global categorization model is used. The unigram "Pharma" is compared with multiple training data records that have the unigram "Pharma" in their descriptions (e.g., "TRI STATE PHARMACY" and "Cheap Pharmaceutical CREDIT ON 07/21"). The multiple transaction records are assigned the "Health and Fitness" category. Thus, the global categorization model returns the global assigned category of "Health and Fitness" with a global assigned categorization confidence value of 0.8 (i.e., "threshold level of confidence").)

"assign the predicted transaction category code to a corresponding test transaction, creating a classified test set of transaction data, wherein the classified test set of transaction data does not comprise other transaction category codes" at [0047]-[0049]. (Horesh1 teaches assigning the "Health and Fitness" category to the new transaction. Horesh1 therefore teaches a classified set of transaction data which comprises "the now categorized new transaction record", which has the description "FL VAC PHARMA HOUSE" and a defined/assigned category, "Health and Fitness".)

Horesh1 does not teach that the document term matrix includes "one column [that] comprises a frequency of each unigram within the training set", nor "wherein the classified set of transaction data is data-mined", as claimed.
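The two-stage flow the rejection attributes to Horesh1 — a personal model proposes a category with a confidence value, and a global model is consulted only when that confidence falls below a threshold — can be sketched as follows. This is a minimal illustration, not code from either reference: the function names, the toy training dictionaries, the use of string similarity (`difflib.SequenceMatcher`) as the "confidence", and the 0.7 threshold are all assumptions made for demonstration.

```python
from difflib import SequenceMatcher

# Hypothetical training data: known unigram -> category, standing in for the
# personal and global categorization models' training records (cf. Fig. 3).
PERSONAL_TRAINING = {"home": "Shopping", "grocery": "Food"}
GLOBAL_TRAINING = {"pharmacy": "Health and Fitness",
                   "pharmaceutical": "Health and Fitness"}

def best_match(unigram, training):
    """Return (category, similarity) for the closest known training unigram."""
    scored = [(SequenceMatcher(None, unigram, known).ratio(), category)
              for known, category in training.items()]
    similarity, category = max(scored)
    return category, similarity

def classify_transaction(description, threshold=0.7):
    """Personal model first; fall back to the global model below threshold."""
    unigrams = description.lower().split()
    # Personal pass: best (category, confidence) over all test unigrams.
    personal = max((best_match(u, PERSONAL_TRAINING) for u in unigrams),
                   key=lambda pair: pair[1])
    if personal[1] >= threshold:
        return personal
    # Confidence too low (e.g., 'house' vs 'home'): consult the global model.
    return max((best_match(u, GLOBAL_TRAINING) for u in unigrams),
               key=lambda pair: pair[1])

category, confidence = classify_transaction("FL VAC PHARMA HOUSE")
# category -> "Health and Fitness": the personal model's weak 'house'/'home'
# match is overridden by the global model's stronger 'pharma'/'pharmacy' match.
```

With the toy data above, "FL VAC PHARMA HOUSE" follows the same path the Office Action describes: the personal model's best match ('house' vs 'home') is below threshold, so the global model's 'pharma'/'pharmacy' match supplies the assigned category.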
However, in another reference by the same inventor, Horesh2 teaches a method for generating unigram index data for correction of acquired transaction text fields, including steps of receiving a set of transaction data; performing data mining on the set of transaction data by extracting text field data from the transaction data; generating unigram index data from the text field data; determining unigram count data; and generating unigram index data which includes a column comprising the frequency of each unigram within the transaction data, at [0067]-[0077], [0118]-[0123] and Figs. 2, 5. Thus, it would have been obvious to one of ordinary skill in the art to combine the two references by the same inventor to generate unigram index data comprising a frequency of each unigram, in order to correct errors in the transaction data based on training data which includes the frequency of unigrams, as suggested by Horesh2 at [0007]-[0011].

As per claim 2, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "the classified test set of transaction data is added to the training set" at [0049].

As per claim 3, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "the pre-processing comprises removing whitespace" at [0048].

As per claim 4, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh2 also teaches wherein "the pre-processing comprises removing punctuation marks" at [0120].

As per claim 5, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "one of the one or more cAI/ML algorithms is a recurrent neural network model" at [0029].

As per claim 8, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "each training transaction further comprises a business name aggregate" at Fig. 3.

As per claim 9, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "each test transaction further comprises a business name aggregate" at [0047]-[0048].

As per claim 10, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 also teaches wherein "the training set comprises more than seven hundred unique transaction category codes" at [0021], [0047].

Claims 11 and 18 recite similar limitations to claim 1 and are therefore rejected for the same reasons.

As per claim 12, Horesh1 and Horesh2 teach the intelligent transaction classifying computer program product of claim 11 discussed above. Horesh1 also teaches "wherein the predicted transaction category code is OTHER" at [0021], [0047]-[0048].

As per claim 13, Horesh1 and Horesh2 teach the intelligent transaction classifying computer program product of claim 11 discussed above. Horesh1 also teaches "wherein the pre-determined threshold level of confidence is modifiable" at [0037].

As per claim 14, Horesh1 and Horesh2 teach the intelligent transaction classifying computer program product of claim 13 discussed above. Horesh1 also teaches "wherein an artificial intelligence/machine learning ("AI/ML") algorithm modifies the pre-determined threshold level" at [0037].

As per claim 15, Horesh1 and Horesh2 teach the intelligent transaction classifying computer program product of claim 11 discussed above. Horesh1 also teaches "wherein a system administrator adjusts the pre-determined threshold level of confidence" at [0037].

As per claim 17, Horesh1 and Horesh2 teach the intelligent transaction classifying computer program product of claim 11 discussed above. Horesh1 also teaches "wherein an artificial intelligence/machine learning ("AI/ML") algorithm data-mines the classified set of transaction data" at [0002], [0026]-[0027].

As per claim 19, Horesh1 and Horesh2 teach the method of claim 18 discussed above. Horesh1 also teaches "wherein one of the one or more cAI/ML algorithms is a recurrent neural network model" at [0029].
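The Horesh2 feature the combination relies on — a unigram index with a column holding each unigram's frequency across the training set, built after pre-processing that strips whitespace and punctuation (cf. claims 3-4) — reduces to a term count over cleaned descriptions. A minimal sketch; the function names and sample descriptions are hypothetical, chosen to echo the examples quoted in the rejection:

```python
import re
from collections import Counter

def preprocess(description: str) -> list[str]:
    """Lowercase, strip punctuation, and split on whitespace into unigrams."""
    cleaned = re.sub(r"[^\w\s]", "", description.lower())
    return cleaned.split()

def unigram_frequency_index(training_set: list[str]) -> dict[str, int]:
    """One row per unigram; the value column is its frequency in the set."""
    counts: Counter = Counter()
    for description in training_set:
        counts.update(preprocess(description))
    return dict(counts)

index = unigram_frequency_index([
    "TRI STATE PHARMACY",
    "Cheap Pharmaceutical CREDIT ON 07/21",
])
# index["pharmacy"] == 1; note "07/21" becomes "0721" once punctuation is removed.
```

The frequency column is what a downstream correction step (as Horesh2 describes) or a classifier can weight matches by; a full document term matrix would simply repeat this count per training transaction rather than over the whole set.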
Claims 6-7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh1 and Horesh2 as applied to the claims above, and further in view of Chatterjee et al. (US 2020/0210817 A1), hereinafter "Chatterjee".

As per claim 6, Horesh1 and Horesh2 teach the product of claim 1 discussed above. Horesh1 does not explicitly teach wherein "the recurrent neural network model is a long short-term memory model (LSTM)" as claimed. However, Chatterjee teaches at [0038] a method for providing an explanation for a prediction generated by an artificial neural network model, wherein the model is a bidirectional long short-term memory. The bidirectional LSTM is a type of recurrent neural network (RNN). Thus, it would have been obvious to one of ordinary skill in the art to combine Chatterjee with Horesh1's teaching because "for text based input data, RNN may be better selected as it is better suited for sequence type of data, such as textual data. The bidirectional architecture may allow the LSTM layer to scan the input data both backward and forward", and "this may help the network to perform more efficiently", as suggested by Chatterjee at [0038].

As per claim 7, Horesh1, Horesh2, and Chatterjee teach the product of claim 6 discussed above. Chatterjee also teaches wherein "the LSTM is bidirectional" at [0038].

Response to Arguments

Applicant's arguments filed 1/31/2026 have been fully considered, but they are not persuasive. The examiner respectfully traverses Applicant's arguments.

Regarding independent claims 1, 11, and 18, Applicant argued that Horesh1 does not teach each test transaction comprising "an 'other' transaction category code" at [0047]-[0049] because "these paragraph simply show moving from one defined category ('shopping') with one confidence value, to another defined category ('Health and Fitness') with a different confidence value" and "Para. 0047 of Horesh 1 specifically states that the subject transaction is initially assigned to the 'shopping' category, which is a defined category, not an 'OTHER'/undefined category". On the contrary, as pointed out by Applicant's own arguments, the new transaction is not associated with any category, or, in other words, is associated with an "other"/undefined/blank category, because the categories "shopping" and "Health and Fitness" are assigned to the new transaction after the transaction is received. A category assigned to a new transaction after receiving the transaction is not a defined category, which must be defined and associated with the transaction before the transaction is received. The assigned category corresponds to the claimed "predicted transaction category code for each test transaction" because it is predicted by the machine learning and is associated with a "confidence value", as required by the claims.

Applicant further argued that "in order for Horesh1 to function, any uncategorized transaction must first be assigned a category by 'a personal categorization model'". Applicant therefore admitted that the received transaction is "uncategorized" (or in an "other"/undefined category), and that the category is predicted and assigned to the transaction by the machine learning model.

Applicant further argued that "Applicant maintains that this distinction is important to the claimed invention. Horesh1 is dealing with known categories with various confidence values, while the disclosed invention is dealing with completely unknown categories". On the contrary, categories with various confidence values are clearly unknown; they are predicted categories for the unknown category of the transaction. In Horesh1, the category of the received new transaction is not known, and the "personal categorization model" and the "global categorization model" are utilized to predict the unknown category and assign the predicted category to the new transaction.
Regarding claim 12, Applicant argued that Horesh1 does not teach "wherein the predicted transaction category code is other", because "claim 12 is discussing the situation where the disclosed invention is unable to classify the transaction to an acceptable value (and it thus remains as an other)". On the contrary, Horesh1 teaches at [0047]-[0048] that the personal categorization model is used to predict a category of a newly received transaction, and the predicted category is associated with a confidence value. If the confidence value is below a threshold, the predicted category is not assigned to the transaction, and the transaction remains in an other/undefined category.

Applicant further argued that Horesh does not teach "the classified data is data-mined". On the contrary, Horesh2 teaches at [0067]-[0077], [0118]-[0123] and Figs. 2, 5 steps of receiving a set of transaction data comprising a category field; performing data mining on the set of transaction data, such as extracting text field data from the transaction data; generating unigram index data from the text field data; determining unigram count data; and generating unigram index data which includes a column comprising the frequency of each unigram within the transaction data.

In light of the foregoing arguments, the 35 U.S.C. 103 rejections are hereby sustained.

Conclusion

Examiner's Note: The Examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or cited by the Examiner.
In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHANH B PHAM, whose telephone number is (571) 272-4116. The examiner can normally be reached Monday - Friday, 8am to 4pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached at (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHANH B PHAM/
Primary Examiner, Art Unit 2166
March 23, 2026

Prosecution Timeline

Oct 25, 2022: Application Filed
Jul 15, 2025: Non-Final Rejection (§103)
Oct 16, 2025: Response Filed
Oct 30, 2025: Final Rejection (§103)
Jan 31, 2026: Request for Continued Examination
Feb 09, 2026: Response after Non-Final Action
Mar 23, 2026: Non-Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602358: DATABASE AND DATA STRUCTURE MANAGEMENT SYSTEMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585915: TRAINING METHOD AND APPARATUS FOR A NEURAL NETWORK MODEL, DEVICE AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579116: DATABASE AND DATA STRUCTURE MANAGEMENT SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579163: SYSTEMS AND METHODS FOR DETECTING PERFORMANCE DEGRADATION IN DISTRIBUTED DATABASE DEPLOYMENTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579161: ETL JOB DISTRIBUTED PROCESSING SYSTEM AND METHOD BASED ON DYNAMIC CLUSTERING (granted Mar 17, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
88%
With Interview (+15.2%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
