Prosecution Insights
Last updated: April 19, 2026
Application No. 17/409,330

COMPUTER-BASED SYSTEMS CONFIGURED FOR RECORD ANNOTATION AND METHODS OF USE THEREOF

Status: Final Rejection (§103)
Filed: Aug 23, 2021
Examiner: HWANG, MEGAN ELIZABETH
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 4 (Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 47% (9 granted / 19 resolved; -7.6% vs TC avg)
Interview Lift: +60.2% (strong; resolved cases with interview vs without)
Avg Prosecution: 3y 0m (typical timeline); 25 applications currently pending
Total Applications: 44 (career history, across all art units)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 19 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination. This Office Action is responsive to the amendment filed on 12/11/2025, which has been entered into the above-identified application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cowan (US Patent 10579682 B1, filed 04/20/2017) in view of Cao et al. (“HitFraud: A Broad Learning Approach for Collective Fraud Detection in Heterogeneous Information Networks”, published 12/18/2017), hereinafter Cao; in further view of Mehta et al. (US Patent 11657180 B1, filed 05/10/2021), hereinafter Mehta. Cowan and Mehta were cited in a previous Office Action.
Regarding Claim 1, Cowan teaches a method comprising: receiving, by one or more processors, user-specific data from a plurality of personal computing devices each associated with one of a plurality of users (Cowan: “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving data that represents actions taken by a particular user with respect to entities of a plurality of entity types” [Abstract]; “For situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect personal information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user.” [Col. 11, Lines 57-65]), the user-specific data comprising: a plurality of user-specific annotating content items associated with the first plurality of users (Cowan: “The labeled training data may additionally or alternatively include other non-financial transaction related data, e.g., data from a review website, news articles, investment material, marketing material, social network, shopping search corpus, patent search corpus, book search corpus, news search corpus, sales offer corpus, general search corpus, translation corpus, image hosting website, or video hosting website, associated with identified entities or users.” [Col. 9, Lines 49-56]), a plurality of user-specific records from the first plurality of users, the plurality of user-specific records associated with the plurality of user-specific annotating content items (Cowan: “the labeled training data may include financial transaction data associated with identified entities” [Col. 3, Lines 1-2]; “selecting the entity that is associated with the particular financial transaction from the candidate entities based on a user profile may include determining that, on a date indicated by the financial transaction data, the user profile indicates that the user was located in a location associated with the entity” [Col. 4, Lines 4-9]; BRI is that “financial transaction data” is associated with a user, thereby being user-specific), and at least one relationship between at least two of the plurality of user-specific records (Cowan: “Additionally or alternatively, the annotator 110 may associate entities with particular financial transactions on a batch basis. For example, the annotator 110 may receive financial transaction data representing multiple financial transactions, and associate two or more of the financial transactions with entities in parallel. The annotator 110 may use financial transaction data representing a first financial transaction to disambiguate financial transaction data representing a second financial transaction” [Col. 7, Lines 40-48]); generating, by the one or more processors, a training dataset based at least in part on the user-specific data (Cowan: “The labeled training data used to train the machine-learning based annotator may include text and entities identified as being associated with the text. For example, the machine-learning based annotator may be trained using excerpts of the website for “Best Restaurant” that are identified as being associated with the entity “Best Restaurant.” The labeled training data may include other non-financial transaction related data, e.g., text from a review website, associated with identified entities, social network interactions of users, search terms used by users, or user provided training data including explicit confirmations that entities identified are correct or incorrect” [Col. 2, Lines 56-67]); training, by one or more processors, a record annotation machine learning model using the training dataset to obtain a trained record annotation machine learning model (Cowan: “The machine-learning based annotator may be trained to identify entities associated with financial transaction data based on labeled training data.” [Col. 2, Lines 29-31]; “a machine learning based annotator that is trained to recognize entities and entity attributes of the entities in the data and annotating, by the system the data that represents actions taken by a particular user with respect to entities of a plurality of entity types with respective entity identifier that each identify a particular entity” [Col. 1, Lines 52-58]); receiving, by the one or more processors, from a first personal computing device, at least one first user-specific record and at least one first user-specific annotating content item being associated with at least one first user-specific record (Cowan: “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving data that represents actions taken by a particular user with respect to entities of a plurality of entity types” [Abstract]; “Generally, the system 100 includes a machine-learning based annotator 110 that disambiguates financial transaction data that represent financial transactions and a client device 140 that enables a user to interact with the server 110 to request information regarding the financial transactions.” [Col. 6, Lines 35-39]; “The combined identifier and selector may receive financial transaction data 120, labeled training data 230, and user profile data 240 to generate annotated financial transaction data 250” [Col. 11, Lines 13-16]); receiving, by the one or more processors, from the first personal computing device at least one second user-specific record (Cowan: “[T]he annotator 110 may associate an entity with a particular financial transaction on a single financial transaction basis. For example, the annotator 110 may receive financial transaction data representing a particular financial transaction, and wait until the annotator 110 associates an entity with the particular financial transaction before the annotator 110 receives additional financial transaction data representing another financial transaction. Alternatively, the annotator 110 may receive financial transaction data representing multiple financial transactions and associate entities with the financial transactions one by one. Additionally or alternatively, the annotator 110 may associate entities with particular financial transactions on a batch basis.” [Col. 7, Lines 30-42]); and utilizing, by the one or more processors, the trained record annotation machine learning model to: generate at least one derived user-specific annotating content item based at least in part on the at least one first user-specific annotating content item and data of the at least one first user-specific record (Cowan: “The candidate entity selector 226 may receive the identified candidate entities 224 and select one or more candidate entities to associate with the financial transaction. To make the selection, the selector 226 may also receive user profiles of users that are parties to financial transactions represented by financial transaction data” [Col. 10, Lines 13-18]; “The combined identifier and selector may receive financial transaction data 120, labeled training data 230, and user profile data 240 to generate annotated financial transaction data 250” [Col. 11, Lines 13-16]; “The system may associate the financial transactions with entities identified by the machine-learning based annotator. The system may annotate financial transaction data with data that includes an identifier that represents the identified entity. For example, the system may annotate the financial transaction data [“4/11/2013,” “BEST R NEW YORK NY,” “45.78”] with an entity identifier “00542687” to result in annotated financial transaction data [“4/11/2013,” “BEST R NEW YORK NY,” “45.78,” “00542687”] where “00542687” is a unique identifier for the entity “Best Restaurant.” [Col. 6, Lines 3-13]; “The annotator 110 may receive the financial transaction data 120 and disambiguate the financial transaction data 120 by identifying an entity that is associated with the financial transaction data 120. For example, the annotator 110 may identify that the entity that is the restaurant “Thai Food Spot” is associated with the particular financial transaction data. As explained in more detail regarding FIG. 2, the annotator 110 may identify entities based on learning patterns associated with the entities, recognizing the patterns in financial transaction data without identified associated entities, and identifying the entities as associated with the financial transaction data based on recognizing the patterns.” [Col. 6, Lines 49-60]).

However, Cowan fails to expressly disclose utilizing the trained model to identify a relationship between the at least one second user-specific record and the at least one first user-specific record; and transmitting, by the one or more processors, an API call to an annotating storage engine that programs the annotating storage engine to generate a database entry comprising the at least one second user-specific record annotated by the at least one derived user-specific annotating content item.

In the same field of endeavor, Cao teaches utilizing the trained model to identify a relationship between the at least one second user-specific record and the at least one first user-specific record (Cao: “we seek to capture the inter-transaction dependency. It is critical to explore such relationships among suspicious transactions because fraudulent behaviors are often correlated and fast evolving. (1) HINs provide us with an effective and compact representation of linked transactions in various semantics, e.g., the same currency, the same IP address, and the same game titles. The statistics of the label information (i.e., fraud or normal) of these linked transactions can be aggregated, and thereby add a new dimension of measurements to distinguish suspicious transactions from normal ones based on the correlated fraudulent behaviors. (2) In order to tackle the problem of fast evolving fraudulent behaviors, we should not only consider the inter-transaction dependency across training transactions and test transactions, but also include the dependency among test transactions. Hence, suspicious transactions are identified in a semi-supervised manner by iteratively obtaining the predicted labels of test transactions and updating the statistics of linked transactions in alternation.” [Section I. Introduction]; “A HIN is constructed by linking entities of interest from several selected databases. Transactions are the target instances on which fraud decisions are made, so each transaction ID is represented as a node in the network, and the set of transaction IDs compose a node type in the network schema. In addition, other entities that are directly or indirectly related to a transaction are considered here, and they compose other node types in the schema, including billing accounts, user accounts, game titles, IP addresses, etc. Links are added based on common semantics. For example, a transaction is linked with a user if the user placed the transaction, and a transaction is linked with an item if the transaction contains the item.” [Section III. Dataset]; “within the network schema, containing a certain sequence of link types. For example, in Figure 2, a meta-path “transaction −−containsItem−→ item −−isTitle−→ title −−isTitle−1−→ item −−containsItem−1−→ transaction” denotes a composite relation between transactions where containsItem−1 represents the inverted relation of containsItem. The semantic meaning of this meta-path is that transactions contain items that belong to the same game title. Different meta-paths usually represent different semantic meanings between linked nodes. In this manner, various relationships among transactions can be described by a set of meta-paths. By capturing such inter-transaction dependency and aggregating the label information of the linked transactions, we could better detect correlated fraudulent behaviors. In other words, we could identify transactions with highly risky values in a categorical variable, e.g., game title.” [Section IV.A. Capturing Inter-Transaction Dependency]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated utilizing the trained model to identify a relationship between the at least one second user-specific record and the at least one first user-specific record, as taught by Cao, to the method of Cowan because both of these methods are directed towards training a model on relationships between transactions, entities, and relevant contextual information to predict new relationships as more data is retrieved. In making this combination and training the model to predict specifically relationships between transactions/records, it would allow the method of Cowan to explore meta-paths between nodes in a transaction data graph to identify relevant inter-transaction dependency features for a new transaction given a related transaction in the graph (Cao: [Section I. Introduction]).

Cowan and Cao still fail to expressly disclose transmitting, by the one or more processors, an API call to an annotating storage engine that programs the annotating storage engine to generate a database entry comprising the at least one second user-specific record annotated by the at least one derived user-specific annotating content item.

In the same field of endeavor, Mehta teaches transmitting, by the one or more processors, a second API call to the annotating storage engine that programs the annotating storage engine to annotate the at least one second user-specific record with the at least one derived user-specific annotating content item (Mehta: “The system includes a third API. The third API is configured to classify the aggregated data of the user according to a classification scheme. The third API is configured to generate a user data corpus comprising the classified data of the user. The third API is configured to store the user data corpus in the user data repository.” [Col. 1, Lines 45-51]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated transmitting, by the one or more processors, a second API call to the annotating storage engine that programs the annotating storage engine to annotate the at least one second user-specific record with the at least one derived user-specific annotating content item, as taught by Mehta, to the method of Cowan and Cao because they are both directed to prompting external sources for user data and associating different types of user data. In making this combination and utilizing API calls to request data and store associated data, it would provide the method of Cowan and Cao “a mechanism for data sharing to effectively provide an aggregated data set” (Mehta: [Col. 4, Lines 22-23]).
Regarding Claim 2, Cowan, Cao, and Mehta teach the method of Claim 1, further comprising: presenting, by the one or more processors, the at least one first user-specific annotating content item in association with the at least one second user-specific record related to the at least one first user at a first graphical user interface (GUI) of an application executed by the first personal computing device (Cowan: “Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification” [Col. 13, lines 49-56]).

Regarding Claim 3, Cowan, Cao, and Mehta teach the method of Claim 1, further comprising: obtaining, by the one or more processors, at least one second user-specific annotating content item associated with the second user-specific record of the at least one first user (Cowan: “an aspect includes receiving data that represents actions taken by a particular user with respect to entities of a plurality of entity types” [Col. 1, lines 33-35]) and utilizing, by the one or more processors, the record annotating machine learning model to annotate the second user-specific record based at least in part on the obtained at least one second user-specific annotating content item (Cowan: “The system may associate the financial transactions with entities identified by the machine-learning based annotator. The system may annotate financial transaction data with data that includes an identifier that represents the identified entity.” [Col. 3, lines 6-9]).

Regarding Claim 4, Cowan, Cao, and Mehta teach the method of Claim 1, wherein the receiving of at least one first user-specific annotating content item being associated with at least one first user-specific record of at least one first user comprises: automatically obtaining, by the one or more processors, the at least one first user-specific annotating content item from sources other than the at least one first user (Cowan: “the users may be provided with an opportunity to control whether programs or features collect personal information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user” [Col. 11, lines 59-65]).

Regarding Claim 5, Cowan, Cao, and Mehta teach the method of Claim 1, further comprising: extracting, by the one or more processors, the at least one derived user-specific annotating content item from the at least one first annotating user-specific content item via at least one of: text recognition technique, voice recognition technique, or image recognition technique (Cowan: “the identifier 222 may be trained to recognize and associate particular textual formats with particular entities. The identifier 222 may recognize order numbers in financial transaction data 120 and associate particular formats of order numbers with particular entities” [Col. 9, lines 25-29]).

Regarding Claim 6, Cowan, Cao, and Mehta teach the method of Claim 1, wherein the at least one first user-specific annotating content item comprises information associated with a record of the at least one first user (Cowan: “The entity data associated with a financial transaction by the annotator 110 may be used to provide information regarding financial transactions to a user 150” [Col. 7, lines 52-54]).

Regarding Claim 7, Cowan, Cao, and Mehta teach the method of Claim 1, further comprising: categorizing, by the one or more processors, a first plurality of user-specific records of the at least one first user based on the annotating of the first plurality of user-specific records (Cowan: “The system may analyze financial transaction data to provide information regarding the financial transactions. For example, the system may categorize financial transactions so that a user may view the amount or percentage that the user spent in particular categories” [Col. 2, lines 9-13]).

Regarding Claim 9, Cowan, Cao, and Mehta teach the method of Claim 7, further comprising: querying, by the one or more processors, the first user-specific plurality of records of the at least one first user based on the categorizing of the first plurality of user-specific records (Cowan: “The annotator 110 may receive the request and provide a response to the request based on the entities the annotator 110 associated with financial transactions. For example, for a request for what restaurant the user ate at last Saturday, the annotator 110 may identify the financial transactions that occurred last Saturday that the annotator 110 associated with entities that falls under the category of 'RESTAURANT.'” [Col. 8, lines 6-12]).

Regarding Claims 10-16 and 18-20, they are system claims that correspond to the method of claims 1-7 and 9 above. Therefore, they are rejected for the same reasons as claims 1-7 and 9 above.

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cowan in view of Cao and Mehta, as applied to Claims 1 and 10 above, in view of Shaashua et al. (US PGPUB 20160342906 A1), hereinafter Shaashua. Shaashua was cited in a previous Office Action.

Regarding Claim 8, Cowan, Cao, and Mehta teach the method of Claim 1. However, they fail to expressly disclose wherein the trained record annotation machine learning model is user-specific.

In the same field of endeavor, Shaashua teaches wherein the trained record annotation machine learning model is user-specific (Shaashua: “Modelling and learning of entity behavior can be done using probabilistic machine learning models. For example, the IoT integration platform can train: a global model of the entire user population; a subpopulation model of users that might exhibit common behavioral patterns; a user specific model; a location specific model; location cluster specific model for learning the behavior of a group of locations; a device specific model; a device cluster specific model; or any combination thereof.” [0178]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the method of claim 1, wherein the trained record annotation machine learning model is user-specific, as taught by Shaashua, to the method of Cowan, Cao, and Mehta because both of these methods are directed towards organizing data from multiple sources to provide associated context for user behavior using machine learning. By making this combination, it allows the method of Cowan, Cao, and Mehta to capture users’ “unique personal behavioral patterns” (Shaashua: [0178]).

Regarding Claim 17, it is a system claim that corresponds to the method of claim 8 above. Therefore, it is rejected for the same reason as claim 8 above.

Response to Arguments

The Examiner acknowledges the Applicant’s amendments to Claims 1-2, 10-11, and 19-20. Applicant’s arguments, filed 12/11/2025, traversing the rejection of Claims 1-20 under 35 U.S.C. § 101 have been fully considered and are persuasive. The rejection has been withdrawn. Applicant’s arguments, filed 12/11/2025, regarding the rejection of Claims 1-20 under 35 U.S.C. § 103 have been fully considered and are found moot in light of the new grounds of rejection (see rejection above).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li et al. (“Recommendation as link prediction in bipartite graphs: A graph kernel-based machine learning approach”) discusses mapping transactions to a bipartite user-item interaction graph to capture relationship information between users and items for use in a recommendation system.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEGAN E HWANG whose telephone number is (703)756-1377. The examiner can normally be reached Monday-Thursday 10:00-7:30 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.E.H./ Examiner, Art Unit 2143
/JENNIFER N WELCH/ Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Aug 23, 2021: Application Filed
Oct 24, 2024: Non-Final Rejection — §103
Feb 04, 2025: Response Filed
Apr 10, 2025: Final Rejection — §103
Aug 18, 2025: Request for Continued Examination
Aug 28, 2025: Response after Non-Final Action
Sep 25, 2025: Non-Final Rejection — §103
Dec 11, 2025: Response Filed
Dec 30, 2025: Interview Requested
Jan 06, 2026: Examiner Interview Summary
Jan 06, 2026: Applicant Interview (Telephonic)
Mar 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12456093 — Corporate Hierarchy Tagging — granted Oct 28, 2025 (2y 5m to grant)
Patent 12437514 — VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING — granted Oct 07, 2025 (2y 5m to grant)
Patent 12437517 — VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING — granted Oct 07, 2025 (2y 5m to grant)
Patent 12437518 — VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING — granted Oct 07, 2025 (2y 5m to grant)
Patent 12437519 — VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING — granted Oct 07, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 47% (99% with interview, +60.2%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
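The note above says the headline grant probability is derived from the examiner's career allow rate. As a minimal sketch of that derivation, assuming it is a simple granted/resolved ratio over the figures shown in this report (the variable names are illustrative; the tool's actual methodology is not published here):

```python
# Illustrative reproduction of the report's "Career Allow Rate" /
# "Grant Probability" figure from the examiner statistics shown above.
granted = 9      # applications this examiner has granted
resolved = 19    # all resolved cases: granted + abandoned (excludes 25 pending)

allow_rate = granted / resolved      # 9 / 19 ~= 0.4737
print(f"Career allow rate: {allow_rate:.0%}")  # prints "Career allow rate: 47%"
```

How the +60.2% interview lift combines with this baseline to produce the 99% with-interview figure is not stated in the report, so it is not reproduced here.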
