Prosecution Insights
Last updated: April 19, 2026
Application No. 18/605,606

COMPUTER-BASED SYSTEMS CONFIGURED FOR A CLOUD-FIRST MULTIFUNCTION API CLUSTER PROVIDING MICROSERVICES CAPABLE OF BOTH BATCH AND REAL-TIME PROCESSING AND METHOD AND USE THEREOF

Non-Final OA §103
Filed
Mar 14, 2024
Examiner
QIAN, SHELLY X
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
Capital One Services LLC
OA Round
3 (Non-Final)
Grant Probability
37% (At Risk)
OA Rounds
3-4
To Grant
3y 11m
With Interview
57%

Examiner Intelligence

Career Allow Rate
37% (47 granted / 126 resolved; -17.7% vs TC avg)
Interview Lift
+19.4% across resolved cases with interview
Avg Prosecution
3y 11m typical; 28 applications currently pending
Total Applications
154 across all art units

Statute-Specific Performance

§101
16.7% (-23.3% vs TC avg)
§103
64.0% (+24.0% vs TC avg)
§102
10.6% (-29.4% vs TC avg)
§112
6.3% (-33.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 126 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/6/2025 has been entered.

Response to Arguments

Applicant's arguments filed 10/6/2025 have been fully considered, but they are not persuasive. The rejection of claims 7-12 under 35 U.S.C. 101 is withdrawn because those claims are cancelled.

Applicant argues (p. 14) that the cited prior art of record, in combination, teaches only a single matching step, namely logical fuzzy matching between raw records and cached records, and does not teach a blocking step and a machine learning step that identify first potential matches and then actual matches to be merged. Examiner respectfully disagrees. Nachnani determines, using fuzzy matching logic [0008], whether any record from the feed matches an existing record associated with an entity (i.e., a candidate/potential match entity pair). Matching records from the feed are then merged with the existing record (i.e., an actual match entity pair) to form a merged composite record to be associated with the cluster (i.e., blocking engine) for the entity [0051]. The overall confidence score for the merged composite record is based on weighted confidence scores calculated for features (i.e., feature engine) of the composite record [0013].
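The match, merge, and score flow that the Office action attributes to Nachnani (fuzzy-match an incoming feed record against an existing entity record, merge on a match, then score the composite from weighted per-feature confidences) can be sketched as follows. This is an illustrative sketch only: the field names, weights, per-feature scores, and the use of `difflib` for fuzzy matching are assumptions for illustration, not details taken from Nachnani.

```python
from difflib import SequenceMatcher

def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Illustrative fuzzy matching: subsequence similarity ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def merge_records(existing: dict, incoming: dict) -> dict:
    """Merge an incoming feed record into the existing record,
    enriching fields that are missing from the existing record."""
    merged = dict(existing)
    for field, value in incoming.items():
        if value and not merged.get(field):
            merged[field] = value
    return merged

def composite_confidence(scores: dict, weights: dict) -> float:
    """Weighted per-feature confidence scores -> overall composite score."""
    total = sum(weights.values())
    return sum(weights[f] * scores.get(f, 0.0) for f in weights) / total

existing = {"name": "Acme Corp", "city": "Richmond", "phone": None}
incoming = {"name": "ACME Corporation", "city": "Richmond", "phone": "555-0100"}

if fuzzy_match(existing["name"], incoming["name"], threshold=0.7):
    composite = merge_records(existing, incoming)
    score = composite_confidence(
        scores={"name": 0.74, "city": 1.0, "phone": 0.9},   # hypothetical
        weights={"name": 0.5, "city": 0.3, "phone": 0.2},   # hypothetical
    )
```

Here the composite score works out to 0.5·0.74 + 0.3·1.0 + 0.2·0.9 = 0.85 under the assumed weights.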
Nachnani does not disclose the claim element "machine learning engine"; however, nodes (i.e., engines) in an Elasticsearch cluster play different roles, such as data nodes (i.e., blocking engine) for storing and operating on data, client nodes for balancing request load, ingestion nodes (i.e., cleansing engine) for preprocessing documents before indexing, and machine learning nodes (i.e., machine learning engine) for machine learning tasks (Kathare: sec. 2.5). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Kathare to Nachnani. One having ordinary skill in the art would have found motivation to implement the record matching and merging of Nachnani using the open-source distributed search engine of Kathare.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nachnani et al., US patent application 2012/0023107 [herein "Nachnani"], and further in view of Kathare et al., "A Comprehensive Study of Elastic Search," JRSE 4:11, Nov. 2022 [herein "Kathare"].
Claim 1 recites "A computer-implemented method comprising: receiving, by at least one processor, a plurality of entity records, the plurality of entity records stored in an elastic search environment, the plurality of entity records associated with at least one candidate entity record;" Nachnani receives records (i.e., entity records) from a feed that tracks (i.e., monitors) real-time changes [0026] [0173] in data objects (i.e., entities) [0008] stored in a database [0013], and performs search on the stored data [0038]. Nachnani manages applications running in a virtual machine (i.e., containerization technology) [0036] in a cloud computing environment [0201].

Claim 1 further recites "utilizing, by the at least one processor, in real time, a microservice module capable of leveraging a containerization technology in a cloud-first computer-implemented real-time scalable processing cluster to". Nachnani determines whether a record from the feed matches an existing record associated with a cluster for the entity (i.e., candidate entity pair). Matching records from the feed are then merged with the existing record (i.e., matching entity pair) to form a merged composite record to be associated with the cluster [0051].

Claim 1 further recites "determine a match of each candidate entity pair, each candidate entity pair comprising: at least 20,000 entity records of the plurality of entity records and the at least one candidate entity record; wherein the microservice module monitoring the cloud-first computer-implemented real-time scalable processing cluster is configured to: utilize a cleansing engine to cleanse the at least one candidate entity record;" Nachnani analyzes merged composite records to determine a confidence (i.e., similarity) score indicative of the quality of matching records [0051]. The score may depend on the quality and recency of individual records in the cluster [0052]. If a record is incomplete, it is enriched (i.e., cleansed) with available information [0098].
Claim 1 further recites "determine a status of a blocking engine, the status representing a processing load associated with the blocking engine; utilize, responsive to the status of the blocking engine, the blocking engine to determine candidate entity pairs representing potential matches to the at least one candidate entity record from the plurality of entity records;" The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035].

Claim 1 further recites "utilize at least one feature engine to generate candidate entity pair features for each of the candidate entity pairs based at least in part on the at least one candidate entity records and the potential matches; utilize at least one machine learning engine to determine a similarity score of each candidate entity pair based at least in part on the candidate entity pair features of each candidate entity pair produced by the blocking engine;" Nachnani determines, using fuzzy matching logic [0008], whether any record from the feed matches an existing record associated with an entity. Matching records from the feed are then merged with the existing record to form a merged composite record to be associated with the cluster (i.e., blocking engine) for the entity [0051]. The overall confidence score for the merged composite record is based on weighted confidence scores calculated for features (i.e., feature engine) of the composite record (i.e., candidate entity pair features) [0013].
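The cleanse → block → featurize → score pipeline recited in claim 1 can be illustrated with a minimal sketch. The blocking key (first letter of the name), the two comparison features, the 0.5 match threshold, and the fixed linear "model" standing in for the machine learning engine are all invented for illustration; the claim does not specify them:

```python
def cleanse(record: dict) -> dict:
    """Cleansing engine: normalize string fields of the candidate record."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def block(candidate: dict, records: list) -> list:
    """Blocking engine: keep only pairs sharing a cheap blocking key
    (here, the first letter of the name), so the scorer never sees
    every possible pair."""
    key = candidate["name"][:1]
    return [(candidate, r) for r in records if r["name"][:1] == key]

def features(pair: tuple) -> list:
    """Feature engine: per-pair comparison features."""
    a, b = pair
    return [float(a["name"] == b["name"]),
            float(a.get("city") == b.get("city"))]

def similarity(feats: list) -> float:
    """Machine learning engine stand-in: a fixed linear model."""
    weights = [0.7, 0.3]  # hypothetical learned weights
    return sum(w * f for w, f in zip(weights, feats))

records = [{"name": "acme", "city": "richmond"},
           {"name": "zen co", "city": "austin"}]
candidate = cleanse({"name": " Acme ", "city": "Richmond"})

pairs = block(candidate, records)
scores = [similarity(features(p)) for p in pairs]
matches = [p for p, s in zip(pairs, scores) if s >= 0.5]  # merge step would follow
```

Blocking prunes the non-"a" record before any scoring, which is the point of a separate blocking step: the expensive per-pair feature and model work runs only on the surviving candidates.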
Claim 1 further recites "display the similarity score representing the at least one candidate entity record; determine at least one identified matching pair from the candidate entity pairs based at least in part on the similarity score of each candidate entity pair; and merge the at least one candidate entity record and the potential matches of the at least one identified match based on the similarity score and a user selection." Given a record from the feed, Nachnani uses a FindMatchingRecords method to find existing records matching the given record "sufficiently well" [0072], whose results, including the match score (Table 10, [0071]), are displayed to a user [0208].

Nachnani does not disclose the claim element "microservice module"; however, Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (i.e., elastic search environment) (Kathare: sec. 5.4).

Nachnani does not teach the claim element "at least 20,000 entity records"; however, Kathare divides an index into a number of shards, which are then distributed and replicated across a number of clusters (Kathare: sec. 2.6-7). Elasticsearch scales horizontally up to a few petabytes of data (i.e., at least 20,000 records), with a preferred limit per shard of 50 GB (Kathare: sec. 4.1).

Nachnani does not disclose the claim element "machine learning engine"; however, nodes (i.e., engines) in an Elasticsearch cluster play different roles, such as data nodes (i.e., blocking engine) for storing and operating on data, client nodes for balancing request load, ingestion nodes (i.e., cleansing engine) for preprocessing documents before indexing, and machine learning nodes (i.e., machine learning engine) for machine learning tasks (Kathare: sec. 2.5). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Kathare to Nachnani.
One having ordinary skill in the art would have found motivation to implement the record matching and merging of Nachnani using the open-source distributed search engine of Kathare.

Claim 13 is analogous to claim 1 and is similarly rejected.

Claim 2 recites "The computer-implemented method of claim 1, wherein the microservice module instantiates at least one second blocking engine based at least in part on a system status." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035]. Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated (i.e., instantiated) across a number of clusters (Kathare: sec. 2.6-7).

Claim 14 is analogous to claim 2 and is similarly rejected.

Claim 3 recites "The computer-implemented method of claim 1, wherein the microservice module instantiates at least one second blocking engine based at least in part on a system status and the status of the blocking engine exceeds a threshold." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035]. Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated (i.e., instantiated) across a number of clusters (Kathare: sec. 2.6-7). Elasticsearch scales horizontally up to a few petabytes of data, with a preferred limit (i.e., threshold) per shard of 50 GB (Kathare: sec. 4.1).

Claim 15 is analogous to claim 3 and is similarly rejected.

Claim 4 recites "The computer-implemented method of claim 1, wherein the microservice module instantiates at least one second feature engine based at least in part on a system status." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035]. Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated (i.e., instantiated) across a number of clusters (Kathare: sec. 2.6-7). Nodes (i.e., engines) in an Elasticsearch cluster play different roles, such as data nodes for storing and operating on data, client nodes for balancing request load, ingestion nodes for preprocessing documents before indexing, and machine learning nodes for matching and scoring (i.e., feature engine) (Kathare: sec. 2.5).

Claim 16 is analogous to claim 4 and is similarly rejected.

Claim 5 recites "The computer-implemented method of claim 1, wherein the microservice module instantiates at least one second machine learning engine based at least in part on a system status." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035]. Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated (i.e., instantiated) across a number of clusters (Kathare: sec. 2.6-7). Nodes (i.e., engines) in an Elasticsearch cluster play different roles, such as data nodes for storing and operating on data, client nodes for balancing request load, ingestion nodes for preprocessing documents before indexing, and machine learning nodes for matching and scoring (Kathare: sec. 2.5).

Claim 17 is analogous to claim 5 and is similarly rejected.

Claim 6 recites "The computer-implemented method of claim 1, wherein the microservice module instantiates at least one second blocking engine and at least one feature engine based at least in part on a system status." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035]. Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated (i.e., instantiated) across a number of clusters (Kathare: sec. 2.6-7).

Claim 18 is analogous to claim 6 and is similarly rejected.

Claim 19 recites "The at least one non-transitory computer-readable storage medium of claim 13, wherein the microservice module combines the at least one blocking engine with at least one second blocking engine based at least in part on a status of the at least one blocking engine." The interface between system and network in Nachnani includes load sharing functionality to balance loads (i.e., status) and distribute incoming requests evenly over a plurality of servers (i.e., engines) [0035].
Nachnani and Kathare teach claim 1, where Kathare teaches Beats as a collection of lightweight agents (i.e., microservices) installed on servers for data shipping to Elasticsearch (Kathare: sec. 5.4). An index is divided into a number of shards, which are then distributed and replicated across a number of clusters (Kathare: sec. 2.6-7). Nachnani does not disclose this claim; however, Elasticsearch supports splitting existing shards into more shards, or shrinking (i.e., combining) existing shards into fewer shards, depending on load (Kathare: sec. 5, para. 5). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Kathare to Nachnani. One having ordinary skill in the art would have found motivation to implement the record matching and merging of Nachnani using the open-source distributed search engine of Kathare, which can be dynamically scaled up or down based on load.

Claim 20 is analogous to claim 19 and is similarly rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, such as Franciosa et al., US patent application 2005/0086224.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHELLY X. QIAN, whose telephone number is (408) 918-7599. The examiner can normally be reached Monday - Friday, 8-5 PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SHELLY X QIAN/
Examiner, Art Unit 2154

/SYED H HASAN/
Primary Examiner, Art Unit 2154

Prosecution Timeline

Mar 14, 2024
Application Filed
Apr 02, 2025
Non-Final Rejection — §103
Jul 07, 2025
Response Filed
Aug 04, 2025
Final Rejection — §103
Oct 06, 2025
Response after Non-Final Action
Nov 07, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Jan 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578892
FINGERPRINT TRACKING STRUCTURE FOR STORAGE SYSTEM
2y 5m to grant • Granted Mar 17, 2026
Patent 12475044
Method And System For Estimating Garbage Collection Suspension Contributions Of Individual Allocation Sites
2y 5m to grant • Granted Nov 18, 2025
Patent 12450197
BACKGROUND DATASET MAINTENANCE
2y 5m to grant • Granted Oct 21, 2025
Patent 12386904
SYSTEMS AND METHODS FOR MEASURING COLLECTED CONTENT SIGNIFICANCE
2y 5m to grant • Granted Aug 12, 2025
Patent 12314225
CONTINUOUS INGESTION OF CUSTOM FILE FORMATS
2y 5m to grant • Granted May 27, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds
3-4
Grant Probability
37%
With Interview (+19.4%)
57%
Median Time to Grant
3y 11m
PTA Risk
High
Based on 126 resolved cases by this examiner. Grant probability is derived from the examiner's career allow rate.
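The "With Interview" figure is consistent with simply adding the interview lift to the career allow rate: 47/126 ≈ 37.3%, plus 19.4%, gives ≈ 56.7%, displayed as 57%. A sketch of that assumed additive model follows; the combination rule is an inference from the displayed numbers, not a formula documented by this report:

```python
def projected_grant_probability(base_rate: float, interview_lift: float = 0.0) -> float:
    """Assumed model: examiner's career allow rate plus an additive
    interview lift, clamped to the valid probability range [0, 1]."""
    return min(max(base_rate + interview_lift, 0.0), 1.0)

base = 47 / 126                                            # ≈ 0.373, the 37% baseline
with_interview = projected_grant_probability(base, 0.194)  # ≈ 0.567, the displayed 57%
```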
