Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,600

AUTOMATED CONTAINER ORCHESTRATION PLATFORM TESTING

Status: Non-Final OA (§103)
Filed: Apr 15, 2024
Examiner: PAN, HANG
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (468 granted / 628 resolved; +19.5% vs TC avg), above average
Interview Lift: +25.1% (allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 2m typical timeline (34 applications currently pending)
Total Applications: 662 across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 59.0% (+19.0% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 628 resolved cases

Office Action (§103 Non-Final Rejection)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending and examined in this office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lopes et al. (US PGPUB 2024/0168855), hereinafter Lopes, in view of Patel et al. (US PGPUB 2019/0065345), hereinafter Patel.

Per claim 1, Lopes discloses a system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to (Fig. 11; a computer system with processors and memories): obtain, via a notebook repository, one or more training notebooks that are associated with respective pipeline types of the container orchestration platform, wherein the one or more training notebooks are interactive computational documents that include executable code and plain-text-formatted information (claims 1, 8; paragraphs [0019][0031][0043]; a system for orchestrating machine learning pipeline stages in a workflow that can be deployed into production data infrastructures using notebooks; a notebook collector can be configured to continuously scan internal and external repositories for notebooks with topics on ML; collecting different machine learning notebook data structures and different machine learning pipeline data indicating different machine learning pipeline stages associated with the different machine learning notebook data structures, i.e., the different ML notebooks are associated with different pipeline stages/types; lines of code are extracted from the notebooks; an automated framework can extract topics (textual information) from a data science workspace such as a machine learning notebook); extract, from the one or more training notebooks, one or more executable code elements, to obtain one or more sets of executable code for respective training notebooks of the one or more training notebooks, using respective sets of executable code of the one or more sets of executable code to generate one or more test pipelines (paragraphs [0042]-[0046]; the lines of code are extracted from the notebooks, assigned to cells, and labeled to different stages of testing pipelines; test pipelines are formed from the labeled cells); perform, via a machine learning operation system of the container orchestration platform, one or more cluster tests using respective test pipelines from the one or more test pipelines; and provide, for display, result information indicating results of the one or more cluster tests (paragraphs [0019][0046][0048][0049][0050]; the system can orchestrate the machine learning pipeline stages in a workflow that can be deployed into production data infrastructures; test pipelines are executed in parallel, and results (success/failure) are output to the user).

While Lopes discloses using respective sets of executable code of the one or more sets of executable code to generate one or more test pipelines, Lopes does not explicitly teach inserting testing information into respective sets of executable code of the one or more sets of executable code to generate one or more test pipelines. However, Patel suggests inserting testing information into executable code (paragraphs [0117]-[0119]; using a generic test case to test different applications by replacing placeholder fields in test execution methods of a test case with regular expressions (such as URLs), i.e., testing information (URLs) is inserted into executable code in a test case). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lopes and Patel to insert testing information into executable code to generate one or more test pipelines; this allows a generic test template (executable code) to be used to test different applications, thereby increasing the reusability of test code.

Per claim 2, Lopes further suggests provide, for display via a user device, at least one training notebook of the one or more training notebooks (paragraphs [0031][0036][0093]; retrieving training notebooks from a repository; the system also includes a display for displaying data; thus, it would have been obvious to display the retrieved notebooks for the user's inspection).
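The placeholder-replacement technique attributed to Patel (a generic test case made concrete by substituting testing information such as URLs into placeholder fields) can be sketched roughly as follows. The `{{NAME}}` placeholder syntax, the template, and the `insert_testing_information` helper are invented for illustration and are not taken from the cited reference:

```python
import re

# Hypothetical generic test-case template: placeholder fields stand in for
# application-specific testing information, so one template can test many apps.
TEST_TEMPLATE = """
def test_endpoint():
    response = http_get("{{TARGET_URL}}")
    assert response.status == {{EXPECTED_STATUS}}
"""

def insert_testing_information(template: str, values: dict) -> str:
    """Replace each {{NAME}} placeholder with its configured value."""
    def substitute(match: re.Match) -> str:
        return str(values[match.group(1)])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# Configure the generic template for one concrete application under test.
concrete_test = insert_testing_information(
    TEST_TEMPLATE,
    {"TARGET_URL": "https://example.test/health", "EXPECTED_STATUS": 200},
)
```

Because the testing information lives in configuration rather than in the test code, the same template is reusable across applications, which matches the examiner's stated rationale for the combination.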
Per claim 3, Patel further discloses wherein the testing information includes at least one of: one or more arguments, or one or more configurable code elements (paragraphs [0117]-[0119]; using a generic test case to test different applications by replacing placeholder fields in test execution methods of a test case with regular expressions (configurable code elements)).

Per claim 4, Patel further discloses wherein the one or more processors, to insert the testing information, are configured to: detect, in a set of executable code from the one or more sets of executable code, a placeholder element; and replace the placeholder element with a configurable code element of the one or more configurable code elements, wherein the configurable code element corresponds to the placeholder element (paragraphs [0117]-[0119]; using a generic test case to test different applications by replacing placeholder fields in test execution methods of a test case with regular expressions (configurable code elements)).

Per claim 5, Lopes further suggests wherein the one or more cluster tests are associated with a testing event, and wherein the one or more processors, to obtain the one or more training notebooks, are configured to: obtain the one or more training notebooks based on an occurrence of the testing event (paragraphs [0036][0048]; a public source collector, such as a Github application programming interface (API), can be used periodically (such as daily) to retrieve notebooks from repositories with relevant topics, i.e., a periodic trigger (testing event) is associated with obtaining the training notebooks).

Per claim 6, Lopes further suggests obtain configuration information indicating the notebook repository (paragraphs [0036][0048]; a public source collector, such as a Github application programming interface (API), can be used periodically (such as daily) to retrieve notebooks from repositories with relevant topics, i.e., the address of a repository must be obtained in order to access the repository). Patel further suggests obtain configuration information indicating the testing information (paragraphs [0117]-[0119]; using a generic test case to test different applications by replacing placeholder fields in test execution methods of a test case with obtained regular expressions).

Per claim 7, Lopes further suggests wherein the one or more test pipelines indicate respective workflows for the machine learning operation system, and wherein the one or more processors, to perform the one or more cluster tests, are configured to: execute, via the machine learning operation system, the respective workflows, wherein the result information indicates whether the respective workflows were successfully executed (paragraphs [0019][0034][0046][0048]; an automated framework can extract topics from a data science workspace such as a machine learning notebook, transform and annotate cells of the machine learning notebook to various machine learning pipeline stages, and orchestrate the machine learning pipeline stages in a workflow that can be deployed into production data infrastructures; after the pipelines are generated, artifacts can be sent to a queue for testing and verifying whether the stages of the pipelines/workflows are accurate; the experiments can be configured to validate whether a potential pipeline meets an end goal (SUCCESS) or not (FAILURE)).

Claims 8 and 11-15 recite similar limitations as claims 1-5 and 7. Therefore, claims 8 and 11-15 are rejected under similar rationales as claims 1-5 and 7.
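The Lopes-style flow as characterized in these rejections (code extracted from notebook cells, labeled to machine learning pipeline stages, and assembled in order into a test pipeline) can be sketched roughly as follows. The stage names, keyword heuristics, and function names are invented for illustration; the cited reference's actual labeling mechanism is not reproduced here:

```python
# Hypothetical keyword heuristics mapping a code cell to an ML pipeline stage.
STAGE_KEYWORDS = {
    "data_ingestion": ("read_csv", "load_data"),
    "training": ("fit(",),
    "evaluation": ("score(", "evaluate("),
}

def label_stage(code: str) -> str:
    """Assign a pipeline-stage label to one extracted code cell."""
    for stage, keywords in STAGE_KEYWORDS.items():
        if any(kw in code for kw in keywords):
            return stage
    return "unlabeled"

def build_test_pipeline(code_cells: list[str]) -> list[tuple[str, str]]:
    """Pair each code cell with its inferred stage, preserving notebook order."""
    return [(label_stage(cell), cell) for cell in code_cells]

# Invented example cells, as if extracted from a training notebook.
pipeline = build_test_pipeline([
    "df = load_data('train.csv')",
    "model.fit(df)",
    "model.score(holdout)",
])
```

The resulting ordered (stage, code) pairs are the kind of labeled structure from which, per the examiner's reading of Lopes, test pipelines are formed and then executed.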
Per claim 10, Lopes further suggests wherein the one or more training notebooks include training information for the respective pipeline types associated with the container orchestration platform (claims 1, 8; paragraphs [0019][0031][0043]; a system for orchestrating machine learning pipeline stages in a workflow that can be deployed into production data infrastructures using notebooks; a notebook collector can be configured to continuously scan internal and external repositories for notebooks with topics on ML; collecting different machine learning notebook data structures and different machine learning pipeline data indicating different machine learning pipeline stages associated with the different machine learning notebook data structures, i.e., the different ML notebooks are associated with different pipeline stages/types).

Claim 16 recites similar limitations as claims 1 and 2. Therefore, claim 16 is rejected under similar rationales as claims 1 and 2. Claim 20 recites similar limitations as claim 5. Therefore, claim 20 is rejected under similar rationales as claim 5.

Per claim 17, Lopes further suggests extract, from the one or more training notebooks, one or more executable code elements included in the executable code (paragraphs [0042]-[0046]; the lines of code are extracted from the notebooks, assigned to cells, and labeled to different stages of testing pipelines; test pipelines are formed from the labeled cells).

Per claim 18, Patel further suggests insert testing information into one or more executable code elements to generate the one or more test pipelines (paragraphs [0117]-[0119]; using a generic test case to test different applications by replacing placeholder fields in test execution methods of a test case with regular expressions).

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lopes, in view of Patel, and further in view of Clement et al. (US PGPUB 2023/0177261), hereinafter Clement.
Per claim 9, Lopes does not explicitly teach wherein the one or more training notebooks are interactive computational documents that include executable code and plain-text-formatted information, and wherein extracting the one or more executable code elements comprises: removing, from the one or more training notebooks, any information that is presented via a plain-text formatting syntax. However, Clement suggests the above (paragraphs [0004][0020][0023]; providing an interactive digital notebook that contains computer code and rich text elements; the notebook is organized by cells, containing rich markdown cells (text elements) and code cells (executable code elements); the content of each markdown cell is masked (filtered out)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lopes, Patel and Clement to remove text information from the training notebooks, because it is not needed to generate the pipelines.

Claim 19 recites similar limitations as claim 9. Therefore, claim 19 is rejected under similar rationales as claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANG PAN, whose telephone number is (571) 270-7667. The examiner can normally be reached 9 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HANG PAN/
Primary Examiner, Art Unit 2193
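The claim 9 technique as characterized above (removing plain-text-formatted information, i.e., markdown cells, from an interactive computational document so that only executable code elements remain) can be sketched as follows. The notebook content and the `strip_plain_text_cells` helper are invented for illustration; a real Jupyter-style notebook is JSON with this same cell structure:

```python
# Invented example of a notebook's cell structure: markdown cells carry
# plain-text-formatted information, code cells carry executable code.
notebook = {
    "cells": [
        {"cell_type": "markdown", "source": ["## Setup\n"]},
        {"cell_type": "code", "source": ["import pandas as pd\n"]},
        {"cell_type": "markdown", "source": ["Now train the model:\n"]},
        {"cell_type": "code", "source": ["model.fit(X, y)\n"]},
    ]
}

def strip_plain_text_cells(nb: dict) -> dict:
    """Drop every markdown cell, leaving only executable code cells."""
    return {
        **nb,
        "cells": [c for c in nb["cells"] if c.get("cell_type") != "markdown"],
    }

code_only = strip_plain_text_cells(notebook)
```

The remaining code cells are then what the extraction step described for claims 1 and 17 would operate on.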

Prosecution Timeline

Apr 15, 2024: Application Filed
Mar 13, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585574: UNIT TESTING OF COMPONENTS OF DATAFLOW GRAPHS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579052: MACHINE LEARNING-BASED DEVICE MATRIX PREDICTION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572354: CI/CD TEMPLATE FRAMEWORK FOR DEVSECOPS TEAMS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561182: STATELESS CONTENT MANAGEMENT SYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561230: DEBUGGING FRAMEWORK FOR A RECONFIGURABLE DATA PROCESSOR (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
