Prosecution Insights
Last updated: April 19, 2026
Application No. 18/248,690

SYSTEM, METHOD, AND MEDIUM FOR LIFECYCLE MANAGEMENT TESTING OF CONTAINERIZED APPLICATIONS

Status: Non-Final OA (§103)
Filed: Apr 12, 2023
Examiner: RUSIN, KAYO LISA
Art Unit: 2114
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Rakuten Mobile Inc.
OA Round: 3 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 91%, above average (21 granted / 23 resolved; +36.3% vs TC avg)
Interview Lift: +13.3% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 3m (typical timeline)
Career History: 33 total applications across all art units; 10 currently pending

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center averages are estimates; based on career data from 23 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending. Claims 1-20 are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee, Nate ("How to Test Autoscaling in Kubernetes," Speed Scale, Aug 19, 2022), hereinafter referred to as Lee, in view of Ahrens, Ken ("Combine load testing and observability with Speed Scale and New Relic One," Jan 18, 2022), hereinafter referred to as Ahrens, in further view of the Jet Brains article titled "Automated Testing for CI/CD" from its web-accessible TeamCity CI/CD Guide, hereinafter referred to as Jet Brains NPL.
Per claim 1, Lee teaches the system comprising: a memory that stores instructions; and (page 2, memories) at least one processor configured by the instructions to perform operations comprising: (page 2, CPUs) … automatically displaying the first testing sequence results, wherein the first testing sequence results comprise an assessment of a health of the second containerized application; and (page 10, the report is generated) checking autoscaling behavior of the second containerized application, wherein the checking comprises determining whether the second containerized application replicates or deletes instances of itself in response to increased or decreased demand for its services (page 10, the report is generated showing the number of pods scaling up or down depending on CPU usage).

Lee fails to explicitly teach: a system for lifecycle management testing of containerized applications; executing a first testing script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to at least one automated testing system; and running, by the at least one automated testing system, a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Ahrens teaches … one automated testing system (page 2, using Speed Scale as an automated testing tool that is part of the CI environment).

It would have been obvious to a person of ordinary skill in the art to combine Lee with Ahrens because Lee teaches how Speed Scale can be used to generate traffic and test whether pod scaling is performed as expected, and Ahrens shows how that capability can then be used as part of the CI environment as an automated testing system.
Lee in view of Ahrens fails to teach: a system for lifecycle management testing of containerized applications; executing a first testing script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to [an automated system]; and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Jet Brains NPL teaches: a system for lifecycle management testing of containerized applications (page 2, "Automated testing tools & CI/CD" section); executing a first testing script by a first containerized application, thereby causing an API call to be issued to [an automated system] (page 2); and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call (page 2).

It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teachings of Lee in view of Ahrens with those of Jet Brains NPL because Jet Brains NPL simply expands on how the automated testing tool can be used in the CI/CD testing environment. Jet Brains NPL teaches that certain components of the automated testing system can be triggered externally, and it would have been obvious to implement such programmatic triggers using an API call, since APIs were a well-known and conventional mechanism for invoking services in distributed systems.
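In Kubernetes terms, the claim-1 flow as the rejection characterizes it can be sketched in a few lines: a test script running in a first container builds an API call that asks an automated testing system to run a sequence against a second, different application, and the autoscaling check compares replica counts against the direction of demand. This is a minimal sketch only; the endpoint path, payload fields, and function names are invented for illustration and do not come from the claims or the cited references.

```python
import json

# Hypothetical sketch of the claim-1 flow. The route and payload field
# names are illustrative assumptions, not a real testing system's API.
def build_trigger_call(target_app: str, sequence_id: str) -> dict:
    """Build the API call a first container's test script would issue to
    an automated testing system to run a sequence against a second app."""
    return {
        "method": "POST",
        "path": "/v1/test-sequences",  # assumed route
        "body": json.dumps({"target": target_app, "sequence": sequence_id}),
    }

def check_autoscaling(replicas_before: int,
                      replicas_after: int,
                      demand_delta: int) -> bool:
    """The claimed check: did the application replicate instances under
    increased demand, or delete instances under decreased demand?"""
    if demand_delta > 0:
        return replicas_after > replicas_before
    if demand_delta < 0:
        return replicas_after < replicas_before
    return replicas_after == replicas_before
```

The point of the split is that the triggering script and the system under test live in different containers: the script only needs the API surface, while the replica-count comparison is what the generated report is read against.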
Per claim 2, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 1, wherein the first containerized application runs in a separate container from the second containerized application (Lee, page 2, shows a second containerized application being tested as "Pod 1," for instance; Jet Brains NPL, page 2, indicates an external containerized application within the CI environment that would trigger the automatic testing of the second containerized application. These are in separate containers, since a person of ordinary skill in the art prior to the effective filing date of the claimed invention understands that the purpose of a container is to hold all of the necessary dependencies and environment factors for a specific application to function in a modular fashion. The testing component, or the triggering of said testing component, is functionally separate from the actual functioning of the component and thus would reside in a separate container).

Per claim 3, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 1, wherein the operations further comprise: running, by a second automated testing system, a second testing sequence on the second containerized application, using the testing sequence results from the first testing sequence as an input parameter (page 2, "Automated testing tools & CI/CD" section, first paragraph: "many automated testing tools support integration with CI/CD tools, so you can feed the test data into the pipeline and run the tests in stages, with results provided after each step. Depending on your CI tool, you can choose whether to move a build to the next stage based on the outcome of the tests in the previous step").

Per claim 4, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 3, wherein the operations further comprise: running, by a third automated testing system, a third testing sequence on the second containerized application, using the testing sequence results from the first testing sequence as an input parameter (page 2, "Automated testing tools & CI/CD" section, first paragraph: "many automated testing tools support integration with CI/CD tools, so you can feed the test data into the pipeline and run the tests in stages, with results provided after each step. Depending on your CI tool, you can choose whether to move a build to the next stage based on the outcome of the tests in the previous step").

Per claim 5, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 1, wherein the first and second containerized applications are in different pods on a same cluster of a network (Jet Brains NPL, page 2, teaches that there are different testing components within the CI/CD environment. A person of ordinary skill in the art prior to the effective filing date of the claimed invention would understand that they are in different pods, since they are functionally different and distinct, and that they would still need to be in the same cluster of a network for testing via traffic generation to occur).
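The staged-pipeline behavior quoted from Jet Brains NPL for claims 3 and 4 (each stage receives the previous stage's results as an input parameter, and the build only advances when a stage passes) can be sketched as follows. The stage functions, result fields, and latency threshold here are hypothetical, chosen only to make the gating behavior concrete.

```python
from typing import Callable

# Minimal sketch of a staged test pipeline: results flow from one stage
# into the next, and the pipeline halts when any stage reports failure.
def run_pipeline(stages: list[Callable[[dict], dict]],
                 initial: dict) -> tuple[bool, dict]:
    results = initial
    for stage in stages:
        results = stage(results)
        if not results.get("passed", False):
            return False, results  # do not move the build to the next stage
    return True, results

def first_sequence(inputs: dict) -> dict:
    # Hypothetical first testing sequence: reports a latency measurement.
    return {"passed": True, "latency_ms": 120}

def second_sequence(inputs: dict) -> dict:
    # Uses the first sequence's results as an input parameter (claim 3):
    # passes only if the measured latency stayed under an assumed budget.
    return {"passed": inputs["latency_ms"] < 200}

ok, final = run_pipeline([first_sequence, second_sequence], {})
```

Here `ok` is `True` because the first stage's 120 ms result satisfies the second stage's threshold; had the first sequence reported a slower result, the pipeline would stop without running anything further.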
Per claim 6, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 1, wherein the operations further comprise: executing a second testing script, wherein the execution of the second testing script causes the at least one automated testing system to generate a testing sequence to be used in a testing pipeline (Jet Brains NPL, page 2, teaches a testing sequence being used in a testing pipeline and a testing sequence triggering an automated testing system; Ahrens, page 3, teaches an automated testing system that derives and executes testing sequences by capturing and replaying application traffic within a CI environment).

Per claim 7, Lee in view of Ahrens in further view of Jet Brains NPL teaches the system of claim 1, wherein the assessment of the health of the second containerized application comprises at least one of: verifying whether the second containerized application is up and running, verifying whether the second containerized application is ready to accept traffic, verifying whether node scheduling of the second containerized application is optimized, calculating a number of copies of the second containerized application that should be scheduled, verifying a container image size of the second containerized application, or verifying node failover information associated with the second containerized application (Lee, pages 8-10, teaches issuing test requests to the application in which the data is dependent on the application's responses, thereby verifying that the application is up and running during testing).

Per claim 8, Lee teaches the method comprising: automatically displaying the first testing sequence results, wherein the first testing sequence results comprise an assessment of a health of the second containerized application; and (page 10, the report is generated) checking autoscaling behavior of the second containerized application, wherein the checking comprises determining whether the second containerized application replicates or deletes instances of itself in response to increased or decreased demand for its services (page 10, the report is generated showing the number of pods scaling up or down depending on CPU usage).

Lee fails to teach: … lifecycle management testing of containerized applications … executing a first testing script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to at least one automated testing system; and running, by the at least one automated testing system, a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Ahrens teaches … one automated testing system (page 2, using Speed Scale as an automated testing tool that is part of the CI environment).

It would have been obvious to a person of ordinary skill in the art to combine Lee with Ahrens because Lee teaches how Speed Scale can be used to generate traffic and test whether pod scaling is performed as expected, and Ahrens shows how that capability can then be used as part of the CI environment as an automated testing system.
Lee in view of Ahrens fails to teach: … lifecycle management testing of containerized applications … executing a first testing script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to …; and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Jet Brains NPL teaches: … lifecycle management testing of containerized applications … (page 2, "Automated testing tools & CI/CD" section); executing a first testing script by a first containerized application, thereby causing an API call to be issued to … (page 2); and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call (page 2).

It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teachings of Lee in view of Ahrens with those of Jet Brains NPL because Jet Brains NPL simply expands on how the automated testing tool can be used in the CI/CD testing environment. Jet Brains NPL teaches that certain components of the automated testing system can be triggered externally, and it would have been obvious to implement such programmatic triggers using an API call, since APIs were a well-known and conventional mechanism for invoking services in distributed systems.

Per claims 9-14, the claims recite similar claim limitations as those of claims 2-7 and are therefore rejected for similar reasons as claims 2-7.
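The "at least one of" health assessment recited in claims 7 and 14 lines up with information Kubernetes already exposes in pod status: whether the pod is running (liveness) and whether its containers report ready to accept traffic (readiness). Below is a minimal sketch assuming an input dict shaped loosely like the Kubernetes pod status fields `phase` and `containerStatuses`; the output keys are invented labels for the two claimed checks.

```python
def assess_health(pod_status: dict) -> dict:
    """Evaluate two of the claimed health checks from a pod-status snapshot.

    'up_and_running'    ~ the pod phase reports Running.
    'ready_for_traffic' ~ every container reports ready, i.e. the app
                          can accept traffic.
    """
    containers = pod_status.get("containerStatuses", [])
    return {
        "up_and_running": pod_status.get("phase") == "Running",
        "ready_for_traffic": bool(containers) and all(
            c.get("ready", False) for c in containers
        ),
    }
```

Only two of the six claimed alternatives are sketched here, which is all the claim requires ("at least one of"); the others (node scheduling, image size, failover information) would draw on different cluster-level data.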
Per claim 15, Lee teaches … automatically displaying the first testing sequence results, wherein the first testing sequence results comprise an assessment of a health of the second containerized application; and (page 10, the report is generated) checking autoscaling behavior of the second containerized application, wherein the checking comprises determining whether the second containerized application replicates or deletes instances of itself in response to increased or decreased demand for its services (page 10, the report is generated showing the number of pods scaling up or down depending on CPU usage).

Lee fails to teach: a non-transitory computer-readable medium for lifecycle management testing of containerized applications, storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: executing a test script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to at least one automated testing system; and running, by the at least one automated testing system, a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Ahrens teaches … one automated testing system (page 2, using Speed Scale as an automated testing tool that is part of the CI environment).

It would have been obvious to a person of ordinary skill in the art to combine Lee with Ahrens because Lee teaches how Speed Scale can be used to generate traffic and test whether pod scaling is performed as expected, and Ahrens shows how that capability can then be used as part of the CI environment as an automated testing system.
Lee in view of Ahrens fails to teach: a non-transitory computer-readable medium for lifecycle management testing of containerized applications, storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: executing a test script by a first containerized application, thereby causing an Application Programming Interface (API) call to be issued to …; and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call.

However, Jet Brains NPL teaches: a non-transitory computer-readable medium for lifecycle management testing of containerized applications, storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising (page 2, "Automated testing tools & CI/CD" section): executing a test script by a first containerized application, thereby causing an API call to be issued to … (page 2, "Automated testing tools & CI/CD" section); and running … a first testing sequence on a second containerized application, different from the first containerized application, based on the API call (page 2, "Automated testing tools & CI/CD" section).

It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teachings of Lee in view of Ahrens with those of Jet Brains NPL because Jet Brains NPL simply expands on how the automated testing tool can be used in the CI/CD testing environment. Jet Brains NPL teaches that certain components of the automated testing system can be triggered externally, and it would have been obvious to implement such programmatic triggers using an API call, since APIs were a well-known and conventional mechanism for invoking services in distributed systems.
Per claims 16, 17, 18, and 19, the claims recite similar claim limitations to those of claims 2, 3, 5, and 6, respectively, and thus are rejected for similar reasons.

Per claim 20, Lee in view of Ahrens in further view of Jet Brains NPL teaches the medium of claim 15, wherein the assessment of the health of the second containerized application comprises at least one of: verifying whether the second containerized application is up and running, verifying whether the second containerized application is ready to accept traffic, verifying whether node scheduling of the second containerized application is optimized, calculating a number of copies of the second containerized application that should be scheduled, verifying a container image size of the second containerized application, or verifying node failover information associated with the second containerized application; and wherein, in response to an increased demand for the second containerized application's services, at least one new copy of the second containerized application is made; in response to a decreased demand for the second containerized application's services, at least one copy of the second containerized application is deleted; and in response to the making or deletion of at least one copy of the second containerized application, performance of the second containerized application is measured (Lee, page 10, showcases how Speed Scale can be used to show the success rate of the requests when artificial traffic is generated in order to test the autoscaling capabilities. It generates a report so that the user can check that "the autoscaler has scaled up with the increasing load the same way as in the previous section." A person of ordinary skill in the art prior to the effective filing date of the claimed invention will appreciate that autoscaling also works the other way around: when the traffic decreases, so does the number of pods. Under the broadest reasonable interpretation, this process does include calculating a number of copies of the second containerized application that should be scheduled, since that number is necessary in order to know whether the autoscaling is working as intended).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Amir, Li-or, "Kubernetes architecture: control plane, data plane, and 11 core components explained," May 15, 2021, teaches the Kubernetes architecture in further detail.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYO LISA RUSIN, whose telephone number is (703) 756-1679. The examiner can normally be reached Monday-Friday, 8:30-5:00 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ashish Thomas, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/K.L.R./
Examiner, Art Unit 2114

/ASHISH THOMAS/
Supervisory Patent Examiner, Art Unit 2114
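The claim-20 limitation of "calculating a number of copies … that should be scheduled," which the rejection reads onto Lee's autoscaling report, corresponds to the scaling rule the Kubernetes Horizontal Pod Autoscaler documents: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A short sketch of that rule (the function name and example metric values are illustrative):

```python
from math import ceil

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric).
    The result rises when the observed metric (e.g., CPU utilization)
    exceeds the target and falls when it drops below, matching the
    scale-up and scale-down behavior discussed for claim 20."""
    return ceil(current_replicas * (current_metric / target_metric))

# With a 50% CPU target: 2 pods at 90% observed load scale up to 4,
# and 4 pods at 20% observed load scale back down to 2.
```

This is the calculation that makes the "number of copies that should be scheduled" explicit: comparing it against the observed replica count is how a report like Lee's shows whether the autoscaler behaved as intended.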

Prosecution Timeline

Apr 12, 2023
Application Filed
Nov 15, 2024
Non-Final Rejection — §103
Mar 16, 2025
Response Filed
Jun 04, 2025
Final Rejection — §103
Sep 08, 2025
Request for Continued Examination
Sep 19, 2025
Response after Non-Final Action
Jan 01, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591500: Event Monitoring and Code Autocorrecting Batch Processing System (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579040: Optimized Snapshot Storage And Restoration Using An Offload Target (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566670: SUPPORTING AUTOMATIC AND FAILSAFE BOOTING OF BMC AND BIOS FIRMWARE IN A CRITICAL SECURED SERVER SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554601: ELECTRONIC APPARATUS AND CONTROL METHOD THEREROF FOR HANDLING A CEC MALFUNCTION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554609: DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR PROVIDING ENVIRONMENT TRACKING CONTENT (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91%
With Interview: 99% (+13.3%)
Median Time to Grant: 2y 3m
PTA Risk: High
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
