Prosecution Insights
Last updated: April 19, 2026
Application No. 18/380,658

CONTAINERIZED MICROSERVICE ARCHITECTURE FOR MANAGEMENT APPLICATIONS

Status: Non-Final OA (§103)
Filed: Oct 17, 2023
Examiner: DAO, TUAN C.
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (642 granted / 782 resolved; +27.1% vs TC avg; above average)
Interview Lift: +15.6% (strong) for resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 38 applications currently pending
Career History: 820 total applications across all art units

Statute-Specific Performance

§101: 18.3% (-21.7% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 782 resolved cases.

Office Action (§103)
DETAILED ACTION

The instant application, Application No. 18/380,658, filed on 10/17/2023, is presented for examination by the examiner. Claims 1-26 are pending in the application. Claims 1, 11 and 18 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Priority

As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant’s claim for priority based on applications filed on 07/25/2023.

Drawings

The applicant’s drawings submitted are acceptable for examination purposes.

Allowable Subject Matter

Claims 3-5, 13-15 and 19-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c).

Prior art:

US 2021/0216583 to Wu — Container images form a basis for activation of container instance services and contain all the data required by containers. Before activation of container instance services, complete container images have to be downloaded locally.
A common Docker container framework implements a client-repository model. Therein, a client end manages the actual lifecycle of a container, including activation, pause, restoration and destruction. The repository end serves to centrally store container images to be deployed for the client end to acquire the required images freely.

US 2018/0092026 to Patel — Service clusters are hosted on virtual machines of an embodiment cloud network. An embodiment cloud network may include a plurality of geographically diverse deployment sites (e.g., data centers) where various virtual machines are physically deployed. Decomposition of the system into a set of services allows each service (e.g., each service provided by the PTX platform) to be independently deployed and managed. Thus, system resilience may be improved as failures are localized to individual services. Furthermore, rapid and agile deployment of services may also be achieved.

US 2018/0026944 to Phillips — With reference to FIGS. 1, 2 and 3, the policy generation component 302 can be configured to facilitate users with generating firewall policy rules for their applications or services deployed on the service provider network 102 (e.g., via one or more of the one or more server devices 104.sub.1-n, the one or more VMs 106.sub.1-n, the one or more data stores 108.sub.1-n, or combinations thereof). For example, a user that created, owns, manages, or otherwise has authority to control security measures associated with a service or application deployed on the service provider network 102 can interface with the service provider network 102 (e.g., via the user portal 223) and employ the policy generation component 302 to generate a firewall policy rule for the service or application.

US 2018/0019948 to Patwardhan — A virtual networking switch on a host computing device can receive a first data packet of a micro-service data flow from a virtual machine running on the host computing device.
The virtual machine can be hosting a set of one or more container instances providing micro-services, and the first data packet can include micro-service flow data identifying a first container instance, from the set of one or more container instances, that transmitted the first data packet.

US 2018/0006935 to Mutnuru — As discussed above, service nodes 10 may comprise virtual service instances deployed as VMs or other virtual entities hosted on one or more physical devices, e.g., servers, to offer individual services or chains of services. The use of virtual service instances enables automatic scaling of the services on-demand. In one example, SDN controller 19 may establish or instantiate additional virtual service instances within services complex 9 for a given service based on an increased need for the given service by packets of packet flows 26 received by gateway 8.

US 2017/0295073 to Wu — Alternatively, the controller determines a service function to be deployed, applies for a resource needed for the service function that needs to be deployed, determines a service deployment network element, and allocates the resource that is applied for to the service function deployment network element. The controller virtualizes, based on the determined service function to be deployed, a determined service deployment area, the determined service deployment resource, or the like, a hardware resource in a data center that corresponds to the service deployment area into a virtual machine by using a virtualization technology.

US 2015/0188788 to Kolesnik — As a further example, virtualization manager 114 may be a consumer and the external service may be VM hosting as a service which may be employed by the virtualization manager as a source of virtual machine hosts to be subject to its management (e.g., hosts to which virtual machines can be deployed).
The prior art of record does not disclose and/or fairly suggest at least the claimed limitations recited in such manner in dependent claims 3-5, 13-15 and 19-22.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 7, 9-11, 17-18 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over US 2017/0244593 to Rangasami et al. (hereafter “Rangasami”) in view of US 2019/0379578 to Mishra et al. (hereafter “Mishra”).

As per claim 1, Rangasami discloses a method for implementing a microservice architecture for a management application (FIGs. 1-2; paragraphs 0036-0037: “The various microservices expose interfaces that enable the microservices to invoke one another to exchange data and perform the respective sets of functions in order to create one or more overall applications. Each of the microservices may adhere to a well-defined Application Programming Interface (API) and may be orchestrated by invoking the API of the microservice.”), the method comprising:

deploying a first service of the management application on a first container running on a container host (FIG. 1; paragraphs 0036-0037: “each cloud service 124 may host or include a plurality of containers 126, 129 that each provides an execution environment for at least one application (e.g., microservice) deployed by enterprise 116.” → cloud service provider as the container host as claimed);

employing a service-to-service communication mechanism to control communication between the first service and a second service of the management application (FIGs. 1, 3 and 5; paragraphs 0032, 0036-0037 and 0078: “Applications executing on containers 125 may communicate with applications executing on containers 126, 129 via virtual circuits 127A, 127B (“virtual circuits 127”) provisioned for cloud exchange 102 to interconnect enterprise 116 with cloud services 124. DR manager 140 of cloud exchange 102 may provision virtual circuits 127 in networking platform 108 of cloud exchange 102, to transport data packets between containers 126 of the first cloud service 124A and containers 129 of the second cloud service 124B. DR manager 140 of cloud exchange 102 may communicate code and state from the containers 126 of the first cloud service 124A to the containers 129 of the DR infrastructure layers 144 of the second cloud service 124B via virtual circuits 127.”);

employing a proxy to control communication between the first service and an external application in an external device (FIGs. 1, 3 and 5; paragraphs 0059-0061: “As another example, routers 110 of networking platform 108 may be configured to redirect application traffic from container 126A at first cloud service 124A to container 129A at second cloud service 124B. For instance, router 110A may be configured to send application traffic addressed to subnet 128A via DR virtual circuit 127B to subnet 128B of cloud service 124B.”); and

enabling a container orchestrator to monitor and manage the first service (FIGs. 1 and 5-6; paragraphs 0058, 0061-0064, 0100, 0102 and 0110).

Rangasami does not explicitly disclose employing an inter-process communication mechanism to control communication between the first service and the container host using named pipes. Mishra further discloses employing an inter-process communication mechanism to control communication between the first service and the container host using named pipes (FIG. 7; paragraphs 0081-0082: in view of the instant specification, the named pipe is a queue → Mishra, FIG. 7, paragraphs 0081-0082, teaches “FIG. 7 also illustrates a set of message queues 773 for communicating between the service VM and the service containers.” → container service (as the service) and service VM (as the container host)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Mishra into Rangasami’s teaching because it would serve the purpose of allocating memory to a service DCN that operates a set of containers for providing partner network services for data messages received by the service DCN; the service DCN and the containers share the allocated memory, and the method stores data messages received by the service DCN in the allocated memory (Mishra, paragraph 0005).

As per claim 7, Rangasami discloses configuring a common data model (CDM) that is shared between the first container and a second container that runs a second service of the management application (FIG. 3; paragraphs 0083 and 0087-0089), wherein the CDM comprises configuration data of the first service and second service (FIG. 3; paragraphs 0083 and 0087-0089). Rangasami does not explicitly disclose wherein the CDM comprises a database. Mishra further discloses wherein the CDM comprises a database (FIG. 3; paragraphs 0034, 0066, and 0072-0075). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Mishra into Rangasami’s teaching because it would serve the purpose of allocating memory to a service DCN that operates a set of containers for providing partner network services for data messages received by the service DCN; the service DCN and the containers share the allocated memory, and the method stores data messages received by the service DCN in the allocated memory (Mishra, paragraph 0005).
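The inter-process mechanism at issue in claim 1 (which the rejection maps to Mishra's message queues) is a command/result exchange over named pipes. As a minimal illustrative sketch only, not code from the application or the cited references, the pattern looks like this with POSIX FIFOs; the "restart-service" command and the pipe names are invented for the example:

```python
import os
import tempfile
import threading

# Two named pipes (POSIX FIFOs): one carries the command from the
# containerized service to the container host, the other returns the result.
workdir = tempfile.mkdtemp()
cmd_pipe = os.path.join(workdir, "cmd")        # container -> host
result_pipe = os.path.join(workdir, "result")  # host -> container
os.mkfifo(cmd_pipe)
os.mkfifo(result_pipe)

def host_side():
    # The container host blocks until a command arrives on the first pipe,
    # "executes" it (simulated here), and writes the outcome to the second.
    with open(cmd_pipe) as f:
        command = f.read().strip()
    with open(result_pipe, "w") as f:
        f.write(f"executed: {command}")

t = threading.Thread(target=host_side)
t.start()

# Container side: transmit the command, then read back the execution result.
with open(cmd_pipe, "w") as f:
    f.write("restart-service\n")
with open(result_pipe) as f:
    result = f.read()
t.join()
print(result)  # -> executed: restart-service
```

The host and container sides run as threads here only so the sketch is self-contained; in the claimed arrangement they would be separate processes sharing the FIFO paths.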
As per claim 9, Rangasami does not explicitly disclose wherein the container host comprises a physical server or a virtual machine running on the physical server. Mishra further discloses wherein the container host comprises a physical server or a virtual machine running on the physical server (FIG. 5; paragraphs 0025-0027). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Mishra into Rangasami’s teaching because it would serve the purpose of allocating memory to a service DCN that operates a set of containers for providing partner network services for data messages received by the service DCN; the service DCN and the containers share the allocated memory, and the method stores data messages received by the service DCN in the allocated memory (Mishra, paragraph 0005).

As per claim 10, Rangasami discloses wherein the first container and a second container that runs the second service are deployed in a server management appliance, an on-premises physical server, a cloud server, or any combination thereof (FIG. 1; paragraphs 0035-0037).

As per claim 11, it is a medium claim which recites the same limitations as those of claim 1. Accordingly, claim 11 is rejected for the same reasons as set forth in the rejection of claim 1.

As per claim 17, it is a medium claim which recites the same limitations as those of claim 7. Accordingly, claim 17 is rejected for the same reasons as set forth in the rejection of claim 7.

As per claim 18, Rangasami discloses a computer system for transforming a management application into a microservices architecture (FIGs. 1-2; paragraphs 0036-0037: “The various microservices expose interfaces that enable the microservices to invoke one another to exchange data and perform the respective sets of functions in order to create one or more overall applications. Each of the microservices may adhere to a well-defined Application Programming Interface (API) and may be orchestrated by invoking the API of the microservice.”), comprising:

a container platform to execute containerized services of a management application (FIG. 1; paragraphs 0036-0037: “each cloud service 124 may host or include a plurality of containers 126, 129 that each provides an execution environment for at least one application (e.g., microservice) deployed by enterprise 116.”), wherein the container platform comprises a plurality of containers, each container executing a containerized service (FIG. 1; paragraphs 0036-0037: “each cloud service 124 may host or include a plurality of containers 126, 129 that each provides an execution environment for at least one application (e.g., microservice) deployed by enterprise 116.” → cloud service provider as the container host as claimed);

a service discovery module to control communication between the containerized services within the container platform using an application programming interface (API)-based communication (FIGs. 1-2; paragraphs 0036-0037: “The various microservices expose interfaces that enable the microservices to invoke one another to exchange data and perform the respective sets of functions in order to create one or more overall applications. Each of the microservices may adhere to a well-defined Application Programming Interface (API) and may be orchestrated by invoking the API of the microservice.”);

a proxy running on the container platform to control communication between the containerized services and an external device (FIGs. 1, 3 and 5; paragraphs 0059-0061: “As another example, routers 110 of networking platform 108 may be configured to redirect application traffic from container 126A at first cloud service 124A to container 129A at second cloud service 124B. For instance, router 110A may be configured to send application traffic addressed to subnet 128A via DR virtual circuit 127B to subnet 128B of cloud service 124B.”); and

a container orchestrator to monitor and manage the containerized services (FIGs. 1 and 5-6; paragraphs 0058, 0061-0064, 0100, 0102 and 0110).

Rangasami does not explicitly disclose a daemon running on the container platform to orchestrate communication between the containerized services and the container platform using named pipes. Mishra further discloses a daemon (FIGs. 7-8: Photon OS and service VM) running on the container platform to orchestrate communication between the containerized services and the container platform using named pipes (FIGs. 7-8; paragraphs 0081-0083: in view of the instant specification, the named pipe is a queue → Mishra, FIG. 7, paragraphs 0081-0082, teaches “FIG. 7 also illustrates a set of message queues 773 for communicating between the service VM and the service containers.” → container service (as the service) and service VM (as the container host)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Mishra into Rangasami’s teaching because it would serve the purpose of allocating memory to a service DCN that operates a set of containers for providing partner network services for data messages received by the service DCN; the service DCN and the containers share the allocated memory, and the method stores data messages received by the service DCN in the allocated memory (Mishra, paragraph 0005).

As per claim 24, it is a system claim which recites the same limitations as those of claim 7. Accordingly, claim 24 is rejected for the same reasons as set forth in the rejection of claim 7.

As per claim 25, it is a system claim which recites the same limitations as those of claim 9. Accordingly, claim 25 is rejected for the same reasons as set forth in the rejection of claim 9.

As per claim 26, it is a system claim which recites the same limitations as those of claim 10. Accordingly, claim 26 is rejected for the same reasons as set forth in the rejection of claim 10.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Rangasami in view of Mishra, as applied to claims 1 and 11, and further in view of US 2021/0141645 to Kramer et al. (hereafter “Kramer”) and US 2011/0126275 to Anderson et al. (hereafter “Anderson”).

As per claim 2, Rangasami does not explicitly disclose wherein deploying the first service on the first container comprises: obtaining information about the first service of the management application, wherein the obtained information comprises dependency data of the first service; based on the obtained information about the first service, generating a container file including instructions for building the first container that executes the first service; based on the container file, creating a container image for the first service; and based on the container image, deploying the first container for execution on the container host.

Kramer further discloses wherein deploying the first service on the first container comprises: obtaining information about the first service of the management application, wherein the obtained information comprises dependency data of the first service (paragraphs 0025 and 0027-0029: dependency info from a manifest of dependencies); and, based on the obtained information about the first service, generating a container file including instructions for building the first container that executes the first service (paragraphs 0025 and 0027-0029: building application packages based on dependency info from the manifest of dependencies).
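Claim 2's mapped flow (obtain a service's dependency data, then generate a container file from it) can be sketched as follows. This is a hypothetical illustration only; the manifest fields and the `generate_container_file` helper are invented for the example and are not drawn from Kramer, Anderson, or the application.

```python
# Hypothetical dependency data for a service, as might come from a
# manifest of dependencies.
deps_manifest = {
    "base": "python:3.12-slim",
    "packages": ["flask", "requests"],
    "entrypoint": ["python", "service.py"],
}

def generate_container_file(manifest):
    """Emit Dockerfile-style build instructions from dependency data."""
    lines = [f"FROM {manifest['base']}"]
    if manifest["packages"]:
        lines.append("RUN pip install " + " ".join(manifest["packages"]))
    lines.append("COPY . /app")
    lines.append("WORKDIR /app")
    entry = ", ".join(f'"{e}"' for e in manifest["entrypoint"])
    lines.append(f"ENTRYPOINT [{entry}]")
    return "\n".join(lines)

print(generate_container_file(deps_manifest))
```

Building an image from the generated file and deploying the container (the remaining steps of claim 2, mapped to Anderson) would then be handled by the container tooling.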
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kramer into the teachings of Rangasami and Mishra because enhancing the bootstrap execution environment based on the manifest of dependencies may include installing application dependencies (Kramer, paragraph 0005).

Anderson further discloses, based on the container file, creating a container image for the first service (paragraphs 0073 and 0079: building an image of VM 114b from the container); and, based on the container image (paragraphs 0073 and 0079), deploying the first container for execution on the container host (paragraphs 0073 and 0079). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Anderson into the teachings of Rangasami, Mishra and Kramer because it would serve the purpose of managing development and deployment for services and applications provisioned in the infrastructure (Anderson, paragraph 0072).

As per claim 12, it is a medium claim which recites the same limitations as those of claim 2. Accordingly, claim 12 is rejected for the same reasons as set forth in the rejection of claim 2.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Rangasami in view of Mishra, as applied to claims 1 and 11, and further in view of US 2015/0317088 to Hussain et al. (hereafter “Hussain”).

As per claim 6, Rangasami does not explicitly disclose wherein employing the inter-process communication mechanism to control communication between the first service and the container host comprises: transmitting a command that needs to be executed on the container host from the first container to the container host through a first named pipe; and transmitting a result associated with an execution of the command from the container host to the first container through a second named pipe.

Hussain further discloses wherein employing the inter-process communication mechanism to control communication between the first service and the container host (FIGs. 1-3; paragraph 0025: VM 110 with driver/application in block 100 (container host/system/environment)) comprises: transmitting a command that needs to be executed on the container host from the first container to the container host through a first named pipe (FIGs. 1-3; paragraph 0025: in view of the instant specification, the named pipe is a queue); and transmitting a result associated with an execution of the command from the container host to the first container through a second named pipe (FIGs. 1-3; paragraph 0025: in view of the instant specification, the named pipe is a queue). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Hussain into the teachings of Rangasami and Mishra because each of the VMs running on the host would have its own namespace(s) and could access its storage devices directly through its own virtual NVMe controller (Hussain, paragraph 0011).

As per claim 16, it is a medium claim which recites the same limitations as those of claim 6. Accordingly, claim 16 is rejected for the same reasons as set forth in the rejection of claim 6.

Claims 8 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Rangasami in view of Mishra, as applied to claims 1 and 18, and further in view of US 2021/0406386 to Ortiz et al. (hereafter “Ortiz”).

As per claim 8, Rangasami discloses when the first service and the second service are running on different server platforms (FIG. 1). Rangasami does not explicitly disclose generating an encrypted overlay network that spans the different server platforms to enable communication between the first service and the second service. Ortiz further discloses generating an encrypted overlay network that spans the different server platforms to enable communication between the first service and the second service (paragraph 0014). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ortiz into the teachings of Rangasami and Mishra because a trusted, neutral, cross-party platform spanning multiple services and interaction points is desirable to provide a scalable, unbiased solution with transparent levels of control for stakeholders (Ortiz, paragraph 0014).

As per claim 23, Rangasami discloses the containerized services (FIG. 1) and when the containerized services are running on different server platforms (FIG. 1). Rangasami does not explicitly disclose an encrypted overlay network that spans the different server platforms to enable communication between the containerized services. Ortiz further discloses an encrypted overlay network that spans the different server platforms to enable communication between the containerized services (paragraph 0014).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ortiz into the teachings of Rangasami and Mishra because a trusted, neutral, cross-party platform spanning multiple services and interaction points is desirable to provide a scalable, unbiased solution with transparent levels of control for stakeholders (Ortiz, paragraph 0014).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan Dao, whose telephone number is (571) 270-3387. The examiner can normally be reached Monday to Friday from 9am to 5pm, and on alternate Fridays. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/TUAN C DAO/
Primary Examiner, Art Unit 2198

Prosecution Timeline

Oct 17, 2023: Application Filed
Feb 21, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602257: ELECTRONIC DEVICE AND OPERATING METHOD WITH MODEL CO-LOCATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12566648: METHOD OF PROCESSING AGREEMENT TASK (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566627: PREDICTING THE NEXT BEST COMPRESSOR IN A STREAM DATA PLATFORM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561173: METHOD FOR DATA PROCESSING AND APPARATUS, AND ELECTRONIC DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561591: CLASSIFICATION AND TRANSFORMATION OF SEQUENTIAL EVENT DATA (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82% (98% with interview, +15.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 782 resolved cases by this examiner; grant probability is derived from the career allow rate.
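The headline figures can be reproduced from the examiner's career data above, assuming (as the page's numbers imply) that the interview lift adds directly to the base allow rate:

```python
# Career data shown on this page.
granted, resolved = 642, 782
interview_lift = 0.156  # reported interview lift (+15.6%)

allow_rate = granted / resolved  # ~0.821

print(round(allow_rate * 100))                     # 82  -> Grant Probability
print(round((allow_rate + interview_lift) * 100))  # 98  -> With Interview
```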
