Prosecution Insights
Last updated: April 19, 2026
Application No. 18/510,351

HYPERVISOR-HOSTING-BASED SERVICE MESH SOLUTION

Final Rejection under §102 and §103

Filed: Nov 15, 2023
Examiner: DU, ZONGHUA A
Art Unit: 2444
Tech Center: 2400 — Computer Networks
Assignee: VMware, Inc.
OA Round: 2 (Final)

Grant Probability: 60% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (47 granted / 78 resolved; +2.3% vs TC avg)
Interview Lift: +45.9% (strong; resolved cases with an interview vs. without)
Avg Prosecution: 2y 8m typical timeline; 22 currently pending
Career History: 100 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 60.9% (+20.9% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 78 resolved cases.
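As a quick arithmetic check, the headline allow rate follows directly from the grant counts shown above (all numbers are taken from this page; the dashboard rounds to 60%):

```python
# Career allow rate from the counts on this page: 47 granted of 78 resolved.
granted, resolved = 47, 78
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 60.3%, displayed above rounded to 60%
```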

Office Action

§102, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the communication filed on 08/01/2025. Claims 1-19 and 21 are pending in this application.

Examiner Note

If applicant has any questions or wishes to amend claims, applicant is encouraged to contact the examiner to ensure that any proposed amendments would overcome the current rejection(s). The examiner can normally be reached at (408) 918-7596 or Zonghua.Du@uspto.gov, Monday-Friday, 8 AM - 5 PM PST, and the examiner is happy to assist applicant as needed with any help/feedback.

Priority

This application claims foreign priority of PCT/CN2023/124839, filed 10/17/2023. The assignee of record is VMware, Inc. The listed inventors are: Lin, Bo; Zhou, Zhengsheng; Han, Donghai; Chen, Dongping; Liang, Xiao.

Response to Arguments

Applicant’s arguments filed 08/01/2025 have been fully considered but they are not persuasive. Applicant argues:

a. Applicant states that “The rejection of claim 1 over Mashargah is respectfully traversed because, as discussed during the interview, Mashargah fails to teach or disclose a data plane that forwards application flows based on policy rules that are applied by a software component that is separate from the data plane and deployed on the same host computer as the data plane (Reply, p. 6).” Applicant further states that “Claim 1 is distinguishable from Mashargah, because claim 1 places operative responsibility for applying the rules to the claimed ‘application service agent,’ not the claimed ‘application service data plane.’ Mashargah, in contrast, describes, in ¶ [0080], a system in which the data plane configures itself based on data plane policy it receives from its network controller 102 (depicted as network manager 102 in FIG. 1) through virtual daemon 110, which sets up a secure Datagram Transport Layer Security connection 112 with virtual daemon 108 (see ¶ [0036]). There is no teaching or disclosure in Mashargah that virtual daemon 110 (or any other software component separate from the data plane and residing in virtual router 106) is applying the data plane policy. Therefore, claim 1 is clearly distinguishable from, and not anticipated by, Mashargah (Reply, p. 7).”

b. Examiner respectfully disagrees with the arguments. Mashargah discloses that a Vdaemon of a data plane proxy (Mashargah, e.g. Vdaemon 110 as exemplified in FIG. 1 and ¶ 0036) performs the operation of receiving the service mesh data plane policy over a transport layer connection with a virtual router (Mashargah, recited in claim 12), then the service data plane receives the service mesh data plane policy from the virtual router and is configured based on the service mesh data plane policy (Mashargah, recited in claim 9 and ¶ 0080). By applying the broadest reasonable interpretation in light of the specification (the instant specification ¶ 0006 recites that the application service agent is configured to “apply” the policy rules “by providing the policy rules to a service insertion module of the application service data plane”) and taking into account the meaning of the words in their ordinary usage as they would be understood by one of ordinary skill in the art (e.g. “apply” has the meaning of “to put to use for a practical purpose”), the Vdaemon 110 as exemplified in FIG. 1 of Mashargah is considered to apply/provide the service mesh data plane policy. Therefore, claim 1 is considered to be anticipated by Mashargah.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 7-8, 10, 17-19 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mashargah et al. (US 20220109693 A1, published 04/07/2022; hereinafter Mashargah).

For Claim 1, Mashargah teaches a method of implementing a virtualization software-based service mesh for a network comprising a plurality of host computers, each host computer comprising a set of virtualization software that executes a set of application instances (Mashargah, ¶ 0015 “… This disclosure describes an integrated management method to manage a service mesh data plane over a network fabric …”; ¶ 0018 “… Instances of the microservice applications may be hosted at different locations, and the microservice applications instances may communicate with each other over an SD-WAN …”; ¶ 0022 “… The integrated management may provide for simplified communication across the SD-WAN between cloud-native microservices hosted at different locations …”), the method comprising: for each host computer (Mashargah, for each edge device such as Virtual Router 106 of Figure 1, ¶ 0030 and ¶ 0031): deploying, to the set of virtualization software of the host computer, (i) an application service agent (Mashargah, FIG. 1 and ¶ 0036 exemplifies Vdaemon 110) and (ii) an application service data plane (Mashargah, FIG.
1 and ¶ 0037 exemplifies service mesh data plane 116) that comprises a set of data plane service mesh levels (Mashargah, FIG. 1, ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services; ¶ 0036 “… The network manager 102 and the virtual router 106 each include a virtual daemon 108 and 110, respectively. The virtual daemon 108 and virtual daemon 110 create and maintain a secure Datagram Transport Layer Security (DTLS) connection 112 between the network manager 102 and the virtual router 106, for each virtual router 106 in the SD-WAN …”; ¶ 0037 “… The CSMM module 114 is configured to send and receive messages to or from, respectively, a service mesh data plane 116 of the virtual router 106 …”); configuring the application service agent to apply policy rules defined for application flows associated with the set of application instances to the application flows on the application service data plane (Mashargah, Claim 12 recites “… receiving the at least one service mesh data plane policy from the virtual router includes a vdaemon of a data plane proxy associated with the service receiving the at least one service mesh data plane policy over a transport layer connection with the virtual router …”); and configuring the application service data plane to forward the application flows for the set of application instances to and from services provided at each data plane service mesh level in the set of data plane service mesh levels according to the policy rules applied by the application service agent (Mashargah, FIG. 1, FIG. 7, FIG. 8, ¶ 0078 “… FIG. 7 is a flowchart illustrating a process by which a configuration manager may configure a service mesh data plane, such as from a central location, over a network fabric. 
At 702, at least one service mesh data plane policy is determined for a microservice of a service mesh … At 704, the at least one service mesh data plane policy is sent, over a network fabric, to a virtual router associated with the microservice …”; ¶ 0080 “… FIG. 8 is a flowchart illustrating a process by which a service mesh data plane may configure itself based at least in part on a service mesh data plane policy. At 802, a data plane of a microservice receives at least one service mesh data plane policy from a virtual router associated with the microservice … At 804, the microservice data plane is configured based at least in part on the at least one service mesh data plane policy …”). For Claim 2, Mashargah teaches the method of claim 1, wherein configuring the application service agent to apply the policy rules comprises configuring the application service agent (i) to receive policy configurations from a central application service control plane server, (ii) to convert the received policy configurations into policy rules, and (iii) to apply the policy rules to application flows on the application service data plane (Mashargah, FIG. 1, FIG. 8, ¶ 0080 “… FIG. 8 is a flowchart illustrating a process by which a service mesh data plane may configure itself based at least in part on a service mesh data plane policy. At 802, a data plane of a microservice receives at least one service mesh data plane policy from a virtual router associated with the microservice. For example, the data plane may be within the virtual router 106 and the virtual router may receive the at least one service mesh data plane policy via a secure transport layer connection between the virtual router 106 and the network controller 102. At 804, the microservice data plane is configured based at least in part on the at least one service mesh data plane policy …”; Claim 12 recites the vdaemon receiving the service mesh data plane policy). 
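To make the claim-2 pipeline concrete (the agent (i) receives policy configurations from a central control plane server, (ii) converts them into policy rules, and (iii) applies the rules on the data plane), here is a minimal illustrative sketch; all class, field, and value names are hypothetical and not drawn from the application or the cited art:

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    match: dict     # flow attributes the rule matches on
    services: list  # data plane services to apply to matching flows

class ApplicationServiceAgent:
    """Sketch of the three claim-2 steps: receive -> convert -> apply."""

    def __init__(self, data_plane_rule_table):
        # Stand-in for the application service data plane's rule table.
        self.data_plane_rule_table = data_plane_rule_table

    def receive(self, policy_configurations):
        # (i) Policy configurations arrive from the central control plane server.
        return policy_configurations

    def convert(self, policy_configurations):
        # (ii) Convert declarative configurations into concrete policy rules.
        return [PolicyRule(match=c["match"], services=c["services"])
                for c in policy_configurations]

    def apply(self, rules):
        # (iii) Apply by providing the rules to the data plane (cf. the instant
        # specification's "providing the policy rules to a service insertion module").
        self.data_plane_rule_table.extend(rules)

rule_table = []
agent = ApplicationServiceAgent(rule_table)
configs = agent.receive([{"match": {"dst": "svc-a"}, "services": ["load_balancing"]}])
agent.apply(agent.convert(configs))
print(len(rule_table))  # 1
```

The separation mirrors the dispute above: the agent object, not the rule table it writes into, carries the receive/convert/apply responsibility.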
For Claim 3, Mashargah teaches the method of claim 1, wherein the set of data plane service mesh levels comprises (i) an infrastructure services first level, (ii) a tenant services second level, (iii) an application services third level, and (iv) an instance services fourth level (Mashargah, FIG. 1, ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services “… Data plane runtime models, as an example, may include the following services/objects: Forwarder service: services to which traffic can be routed in a target cluster … Monitor service: service to publish ports on which to listen for traffic … Routes service: service for traffic routing decisions … Gatekeeper service: service for certificates distribution to application microservices … Health check service: service for end-point health monitoring … Load balancing service: service to organize application accessibility based on the load balancing algorithms …”; Examiner notes that the instant Specification (¶ 0009 - ¶ 0011) discloses the different data plane service mesh levels may provide the similar categories of services depending on the situation of the application instances).

For Claim 4, Mashargah teaches the method of claim 3, wherein the infrastructure services first level comprises common services that are accessible to each application instance in the set of application instances (Mashargah, FIG. 1, ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services for the application instances).
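The level-scoped accessibility in the claim-3 hierarchy can be sketched minimally as follows. The function and argument names are hypothetical; the infrastructure rule tracks claim 4 (common services accessible to every instance), while the application-level rule is an assumption consistent with the hierarchy:

```python
def accessible(service_level, owner_app, instance_app):
    """Hypothetical access check for the claim-3 service mesh levels:
    infrastructure (first-level) services are common to all application
    instances; application-level services are assumed scoped to instances
    of the owning application."""
    if service_level == "infrastructure":
        return True
    if service_level == "application":
        return instance_app == owner_app
    raise ValueError(f"unmodeled level: {service_level}")

print(accessible("infrastructure", None, "app-1"))  # True
print(accessible("application", "app-1", "app-2"))  # False
```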
For Claim 7, Mashargah teaches the method of claim 3, wherein the set of application instances comprises at least a first subset of application instances associated with a first application and a second subset of application instances associated with a second application, wherein the application services third level comprises a first set of application services for the first subset of application instances associated with the first application and a second set of application services for the second subset of application instances associated with the second application (Mashargah, FIG. 1, ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services for the application instances, ¶ 0021 discloses that the microservice application instances might be managed separately “… microservice application instances of cloud native applications, in a service mesh environment, are many times are individually managed, via a local interface. For example, a branch facility, such as a store of a multi-store coffee chain, may be expected to manage its own instances of micro service applications, such as to troubleshoot issues, perform upgrades, and other microservice management functions …”). For Claim 8, Mashargah teaches the method of claim 7, wherein the first set of application services is accessible to the first subset of application instances and is not accessible to the second subset of application instances, and the second set of application services is accessible to the second subset of application instances and is not accessible to the first subset of application instances (Mashargah discloses that the microservice application instances might be managed separately; ¶ 0021 “… microservice application instances of cloud native applications, in a service mesh environment, are many times are individually managed, via a local interface. 
For example, a branch facility, such as a store of a multi-store coffee chain, may be expected to manage its own instances of micro service applications, such as to troubleshoot issues, perform upgrades, and other microservice management functions …”). For Claim 10, Mashargah teaches the method of claim 1, wherein configuring the application service data plane further comprises configuring the application service data plane to implement a set of data plane services, wherein the set of data plane services comprises at least a service discovery service, a load balancing service, a tracing service, and a securities service (Mashargah, FIG. 1, ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services “… Data plane runtime models, as an example, may include the following services/objects: Forwarder service: services to which traffic can be routed in a target cluster … Monitor service: service to publish ports on which to listen for traffic … Routes service: service for traffic routing decisions … Gatekeeper service: service for certificates distribution to application microservices … Health check service: service for end-point health monitoring … Load balancing service: service to organize application accessibility based on the load balancing algorithms …”). For Claim 17, Mashargah teaches the method of claim 1, wherein deploying the application service data plane comprises implementing the application service data plane as a set of one or more machine instances on the set of virtualization software of the host computer (Mashargah discloses implementing the service mesh data plane by deploying the application pods in a virtual machine; FIG. 2, ¶ 0049 “… The front service mesh data plane 204 may communicate with application pod 206a, application pod 206b and application pod 206c within a cluster 208 deployed in a virtual machine. 
For example, application pod 206a may include a dataplane sidecar 210a and an application pod workload 212a that may be comprised of microservices on the service mesh. Similarly, application pod 206b may include a dataplane sidecar 210b and an application pod workload 212b that may be comprised of microservices on the service mesh. And application pod 206c may include a dataplane sidecar 210c and an application pod workload 212c that may be comprised of microservices on the service mesh …”). For Claim 18, Mashargah teaches the method of claim 17, wherein the set of machine instances comprises a set of pods (Mashargah, FIG. 2, ¶ 0049 “… The front service mesh data plane 204 may communicate with application pod 206a, application pod 206b and application pod 206c within a cluster 208 deployed in a virtual machine. For example, application pod 206a may include a dataplane sidecar 210a and an application pod workload 212a that may be comprised of microservices on the service mesh …”). For Claim 19, Mashargah teaches the method of claim 1, wherein deploying the application service data plane comprises one of (i) integrating the application service data plane in a kernel of the set of virtualization software of the host computer, and (ii) implementing the application service data plane as a set of one or more machine instances on the set of virtualization software of the host computer (Mashargah, FIG. 2, FIG. 3, ¶ 0049 “… The front service mesh data plane 204 may communicate with application pod 206a, application pod 206b and application pod 206c within a cluster 208 deployed in a virtual machine. For example, application pod 206a may include a dataplane sidecar 210a and an application pod workload 212a that may be comprised of microservices on the service mesh …”; ¶ 0054 “… the microservices may be deployed using a container-as-a-service platform that simplifies provisioning and ongoing operations for Kubernetes across cloud, data center, and edge. 
For example, the microservices may be in a virtualized and containerized deployment with multiple hypervisors. The platform may include tools for application performance monitoring, application placement, and cloud mobility …”).

For Claim 21, Mashargah teaches a computer system that implements a virtualization software-based service mesh (Mashargah, ¶ 0015 “… This disclosure describes an integrated management method to manage a service mesh data plane over a network fabric …”; ¶ 0017 “… the techniques described herein may be performed by a system and/or device having non-transitory computer-readable media storing computer-executable instructions …”), the computer system comprising: a plurality of host computers (Mashargah, ¶ 0018 “… Instances of the microservice applications may be hosted at different locations, and the microservice applications instances may communicate with each other over an SD-WAN …”; ¶ 0022 “… The integrated management may provide for simplified communication across the SD-WAN between cloud-native microservices hosted at different locations …”), each host computer (Mashargah, for each edge device such as Virtual Router 106 of Figure 1, ¶ 0030 and ¶ 0031) comprising a set of virtualization software that executes a set of application instances, and (i) an application service agent (Mashargah, FIG. 1 and ¶ 0036 exemplifies Vdaemon 110) and (ii) an application service data plane (Mashargah, FIG. 1 and ¶ 0037 exemplifies service mesh data plane 116) that comprises a set of data plane service mesh levels, that are deployed to the set of virtualization software (Mashargah, FIG. 1, FIG. 3; ¶ 0041 – ¶ 0047 exemplifies data plane runtime models including different services; ¶ 0036 “… The network manager 102 and the virtual router 106 each include a virtual daemon 108 and 110, respectively.
The virtual daemon 108 and virtual daemon 110 create and maintain a secure Datagram Transport Layer Security (DTLS) connection 112 between the network manager 102 and the virtual router 106, for each virtual router 106 in the SD-WAN …”; ¶ 0037 “… The CSMM module 114 is configured to send and receive messages to or from, respectively, a service mesh data plane 116 of the virtual router 106 …”; 0054 “… FIG. 3 illustrates an example integrated system 300 that provides for integrated central management of cloud-native microservices as well as management of an SD-WAN that provides, in part, for communication among the microservices … the microservices may be in a virtualized and containerized deployment with multiple hypervisors …”), wherein the application service agent is configured to apply policy rules defined for application flows associated with the set of application instances to the application flows on the application service data plane (Mashargah, Claim 12 recites “… receiving the at least one service mesh data plane policy from the virtual router includes a vdaemon of a data plane proxy associated with the service receiving the at least one service mesh data plane policy over a transport layer connection with the virtual router …”), and the application service data plane is configured to forward the application flows for the set of application instances to and from services provided at each data plane service mesh level in the set of data plane service mesh levels according to the policy rules applied by the application service agent (Mashargah, FIG. 1, FIG. 7, FIG. 8, ¶ 0078 “… FIG. 7 is a flowchart illustrating a process by which a configuration manager may configure a service mesh data plane, such as from a central location, over a network fabric. 
At 702, at least one service mesh data plane policy is determined for a microservice of a service mesh … At 704, the at least one service mesh data plane policy is sent, over a network fabric, to a virtual router associated with the microservice …”; ¶ 0080 “… FIG. 8 is a flowchart illustrating a process by which a service mesh data plane may configure itself based at least in part on a service mesh data plane policy. At 802, a data plane of a microservice receives at least one service mesh data plane policy from a virtual router associated with the microservice … At 804, the microservice data plane is configured based at least in part on the at least one service mesh data plane policy …”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Mashargah et al. (US 20220109693 A1, published 04/07/2022; hereinafter Mashargah), in view of Eberlein (US 20180146056 A1, published 05/24/2018; hereinafter Eberlein).

For Claim 5, Mashargah teaches the method of claim 3.
Mashargah does not explicitly teach, but Eberlein teaches wherein the set of application instances comprises a first subset of application instances belonging to a first tenant and a second subset of application instances belonging to a second tenant, wherein the tenant services second level comprises a first set of tenant services for the first subset of application instances of the first tenant and a second set of tenant services for the second subset of application instances of the second tenant (Eberlein discloses the service instances are specific for a tenant; ¶ 0016 “… for Applications that leverage separation of instances of a Service (hereinafter, ‘Service Instances’) for a tenant (hereinafter, ‘Tenant’) (for example, each Tenant stores its data in a separate database schema), this type of static binding is not sufficient. Such Applications need to be able to create additional Service Instances dynamically at runtime whenever a new Tenant is added (or onboarded) to a cloud-computing-type environment and also need to connect to any one of these Service Instances when processing a request applicable to a specific Tenant. When a new Tenant subscribes to an Application, the Application is made aware by an onboarding process that the Tenant is new and the Application receives a chance to prepare provision of its services to the Tenant …”). Eberlein and Mashargah are analogous art because they are both related to microservice management. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the separating service instances for a tenant techniques of Eberlein with the system of Mashargah to elaborate the requirements of service instances in the multi-tenancy environment (Eberlein ¶ 0001). For Claim 6, Mashargah teaches the method of claim 5. 
Mashargah does not explicitly teach, but Eberlein teaches wherein the first set of tenant services are not accessible to the second subset of application instances and the second set of tenant services are not accessible to the first subset of application instances (Eberlein discloses the service instances are specific for a tenant; ¶ 0016 “… for Applications that leverage separation of instances of a Service (hereinafter, ‘Service Instances’) for a tenant (hereinafter, ‘Tenant’) (for example, each Tenant stores its data in a separate database schema), this type of static binding is not sufficient. Such Applications need to be able to create additional Service Instances dynamically at runtime whenever a new Tenant is added (or onboarded) to a cloud-computing-type environment and also need to connect to any one of these Service Instances when processing a request applicable to a specific Tenant. When a new Tenant subscribes to an Application, the Application is made aware by an onboarding process that the Tenant is new and the Application receives a chance to prepare provision of its services to the Tenant …”). See motivation to combine for claim 5.

Claim Rejections - 35 USC § 103

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Mashargah et al. (US 20220109693 A1, published 04/07/2022; hereinafter Mashargah), in view of Novak et al. (US 20190050440 A1, published 02/14/2019; hereinafter Novak).

For Claim 9, Mashargah teaches the method of claim 3.
Mashargah does not explicitly teach, but Novak teaches wherein: the set of application instances comprises at least first and second subsets of application instances associated with a first application; the instance services fourth level comprises a first set of instance services that are a first version of a particular set of instance services, and a second set of instance services that are a second version of the particular set of instance services; and the first set of instance services are accessible only to the first subset of application instances associated with the first application and the second set of instance services are accessible only to the second subset of application instances associated with the first application (Novak discloses that different versions of services are used for different application instances of an application; FIG. 4; ¶ 0165 “… if an instance of a serialized interaction representation 400 represents a web page, the activation information 410 can include information about an application used to generate the instance, and on which the content can be accessed, (e.g. Microsoft Edge), and other applications that may be suitable for accessing the content, including on other devices (e.g., Google Chrome for an Android-based device or Safari for an iOS-based device, or a particular application having different versions for different devices, such as an application for a movie streaming service having different versions of the application for iOS, Windows, Chrome, etc.) …”). Novak and Mashargah are analogous art because they are both related to application instances. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the applying different service versions techniques of Novak with the system of Mashargah to facilitate various consumer services (Novak ¶ 0002).

Claim Rejections - 35 USC § 103

Claims 11-15 are rejected under 35 U.S.C.
103 as being unpatentable over Mashargah et al. (US 20220109693 A1, published 04/07/2022; hereinafter Mashargah), in view of Rolando et al. (US 20200274945 A1, published 08/27/2020; hereinafter Rolando). For Claim 11, Mashargah teaches the method of claim 10. Mashargah does not explicitly teach, but Rolando teaches wherein each policy rule comprises a set of match attributes and one or more data plane services to be applied to application flows that match to the set of match attributes, wherein configuring the application service agent to apply policy rules defined for the application flows comprises configuring the application service agent to provide the policy rules to a service insertion module of the application service data plane, wherein the service insertion module (i) matches a set of flow attributes of an application flow to one or more sets of match attributes of one or more policy rules, and (ii) applies the matched policy rules to the application flow (Rolando discloses matching the attributes of data message flows with the attributes of service insertion rules, and applying service operations to the data message flows with matching attributes; ¶ 0059 “… the service insertion (SI) rules associate flow identifiers with service chain identifiers. In other words, some embodiments try to match a data message's flow attributes to the flow identifiers (referred to below as rule identifiers of the SI rules) of the service insertion rules, in order to identify a matching service insertion rule (i.e., a rule with a set of flow identifiers that matches the data message's flow attributes) and to assign this matching rule's specified service chain as the service chain of the data message. 
A specific flow identifier (e.g., one defined by reference to a five-tuple identifier) could identify one specific data message flow, while a more general flow identifier ( e.g., one defined by reference to less than the five tuples) can identify a set of several different data message flows that match the more general flow identifier. As such, a matching data message flow is any set of data messages that have a common set of attributes that matches a rule identifier of a service insertion rule …”). Rolando and Mashargah are analogous art because they are both related to service data planes. Before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to use the matching attributes of service insertion rules techniques of Rolando with the system of Mashargah to “seamlessly distribute data messages in the datacenter between different application and/or service layers” (Rolando ¶ 0001). For Claim 12, Mashargah-Rolando teaches the method of claim 11, wherein the set of flow attributes comprises a five-tuple identifier of the application flow, the five-tuple identifier comprising a source IP (Internet Protocol) address of the application flow, a destination IP address of the application flow, a source port of the application flow, a destination port of the application flow, and a protocol of the application flow (Rolando, ¶ 0084 “… each service rule has a rule identifier that is defined in terms of data message attributes (e.g., five tuple attributes, which are the source and destination IP address, source and destination port addresses and the protocol) …”). See motivation to combine for claim 11. 
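The five-tuple rule matching quoted from Rolando (rule identifiers that may reference all five tuples, or fewer for a more general identifier) can be sketched minimally as follows; the rule contents and service chain names are hypothetical:

```python
def match_rule(flow, si_rules):
    """Return the service chain of the first service-insertion (SI) rule whose
    rule identifier matches the flow's attributes. A rule identifier may
    reference all five tuples or fewer (a "more general flow identifier" in
    Rolando's terms); omitted fields match any value."""
    for rule in si_rules:
        if all(flow.get(k) == v for k, v in rule["id"].items()):
            return rule["chain"]
    return None  # no matching SI rule

si_rules = [
    {"id": {"dst_ip": "10.0.0.5", "dst_port": 443, "protocol": "tcp"}, "chain": "fw-lb"},
    {"id": {"protocol": "udp"}, "chain": "monitor"},  # more general identifier
]
flow = {"src_ip": "10.0.0.9", "dst_ip": "10.0.0.5",
        "src_port": 51515, "dst_port": 443, "protocol": "tcp"}
print(match_rule(flow, si_rules))  # fw-lb
```

Any UDP flow, whatever its addresses and ports, falls through to the second, more general rule, matching the quoted point that fewer tuples identify a set of several different flows.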
For Claim 13, Mashargah-Rolando teaches the method of claim 11, wherein the service insertion module applies the matched policy rules to the application flow by setting one or more flags in headers of packets of the application flow, the one or more flags corresponding to the one or more data plane services specified by the matched policy rules (Rolando discloses specifying the service metadata header attributes for the data message, FIG. 6, ¶ 0071 “… In FIG. 6, the software switches 120, 122, and 124 and modules 610, 612, 614, 620, 624, 626, and 628 implement two different layers of the service plane, which are the service insertion layer 602 and the service transport layer 604. The service insertion layer 602 (1) identifies the service chain for a data message, (2) selects the service path to use to perform the service operations of the service chain, (3) identifies the next-hop service nodes at each hop in the selected service path (including the identification of the source host computer to which the data message should be returned upon the completion of the service chain), and (4) for the service path, specifies the service metadata (SMD) header attributes for the data message …”; ¶ 0073 “… embodiments, the service insertion (SI) layer 602 includes an SI pre-processor 610 and an SI post-processor 612, in each of the two IO chains 650 and 652 (i.e., the egress IO chain 650 and the ingress IO chain 652) of a GVM for which one or more service chains are defined …”). See motivation to combine for claim 11.
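The flag mechanism recited in claims 13 and 14, as characterized above, can be sketched with plain bit flags: the service insertion module ORs one bit per matched data plane service into a packet header field, and the data plane later decodes that field to decide which services to apply. All flag names and the header layout below are hypothetical, not from the claims or the references:

```python
# One bit per data plane service; purely illustrative service set.
SVC_FIREWALL     = 0b001
SVC_LOAD_BALANCE = 0b010
SVC_MONITORING   = 0b100

def set_service_flags(header: dict, services: list[int]) -> dict:
    """Service insertion module side: OR each matched service's bit
    into the packet header's flag field."""
    flags = header.get("svc_flags", 0)
    for s in services:
        flags |= s
    header["svc_flags"] = flags
    return header

def services_to_apply(header: dict) -> list[str]:
    """Data plane side: decode the flag field back into the list of
    data plane services to run on this packet."""
    names = {SVC_FIREWALL: "firewall",
             SVC_LOAD_BALANCE: "load_balance",
             SVC_MONITORING: "monitoring"}
    return [name for bit, name in names.items()
            if header.get("svc_flags", 0) & bit]
```

Splitting the two functions mirrors the claimed division of labor: the agent-configured insertion module only marks packets, while the data plane forwards and services them according to the marks.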
For Claim 14, Mashargah-Rolando teaches the method of claim 13, wherein configuring the application service data plane to forward application flows according to the policy rules applied by the application service agent comprises configuring the application service data plane to apply data plane services corresponding to the one or more flags in the headers of the packets of the application flow (Rolando discloses specifying the service path in the service metadata header attributes for the data messages, FIG. 6, ¶ 0074 “… For a data message that passes through a GVM's ingress or egress datapath, the SI pre-processor 610 on this datapath performs several operations. It identifies the service chain for the data message and selects the service path for the identified service chain. The pre-processor also identifies the network address for a first hop service node in the selected service path and specifies the SMD attributes for the data message. The SMD attributes include in some embodiments the service chain identifier (SCI), the SPI and SI values, and the direction (e.g., forward or reverse) for processing the service operations of the service chain. In some embodiments, the SPI value identifies the service path while the SI value specifies the number of service nodes …”). See motivation to combine for claim 11.

For Claim 15, Mashargah-Rolando teaches the method of claim 14, wherein the policy rules defined for the application flows comprise policy rules defined for application flows at each data plane service mesh level (Mashargah discloses providing various service mesh policies; ¶ 0055 “… the integrated central management may support the SD-WAN in an edge deployment configuration, such as to retrieve related data that can be used as an input to the service mesh policies.
Such data may include, for example, information about virtual private networks (VPNs), identity of datacenters and branches, and routing protocols …”; ¶ 0056 “Thus, for example, an integrated central management platform may be provided to allow for mesh connector policies configuration and SD-WAN cloud services mesh management policies configuration. The integrated management platform may access APIs exposed by a centralized management function that implements the policies …”).

Claim Rejections - 35 USC § 103

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Mashargah et al. (US 20220109693 A1, published 04/07/2022; hereinafter Mashargah), in view of Meduri et al. (US 11457080 B1, issued 09/27/2022; hereinafter Meduri).

For Claim 16, Mashargah teaches the method of claim 1. Mashargah does not explicitly teach, but Meduri teaches wherein configuring the application service data plane further comprises: deploying, for each service mesh level of the set of data plane service mesh levels, (i) a distributed DNS (domain name service) proxy server and (ii) a distributed load balancer (Meduri, FIG. 1, col. 7, ll. 33-48 “… the proxy and the service mesh may have the same URL but different public IP addresses. Distributed DNS servers may be configured to resolve the URL to an IP address for the proxy or backing service that is geographically closest …”; col. 8, ll. 50-63 “… service mesh control plane 102 operates in conjunction with an application load balancer, in which the application load balancer also emits metrics collected by proxies 110 to a telemetry service, which the customer may use for configuring monitoring services of a computer resource service provider and/or trigger policies for scaling computer system resources …”); configuring the distributed DNS proxy server to intercept and respond to DNS requests from the set of application instances that are associated with services provided by the service mesh level (Meduri, FIG. 1, col.
7, ll. 33-48 “… Proxies 110 may be accessible via public network addresses, such as IP addresses. For instance, each of the proxies 110 may be associated with a corresponding uniform resource locator (URL) that is different than a URL used for the corresponding backing service. For instance, in the example of a proxy and service being in different geographic jurisdictions, a proxy may have a URL in the form of service<dot>country1<dot>serviceprovider<dot>com while the backing service may have a URL of the form service<dot>country2<dot>serviceprovider<dot>com, where <dot> represents the character in the brackets used for delimiting domains and sub-domains. In other examples, the proxy and the service mesh may have the same URL but different public IP addresses. Distributed DNS servers may be configured to resolve the URL to an IP address for the proxy or backing service that is geographically closest …”); and configuring the distributed load balancer to intercept service calls associated with services provided by the service mesh level and redirect the service calls to service instances on one or more of the plurality of host computers (Meduri, FIG. 1, col. 8, ll. 50-63 “… In some embodiments, service mesh control plane 102 may operate the service mesh and its associated nodes 106A-C without additional control plane infrastructure. In other implementations, service mesh control plane 102 operates in conjunction with an application load balancer, in which the application load balancer also emits metrics collected by proxies 110 to a telemetry service, which the customer may use for configuring monitoring services of a computer resource service provider and/or trigger policies for scaling computer system resources. For example, the customer may configure, to scale computer system resources, the alarm to trigger if a request rate from nodes 106A-C to the application load balancer is above a certain threshold …”).
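The two distributed components recited for claim 16 can be sketched together: a DNS proxy that intercepts lookups for the service names its mesh level serves and answers with a virtual IP, and a load balancer that intercepts calls to that virtual IP and redirects them (round-robin here, as one plausible policy) to a service instance on one of the host computers. Every class name, service name, and address below is invented for illustration and is not from Meduri:

```python
class DNSProxy:
    """Answers DNS requests for services this mesh level provides."""
    def __init__(self, mesh_records: dict[str, str]):
        self.mesh_records = mesh_records  # service name -> virtual IP

    def resolve(self, name: str):
        # Intercept only names this mesh level serves; anything else
        # returns None and would fall through to an upstream resolver.
        return self.mesh_records.get(name)

class DistributedLB:
    """Redirects calls on a virtual IP to backend service instances."""
    def __init__(self, backends: dict[str, list[str]]):
        self.backends = backends       # virtual IP -> instance addresses
        self._rr: dict[str, int] = {}  # round-robin cursor per virtual IP

    def redirect(self, vip: str) -> str:
        # Pick the next instance in round-robin order for this VIP.
        instances = self.backends[vip]
        i = self._rr.get(vip, 0)
        self._rr[vip] = (i + 1) % len(instances)
        return instances[i]
```

A caller first resolves the service name through the proxy, then the load balancer steers each call on the returned virtual IP to a concrete instance, which is the intercept-and-redirect behavior the claim limitation describes.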
Meduri and Mashargah are analogous art because they are both related to microservice management. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to use the DNS and load balancing techniques of Meduri with the system of Mashargah to build logic to provide mechanisms that monitor, control, or debug microservices of an application (Meduri, col. 1, ll. 15-22).

Citation of Pertinent Prior Art

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure is listed below:

i. Tsirkin et al. (US 2021/0144177 A1) discloses that a packet is received by a hypervisor from a first container, the packet to be provided to a second container, the packet including a header including a first network address associated with the second container. A network policy is identified for the packet in view of the first network address. A second network address corresponding to the second container is determined in view of the network policy. A network address translation is performed by the hypervisor to modify the header of the packet to include the second network address corresponding to the second container (Tsirkin, Abstract).

ii. Nainar et al. (US 2020/0278892 A1) discloses that the service mesh platform 300 may be logically divided into a control plane 301 and a data plane 321 (Nainar, FIG. 3, ¶ 0045). The data plane 321 can comprise a set of intelligent proxies 325A, 325B, and 325C (collectively, "325") as sidecars. A sidecar is a container that can operate alongside a service container (e.g., the service containers 328) to provide the service container with additional capabilities. The sidecar proxies 325 can mediate and control network communication between services and microservices (Nainar, ¶ 0049).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZONGHUA DU, whose telephone number is (408) 918-7596. The examiner can normally be reached Monday - Friday, 8 AM - 5 PM PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Follansbee, can be reached at (571) 272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Z.D./
Examiner, Art Unit 2444

/JOHN A FOLLANSBEE/
Supervisory Patent Examiner, Art Unit 2444

Prosecution Timeline

Nov 15, 2023
Application Filed
Apr 24, 2025
Non-Final Rejection — §102, §103
Jul 24, 2025
Interview Requested
Jul 31, 2025
Examiner Interview Summary
Jul 31, 2025
Applicant Interview (Telephonic)
Aug 01, 2025
Response Filed
Oct 03, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603929
Metrics Collection And Reporting In 5G Media Streaming
2y 5m to grant · Granted Apr 14, 2026
Patent 12592861
ADAPTIVE BATCH PROCESSING METHOD AND SYSTEM
2y 5m to grant · Granted Mar 31, 2026
Patent 12562961
OPERATING AN AUTOMATION SYSTEM OF A MACHINE OR AN INSTALLATION
2y 5m to grant · Granted Feb 24, 2026
Patent 12476892
METHOD AND SYSTEM FOR SELECTING DATA CENTERS BASED ON NETWORK METERING
2y 5m to grant · Granted Nov 18, 2025
Patent 12469289
VIDEO GENERATION USING A HEADLESS BROWSER
2y 5m to grant · Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+45.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
