Prosecution Insights
Last updated: April 19, 2026
Application No. 17/963,662

SERVICE MESH FOR COMPOSABLE CLOUD-NATIVE NETWORK FUNCTIONS

Non-Final OA: §101, §102
Filed: Oct 11, 2022
Examiner: NGUYEN, VAN H
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average; 759 granted / 851 resolved; +34.2% vs TC avg)
Interview Lift: +18.4% among resolved cases with interview (strong)
Typical Timeline: 3y 4m average prosecution; 18 currently pending
Career History: 869 total applications across all art units

Statute-Specific Performance

§101: 23.1% (-16.9% vs TC avg)
§103: 24.0% (-16.0% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 851 resolved cases
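The statute-specific deltas above can be cross-checked with simple arithmetic. This is a hedged illustration only: the exact metric behind each percentage (e.g., how "resolved" is defined per statute) is an assumption, but the subtraction itself is just examiner rate minus delta.

```python
# Recover the implied Tech Center baseline from each statute-specific rate
# and its "vs TC avg" delta (delta = examiner rate - TC average).
statute_rates = {           # examiner rate (%), delta vs TC avg (pct points)
    "101": (23.1, -16.9),
    "103": (24.0, -16.0),
    "102": (27.2, -12.8),
    "112": (10.9, -29.1),
}
for statute, (rate, delta) in statute_rates.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# All four statutes back out the same ~40.0% Tech Center average estimate,
# consistent with a single baseline being used for every delta.
```

Note that every delta implies the same ~40% baseline, which suggests the dashboard applies one Tech Center average estimate across all four statutes.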

Office Action

§101, §102
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the application filed 10/11/2022. Claims 1-21 are presented for examination.

Information Disclosure Statement

2. The Applicants' Information Disclosure Statement (filed 11/01/2023) has been received, entered into the record, and considered. A copy of the PTO-1449 form is attached.

Drawings

3. The drawings filed 10/11/2022 are acceptable for examination purposes.

Specification

4. The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Examiner's Note

5. The following set of rejections regarding Claims 4, 14, and 21 is based upon the examiner's interpretation of "and/or" as "or".

Claim Rejections - 35 USC § 101

6. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-9 recite "at least one non-tangible computer-readable medium." However, the Specification fails to provide a definition of "at least one non-tangible computer-readable medium" that excludes transitory propagating signals. Thus, the recited "at least one non-tangible computer-readable medium" is interpreted to include non-statutory subject matter (e.g., signals, carrier waves, etc.). Accordingly, claims 1-9 fail to recite statutory subject matter under 35 U.S.C. 101.
The examiner suggests amending the above claims to explicitly exclude signals (e.g., by adding the phrase "non-transitory") to obviate the rejection.

Claim Rejections - 35 USC § 102

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wangde et al. (US 20220291973). It is noted that any citations to specific pages, columns, paragraphs, lines, or figures in the prior art references and any interpretation of the reference should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.

As to claim 18: Wangde teaches a method comprising: executing at least one process according to a thread model and in-process with a sidecar, wherein the thread model is set for the at least one process during runtime of the at least one process ([0025]: In a service mesh, requests for a particular microservice are routed between the microservices through proxies in their own infrastructure layer.
The individual proxies that are used to run the microservices may collectively form a service mesh. The proxies are sometimes referred to as sidecars, such as in the Kubernetes architecture for example, because they run alongside a microservice rather than within a microservice. Sidecars may handle communication between microservices and other services, monitoring of microservice performance, and microservice security related matters... Envoy is one example of a microservice sidecar proxy that may form a transparent communication mesh in which a microservice, and/or the application that includes the microservice, is able to send and receive messages to and from a localhost without having to have an awareness of the network topology;

[0031]: With continued reference to FIG. 1, the CaaS 102 may take the form, for example, of a cloud service that may enable developers or operators to scale, organize, and manage containers using container-based virtualization. Once the container is deployed, the service mesh proxy 118 may be deployed as a respective microservice sidecar proxy alongside each of a plurality of microservices. For example, a platform such as the Envoy platform (https://www.envoyproxy.io/) may be used in this process as a deployable sidecar proxy to a microservice, and such a platform may help to add and remove microservice sidecar proxies dynamically which, in turn, may lend flexibility and responsiveness to the management of a containerized application infrastructure;

[0053-0056]: The method 200 may begin at 202 where a service request is received 202, such as by a service mesh proxy from a control plane of a CaaS. The service request may specify, among other things, the particular service needed, such as a translation service for example, and an API of the CSP that provides the requested service...
At 204, the service mesh proxy, which may take the form of sidecar proxy to the CaaS, may parse the service request to identify the API....When the endpoint evaluation 208 reveals that an endpoint other than the one implicated by the initial service request has been determined to provide the best performance...provide the best service; see also [0075]).

As to claim 19: Wangde teaches the at least one process is to perform a network function and wherein the network function comprises one or more of: firewall, load balancer, Network Address Translation (NAT), or gateway ([0035] and [0039]).

As to claim 20: Wangde teaches the at least one process is allocated to a processor based on dependency data ([0022-0023]).

As to claim 21: Wangde teaches the at least one process is executed on at least one platform comprising a cluster and/or co-located machines ([0026] and [0031]).

As to claim 1: Wangde teaches at least one non-tangible computer-readable medium ([0069]: A non-transitory storage medium) comprising instructions stored thereon, that if executed by one or more processors on a platform, cause the one or more processors on the platform to ([0069]: A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising the operations): receive dependency data for at least one process, wherein the dependency data is to indicate data dependency between the at least one process and a second process ([0015]: enable an optimized matching of an endpoint service with an application that requested an endpoint service enable ready development of dependency graphs showing the relation between microservices of an application; [0053]: a service request is received 202, such as by a service mesh proxy from a control plane of a CaaS.
The service request may specify, among other things, the particular service needed, such as a translation service for example, and an API of the CSP that provides the requested service);

determine a thread model for execution of the at least one process by the one or more processors ([0054]: At 204, the service mesh proxy, which may take the form of sidecar proxy to the CaaS, may parse the service request to identify the API. Next, the service mesh proxy may then examine any rules and guidelines pertaining to the request to determine 206 if the service request from the CaaS meets applicable criteria. If the criteria are not met, the method 200 may stop 207. On the other hand, if the criteria are met, the service mesh proxy may then request the evaluation 208, by an evaluator, of an endpoint that corresponds to the service request; [0056]: When the endpoint evaluation 208 reveals that an endpoint other than the one implicated by the initial service request has been determined to provide the best performance, as among those endpoints evaluated, a transformer of the service mesh proxy may transform 212 the initial request API to reflect the API of the endpoint determined to be capable of providing the best service. The request with the transformed API may then be sent 214 to the selected endpoint determined to provide the best service, and the service may then be provided by the selected endpoint to the requestor, the CaaS for example; see also [0075]);

and during runtime of the at least one process, cause the one or more processors to execute the at least one process according to the determined thread model and in-process with a sidecar, wherein the sidecar is to communicate with a service mesh to communicate with one or more microservices of a cloud native application ([0025]: In a service mesh, requests for a particular microservice are routed between the microservices through proxies in their own infrastructure layer.
The individual proxies that are used to run the microservices may collectively form a service mesh. The proxies are sometimes referred to as sidecars, such as in the Kubernetes architecture for example, because they run alongside a microservice rather than within a microservice. Sidecars may handle communication between microservices and other services, monitoring of microservice performance, and microservice security related matters... Envoy is one example of a microservice sidecar proxy that may form a transparent communication mesh in which a microservice, and/or the application that includes the microservice, is able to send and receive messages to and from a localhost without having to have an awareness of the network topology;

[0031]: With continued reference to FIG. 1, the CaaS 102 may take the form, for example, of a cloud service that may enable developers or operators to scale, organize, and manage containers using container-based virtualization. Once the container is deployed, the service mesh proxy 118 may be deployed as a respective microservice sidecar proxy alongside each of a plurality of microservices. For example, a platform such as the Envoy platform (https://www.envoyproxy.io/) may be used in this process as a deployable sidecar proxy to a microservice, and such a platform may help to add and remove microservice sidecar proxies dynamically which, in turn, may lend flexibility and responsiveness to the management of a containerized application infrastructure;

[0053-0056]: The method 200 may begin at 202 where a service request is received 202, such as by a service mesh proxy from a control plane of a CaaS. The service request may specify, among other things, the particular service needed, such as a translation service for example, and an API of the CSP that provides the requested service...
At 204, the service mesh proxy, which may take the form of sidecar proxy to the CaaS, may parse the service request to identify the API....When the endpoint evaluation 208 reveals that an endpoint other than the one implicated by the initial service request has been determined to provide the best performance...provide the best service).

As to claim 2: Wangde teaches the second process is to execute on a different platform than that of the platform and the different platform is coupled to the platform using a network ([0015] and [0041]).

As to claim 3: Wangde teaches the second process is to execute on a different processor than the one or more processors ([0018-0019] and [0072-0073]).

As to claim 4: Wangde teaches to execute the at least one process in-process with a sidecar, the process and sidecar are to execute on a same core, same process, and/or same container ([0022] and [0031]).

As to claim 5: Wangde teaches the at least one process is to perform a network function ([0039] and [0041]).

As to claim 6: Wangde teaches the network function comprises one or more of: firewall, load balancer, Network Address Translation (NAT), or gateway ([0035] and [0039]).

As to claim 7: Wangde teaches the at least one process in-process with the sidecar comprises the at least one process and the sidecar are to execute on a same core and share memory and cache ([0031] and [0040]).

As to claim 8: Wangde teaches the one or more processors is to translate dependency data to a format for processing by the platform ([0039-0040]).

As to claim 9: Wangde teaches the at least one process has an associated indicator of logical core permitted to execute the at least one process and wherein the indicator is based on the dependency data ([0028] and [0045]).
As to claim 10: Wangde teaches an apparatus ([0012]: systems) comprising: a memory comprising instructions stored thereon and at least one processor, that based on execution of the instructions stored in the memory ([0069]: A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising the operations), is to:

cause transmission of a request to at least one platform to execute multiple services, wherein the multiple services utilize data according to a data dependency relationship ([0015]: enable an optimized matching of an endpoint service with an application that requested an endpoint service enable ready development of dependency graphs showing the relation between microservices of an application;

[0022]: so-called service meshes have been developed that may, among other things, enable and authorize service discovery, perform traffic control, provide for security of the microservices. In a service mesh architecture, each microservice may have a sidecar proxy that is external to the microservice. The microservice can interact with external entities and services by way of the sidecar proxy. As well, the parent application that includes the microservice may be connected to the sidecar proxy, which will have the same lifespan and/or lifecycle as the parent application;

[0039]: given a pre-defined set of services and plugins using a multi-cloud approach, such embodiments may transform service requests, such as may be issued by one or more microservices and/or applications, to match respective API specifications of different CSPs.
To continue with the earlier example of a translation service provided by GCP and AWS, if a service request from a microservice or application specifies the GCP service endpoint, but the GCP service endpoint is unreachable, or not available in a particular region, such embodiments may route that service request, an example of which is the service request 152 for example, to the AWS translation API to provide the needed functionality to the microservice or application that requested the translation functionality);

cause transmission of a dependency graph, based on the data dependency relationship, to the at least one platform ([0015]: enable an optimized matching of an endpoint service with an application that requested an endpoint service enable ready development of dependency graphs showing the relation between microservices of an application; [0053]: a service request is received 202, such as by a service mesh proxy from a control plane of a CaaS. The service request may specify, among other things, the particular service needed, such as a translation service for example, and an API of the CSP that provides the requested service);

and cause the at least one platform to: execute at least one of the multiple services on a processor that executes a side car and to share memory between the at least one of the multiple services and the side car and to set a thread binding model at runtime of the at least one of the multiple services ([0025]: In a service mesh, requests for a particular microservice are routed between the microservices through proxies in their own infrastructure layer. The individual proxies that are used to run the microservices may collectively form a service mesh. The proxies are sometimes referred to as sidecars, such as in the Kubernetes architecture for example, because they run alongside a microservice rather than within a microservice.
Sidecars may handle communication between microservices and other services, monitoring of microservice performance, and microservice security related matters... Envoy is one example of a microservice sidecar proxy that may form a transparent communication mesh in which a microservice, and/or the application that includes the microservice, is able to send and receive messages to and from a localhost without having to have an awareness of the network topology;

[0031]: With continued reference to FIG. 1, the CaaS 102 may take the form, for example, of a cloud service that may enable developers or operators to scale, organize, and manage containers using container-based virtualization. Once the container is deployed, the service mesh proxy 118 may be deployed as a respective microservice sidecar proxy alongside each of a plurality of microservices. For example, a platform such as the Envoy platform (https://www.envoyproxy.io/) may be used in this process as a deployable sidecar proxy to a microservice, and such a platform may help to add and remove microservice sidecar proxies dynamically which, in turn, may lend flexibility and responsiveness to the management of a containerized application infrastructure;

[0053-0056]: The method 200 may begin at 202 where a service request is received 202, such as by a service mesh proxy from a control plane of a CaaS. The service request may specify, among other things, the particular service needed, such as a translation service for example, and an API of the CSP that provides the requested service... At 204, the service mesh proxy, which may take the form of sidecar proxy to the CaaS, may parse the service request to identify the API....When the endpoint evaluation 208 reveals that an endpoint other than the one implicated by the initial service request has been determined to provide the best performance...provide the best service; see also [0075]).
As to claim 11: Wangde teaches the at least one of the multiple services is to execute on a different platform than that of at least one other of the multiple services ([0015] and [0041]).

As to claim 12: Wangde teaches the at least one of the multiple services is to execute on a different processor than that of at least one other of the multiple services ([0018-0019] and [0072-0073]).

As to claim 13: Wangde teaches the at least one of the multiple services is to execute on a same processor as that of at least one other of the multiple services ([0022-0023]).

As to claim 14: Wangde teaches the at least one platform comprises a cluster and/or co-located machines ([0026] and [0031]).

As to claim 15: Wangde teaches the sidecar is to provide communications among different services of the multiple services ([0025] and [0040]).

As to claim 16: Wangde teaches cause the at least one platform to translate the dependency graph to a format for processing by the at least one platform ([0039-0040]).

As to claim 17: Wangde teaches provide an indicator of at least one logical core permitted to execute at least one of the multiple services and wherein the indicator is based on the dependency graph ([0028] and [0045]).

Conclusion

8. The prior art made of record, listed on the PTO-892 provided to Applicant, is considered to have relevancy to the claimed invention. Applicant should review each identified reference carefully before responding to this office action to properly advance the case in light of the prior art.

Contact Information

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN H. NGUYEN whose telephone number is (571) 272-3765. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:30 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LEWIS BULLOCK, can be reached at telephone number (571) 272-3759.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/VAN H NGUYEN/
Primary Examiner, Art Unit 2199

Prosecution Timeline

Oct 11, 2022: Application Filed
Dec 07, 2022: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection — §101, §102
Apr 10, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602262: SHARED RESOURCE POOL WITH PERIODIC REBALANCING IN A MULTI-CORE SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12591467: SYSTEM AND METHOD FOR HALTING PROCESSING CORES IN A MULTICORE SYSTEM (2y 5m to grant; granted Mar 31, 2026)
Patent 12591456: METHOD AND APPARATUS FOR CONTROLLING HARDWARE ACCELERATOR (2y 5m to grant; granted Mar 31, 2026)
Patent 12591468: DYNAMIC MANAGEMENT OF FEATURES FOR PROCESSES EXECUTABLE ON AN INFORMATION HANDLING SYSTEM (2y 5m to grant; granted Mar 31, 2026)
Patent 12585496: METHOD, APPARATUS AND COMPUTER PROGRAM FOR ACTIVATING A SCHEDULING CONFIGURATION (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+18.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
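The headline figures can be reproduced from the career counts cited above. This is a hedged sketch: the cap applied to the with-interview figure is an assumption made to match the displayed 99%, not a documented formula of the dashboard.

```python
# Grant probability derived from career allow rate: 759 granted / 851 resolved.
granted, resolved = 759, 851
allow_rate = granted / resolved                    # ~0.892, displayed as "89%"
interview_lift = 0.184                             # the "+18.4%" interview lift
# Assumed: the with-interview projection is the allow rate plus the lift,
# capped at 99% (an uncapped sum would exceed 100%).
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"Career allow rate: {allow_rate:.1%}")      # 89.2%
print(f"With interview:    {with_interview:.1%}")  # 99.0%
```

The raw sum (89.2% + 18.4%) exceeds 100%, so some ceiling must be in play; 99% is consistent with the projection shown.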
