Prosecution Insights
Last updated: April 19, 2026
Application No. 18/127,960

SERVICE MESH ARCHITECTURE FOR INTEGRATION WITH ACCELERATOR SYSTEMS

Non-Final OA: §102, §103, §112
Filed
Mar 29, 2023
Examiner
KAMRAN, MEHRAN
Art Unit
2196
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
1 (Non-Final)
90%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 90% — above average
90%
Career Allow Rate
434 granted / 484 resolved
+34.7% vs TC avg
Moderate +14% lift
Without
With
+14.3%
Interview Lift
resolved cases with interview
Typical timeline
2y 10m
Avg Prosecution
26 currently pending
Career history
510
Total Applications
across all art units
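The panel's headline figures are mutually consistent; a quick cross-check using only the counts shown above:

```python
# Cross-check of the examiner statistics shown in this panel.
# All inputs are the figures displayed above; nothing else is assumed.
granted = 434    # career grants
resolved = 484   # career resolved cases
pending = 26     # currently pending

allow_rate = granted / resolved
total_applications = resolved + pending

print(f"{allow_rate:.1%}")   # -> 89.7% (the 90% badge above, rounded)
print(total_applications)    # -> 510, matching Total Applications
```

So the "90% Career Allow Rate" badge is 434/484 rounded, and the 510 total is simply resolved plus pending cases.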

Statute-Specific Performance

§101
8.8%
-31.2% vs TC avg
§103
58.2%
+18.2% vs TC avg
§102
9.9%
-30.1% vs TC avg
§112
13.2%
-26.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 484 resolved cases
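The four deltas above are internally consistent with a single Tech Center average of 40.0% per statute; this back-of-the-envelope check recovers that implied baseline (an inference from the displayed numbers, not a figure stated on the page):

```python
# Recover the implied Tech Center average from each statute's rate
# and its stated delta: rate - delta = TC average.
stats = {
    "§101": (8.8, -31.2),
    "§103": (58.2, 18.2),
    "§102": (9.9, -30.1),
    "§112": (13.2, -26.8),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(statute, tc_avg)  # each statute implies the same 40.0 baseline
```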

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-25 are presented for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 23-25 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.

Claim 23 recites the limitation "shared memory mechanism from the host". There is insufficient antecedent basis for the term "the host" in the claim. If by "host" the term "host device" is meant, this needs to be stated explicitly. The remaining claims, not specifically mentioned, are rejected for being dependent upon one of the claims above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C.
102(a)(2) as anticipated by Lenrow (US 12,255,875 B2).

As per claim 1, Lenrow teaches A processing apparatus comprising: a memory device including a user space for executing user applications; (Lenrow Fig 3 Block 102 (Non-sidecar system): VMs 306A-306N constitute the user space, and [col 12, lines 22-25] In one embodiment, method 500 can further release the proxy to a pool of available proxy objects stored in memory to allow for quicker attachment of proxies to VMs.)

The examiner will take the user space to have a lower privilege than kernel space, as described in the specification, and to be used for executing applications ([0089] … memory device including a user space for executing user applications; [0105] … memory devices having virtual memory configured into a user space having a first privilege level and a kernel space having a second privilege level higher than the first privilege level);

and infrastructure communication circuitry configured to: receive a request from a user application executing in the user space; (Lenrow [col 2, lines 29-30] a service mesh gateway (SMG) receives a request to instantiate a proxy for a non-sidecar application. In an embodiment, the SMG can receive the request directly from a non-sidecar application, and [col 9, lines 25-35] In step 402, method 400 can comprise monitoring VM network traffic. In an embodiment, the VM is located in a non-sidecar system such as non-sidecar system 102. In an embodiment, method 400 can be operated by a TOR switch and can promiscuously monitor all traffic from VMs running on the servers of the rack containing the TOR switch. In other embodiments, one or more VMs can be configured to transmit data to method 400. For example, in a cloud-based system, a VM instance can be configured to transmit data over a virtual NIC to an SMG executing method 400.)

and perform a service mesh operation, in response to the request, without a sidecar proxy.
(Lenrow [col 10, lines 57-63] In step 502, method 500 can comprise instantiating a VM. Although VMs are used as examples, other types of non-sidecar applications (e.g., bare metal applications, containerized applications without sidecar proxies, etc.) can be used in method 500. In an embodiment, method 500 can instantiate a non-sidecar application by issuing an API call to an orchestrator or other similar computing service or device. and [Abstract] Disclosed are embodiments for injecting sidecar proxy capabilities into non-sidecar applications, allowing such non-sidecar applications to communicate with a service mesh architecture. In an embodiment, a method comprises receiving a request to instantiate a proxy for a non-sidecar application at a service mesh gateway (SMG). The SMG then instantiates the proxy in response to the request and broadcasts network information of the non-sidecar application to a mesh controller deployed in a containerized environment.)

Claim 23 is rejected under 35 U.S.C. 102(a)(2) as anticipated by Saito (US 2024/0291767 A).

As per claim 23, Saito teaches An accelerator apparatus comprising: a communication interface coupled to a host device; (Saito Fig 4 Blocks 121 and 122 and [0075] The OS 120 includes an L3/L4 protocol/ACC function/argument data packetizing unit (hereinafter referred to as an ACC function/argument data packetizing unit) 121, an L3/L4 protocol/ACC function/return value data parsing unit (hereinafter referred to as an ACC function/return value data parsing unit) 122, a packet processing inline insertion unit 123, and an NIC driver unit 124.)

This independent claim is written quite broadly. It does not recite doing things without a sidecar (claim 1) or a different level of privilege in executing code (claims 14 and 18). This will be examined based on Fig 5 (accompanying paragraph 36).
It will be treated as a host having a communication interface and an accelerated NIC used to communicate with the outside world using an L4 protocol.

coprocessor circuitry coupled to the communication interface and configured to receive input data over a shared memory mechanism from the host, the input data including L4 payloads; (Saito [Abstract] An OS (120) of a client (100) includes: an L3/L4 protocol/ACC function/argument data packetizing unit (121) that serializes a function name/argument input from an application side according to a format of a predetermined protocol and packetizes the function name/argument as a payload; and an L3/L4 protocol/ACC function/return value data parsing unit (122) that deserializes packet data input from a server (200) side according to a format of a predetermined protocol and acquires a function name/execution result.)

L4 payloads refer to the data content within a network packet's transport layer, typically TCP or UDP. An L4 payload carries application-level information, such as HTTP requests, but is processed by devices like load balancers based only on IP addresses and port numbers, not by inspecting the actual content. What is key about L4 payloads is how load balancers handle them: without inspecting them, leading to higher performance and lower latency compared to Layer 7 (application) inspection. It is mostly a protocol-level distinction that determines how the data is handled. This is what this interpretation is based on: https://www.f5.com/glossary/layer-4-load-balancing

perform an accelerator function on the input data on behalf of the host. (Saito [0069] In the arithmetic processing offload system 1000, the client 100 offloads specific processing of the application to an accelerator 212 disposed in the server 200 to perform arithmetic processing.)
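The port-versus-content distinction drawn above for L4 handling can be illustrated with a short sketch; the backend-selection rule and the packet bytes below are hypothetical, for illustration only, not drawn from any cited reference:

```python
import struct

def pick_backend(tcp_segment: bytes, backends: list) -> str:
    """L4-style decision: read only the TCP header's port fields."""
    # Bytes 0-1: source port, bytes 2-3: destination port (network byte order).
    src_port, dst_port = struct.unpack("!HH", tcp_segment[:4])
    # The application payload that follows the header is never inspected,
    # which is why L4 handling is cheaper than L7 (content) inspection.
    return backends[(src_port + dst_port) % len(backends)]

# A fabricated segment: ports 49152 -> 443, followed by an opaque payload.
segment = struct.pack("!HH", 49152, 443) + b"GET / HTTP/1.1\r\n"
print(pick_backend(segment, ["backend-a", "backend-b"]))  # -> backend-b
```

Only the first four header bytes are read; replacing the payload with any other bytes leaves the decision unchanged, which is the performance point made above.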
This mapping is treating the client as the host, and the term "on behalf of the host" will be interpreted to mean the host (the client in this case) is not performing this acceleration itself but is relying on some other device to help it (in this case the server).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Chisnall (US 2021/0004469 A1).

As per claim 2, Lenrow does not teach the user space has a first privilege level and wherein the memory device further includes a kernel space having a second privilege level higher than the first privilege level; and the infrastructure communication circuitry is configured to: execute within a system space of the memory device, the system space having a third privilege level higher than the first privilege level and lower than the second privilege level; and responsive to receiving the request, control network traffic corresponding to the request.
However, Chisnall teaches the user space has a first privilege level and wherein the memory device further includes a kernel space having a second privilege level higher than the first privilege level; and the infrastructure communication circuitry is configured to: execute within a system space of the memory device, the system space having a third privilege level higher than the first privilege level and lower than the second privilege level; and responsive to receiving the request, control network traffic corresponding to the request. (Chisnall [0032] FIG. 4 is an example, similar to FIG. 3, but where there are more than two privilege levels. In this example the highest privilege level is a gatekeeper 406 and there are two other privilege levels (an intermediate privilege level 408 and a lower privilege level 404). Both the intermediate privilege level 408 and the lower privilege level 404 are isolated except that the lower privilege level is able to communicate with the intermediate privilege level and the intermediate privilege level is able to communicate with the highest privilege level. The highest privilege level 406 is able to communicate directly with the lowest privilege level 404, though whether it does so depends on the specific implementation (such as where device pass-through is done in the kernel). The lowest privilege level 404 is unable to initiate communication with the highest privilege level 406. [0033] Application code, which potentially comprises security vulnerabilities, executes at the lower privilege level 404 and the intermediate privilege level 408. In an example the application code comprises a full operating system and userspace. [0034] The highest privilege level 406 acts as a gatekeeper as described earlier. 
It applies a policy in order to drop, modify or forward communications with the intermediate privilege level 408.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Chisnall with the system of Lenrow to execute processes at an intermediate level. One having ordinary skill in the art would have been motivated to incorporate Chisnall into the system of Lenrow for the purpose of enforcing separation between the at least two execution environments. (Chisnall paragraph 06)

As per claim 13, Lenrow teaches further comprising a network interface circuitry coupled between at least two host devices executing at least two user applications. (Lenrow Fig 2 Block 208 (Mesh Controller))

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Chisnall (US 2021/0004469 A1) in further view of Anderson (US 2006/0130060 A1).

As per claim 3, Lenrow and Chisnall do not teach wherein operations of the system space are executed in ring 1 or ring 2 of a four-ring protection architecture. However, Anderson teaches wherein operations of the system space are executed in ring 1 or ring 2 of a four-ring protection architecture. (Anderson [0021] Most instruction set architectures (ISAs), including the ISA of the Intel Pentium.RTM. 4 (herein referred to as the IA-32 ISA), are designed with the concept of privilege levels in the instruction set architecture; these privilege levels are referred to herein as ISA privilege levels. Referring to FIGS. 2A and 2B, there is shown a block diagram illustrating platforms with various ISA privilege levels. The IA-32 ISA, for example, has four ISA privilege levels, referred to as ring levels ring-0 301, ring-1 303, ring-2 305 and ring-3 307. In the IA-32 ISA, ring-0 (301) is the most privileged ISA privilege level while ring-3 (307) is the least privileged ISA privilege level.)
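For context, the four-ring model Anderson describes, with a claim-3-style system space sitting in ring 1 or ring 2, can be sketched as a toy privilege check; the constant names are illustrative only, not taken from Anderson or the application:

```python
# Toy model of IA-32-style protection rings: lower number = more privileged.
RING0_KERNEL = 0   # kernel space (most privileged)
RING1_SYSTEM = 1   # a claim-3-style "system space" could sit here...
RING2_SYSTEM = 2   # ...or here, between kernel and user
RING3_USER = 3     # user space (least privileged)

def may_access(current_ring: int, required_ring: int) -> bool:
    """Code may touch a resource only if it runs at an equal or more
    privileged ring (numerically lower or equal)."""
    return current_ring <= required_ring

# A ring-1 system space outranks user code but not the kernel:
print(may_access(RING1_SYSTEM, RING3_USER))    # -> True
print(may_access(RING1_SYSTEM, RING0_KERNEL))  # -> False
```

This is the intermediate-privilege relationship the claim recites: higher than the first (user) privilege level, lower than the second (kernel) level.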
It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Anderson with the system of Lenrow and Chisnall to use a ring-1 or ring-2 architecture. One having ordinary skill in the art would have been motivated to incorporate Anderson into the system of Lenrow and Chisnall for the purpose of running components of a virtual machine monitor at a reduced privilege level. (Anderson paragraph 01)

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Chisnall (US 2021/0004469 A1) in further view of Ruemann (US 2018/0165117 A1).

As per claim 4, Lenrow and Chisnall do not teach wherein the infrastructure communication circuitry is configured to transmit data in a hardware-assisted shared memory mechanism between the user space and the kernel space. However, Ruemann teaches wherein the infrastructure communication circuitry is configured to transmit data in a hardware-assisted shared memory mechanism between the user space and the kernel space. (Ruemann [0131] The shared memory is accessible in user space or kernel space depending on where each SWS runs, and the hardware device which is responsible for transmitting data to the physical network. The SWHYPE layer is responsible for initializing an isolation domain and bringing up all ports.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Ruemann with the system of Lenrow and Chisnall to transmit data in a shared memory. One having ordinary skill in the art would have been motivated to incorporate Ruemann into the system of Lenrow and Chisnall for the purpose of improving linkage between two ports that are forwarding to each other while also executing packet processing on an SWS as packets transit between the ports. (Ruemann paragraph 10)

Claim 5 is rejected under 35 U.S.C.
103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Weiser (US 2013/0147900 A1).

As per claim 5, Lenrow does not teach further comprising an infrastructure processing unit (IPU) or data processing unit (DPU) configured to encapsulate user space application data for transmission in L4 payloads. However, Weiser teaches further comprising an infrastructure processing unit (IPU) or data processing unit (DPU) configured to encapsulate user space application data for transmission in L4 payloads. (Weiser [0244] The communication application 920 executing on the CPU of the computing device receives and processes the audio portion via the application layer, such as via application layer payload of a transport layer protocol packet(s). As the video portion of the video and audio conference stream is processed by the audio/video processor of the integrated device, the corresponding audio portion of the video and audio conference stream is processed by the CPU of the computing device. Upon processing each of the audio and video portions, as the processor of the integrated device transmits the processed video portion on the network via the network interface of the integrated device, the CPU of the computing device transmits the audio portion via the network stack and onto the network via the Ethernet adapter and network interface of the integrated device.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Weiser with the system of Lenrow to use L4 payloads. One having ordinary skill in the art would have been motivated to incorporate Weiser into the system of Lenrow for the purpose of mixing the video streams. (Weiser paragraph 06)

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Weiser (US 2013/0147900 A1) in further view of Bernat (US 2021/0328886 A1).
As per claim 6, Lenrow and Weiser do not teach wherein transmission is performed over PCIe circuitry. However, Bernat teaches wherein transmission is performed over PCIe circuitry. (Bernat [0024] Service meshes (e.g., mesh proxies) are components implemented in containers that implement a common set of functionalities needed for directory lookups to locate other services on the same or a different machine. [0182] The example FPGA circuitry 1600 of FIG. 16 also includes example Dedicated Operations Circuitry 1614. In this example, the Dedicated Operations Circuitry 1614 includes special purpose circuitry 1616 that may be invoked to implement commonly used functions to avoid the need to program those functions in the field. Examples of such special purpose circuitry 1616 include memory (e.g., DRAM) controller circuitry, PCIe controller circuitry, clock circuitry, transceiver circuitry, memory, and multiplier-accumulator circuitry. Other types of special purpose circuitry may be present. In some examples, the FPGA circuitry 1600 may also include example general purpose programmable circuitry 1618 such as an example CPU 1620 and/or an example DSP 1622. Other general purpose programmable circuitry 1618 may additionally or alternatively be present such as a GPU, an XPU, etc., that can be programmed to perform other operations.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Bernat with the system of Lenrow and Weiser to use PCIe circuitry. One having ordinary skill in the art would have been motivated to incorporate Bernat into the system of Lenrow and Weiser for the purpose of facilitating service proxying. (Bernat paragraph 01)

Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Weiser (US 2013/0147900 A1) in further view of Plummer (US 2024/0004705 A1).
As per claim 7, Lenrow and Weiser do not teach wherein the IPU/DPU couples two host devices. However, Plummer teaches wherein the IPU/DPU couples two host devices. (Plummer [0029] Each network device 104 and 112 may additionally or alternatively include other components, such as a network switch (e.g., an Ethernet switch), a network interface controller (NIC), a CPU, a DPU, or any other suitable device used to process data and/or control the flow of data between devices connected to communication network 108. Each network device 104 and 112 may include or be connected to one or more of Personal Computer (PC), a laptop, a tablet, a smartphone, a server, a collection of servers, and/or the like. Although only two network devices are shown, more or fewer network devices may be included in the system 100.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Plummer with the system of Lenrow and Weiser to couple two host devices. One having ordinary skill in the art would have been motivated to incorporate Plummer into the system of Lenrow and Weiser for the purpose of adjusting a load profile of one or more processing devices processing a workload in a bulk-synchronous mode. (Plummer paragraph 03)

As per claim 8, Plummer teaches wherein applications executing on each of the two host devices communicate through the IPU/DPU. (Plummer [0029] Each network device 104 and 112 may additionally or alternatively include other components, such as a network switch (e.g., an Ethernet switch), a network interface controller (NIC), a CPU, a DPU, or any other suitable device used to process data and/or control the flow of data between devices connected to communication network 108. Each network device 104 and 112 may include or be connected to one or more of Personal Computer (PC), a laptop, a tablet, a smartphone, a server, a collection of servers, and/or the like.
Although only two network devices are shown, more or fewer network devices may be included in the system 100). As per claim 9, Lenrow and Weiser do not teach wherein the IPU/DPU includes a hardware data processing circuitry for network communication with a host system. However, Plummer teaches wherein the IPU/DPU includes a hardware data processing circuitry for network communication with a host system (Plummer [0029] In at least one example embodiment, network devices 104 and 112 correspond to or include one or more processing devices 128 and 132 that are capable of running a bulk-synchronous workload as part of a cluster. Non-limiting examples for the bulk-synchronous workload include workloads for Natural Language Processing (NLP), workloads for reinforcement learning, workloads for artificial intelligence, workloads for complex image processing, and/or the like. In one non-limiting embodiment, the processing devices 128 and 132 each include one or more GPUs for processing the workloads described herein (see GPUs 202 in FIG. 2). Embodiments are not limited to using GPUs and other processing devices may handle bulk-synchronous workloads, such as central processing units (CPUs), data processing units (DPUs), and/or the like. Each network device 104 and 112 may additionally or alternatively include other components, such as a network switch (e.g., an Ethernet switch), a network interface controller (NIC), a CPU, a DPU, or any other suitable device used to process data and/or control the flow of data between devices connected to communication network 108. Each network device 104 and 112 may include or be connected to one or more of Personal Computer (PC), a laptop, a tablet, a smartphone, a server, a collection of servers, and/or the like. Although only two network devices are shown, more or fewer network devices may be included in the system 100. 
[0031] The one or more processing devices 128 and the one or more processing devices 132 may include one or more processing circuits for carrying out computing tasks, for example, tasks associated with processing data and/or controlling the flow of data within each network device 104 and 112 and/or over the communication network 108. Such processing circuits may comprise software, hardware, or a combination thereof. For example, a processing circuit may include a memory including executable instructions and at least one processor (e.g., a microprocessor) that executes the instructions on the memory. The memory may correspond to any suitable type of memory device or collection of memory devices configured to store instructions. Non-limiting examples of suitable memory devices that may be used include Flash memory, Random Access Memory (RAM), Read Only Memory (ROM), variants thereof, combinations thereof, or the like. In some embodiments, the memory and processor may be integrated into a common device (e.g., a microprocessor may include integrated memory). Additionally or alternatively, a processing circuit may comprise hardware, such as an application specific integrated circuit (ASIC). Other non-limiting examples of the processing circuits include an Integrated Circuit (IC) chip, a Central Processing Unit (CPU), a microprocessor, a Field Programmable Gate Array (FPGA), a collection of logic gates or transistors, resistors, capacitors, inductors, diodes, or the like. Some or all of the processing circuits may be provided on a Printed Circuit Board (PCB) or collection of PCBs. It should be appreciated that any appropriate type of electrical component or collection of electrical components may be suitable for inclusion in the processing circuitry.)

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Plummer with the system of Lenrow and Weiser to use a hardware data processing circuitry.
One having ordinary skill in the art would have been motivated to incorporate Plummer into the system of Lenrow and Weiser for the purpose of adjusting a load profile of one or more processing devices processing a workload in a bulk-synchronous mode. (Plummer paragraph 03)

As per claim 10, Plummer teaches wherein the hardware data processing circuitry comprises a system on chip (SoC). (Plummer [0035] FIG. 2 illustrates a block diagram of a system 200 for managing load profiles of processing devices according to at least one example embodiment. The system 200 includes GPUs 202a, 202b . . . 202n, a cluster manager 204, and controllers 208a, 208b . . . 208n. As may be appreciated, more or fewer GPUs 202 having the same or similar structure as GPUs 202a to 202n may be included in the system 200. As noted above for FIG. 1, network devices 104 and 112 may comprise a cluster of processing devices embodied as processing devices 128 and/or 132 for handling workloads. FIG. 2 illustrates an example where the cluster of processing devices 128 and/or 132 include or are implemented with the GPUs 202a, 202b . . . 202n. Each GPU 202 includes a respective controller 208a, 208b . . . 208n, and each controller 208a, 208b . . . 208n may correspond to a Baseboard Management Controller (BMC) of a GPU or a Graphics Processing Management Unit (GPMU) of a GPU. Controllers 208a to 208n may have the same or similar processing capabilities and/or processor structures as those described herein with respect to processing devices 128 and 132. In at least one non-limiting embodiment, each controller 208a to 208n comprises a System on Chip (SoC) Advanced RISC Machine-based processor (ARM-based processor). Each controller 208a to 208n may, among other things, perform tasks for an associated GPU 202a to 202n, such as environment monitoring (for temperature, humidity, particulates, etc.), power management, diagnostics, and/or the like. [0037] As shown in FIG.
2, each controller 208a to 208n includes one or more current sink circuits (212a, 212b, and 212c), one or more current throttle circuits (216a, 216b, and 216c), and one or more load detector circuits (220a, 220b, 220c). The current sink circuit(s) 212, the current throttle circuit(s) 216, and/or the load detector circuit(s) 220 for each controller 208 may be fabricated on the same SoC as the aforementioned BMC or GPMU. In this way, the current sink circuit(s) 212, the current throttle circuit(s), and/or the load detector circuit(s) are “on-die” circuits). As per claim 11, Plummer teaches wherein the hardware data processing circuitry comprises a field programmable gate array (FPGA). (Plummer [0029] In at least one example embodiment, network devices 104 and 112 correspond to or include one or more processing devices 128 and 132 that are capable of running a bulk-synchronous workload as part of a cluster. Non-limiting examples for the bulk-synchronous workload include workloads for Natural Language Processing (NLP), workloads for reinforcement learning, workloads for artificial intelligence, workloads for complex image processing, and/or the like. In one non-limiting embodiment, the processing devices 128 and 132 each include one or more GPUs for processing the workloads described herein (see GPUs 202 in FIG. 2). Embodiments are not limited to using GPUs and other processing devices may handle bulk-synchronous workloads, such as central processing units (CPUs), data processing units (DPUs), and/or the like. Each network device 104 and 112 may additionally or alternatively include other components, such as a network switch (e.g., an Ethernet switch), a network interface controller (NIC), a CPU, a DPU, or any other suitable device used to process data and/or control the flow of data between devices connected to communication network 108. 
Each network device 104 and 112 may include or be connected to one or more of Personal Computer (PC), a laptop, a tablet, a smartphone, a server, a collection of servers, and/or the like. Although only two network devices are shown, more or fewer network devices may be included in the system 100. [0031] The one or more processing devices 128 and the one or more processing devices 132 may include one or more processing circuits for carrying out computing tasks, for example, tasks associated with processing data and/or controlling the flow of data within each network device 104 and 112 and/or over the communication network 108. Such processing circuits may comprise software, hardware, or a combination thereof. For example, a processing circuit may include a memory including executable instructions and at least one processor (e.g., a microprocessor) that executes the instructions on the memory. The memory may correspond to any suitable type of memory device or collection of memory devices configured to store instructions. Non-limiting examples of suitable memory devices that may be used include Flash memory, Random Access Memory (RAM), Read Only Memory (ROM), variants thereof, combinations thereof, or the like. In some embodiments, the memory and processor may be integrated into a common device (e.g., a microprocessor may include integrated memory). Additionally or alternatively, a processing circuit may comprise hardware, such as an application specific integrated circuit (ASIC). Other non-limiting examples of the processing circuits include an Integrated Circuit (IC) chip, a Central Processing Unit (CPU), a microprocessor, a Field Programmable Gate Array (FPGA), a collection of logic gates or transistors, resistors, capacitors, inductors, diodes, or the like. Some or all of the processing circuits may be provided on a Printed Circuit Board (PCB) or collection of PCBs. 
It should be appreciated that any appropriate type of electrical component or collection of electrical components may be suitable for inclusion in the processing circuitry.)

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lenrow (US 12,255,875 B2) in view of Chisnall (US 2021/0004469 A1) in further view of Wu (US 2021/0342092 A1).

As per claim 12, Lenrow and Chisnall do not teach wherein the request to perform the process comprises a trigger to trigger a context switch to the system space. However, Wu teaches wherein the request to perform the process comprises a trigger to trigger a context switch to the system space. (Wu [0042] The request 152 for OTP memory may be sent by a user application 116 that is executing at a priority that is lower or more restrictive than the priority of the process A 110. Alternatively, the request 152 for OTP memory may be received from any software process that is executing at a lower priority than the priority of the process A 110 or a process that is executing at the same priority as the process A 110. For example the request 152 for OTP memory may be sent by a software program that is executing within process A 110. [0048] The kernel executing in process A 110 is configured to execute at an intermediate privilege level and to manage the lifecycle of, and allocate resources for, the user space process 106. The kernel executing in process A 110 may be configured to execute a plurality of user space processes 106 each having multiple applications 108. The intermediate privilege level used to execute the kernel 110 is higher than the privilege level used to execute the user space processes 106, thereby allowing the kernel process A 110 to load and/or start applications 108 in a user space process 106, while preventing any applications 108 executing in a user space process 106 from accessing or corrupting the kernel process A 110.
The intermediate privilege level used to execute the kernel in process A 110 is lower than the process B, thereby preventing the OS kernel from accessing or modifying computer resources and memory functionality reserved to the process B 112). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Wu with the system of Lenrow and Chisnall to trigger a context switch. One having ordinary skill in the art would have been motivated to incorporate Wu into the system of Lenrow and Chisnall for the purpose of executing software processes at different levels of privilege (Wu paragraph 08).

Claims 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1).

As per claim 14, Wu teaches A method comprising: triggering, by an originating application included in a user space of an apparatus, a context switch to switch context to a distributed system space having a higher privilege level than the user space and a lower privilege level than a kernel space of the apparatus; (Wu [0042] The request 152 for OTP memory may be sent by a user application 116 that is executing at a priority that is lower or more restrictive than the priority of the process A 110. Alternatively, the request 152 for OTP memory may be received from any software process that is executing at a lower priority than the priority of the process A 110 or a process that is executing at the same priority as the process A 110. For example, the request 152 for OTP memory may be sent by a software program that is executing within process A 110. [0048] The kernel executing in process A 110 is configured to execute at an intermediate privilege level and to manage the lifecycle of, and allocate resources for, the user space process 106. The kernel executing in process A 110 may be configured to execute a plurality of user space processes 106 each having multiple applications 108.
The intermediate privilege level used to execute the kernel 110 is higher than the privilege level used to execute the user space processes 106, thereby allowing the kernel process A 110 to load and/or start applications 108 in a user space process 106, while preventing any applications 108 executing in a user space process 106 from accessing or corrupting the kernel process A 110. The intermediate privilege level used to execute the kernel in process A 110 is lower than the process B, thereby preventing the OS kernel from accessing or modifying computer resources and memory functionality reserved to the process B 112). Within the context of this invention, the context switch is treated as going from one privilege level to another: in this case, the application is running at a less privileged level, but when process A is taken up, execution switches to a different, less restrictive space. This is consistent with Fig 5 and Fig 6 of the specification.

Wu does not teach responsive to the context switch, perform service mesh operations and control network traffic corresponding to the context switch, the distributed system space having higher privilege level than the system user space, the distributed system space having a lower privilege level than a kernel system space. However, Chisnall teaches responsive to the context switch, perform service mesh operations and control network traffic corresponding to the context switch, the distributed system space having higher privilege level than the system user space, the distributed system space having a lower privilege level than a kernel system space. (Chisnall [0032] FIG. 4 is an example, similar to FIG. 3, but where there are more than two privilege levels. In this example the highest privilege level is a gatekeeper 406 and there are two other privilege levels (an intermediate privilege level 408 and a lower privilege level 404).
Both the intermediate privilege level 408 and the lower privilege level 404 are isolated except that the lower privilege level is able to communicate with the intermediate privilege level and the intermediate privilege level is able to communicate with the highest privilege level. The highest privilege level 406 is able to communicate directly with the lowest privilege level 404, though whether it does so depends on the specific implementation (such as where device pass-through is done in the kernel). The lowest privilege level 404 is unable to initiate communication with the highest privilege level 406. [0033] Application code, which potentially comprises security vulnerabilities, executes at the lower privilege level 404 and the intermediate privilege level 408 [similar to process B of Wu, which also executes in an intermediate privilege]. In an example the application code comprises a full operating system and userspace. [0034] The highest privilege level 406 [similar to process B of Wu] acts as a gatekeeper as described earlier. It applies a policy in order to drop, modify or forward communications with the intermediate privilege level 408). Mesh operations (as understood in the art) involve communication among nodes. This structure is shown in Fig 1 of Chisnall, which shows different entities (Party A and Party B) communicating with a data center (specifically compute Node 102). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Chisnall with the system of Wu to execute at an intermediate security level.
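The communication rules in the quoted Chisnall passage can be sketched compactly (a minimal Python illustration; the level constants and the concrete drop/modify/forward rules are hypothetical assumptions, not taken from Chisnall or Wu):

```python
from typing import Optional

# Minimal sketch of a three-level privilege model with a gatekeeper at the
# top. Level names and policy rules are illustrative assumptions only.
LOW, INTERMEDIATE, HIGHEST = 0, 1, 2

# Who may initiate communication with whom: the lowest level may only reach
# the intermediate level; the intermediate level may reach the highest; the
# highest may also reach down to the lower levels directly.
ALLOWED = {
    LOW: {INTERMEDIATE},
    INTERMEDIATE: {HIGHEST},
    HIGHEST: {LOW, INTERMEDIATE},
}

def gatekeeper_policy(message: str) -> Optional[str]:
    """Drop, modify, or forward a message arriving at the highest level."""
    if "secret" in message:
        return None                      # drop
    if message.startswith("raw:"):
        return "checked:" + message[4:]  # modify
    return message                       # forward unchanged

def send(src: int, dst: int, message: str) -> Optional[str]:
    if dst not in ALLOWED[src]:
        raise PermissionError(f"level {src} may not initiate contact with level {dst}")
    return gatekeeper_policy(message) if dst == HIGHEST else message
```

Here `send(LOW, HIGHEST, ...)` raises, mirroring the statement that the lowest level cannot initiate communication with the highest, while traffic from the intermediate level passes through the gatekeeper's drop/modify/forward policy.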
One having ordinary skill in the art would have been motivated to incorporate Chisnall into the system of Wu for the purpose of enforcing separation between the at least two execution environments (Chisnall paragraph 06).

As per claim 18, Wu teaches A system comprising: at least two host apparatuses including memory devices having virtual memory configured into a user space having a first privilege level and a kernel space having a second privilege level higher than the first privilege level; (Wu [0039] User applications 108 executing in a user space process 106 are not allowed to access the memory management functionality necessary to allocate or otherwise control memory 104. Access to memory management functionality, such as hardware memory controllers, is reserved for higher privileged processes including process A 110 and process B 112. For example, in certain embodiments unprivileged processes, such as the user space process 106, are provided with an abstraction of the memory 104, often referred to as virtual memory or virtual memory space, and are not able to access and in fact are not even aware of the physical memory devices that make up the memory 104. The user space processes 106 do not have information about how the virtual memory space relates to the physical computer memory 104.
When a lower privileged process [first level of privilege], such as the user space processes 106, requires memory management functions, such as memory allocation, initialization, etc., it must send a request to a higher privileged process such as process A 110, which in certain embodiments may include an operating system, to perform the desired memory operations on its behalf). Wu does not teach infrastructure communication circuitry configured to execute within a system space of the memory device, the system space having a third privilege level higher than the first privilege level and lower than the second privilege level, the infrastructure communication circuitry configured to: receive, from the user space, a request to perform a process for a corresponding user application in the user space; and responsive to receiving the request, perform service mesh operations and control network traffic corresponding to the request. However, Chisnall teaches infrastructure communication circuitry configured to execute within a system space of the memory device, the system space having a third privilege level higher than the first privilege level and lower than the second privilege level, the infrastructure communication circuitry configured to: receive, from the user space, a request to perform a process for a corresponding user application in the user space; and responsive to receiving the request, perform service mesh operations and control network traffic corresponding to the request. (Chisnall [0032] FIG. 4 is an example, similar to FIG. 3, but where there are more than two privilege levels. In this example the highest privilege level is a gatekeeper 406 and there are two other privilege levels (an intermediate privilege level 408 and a lower privilege level 404).
Both the intermediate privilege level 408 and the lower privilege level 404 are isolated except that the lower privilege level is able to communicate with the intermediate privilege level and the intermediate privilege level is able to communicate with the highest privilege level. The highest privilege level 406 is able to communicate directly with the lowest privilege level 404, though whether it does so depends on the specific implementation (such as where device pass-through is done in the kernel). The lowest privilege level 404 is unable to initiate communication with the highest privilege level 406. [0033] Application code, which potentially comprises security vulnerabilities, executes at the lower privilege level 404 and the intermediate privilege level 408 [similar to process B of Wu, which also executes in an intermediate privilege. This constitutes the third privilege level]. In an example the application code comprises a full operating system and userspace. [0034] The highest privilege level 406 [similar to process B of Wu] acts as a gatekeeper as described earlier. It applies a policy in order to drop, modify or forward communications with the intermediate privilege level 408). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Chisnall with the system of Wu to execute at an intermediate security level. One having ordinary skill in the art would have been motivated to incorporate Chisnall into the system of Wu for the purpose of enforcing separation between the at least two execution environments (Chisnall paragraph 06).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Luo (US 12,524,850 B1).

As per claim 15, Wu and Chisnall do not teach wherein the service mesh operations are executed by invoking an application programming interface to negotiate shared memory usage with a second apparatus.
However, Luo teaches wherein the service mesh operations are executed by invoking an application programming interface to negotiate shared memory usage with a second apparatus. (Luo [col 157, lines 17-37] In at least one embodiment, OpenCL defines a “platform” that allows a host to control devices connected to a host. In at least one embodiment, an OpenCL framework provides a platform layer API and a runtime API, shown as platform API 5803 and runtime API 5809. In at least one embodiment, runtime API 5809 uses contexts to manage execution of kernels on devices. In at least one embodiment, each identified device may be associated with a respective context, which runtime API 5809 may use to manage command queues, program objects, and kernel objects, share memory objects, among other things, for that device. In at least one embodiment, platform API 5803 exposes functions that permit device contexts to be used to select and initialize devices, submit work to devices via command queues, and enable data transfer to and from devices, among other things. In addition, OpenCL framework provides various built-in functions (not shown), including math functions, relational functions, and image processing functions, among others, in at least one embodiment). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Luo with the system of Wu and Chisnall to use an API to negotiate shared memory usage. One having ordinary skill in the art would have been motivated to incorporate Luo into the system of Wu and Chisnall for the purpose of optimizing any type of operations associated with machine learning (Luo col 66, lines 8-10).

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Parthasarathy (US 2019/0327310 A1).
As per claim 16, Wu and Chisnall do not teach wherein the context switch includes a request to access a second application, the second application on a same host as the originating application. However, Parthasarathy teaches wherein the context switch includes a request to access a second application, the second application on a same host as the originating application. (Parthasarathy [0047] As described above, in some embodiments, the request received in step 200 corresponds to a request to access one of multiple applications running on the nodes 106 at the server side 102. In such embodiments, a request to access a second application also or alternatively may be received at the same node 106 at which the request to access the first application was received or at a different node. In such embodiments, the request to access the second application may include the set of values including the value corresponding to the session metadata 114, the value corresponding to the session timeframe, and the value corresponding to the session signature associated with the session 110). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Parthasarathy with the system of Wu and Chisnall to access a second application on the same host. One having ordinary skill in the art would have been motivated to incorporate Parthasarathy into the system of Wu and Chisnall for the purpose of achieving session failover for HTTP traffic (Parthasarathy paragraph 02).

As per claim 17, Wu and Chisnall do not teach wherein the context switch includes a request to access a second application on a different host than the originating application. However, Parthasarathy teaches wherein the context switch includes a request to access a second application on a different host than the originating application.
(Parthasarathy [0047] As described above, in some embodiments, the request received in step 200 corresponds to a request to access one of multiple applications running on the nodes 106 at the server side 102. In such embodiments, a request to access a second application also or alternatively may be received at the same node 106 at which the request to access the first application was received or at a different node. In such embodiments, the request to access the second application may include the set of values including the value corresponding to the session metadata 114, the value corresponding to the session timeframe, and the value corresponding to the session signature associated with the session 110). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Parthasarathy with the system of Wu and Chisnall to access a second application on a different host. One having ordinary skill in the art would have been motivated to incorporate Parthasarathy into the system of Wu and Chisnall for the purpose of achieving session failover for HTTP traffic (Parthasarathy paragraph 02).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Anderson (US 2006/0130060 A1).

As per claim 19, Wu and Chisnall do not teach wherein the system space operations are executed in ring 1 or ring 2 of a four-ring protection architecture. However, Anderson teaches wherein the system space operations are executed in ring 1 or ring 2 of a four-ring protection architecture. (Anderson [0021] Most instruction set architectures (ISAs), including the ISA of the Intel Pentium® 4 (herein referred to as the IA-32 ISA), are designed with the concept of privilege levels in the instruction set architecture; these privilege levels are referred to herein as ISA privilege levels. Referring to FIGS.
2A and 2B, there is shown a block diagram illustrating platforms with various ISA privilege levels. The IA-32 ISA, for example, has four ISA privilege levels, referred to as ring levels ring-0 301, ring-1 303, ring-2 305 and ring-3 307. In the IA-32 ISA, ring-0 (301) is the most privileged ISA privilege level while ring-3 (307) is the least privileged ISA privilege level). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Anderson with the system of Wu and Chisnall to use ring 1 or ring 2 of a four-ring architecture. One having ordinary skill in the art would have been motivated to incorporate Anderson into the system of Wu and Chisnall for the purpose of running components of a virtual machine monitor at a reduced privilege level (Anderson paragraph 01).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Ruemann (US 2018/0165117 A1).

As per claim 20, Wu and Chisnall do not teach wherein the infrastructure communication circuitry is configured to transmit data in a hardware-assisted shared memory mechanism between the user space and the kernel space. However, Ruemann teaches wherein the infrastructure communication circuitry is configured to transmit data in a hardware-assisted shared memory mechanism between the user space and the kernel space. (Ruemann [0131] The shared memory is accessible in user space or kernel space depending on where each SWS runs, and the hardware device which is responsible for transmitting data to the physical network. The SWHYPE layer is responsible for initializing an isolation domain and bringing up all ports). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Ruemann with the system of Wu and Chisnall to transmit data in a shared memory.
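The pattern at issue in claim 20 — handing data between components through a shared memory region rather than copying it through a pipe or socket — can be sketched in user space with Python's multiprocessing.shared_memory (an analogue only; the hardware-assisted user-space/kernel-space mechanism of Ruemann is not reproduced here, and the one-byte length-prefix framing is an illustrative assumption):

```python
from multiprocessing import shared_memory

# User-space sketch of transferring data through a shared memory region.
# The 1-byte length prefix (payloads under 256 bytes) is an illustrative
# framing assumption, not taken from any cited reference.

def write_message(name: str, payload: bytes) -> None:
    shm = shared_memory.SharedMemory(name=name)  # attach to existing region
    shm.buf[0] = len(payload)                    # length prefix
    shm.buf[1:1 + len(payload)] = payload        # write in place, no copy out
    shm.close()

def read_message(name: str) -> bytes:
    shm = shared_memory.SharedMemory(name=name)
    length = shm.buf[0]
    data = bytes(shm.buf[1:1 + length])
    shm.close()
    return data

# Create the region once, then write and read through separate handles.
region = shared_memory.SharedMemory(create=True, size=256)
try:
    write_message(region.name, b"packet-data")
    assert read_message(region.name) == b"packet-data"
finally:
    region.close()
    region.unlink()
```

Both sides operate on the same backing pages; the write-in-place, read-in-place flow is what avoids the extra copy that a pipe or socket transfer would incur.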
One having ordinary skill in the art would have been motivated to incorporate Ruemann into the system of Wu and Chisnall for the purpose of improving linkage between two ports that are forwarding to each other while also executing packet processing on an SWS as packets transit between the ports (Ruemann paragraph 10).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Zhou (US 2016/0094519 A1).

As per claim 21, Wu and Chisnall do not teach comprising at least one of an infrastructure processing unit (IPU) or data processing unit (DPU) configured to encapsulate user space application data for transmission in L4 payloads. However, Zhou teaches comprising at least one of an infrastructure processing unit (IPU) or data processing unit (DPU) configured to encapsulate user space application data for transmission in L4 payloads. (Zhou [0057] In an aspect, each device queue 312 can be associated with one or more direct cache access (DCA) control settings, such as 320-1, 320-2, and 320-3, which may be collectively referred to as DCA control settings 320, which define one or more parts/portions of an incoming packet that are to be copied/written to cache of the CPU under various circumstances. For instance, device queue 312-1 has an associated DCA control represented by 320-1, device queue 312-2 has an associated DCA control represented by 320-2, and device queue 312-3 has an associated DCA control represented by 320-3. As shown, for device queue 312-1, the DCA control setting 320-1 defines that its corresponding CPU A cache 308-1 requires only the header information including L2, L3, and L4 information (represented by filled checkboxes in 320-1). Similarly, for device queue 312-2, DCA control setting 320-2 defines that its corresponding CPU B cache 308-2 requires both the header information including L2, L3, and L4 as well as payload information PL.
Similarly, for device queue 312-3, DCA control setting 320-3 defines that its corresponding CPU Z cache 308-3 requires no information/segment from the incoming packet. Therefore, as intelligent network I/O device 302 understands the protocol/format of the incoming packets P1, P2, . . . , Pn, and also understands the nature and needs of the CPU queue to which the packets are to be forwarded and the applications being run on respective CPU, it is able to define rules/policies specifying various subsets of received packets, instead of the complete packet, that are to be sent to the respective CPUs for efficient processing). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Zhou with the system of Wu and Chisnall to transmit data using an L4 protocol. One having ordinary skill in the art would have been motivated to incorporate Zhou into the system of Wu and Chisnall for the purpose of improving efficiency of direct cache access (DCA) (Zhou paragraph 19).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2021/0342092 A1) in view of Chisnall (US 2021/0004469 A1) in further view of Zhou (US 2016/0094519 A1) and Plummer (US 2024/0004705 A1).

As per claim 22, Wu, Chisnall, and Zhou do not teach wherein the IPU/DPU couples two host apparatuses. However, Plummer teaches wherein the IPU/DPU couples two host apparatuses. (Plummer [0029] Each network device 104 and 112 may additionally or alternatively include other components, such as a network switch (e.g., an Ethernet switch), a network interface controller (NIC), a CPU, a DPU, or any other suitable device used to process data and/or control the flow of data between devices connected to communication network 108. Each network device 104 and 112 may include or be connected to one or more of Personal Computer (PC), a laptop, a tablet, a smartphone, a server, a collection of servers, and/or the like.
Although only two network devices are shown, more or fewer network devices may be included in the system 100). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Plummer with the system of Wu, Chisnall, and Zhou to couple hosts together. One having ordinary skill in the art would have been motivated to incorporate Plummer into the system of Wu, Chisnall, and Zhou for the purpose of adjusting a load profile of one or more processing devices processing a workload in a bulk-synchronous mode (Plummer paragraph 03).

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Saito (US 2024/0291767 A1) in view of Newton (US 2011/0292206 A1).

As per claim 24, Saito does not teach wherein the input data does not include ethernet header information. However, Newton teaches wherein the input data does not include ethernet header information. (Newton [0100] …inspecting the frame data at L2 (Ethernet header)/L3 (IP header)/L4 (TCP header) protocol layers, applying rules for accepting/rejecting/modifying frames, and then re-transmitting them. The NAT router 1002 may be sited between port 10A of a first switch module 1004 and the public network connection 1006. Instead of or in addition to a NAT router, a SPI unit can also be used). Ethernet header information is mostly associated with the L2 protocol. L4 protocols (like TCP and UDP) for the most part do not see or contain the Ethernet header. It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Newton with the system of Saito to not include Ethernet header information. One having ordinary skill in the art would have been motivated to incorporate Newton into the system of Saito for the purpose of creating a point-to-point, plug-and-play configuration (Newton paragraph 09).

Claim 25 is rejected under 35 U.S.C.
103 as being unpatentable over Saito (US 2024/0291767 A1) in view of Newton (US 2011/0292206 A1) in further view of McCann (US 2016/0337271 A1).

As per claim 25, Saito and Newton do not teach wherein the coprocessor circuitry is configured to add ethernet header information to the input data. However, McCann teaches wherein the coprocessor circuitry is configured to add ethernet header information to the input data. (McCann [0128] … In one embodiment, the control point 106 (e.g., ONOS) creates Ethernet headers to add to packets of LTE users). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine McCann with the system of Saito and Newton to add an Ethernet header. One having ordinary skill in the art would have been motivated to incorporate McCann into the system of Saito and Newton for the purpose of performing traffic flow management at the control point (McCann paragraph 04).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20160301831 A1 – discloses techniques for timing synchronization of audio and video (AV) data in a network, in particular techniques for an AV master to distribute AV data encoded with one or more time markers to a plurality of processing nodes. The one or more time markers may be indexed to a precision time protocol (PTP) time stamp used as a time reference. In one technique, the nodes extract the time markers to determine an offset value that is applied to a PLL to synchronize AV data packets at a distribution node or a processing node. In another technique, the distribution node or the processing node determines the worst-case path, which corresponds to a system offset value. The distribution node then reports the system offset value to the AV master, which in turn adjusts the phase based on the report.
US 20250322066 A1 – discloses an attack detection and handling control system that includes a controller and a hardware accelerator. The hardware accelerator includes a data acquisition unit that acquires communication data from a communication device, a data preprocessing unit that performs preprocessing on the acquired data, an attack detection unit that determines an attack using a learning model, a detection alert notification unit that generates a detection alert, and a handling performance unit that performs attack handling based on a handling control policy. The controller includes a learning unit that generates the learning model for detecting the attack and a handling determination unit that creates the handling control policy for the attack.

US 20250068554 A1 – discloses a processing device that sends a storage command to a queue without routing the storage command through a kernel space. The queue is reserved for direct access by the application and may be associated with a set of permissions, a set of quality-of-service parameters, and/or a set of blocks on the storage devices of a storage system.

US 12014179 B1 – discloses that data collection functions are interposed to generate input data for an observability pipeline system. In some aspects, a data collection function is made available to an application running on a computer system, with the data collection function having the same name as an original function referenced by the application. In response to a call to the original function, the data collection function is executed and data is extracted from the application. The original function is then executed. A reporting thread of the application is executed; executing the reporting thread generates observability pipeline input data by formatting the extracted data and sends the observability pipeline input data from the computer system to an observability pipeline system.
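The interposition pattern summarized for US 12014179 B1 — installing a collection function under the original function's name, extracting data, then executing the original — can be sketched as follows (illustrative names only; this is not code from the reference):

```python
import functools

# Sketch of the function-interposition pattern summarized above: a data
# collection wrapper is installed under the original function's name; each
# call records data, then delegates to the original. All names here are
# illustrative assumptions, not taken from US 12014179 B1.
collected = []

def interpose(module, func_name: str):
    """Replace module.<func_name> with a recording wrapper of the same name."""
    original = getattr(module, func_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        collected.append((func_name, args, kwargs))  # extract data first
        return original(*args, **kwargs)             # then run the original

    setattr(module, func_name, wrapper)

# Example: callers keep invoking math.sqrt by its original name.
import math
interpose(math, "sqrt")
result = math.sqrt(9.0)
assert result == 3.0
assert collected == [("sqrt", (9.0,), {})]
```

Because the wrapper is bound under the original name, the application needs no changes; a separate reporting step (a thread in the reference's description) can later format and ship the `collected` records.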
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAN KAMRAN whose telephone number is (571) 272-3401. The examiner can normally be reached 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAN KAMRAN/
Primary Examiner, Art Unit 2196

Prosecution Timeline

Mar 29, 2023
Application Filed
May 22, 2023
Response after Non-Final Action
Feb 10, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591444
Hardware Virtual Machine for Controlling Access to Physical Memory Space
2y 5m to grant Granted Mar 31, 2026
Patent 12585486
SYSTEMS AND METHODS FOR DEPLOYING A CONTAINERIZED NETWORK FUNCTION (CNF) BASED ON INFORMATION REGARDING THE CNF
2y 5m to grant Granted Mar 24, 2026
Patent 12585497
AMBIENT COOPERATIVE CANCELLATION WITH GREEN THREADS
2y 5m to grant Granted Mar 24, 2026
Patent 12572394
METHODS, SYSTEMS AND APPARATUS TO DYNAMICALLY FACILITATE BOUNDARYLESS, HIGH AVAILABILITY SYSTEM MANAGEMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12561158
DEPLOYMENT OF A VIRTUALIZED SERVICE ON A CLOUD INFRASTRUCTURE BASED ON INTEROPERABILITY REQUIREMENTS BETWEEN SERVICE FUNCTIONS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
