Prosecution Insights
Last updated: April 19, 2026
Application No. 19/142,722

Computing Device, Server, and Data Processing Method

Status: Non-Final OA (§103)
Filed: Jun 24, 2025
Examiner: BARTELS, CHRISTOPHER A.
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Suzhou MetaBrain Intelligent Technology Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 66% — above average (364 granted / 547 resolved; +11.5% vs TC avg)
Interview Lift: +12.8% (moderate, roughly +13%), measured over resolved cases with interview
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 587 across all art units (40 currently pending)
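The headline figures above follow from simple arithmetic on the examiner's career record. A minimal illustrative sketch, assuming the allow rate is a plain ratio and the interview lift is additive (the tool's actual model may weight cases differently):

```python
# Illustrative sketch of how this page's headline figures relate.
# Assumes plain division and additive lift; the tool's real model may differ.

granted, resolved = 364, 547            # career totals shown above

career_allow_rate = granted / resolved  # displayed as 66%
interview_lift = 0.128                  # reported +12.8% interview lift

with_interview = career_allow_rate + interview_lift  # displayed as 79%

print(f"Career allow rate: {int(career_allow_rate * 100)}%")
print(f"With interview:    {int(with_interview * 100)}%")
```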

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 66.9% (+26.9% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 547 resolved cases
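Each per-statute figure above is paired with a delta against the Tech Center average. Assuming the delta is a simple difference (an assumption; the page does not state its estimation method), the implied TC baseline for each statute can be recovered:

```python
# Recover the implied Tech Center baseline for each statute from the
# examiner figure and the reported delta (baseline = figure - delta).
# Assumes the deltas are simple differences, which may not match the
# tool's actual estimation method.

stats = {               # statute: (examiner %, delta vs TC avg %)
    "101": (2.1, -37.9),
    "103": (66.9, +26.9),
    "102": (23.9, -16.1),
    "112": (3.6, -36.4),
}

for statute, (figure, delta) in stats.items():
    tc_avg = figure - delta
    print(f"§{statute}: examiner {figure}% vs implied TC avg {tc_avg:.1f}%")
```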

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This office action is in response to the claim listing filed on June 24, 2025. Claims 1, 3-19, 21, and 22 are currently pending.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 06/24/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings were received on 06/24/2025. These drawings are accepted.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-8, 10, 14-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Choudhary (USPGPUB No. 2022/0342841 A1, hereinafter Choudhary) in view of Schuette et al. (USPGPUB No. 2014/0129753 A1, hereinafter Schuette).

Referring to claim 1, Choudhary discloses a computing device, comprising {"example system utilizing a CXL link 450", see Fig. 4, [0056], 1st sentence}: a Central Processing Unit (CPU) {"host processor 405", see Fig. 4, [0056], 2nd sentence}; an accelerator {"accelerator device 410", see Fig. 4, [0056], 2nd sentence}; and a first Peripheral Component Interconnect Express (PCIe) circuit comprising {"[a circuit] may utilize a physical layer 515 based on a PCIe physical layer", see Fig. 5, [0059], 4th sentence}: a first downstream port connected to the accelerator {"other D2D adapter 1010 (e.g., of the downstream port of the device)", see Fig. 16B, [0110]}, an upstream port connected to the CPU {"functionality provided for in CXL/FlexBus [port] may be provided" (see Figs. 4 or 5, [0094]), connecting to the CPU bidirectionally: "CXL enables communication between host processors (e.g., CPUs)" ([0054], 3rd sentence)}, and multiple ports each supporting Compute Express Link (CXL) protocol {"set by Downstream Ports to inform the remote Link Port partner that it is a Downstream Port", Table 7 after [0114], last 4 lines}; and a memory expansion unit {"rack or even the pod level for enabling resource pooling", see Fig. 9, [0072], 4th sentence} connected to a second downstream port of the first PCIe circuit {"using UCIe retimers to transport the underlying protocols (e.g., PCIe, CXL)" (see Figs. 8a-8d, [0072]), such PCIe and CXL comprising a plurality of ports: "upstream ports that connect to a UCIe root port can be a PCI" ([0090], last two sentences); or "host downstream UCIe port… with a CXL DVSEC capability and relevant PCIe capabilities" (also [0090])}; wherein the accelerator is configured to perform access operations on a host memory {"HBM Connect is used to connect memory on-package", see Fig. 5, [0064], 1st sentence} based on the CXL protocol {"rack/pod-level disaggregation may be implemented using [Type 2] CXL 2.0 (or later) protocol", see Figs. 12a, 12b, 12c, and 13, [0085]}; wherein local memory of the memory expansion unit {local memory "pod level for enabling resource pooling" via memory expansion unit "components attached at the board level such as memory, accelerators", [0072], 2nd sentence, emphasis added by Examiner} and the host memory form a converged memory {"implemented as a converged logical physical layer 545 that can operate in either PCIe mode or CXL mode based on results of alternate mode negotiation during the link training process" (see Fig. 5, [0059])}, and the accelerator is configured to perform access operations {those link training processes made possible through "retimers may be used to extend the UCIe connectivity beyond the package using off-package links", see Figs. 12a, 12b, and 12c, [0085]} on the converged memory based on the CXL protocol {such retimers/link trainers perform access operations, "connects on its local package and ensures that the [access operations] flits are delivered" via CXL/UCIe, see Figs. 12a, 12b, and 12c, [0085], 2nd sentence}.

Choudhary does not appear to explicitly disclose wherein the PCIe circuit is a PCIe switch. However, Schuette discloses wherein the PCIe circuit is a PCIe switch {"PCIe switch 90 routes data and request signals (PCIe packets) over the PCIe lanes between the host computer system", see Fig. 9, [0057], 3rd sentence}. Choudhary and Schuette are analogous art because they are from the same problem-solving area: methods and systems for handling PCIe devices. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Choudhary and Schuette before him or her, to modify Choudhary's "underlying [transport] protocols (e.g., PCIe, CXL)" (see Figs. 8a-8d, [0072]) by incorporating Schuette's "PCIe switch 90" (see Fig. 9, [0057]).
The suggestion/motivation for doing so would have been to provide a PCIe switch comprising another set of PCIe lanes to one or more PCIe edge connectors adapted to be inserted into PCIe expansion slots of a motherboard of the host computer system, to achieve the technical effect of linking a processor expansion card and an SSD expansion card via the switch; in turn, faster throughput is achieved as compared to a link through a chipset input/output hub (IOH) controller containing a PCIe root complex (Schuette [0013], last two sentences). Therefore, it would have been obvious to combine Schuette with Choudhary to obtain the invention as specified in the instant claim(s).

As per claim 3, the rejection of claim 2 is incorporated, and Choudhary discloses wherein the memory expansion unit comprises at least one first processing unit {"[processing] components attached at the board level such as memory, accelerators, networking devices, modem, etc. can be integrated at the package level", see Figs. 8a-8d, [0072], 2nd sentence} having an independent memory {"[said memory] facilitate off-package connections, including server-scale [independent] interconnections between devices", see Figs. 8a-8d, [0072]}.

As per claim 5, the rejection of claim 1 is incorporated, and Choudhary discloses wherein the memory expansion unit comprises a memory expansion board {"[processing] components attached at the board level such as memory, accelerators, networking devices, modem, etc. can be integrated at the package level", see Figs. 8a-8d, [0072], 2nd sentence; another example: "allowing for on-board components such as accelerators, memory expanders, and I/O expanders to be moved on-package seamlessly", [0075], 3rd sentence}.

As per claim 6, the rejection of claim 5 is incorporated, and Choudhary discloses wherein the memory expansion board comprises at least one Dynamic Random Access Memory (DRAM) {"a memory in the system, such as DRAM, cache", see Fig. 1, [0179], 1st sentence}, or at least one Storage Class Memory (SCM) {Examiner's Note: the recited "or" renders this dependent claim a Markush claim; thus the reference need only disclose at least one member of the group to address the claim}.

As per claim 7, the rejection of claim 1 is incorporated, and Schuette discloses further comprising: a Non-Volatile Memory Express (NVMe) Solid State Drive (SSD) connected {"NVM Express standard (NVMe), formerly known as Enterprise non-volatile memory host controller interface (NVMHCI), a specification for accessing SSDs", see Fig. 1, [0020], 1st sentence} to a third downstream port of the first PCIe switch {"accessing SSDs over a PCIe channel [coupled to PCIe switch]" ([0020], 1st sentence), further illustrated by "PCIe switch 62 to allow direct communication between the PCIe-based processor and SSD expansion cards 140a and 140b. The PCIe switch 62" (see Fig. 6, [0052])}.

As per claim 8, the rejection of claim 1 is incorporated, and Schuette discloses further comprising: a Network Interface Controller (NIC) connected to a fourth downstream port {"same host computer system or else may come from a remote location such as a network-attached-storage (NAS) device on the level of the file system (see Fig. 19 and below)", utilizing an appropriate port via NIC since NAS devices operate over the Internet: "file system using Internet Protocol" (both citations in [0067], last sentence)} of the first PCIe switch {"bridge connection could comprise a PCIe switch 62 similar to what is represented for the daughter board 60 in FIGS. 6 and 7", [0070], 2nd sentence}.

As per claim 10, the rejection of claim 1 is incorporated, and Schuette discloses wherein the memory expansion unit comprises: a memory pool comprising {"comprising a [memory pool] flash memory array 44 and a SSD controller 28", see Figs. 2 and 3, [0047], last two sentences}: at least one second processing unit having an independent memory {"functionally coupled to a [independent memory] cache memory 46", see Fig. 14, [0064]}, and at least one second PCIe switch {"The [second] PCIe switch 62 allows peer-to-peer communication of the two expansion cards 140a and 140b", see Fig. 7, [0053], last sentence}; wherein a downstream port {"Arbitration of [port] connections may be done according to the base address registers (BAR) defining the address range of individual target", see Fig. 10, [0059], 2nd sentence} of the second PCIe switch is connected to a first endpoint port of the second processing unit {"Arbitration of [a plurality of] connections may be done according to the base address registers (BAR) defining the [port] address range of individual target [processing unit(s)]", [0059], 2nd sentence}, and a second endpoint port of the second processing unit {other endpoint ports: "data processing through any of the video ports such as DVI, HDMI, or DisplayPort", see Fig. 9, [0058]} is connected to a fifth downstream port of the first PCIe switch {"the non-volatile memory controller through the PCIe switch 90 using GPU-Direct [port] or a comparable protocol" ([0059], last sentence), among the "built-in PCIe bank switch (not shown) to select the eight PCIe lanes connected to the host PCIe bus connector 26 via the PCIe link #2 120b, or else select a second set of PCIe lanes PCIe link #3 120c" (see Fig. 11, [0060])}.

As per claim 14, the rejection of claim 1 is incorporated, and Choudhary discloses wherein the accelerator is a Graphic Processing Unit (GPU) {"set of workload accelerators (e.g., graphics processing units (GPUs)", see Figs. 1 and 2, [0054]}.

Referring to claim 15, claim 15 is a server claim treated as a system claim reciting functional language corresponding to the device claim of claim 1, and is thereby rejected under the same rationale as claim 1 recited above.
Referring to claim 16, claim 16 is a method claim reciting functional language corresponding to the device claim of claim 1, and is thereby rejected under the same rationale as claim 1 recited above.

As per claim 17, the rejection of claim 16 is incorporated, and Choudhary discloses wherein the method further comprises: executing, by the accelerator, a computational task {"intermediate stage between transaction layer 205", see Figs. 1 and 2, [0045], 1st sentence} based on the first task data to generate first result data {"either PCIe mode or CXL mode based on [first result data] results of alternate mode negotiation during the link training process", see Fig. 5, [0059]}; and receiving, by the accelerator {"CPU-to-memory interconnect designed [received/transmitted]", [0054], 1st sentence}, a second data request message sent by the CPU {"interactions between the device and host as a number of requests", [0060]}, and sending the first result data to the host memory {"at least one associated response message and sometimes a data transfer" to the host memory as claimed, [0060]}.

As per claim 18, the rejection of claim 16 is incorporated, and Choudhary discloses wherein the method further comprises: sending, by the accelerator, a third data request message {"accelerator 620 and/or I/O tile 625 can be connected to CPU device(s) 610, 615 using CXL transactions running", see Fig. 6, [0069], 2nd sentence} to the memory expansion unit based on the CXL protocol {memory expansion unit "applications requiring high bandwidth such as memory access (e.g., High Bandwidth Memory" ([0066], last sentence) "connected to CPU device(s) 610, 615 using CXL transactions", see Fig. 6, [0069], 2nd sentence}; and sending, by the memory expansion unit and in response to the third data request message {responses including "Error detection, error correction, retry, and other functionality provided by the D2D adapter", see Fig. 10, [0077]}, second task data to the accelerator via the first PCIe switch {"data transfer using direct memory access, software discovery, error handling, etc., are addressed with PCIe/CXL.io", see Fig. 10, [0073], last sentence}.

As per claim 19, the rejection of claim 16 is incorporated, and Schuette discloses wherein the method further comprises: storing, by the accelerator, third task data {"storing large amounts of data and allowing direct low-latency access", see Fig. 4, [0050], 1st sentence} to the memory expansion unit via the first PCIe switch {"[memory] integrated expansion card 140" (see Fig. 4, [0050], 1st sentence) via the PCIe switch "with a PCIe switch 62 to allow direct communication between the PCIe-based processor and SSD expansion cards 140a and 140b" (see Fig. 6, [0052])}.

As per claim 21, the rejection of claim 8 is incorporated, and Choudhary discloses wherein each group of PCIe switch downstream ports is configured with dual GPUs and dual NICs {"such as an I/O device, a Network Interface Controller (NIC)", see Fig. 1, [0037]; another "network interface, co-processors" (see Fig. 22, [0160], last sentence) and the appropriate GPU "workload accelerators" (see Fig. 1, [0054])}.

Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Choudhary in view of Schuette, and further in view of Duan (USPGPUB No. 2023/0403232 A1).

As per claim 4, the rejection of claim 3 is incorporated; however, neither Choudhary nor Schuette appears to explicitly disclose any limitation in this dependent claim. Furthermore, Duan discloses wherein the first processing unit comprises any one or a combination of: a Field Programmable Gate Array (FPGA) {"Programmable Gate Array (FPGA) devices", see Fig. 4, [0029]}, a Complex Programmable Logic Device (CPLD) {"complex programmable logic device CPLD", see Figs. 3 and 7, [0110]}, a Programmable Logic Device (PLD) {"a programmable logic device (PLD), or a combination thereof", see Figs. 3 and 7, [0110]}, an Application Specific Integrated Circuit (ASIC) {"application-specific integrated circuit (ASIC)", see Figs. 3 and 7, [0110]}, a Generic Array Logic (GAL) device {"generic array logic", see Figs. 3 and 7, [0110]}, a System on Chip (SOC), a Software Defined Infrastructure (SDI) device {Examiner's Note: the recited "or" renders this dependent claim a Markush claim; thus the reference need only disclose at least one member of the group to address the claim}, and an Artificial Intelligence (AI) device {"accelerator 111 may be any one of AI chips such as a GPU, an NPU, a TPU, and a DPU", see Figs. 3, 7, and 11, [0105], last sentence}.

Choudhary/Schuette and Duan are analogous art because they are from the same problem-solving area: methods and systems for handling PCIe devices. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Choudhary/Schuette and Duan before him or her, to modify Choudhary/Schuette's device by incorporating Duan's "processor 121" (see Fig. 7, [0110]). The suggestion/motivation for doing so would have been to provide processor architecture variety to recognize and resolve a problem of insufficient computing power: during distributed computing, each computing device or chip generates data required by another computing device or chip, which involves data exchange between different computing devices or different chips; consequently, improving the efficiency of data transmission between different computing devices or different chips is an effective way to improve distributed computing efficiency (Duan [0003], paraphrased). Therefore, it would have been obvious to combine Duan with Choudhary/Schuette to obtain the invention as specified in the instant claim(s).
As per claim 12, the rejection of claim 10 is incorporated; however, neither Choudhary nor Schuette appears to explicitly disclose any limitation in this dependent claim. Furthermore, Duan discloses wherein the second processing unit comprises any one or a combination of: a Field Programmable Gate Array (FPGA) {"Programmable Gate Array (FPGA) devices", see Fig. 4, [0029]}, a Complex Programmable Logic Device (CPLD) {"complex programmable logic device CPLD", see Figs. 3 and 7, [0110]}, a Programmable Logic Device (PLD) {"a programmable logic device (PLD), or a combination thereof", see Figs. 3 and 7, [0110]}, an Application Specific Integrated Circuit (ASIC) {"application-specific integrated circuit (ASIC)", see Figs. 3 and 7, [0110]}, a Generic Array Logic (GAL) device {"generic array logic", see Figs. 3 and 7, [0110]}, a System on Chip (SOC), a Software Defined Infrastructure (SDI) device {Examiner's Note: the recited "or" renders this dependent claim a Markush claim; thus the reference need only disclose at least one member of the group to address the claim}, and an Artificial Intelligence (AI) device {"accelerator 111 may be any one of AI chips such as a GPU, an NPU, a TPU, and a DPU", see Figs. 3, 7, and 11, [0105], last sentence}. The §103 motivation for this dependent claim is relied upon as recited in claim 4 above.

Claims 9, 11, 13, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Choudhary in view of Schuette, and further in view of Mundt et al. (USPGPUB No. 2023/0394001 A1, hereinafter Mundt).

As per claim 9, the rejection of claim 11 is incorporated; however, neither Choudhary nor Schuette appears to explicitly disclose any limitation in this dependent claim.
Mundt discloses wherein the memory pool further comprises: a first Mini Cool Edge Input/Output (MCIO) connector {"resource devices 404a-404c in the resource systems 306a-306c/400 may be considered a "pool" of resources that are available to the resource management system 304 for use in composing LCSs", see Fig. 1, [0029], last sentence; "provided by a Mini Cool Edge Input/Output (MCIO) connector", see Fig. 7, [0049], last sentence} connected to a Fabric port {"storage devices (e.g., Non-Volatile Memory express over Fabric (NVMe-oF) storage devices", see Fig. 1, [0029]} of the first PCIe switch {"[switching] expansion device 710 may be connected to the SCP device or DPU device provided by the orchestrator device 706 via PCIe connectors", see Fig. 7, [0066], last sentence}; wherein the first MCIO connector is configured to connect to a second MCIO connector of another computing device {"may be provided by a Micro Twin-ax coaxial cable terminated with the MCIO connectors discussed above, and/or other types of cables that would be apparent to one of skill in the art in possession of the present disclosure" among "device 706" and "device 406", see Figs. 4 and 7, [0052]}.

Choudhary/Schuette and Mundt are analogous art because they are from the same problem-solving area: methods and systems for handling PCIe devices. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Choudhary/Schuette and Mundt before him or her, to modify Choudhary/Schuette's device by incorporating Mundt's "MCIO" (see Fig. 7, [0049]).
The suggestion/motivation for doing so would have been to provide an LCS networking device multi-host primary circuit board system that addresses the issues (Mundt [0006]) surrounding the inability to connect such devices to the multiple hosts in the LCS, which prevents, for example, the provisioning of their functionality to either or both the first host/host processor and the second host/orchestrator processor in the LCS (Mundt [0005], last sentence). Therefore, it would have been obvious to combine Mundt with Choudhary/Schuette to obtain the invention as specified in the instant claim(s).

As per claim 11, the rejection of claim 10 is incorporated; however, neither Choudhary nor Schuette appears to explicitly disclose any limitation in this dependent claim. Mundt discloses wherein the memory pool further comprises {"resource devices 404a-404c in the resource systems 306a-306c/400 may be considered a "pool" of resources that are available to the resource management system 304 for use in composing LCSs", see Fig. 1, [0029], last sentence}: a third MCIO connector {"provided by a Mini Cool Edge Input/Output (MCIO) connector", see Fig. 7, [0049], last sentence} connected to a Fabric port {"storage devices (e.g., Non-Volatile Memory express over Fabric (NVMe-oF) storage devices", see Fig. 1, [0029]} of the second PCIe switch {"[switching] expansion device 710 may be connected to the SCP device or DPU device provided by the orchestrator device 706 via PCIe connectors", see Fig. 7, [0066], last sentence}; wherein the third MCIO connector is configured to connect to a fourth MCIO connector of another computing device {"may be provided by a Micro Twin-ax coaxial cable terminated with the MCIO connectors discussed above, and/or other types of cables that would be apparent to one of skill in the art in possession of the present disclosure" among "device 706" and "device 406", see Figs. 4 and 7, [0052]}. The §103 motivation for this dependent claim is relied upon as recited in claim 9 above.
As per claim 13, the rejection of claim 12 is incorporated; however, neither Choudhary nor Schuette appears to explicitly disclose any limitation in this dependent claim. Furthermore, Mundt discloses wherein the FPGA is configured to partition internal resources of the FPGA into different areas by using a dynamic partition technology {Examiner's Note: the recited "or" renders this dependent claim a Markush claim; thus the reference need only disclose at least one member of the group to address the claim}, or implement dynamic memory allocation and data transmission of a memory {"optimize the allocation of resources to workloads to provide improved scalability and efficiency", see Fig. 1, [0020], last sentence} by using at least one Direct Memory Access (DMA) controller {"provide functionality not available in the orchestrator device 712, e.g., RDMA functionality", see Fig. 7, [0056], last sentence}. The §103 motivation for this dependent claim is relied upon as recited in claim 9 above.

As per claim 22, the rejection of claim 13 is incorporated, and Choudhary discloses wherein the FPGA has a Multi Channel DMA IP for PCI Express {"utilize the [multi-channel] sideband messaging channels provided in the layer interfaces", see Fig. 10, [0091], 2nd sentence}, and the FPGA comprises a Host-to-Device Data Mover (H2DDM) module {"A die-to-die (D2D) adapter block 1010", see Fig. 10, [0073], 3rd sentence} and a Device-to-Host Data Mover (D2HDM) module {"to ensure successful and reliable data transfer", see Fig. 10, [0076]}.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The following references are indicative of the current state of the art regarding claim 1's "computing device", "PCIe switch", or "downstream port": US 20150347345 A1, US 20190370203 A1, US 20200004685 A1, US 20210141731 A1, US 20210050941 A1, US 20220066976 A1, US 20220100694 A1, US 20220350771 A1, US 20220365750 A1, US 20230214326 A1, and US 20230214346 A1.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A. BARTELS, whose telephone number is (571) 270-3182. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dr. Henry Tsai, can be reached at 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C. B./
Examiner, Art Unit 2184

/STEVEN G SNYDER/
Primary Examiner, Art Unit 2184

Prosecution Timeline

Jun 24, 2025: Application Filed
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602339: STRAIN RELIEF FOR FLOATING CARD ELECTROMECHANICAL CONNECTOR (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596662: METHOD FOR INTEGRATING INTO A DATA TRANSMISSION A NUMBER OF I/O MODULES CONNECTED TO AN I/O STATION, STATION HEAD FOR CARRYING OUT A METHOD OF THIS TYPE, AND SYSTEM HAVING A STATION HEAD OF THIS TYPE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579090: METHOD AND SYSTEM FOR SHIFTING DATA WITHIN MEMORY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572491: MEMORY WITH CACHE-COHERENT INTERCONNECT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572486: Subgraph segmented optimization method based on inter-core storage access, and application (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 79% (+12.8%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
