Prosecution Insights
Last updated: April 19, 2026
Application No. 18/742,729

MULTI-PLANE, MULTI-PROTOCOL MEMORY SWITCH FABRIC WITH CONFIGURABLE TRANSPORT

Status: Non-Final OA (§103)
Filed: Jun 13, 2024
Examiner: ZAMAN, FAISAL M
Art Unit: 2175
Tech Center: 2100 — Computer Architecture & Software
Assignee: Enfabrica Corporation
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 67% — above average (614 granted / 917 resolved; +12.0% vs TC avg)
Interview Lift: +14.3% among resolved cases with interview (moderate, +14% lift)
Typical Timeline: 2y 10m average prosecution; 43 applications currently pending
Career History: 960 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Comparisons are against Tech Center average estimates, based on career data from 917 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Bao et al. (U.S. Patent Number 10,437,504) and Choudhary et al. (U.S. Patent Application Publication Number 2020/0327088).
Regarding Claim 1, Bao discloses a system for multi-protocol data movement and placement in a multi-tiered data placement hierarchy, comprising: a plurality of endpoint devices (Figure 1, items 112 and 114) communicatively coupled via a plurality of interfaces (Column 4, lines 20-27; i.e., the various storage devices 112 and 114 are coupled to the data movers 110 via plural interfaces); and a plurality of data moving components (Figure 1, item 110) configured to perform the data movement between the plurality of endpoint devices within a same tier or between different tiers of the hierarchy (Column 4, lines 20-27; i.e., data movers 110 can perform data movement between endpoints 112 and 114 in different tiers 106 and 108), wherein: the multi-tiered hierarchy is organized by one or more of data access latency and achievable capacity (Column 3, lines 40-51 and Column 9, lines 6-11), and the data movement is performed by the data moving components (Column 4, lines 20-27).

Bao does not expressly disclose wherein the data movement is performed using one or more semantics based on a plurality of interface interconnect protocols between the plurality of interfaces. In the same field of endeavor (e.g., storage communication techniques), Choudhary teaches wherein the data movement is performed using one or more semantics based on a plurality of interface interconnect protocols (paragraph 0034; e.g., PCIe, Compute Express Link (CXL), Gen-Z, OpenCAPI, In-Die Interface, Cache Coherent Interconnect for Accelerators (CCIX), UltraPath Interconnect (UPI), etc.) between the plurality of interfaces (Figure 1, items 160, paragraphs 0036 and 0038; i.e., the SFI semantics can be used to support a variety of different interface interconnect protocols).
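As an illustrative aside, the claim 1 arrangement the examiner maps onto Bao — endpoints in a latency-ordered tier hierarchy, with data movers shuttling objects within or across tiers — can be sketched in a few lines. This is not code from either reference; the `Endpoint` and `DataMover` names are hypothetical, chosen only to mirror the claim language.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """A memory/storage endpoint; tier is ordered by access latency (0 = fastest)."""
    name: str
    tier: int
    data: dict = field(default_factory=dict)

class DataMover:
    """Moves an object between endpoints, within one tier or across tiers."""
    def move(self, key: str, src: Endpoint, dst: Endpoint) -> str:
        dst.data[key] = src.data.pop(key)
        return "intra-tier" if src.tier == dst.tier else "cross-tier"

fast = Endpoint("fast", tier=0, data={"blk": b"payload"})
capacity = Endpoint("capacity", tier=1)
kind = DataMover().move("blk", fast, capacity)  # demote a block from the fast tier
```

The move between tier 0 and tier 1 corresponds to the fast-tier/capacity-tier movement the examiner cites at Bao's Column 4, lines 20-27.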
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Choudhary’s teachings of storage communication techniques with the teachings of Bao, for the purpose of greatly simplifying storage overhead in the context of read/write ports on the receiver and providing a balance of receiver complexity with bandwidth scaling capabilities (see Choudhary, paragraphs 0035 and 0039).

Regarding Claim 2, Bao discloses wherein the endpoint devices are memory and data storage devices (Column 3, line 63 - Column 4, line 19), each of the memory and data storage devices is arbitrarily configured (i.e., a particular storage device could arbitrarily be placed in the fast tier 106 or the capacity tier 108) to be a remote device (Figure 1, item 108, Column 7, lines 56-58) or a local device (Column 7, lines 27-35; i.e., the fast tier 106 endpoints may be local to compute nodes 102 [Figure 1]) in the hierarchy, the remote device is a network attached device that is communicated through network addressing (Column 7, lines 39-42 and 56-58; i.e., Ethernet-based network-attached storage systems are known in the art to utilize network addressing [e.g., an IP address]), and the local device is included within a compute rack (Column 19, lines 25-30; i.e., the components may be disposed on a VxRack compute rack) utilizing at least one of peripheral component interconnect express (PCIe) (Column 11, lines 11-17) and compute express link (CXL) interfaces and protocols, and is communicated either through memory mapped addressing or through network addressing (Column 11, lines 11-17; i.e., the PCI Express standard is known in the art to use memory mapped addressing [e.g., memory mapped I/O or MMIO]).
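The claim 2 distinction — local PCIe/CXL devices reached via memory-mapped addressing (MMIO) versus remote network-attached devices reached via network addressing — can be sketched as a simple dispatch. The `resolve` function and its field names are hypothetical, purely to illustrate the two addressing regimes the examiner maps to Bao.

```python
# Hypothetical address resolution mirroring the claim 2 mapping: local PCIe/CXL
# devices use memory-mapped addressing (MMIO); remote network-attached devices
# use network addressing (e.g., an IP address).
def resolve(device: dict) -> str:
    if device["attachment"] in ("pcie", "cxl"):   # local device in the compute rack
        return f"mmio:0x{device['bar_base']:x}"   # memory-mapped I/O window
    if device["attachment"] == "ethernet":        # network-attached remote device
        return f"ip:{device['addr']}"
    raise ValueError("unknown attachment type")

local = resolve({"attachment": "pcie", "bar_base": 0xF0000000})
remote = resolve({"attachment": "ethernet", "addr": "10.0.0.7"})
```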
Regarding Claim 3, Bao discloses wherein the remote device comprises at least one remote storage device such as a hard disk drive (HDD) or solid state device (SSD) (Column 4, lines 14-19), or at least one remote memory such as non-volatile memory (NVME) or dynamic random access memory (DRAM).

Regarding Claim 4, Bao discloses wherein the local device includes at least one local disaggregated memory such as NVME or DRAM (Column 3, line 63 - Column 4, line 4), or at least a main memory such as DRAM.

Regarding Claim 5, Bao discloses wherein the remote device is attached to a network via an Ethernet interface (Column 7, lines 39-42 and 56-58).

Regarding Claim 6, Choudhary teaches wherein the one or more semantics include at least input/output semantics or network semantics, wherein the input/output semantics is based on load and store operations (paragraph 0039), and network semantics allows packet-based data transfer (paragraphs 0039-0040; i.e., PCIe-based semantics allow for packet-based data transfer).

Regarding Claim 7, Choudhary teaches wherein each data moving component of the plurality of data moving components comprises: a bulk data transfer engine configured to use the network semantics to transfer large blocks of data between local devices (FabQ) (paragraph 0040; i.e., receiver decoding may be simplified, with the interface scaling to support a wide range of data payloads [e.g., from as small as 4B to as large as 4 KB [or larger]]; an improved streaming interface may allow multiple packets to be delivered in the same cycle, allowing a scalable interface across a variety of payload sizes while maintaining a common set of semantics and ordering); and a cache line exchange engine configured to transfer low-latency (paragraph 0041) messages between local devices via PCIe/CXL interfaces (FabX) (paragraphs 0045-0046; i.e., the cache lines can be fetched using low-latency messaging with PCIe or CXL interfaces).
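The claim 7 split between a bulk transfer engine (FabQ) for large blocks and a cache line exchange engine (FabX) for low-latency, cache-line-sized messages amounts to a size-based routing decision. The sketch below is a hypothetical illustration of that dispatch, not an implementation from either reference; the 64-byte threshold is an assumed typical cache-line size.

```python
CACHELINE = 64  # bytes; a typical cache-line size (assumption, not from the references)

def pick_engine(nbytes: int) -> str:
    """Route cache-line-sized, latency-sensitive transfers to a cache line
    exchange engine ("fabx") and larger payloads to a bulk engine ("fabq")."""
    return "fabx" if nbytes <= CACHELINE else "fabq"

small = pick_engine(64)    # a single cache line -> low-latency exchange engine
large = pick_engine(4096)  # a 4 KB block -> bulk data transfer engine
```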
Regarding Claim 8, Choudhary teaches wherein the bulk data transfer engine is further configured to perform large blocks of data movement among local devices via PCIe/CXL interfaces using network semantics (paragraph 0039), and wherein the large blocks of data include data sized in kilobytes or megabytes (paragraph 0040).

Regarding Claim 9, Choudhary teaches wherein the cacheline exchange engine is further configured to perform low-latency local cacheline operations via CXL.mem and CXL.cache (paragraph 0041).

Regarding Claim 10, Bao and Choudhary do not expressly disclose wherein the low latency indicates a processing time that is no greater than 50 nanoseconds. However, CXL is known in the art to have a latency in the realm of nanoseconds. It would have been obvious to one of ordinary skill in the art to have modified the various components in the system by, e.g., reducing the distance between components, in order for the latency to be no greater than 50 nanoseconds. Further, it has been held that "where the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation." In re Aller, 220 F.2d 454, 456, 105 USPQ 233, 235 (CCPA 1955).

Regarding Claim 11, Choudhary teaches wherein to transfer the low-latency messages between the local devices, the cache line exchange engine is further configured to transfer the low-latency messages between application processors (Figure 1, items 110, 120, 115, and 125, paragraph 0046), or between application processors and controlling hosts.
Regarding Claim 12, Choudhary teaches wherein the data moving component is further configured to extend load and store operations (paragraph 0031) to remote devices over a network through remote memory access (paragraph 0033; i.e., the various components 110-145 [Figure 1] could be connected using a network on chip [NoC]; while the devices are not physically remote, the NoC architecture treats the connected devices as separate, addressable entities that communicate via message passing or packet switching, similar to how devices interact on a traditional network like Ethernet; this is therefore equivalent to the claimed feature).

Regarding Claim 13, Bao discloses wherein the data moving component further comprises one or more network interface controllers (Column 19, lines 21-24) utilizing standard protocol stacks, and wherein: the standard protocol stacks comprise at least one of Ethernet (Column 7, lines 39-42), a transport protocol, and a network protocol, the transport protocol comprises at least one of a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP), and the network protocol comprises an Internet Protocol (IP).

Claims 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bao and Choudhary as applied to claim 5 above, and further in view of Schwetman, Jr. et al. (U.S. Patent Application Publication Number 2016/0062894) and Wijnands et al. (U.S. Patent Application Publication Number 2020/0145335).

Regarding Claim 14, Bao and Choudhary do not expressly disclose wherein the data moving component is further configured to: use network ports to prefetch data from one of the at least one remote storage device; send the prefetched data directly to a local compute node for processing; retrieve a processed result from the local compute node; and send the processed result to a next compute node.
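The "standard protocol stacks" recited in claim 13 (Ethernet carrying UDP or TCP over IP) are ordinary kernel networking. As a minimal, hedged illustration only — not code from Bao — two UDP/IP sockets on loopback can stand in for a network interface controller moving a data block:

```python
import socket

# Two UDP/IP sockets on loopback stand in for the claim 13 protocol stack
# (transport = UDP, network = IP); the payload plays the role of a data block.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the kernel pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"block-0", ("127.0.0.1", port))
payload, _addr = rx.recvfrom(1024)   # the datagram carrying the block arrives intact
tx.close()
rx.close()
```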
In the same field of endeavor (e.g., storage communication techniques), Schwetman teaches wherein the data moving component is further configured to: use network ports (Figure 2, items 216 and 236) to prefetch data from one of the at least one remote storage device (Figure 2, item 204, paragraph 0036); send the prefetched data directly to a local compute node for processing (Figure 8, item 890, paragraph 0075). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Schwetman’s teachings of storage communication techniques with the teachings of Bao and Choudhary, for the purpose of reducing the latency for the compute nodes to receive the required data (i.e., the data will be present at the local compute as soon as it is needed).

Also in the same field of endeavor (e.g., storage communication techniques), Wijnands teaches retrieve a processed result from the local compute node (Figure 4C, item 474; i.e., the offload platform [the claimed “local compute node”] sends the processed packet back to the router); and send the processed result to a next compute node (Figure 4C, item 476, paragraph 0087; i.e., the router sends the processed result to a next compute node). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Wijnands’ teachings of storage communication techniques with the teachings of Bao and Choudhary, for the purpose of allowing for a complex algorithm to be executed (i.e., each of a plurality of nodes can execute a particular portion of the algorithm).

Regarding Claim 18, Bao discloses wherein at least one of the local compute node or the next compute node is a device that performs computation, the device comprising at least one of an application process or a control processor (Figure 1, item 102, Column 3, lines 26-30).

Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Bao and Choudhary as applied to claim 5 above, and further in view of Schwetman.

Regarding Claim 15, Bao and Choudhary do not expressly disclose wherein the data moving component is further configured to: use network ports to prefetch data from one of the at least one remote storage device; and store the data in at least one of a local storage device or a local memory device. In the same field of endeavor (e.g., storage communication techniques), Schwetman teaches wherein the data moving component is further configured to: use network ports (Figure 2, items 216 and 236) to prefetch data from one of the at least one remote storage device (Figure 2, item 204, paragraph 0036); store the data in at least one of a local storage device or a local memory device (Figure 8, item 890, paragraph 0075). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Schwetman’s teachings of storage communication techniques with the teachings of Bao and Choudhary, for the purpose of reducing the latency for the compute nodes to receive the required data (i.e., the data will be present at the local compute as soon as it is needed).

Regarding Claim 16, Choudhary teaches wherein the data moving component is further configured to use FabQ to move the data from a local storage device to a local memory device (paragraphs 0040 and 0041; i.e., the bulk data transfer engine can move large blocks of data between a cache [the “local storage device”] and a host memory [the “local memory device”]).
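The claims 14/15 sequence at issue (prefetch from remote storage over network ports, stage locally, process on a local compute node, forward the result to the next node) is a simple pipeline. The sketch below is a hypothetical illustration of that flow only; `prefetch_pipeline` and its arguments are invented names, not drawn from Schwetman or Wijnands.

```python
def prefetch_pipeline(remote_storage: dict, key: str, compute, next_node: list) -> None:
    """Hypothetical claims 14/15 flow: prefetch, stage locally, process, forward."""
    data = remote_storage[key]           # 1. prefetch over the network ports
    local_memory = {key: data}           # 2. store in a local memory device (claim 15)
    result = compute(local_memory[key])  # 3. local compute node processes (claim 14)
    next_node.append(result)             # 4. send the result to the next compute node

downstream: list = []
prefetch_pipeline({"blk": 3}, "blk", lambda x: x * 2, downstream)
```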
Regarding Claim 17, Choudhary teaches wherein the data moving component is further configured to use FabQ to move the data from a local memory device to an application processor’s main memory (paragraphs 0040 and 0042; i.e., the bulk data transfer engine can move large blocks of data between a CPU host 405 memory [the “local memory device”] and a device memory 410 [the “application processor’s main memory”]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure because each reference discloses a system in which multiple tiers of storage devices communicate with one another using different protocols.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAISAL M ZAMAN whose telephone number is (571)272-6495. The examiner can normally be reached Monday - Friday, 8 am - 5 pm, alternate Fridays.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew J. Jung can be reached at 571-270-3779. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAISAL M ZAMAN/
Primary Examiner, Art Unit 2175

Prosecution Timeline

Jun 13, 2024
Application Filed
Dec 15, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578780: CIRCUIT SLEEP METHOD AND SLEEP CIRCUIT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572490: LINKS FOR PLANARIZED DEVICES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12560993: POWER MANAGEMENT OF DEVICES WITH DIFFERENTIATED POWER SCALING BASED ON RELATIVE POWER BENEFIT ESTIMATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561267: Multiple Independent On-chip Interconnect (granted Feb 24, 2026; 2y 5m to grant)
Patent 12562599: Contactless Power Feeder (granted Feb 24, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 81% (+14.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 917 resolved cases by this examiner. Grant probability derived from career allow rate.
