Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the claim listing filed on November 25, 2025. Claims 1-20 are currently pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jen et al. (USPGPUB No. 2019/0384733 A1, hereinafter referred to as Jen) in view of Malladi et al. (USPGPUB No. 2021/0374056 A1, hereinafter referred to as Malladi).
Referring to claim 1, Jen discloses a system comprising:
a fabric adapter coupled {“Ultra Path Interface (UPI), Intel” with an adapter part of “computer device 1000”, see Fig. 8 [0073]} to a plurality of endpoint devices {“Downstream and upstream port logic found in CPUs and endpoints”, see Fig. 1 [0005]} and a plurality of network ports {“Unlike upstream and downstream ports”, see Fig. 1 [0029], 1st sentence}, each endpoint device of the plurality of endpoint devices being a remote device or a local device {“partly on the [local device] user's computer and partly on a remote computer or entirely on the remote [device] computer or server”, see Fig. 8 [0086], last two sentences}, wherein the fabric adapter is configured to {“input/output (I/O) interface 1018”, see Fig. 8, [0078], 1st sentence}:
use at least one of a peripheral component interconnect express (PCIe) interface and protocol {“using Peripheral Component Interconnect Express (PCIe) electricals”, see Fig. 1 and [0027], 2nd sentence} or a computer express link (CXL) interface {“interconnects in accordance with a Compute Express Link Specification”, see Fig. 1 [0017] last sentence} and protocol to handle a memory request associated with the local device {“coherent interconnect protocol for various functions, such as coherent [memory] requests and memory flows with [local] host processor 445”, see Fig. 4 [0045]};
and use at least one network interface {“first NIC 1012 providing communications to the network 150 over Ethernet,”, see Fig. 8, [0075]} and protocol to handle a memory request {“link 489 may be operable to support multiple protocols and communication of data and messages via the multiple interconnect protocols, including a [memory request] CXL protocol”, see Fig. 4, [0044]} associated with the remote device {“physical layer 454”, see Fig. 4, [0050]}.
Jen does not appear to explicitly disclose receiving a memory request;
handle the memory request using at least one network interface and protocol to handle a memory request associated with the remote device; and
handle the memory request using at least one network interface and protocol if the memory request is associated with the remote device.
However, Malladi discloses receiving a memory request {“facilitate RDMA requests between CXL memory devices ”, see Fig. 1a [0058], last sentence};
handle the memory request {“other than Ethernet (e.g., for use with a switch that is configured to handle other network protocols”, see Fig. 1a [0063], last two sentences} using at least one network interface {“the ToR Ethernet switch 110 and the network interface circuits 125” which can include “ROCE, Infiniband, and iWarp packets”, see Fig. 1a [0058], last sentence} and protocol to handle a memory request associated with the remote device {“to the processing circuits 115 upon receiving remote [device] write requests”, see Fig. 1c, [0072], last two sentences};
handle the memory request using at least one network interface {“receive straight remote direct memory access (RDMA) requests through the network switch”, see Fig. 1b [0065]} and protocol if the memory request is associated with the remote device {“DDIO technology is enabled, and remote data [from the memory request] is first pulled to last level cache (LLC) of the processing circuit”, see Fig. 1a [0061], last sentence};
Jen and Malladi are analogous because they are from the same field of endeavor, routing packet stream(s).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Jen and Malladi before him or her, to modify Jen’s “server computer devices” (see Figs. 1 and 8 [0026]) by incorporating Malladi’s “network interface circuit 125” (see Fig. 1b, [0063]) along with “DDIO of the processing circuit 115”.
The suggestion/motivation for doing so would have been to implement a second controller that includes at least one of a channel request queue and a volatile-memory request scheduler (Malladi [0010]), realizing one or more of the following advantages: reducing network latencies and improving network stability and operational data transfer rates, in turn improving the user experience; reducing costs associated with routing network traffic, network maintenance, and network upgrades; and reducing the power consumption and/or bandwidth of devices on a network (Malladi [0012]).
Therefore, it would have been obvious to combine Malladi with Jen to obtain the invention as specified in the instant claim(s).
Claim 19 is a method claim reciting functional language corresponding to the system claim of claim 1 and is therefore rejected under the same rationale as claim 1 above.
As per claim 20, the rejection of claim 19 is incorporated and Malladi discloses further comprising handling, by the fabric adapter, the memory request associated with the remote device or local device {“to the processing circuits 115 upon receiving remote [device] write requests”, see Fig. 1c, [0072], last two sentences} using one or more semantics {“CXL.io may include I/O semantics, which may be similar to PCIe. CXL.cache may include caching semantics, and CXL.memory may include memory semantics”, see Fig. 1a [0052]}, wherein the one or more semantics include at least input/output semantics or network semantics {input/output semantics “CXL.io may include I/O semantics, which may be similar to PCIe. Caching semantics”, see Fig. 1a [0052]}, the input/output (I/O) semantics is based on load and store operations {“enhanced capability CXL switch 130, to provide a load-store interface”, see Fig. 1E [0085], 1st sentence}, and the network semantics allows data transfer {“CXL.io may include I/O semantics, which may be similar to PCIe [networking].”, see Fig. 1a [0052]} based on one or more network protocols {“the ToR Ethernet switch 110 and the network interface circuits 125” which can include “ROCE, Infiniband, and iWarp packets”, see Fig. 1a [0058], last sentence}.
Claims 2-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jen in view of Malladi and further in view of Dalal et al. (USPGPUB No. 2019/0109793 A1, hereinafter referred to as Dalal).
As per claim 2, the rejection of claim 1 is incorporated and Jen discloses wherein:
the plurality of endpoint devices are memory {“Downstream and upstream port logic found in CPUs and endpoints”, see Fig. 1 [0005]} and data storage devices {“memory circuitry 1004 may include one or more mass-storage devices, such as a solid state disk drive (SSDD);”, see Fig. 8 [0068], last sentence}, the remote device including at least one of a remote storage device or a remote memory device {“on the remote [device] computer or server”, see Fig. 8 [0086], last two sentences}, and each local device of the plurality of endpoint devices is included within a compute rack {“server computer devices (e.g., stand-alone, rack-mounted, blade, etc.”, see Figs. 1 and 8 [0026]} utilizing at least one of PCIe and CXL interfaces and protocols {“transaction layer 315 includes a PCIe transaction layer 316 and additional circuitry 318 for handling enhancements to PCIe transaction layer 316 for handling CXL.io transactions”, see Fig. 3a [0037], last two sentences} and communicates either through memory mapped addressing {Examiner’s note: recitation “or” renders this dependent claim as Markush claim, thus the reference needs only disclose one group member in order to address the claim.} or through network addressing {“type of network, including a local area network (LAN) or a wide area network (WAN) [address], or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider”, see Fig. 8 [0086], last sentence}, the local device including at least one of a local storage device or a local memory device {“memory controllers, storage controllers (e.g., redundant array of independent disk (RAID) controllers” with logical name/address, see Fig. 8 [0078]};
Jen and Malladi do not appear to explicitly disclose wherein each remote device of the plurality of endpoint devices is a network attached device that communicates through network addressing;
However, Dalal discloses wherein each remote device of the plurality of endpoint devices {endpoints “computing nodes that execute operations in system memory”, see Fig. 17, [0165]} is a network attached device {endpoint devices “Apache Spark type data processing system 1701” (see Fig. 17, [0165], 1st sentence)} that communicates through network addressing {said “system 1701” over a network “layer 1804 can include an access path way to other networks, including other systems, such as a LAN, WAN or the Internet, as but a few examples” including network address, see Fig. 18 [0169], last sentence};
Jen/Malladi and Dalal are analogous because they are from the same field of endeavor, routing packet stream(s).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Jen/Malladi and Dalal before him or her, to modify Jen’s “server computer devices” (see Figs. 1 and 8 [0026]) by incorporating Dalal’s “Apache Spark type data processing system 1701” (see Fig. 17, [0165], 1st sentence).
The suggestion/motivation for doing so would have been to implement a multiplexed connection fabric of the computing element that can be programmable, enabling processing pipelines to be configured as needed for an application (Dalal [0027], last sentence), where remote computing elements can each have fast access memory to receive data from a previous stage of the pipeline and can be capable of sending data to a fast access memory of a next computing element in the pipeline (Dalal [0028], paraphrased).
Therefore, it would have been obvious to combine Dalal with Jen/Malladi to obtain the invention as specified in the instant claim(s).
As per claim 3, the rejection of claim 2 is incorporated and Jen discloses wherein:
the remote storage device comprises at least one of a hard disk drive (HDD) {“hard disk drives (HDDs); micro HDDs”, see Fig. 8, [0070]} or a solid state device (SSD) {“solid state drives (SSDs); solid state disk drive (SSDD); serial AT attachment (SATA) storage devices (e.g., SATA SSDs”, see Fig. 8 [0070], 2nd sentence}, and the remote memory device comprises at least one of non-volatile memory (NVME) {Examiner’s note: recitation “or” renders this dependent claim as Markush claim, thus the reference needs only disclose one group member in order to address the claim.} or dynamic random access memory (DRAM) {“DRAM”, see Fig. 8 [0068]}.
As per claim 4, the rejection of claim 2 is incorporated and Jen discloses wherein: the local storage device comprises at least one of a HDD {“hard disk drives (HDDs); micro HDDs”, see Fig. 8, [0070]} or a SSD {“solid state drives (SSDs); solid state disk drive (SSDD); serial AT attachment (SATA) storage devices (e.g., SATA SSDs”, see Fig. 8 [0070], 2nd sentence}, and the local memory device comprises at least one of local disaggregated memory including NVME {Examiner’s note: recitation “or” renders this dependent claim as Markush claim, thus the reference needs only disclose one group member in order to address the claim.} or DRAM {“DRAM”, see Fig. 8 [0068]}, or at least a main memory including DRAM {“DRAM”, see Fig. 8 [0068] as in main memory “memory 1004” ([0068], 1st sentence)}.
As per claim 5, the rejection of claim 2 is incorporated and Jen discloses wherein the remote device is attached to a network via an Ethernet interface {“a first NIC 1012 providing communications to the network 150 over Ethernet”, see Fig. 8 [0075]}.
As per claim 6, the rejection of claim 2 is incorporated and Jen discloses wherein the fabric adapter is further configured to handle the memory request associated with the remote device or local device using one or more semantics {“memory and caching semantics”, see Fig. 1 [0004], 3rd sentence}, wherein the one or more semantics include at least input/output semantics or network semantics {“Intel AL protocol are used in latency sensitive applications”, see Fig. 1 [0004]}, the input/output (I/O) semantics is based on load and store operations {“memory cells may be used to [load and] store data in lookup-tables (LUTs”, see Fig. 8 [0069]}, and the network semantics allows data transfer based on one or more network protocols {“Intel® Accelerator Link IAL”, see Fig. 8 [0073]}.
As per claim 7, the rejection of claim 6 is incorporated and Jen discloses wherein the fabric adapter comprises:
a local memory request handler {“Intel® Accelerator Link (IAL)”, see Fig. 8 [0073]} configured to use the I/O semantics to handle the memory request {“memory and caching semantics that are part of the Intel AL protocol are used in latency sensitive applications”, see Fig. 8 [0004]} associated with the local device via PCIe/CXL interfaces {“enable transaction layer processing for PCIe/CXL.io communications and CXL.cache and CXL.memory transactions”, see Fig. 3a [0037]}; and
a transport handler configured {“hardware accelerator 1003 may be incorporated with the sync header suppression enabling technology of the present disclosure, to enable Intel® AL protocols to be transported off-package using PCIe electricals”, see Fig. 8, [0065], 2nd sentence} to use the network semantics {“Intel® Accelerator Link (IAL), or some other proprietary bus used in a SoC based interface”, see Fig. 8 [0073]} to handle the memory request associated with the remote device {“on the remote [device] computer or server”, see Fig. 8 [0086], last two sentences}.
As per claim 8, the rejection of claim 2 is incorporated and Jen discloses where the fabric adapter is further configured to:
translate the memory request to a network memory request {“a coherent interconnect protocol for various functions, such as coherent requests and memory flows with host processor 445 via interface logic 413 and circuitry 427”, see Fig. 4 [0045]}; and
identify a location of a memory request handler from the network memory request {“such as coherent requests and memory flows with host processor 445” (see Fig. 4 [0045]) identifying “determine environmental conditions or location information related” (see Fig. 8, [0080], 2nd sentence)}.
As per claim 9, the rejection of claim 8 is incorporated and Jen discloses wherein the location of the memory request handler is associated with the local memory device {“[local memory] circuit 300 includes a transaction layer 310, a link layer 320, and a physical layer 340”, see Fig. 3a [0037], 3rd sentence}, and the memory request handler is further configured to:
perform local cacheline operations via CXL.mem and CXL.cache {“transaction layer processing for PCIe/CXL.io communications and CXL.cache and CXL.memory transactions.”, see Fig. 3a [0037]} in response to the memory request {“handling enhancements to PCIe transaction layer 316 for handling [requests] CXL.io transactions”, see Fig. 3a [0037], last sentence}.
As per claim 10, the rejection of claim 8 is incorporated and Jen discloses wherein the location of the memory request handler is associated with the local storage device {“memory controllers, storage controllers (e.g., redundant array of independent disk (RAID) controllers” with logical name/address, see Fig. 8 [0078]}, and the memory request handler is further configured to:
receive and execute the memory request via a PCIe interface {“turn, CXL.cache and CXL.memory transaction layer 319 may perform transaction layer processing for these protocols.”, see Fig. 3a [0037] last sentence}.
As per claim 11, the rejection of claim 8 is incorporated and Jen discloses wherein the location of the memory request handler is associated with the remote memory device, and the transport handler is further configured to:
apply one or more network protocols {“additional NIC 1012 may be included to allow connect to a second network”, see Fig. 8 [0075]};
insert the memory request into a network flow {“insertion circuit 368”, see Fig. 3a [0042]} targeted to a remote memory request handler {“circuitry 318 for handling enhancements to PCIe transaction layer 316 for handling CXL.io transactions”, see Fig. 3a [0037] last two sentences} associated with the remote device through a network interface {“NIC 1012 may be included to provide a wired communication line”, see Fig. 8 [0075] 1st sentence};
receive a memory response {“In response to this indication, the physical layer may disable a header insertion circuit, ”, see Fig. 5 [0052]} from the memory request handler associated with the remote device {“physical layer 454”, see Fig. 4, [0050]};
and transmit the memory response {“dynamically control insertion of ordered sets at predetermined intervals within a data stream, when operating in a sync header suppression mode.”, see Fig. 4 [0050] last sentence} to a requestor of the memory request {“[requestors that utilize] coherent interconnect protocol for various functions, such as coherent requests and”, see Fig. 4 [0045]}.
As per claim 12, the rejection of claim 8 is incorporated and Jen discloses wherein the location of the memory request handler is associated with the remote storage device, and the transport handler is further configured to:
apply one or more network protocols {“additional NIC 1012 may be included to allow connect to a second network”, see Fig. 8 [0075]};
insert the memory request into a network flow {“insertion circuit 368”, see Fig. 3a [0042]} targeted to a remote memory request handler {“circuitry 318 for handling enhancements to PCIe transaction layer 316 for handling CXL.io transactions”, see Fig. 3a [0037] last two sentences} associated with the remote device through a network interface {“NIC 1012 may be included to provide a wired communication link”, see Fig. 8 [0075] 1st sentence};
receive a memory response {“In response to this indication, the physical layer may disable a header insertion circuit, ”, see Fig. 5 [0052]} from the memory request handler associated with the remote device {“physical layer 454”, see Fig. 4, [0050]};
and transmit the memory response {“dynamically control insertion of ordered sets at predetermined intervals within a data stream, when operating in a sync header suppression mode.”, see Fig. 4 [0050] last sentence} to a requestor of the memory request {“[requestors that utilize] coherent interconnect protocol for various functions, such as coherent requests and”, see Fig. 4 [0045]}.
As per claim 13, the rejection of claim 1 is incorporated and Jen discloses wherein the fabric adapter is further configured to extend load and store operations to remote devices {“physical layer 454”, see Fig. 4, [0050]} over a network through remote memory access {“on the remote [device] computer or server”, see Fig. 8 [0086], last two sentences}.
As per claim 14, the rejection of claim 1 is incorporated and Jen discloses wherein the fabric adapter is further configured to use memory mapped addressing {“type of network, including a local area network (LAN) or a wide area network (WAN) [address], or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider”, see Fig. 8 [0086], last sentence} to cache application data in high bandwidth memory of a graphical processing unit {“At [high bandwidth] 8GT/s or higher data rates”, see Figs. 1 and 8 [0004], for “one or more GPUs” (see Fig. 8, [0066])}.
As per claim 15, the rejection of claim 1 is incorporated and Jen discloses wherein at least one of the transport handler and the local memory request handler is a software process running on a device attached to the fabric adapter {Examiner’s note: recitation “or” renders this dependent claim as Markush claim, thus the reference needs only disclose one group member in order to address the claim.} or a hardware unit of the fabric adapter {“Processor circuitry 1002 may be implemented as a standalone system/device/package or as part of an existing system/device/package”, see Fig. 8 [0066]}.
As per claim 17, the rejection of claim 1 is incorporated and Jen discloses wherein the fabric adapter is further configured to manage local cache of the remote memory device {“include coherence logic (or coherence and cache logic) 455”, see Fig. 8 [0047], 2nd sentence}.
Claims 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jen in view of Malladi and further in view of Dalal and further in view of Doshi et al. (USPGPUB No. 2021/0117249 A1, hereinafter referred to as Doshi).
As per claim 16, the rejection of claim 1 is incorporated; however, none of Jen, Malladi, and Dalal appears to explicitly disclose the limitations of this dependent claim.
Furthermore, Doshi discloses wherein the fabric adapter further comprises one or more network interface controllers utilizing standard protocol stacks {“there are [one or more] SmartNICs” (see Fig. 20 [0188], 3rd sentence), each “SmartNIC” utilizes “offload I/O data path operations to an IPU… network protocol… e.g. TCP, UDP” ([0155], and [0147], 1st sentence)}, and wherein: the standard protocol stacks comprise at least one of Ethernet {“routing both Ethernet protocol communications”, see Fig. 2, [0061], last sentence}, a transport protocol {“TCP/reliable transport”, see Fig. 18, [0146]}, and a network protocol {“performing networking stack processing operations”, see Fig. 16, [0142]}, the transport protocol comprises at least one of a Transmission Control Protocol (TCP) {“network protocol (e.g. TCP, UDP, etc.) offload”, see Fig. 18, [0155] last sentence} or a User Datagram Protocol (UDP) {“network protocol (e.g. TCP, UDP, etc.) offload”, see Fig. 18, [0155] last sentence}, and the network protocol comprises an Internet Protocol (IP) {“carrying Internet Protocol (IP) packets”, see Fig. 2, [0061], last sentence}.
Jen/Malladi/Dalal and Doshi are analogous because they are from the same field of endeavor, routing packet stream(s).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Jen/Malladi/Dalal and Doshi before him or her, to modify the Jen/Malladi/Dalal system by incorporating Doshi’s “SmartNICs” (see Figs. 16, 17, and 18 [0142]).
The suggestion/motivation for doing so would have been to implement SmartNIC functionality including performing offloading of Global Hierarchical Software-defined Control Plane management to an IPU, such as an IPU-hosted local hierarchical control plane for one or more nodes, such as multi-host and multi-homing, thereby enabling faster response time and better scalability based on localized node requirements, live migration, and resource allocation (Doshi [0158]).
Therefore, it would have been obvious to combine Doshi with Jen/Malladi/Dalal to obtain the invention as specified in the instant claim(s).
As per claim 18, the rejection of claim 8 is incorporated; however, none of Jen, Malladi, and Dalal appears to explicitly disclose the limitations of this dependent claim.
Furthermore, Doshi discloses wherein the fabric adapter is further configured to: determine a hot ranking {“core affinity”, see Fig. 33a, [0342], last sentence} of a memory page {“classification and data processing can occur based on the control plane composition of disaggregated functions” (see Fig. 33a, [0342], 2nd sentence) where the data processing includes “such rules, IPUs could implement multiple groups of page tables through which accesses to physical pages are handled” ([0240], 2nd sentence)}; and move the memory page between local {“performance and reduce data movement and latency”, see Fig. 18, [0146], 2nd sentence} and remote memory locations based on the hot ranking {“flexibly composing such multi-subprocess spaces in which the page table pages can be organized into a hierarchy of equivalence [remote memory locations] sets and subsets”, see Fig. 22, [0240], last sentence}.
The motivation to combine for this dependent claim is the same as that relied upon in the rejection of claim 16 above.
Response to Arguments
Applicant’s arguments filed on 11/25/2025 have been fully considered but are moot in view of the new ground(s) of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are indicative of the current state of the art regarding claim 1’s “memory request”, “fabric adapter”, or “network interface”: US 20250117503 A1, US 20240176898 A1, US 20240154799 A1, US 11909884 B2, US 20230221985 A1, US 20220021544 A1, US 20210264042 A1, and US 11089022 B2.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A. BARTELS whose telephone number is (571) 270-3182. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dr. Henry Tsai can be reached on 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C. B./
Examiner, Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184