Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Response to Arguments
Applicant's arguments, see pages 7-9, filed on 01/20/2026, with respect to the rejection of claims 1, 8 and 15 under 35 U.S.C. 103 as being unpatentable over Patel et al. (US 20240244008 A1, hereinafter "Patel") in view of Makaram et al. (US 20200328879 A1, hereinafter "Makaram"), in further view of Choudhary et al. (US 20230237168 A1, hereinafter "Choudhary"), have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of AGARWAL et al. (US 20230325225 A1, hereinafter "AGARWAL").
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (US 20240244008 A1, hereinafter "Patel") in view of Makaram et al. (US 20200328879 A1, hereinafter "Makaram"), in further view of Choudhary et al. (US 20230237168 A1, hereinafter "Choudhary"), and in further view of AGARWAL et al. (US 20230325225 A1, hereinafter "AGARWAL").
Patel discloses a computing device, comprising: at least one circuit configured to: ([0010] FIG. 1 is a simplified block diagram of a data processing network 100, in accordance with embodiments of the present disclosure. Data processing network 100 includes multiple integrated circuits (ICs) or chips, such as host ICs 102 and device ICs 104. A host IC may include a one or more processors. A chip-to-chip gateway 106 of a host IC 102 couples to corresponding chip-to-chip gateways 108 on device IC 104 to provide one or more communication links. The links enable messages to be passed, in one or more flits, between the host ICs and device ICs. The links may include switches 110 to enable a host IC to communicate with two or more device ICs or to enable two or more host ICs to communicate with the same device IC or with each other.);
generate a message to be sent to a destination over a network ([0011] An example link is Compute Express Link™ (CXL™) of the Compute Express Link Consortium, Inc. CXL™ provides a coherent interface for ultra-high-speed transfers between a host and a device, including transaction and link layer protocols together with logical and analog physical layer specifications. [0015] Circuitry for determining the number of messages that can be efficiently packed into a network flit is placed before the packing logic block. This enables the process to be performed dynamically based on incoming request stream, from central processing unit (CPU) and peripheral component express (PCIe) request agents, and corresponding responses. [0016] FIG. 2 The gateway block receives request messages from various local request agents at host interface 202. These request messages are generated when a request agent needs to send a request to a destination that resides on a different chip. The request is sent to its destination via gateway block 200 and SMP/C2C link 204. Gateway block 200 is also configured to handle the responses from local agents.);
the message having a variable size based on a flit size and on a number of existing requests that target the destination ([0014] Transactions between chips may involve an exchange of messages, such as requests and responses. A packing logic block packs transaction messages and data into flow control units or “flits” to be sent over a symmetric multi-processor (SMP) or chip-to-chip (C2C) link. Herein, a packing logic block is an integrated circuit block, or software description thereof, used in a modular data processing chip. In order to increase the bandwidth and link utilization, the packing logic block maximizes the number of request messages and data packed into each flit. The size of a request message size may vary. For example, a message may have a variable number of extension portions. The extension portions may be referred to herein as “extensions.” Thus, for the packing logic block to work most efficiently, it should be able to observe pending messages in order to determine the maximum number of messages and data that can fit into each network flit. However, this can increase the complexity, area and latency of the packing logic. [0017] A request from a local agent is allocated within local request tracker 206. Local request tracker 206 is a mechanism for monitoring transaction requests and may include a table for storing request identifiers and associated data such as transaction status. Requests that are ready to send are passed through request dispatch pipeline 208. Dispatch pipeline 208 may include a tracker request picker and a dispatch first-in, first-out (FIFO) buffer, for example. Message analyzer 210 observes the request messages and determines the number of messages to send. The selected messages 212 are sent to packing logic block 214. In addition, message analyzer 210 may provide signal 216, indicating the number of messages to be packed, to packing logic block 214. 
In turn, packing logic block 214 packs requests 212 into a transaction layer flit packet 218 (containing one or more network flits) and sends the packet to transmission gateway 220 to be transmitted over the SMP/C2C communication link 204. Response messages are treated in a similar manner. Message analyzer 210 is configured to analyze both request and response messages, collectively called “transaction messages” or just “messages.” Message analyzer 210 and packing logic block 214 may be implemented as a single logic block or as two or more logic blocks);
wherein generating the message comprises: adding two or more flits to the message in response to the two or more flits, from among flits to be sent over the network, having destination identifiers ([0010] Data processing network 100 includes multiple integrated circuits (ICs) or chips, such as host ICs 102 and device ICs 104. A host IC may include a one or more processors. A chip-to-chip gateway 106 of a host IC 102 couples to corresponding chip-to-chip gateways 108 on device IC 104 to provide one or more communication links. The links enable messages to be passed, in one or more flits, between the host ICs and device ICs. The links may include switches 110 to enable a host IC to communicate with two or more device ICs or to enable two or more host ICs to communicate with the same device IC or with each other. [0055] receiving transaction messages for transmission in one or more network flow control units (flits) across a communication link of a data processing network, determining, based, at least in part, on sizes of the received transaction messages and a size of a network flit of the one or more network flits, a group of transaction messages having a maximum number of transaction messages that can be packed into the network flit; packing the group of transaction messages into the network flit; and transmitting the network flit across a communication link of the data processing network. [0056] where a network flit of the one or more network flits has a plurality of slots and where determining the group of transaction messages includes determining how many of the received transaction messages can be packed into the network flit without leaving unused slots large enough to store a received message)
send the message to a network switch configured to route the message to the destination corresponding to the destination identifiers ([0010] The links enable messages to be passed, in one or more flits, between the host ICs and device ICs. The links may include switches 110 to enable a host IC to communicate with two or more device ICs or to enable two or more host ICs to communicate with the same device IC or with each other. [0055] receiving transaction messages for transmission in one or more network flow control units (flits) across a communication link of a data processing network, determining, based, at least in part, on sizes of the received transaction messages and a size of a network flit of the one or more network flits, a group of transaction messages having a maximum number of transaction messages that can be packed into the network flit; packing the group of transaction messages into the network flit; and transmitting the network flit across a communication link of the data processing network).
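For illustration only, the packing behavior Patel describes (a message analyzer determining the maximum number of variable-size pending messages that fit into a fixed-size flit) can be sketched with a minimal greedy heuristic. The flit size, message names, and sizes below are hypothetical and do not appear in the reference.

```python
# Illustrative sketch (not Patel's implementation): greedily select
# pending variable-size messages, in arrival order, that fit into
# one fixed-size flit; leftover messages wait for the next flit.

FLIT_SIZE = 64  # hypothetical flit payload size in bytes

def pack_flit(pending, flit_size=FLIT_SIZE):
    """Select messages for one flit.

    `pending` is a list of (message_id, size_in_bytes) tuples.
    Returns (packed_ids, remaining): the group packed into this flit
    and the messages left over for subsequent flits.
    """
    packed, remaining, used = [], [], 0
    for msg_id, size in pending:
        # Stop packing once a message does not fit, preserving FIFO order.
        if used + size <= flit_size and not remaining:
            packed.append(msg_id)
            used += size
        else:
            remaining.append((msg_id, size))
    return packed, remaining

# Example: three 24-byte requests against a 64-byte flit.
queue = [("req0", 24), ("req1", 24), ("req2", 24)]
first, rest = pack_flit(queue)
# The first two requests fit (48 <= 64); the third waits for the next flit.
```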
Patel does not explicitly disclose the destination identifier is a matching destination node identifier and appending a message authentication code to a last flit of the message. Makaram, in analogous art, however, discloses the destination identifier is a matching destination node identifier ([0084] The MAC generator 1212 may interact with the protocol flit generator 1208 to embed the MAC tag value in outbound protocol flits. The IDE/Flow Control Flit Generator 1214 may include logic or other circuitry associated with CXLCM IDE Control flit insertion. In the example shown, all control flits generated by this block are neither encrypted nor integrity protected (as outlined in the CXL IDE definition). The CRC Generator 1214 may be responsible for multiplexing between Control and Encrypted Protocol flits generated by the generators 1214 and 1208, respectively. The CRC generator 1214 may compute CRC codes for link error protection and may be responsible for shifting CXLCM flits towards the Physical CXL Link); and appending a message authentication code to a last flit of the message (Figure 11: 1106 In deterministic containment mode of operation, a transmitting device may accumulate an integrity value over a particular predetermined number of flits (e.g., containment_flit_count described above), and the transmitter may send the flit containing this integrity value (e.g., MAC) at the earliest possible time. There may be a delay between the transmission of last flit that was part of an integrity computation and the actual transmission of the MAC flit. In some cases, this delay may be bounded to be at most 5 flits. On the receive side, flits cannot be released for consumption in this mode of operation until the flit that contains the integrity value (e.g., MAC) for those flits has been received and the integrity value has been checked.
Since there can be a delay in the transmission of MAC flit during which time valid flits continue to be sent, the receiver may buffer the subsequent flits as well to ensure there is no loss of data. [0076] The earliest point at which the MAC flit 1106 (for the flits 1102) can be transmitted is accordingly 5 flits after the last flit 1103 that was part of the integrity value encapsulated in that MAC flit. On the receiver side of the link, both sets of flits 1102 and 1104 are queued or buffered until the MAC flit 1106 is received and an integrity check passes). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the device disclosed by Patel to include a matching destination node identifier as the destination identifier and to append a message authentication code to a last flit of the message. This modification would have been obvious because a person having ordinary skill in the art would have been motivated by the desire to provide an implementation of one or more layers of a Compute Express Link (CXL)-based protocol that includes an agent to obtain information to be transmitted to another device over a link based on the CXL-based protocol via a flit, provide encryption of a portion of the information to yield a ciphertext, generate a cyclic redundancy check (CRC) code based on the ciphertext, and cause a flit to be generated comprising the ciphertext, as suggested by Makaram ([0020]-[0022]).
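For illustration only, the "containment mode" receive path Makaram describes (buffering flits and releasing them for consumption only after the flit carrying the MAC arrives and the integrity check passes) can be sketched as follows. The key, epoch size, and flit contents are hypothetical and do not appear in the reference.

```python
# Illustrative sketch (not Makaram's implementation) of a containment-mode
# receiver: flits are held in a buffer until the MAC flit is received and
# the locally computed MAC matches, at which point they are released.
import hmac
import hashlib

KEY = b"hypothetical-link-key"

def mac_over(flits):
    """Compute an HMAC over the concatenated flit payloads."""
    return hmac.new(KEY, b"".join(flits), hashlib.sha256).digest()

class ContainmentReceiver:
    def __init__(self):
        self.buffer = []    # flits held pending the integrity check
        self.released = []  # flits released to the consumer

    def on_flit(self, flit):
        self.buffer.append(flit)  # nothing is released yet

    def on_mac_flit(self, received_mac):
        # Release buffered flits only if the integrity check passes.
        if hmac.compare_digest(mac_over(self.buffer), received_mac):
            self.released.extend(self.buffer)
            self.buffer.clear()
            return True
        return False

rx = ContainmentReceiver()
flits = [b"flit0", b"flit1", b"flit2", b"flit3"]
for f in flits:
    rx.on_flit(f)
ok = rx.on_mac_flit(mac_over(flits))  # transmitter's MAC over the same flits
# ok is True and all four buffered flits are now released
```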
Patel and Makaram do not explicitly disclose each flit of the message having a request embedded therein. Choudhary, in analogous art however, discloses each flit of the message having a request embedded therein ([0021-0022] CXL.cachemem IDE Containment Mode, as defined by the CXL Specification (e.g., the Compute Express Link Specification Revision 3.0, Version 1.0, published by the Compute Express Link Consortium, Inc., published Aug. 1, 2022) may direct that data transferred over a CXL link may only be released after a message authentication code (MAC) is received and checked for integrity. Because a MAC may be generated based on a variable number of flits (which may be referred to herein as a “MAC epoch”), this requirement for an integrity check may inherently add latencies for requests. Specifically, a transmitter may be required to send the flits of the MAC epoch, generate the MAC, and then insert the MAC into an appropriate slot for subsequent transmission (e.g., on a subsequent epoch). This latency may be compounded with loaded links, link bifurcation, and data flits that may all contribute to a delayed MAC transmission. As used herein, the term “flit” may generally refer to a unit amount of data when a message is being transmitted over a link. More specifically, the term flit as used herein, may be, or may be similar to, the term “flit” as described or defined by the Compute Express Link Specification as referenced above, or some other future version of such specification. [0029] Host to Device memory read that is held up in the device controller (e.g., the device cachemem controller) until the MAC epoch completes, and the MAC is transmitted and checked for integrity (labelled in FIG. 2 as “Containment Point”). [0032] Based on the above, the values of Tables 1 and 2 may represent the number of flit transfers across the CXL link that may be required to occur to complete a MAC authentication process. 
[0058] The process may include or relate to identifying, at 702 from a second electronic device over a communication link (e.g., a CXL link), a flit related to a request from the second electronic device to access a resource of the first electronic device, wherein the flit is an element of a MAC epoch; generating, at 704 based on the flit, a cache/mem interface (e.g., a CPI) message related to the request, wherein the cache/mem interface message includes an indication of the MAC epoch; and transmitting, at 706 to a device fabric of the first electronic device, the cache/mem interface message prior to receipt of a MAC related to the MAC epoch).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the flits disclosed by Patel and Makaram such that each flit of the message has a request embedded therein. This modification would have been obvious because a person having ordinary skill in the art would have been motivated by the desire to provide a link controller that can be configured to identify a flit related to a request from a second electronic device to access a resource of a first electronic device, wherein the flit is an element of a message authentication code (MAC) epoch, and generate, based on the flit, a cache/mem interface message related to the request, wherein the cache/mem interface message includes an indication of the MAC epoch, as suggested by Choudhary ([0012]-[0014]).
Patel, Makaram and Choudhary do not explicitly disclose wherein the message authentication code ("MAC") is generated based on the two or more flits. AGARWAL, in analogous art however, discloses wherein the message authentication code ("MAC") is generated based on the two or more flits ([0034] With continued reference to FIG. 3A, in addition, the CXL root port may generate a message authentication code (MAC) to ensure the integrity of the data across the physical link. Data across the physical links may be transported in flits (e.g., 512 bits) including the MAC header. In one example, multiple flits may be processed at the same time to generate the MAC for the multiple flits, so that the MAC is not generated for each flit. As an example, four flits may be processed to generate the MAC. The MAC may be generated using cryptographic hash functions (e.g., HMAC) or using block cipher algorithms. Still referring to FIG. 3A, the read A request travels from the CXL root port to the CXL endpoint, which in turn retrieves the data from the far memory. The CXL endpoint verifies the integrity of the received data by locally generating a MAC based on the received flits and comparing the generated MAC with the received MAC. The retrieved data for cache line CL $A travels back from the CXL endpoint to the CXL root port. The CXL endpoint also generates a MAC for a certain number of flits and transmits that back to the CXL root port. The CXL root port verifies the integrity of the received data by locally generating a MAC based on the received flits and comparing the generated MAC with the received MAC. The data is received by the internal memory controller, which uses the per-VM key (key A) to decrypt the data and provide the data to the requestor (e.g., the home agent). The data (A) is also stored in the near memory as part of the swapping of data between the near memory and the far memory. With respect to the integrity-related processing (using the MAC) shown in FIGS. 
3A and 3B, the system may operate in two modes. One mode may be referred to as the containment mode and the other mode may be referred to as the skid mode. In the containment mode, the CXL endpoint may only release the data after the integrity check passes. As a result, several flits (e.g., four flits) may need to be buffered until the MAC has been received by the CXL endpoint and the integrity check (e.g., by comparing the locally generated MAC with the received MAC) has been performed).
AGARWAL further describes the generation and checking of the message authentication code ([0051] With reference to FIG. 3A, the CXL root port generates a message authentication code (MAC) to ensure the integrity of the data across the physical link. Data across the physical links may be transported in flits (e.g., 512 bits) including the MAC header. In one example, multiple flits may be processed at the same time to generate the MAC for the multiple flits, so that the MAC is not generated for each flit. As an example, four flits may be processed to generate the MAC. The MAC may be generated using cryptographic hash functions (e.g., HMAC) or using block cipher algorithms. The CXL endpoint that is the counterpart to these transactions may perform the integrity check by comparing a locally generated MAC against the received MAC. A match may indicate an integrity check pass condition, whereas a lack of match between the locally generated MAC and the received MAC may indicate an integrity check failure condition).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the flits disclosed by Patel, Makaram and Choudhary such that the message authentication code ("MAC") is generated based on the two or more flits. This modification would have been obvious because a person having ordinary skill in the art would have been motivated by the desire to provide a method for swapping out a second block of data having an address conflict with the first block of data from the near memory to the far memory, where the second block of data is encrypted using a second key for exclusive use by a second virtual machine associated with the system, as suggested by AGARWAL ([0012]-[0014]).
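For illustration only, the multi-flit MAC scheme AGARWAL describes (one MAC computed over a group of flits, e.g., four 512-bit flits, rather than per flit, with the receiver checking integrity by recomputing the MAC locally and comparing) can be sketched as below. The key and flit contents are hypothetical and do not appear in the reference.

```python
# Illustrative sketch (not AGARWAL's implementation): a single HMAC is
# computed over a group of flits; the receiver recomputes it locally and
# compares, so a corrupted flit produces an integrity check failure.
import hmac
import hashlib

FLIT_BITS = 512      # flit size cited in AGARWAL's example
FLITS_PER_MAC = 4    # one MAC covers four flits, not one

key = b"hypothetical-session-key"
# Dummy flit payloads: flit i is 64 bytes of the value i+1.
flits = [bytes([i + 1]) * (FLIT_BITS // 8) for i in range(FLITS_PER_MAC)]

# Transmitter: a single MAC over the concatenated group of flits.
tx_mac = hmac.new(key, b"".join(flits), hashlib.sha256).digest()

# Receiver: recompute over the received flits and compare.
rx_mac = hmac.new(key, b"".join(flits), hashlib.sha256).digest()
integrity_ok = hmac.compare_digest(tx_mac, rx_mac)

# A corrupted flit (third flit replaced) fails the integrity check.
tampered = flits[:2] + [b"\xff" * (FLIT_BITS // 8)] + flits[3:]
bad_mac = hmac.new(key, b"".join(tampered), hashlib.sha256).digest()
integrity_fail = hmac.compare_digest(tx_mac, bad_mac)
```

Amortizing one MAC over several flits reduces per-flit overhead, at the cost of the buffering described for containment mode above.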
As per claim 8:
Claim 8 is directed to a system comprising: at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the at least one physical processor, cause the at least one physical processor to perform limitations substantially similar to the corresponding limitations of claim 1. Therefore, claim 8 is rejected with the same rationale given above for claim 1.
As per claim 15:
Claim 15 is directed to a computer-implemented method having limitations substantially similar to the corresponding limitations of claim 1. Therefore, claim 15 is rejected with the same rationale given above for claim 1.
Allowable Subject Matter
Claims 2-7, 9-14 and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, either taken alone or in combination, does not disclose the particular limiting features of claims 2-7, 9-14 and 16-20 as they are recited in the respective claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the notice of references cited on form PTO-892 for additional prior art.
Abraham et al. (US 20210218548) discloses an encrypted link established between a local and a remote processor over a point-to-point interconnect. The encrypted link is operated for some time until the encryption key should be updated. The local processor sends a key update message to the remote processor notifying the remote processor of the change. The remote processor prepares for the change and sends a key update confirmation message to the local processor. The local processor then sends a key switch message to the remote processor. The local processor pauses transmission of encrypted messages while the remote processor completes use of the encrypted messages. After the pause, the local processor resumes sending encrypted messages with the updated encryption key.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TECHANE GERGISO, whose telephone number is (571) 272-3784. The examiner can normally be reached from 9:30 AM to 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LINGLAN EDWARDS can be reached on (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TECHANE GERGISO/Primary Examiner, Art Unit 2408