Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Information Disclosure Statement
The references cited in the Information Disclosure Statement (IDS) filed on 07/11/2025 have been considered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian).
Regarding claim 1, Petrie teaches a data storage device (see Petrie par.0045: “Processor 105 is coupled to controller hub 115 through front-side bus (FSB) 106.”), comprising:
a memory device (see Petrie par.0046: “System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100.”); and
a controller coupled to the memory device (see Petrie par.0046: “System memory 110 is coupled to controller hub 115 through memory interface 116.”), wherein the controller is configured to:
create an integrity and data encryption (IDE) transaction layer packet (TLP) using a first TLP and a second TLP, wherein the IDE TLP includes an IDE TLP message authentication code (MAC) (see Petrie par.0044: “process 100 may include sending one or more packets (TLP) including the first MAC and the second MAC at operation 114. In one or more examples, a single packet may include the first MAC and the second MAC and the MACs may protect information in the integrity protected portion of the single packet. Additionally or alternatively, a single packet may include the first MAC and the second MAC, and the first MAC and the second MAC may protect information in the integrity protected portions of multiple packets in sequence (e.g., a stream, without limitation).”, par.0065: “Packet format 300 includes fields for: sequence numbers, local prefixes, IDE TLP prefixes, other end-to-end prefixes, header, data, first MAC, second MAC, and LCRC.” par.0076: “Packet stream format 400 includes a packet format for a packet 404 that includes fields for a first MAC and a second MAC, and a packet format for a second packet 402 that does not include any fields for a MAC (nor for a LCRC). The first MAC and second MAC included in the fields of packet 404 protect the integrity of the integrity protected portion of packet 402 and the integrity protected portion of packet 404.”);
Petrie appears to be silent; however, Qian teaches
prepare a third TLP (see Qian par.0056: “If the third packet addressed to the first receiver address is received while the first packet is still being processed, the third packet is appended to the aggregate, provided this does not increase the size of the aggregate over a predetermined limit. If the aggregate is too large to accept the third packet, then a new aggregate is created starting with the third packet.”);
determine whether to aggregate the third TLP with the first TLP and the second TLP (see Qian par.0056: “If while the first packet addressed to the first receiver address is still being processed a second packet addressed to the first receiver addressed is received, then an aggregate (such as an A-MSDU) is created and the second packet is inserted into the aggregate. After the first packet is processed, the aggregate is processed. If the third packet addressed to the first receiver address is received while the first packet is still being processed, the third packet is appended to the aggregate,”); and
send the IDE TLP MAC to a host device with a last TLP. (see Qian par.0024: “Host processor (Host) 202 receives packets for processing, such as from an input 204. The packets are forwarded from host processor 204 to MAC processor (MAC) 206. MAC processor 206 forwards the frames to Physical Layer Processor (PHY) 208 for transmission across a media.”, par.0050-0051: “If at 606 a determination is made that a frame for the same receiver address is being processed (YES), at 610 a determination is made as to whether an aggregate already exists for the receiver address. If an aggregate for the receiver address if found at 610 (YES), at 612 a determination is made whether the frame can be appended or concatenated onto the aggregate. For example, the aggregate data frame may have a maximum size. If adding the frame to the aggregate data frame would make the aggregate exceed the maximum size, then the data frame is not added to the aggregate.”, par.0054: “At 618, the aggregate is stored until processing at 608. For example, the aggregate is stored in memory and data frames can be added to the aggregate while the first data frame is being processed. As another example, the aggregate can be stored until it reaches the maximum allowable size. The aggregate may also be stored until processing of the incoming data stream is completed (e.g. the incoming queue is empty).”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie's teaching of “Enhanced security Packet integrity protection for the whole path. Packet integrity protection for path segments. Performance enhancement: Avoids multiple steps of IDE TLP encryption/decryption. One step approach using secondary IDE TLP MAC.” (see Petrie par.0083) with Qian's teaching of “The encryption process may need to know the size of the data being encrypted before encrypting the frame. For example, a first frame with a first destination address may be received at input 104. While this frame is being processed (e.g. encrypted) a second frame with the first destination address is received at input 104. Since the first frame is already being processed, an aggregate frame for the first destination address is created beginning with the second frame. The second frame is not processed before being put into the aggregate. Any subsequent frames for the first destination address received before processing of the first frame is completed is placed into the aggregate unprocessed. For example, if a third frame for the first destination address is received while the first frame is still being processed, the third frame is placed into the aggregate unprocessed. After processing of the first frame is completed,” (see Qian par.0023).
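For illustration only (this sketch is not part of the cited record, and all identifiers are hypothetical), the combined teaching — Qian's size-limited aggregation of packets for one destination, together with Petrie's single MAC protecting the integrity-protected portions of the aggregated packets — could be sketched in Python as:

```python
import hmac
import hashlib

MAX_AGGREGATE_BYTES = 4096  # hypothetical stand-in for Qian's "predetermined limit"

def aggregate_tlps(tlps, max_bytes=MAX_AGGREGATE_BYTES):
    """Group TLP payloads into aggregates no larger than max_bytes (per Qian):
    if appending a packet would exceed the limit, a new aggregate is started."""
    aggregates, current, size = [], [], 0
    for tlp in tlps:
        if current and size + len(tlp) > max_bytes:
            aggregates.append(current)  # aggregate too large: start a new one
            current, size = [], 0
        current.append(tlp)
        size += len(tlp)
    if current:
        aggregates.append(current)
    return aggregates

def ide_tlp_mac(key, aggregate):
    """One MAC covering the integrity-protected portions of all aggregated
    packets (per Petrie's single-MAC-over-multiple-packets arrangement)."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for tlp in aggregate:
        mac.update(tlp)
    return mac.digest()
```

HMAC-SHA-256 here merely stands in for whatever AES-based integrity code Petrie par.0028 contemplates; the point of the sketch is that one MAC is computed over several aggregated payloads.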
Regarding claim 3, Petrie in view of Qian teaches the data storage device of claim 1; Qian further teaches wherein upon determining that the second TLP is a non-user data packet, the controller is configured to send the second TLP to the host device. (See Qian par.0041-0042: “an incoming packet (which may also be referred to herein as a frame or data frame) is received. In an example embodiment, a plurality of data frames is received, e.g. placed into an incoming queue. If at 506 it is determined that no frames with the same receiver address are being processed (NO), then at 510 the frame is processed (e.g. placed in an outbound queue or otherwise forwarded to the next stage of the transmission process).”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Qian's teaching of “The frame aggregation techniques described herein can be beneficial for operating on congested channels and can work well with lightly loaded channels. For example, the aggregate is created for a destination address while processing of a first frame for the destination address. In a lightly loaded channel, frames would be sent as soon as they are received so frames aggregation is not likely to occur because they wouldn't be stored in the queues for a long enough time period. In a heavily congested channel, the first frame may not be processed immediately, so the frames are re-grouped by receiver address increasing the likelihood that an aggregate can be created, which can alleviate channel congestion.” (see Qian par.0031).
Regarding claim 5, Petrie in view of Qian teaches the data storage device of claim 1; Qian further teaches wherein the controller is configured to aggregate the third TLP with the first TLP and second TLP upon determining that the third TLP is a user data packet TLP. (See Qian par.0056: “If while the first packet addressed to the first receiver address is still being processed a second packet addressed to the first receiver addressed is received, then an aggregate (such as an A-MSDU) is created and the second packet is inserted into the aggregate. After the first packet is processed, the aggregate is processed. If the third packet addressed to the first receiver address is received while the first packet is still being processed, the third packet is appended to the aggregate”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Qian's teaching of “the assembling of an aggregate frame ceases due to one or more conditions. For example, assembly of an aggregate may stop responsive to the incoming queue being empty. As another example, assembly of an aggregate stops responsive to the size of the aggregate data frame exceeding a predetermined threshold. If this condition occurs, a new aggregate may be created. As another example, assembly of an aggregate stops responsive to a data frame in the queue having a different address than the first receiver.” (see Qian par.0046).
Regarding claim 6, Petrie in view of Qian teaches the data storage device of claim 1; Qian further teaches wherein the controller is configured to aggregate up to eight TLPs into the IDE TLP. (See Qian par.0054: “the aggregate is stored in memory and data frames can be added to the aggregate while the first data frame is being processed. As another example, the aggregate can be stored until it reaches the maximum allowable size. The aggregate may also be stored until processing of the incoming data stream is completed”) The Examiner interprets Qian's maximum allowable size as encompassing up to eight packets that can be aggregated.
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Qian's teaching of “a method comprising receiving a plurality of data frames, wherein a first group of the plurality of data frames is addressed to a first receiver and a second group of the plurality of data frames is addressed to a second receiver. The plurality of data frames are grouped by destination address. An aggregate data frame is created that is addressed to the first receiver from the grouped data frames.” (see Qian par.0004).
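As a purely illustrative sketch of the Examiner's interpretation above (the eight-TLP cap comes from the claim language; Qian speaks only of a "maximum allowable size", and all identifiers here are hypothetical), the append decision of Qian par.0051 under such a cap could look like:

```python
MAX_TLPS_PER_IDE_TLP = 8  # the claimed cap, read onto Qian's "maximum allowable size"

def can_append(aggregate, tlp, max_count=MAX_TLPS_PER_IDE_TLP, max_bytes=4096):
    """Return True if the TLP may be appended to the aggregate without
    exceeding either the packet-count cap or the byte-size limit."""
    within_count = len(aggregate) < max_count
    within_size = sum(len(t) for t in aggregate) + len(tlp) <= max_bytes
    return within_count and within_size
```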
Regarding claim 7, Petrie in view of Qian teaches the data storage device of claim 6; Petrie further teaches wherein the IDE TLP comprises at least two integrity protected portions, at least two sequence numbers, and the IDE TLP MAC. (See Petrie par.0065-0071: “Packet format 300 includes fields for: sequence numbers, local prefixes, IDE TLP prefixes, other end-to-end prefixes, header, data, first MAC, second MAC, and LCRC.”, 0075: “FIG. 4 is a schematic diagram of a packet stream format 400 that includes fields for a first MAC and a second MAC utilized to protect information in the integrity protected portions of multiple packets in sequence (e.g., a stream, without limitation), in accordance with one or more examples.”).
Regarding claim 8, Petrie in view of Qian teaches the data storage device of claim 7; Petrie further teaches wherein a first integrity protected portion of the at least two integrity protected portions is for the first TLP, a second integrity protected portion of the at least two integrity protected portions is for the second TLP, a first sequence number of the at least two sequence numbers is for the first TLP, a second sequence number of the at least two sequence numbers is for the second TLP, and the IDE TLP MAC is for both the first TLP and the second TLP. (see Petrie Fig. 4).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian), and further in view of Miura et al. (US 2025/0097137 A1, hereafter Miura).
Regarding claim 2, Petrie in view of Qian teaches the data storage device of claim 1. Petrie in view of Qian does not explicitly teach, however Miura explicitly teaches, wherein the determining comprises determining whether the second TLP is a user data packet. (See Miura par.0044-0046: “The packet type determination unit 103 determines the packet type based on the analysis result by the packet analysis unit 102 (step S3 in FIG. 3). Packet types include “service use application”(user data packet) and “service use”. The packet type determination unit 103 sends a packet whose packet type is “service use application” to the user information extraction unit 104, and sends a packet whose packet type is “service use” to the data management unit 105. The user information extraction unit 104 acquires user information (an ID address, a port number, a contract number, service information, a priority, and the like) described in the packet received from the packet type determination unit 103”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Miura's teaching of “a packet analysis unit 102 that analyzes a received packet, a packet type determination unit 103 that determines the type of the received packet, a user information extraction unit 104 that extracts user information from the packet, a data management unit 105 that transfers the packet received from the user to a tag addition unit 107 at a timing according to a priority, a storage unit 106 configured to temporarily store the packet received from the user, the tag addition unit 107 that adds a tag to the packet received from the user, a data communication control unit 108, a user management unit 109 that manages the information of a user who has applied use of a service, issues a user ID, and notifies the user of the issued user ID, a Quality of Service (QOS) control unit 110 that decides the priority of transfer of a packet” (see Miura par.0042).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian), and further in view of Enderby et al. (US 2008/0313738 A1, hereafter Enderby).
Regarding claim 4, Petrie in view of Qian teaches the data storage device of claim 3; Petrie further teaches wherein the IDE TLP MAC is a signature for protecting the IDE TLP. (See Petrie par.0028: “a MAC can be used in scenarios where secure data transmission is required. For example, AES (Advanced Encryption Standard) is a symmetric encryption algorithm for generating an integrity code based on information (e.g., a cryptographic key, password, integrity protected portion of a packet including encrypted portions, combinations thereof, without limitation).”, par.0031: “a packet format includes two or more fields (or sub-fields) for respective integrity codes (e.g., MAC, without limitation). The first integrity code is used to check integrity of a segment of the link between a first device and a non-transparent bridge, and the second integrity code is used to check integrity of a segment of the link between the bridge and the second device.”).
Petrie in view of Qian appears to be silent; however, Enderby teaches and wherein the signature is for the first TLP, the second TLP, and the third TLP. (See Enderby par.0027: “a first packet signature is calculated. The first through third signatures are each a function of the received packet, and in particular, are functions of payload of the packet. The packet signatures may also be a function of the headers of the packet. Any signature represents a characterization of the packet. In an embodiment of the invention, a signature of the packet may be a mathematical function of the binary data that constitutes the packet, such as a hash or a cyclic redundancy code (CRC) of the packet. Moreover, the first through third signatures of a packet may be calculated in the same way, such that the three signatures may be the same.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 3 with Enderby's teaching of “suspect packets are compared to entries in the database to more comprehensively determine whether or not the packets represent an attempt to subvert the information processing system.” (see Enderby abstract).
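For illustration only (outside the cited record, with hypothetical identifiers), Enderby's teaching that a signature may be a hash or CRC of packet data, read onto the claimed arrangement of one signature covering the first, second, and third TLPs, could be sketched as:

```python
import zlib

def packet_signature(packets):
    """Chain a CRC-32 over all aggregated packet payloads so that a single
    signature characterizes the first, second, and third TLPs together
    (per Enderby, a signature may be a hash or CRC of the packet data)."""
    crc = 0
    for payload in packets:
        crc = zlib.crc32(payload, crc)  # fold each payload into one running CRC
    return crc
```

Because the CRC is chained, altering any one of the covered payloads changes the single resulting signature.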
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian), and further in view of Borker et al. (US 8,953,608 B1, hereafter Borker).
Regarding claim 9, Petrie in view of Qian teaches the data storage device of claim 1; Petrie in view of Qian appears to be silent, however Borker teaches wherein the controller comprises a host interface module (HIM) that includes an IDE TLP dynamic aggregation module. (See Borker Col.6 lines 54-60: “The aggregation buffer 41 may be used to store the data portions for a plurality of frames for an I/O exchange (HIM) based on a determination that is made by an aggregation module 51 (aggregation module) that maintains an aggregation data structure 53 that is also described below in detail. The data portion for one or more frames may be assembled into an aggregation data unit 55 that is described below with respect to FIG. 2E. In one embodiment, as described below in detail, the aggregation module 51 adds a header 59 and a trailer 61 to the aggregation data unit 55.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Borker's teaching of “The aggregation module 51 aggregates N frames based on certain criteria that is described below. As an example, aggregation module 51 may aggregate an average of L number of frames. This reduces the number of interrupts from M to M/L. This is efficient compared to the conventional systems because the FC 2 layer 49C now has to process fewer frames i.e. N/L number of frames and deal with fewer interrupts i.e. M/L number of interrupts compared to M number of interrupts. L, N and M are positive numbers.” (see Borker Col.8 lines 14-22).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian), and further in view of Margolin et al. (US 12,164,793 B1, hereafter Margolin).
Regarding claim 10, Petrie in view of Qian teaches the data storage device of claim 1; Petrie in view of Qian appears to be silent, however Margolin teaches wherein the controller comprises a host interface module (HIM) that includes an IDE aggregation speculation execution module. (See Margolin Col.9 lines 35-42: “The host processor 202 and the controller(s) 204 may access the memory 206 and optionally communicate with each other via a bus 208 (HIM) comprising one or more interconnections, channels, busses, links, and/or the like such as, for example, system bus, memory bus, Peripheral Component Interconnect (PCI), PCI Express (PCIe), InfiniBand, and/or the like. The bus 208 may employ one or more bus architectures as known in the art.”, Col.9 lines 9-20: “rather than waiting for the predefined field(s) to arrive and update in the memory, the host processor may initiate a plurality of speculative execution (speculation module) threads each for processing the incoming packet(s) according to a receptive one of a plurality of possible (valid) values of one or more segments of the packet(s), for example, a field, a data value, and/or the like. Upon arrival of the field(s) or data value(s) according to which the packet should be processed, the host processor may maintain a process speculatively initiated to process the packet according to the actual value of the field or data that arrived from the controller and terminate all other threads.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Margolin's teaching of “initiating, prior to the one or more packet segments written to the one or more memory blocks, a plurality of speculative execution threads each according to a respective one of a plurality of valid values of the one or more fields, reading a value of the one or more fields responsive to determining the one or more packet segments were written in the one or more memory blocks, and terminating each of the plurality of speculative execution threads which was initiated according to a respective value of the one or more fields different from the determined value of the one or more fields.” (see Margolin Col.2 lines 22-32).
Claims 11, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (US Provisional Application No. 63/608,003, hereafter Petrie; reference 63608003 is the provisional application of publication US 2025/0190634 A1), in view of Qian et al. (US 2009/0060009 A1, hereafter Qian), and further in view of Wagh et al. (US 2005/0144339 A1, hereafter Wagh).
Regarding claim 11, Petrie in view of Qian teaches the data storage device of claim 1; Petrie in view of Qian appears to be silent, however Wagh teaches wherein the controller is configured to start speculative usage of another IDE TLP before completing a protection check. (See Wagh par.0043-0046: “At the transaction layer, a transaction layer engine 134 performs pre-processing of the TLP 122, which includes the header 152 and the data 154 that was speculatively transmitted by the link layer engine 134. The transaction layer engine 124 ensures that the transaction request 114 is not globally visible (i.e., available to the core) until validated by the link layer engine 134. The memory 126 within the transaction layer 120, however, stores both speculatively transmitted packets and verified packets simultaneously. Thus, pointers are used to distinguish between the packets having different status, which are stored in the same memory. For illustration, the memory 126 of FIG. 4 depicts a TLP 122A, a TLP 122B, a TLP 122C, and a TLP 122D (collectively, TLPs 122). The TLPs 122A and 122B are recently stored TLPs, in which the link layer engine 134 has not performed CRC verification. The TLP 122C is a TLP in which the CRC verification from the link layer engine is complete, but processing by the transaction layer engine 124 is incomplete. The TLP 122D is one in which has been fully processed in the link layer and the transaction layer and, thus, is ready for transmission to the core 112. The transaction layer engine 124 uses a load pointer 28A, a speculative pointer 28B, and an unload pointer 28C (collectively, pointers 28) to keep track of the status of the TLPs 122 within the memory 126. The load pointer 28A points to the address where the current TLP 122A is speculatively stored. Any new packets sent by the link layer engine are stored at the address pointed to by the load pointer. The unload pointer 28C points to the address where TLPs which are ready for transmission to the core 112 are stored. 
The TLP 122C has both been "released" by the link layer engine 134, having passed CRC verification, and by the transaction layer engine 124, having been processed there as well. Between the load pointer 28A and the unload pointer 28C, the speculative pointer 28B essentially floats, pointing to intermediate address locations of the memory 126. The position of the speculative pointer 28B is governed by whether the link layer engine 134 has confirmed the validity of the speculatively forwarded TLP or not to the transaction layer engine 124.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined the teachings of Petrie in view of Qian as applied to claim 1 with Wagh's teaching of “transaction layer packets are speculatively forwarded from the link layer to the transaction layer before processing at the link layer is completed, and without the use of memory storage at the link layer. A link layer engine minimally processes the data link layer packet by checking the sequence number only and not the CRC before forwarding the packet to the transaction layer. This allows the transaction layer to pre-process the packet, such as verifying header information. However, the transaction layer is unable to make the transaction globally available until the link layer has verified the CRC of the packet. The simultaneous processing of the packet by both the link layer and the transaction layer reduces latency, in some embodiments, and lessens the amount of memory needed for processing.” (see Wagh par.0016).
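Wagh's scheme of holding speculative and verified TLPs in one memory, distinguished only by pointers (par.0043-0046), could be sketched, purely for illustration and with hypothetical identifiers, as:

```python
class SpeculativeTlpBuffer:
    """Sketch of Wagh's shared transaction-layer memory: newly loaded TLPs are
    speculative, the speculative pointer advances as the link layer confirms
    CRC, and only TLPs behind it may become visible to the core."""

    def __init__(self):
        self.mem = []          # shared memory holding speculative and verified TLPs
        self.speculative = 0   # index below which the link layer has verified CRC
        self.unload = 0        # index below which TLPs were handed to the core

    def load(self, tlp):
        # New packets from the link layer are stored speculatively,
        # before their protection check completes.
        self.mem.append(tlp)

    def crc_verified(self):
        # Link layer confirms the CRC of the oldest still-speculative TLP,
        # so the speculative pointer floats forward.
        if self.speculative < len(self.mem):
            self.speculative += 1

    def ready_for_core(self):
        # Only CRC-verified TLPs become globally visible to the core.
        ready = self.mem[self.unload:self.speculative]
        self.unload = self.speculative
        return ready
```

The sketch mirrors the key property of Wagh's design: pre-processing can begin on a speculatively stored TLP, but nothing is released to the core until verification catches up.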
Regarding claim 18, Petrie teaches a data storage device (see Petrie par.0045: “Processor 105 is coupled to controller hub 115 through front-side bus (FSB) 106.”), comprising:
means to store data (see Petrie par.0046: “System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100.”); and
a controller coupled to the means to store data (see Petrie par.0046: “System memory 110 is coupled to controller hub 115 through memory interface 116.”), wherein the controller is configured to:
Petrie appears to be silent; however, Qian teaches
determine whether to aggregate data packets based upon whether the packet contains non-user data (see Qian par.0050: “If at 606 a determination is made that a frame for the same receiver address is being processed (YES), at 610 a determination is made as to whether an aggregate already exists for the receiver address (non-user data).”, par.0051: “If an aggregate for the receiver address if found at 610 (YES), at 612 a determination is made whether the frame can be appended or concatenated onto the aggregate. For example, the aggregate data frame may have a maximum size. If adding the frame to the aggregate data frame would make the aggregate exceed the maximum size, then the data frame is not added to the aggregate. If at 612 a determination is made that the frame can be appended to the aggregate data frame (YES), then at 614 the frame is appended to the aggregate data frame.”);
directly post a first packet to a host device without aggregating packets if the packet contains non-user data (see Qian par.0049: “At 604, the receiver address (RA, also referred to herein as the destination address) is ascertained. The receiver address corresponds to a destination address for the frame. At 606, a determination is made as to whether a frame for the receiver address is being processed. If at 606 it is determined that there are no frames being processed for the receiver address (NO), then at 608 the frame is processed.”);
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie's teaching of “Enhanced security Packet integrity protection for the whole path. Packet integrity protection for path segments. Performance enhancement: Avoids multiple steps of IDE TLP encryption/decryption. One step approach using secondary IDE TLP MAC.” (see Petrie par.0083) with Qian's teaching of “The encryption process may need to know the size of the data being encrypted before encrypting the frame. For example, a first frame with a first destination address may be received at input 104. While this frame is being processed (e.g. encrypted) a second frame with the first destination address is received at input 104. Since the first frame is already being processed, an aggregate frame for the first destination address is created beginning with the second frame. The second frame is not processed before being put into the aggregate. Any subsequent frames for the first destination address received before processing of the first frame is completed is placed into the aggregate unprocessed. For example, if a third frame for the first destination address is received while the first frame is still being processed, the third frame is placed into the aggregate unprocessed. After processing of the first frame is completed,” (see Qian par.0023).
Petrie in view of Qian appears to be silent; however, Wagh teaches
and
perform speculative usage of a second packet before completing a protection check of the second packet. (See Wagh par.0043-0046: “At the transaction layer, a transaction layer engine 134 performs pre-processing of the TLP 122, which includes the header 152 and the data 154 that was speculatively transmitted by the link layer engine 134. The transaction layer engine 124 ensures that the transaction request 114 is not globally visible (i.e., available to the core) until validated by the link layer engine 134. The memory 126 within the transaction layer 120, however, stores both speculatively transmitted packets and verified packets simultaneously. Thus, pointers are used to distinguish between the packets having different status, which are stored in the same memory. For illustration, the memory 126 of FIG. 4 depicts a TLP 122A, a TLP 122B, a TLP 122C, and a TLP 122D (collectively, TLPs 122). The TLPs 122A and 122B are recently stored TLPs, in which the link layer engine 134 has not performed CRC verification. The TLP 122C is a TLP in which the CRC verification from the link layer engine is complete, but processing by the transaction layer engine 124 is incomplete. The TLP 122D is one in which has been fully processed in the link layer and the transaction layer and, thus, is ready for transmission to the core 112. The transaction layer engine 124 uses a load pointer 28A, a speculative pointer 28B, and an unload pointer 28C (collectively, pointers 28) to keep track of the status of the TLPs 122 within the memory 126. The load pointer 28A points to the address where the current TLP 122A is speculatively stored. Any new packets sent by the link layer engine are stored at the address pointed to by the load pointer. The unload pointer 28C points to the address where TLPs which are ready for transmission to the core 112 are stored. 
The TLP 122C has both been "released" by the link layer engine 134, having passed CRC verification, and by the transaction layer engine 124, having been processed there as well. Between the load pointer 28A and the unload pointer 28C, the speculative pointer 28B essentially floats, pointing to intermediate address locations of the memory 126. The position of the speculative pointer 28B is governed by whether the link layer engine 134 has confirmed the validity of the speculatively forwarded TLP or not to the transaction layer engine 124.”).
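As an illustrative aid only (not part of the claim mapping, and all identifiers are hypothetical), the three-pointer buffer scheme quoted above from Wagh par.0043-0046 may be sketched roughly as follows: new TLPs are stored speculatively at the load pointer, the speculative pointer advances as the link layer confirms CRCs, and only CRC-verified TLPs behind the speculative pointer may be released to the core from the unload pointer.

```python
class SpeculativeTlpBuffer:
    """Hypothetical model of Wagh's three-pointer scheme: speculative and
    verified TLPs share one memory, and pointers track their status."""

    def __init__(self):
        self.slots = []          # memory 126: speculative and verified TLPs together
        self.speculative = 0     # speculative pointer 28B: first slot not yet CRC-verified
        self.unload = 0          # unload pointer 28C: first slot not yet sent to the core

    def load(self, tlp):
        # Load pointer 28A: new packets from the link layer engine are
        # stored speculatively at the tail of the memory.
        self.slots.append(tlp)

    def crc_confirmed(self):
        # Link layer reports CRC success for the oldest speculative TLP;
        # the speculative pointer advances past it.
        assert self.speculative < len(self.slots)
        self.speculative += 1

    def crc_failed(self):
        # CRC failure: speculatively stored TLPs from this point on are dropped.
        del self.slots[self.speculative:]

    def release_to_core(self):
        # Only CRC-verified TLPs (behind the speculative pointer) may become
        # globally visible to the core.
        if self.unload < self.speculative:
            tlp = self.slots[self.unload]
            self.unload += 1
            return tlp
        return None
```

Under this sketch, a TLP loaded but not yet CRC-confirmed is never released, matching the quoted requirement that the transaction "is not globally visible ... until validated by the link layer engine."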
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Qian described above with Wagh teaching “transaction layer packets are speculatively forwarded from the link layer to the transaction layer before processing at the link layer is completed, and without the use of memory storage at the link layer. A link layer engine minimally processes the data link layer packet by checking the sequence number only and not the CRC before forwarding the packet to the transaction layer. This allows the transaction layer to pre-process the packet, such as verifying header information. However, the transaction layer is unable to make the transaction globally available until the link layer has verified the CRC of the packet. The simultaneous processing of the packet by both the link layer and the transaction layer reduces latency, in some embodiments, and lessens the amount of memory needed for processing.”, (see Wagh par.0016).
Regarding claim 20 Petrie in view of Qian, and Wagh teach the data storage device of claim 18, Petrie further teaches wherein the aggregated data packets are an integrity and data encryption (IDE) transaction layer packet (TLP) that includes an IDE TLP media access controller (MAC), wherein the IDE TLP MAC is a signature for protecting the IDE TLP, and wherein the signature is for all aggregated data packets of the IDE TLP. (see Petrie par.0065: “Packet format 300 includes fields for: sequence numbers, local prefixes, IDE TLP prefixes, other end-to-end prefixes, header, data, first MAC, second MAC, and LCRC.” par.0076: “Packet stream format 400 includes a packet format for a packet 404 that includes fields for a first MAC and a second MAC, and a packet format for a second packet 402 that does not include any fields for a MAC (nor for a LCRC). The first MAC and second MAC included in the fields of packet 404 protect the integrity of the integrity protected portion of packet 402 and the integrity protected portion of packet 404.”, par.0028: “a MAC can be used in scenarios where secure data transmission is required. For example, AES (Advanced Encryption Standard) is a symmetric encryption algorithm for generating an integrity code based on information (e.g., a cryptographic key, password, integrity protected portion of a packet including encrypted portions, combinations thereof, without limitation).”).
Claims 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of AKAVARAM et al. (US-20240143434-A1 hereafter AKAVARAM), in view of Benisty et al. (US-10558367-B2 hereafter Benisty), in further view of Wagh et al. (US-20050144339-A1 hereafter Wagh).
Regarding claim 12 a data storage device (see Petrie par.0045: “Processor 105 is coupled to controller hub 115 through front-side bus (FSB) 106.”), comprising:
a memory device (see Petrie par.0046: “System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100.”); and
a controller coupled to the memory device (see Petrie par.0046: “System memory 110 is coupled to controller hub 115 through memory interface 116.”), wherein the controller is configured to:
Petrie appears to be silent; however, Akavaram teaches:
receive a first chunk of an integrity and data encryption (IDE) transaction layer packet (TLP) (see Akavaram par.0053: “receiving TLPs at a receiver according to some aspects. In one example, the process 600 can be implemented at any PCIe device (e.g., the receiver 304) described herein. After receiving a TLP from a link partner, at 602, the receiver determines whether or not the packet is valid (i.e., CRC check passed).”);
determine whether the first chunk is the last chunk of the IDE TLP (see Akavaram par.0053: “At 604, the receiver determines whether or not the TLP has the correct or anticipated sequence number, for example, the NRS number maintained at the packet check block 314 of the receiver. For example, if the TLP has a sequence number later than the NRS, it indicates that the TLP is out of sequence.”);
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie teaching “Enhanced security Packet integrity protection for the whole path. Packet integrity protection for path segments. Performance enhancement: Avoids multiple steps of IDE TLP encryption/decryption. One step approach using secondary IDE TLP MAC.”, (see Petrie par.0083) with Akavaram teaching “At a receiver (e.g., a receiving link partner), TLPs are accepted and processed in an order according to their sequence number order. A TLP with an earlier sequence number is processed at the transaction layer before a TLP with a later sequence number can be presented to the transaction layer for processing. In the current PCIe implementation, if the sequence number of a received TLP is not the expected or anticipated sequence number, the TLP is discarded, and the receiver waits for another TLP with the expected sequence number. If the TLP has the correct sequence number, the receiver can present the TLP to the transaction layer for processing.”, (see Akavaram par.0044).
Petrie in view of Akavaram appears to be silent; however, Benisty teaches:
determine if the first chunk is a non-user data packet (see Benisty Col.8 lines 34-44: “At block 620, DMA 133 determines if latency is critical (non-user data) for the host read request. For example, DMA 133 may determine that the host read request is associated with a low host command submission queue depth, with a host memory buffer, with a forced unit access for a write operation, or with other operations. If latency is determined to be not critical, DMA 133 issued a host read request for the highest supported read request size allowable by maximum TLP payload size ceiling and by static MRRS 420 at block 630.”);
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, described above, with Benisty teaching “a method of accessing data by a storage device with reduced latency includes determining whether a host read request to be issued is latency critical. A controller determines whether a transaction size to be requested by a host read request is greater than a maximum TLP payload size floor. The controller selects a read request size in response to determining the transaction size is greater than the maximum transaction layer packet payload size floor.”, (see Benisty Col.1 lines 46-54).
Petrie in view of Akavaram and Benisty appears to be silent; however, Wagh teaches:
and
perform speculative usage of the IDE TLP before completing a protection check. (See Wagh par.0043-0046: “At the transaction layer, a transaction layer engine 134 performs pre-processing of the TLP 122, which includes the header 152 and the data 154 that was speculatively transmitted by the link layer engine 134. The transaction layer engine 124 ensures that the transaction request 114 is not globally visible (i.e., available to the core) until validated by the link layer engine 134. The memory 126 within the transaction layer 120, however, stores both speculatively transmitted packets and verified packets simultaneously. Thus, pointers are used to distinguish between the packets having different status, which are stored in the same memory. For illustration, the memory 126 of FIG. 4 depicts a TLP 122A, a TLP 122B, a TLP 122C, and a TLP 122D (collectively, TLPs 122). The TLPs 122A and 122B are recently stored TLPs, in which the link layer engine 134 has not performed CRC verification. The TLP 122C is a TLP in which the CRC verification from the link layer engine is complete, but processing by the transaction layer engine 124 is incomplete. The TLP 122D is one in which has been fully processed in the link layer and the transaction layer and, thus, is ready for transmission to the core 112. The transaction layer engine 124 uses a load pointer 28A, a speculative pointer 28B, and an unload pointer 28C (collectively, pointers 28) to keep track of the status of the TLPs 122 within the memory 126. The load pointer 28A points to the address where the current TLP 122A is speculatively stored. Any new packets sent by the link layer engine are stored at the address pointed to by the load pointer. The unload pointer 28C points to the address where TLPs which are ready for transmission to the core 112 are stored. 
The TLP 122C has both been "released" by the link layer engine 134, having passed CRC verification, and by the transaction layer engine 124, having been processed there as well. Between the load pointer 28A and the unload pointer 28C, the speculative pointer 28B essentially floats, pointing to intermediate address locations of the memory 126. The position of the speculative pointer 28B is governed by whether the link layer engine 134 has confirmed the validity of the speculatively forwarded TLP or not to the transaction layer engine 124.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram and Benisty, described above, with Wagh teaching “transaction layer packets are speculatively forwarded from the link layer to the transaction layer before processing at the link layer is completed, and without the use of memory storage at the link layer. A link layer engine minimally processes the data link layer packet by checking the sequence number only and not the CRC before forwarding the packet to the transaction layer. This allows the transaction layer to pre-process the packet, such as verifying header information. However, the transaction layer is unable to make the transaction globally available until the link layer has verified the CRC of the packet. The simultaneous processing of the packet by both the link layer and the transaction layer reduces latency, in some embodiments, and lessens the amount of memory needed for processing.”, (see Wagh par.0016).
Regarding claim 17 Petrie in view of Akavaram, Benisty, and Wagh teach the data storage device of claim 12, Wagh further teaches wherein the controller is configured to encrypt the chunk and ignore the encrypted chunk if the chunk is determined to be a bad packet. (See Wagh Col.16 lines 55-64: “it can be determined whether this completion packet includes critical data (diamond 815). This determination may be made, e.g., based on a portion of the header of the completion packet that indicates that this is part of the first portion of a wrap memory read request response. If it is determined at diamond 815 that the completion packet includes critical data, control passes to block 820 where CRC processing may be performed on the critical data. Note that as there is a limited amount of critical data.”, Col.17 lines 9-12: “If instead an error is detected, control passes to block 825, where the critical data packet may be discarded and a retry request may be issued to the completer.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, Benisty, and Wagh teaching of claim 12 with Wagh teaching “a responsibility of the data link layer 310 is providing a reliable mechanism for exchanging Transaction Layer Packets (TLPs) between two components a link. One side of the Data Link Layer 310 accepts TLPs assembled by the Transaction Layer 305, applies packet sequence identifier 311, i.e., an identification number or packet number, calculates and applies an error detection code, i.e., CRC 312, and submits the modified TLPs to the Physical Layer 320 for transmission across a physical to an external device.”, (see Wagh Col.10 lines 6-15).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of AKAVARAM et al. (US-20240143434-A1 hereafter AKAVARAM), in view of Benisty et al. (US-10558367-B2 hereafter Benisty), in view of Wagh et al. (US-20050144339-A1 hereafter Wagh), in further view of Qian et al. (US-20090060009-A1 hereafter Qian).
Regarding claim 13 Petrie in view of Akavaram, Benisty, and Wagh teach the data storage device of claim 12, Petrie in view of Akavaram, Benisty, and Wagh appears to be silent; however, Qian teaches wherein the controller is configured to wait for a second chunk upon determining that the first chunk is a non-data packet. (See Qian par.0050: “If at 606 a determination is made that a frame for the same receiver address is being processed (YES), at 610 a determination is made as to whether an aggregate already exists for the receiver address.”, par.0054: “At 618, the aggregate is stored until processing at 608. For example, the aggregate is stored in memory and data frames can be added to the aggregate while the first data frame is being processed. As another example, the aggregate can be stored until it reaches the maximum allowable size. The aggregate may also be stored until processing of the incoming data stream is completed”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, Benisty, and Wagh teaching of claim 12 with Qian teaching “The encryption process may need to know the size of the data being encrypted before encrypting the frame. For example, a first frame with a first destination address may be received at input 104. While this frame is being processed (e.g. encrypted) a second frame with the first destination address is received at input 104. Since the first frame is already being processed, an aggregate frame for the first destination address is created beginning with the second frame. The second frame is not processed before being put into the aggregate. Any subsequent frames for the first destination address received before processing of the first frame is completed is placed into the aggregate unprocessed.”, (see Qian par.0023).
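As an illustrative aid only (not part of the claim mapping; the class and method names are hypothetical), the aggregation behavior quoted above from Qian par.0023 may be modeled as: the first frame for a destination is encrypted immediately, later frames for the same destination are queued unprocessed into an aggregate, and the aggregate is released for encryption once processing of the first frame completes.

```python
class FrameAggregator:
    """Hypothetical sketch of Qian's aggregation-during-encryption flow."""

    def __init__(self):
        self.in_progress = set()   # destinations with a frame currently being encrypted
        self.aggregates = {}       # destination -> list of unprocessed queued frames

    def receive(self, dest, frame):
        if dest in self.in_progress:
            # A first frame is already being processed: place this frame
            # into the aggregate without encrypting it yet.
            self.aggregates.setdefault(dest, []).append(frame)
            return None
        # No frame in flight for this destination: process (encrypt) it now.
        self.in_progress.add(dest)
        return ("encrypted", frame)

    def processing_done(self, dest):
        # Encryption of the first frame finished: the stored aggregate, if
        # any, is released for encryption as a single unit.
        self.in_progress.discard(dest)
        agg = self.aggregates.pop(dest, [])
        return ("encrypted-aggregate", agg) if agg else None
```

This mirrors the quoted rationale that the encryption process needs to know the full size of the data before encrypting, so frames arriving mid-encryption are batched rather than processed individually.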
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of AKAVARAM et al. (US-20240143434-A1 hereafter AKAVARAM), in view of Benisty et al. (US-10558367-B2 hereafter Benisty), in view of Wagh et al. (US-20050144339-A1 hereafter Wagh), in further view of Filippo et al. (US-7266673-B2 hereafter Filippo).
Regarding claim 14 Petrie in view of Akavaram, Benisty, and Wagh teach the data storage device of claim 12, Petrie in view of Akavaram, Benisty, and Wagh appears to be silent; however, Filippo teaches wherein the controller is configured to perform the speculative usage upon determining that the first chunk is not a non-data packet. (see Filippo Col.2 lines 43-50: “performing data speculation for an operation (first chunk); a verification unit verifying the data speculation performed for the operation; the verification unit generating a speculation pointer indicating that the operation is not data-speculative (non-data packet) with respect to the verification unit in response to said verifying; and, in response to the speculation pointer indicating that the operation is not data-speculative with respect to the verification unit”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, Benisty, and Wagh teaching of claim 12 with Filippo teaching “a load store unit 126 may track which operations the load store unit has performed data speculation for. Each time that load store unit 126 verifies one of those data-speculative operations, the load store unit 126 may advance its speculation pointer to indicate that all operations up to the next operation for which the load store unit performed data speculation are non-data-speculative with respect to the load store unit.”, (see Filippo Col.10 lines: 14-21).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of AKAVARAM et al. (US-20240143434-A1 hereafter AKAVARAM), in view of Benisty et al. (US-10558367-B2 hereafter Benisty), in view of Wagh et al. (US-20050144339-A1 hereafter Wagh), in further view of Dautenhahn et al. (US-20140281705-A1 hereafter Dautenhahn).
Regarding claim 15 Petrie in view of Akavaram, Benisty, and Wagh teach the data storage device of claim 12, Petrie in view of Akavaram, Benisty, and Wagh appears to be silent; however, Dautenhahn teaches wherein the controller is configured to perform the protection check upon determining that the first chunk is the last chunk. (See Dautenhahn par.0027: “memory operations of IAVs may be interrupted and a combination of emulation of micro-operations and speculative execution of the macro-instruction is used to capture the global visible memory state as observed during the recording.”, par.0042: “a process relating to the last instruction of a chunk and begins with a pending instruction at block 402. At block 404, a determination is made as to whether the chunk packet's NTB equals zero. If yes, the process continues with executing the macro-instruction at block 406…At block 412, a determination is made as to whether the executed memory operations equal NTB. If not, the process returns to block 410. If yes, at block 414, the process continues with terminating the chunk and saving any NTB information for reference for this thread's next chunk and stalling the thread.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, Benisty, and Wagh teaching of claim 12 with Dautenhahn teaching “if the pending instruction is determined to be the last instruction of the chunk, at block 458, a determination is made as to if the thread's prior chunk had NTB value equaling zero. If yes, the last instruction of the chunk is executed”, (see Dautenhahn par.0045).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of AKAVARAM et al. (US-20240143434-A1 hereafter AKAVARAM), in view of Benisty et al. (US-10558367-B2 hereafter Benisty), in view of Wagh et al. (US-20050144339-A1 hereafter Wagh), in further view of Margolin et al. (US-12164793-B1 hereafter Margolin).
Regarding claim 16 Petrie in view of Akavaram, Benisty, and Wagh teach the data storage device of claim 12, Petrie in view of Akavaram, Benisty, and Wagh appears to be silent; however, Margolin teaches wherein the controller is configured to wait for a second chunk while performing the speculative usage. (See Margolin Col.8 lines 57-60: “process 100 may be executed by one or more processors, typically host processors to process incoming data packets, messages, and/or blocks, collectively designated packets herein after, into memory by one or more other controllers while the packets (second chunk) are still incoming,”, Col.9 lines 9-20: “rather than waiting for the predefined field(s) to arrive and update in the memory, the host processor may initiate a plurality of speculative execution threads each for processing the incoming packet(s) according to a receptive one of a plurality of possible (valid) values of one or more segments of the packet(s), for example, a field, a data value, and/or the like. Upon arrival of the field(s) or data value(s) according to which the packet should be processed, the host processor may maintain a process speculatively initiated to process the packet according to the actual value of the field or data that arrived from the controller and terminate all other threads.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Akavaram, Benisty, and Wagh teaching of claim 12 with Margolin teaching “speculatively initiating a plurality of execution threads each launched for processing an incoming packet according to a respective one of a plurality of valid values of one or more sections of the incoming packet according to which the packet should be processed may further increase packet processing performance since the host processor may not idly wait for these sections to arrive and update in memory but may rather start processing the packet significantly earlier thus expediting processing of the packet.”, (see Margolin Col.6 lines 54-62).
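As an illustrative aid only (not part of the claim mapping; all names are hypothetical, and the concurrency is modeled sequentially rather than with real threads), the approach quoted above from Margolin may be sketched as: processing is started speculatively for every valid value of a not-yet-arrived field, and once the field arrives, only the result for the actual value is kept while the rest are discarded.

```python
def speculative_process(packet_body, possible_field_values, handler, await_field):
    """Hypothetical sketch of Margolin's per-value speculation.

    handler(body, value) processes the packet assuming a given field value;
    await_field() blocks until the actual field value arrives.
    """
    # Speculatively process the packet for every valid candidate value
    # before the field has arrived (a real design would run these as
    # concurrent threads rather than sequentially).
    speculative_results = {v: handler(packet_body, v) for v in possible_field_values}
    # The field finally arrives from the controller.
    actual = await_field()
    # Keep only the result matching the actual value; the other speculative
    # results are discarded (analogous to terminating the losing threads).
    return speculative_results[actual]
```

The design choice mirrored here is latency hiding: the processor does not idle while the field is in flight, at the cost of redundant work for the candidate values that lose.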
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Petrie et al. (63608003, hereafter Petrie; reference 63608003 is the provisional application of publication US 20250190634 A1), in view of Qian et al. (US-20090060009-A1 hereafter Qian), in view of Wagh et al. (US-20050144339-A1 hereafter Wagh), in further view of Shibayama et al. (US-20020178349-A1 hereafter Shibayama).
Regarding claim 19 Petrie in view of Qian and Wagh teach the data storage device of claim 18, Petrie in view of Qian and Wagh appears to be silent; however, Shibayama teaches wherein the controller is further configured to perform a protection check and cancel the speculative usage upon determining the protection check fails. (See Shibayama par.0341: “control device 55 of the second embodiment cancels the speculative execution for a load instruction whose speculative execution is expected to fail, and executes the load instruction later in the program order by means of the non-speculative execution (protection check), thereby the failure rate of the speculative execution is reduced and thereby the program execution performance is improved.”).
It would have been obvious to someone of ordinary skill in the art before the
effective filing date of the claimed invention to have combined Petrie in view of Qian, and Wagh teaching of claim 18 with Shibayama teaching “the speculative execution of a load instruction whose speculative execution is expected to fail is canceled, and the load instruction is executed later by means of the non-speculative execution, thereby the failure rate of the speculative execution is reduced and thereby the program execution performance can be improved.”, (see Shibayama par.0191).
Conclusion
The prior art made of record and not relied upon is considered pertinent to
applicant's disclosure:
Benisty et al. (US 20190278477 A1): The performance level of the maximum payload size is compared to the performance level of the reduced payload size to determine which payload size has the higher performance level. The payload size having the higher performance level is then selected, and the storage device sends data in packets in the size of the payload size having the higher performance level to the host device. The storage device can select the most efficient payload size for sending data based on the size of the command received. As such, the storage device may alternate between sending data in the MPS and the reduced payload size as needed in order to achieve the highest performance level and to fully utilize the pipe interface.
Harriman et al. (US-20200151362-A1): A packet is to traverse to a link partner on a secure stream, authenticate a receiving port of the link partner, configure a transaction layer packet (TLP) prefix to identify the TLP as a secure TLP, associate the secure TLP with the secure stream, apply integrity protection and data encryption to the secure TLP, and transmit the secure TLP across the secure stream to the link partner. PCI Express uses packets to communicate information between components. Packets are formed in the transaction layer 205 and data link layer 210 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side the reverse process occurs, and packets get transformed from their physical layer 220 representation to the data link layer 210 representation and finally (for transaction layer packets) to the form that can be processed by the transaction layer 205 of the receiving device.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUILIO MUNGUIA whose telephone number is (571)270-5277. The examiner can normally be reached M-F 9:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUILIO MUNGUIA/
Examiner, Art Unit 2497

/ELENI A SHIFERAW/
Supervisory Patent Examiner, Art Unit 2497