Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, and 4-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat et al. (US 2023/0409511) in view of Panesar (US 2007/0294444).
It has been noted that a claimed invention is unpatentable if the differences between it and the prior art are "such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art." 35 U.S.C. § 103(a) (2000); KSR Int'l Co. v. Teleflex Inc., 127 S. Ct. 1727, 1734 (2007); Graham v. John Deere Co., 383 U.S. 1, 13-14 (1966).
In Graham, the Court held that the obviousness analysis is bottomed on several basic factual inquiries: "[(1)] the scope and content of the prior art are to be determined; [(2)] differences between the prior art and the claims at issue are to be ascertained; and [(3)] the level of ordinary skill in the pertinent art resolved." 383 U.S. at 17. See also KSR, 127 S. Ct. at 1734. "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." KSR, 127 S. Ct. at 1739.
"When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or in a different one. If a person of ordinary skill in the art can implement a predictable variation, § 103 likely bars its patentability." Id. at 1740.
"For the same reason, if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill." Id.
"Under the correct analysis, any need or problem known in the field of endeavor at the time of invention and addressed by the patent can provide a reason for combining the elements in the manner claimed." Id. at 1742.
As per claim 1, Guim Bernat teaches a networking switch (part of network interface device (NID) 120, Fig. 1, [0011-0012], connected to Switch 110 and Host Interface 160) comprising: a descriptor decoder (part of network interface device (500) with direct memory access (DMA) circuitry (510), Fig. 5) configured to obtain descriptor data ("Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa," [0062]) by decoding a direct memory access (DMA) descriptor ("..Descriptor queues 520 can include descriptors that reference data or packets in transmit queue 506 or receive queue 508," [0063]) packet (metadata in a header field and/or payload, [0020]) encapsulated in a packet transferred from a host device ([0011-0014, 0016]); and a link power controller (obviously part of resource manager 130) configured to adjust, based on the descriptor data ("service identifier can be determined based on header field content. Packet processor 126 can provide the service identifier and flow identifier to resource selection 132. Based on a configuration for a service identifier and flow identifier stored in configuration 140, resource selection 132 can select a circuitry to process the packet. However, if a configuration for a service identifier and flow identifier is not present in configuration 140, then resource selection 132 can select an available circuitry or processor that can apply best efforts for processing the data," [0018]), a power state ("Resource monitoring 130 can periodically generate utilization data 150 based on monitored load and power states of circuitry (e.g., accelerators 106-0, 106-1, 128-0 to 128-X, and other processors)," [0013]) of a link ("Based on selection of one or more circuitry to process the data, resource selection 132 can cause the packet headers, packet payloads, and/or other data to be provided to the selected accelerator on NID 120, part of platform 100," [0025]) to be passed ("resource selection 132 can determine complexity of processing a received packet header and/or payload and determined estimated time or number of clock cycles to complete processing of the network packet or other data based on the determined complexity. In some examples, a received packet can include metadata in a header field and/or payload and the metadata can indicate a level of processing resources or time or number of clock cycles estimated to process the data," [0020]) through in a bus interface ([0025]) connected to the networking switch.
Guim Bernat discloses the NID (Fig. 1) receiving a packet that can include header and/or payload data and further using this 'decoded' data to determine which particular processor or accelerator (128-0 to 128-X) will be selected based on the data processing measurement. Specifically, per Guim Bernat, "..resource monitoring 130 and/or resource selection 132 can determine a latency to process data by a candidate circuitry based on a time or number of clock cycles to transmit data over a link (e.g., switch 110, host interface 160, or an interconnect in NID 120) to the candidate circuitry, including a time or number of clock cycles to wake up or power up the link between NID 120 and the candidate circuitry to transfer data at a particular throughput level, as well as time or number of clock cycles to wake up or power up the candidate circuitry to operate at a particular throughput level, and can consider use of turbo boost or other manner of ramping increases to power and/or frequency of operation of the candidate circuitry" ([0030]). Therein, the changing of the power state is based on the metadata in a header field and/or payload ([0020]) in the encapsulated packet ([0021]). Further, as previously stated, "Resource monitoring 130 can periodically generate utilization data 150 based on monitored load and power states of circuitry (e.g., accelerators 106-0, 106-1, 128-0 to 128-X, and other processors)" ([0013]) of a link, and therefore would obviously change the power state of the accelerators 106-0, 106-1, 128-0 to 128-X, and other processors to a powered-off or reduced-power state, or wake up or power up the link between NID 120 and the candidate circuitry to transfer data at a particular throughput level.
In another analogous art, Panesar discloses that it is well known in the art that the format of a transaction layer packet (TLP) may include a header 701, a payload 703, and a digest 705. A TLP request packet may include a header 707 containing a requester ID and a message code. Message codes may include set configuration, read configuration descriptor, set address, read detailed configuration, set power state, reset device, and similar message codes. (Fig. 7, [0056-0059])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Guim Bernat's header field and/or payload (Transmission Control Protocol (TCP), User Datagram Protocol (UDP), etc.) with the teaching of Panesar, wherein the motivation for doing so would be to have a unique device message code, such as set power state, encapsulated in the packet's header field and/or payload, thereby improving operational access. (Panesar, [0056])
As per claim 2, Guim Bernat – Panesar teaches wherein the packet is a transaction layer packet (TLP) (Panesar, [0057]; Guim Bernat, [0016]: "..source and/or destination User Datagram Protocol (UDP) ports, source/destination TCP ports, or any other header field"), and the descriptor decoder is configured to decode the DMA descriptor packet (Guim Bernat teaches "Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa," [0062]) according to a determination that the TLP comprises the DMA descriptor packet (Panesar's TLP request packet may include a header 707 containing a requester ID and a message code ([0056]), combined with Guim Bernat's header field and/or payload (TCP, UDP, etc.)).
As per claim 4, Guim Bernat – Panesar teaches wherein the descriptor decoder is configured to, based on the determination that the TLP comprises the DMA descriptor packet (Panesar teaches a TLP having data descriptors, combined with Guim Bernat, which further discloses DMA descriptors), decode the DMA descriptor packet (Guim Bernat teaches "Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa," [0062]) as obtained from a data payload of the DMA descriptor packet (Guim Bernat: "..Descriptor queues 520 can include descriptors that reference data or packets in transmit queue 506 or receive queue 508," [0063]; metadata in a header field and/or payload, [0020], encapsulated in a packet transferred from a host device, [0011-0014, 0016]).
As per claim 5, Guim Bernat – Panesar teaches wherein the descriptor decoder is configured to add information used to adjust the power state of the link to be passed through by using a reserved area as defined by a standard descriptor format of the DMA descriptor packet.
Panesar discloses a type of reserve area of a TLP request packet that may include a header 707 containing a requester ID and a message code. Message codes may include set configuration, read configuration descriptor, set address, read detailed configuration, set power state, reset device and similar message codes. (Fig.7, [0056-0059])
Guim Bernat discloses metadata in a header field and/or payload ([0020]) encapsulated in a packet transferred from a host device ([0011-0014, 0016]). Guim Bernat's resource manager 130 performs some or all of the functions of a link power controller because resource manager 130 is configured to adjust, based on the descriptor data ("service identifier can be determined based on header field content. Packet processor 126 can provide the service identifier and flow identifier to resource selection 132. Based on a configuration for a service identifier and flow identifier stored in configuration 140, resource selection 132 can select a circuitry to process the packet. However, if a configuration for a service identifier and flow identifier is not present in configuration 140, then resource selection 132 can select an available circuitry or processor that can apply best efforts for processing the data," [0018]), a power state ("Resource monitoring 130 can periodically generate utilization data 150 based on monitored load and power states of circuitry (e.g., accelerators 106-0, 106-1, 128-0 to 128-X, and other processors)," [0013]) of a link ("Based on selection of one or more circuitry to process the data, resource selection 132 can cause the packet headers, packet payloads, and/or other data to be provided to the selected accelerator on NID 120, part of platform 100," [0025]).
As per claim 6, Guim Bernat – Panesar teaches wherein the descriptor data comprises an amount of data to be transferred according to the DMA descriptor packet, a source address of the data, and/or a destination address of the data. (Guim Bernat teaches "Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa," [0062], as obtained from a data payload of the DMA descriptor packet: "..Descriptor queues 520 can include descriptors that reference data or packets in transmit queue 506 or receive queue 508," [0063]; metadata in a header field and/or payload, [0020], encapsulated in a packet transferred from a host device, [0011-0014, 0016].)
As per claim 7, Guim Bernat – Panesar teaches wherein the link power controller is configured to adjust the power state of the link to be passed through based on an operation code of the DMA descriptor packet, a size of data transferred through the DMA descriptor packet, a state of a source of the data transferred through the DMA descriptor packet, and/or a state of a destination of the data. Guim Bernat's resource manager 130 performs some or all of the functions of a link power controller because resource manager 130 is configured to adjust, based on the descriptor data ("service identifier can be determined based on header field content. Packet processor 126 can provide the service identifier and flow identifier to resource selection 132. Based on a configuration for a service identifier and flow identifier stored in configuration 140, resource selection 132 can select a circuitry to process the packet. However, if a configuration for a service identifier and flow identifier is not present in configuration 140, then resource selection 132 can select an available circuitry or processor that can apply best efforts for processing the data," [0018]), a power state ("Resource monitoring 130 can periodically generate utilization data 150 based on monitored load and power states of circuitry (e.g., accelerators 106-0, 106-1, 128-0 to 128-X, and other processors)," [0013]) of a link ("Based on selection of one or more circuitry to process the data, resource selection 132 can cause the packet headers, packet payloads, and/or other data to be provided to the selected accelerator on NID 120, part of platform 100," [0025]).
As per claim 8, Guim Bernat – Panesar teaches wherein the link power controller is configured to determine whether to maintain the power state of the link to be passed through in an active state or to switch the power state to a low power state, based on whether a DMA operation is to be performed for a memory device, an amount of data to transferred by the DMA operation, a current power state of the link to be passed through, a power state to be changed of the link to be passed through, or latency for switching the power state of the link to be passed through.
Guim Bernat's resource manager 130 performs some or all of the functions of a link power controller because resource manager 130 is configured to adjust, based on the descriptor data ("service identifier can be determined based on header field content. Packet processor 126 can provide the service identifier and flow identifier to resource selection 132. Based on a configuration for a service identifier and flow identifier stored in configuration 140, resource selection 132 can select a circuitry to process the packet. However, if a configuration for a service identifier and flow identifier is not present in configuration 140, then resource selection 132 can select an available circuitry or processor that can apply best efforts for processing the data," [0018]), a power state ("Resource monitoring 130 can periodically generate utilization data 150 based on monitored load and power states of circuitry (e.g., accelerators 106-0, 106-1, 128-0 to 128-X, and other processors)," [0013]) of a link ("Based on selection of one or more circuitry to process the data, resource selection 132 can cause the packet headers, packet payloads, and/or other data to be provided to the selected accelerator on NID 120, part of platform 100," [0025]).
As per claim 9, Guim Bernat – Panesar teaches wherein the link power controller is configured to, in response to the amount of data being smaller than a predetermined reference size, switch the power state of the link to be passed through to the low power state independently of whether the packet comprises the DMA descriptor packet. Guim Bernat discloses that resource selection 132 can determine complexity of processing a received packet header and/or payload and determined estimated time or number of clock cycles to complete processing of the network packet or other data based on the determined complexity; in some examples, a received packet can include metadata in a header field and/or payload, and the metadata can indicate a level of processing resources or time or number of clock cycles estimated to process the data. ([0020], [0025])
As per claim 10, Guim Bernat – Panesar teaches wherein the link power controller is configured to transfer the DMA descriptor packet, which has an indication of an adjusted power state of the link to be passed through, to a DMA engine. (Guim Bernat teaches “Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa”, [0062])
As per claim 11, Guim Bernat – Panesar teaches wherein the link power controller is configured to, in response to the DMA operation for the memory device being performed as the DMA descriptor packet, which has the adjusted power state of the link to be passed through, is transferred to the DMA engine, and an input/output (I/O) device connected to the networking switch being not used for a predetermined period of time, switch the power state of the link connected to the I/O device to the low power state. Guim Bernat discloses that a circuitry can be in one of several states: (1) resource on (e.g., interconnect to circuitry and circuitry are powered-on), (2) resource on but in reduced power state, or (3) resource powered off. In some examples, resource monitoring 130 and/or resource selection 132 can determine a time or number of clock cycles to process data by circuitry that is in a low or reduced power state or powered off state. Resource monitoring 130 and/or resource selection 132 can determine a latency to process data by a candidate circuitry based on a time or number of clock cycles to transmit data over a link (e.g., switch 110, host interface 160, or an interconnect in NID 120) to the candidate circuitry, including a time or number of clock cycles to wake up or power up the link between NID 120 and the candidate circuitry to transfer data at a particular throughput level, as well as time or number of clock cycles to wake up or power up the candidate circuitry to operate at a particular throughput level, and can consider use of turbo boost or other manner of ramping increases to power and/or frequency of operation of the candidate circuitry. If the time for the candidate circuitry to process the data is within a permitted time-to-completion (or number of clock cycles), the candidate circuitry can be selected. ([0013, 0020, 0025])
As per claim 12, Guim Bernat – Panesar teaches wherein the DMA engine is included in the I/O device or the networking switch. (Guim Bernat, note Fig. 1: network interface device (NID) 120, [0011-0012], connected to Switch 110 and Host Interface 160)
As per claim 13, Guim Bernat – Panesar teaches wherein the networking switch is configured to, based on the networking switch including the DMA engine and based on recognizing access of the host device to a DMA control register in the networking switch, receive the descriptor data from the host device, and adjust the power state of the link to be passed through to the active state or the low power state by interpreting the descriptor data. (Guim Bernat teaches "including a time or number of clock cycles to wake up or power up the link between NID 120 and the candidate circuitry to transfer data at a particular throughput level, as well as time or number of clock cycles to wake up or power up the candidate circuitry to operate at a particular throughput level, and can consider use of turbo boost or other manner of ramping increases to power and/or frequency of operation of the candidate circuitry.")
Guim Bernat further discloses that "Direct memory access (DMA) engine 552 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa" ([0062]) by decoding a direct memory access (DMA) descriptor ("..Descriptor queues 520 can include descriptors that reference data or packets in transmit queue 506 or receive queue 508," [0063]) packet (metadata in a header field and/or payload, [0020]).
As per claim 14, Guim Bernat – Panesar teaches wherein, based on a number of I/O devices (Guim Bernat, Fig. 1, Fig. 6, [0070]: "..600 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination.") connected to the networking switch being more than one ([0071]: "..In some examples, packet processing device or network interface device 650 can refer to one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU). An example IPU or DPU is described with respect to FIG. 5"), and the I/O devices performing a DMA operation for a memory device, the host device is configured to maintain the link to be passed through in an active state in a bus interface of a target I/O device to perform the DMA operation ([0089]: "DMA circuitry, a network interface, a host interface, an interface, and circuitry to: for a packet flow, determine available hardware resources, wherein the available hardware resources include a hardware resource in a reduced power (active) state") among the plurality of I/O devices, and transfer the DMA descriptor packet through the link in the active state. (Guim Bernat, [0013, 0020, 0025])
As per claim 15, Guim Bernat – Panesar teaches wherein the link power controller is configured to, based on the number of host devices being more than one and a first packet comprising the DMA descriptor packet and a second packet not comprising the DMA descriptor packet being transferred from the plurality of host devices ([0089]: Guim Bernat further teaches DMA circuitry, a network interface, a host interface, an interface, and circuitry to, for a packet flow, determine available hardware resources, wherein the available hardware resources include a hardware resource in a reduced power (active) state): switch, to an active state, a link to be passed through by the first packet; and switch, to a low power state, a link to be passed through by the second packet. Guim Bernat teaches that a flow can be a sequence of packets being transferred between two endpoints, wherein buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. ([0070])
As per claim 16, see the rejection for claims 1, 5, 7, 8, 11, and 14 above.
As per claim 17, see the rejection for claims 7, 8, 11, and 14 above.
As per claim 18, see the rejection for claims 1, 5, 7, 8, 11, and 14 above.
As per claim 19, see the rejection for claims 1 and 7-11 above.
As per claim 20, see the rejection for claims 1, 7-11, 13, and 14 above.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat et al. (US 2023/0409511) in view of Panesar (US 2007/0294444), and further in view of Iyer et al. (US 12,306,778).
As per claim 3, neither Guim Bernat nor Panesar expressly teaches wherein the descriptor decoder is configured to determine that the TLP comprises the DMA descriptor packet by decoding a reserved area comprising either a reserved bit included in a header of the TLP or a prefix of the TLP based on a smart data accelerator interface (SDXI) protocol, wherein the reserved area is defined by the host device.
However, Panesar discloses a type of reserved area of a TLP request packet that may include a header 707 containing a requester ID and a message code. Message codes may include set configuration, read configuration descriptor, set address, read detailed configuration, set power state, reset device, and similar message codes. (Fig. 7, [0056-0059])
In another analogous art, Iyer discloses a system layout including smart data accelerator interface (SDXI) hardware and DMA memory-addressed data structures, wherein Iyer's system includes offload DMA engines for performing memory-to-memory data movement of current SDXI descriptors for a source and a destination transaction. (Iyer, col. 31, line 45 to col. 32, line 13)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Guim Bernat-Panesar's TLP header field and/or payload to be further implemented in a system layout including SDXI hardware as taught by Iyer, because adding an SDXI function would expand the flexibility of Guim Bernat-Panesar when performing data movement operations.
RELEVANT ART CITED BY THE EXAMINER
The following prior art made of record and relied upon is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c). McBridge et al. (US 10,528,494) further discloses multiple DMA engines provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines, as well as those involving multiple CPUs and multiple DMA engines. (Note Abstract)
Conclusion
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c). In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tammara Peyton, whose telephone number is (571) 272-4157. The examiner can normally be reached between 8:30-6:00 Monday to Thursday (the examiner is off every first Friday) and 7:30-4:00 every second Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571) 272-4176. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Any inquiry of a general nature or relating to the status of this application should be directed to the Group receptionist, whose telephone number is (571) 272-2100.
/TAMMARA R PEYTON/Primary Examiner, Art Unit 2184 March 5, 2026