Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 2-21 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection mailed on 08/22/2025. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over Smith (US Patent No. 9,432,298 B1) in view of Arellano et al. (US PGPub No. 2017/0091108 A1, hereinafter referred to as Arellano), further in view of Ho (US PGPub No. 2016/0378545 A1), and further in view of Terrell et al. (US PGPub No. 2008/0008202 A1, hereinafter referred to as Terrell).
Referring to claim 2, Smith discloses a method comprising:
receiving a request {request “Memory access requests”, Col 9, lines 40-45.} to communicate data from a device to a system {host system performing “step 19-910”, see Fig. 19-9, Col 89, lines 50-58.}, the system comprising a processing device, a memory device {see Fig. 3, showing both processing device "CPU" and memory device "DIMMs" (Col 25).};
Smith does not appear to explicitly disclose a first communication path configured to communicate data from the device to the processing device at the system, and a second communication path configured to communicate data from the device to the memory device;
generating, based on the request, a routing instruction;
Furthermore, Arellano discloses a first communication path configured to communicate {the request “TLP” for routing among paths, see Fig. 9, [0114]} data from the device to the processing device at the system {“traffic class is an end-to-end label of a transaction layer packet… different points along the data path” [0051]; because PCIe utilizes TLPs for routing among paths, such as “transmitting paths 916 and 917 in a PCIe link”, see Fig. 9, [0114]}, and a second communication path configured to communicate data from the device to the memory device {“transmitting paths 916 and 917 in a PCIe link” to the memory device as claimed, see Fig. 9, [0114]};
generating, based on the request, a routing instruction {routing instruction “traffic class” ([0069]) as claimed, based on the claimed request: “transaction layer 705 is the assembly [generating]… of packets (i.e., transaction layer packets, or TLPs)” ([0069])};
Smith/Arellano and Ho are analogous because they are from the same field of endeavor, managing PCIe compatible devices.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Smith and Arellano before him or her, to modify Smith’s data handling device by incorporating Arellano’s corresponding circuitry in “computing system 300” ([0049], see Fig. 3) implementing DDIO operation ([0050]).
The suggestion/motivation for doing so would have been to take advantage of DDIO to allow I/O transactions to target the cache directly (Arellano [0002]).
Neither Smith nor Arellano appears to explicitly disclose routing, based on the routing instruction, using the first communication path, a first portion of the data from the device to the processing device at the system; and
routing, based on the routing instruction, using the first communication path, a second portion of the data from the device to the processing device at the system.
However, Ho discloses routing, based on the routing instruction {“[routing instruction part of] software call bypasses 51, 53 and 55, shown as bi-direction dotted lines, represent call, data and the like actually moving between multi-core processor 12 and main memory 18, I/O bypasses 41, 43 and 45 represent I/O events”, see Fig. 5 [0183]}, using the first communication path {“controllers 20 in order to have I/O related to that application and core routed to the appropriate cache and core”, see Fig. 5 [0182], 1st sentence}, a first portion of the data from the device {“[data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]} to the processing device at the system {“the bi-directional arrows in this and other figures, such [host] calls and events typically move in both directions” ([0175], see Fig. 5); such as “host OS which would traditionally direct that call [requests] to OS kernel-space 19 for processing by host OS kernel facilities 107”, [0176], 1st sentence}; and
routing, based on the routing instruction {“software call bypasses 51, 53 and 55, shown as bi-direction dotted lines, represent call, data and the like actually moving between multi-core processor 12 and main memory 18, I/O bypasses 41, 43 and 45 represent I/O events”, see Fig. 5 [0183]}, using the first communication path {“controllers 20 in order to have I/O related to that application and core routed to the appropriate cache and core”, see Fig. 5 [0182], 1st sentence}, a second portion {“[other data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]; “order to maintain cache coherence by operating core 0 in a parallel processing mode [for other data portions]”, see Figs. 1, 2, and 5} of the data from the device {“[data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]} to the processing device at the system {“the bi-directional arrows in this and other figures, such [host] calls and events typically move in both directions” ([0175], see Fig. 5); such as “host OS which would traditionally direct that call [requests] to OS kernel-space 19 for processing by host OS kernel facilities 107”, [0176], 1st sentence}.
Smith/Arellano and Ho are analogous because they are from the same field of endeavor, managing PCIe compatible devices.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Smith/Arellano and Ho before him or her, to modify Smith/Arellano’s data handling device by incorporating Ho’s “[parallel processing] I/O 77, 82 and 83 serve to ‘intercept’ events and data from one or more of a plurality of I/O controllers 20” (see Fig. 5, [0180], last sentence).
The suggestion/motivation for doing so would have been to select a set of resource management services to be applied to execution of software applications in each particular group (thereby selectively replacing and emulating the SMP OS's native and equivalent resource management services), based on the related requirements for resource management services of that group, in order to reduce processing overhead and architectural limitations of the SMP OS's native resource management services by reducing mode switching, contentions, non-locality of caches, inter-cache communications and/or kernel synchronizations during execution of software applications in the first plurality of software applications (Ho [0072]).
Therefore, it would have been obvious to combine Ho with Smith/Arellano to obtain the invention as specified in the instant claim(s).
None of Smith, Arellano, and Ho appears to explicitly disclose generating, based on the request and the first communication path, a routing instruction; wherein the processing device is a processor; wherein the processor comprises a first physical device, and the memory device comprises a second physical device.
However, Terrell discloses generating, based on the request {“a frame received at a first port logic circuit is tested” (see Fig. 16, [0261]), that frame containing the request “the frame as a link service request” (see Fig. 16, [0259])} and the first communication path {said frame and request sent via first path “acting on link service requests, providing link service replies, advising proxy processes (e.g., of link service actions, link state, network”, see Fig. 2, [0195]}, a routing instruction {“the flow and subflow results are used to build a [routing instruction] forward frame”, step 1728, see Figs. 17-20, [0261]}; wherein the processing device is a processor {“Routing processor 1161”, see Figs. 17-20, [0260], 3rd sentence}; wherein the processor comprises a first physical device {“A proxy process of managing processor 1112 acting as an [physical device] initiator may send 1629 a message”, see Fig. 16, [0258]}, and the memory device {memory device “CAM 1306”, see Fig. 16, [0206]} comprises a second physical device {“memory controller 1302”, see Fig. 2, [0206]}.
Smith/Arellano/Ho and Terrell are analogous because they are from the same field of endeavor, managing PCI compatible devices.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Smith/Arellano/Ho and Terrell before him or her, to modify Smith/Arellano/Ho’s data handling device by incorporating Terrell’s “routers of network 101” performing “use commands of SNMP to read any register or region of memory in a router 102-105” (see Fig. 1, [0096], last two sentences).
The suggestion/motivation for doing so would have been to implement a router that detects page boundary crossings and initiates data frames, so that a requester may operate on a virtual resource without knowledge of the structure and organization of the corresponding nonvirtual resource, simplifying such operations from the point of view of the requester (Terrell [0038], [0039], paraphrased).
Therefore, it would have been obvious to combine Terrell with Smith/Arellano/Ho to obtain the invention as specified in the instant claim(s).
As per claim 3, the rejection of claim 2 is incorporated and Arellano discloses wherein the first portion of the data is communicated using the first communication path {“I/O interconnect 316”, see Fig. 3, [0051].} based on a direct data IO protocol {“for DDIO operation”, [0050]}.
As per claim 4, the rejection of claim 2 is incorporated and Arellano discloses wherein the routing instruction is generated {routing instruction comprising “attribute field 804”, see Fig. 8, [0106].} based on a characteristic of the data {further expanding “attribute field 804” describing a plurality of characteristics “allows modification of the default handling of transactions” per communication path, see Fig. 8, [0106].}.
As per claim 5, the rejection of claim 2 is incorporated and Arellano discloses wherein the request is a first request {the request “TLP” for routing among paths, see Fig. 9, [0114]}, the data is first data {data associated with a “TLP routing” request as claimed, [0114]}, the routing instruction is a first routing instruction {“packets (i.e., transaction layer packets, or TLPs)”, [0069]}, the method further comprising:
receiving a second request {multiple requests “TLP for routing” among paths “transmitting paths 916 and 917 in a PCIe link”, see Fig. 9, [0114]} to communicate second data from the device to the system {devices and systems respectively communicating via “transmitting paths 916 and 917 in a PCIe link”, see Fig. 9, [0114]};
generating, based on the second request, a second routing instruction {multiple routing instructions per “traffic class” category ([0069]) as claimed, based on the claimed request: “transaction layer 705 is the assembly [generating]… of packets (i.e., transaction layer packets, or TLPs)” ([0069])};
Furthermore, Ho discloses routing, at the host, based on the second routing instruction {“[routing instruction part of] software call bypasses 51, 53 and 55, shown as bi-direction dotted lines, represent call, data and the like actually moving between multi-core processor 12 and main memory 18, I/O bypasses 41, 43 and 45 represent I/O events”, see Fig. 5 [0183]}, using the second communication path {“controllers 20 in order to have I/O related to that application and core routed to the appropriate cache and core”, see Fig. 5 [0182], 1st sentence}, a first portion of the second data from the device {“[data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]} to the memory device {“the bi-directional arrows in this and other figures, such [host] calls and events typically move in both directions” ([0175], see Fig. 5); such as “host OS which would traditionally direct that call [requests] to OS kernel-space 19 for processing by host OS kernel facilities 107”, [0176], 1st sentence};
and routing, at the host, based on the second routing instruction {“software call bypasses 51, 53 and 55, shown as bi-direction dotted lines, represent call, data and the like actually moving between multi-core processor 12 and main memory 18, I/O bypasses 41, 43 and 45 represent I/O events”, see Fig. 5 [0183]}, using the second communication path {“controllers 20 in order to have I/O related to that application and core routed to the appropriate cache and core”, see Fig. 5 [0182], 1st sentence}, a second portion {“[other data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]; “order to maintain cache coherence by operating core 0 in a parallel processing mode [for other data portions]”, see Figs. 1, 2, and 5} of the second data from the device {“[data portions] related I/O events are processed by the same core to maintain cache coherency”, see Fig. 5 [0184]} to the memory device {“the bi-directional arrows in this and other figures, such [host] calls and events typically move in both directions” ([0175], see Fig. 5); such as “host OS which would traditionally direct that call [requests] to OS kernel-space 19 for processing by host OS kernel facilities 107”, [0176], 1st sentence}.
As per claim 6, the rejection of claim 5 is incorporated and Smith discloses wherein the first portion of the second data is communicated using the second communication path based on a direct memory access protocol {“DMA operations”, see Fig. 15, Col 44, lines 48-50.}.
As per claim 7, the rejection of claim 5 is incorporated and Arellano discloses wherein: the first routing instruction is generated based on a first characteristic of the first data {“attribute field 804” describing a plurality of characteristics “allows modification of the default handling of transactions” per communication path, see Fig. 8, [0106].}; and the second routing instruction is generated based on a second characteristic of the second data {“attribute field 804” describing a plurality of characteristics “allows modification of the default handling of transactions” per communication path, see Fig. 8, [0106].}.
As per claim 8, the rejection of claim 2 is incorporated and Smith discloses further comprising storing the routing instruction {“scratchpad memory”, last line of Col 444.} at a data transfer device {“Tx datapath”, see Fig. 26-5, Col 444, lines 54-57.}.
As per claim 9, the rejection of claim 8 is incorporated and Smith discloses wherein the routing instruction is communicated from the data transfer device to a routing control register {see Fig. 16, “routing of transactions" (Col 50, lines 30-40) via register "One or more routing tables may be stored in each logic chip... registers" (Col 50, lines 53-55).} of the system {host system performing “step 19-910” (see Fig. 19-9), Col 89, lines 50-58.}.
Claims 10-17 are apparatus claims reciting functionality corresponding to that of method claims 2-9, with some limitations reordered but maintaining the same scope, and are thereby rejected under the same rationale as claims 2-9 recited above. Additionally, regarding claim 17, Smith discloses wherein the system further comprises a demultiplexer configured to {“FIG. 27-2, bus 27-234 and bus 27-230 may be demultiplexed from (e.g. split from, sourced by, connected with, coupled to,”, Col 491, lines 44-46}:
communicate the first data from the data transfer device to the processor {“Tx datapath”, see Fig. 26-5, Col 444, lines 54-57.};
and communicate the second data from the data transfer device to the memory device {“scratchpad memory”, last line of Col 444.}.
Claims 18-21 are system claims reciting functionality corresponding to that of method claims 2-9, and are thereby rejected under the same rationale as claims 2-9 recited above. Additionally, Terrell discloses wherein the first data {“When context table 626 and/or virtual context table 630 are stored locally, a [first data] frame received at a first port logic circuit is tested”, [0261]} has a destination {“whether the routing processor has access to [destination] context (1726); and, if context is stored elsewhere, the flow and subflow results are used to build a forward frame (1728), marked for further processing by another [destination] routing processor (1730) where the context is available”, see Fig. 2, [0261]} at an end of the first communication path {said frame and request sent via first path “acting on link service requests, providing link service replies, advising proxy processes (e.g., of link service actions, link state, network”, see Fig. 2, [0195]}; and wherein the processor {“Routing processor 1161”, see Figs. 17-20, [0260], 3rd sentence} is the destination for the first data {“the flow and subflow results are used to build a [routing/destination instruction] forward frame”, step 1728, see Figs. 17-20 [0261]}.
Smith/Arellano/Ho and Terrell are analogous because they are from the same field of endeavor, managing PCI compatible devices.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Smith/Arellano/Ho and Terrell before him or her, to modify Smith/Arellano/Ho’s data handling device by incorporating Terrell’s “routers of network 101” performing “use commands of SNMP to read any register or region of memory in a router 102-105” (see Fig. 1, [0096], last two sentences).
The suggestion/motivation for doing so would have been to implement a router that detects page boundary crossings and initiates data frames, so that a requester may operate on a virtual resource without knowledge of the structure and organization of the corresponding nonvirtual resource, simplifying such operations from the point of view of the requester (Terrell [0038], [0039], paraphrased).
Therefore, it would have been obvious to combine Terrell with Smith/Arellano/Ho to obtain the invention as specified in the instant claim(s).
Response to Arguments
Applicant’s arguments, filed on 12/19/2025, have been fully considered but are moot in view of the new ground(s) of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are indicative of the current state of the art regarding claim 2’s “processor”, “memory device”, or “communication path”: US 11256641 B2, US 11249647 B2, US 20220164104 A1, US 11252232 B2, US 11232058 B2, US 20210208572 A1, US 20210019138 A1, US 20190226450 A1, US 10216596 B1, US 20180349235 A1, and US 20170168883 A1.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A. BARTELS, whose telephone number is (571) 270-3182. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dr. Henry Tsai can be reached on 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.A.B./
Examiner
Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184