DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claim 18 has been cancelled. Claim 21 has been added. Claims 19-20 have been amended. Claims 1-17 and 19-21 are currently pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/28/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim recites “A non-volatile readable storage medium,” but neither the claim nor the specification states that the non-volatile readable storage medium is also non-transitory. Applicant’s Specification filed 10/21/2024 states on page 22, lines 16-17 that “The non-volatile readable storage medium can be a transient storage medium or a non-transient storage medium”; thus, the claimed subject matter could encompass transitory signals.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 13-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ishida (US 2007/0204074) in view of McBride (US 2018/0300634).
Regarding claim 1, Ishida teaches a data transmission method, comprising: receiving a transmission request (Fig. 2, DMA control unit 11 receives request from CPU 20; Paragraph 0053, Upon receipt of a data transfer directive from the CPU 20 of the information processing device 1), wherein the transmission request comprises a source of a target data block (Fig. 2, Source of target data is memory 13; Paragraph 0053, the DMA control unit 11 determines… data to be transferred (hereinafter referred to as transfer data) from the memory 30 or the built-in memory 13), a target of the target data block (Fig. 2, Target is communication controller 15; Paragraph 0057, controller 15 is a transfer destination device, and the data transfer destination in the DMA data transfer), and a length of the target data block (Fig. 4, Length of data block is A1-Am; Paragraph 0012, data divided and transferred by a DMA engine 221 is referred to as block data. The A1, A2, . . . , Am (1 to m are positive integers in the descriptions above and below) are block data); obtaining the target data block from the source, and dividing the target data block into a plurality of data sub-blocks according to the length of the target data block (Fig. 2, DMA control unit 11 causes data transfer from memory 13 (i.e. the source) and causes division of data at the DMA controller 12 based on division size (i.e. length); Paragraph 0062, DMA control unit 11 interprets the communication command from the information processing device 1, determines the division size of the transfer data such that each DMA engine 120 can transfer the data, and issues a directive to transfer data by the DMA to the DMA controller 12); and distributing the plurality of the data sub-blocks to a plurality of direct memory access (DMA) engines (Fig. 2, Plurality of DMA engines 120 receive distributed divided sub-blocks from DMA controller 12; Paragraph 0053, DMA control unit 11 determines the division size of data to be transferred… and directs (a plurality of DMA engines 120 of) the DMA controller 12 to transfer data by the DMA), so that each of the DMA engines respectively transmits corresponding one or more data sub-blocks to the target to complete transmission of the target data block (Fig. 2, DMA engines 120 transmit predetermined sub-blocks to target 15; Paragraph 0055, plurality of DMA engines 120 each divide the transfer data… and transfer the block data of the divided transfer data to the communication controller 15).
Ishida does not teach the data transmission method, comprising: wherein the transmission request comprises a source address of a target data block, a target address of the target data block, and a length of the target data block; obtaining the target data block from the source address, and dividing the target data block into a plurality of data sub-blocks according to the length of the target data block.
McBride teaches the data transmission method, comprising: receiving a transmission request, wherein the transmission request comprises a source address of a target data block, a target address of the target data block (Fig. 2, Data transfer descriptor 202 includes source address and target address of data block 220; Paragraph 0041, data transfer may be configured using a descriptor(s) 202 that generally includes the source memory address, destination memory address), and a length of the target data block (Fig. 2, Data transfer descriptor 202 includes size 220; Paragraph 0041, each of the descriptors 202 includes the parameters X dimension and Y dimension for a data block for transfer); obtaining the target data block from the source address (Fig. 1, Source address references source memory 108 where data is retrieved; Paragraph 0040, DMA engine 102 moves blocks of data from a source memory to a destination memory, such as from a source memory address to a destination memory address), and dividing the target data block into a plurality of data sub-blocks according to the length of the target data block (Fig. 3, Data block 220 is fragmented into data blocks 220; Paragraph 0047, DMA fragmenter 224 fragments, for example, the data block 220 associated with a descriptor 202).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida’s method to incorporate the teachings of McBride and include a DMA source address, target address, and length in the data transfer request from the CPU.
One of ordinary skill in the art would be motivated to make the modifications in order to implement a conventional DMA command format that enables data to be rapidly transmitted between a source and target address without burdening the processor, thus increasing processing speed and reducing power consumption (See McBride: Paragraphs 0005-0006 and 0014).
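For illustration of the combined teaching mapped above, the divide-and-distribute flow can be sketched as follows. This is a hypothetical Python sketch: the names (TransmissionRequest, divide_into_sub_blocks, distribute) and the round-robin assignment policy are assumptions introduced here for exposition only, and are not drawn from Ishida, McBride, or the claims.

```python
from dataclasses import dataclass

@dataclass
class TransmissionRequest:
    source_addr: int   # source address of the target data block (per McBride)
    target_addr: int   # target address of the target data block (per McBride)
    length: int        # length of the target data block

def divide_into_sub_blocks(data: bytes, division_size: int) -> list[bytes]:
    """Split the target data block into sub-blocks of at most division_size bytes."""
    return [data[i:i + division_size] for i in range(0, len(data), division_size)]

def distribute(sub_blocks: list[bytes], engines: list[str]) -> dict[str, list[bytes]]:
    """Hand the sub-blocks to the DMA engines; round-robin is an assumed policy."""
    assignments: dict[str, list[bytes]] = {engine: [] for engine in engines}
    for i, block in enumerate(sub_blocks):
        assignments[engines[i % len(engines)]].append(block)
    return assignments
```

Under this sketch, a 10-byte block with a division size of 4 yields three sub-blocks of lengths 4, 4, and 2, spread across the available engines.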
Regarding claim 13, Ishida in view of McBride teaches the data transmission method of claim 1. Ishida teaches the data transmission method comprising wherein the dividing the target data block into a plurality of data sub-blocks according to the length of the target data block comprises: dividing the target data block into the plurality of the data sub-blocks according to the length of the target data block and an optimal data transmission length of a DMA engine (Fig. 4, Target data block A1-Am is divided into optimal data transmission lengths based on the size of the DMA engines; Paragraph 0012, the data divided and transferred by a DMA engine 221 is referred to as block data. The A1, A2, . . . , Am (1 to m are positive integers in the descriptions above and below) are block data), wherein a data length of each data sub-block is not greater than the optimal data transmission length of the DMA engine (Fig. 4, Data lengths of A1 to Am are equal to a maximum predetermined size of each DMA engine; Paragraph 0053, transferable size (maximum length of data) for the DMA engine 120 is predetermined, and equal for each DMA engine 120).
Regarding claim 14, Ishida in view of McBride teaches the data transmission method of claim 13.
McBride teaches the data transmission method comprising wherein transmission efficiency of the DMA engine during transmission of a data block with a data length not greater than the optimal data transmission length is higher than target transmission efficiency (Fig. 4, A threshold performance is calculated with the optimal transmission length in steps 406 and 408, where if it is greater than the threshold then it is a higher efficiency for the system to use; Paragraph 0060, DMA fragmenter 224 may calculate or estimate the duration needed to transfer the data block based on the technology particulars of the system 100… Paragraph 0032, compare the duration to… a transfer duration threshold: when the duration… is greater than or equal to the transfer duration threshold, fragment the data block).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride’s method to further incorporate the teachings of McBride and include calculating performance metric thresholds based on system requirements.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure quality-of-service (QoS) values related to the data transfer system are met (See McBride: Paragraph 0058).
Regarding claim 15, Ishida in view of McBride teaches the data transmission method of claim 13. Ishida teaches the data transmission method comprising wherein the dividing the target data block into the plurality of data sub-blocks according to the length of the target data block and an optimal data transmission length of a DMA engine comprises: averagely dividing the target data block into the plurality of the data sub-blocks with an equal data block length according to the length of the target data block and the optimal data transmission length of the DMA engine (Fig. 3, Data A transferred from DMA engine 120 can be specified in an average size to transfer which is equal to a predetermined optimal size of the DMA engine; Paragraph 0076, DMA control unit 11 determines the division size of the transfer data A such that the data can be transferred by the DMA engine 120, and issues to the DMA controller 12 a data transfer directive (DMA data transfer request) to divide the transfer data A into a specified size and transfer the transfer data A… Paragraph 0053, transferable size (maximum length of data) for the DMA engine 120 is predetermined, and equal for each DMA engine 120).
Regarding claim 16, Ishida in view of McBride teaches the data transmission method of claim 15. Ishida teaches the data transmission method comprising wherein the distributing the plurality of the data sub-blocks to a plurality of DMA engines comprises: averagely distributing the plurality of the data sub-blocks to the plurality of the DMA engines (Fig. 3, DMA controller 12 averagely distributes data A to each DMA engine 120; Paragraph 0055, includes a plurality of DMA engines 120. The plurality of DMA engines 120 each divide the transfer data from the memory 30).
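The equal-size division capped by an optimal transmission length, as mapped for claims 13 and 15 above, can be illustrated with the following hypothetical Python sketch; the function name and the ceiling-based sizing rule are assumptions for exposition, not a construction of the claims or the references.

```python
import math

def equal_division_size(total_length: int, optimal_length: int) -> int:
    """Choose one equal sub-block size that never exceeds the engine's
    optimal data transmission length (an assumed sizing rule)."""
    # Fewest sub-blocks such that each fits within the optimal length
    num_sub_blocks = math.ceil(total_length / optimal_length)
    # Equal division of the total length across that many sub-blocks
    return math.ceil(total_length / num_sub_blocks)
```

For example, a 100-unit block with an optimal length of 40 divides into three equal sub-blocks of 34 units each, every one of which stays within the 40-unit optimal length.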
Regarding claim 19, Ishida in view of McBride teaches the data transmission method of claim 1. Ishida teaches an electronic device, comprising: a memory, configured to store a computer program (Fig. 2, Memory 13 and 30 stores instructions); and a processor (Fig. 2, Processor 20), configured to implement, when run the computer program, the steps of the data transmission method according to claim 1 (Fig. 2, CPU 20 runs a program using memory 30; Paragraph 0050, an information processing device 1 is, for example, a computer independently operating by itself, and includes a CPU (Central Processing Unit) 20, a memory 30).
Regarding claim 20, Ishida in view of McBride teaches the data transmission method of claim 1. Ishida teaches a non-volatile readable storage medium, having a computer program stored thereon, wherein the computer program, when run by a processor, implements the steps of the data transmission method according to claim 1 (Fig. 2, CPU 20 runs a program using memory 30; Paragraph 0050, an information processing device 1 is, for example, a computer independently operating by itself, and includes a CPU (Central Processing Unit) 20, a memory 30).
Claims 2-5, 17, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Ishida (US 2007/0204074) in view of McBride (US 2018/0300634) and further in view of Potter (US 6,832,279).
Regarding claim 2, Ishida in view of McBride teaches the data transmission method of claim 1. Neither Ishida nor McBride teaches the data transmission method comprising wherein in a case that the transmission request comprises a plurality of transmission requests, after the receiving a transmission request, the method further comprises: sorting the plurality of the transmission requests in chronological order; and processing each of the transmission requests according to a sorting order.
Potter teaches the data transmission method comprising wherein in a case that the transmission request comprises a plurality of transmission requests (Fig. 3, Source 310 contains queue 330 with a plurality of requests; Col. 5, Lines 56-57, first-in first-out (FIFO) queues 330 for storing various requests and response packets), after the receiving a transmission request (Fig. 3, Request is received in posted queue 332 and non-posted queue 334), the method further comprises: sorting the plurality of the transmission requests in chronological order (Fig. 3, Queues 332 and 334 are FIFO queues and thus process requests in first-in, first-out order); and processing each of the transmission requests according to a sorting order (Fig. 3, End target 320b receives the transmission requests from queues 332 and 334 to retrieve/store (i.e. processing the request) data in a FIFO sort order; Col. 6, Lines 19-20, ensure ordering of transactions over an external I/O bus 300).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride’s method to incorporate the teachings of Potter and include FIFO sorting of the requests.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure data transfer fairness for clients/users, thus improving system performance and data transfer efficiency while complying with service-level guarantees (See Potter: Col. 2, Lines 60-67).
Regarding claim 3, the combination of Ishida/McBride/Potter teaches the data transmission method of claim 2.
Potter teaches the data transmission method comprising wherein the sorting the plurality of the transmission requests in chronological order comprises: respectively writing the plurality of the transmission requests into a request queue in chronological order, the request queue following a first-in first-out rule (Fig. 3, Queues 330 are FIFO queues thus requests are written first in; Col. 5, Lines 57-59, A first FIFO queue 332 is designated for storing posted requests, i.e., requests for which there are typically no responses); and the processing each of the transmission requests according to a sorting order comprises: respectively processing each of the transmission requests according to an order of writing the transmission requests into the request queue (Fig. 3, End target 320b receives the transmission requests from queues 332 and 334 to retrieve/store (i.e. processing the request) data in a FIFO sort order; Col. 6, Lines 19-20, ensure ordering of transactions over an external I/O bus 300).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride’s method to incorporate the teachings of Potter and include FIFO sorting of the requests.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure data transfer fairness for clients/users, thus improving system performance and data transfer efficiency while complying with service-level guarantees (See Potter: Col. 2, Lines 60-67).
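The FIFO request-queue behavior mapped for claims 2 and 3 above can be illustrated with the following hypothetical Python sketch; the class and method names are assumptions for exposition only and do not appear in Potter or the claims.

```python
from collections import deque

class RequestQueue:
    """First-in first-out request queue: requests are written in
    chronological order and processed in the order they were written."""

    def __init__(self) -> None:
        self._queue: deque = deque()

    def submit(self, request) -> None:
        # Write the request into the queue in arrival (chronological) order
        self._queue.append(request)

    def next_request(self):
        # FIFO rule: the earliest-written request is processed first
        return self._queue.popleft() if self._queue else None
```

Submitting requests in arrival order and popping from the head reproduces the chronological processing order the claims recite.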
Regarding claim 4, the combination of Ishida/McBride/Potter teaches the data transmission method of claim 3. Ishida teaches the data transmission method comprising wherein after the distributing the plurality of the data sub-blocks to a plurality of DMA engines, so that each of the DMA engines respectively transmits corresponding one or more data sub-blocks to the target address to complete transmission of the target data block, the method further comprises: determining whether the plurality of the DMA engines complete the transmission of the target data block (Fig. 3, Data transfer frequency counter 152 counts the number of data transfers from each DMA engine 120 to determine if target data block is completely transferred; Paragraph 0078, DMA engine 120 which terminates the data transfer for the block data A1 or A2 each issues the transfer termination notice (A1 transfer termination notice or A2 transfer termination notice) to the communication controller 15 (steps S14 and S15). Correspondingly, the data transfer number counter 152 counts the number of data transfer).
Potter teaches the data transmission method comprising in a case that the plurality of the DMA engines (Fig. 2, DMA controller 240) complete the transmission of the target data block, executing a step of obtaining a next transmission request from the request queue (Fig. 4, Response manager 470 sends a clear signal in response to completed write and allows output queue manager 460 to send the next write transaction; Col. 10, Lines 25-30, If any of the bits 712 of the match bit map 710 associated with the write request WRA at the head of the low priority FIFO 422 are asserted, the output queue manager 460 does not send the WRA to the request queues 330 at the interface to the I/O bus 300 because the asserted bit denotes a potential conflict… Lines 41-44, When the output of the logical OR function 730 is zero, indicating that all of the bits of the match bit map 710 have been cleared, the output queue manager forwards the request to the appropriate posted or non-posted request queue 330).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride/Potter’s method to further incorporate the teachings of Potter and include FIFO sorting of the requests that retrieves the next request once processing of the previous request is completed.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure data transfer fairness for clients/users while preventing memory access conflicts, thus improving system performance and data transfer efficiency while complying with service-level guarantees (See Potter: Col. 2, Lines 60-67).
Regarding claim 5, the combination of Ishida/McBride/Potter teaches the data transmission method of claim 4. Ishida teaches the data transmission method comprising wherein after the determining whether the plurality of the DMA engines complete the transmission of the target data block (Fig. 3, Data transfer frequency counter 152 counts the number of data transfers from each DMA engine 120 to determine if target data block is completely transferred; Paragraph 0078, DMA engine 120 which terminates the data transfer for the block data A1 or A2 each issues the transfer termination notice (A1 transfer termination notice or A2 transfer termination notice) to the communication controller 15 (steps S14 and S15). Correspondingly, the data transfer number counter 152 counts the number of data transfer).
Potter teaches the data transmission method comprising in a case that the plurality of the DMA engines do not complete the transmission of the target data block, prohibiting the obtaining the next transmission request from the request queue (Fig. 4, Response manager 470 sends a clear signal in response to completed write and allows output queue manager 460 to send the next write transaction, otherwise sending the next signal to the queue is prohibited due to conflicts; Col. 10, Lines 25-30, If any of the bits 712 of the match bit map 710 associated with the write request WRA at the head of the low priority FIFO 422 are asserted, the output queue manager 460 does not send the WRA to the request queues 330 at the interface to the I/O bus 300 because the asserted bit denotes a potential conflict).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride/Potter’s method to further incorporate the teachings of Potter and include FIFO sorting of the requests that retrieves the next request once processing of the previous request is completed.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure data transfer fairness for clients/users while preventing memory access conflicts, thus improving system performance and data transfer efficiency while complying with service-level guarantees (See Potter: Col. 2, Lines 60-67).
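The completion-gated dequeuing mapped for claims 4 and 5 above can be illustrated with the following hypothetical Python sketch. All names here (TransferTracker, next_request) are assumptions introduced for exposition; the counter merely echoes, in spirit, Ishida's data transfer number counter 152 and is not a construction of the references.

```python
from collections import deque

class TransferTracker:
    """Counts per-sub-block completion notices to decide whether the
    current target data block has been fully transmitted."""

    def __init__(self, expected_sub_blocks: int) -> None:
        self.expected = expected_sub_blocks
        self.completed = 0

    def notify_done(self) -> None:
        # A DMA engine reports that one sub-block transfer has terminated
        self.completed += 1

    def transfer_complete(self) -> bool:
        return self.completed >= self.expected

def next_request(request_queue: deque, tracker: TransferTracker):
    """Pop the next FIFO request only once the current transfer finishes;
    otherwise obtaining the next request is prohibited."""
    if not tracker.transfer_complete():
        return None
    return request_queue.popleft() if request_queue else None
```

While any sub-block remains outstanding the queue yields nothing; once every engine has reported completion, the next chronological request is released.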
Regarding claim 17, Ishida in view of McBride teaches the data transmission method of claim 16. Neither Ishida nor McBride teaches the data transmission method comprising wherein in a case that each DMA engine corresponds to more than one of the data sub-blocks, the method further comprises: respectively controlling the DMA engine to transmit the data sub-blocks corresponding to the DMA engine according to an order.
Potter teaches the data transmission method comprising wherein in a case that each DMA engine corresponds to more than one of the data sub-blocks, the method further comprises: respectively controlling the DMA engine to transmit the data sub-blocks corresponding to the DMA engine according to an order (Fig. 2, DMA controller 240 sends requests that perform data transfer in a FIFO order; Col. 6, Lines 20-22, I/O controller 250 maintains a data structure that keeps track of the state of outstanding requests issued by the sources, such as processor 210 or the DMA controller 240… Col. 5, Lines 55-57, the HPT bus typically defines three buffers or first-in first-out (FIFO) queues 330 for storing various requests and response packets).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ishida/McBride’s method to incorporate the teachings of Potter and include FIFO sorting of the requests.
One of ordinary skill in the art would be motivated to make the modifications in order to ensure data transfer fairness for clients/users, thus improving system performance and data transfer efficiency while complying with service-level guarantees (See Potter: Col. 2, Lines 60-67).
Regarding claim 21, the combination of Ishida/McBride/Potter teaches the data transmission method of claim 2. Ishida teaches the data transmission method comprising wherein the dividing the target data block into a plurality of data sub-blocks according to the length of the target data block comprises: dividing the target data block into the plurality of the data sub-blocks according to the length of the target data block and an optimal data transmission length of a DMA engine (Fig. 4, Target data block A1-Am is divided into optimal data transmission lengths based on the size of the DMA engines; Paragraph 0012, the data divided and transferred by a DMA engine 221 is referred to as block data. The A1, A2, . . . , Am (1 to m are positive integers in the descriptions above and below) are block data), wherein a data length of each data sub-block is not greater than the optimal data transmission length of the DMA engine (Fig. 4, Data lengths of A1 to Am are equal to a maximum predetermined size of each DMA engine; Paragraph 0053, transferable size (maximum length of data) for the DMA engine 120 is predetermined, and equal for each DMA engine 120).
Allowable Subject Matter
Claims 6-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US PGPUB 2017/0170905 to Tanaka discloses a DMA descriptor that contains a source address, destination address, and target block length (See Tanaka: Figure 4, DMA descriptor; Paragraph 0048, setting item 31 includes a source address 311, source address increment 312, a destination address 313, destination address increment 314, a data length 315).
US PGPUB 2009/0254683 to Camer discloses a multi-channel DMA data transfer system wherein frames of data are split into separate blocks transferred over the different DMA channels.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRY Z WANG whose telephone number is (571)270-1716. The examiner can normally be reached 9 am - 3 pm (Monday-Friday).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henry Tsai can be reached at 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.W./Examiner, Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184