DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-7 are presented for examination.
The following is a quotation of 35 U.S.C. 112(f):
Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre- AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an external command dispatcher configured to” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada (US 20060218357 A1) in view of Hegde (US 6876654 B1) and further in view of Moon (US 20210225426 A1).
As to claim 1, Shimada teaches an external command dispatcher configured to receive an address and access information (one of the processors 15a and 15b having received the data request from the host computer 3 for the data stored in one of the disk drives (disk drive bunches 12a and 12b) transmits a plurality of first requests, para[0069], ln 1-5 / the first request includes specified addresses of the shared memory 17, so that the shared memory access control unit 161-3 successively accesses all addresses specified by the first requests, para[0049], ln 7-11 / The shared memory access control unit 161-3 transmits, in a step S12, the shared memory addresses specified in the first requests in order of access to the shared memory 17 to the FIFO 162. Thus, the FIFO 162 successively stores the shared memory addresses, para[0050]);
a first data access unit electrically connected to the external command dispatcher and a global buffer (The FIFO 162 is a memory supplying data to the access result control unit 163-1 in the order of data pieces stored therein, para[0027], ln 8-14 / In a step S5, the access result control unit 163-1 stores each of the access results in one of data buffers 164 having the acquired data buffer number, para[0034], ln 1-5);
wherein the first data access unit is configured to obtain first data from a storage device according to the access information, and send the first data to the global buffer (In a step S4, the access result control unit 163-1 successively acquires the data from the shared memory 17 as an access result by the shared memory access control unit 161-1 as well as successively acquires the data buffer numbers from the FIFO 162. Acquisition of the data buffer numbers from the FIFO 162 is synchronous with that of the access result from the shared memory 17. In a step S5, the access result control unit 163-1 stores each of the access results in one of data buffers 164 having the acquired data buffer number, para[0033] to para[0034]);
wherein the second data access unit is configured to obtain second data from the storage device according to the access information, and send the second data (In a step S6, the processor 15a transmits to the data buffer control unit 165-1 the second request (for example, the result retrieve command) for transmitting the access result from the shared memory 17. The data buffer control unit 165-1 having received the second request acquires the access result of the shared memory 17 from the data buffer 164 that stores the access result out of the four data buffers 164 in a step S7. More specifically, because the second request includes a specified data buffer number, the data buffer control unit 165-1 acquires the access result from the data buffer 164 having the data buffer number specified by the second request. In a step S8, the data buffer control unit 165-1 returns (transmits) the access result acquired from the data buffer 164 to the processor 15a, para[0035] to para[0036], ln 1-4).
Hegde teaches a second data access unit electrically connected to the external command dispatcher (Port interfaces 120-1 . . . 120-N respectively communicate with ports 50-1 . . . 50-N, and memory interface 130 manages access to shared memory 90. It should be noted that in this configuration, both switch engine 100 and CPU 80 (via CPU interface 110 and memory interface 130) can forward packets on ports 50-1 . . . 50-N via port interfaces 120-1 . . . 120-N, although switch engine 100 can forward packets at wire speeds while CPU 80 can do so only with processing overhead, col 5, ln 50-60 / the packet header information, including source and destination addresses, and source and destination sockets (sockets identify applications communicating on the hosts associated with the source and destination addresses), col 5, ln 20-26);
wherein the external command dispatcher sends the access information to one of the first data access unit and the second data access unit according to the address (The FIFO 162 is a memory supplying data to the access result control unit 163-1 in the order of data pieces stored therein, para[0027], ln 8-14 / In a step S5, the access result control unit 163-1 stores each of the access results in one of data buffers 164 having the acquired data buffer number, para[0034], ln 1-5 / Further, the access result control unit 163-3 successively obtains the shared memory addresses from the FIFO 162. Acquiring of the memory addresses is synchronized with acquiring of the access results, para[0051], ln 3-7);
a data/command switch electrically connected to the second data access unit, the global buffer, and an internal command dispatcher (it includes a switch module 60 and a flow table 70. Switch module 60 further communicates with a packet buffer 75, a CPU 80, and a shared memory 90, col 4, ln 2-8 / Port interfaces 120-1 . . . 120-N send and receive packets between the nodes to which they are attached. Packets received from attached nodes are buffered in packet buffer 75, col 6, ln 2-6);
wherein the data/command switch is configured to obtain the address and the second data from the second data access unit (shown in FIG. 3; data packets arrive at ports 50-1 . . . 50-N. As will be described in more detail below, switch module 60 continually monitors each of the ports for incoming traffic. When an IP/IPX data packet arrives, it is buffered in packet buffer 75. While the data is flowing into packet buffer 75, the switch module 60 checks the packet header information, including source and destination addresses, and source and destination sockets (sockets identify applications communicating on the hosts associated with the source and destination addresses), col 5, ln 15-25);
and send the second data to one of the global buffer and the internal command dispatcher according to the address (The destination node, if attached to the switch, will respond and the response packet will be processed as in FIG. 6 and steps S100-S120 described above. The response packet will have information concerning the destination node in the source portion of the packet header. Since no address resolution record will exist for the destination (now the source of the response packet), switch engine 100 will forward the packet to CPU 80, col 11, ln 9-16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shimada with Hegde to incorporate the above feature because decision-making tasks are thus more efficiently partitioned between the switch and the CPU so as to minimize processing overhead.
Moon teaches an artificial intelligence accelerator comprising an external command dispatcher configured to receive an address and access information (Electronic devices such as a smartphone, a graphics accelerator, and an artificial intelligence (AI) accelerator process data by using a memory device, para[0003], ln 1-3 / For example, through the host interface 210, the memory device 200 may transmit the read data strobe signal RDQS and the data signal DQ to the memory controller 100 and may receive the clock signal CK, the command/address signal C/A, the write data strobe signal WDQS, and the data signal DQ from the memory controller 100. The host interface 210 may generate control signals iCTRL based on a signal provided from the memory controller 100, para[0034], ln 6-25 / the buffer die 310 and the core dies 320 to 350 may be stacked and may be electrically connected by using through silicon vias (TSV). As such, the stacked memory device 300 may have a three-dimensional memory structure in which the plurality of dies 310 to 350 are stacked, para[0123], ln 1-8 / In an exemplary embodiment, the buffer die 310 may include a plurality of pins for receiving signals from the external host device. Through the plurality of pins, the buffer die 310 may receive the clock signal CK, the command/address signal C/A, the write data strobe signal WDQS, and the data signal DQ and may transmit the read data strobe signal RDQS and the data signal DQ, para[0132], ln 1-8).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Shimada and Hegde with Moon to incorporate the above feature because, to support a high bandwidth, data may be transmitted between a memory controller and the memory device at high speed, and to secure the integrity of data when the data are transmitted at high speed, a data strobe signal may be exchanged between the memory controller and the memory device.
As to claim 4, it is rejected for the same reason as to claim 1 above.
Claims 2, 3, and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada (US 20060218357 A1) in view of Hegde (US 6876654 B1), in view of Moon (US 20210225426 A1), and further in view of Kapur (US 6134622 A).
As to claim 2, Kapur teaches the address and the access information conform to a bus format (Inbound transaction controller 245A formats the control and address information (the read request packet) into an expander bus packet and outputs this address and control information onto line 416A to store this address and control information in the ITQ 220A via mux 340A, col 16, ln 19-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Shimada, Hegde, and Moon with Kapur to incorporate the above feature because this addresses a need for a more flexible technique for interconnecting PCI buses to a host bus without adding additional electrical loads to the host bus.
As to claim 3, Shimada teaches the address is a first address, and the access information is first access information (para[0049], ln 9-15); a second address and second access information (para[0054], ln 1-6), and send the second access information to one of the first data access unit and the second data access unit according to the second address (para[0035], ln 1-5 / para[0054], ln 1-6); the first data access unit is further configured to obtain an output data from the global buffer according to the second access information (para[0042] to para[0044], ln 1-6); and the second data access unit is further configured to obtain the second data from the global buffer according to the second access information, and send the second data (para[0035], ln 8-14 to para[0036]). Moon teaches the external command dispatcher is further configured to receive a second address and second access information (para[0034], ln 6-25) for the same reason as to claim 1 above.
As to claim 5, it is rejected for the same reason as to claim 2 above.
As to claim 6, it is rejected for the same reason as to claim 3 above. In addition, Hegde teaches sending, by one of the first data access unit and the second data access unit, the output data to the storage device (col 5, ln 15-30) for the same reason as to claim 1 above.
As to claim 7, Kapur teaches wherein the second address and the second access information conform to a bus format (col 16, ln 19-25) for the same reason as to claim 2 above.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Aggarwal (US 20200366682 A1).
US 20060218357 A1 teaches the processor 15a transmits, in a step S20, a plurality of initiating requests (for example, initiating commands) for initiating the access operation to the shared memory 17. Each of the initiating requests specifies a parameter buffer number (parameter buffer identification information), a shared memory address, and a local memory address. The shared memory address and the local memory address are stored in the parameter buffer 170 having the specified parameter buffer number. In a step S21, the parameter buffer control unit 169-6 reads, in response to the initiating request from the processor 15a, the shared memory address and the local memory address from each of the parameter buffers 170.
US 11151033 B1 teaches, in FIG. 2A, a tile 102 controls operation of a switch 220 using either the processor 200 or a separate switch processor dedicated to controlling the switching circuitry 224. Separating the control of the processor 200 and the switch 220 allows the processor 200 to take arbitrary data-dependent branches without disturbing the routing of independent messages passing through the switch 220.
US 20180011636 A1 teaches the memory mapping unit MMAP converts the address transferred from the arithmetic processing unit PU via the switch unit SW2b or the like to an address of a memory to be accessed out of the non-volatile memory MEM1 and the volatile memory MEM2.
US 20200104196 A1 teaches that, in sharing the pointer between a writer that allocated the buffer and the multiple readers that wish to use the pointer, a writer and reader may have different memory spaces. For example and in one embodiment, a buffer pointer may have an address of 1000 in the writer's memory space. However, a memory address of 1000 for a reader may point to a different physical memory location.
US 5864539 A teaches the input line cards 210, 213, 215 and output line cards are coupled to a main switching fabric 250.
US 5923654 A teaches the input switch 202 has m output signals, collectively referred to as IBUFm, where m is an integer from 0 to 59 identifying individual signals IBUF0, IBUF1, . . . IBUF59. The IBUFm signals provide data to respective inputs of memory packet buffers 206, individually referred to as BUFFERm or BUFFER0, BUFFER1, . . . BUFFER59. Thus, in the embodiment shown, there are sixty (60) packet buffers 206, where each has an output for providing data on a corresponding output signal OBUFm to respective inputs of the output switch 204 within the switch matrix 200.
US 6876654 B1 teaches CPU interface 110 communicates with CPU 80, thereby providing means of communication between CPU 80 and switch engine 100, address registers 105, port interfaces 120-1 . . . 120-N, and memory interface 130. Port interfaces 120-1 . . . 120-N respectively communicate with ports.
US 20170054659 A1 teaches, which includes the subject matter of any of Examples 1-12, the apparatus may include a packet buffer defined within the shared memory space and associated with a virtual network interface of the second virtual server to buffer packets received by the second virtual server from either the first virtual server or the virtual switch.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LECHI TRUONG, whose telephone number is (571) 272-3767. The examiner can normally be reached 10-8 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Young Kevin, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LECHI TRUONG/ Primary Examiner, Art Unit 2194