DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/11/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 5-6 are objected to because of the following informalities:
“SMP OS” in line 3 of claim 5 should read “Symmetrical Multiprocessing OS (SMP OS)”.
Claim 6 is objected to because it is dependent on objected claim 5. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6 and 13-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tanaka (US 2024/0385960).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
Regarding claim 1, Tanaka teaches a storage system (Fig. 1, Storage system 100) comprising: a first protocol chip (Fig. 1, Frontend FEIF0 113 is a first protocol chip; Paragraph 0039, frontend interfaces 113 and 123 include protocol chips) and a second protocol chip (Fig. 1, Frontend FEIF1 123 is a second protocol chip) that receive I/O commands from a host (Fig. 1, Host accesses frontends 113 and 123 using commands; Paragraph 0038, host apparatus… accesses… via the frontend interfaces 113 and 123… Paragraph 0066, host I/O commands received from the host apparatus); a first controller that includes a first CPU (Fig. 1, First controller CTL0 110 includes first CPU0 111) to which a queue used for control communication with the first protocol chip (Figs. 1 and 2, First CPU 111 (i.e. first CPU of first controller 110) has memory MEM0 112 that includes queue OQ 201/IQ 202 for frontend 113 (i.e. first protocol chip); Paragraph 0058, memory 112 includes an OQ 201 and an IQ 202 controlling data transfer between the processor 111 and the frontend interface 113) and a queue used for control communication with the second protocol chip are assigned (Figs. 1 and 2, Memory MEM0 112 includes queue OQ 203 for communication with frontend 123 (i.e. second protocol chip); Paragraph 0058, memory 112 includes an OQ 203 and an IQ 204 controlling data transfer between the processor 111 and the frontend interface 123); a second controller that includes a second CPU (Fig. 1, Second controller CTL1 120 contains second CPU1 121) to which a queue used for control communication with the first protocol chip (Fig. 1, Second CPU 121 (i.e. second CPU of second controller 120) has memory MEM1 122 that includes queue OQ 223/IQ 224 for communication with frontend 113 (i.e. 
first protocol chip); Paragraph 0059, memory 122 includes an OQ 223 and an IQ 224 controlling data transfer between the processor 121 and the frontend interface 113) and a queue used for control communication with the second protocol chip are assigned (Fig. 1, Memory 122 includes queue OQ 221/IQ 222 for frontend 123 (i.e. second protocol chip); Paragraph 0059, memory 122 includes an OQ 221 and an IQ 222 controlling data transfer between the processor 121 and the frontend interface 123); a first PCI switch (Fig. 1, Switch SW0 115 uses PCI; Paragraph 0037, a PCI express (PCIe) switch 115) that is disposed between the first protocol chip and the first and second CPUs (Fig. 1, Switch SW0 115 is between frontend 113 (i.e. first protocol chip) and first CPU0 111 and second CPU1 121) and configured to set a communication path between the first protocol chip and the first CPU (Fig. 1, Path 117 between frontend 113 (i.e. first protocol chip) and first CPU0 111) and a communication path (Fig. 1, Path 128) between the first protocol chip and the second CPU (Fig. 1, First PCI switch SW0 115 switches I/O commands received at frontend 113 (i.e. first protocol chip) to first path 117 or third path 128 based on CPU0 111 of controller 110 (i.e. a controller side); Paragraph 0106, processor 111 sends a queue switch command to switch… to the frontend interface 113); and a second PCI switch (Fig. 1, Switch SW1 125 uses PCI) that is disposed between the second protocol chip and the first and second CPUs (Fig. 1, Switch SW1 125 is between frontend 123 (i.e. second protocol chip) and first CPU0 111 and second CPU1 121) and configured to set a communication path between the second protocol chip and the first CPU (Fig. 1, Path 118) and a communication path (Fig. 1, Path 127) between the second protocol chip and the second CPU (Fig. 1, Second PCI switch SW1 125 switches I/O commands received at frontend 123 (i.e. 
second protocol chip) to second path 127 or fourth path 118 based on CPU1 121 of controller 120 (i.e. the controller side); Paragraph 0063, switch setting situation is stored in the registers of the frontend interfaces 113 and 123 accessible with the PCIe, and thus can be read by the processors 111 and 121); wherein the first and second protocol chips each have queue control information that defines the queue at a transmission destination of the I/O commands received from the host (Fig. 2, Frontend 113 and frontend 123 (i.e. first and second protocol chips) include interrupt registers 217/218 and 237/238, respectively, that route host I/O commands to queues using interrupts; Paragraph 0046, frontend interface 113 includes interrupt setting registers 217 and 218… interrupt is used in… an operation of a control queue for data transfer… Paragraph 0047, frontend interface 123 includes interrupt setting registers 237 and 238… Paragraph 0084, frontend interface 113 sends an interrupt to an address set in the interrupt setting register 217 to notify the processor 111 that the entry is enqueued in the OQ), and cause the first and second PCI switches to set the communication path for the I/O commands in accordance with the queue control information (Fig. 2, Switches 115 and 125 (i.e. first and second PCI switches) set path based on the interrupt registers in frontends; Paragraph 0063, foregoing switch setting situation is stored in the registers of the frontend interfaces 113 and 123).
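For orientation only, the queue-based path selection mapped in the claim 1 analysis above can be sketched in a few lines of Python. This is purely an illustrative sketch of the general concept; the class, method, and command names are assumptions of this sketch and do not appear in Tanaka or the instant application:

```python
# Illustrative sketch of a frontend (protocol chip) whose queue control
# information names the current enqueueing-destination queue, and thus the
# controller-side path that receives host I/O commands. All identifiers
# here are hypothetical; only the queue labels echo Tanaka's figures.

class FrontendChip:
    """A protocol chip holding queue control information that selects the
    destination queue (and hence the path/CPU) for host I/O commands."""

    def __init__(self, queues):
        self.queues = queues            # e.g. {"OQ201": [], "OQ223": []}
        self.dest = next(iter(queues))  # current enqueueing-destination queue

    def receive_host_io(self, command):
        # Route the command per the current queue control information.
        self.queues[self.dest].append(command)

    def switch_queue(self, new_dest):
        # A controller-issued queue switch instruction updates the queue
        # control information, changing the communication path in effect.
        if new_dest not in self.queues:
            raise ValueError(f"unknown queue {new_dest!r}")
        self.dest = new_dest


chip = FrontendChip({"OQ201": [], "OQ223": []})  # CPU0-side and CPU1-side queues
chip.receive_host_io("READ LBA 0x10")   # lands in OQ201 (CPU0 path)
chip.switch_queue("OQ223")              # e.g. before halting CPU0
chip.receive_host_io("WRITE LBA 0x20")  # lands in OQ223 (CPU1 path)
```

The point of the sketch is only that the routing decision lives in the chip's own control state, which a controller can rewrite at runtime.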
Regarding claim 2, Tanaka teaches the storage system of claim 1. Tanaka teaches the storage system further comprising wherein either one of the first controller and the second controller sends a queue switching instruction to either one of the first protocol chip and the second protocol chip (Fig. 8, First controller CPU0 111 sends update OQ10_CI (i.e. queue switching instruction) 807 to frontend 113 (i.e. first protocol chip)), and upon receiving the queue switching instruction, either one of the first protocol chip and the second protocol chip updates the queue control information in accordance with the queue switching instruction (Fig. 8, Frontend 113 updates queue control information in response to 807; Paragraph 0096, processor 121 updates OQ_CI of the OQ 223 which is in the frontend interface 113 (step 807)) so as to switch the communication path for the I/O commands to the communication path according to the updated queue control information (Fig. 8, Frontend 113 switches paths based on 807; Paragraph 0091, FIG. 8 is a diagram illustrating a data transfer sequence after a switch in an enqueueing destination queue of the frontend interface).
Regarding claim 3, Tanaka teaches the storage system of claim 2. Tanaka teaches the storage system further comprising wherein the communication path between the first protocol chip and the first CPU is a communication path for normal I/O processing that is used when the first CPU is not halted (Fig. 2, Path 117 is a normal path between frontend 113 (i.e. first protocol chip) and first CPU 111), and the communication path between the first protocol chip and the second CPU is a communication path for alternative I/O processing that is used when the first CPU is halted, when halting the first CPU (Fig. 2, Path 128 is alternative path between frontend 113 (i.e. first protocol chip) and second CPU 121 when first CPU 111 is halted), the first controller sends the queue switching instruction to the first protocol chip, and the first protocol chip switches the communication path for the I/O commands received by the first protocol chip from a communication path for the normal I/O processing to a communication path for the alternative I/O processing in accordance with the queue switching instruction (Fig. 2, First controller 110 with processor 111 sends queue switching instruction via switch SW0 115 which causes switching between paths when CPU 111 is halted from path 117 to queues 201/202 to path 128 to queues 223/224; Paragraph 0061, frontend interface 113 can switch whether to use the OQ 201 and the IQ 202 or use the OQ 223 and the IQ 224 in accordance with an instruction from the processor 111).
Regarding claim 4, Tanaka teaches the storage system of claim 3. Tanaka teaches the storage system further comprising wherein the communication path between the second protocol chip and the second CPU is a communication path for normal I/O processing that is used when the second CPU is not halted (Fig. 2, Path 127 is the normal path between frontend 123 (i.e. second protocol chip) and second CPU 121, used when CPU 121 is not halted), the communication path between the second protocol chip and the first CPU is a communication path for alternative I/O processing that is used when the second CPU is to be halted (Fig. 2, Path 118 is the alternative path between frontend 123 and first CPU 111, used when second CPU 121 is to be halted), when halting the second CPU, the second controller sends the queue switching instruction to the second protocol chip, and the second protocol chip switches the communication path for the I/O commands received by the second protocol chip from a communication path for the normal I/O processing to a communication path for the alternative I/O processing in accordance with the queue switching instruction (Fig. 2, Second controller 120 with processor 121 sends queue switching instruction via switch SW1 125 which causes switching between paths when CPU 121 is halted from path 127 to queues 221/222 to path 118 to queues 203/204; Paragraph 0061, the frontend interface 123 can switch whether to use the OQ 221 and the IQ 222 or use the OQ 203 and the IQ 204 in accordance with an instruction from the processor 121).
Regarding claim 5, Tanaka teaches the storage system of claim 2. Tanaka teaches the storage system further comprising wherein each of the first and second CPUs is a multicore CPU in an SMP OS (Fig. 2, CPUs 111 and 121 are symmetrical to each other and contain multiple processing cores; Paragraph 0037, processors 111 and 121 include a plurality of processor cores), and when updating the SMP OS running on the first CPU, the first protocol chip updates the queue control information so as to switch the queue at a transmission destination of the I/O commands received by the first protocol chip from a queue assigned to the first CPU to a queue assigned to the second CPU (Fig. 8, Frontend 113 (i.e. first protocol chip) updates queue information of the memory 112 of CPU 111 (i.e. first CPU) to assign to second CPU using update 805; Paragraph 0095, frontend interface 113 updates OQ_PI of the OQ 223 which is in the memory 112 (step 805)).
Regarding claim 6, Tanaka teaches the storage system of claim 5. Tanaka teaches the storage system further comprising wherein, when updating the SMP OS running on the second CPU, the second protocol chip updates the queue control information so as to switch the queue at a transmission destination of the I/O commands received by the second protocol chip from a queue assigned to the second CPU to a queue assigned to the first CPU (Fig. 2, Frontend 123 (i.e. second protocol chip) can switch between queues by updating registers; Paragraph 0061, the frontend interface 123 can switch whether to use the OQ 221 and the IQ 222 or use the OQ 203 and the IQ 204 in accordance with an instruction from the processor 121).
Regarding claim 13, Tanaka teaches the storage system of claim 2. Tanaka teaches the storage system further comprising wherein, when either one of the first CPU and the second CPU is halted due to a failure, either one of the first protocol chip and the second protocol chip updates the queue control information in such a manner that the queue at a transmission destination of the received I/O commands serves as a queue assigned to either one of the first CPU and the second CPU, whichever is not halted due to a failure (Fig. 6, Even when first CPU 111 or second CPU 121 fails, the frontend 113 (i.e. first protocol chip) and the frontend 123 (i.e. second protocol chip) can route I/O commands to the queues in memory 112 or 122 based on updated register information; Paragraph 0191, when a failure occurs in the first storage controller, the second processor detecting the failure sends the first enqueueing destination switch instruction to the first frontend interface).
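The failover behavior discussed in the claim 13 analysis above (redirecting the enqueueing destination away from a failed CPU's queue toward the surviving CPU's queue) can likewise be sketched. All function and parameter names here are hypothetical illustrations introduced for this sketch, not identifiers from Tanaka:

```python
# Illustrative failover sketch: on a CPU failure, the enqueueing destination
# is switched to the queue of whichever CPU is not halted. The mapping of
# CPU names to queue labels below is a hypothetical example.

def failover_destination(current_dest, failed_cpu, cpu_queues):
    """Return the destination queue after a failure of `failed_cpu`.

    cpu_queues maps a CPU name to the queue assigned to it,
    e.g. {"CPU0": "OQ201", "CPU1": "OQ223"}.
    """
    survivors = [q for cpu, q in cpu_queues.items() if cpu != failed_cpu]
    if not survivors:
        raise RuntimeError("no surviving CPU to take over I/O processing")
    # If the current destination belongs to the failed CPU, switch to a
    # surviving CPU's queue; otherwise the path is already safe.
    if current_dest == cpu_queues.get(failed_cpu):
        return survivors[0]
    return current_dest


queues = {"CPU0": "OQ201", "CPU1": "OQ223"}
after = failover_destination("OQ201", "CPU0", queues)  # CPU0 halted
```

Under these assumptions, a destination pointing at the failed CPU's queue is rewritten, while a destination already pointing at the surviving CPU is left unchanged.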
Regarding claim 14, Tanaka teaches a storage system (Fig. 1, Storage system 100) comprising: a first protocol chip (Fig. 1, Frontend FEIF0 113 is a first protocol chip; Paragraph 0039, frontend interfaces 113 and 123 include protocol chips) and a second protocol chip (Fig. 1, Frontend FEIF1 123 is a second protocol chip) that receive I/O commands from a host (Fig. 1, A host accesses frontends 113 and 123 using commands; Paragraph 0038, host apparatus… accesses… via the frontend interfaces 113 and 123… Paragraph 0066, host I/O commands received from the host apparatus); a first controller and a second controller that include a CPU configured to receive the I/O commands from the first protocol chip and the second protocol chip (Fig. 1, First storage controller 110 includes CPU0 111 and second storage controller 120 contains CPU1 121 which are configured to receive I/O data transfer commands from frontends 113/123 (i.e. first and second protocol chips); Paragraph 0041, processors 111 and 121 control data transfer between the host apparatus connected via the frontend interfaces 113 and 123… Paragraph 0090, processor 111 can process the host I/O command 701 received by the frontend interface 113); a first communication path between the first protocol chip and the first controller (Fig. 1, Frontend 113 (i.e. first protocol chip) is coupled within first controller 110 using a first path 117 within 110); a second communication path between the second protocol chip and the second controller (Fig. 1, Frontend 123 (i.e. second protocol chip) is coupled within second controller 120 using a second path 127 within 120); a third communication path between the first protocol chip and the second controller (Fig. 1, Frontend 113 (i.e. 
first protocol chip) is coupled to second controller 120 via switch SW0 115 using third communication path 128; Paragraph 0064, Each of the frontend interfaces 113 and 123 is connected to any of the processors 111 and 121 via… the links 127 and 128); a fourth communication path between the second protocol chip and the first controller (Fig. 1, Frontend 123 (i.e. second protocol chip) is coupled to first controller 110 via switch SW1 125 using fourth communication path 118; Paragraph 0064, Each of the frontend interfaces 113 and 123 is connected to any of the processors 111 and 121 via… links 117 and 118); a first PCI switch (Fig. 1, Switch SW0 115 uses PCI; Paragraph 0037, a PCI express (PCIe) switch 115) that switches the communication path for the I/O commands received by the first protocol chip to either one of the first communication path and the third communication path in accordance with an instruction from a controller side (Fig. 1, First PCI switch SW0 115 switches I/O commands received at frontend 113 (i.e. first protocol chip) to first path 117 or third path 128 based on CPU0 111 of controller 110 (i.e. a controller side); Paragraph 0106, processor 111 sends a queue switch command to switch… to the frontend interface 113); and a second PCI switch (Fig. 1, Switch SW1 125 uses PCI) that switches the communication path for the I/O commands received by the second protocol chip to either one of the second communication path and the fourth communication path in accordance with an instruction from the controller side (Fig. 1, Second PCI switch SW1 125 switches I/O commands received at frontend 123 (i.e. second protocol chip) to second path 127 or fourth path 118 based on CPU1 121 of controller 120 (i.e. the controller side); Paragraph 0063, switch setting situation is stored in the registers of the frontend interfaces 113 and 123 accessible with the PCIe, and thus can be read by the processors 111 and 121).
Regarding claim 15, Tanaka teaches a storage system management method that is applied to a storage system (Fig. 1, Storage system 100 has a method of operation) including a first protocol chip (Fig. 1, Frontend 113), a second protocol chip (Fig. 1, Frontend 123), a first controller (Fig. 1, Controller 110), a second controller (Fig. 1, Controller 120), a first PCI switch (Fig. 1, Switch 115), and a second PCI switch (Fig. 1, Switch 125); wherein the first protocol chip and the second protocol chip receive I/O commands from a host (Fig. 1, A host accesses frontends 113 and 123 using commands; Paragraph 0038, host apparatus… accesses… via the frontend interfaces 113 and 123… Paragraph 0066, host I/O commands received from the host apparatus); the first controller that includes a first CPU (Fig. 1, First controller CTL0 110 includes first CPU0 111) to which a queue used for control communication with the first protocol chip (Figs. 1 and 2, Memory MEM0 112 of first controller 110 includes queue OQ 201/IQ 202 for frontend 113 (i.e. first protocol chip); Paragraph 0058, memory 112 includes an OQ 201 and an IQ 202 controlling data transfer between the processor 111 and the frontend interface 113) and a queue used for control communication with the second protocol chip are assigned (Figs. 1 and 2, Memory MEM0 112 includes queue OQ 203 for frontend 123 (i.e. second protocol chip); Paragraph 0058, memory 112 includes an OQ 203 and an IQ 204 controlling data transfer between the processor 111 and the frontend interface 123); the second controller that includes a second CPU (Fig. 1, Second controller CTL1 120 contains second CPU1 121) to which a queue used for control communication with the first protocol chip (Fig. 1, Memory MEM1 122 of second controller 120 includes queue OQ 223/IQ 224 for frontend 113 (i.e. 
first protocol chip); Paragraph 0059, memory 122 includes an OQ 223 and an IQ 224 controlling data transfer between the processor 121 and the frontend interface 113) and a queue used for control communication with the second protocol chip are assigned (Fig. 1, Memory 122 includes queue OQ 221/IQ 222 for frontend 123 (i.e. second protocol chip); Paragraph 0059, memory 122 includes an OQ 221 and an IQ 222 controlling data transfer between the processor 121 and the frontend interface 123); the first PCI switch (Fig. 1, Switch SW0 115 uses PCI; Paragraph 0037, a PCI express (PCIe) switch 115) that is disposed between the first protocol chip and the first and second CPUs (Fig. 1, Switch SW0 115 is between frontend 113 (i.e. first protocol chip) and first CPU0 111 and second CPU1 121) and configured to set a communication path between the first protocol chip and the first CPU (Fig. 1, Path 117 between frontend 113 (i.e. first protocol chip) and first CPU0 111) and a communication path (Fig. 1, Path 128) between the first protocol chip and the second CPU (Fig. 1, First PCI switch SW0 115 switches I/O commands received at frontend 113 (i.e. first protocol chip) to first path 117 or third path 128 based on CPU0 111 of controller 110 (i.e. a controller side); Paragraph 0106, processor 111 sends a queue switch command to switch… to the frontend interface 113); and the second PCI switch (Fig. 1, Switch SW1 125 uses PCI) that is disposed between the second protocol chip and the first and second CPUs (Fig. 1, Switch SW1 125 is between frontend 123 (i.e. second protocol chip) and first CPU0 111 and second CPU1 121) and configured to set a communication path between the second protocol chip and the first CPU (Fig. 1, Path 118) and a communication path (Fig. 1, Path 127) between the second protocol chip and the second CPU (Fig. 1, Second PCI switch SW1 125 switches I/O commands received at frontend 123 (i.e. 
second protocol chip) to second path 127 or fourth path 118 based on CPU1 121 of controller 120 (i.e. the controller side); Paragraph 0063, switch setting situation is stored in the registers of the frontend interfaces 113 and 123 accessible with the PCIe, and thus can be read by the processors 111 and 121); the first and second protocol chips each have queue control information that defines the queue at a transmission destination of the I/O commands received from the host (Fig. 2, Frontend 113 and frontend 123 (i.e. first and second protocol chips) include interrupt setting registers 217/218 and 237/238, respectively, that route the I/O commands received from the host to queues; Paragraph 0046, frontend interface 113 includes interrupt setting registers 217 and 218… interrupt is used in association with an error notification from the frontend interface to the processor or an operation of a control queue for data transfer… Paragraph 0047, Similarly, the frontend interface 123 includes interrupt setting registers 237 and 238… Paragraph 0084, frontend interface 113 sends an interrupt to an address set in the interrupt setting register 217 to notify the processor 111 that the entry is enqueued in the OQ), and cause the first and second PCI switches to set the communication path for the I/O commands in accordance with the queue control information (Fig. 2, Switches 115 and 125 (i.e. first and second PCI switches) set path based on registers in frontends; Paragraph 0063, foregoing switch setting situation is stored in the registers of the frontend interfaces 113 and 123).
Allowable Subject Matter
Claims 7-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent 8,694,698 to Mizuno discloses multiple host computers coupled to a network which are each redundantly coupled to frontend boards and a disk enclosure.
US PGPUB 2004/0139240 discloses a Fibre Channel and SCSI routing system between a plurality of hosts that performs protocol conversion.
US PGPUB 2010/0088456 to Chu discloses a plurality of non-transparent bridge devices coupled to hosts and further coupled to an exchange device switch further coupled to storage devices.
US PGPUB 2015/0301969 to Armstead discloses a plurality of PCIe drivers coupled to a protocol mux and further coupled to switches.
US PGPUB 2015/0370749 to Yu discloses a first and second motherboard with PCIe control modules coupled to first and second PCIe switch chips.
US PGPUB 2012/0166699 to Kumar discloses a plurality of servers coupled to redundant switches and further coupled to redundant storage controllers (See Figure 5).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRY Z WANG whose telephone number is (571)270-1716. The examiner can normally be reached 9 am - 3 pm (Monday-Friday).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henry Tsai can be reached at 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.W./Examiner, Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184