Final Office Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 6-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because claim 6 is not limited to non-transitory, tangible embodiments. In view of Applicant's disclosure, specification para. [0025] and [0029], the medium is not limited to tangible, non-transitory embodiments, instead being defined as including both tangible, non-transitory embodiments (e.g., RAM, ROM, flash memory) and intangible, transitory embodiments, since the list of computer-readable storage media is open-ended, e.g., “etc.” As such, the claim is not limited to statutory subject matter and is therefore non-statutory. Claims 7-10 do not cure the deficiencies of claim 6. The Examiner recommends amending the claims to recite "A non-transitory computer-readable storage medium".
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 6, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Anchi et al., US 2022/0179804 A1, in view of the Compute Express Link Specification, Revision 2.0, and further in view of Arata et al., US 2010/0138686 A1.
Referring to claim 1:
In para. 0023, 0033, and Fig. 1, Anchi et al. disclose a storage system (information processing system) comprising: a storage cluster including a plurality of storage servers (first and second host devices); a plurality of switches coupled to the storage cluster (one or more switch fabrics); a plurality of non-volatile memories (NVMs) coupled to the plurality of switches (storage arrays include NVM devices—para. 0033).
However, Anchi et al. do not explicitly disclose that the plurality of switches are compute express link (CXL) switches. On page 202, under section 7.1.2, the Compute Express Link Specification discloses a multiple VCS (virtual CXL switch) switch, which consists of multiple upstream ports and one or more downstream ports per VCS.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the multiple VCS switch of the Compute Express Link Specification in the system of Anchi et al. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0038, Anchi et al. disclose that other protocols and networks may be used. CXL is compatible with PCIe, disclosed by Anchi et al., and is designed to support memory devices, also disclosed by Anchi et al. A key benefit of CXL is that it provides a low-latency, high-bandwidth path for the system to access the memory attached to the CXL device (see section 1.4.1 of the Compute Express Link Specification). Further, substituting a CXL switch for the switch of Anchi et al. yields predictable results since CXL is compatible with the PCIe interconnect disclosed by Anchi et al.
In para. 0023, 0048 and 0072, Anchi et al. disclose a first storage server of the plurality of storage servers connected to a first NVM in the plurality of NVMs via a selected CXL switch of the plurality of CXL switches (CXL switches taught by the combination of Anchi et al. and the Compute Express Link Specification), a second storage server of the plurality of storage servers connected to the first NVM via the selected CXL switch, and configuring the second storage server to host first data resident on the first NVM, wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (para. 0072: paths can change over time as a result of zoning and masking changes or other types of storage system reconfigurations).
In para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations. However, neither Anchi et al. nor the Compute Express Link Specification explicitly disclose a processor to execute a cluster service; wherein execution of the cluster service causes the processor to: detect a first failure in a first storage server of the plurality of servers, select a second storage server of the plurality of servers, and configure the second storage server to host first data resident on the first NVM, wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (in response to the failure).
In para. 0043-0045 Arata et al. disclose a processor; and a cluster service memory coupled to the processor (para. 0043: management server), the cluster service memory including a set of instructions, which when executed by the processor, cause the processor to: detect a first failure in a first storage server of the plurality of servers (para. 0044: failure management unit), select a second storage server of the plurality of servers (para. 0045: the failover unit selects a standby server and fails over the active server to the standby server), and configure the second storage server to host first data resident on the first NVM (para. 0045: the server fail-over unit changes over connection to a logical unit of the storage apparatus from the active server to the standby server), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (cluster rebalancing does not occur).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the management server and server failover process of Arata et al. in the combined information processing system of Anchi et al. and the Compute Express Link Specification. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations; therefore, server failover could also occur with a reasonable expectation of success. Additionally, having one or more standby servers for active servers increases the reliability of the computer system (see Arata et al.: para. 0003).
Referring to claim 6:
In para. 0199, Anchi et al. disclose at least one computer readable storage medium having instructions stored thereon, which are executed by a computing system.
In para. 0023, 0048 and 0072, Anchi et al. disclose a first storage cluster including a plurality of storage servers, wherein the first storage server is connected to a first non-volatile memory (NVM) via a switch, a second storage server that is connected to the first NVM via the switch, wherein the first storage server and the second storage server are in a storage cluster, and configure the second storage server to host first data resident on the first NVM, wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (para. 0072: paths can change over time as a result of zoning and masking changes or other types of storage system reconfigurations).
However, Anchi et al. do not explicitly disclose wherein the first storage server is connected to a first non-volatile memory (NVM) via a selected compute express link (CXL) switch of a plurality of CXL switches coupled to a plurality of NVMs, and a second storage server that is connected to the first NVM via the selected CXL switch. On page 202, under section 7.1.2, the Compute Express Link Specification discloses a multiple VCS (virtual CXL switch) switch, which consists of multiple upstream ports and one or more downstream ports per VCS. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the multiple VCS switch of the Compute Express Link Specification in the system of Anchi et al. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0038, Anchi et al. disclose that other protocols and networks may be used. CXL is compatible with PCIe, disclosed by Anchi et al., and is designed to support memory devices, also disclosed by Anchi et al. A key benefit of CXL is that it provides a low-latency, high-bandwidth path for the system to access the memory attached to the CXL device (see section 1.4.1 of the Compute Express Link Specification). Further, substituting a CXL switch for the switch of Anchi et al. yields predictable results since CXL is compatible with the PCIe interconnect disclosed by Anchi et al.
In para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations. However, neither Anchi et al. nor the Compute Express Link Specification explicitly disclose detect a first failure in a first storage server of a storage cluster including a plurality of storage servers, select, from the plurality of storage servers, a second storage server (in response to the failure), and configure the second storage server to host first data resident on the first NVM (in response to the failure), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster.
In para. 0043-0045 Arata et al. disclose detect a first failure in a first storage server (para. 0044: failure management unit) of a storage cluster including a plurality of storage servers, select, from the plurality of storage servers, a second storage server (para. 0045: the failover unit selects a standby server and fails over the active server to the standby server), and configure the second storage server to host first data resident on the first NVM (para. 0045: the server fail-over unit changes over connection to a logical unit of the storage apparatus from the active server to the standby server), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (cluster rebalancing does not occur).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the management server and server failover process of Arata et al. in the combined information processing system of Anchi et al. and the Compute Express Link Specification. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations; therefore, server failover could also occur with a reasonable expectation of success. Additionally, having one or more standby servers for active servers increases the reliability of the computer system (see Arata et al.: para. 0003).
Referring to claim 11:
In para. 0023, 0048 and 0072, Anchi et al. disclose the first storage server is connected to a first non-volatile memory (NVM) via a switch, a second storage server that is connected to the first NVM via the switch, wherein the first storage server and the second storage server are in a storage cluster, and configure the second storage server to host first data resident on the first NVM, wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (para. 0072: paths can change over time as a result of zoning and masking changes or other types of storage system reconfigurations).
However, Anchi et al. do not explicitly disclose wherein the first storage server is connected to a first non-volatile memory (NVM) via a selected compute express link (CXL) switch of a plurality of CXL switches coupled to a plurality of NVMs, and a second storage server that is connected to the first NVM via the selected CXL switch. On page 202, under section 7.1.2, the Compute Express Link Specification discloses a multiple VCS (virtual CXL switch) switch, which consists of multiple upstream ports and one or more downstream ports per VCS. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the multiple VCS switch of the Compute Express Link Specification in the system of Anchi et al. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0038, Anchi et al. disclose that other protocols and networks may be used. CXL is compatible with PCIe, disclosed by Anchi et al., and is designed to support memory devices, also disclosed by Anchi et al. A key benefit of CXL is that it provides a low-latency, high-bandwidth path for the system to access the memory attached to the CXL device (see section 1.4.1 of the Compute Express Link Specification). Further, substituting a CXL switch for the switch of Anchi et al. yields predictable results since CXL is compatible with the PCIe interconnect disclosed by Anchi et al.
In para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations. In para. 0198-0199, Anchi et al. disclose a processing platform (a communication connection and a processor). However, neither Anchi et al. nor the Compute Express Link Specification explicitly disclose a semiconductor apparatus comprising a communication connection to a storage cluster including a first storage server and a second storage server; and a processor to execute a cluster service to: detect a first failure in a first storage server, select a second storage server (in response to the failure), and configure the second storage server to host first data resident on the first NVM (in response to the failure), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster.
In para. 0043-0045 and 0057, Arata et al. disclose the processor to execute a cluster service (para. 0057: a processor is disclosed) to: detect a first failure in a first storage server (para. 0044: failure management unit), select a second storage server (para. 0045: the failover unit selects a standby server and fails over the active server to the standby server), and configure the second storage server to host first data resident on the first NVM (para. 0045: the server fail-over unit changes over connection to a logical unit of the storage apparatus from the active server to the standby server), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (cluster rebalancing does not occur).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the management server and server failover process of Arata et al. in the combined information processing system of Anchi et al. and the Compute Express Link Specification. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations; therefore, server failover could also occur with a reasonable expectation of success. Additionally, having one or more standby servers for active servers increases the reliability of the computer system (see Arata et al.: para. 0003).
Referring to claim 16:
In para. 0023, 0048 and 0072, Anchi et al. disclose a first storage server of a storage cluster including a plurality of storage servers, wherein the first storage server is connected to a first non-volatile memory (NVM) via a switch, a second storage server that is connected to the first NVM via the switch, wherein the first storage server and the second storage server are in a storage cluster, and configuring the second storage server to host first data resident on the first NVM, wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (para. 0072: paths can change over time as a result of zoning and masking changes or other types of storage system reconfigurations).
However, Anchi et al. do not explicitly disclose wherein the first storage server is connected to a first non-volatile memory (NVM) via a selected compute express link (CXL) switch of a plurality of CXL switches coupled to a plurality of NVMs, and a second storage server that is connected to the first NVM via the selected CXL switch. On page 202, under section 7.1.2, the Compute Express Link Specification discloses a multiple VCS (virtual CXL switch) switch, which consists of multiple upstream ports and one or more downstream ports per VCS. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the multiple VCS switch of the Compute Express Link Specification in the system of Anchi et al. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0038, Anchi et al. disclose that other protocols and networks may be used. CXL is compatible with PCIe, disclosed by Anchi et al., and is designed to support memory devices, also disclosed by Anchi et al. A key benefit of CXL is that it provides a low-latency, high-bandwidth path for the system to access the memory attached to the CXL device (see section 1.4.1 of the Compute Express Link Specification). Further, substituting a CXL switch for the switch of Anchi et al. yields predictable results since CXL is compatible with the PCIe interconnect disclosed by Anchi et al.
In para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations. However, neither Anchi et al. nor the Compute Express Link Specification explicitly disclose detecting a first failure in a first storage server of a storage cluster including a plurality of storage servers, selecting a second storage server (in response to the failure) from the plurality of storage servers, and configuring the second storage server to host first data resident on the first NVM (in response to the failure), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster.
In para. 0043-0045 and 0057, Arata et al. disclose detecting a first failure in a first storage server (para. 0044: failure management unit) of a storage cluster including a plurality of storage servers, selecting, from the plurality of storage servers, a second storage server (para. 0045: the failover unit selects a standby server and fails over the active server to the standby server), and configuring the second storage server to host first data resident on the first NVM (para. 0045: the server fail-over unit changes over connection to a logical unit of the storage apparatus from the active server to the standby server), wherein configuring the second storage server to host the first data bypasses a cluster-wide rebalance of the storage cluster (cluster rebalancing does not occur).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the management server and server failover process of Arata et al. in the combined information processing system of Anchi et al. and the Compute Express Link Specification. A person of ordinary skill in the art would have been motivated to make the modification because, in para. 0072, Anchi et al. disclose that addition or deletion of paths can occur as a result of zoning and masking changes or other types of storage system reconfigurations; therefore, server failover could also occur with a reasonable expectation of success. Additionally, having one or more standby servers for active servers increases the reliability of the computer system (see Arata et al.: para. 0003).
Allowable Subject Matter
Claims 2-5, 12-15, and 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter.
With respect to claims 2, 7, 12, and 17, the prior art does not teach or reasonably suggest, in combination with the remaining limitations, wherein the second storage server is selected based on topology data associated with the storage cluster and one or more hash criteria.
With respect to claims 3, 8, 13, and 18, the prior art does not teach or reasonably suggest, in combination with the remaining limitations, wherein to configure the second storage server to host the first data comprises: conduct a hot-plug flow with respect to the first NVM and the second storage server, initiate a storage daemon service, and read metadata from the selected CXL switch.
With respect to claims 4, 9, 14, and 19, the prior art does not teach or reasonably suggest, in combination with the remaining limitations, further including a volatile memory connected to the first storage server and the second storage server via the selected CXL switch, wherein the processor is to configure the second storage server to host second data resident on the volatile memory in response to the first failure in the first storage server.
With respect to claims 5, 10, 15, and 20, the prior art does not teach or reasonably suggest, in combination with the remaining limitations, the processor to: detect a second failure in a second NVM of the plurality of NVMs, wherein the second NVM includes a redundant copy of the first data, establish a source-ordered virtual channel between the first NVM and a third NVM; and copy the first data from the first NVM to the third NVM over the source-ordered virtual channel via one or more unordered stream writes.
Response to Arguments
Applicant's arguments filed February 3, 2026 have been fully considered but they are not persuasive.
On page 8 under Claim Rejection – 35 U.S.C. § 101, the Applicant argues, “The definition of ‘etc.’ or et cetera is the addition of an unknown number of similar items. Seeing that et cetera specifically means ‘similar items,’ it is contrary to the well understood interpretation maxim that the list cannot be interpreted to cover dissimilar items. Thus, when Applicant's Specification explicitly calls out ‘RAM, ROM, PROM, firmware, flash memory, etc., in hardware’ (emphasis added), there cannot be any reasonable interpretation of Applicant's computer-readable storage medium as covering anything but a tangible embodiment. Furthermore, since the entire problem with transitory media is that they cannot store data but merely transmit it, it is also flatly incorrect to interpret a ‘storage medium’ as a transmission medium. Therefore, Applicant's claims are directed to statutory subject matter, and Applicant respectfully requests that the rejection of the claims be withdrawn.” The Examiner disagrees. The discussion of “etc.” does not amount to a special definition. Use of “etc.” merely means that the list is not exhaustive and that “similar” items are included. Given the broadest reasonable interpretation, a “similar” item is a signal, as a signal also performs the basic function of storing data/information. The term “etc.” follows a list of machine- or computer-readable storage media and therefore would include a signal. The Examiner notes that the term “in hardware” follows “etc.” and is not part of the list of machine- or computer-readable storage media. Given the broadest reasonable interpretation, it is proper to interpret a “storage medium” as a transmission medium since signals and carrier waves perform the basic function of storing data/information. The rejection under 35 U.S.C. 101 is maintained.
See the following MPEP sections reproduced in part:
2111 Claim Interpretation; Broadest Reasonable Interpretation:
During patent examination, the pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Federal Circuit’s en banc decision in Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) expressly recognized that the USPTO employs the "broadest reasonable interpretation" standard….
The broadest reasonable interpretation does not mean the broadest possible interpretation. Rather, the meaning given to a claim term must be consistent with the ordinary and customary meaning of the term (unless the term has been given a special definition in the specification), and must be consistent with the use of the claim term in the specification and drawings.
2111.01 Plain Meaning:
Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time…The presumption that a term is given its ordinary and customary meaning may be rebutted by the applicant by clearly setting forth a different definition of the term in the specification.
II. "PLAIN MEANING" REFERS TO THE ORDINARY AND CUSTOMARY MEANING GIVEN TO THE TERM BY THOSE OF ORDINARY SKILL IN THE ART
"[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention, i.e., as of the effective filing date of the patent application." Phillips v. AWH Corp., 415 F.3d 1303, 1313, 75 USPQ2d 1321, 1326 (Fed. Cir. 2005) (en banc); Sunrace Roots Enter. Co. v. SRAM Corp., 336 F.3d 1298, 1302, 67 USPQ2d 1438, 1441 (Fed. Cir. 2003); Brookhill-Wilk 1, LLC v. Intuitive Surgical, Inc., 334 F.3d 1294, 1298, 67 USPQ2d 1132, 1136 (Fed. Cir. 2003) ("In the absence of an express intent to impart a novel meaning to the claim terms, the words are presumed to take on the ordinary and customary meanings attributed to them by those of ordinary skill in the art.")…. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification – the greatest clarity is obtained when the specification serves as a glossary for the claim terms.
Any meaning of a claim term taken from the prior art must be consistent with the use of the claim term in the specification and drawings. Moreover, when the specification is clear about the scope and content of a claim term, there is no need to turn to extrinsic evidence for claim interpretation.
IV. APPLICANT MAY BE OWN LEXICOGRAPHER AND/OR MAY DISAVOW CLAIM SCOPE
The only exceptions to giving the words in a claim their ordinary and customary meaning in the art are (1) when the applicant acts as their own lexicographer; and (2) when the applicant disavows or disclaims the full scope of a claim term in the specification. To act as their own lexicographer, the applicant must clearly set forth a special definition of a claim term in the specification that differs from the plain and ordinary meaning it would otherwise possess.
Applicant’s arguments with respect to rejection of claim(s) 1, 6, 11, and 16, under 35 U.S.C. 103, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Prout discloses hosts connected to memory via a CXL switch.
Gouk et al. disclose memory disaggregation and performing memory disaggregation using Compute Express Link.
Agarwal discloses the evolution of CXL across versions and releases.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C MASKULINSKI whose telephone number is (571)272-3649. The examiner can normally be reached Monday-Friday 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL MASKULINSKI/Primary Examiner, Art Unit 2113