DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

As per claim 1, the claim recites a method and is therefore a process. The limitations ". . . generating models of input-output (IO) response time for a first compute node and a second compute node of a storage engine, where the first compute node is connected with the second compute node via a fabric-less link between switches . . . using the models to compute . . .", as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process.

The limitation ". . . receipt of a first IO . . ." amounts to data gathering, which is considered insignificant extra-solution activity (MPEP 2106.05(g)); this limitation is also a mere generic transmission and presentation of collected and analyzed data, which is likewise insignificant extra-solution activity (MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

As discussed above, ". . . receipt of a first IO . . .
" amounts to data gathering, which is considered insignificant extra-solution activity (MPEP 2106.05(g)); "allocating a data slot in volatile memory of the second compute node for servicing the first IO via the fabric-less link" is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d) and the Berkheimer Memo. See Tummala. The claim is ineligible.

As per claim 2, see rejection on claim 1. ". . . using cut-through mode . . ." is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d) and the Berkheimer Memo. See Primadani. The claim is ineligible.

As per claim 3, see rejection on claim 1. The limitation ". . . using the models to compute that workload . . .", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process.

As per claim 4, see rejection on claim 1. The limitation ". . . using the models to compute that workload . . .", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process.

As per claim 5, see rejection on claim 4. ". . . using cut-through mode . . ." is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d) and the Berkheimer Memo. See Primadani. The claim is ineligible.

As per claim 6, see rejection on claim 1. The limitation ". . . monitoring . . .
", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process.

As per claim 7, ". . . inputting . . ." amounts to data gathering, which is considered insignificant extra-solution activity (MPEP 2106.05(g)); this limitation is also a mere generic transmission and presentation of collected and analyzed data, which is likewise insignificant extra-solution activity (MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

As per claims 8-14, see rejections on claims 1-7. As per claims 15-20, see rejections on claims 1-6.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 8, 10-11, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li (US 2017/0109207) (hereinafter Li) in view of Tummala et al. (US 10,296,255) (hereinafter Tummala), further in view of Luo et al. (CN 110035022 A) (hereinafter Luo).

As per claim 1, Li teaches: A method comprising: generating models of time for a first compute node and a second compute node of a storage engine, where the first compute node is connected with the second compute node via a link (Li, [0020], [0023]—under BRI, models of time for a first compute node and a second compute node can be predicted execution times of overloaded and idle nodes).

Li does not expressly teach: responsive to receipt of a first task by the first compute node, using the models to compute that workload on the first compute node exceeds workload on the second compute node by a predetermined amount; and responsive to workload on the first compute node exceeding workload on the second compute node by the predetermined amount, allocating a data slot in volatile memory of the second compute node for servicing the first IO via the fabric-less link;
wherein the time is IO response time; wherein the engine is a storage engine; wherein the link is a fabric-less link between switches; wherein the task is IO.

However, Tummala discloses: responsive to receipt of a first task by the first compute node, using the models to compute that workload on the first compute node exceeds workload on the second compute node by a predetermined amount (Tummala, col. 12, ll. 41-45—under BRI, responsive to receipt of a first task can be alive/not dead [a synonym for responsive] to receipt of an application on source storage); and responsive to workload on the first compute node exceeding workload on the second compute node by the predetermined amount, allocating a data slot in memory of the second compute node for servicing the first task via the link (Tummala, col. 12, ll. 58-60—under BRI, allocating a data slot in memory of the second compute node for servicing the first IO can be allocating the memory/volume used to accept data on target storage); wherein the engine is a storage engine (Tummala, Fig. 8); wherein the task is IO (Tummala, col. 13, ll. 4-9); wherein the memory is volatile (Tummala, col. 25, l. 36).

Both Tummala and Li pertain to the art of load balancing. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Tummala's method to migrate data because it is well known in the art that load balancing optimizes network operations by distributing incoming traffic across multiple servers, ensuring high availability, improved performance, and seamless scalability.
Li/Tummala does not expressly teach: wherein the time is IO response time; wherein the link is a fabric-less link between switches.

However, Luo discloses: wherein the time is IO response time (Luo, claim 1—under BRI, IO response time can be cache read time slice); wherein the link is a fabric-less link between switches (Luo, [0049]—under BRI, a fabric-less link between switches can be a local bus).

Both Luo and Li/Tummala pertain to the art of IO systems. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Luo's method to use an IO response time and a fabric-less link because it is well known in the art that connecting switches without a shared fabric is typically done for reasons of cost, scalability, management simplicity, and architectural flexibility. A PHOSITA would know to track IO response time because IO response time, or latency, measures the time a storage system takes to complete a read/write request, which is crucial for performance and user experience.

As per claim 3, Li/Tummala/Luo teaches: The method of claim 1 (see rejection on claim 1) further comprising using the models to compute that workload on the first compute node as represented by response time exceeds workload on the second compute node as represented by response time by the predetermined amount (Li, [0020], [0023]).

As per claim 4, Li/Tummala/Luo teaches: The method of claim 1 (see rejection on claim 1) further comprising, responsive to receipt of a second IO by the first compute node, using the models to compute that workload on the first compute node as represented by response time does not exceed workload on the second compute node as represented by response time by a predetermined amount (Li, [0012]) and, in response, allocating a data slot in volatile memory of the first compute node for servicing the second IO (Li, [0012]).

As per claim 8, see rejection on claim 1.
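For illustration only, and not as part of the claim mappings above, the load-balancing logic recited in claims 1, 3, and 4 can be sketched as follows. All identifiers are hypothetical, and the moving-average response-time model is an assumption; the claims do not specify how the models are generated.

```python
# Illustrative sketch of the claimed logic (claims 1, 3-4); not from the record.
# The moving-average model and all names are hypothetical assumptions.
from collections import deque


class ResponseTimeModel:
    """Models a compute node's workload "as represented by response time"
    (claim 3) as a moving average of recent observed IO response times."""

    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def record(self, response_time_ms):
        self.samples.append(response_time_ms)

    def workload(self):
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)


def select_node_for_io(model_a, model_b, threshold_ms):
    """Decide where to allocate the data slot for an incoming IO.

    If node A's modeled workload exceeds node B's by the predetermined
    amount, allocate the slot in node B's volatile memory and service the
    IO via the inter-node link (claim 1); otherwise allocate the slot
    locally on node A (claim 4)."""
    if model_a.workload() - model_b.workload() > threshold_ms:
        return "remote"  # data slot on the second compute node
    return "local"       # data slot on the first compute node


# Example: node A heavily loaded relative to node B.
a, b = ResponseTimeModel(), ResponseTimeModel()
for t in (12.0, 14.0, 13.0):
    a.record(t)
for t in (2.0, 3.0):
    b.record(t)
print(select_node_for_io(a, b, threshold_ms=5.0))  # -> remote
```

The sketch is intended only to show the decision structure the rejection maps onto Li (response-time models), Tummala (threshold comparison and remote slot allocation), and Luo (IO response time over a fabric-less link).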
As per claim 10, see rejection on claim 3. As per claim 11, see rejection on claim 4. As per claim 15, see rejection on claim 1. As per claim 17, see rejection on claim 3. As per claim 18, see rejection on claim 4.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li/Tummala/Luo as applied above, and further in view of Primadani et al. (US 2011/0154165) (hereinafter Primadani).

As per claim 2, Li/Tummala/Luo teaches: The method of claim 1 (see rejection on claim 1). Li/Tummala/Luo does not expressly teach: further comprising using cut-through mode on remote read using dual-casting. However, Primadani discloses: further comprising using cut-through mode on remote read using dual-casting (Primadani, [0052]). Both Primadani and Li/Tummala/Luo pertain to the art of IO systems. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Primadani's method to use cut-through mode on remote read using dual-casting because it is well known in the art that cut-through mode in PCI Express (PCIe) switches significantly reduces transaction layer packet (TLP) latency by forwarding data immediately after reading the address header, rather than waiting to receive and verify the entire packet.

As per claim 9, see rejection on claim 2. As per claim 16, see rejection on claim 2.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Li/Tummala/Luo as applied above, and further in view of Cheng et al. (WO 2021/184741) (hereinafter Cheng).

As per claim 5, Li/Tummala/Luo teaches: The method of claim 4 (see rejection on claim 4). Li/Tummala/Luo does not expressly teach: further comprising using cut-through mode on local read. However, Cheng discloses: further comprising using cut-through mode on local read (Cheng, S04—pass-through read). Both Cheng and Li/Tummala/Luo pertain to the art of IO systems.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Cheng's method to use cut-through mode on local read because it is well known in the art that cut-through mode in PCI Express (PCIe) switches significantly reduces transaction layer packet (TLP) latency by forwarding data immediately after reading the address header, rather than waiting to receive and verify the entire packet.

As per claim 12, see rejection on claim 5. As per claim 19, see rejection on claim 5.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2016/0034300 teaches a method of updating a memory table in response to a memory access exception.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLIE SUN whose telephone number is (571) 270-5100. The examiner can normally be reached 9AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLIE SUN/
Primary Examiner, Art Unit 2198