Prosecution Insights
Last updated: April 19, 2026
Application No. 18/775,302

MAINTAINING CONNECTION WITH CXL HOST ON RESET

Status: Final Rejection (§103)
Filed: Jul 17, 2024
Examiner: KRIEGER, JONAH C
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 2 (Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 86% — above average (127 granted / 147 resolved; +31.4% vs TC avg)
Interview Lift: +8.2% — moderate lift among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 31 applications currently pending
Career History: 178 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 69.8% (+29.8% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Tech Center averages are estimates; based on career data from 147 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1, 11 and 17-18 have been amended. No claims have been cancelled or added. Claims 1-20 remain pending and are ready for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US Publication No.
2025/0307171 – “Lee”) in view of Inbar et al. (US Publication No. 2024/0296097 – “Inbar”) in further view of Miller et al. (US Publication No. 2023/0138817 – “Miller”).

Regarding claim 1, Lee teaches A memory device comprising: a host interface circuit configured to receive commands from a host device via a compute express link (CXL) interconnect; (Lee paragraph [0007], According to an aspect of one or more embodiments, there is provided a computing system comprising a host; a memory including a volatile memory and a memory controller; and a storage device that is connected with the host through a first interface and that includes a nonvolatile memory and a storage controller, the storage device being configured to communicate with the host through a first port, to communicate with the memory through a second port, and to manage the memory. A host may be connected through an interface to execute commands and operations with a storage device, which can be done through a CXL link, see Lee paragraph [0039], In an embodiment, the host 101, the CXL storage 110, and the CXL memory 120 may be configured to share the same interface. For example, the host 101, the CXL storage 110, and the CXL memory 120 may communicate with each other through a CXL interface IF_CXL) and processing logic circuitry configured to control transactions with a memory device; (Lee paragraph [0039], The CXL memory 120 may include a CXL memory controller 121 and a buffer memory BFM. Under control of the host 101, the CXL memory controller 121 may store data in the buffer memory BFM or may send data stored in the buffer memory BFM to the host 101. In an embodiment, the buffer memory BFM may be a DRAM, but the present disclosure is not limited thereto. In an embodiment, the host 101, the CXL storage 110, and the CXL memory 120 may be configured to share the same interface. For example, the host 101, the CXL storage 110, and the CXL memory 120 may communicate with each other through a CXL interface IF_CXL. A memory controller connected through the CXL can be used to implement commands to the memory device).

Lee does not teach wherein the host interface circuit is configured to suppress communication with the host device by pausing or stopping memory device commands, and wherein the host interface circuit maintains a connection with the host device via the CXL interface while the processing logic circuitry undergoes a reset that is initiated internally to the memory device.

However, Inbar teaches maintain a connection … while the processing logic circuitry undergoes a reset that is initiated internally to the memory device (Inbar paragraph [0055], The following embodiments can be used to enhance the recovery of a memory element with an internal reset/power cycle of non-functionals memory die(s) through an agreed protocol with the host. This can result is a greatly-reduced chance of the negative impact of a die removal. For example, performing a hardware reset (e.g., power cycling or performing a hard reset) of a memory die can clear information stored in latches/cache of the memory die, state machine information, interface settings/information, etc. that can be corrupted and cause the memory die to fail. This can recover data storage device components (e.g., NAND dies) that methods, such as exclusive-or (XOR) or a software reset, cannot. This can also help reduce the number of retired dies and help improve performance, overprovisioning, and capacity. A reset command can be initiated by the memory device, which can be implemented towards the control circuitry, see Inbar paragraph [0017], In some embodiments, the controller is further configured to perform the hardware reset on the subset of the plurality of memory dies by: sending a command to all memory dies of the plurality of memory dies to ignore a hardware reset command, wherein because the subset of the plurality of memory dies is non-responsive, the subset of the plurality of memory dies does not receive the command to ignore the hardware reset command; and sending the hardware reset command on a communication channel shared by the plurality of memory dies).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Inbar. Inbar teaches performing a reset command initiated internal to the memory device, which can improve memory function by minimizing the processing power/interruption, as well as reducing the impact of memory resetting (i.e., see Inbar paragraph [0055], The following embodiments can be used to enhance the recovery of a memory element with an internal reset/power cycle of non-functionals memory die(s) through an agreed protocol with the host. This can result is a greatly-reduced chance of the negative impact of a die removal. For example, performing a hardware reset (e.g., power cycling or performing a hard reset) of a memory die can clear information stored in latches/cache of the memory die, state machine information, interface settings/information, etc. that can be corrupted and cause the memory die to fail. This can recover data storage device components (e.g., NAND dies) that methods, such as exclusive-or (XOR) or a software reset, cannot. This can also help reduce the number of retired dies and help improve performance, overprovisioning, and capacity).
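As a purely illustrative aside, the behavior the rejection maps for claim 1 — pausing command intake while the CXL link stays up during a device-initiated reset — can be sketched in a few lines. This sketch is not part of the record; every class and attribute name is hypothetical.

```python
# Hypothetical sketch of claim 1's recited behavior. Nothing here comes
# from Lee, Inbar, or Miller; the structures are invented for illustration.

class HostInterface:
    """Models the host interface circuit on the CXL side."""
    def __init__(self):
        self.link_up = True      # CXL connection with the host device
        self.accepting = True    # whether memory device commands are processed

    def suppress(self):
        # "Suppress communication" by pausing/stopping device commands,
        # without tearing down the CXL link itself.
        self.accepting = False

    def resume(self):
        self.accepting = True


class ProcessingLogic:
    """Models the processing logic circuitry that undergoes the reset."""
    def __init__(self):
        self.state = "running"

    def reset(self):
        self.state = "reset"
        self.state = "running"   # back up after the internal reset completes


class MemoryDevice:
    def __init__(self):
        self.host_if = HostInterface()
        self.logic = ProcessingLogic()

    def internal_reset(self):
        # The reset is initiated by the device itself, not by the host.
        self.host_if.suppress()
        assert self.host_if.link_up      # connection maintained throughout
        self.logic.reset()
        self.host_if.resume()


dev = MemoryDevice()
dev.internal_reset()
print(dev.host_if.link_up, dev.host_if.accepting)  # True True
```

The point of the sketch is the ordering: suppression and the logic reset happen while the link-state flag is never touched, which is the distinction the rejection attributes to Inbar and Miller rather than Lee.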
Lee in view of Inbar does not teach wherein the host interface circuit is configured to suppress communication with the host device by pausing or stopping memory device commands, and wherein the host interface circuit maintains a connection with the host device via the CXL interface.

However, Miller teaches wherein the host interface circuit is configured to suppress communication with the host device by pausing or stopping memory device commands, and wherein the host interface circuit maintains a connection with the host device via the CXL interface (Miller paragraph [0024], For some embodiments, including those that employ a CXL external interface such as that described below with respect to FIG. 5, the automatic secure recovery technique provides a way to preserve operability of the CXL interface even during the failure mode of operation. In such a circumstance, separate reset zones may be configured for the multi-processor device 100 to allow for partial operability in one region of the multi-processor device 100, while allowing for partial resetting of other non-operating regions of the multi-processor device 100. Partitioning reset zones in this manner provides operational flexibility such that the primary processor 102 is not necessarily required for the CXL interface to successfully operate. As a result, recovery operations of the primary processor 102 may be carried out as background operations without affecting memory access operations that are being carried out over the CXL interface. For some embodiments, however, pausing of CXL-related command processing, log writing, and so forth may occur over the CXL interface during the failure mode of operation. The connection between the host and the CXL interface may be maintained while host commands/memory access operations are paused, also see Miller paragraph [0026]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Inbar with those of Miller. Miller teaches maintaining a connection between a host and a CXL interface which may improve the reliability of the memory system and associated data recovery (i.e., see Miller paragraphs [0025-0026], The multi-processor device 100 and the associated recovery methods described above lend themselves well to applications involving distributed processing with hardware-based security schemes. In the field of distributed memory processing, CXL Type 3 devices, such as CXL buffers, may exhibit significantly improved reliability through adoption of the multi-processor device structures and associated methods disclosed herein. FIG. 5 illustrates one specific embodiment of a memory system, generally designated 500, that employs a CXL Type 3 memory device in the form of a CXL buffer 510. The memory system 500 includes a host 502 that interfaces with a memory module 504 primarily through a CXL link 506. For one embodiment, the host includes a host CXL interface controller 508 for communicating over the CXL link 506 utilizing protocols consistent with the CXL standards, such as CXL.io and CXL.mem. For some embodiments that involve CXL Type 2 devices, an additional CXL.cache protocol may also be utilized).

Regarding claim 2, Lee in view of Inbar in further view of Miller teaches The memory device of claim 1, wherein the reset is in response to a detected device fault of the memory device (Inbar paragraph [0061], In another embodiment, there can be some heuristic that the former protocol of retirement is triggered if a die exhibits “no-response” for several times in a certain period of time. This embodiment is shown in the flow chart 600 in FIG. 6. After die non-responsiveness has been detected (act 610), the controller 102 determines whether the die has shown non-responsiveness over a period of time (e.g., in the last X minutes) (act 620). (Act 620 can be modified with another heuristic that considers irregular behavior of the subject die other than “no-response,” such as irregular power, read/write latency profile, or any other characteristic that indicates that a component in the data storage device 100 is non-functional or is encountering a fault). The reset of the memory device can be initiated due to a device fault/non-functioning detection).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Inbar. Inbar teaches performing a reset command initiated internal to the memory device, which can improve memory function by minimizing the processing power/interruption, as well as reducing the impact of memory resetting (i.e., see Inbar paragraph [0055], The following embodiments can be used to enhance the recovery of a memory element with an internal reset/power cycle of non-functionals memory die(s) through an agreed protocol with the host. This can result is a greatly-reduced chance of the negative impact of a die removal. For example, performing a hardware reset (e.g., power cycling or performing a hard reset) of a memory die can clear information stored in latches/cache of the memory die, state machine information, interface settings/information, etc. that can be corrupted and cause the memory die to fail. This can recover data storage device components (e.g., NAND dies) that methods, such as exclusive-or (XOR) or a software reset, cannot. This can also help reduce the number of retired dies and help improve performance, overprovisioning, and capacity).
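The fault heuristic Inbar's paragraph [0061] describes for claim 2 — act only when non-responsiveness repeats within a window — reduces to a simple threshold check. The sketch below is illustrative only; the window, threshold, and function name are invented, not taken from the reference.

```python
# Hypothetical sketch of the repeated-non-responsiveness heuristic from
# Inbar [0061]. Window and threshold values are invented for illustration.

def should_hard_reset(failure_times, now, window=600.0, threshold=3):
    """Return True if `threshold` or more no-response events occurred in
    the last `window` seconds before `now` (times are seconds)."""
    recent = [t for t in failure_times if now - t <= window]
    return len(recent) >= threshold

# Three events inside a 600-second window trigger the reset;
# a single isolated glitch does not.
print(should_hard_reset([10.0, 200.0, 550.0], now=600.0))  # True
print(should_hard_reset([10.0], now=600.0))                # False
```

As the quoted passage notes, the same gate could be keyed to other irregular behavior (power, read/write latency) instead of no-response counts.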
Regarding claim 3, Lee in view of Inbar in further view of Miller teaches The memory device of claim 1, wherein the reset is in response to a firmware update for the memory device (Inbar paragraph [0058], As used herein, a “hardware reset” can refer to a power cycling operation or a hard reset that restores a component to the state it was in when it left the factory. With a hardware reset, settings, applications, and user data can be removed. In contrast, a “firmware (or soft) reset” can refer to a restart of a component to clear data from volatile memory and restart an application without shutting down the component completely. A firmware/hardware update can be used to initiate a reset operation).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Inbar. Inbar teaches performing a reset command initiated internal to the memory device, which can improve memory function by minimizing the processing power/interruption, as well as reducing the impact of memory resetting (i.e., see Inbar paragraph [0055], The following embodiments can be used to enhance the recovery of a memory element with an internal reset/power cycle of non-functionals memory die(s) through an agreed protocol with the host. This can result is a greatly-reduced chance of the negative impact of a die removal. For example, performing a hardware reset (e.g., power cycling or performing a hard reset) of a memory die can clear information stored in latches/cache of the memory die, state machine information, interface settings/information, etc. that can be corrupted and cause the memory die to fail. This can recover data storage device components (e.g., NAND dies) that methods, such as exclusive-or (XOR) or a software reset, cannot. This can also help reduce the number of retired dies and help improve performance, overprovisioning, and capacity).

Claims 4-7 are rejected under 35 U.S.C.
103 as being unpatentable over Lee in view of Inbar in further view of Miller as applied to claim 1 above, and further in view of Ponnuru et al. (US Publication No. 2024/0303380 – “Ponnuru”).

Regarding claim 4, Lee in view of Inbar in further view of Miller and further in view of Ponnuru teaches The memory device of claim 1, wherein in an autonomous operating mode, the host interface circuit is configured to manage CXL and side-band commands received from the host device; (Ponnuru paragraph [0062], Embodiments of the present disclosure provide a multi-Function FRU representation system 300 that enables techniques for a host (e.g., RAC 230) to distinguish if a multi-Function PCIe/CXL component is composed of a single FRU or composed of multiple FRUs, and in the event it is composed of a single FRU, then this disclosure provides a way to authenticate each Function in the FRU without the burden of reading the full certificate chain from each Function (a heavy operation, especially for SMBus). Host interface circuitry can be utilized to perform sideband and CXL commands, also see Ponnuru Fig. 1 and paragraph [0030], As indicated in FIG. 1, chassis 100 may also include one or more storage sleds 115n that provide access to storage drives 175n via a storage controller 195. In some embodiments, storage controller 195 may provide support for RAID (Redundant Array of Independent Disks) configurations of logical and physical storage drives, such as storage drives provided by storage sled 115n. In some embodiments, storage controller 195 may be a HBA (Host Bus Adapter) that provides more limited capabilities in accessing storage drives 175n) and wherein in a distribution operating mode, the processing logic circuitry is configured to manage CXL and side-band commands received from the host device (Ponnuru paragraph [0056], Remote access controller 230 supports monitoring and administration of the managed devices of an IHS via a sideband bus 253. For instance, messages utilized in device and/or system management may be transmitted using I2C sideband bus 253 connections that may be individually established with each of the respective managed devices 205, 235a-b, 240, 250, 255, 260 of the IHS 200 through the operation of an I2C multiplexer 230d of the remote access controller. In certain operation modes, a side-band command system may be utilized to perform commands from the host. This can also include CXL based commands, see Ponnuru paragraph [0061], The SPDM protocol used by the RAC may require a mechanism to extend the certificate retrieval mechanism to specify the PCIe/CXL Device/Function path from which to retrieve the certificate (e.g., OEM command or a new standard command)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Inbar and Miller with those of Ponnuru. Ponnuru teaches using processing logic circuitry to perform side-band and CXL commands, which can allow for more efficient device management and messaging (i.e., see Ponnuru paragraph [0056], Remote access controller 230 supports monitoring and administration of the managed devices of an IHS via a sideband bus 253. For instance, messages utilized in device and/or system management may be transmitted using I2C sideband bus 253 connections that may be individually established with each of the respective managed devices 205, 235a-b, 240, 250, 255, 260 of the IHS 200 through the operation of an I2C multiplexer 230d of the remote access controller. As illustrated in FIG. 2, the managed devices 205, 235a-b, 240, 250, 255, 260 of IHS 200 are coupled to the CPUs 205, either directly or directly, via in-line buses that are separate from the I2C sideband bus 253 connections used by the remote access controller 230 for device management).
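Claim 4's two operating modes amount to a routing decision: the same two command classes (CXL and side-band) are handled by different circuit blocks depending on mode. The sketch below is illustrative only; the mode names come from the claim language, while the function and return strings are invented.

```python
# Hypothetical sketch of claim 4's mode-dependent command handling.
# "autonomous" and "distribution" are the claimed mode names; the
# handler labels are invented for illustration.

def route_command(mode, kind):
    """Return which block handles a host command of `kind`
    ('cxl' or 'side-band') under the given operating mode."""
    if kind not in ("cxl", "side-band"):
        raise ValueError(f"unknown command kind: {kind}")
    if mode == "autonomous":
        # Host interface circuit manages both command classes.
        return "host_interface_circuit"
    if mode == "distribution":
        # Processing logic circuitry manages both command classes.
        return "processing_logic_circuitry"
    raise ValueError(f"unknown mode: {mode}")

print(route_command("autonomous", "side-band"))  # host_interface_circuit
print(route_command("distribution", "cxl"))      # processing_logic_circuitry
```

Note that under this reading the mode, not the command class, selects the handler — which is why the examiner maps both limitations to side-band management disclosure in Ponnuru.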
Regarding claim 5, Lee in view of Inbar in further view of Miller and further in view of Ponnuru teaches The memory device of claim 4, wherein in the autonomous operating mode and in the distribution operating mode, the host interface circuit is configured to coordinate a device response to any one or more of a PCIe reset, a CXL reset, a transaction layer packet, and a side-band request received from the host device (Ponnuru paragraph [0054], In some embodiments, remote access controller 230 may implement monitoring and management operations using MCTP (Management Component Transport Protocol) messages that may be communicated to managed devices 205, 235a-b, 240, 250, 255, 260 via management connections supported by a sideband bus 253. In some embodiments, the remote access controller 230 may additionally or alternatively use MCTP messaging to transmit Vendor Defined Messages (VDMs) via the in-line PCIe switch fabric supported by PCIe switches 265a-b. In some instances, the sideband management connections supported by remote access controller 230 may include PLDM (Platform Level Data Model) management communications with the managed devices 205, 235a-b, 240, 250, 255, 260 of IHS 200. Side-band requests/communications can be transmitted and responded to via the host circuitry).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Inbar and Miller with those of Ponnuru. Ponnuru teaches using processing logic circuitry to perform side-band and CXL commands, which can allow for more efficient device management and messaging (i.e., see Ponnuru paragraph [0056], Remote access controller 230 supports monitoring and administration of the managed devices of an IHS via a sideband bus 253. For instance, messages utilized in device and/or system management may be transmitted using I2C sideband bus 253 connections that may be individually established with each of the respective managed devices 205, 235a-b, 240, 250, 255, 260 of the IHS 200 through the operation of an I2C multiplexer 230d of the remote access controller. As illustrated in FIG. 2, the managed devices 205, 235a-b, 240, 250, 255, 260 of IHS 200 are coupled to the CPUs 205, either directly or directly, via in-line buses that are separate from the I2C sideband bus 253 connections used by the remote access controller 230 for device management).

Regarding claim 6, Lee in view of Inbar in further view of Miller and further in view of Ponnuru teaches The memory device of claim 4, further comprising a cache memory; wherein in the autonomous operating mode, the processing logic circuitry is configured to load contents of the cache memory to the memory device before initiating the reset (Inbar paragraph [0048], Non-volatile memory die 104 includes peripheral circuitry 141 and non-volatile memory array 142. Non-volatile memory array 142 includes the non-volatile memory cells used to store data. The non-volatile memory cells may be any suitable non-volatile memory cells, including ReRAM, MRAM, PCM, NAND flash memory cells and/or NOR flash memory cells in a two-dimensional and/or three-dimensional configuration. Non-volatile memory die 104 further includes a data cache 156 that caches data. Peripheral circuitry 141 includes a state machine 152 that provides status information to the controller 102. The NVM may contain a section for caching data, which can be done before performing a reset, enabling data recovery, see Inbar paragraph [0050], The FTL may include a logical-to-physical address (L2P) map (sometimes referred to herein as a table or data structure) and allotted cache memory. In this way, the FTL translates logical block addresses (“LBAs”) from the host to physical addresses in the memory 104. The FTL can include other features, such as, but not limited to, power-off recovery (so that the data structures of the FTL can be recovered in the event of a sudden power loss) and wear leveling (so that the wear across memory blocks is even to prevent certain blocks from excessive wear, which would result in a greater chance of failure).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Inbar. Inbar teaches performing a reset command initiated internal to the memory device, which can improve memory function by minimizing the processing power/interruption, as well as reducing the impact of memory resetting (i.e., see Inbar paragraph [0055], The following embodiments can be used to enhance the recovery of a memory element with an internal reset/power cycle of non-functionals memory die(s) through an agreed protocol with the host. This can result is a greatly-reduced chance of the negative impact of a die removal. For example, performing a hardware reset (e.g., power cycling or performing a hard reset) of a memory die can clear information stored in latches/cache of the memory die, state machine information, interface settings/information, etc. that can be corrupted and cause the memory die to fail. This can recover data storage device components (e.g., NAND dies) that methods, such as exclusive-or (XOR) or a software reset, cannot. This can also help reduce the number of retired dies and help improve performance, overprovisioning, and capacity).
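The ordering constraint in claim 6 — cache contents are written back to the memory array before the reset wipes volatile state — can be made concrete in a few lines. This is an illustrative sketch only; the dict-based cache and memory model is invented, not drawn from Inbar's FTL disclosure.

```python
# Hypothetical sketch of claim 6's write-back-before-reset ordering.
# Cache and memory are modeled as address -> bytes dicts for illustration.

def reset_with_writeback(cache, memory):
    """Flush all cached lines to memory, then model the reset by clearing
    the (volatile) cache. Returns the post-reset memory image."""
    memory.update(cache)   # write-back happens first, so no data is lost
    cache.clear()          # the reset wipes volatile cache state
    return memory

mem = {0x00: b"old"}
cache = {0x00: b"new", 0x40: b"dirty"}
mem = reset_with_writeback(cache, mem)
print(mem[0x00], len(cache))  # b'new' 0
```

Reversing the two steps inside the function would silently drop the dirty lines, which is precisely the hazard the claimed ordering avoids.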
Regarding claim 7, Lee in view of Inbar in further view of Miller and further in view of Ponnuru teaches The memory device of claim 4, further comprising the memory device, wherein the memory device is a DRAM memory device (Lee paragraph [0032], In an embodiment, the buffer memory 13b may be a high-speed memory such as a DRAM. As the capacity of the nonvolatile memory 13c increases, the size of necessary map data may increase. However, because the capacity of the buffer memory 13b included in the single storage device 13 is limited, it is impossible to cope with the increase in the size of the map data due to the increase in the capacity of the nonvolatile memory 13c).

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Inbar in further view of Miller as applied to claim 1 above, and further in view of Steed et al. (US Publication No. 2024/0134757 – “Steed”).

Regarding claim 8, Lee in view of Inbar in further view of Miller and further in view of Steed teaches The memory device of claim 4, wherein the processing logic circuitry is configured to determine the memory device is initialized prior to using the autonomous operating mode (Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. The memory device may be initialized to operate in the autonomous mode).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Inbar and Miller with those of Steed. Steed teaches initializing a memory device to operate prior to performing operations in autonomous mode in preparation for a trigger event, such as a power loss, resulting in improved reliability for the CXL system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110). 
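Claim 8's limitation is a gating check: the autonomous mode may only be entered once initialization has completed. The sketch below is illustrative only; the flags and method names are invented and do not come from Steed.

```python
# Hypothetical sketch of claim 8's initialization gate. All names are
# invented for illustration.

class Device:
    def __init__(self):
        self.initialized = False
        self.mode = "default"

    def finish_init(self):
        self.initialized = True

    def enter_autonomous_mode(self):
        # The claimed check: autonomous operation requires a completed init.
        if not self.initialized:
            raise RuntimeError("autonomous mode requires an initialized device")
        self.mode = "autonomous"

d = Device()
try:
    d.enter_autonomous_mode()   # rejected before initialization
except RuntimeError:
    pass
d.finish_init()
d.enter_autonomous_mode()       # allowed afterwards
print(d.mode)  # autonomous
```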
Regarding claim 9, Lee in view of Inbar in further view of Miller and further in view of Steed teaches The memory device of claim 4, comprising a logic circuit configured to initiate the autonomous operating mode (1) following a cold boot and before the memory device is enabled by the host device, or (2) in response to a reset command received from the host device, or (3) in response to a reset initiated by the memory device (Steed paragraph [0032], The NVDIMM monitors the CPU, memory controller clock, self-refresh, and power supplies and in the event of detected failures to intelligently and autonomously initiate DRAM self-refresh, switches the memory bus to the NVDIMM, and backs up DRAM data to flash. A self-refresh/reset may initialize the autonomous operation mode, also see Steed paragraph [0095], Following are examples of autonomous self-refresh modes that the present invention can be configured to provide. In a first example, a configuration of non-volatile flash DIMM, also referred to as a NVDIMM, can be used to backup volatile DRAMs. A memory controller, such as the non-volatile controller unit 202 or FPGA, completes all active memory cycles (closes all open rows and banks) and activates signals to trigger self-refresh of the DRAMs).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Inbar and Miller with those of Steed.
Steed teaches initializing a memory device to operate prior to performing operations in autonomous mode in preparation for a trigger event, such as a power loss, resulting in improved reliability for the CXL system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110).

Regarding claim 10, Lee in view of Inbar in further view of Miller and further in view of Steed teaches The memory device of claim 4, wherein in the autonomous operating mode, the host interface circuit is configured to suppress communication with the host device by representing zero CXL transaction credits are available to the host device (Steed paragraph [0145-0146], For a CXL module managed failure, a CXL device interrupt is generated and the host processes Event Records. If the NV Save is successful, the CXL device state is set to Clean. Otherwise, the state will be Dirty as described above. For a module surprise failure, there are many possible failure modes such as: CXL Interface, Memory Buffer, on-board regulator failures, etc. In some embodiments, the CXL device interrupt, host process of Event Records, and GPF Phase 1 and 2 are incomplete or not possible to complete. In these cases, the NVRAM may autonomously perform an NV Save, but set the State to Dirty. The DSC may also be incremented prior to NV Restore. In some embodiments, Management Component Transport Protocol (MCTP) OOB access to the NVRAM subsystem allows event reporting and/or recovery. In the autonomous mode, the host may not communicate directly with the CXL by representing the CXL data as dirty, i.e., having no transaction credits pending).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Inbar and Miller with those of Steed. Steed teaches initializing a memory device to operate prior to performing operations in autonomous mode in preparation for a trigger event, such as a power loss, resulting in improved reliability for the CXL system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110).

Claims 11 and 15-17 are rejected under 35 U.S.C.
103 as being unpatentable over Lee et al. (US Publication No. 2025/0307171 – “Lee”) in view of Steed et al. (US Publication No. 2024/0134757 – “Steed”) in further view of Miller et al. (US Publication No. 2023/0138817 – “Miller”). Regarding claim 11, Lee teaches A method comprising: detecting a reset condition at a memory device, wherein the memory device is coupled to a host device using a compute express link (CXL) interconnect; (Lee paragraph [0007], According to an aspect of one or more embodiments, there is provided a computing system comprising a host; a memory including a volatile memory and a memory controller; and a storage device that is connected with the host through a first interface and that includes a nonvolatile memory and a storage controller, the storage device being configured to communicate with the host through a first port, to communicate with the memory through a second port, and to manage the memory. A host may be connected through an interface to execute commands and operations with a storage device, which can be done through a CXL link, see Lee paragraph [0039], In an embodiment, the host 101, the CXL storage 110, and the CXL memory 120 may be configured to share the same interface. For example, the host 101, the CXL storage 110, and the CXL memory 120 may communicate with each other through a CXL interface IF_CXL) in response to the reset condition: during a first phase, quiescing transactions with the host device, using a host interface circuit of the memory device to maintain a connection with the host device, and resetting processing logic circuitry of the memory device; (Lee paragraph [0160-0161], FIG. 9 is a flowchart illustrating a power-off operation of a computing system of FIG. 3A, according to some embodiments. In an embodiment, a power-off operation of a computing system will be described with reference to FIG. 9, but the present disclosure is not limited thereto. 
For example, it may be understood that the operating method of FIG. 9 is applicable to the power-off operation or reset operation of each of various components (e.g., a host, CXL storage, a CXL memory, and a CXL switch) included in the computing system. Referring to FIGS. 3A and 9, in operation POF-S10, the host 201 may output power-off information IFM_off through the CXL host interface circuit 201a. The host 201 may send the power-off information IFM_off to the CXL storage 210 through the first interface IF1. For example, the host 201 may recognize or detect information about power-off of the computing system 100. The CXL storage 210 may receive the power-off information IFM_off, which allows the CXL storage 210 to perform the power-off operation, through the first CXL storage interface circuit 211a (or the first port PT1). In a reset/power-off operation, the communication can be stopped/halted while the reset is performed for the memory device) and during a third phase that follows the second phase, using the host interface circuit of the memory device and the processing logic circuitry of the memory device to manage transactions with the host device (Lee paragraph [0186-0187], In an embodiment, because the CXL memory 320 is not connected with the CXL switch SW_CXL and is directly connected with the CXL storage 310, the host 301 and the CXL memory 320 may not communicate with each other. The host 301 may not access the entire area of the CXL memory 320. As described above, compared to the computing system 200 of FIG. 3B, the computing system 300 of FIG. 10 may further include the CXL switch SW_CXL. The computing system 300 may perform the initialization operation, the read operation, the write operation, and the power-off operation based on the manners described with reference to FIGS. 4 to 9. However, the communications between the host 301 and the CXL storage 310 may be performed through the CXL switch SW_CXL. 
After the reset operation, the memory device can be initialized to perform the commands/transactions issued from the host). Lee does not teach quiescing transactions with the host device by pausing or stopping memory device commands … during a second phase that follows the first phase, using the processing logic circuitry of the memory device to maintain the connection with the host device while resetting the host interface circuit of the memory device. However, Steed teaches during a second phase that follows the first phase, using the processing logic circuitry of the memory device to maintain the connection with the host device while resetting the host interface circuit of the memory device (Steed paragraph [0133], In cases where the CXL Controller only supports a single port, the Save function can cause the CXL Controller to reconfigure that one port to be on one or more lanes that connect between the NVC and the CXL Controller instead of between the CXL Host and the CXL Controller. In cases where the CXL Controller supports more than one port, the Save function can use an already configured port between the NVC and the CXL Controller without needing to change the port between the CXL Host and the CXL Controller. In some embodiments, communicating with the DRAM through the CXL Controller allows the device to be agnostic regarding what type of memory comprises the DRAM. For instance, the embodiments disclosed herein would be able to work with DDR4, DDR5, etc. without requiring specific access points in the DRAM. The host can maintain a connection even when communication/interaction with the memory device is suspended, also see Steed paragraph [0146], For a module surprise failure, there are many possible failure modes such as: CXL Interface, Memory Buffer, on-board regulator failures, etc. In some embodiments, the CXL device interrupt, host process of Event Records, and GPF Phase 1 and 2 are incomplete or not possible to complete. 
In these cases, the NVRAM may autonomously perform an NV Save, but set the State to Dirty. The DSC may also be incremented prior to NV Restore. In some embodiments, Management Component Transport Protocol (MCTP) OOB access to the NVRAM subsystem allows event reporting and/or recovery). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Steed. Steed teaches maintaining a connection between a host and a memory device system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110). Lee in view of Steed does not teach quiescing transactions with the host device by pausing or stopping memory device commands. However, Miller teaches quiescing transactions with the host device by pausing or stopping memory device commands (Miller paragraph [0024], For some embodiments, including those that employ a CXL external interface such as that described below with respect to FIG. 
5, the automatic secure recovery technique provides a way to preserve operability of the CXL interface even during the failure mode of operation. In such a circumstance, separate reset zones may be configured for the multi-processor device 100 to allow for partial operability in one region of the multi-processor device 100, while allowing for partial resetting of other non-operating regions of the multi-processor device 100. Partitioning reset zones in this manner provides operational flexibility such that the primary processor 102 is not necessarily required for the CXL interface to successfully operate. As a result, recovery operations of the primary processor 102 may be carried out as background operations without affecting memory access operations that are being carried out over the CXL interface. For some embodiments, however, pausing of CXL-related command processing, log writing, and so forth may occur over the CXL interface during the failure mode of operation. The connection between the host and the CXL interface may be maintained while host commands/memory access operations are paused, also see Miller paragraph [0026]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Steed with those of Miller. Miller teaches maintaining a connection between a host and a CXL interface which may improve the reliability of the memory system and associated data recovery (i.e., see Miller paragraphs [0025-0026], The multi-processor device 100 and the associated recovery methods described above lend themselves well to applications involving distributed processing with hardware-based security schemes. In the field of distributed memory processing, CXL Type 3 devices, such as CXL buffers, may exhibit significantly improved reliability through adoption of the multi-processor device structures and associated methods disclosed herein. FIG. 
5 illustrates one specific embodiment of a memory system, generally designated 500, that employs a CXL Type 3 memory device in the form of a CXL buffer 510. The memory system 500 includes a host 502 that interfaces with a memory module 504 primarily through a CXL link 506. For one embodiment, the host includes a host CXL interface controller 508 for communicating over the CXL link 506 utilizing protocols consistent with the CXL standards, such as CXL.io and CXL.mem. For some embodiments that involve CXL Type 2 devices, an additional CXL.cache protocol may also be utilized). Regarding claim 15, Lee in view of Steed in further view of Miller teaches The method of claim 11, wherein detecting the memory device reset condition includes receiving a reset command from the host device to reset at least a portion of the memory device (Lee paragraph [0049], The processor 111b may be configured to control an overall operation of the CXL storage controller 111. The RAM 111c may be used as a working memory or a buffer memory of the CXL storage controller 111. In an embodiment, the RAM 111c may be an SRAM and may be used as a read buffer and a write buffer for the CXL storage 110. In an embodiment, as will be described below, the RAM 111c may be configured to temporarily store the map data MD read from the CXL memory 120 or a portion of the map data MD. The reset may be targeted towards a particular section/portion of the memory device, see also Lee paragraphs [0160] and [0162] for requests targeted to particular memory sections). Regarding claim 16, Lee in view of Steed in further view of Miller teaches The method of claim 11, wherein the third phase is initiated in response to receiving a command from the host device to enable the memory device (Steed paragraph [0098], In another example, registers of the NVDIMM initiate self-refresh process of DRAMs by generating signals to start a self-refresh. 
The registers handshake to the memory controller that self-refresh is active and operating. The memory controller receives handshake from the registers and proceeds to switch the FETs of multiplexors. The memory controller completes configuring the multiplexors and deactivates the self-refresh to guarantee the clock signals have properly transitioned and back-up can be enabled. Further self-refresh cycles are repeated in the same manner until backup of the DRAM has completed. The memory device can be enabled via the host device, see Steed paragraph [0010], The method also includes: detecting a disruptive volatile memory event; copying the data of the volatile memory device to the NV device through the serial host interface based on the disruptive volatile memory event; and restoring the data of the volatile memory device from the NV device through the serial host interface. In this way, Dynamic Random-Access Memory (DRAM) level endurance and speed/latency can be provided while making it NV with the use of e.g., NAND Flash over a serial host interface). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Steed and Miller. Steed teaches maintaining a connection between a host and a memory device system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. 
Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110). Regarding claim 17, Lee in view of Steed in further view of Miller teaches The method of claim 11, wherein the first phase concludes when a reset of the processing logic circuitry is completed (Lee paragraph [0186-0187], In an embodiment, because the CXL memory 320 is not connected with the CXL switch SW_CXL and is directly connected with the CXL storage 310, the host 301 and the CXL memory 320 may not communicate with each other. The host 301 may not access the entire area of the CXL memory 320. As described above, compared to the computing system 200 of FIG. 3B, the computing system 300 of FIG. 10 may further include the CXL switch SW_CXL. The computing system 300 may perform the initialization operation, the read operation, the write operation, and the power-off operation based on the manners described with reference to FIGS. 4 to 9. However, the communications between the host 301 and the CXL storage 310 may be performed through the CXL switch SW_CXL. After the reset operation, the memory device can be initialized to perform the commands/transactions issued from the host). Claim(s) 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Steed in further view of Miller as applied to claim 11 above, and further in view of Abraham et al. (US Publication No. 2014/0229769 – “Abraham”). 
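For reference, the phased sequence recited in claim 11 and addressed above (quiesce transactions while the host interface holds the CXL connection; reset the host interface while the processing logic holds the connection; then resume normal transaction handling, with the first phase ending on completion of the processing-logic reset per claim 17) can be sketched as a small state machine. This is an illustrative sketch only; the class and method names below are hypothetical and are not drawn from Lee, Steed, or Miller.

```python
from enum import Enum, auto

class Phase(Enum):
    NORMAL = auto()
    FIRST = auto()    # quiesce; host interface circuit maintains the CXL connection
    SECOND = auto()   # processing logic holds the connection; host interface resets
    THIRD = auto()    # both circuits manage host transactions again

class ResetSequencer:
    """Hypothetical model of the claim 11 reset phases (names are illustrative)."""

    def __init__(self):
        self.phase = Phase.NORMAL
        self.link_up = True   # the connection with the host is never dropped

    def on_reset_condition(self):
        # First phase: quiesce host transactions and reset the processing logic.
        self.phase = Phase.FIRST

    def processing_logic_reset_done(self):
        # Per claim 17, the first phase concludes when the processing-logic
        # reset completes; the second phase then resets the host interface.
        self.phase = Phase.SECOND

    def host_interface_reset_done(self):
        # Third phase: normal transaction management resumes.
        self.phase = Phase.THIRD
```

Walking the sequencer through a full reset leaves `link_up` true throughout, which is the crux of the claimed arrangement: the host never observes a dropped connection.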
Regarding claim 12, Lee in view of Steed in further view of Miller and further in view of Abraham teaches The method of claim 11, wherein detecting the memory device reset condition includes detecting a fault condition internally to the memory device (Abraham paragraph [0034], Step 302 comprises detecting a firmware fault at a PCIe I/O device implementing SR-IOV. The firmware fault causes some or all of the VFs at the PCIe device (which are implemented in firmware) to stop functioning. Thus, the VFs of the I/O device become unresponsive to queries from the VMs that they are associated with. A firmware fault comprises any condition that causes one or more VFs to become unavailable. For example, a firmware fault may include a reboot, an ongoing firmware update, a hard or soft "crash," a Function Level Reset (FLR) of a VF, an exception, and/or other conditions. The firmware fault may be detected by a control unit at the PCIe device, such as an integrated component of a PF at the PCIe device. The firmware fault need not cause every VF to become non-responsive, so long as it causes at least one VF to become non-responsive. In one embodiment, a firmware fault is detected when a PF driver determines that firmware in the PCIe I/O device is in a fault state, when firmware at the PCIe I/O device sends an "async" event notification to reset the adapter, or when a regularly timed "heartbeat signal" between the PF driver and the firmware of the PCIe I/O device is not received. The reset condition can be based on a fault being detected). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Steed and Miller with those of Abraham. 
Abraham teaches certain device reset conditions, such as detected faults or firmware update, which can ensure reliable performance of the memory device (i.e., see Abraham paragraph [0028], Because VFs 214 and the SR-PCIM are implemented by firmware at physical I/O device 210, whenever a firmware fault is encountered (e.g., from an error or a reset of device 210), VFs 214 would normally become unresponsive. Meanwhile, PF 212 remains operable because it is implemented by hardware circuitry). Regarding claim 13, Lee in view of Steed in further view of Miller and further in view of Abraham teaches The method of claim 11, wherein detecting the memory device reset condition includes receiving an indication that a firmware update is available for the memory device (Abraham paragraph [0034], Step 302 comprises detecting a firmware fault at a PCIe I/O device implementing SR-IOV. The firmware fault causes some or all of the VFs at the PCIe device (which are implemented in firmware) to stop functioning. Thus, the VFs of the I/O device become unresponsive to queries from the VMs that they are associated with. A firmware fault comprises any condition that causes one or more VFs to become unavailable. For example, a firmware fault may include a reboot, an ongoing firmware update, a hard or soft "crash," a Function Level Reset (FLR) of a VF, an exception, and/or other conditions. The firmware fault may be detected by a control unit at the PCIe device, such as an integrated component of a PF at the PCIe device. The firmware fault need not cause every VF to become non-responsive, so long as it causes at least one VF to become non-responsive. 
In one embodiment, a firmware fault is detected when a PF driver determines that firmware in the PCIe I/O device is in a fault state, when firmware at the PCIe I/O device sends an "async" event notification to reset the adapter, or when a regularly timed "heartbeat signal" between the PF driver and the firmware of the PCIe I/O device is not received. The reset condition can be based on a firmware update). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Steed and Miller with those of Abraham. Abraham teaches certain device reset conditions, such as detected faults or firmware update, which can ensure reliable performance of the memory device (i.e., see Abraham paragraph [0028], Because VFs 214 and the SR-PCIM are implemented by firmware at physical I/O device 210, whenever a firmware fault is encountered (e.g., from an error or a reset of device 210), VFs 214 would normally become unresponsive. Meanwhile, PF 212 remains operable because it is implemented by hardware circuitry). Regarding claim 14, Lee in view of Steed in further view of Miller and further in view of Abraham teaches The method of claim 13, wherein quiescing transactions with the host device includes allowing memory device-internal commands to complete before resetting the processing logic circuitry, wherein the processing logic circuitry comprises a subsystem manager circuit of the memory device (Lee paragraph [0209], The first storage server 2210 may include a processor 2211, a memory 2212, a switch 2213, a storage device 2215, a CXL memory 2214, and a network interface card (NIC) 2216. The processor 2211 may control an overall operation of the first storage server 2210 and may access the memory 2212 to execute an instruction loaded onto the memory 2212 or to process data. 
The memory 2212 may be implemented with a DDR SDRAM (Double Data Rate Synchronous DRAM), an HBM (High Bandwidth Memory), an HMC (Hybrid Memory Cube), a DIMM (Dual In-line Memory Module), an Optane DIMM, and/or an NVMDIMM (Non-Volatile DIMM). The processor 2211 and the memory 2212 may be directly connected, and the numbers of processors and memories included in one storage server 2210 may be variously selected. Various processing commands may be preloaded as instructions into the memory device, and may be executed before the power reset operation). Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US Publication No. 2025/0307171 – “Lee”) in view of Ponnuru et al. (US Publication No. 2024/0303380 – “Ponnuru”) in further view of Miller et al. (US Publication No. 2023/0138817 – “Miller”). Regarding claim 18, Lee teaches A method for operating a peripheral device coupled to a host device using a compute express link (CXL) interconnect, the method comprising: (Lee paragraph [0007], According to an aspect of one or more embodiments, there is provided a computing system comprising a host; a memory including a volatile memory and a memory controller; and a storage device that is connected with the host through a first interface and that includes a nonvolatile memory and a storage controller, the storage device being configured to communicate with the host through a first port, to communicate with the memory through a second port, and to manage the memory. A host may be connected through an interface to execute commands and operations with a storage device, which can be done through a CXL link, see Lee paragraph [0039], In an embodiment, the host 101, the CXL storage 110, and the CXL memory 120 may be configured to share the same interface. For example, the host 101, the CXL storage 110, and the CXL memory 120 may communicate with each other through a CXL interface IF_CXL). 
However, Lee does not teach operating a host interface circuit of the peripheral device in one of an autonomous mode and a distribution mode, wherein in the autonomous mode the host interface circuit is configured to maintain a connection and suppress communication between the host device and the peripheral device by pausing or stopping memory device commands while processing logic circuitry of the peripheral device is unavailable, and wherein in the distribution mode, the host interface circuit is configured to allow communication between the host device and the processing logic circuitry of the peripheral device. However, Ponnuru teaches operating a host interface circuit of the peripheral device in one of an autonomous mode and a distribution mode (Ponnuru paragraph [0062], Embodiments of the present disclosure provide a multi-Function FRU representation system 300 that enables techniques for a host (e.g., RAC 230) to distinguish if a multi-Function PCIe/CXL component is composed of a single FRU or composed of multiple FRUs, and in the event it is composed of a single FRU, then this disclosure provides a way to authenticate each Function in the FRU without the burden of reading the full certificate chain from each Function (a heavy operation, especially for SMBus). 
Different circuitry may be used to perform CXL commands (i.e., different operating modes) with the host and memory device based on availability, as described further below) wherein in the autonomous mode the host interface circuit is configured to maintain a connection and suppress communication between the host device and the peripheral device … while processing logic circuitry of the peripheral device is unavailable (Ponnuru paragraph [0062], Embodiments of the present disclosure provide a multi-Function FRU representation system 300 that enables techniques for a host (e.g., RAC 230) to distinguish if a multi-Function PCIe/CXL component is composed of a single FRU or composed of multiple FRUs, and in the event it is composed of a single FRU, then this disclosure provides a way to authenticate each Function in the FRU without the burden of reading the full certificate chain from each Function (a heavy operation, especially for SMBus). Host interface circuitry can be utilized to perform sideband and CXL commands, also see Ponnuru Fig. 1 and paragraph [0030], As indicated in FIG. 1, chassis 100 may also include one or more storage sleds 115n that provide access to storage drives 175n via a storage controller 195. In some embodiments, storage controller 195 may provide support for RAID (Redundant Array of Independent Disks) configurations of logical and physical storage drives, such as storage drives provided by storage sled 115n. In some embodiments, storage controller 195 may be a HBA (Host Bus Adapter) that provides more limited capabilities in accessing storage drives 175n) and wherein in the distribution mode, the host interface circuit is configured to allow communication between the host device and the processing logic circuitry of the peripheral device (Ponnuru paragraph [0056], Remote access controller 230 supports monitoring and administration of the managed devices of an IHS via a sideband bus 253. 
For instance, messages utilized in device and/or system management may be transmitted using I2C sideband bus 253 connections that may be individually established with each of the respective managed devices 205, 235a-b, 240, 250, 255, 260 of the IHS 200 through the operation of an I2C multiplexer 230d of the remote access controller. In certain operation modes, a side-band and/or remote command system may be utilized to perform commands from the host. This can also include CXL based commands, see Ponnuru paragraph [0061], The SPDM protocol used by the RAC may require a mechanism to extend the certificate retrieval mechanism to specify the PCIe/CXL Device/Function path from which to retrieve the certificate (e.g., OEM command or a new standard command)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee with those of Ponnuru. Ponnuru teaches using processing logic circuitry to perform side-band and CXL commands, which can allow for more efficient device management and messaging (i.e., see Ponnuru paragraph [0056], Remote access controller 230 supports monitoring and administration of the managed devices of an IHS via a sideband bus 253. For instance, messages utilized in device and/or system management may be transmitted using I2C sideband bus 253 connections that may be individually established with each of the respective managed devices 205, 235a-b, 240, 250, 255, 260 of the IHS 200 through the operation of an I2C multiplexer 230d of the remote access controller. As illustrated in FIG. 2, the managed devices 205, 235a-b, 240, 250, 255, 260 of IHS 200 are coupled to the CPUs 205, either directly or indirectly, via in-line buses that are separate from the I2C sideband bus 253 connections used by the remote access controller 230 for device management). 
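The mode split recited in claim 18 (and the zero-credit suppression of claim 10 discussed earlier) can likewise be illustrated with a short sketch: an autonomous mode that keeps the connection up while withholding transaction credits, and a distribution mode that forwards host commands to the processing logic. The names and the credit mechanism shown here are illustrative assumptions; they are not drawn from Lee, Ponnuru, or Miller.

```python
class HostInterfaceCircuit:
    """Hypothetical model of the claim 18 autonomous/distribution modes."""

    def __init__(self):
        self.mode = "distribution"   # normal operation: commands flow through
        self.link_up = True          # CXL connection with the host

    def enter_autonomous(self):
        # Processing logic circuitry is unavailable (e.g., resetting): keep the
        # connection up but stop the host from issuing memory device commands.
        self.mode = "autonomous"

    def enter_distribution(self):
        self.mode = "distribution"

    def advertised_credits(self, available):
        # Claim 10 flavor: in autonomous mode, represent zero CXL transaction
        # credits so the host pauses requests without the connection dropping.
        return 0 if self.mode == "autonomous" else available

    def dispatch(self, command):
        # Distribution mode forwards commands to the processing logic circuitry.
        if self.mode == "autonomous":
            return None   # command suppressed; connection remains maintained
        return ("processing_logic", command)
```

In this toy model, toggling modes changes only what the host is told and whether commands are forwarded; `link_up` never changes, mirroring the "maintain a connection while suppressing communication" limitation.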
Lee in view of Ponnuru does not teach suppress communication between the host device and the peripheral device by pausing or stopping memory device commands. However, Miller teaches suppress communication between the host device and the peripheral device by pausing or stopping memory device commands (Miller paragraph [0024], For some embodiments, including those that employ a CXL external interface such as that described below with respect to FIG. 5, the automatic secure recovery technique provides a way to preserve operability of the CXL interface even during the failure mode of operation. In such a circumstance, separate reset zones may be configured for the multi-processor device 100 to allow for partial operability in one region of the multi-processor device 100, while allowing for partial resetting of other non-operating regions of the multi-processor device 100. Partitioning reset zones in this manner provides operational flexibility such that the primary processor 102 is not necessarily required for the CXL interface to successfully operate. As a result, recovery operations of the primary processor 102 may be carried out as background operations without affecting memory access operations that are being carried out over the CXL interface. For some embodiments, however, pausing of CXL-related command processing, log writing, and so forth may occur over the CXL interface during the failure mode of operation. The connection between the host and the CXL interface may be maintained while host commands/memory access operations are paused, also see Miller paragraph [0026]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Ponnuru with those of Miller. 
Miller teaches maintaining a connection between a host and a CXL interface which may improve the reliability of the memory system and associated data recovery (i.e., see Miller paragraphs [0025-0026], The multi-processor device 100 and the associated recovery methods described above lend themselves well to applications involving distributed processing with hardware-based security schemes. In the field of distributed memory processing, CXL Type 3 devices, such as CXL buffers, may exhibit significantly improved reliability through adoption of the multi-processor device structures and associated methods disclosed herein. FIG. 5 illustrates one specific embodiment of a memory system, generally designated 500, that employs a CXL Type 3 memory device in the form of a CXL buffer 510. The memory system 500 includes a host 502 that interfaces with a memory module 504 primarily through a CXL link 506. For one embodiment, the host includes a host CXL interface controller 508 for communicating over the CXL link 506 utilizing protocols consistent with the CXL standards, such as CXL.io and CXL.mem. For some embodiments that involve CXL Type 2 devices, an additional CXL.cache protocol may also be utilized). Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Ponnuru in further view of Miller as applied to claim 18 above, and further in view of Abraham et al. (US Publication No. 2014/0229769 – “Abraham”). Regarding claim 19, Lee in view of Ponnuru in further view of Miller and further in view of Abraham teaches The method of claim 18, wherein the autonomous mode is initiated in response to a reset command received by the peripheral device; (Lee paragraph [0209], The first storage server 2210 may include a processor 2211, a memory 2212, a switch 2213, a storage device 2215, a CXL memory 2214, and a network interface card (NIC) 2216. 
The processor 2211 may control an overall operation of the first storage server 2210 and may access the memory 2212 to execute an instruction loaded onto the memory 2212 or to process data. The memory 2212 may be implemented with a DDR SDRAM (Double Data Rate Synchronous DRAM), an HBM (High Bandwidth Memory), an HMC (Hybrid Memory Cube), a DIMM (Dual In-line Memory Module), an Optane DIMM, and/or an NVMDIMM (Non-Volatile DIMM). The processor 2211 and the memory 2212 may be directly connected, and the numbers of processors and memories included in one storage server 2210 may be variously selected. Various processing commands may be preloaded as instructions into the memory device, and may be executed before the power reset operation) and wherein in the autonomous mode and in response to the reset command, the processing logic circuitry is configured to perform a reset routine that includes loading new firmware for the processing logic circuitry (Abraham paragraph [0034], Step 302 comprises detecting a firmware fault at a PCIe I/O device implementing SR-IOV. The firmware fault causes some or all of the VFs at the PCIe device (which are implemented in firmware) to stop functioning. Thus, the VFs of the I/O device become unresponsive to queries from the VMs that they are associated with. A firmware fault comprises any condition that causes one or more VFs to become unavailable. For example, a firmware fault may include a reboot, an ongoing firmware update, a hard or soft "crash," a Function Level Reset (FLR) of a VF, an exception, and/or other conditions. The firmware fault may be detected by a control unit at the PCIe device, such as an integrated component of a PF at the PCIe device. The firmware fault need not cause every VF to become non-responsive, so long as it causes at least one VF to become non-responsive. 
In one embodiment, a firmware fault is detected when a PF driver determines that firmware in the PCIe I/O device is in a fault state, when firmware at the PCIe I/O device sends an "async" event notification to reset the adapter, or when a regularly timed "heartbeat signal" between the PF driver and the firmware of the PCIe I/O device is not received. The reset condition can be based on a firmware update) and wherein the processing logic circuitry comprises a subsystem manager circuit (Lee paragraph [0161], Referring to FIGS. 3A and 9, in operation POF-S10, the host 201 may output power-off information IFM_off through the CXL host interface circuit 201a. The host 201 may send the power-off information IFM_off to the CXL storage 210 through the first interface IF1. For example, the host 201 may recognize or detect information about power-off of the computing system 100. The CXL storage 210 may receive the power-off information IFM_off, which allows the CXL storage 210 to perform the power-off operation, through the first CXL storage interface circuit 211a (or the first port PT1). The internal memory may utilize a managing CXL interface circuit for processing logic). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee and Ponnuru and Miller with those of Abraham. Abraham teaches certain device reset conditions, such as detected faults or firmware update, which can ensure reliable performance of the memory device (i.e., see Abraham paragraph [0028], Because VFs 214 and the SR-PCIM are implemented by firmware at physical I/O device 210, whenever a firmware fault is encountered (e.g., from an error or a reset of device 210), VFs 214 would normally become unresponsive. Meanwhile, PF 212 remains operable because it is implemented by hardware circuitry). Claim(s) 20 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Lee in view of Ponnuru in further view of Miller as applied to claim 18 above, and further in view of Abraham and further in view of Steed et al. (US Publication No. 2024/0134757 – “Steed”). Regarding claim 20, Lee in view of Ponnuru in further view of Miller and further in view of Abraham and further in view of Steed teaches The method of claim 18, wherein the autonomous mode is initiated in response to a fault condition detected at the peripheral device; (Abraham paragraph [0034], Step 302 comprises detecting a firmware fault at a PCIe I/O device implementing SR-IOV. The firmware fault causes some or all of the firmware at the PCIe device (which are implemented in firmware) to stop functioning. Thus, the VFs of the I/O device become unresponsive to queries from the VMs that they are associated with. A firmware fault comprises any condition that causes one or more VFs to become unavailable. For example, a firmware fault may include a reboot, an ongoing firmware update, a hard or soft "crash," a Function Level Reset (FLR) of a VF, an exception, and/or other conditions. The firmware fault may be detected by a control unit at the PCIe device, such as an integrated component of a PF at the PCIe device. The firmware fault need not cause every VF to become non-responsive, so long as it causes at least one VF to become non-responsive. In one embodiment, a firmware fault is detected when a PF driver determines that firmware in the PCIe I/O device is in a fault state, when firmware at the PCIe I/O device sends an "async" event notification to reset the adapter, or when a regularly timed "heartbeat signal" between the PF driver and the firmware of the PCIe I/O device is not received. 
The reset condition can be based on a fault being detected) and wherein in the autonomous mode, the processing logic circuitry is configured to perform a reset routine that includes: in response to the detected fault condition, (see Abraham for fault condition) resetting a subsystem manager circuit while the host interface circuit maintains the connection between the host device and the peripheral device; (Steed paragraph [0133], In cases where the CXL Controller only supports a single port, the Save function can cause the CXL Controller to reconfigure that one port to be on one or more lanes that connect between the NVC and the CXL Controller instead of between the CXL Host and the CXL Controller. In cases where the CXL Controller supports more than one port, the Save function can use an already configured port between the NVC and the CXL Controller without needing to change the port between the CXL Host and the CXL Controller. In some embodiments, communicating with the DRAM through the CXL Controller allows the device to be agnostic regarding what type of memory comprises the DRAM. For instance, the embodiments disclosed herein would be able to work with DDR4, DDR5, etc. without requiring specific access points in the DRAM. The host can maintain a connection even when communication/interaction with the memory device is suspended, also see Steed paragraph [0146], For a module surprise failure, there are many possible failure modes such as: CXL Interface, Memory Buffer, on-board regulator failures, etc. In some embodiments, the CXL device interrupt, host process of Event Records, and GPF Phase 1 and 2 are incomplete or not possible to complete. In these cases, the NVRAM may autonomously perform an NV Save, but set the State to Dirty. The DSC may also be incremented prior to NV Restore. 
In some embodiments, Management Component Transport Protocol (MCTP) OOB access to the NVRAM subsystem allows event reporting and/or recovery) and resetting the host interface circuit while the subsystem manager circuit maintains the connection between the host device and the peripheral device (Lee paragraph [0160-0161], FIG. 9 is a flowchart illustrating a power-off operation of a computing system of FIG. 3A, according to some embodiments. In an embodiment, a power-off operation of a computing system will be described with reference to FIG. 9, but the present disclosure is not limited thereto. For example, it may be understood that the operating method of FIG. 9 is applicable to the power-off operation or reset operation of each of various components (e.g., a host, CXL storage, a CXL memory, and a CXL switch) included in the computing system. Referring to FIGS. 3A and 9, in operation POF-S10, the host 201 may output power-off information IFM_off through the CXL host interface circuit 201a. The host 201 may send the power-off information IFM_off to the CXL storage 210 through the first interface IF1. For example, the host 201 may recognize or detect information about power-off of the computing system 100. The CXL storage 210 may receive the power-off information IFM_off, which allows the CXL storage 210 to perform the power-off operation, through the first CXL storage interface circuit 211a (or the first port PT1). In a reset/power-off operation, the communication can be stopped/halted while the reset is performed for the memory device). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lee, Ponnuru, Miller and Abraham with those of Steed. 
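The staggered reset discussed for claim 20 above, in which one circuit keeps the host link alive while the other resets and the roles then swap, can be sketched as follows. This is an illustrative model only; the class and function names are hypothetical and are not taken from Lee, Ponnuru, Miller, Abraham, or Steed.

```python
class Circuit:
    """Toy stand-in for a resettable circuit block (name is hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.maintaining_link = False  # True while this block holds the host connection

    def reset(self) -> None:
        pass  # placeholder for the block's actual reset sequence


def staggered_reset(subsystem_mgr: Circuit, host_iface: Circuit, log: list[str]) -> None:
    """Sketch of a staggered reset: at every point, one of the two circuits
    maintains the host connection while the other one resets."""
    # Phase 1: the host interface holds the link while the subsystem manager resets.
    host_iface.maintaining_link = True
    subsystem_mgr.reset()
    log.append(f"{subsystem_mgr.name} reset while {host_iface.name} held link")
    # Phase 2: roles swap; the subsystem manager holds the link while the
    # host interface resets.
    subsystem_mgr.maintaining_link = True
    host_iface.maintaining_link = False
    host_iface.reset()
    log.append(f"{host_iface.name} reset while {subsystem_mgr.name} held link")
```

In this sketch the connection is never unheld: `maintaining_link` is handed off before each reset, mirroring the claim language that the host device and peripheral device stay connected throughout the routine.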
See claim 19 above for combination with Abraham reference; regarding Steed, Steed teaches maintaining a connection between a host and a memory device system (i.e., Steed paragraph [0034], The memory unit 102 includes circuitry for detection of an error or disruptive volatile memory events (DVME), backup of data in the volatile memory devices 110 to the non-volatile memory devices 112 before corruption of the data, and restoring of the volatile memory devices 110 with the data backed up from the non-volatile memory devices 112 in an autonomous manner. A DVME is defined as events external to the volatile memory devices 110 that could result in unintended loss of data previously stored in the volatile memory devices 110. Examples of the DVME are the computing system 100 failures that can include an operating system (OS) crash, a central processor unit (CPU fault), a memory controller unit (MCU) failure, mother board (MB) internal power supply faults, a power loss, intermittent power drop-outs, or faults with system memory signals to the volatile memory devices 110). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONAH C KRIEGER whose telephone number is (571)272-3627. The examiner can normally be reached Monday - Friday 8 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio Del Mar Perez-Velez can be reached at (571)-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.C.K./ Examiner, Art Unit 2133 /ROCIO DEL MAR PEREZ-VELEZ/ Supervisory Patent Examiner, Art Unit 2133

Prosecution Timeline

Jul 17, 2024
Application Filed
Nov 14, 2025
Non-Final Rejection — §103
Dec 09, 2025
Response Filed
Mar 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572298
ADAPTIVE SCANS OF MEMORY DEVICES OF A MEMORY SUB-SYSTEM
2y 5m to grant • Granted Mar 10, 2026
Patent 12566705
SYSTEM ON CHIP, A COMPUTING SYSTEM, AND A STASHING METHOD
2y 5m to grant • Granted Mar 03, 2026
Patent 12566556
DATA SECURITY PROTECTION METHOD, DEVICE, SYSTEM, SERVER-SIDE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 03, 2026
Patent 12554441
TRANSFERRING COMPRESSED DATA BETWEEN LOCATIONS
2y 5m to grant • Granted Feb 17, 2026
Patent 12547582
CLONING A MANAGED DIRECTORY OF A FILE SYSTEM
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
95%
With Interview (+8.2%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
