DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1, 3, 7, 9, and 16 have been amended. No claims have been added or cancelled. Claims 1-17 remain pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 1, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 1 and 7 are objected to because of the following informalities: Claim 1, line 12 reads “wherein in response to that the direct conversion mode is selected…”. The examiner suggests amending the claim to read “wherein in response to the direct conversion mode being selected…”. Claim 7 is objected to for the same rationale as claim 1.
Claim 11 is objected to because of the following informalities: Claim 11, lines 6-7 read “… having a plurality of blocks of a second type, a forth region…”. The claim should read “… having a plurality of blocks of a second type, a fourth region….”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Redaelli (US Publication No. 2025/0181247 – “Redaelli”) in view of Gohain et al. (US Publication No. 2024/0289031 – “Gohain”) in further view of Vijendra Kumar Lakshmi et al. (US Publication No. 2023/0367486 – “Lakshmi”).
Regarding claim 1, Redaelli teaches A method of controlling a flash memory, comprising: in response to a host write command, writing host data associated a host write command into a target block of a region of the flash memory (Redaelli paragraph [0012], A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. The host can write data to a target block in memory, which can be non-volatile flash memory, see Redaelli paragraph [0013], A memory sub-system can include high-density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device) in a specific write mode with one-shot programming; (Redaelli paragraph [0041], Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices 130 can be grouped as pages that can refer to a logical unit of the memory device used to store data. 
The data can be written to memory cells in SLC formatting (i.e., one-pass programming)) performing a block reliability examination on the target block of the region to generate a block reliability indication; (Redaelli paragraph [0081], In at least some embodiments, the scan manager 113 stores the metadata associated with attributes associated with BGMS scanning to memory, e.g., to the local memory 119 and/or to the memory device 130. In some embodiments, for example, this metadata may be instantiated as firmware values stored to NVM of the memory device 130 and may be cached in the local memory 119 during use. The scan manager 113 can periodically analyze this metadata relative to memory cell degradation (e.g., associated with different attributes) and decide, on a block-by-block basis, whether each block qualifies for a data refresh. The scan manager 113 can initiate a refresh operation for each block that satisfies particular criteria (e.g., threshold values for respective attributes) intended to trigger such a refresh operation. Specific threshold values may be scanned through a background operation to determine reliability risks for memory blocks) a garbage collection-based (GC-based) conversion mode according to the block reliability indication; (Redaelli paragraph [0082], In some embodiments of refreshing multi-level cell data (e.g., MLC, TLC, QLC, or PLC data), the scan manager 113 performs a media management operation, e.g., by folding the block to be refreshed before being read and written to an erased block of the IC memory device 130. For example, the data of a memory block may be folded if any codeword demonstrates a trigger rate or reliability risk that satisfies particular criteria that has been discussed. The folding operation may involve relocating the data stored at the affected block of the memory device to another block. 
Because full scans are time-consuming, a sampling scan may be performed in which one or more pages of each block is read or tracked in terms of the attributes and data state metrics. Data can be folded from multi-level cell blocks based on a reliability risk evaluation) and performing a cell level reconfiguration with respect to the target block according to the selected conversion mode (Redaelli paragraph [0082], In some embodiments of refreshing multi-level cell data (e.g., MLC, TLC, QLC, or PLC data), the scan manager 113 performs a media management operation, e.g., by folding the block to be refreshed before being read and written to an erased block of the IC memory device 130. For example, the data of a memory block may be folded if any codeword demonstrates a trigger rate or reliability risk that satisfies particular criteria that has been discussed. The folding operation may involve relocating the data stored at the affected block of the memory device to another block. Because full scans are time-consuming, a sampling scan may be performed in which one or more pages of each block is read or tracked in terms of the attributes and data state metrics. Data can be folded from multi-level cell blocks based on a reliability risk evaluation).
Redaelli does not teach selecting one of a direct conversion mode … according to the block reliability indication; wherein in response to that the direct conversion is selected, the cell level reconfiguration comprises reprogramming the target block itself to store an increased number of bits per memory cell.
However, Gohain teaches selecting one of a direct conversion mode … according to the block reliability indication (Gohain paragraph [0038], In some cases, because a programming time (e.g., TProg) for higher-density memory cells (e.g., QLCs) may be relatively high, based on input/output pattern, idle time availability, operating temperature for the memory system 110 (e.g., Xtemp), or any combination thereof, recovery of free blocks may be prioritized (e.g., without data separation) to meet quality of service (QoS) requirements. Based on urgency of garbage collection considering an amount of free blocks (e.g., blocks 170, virtual blocks 180), an amount of fragmentation in data stored at the memory system 110, an amount of expected incoming data, or any combination thereof, data may be folded from source blocks (e.g., SLCs, TLCs, QLCs) without data separation. The source blocks (e.g., one or more identifiers for the source blocks) may be recorded (e.g., stored) by firmware of the memory system 110 (e.g., by a memory system controller 115) and selected for garbage collection with data separation once the memory system 110 is out of an urgent garbage collection condition. The data can either be directly converted (i.e., folding) or garbage collected based on a garbage collection urgency determination (i.e., reliability determination)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Redaelli with those of Gohain. Gohain teaches using a direct conversion method to convert data from a particular storage type (i.e., TLC) to a different cell type (i.e., QLC). This enables more efficient redistribution of data when a full garbage collection is not required (i.e., see Gohain paragraph [0012], In accordance with examples as disclosed herein, a memory system may store one or more characteristics of data, which may be utilized to improve performance associated with transferring data from one block of memory cells to another. For example, to support a configuration of a given transfer operation (e.g., to transfer data from one or more source block to one or more target blocks), a memory system may be configured to evaluate whether or how to separate data according to characteristics of the data (e.g., during data transfer operations, based on one or more operating conditions of the memory system)).
Redaelli in view of Gohain does not teach wherein in response to that the direct conversion is selected, the cell level reconfiguration comprises reprogramming the target block itself to store an increased number of bits per memory cell.
However, Lakshmi teaches wherein in response to that the direct conversion is selected, the cell level reconfiguration comprises reprogramming the target block itself to store an increased number of bits per memory cell (Lakshmi paragraph [0012], According to the techniques described herein, a memory system that retires higher density blocks may preserve storage capacity by converting lower density blocks into higher density blocks. For example, if the memory system retires an MLC block, the memory system may convert an SLC block into an MLC block. If the converted MLC block has already been subject to one or more access operations, the memory system may use a parameter, such as a conversion factor, to determine a quantity of remaining access operations permitted to be performed on the converted MLC block, which will allow the memory system to determine an appropriate timing (e.g., a predicted timing) to retire the converted MLC block. The target block can be reprogrammed to store more bits/cell, such as converting from SLC to MLC).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Redaelli and Gohain with those of Lakshmi. Lakshmi teaches reprogramming target block cells from a lower density to a higher density (more bits per cell) which can enable reuse of degraded memory cells rather than retiring memory blocks (i.e., see Lakshmi paragraph [0011-0012], A memory system with blocks of memory cells may configure some blocks as lower density block and other blocks as higher density blocks. For instance, some blocks may be configured as single-level cell (SLC) blocks (e.g., lower density blocks) in which the memory cells are each configured for storing a single bit, whereas other blocks may be configured as multiple-level cell (MLC) blocks (e.g., higher density blocks) in which the memory cells are each configured for storing multiple bits (e.g., 2 bits, 3 bits, 4 bits). As blocks in the memory system begin to degrade over time and become unreliable for storing information, or if blocks are determined to be unreliable at initialization for example, the memory system may retire the unreliable blocks by operating the unreliable blocks as read-only blocks or by avoiding accessing the unreliable blocks altogether. But retiring unreliable blocks reduces the storage capacity of the memory system, particularly if the retired blocks are higher density blocks (which often degrade faster than lower density blocks)).
Claim 7 is the device claim corresponding to method claim 1 and is rejected on the same references and rationale.
Regarding claim 2, Redaelli in view of Gohain in further view of Lakshmi teaches The method of claim 1, wherein the step of performing the cell level reconfiguration with respect to the target block according to the selected conversion mode comprises: reading a lower page data, a middle page data and an upper page data from the target block; (Redaelli paragraph [0021], Multi-level cell (MLC) physical page types can include pages at multiple page levels such as LPs and upper logical pages (UPs). TLC physical page types include LPs, UPs, and pages at an additional page level referred to as extra logical pages (XPs). Lower page data, upper page data, and extra logical page data (i.e., middle page data) can be read from the target block) reading a valid page data from a source block of the flash memory that is different from the target block; (Redaelli paragraph [0028], In response to qualifying for a refresh, the data in that block may be stored in a refresh queue in volatile memory of the memory sub-system, which can be local memory of the memory sub-system controller, for example. This refresh data is then programmed into a new block (e.g., an erased block) of memory in the memory device, thus updating the V.sub.t levels of the memory cells for this data, and commensurately more healthy read voltage reference levels. Where the data is programmed into multi-level cells (e.g., MLC, TLC, QLC, PLC data), the programming of the data to the new blocks is referred to as folding. 
Data can be moved from a plurality of blocks to a target block) and if the direct conversion mode is selected, programming the lower page data, the middle page data and the upper page data from the target block as well as the valid page data from the source block into the target block in a QLC write mode with one pass programming (Redaelli paragraph [0028], In response to qualifying for a refresh, the data in that block may be stored in a refresh queue in volatile memory of the memory sub-system, which can be local memory of the memory sub-system controller, for example. This refresh data is then programmed into a new block (e.g., an erased block) of memory in the memory device, thus updating the V.sub.t levels of the memory cells for this data, and commensurately more healthy read voltage reference levels. Where the data is programmed into multi-level cells (e.g., MLC, TLC, QLC, PLC data), the programming of the data to the new blocks is referred to as folding. Data can be moved from a plurality of blocks to a target block, which can be designated at any cell level including QLC in a folding operation (i.e., one shot programming)).
Regarding claim 3, Redaelli in view of Gohain in further view of Lakshmi teaches The method of claim 1, wherein the step of performing a cell level reconfiguration with respect to the target block according to the selected conversion mode comprises: reading a lower page data, a middle page data and an upper page data from the target block; (Redaelli paragraph [0021], Multi-level cell (MLC) physical page types can include pages at multiple page levels such as LPs and upper logical pages (UPs). TLC physical page types include LPs, UPs, and pages at an additional page level referred to as extra logical pages (XPs). Lower page data, upper page data, and extra logical page data (i.e., middle page data) can be read from the target block) reading a valid page data from a source block of the flash memory that is different from the target block; (Redaelli paragraph [0028], In response to qualifying for a refresh, the data in that block may be stored in a refresh queue in volatile memory of the memory sub-system, which can be local memory of the memory sub-system controller, for example. This refresh data is then programmed into a new block (e.g., an erased block) of memory in the memory device, thus updating the V.sub.t levels of the memory cells for this data, and commensurately more healthy read voltage reference levels. Where the data is programmed into multi-level cells (e.g., MLC, TLC, QLC, PLC data), the programming of the data to the new blocks is referred to as folding. 
Data can be moved from a plurality of blocks to a target block) and if the GC-based conversion mode is selected, programming the lower page data, the middle page data and the upper page data from the target block as well as the valid page data from the source block into a GC destination block that is different from the target block in a QLC write mode (Redaelli paragraph [0028], In response to qualifying for a refresh, the data in that block may be stored in a refresh queue in volatile memory of the memory sub-system, which can be local memory of the memory sub-system controller, for example. This refresh data is then programmed into a new block (e.g., an erased block) of memory in the memory device, thus updating the V.sub.t levels of the memory cells for this data, and commensurately more healthy read voltage reference levels. Where the data is programmed into multi-level cells (e.g., MLC, TLC, QLC, PLC data), the programming of the data to the new blocks is referred to as folding. Data can be moved from a plurality of blocks to a target block, which can be designated at any cell level including QLC in a folding operation) with two pass programming (Gohain paragraph [0038], In some cases, because a programming time (e.g., TProg) for higher-density memory cells (e.g., QLCs) may be relatively high, based on input/output pattern, idle time availability, operating temperature for the memory system 110 (e.g., Xtemp), or any combination thereof, recovery of free blocks may be prioritized (e.g., without data separation) to meet quality of service (QoS) requirements. Based on urgency of garbage collection considering an amount of free blocks (e.g., blocks 170, virtual blocks 180), an amount of fragmentation in data stored at the memory system 110, an amount of expected incoming data, or any combination thereof, data may be folded from source blocks (e.g., SLCs, TLCs, QLCs) without data separation.
The source blocks (e.g., one or more identifiers for the source blocks) may be recorded (e.g., stored) by firmware of the memory system 110 (e.g., by a memory system controller 115) and selected for garbage collection with data separation once the memory system 110 is out of an urgent garbage collection condition. The data can either be directly converted (i.e., folding) or garbage collected based on a garbage collection urgency determination (i.e., reliability determination)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Redaelli with those of Gohain and Lakshmi. Gohain teaches using a direct conversion method to convert data from a particular storage type (i.e., TLC) to a different cell type (i.e., QLC). This enables more efficient redistribution of data when a full garbage collection is not required (i.e., see Gohain paragraph [0012], In accordance with examples as disclosed herein, a memory system may store one or more characteristics of data, which may be utilized to improve performance associated with transferring data from one block of memory cells to another. For example, to support a configuration of a given transfer operation (e.g., to transfer data from one or more source block to one or more target blocks), a memory system may be configured to evaluate whether or how to separate data according to characteristics of the data (e.g., during data transfer operations, based on one or more operating conditions of the memory system)).
Regarding claim 4, Redaelli in view of Gohain in further view of Lakshmi teaches The method of claim 1, wherein the step of performing the block reliability examination on the target block comprises: generating the block reliability indication according to whether a write temperature regarding data is written to the target block falls within a predetermined temperature range; generating the block reliability indication according to whether a read count regarding a number of times data in the target block has been read exceeds a predetermined read count threshold; generating the block reliability indication according to whether a block lifetime regarding an elapsed time since the target block is written with data has been read exceeds a predetermined block lifetime threshold; generating the block reliability indication according to whether a soft decoding operation has ever been activated in reading data in the target block; and/or generating the block reliability indication according to whether a read retry count regarding a number of times a read retry operation has been performed in reading data in the target block exceeds a predetermined read retry count threshold (Redaelli paragraph [0023], In some embodiments, an inverse relationship exists between data retention and either the Total Byte Written (TBW) or the temperature that affects a device over time. For example, when either or both TBW and temperature increase, data retention decreases, making a refresh operation necessary. From the perspective of data retention, some memory devices comply with JESD47 that, in case of an unbiased device, can be summarized as follows: 5 years at 55° C. at 10% of TBW or 1 year at 55° C. at maximum TBW. This data retention versus TBW may apply to both multi-level cells and single-level cell namespaces. These values of years, temperature, and TBW are illustrated only by way of example, and are not intended to be limiting. 
The reliability of the blocks may be determined based on the temperature, also see Redaelli paragraph [0026], Furthermore, memory cells in a memory device can wear out over time or with increased temperature as their ability to retain a charge (i.e., data) and, consequently, to remain at a particular programming level deteriorates with the passage of time as well as with increased use and/or exposure to higher temperatures. Thus, in some cases, the quality of data retention can be reflected by a measurable degree of data degradation indicated by an error rate experienced during a read operation performed on the data. This degree of degradation can be reflected by and can correspond to various respective values of data state metrics (e.g., valley shift values, read counts, valley width values, error counts, RBER, RWB, etc.). These values (e.g., of valley shift or read count) and their corresponding indication of data retention quality or capability on a memory device, can be known from statistics and historical data obtained from scans (such as BGMS) and testing of various memory devices. Furthermore, the effect of these temporal shifts on the trigger rate can be expected to worsen with the additional passage of time and increased use of the device and paragraph [0027], For example, when a block of memory is sampled by a read-based health scan, particular V.sub.t levels or particular threshold levels of other tracked attributes may trigger the qualification of that memory block for a background refresh).
Regarding claim 5, Redaelli in view of Gohain in further view of Lakshmi teaches The method of claim 1, wherein the step of selecting one of the direct conversion mode and the GC-based conversion mode according to the block reliability indication comprises: selecting the direct conversion mode if the block reliability indication indicates that the target block exhibits strong reliability; (Redaelli paragraph [0022], In various embodiments, to improve data retention on minimally-accessed logical block addresses (LBAs), a memory sub-system controller (e.g., processing device) performs a background media scan to read data periodically from the memory blocks. As such, the host system can either relocate data stored in a block to another block to refresh the data, or the controller can monitor a bit error rate (BER) of a page or block to determine whether the page or block is decaying. Data retention is the length of time the storage media (e.g., NAND or other non-volatile memory (NVM) storage media) in a memory device retains data with biased or unbiased conditions. Because data retention is limited, memory device scanning and refresh may be performed and may be managed by the memory sub-system controller through a background media scan (BGMS) process. The direct conversion may be performed in a refresh data operation for lower risk blocks (i.e., not needing a full GC operation to be performed or trigger fail)) and selecting GC-based conversion mode if the block reliability indication indicates the target block exhibits weak reliability (Redaelli paragraph [0082], In some embodiments of refreshing multi-level cell data (e.g., MLC, TLC, QLC, or PLC data), the scan manager 113 performs a media management operation, e.g., by folding the block to be refreshed before being read and written to an erased block of the IC memory device 130. 
For example, the data of a memory block may be folded if any codeword demonstrates a trigger rate or reliability risk that satisfies particular criteria that has been discussed. The folding operation may involve relocating the data stored at the affected block of the memory device to another block. Because full scans are time-consuming, a sampling scan may be performed in which one or more pages of each block is read or tracked in terms of the attributes and data state metrics. When the data scanned represents a high risk for block data, the folding operation may be completed in a garbage collection operation).
Regarding claim 8, Redaelli in view of Gohain in further view of Lakshmi teaches A data storage device comprising a memory controller of claim 7 (see claims 1 and 7 above) and a flash memory (Redaelli paragraph [0040], Some examples of non-volatile memory devices (e.g., memory device 130) include a negative-and (NAND) type flash memory and write-in-place memory).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Redaelli in view of Gohain in further view of Lakshmi as applied to claim 1 above, and further in view of Jin et al. (US Publication No. 2021/0141565 – “Jin”).
Regarding claim 6, Redaelli in view of Gohain in further view of Lakshmi in further view of Jin teaches The method of claim 1, wherein the flash memory is a quad-level cell (QLC) flash memory, and the target block is a triple-level cell (TLC) block of a TLC region of the QLC flash memory, in which 3-bit data is written per memory cell in the TLC region (Jin paragraph [0060], The controller 200 may transmit a ratio of a cold data storage space to a total data storage space of the nonvolatile memory device 100 to the host CPU 20. For example, the controller 200 may transmit information for a ratio of a space used in the QLC mode to the total data storage space or information for a ratio of the TLC/MLC mode memory region to the QLC mode memory region to the host CPU 20. Various methods may be applied to transmit the ratio of the cold data storage space to the total data storage space of the nonvolatile memory device 100 according to the needs of a user such as an operator. For example, the controller 200 may transmit the ratio of the cold data storage space to the total data storage space of the nonvolatile memory device 100 to the host CPU 20 in real time, or may transmit the ratio to the host CPU 20 according to a request of the host CPU 20. The flash memory may utilize both QLC and a TLC region which stores 3-bit data per memory cell, and can transfer data between TLC and QLC).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Redaelli, Gohain, and Lakshmi with those of Jin. Jin teaches using a memory device with both TLC and QLC memory cell modes, which can allow for further memory optimization by allowing more frequently accessed data to be stored in lower level cell modes (i.e., TLC), while less frequently accessed data can be stored more efficiently in higher level cell modes (i.e., QLC) (see Jin paragraph [0060], For example, the controller 200 may control the write data to be stored in the MLC mode memory region or the TLC mode memory region when it is determined that the write data is the hot data or the warm data, and control the write data to be stored in the QLC mode memory region when it is determined that the write data is the cold data).
Claims 9-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US Publication No. 2024/0419365 – “Sharma”) in view of Gundecha et al. (US Publication No. 2025/0130719 – “Gundecha”) in further view of Lim (US Publication No. 2021/0011651 – “Lim”).
Regarding claim 9, Sharma teaches A method of controlling a flash memory, comprising: (Sharma paragraph [0029], In some embodiments, data storage devices 120 are, or include, solid-state drives (SSDs). Each data storage device 120 may include a non-volatile memory (NVM) or device controller 130 based on compute resources (processor and memory) and a plurality of NVM or media devices 140 as a non-volatile storage medium for data storage (e.g., one or more NVM device(s), such as one or more flash memory devices)) selecting a first strategy to configure the flash memory and control the flash memory based on the first strategy; (Sharma paragraph [0057], Storage device interface 534 may include an interface protocol and set of functions, parameters, and data structures for managing the data storage devices in the virtual storage pool and the backend connections to those storage devices. For example, storage device interface 534 may be configured to maintain a storage device connection to each storage device through storage bus interface 516 or network interface 518. For example, at least one set of local storage devices may be connected to the virtual storage pool through storage bus interface 516 and at least one set of network storage devices may be connected to the virtual storage pool through network interface 518. Storage device interface 534 may support the allocation of host connections and host storage commands by host command handler 532 and may support the use of DOPs from device profile manager 540 to manage operational configurations of the storage devices, along with other configuration and storage device management functions. Storage device interface 534 may include a storage interface protocol configured to comply with the physical, transport, and storage application protocols supported by the storage devices for communication over storage bus interface 516 and/or network interface 518. 
The flash memory may be configured and operated in accordance with a particular protocol, also see paragraph [0059]) and selecting a second strategy to configure the flash memory and control the flash memory based on the second strategy if the wearing condition indication value exceeds the predetermined threshold; (Sharma paragraph [0076], Background operations manager 638 may include logic and data structures for managing the background operations of storage device 600. For example, background operations manager 638 may determine when background operations are needed and queue them for processing through read/write processor 636. In some configurations, background operations manager 638 may collect, receive, or access storage metrics 638.1 related to the ongoing storage operations to non-volatile memory 620, such as capacities, memory locations used, valid or invalid fragment counts, endurance values, read/write access metrics, etc. The strategy/protocol used to perform background operations (i.e., GC) may be determined based on data characteristics, including wearing condition (i.e., endurance values)) wherein in the first strategy, a garbage collection operation is not immediately performed once an available number of blocks in a first region of the flash memory goes below a lower bound; (Sharma paragraph [0076], Background trigger logic 638.3 may include a set of rules for determining different types of background operations and when they should be added to background queues 638.4 based on background operation thresholds 638.2. Background queues 638.2 may operate similarly to host command queues 632.3 and hold background operation commands in first-in-first-out order for processing through read/write (R/W) processor 636 and NVM controller 640. 
In some configurations, background queues 638.2 may be configured with priority parameters relative to one another (for prioritizing different types of background operations) and/or host command queues 632.3 that enable storage manager 634 to control the relative use of processor and NVM resources for different types of operations. In some configurations, background operations manager 638 may determine one or more background states 638.5 that are available to other components or systems for monitoring operation of storage device 600. For example, background states 638.5 may reflect evaluation of background thresholds 638.2 by background trigger logic 638.3 and/or queue depths of background queues 638.4 to determine whether and when one or more background states may change from unnecessary, to normal maintenance, to critical maintenance levels. The garbage collection may be performed based on various triggers, such as via host command, in a particular operating mode) and in the second strategy, the garbage collection operation is immediately performed once the available number of blocks in the first region of the flash memory goes below the lower bound (Sharma paragraph [0076], One or more of storage metrics 638.1 may be compared to one or more background operation thresholds 638.2 for determining when background operations should be initiated and/or the priority they should be given. For example, background operation thresholds 638.2 may include an available capacity threshold or invalid fragment threshold to trigger migration of SLC data to MLC (SLC/MLC migration 638.7) or garbage collection 638.6 of deleted or otherwise invalid blocks and consolidation of valid blocks to free up capacity to be rewritten. Background trigger logic 638.3 may include a set of rules for determining different types of background operations. One storage protocol may trigger a garbage collection immediately upon an available block number falling below a threshold value).
Sharma does not teach determining a wearing condition indication value by calculating a weighted sum of program/erase (P/E) cycles from a plurality of different memory regions of the flash memory; determining whether a wearing condition indication value regarding the flash memory exceeds a predetermined threshold … if the wearing condition indication value exceeds the predetermined threshold.
However, Gundecha teaches determining whether a wearing condition indication value regarding the flash memory exceeds a predetermined threshold … if the wearing condition indication value exceeds the predetermined threshold (Gundecha paragraph [0002], To spread the wear and tear across blocks in a memory device and cause the memory device to last longer, the controller may execute wear leveling operations and arrange how data is programmed and/or erased (PE), so the PE cycles are distributed among the blocks in the memory device. The controller may use a wear leveling algorithm to determine which physical block to use each time data is programmed and/or erased. In an existing wear leveling algorithm, the controller may obtain the average program/erase count (PEC) value of the partition and check for the least PEC value associated with a closed block. The controller may determine if the difference between the average PEC value in the partition and the least PEC value associated with a closed block is less than a predefined PEC threshold that is used to maintain PEC values for all blocks within the partition within an expected range. If the difference between the average PEC value in the partition and the least PEC value associated with a closed block is less than a predefined PEC threshold, the data from the closed block associated with the least PEC value (referred to herein as a source block) may be moved to a block in a free blocks pool with the highest PEC value (referred to herein as destination block)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha. While Sharma teaches using different policies to determine how to perform garbage collection and other background operations, it does not explicitly teach a wear threshold value. Gundecha discloses such a value in the form of a PEC threshold, which can be used to more accurately determine the blocks and data to be collected and relocated to different blocks (i.e., see Gundecha paragraph [0021], Controller 108 may process background operations including, for example, executing internal operations to manage the resources on storage device 104. In managing the resources of storage device 104, controller 108 may execute relocation functions including compaction, read scrubbing, wear leveling, garbage collection, and the like, to move data from one location to another on the memory device, optimize how space on the memory device is used, and improve efficiency. In executing wear leveling operations, controller 108 may arrange data, so that PE cycles are distributed among all the blocks in memory device 110).
Sharma in view of Gundecha does not teach determining a wearing condition indication value by calculating a weighted sum of program/erase (P/E) cycles from a plurality of different memory regions of the flash memory.
However, Lim teaches determining a wearing condition indication value by calculating a weighted sum of program/erase (P/E) cycles from a plurality of different memory regions of the flash memory (Lim paragraph [0048], Each of the wear-level management unit 523B and 523C may store the number of program/erase (P/E) cycles permitted to each of the nonvolatile memory devices 420B and 420C and the cumulative number of P/E cycles, or store the number of writing permitted to each of the memory devices 420B and 420C and the cumulative number of writing. The wear-level management unit 523B and 523C may control leveling of a wear level for memory regions (banks, blocks, or the like) constituting the nonvolatile memory devices 420B and 420C. The cumulative number of P/E cycles or the cumulative number of writing may be increased by repeatedly performing a P/E operation or a write operation on each of the nonvolatile memory devices 420B and 420C as a period of using each of the nonvolatile memory devices 420B and 420C is increased. A wear condition may be determined based on P/E cycle counts of a plurality of non-volatile memory devices, which can be weighted based on factors such as permitted writes or scrubbing intervals, see Lim paragraph [0065], In an embodiment, the scrubbing scheduler 630 may determine the scrubbing intervals based on the wear-level information. The wear-level information may include the number of P/E cycles permitted to the memory devices 420A, 420B, and 420C and the cumulative number of P/E cycles, or the number of writing permitted to the memory devices 420A, 420B, and 420C and the cumulative number of writing. 
As the cumulative number of P/E cycles or the cumulative number of writing is increased or as the cumulative number of P/E cycles or the cumulative number of writing approaches the permitted number of P/E cycles or the permitted number of writing, the scrubbing scheduler 630 may reduce the scrubbing intervals for the memory devices 420A, 420B, and 420C. In an embodiment, as the cumulative number of P/E cycles or the cumulative number of writing is increased and thus enters into a predetermined range before the cumulative number of P/E cycles or the cumulative number of writing reaches the permitted number of P/E cycles or the permitted number of writing, the scrubbing scheduler 630 may gradually reduce the scrubbing intervals for the memory devices 420A, 420B, and 420C).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma and Gundecha with those of Lim. Lim teaches determining a wearing condition based on a weighted sum of P/E cycles, which can provide an accurate overall wear condition of the memory device, resulting in more accurate memory health information (i.e., see Lim paragraph [0080], When the health information is the wear-level information, the wear-level information may include the number of program/erase (P/E) cycles permitted to the memory devices 420A, 420B and 420C and the cumulative number of P/E cycles or the number of writing permitted to the memory devices 420A, 420B, and 420C and the cumulative number of writing. As the cumulative number of P/E cycles or the cumulative number of writing is increased or as the cumulative number of P/E cycles or the cumulative number of writing approaches the permitted number of P/E cycles or the permitted number of writing, the scrubbing scheduler 630 may reduce the scrubbing intervals for the memory devices 420A, 420B, and 420C).
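For illustration only, the difference between the two claimed strategies mapped above can be sketched as follows; the function names, threshold value, and queue structure are hypothetical and are not drawn from Sharma, Gundecha, or Lim:

```python
# Illustrative sketch only: names, threshold, and queue are hypothetical.
from collections import deque

LOWER_BOUND = 8  # hypothetical free-block lower bound for the first region

def handle_low_free_blocks(free_blocks: int, strategy: str, gc_queue: deque) -> str:
    """First strategy: defer GC by queuing it; second strategy: run GC at once."""
    if free_blocks >= LOWER_BOUND:
        return "no_action"
    if strategy == "first":
        gc_queue.append("gc_first_region")  # queued, not immediately performed
        return "gc_deferred"
    if strategy == "second":
        return "gc_immediate"               # performed immediately
    raise ValueError("unknown strategy")
```

Under this sketch, only the second strategy performs garbage collection immediately once the available block count of the first region falls below the lower bound.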
Claim 16 is the corresponding device claim to method claim 9. It is rejected with the same references and rationale.
Regarding claim 10, Sharma in view of Gundecha in further view of Lim teaches The method of claim 9, wherein the first strategy is a performance-oriented strategy and the second strategy is a lifespan-oriented strategy (Sharma paragraph [0076], Background trigger logic 638.3 may include a set of rules for determining different types of background operations and when they should be added to background queues 638.4 based on background operation thresholds 638.2. Background queues 638.2 may operate similarly to host command queues 632.3 and hold background operation commands in first-in-first-out order for processing through read/write (R/W) processor 636 and NVM controller 640. In some configurations, background queues 638.2 may be configured with priority parameters relative to one another (for prioritizing different types of background operations) and/or host command queues 632.3 that enable storage manager 634 to control the relative use of processor and NVM resources for different types of operations. In some configurations, background operations manager 638 may determine one or more background states 638.5 that are available to other components or systems for monitoring operation of storage device 600. For example, background states 638.5 may reflect evaluation of background thresholds 638.2 by background trigger logic 638.3 and/or queue depths of background queues 638.4 to determine whether and when one or more background states may change from unnecessary, to normal maintenance, to critical maintenance levels. The garbage collection may be performed based on various triggers and/or conditions, such as optimizing performance or endurance level).
Regarding claim 11, Sharma in view of Gundecha in further view of Lim teaches The method of claim 10, wherein the step of selecting the first strategy to configure the flash memory comprising: configuring the flash memory to have at least the first region having a plurality of blocks of a first type, a second region having a plurality of blocks of the first type, a third region having a plurality of blocks of a second type, a forth region having a plurality of blocks of a third type; (Gundecha paragraph [0018-0019], Memory device 110 may be divided into one or more dies, each of which may be further divided into one or more planes that are linked together. The number and configurations of planes within the flash die may be adaptable. Each plane may be further divided into blocks, the smallest unit that may be erased from memory device 110. A block in memory device 110 may be divided into sub-blocks (also referred to herein as sister sub-blocks), wherein each sister sub-block may be a fraction of the block, and each sister sub-block may be individually programmed and/or erased. For example, a block may be divided into two or three sister sub-blocks, each of which may be accessed or erased individually. Memory device 110 may be configured in various formats, with the formats being defined by the number of bits that may be stored per memory cell. For example, a single-layer cell (SLC) format may write one bit of information per memory cell, a multi-layer cell (MLC) format may write two bits of information per memory cell, a triple-layer cell (TLC) format may write three bits of information per memory cell, and a quadruple-layer cell (QLC) format may write four bits of information per memory cell, and so on. Writing multiple bits of information per memory cell may reduce the cost of storage device 104 but may increase the wear of the blocks on memory device 110. 
The memory device can be divided into a plurality of different regions/blocks, wherein each can be configured to a set cell level type, including SLC, TLC and QLC) and if the available number of the blocks of the first type goes below a lower bound, programming data to the blocks of the second type in the third region with one-shot programming without performing the GC operation (Sharma paragraph [0076], One or more of storage metrics 638.1 may be compared to one or more background operation thresholds 638.2 for determining when background operations should be initiated and/or the priority they should be given. For example, background operation thresholds 638.2 may include an available capacity threshold or invalid fragment threshold to trigger migration of SLC data to MLC (SLC/MLC migration 638.7) or garbage collection 638.6 of deleted or otherwise invalid blocks and consolidation of valid blocks to free up capacity to be rewritten. Background trigger logic 638.3 may include a set of rules for determining different types of background operations. One storage protocol may trigger a relocation/migration immediately upon an available block number falling below a threshold value).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha and Lim. Gundecha teaches using a wear indication value based on the program/erase counts of memory cell levels of different types. This improves the function of the memory device, as each cell type may have a different expected reliability value corresponding to its PEC limit (i.e., see Gundecha paragraph [0020], Different types of memory devices 110 may have different limits as to how many times the individual blocks on the NAND flash can be programmed/erased before data can no longer be stored reliably. For example, an SLC flash memory may have a limit of approximately 100,000 program/erase (PE) cycles, an MLC flash memory may have a limit of approximately 10,000 PE cycles, a TLC flash memory may have a limit of approximately 5,000 PE cycles, and a QLC flash memory may have a limit of approximately 1,000-100 PE cycles).
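For illustration only, the first-strategy behavior mapped above (one-shot programming to the third region instead of triggering GC) can be sketched as follows; the function name and lower bound are hypothetical and not drawn from the cited references:

```python
# Illustrative sketch only: function name and lower bound are hypothetical.

LOWER_BOUND = 8  # hypothetical free-block lower bound for the first region

def first_strategy_write(slc_free: int) -> str:
    """Under the first strategy, choose a programming target without invoking GC:
    when free first-type (SLC) blocks run low, program to the second-type (TLC)
    region with one-shot programming instead of garbage-collecting."""
    if slc_free < LOWER_BOUND:
        return "one_shot_program_tlc"
    return "program_slc"
```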
Regarding claim 12, Sharma in view of Gundecha in further view of Lim teaches The method of claim 11, wherein the wearing condition indication value is determined according to a weighted sum of a maximum of program/erase (P/E) cycles of the blocks of the first type in the second region, a maximum of P/E cycles of the blocks of the second type in the third region and a maximum of P/E cycles of the blocks of the third type in the fourth region (Gundecha paragraph [0022], When blocks in memory device 110 are arranged in sub-blocks and a sub-block is erased, programmed, and/or read, the data in the sister sub-block(s) may become disturbed, causing an unselected block disturb (USBD) issue. Over a certain number of PE cycles, the disturbance may accumulate to a level that may be beyond the system performance or reliability bit-error-rate criteria, and beyond this point the sister sub-block(s) may need to be refreshed. Controller 108 may therefore calculate a sister sub-block threshold based on memory device 110. The sister sub-block threshold may be kept relatively lower than the PEC difference allowed between sister sub-blocks. For example, in a TLC/Hybrid SLC flash memory, the sister sub-block threshold may be kept relatively lower than the PEC difference of less than approximately 100 allowed between sister sub-blocks. In QLC/Hybrid SLC flash memory, the sister sub-block threshold may be kept relatively lower than the PEC difference of less than approximately 50 allowed between sister sub-blocks. Each of the different memory regions corresponding to different memory cell level types may correspond to a maximum PEC which can determine a wearing/reliability indication value, also see Gundecha paragraph [0020]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha and Lim. Gundecha teaches using a wear indication value based on the program/erase counts of memory cell levels of different types. This improves the function of the memory device, as each cell type may have a different expected reliability value corresponding to its PEC limit (i.e., see Gundecha paragraph [0020], Different types of memory devices 110 may have different limits as to how many times the individual blocks on the NAND flash can be programmed/erased before data can no longer be stored reliably. For example, an SLC flash memory may have a limit of approximately 100,000 program/erase (PE) cycles, an MLC flash memory may have a limit of approximately 10,000 PE cycles, a TLC flash memory may have a limit of approximately 5,000 PE cycles, and a QLC flash memory may have a limit of approximately 1,000-100 PE cycles).
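For illustration only, the claimed weighted-sum computation can be sketched as follows; the region names, weights, and threshold are hypothetical and not drawn from the cited references. Weights inversely proportional to each cell type's endurance limit (per the PE-cycle limits quoted from Gundecha paragraph [0020]) are one plausible choice:

```python
# Illustrative sketch only: region names, weights, and threshold are hypothetical.

def wear_indication(max_pe_by_region: dict, weights: dict) -> float:
    """Weighted sum of the maximum P/E cycle count observed in each region."""
    return sum(weights[region] * max_pe for region, max_pe in max_pe_by_region.items())

# Assumed weighting: the reciprocal of each cell type's endurance limit, so each
# term approximates the fraction of that region's endurance already consumed.
WEIGHTS = {"slc": 1 / 100_000, "tlc": 1 / 5_000, "qlc": 1 / 1_000}
THRESHOLD = 1.0  # hypothetical predetermined threshold

def second_strategy_selected(max_pe_by_region: dict) -> bool:
    """Select the second (lifespan-oriented) strategy when the wearing
    condition indication value exceeds the predetermined threshold."""
    return wear_indication(max_pe_by_region, WEIGHTS) > THRESHOLD
```

For example, with maximum P/E counts of 50,000 (SLC), 2,000 (TLC), and 300 (QLC), the indication value under these assumed weights is 0.5 + 0.4 + 0.3 = 1.2, which exceeds the hypothetical threshold of 1.0.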
Regarding claim 13, Sharma in view of Gundecha in further view of Lim teaches The method of claim 12, wherein the blocks of the first type in the second region are single-level cell (SLC) blocks in a dynamic SLC region, the blocks of the second type in the third region are triple-level cell (TLC) blocks in a TLC region and the blocks of the third type in the fourth region are quad-level cell (QLC) blocks in a QLC region (Gundecha paragraph [0018-0019], Memory device 110 may be divided into one or more dies, each of which may be further divided into one or more planes that are linked together. The number and configurations of planes within the flash die may be adaptable. Each plane may be further divided into blocks, the smallest unit that may be erased from memory device 110. A block in memory device 110 may be divided into sub-blocks (also referred to herein as sister sub-blocks), wherein each sister sub-block may be a fraction of the block, and each sister sub-block may be individually programmed and/or erased. For example, a block may be divided into two or three sister sub-blocks, each of which may be accessed or erased individually. Memory device 110 may be configured in various formats, with the formats being defined by the number of bits that may be stored per memory cell. For example, a single-layer cell (SLC) format may write one bit of information per memory cell, a multi-layer cell (MLC) format may write two bits of information per memory cell, a triple-layer cell (TLC) format may write three bits of information per memory cell, and a quadruple-layer cell (QLC) format may write four bits of information per memory cell, and so on. Writing multiple bits of information per memory cell may reduce the cost of storage device 104 but may increase the wear of the blocks on memory device 110. The memory device can be divided into a plurality of different regions/blocks, wherein each can be configured to a set cell level type, including SLC, TLC and QLC).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha and Lim. Gundecha teaches using a wear indication value based on the program/erase counts of memory cell levels of different types. This improves the function of the memory device, as each cell type may have a different expected reliability value corresponding to its PEC limit (i.e., see Gundecha paragraph [0020], Different types of memory devices 110 may have different limits as to how many times the individual blocks on the NAND flash can be programmed/erased before data can no longer be stored reliably. For example, an SLC flash memory may have a limit of approximately 100,000 program/erase (PE) cycles, an MLC flash memory may have a limit of approximately 10,000 PE cycles, a TLC flash memory may have a limit of approximately 5,000 PE cycles, and a QLC flash memory may have a limit of approximately 1,000-100 PE cycles).
Regarding claim 14, Sharma in view of Gundecha in further view of Lim teaches The method of claim 10, wherein the step of selecting the second strategy to configure the flash memory comprising: configuring the flash memory to have at least the first region having a plurality of blocks of a first type and a second region having a plurality of blocks of a second type; (Gundecha paragraph [0025], When controller 108 executes a wear-leveling algorithm, controller 108 may select the hottest block from the list of hot free blocks as a destination block. Controller 108 may also determine if the PEC value associated with a sub-block, for example the second sister sub-block, is greater than the sister sub-block threshold. If the PEC value associated with the second sister sub-block is greater than the sister sub-block threshold, controller 108 may determine if the sister sub-block with a lower PEC value, for example, the first sister sub-block, is in the free block pool. If the first sister sub-block is in the free block pool, controller 108 may prioritize allocation of the first sister sub-block for MLC flows and may make the first sister sub-block, for example, a host Hybrid SLC block, host MLC block, and/or a relocation MLC block. The memory device can consist of a dynamic SLC region, as well as a MLC region, which can be configured for QLC, as described in Gundecha paragraph [0022]) and if the available number of the blocks of a first type goes below a lower bound, programming the GC operation to move valid data from the blocks of the first type in the first region to a block of the second type in the second region before programming data into the blocks of the first type in the first region (Sharma paragraph [0076], One or more of storage metrics 638.1 may be compared to one or more background operation thresholds 638.2 for determining when background operations should be initiated and/or the priority they should be given. 
For example, background operation thresholds 638.2 may include an available capacity threshold or invalid fragment threshold to trigger migration of SLC data to MLC (SLC/MLC migration 638.7) or garbage collection 638.6 of deleted or otherwise invalid blocks and consolidation of valid blocks to free up capacity to be rewritten. Background trigger logic 638.3 may include a set of rules for determining different types of background operations. One storage protocol may trigger a garbage collection operation immediately upon an available block number falling below a threshold value, which can result in storing data from one cell type (i.e., SLC) to another type (i.e., MLC)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha and Lim. Gundecha teaches using a wear indication value based on the program/erase counts of memory cell levels of different types. This improves the function of the memory device, as each cell type may have a different expected reliability value corresponding to its PEC limit (i.e., see Gundecha paragraph [0020], Different types of memory devices 110 may have different limits as to how many times the individual blocks on the NAND flash can be programmed/erased before data can no longer be stored reliably. For example, an SLC flash memory may have a limit of approximately 100,000 program/erase (PE) cycles, an MLC flash memory may have a limit of approximately 10,000 PE cycles, a TLC flash memory may have a limit of approximately 5,000 PE cycles, and a QLC flash memory may have a limit of approximately 1,000-100 PE cycles).
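For illustration only, the second-strategy GC-before-program behavior mapped above can be sketched as follows; the data structures and names are hypothetical and not drawn from the cited references:

```python
# Illustrative sketch only: hypothetical data structures, not from any cited reference.

def second_strategy_write(slc_region: dict, qlc_region: dict, new_data, lower_bound: int):
    """Under the second strategy, run GC immediately when free first-type (SLC)
    blocks fall below the lower bound, moving valid data into the second-type
    (QLC) region before programming the incoming data into an SLC block."""
    if len(slc_region["free"]) < lower_bound:
        # Immediate GC: relocate valid pages from used SLC blocks to the QLC region.
        for blk in list(slc_region["used"]):
            qlc_region["blocks"].append(blk["valid_pages"])
            slc_region["used"].remove(blk)
            slc_region["free"].append(blk["id"])
    # Program the incoming data into a (now available) SLC block.
    blk_id = slc_region["free"].pop()
    slc_region["used"].append({"id": blk_id, "valid_pages": [new_data]})
    return blk_id
```

The key point of the sketch is ordering: valid data is relocated to the second-region block before any new data is programmed into the first-region blocks.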
Regarding claim 15, Sharma in view of Gundecha in further view of Lim teaches The method of claim 14, wherein the blocks of the first type in the first region are SLC blocks in a SLC region and blocks of the second type in the second region are QLC blocks in a QLC region (Gundecha paragraph [0025], When controller 108 executes a wear-leveling algorithm, controller 108 may select the hottest block from the list of hot free blocks as a destination block. Controller 108 may also determine if the PEC value associated with a sub-block, for example the second sister sub-block, is greater than the sister sub-block threshold. If the PEC value associated with the second sister sub-block is greater than the sister sub-block threshold, controller 108 may determine if the sister sub-block with a lower PEC value, for example, the first sister sub-block, is in the free block pool. If the first sister sub-block is in the free block pool, controller 108 may prioritize allocation of the first sister sub-block for MLC flows and may make the first sister sub-block, for example, a host Hybrid SLC block, host MLC block, and/or a relocation MLC block. The memory device can consist of a dynamic SLC region, as well as a MLC region, which can be configured for QLC, as described in Gundecha paragraph [0022]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sharma with those of Gundecha and Lim. Gundecha teaches using a wear indication value based on the program/erase counts of memory cell levels of different types. This improves the function of the memory device, as each cell type may have a different expected reliability value corresponding to its PEC limit (i.e., see Gundecha paragraph [0020], Different types of memory devices 110 may have different limits as to how many times the individual blocks on the NAND flash can be programmed/erased before data can no longer be stored reliably. For example, an SLC flash memory may have a limit of approximately 100,000 program/erase (PE) cycles, an MLC flash memory may have a limit of approximately 10,000 PE cycles, a TLC flash memory may have a limit of approximately 5,000 PE cycles, and a QLC flash memory may have a limit of approximately 1,000-100 PE cycles).
Regarding claim 17, Sharma in view of Gundecha in further view of Lim teaches A data storage device comprising a memory controller of claim 16 (see claims 9 and 16 above) and a flash memory (Sharma paragraph [0029], In some embodiments, data storage devices 120 are, or include, solid-state drives (SSDs). Each data storage device 120 may include a non-volatile memory (NVM) or device controller 130 based on compute resources (processor and memory) and a plurality of NVM or media devices 140 as a non-volatile storage medium for data storage (e.g., one or more NVM device(s), such as one or more flash memory devices)).
Response to Arguments
Applicant’s arguments, see page 1 (numbered page 8), filed September 30, 2025, with respect to claim 3 have been fully considered and are persuasive. The objection to claim 3 has been withdrawn.
The informality in dependent claim 3 has been amended and corrected, and the objection is therefore withdrawn. However, the objection to dependent claim 11 has not been withdrawn, as the claim still incorrectly recites the term “forth region,” as described above.
Applicant’s arguments, see pages 1-5 (numbered pages 8-12), filed September 30, 2025, with respect to the rejections of claims 1 and 7 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Redaelli (US Publication No. 2025/0181247 – “Redaelli”) in view of Gohain et al. (US Publication No. 2024/0289031 – “Gohain”) in further view of Vijendra Kumar Lakshmi et al. (US Publication No. 2023/0367486 – “Lakshmi”).
The Lakshmi reference has been added to address the newly amended claim limitation reciting reprogramming the target block itself, as described in further detail in the rejection above. Similarly, the Lim reference has been added to independent claims 9 and 16 to disclose a wearing condition indication value, as also described in the rejection above. In light of the newly cited references and rationale, the rejection under 35 U.S.C. 103 is maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONAH C KRIEGER whose telephone number is (571)272-3627. The examiner can normally be reached Monday - Friday 8 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kenneth Lo can be reached at (571) 272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.K./Examiner, Art Unit 2136
/KENNETH M LO/Supervisory Patent Examiner, Art Unit 2136