NON-FINAL REJECTION
DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 11, 2025 has been entered.
Response to Amendment
The Amendment filed December 11, 2025 has been entered. Claims 1-20 remain pending in the application. Applicant's amendments to the claims have overcome the rejections of claims 1-20 previously set forth in the Final Office Action mailed September 17, 2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2014/0019677), Chen et al. (US 2010/0306448), and Hinkle (US 2023/0297236).
Regarding claim 1, Chang et al. disclose:
A persistent memory controller for memory management, the persistent memory controller (FIG. 1 Processor 102 corresponding to FIG. 6 Processor 600) comprising:
metadata controller circuitry configured to generate metadata (Fig. 6 Cache Usage Tracker 630 of Processor 600; [0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache) based on the metadata controller monitoring data in volatile memory (Fig. 1 Cache 104; [0018] cache (104), for example, may be a memory cache, a processor memory cache, an off-chip memory cache, a random access memory cache, or combinations thereof. The memory cache may contain data, executable code, other information, or combinations thereof. In some examples, the cache uses dynamic random access memory (DRAM), static random access memory (SRAM), another volatile memory) (Fig. 4 metadata 410; [0038] metadata (410) may track usage statistics about each block (404, 406, 408) in the row (402). For example, the metadata (410) may show the times and the frequency that each block is referenced; [0039] The metadata (410) may include information about whether the memory block has information written to it. For example, a "1" stored in a particular metadata field may indicate that information has been written to the memory block. In some examples, when a memory block has been written to or has been changed, the memory block is referred to as a dirty block. The memory device may use these metadata fields to determine which blocks are dirty);
memory manager circuitry (Fig. 6 Processor 600) configured to generate a request to remove the data in the volatile memory based on the metadata ([0040] the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory); and
data controller circuitry configured to process the request (Fig. 6 Processor 600; Fig. 10 step 1014 Request that dirty blocks are written back to the NVM during a background operation) based on a request criterion (Fig. 10 step 1016 Is processor executing on demand requests?) and the arbitrator circuitry allowing the request to proceed to the data controller circuitry (Fig. 10 step 1018 Write back selected dirty blocks to NVM), wherein the persistent memory controller is configured to operate using a cache coherent protocol, and wherein the request criterion is based on the request generated by the memory manager circuitry and a request of a host ([0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user (i.e., a host) to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent), the persistent memory controller being separate from the host (FIG. 1 Processor 102 is within Memory Device 100 and separate from user referenced in paragraph [0025]).
Chang et al. do not appear to explicitly teach “to remove data” and “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Chen et al. disclose:
to remove data ([0003] data in these locations is written back to the non-volatile memory and then the data is removed from the cache)
Chang et al. and Chen et al. are analogous art because Chang et al. teach storing data in persistent hybrid memory and Chen et al. teach cache flushing in a solid state memory device.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Chang et al. and Chen et al. before him/her, to modify the teachings of Chang et al. with the Chen et al. teachings of removing data when data is written back to nonvolatile memory because doing so would free up space in the cache for subsequent cache writes.
Chang et al. and Chen et al. do not appear to explicitly teach “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Hinkle discloses:
…wherein the persistent memory controller is configured to operate using a cache coherent protocol (FIG. 1 Far Memory Controller 32; [0036] the coherent protocol interface that connects the far memory to the host processor may implement the Compute Express Link (CXL); the far memory controller 32 includes at least one processor configured to process the program instructions (see firmware 40), wherein the program instructions are configured to, when processed by the at least one processor, cause the processor to perform various operations. The one or more memory devices 34 may be either volatile or persistent memory devices)…
Chang et al., Chen et al., and Hinkle are analogous art because Chang et al. teach storing data in persistent hybrid memory; Chen et al. teach cache flushing in a solid state memory device; and Hinkle teaches a cache coherent protocol.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Chang et al., Chen et al., and Hinkle before him/her, to modify the teachings of Chang et al. and Chen et al. with Hinkle's teachings of implementing cache coherent logic in the persistent memory controller because doing so would increase system performance (Hinkle [0033]).
Regarding claim 2, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the data controller circuitry processing the request includes the data controller circuitry moving the data based on device idle time (FIG. 10 step 1016 Is processor executing on demand requests?; [0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent).
Regarding claim 3, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the request criterion is based on a priority of the request ([0025] the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent; [0029] writing back may be less of a priority than others processes being executed by the memory device. Thus, the throttling sub-policy may balance the needs of writing back with the processor's other demands) and a priority of a host request.
Chang et al. do not appear to explicitly teach “a priority of a host request.” However, Chen et al. further disclose:
a priority of a host request ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received)
Regarding claim 4, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the request criterion is based on a pattern of the data in the volatile memory ([0033]; [0050] The policies (606) may also include a write back policy (614) that determines which of the memory blocks in the cache should be written back to the non-volatile memory. In some examples, the write back policy (614) includes determining which of the memory blocks is likely to be finished being modified in the cache. Such a prediction may be based on patterns identified through tracking statistics).
Regarding claim 5, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the request criterion is based on at least one of:
an age of the data in the volatile memory ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track…the time duration from which the memory block was written back to the non-volatile memory),
a hotness of the data in the volatile memory, or
a selection process selecting the request.
Regarding claim 6, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the request criterion is based on a number of requests at the memory manager circuitry ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track the number of writes to a memory block, the number of reads to a memory block)…
Chang et al. do not appear to explicitly teach “a number of requests at a host.” However, Chen et al. further disclose:
a number of requests at a host ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received).
Regarding claim 7, Chang et al. further disclose:
The persistent memory controller of claim 6, wherein the request criterion is based on a frequency of requests at the memory manager circuitry ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track…the frequency of writes to a memory block, the frequency of reads to a memory block, the clean/dirty status of a memory block)…
Chang et al. do not appear to explicitly teach “a frequency of requests at the host.” However, Chen et al. further disclose:
a frequency of requests at the host ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received).
Regarding claim 8, Chen et al. further disclose:
The persistent memory controller of claim 6, wherein the request criterion is based on the memory manager circuitry determining a status of the host ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received).
Regarding claim 9, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the metadata includes at least one of:
a dirty status of the data in the volatile memory (Fig. 4 metadata 410; [0040] in the cache memory, the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory),
an age of the data in the volatile memory, or
register metadata that includes at least one of a dirty data count, a hot data count, a host request count, an eviction threshold, or a data hotness threshold.
Regarding claim 10, Chang et al. further disclose:
The persistent memory controller of claim 1, wherein the metadata describes the data in the volatile memory (Fig. 4 metadata 410; [0038] The metadata (410) may track usage statistics about each block (404, 406, 408) in the row (402). For example, the metadata (410) may show the times and the frequency that each block is referenced. The metadata (410) may be updated each time the data is read or written to the memory blocks; [0040] in the cache memory, the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory).
Regarding claim 11, Chang et al. disclose:
A method for memory management via at least one processor of one or more processors (Fig. 6 Processor 600), the method comprising:
generating metadata (Fig. 6 Cache Usage Tracker 630 of Processor 600; [0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache) based on monitoring data in volatile memory (Fig. 1 Cache 104; [0018] cache (104), for example, may be a memory cache, a processor memory cache, an off-chip memory cache, a random access memory cache, or combinations thereof. The memory cache may contain data, executable code, other information, or combinations thereof. In some examples, the cache uses dynamic random access memory (DRAM), static random access memory (SRAM), another volatile memory) (Fig. 4 metadata 410; [0038] metadata (410) may track usage statistics about each block (404, 406, 408) in the row (402). For example, the metadata (410) may show the times and the frequency that each block is referenced; [0039] The metadata (410) may include information about whether the memory block has information written to it. For example, a "1" stored in a particular metadata field may indicate that information has been written to the memory block. In some examples, when a memory block has been written to or has been changed, the memory block is referred to as a dirty block. The memory device may use these metadata fields to determine which blocks are dirty);
generating a request to remove the data in the volatile memory based on the metadata ([0040] the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory); and
processing the request based on the request (Fig. 10 step 1014 Request that dirty blocks are written back to the NVM during a background operation) being allowed to proceed to data controller circuitry of a persistent memory controller (Fig. 6 Processor 600) in response to arbitrator circuitry of the persistent memory controller granting the request based on a request criterion (Fig. 10 step 1016 Is processor executing on demand requests?; Fig. 10 step 1018 Write back selected dirty blocks to NVM), wherein the persistent memory controller is configured to operate using a cache coherent protocol, and wherein the request criterion is based on the request generated by memory manager circuitry of the persistent memory controller and a request of a host ([0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user (i.e., a host) to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent), the persistent memory controller being separate from the host (FIG. 1 Processor 102 is within Memory Device 100 and separate from user referenced in paragraph [0025]).
Chang et al. do not appear to explicitly teach “to remove data” and “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Chen et al. disclose:
to remove data ([0003] data in these locations is written back to the non-volatile memory and then the data is removed from the cache)
The motivation for combining is based on the same rationale presented for the rejection of independent claim 1.
Chang et al. and Chen et al. do not appear to explicitly teach “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Hinkle discloses:
…wherein the persistent memory controller is configured to operate using a cache coherent protocol (FIG. 1 Far Memory Controller 32; [0036] the coherent protocol interface that connects the far memory to the host processor may implement the Compute Express Link (CXL); the far memory controller 32 includes at least one processor configured to process the program instructions (see firmware 40), wherein the program instructions are configured to, when processed by the at least one processor, cause the processor to perform various operations. The one or more memory devices 34 may be either volatile or persistent memory devices)…
The motivation for combining is based on the same rationale presented for the rejection of independent claim 1.
Regarding claim 12, Chang et al. further disclose:
The method of claim 11, wherein processing the request includes moving the data based on device idle time (FIG. 10 step 1016 Is processor executing on demand requests?; [0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent).
Regarding claim 13, Chang et al. further disclose:
The method of claim 11, wherein the request criterion is based on at least one of:
a priority of the request and a priority of a host request, or a pattern of the data in the volatile memory ([0033]; [0050] The policies (606) may also include a write back policy (614) that determines which of the memory blocks in the cache should be written back to the non-volatile memory. In some examples, the write back policy (614) includes determining which of the memory blocks is likely to be finished being modified in the cache. Such a prediction may be based on patterns identified through tracking statistics).
Regarding claim 14, Chang et al. further disclose:
The method of claim 11, wherein the request criterion is based on at least one of:
an age of the data in the volatile memory ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track…the time duration from which the memory block was written back to the non-volatile memory), or
a hotness of the data in the volatile memory.
Regarding claim 15, Chang et al. further disclose:
The method of claim 11, wherein the request criterion is based on a number of requests at a memory manager circuitry ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track the number of writes to a memory block, the number of reads to a memory block)…
Chang et al. do not appear to explicitly teach “a number of requests at a host.” However, Chen et al. further disclose:
a number of requests at a host ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received).
Regarding claim 16, Chang et al. further disclose:
The method of claim 15, wherein the request criterion is based on at least one of:
a frequency of requests at the memory manager circuitry ([0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache. Such a tracker (630) may track the number of writes to a memory block, the number of reads to a memory block) and…or determining the host is in an idle state.
Chang et al. do not appear to explicitly teach “a frequency of requests at the host.” However, Chen et al. further disclose:
a frequency of requests at the host ([0030] auto-flush function essentially sets the write-back policy so that dirty data is written back after a threshold time period expires and during which no host requests are received).
Regarding claim 17, Chang et al. further disclose:
The method of claim 11, wherein the metadata includes at least one of:
a dirty status of the data in the volatile memory (Fig. 4 metadata 410; [0040] in the cache memory, the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory),
an age of the data in the volatile memory, or
register metadata that includes at least one of a dirty data count, a hot data count, a host request count, an eviction threshold, or a data hotness threshold.
Regarding claim 18, Chang et al. disclose:
A non-transitory computer-readable medium storing code, the code comprising instructions executable by at least one processor of a device to ([0046] processor (600) may be caused to operate by computer readable program code stored in a computer readable storage medium in communication with the processor (600). The computer readable storage medium may be tangible and/or non-transitory):
generate metadata (Fig. 6 Cache Usage Tracker 630 of Processor 600; [0056] The processor (600) may have a cache usage tracker (630) that tracks the usage of the memory blocks in the cache) based on data monitored in volatile memory (Fig. 1 Cache 104; [0018] cache (104), for example, may be a memory cache, a processor memory cache, an off-chip memory cache, a random access memory cache, or combinations thereof. The memory cache may contain data, executable code, other information, or combinations thereof. In some examples, the cache uses dynamic random access memory (DRAM), static random access memory (SRAM), another volatile memory) (Fig. 4 metadata 410; [0038] metadata (410) may track usage statistics about each block (404, 406, 408) in the row (402). For example, the metadata (410) may show the times and the frequency that each block is referenced; [0039] The metadata (410) may include information about whether the memory block has information written to it. For example, a "1" stored in a particular metadata field may indicate that information has been written to the memory block. In some examples, when a memory block has been written to or has been changed, the memory block is referred to as a dirty block. The memory device may use these metadata fields to determine which blocks are dirty);
generate a request to remove the data in the volatile memory based on the metadata ([0040] the metadata fields that indicate that the memory block is dirty may be used when determining which memory blocks to write back to the non-volatile memory); and
process the request (Fig. 6 Processor 600; Fig. 10 step 1014 Request that dirty blocks are written back to the NVM during a background operation) based on the request being allowed to proceed to data controller circuitry of a persistent memory controller (Fig. 6 Processor 600) in response to arbitrator circuitry of the persistent memory controller granting the request based on a request criterion (Fig. 10 step 1016 Is processor executing on demand requests?; Fig. 10 step 1018 Write back selected dirty blocks to NVM), wherein the persistent memory controller is configured to operate using a cache coherent protocol, and wherein the request criterion is based on the request generated by memory manager circuitry of the persistent memory controller and a request of a host ([0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user (i.e., a host) to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent), the persistent memory controller being separate from the host (FIG. 1 Processor 102 is within Memory Device 100 and separate from user referenced in paragraph [0025]).
Chang et al. do not appear to explicitly teach “to remove data” and “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Chen et al. disclose:
to remove data ([0003] data in these locations is written back to the non-volatile memory and then the data is removed from the cache)
The motivation for combining is based on the same rationale presented for the rejection of independent claim 1.
Chang et al. and Chen et al. do not appear to explicitly teach “wherein the persistent memory controller is configured to operate using a cache coherent protocol.” However, Hinkle discloses:
…wherein the persistent memory controller is configured to operate using a cache coherent protocol (FIG. 1 Far Memory Controller 32; [0036] the coherent protocol interface that connects the far memory to the host processor may implement the Compute Express Link (CXL); the far memory controller 32 includes at least one processor configured to process the program instructions (see firmware 40), wherein the program instructions are configured to, when processed by the at least one processor, cause the processor to perform various operations. The one or more memory devices 34 may be either volatile or persistent memory devices)…
The motivation for combining is based on the same rationale presented for the rejection of independent claim 1.
Regarding claim 19, Chang et al. further disclose:
The non-transitory computer-readable medium of claim 18, wherein processing the request is based on further instructions executable by the at least one processor of the device to move the data during an idle time of the device (FIG. 10 step 1016 Is processor executing on demand requests?; [0025] the write back policy (118) may have a throttling sub-policy (120) that limits the write backs to a time when convenient. For example, if the processor (102) is executing on demand requests, the writing back may be put on hold to free up the cache and non-volatile memory for the on demand requests. An on demand request may be a request that is made by a user to be performed by the processor at the time that the user makes the request. In some examples, writing back is performed in the background of the memory device so as to create as little interference with the other operations of the memory device (100). In some examples, the throttling sub-policy (120) allows some interference to be created if the need to write back is great enough and the other operations of the memory device (100) are less urgent).
Regarding claim 20, Chang et al. further disclose:
The non-transitory computer-readable medium of claim 18, wherein the request criterion is based on at least one of:
a priority of the request and a priority of a host request,
a pattern of the data in the volatile memory ([0033]; [0050] The policies (606) may also include a write back policy (614) that determines which of the memory blocks in the cache should be written back to the non-volatile memory. In some examples, the write back policy (614) includes determining which of the memory blocks is likely to be finished being modified in the cache. Such a prediction may be based on patterns identified through tracking statistics),
an age of the data in the volatile memory, or
a hotness of the data in the volatile memory.
Response to Arguments
Applicant’s arguments, filed December 11, 2025, with respect to the rejection(s) of claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chang et al., Chen et al., and Hinkle.
With respect to applicant’s arguments against the teachings of Chang et al., the examiner is not persuaded. Chang et al. teach the amended claim limitation “wherein the request criterion is based on the request generated by the memory manager circuitry and a request of a host,” as discussed supra. Applicant also argues (Remarks page 8) that the processor disclosed by Chang et al. is a host processor. Referring to Fig. 1 of Chang et al., the processor is contained within the memory device and is therefore not a host processor. Additionally, Chang et al. disclose that a user initiates requests performed by the processor contained within the memory device; see paragraph [0025]. For purposes of examination, the user has been interpreted as the claimed host. Therefore, the processor contained within the memory device is separate and distinct from the host.
Regarding applicant’s argument that the combination of references teaches away from the claimed invention, the examiner disagrees. It is noted that applicant has failed to specifically point to the limitations for which Chen is applied that are believed to teach away from the teachings of Chang. Applicant argues that Chang teaches continuous background write-backs during idle gaps to minimize latency, while Chen expressly delays all write-backs until a threshold timer expires to minimize NAND wear. Applicant asserts that integrating Chen's policy, which is triggered only after prolonged host quiescence, with Chang's frequent background flushing would force mutually exclusive behaviors (Remarks pages 9-10).
First, Chang teaches a write back policy with a throttling sub-policy and a compaction sub-policy. The compaction sub-policy predicts when a memory block is finished receiving modifications, causing such memory blocks to be written back in a batch ([0025], [0030]). The throttling sub-policy performs write backs in the background when demand requests are not being performed ([0025]). The write back policy also considers the amount of time since a memory block in the cache was last written back to the non-volatile memory ([0050]). The implementation of the sub-policies is determined by different factors such as demand requests, bandwidth, and the number of blocks to be written back ([0051]-[0052]). Turning to Fig. 10, the decision to write back dirty blocks is dependent upon a threshold number of dirty blocks in the cache. If the number of dirty blocks is less than the threshold, then a decision is made as to whether the dirty blocks are ready to be written back. Only after the readiness of the blocks to be written back is determined are blocks written back based on demand requests. Therefore, Chang does not merely teach continuous background write-backs during idle gaps as asserted by applicant (Remarks page).
Next, Chen teaches various write back policies to control when and how data within a cache are written back to non-volatile memory ([0024]). The write back policy may implement an auto-flush function and an auto-flush timer. The auto-flush function sets the write back policy so that dirty data is written back only after a threshold time period, as determined by the auto-flush timer, expires during idle periods in which no host commands are received. The auto-flush write back terminates on receipt of a host command, so that the controller may service the host command ([0030]). The write back policy may also be set so that each set of data is written back depending on the amount of time the data has been in the write cache, the write location of the data within the flash array, or the frequency of access to the data within the solid state drive by the host. Additionally, the write back policy may integrate various data leveling mechanisms to maximize the life of the flash array ([0031]-[0032]). Turning to FIG. 3, the auto-flush process must first be enabled. The write back policy therefore does not delay all write backs using the auto-flush process as asserted by applicant (Remarks page 9).
Because Chang and Chen each implement write back policies tailored to different conditions and objectives, integrating Chen with the teachings of Chang would not force mutually exclusive behaviors. The fact that Chang and Chen implement different desirable features in their respective write back policies does not constitute a teaching away from any of the alternatives.
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY A WARREN whose telephone number is (571)270-7288. The examiner can normally be reached M-Th 7:30am-5pm, Alternate F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P. Savla can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TRACY A WARREN/Primary Examiner, Art Unit 2137