Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 16, 2026 has been entered.
Claim Status
Claims 1, 3, 8, and 11-22 have been amended. No claims have been added or cancelled. Claims 1-28 remain pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on January 29, 2026 was filed and is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 8 is objected to because of the following informalities: Claim 8 reads “selecting at least one different personality region …”. The claim should read “selecting the at least one different personality region”, as the term was previously introduced in independent claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-2, 8, 11, 13, 18, 21 and 26-27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra et al. (US Publication No. 2023/0051806 -- "Karkra") in view of Bradshaw et al. (US Publication No. 2021/0081325 -- "Bradshaw") in further view of Lu et al. (US Publication No. 2021/0089471 -- "Lu").
Regarding claim 1, Karkra teaches A system comprising: one or more circuits to: (see Karkra paragraph [0048], In some examples, storage device components 850 can include common computing elements or circuitry, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, interfaces, oscillators, timing devices, power supplies, and so forth. Examples of memory units can include without limitation various types of computer readable and/or machine-readable storage media any other type of volatile or non-volatile storage media suitable for storing information) store the data in the at least one region as a version of the data; (Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. Data can be stored in a multi stage process resulting in particular versions of data being stored) transfer the data from the at least one region to a cache in a non-volatile memory that is other than a SSD to obtain cached data; and evict the cached data from the cache by: (Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. A compaction process 206 issues compaction reads 214 to the high performance NVM device 114 and replaces invalid data (i.e., data marked as no longer valid) with valid data. The compaction process 206 further issues compaction writes 216 to a storage process 210 to store the compacted data to the high capacity NVM storage drive 118. Data can be stored and updated in the non-volatile cache in the NVM device, and evicted to the drive when compacted and evicted) and selecting at least one different personality region for the cached data (Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. A compaction process 206 issues compaction reads 214 to the high performance NVM device 114 and replaces invalid data (i.e., data marked as no longer valid) with valid data. The compaction process 206 further issues compaction writes 216 to a storage process 210 to store the compacted data to the high capacity NVM storage drive 118. 
The host FTL 110 rewrites some of the data from the high capacity NVM storage drive 118 during the garbage collection process 208 using garbage collection reads 220 and garbage collection writes 218. In addition, high capacity NVM storage drive 118 needs to eventually perform its own garbage collection process to make free space resulting in the drive FTL 120 issuing its own internal writes 222. The data can be cached and then stored in a different NVM storage drive, which can be interpreted as personality regions depending on various factors of the NVM, such as high-performance vs. high-capacity, as described above) and transferring the cached data to the at least one different personality region if the cached data does not match the version of the data stored in the at least one region; and (Karkra Fig. 1; Karkra paragraph [0019], FIG. 1 illustrates a block diagram of an example storage architecture 100 in which an embodiment of FTL synchronization can be implemented. One or more applications 102 issue requests to create, write, modify and read data maintained in a storage system combining a high performance non-volatile memory (NVM) device 114 for caching and staging data, and one or more high capacity non-volatile memory (NVM) storage devices 118, such as a NAND QLC (Quad Level Cell)/PLC (Penta Level Cell) SSD for storing data, hereafter referred to as an NVM storage drive 118. Karkra explicitly teaches the concept of using a higher performance NVM to store and cache data, which can then further be transferred to a corresponding storage drive (i.e., SSD) when required, see paragraph [0019], The storage architecture 100 allows a host FTL 110 to receive user writes over an 10 interface 104 and to access a write shaping buffer 108 and write shaping algorithms 112 to shape the user writes into shaped writes. The shaped writes are staged on the high performance NVM device 114 for writing back to the high capacity NVM storage drive 118. The shaped writes are typically large, sequential, indirection-unit (IU) aligned writes designed to reduce write amplification at the drive FTL 120) updating a memory map to indicate the current storage location of the data (Karkra paragraph [0016-0017], To address this challenge, embodiments of FTL synchronization synchronize the host FTL operations with the drive FTL operations. Among other aspects, embodiments of FTL synchronization map, at the host FTL SW stack level, logical bands in which data is managed, referred to as host bands, to the physical bands on a drive where data is stored. Among other information, embodiments of FTL synchronization are based in part on determining data validity levels. In NVM Flash devices, data is typically marked as valid or no longer valid by logical block address (LBA) as maintained in a logical-to-physical address table (L2P table). A host band validity level is typically expressed as a percentage of data managed in the host bands that is still valid. The validity level typically decreases over time as more of the data gets erased or deleted from the physical band where it was stored, or is otherwise no longer valid. The memory mapping (i.e., L2P map) is synchronized to update corresponding to the data transfer between the cache and memory region of the drive (i.e. the SSD), also see Karkra paragraph [0023]).
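For illustration only, the following minimal sketch (in Python; not taken from Karkra, and all identifiers are hypothetical) outlines a staging-and-compaction flow of the kind described above, in which small writes are staged in a fast non-volatile cache, aggregated into a single large write to the capacity drive, and the logical-to-physical (L2P) map is updated to the new location:

# Illustrative sketch only (hypothetical names, not Karkra's implementation).
l2p = {}                 # logical block address -> ("cache" or "drive", offset)
cache = []               # staged small writes
drive = []               # large, aggregated writes

def stage_write(lba, data):
    cache.append((lba, data))
    l2p[lba] = ("cache", len(cache) - 1)

def compact_and_flush():
    """Aggregate staged writes into one drive write and remap the LBAs."""
    if not cache:
        return
    band = list(cache)                       # one large, sequential write
    drive.append(band)
    for i, (lba, _) in enumerate(band):
        l2p[lba] = ("drive", (len(drive) - 1, i))
    cache.clear()

stage_write(0x10, b"a")
stage_write(0x11, b"b")
compact_and_flush()
print(l2p)   # both LBAs now point at the drive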
Karkra does not teach obtaining a personality type of a plurality of candidate personality types for data stored in the cache; selecting at least one region … having the personality type; comparing the cached data to the version of the data; discarding the cached data and identifying the at least one region as a current storage location of the data if the cached data matches the data.
However, Bradshaw teaches obtaining a personality type of a plurality of candidate personality types for data stored in the cache; (Bradshaw paragraph [0137], In one embodiment, a method comprises: accessing, by a processing device (e.g., processing device 310 of FIG. 3) of a computer system, memory in an address space, wherein memory devices of the computer system are accessed by the processing device using addresses in the address space; storing metadata (e.g., metadata 320 and/or 322) that associates a first address range of the address space with a first memory device (e.g., DRAM 304), and a second address range of the address space with a second memory device (e.g., NVRAM 306), wherein a first latency of the first memory device is different from a second latency of the second memory device; and allocating, based on the stored metadata, the first address range to an application (e.g., application 312) executing on the computer system. Data can be evicted from the cache by obtaining a data access pattern associated with the given data, and can be correspondingly mapped to a section of a non-volatile memory for eviction, also see Bradshaw paragraph [0039] for mapping according to data access patterns, In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process)) selecting at least one region … having the personality type (Bradshaw paragraph [0150], the accessing including accessing the first memory device and the second memory device using addresses in the address space; store metadata that associates a first address range of the address space with the first memory device, and a second address range of the address space with the second memory device; and manage, by the operating system based on the stored metadata, processes including a first process and a second process, wherein data for the first process is stored in the first memory device, and data for the second process is stored in the second memory device. A region of non-volatile memory may be selected to correspond with the address space of the cache, and can be re-mapped and/or rebinded, see Bradshaw paragraph [0221], In one embodiment, namespaces are used for memory access. Each namespace is a named logical reference to a set of memory units in which memory addresses are defined. An application allocates memory from a namespace. The operating system provides a service which can be called by an application to bind and/or re-bind the namespace to a particular type of memory (e.g., DRAM, NVRAM, NAND flash)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra with those of Bradshaw. Bradshaw is added to the teachings of Karkra to disclose the concept of mapping a personality type (i.e., a data access pattern) between the two separate memories in order to provide optimized storage and data transfer (see Bradshaw paragraph [0039], In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process). Based on stored metadata regarding address range mapping to these memory device types, the processor allocates the process to an address range having an appropriate memory latency).
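As a further illustration of the cited address-range metadata, a minimal hypothetical sketch follows (not Bradshaw's code; the device names, ranges, and access-pattern labels are assumptions) showing how stored metadata can associate address ranges with memory device types and how an access pattern can drive the allocation choice:

# Illustrative sketch only (hypothetical names and ranges).
RANGES = {
    "DRAM":  range(0x0000, 0x4000),   # low latency
    "NVRAM": range(0x4000, 0x8000),   # higher latency, persistent
}

def pick_range(access_pattern):
    """Map an access pattern to a device type and return its address range."""
    device = "DRAM" if access_pattern == "hot_random" else "NVRAM"
    return device, RANGES[device]

print(pick_range("hot_random"))      # ('DRAM', range(0, 16384))
print(pick_range("cold_sequential")) # ('NVRAM', range(16384, 32768))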
Karkra in view of Bradshaw does not teach comparing the cached data to the version of the data; discarding the cached data and identifying the at least one region as a current storage location of the data if the cached data matches the data.
However, Lu teaches comparing the cached data to the version of the data; discarding the cached data and identifying the at least one region as a current storage location of the data if the cached data matches the data (Lu paragraph [0226], The virtualized cache implementation method provided in the embodiments of this application may further implement a technology similar to the cache reuse, such as cache deduplication, to reduce costs of a cache of a virtual machine. Cache deduplication is to delete duplicate data in a cache. Cache deduplication may include inline deduplication and offline deduplication. Inline deduplication means that when data is written into a cache of a virtual machine, it is determined whether same data already exists in another area, and if yes, a mapping relationship between a first physical address and a second physical address is directly modified. Offline deduplication means that there is a scheduled task on a physical machine, where the task is to periodically scan data in caches of N virtual machines, and if same data is found, a mapping relationship between a first physical address and a second physical address is modified, and a plurality of first physical addresses are mapped to a same second physical address. Cache data may be deleted/discarded in the event that the cache data matches stored data in storage).
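A minimal hypothetical sketch of inline deduplication in the sense relied on above follows (not Lu's implementation; all names are illustrative): on a write, the new data is compared against data already stored, and a match causes the new copy to be discarded and the mapping to point at the existing copy:

# Illustrative sketch only (hypothetical names).
store = {"pa-1": b"hello"}          # physical address -> stored data
mapping = {}                        # first (virtual) address -> physical address

def write_to_cache(vaddr, data):
    for pa, existing in store.items():
        if existing == data:
            mapping[vaddr] = pa      # duplicate: remap, do not store again
            return "deduplicated"
    pa = f"pa-{len(store) + 1}"
    store[pa] = data
    mapping[vaddr] = pa
    return "stored"

print(write_to_cache("va-9", b"hello"))   # deduplicated
print(write_to_cache("va-10", b"world"))  # stored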
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw with those of Lu. Lu teaches comparing data that has been cached to data currently stored and deleting the data if a duplication is detected. The process of deduplication is well known to remove data that is already stored to free up the cache for future data storage (i.e., see Lu paragraph [0227], It should be noted that in the embodiments of this application, a storage device usually used for storage may provide storage space for a cache of a virtual machine. In addition, the storage device may be a byte-based addressing device. When a storage device is used to provide storage space for a cache of a virtual machine, based on the solutions provided in the embodiments of this application, a physical machine can directly access the cache of the virtual machine without virtualization overheads. In addition, the physical machine may further perform, through cooperation among a front-end driver, a back-end driver, and a cache management module, unified cache management such as cache allocation, cache flushing, cache eviction, cache size modification, cache property modification, and cache reuse on N virtual machines on the physical machine. This implements flexible management of the caches of the virtual machines and reduces cache management costs).
Claims 11 and 21 are the corresponding processor and method claims to system claim 1. They are rejected with the same references and rationale.
Regarding claim 2, Karkra in view of Bradshaw in further view of Lu teaches The system of claim 1, wherein the one or more circuits are to perform at least one read or write operation on the data before transferring the data to the at least one region (Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. A compaction process 206 issues compaction reads 214 to the high performance NVM device 114 and replaces invalid data (i.e., data marked as no longer valid) with valid data. The compaction process 206 further issues compaction writes 216 to a storage process 210 to store the compacted data to the high capacity NVM storage drive 118. Data can be stored and updated in the non-volatile cache in the NVM device, and evicted to the drive when compacted and evicted).
Regarding claim 8, Karkra in view of Bradshaw in further view of Lu teaches The system of claim 1, wherein selecting at least one different personality region for the cached data comprises obtaining a personality type for the cached data based, at least in part, on an access pattern associated with the data (Bradshaw paragraph [0137], In one embodiment, a method comprises: accessing, by a processing device (e.g., processing device 310 of FIG. 3) of a computer system, memory in an address space, wherein memory devices of the computer system are accessed by the processing device using addresses in the address space; storing metadata (e.g., metadata 320 and/or 322) that associates a first address range of the address space with a first memory device (e.g., DRAM 304), and a second address range of the address space with a second memory device (e.g., NVRAM 306), wherein a first latency of the first memory device is different from a second latency of the second memory device; and allocating, based on the stored metadata, the first address range to an application (e.g., application 312) executing on the computer system. Data can be evicted from the cache by obtaining a data access pattern associated with the given data, and can be correspondingly mapped to a section of a non-volatile memory for eviction, also see Bradshaw paragraph [0039] for mapping according to data access patterns, In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra with those of Bradshaw. Bradshaw is added to the teachings of Karkra to disclose the concept of mapping a personality type (i.e., a data access pattern) between the two separate memories in order to provide optimized storage and data transfer (see Bradshaw paragraph [0039], In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process). Based on stored metadata regarding address range mapping to these memory device types, the processor allocates the process to an address range having an appropriate memory latency).
Regarding claim 13, Karkra in view of Bradshaw in further view of Lu teaches The processor of claim 11, wherein the circuitry is to: obtain the personality type by classifying an application associated with the data as corresponding to the personality type (Bradshaw paragraph [0137], In one embodiment, a method comprises: accessing, by a processing device (e.g., processing device 310 of FIG. 3) of a computer system, memory in an address space, wherein memory devices of the computer system are accessed by the processing device using addresses in the address space; storing metadata (e.g., metadata 320 and/or 322) that associates a first address range of the address space with a first memory device (e.g., DRAM 304), and a second address range of the address space with a second memory device (e.g., NVRAM 306), wherein a first latency of the first memory device is different from a second latency of the second memory device; and allocating, based on the stored metadata, the first address range to an application (e.g., application 312) executing on the computer system. Data can be evicted from the cache by obtaining a data access pattern associated with the given data, and can be correspondingly mapped to a section of a non-volatile memory for eviction, also see Bradshaw paragraph [0039] for mapping according to data access patterns, In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra with those of Bradshaw. Bradshaw is added to the teachings of Karkra to disclose the concept of mapping a personality type (i.e., a data access pattern) between the two separate memories in order to provide optimized storage and data transfer (see Bradshaw paragraph [0039], In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process). Based on stored metadata regarding address range mapping to these memory device types, the processor allocates the process to an address range having an appropriate memory latency).
Regarding claim 18, Karkra in view of Bradshaw in further view of Lu teaches The one or more processors of claim 11, wherein the at least one non-volatile memory device comprises dynamic random access memory (“DRAM”) (Bradshaw paragraph [0039], In one example, memory device types include DRAM, NVRAM, and NAND flash. The priority of a process is determined by the processor (e.g., based on data usage patterns by the process). Based on stored metadata regarding address range mapping to these memory device types, the processor allocates the process to an address range having an appropriate memory latency).
Claim 26 is the corresponding method claim to processor claim 18. It is rejected with the same references and rationale.
Regarding claim 27, Karkra in view of Bradshaw in further view of Lu teaches The method of claim 21, wherein the at least one SSD comprises a remote memory device that is remote with respect to the cache (Karkra Fig. 1; Karkra paragraph [0019], FIG. 1 illustrates a block diagram of an example storage architecture 100 in which an embodiment of FTL synchronization can be implemented. One or more applications 102 issue requests to create, write, modify and read data maintained in a storage system combining a high performance non-volatile memory (NVM) device 114 for caching and staging data, and one or more high capacity non-volatile memory (NVM) storage devices 118, such as a NAND QLC (Quad Level Cell)/PLC (Penta Level Cell) SSD for storing data, hereafter referred to as an NVM storage drive 118. Karkra explicitly teaches the concept of using a higher performance NVM to store and cache data, which can then further be transferred to a remote storage drive (i.e., SSD) when required, see paragraph [0019], The storage architecture 100 allows a host FTL 110 to receive user writes over an 10 interface 104 and to access a write shaping buffer 108 and write shaping algorithms 112 to shape the user writes into shaped writes. The shaped writes are staged on the high performance NVM device 114 for writing back to the high capacity NVM storage drive 118. The shaped writes are typically large, sequential, indirection-unit (IU) aligned writes designed to reduce write amplification at the drive FTL 120).
Claim(s) 3, 5, 9, 14, 19-20 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra in view of Bradshaw in further view of Lu as applied to claims 1, 11 and 21 above, and further in view of Inna et al. (US Patent No. 10,936,500 -- "Inna").
Regarding claim 3, Karkra in view of Bradshaw in further view of Lu and further in view of Inna teaches The system of claim 1, wherein the version of the data stored in the at least one region is a first version of the data, the cached data transferred to the at least one different personality region is a second version of the data (Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. A compaction process 206 issues compaction reads 214 to the high performance NVM device 114 and replaces invalid data (i.e., data marked as no longer valid) with valid data. The compaction process 206 further issues compaction writes 216 to a storage process 210 to store the compacted data to the high capacity NVM storage drive 118. The host FTL 110 rewrites some of the data from the high capacity NVM storage drive 118 during the garbage collection process 208 using garbage collection reads 220 and garbage collection writes 218. In addition, high capacity NVM storage drive 118 needs to eventually perform its own garbage collection process to make free space resulting in the drive FTL 120 issuing its own internal writes 222. The cached data may include a valid and invalid version) and the one or more circuits are to: cause the second version of the data to be transferred from the at least one different personality region back to the cache to obtain new cache data, (Inna column 16; lines 1-25, In some embodiments, a server process may wish to execute a special type of write operation, such as an operation which requires transfer of bulk data in a single operation. Such operations are referred to herein as bulk type of operations. Some non-limiting examples of such operations include large multi-block database write operations, such as COPY IN and stale/dead row cleanup operations like VACUUM. In most cases, the requirement for transfer of the bulk data is a one-time operation and accordingly, the bulk data does not need to be persisted in the buffers of the persistent memory database cache 340. Accordingly, in at least one embodiment, the cache manager 220 is configured to assign a subset of buffers from among the plurality of buffers in response to a receipt of a bulk type of operation. Accordingly, in FIG. 5, a subset of buffers 504 is depicted to be assigned from among the plurality of buffers 502 for receiving bulk type of data. The buffers, such as buffers 504x, 504y to 504z in the subset of buffers 504 are capable of being recycled and repeatedly used for caching data in relation to a bulk type of operation. It is noted that the subset of buffers is a negligibly small percentage of the entire buffer cache provided by the persistent memory database cache 340. The data may be transferred from the storage region to the buffer (i.e., caching data), as opposed to the first operation in the other direction) transfer the new cache data from the cache to one or more regions of the at least one SSD (for SSD, see Inna column 1; lines 24-45, Relational databases typically are managed using a relational database management system (RDBMS).
One or more clients (i.e. user applications) establish a connection with a server associated with the RDBMS to create tables and store data in the tabular format. The server typically uses a storage media, such as hard disk drives (HDDs) and/or solid state drives (SSDs) at the backend for storing user's data in the tabular format) having the personality type when the new cache data does not match the first version, and identify the at least one different personality region as a location of the data when the new cache data matches the second version (Inna column 12; lines 8-35, Referring back to FIG. 3, the DRAM 204 is configured to store a buffer hash table 306 including a plurality of buffer tags, such as a buffer tag 306a, a buffer tag 306b and a buffer tag 306n. Each buffer tag is configured to uniquely identify a buffer from among the plurality of buffers. More specifically, each buffer tag is mapped to a unique buffer ID of a buffer in the persistent memory database cache 340. In at least one embodiment, a server process of the database server 202 (shown in FIG. 2) uses a buffer tag to read a buffer from the persistent memory 206. The cache manager 220 is configured to compare the buffer tag with the plurality of buffer tags in the buffer hash table 306 in the DRAM 204 and, subsequent to finding a match, obtain a buffer ID mapped to the matching buffer tag. The cache manager 220 is thereafter configured to look-up the buffer ID in the buffer metadata associated with the plurality of buffers 302. The flags, reference counts and usage counts are then used to determine whether the requested data available in a buffer is in persistent memory database cache 340 (i.e. cache hit), whether the buffer data is valid, or if a new buffer is needed (cache miss), whether free buffers are available (from the free list), or if a victim buffer needs to be identified and its data need to be written. In the case when data is written to the buffer, the cache manger 220 is configured to set the flag of the respective buffer to a dirty state (such as the dirty state 420 shown in FIG. 4) after the data is written into the buffer. The buffer hash table 306 and the persistent memory database cache 340, or more specifically. The data that is stored/cached may utilize a matching system that will determine if the data version already exists, while implementing the transfer process if an identity match is not detected).
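For illustration, a minimal hypothetical sketch of the cited buffer-tag lookup follows (not Inna's code; all names and values are assumptions): a buffer tag is resolved to a buffer ID, and the buffer metadata determines whether the access is a cache hit, an invalid buffer, or a cache miss:

# Illustrative sketch only (hypothetical names).
buffer_hash_table = {"tag-A": 0, "tag-B": 1}        # buffer tag -> buffer ID
buffer_metadata = [
    {"valid": True,  "dirty": False, "data": b"row1"},
    {"valid": False, "dirty": False, "data": b""},
]

def lookup(tag):
    buf_id = buffer_hash_table.get(tag)
    if buf_id is None:
        return "miss"                # a new (or victim) buffer is needed
    meta = buffer_metadata[buf_id]
    return "hit" if meta["valid"] else "invalid"

print(lookup("tag-A"))   # hit
print(lookup("tag-B"))   # invalid
print(lookup("tag-Z"))   # miss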
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Inna. Inna teaches using different versions of data to be transferred from separate regions and performing a matching comparison, which can reduce the number of unnecessary transfer operations (see Inna column 1; lines 55-60, For the aforementioned reasons, there is a need to enable vertical scaling of the storage architecture without incurring high costs while retaining DRAM-like low access latency and memory-like access to cached content. It would also be advantageous to address the query latency variance drawback of the current storage architecture and Inna column 2; lines 56-61, Modifications to the buffer hash table are routed to the DRAM, and modifications to buffer content and modifications to buffer descriptor values corresponding to the first type of buffer descriptors are explicitly flushed to the persistent memory database cache in the persistent memory).
Claims 20 and 22 are the corresponding processor and method claims to system claim 3. They are rejected with the same references and rationale.
Regarding claim 5, Karkra in view of Bradshaw in further view of Lu and further in view of Inna teaches The system of claim 1, wherein the one or more circuits are to shape the data for use with the personality type (Inna column 8; lines 19-28, In an example embodiment, at least one module of the database system 200 may include I/O circuitry (not shown in FIG. 2) configured to control at least some functions of one or more elements of the I/O module 216. The module of the database system 200 and/or the I/O circuitry may be configured to control one or more functions of the one or more elements of the I/O module 216 through computer program instructions stored on a memory, for example, the memory module 214, and accessible to the processing module 212 of the database system 200. Various input/output circuitry can be utilized to shape the data that is used for the namespace typing/determination. Also see Inna column 20; lines 33-50, Particularly, the database server 202 and its various components such as the processing module 212, the memory module 214, the I/O module 216, the communication module 218 and the cache manager 220 may be enabled using software and/or using transistors, logic gates, and electrical circuits (for example, integrated circuit circuitry such as ASIC circuitry). Various embodiments of the present invention may include one or more computer programs stored or otherwise embodied on a computer-readable medium, wherein the computer programs are configured to cause a processor or computer to perform one or more operations (for example, operations explained herein with reference to the cache manager 220). A computer-readable medium storing, embodying, or encoded with a computer program, or similar language, may be embodied as a tangible data storage device storing one or more software programs that are configured to cause a processor or computer to perform one or more operations).
Claim 14 is the corresponding processor claim to system claim 5. It is rejected with the same references and rationale.
Regarding claim 9, Karkra in view of Bradshaw in further view of Lu and further in view of Inna further teaches The system of claim 1, wherein the one or more circuits are to perform load balancing with respect to input and output (“I/O”) commands across at least one of the at least one SSD or a plurality of personality regions that comprise the at least one region (Inna column 8; lines 19-28, In an example embodiment, at least one module of the database system 200 may include I/O circuitry (not shown in FIG. 2) configured to control at least some functions of one or more elements of the I/O module 216. The module of the database system 200 and/or the I/O circuitry may be configured to control one or more functions of the one or more elements of the I/O module 216 through computer program instructions stored on a memory, for example, the memory module 214, and accessible to the processing module 212 of the database system 200. Various input/output circuitry can be utilized to shape the data that is used for the namespace typing/determination. Also see Inna column 20; lines 33-50, Particularly, the database server 202 and its various components such as the processing module 212, the memory module 214, the I/O module 216, the communication module 218 and the cache manager 220 may be enabled using software and/or using transistors, logic gates, and electrical circuits (for example, integrated circuit circuitry such as ASIC circuitry). Various embodiments of the present invention may include one or more computer programs stored or otherwise embodied on a computer-readable medium, wherein the computer programs are configured to cause a processor or computer to perform one or more operations (for example, operations explained herein with reference to the cache manager 220). A computer-readable medium storing, embodying, or encoded with a computer program, or similar language, may be embodied as a tangible data storage device storing one or more software programs that are configured to cause a processor or computer to perform one or more operations. The I/O operations can be used to manage the load on the storage resources).
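A minimal hypothetical sketch of such load balancing follows (not drawn from any cited reference; all names are illustrative), routing each I/O command to the drive or region with the fewest outstanding commands:

# Illustrative sketch only (hypothetical names).
outstanding = {"ssd-0": 0, "ssd-1": 0, "region-hot": 0}

def dispatch(io_command):
    """Send the command to the least-loaded target and track the load."""
    target = min(outstanding, key=outstanding.get)
    outstanding[target] += 1
    return target

for cmd in ["read A", "write B", "read C"]:
    print(cmd, "->", dispatch(cmd))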
Claim 19 is the corresponding processor claim to system claim 9. It is rejected with the same references and rationale.
Claim(s) 4, 6, 15-16 and 23-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra in view of Bradshaw in further view of Lu as applied to claims 1 and 11 above, and further in view of Kanso et al. (US Publication No. 2022/0156631 -- "Kanso").
Regarding claim 4, Karkra in view of Bradshaw in further view of Lu and further in view of Kanso teaches The system of claim 1, wherein the data was generated by a workload, and the one or more circuits are to perform an analysis of the workload to obtain the personality type (Kanso paragraph [0027], Based on configuration in a namespace, operators can have success in tasked jobs or can fail. Before deployment, it is desirable to know if deployment will be successful or fail. Understanding risk of an operation can assist an operation engineer to make informed decisions about execution. Operators are akin to intelligent agents that can automate actions. Thus, embodiments herein propose an efficient system, using a machine-learning model, to predict probability of success for an operator in a new environment in a Platform as a Service (PaaS) cloud. A machine-learning model is trained given inputs and output. An operator is input with a description of an operation along with other artifacts such as operator controller custom resource definition (CRD). A CRD is an object that extends the Kubernetes APIs into a cluster. A namespace is associated with different configurations where the operator can be deployed. A retrieved output facilitates determining whether deployment of the operator in a particular namespace will be successful or not. The namespace type and configuration can be based on various workload values, such as a machine learning method/code).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Kanso. Kanso teaches using a workload value (i.e., machine learning code) to determine a personality (i.e., namespace) type. This can be used to provide more compatible and optimized configurations for the namespaces resulting in better operations (Kanso paragraph [0023], The subject disclosure relates generally to embodiments that predict probability of success for an operator in a new environment of a Platform as a Service (PaaS) cloud. This includes receiving capabilities of the operator as input and receiving different configurations that apply to a given namespace where the operator can be deployed. Output of deployment of the operator to other namespaces is received; given the input and output, a machine-learning model is trained to predict probability of success of deployment of the operator in the new PaaS environment).
Claim 15 is the corresponding processor claim to system claim 4. It is rejected with the same references and rationale.
Regarding claim 6, Karkra in view of Bradshaw in further view of Lu and further in view of Kanso teaches The system of claim 1, wherein the one or more circuits obtain the personality type using one or more machine learning methods (Kanso paragraph [0027], Based on configuration in a namespace, operators can have success in tasked jobs or can fail. Before deployment, it is desirable to know if deployment will be successful or fail. Understanding risk of an operation can assist an operation engineer to make informed decisions about execution. Operators are akin to intelligent agents that can automate actions. Thus, embodiments herein propose an efficient system, using a machine-learning model, to predict probability of success for an operator in a new environment in a Platform as a Service (PaaS) cloud. A machine-learning model is trained given inputs and output. An operator is input with a description of an operation along with other artifacts such as operator controller custom resource definition (CRD). A CRD is an object that extends the Kubernetes APIs into a cluster. A namespace is associated with different configurations where the operator can be deployed. A retrieved output facilitates determining whether deployment of the operator in a particular namespace will be successful or not. The namespace type and configuration can be based on various workload values, such as a machine learning method/code).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Kanso. Kanso teaches using a workload value (i.e., machine learning code) to determine a personality (i.e., namespace) type. This can be used to provide more compatible and optimized configurations for the namespaces resulting in better operations (Kanso paragraph [0023], The subject disclosure relates generally to embodiments that predict probability of success for an operator in a new environment of a Platform as a Service (PaaS) cloud. This includes receiving capabilities of the operator as input and receiving different configurations that apply to a given namespace where the operator can be deployed. Output of deployment of the operator to other namespaces is received; given the input and output, a machine-learning model is trained to predict probability of success of deployment of the operator in the new PaaS environment).
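For illustration, a minimal hypothetical sketch follows (not Kanso's model; the features, labels, and centroid values are assumptions) of a toy machine-learning-style classifier that assigns a personality type to a workload from simple access-pattern features:

# Illustrative sketch only: nearest-centroid classification over
# (read fraction, average I/O size in KiB). All values are hypothetical.
centroids = {
    "read_heavy_small":  (0.9, 4.0),
    "write_heavy_large": (0.2, 256.0),
}

def classify(read_fraction, avg_io_kib):
    def dist(label):
        rf, sz = centroids[label]
        return (read_fraction - rf) ** 2 + ((avg_io_kib - sz) / 256.0) ** 2
    return min(centroids, key=dist)

print(classify(0.85, 8))    # read_heavy_small
print(classify(0.10, 300))  # write_heavy_large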
Claim 16 is the corresponding processor claim to system claim 6. It is rejected with the same references and rationale.
Regarding claim 23, Karkra in view of Bradshaw in further view of Lu and further in view of Kanso teaches The method of claim 21, further comprising: determining an access pattern of the workload and using the access pattern to associate the one or more personality types with at least one of the workload or the data generated by the workload (Kanso paragraph [0027], Based on configuration in a namespace, operators can have success in tasked jobs or can fail. Before deployment, it is desirable to know if deployment will be successful or fail. Understanding risk of an operation can assist an operation engineer to make informed decisions about execution. Operators are akin to intelligent agents that can automate actions. Thus, embodiments herein propose an efficient system, using a machine-learning model, to predict probability of success for an operator in a new environment in a Platform as a Service (PaaS) cloud. A machine-learning model is trained given inputs and output. An operator is input with a description of an operation along with other artifacts such as operator controller custom resource definition (CRD). A CRD is an object that extends the Kubernetes APIs into a cluster. A namespace is associated with different configurations where the operator can be deployed. A retrieved output facilitates determining whether deployment of the operator in a particular namespace will be successful or not. The namespace type and configuration can be based on various workload values, such as a machine learning method/code).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Kanso. Kanso teaches using a workload value (i.e., machine learning code) to determine a personality (i.e., namespace) type. This can be used to provide more compatible and optimized configurations for the namespaces resulting in better operations (Kanso paragraph [0023], The subject disclosure relates generally to embodiments that predict probability of success for an operator in a new environment of a Platform as a Service (PaaS) cloud. This includes receiving capabilities of the operator as input and receiving different configurations that apply to a given namespace where the operator can be deployed. Output of deployment of the operator to other namespaces is received; given the input and output, a machine-learning model is trained to predict probability of success of deployment of the operator in the new PaaS environment).
Regarding claim 24, Karkra in view of Bradshaw in further view of Lu and further in view of Kanso teaches The method of claim 21, wherein one or more machine learning methods are used to associate the one or more personality types with at least one of the workload or the data generated by the workload (Kanso paragraph [0027], Based on configuration in a namespace, operators can have success in tasked jobs or can fail. Before deployment, it is desirable to know if deployment will be successful or fail. Understanding risk of an operation can assist an operation engineer to make informed decisions about execution. Operators are akin to intelligent agents that can automate actions. Thus, embodiments herein propose an efficient system, using a machine-learning model, to predict probability of success for an operator in a new environment in a Platform as a Service (PaaS) cloud. A machine-learning model is trained given inputs and output. An operator is input with a description of an operation along with other artifacts such as operator controller custom resource definition (CRD). A CRD is an object that extends the Kubernetes APIs into a cluster. A namespace is associated with different configurations where the operator can be deployed. A retrieved output facilitates determining whether deployment of the operator in a particular namespace will be successful or not. The namespace type and configuration can be based on various workload values, such as a machine learning method/code).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Kanso. Kanso teaches using a workload value (i.e., machine learning code) to determine a personality (i.e., namespace) type. This can be used to provide more compatible and optimized configurations for the namespaces resulting in better operations (Kanso paragraph [0023], The subject disclosure relates generally to embodiments that predict probability of success for an operator in a new environment of a Platform as a Service (PaaS) cloud. This includes receiving capabilities of the operator as input and receiving different configurations that apply to a given namespace where the operator can be deployed. Output of deployment of the operator to other namespaces is received; given the input and output, a machine-learning model is trained to predict probability of success of deployment of the operator in the new PaaS environment).
Claim(s) 7, 17 and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra in view of Bradshaw in further view of Lu as applied to claims 1, 11 and 21 above, and further in view of Amidi et al. (US Publication No. 2022/0404975 -- "Amidi").
Regarding claim 7, Karkra in view of Bradshaw in further view of Lu and further in view of Amidi teaches The system of claim 1, wherein the cache is implemented in storage class memory (“SCM”) (Amidi paragraph [0028], The host computer system (not shown) preferably communicates with Host PMI 110 via a storage device driver installed on the host computer system (e.g. OS driver software). For example, the storage device driver could be programmed to allow hybrid memory apparatus 100 to be seen as both a volatile byte-addressable memory (e.g. DRAM, MRAM) and as a local cache for a non-volatile block-addressable memory (e.g. SSD cache, SSD buffer) (and hence a cache to a non-volatile SCM). The cache may be implemented in a non-volatile storage class memory).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Amidi. Amidi teaches using a cache implemented in a non-volatile memory structure, particularly that of SCM (Storage Class Memory), which can be used to provide additional operational control features, such as queueing capabilities or flexibility regarding command execution order (see Amidi paragraph [0028], The host computer system (not shown) preferably communicates with Host PMI 110 via a storage device driver installed on the host computer system (e.g. OS driver software). For example, the storage device driver could be programmed to allow hybrid memory apparatus 100 to be seen as both a volatile byte-addressable memory (e.g. DRAM, MRAM) and as a local cache for a non-volatile block-addressable memory (e.g. SSD cache, SSD buffer) (and hence a cache to a non-volatile SCM)).
Claims 17 and 25 are the corresponding processor and method claims to system claim 7. They are rejected with the same references and rationale.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra in view of Bradshaw in further view of Lu as applied to claim 1 above, and further in view of Serizawa et al. (US Publication No. 2004/0260861 -- "Serizawa").
Regarding claim 10, Karkra in view of Bradshaw in further view of Lu and further in view of Serizawa teaches The system of claim 1, wherein the data is associated with a virtual address, and the one or more circuits are to create an entry in a file system associating the virtual address with an address of the at least one region (Serizawa paragraph [0074], The control unit 210 refers to the entry 318, in the virtual volume management table 221, with which a virtual address range of a journal area is registered to judge if an I/O request is a request for writing in the journal area. If the virtual volume 100 is not formatted as a journaled file system, the virtual address area entry 318 in the journal area in the Virtual volume management table 221 is empty (i.e. a "null" value is registered with the entry 318 concerned). When the virtual volume 100 is being initialized as a journaled file system, the administrator writes the virtual address range in the virtual volume, in which a journal of the journaled file system is stored, in the entry 318 of the virtual volume management table 221 via the management console 14. Alternatively, the virtual address range in a virtual volume in which a journal is to be stored may be written in the entry 318 of the virtual volume management table 211 when the format processing program (which is stored in the memory of the control unit 210) of the journaled file system is executed by the control unit 210. The entry of a particular file system may utilize a virtual address for data of a given region).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Serizawa. Serizawa teaches using a virtual address associated with a data entry in a file system, which can both be used to provide additional storage/formatting modifications potentially resulting in improved performance and decreased memory deterioration (Serizawa paragraph [0078], The control unit 210, by executing the format processing program 215, initializes the file system which uses the virtual volume 100 in place of the host processor, or erases all files and directories on the file system, and establishes a state that enables creating files and directories anew. At this time, the control unit 210 issues an I/O request for writing management data called "meta-data" in the virtual volume 100 to the control unit 210 itself. Note that, the control unit 210 includes a special value, specifically 0xFFFF00, indicating the control unit 210 itself in the transmission source address (specifically, Port ID of Fibre channel) of the I/O request. The size of the meta-data to be written at this time is not so large, but, since the meta-data is written at a regular interval in the storage of the virtual volume 100, if the size of the virtual volume 100 is not large, much meta-data will be written at a regular interval in one virtual volume. Consequently, when the real region 132 which is larger than the meta-data size is allocated each time the meta-data is written in the virtual volume concerned, an unused area in which no data is written will occur within the real region 132 that is allocated to the virtual volume. Thus, the allocation efficiency of the real region 132 will be deteriorated. Making the size of the real region 132 variable is performed to prevent the allocation efficiency from being deteriorated).
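A minimal hypothetical sketch of such a file system entry follows (not Serizawa's implementation; all names are illustrative), associating a virtual address of the data with the address of the region where the data currently resides:

# Illustrative sketch only (hypothetical names).
fs_table = {}   # virtual address -> (region id, address within region)

def create_entry(virtual_addr, region_id, region_addr):
    fs_table[virtual_addr] = (region_id, region_addr)

create_entry(0x00ab_cd00, "region-3", 0x0040)
print(fs_table)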
Claim(s) 12 and 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karkra in view of Bradshaw in further view of Lu as applied to claims 11 and 21 above, and further in view of Chou et al. (US Publication No. 2015/0254088 -- "Chou").
Regarding claim 12, Karkra in view of Bradshaw in further view of Lu and further in view of Chou teaches The one or more processors of claim 11, wherein the circuitry is to store the data in the at least one region in accordance with an eviction policy (Chou paragraph [0167], As an example, a critically important file session may have large number of memory buffers in memory 2208, so that the session can take advantage of more data being present for quicker and frequent access, whereas a second session with the same file may be assigned with very few buffers and hence it might have to incur more delay and reuse of its buffers to access various parts of the file; (c) allow application 2202 to create an extended pool of buffers beyond memory 2208 across other hosts or block server 2210 for quicker access. This enables blocks of data be kept in memory 2208 of other hosts as well as any memory 2402 present in the file or block server 2210; (d) allow application 2202 to make any block of data to be more persistent in memory 2208 relative to other blocks of data for a file, volume or a session. This allows an application to pick and choose a block of data to be always available for immediate access and not let operating system 2204 or file system client 2206 to evict it based on their own eviction policies; and (e) allow application 2202 to make any block of data to be less persistent in memory 2208 relative to other blocks of data for a file, volume or a session. This allows an application to let know operating system 2204 and file system client 2206 to evict and reuse the buffer of the data block as and when they choose to. Particular data can be flushed and evicted from the buffer/cache to a region in persistent memory based on a given policy).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra and Bradshaw and Lu with those of Chou. Chou teaches using an eviction policy as a means for storing data in a region of persistent memory after it has been moved from a cache. Setting up an eviction policy for flushing data from a buffer/cache allows the user far more flexibility and control over the data that gets flushed as well as the particular order/emphasis with which the data is flushed, which can greatly improve performance (Chou paragraph [0167], As an example, a critically important file session may have large number of memory buffers in memory 2208, so that the session can take advantage of more data being present for quicker and frequent access, whereas a second session with the same file may be assigned with very few buffers and hence it might have to incur more delay and reuse of its buffers to access various parts of the file; (c) allow application 2202 to create an extended pool of buffers beyond memory 2208 across other hosts or block server 2210 for quicker access. This enables blocks of data be kept in memory 2208 of other hosts as well as any memory 2402 present in the file or block server 2210; (d) allow application 2202 to make any block of data to be more persistent in memory 2208 relative to other blocks of data for a file, volume or a session. This allows an application to pick and choose a block of data to be always available for immediate access and not let operating system 2204 or file system client 2206 to evict it based on their own eviction policies; and (e) allow application 2202 to make any block of data to be less persistent in memory 2208 relative to other blocks of data for a file, volume or a session. This allows an application to let know operating system 2204 and file system client 2206 to evict and reuse the buffer of the data block as and when they choose to. Particular data can be flushed and evicted from the buffer/cache to a region in persistent memory based on a given policy).
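For illustration, a minimal hypothetical sketch of such an eviction policy follows (not Chou's implementation; all names are illustrative), in which blocks marked as more persistent are evicted to the persistent-memory region only after less-persistent blocks are exhausted:

# Illustrative sketch only (hypothetical names).
cache_blocks = [
    {"name": "blk-1", "persistent": True},
    {"name": "blk-2", "persistent": False},
    {"name": "blk-3", "persistent": False},
]
persistent_region = []

def evict_one():
    # Prefer evicting blocks the application did not mark as persistent.
    victims = [b for b in cache_blocks if not b["persistent"]] or cache_blocks
    victim = victims[0]
    cache_blocks.remove(victim)
    persistent_region.append(victim["name"])
    return victim["name"]

print(evict_one())   # blk-2
print(evict_one())   # blk-3
print(evict_one())   # blk-1 (only the pinned block remains)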
Regarding claim 28, Karkra in view of Bradshaw in further view of Lu and further in view of Chou teaches The method of claim 27, wherein remote direct memory access (“RDMA”) is used to transfer the data to the at least one region (Chou paragraph [0022], Methods and systems disclosed herein include virtualization of a converged network/storage adaptor. From a traffic perspective, one may combine systems into one. Combining the storage and network adaptors, and adding in virtualization, gives significant advantages. Say there is a single host 102 with two PCIe buses 110. To route from the PCIe 110, you can use a system like RDMA to get to another machine/host 102. If one were to do this separately, one has to configure the storage and the network RDMA system separately. One has to join each one and configure them at two different places. In the converged scenario, the whole step of setting up QoS, seeing that this is RDMA and that there is another fabric elsewhere is a zero touch process, because with combined storage and networking the two can be configured in a single step. That is, once one knows the storage, one doesn't need to set up the QoS on the network separately. Remote communication may be utilized between a distinct memory device and a cache/buffer component).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Karkra, Bradshaw, and Lu with those of Chou. Chou teaches using a remote memory device for remote communication between a cache/buffer component and non-volatile persistent memory. Enabling remote communication provides many benefits, including simpler integration of additional memory devices and storage components, since future components can be added remotely to a network communication system (Chou paragraph [0022-0023], Methods and systems disclosed herein include virtualization of a converged network/storage adaptor. From a traffic perspective, one may combine systems into one. Combining the storage and network adaptors, and adding in virtualization, gives significant advantages. Say there is a single host 102 with two PCIe buses 110. To route from the PCIe 110, you can use a system like RDMA to get to another machine/host 102. If one were to do this separately, one has to configure the storage and the network RDMA system separately. One has to join each one and configure them at two different places. In the converged scenario, the whole step of setting up QoS, seeing that this is RDMA and that there is another fabric elsewhere is a zero touch process, because with combined storage and networking the two can be configured in a single step. That is, once one knows the storage, one doesn't need to set up the QoS on the network separately. Method and systems disclosed herein include virtualization and/or indirection of networking and storage functions, embodied in the hardware, optionally in a converged network adaptor/storage adaptor appliance).
Response to Arguments
Applicant's arguments filed January 16th, 2026 have been fully considered but they are not persuasive.
Applicant argues:
“With respect to transferring the cached data to the at least one different personality region if the cached data does not match the version of the data, the Office cites paragraph 19 of Karkra, which states that the "shaped writes are staged on the high performance NVM device 114 for writing back to the high capacity NVM storage drive 118." Karkra at paragraph [0021]. Thus, the Office appears to map the transferring recited in claim 1 to the writing back of the shaped writes to the high capacity NVM storage drive 118 of Karkra. However, Karkra is completely silent with respect to writing the shaped writes back to the high capacity NVM storage drive 118 if the cached data does not match the version of the data previously stored in the NVM storage device 118. Further, Karkra is completely silent with respect to personality regions, and is thus completely silent with respect to writing the shaped writes back to the high capacity NVM storage drive 118 if the cached data does not match the version of the data, where the version of the data was previously stored in a personality region of the NVM storage device 118.”
The examiner respectfully disagrees. Regarding the newly added claim limitation to independent claims 1, 11 and 18, the examiner has cited new portions of the Karkra reference, specifically with respect to the newly added limitations describing a transfer of data to a second personality region in response to the cached data not matching the data stored in the personality region. The examiner notes that the term personality region encompasses storage regions that can be characterized by their access patterns, and thus the Karkra reference, which discloses a “high performance” NVM device and a “high capacity” NVM storage drive, can be interpreted as teaching two distinct personality regions (i.e., see Karkra paragraph [0019], FIG. 1 illustrates a block diagram of an example storage architecture 100 in which an embodiment of FTL synchronization can be implemented. One or more applications 102 issue requests to create, write, modify and read data maintained in a storage system combining a high performance non-volatile memory (NVM) device 114 for caching and staging data, and one or more high capacity non-volatile memory (NVM) storage devices 118, such as a NAND QLC (Quad Level Cell)/PLC (Penta Level Cell) SSD for storing data, hereafter referred to as an NVM storage drive 118. The storage architecture 100 allows a host FTL 110 to receive user writes over an IO interface 104 and to access a write shaping buffer 108 and write shaping algorithms 112 to shape the user writes into shaped writes. The shaped writes are staged on the high performance NVM device 114 for writing back to the high capacity NVM storage drive 118. The shaped writes are typically large, sequential, indirection-unit (IU) aligned writes designed to reduce write amplification at the drive FTL 120). The examiner further notes that the cached data can be staged for writing back when it does not match the data currently stored, as part of reducing write amplification through compaction of the memory device (i.e., see Karkra paragraph [0021], To examine the cause of this paradoxical increase in write amplification, FIG. 2 illustrates the write amplification data flow in detail 200. As shown, user writes 212a/212b are staged in a high performance NVM device 114. In non-volatile cache 224, write reduction process 202 and aggregation process 204 attempt to reduce writes to the high capacity NVM storage drive 118. For example, the aggregation process 204 can combine small user writes 212a/212b into large writes staged in the high performance NVM device 114. A compaction process 206 issues compaction reads 214 to the high performance NVM device 114 and replaces invalid data (i.e., data marked as no longer valid) with valid data. The compaction process 206 further issues compaction writes 216 to a storage process 210 to store the compacted data to the high capacity NVM storage drive 118). In light of the newly cited portions of the Karkra reference, the applicant’s arguments are not found to be persuasive, and the rejection under 35 U.S.C. 103 is maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Prasad et al. (US Publication No. 2023/0059072) teaches a storage device with a plurality of storage regions, each of which can be interpreted as a personality region depending on various data factors such as workload (i.e., see Prasad paragraph [0029-0030], FIG. 2 shows a namespace 200 (also labeled as “Schema1”) that can be mounted or located on a particular device node 205, such as a server. The namespace 200 is shown as including a number of variables 210, such as “Var1,” “Var2,” “Var3,” and “Var4.” Of course, any number of variables can be included in the namespace 200, and four is just one example. Here, the namespace 200 is configured to have various namespace attributes 215. FIG. 3 provides some additional clarification regarding the namespace attributes 215. Specifically, FIG. 3 shows attributes 300, which are representative of the namespace attributes 215 of FIG. 2. The attributes 300 detail characteristics, properties, or features of a namespace and potentially how that namespace is configured or for which type of IO the namespace is optimized to handle), which can be used to determine where data is flushed or staged (i.e., see Prasad paragraph [0036], Another optional guideline relates to flush optimized forwarding. In some cases, Devdax SCM configurations can support a fast way of data movement. A particular namespace can be selected based on flushing properties of the IO stream. That is, data in the IO stream can be flushed to media, and those flushing properties can be considered when selecting or configuring a namespace. A namespace configured to consider flushing properties can beneficially reduce data commit time significantly).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONAH C KRIEGER whose telephone number is (571)272-3627. The examiner can normally be reached Monday - Friday 8 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio Del Mar Perez-Velez, can be reached on (571)270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.K./Examiner, Art Unit 2133
/Arpan P. Savla/Supervisory Patent Examiner, Art Unit 2137