DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 11-12, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US PGPUB 2018/0032429) in view of Kojima et al. (US PGPUB 2019/0107947).
With regard to Claim 1, Liu teaches a semiconductor device, comprising:
a memory cell array comprising a plurality of memory cells configured to store data ([0014] “a computing system 100 having a multi-tiered or multi-level system memory 112... a smaller, faster near memory 113 may be utilized as a cache for a larger far memory 114.” [0016] “the near memory 113 may be a faster (e.g., lower access time), volatile system memory technology (e.g., high performance dynamic random access memory (DRAM)) and/or SRAM memory cells”); and
a circuitry coupled to the memory cell array and configured to read stored data from the memory cell array (Fig. 1: Memory Controller 116. [0016] “Here, the near memory 113 may be a faster (e.g., lower access time), volatile system memory technology (e.g., high performance dynamic random access memory (DRAM)) and/or SRAM memory cells co-located with the memory controller 116. By contrast, far memory 114 may be either a volatile memory technology implemented with a slower clock speed (e.g., a DRAM component that receives a slower clock) or, e.g., a non volatile memory technology that is slower (e.g., longer access time) than volatile/DRAM memory or whatever technology is used for near memory.”),
wherein the circuitry is configured to:
obtain a starting address of target data to be read from the memory cell array based on a read instruction (See Fig. 2, [0035] “The memory control hub 204_2 of processor 201_1 services the request (e.g., by reading/writing from/to the system memory address within system memory slice 208_2),” wherein the “memory control hub” is equivalent to the “memory controller 116” in Fig. 1.);
determine that the starting address is in a first address group of a plurality of address groups, wherein each of the plurality of address groups is associated with a respective reading speed ([0028] “In the case of an incoming read request, if there is a cache hit, the memory controller 116 responds to the request.” [0059] “FIGS. 3a and 3b(i)/(ii) indicate that each of the different system memory levels/partitions can be allocated their own system memory address range.” [0060] “For example, as depicted in FIG. 3a, the system memory address space of the slice of system memory 208_1 associated with the first platform 201_1 corresponds to a first system address range SAR0 that is allocated to the internal DRAM 209_1 of the first platform 201_1.” Furthermore, as indicated above, [0035] “The memory control hub 204_2 of processor 201_1 services the request (e.g., by reading/writing from/to the system memory address within system memory slice 208_2).” [0025] “at least some portion of near memory 113 has its own system address space apart from the system addresses that have been assigned to far memory 114 locations,” wherein each of the “near memory” and “far memory” have their own “address space” which is associated with a respective “reading speed”.); and
read out the target data from the memory cell array based on the starting address being in the first address group ([0028] “In the case of an incoming read request, if there is a cache hit, the memory controller 116 responds to the request by reading the version of the cache line from near memory 113 and providing it to the requestor. By contrast, if there is a cache miss, the memory controller 116 reads the requested cache line from far memory 114 and not only provides the cache line to the requestor (e.g., a CPU) but also writes another copy of the cache line into near memory 113.”).
With further regard to claim 1, Liu does not teach the memory cell array comprising a plurality of regions as described in claim 1. Kojima teaches
wherein the plurality of address groups correspond to one or more read regions of the memory cell array ([0052] “FIG. 7 is a diagram showing the configuration of the third embodiment of the memory cell array 111. Each of pages forming the memory cell array 111 is formed of multiple memory cell transistors. A multi-level cell (MLC) method is adopted as the storage method of the memory cell array 111. That is, each memory cell transistor can store N bits of information, where N>1. Further, the memory cell array 111 can operate according to a single-level cell (SLC) method... the memory cell array 111 comprises an SLC block group 114 formed of one or more blocks operating according to the SLC method and an MLC block group 115 formed of one or more blocks operating according to the MLC method. It can be set, e.g., on a per block basis whether the memory cell array 111 operates according to the MLC method or the SLC method.” [0053] “In ordinary operation, write data 300 is programmed into the MLC block group 115. Specifically, in the process of S3, the processing unit transmits the page address specifying a page belonging to the MLC block group 115,” wherein the “block groups” are the “one or more read regions”, and further wherein the combined use of MLC/SLC memory in Kojima serves a similar purpose as the combined use of different types of memory in Liu.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu with the memory cell array comprising a plurality of regions as taught by Kojima in order to utilize “a single-level cell (SLC) method in order to speed up programming and improve reliability” (Kojima [0052]).
With regard to Claim 11, Liu in view of Kojima teaches all the limitations of Claim 1 as described above. Liu further teaches wherein the circuitry comprises a memory interface coupled to the memory cell array (Fig. 3A: Bridge Device 102 coupled to Memory Device 104. [0053] “FIG. 4 is a block diagram of a bridge device 200 in accordance with an embodiment, which corresponds to the bridge device 102 shown in FIG. 3A. The bridge device 200 has a bridge device input/output interface 202, a memory device interface 204, and a format converter 206.” [0089] “Memory 416 is functionally shown as a single block, but can be logically or physically divided into sub-divisions such as banks, planes or arrays, where each bank, plane or array is matched to a NAND flash memory device.”), and
wherein the memory interface is configured to:
receive an input signal for reading the target data from the memory array, the input signal comprising the read instruction ([0058] “Following is a description of example operations of bridge device 200, with further reference to the composite memory device 100 of FIG. 3A. For a read operation, a global command is received, such as a global read command arriving at the bridge device input/output interface 202 through input port GLBCMD_IN.”); and
output an output signal comprising the read out target data ([0054] “For read operations, bridge device input/output interface 202 includes parallel-to-serial conversion circuitry for providing bits of data in serial format for output through the GLBCMD_OUT output port.”).
With regard to Claim 12, Liu in view of Kojima teaches all the limitations of Claim 11 as described above. Liu further teaches wherein the circuitry comprises an address detector coupled to the memory interface and configured to obtain the starting address of the target data based on the input signal and determine that the starting address is in the first address group (Fig. 4: Command Format Converter 208. [0053] “the command format converter 208 in the format converter 206 converts the global memory control signals 112, which provides the op-code and command signals and any row and address information from the global format to the local format, and forwards it to the memory device interface 204.” [0058] “Once the bridge device input/output interface 202 determines that it has been selected for the global read command by comparing the global device address 116 to a predetermined address of the composite memory device 100, the command format converter 208 converts the global read command into the local format compatible with the discrete memory device 104 on which the read data command is to be executed. As will be described later, the composite memory device can have an assigned address. The local device address 118 of the global read command is forwarded to the memory device interface 204, and the converted read data command is provided to the discrete memory device addressed by the local device address via a corresponding set of local I/O ports of the command path 212.”).
With regard to Claims 18 and 20, these claims are equivalent in scope to Claim 1 rejected above, merely having a different independent claim type, and as such Claims 18 and 20 are respectively rejected under the same grounds and for the same reasons as discussed above with regard to Claim 1.
With further regard to Claim 18, the claim recites additional elements not specifically addressed in the rejection of Claim 1. Liu also teaches these additional elements of Claim 18, for example, wherein the system comprises:
a memory device (Fig. 1: Multi-Level System Memory 112. [0014] “a computing system 100 having a multi-tiered or multi-level system memory 112.”); and
a controller coupled to the memory device and configured to transmit a read instruction to the memory device (Fig. 1: CPU comprising Processor Cores 117. [0078] “any component that can issue a read or write request to system memory (e.g. … a CPU core).”);
and further wherein the circuitry is configured to:
output the read out target data to the controller ([0028] “[I]f there is a cache hit, the memory controller 116 responds to the request by reading the version of the cache line from near memory 113 and providing it to the requestor. By contrast, if there is a cache miss, the memory controller 116 reads the requested cache line from far memory 114 and not only provides the cache line to the requestor (e.g., a CPU) but also writes another copy of the cache line into near memory 113.”).
Claims 2-4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kojima as applied to Claims 1 and 12 above, and further in view of Xu et al. (US PGPUB 2016/0124873).
With regard to claim 2, Liu in view of Kojima teaches all the limitations of claim 1 as described above. Liu in view of Kojima does not teach the timing profile as described in claim 2. Xu teaches
wherein each of the plurality of address groups is associated with a respective timing profile of a plurality of timing profiles that are different from each other ([0019] “the memory controller 102 also implements profiling logic 110 and a timing data store 112 to determine and store region-specific memory timing information… the profiling logic 110 evaluates each memory region of a set of one or more memory regions of the memory array 104 to determine one or more memory timing parameters specific to that region. The memory timing information for the region then may be maintained in the timing data store 112.”), and
wherein the circuitry is configured to:
determine a first timing profile associated with the first address group; and read out the target data from the memory cell array according to the first timing profile ([0021] “For example, when a memory read request is received by the memory controller 102, the controller logic 108 identifies the region of memory to be accessed based on the address of the memory read request and then communicates with the profiling logic 110 and timing data store 112 to determine the memory timing parameters to that region. The controller logic then schedules and transmits commands to DRAM arrays 106 according to the stored timing parameters.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the timing profile as taught by Xu in order “to improve performance and efficiency” (Xu [0019]).
With regard to Claim 3, Liu in view of Kojima and Xu teaches all the limitations of Claim 2 as described above. Xu further teaches
wherein different timing profiles are associated with different reading speeds, wherein the first timing profile is associated with a first reading speed, and wherein a second timing profile of the plurality of timing profiles is associated with a second reading speed that is higher than the first reading speed ([0040] “With the region-by-region memory timing parameters identified and stored in the timing data store 412, the scheduler 420 may utilize the stored region-based memory timing parameters to more optimally schedule memory access requests based on the regions they target… tRCD represents the minimum delay required between an ‘activation row’ DRAM command and the subsequent ‘column read’ DRAM command. Suppose for a given bank tRCD is 5 cycles for some rows and is 4 cycles for other faster rows. A conventional memory controller would use the most conservative timing of tRCD=5 for all rows. In contrast, the present invention having stored data representative of tRCD for each row in the timing data store 412, allows the scheduler 420 to utilize tRCD=4 for the faster rows, thus reducing by one cycle the latency of a DRAM read operation to those rows.”).
With regard to Claim 4, Liu in view of Kojima and Xu teaches all the limitations of Claim 2 as described above. Xu further teaches wherein a timing profile comprises at least one of:
a time duration of activating a word line,
a time duration of activating a bit line,
a time duration of sensing one or more data bits from the memory cell array, or
a time duration of outputting the one or more data bits ([0013] “scheduling memory accesses to the memory based on the profiled region-specific memory timing parameters associated with the regions targeted by the memory accesses.” [0036] “DRAM timing parameters measured may include, but are not limited to tRCD (row to column command delay), tCL (time between column command and data out), tCCD (time between column commands), tRP (precharge time), tRAS (minimum row open time), tFAW (multi-bank activation window), tWTR (time between read and write), tWR (write recovery time), and the like.”).
With regard to claim 13, Liu in view of Kojima teaches all the limitations of claim 12 as described above. Liu does not teach the timing profile controller as described in claim 13. Xu teaches
wherein the circuitry further comprises a timing profile controller coupled to the address detector and configured to determine a timing profile for reading the target data based on a signal from the address detector, the signal indicating that the starting address of the target data is in the first address group ([0019] “the memory controller 102 also implements profiling logic 110 and a timing data store 112 to determine and store region-specific memory timing information… the profiling logic 110 evaluates each memory region of a set of one or more memory regions of the memory array 104 to determine one or more memory timing parameters specific to that region. The memory timing information for the region then may be maintained in the timing data store 112.” [0021] “For example, when a memory read request is received by the memory controller 102, the controller logic 108 identifies the region of memory to be accessed based on the address of the memory read request and then communicates with the profiling logic 110 and timing data store 112 to determine the memory timing parameters to that region. The controller logic then schedules and transmits commands to DRAM arrays 106 according to the stored timing parameters.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the timing profile controller as taught by Xu in order “to improve performance and efficiency” (Xu [0019]).
Claims 5-6, 8-9, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kojima as applied to Claims 1 and 18 above, and further in view of Nayak et al. (US PGPUB 2024/0086107).
With regard to claim 5, Liu in view of Kojima teaches all the limitations of claim 1 as described above. Liu in view of Kojima does not teach the sequential read as described in claim 5. Nayak teaches
wherein the target data comprises a first part and a second part sequential to the first part, the first part having the starting address ([0128] “The sequential read commands include read commands that are part of sequential read command streams that span more than one planes in the non-volatile storage system. The method comprises breaking the sequential read command streams at plane boundaries into a plurality of plane read commands. Each plane read command is for data within a single plane.” [0089] “The sequential reads vary between 40 KB and 64 KB in this example. Each sequential read could be a single read command. However, multiple read commands that are to consecutive addresses (e.g., consecutive LBAs) are what are referred to herein as a sequential read command stream. Any of the sequential reads (r3, r6, r8, r11, and r12) depicted in FIG. 7 could be a sequential read command stream that includes more than one read command. The read commands that are part of a sequential read command stream are referred to herein as sequential commands regardless of their individual length,” wherein a “sequential read command,” i.e. “r8” in Figs. 8-9 of Nayak, comprises four ‘parts’ split over four planes P0-P3.), and
wherein the circuitry is configured to read out the first part with a first reading speed and the second part with a second reading speed that is higher than the first reading speed ([0007] “an SLC read may complete much faster than a TLC read. Thus, one plane could perform multiple SLC reads while another plane performs a single slower TLC or QLC read,” wherein the ‘parts’ discussed above are stored on different planes and further wherein the plurality of planes may be implemented as an ‘SLC’ plane and a ‘TLC’ plane, which are disclosed as having different reading speeds.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the sequential read as taught by Nayak in order “to improve throughput” (Nayak [0005]).
With regard to Claim 6, Liu in view of Kojima and Nayak teaches all the limitations of Claim 5 as described above. Nayak further teaches wherein a total length of the first part of the target data is predetermined ([0008] “A ‘sequential read command stream’ is a collection of one or more read commands that is collectively directed to a consecutive range of addresses having a certain minimum length… the certain minimum length could be a logical block, which could have a length of 4 KB. Note that the sequential read command stream could include one or more read commands… For example, a single read command having a length of 64 KB may be considered a sequential read command because it has a length greater than, for example, 4 KB. Multiple shorter read commands may be considered to be sequential read commands if they are to read at consecutive addresses. For example, 16 4 KB read commands to read 64 KB of data at a consecutive logical address range may be considered to be a sequential read command stream,” wherein the “total length of the first part” is predetermined to be 4 KB.).
With regard to claim 8, Liu in view of Kojima teaches all the limitations of claim 1 as described above. Liu in view of Kojima does not teach the memory cells and word line as described in claim 8. Nayak teaches
wherein the first address group comprises a collection of particular addresses corresponding to a slower reading speed than addresses in one or more other address groups ([0007] “an SLC read may complete much faster than a TLC read. Thus, one plane could perform multiple SLC reads while another plane performs a single slower TLC or QLC read,” wherein the “another plane” comprising the TLC/QLC-type memory cells comprises the “first address group”.), and
wherein one or more addresses of the particular addresses correspond to one or more particular memory cells coupled to an end of a word line of the memory cell array ([0041] “Memory die 300 includes a memory structure 302 (e.g., memory array) that can comprise non-volatile memory cells.” [0083] “The read command may specify a logical address (LA) in the host's address space.” [0084] “Step 604 includes the memory controller 102 translating the logical address to a physical address in the memory packages 104. The physical address will specify the location of the memory cells that store the requested data. The physical address may specify the memory die 300, the plane 400, the block, the word line, a physical page of memory cells, etc.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the memory cells and word line as taught by Nayak, as the use of TLC/QLC-type memory cells having a slower reading speed is advantageous since “Using a greater number of data states allows for more bits to be stored per memory cell” (Nayak [0004]).
With regard to Claim 9, Liu in view of Kojima and Nayak teaches all the limitations of Claim 8 as described above. Liu further teaches wherein the particular addresses in the first address group are fixed or predetermined ([0059] “FIGS. 3a and 3b(i)/(ii) indicate that each of the different system memory levels/partitions can be allocated their own system memory address range.”).
With regard to claim 17, Liu in view of Kojima teaches all the limitations of claim 1 as described above. Liu in view of Kojima does not teach the read command comprising a start address as described in claim 17. Nayak teaches
wherein the read instruction comprises a read command and the starting address ([0008] “A command from a host to read data will typically contain a start logical address (e.g., logical block address or LBA) and a length… The storage device will typically translate the logical address to a physical address in the storage device. The read command may be a random read or part of a sequential read command stream.” [0083] “FIG. 6 is a flowchart of one embodiment of a process 600 of responding to a read command from a host. Step 602 includes the memory controller 102 receiving a read command from host 120. The read command may specify a logical address (LA)”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the read command comprising a start address as taught by Nayak since it was well known to those of ordinary skill in the art that a memory read instruction necessarily comprises some form of a read command and a starting address, as shown by the disclosure of Nayak.
With regard to Claim 19, this claim is equivalent in scope to Claim 5 rejected above, merely having a different independent claim type, and as such Claim 19 is rejected under the same grounds and for the same reasons as discussed above with regard to Claim 5.
Claims 7 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kojima as applied to Claims 1 and 11 above, and further in view of Pyeon et al. (US PGPUB 2010/0327923).
With regard to claim 7, Liu in view of Kojima teaches all the limitations of claim 1 as described above. Liu in view of Kojima does not teach the clock frequency as described in claim 7. Pyeon teaches
wherein the circuitry is configured to receive a clock signal having a clock frequency and read out the target data using the clock signal ([0043] “A first clock domain includes circuits responsible for providing commands to the discrete memory devices, for providing write data from the memory to the discrete memory devices, and circuits for controlling read data received from the discrete memory devices to be stored in the memory. Accordingly, the operation of the circuits in the first clock domain are synchronized with a memory clock having a first frequency, which corresponds to the operating frequency of the discrete memory devices.” [0046] “a configurable clock controller is provided, which receives a system clock and generates the memory clock having a frequency that is a predetermined ratio of the system clock.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the use of the clock frequency as taught by Pyeon in order “to maximize the performance of both the composite memory device and of the discrete memory devices” (Pyeon [0046]).
With regard to claim 15, Liu in view of Kojima teaches all the limitations of claim 11 as described above. Liu in view of Kojima does not teach the serial pin as described in claim 15. Pyeon teaches
wherein the memory interface comprises a serial pin configured to perform at least one of ([0056] “memory system 20 includes a memory controller 22 having a set of output ports Sout and a set of input ports Sin, and memory devices… that are connected in series. The memory devices can be serial interface flash memory devices… each memory device has a set of input ports Sin and a set of output ports Sout. These sets of input and output ports includes one or more individual input/output ports, such as physical pins or connections, interfacing the memory device to the system it is a part of.”):
receiving the input signal from a bus, or outputting the output signal to the bus ([0075] “The Sout port provides a global command in a global format. The Sin port receives read data in the global format, and the global command as it propagates through all the composite memory devices,” wherein Fig. 5 shows the “Sout” and “Sin” serial pins connected to a bus which connects the plurality of memory devices 304.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the use of the serial pin as taught by Pyeon since “serial interface flash memory devices… are desirable for their improved performance over the asynchronous flash memory devices” (Pyeon [0061]).
With regard to claim 16, Liu in view of Kojima teaches all the limitations of claim 11 as described above. Liu in view of Kojima does not teach the serial pin as described in claim 16. Pyeon teaches
wherein the memory interface comprises multiple serial input/output (SIO) pins, and the memory interface is configured to receive the input signal from a bus using at least one of the multiple SIO pins and output the output signal to the bus using the multiple SIO pins ([0056] “memory system 20 includes a memory controller 22 having a set of output ports Sout and a set of input ports Sin, and memory devices… that are connected in series. The memory devices can be serial interface flash memory devices… each memory device has a set of input ports Sin and a set of output ports Sout. These sets of input and output ports includes one or more individual input/output ports, such as physical pins or connections, interfacing the memory device to the system it is a part of.” [0075] “The Sout port provides a global command in a global format. The Sin port receives read data in the global format, and the global command as it propagates through all the composite memory devices,” wherein Fig. 5 shows the “Sout” and “Sin” serial pins connected to a bus which connects the plurality of memory devices 304.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the use of the serial pin as taught by Pyeon since “serial interface flash memory devices… are desirable for their improved performance over the asynchronous flash memory devices” (Pyeon [0061]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kojima and Nayak as applied to Claim 8 above, and further in view of Paley et al. (US PGPUB 2020/0264792).
With regard to claim 10, Liu in view of Kojima and Nayak teaches all the limitations of claim 8 as described above. Liu in view of Kojima and Nayak does not teach the determining of an address group as described in claim 10. Paley teaches
wherein the circuitry is configured to determine at least one of the particular addresses in the first address group or time durations for the particular addresses based on one or more parameters comprising a clock frequency and a data density of the word line ([0035] “NVMs 128a-n may be partitioned to include two or more different partition types. For example, NVMs 128a-n can be partitioned to have an SLC partition and a MLC partition (e.g., a TLC partition). As another example, NVMs 128a-n can be partitioned to have a SLC partition, a first MLC partition (e.g., a two-level cell partition), and a second MLC partition (e.g., a TLC partition)… As another example, each NVM can be dual partitioned to include both SLC and MLC partitions, such as SLC partition 132c and MLC partition 132d,” wherein the SLC/MLC partitions in Paley have an associated “data density of a word line,” e.g., each cell in a word line of an MLC partition storing 2 bits per cell.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima and Nayak with the determining of an address group as taught by Paley for purposes of “maintaining balance between partitions each having a different endurance” (Paley [0047]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kojima as applied to Claim 11 above, and further in view of Intrater et al. (US PGPUB 2022/0092004).
With regard to claim 14, Liu in view of Kojima teaches all the limitations of claim 11 as described above. Liu in view of Kojima does not teach the dummy cycles as described in claim 14. Intrater teaches
wherein the memory interface is configured to output the output signal after one or more dummy cycles following the input signal ([0025] “a ‘configuration state’ can include a set of parameters or information (e.g., opcode, gap, mode, number of dummy cycles, etc.) that are utilized in the execution of an operation (e.g., a read operation).” [0040] “In certain embodiments, multiple read commands can be accommodated in the memory device, with each looking for a specific bit combination (e.g., a number of zeroes), and each having its own corresponding dummy cycle setting.” [0043] “Referring now to FIG. 10, shown is a timing diagram of a first example read access… in cycles 0 to 7, the command opcode can be received via the serial input on I/O0… In cycles 8 and 9, most significant address byte A23-A16 can be received, in cycles 10 and 11 address byte A15-A8 can be received, and in cycles 12 and 13 address byte A7-A0 can be received… The dummy cycles can be cycles 16 through 19 for this example command. Thus, four cycles are shown here for the dummy cycles prior to data being output starting at the falling edge of clock 19,” wherein the “dummy cycles” follow the transmission of the “command opcode,” wherein the “command opcode” is a part of the “input signal”.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the device as disclosed by Liu in view of Kojima with the use of the dummy cycles as taught by Intrater since “read latency can be reduced to improve the CPU throughput” (Intrater [0039]).
With further regard to claim 14, Liu further teaches wherein the circuitry is configured to determine which group the starting address is in before the one or more dummy cycles start ([0026] “memory controller 116 can determine whether a cache hit or cache miss has occurred in near memory 113 for any incoming memory request,” wherein this determination regarding the location of the “incoming memory request,” which comprises the “starting address,” would necessarily occur before the operations shown in Fig. 10 of Intrater since those operations occur once the “Memory Device 104”, i.e. a far-memory as explained in Liu, has been determined to be the location storing the requested data.).
Response to Arguments
Applicant's arguments, see Pages 8-9 of the Remarks filed 11/18/2025, with respect to the rejections under 35 U.S.C. 102/103 of Claims 1-20 have been fully considered but they are not persuasive. With respect to the Applicant's argument that the newly amended language of Claims 1, 18 and 20 is not taught by the previously cited prior art, this argument has been fully considered but is moot in view of the newly cited Kojima et al. (US PGPUB 2019/0107947) reference as discussed above in the respective rejections.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, as follows:
Kim (US PGPUB 2022/0236914) discloses a nonvolatile memory device comprising an MLC and an SLC block area which are able to be used for differing data access modes.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS J SIMONETTI whose telephone number is (571)270-7702. The examiner can normally be reached Monday-Thursday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan Savla can be reached at (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS J SIMONETTI/Primary Examiner, Art Unit 2137 February 21, 2026