DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
As per MPEP 2111 and 2111.01, the claims are given their broadest reasonable interpretation and the words of the claims are given their plain meaning consistent with the specification without importing claim limitations from the specification.
In responding to this Office action, the applicant is requested to include specific references (figures, paragraphs, lines, etc.) to the drawings/specification of the present application and/or the cited prior art that clearly support any amendments/arguments presented in the response, to facilitate consideration of those amendments/arguments.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 12, 2026 has been entered.
Response to Amendment
The amendment filed January 12, 2026 has been entered. Claims 1-20 remain pending in this application. Claims 1-3, 15, 17, and 19-20 have been amended. No claims have been added. No new matter has been added.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 11,562,791 B1 to Hao T. Nguyen (hereafter Nguyen) in view of US 7,336,538 B2 to Luca Crippa et al. (hereafter Crippa).
Regarding Amended Independent Claim 1, Nguyen discloses a memory, comprising:
a memory cell array (Disclosing a memory array: Nguyen, col.4:1-2); and
a page buffer (A page buffer: Nguyen, col.4:60), wherein the page buffer
is disposed correspondingly to a bit line of the memory cell array (A page buffer connected to a selected data line 204: Nguyen, col.12:23-28)
and comprises:
latches (Page buffer including multiple latches: Nguyen, col.12:37-40)
which are coupled to the bit line through a sense node of the page buffer (Latches 691 and 692 connected to bit line 204 through a sense node: Nguyen, Figure 6); and
a common data transmission circuit (Common data transmission circuit: Nguyen, Figure 6),
wherein a first port of the common data transmission circuit is coupled to the sense node (A first connection between the data transmission circuit and the sense node: Nguyen, Figure 6), and
wherein a second port of the common data transmission circuit
is coupled to at least two of the latches that are configured for data sensing through the common data transmission circuit (Data latches coupled to the sense node through the common data transmission circuit: Nguyen, Figure 6).
Nguyen does not expressly disclose a common data transmission circuit wherein the at least two latches share the common data transmission circuit, are configured to perform data sensing operations through the common data transmission circuit, and are enabled to transmit data between the at least two data latches based on a sensing result. Crippa, however, discloses a memory page buffer wherein:
The at least two of the latches (Disclosing data latches 230-1 and 230-2, marked 1 & 2, below: Crippa, Figure 2B, annotated below)
share the common data transmission circuit (Common data transmission circuit, outline 3 below: Crippa, Figure 2B) and
are configured to perform data sensing operations through the common data transmission circuit (Configured to perform data sensing operations through the common data transmission circuit: Crippa, Figure 2B),
wherein the common data transmission circuit is configured
to perform data sensing by discharging a voltage of the sense node (Transmission circuit configured to discharge the voltage of sense node SO through, for example, transistors 274, 278, and 286: Crippa, Figure 2B)
in response to a common data transmission signal (Common data transmission signal, MLCPROG: Crippa, Figure 2B) and
data latched in one of the at least two of the latches (Discharge reliant on data stored in latch 230-1, in the above example: Crippa, Figure 2B), and
to enable data transmission between the at least two of the latches based on a sensing result (Transferring data from latch 230-1 to latch 230-2 based on sensing result: Crippa, col.17:55-18:6).
Crippa discloses that this arrangement reduces the number of read accesses required to retrieve stored data (Crippa, col.6:1-5) and allows for a flexible circuit without the excess circuitry otherwise required (Crippa, col.26:23-28). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the flexible data transfer circuitry of Crippa with the common data latches of Nguyen, with a reasonable expectation of success. Both arrangements are well known in the field of page buffer data transfer latches, and the combination of known elements with predictable results is obvious and not patentable.
[media_image1.png (greyscale): annotated copy of Crippa, Figure 2B, with data latches 230-1 and 230-2 marked 1 and 2 and the common data transmission circuit outlined at 3]
Regarding Amended Claim 2 and the substantially similar limitations of Claim 20, Nguyen discloses the memory of claim 1, wherein
the data transmission is performed between the at least two of the latches through the common data transmission circuit (Data transfer connecting latches 691 and 692 through the common data transmission circuit: Nguyen, Figure 6).
Regarding Amended Claim 3, Nguyen discloses the memory of claim 1, wherein the page buffer comprises
a first data setting circuit (A data setting circuit consisting of transistors 623 and 627: Nguyen, Figure 6),
wherein
a first port of the first data setting circuit is coupled to the sense node (The first port of the transistor connected to the sensing node: Nguyen, Figure 6) and
a second port of the first data setting circuit is coupled to the at least two of the latches (The second port of the transistor connected to the latches: Nguyen, Figure 6),
wherein
the first data setting circuit is configured to set the data latched in the at least two of the latches (First data setting circuit configured to control data set in latches: Nguyen, Figure 6); and
the second port of the common data transmission circuit is coupled to a coupling node between the second port of the first data setting circuit and the at least two of the latches (Second port of first data setting circuit connected to the common node between latches 691 and 692: Nguyen, Figure 6).
Regarding Claim 4, Nguyen discloses the memory of claim 3, wherein the first data setting circuit comprises
a first transistor and a second transistor that are connected in series (First transistor 623 connected in series with second transistor 627: Nguyen, Figure 6),
wherein
a first end of the first transistor is coupled to the second port of the first data setting circuit (First end of transistor 623 connected to the common node of Latches 691 and 692: Nguyen, Figure 6);
a second end of the first transistor is coupled to a first end of the second transistor (The second end of transistor 623 connected to the first end of transistor 627: Nguyen, Figure 6); and
a second end of the second transistor is coupled to a ground terminal (The second end of transistor 627 connected to ground: Nguyen, Figure 6),
wherein
a control end of the second transistor is coupled to the sense node (Control end of transistor 623 connected to the sense node: Nguyen, Figure 6; Note: Applicant has the second transistor controlled by the sense node, while prior art Nguyen has the first transistor controlled by the sense node. For a pair of transistors operating in series, this variation makes no functional difference), and
a control end of the first transistor is configured to receive a first data setting signal (Control end of transistor 627 responsive to EN_LATCH: Nguyen, Figure 6).
Regarding Claim 5, Nguyen discloses the memory of claim 4, wherein the page buffer further comprises
a second data setting circuit comprising a third transistor (Transistor 603: Nguyen, Figure 6), wherein
a first end of the third transistor is coupled to the second port of the common data transmission circuit (Transistor 603 is coupled to the DATA TRANSFER node through transistor 696: Nguyen, Figure 6; Note: Broadest Reasonable Interpretation of couple may be interpreted to mean direct or indirect connection. This interpretation is supported by applicant usage in the specification. See, for example, Specification ¶[0036] “the above-mentioned peripheral circuit may be coupled to the memory cell array through a Bit Line (BL), a Word Line (WL), a Source Line, a Source Select Gate (SSG) and a Drain Select Gate (DSG).”);
a second end of the third transistor is coupled to a ground terminal (Second end of transistor 603 coupled to a ground terminal: Nguyen, Figure 6); and
a control end of the third transistor is configured to receive a second data setting signal (The control end of transistor 603 configured to receive a data setting signal: Nguyen, Figure 6).
Regarding Claim 6, Nguyen discloses the memory of claim 5, further comprising
a control logic circuit (Control logic circuit 116: Nguyen, col.14:1; See Also, Nguyen, Figure 1)
coupled to
the first data setting circuit (Control logic circuit 116 configured to control data setting circuit EN_LATCH: Nguyen, col.14:1-14) and
the second data setting circuit (Second data setting transistor 603: Nguyen, Figure 6), wherein
the control logic circuit
is configured to generate at least one of the first data setting signal or the second data setting signal (Control logic configured to control data setting signal: Nguyen, col.14:44-47).
Regarding Claim 7, Nguyen discloses the memory of claim 1, wherein the latches comprise:
first phase inverters (First phase inverter 683: Nguyen, Figure 6);
second phase inverters (Second phase inverter 684: Nguyen, Figure 6);
fourth transistors (Fourth transistor 688: Nguyen, Figure 6); and
fifth transistors (Fifth transistor 687: Nguyen, Figure 6),
wherein
output ends of the first phase inverters are coupled to input ends of the second phase inverters (The output of inverter 683 connected to the input of inverter 684: Nguyen, Figure 6);
input ends of the first phase inverters are coupled to output ends of the second phase inverters (The output of inverter 684 connected to the input of inverter 683: Nguyen, Figure 6);
first ends of the fourth transistors are coupled to first nodes between the output ends of the first phase inverters and the input ends of the second phase inverters (First end of transistor 688 connected to the node between the output of inverter 683 and the input of transistor 684: Nguyen, Figure 6);
first ends of the fifth transistors are coupled to second nodes between the input ends of the first phase inverters and the output ends of the second phase inverters (First end of transistor 687 connected to the node between the output of inverter 684 and the input of transistor 683: Nguyen, Figure 6); and
second ends of the fourth transistors and second ends of the fifth transistors are coupled to third nodes (Second ends of transistors 687 and 688 connected to a common node: Nguyen, Figure 6),
wherein
the third nodes of at least two of the latches are coupled to the second port of the common data transmission circuit (Common node of latches 691 and 692 connected to common DATA_TRANSFER signal path 693: Nguyen, Figure 6).
Nguyen does not expressly disclose the internal configuration of first and second data latches 691 and 692, but does disclose the standard latch arrangement in the Sense Amplifier Latch, which may be considered exemplary of the latch configuration.
Regarding Claim 8, Nguyen discloses the memory of claim 1, wherein the common data transmission circuit comprises
a sixth transistor (Sixth transistor 609: Nguyen, Figure 6) and a seventh transistor (Seventh transistor 613: Nguyen, Figure 6) that are connected in series (Transistors 609 and 613 connected in series: Nguyen, Figure 6),
wherein
a first end of the sixth transistor is coupled to the sense node (First end of transistor 609 connected to the sense node: Nguyen, Figure 6);
a second end of the sixth transistor is coupled to a first end of the seventh transistor (The second end of transistor 609 connected to the first end of transistor 613: Nguyen, Figure 6); and
a second end of the seventh transistor is coupled to a ground terminal (The second end of transistor 613 connected to ground: Nguyen, Figure 6),
wherein
a control end of the seventh transistor is coupled to the second port of the common data transmission circuit (Control of transistor 609 coupled to the common link between latches 691 and 692: Nguyen, Figure 6), and
a control end of the sixth transistor is configured to receive a common data transmission signal (Control of transistor 613 connected to control signal EN_SA: Nguyen, Figure 6).
Regarding Amended Independent Claim 19, Nguyen discloses a memory system, comprising:
one or more memories (Memory device 100: Nguyen, Figure 1); and
a memory controller coupled to the one or more memories and configured to control the one or more memories (Processor 130 coupled to memory device 100: Nguyen, Figure 1),
wherein one of the one or more memories comprises:
a memory cell array (Disclosing a memory array: Nguyen, col.4:1-2); and
a page buffer (A page buffer: Nguyen, col.4:60), wherein the page buffer
is disposed correspondingly to a bit line of the memory cell array (A page buffer connected to a selected data line 204: Nguyen, col.12:23-28)
and comprises:
latches (Page buffer including multiple latches: Nguyen, col.12:37-40)
coupled to the bit line through a sense node of the page buffer (Latches 691 and 692 connected to bit line 204 through a sense node: Nguyen, Figure 6); and
at least one common data transmission circuit (Common data transmission circuit: Nguyen, Figure 6),
wherein a first port of the common data transmission circuit is coupled to the sense node (A first connection between the data transmission circuit and the sense node: Nguyen, Figure 6), and
wherein a second port of the common data transmission circuit is coupled to at least two of the latches that are configured for data sensing through the common data transmission circuit (Data latches coupled to the sense node through the common data transmission circuit: Nguyen, Figure 6).
Nguyen does not expressly disclose a common data transmission circuit wherein the at least two latches share the common data transmission circuit, are configured to perform data sensing operations through the common data transmission circuit, and are enabled to transmit data between the at least two data latches based on a sensing result. Crippa, however, discloses a memory page buffer wherein:
The at least two of the latches (Disclosing data latches 230-1 and 230-2, marked 1 & 2 in the annotated Figure 2B above: Crippa, Figure 2B)
share the common data transmission circuit (Common data transmission circuit, outline 3 in the annotated figure above: Crippa, Figure 2B) and
are configured to perform data sensing operations through the common data transmission circuit (Configured to perform data sensing operations through the common data transmission circuit: Crippa, Figure 2B),
wherein the common data transmission circuit is configured
to perform data sensing by discharging a voltage of the sense node (Transmission circuit configured to discharge the voltage of sense node SO through, for example, transistors 274, 278, and 286: Crippa, Figure 2B)
in response to a common data transmission signal (Common data transmission signal, MLCPROG: Crippa, Figure 2B) and
data latched in one of the at least two of the latches (Discharge reliant on data stored in latch 230-1, in the above example: Crippa, Figure 2B), and
to enable data transmission between the at least two of the latches based on a sensing result (Transferring data from latch 230-1 to latch 230-2 based on sensing result: Crippa, col.17:55-18:6).
Crippa discloses that this arrangement reduces the number of read accesses required to retrieve stored data (Crippa, col.6:1-5) and allows for a flexible circuit without the excess circuitry otherwise required (Crippa, col.26:23-28). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the flexible data transfer circuitry of Crippa with the common data latches of Nguyen, with a reasonable expectation of success. Both arrangements are well known in the field of page buffer data transfer latches, and the combination of known elements with predictable results is obvious and not patentable.
Claims 9-12 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over US 11,562,791 B1 to Hao T. Nguyen (hereafter Nguyen) in view of US 7,336,538 B2 to Luca Crippa et al. (hereafter Crippa) and further in view of US 2022/0328114 A1 to Soo Yeol Chai et al. (hereafter Chai).
Regarding Claim 9, Nguyen discloses the memory of claim 1, but does not expressly disclose the further limitations of claim 9. Chai, however, discloses a memory wherein the page buffer further comprises
independent data transmission circuits (Independent data transmission circuits 234, 235, and 236: Chai, Figure 17),
wherein
a latch that is not coupled with the common data transmission circuit corresponds to one of the independent data transmission circuits (Independent circuits not corresponding to the common data transmission circuit: Chai, Figure 17).
Chai teaches independent latches permit separate data to be stored in each latch, such as a first latch storing initialization data and a second latch storing accessed data (Chai, ¶¶[0157-0158]).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the multiple latch structure of Chai with the common data node architecture of Nguyen, with a reasonable expectation of success. They are both known inventions in the field of page buffer design and latch management, and the combination of known inventions with predictable results is obvious and not patentable.
Regarding Claim 10, Chai discloses the memory of claim 9, wherein one of the independent data transmission circuits comprises
an eighth transistor and a ninth transistor that are connected in series (Transistors N21 and N25 connected in series: Chai, Figure 17),
wherein
a first end of the eighth transistor is coupled to the sense node (The first node of transistor N21 connected to the sense node SO: Chai, Figure 17);
a second end of the eighth transistor is coupled to a first end of the ninth transistor (The second end of transistor N21 directly electrically connected to transistor N25: Chai, Figure 17); and
a second end of the ninth transistor is coupled to a ground terminal (The second end of transistor N25 connected to a ground terminal: Chai, Figure 17),
wherein
a control end of the ninth transistor is coupled to an output end of the corresponding latch (The control of transistor N25 connected to the output of the latch: Chai, Figure 17), and
a control end of the eighth transistor is configured to receive an independent data transmission signal (The control end of transistor N21 configured to receive an independent signal: Chai, Figure 17).
Regarding Claim 11, Nguyen discloses the memory of claim 1, but fails to expressly disclose the further limitations of claim 11. Chai, however, discloses a memory as in claim 1, wherein the page buffer further comprises:
a precharge and discharge circuit coupled to a power terminal and the sense node (Disclosing a precharge circuit 233, a discharge circuit 232, and bit line control unit 231: Chai, Figure 7); and
a bit line voltage setting circuit (Bitline control unit 231: Chai, Figure 7) coupled to:
the precharge and discharge circuit (Precharge circuit 233 and discharge circuit 232 connected to bitline control unit 231: Chai, Figure 7),
the sense node, and the bit line (Bitline control unit 231 connected to the sense node SO and bitline BL1: Chai, Figure 7),
wherein
the precharge and discharge circuit is configured to
provide a first voltage from the power terminal (Precharge circuit 233 connected to Vdd power terminal: Chai, Figure 7)
to at least one of the bit line voltage setting circuit or the sense node (Precharge circuit 233 connected to sense node SO: Chai, Figure 7); and
the bit line voltage setting circuit is configured to
provide a bit line force voltage to the bit line
based on the first voltage provided by the precharge and discharge circuit (Bitline control unit 231 managing the provided voltage: Chai, ¶[0080]; See Also, Chai, ¶[0090]).
Chai teaches this arrangement allows the circuit to manage the speed at which the sense node and bit line charge and discharge (Chai, ¶[0090]). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the multiple latch structure of Chai with the common data node architecture of Nguyen, with a reasonable expectation of success. They are both known inventions in the field of page buffer design and latch management, and the combination of known inventions with predictable results is obvious and not patentable.
Regarding Claim 12, Chai discloses the memory of claim 11, wherein the latches include
a sense latch (Sense latch included in bitline control unit 231: Chai, Figure 7),
wherein the precharge and discharge circuit comprises:
a tenth transistor (Transistor P3: Chai, Figure 7),
wherein
a first end of the tenth transistor is coupled to the power terminal (First end of transistor P3 connected to the power terminal Vdd: Chai, Figure 7);
a second end of the tenth transistor is coupled to the sense node (Second end of transistor P3 connected to the sense node: Chai, Figure 7);
a control end of the tenth transistor is coupled to the sense latch (Control of transistor P3 connected to PRECHS0_N: Chai, Figure 7); and
the tenth transistor is configured to
provide the first voltage to at least one of
the bit line voltage setting circuit or the sense node (Transistor P3 configured to provide power to the sense node: Chai, Figure 7) according to data latched in the sense latch.
Regarding Amended Independent Claim 15, Nguyen discloses an operation method of a memory, wherein the memory comprises:
a memory cell array (Disclosing a memory array: Nguyen, col.4:1-2); and
a page buffer (A page buffer: Nguyen, col.4:60) disposed correspondingly to a bit line of the memory cell array (A page buffer connected to a selected data line 204: Nguyen, col.12:23-28),
wherein
the page buffer comprises latches (Page buffer including multiple latches: Nguyen, col.12:37-40) and at least one common data transmission circuit (Common data transmission circuit: Nguyen, Figure 6),
wherein
a first port of the common data transmission circuit is coupled to a sense node (A first connection between the data transmission circuit and the sense node: Nguyen, Figure 6) and
a second port of the common data transmission circuit is coupled to at least two of the latches (Data latches coupled to the sense node through the common data transmission circuit: Nguyen, Figure 6),
Nguyen does not expressly disclose an operation method comprising performing a precharge operation on the sense node, performing a first data setting operation on a first latch, sensing data in a second latch through the data transmission circuit, or generating a sensing result such that the sense node has a second voltage and sets the first latch. Chai, however, teaches an operation method of a memory wherein the operation method comprises:
performing a precharge operation on the sense node such that the sense node has a first voltage (Performing a precharge operation on sense node SO: Chai, ¶[0090]);
in response to the sense node having the first voltage, performing a first data setting operation on a first latch of at least two of the latches (Perform a first data operation in response to the sense node SO voltage level: Chai, ¶[0098]);
sensing data
in a second latch of the at least two of the latches through the common data transmission circuit (Sensing data in the second latch unit 234 depending on the SO voltage level: Chai, ¶[0099]);
generating a sensing result
such that the sense node has a second voltage less than or equal to the first voltage (Reducing the voltage of the sense node: Chai, ¶[0090]); and
in response to the sense node having the second voltage, performing a second data setting operation on the first latch (Setting the first latch unit to store current data levels: Chai, ¶[0104]).
Chai teaches this buffer arrangement allows the circuit to have at least two different evaluation periods based on the stored value in the first latch (Chai, ¶[0107]). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the multiple latch structure of Chai with the common data node architecture of Nguyen, with a reasonable expectation of success. They are both known inventions in the field of page buffer design and latch management, and the combination of known inventions with predictable results is obvious and not patentable.
Neither Nguyen nor Chai expressly discloses a common data transmission circuit wherein the at least two latches share the common data transmission circuit, are configured to perform data sensing operations through the common data transmission circuit, and are enabled to transmit data between the at least two data latches based on a sensing result. Crippa, however, discloses a memory page buffer wherein:
The at least two of the latches (Disclosing data latches 230-1 and 230-2: Crippa, Figure 2B, annotated above)
share the common data transmission circuit (Common data transmission circuit, outline 3 in the annotated figure above: Crippa, Figure 2B) and
are configured to perform data sensing operations through the common data transmission circuit (Configured to perform data sensing operations through the common data transmission circuit: Crippa, Figure 2B),
wherein the common data transmission circuit is configured
to perform data sensing by discharging a voltage of the sense node (Transmission circuit configured to discharge the voltage of sense node SO through, for example, transistors 274, 278, and 286: Crippa, Figure 2B)
in response to a common data transmission signal (Common data transmission signal, MLCPROG: Crippa, Figure 2B) and
data latched in one of the at least two of the latches (Discharge reliant on data stored in latch 230-1, in the above example: Crippa, Figure 2B);
enabling data transmission between the at least two of the latches based on the sensing result (Transferring data from latch 230-1 to latch 230-2 based on sensing result: Crippa, col.17:55-18:6).
Crippa discloses that this arrangement reduces the number of read accesses required to retrieve stored data (Crippa, col.6:1-5) and allows for a flexible circuit without the excess circuitry otherwise required (Crippa, col.26:23-28). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of this application, to combine the flexible data transfer circuitry of Crippa with the common data latches of Nguyen, with a reasonable expectation of success. Both arrangements are well known in the field of page buffer data transfer latches, and the combination of known elements with predictable results is obvious and not patentable.
Regarding Claim 16, Nguyen and Chai disclose the operation method of claim 15, wherein the page buffer comprises
a first data setting circuit (A data setting circuit consisting of transistors 623 and 627: Nguyen, Figure 6),
wherein
a first port of the first data setting circuit is coupled to the sense node (The first port of the transistor connected to the sensing node: Nguyen, Figure 6);
a second port of the first data setting circuit is coupled to at least two of the latches (The second port of the transistor connected to the latches: Nguyen, Figure 6);
a second port of the common data transmission circuit is coupled to a coupling node between the second port of the first data setting circuit and at least two of the latches (Second port of first data setting circuit connected to the common node between latches 691 and 692: Nguyen, Figure 6); and
wherein performing the first data setting operation on the first latch of the at least two of the latches comprises:
in response to a first data setting signal (In response to first data setting signal XSET: Chai, Figure 17),
providing, by the first data setting circuit, a ground voltage from a ground terminal to the first latch (Providing ground voltage via transistor N24: Chai, Figure 17); and
latching, by the first latch, initial data information based on the ground voltage (Latching data in comparison to the ground voltage: Chai, Figure 17).
Regarding Claim 17, Nguyen discloses the operation method of claim 16, wherein
the second latch latches target data information (First latch being sense amplifier latch 686 and second latch being first data latch 691: Nguyen, col.14:38-40), and
wherein the method further comprises:
applying a common data transmission signal to the common data transmission circuit
before sensing the data in the second latch of the at least two of the latches (First applying the data transmission signal to the common signal path: Nguyen, col.14:31-34); and
in response to the sense node
having the second voltage and the first data setting signal, performing the second data setting operation on the first latch (Latching the first data state to the sense amplifier latch 686: Nguyen, col.14:38-40).
Regarding Claim 18, Nguyen discloses the operation method of claim 17, wherein:
when the target data information corresponds to a high level, the second voltage is less than the first voltage; and when the target data information corresponds to a low level, the second voltage is equal to the first voltage (Data may be maintained at a low level in the sensing latch and the first data latch, regardless of the current target data information level, as is implied by the selective setting of appropriate latches: Nguyen, col.14:25-40).
Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 11,562,791 B1 to Hao T. Nguyen (hereafter Nguyen) in view of US 7,336,538 B2 to Luca Crippa, et al. (hereafter Crippa) and further in view of US 11,205,484 B1 to Soo-Woong Lee, et al. (hereafter Lee).
Regarding Claim 13, Nguyen discloses the memory of claim 1, but does not expressly disclose the remaining limitations of claim 13. Lee, however, discloses a memory device as in claim 1, wherein the latches include
a sense latch and a cache latch, wherein the sense latch and the cache latch are coupled to the second port of the common data transmission circuit (Disclosing a sensing latch and a cache latch coupled to a common data node: Lee, Figure 12A).
Lee discloses that this arrangement helps minimize the distance data signals must travel, reducing the time it takes to transfer data from, for example, the sense latch to the data latch (Lee, col.17:3-9). Therefore, it would have been obvious to one having ordinary skill in the art, prior to the effective filing date of this application, to combine the multiple latches of Lee with the common data transfer architecture of Nguyen, with a reasonable expectation of success. Both are known techniques in the design and construction of page buffers and latch management, and the combination of known elements according to known methods to yield predictable results is obvious and not patentable.
Regarding Claim 14, Lee discloses the memory of claim 13, wherein the latches further include
at least one of a data latch or a low voltage latch (Disclosing a page buffer circuit including a cache latch, a sense latch, and a data latch: Lee, Figure 12A),
wherein at least one of the data latch or the low voltage latch is coupled to the second port of the common data transmission circuit (Data latch coupled to a common data node: Lee, Figure 12A).
Response to Arguments
Applicant’s arguments filed with respect to the claims have been fully considered but are believed to be fully addressed by the modified and new grounds of rejection set forth above. Applicant’s response is considered to be a bona fide attempt at a response and is being accepted as a complete response.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 11,475,965 B2 to Hyung Jin Choi: Performing operations involving multiple latches simultaneously.
US 11,538,531 B2 to Hyung Jin Choi: Managing precharge and discharge operations among a plurality of data latches.
US 11,626,173 B2 to Sung Hyun Hwang, et al.: Paired latches within a page buffer.
US 7,046,554 B2 to Ju Yeab Lee: Teaching a two-latch page buffer for a memory array wherein the latches communicate through a common data transmission circuit.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER LANE REECE whose telephone number is (571)272-0288. The examiner can normally be reached Monday - Friday 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richard Elms can be reached at (571) 272-1869. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER LANE REECE/Examiner, Art Unit 2824
/JEROME LEBOEUF/Primary Examiner, Art Unit 2824 - 03/17/2026