DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Group I, claims 1-11, in the reply filed on 17 November 2025, is acknowledged.
Drawings
Figures 6-13, 16-17, and 20-34 should be designated by a legend such as --Prior Art-- because only that which is old is illustrated. The information related to these drawings is found in the background section of the specification and is therefore considered known art. See MPEP § 608.02(g). Corrected drawings in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. The replacement sheet(s) should be labeled “Replacement Sheet” in the page header (as per 37 CFR 1.84(c)) so as not to obstruct any portion of the drawing figures. If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 8, and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hantao et al., “A Highly Parallel and Energy Efficient Three-Dimensional Multilayer CMOS-RRAM Accelerator for Tensorized Neural Network”.
In reference to claim 1, Hantao teaches a three-dimensional integrated circuit for use in an artificial neural network (Abstract: “...three-dimensional (3-D) multilayer CMOS-RRAM accelerator for a tensorized neural network. Highly parallel matrix-vector multiplication can be performed with low power in the proposed 3-D multilayer CMOS-RRAM accelerator...”) comprising:
a first die comprising a first vector by matrix multiplication array and a first input multiplexor, the first die located on a first vertical layer (III. 3D Multi-layer CMOS-RRAM accelerator, B. 3D Multi-layer CMOS-RRAM architecture: “...Layer 2 of RRAM-crossbar performs logic operations such as matrix-vector multiplication and also vector addition...”, Fig. 3);
a second die comprising an input circuit, the second die located on a second vertical layer different than the first vertical layer (III. 3D Multi-layer CMOS-RRAM accelerator, B. 3D Multi-layer CMOS-RRAM architecture: “...Layer 1 of RRAM-crossbar is implemented as a buffer to store neural network model weights...”, Fig. 3); and
one or more vertical interfaces coupling the first die and the second die (III. 3D Multi-layer CMOS-RRAM accelerator, B. 3D Multi-layer CMOS-RRAM architecture: “Layer 2 collects tensor cores from Layer 1 through TSV communication to perform parallel matrix-vector multiplication.”, Fig. 3);
wherein during a read operation, the input circuit provides an input signal to the first input multiplexor over at least one of the one or more vertical interfaces, the first input multiplexor applies the input signal to one or more rows in the first vector by matrix multiplication array, and the first vector by matrix multiplication array generates an output (III. 3D Multi-layer CMOS-RRAM accelerator, B. 3D Multi-layer CMOS-RRAM architecture: “The wordline takes the input (in this case, tensor core 3) and the multiplicand (in this case, tensor core 4) is stored as the conductance of RRAM. The output will be collected from the bit lines...”, Fig. 3).
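For context only (this sketch is illustrative and forms no part of the record or the rejection): the crossbar read operation quoted above amounts to an analog vector-by-matrix multiplication, in which word-line input voltages multiply stored cell conductances and each bit line sums the resulting currents per Kirchhoff's current law. The function name and the example values below are hypothetical.

```python
# Illustrative model of an RRAM crossbar read: inputs drive the word lines,
# weights are stored as cell conductances, and each bit line collects the
# summed current I_j = sum_i V_i * G_ij (one output per column).

def crossbar_vmm(voltages, conductances):
    """Return the bit-line currents of a crossbar given word-line voltages
    and a row-major matrix of cell conductances."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    assert len(voltages) == n_rows, "one voltage per word line"
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]

# Example: 2 word lines driving 3 bit lines.
v = [1.0, 0.5]                      # word-line input voltages
g = [[1.0, 2.0, 0.0],               # conductances programmed into the cells
     [4.0, 0.0, 2.0]]
print(crossbar_vmm(v, g))           # [3.0, 2.0, 1.0]
```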
In reference to claim 2, Hantao teaches wherein the second die also comprises a digital-to-analog converter for converting a digital input into an analog input provided to the input circuit as the input signal (III. 3D Multi-layer CMOS-RRAM accelerator, Fig. 3).
In reference to claim 3, Hantao teaches wherein the first die also comprises a neuron circuit for buffering the output (III. 3D Multi-layer CMOS-RRAM accelerator, Fig. 3).
In reference to claim 4, Hantao teaches wherein the first die also comprises a column multiplexer for sending the output to a third die over at least one of the one or more vertical interfaces (III. 3D Multi-layer CMOS-RRAM accelerator, Fig. 3).
In reference to claim 5, Hantao teaches wherein the third die also comprises an analog-to-digital converter to convert the output from the first die into a digital output, the third die located on a third vertical layer different than the first vertical layer and the second vertical layer (III. Fig. 3, 6, TC1 – TC6).
In reference to claim 8, Hantao teaches a third die comprising a second vector by matrix multiplication array and a second input multiplexor, the third die located on the first vertical layer (III. 3D Multi-layer CMOS-RRAM accelerator, Fig. 3).
In reference to claim 9, Hantao teaches wherein the vector by matrix multiplication array also comprises a plurality of non-volatile memory cells (III. 3D Multi-layer CMOS-RRAM accelerator, Fig. 3).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hantao et al., “A Highly Parallel and Energy Efficient Three-Dimensional Multilayer CMOS-RRAM Accelerator for Tensorized Neural Network” in view of applicant admitted prior art (AAPA).
In reference to claim 6, Hantao teaches claims 1 and 5 as described above. Hantao does not teach a fourth die comprising a high voltage generator, analog circuitry, and a temperature compensation circuit, the fourth die located on a fourth vertical layer different than the first vertical layer, the second vertical layer, and the third vertical layer. AAPA teaches a VMM system that includes a high voltage generator (Paragraph [0088], Figure 34, 3410), analog circuitry (Paragraph [0088], Figure 34, 3415), and a temperature compensation circuit (Paragraph [0089], Figure 34, 3406). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the high voltage generator, analog circuitry, and temperature compensation circuit of AAPA into the three-dimensional integrated circuit of Hantao on a fourth die located on a fourth vertical layer different than the first, second, and third vertical layers, because placing these circuits into the vertical stack with the first, second, and third vertical layers of claims 1 and 5 would further reduce the overall horizontal footprint of the circuit.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Hantao et al., “A Highly Parallel and Energy Efficient Three-Dimensional Multilayer CMOS-RRAM Accelerator for Tensorized Neural Network” in view of applicant admitted prior art (AAPA) and Wang et al., US PGPUB No. 2022/0156469 A1.
In reference to claim 7, Hantao in view of AAPA teaches claims 1, 5, and 6 as described above. Furthermore, Hantao teaches a second vector by matrix multiplication array, a second input multiplexor, a high voltage decoder, and a neuron circuit (Figure 3; for instance, TC1 is the first vector by matrix multiplication array with a first input multiplexor, a high voltage decoder, and a neuron circuit, and TC2 is the second vector by matrix multiplication array with a second input multiplexor, a high voltage decoder, and a neuron circuit). Hantao in view of AAPA does not teach wherein the second vector by matrix multiplication array, second input multiplexor, high voltage decoder, and neuron circuit are on a fifth die located on a fifth vertical layer different than the first vertical layer, the second vertical layer, the third vertical layer, and the fourth vertical layer. Wang teaches a multi-layer neural network wherein a first matrix multiplication is performed on a first layer and a second matrix multiplication is performed on a second layer (Paragraphs 33, 42).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Wang of placing first matrix multiplication circuitry on a first layer and second matrix multiplication circuitry on a second layer into Hantao in view of AAPA, such that the second vector by matrix multiplication array, second input multiplexor, high voltage decoder, and neuron circuit reside on a fifth die located on a fifth vertical layer different than the first, second, third, and fourth vertical layers. Doing so would place the second vector by matrix multiplication array, second input multiplexor, high voltage decoder, and neuron circuit into the vertical stack with the first, second, third, and fourth vertical layers, which would reduce the overall horizontal footprint of the circuit of Hantao as shown in Figure 3 by placing TC1, TC2, TC3, and TC4 vertically instead of horizontally.
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Hantao et al., “A Highly Parallel and Energy Efficient Three-Dimensional Multilayer CMOS-RRAM Accelerator for Tensorized Neural Network”.
In reference to claims 10 and 11, Hantao teaches claims 1 and 9 as described above. Hantao does not teach wherein the memory cells comprise a plurality of stacked gate flash memory cells or split gate flash memory cells. However, using a plurality of stacked gate flash memory cells or split gate flash memory cells as memory cells is notoriously well known in the art. OFFICIAL NOTICE IS TAKEN. Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use stacked gate flash memory cells or split gate flash memory cells in place of the RRAM memory cells taught by Hantao in claims 1 and 9, because stacked gate cells generally allow for higher density due to a simpler cell design while split gate cells offer significantly faster erase times, either of which would be desirable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON BOWERS whose telephone number is (571) 272-1888. The examiner can normally be reached on a flexible schedule, Monday-Friday, 7am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jack Chiang can be reached at (571) 272-7483. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.B/Examiner, Art Unit 2851
/JACK CHIANG/Supervisory Patent Examiner, Art Unit 2851