DETAILED ACTION
Claims 1, 3-4 are pending. Claim 2 is cancelled.
Priority: April 15, 2022 (FP, JP)
Assignee: Semiconductor Energy Lab
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Lai et al. (US 2018/0210836) in view of Ishizu et al. (US 2019/0355397).
As per claim 1, Lai discloses:
A semiconductor device(Lai, [claim 1 -- An integrated circuit]) comprising a first cache, a second cache, a cache controller, and a core(Lai, [Abstract -- A multi-core processing chip where the last-level cache is implemented by multiple last-level caches (a.k.a. cache slices) that are physically and logically distributed], [0019 -- In a multi-core processing chip, the last-level cache may be implemented by multiple last-level caches (a.k.a. cache slices) that are physically and logically distributed. The various processors of the chip decide which last-level cache is to hold a given data block by applying a hash function to the physical address.]), wherein the core is configured to perform program processing,(Lai, [0020 -- As used herein, the term “processor” includes digital logic that executes operational instructions to perform a sequence of tasks.; processor can be one of several “cores” (a.k.a., ‘core processors’) that are collocated on a common die or integrated circuit (IC) with other processors. In a multiple processor (“multi-processor”) system]);
wherein the cache controller(Lai, [0019 -- processor can be one of several “cores” (a.k.a., ‘core processors’) that are collocated on a common die or integrated circuit (IC) with other processors. In a multiple processor (“multi-processor”) system]) is configured to
perform control to store data for performing the program processing in the second cache in the case where a temperature around or inside the core is higher than or equal to a predetermined temperature threshold value,(Lai, [0050 -- For example, based at least in part on a temperature indicator associated with processor 111c, processor 111b may map its accesses using a second hashing function that distributes these accesses only to those of last-level caches 131a-131e that are associated with processors 111a-111e that are associated with temperature indicators that are not over a certain limit. In other words, when one or more of processors 111a-111e are over a temperature limit, processor 111b uses the second hashing function to avoid accessing those the last-level caches 131a-131e that are most tightly coupled to processors 111a-111e that are over-limit.]);
and perform control to store the data for performing the program processing in the first cache in the case where the temperature around or inside the core is lower than the predetermined temperature threshold value(Lai, [0049 -- For example, when temperature indicators associated with all of processors 111a-111e (e.g., including the indicator for processor 111a) indicate a within-limits condition, processor 111b may map its accesses using a first hashing function that distributes these accesses to any and all of last-level caches 131a-131e.]);
Lai does not explicitly disclose the following; however, Ishizu discloses:
wherein the first cache comprises a transistor whose channel formation region includes silicon, and the second cache comprises a transistor whose channel formation region includes an oxide semiconductor(Ishizu, [0067 -- The cell 10 includes a memory cell 20 and a backup circuit 30; The memory cell 20 has the same circuit configuration as a standard 6T (transistor) SRAM cell and is composed of a bistable circuit 25 and transistors MT1 and MT2.], [0073 -- When the transistors MO1 and MO2 are each an OS transistor, the backup circuit 30 can be stacked over the memory cell 20 including Si transistors;], [0217 -- In the layer LX1, a Si transistor included in the memory device 100, such as the transistor MT1, is provided. A channel formation region of the Si transistor is provided in the single crystal silicon wafer 5500.], [0244 -- A channel formation region of the OS transistor is preferably a CAC-OS (cloud-aligned composite metal oxide semiconductor).]).
Therefore it would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the features of Ishizu into the system of Lai, for the benefit that the power consumption of the semiconductor device is efficiently reduced; the efficiency of the power-consumption reduction is further improved by providing a sleep mode (Ishizu, [0146]).
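For clarity of the mapping between the claim language and Lai's temperature-based cache selection (Lai, [0049]-[0050]), the claimed control flow may be sketched as follows. This is an illustrative sketch only, not a disclosure of record; the threshold value and all identifiers are hypothetical.

```python
# Illustrative sketch of the temperature-gated cache selection recited in
# claim 1. The threshold and all names below are hypothetical.

TEMP_THRESHOLD_C = 85.0  # hypothetical "predetermined temperature threshold value"

def select_cache(core_temperature_c: float) -> str:
    """Return which cache the cache controller targets for program data.

    Per the claim: at or above the threshold, data is stored in the second
    cache (per Ishizu, the OS-transistor cache); below it, in the first
    cache (the Si-transistor cache).
    """
    if core_temperature_c >= TEMP_THRESHOLD_C:
        return "second_cache"  # temperature >= threshold -> second cache
    return "first_cache"       # temperature < threshold  -> first cache
```

This corresponds to Lai's selection between hashing functions based on per-processor temperature indicators, with the two claimed caches standing in for Lai's sets of last-level cache slices.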
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Lai et al. (US 2018/0210836) in view of Ishizu et al. (US 2019/0355397), and further in view of Gomes et al. (US 2022/0308995).
As per claim 3, the rejection of claim 1 is incorporated. In addition, Lai does not explicitly disclose the following in its entirety; however, Gomes discloses:
further comprising a substrate(Gomes, [0033 -- The compute logic is then over or on top of a package substrate 121. FIG. 1C illustrates an example in which the compute logic 103 is stacked over or on top of the 3D DRAM 105, which is over or on top of the package substrate 121.]), a layer over the substrate(Gomes, [0033 -- FIG. 1B illustrates an example in which the 3D DRAM 105 is stacked over or on top of the compute logic 103.]), and a die over the substrate, wherein the core is provided over the substrate(Gomes, [0037 -- The compute logic includes one or more processor cores 111 and one or more levels of cache 109 (e.g., level 1 (L1), level 2 (L2), level 3 (L3), etc.). The one or more levels of cache 109 may be implemented in SRAM on the same die as the processor cores.], [0053 -- FIG. 5C illustrates another 3D compute device with integrated 3D memory. In the example illustrated in FIG. 5C, many memory layers 213 are added to a base die 550. The NMOS memory layers 213 and the PMOS layer 215 may be added to the base die 550 via a layer transfer process, or memory layers may be deposited on the base die 550.]), wherein part of the first cache is provided in the layer(Gomes, [0042 -- The compute layer(s) 202 includes processor cores, a cache controller, and other compute logic.]), wherein part of the second cache is provided in the die(Gomes, [0034 -- FIG. 1D shows one example in which four compute dies are integrated over one 3D DRAM die, however, any number compute dies can be integrated with a number of 3D DRAM dies.], [0053 -- The base die 550 includes TSVs (through silicon vias) 552 to connect memory layers 213, the PMOS layer 215, and memory layers in the base die 550 with the compute layers 202. The base die 550 and compute layers 202 may be bonded together via contacts 556 using bonding techniques. Although FIG. 
5C illustrates an example in which the base die is over the compute die, a base die may be under one or more compute dies or over a compute die.]), wherein the layer is electrically connected to the substrate through a via hole formed between the substrate and the layer(Gomes, [0048 -- Turning again to FIG. 4A, in the illustrated example, the bottom layers include the substrate 246, which includes diffusion contact (diffcon) material. The die on which the memory layers are formed may include alternate layers of interconnect (M) layers and interlayer (V) layers. In the illustrated example, the transistors for the memory cell array 240 are located between metal layers. In the illustrated example, the capacitors for the memory cells are located in an interlayer layer.]), and wherein the die is electrically connected to the substrate by bonding a first electrode formed on the substrate and a second electrode formed on the die to each other(Gomes, [0042 -- The compute layer(s) 202 are bonded with the 3D memory 201 via a bonding technique (e.g., bonding solder bumps, balls, exposed contacts, pads, etc.). The compute layer(s) 202 includes processor cores, a cache controller, and other compute logic.]).
Therefore it would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the features of Gomes into the system of Lai, for the benefit of reducing cost-per-bit relative to external dynamic random-access memory (DRAM) technology, reducing power consumption, and improving performance of the processor and the memory in an efficient manner (Gomes, [0105]).
As per claim 4, the rejection of claim 1 is incorporated. In addition, Lai does not explicitly disclose the following in its entirety; however, Gomes discloses:
further comprising a substrate(Gomes, [0033 -- The compute logic is then over or on top of a package substrate 121. FIG. 1C illustrates an example in which the compute logic 103 is stacked over or on top of the 3D DRAM 105, which is over or on top of the package substrate 121.]), a layer over the substrate(Gomes, [0033 -- FIG. 1B illustrates an example in which the 3D DRAM 105 is stacked over or on top of the compute logic 103.]), and a die over the layer, wherein the core is provided over the substrate(Gomes, [0037 -- The compute logic includes one or more processor cores 111 and one or more levels of cache 109 (e.g., level 1 (L1), level 2 (L2), level 3 (L3), etc.). The one or more levels of cache 109 may be implemented in SRAM on the same die as the processor cores.], [0053 -- FIG. 5C illustrates another 3D compute device with integrated 3D memory. In the example illustrated in FIG. 5C, many memory layers 213 are added to a base die 550. The NMOS memory layers 213 and the PMOS layer 215 may be added to the base die 550 via a layer transfer process, or memory layers may be deposited on the base die 550.]), wherein part of the first cache is provided in the layer(Gomes, [0042 -- The compute layer(s) 202 includes processor cores, a cache controller, and other compute logic.]), wherein part of the second cache is provided in the die(Gomes, [0034 -- FIG. 1D shows one example in which four compute dies are integrated over one 3D DRAM die, however, any number compute dies can be integrated with a number of 3D DRAM dies.], [0053 -- The base die 550 includes TSVs (through silicon vias) 552 to connect memory layers 213, the PMOS layer 215, and memory layers in the base die 550 with the compute layers 202. The base die 550 and compute layers 202 may be bonded together via contacts 556 using bonding techniques. Although FIG. 
5C illustrates an example in which the base die is over the compute die, a base die may be under one or more compute dies or over a compute die.]), wherein the layer is electrically connected to the substrate through a via hole formed between the substrate and the layer(Gomes, [0048 -- Turning again to FIG. 4A, in the illustrated example, the bottom layers include the substrate 246, which includes diffusion contact (diffcon) material. The die on which the memory layers are formed may include alternate layers of interconnect (M) layers and interlayer (V) layers. In the illustrated example, the transistors for the memory cell array 240 are located between metal layers. In the illustrated example, the capacitors for the memory cells are located in an interlayer layer.]), and wherein the die is electrically connected to the layer by bonding a first electrode formed on the layer and a second electrode formed on the die to each other(Gomes, [0042 -- The compute layer(s) 202 are bonded with the 3D memory 201 via a bonding technique (e.g., bonding solder bumps, balls, exposed contacts, pads, etc.). The compute layer(s) 202 includes processor cores, a cache controller, and other compute logic.]).
Therefore it would have been obvious to a person of ordinary skill in the art at the time of filing to incorporate the features of Gomes into the system of Lai, for the benefit of reducing cost-per-bit relative to external dynamic random-access memory (DRAM) technology, reducing power consumption, and improving performance of the processor and the memory in an efficient manner (Gomes, [0005], [0105]).
Response to Arguments
Applicant's arguments filed 12/10/2025 have been fully considered but they are not persuasive.
The objection concerning claims 3-4 set forth in the Office action (OA) dated 9/10/2025 has been withdrawn.
The applicant has amended claim 1 to include the limitations of claim 2, now cancelled. The applicant contends that none of the presently cited prior art, either singly or in combination, suggests or renders obvious the amended features of claim 1 (Remarks, p. 6):
wherein the first cache comprises a transistor whose channel formation region includes silicon, and the second cache comprises a transistor whose channel formation region includes an oxide semiconductor.
The Examiner respectfully disagrees with this contention.
Response:
The prior art of Ishizu et al. (US 2019/0355397) discloses a memory device that includes a cell array ([0053]), where each cell includes a memory cell and a backup circuit ([0067]). That is, the cell array includes a memory cell array (first cache) and a backup circuit array (second cache). The memory cell comprises Si transistors, while the backup circuit comprises OS transistors ([0073]). The backup circuit is capable of retaining data of the memory cell even when powered off ([0005]). The redundancy and low power consumption of the backup circuit constitute the improvement. Therefore the amended claim limitations are obvious over the prior art. All rejections are maintained.
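The retention behavior of Ishizu's cell described above can be modeled as follows. This is an illustrative model only; the class and attribute names are hypothetical and are not drawn from the record.

```python
# Illustrative model of Ishizu's cell: a volatile memory cell (bistable
# Si-transistor SRAM circuit) paired with a nonvolatile OS-transistor
# backup circuit that retains the bit across power-off. Names hypothetical.

class Cell:
    def __init__(self):
        self.memory_bit = None   # volatile bistable circuit (Si transistors)
        self.backup_bit = None   # backup circuit (OS transistors)

    def write(self, bit: int) -> None:
        self.memory_bit = bit

    def backup(self) -> None:
        # Before power-off, the backup circuit captures the cell state.
        self.backup_bit = self.memory_bit

    def power_off(self) -> None:
        self.memory_bit = None   # volatile state is lost on power-off
        # backup_bit survives: the low off-state leakage of OS transistors
        # lets the backup circuit retain data without power.

    def restore(self) -> None:
        # After power-on, the backed-up bit is written back to the cell.
        self.memory_bit = self.backup_bit
```

In this model, the backup/restore pair is what enables the power-gated (sleep-mode) operation cited above as the motivation to combine.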
Examiner Notes
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ishizu et al. (US 2016/0217848), in which the inside of an electronic component is filled with resin, thus reducing damage to a circuit portion and a wire embedded in the component caused by external mechanical force, and reducing deterioration of characteristics due to moisture or dust (Ishizu, [0237]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARVIND TALUKDAR whose telephone number is (303)297-4475. The examiner can normally be reached M-F, 10 am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Arvind Talukdar
Primary Examiner
Art Unit 2132
/ARVIND TALUKDAR/Primary Examiner, Art Unit 2132