DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. The instant application, No. 18/976,568, has a total of 20 claims pending in the application; there are 3 independent claims and 17 dependent claims, all of which are ready for examination by the examiner.
INFORMATION CONCERNING DRAWINGS:
3. Applicant’s drawings submitted on 12/11/2024 are acceptable for examination purposes.
ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT
Information Disclosure Statement
4. As required by M.P.E.P. 2001.06(b) and 37 C.F.R. 1.98(d), since the instant application has been identified as a continuation application of an earlier filed application and relies on the earlier filing date under 35 U.S.C. 120, the examiner has reviewed the prior art cited in the earlier related application as required by M.P.E.P. 707.05 and 904. As stated in M.P.E.P. 2001.06(b), no separate citation of the same prior art need be made by the applicant in the instant application.
INFORMATION CONCERNING IDS:
5. The information disclosure statements (IDSs) submitted on 12/11/2024, 01/08/2025, and 01/31/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the Examiner. Documents in the IDS dated 01/31/2025 under the heading “Foreign Patent Documents,” as shown by the line-through, have not been considered by the Examiner. An English translation of the HF documents has not been provided.
SPECIFICATION
The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required:
6. The specification fails to provide proper antecedent basis for “first cache” and “second cache” recited in the claims.
INFORMATION CONCERNING CLAIMS:
Claim Objections
Claims 10 and 19 are objected to because of the following informalities:
7. Claim 10, line 15, recites “one a second memory cycle”.
Claim 19, line 15, recites “one a first memory cycle”.
It is suggested that “one” be replaced by “on” in claims 10 and 19.
8. Claim 6 recites the limitation “, ”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-5, 12-14, and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for lack of proper antecedent basis in the claims.
9. Claim 2 recites the limitation "the cache line." in line 6. There is insufficient antecedent basis for this limitation in the claim. Claims 3-5 are rejected at least based on their dependency from claim 2.
10. Claim 12 recites the limitation "the cache line" in lines 2, 3, and 4. There is insufficient antecedent basis for this limitation in the claim. Claims 13-14 are rejected at least based on their dependency from claim 12.
11. Claim 19 recites the limitation "the cache line" in line 20. There is insufficient antecedent basis for this limitation in the claim. Claim 20 is rejected at least based on its dependency from claim 19.
Double Patenting
12. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
13. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,197,334 B2 (hereinafter the patent). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application and the claims of the patent refer to the same caches with different names. For example, as shown in Fig. 1 of the instant application, there is a first level or L1 instruction (L1I) cache 121 and a second level or L2 cache 130. The cache line of the L2 cache is twice the size of the cache line of the L1 cache (e.g., two program instructions can be stored in an L2 cache line, one in each half, while only one program instruction can be stored in an L1 cache line). For example, upon a miss in the L1 instruction cache that causes a hit in the upper half of an L2 cache line, the L2 cache supplies the upper half of the L2 cache line to the L1 instruction cache, and also supplies the lower half of the L2 cache line to the L1 cache as a prefetch. Fig. 1 of the patent shows a similar configuration. However, the claims of the instant application refer to the L1 cache as a “second cache” and to the L2 cache as a “first cache” in some claims (e.g., claims 1-9 and 19-20), and reverse the naming in other claims (e.g., claims 10-18). The claims of the patent refer to the L1 cache as a “first cache” and to the L2 cache as a “second cache.” Claims 6 and 15 of the instant application recite a third program instruction and a fourth program instruction. Claims 4 and 13 of the instant application recite that the width of the L2 cache line is twice that of the L1 cache line. The claims of the patent do not specifically recite a size (e.g., twice) but recite a first half and a second half of the L2 cache line. However, these differences do not render the claims patentably distinct from each other.
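For illustration only (this sketch is not part of any claim), the demand-fetch-plus-prefetch behavior described above can be modeled in a few lines of Python. All names here (`service_fetch`, the dictionary-based caches, the `"insn@…"` strings) are hypothetical and serve only to show the order of supply: the demanded half of the L2 cache line on the first memory cycle, the other half on the second memory cycle as a prefetch.

```python
# Hypothetical model of the two-level cache behavior described above:
# L2 lines are twice the width of L1 lines, and an L1 miss that hits
# in L2 returns the demanded half first, then the other half as a prefetch.

L1_LINE = 1            # one program instruction per L1 cache line
L2_LINE = 2 * L1_LINE  # an L2 cache line holds two program instructions

def service_fetch(addr, l1, l2):
    """Return a list of (memory_cycle, instruction) responses for a demand fetch."""
    if addr in l1:                        # L1 hit: single demand response, no prefetch
        return [(1, l1[addr])]
    line_base = addr - (addr % L2_LINE)   # align address to its L2 cache line
    line = l2.get(line_base)
    if line is None:                      # miss in both caches (not modeled further here)
        return []
    offset = addr - line_base
    demanded = line[offset]               # half of the L2 line satisfying the demand fetch
    other = line[1 - offset]              # other half of the L2 line, sent as a prefetch
    l1[addr] = demanded                   # fill L1 with both halves
    l1[line_base + (1 - offset)] = other
    return [(1, demanded), (2, other)]

# Usage: address 5 misses in L1 but hits the upper half of the L2 line at base 4.
l1 = {}
l2 = {4: ["insn@4", "insn@5"]}
print(service_fetch(5, l1, l2))  # [(1, 'insn@5'), (2, 'insn@4')]
```

After the first call, both halves of the L2 line reside in L1, so a subsequent fetch of either instruction is a single-cycle L1 hit with no prefetch.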
14. Claims of the instant application are compared to claims of the patent in the following table:
Claims of US Patent 12,197,334 are shown paired with the corresponding claims of US Application 18/976,568:

Patent claim 1:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmit a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 1:
An apparatus, comprising: a first cache; and a first memory controller configured to: receive a demand fetch request for a first program instruction, wherein the first program instruction is associated with a miss in a second cache; determine whether the first program instruction is associated with a hit in the first cache; and based on a determination that the first program instruction is associated with a hit in the first cache, provide the first program instruction from the first cache on a first memory cycle; and provide a second program instruction from the first cache on a second memory cycle as a prefetch.

Patent claim 12:
A method comprising: issuing, by a processor, a demand fetch request for a first program instruction; receiving, by a memory controller, the demand fetch request; determining whether the first program instruction is stored in a first cache; determining, by the memory controller, whether the first program instruction is associated with a hit in a second cache based on the first program instruction not being stored in the first cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmitting, by the memory controller, the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmitting, by the memory controller, a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 10:
A method, comprising: receiving, by a first memory controller, a demand fetch request for a first program instruction, wherein the first program instruction is associated with a miss in a first cache; determining, by the first memory controller, whether the first program instruction is associated with a hit in a second cache; and based on determining that the first program instruction is associated with a hit in the second cache, providing the first program instruction from the second cache on a first memory cycle; and providing a second program instruction from the second cache on a second memory cycle as a prefetch.

Patent claim 1:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmit a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 19:
A device, comprising: a processor configured to generate a demand fetch request for a first program instruction; a first cache; and a memory controller configured to: receive the demand fetch request, wherein the first program instruction is associated with a miss in a second cache; determine whether the first program instruction is associated with a hit in the first cache; and based on a determination that the first program instruction is associated with a hit in the first cache, provide the first program instruction from a first portion of a cache line of the first cache on a first memory cycle; and provide a second program instruction from a second portion of the cache line on a second memory cycle.

Patent claim 1:
An apparatus comprising: … transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmit a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 2:
The apparatus of claim 1, wherein the first memory controller is configured to: provide the first program instruction from a first portion of a cache line of the first cache; and provide the second program instruction from a second portion of the cache line.

Patent claim 14:
The method of claim 12, wherein: the first portion of the cache line is an upper half of the cache line; and the second portion of the cache line is a lower half of the cache line.

Application claim 3:
The apparatus of claim 2, wherein the first portion of the cache line is an upper half portion of the cache line, and the second portion of the cache line is a lower half portion of the cache line.

Patent claim 9:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with an upper half of a cache line of the second cache; and transmit a second program instruction on a second memory cycle, wherein the second program instruction is associated with a lower half of the cache line.

Application claim 4:
The apparatus of claim 3, wherein the cache line of the first cache has a width twice a width of a cache line of the second cache.

Patent claim 9:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with an upper half of a cache line of the second cache; and transmit a second program instruction on a second memory cycle, wherein the second program instruction is associated with a lower half of the cache line.

Application claim 5:
The apparatus of claim 3, wherein to provide the second program instruction as the prefetch, the first memory controller is further configured to: based on the determination that the first program instruction is associated with a hit in the first cache, determine whether the first program instruction is stored in the upper half portion or the lower half portion of the cache line of the first cache; and based on a determination that the first program instruction is stored in the upper half portion of the cache line, provide the first program instruction from the first cache on the first memory cycle; and provide the second program instruction from the first cache on the second memory cycle as the prefetch.

Patent claim 1:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmit a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 6:
The apparatus of claim 1, wherein: the first memory controller is further configured to: receive a second demand fetch request for a third program instruction, wherein the third program instruction is associated with a miss in the second cache; determine whether the third program instruction is associated with a hit in the first cache; and based on a determination that the third program instruction is associated with a hit in the second cache, determine whether the third program instruction is stored in an upper half portion or a lower half portion of a second cache line of the first cache; and based on a determination that the third program instruction is stored in the lower half portion of the second cache line, provide the third program instruction from the lower half portion of the second cache line, and provide a fourth program instruction from the upper half portion of the second cache line as a prefetch.

Patent claim 11:
The apparatus of claim 9 further comprising: a register coupled to the first cache that includes: a first portion configured to store the first program instruction from the first portion of the first cache; and a second portion configured to store the second program instruction from the second portion of the second cache; and a multiplexer that includes: a first data input coupled to the first portion of the register; a second data input coupled to the second portion of the register; and an output.

Application claim 7:
The apparatus of claim 1, further comprising: a register including: a first portion configured to store the first program instruction provided from the first cache; and a second portion configured to store the second program instruction provided from the first cache.

Patent claim 11:
The apparatus of claim 9 further comprising: a register coupled to the first cache that includes: a first portion configured to store the first program instruction from the first portion of the first cache; and a second portion configured to store the second program instruction from the second portion of the second cache; and a multiplexer that includes: a first data input coupled to the first portion of the register; a second data input coupled to the second portion of the register; and an output.

Application claim 8:
The apparatus of claim 7, further comprising: a multiplexer configured to: receive the first program instruction at a first input of the multiplexer; receive the second program instruction at a second input of the multiplexer; and select the first program instruction or the second program instruction to provide to a second memory controller.

Patent claim 1:
An apparatus comprising: a processor configured to issue a demand fetch request for a first program instruction; a first cache coupled to the processor; and a memory controller configured to: receive the demand fetch request; based on the demand fetch request, determine whether the first program instruction is stored in the first cache; based on the first program instruction not being stored in the first cache, determine whether the first program instruction is associated with a hit in a second cache; and based on the first program instruction not being stored in the first cache and being associated with a hit in the second cache: transmit the first program instruction on a first memory cycle, wherein the first program instruction is associated with a first portion of a cache line of the second cache; and transmit a second program instruction on a second memory cycle as a prefetch response, wherein the second program instruction is associated with a second portion of the cache line.

Application claim 9:
The apparatus of claim 1, wherein the second cache is a level-one (L1) cache, and the first cache is a level-two (L2) cache.
Conclusion
The prior art made of record and not relied upon is as follows:
1. Kottapalli (US 20020188805 A1) teaches “…if the primary access hits in the first cache 570(a), the first cache provides 550 the targeted data. If the primary and secondary look-ups to the first cache miss 570(b), a full cache line from the second cache is returned 580 to the first cache. If the primary look-up misses and the secondary look-up hits, the first half of the cache line from the second cache is returned 590 to the first cache” (par. [0038]).
2. Venkatasubramanian et al. (US 20160179700 A1) teaches “…The two level one caches (L1I 111 and L1D 112) are backed by a level two unified cache (L2) 113. In the event of a cache miss to level one instruction cache 111 or to level one data cache 112, the requested instruction or data is sought from level two unified cache 113…” (par. [0028]).
3. Chachad et al. (US 20120198160 A1) teaches “…In case of loads or stores that cause write allocations, the first external request by UMC 630 is for the bytes that CPU 110 needs. This corresponds to the missed L1D cache line. The second external request is for the other half of the L2 cache line” (par. [0046]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASHEM FARROKH whose telephone number is (571)272-4193. The examiner can normally be reached Monday through Friday from 8:30 am - 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Tim Vo, can be reached on (571)272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
For questions regarding access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786- 9199 (IN USA OR CANADA) or 571-272-1000.
/HASHEM FARROKH/ Primary Examiner, Art Unit 2138