DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-9, 11-15, and 18-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Vasekin et al. (US 7,447,883).
Regarding claim 1, Vasekin et al. discloses a data processing method [Col. 11: 31-32: method of data processing], comprising: when a first coroutine is executed, determining whether a to-be-fetched object in an execution process is stored in a target cache [Col. 9: 37-67: an instruction fetching circuit coupled to memory storing program instructions to be executed, trigger a branch target cache to store data concerning a branch instruction fetched by the instruction fetching unit]; and upon determining that the to-be-fetched object is not stored in the target cache, prefetching the to-be-fetched object, and switching the currently executed first coroutine to a second coroutine [FIG. 2, 3, Col. 1: 39-44: an instruction fetching circuit coupled to memory storing program instructions to be executed, a prefetching unit wherein a cache target address is issued to the prefetch unit to redirect program fetching to the target address].
Regarding claim 2, Vasekin et al. discloses the method according to claim 1, wherein the to-be-fetched object comprises a to-be-fetched target instruction, and the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: predicting, based on an address of the target instruction, whether the target instruction is stored in the target cache [FIG. 1, Col. 1: 39-44: an instruction fetching circuit coupled to memory storing program instructions to be executed, a prefetching unit wherein a cache target address is issued to the prefetch unit to redirect program fetching to the target address].
Regarding claim 4, Vasekin et al. discloses the method according to claim 1, wherein the to-be-fetched object comprises a to-be-fetched target instruction, and the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: determining, by accessing the target cache, whether the target instruction is stored in the target cache [FIG. 1, Col. 1: 39-44: an instruction fetching circuit coupled to memory storing program instructions to be executed, a prefetching unit wherein a cache target address is issued to the prefetch unit to redirect program fetching to the target address].
Regarding claim 5, Vasekin et al. discloses the method according to claim 1, wherein the to-be-fetched object comprises to-be-fetched target data, the target data is data that needs to be obtained based on a currently processed instruction, and the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: before entering a decoding phase of the currently processed instruction, performing first prediction about whether the target data is stored in the target cache [FIG. 1, Col. 1: 39-44: an instruction fetching circuit coupled to memory storing program instructions to be executed, a prefetching unit wherein a cache target address is issued to the prefetch unit to redirect program fetching to the target address].
Regarding claim 6, Vasekin et al. discloses the method according to claim 5, wherein the performing first prediction about whether the target data is stored in the target cache comprises: predicting, based on an address of the currently processed instruction, whether the target data is stored in the target cache [FIG. 1, Col. 1: 39-44; 5:6-12: an instruction fetching circuit coupled to memory storing program instructions to be executed, a prefetching unit wherein a cache target address is issued to the prefetch unit to redirect program fetching to the target address].
Regarding claim 7, Vasekin et al. discloses the method according to claim 5, wherein when a result of the first prediction indicates that the target data is not stored in the target cache, the prefetching the to-be-fetched object comprises: decoding and executing the currently processed instruction, and prefetching the target data based on an address that is of the target data and that is calculated in an execution process of the currently processed instruction [Col. 5: 13-16, 36-50].
Regarding claim 8, Vasekin et al. discloses the method according to claim 1, wherein the to-be-fetched object comprises to-be-fetched target data, the target data is data that needs to be obtained based on a currently processed instruction, and the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: in an execution phase of the currently processed instruction, performing second prediction about whether the target data is stored in the target cache [Col. 1: 13-32].
Regarding claim 9, Vasekin et al. discloses the method according to claim 8, wherein the performing second prediction about whether the target data is stored in the target cache comprises: predicting, based on an address of the target data, whether the target data is stored in the target cache, wherein the address of the target data is calculated in an execution process of the currently processed instruction [FIG. 3-4].
Regarding claim 11, Vasekin et al. discloses the method according to claim 1, wherein the to-be-fetched object comprises to-be-fetched target data, the target data is data that needs to be obtained based on a currently processed instruction, and the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: determining, by accessing the target cache, whether the target data is stored in the target cache [Col. 5: 13-16, 36-50].
Regarding claim 12, Vasekin et al. discloses the method according to claim 1, wherein the second coroutine is a next coroutine of the first coroutine in a coroutine chain, the coroutine chain is a closed-loop chain comprising a plurality of coroutines, and the method further comprises: when switching is performed for a plurality of times based on the coroutine chain and switching to the first coroutine is performed again, no longer predicting whether the to-be-fetched object prefetched last time is stored in the target cache [Col. 4: 41 to Col. 5: 35].
Regarding claim 13, Vasekin et al. discloses the method according to claim 1, wherein the second coroutine is a next coroutine of the first coroutine in a coroutine chain, the coroutine chain is a closed-loop chain comprising a plurality of coroutines, and the method further comprises: when switching is performed for a plurality of times based on the coroutine chain and switching to the first coroutine is performed again, starting processing from an instruction, in the first coroutine, whose previous processing procedure is interrupted by coroutine switching [Col. 5: 6-35].
Regarding claim 14, Vasekin et al. discloses the method according to claim 1, wherein the switching the currently executed first coroutine to a second coroutine comprises: storing context information of the currently executed first coroutine, and loading context information of the second coroutine [Col. 5: 36-67].
Regarding claim 15, Vasekin et al. discloses the method according to claim 1, wherein the determining whether a to-be-fetched object in an execution process is stored in a target cache comprises: predicting, by using a prediction system, whether the to-be-fetched object is stored in the target cache; and the method further comprises: updating the prediction system based on a real result of whether the to-be-fetched object is stored in the target cache [Col. 5: 17-36].
16. (Canceled)
17. (Canceled)
Regarding claim 18, the rationale in the rejection of claim 1 is herein incorporated.
Regarding claim 19, the rationale in the rejection of claim 1 is herein incorporated.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Vasekin et al. (US 7,447,883) in view of Spracklen (US 2013/0332778).
Regarding claim 3, Vasekin et al. discloses the method according to claim 2 but does not explicitly disclose wherein the target cache is a level 2 cache, and the method further comprises: when predicting whether the target instruction is stored in the level 2 cache, accessing a level 1 cache to obtain the target instruction.
Spracklen, however, in combination with Vasekin et al., discloses wherein the target cache is a level 2 cache, and the method further comprises: when predicting whether the target instruction is stored in the level 2 cache, accessing a level 1 cache to obtain the target instruction [FIG. 10-11, ¶0035, 0036, 0038, 0039: fetch instructions and data from the L1 cache when the value cannot be found].
It would have been obvious to one of ordinary skill in the art to, when predicting whether the target instruction is stored in the level 2 cache, access a level 1 cache to obtain the target instruction, in order to facilitate performance monitoring and accounting of resource utilization within multi-threaded processors (Spracklen, ¶0001).
Regarding claim 10, Vasekin et al. in view of Spracklen discloses the method according to claim 8, wherein the target cache is a level 2 cache, and the method further comprises: when performing the second prediction on the target data, accessing a level 1 cache to obtain the target data [Spracklen, FIG. 10-11, ¶0035, 0036, 0038, 0039].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. DE 102020131816 discloses loading a program into cache memory: an instruction cache system takes instruction data of a program to be executed by a CPU, repeats an operation of fetching and executing the instruction data, and automatically adds an instruction for loading a subroutine.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARDOCHEE CHERY whose telephone number is (571) 272-4246. The examiner can normally be reached 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio del Mar Perez-Velez can be reached at (571) 270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARDOCHEE CHERY/Primary Examiner, Art Unit 2133