DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 20, 2026 has been entered.
Response to Amendment
The amendment filed February 20, 2026 has been entered. Claims 1-32 remain pending in this application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5, 6, 8, 9, 13, 14, 16, 17, 21, 22, 24, 25, 29, 30, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2008/0005473) in view of Guirado (US 2022/0075730) and Cabot (US 2006/0143396).
Regarding claim 1, Chen teaches one or more processors (Fig. 1), comprising circuitry (the different elements within Fig. 1) to:
cause a software program to be compiled (Fig. 2 shows that source code is analyzed and compiled to determine whether the data can be inserted into a cache, with the optimizing compiler analyzing source code 204 to generate compiled code 210, see also [0044-0046]) based, at least in part, on one or more indications in the program of one or more memory locations, wherein the compiled software program is to apply, based on the one or more indications in the source code, one or more caching policies to control when information stored in the one or more memory locations is to be cached (Fig. 2 shows that source code is analyzed and compiled to determine whether the data can be inserted into a cache, where one cache shown is Fig. 5’s software implemented cache, and Fig. 6 shows an example of code 618 declaring different objects that are stored into the software implemented cache. When analyzing the source code, Chen provides that “Alias analysis determines whether any pair of data references in the program may or must refer to the same memory location. Data dependence analysis determines pairs of data references in the program where one reference uses the contents of a memory location that are written by the other references, or one reference over-writes the contents of a memory location that are used or written by the other reference. Data reference analysis uses the results of alias analysis and data dependence analysis…” [0044], where [0044] provides that the compiler can determine from this information whether the data is cacheable or not, with the example of data x and data y in Fig. 2. This teaches that the source code contains indications of memory locations (with the ability to analyze whether the data references rely on a same memory location), where the indications determine how to apply caching policies (Chen provides that the compiler utilizes the information and analysis of the memory locations from the data references to determine whether to cache the data; whether or not the data is cacheable to begin with is a caching policy in itself) to control when information stored in the one or more memory locations is to be cached (as seen in Fig. 2, upon being compiled, the cache lookup is inserted for the cacheable data for access to data x, showing that the memory locations affected the caching policies and in turn controlled whether information is cached)). Chen further teaches that the parameters of the software cache can be configured/re-configured, including the replacement policy, see [0042], see also “Additionally, optimizing compiler 202 links cache manager code 214 to compiled code 210 in order to configure the cache and implement modified cache lookup code 212. Cache manager code 214 is separate code that interfaces with modified cache lookup code 212. That is, the modified cache lookup code 212 provides an entry-point into cache manager code 214 when an application executes. Cache manager code 214 is responsible for, among other things, deciding what policy to use when deciding where to place newly fetched data, what data to replace when no space is available, and whether to perform data pre-fetching,” [0046]; this cache manager code and cache lookup code provide further evidence of the caching policies that are applied to control how information is to be cached.
Chen fails to teach that the software program is compiled specifically in response to an API call. While Chen discusses application programming languages, see [0043, 0044, 0049], these are not understood to explicitly constitute an API.
Chen also fails to teach wherein the one or more indications indicate a first caching policy to apply to a portion of one or more memory locations and a second caching policy, different from the first caching policy, to apply to remaining portions of the one or more memory locations.
Examiner notes for clarity of record that the current version of the claims require the indications to indicate two kinds of information: the one or more memory locations where the caching policies are applied, as well as the first/second caching policies to apply. Therefore, while Chen is relied upon above to teach the indications of the memory locations, Chen is unable to teach where the indications themselves indicate the caching policies, let alone the specific application of two policies as recited.
Guirado’s disclosure relates to a data processing system incorporating caching policies, and as such comprises analogous art.
As part of this disclosure, Guirado provides that a “host processor may also execute a compiler or compilers for compiling programs to be executed by (e.g., a programmable processing stage of the) processor,” [0171], where “The compiler may, e.g., be part of the driver 4, with there being a special API call to cause the compiler to run. The compiler execution can thus be seen as being part of the, e.g. draw call preparation done by the driver in response to API calls generated by an application,” [0232].
An obvious modification can be identified, incorporating Guirado’s disclosure that a compiler can specifically be run via an API call into Chen’s system and specifically the optimizing compiler. Such a modification reads upon the limitation of the claim, as it provides that an API is utilized to call Chen’s optimizing compiler, which causes a software program to be compiled.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Guirado’s API with Chen’s processor, as APIs provide for greater compatibility and efficiency in operation between different systems and platforms, allowing for easier use of Chen’s processor by a user.
The combination of Chen and Guirado still fails to teach the amended limitation concerning the indications of a first and second caching policy.
Cabot’s disclosure relates to providing cache policies via program code, and as such comprises analogous art.
As part of this disclosure, Cabot provides the ability to embed markers at the source code level that lead to cache priority cues. As seen in Figs. 3a and 3b, Cabot supports operations for caching processes where the caches are divided into multiple priority pools, see also [0011, 0012], and Fig. 4 shows the process where markers are inserted at the source code level, see step 400, see also “In one embodiment, markers are embedded at the source code level, resulting in the generation of corresponding cache priority cues in the machine code. With reference to FIG. 4, this process begins at a block 400 in which markers are inserted into high-level source code to delineate cache eviction policies for the different code portions. In one embodiment, the high-level code comprises programming code written in the C or C++ language, and the markers are implemented via corresponding pragma statements. Pseudocode illustrating a set of exemplary pragma statements to effect a two-priority-level cache eviction policy is shown in FIG. 5a. In this embodiment, there are two priority levels: ON, indicating high priority, and OFF, indicating low priority, or the default priority level. The pragma statement "CACHE EVICT POLICY ON" is used to mark the beginning of a code portion that is to be assigned to the high-priority pool, while the "CACHE EVICT POLICY OFF" pragma statement is used to mark the end of the code portion.” [0049]. Notably, compilation of the source code then results in the ability to identify which cache priority pool IDs to use, see Fig. 4, steps 402 and 404. Examples of pseudocode delineating code portions with cache priority are seen in Figs. 5a and 5b.
An obvious modification can be identified: incorporating Cabot’s disclosure of markers in source-level code to delineate and mark cache priority levels for data within the program. Such a modification reads upon where the indications in the source code indicate a first and a second caching policy for different portions of the memory locations, as well as where the first and second policies differ from each other (in Cabot’s example, the “CACHE EVICT POLICY ON” pragma statement marks a section of code assigned to the high-priority pool with one policy (cache evict policy on) applied, and the “CACHE EVICT POLICY OFF” pragma statement indicates the end of this code portion, i.e., the subsequent portion of code utilizes a different caching policy and applies to a different location in memory).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Cabot’s disclosure of source code markers into Chen’s code analysis/compilation for caching data, as this gives programmers more control over how to instruct the system to cache data based on anticipated data usage, see Abstract, [0035], as well as to adjust policies based on monitored statistics that are not effectively handled by existing cache policies, see [0055, 0056].
Regarding claim 5, the combination of Chen, Guirado, and Cabot teaches the one or more processors of claim 1, and the combination further teaches wherein:
the one or more indications are text within the source code of the software program specifying cache guidance for the one or more portions of memory (following the combination of claim 1, modifying Chen by incorporating Cabot’s disclosure of source code markers/pragma statements reads upon where the source code contains the indications as text specifying the cache guidance for the one or more memory locations; notably, Cabot’s disclosure incorporated into Chen’s process allows Chen’s Fig. 2 process of analyzing and compiling source code to determine where data is inserted into a cache to then use explicit markers within the source code to perform the determination, such as the determination in Chen Fig. 6 whether to use the hardware or software caches); and
the one or more memory locations are a contiguous block of memory locations (as seen in Chen Fig. 5, the software implemented cache is a contiguous block of memory locations in the tag and data arrays).
Regarding claim 6, the combination of Chen, Guirado, and Cabot teaches the one or more processors of claim 1, and Chen further teaches wherein the one or more indications are associated with a source code statement that declares an array (Fig. 6, the source code defining the objects to be stored provide the objects as array objects).
Regarding claim 8, the combination of Chen, Guirado, and Cabot teaches the one or more processors of claim 1, and Chen further teaches wherein:
the one or more caching policies are associated with an array of data elements (Fig. 6, data elements A, B, C, D, and E are shown as objects in array notation, see the common storage in element 614); and
cache guidance is applied to a plurality of the data elements as a result of a single indication of the one or more caching policies (the cache manager code cited in the claim 1 rationale is the single indication of the policy and defines how objects are stored in the cache; necessarily, the policy is applied to the objects).
Claim 9 is directed to a computer-implemented method and is rejected according to the same rationale as claim 1, as claim 1 recites the processor circuitry performing the method.
Claims 13 and 14 are rejected according to the same rationale as claims 5 and 6.
Regarding claim 16, the combination of Chen, Guirado, and Cabot teaches the computer-implemented method of claim 9, and Chen further teaches wherein the one or more memory locations are associated with an array data structure of the software program (Fig. 6 shows the software implemented cache and hardware cache associated with data objects A, B, C, D, and E, which are all depicted with array notation).
Regarding claim 17, Chen teaches a computer system comprising one or more processors and memory storing executable instructions (data processing system with processor and memory elements in [0074], where [0072, 0073] provide code embodied on memories, including RAM/ROM, that would be associated with a data processing system) that, as a result of being executed by the one or more processors, perform the method of claim 9; these limitations are identical to the functional limitations of the processors of claim 1 and are therefore rejected according to the same rationale.
Claims 21, 22, and 24 are rejected according to the same rationale as claims 5, 6, and 16.
Regarding claim 25, Chen teaches a machine-readable medium having stored thereon a set of instructions which, if performed by one or more processors (see [0072, 0073]), cause the one or more processors to perform the functions of claim 1, and is rejected according to the same rationale.
Claims 29, 30, and 32 are rejected according to the same rationale as claims 5, 6, and 16.
Claims 2-4, 10-12, 18-20, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Guirado and Cabot, and further in view of Wang et al. (US 2011/0010502).
Regarding claim 2, the combination of Chen, Guirado, and Cabot teaches the one or more processors of claim 1, and Chen further teaches wherein:
the one or more caching policies are associated with an array of data elements (Fig. 6 shows the software implemented cache and hardware cache associated with data objects A, B, C, D, and E, which are all depicted with array notation).
The combination fails to teach wherein:
the one or more caching policies specify that cache guidance is to be fractionally apportioned when one or more data elements are accessed.
Wang’s disclosure is related to cache replacement policies and as such comprises analogous art for being in the same field of endeavor.
As part of this disclosure, Wang provides the ability to use the LRR and LRU policies within a same area of memory, see [0028,0029], where [0049] provides that registers may be used to define the LRU/LRR ranges utilized.
An obvious modification can be identified: providing the ability to define ranges where different cache policies may be implemented. This reads upon the limitation of the claim, as the cache areas are fractionally apportioned between, for example, the LRU and LRR policies.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Wang’s multiple cache policies with defining areas to Chen’s software implemented cache, as the ability to support multiple policies “may provide flexibility to handle different requestor workloads in an efficient fashion,” [0030].
Regarding claim 3, the combination of Chen, Guirado, Cabot, and Wang teaches the one or more processors of claim 2, and the combination further teaches wherein fractional apportionment is achieved by indicating a fraction of the data elements to be provided with a first cache guidance and a remainder of the data elements receive a second cache guidance (as discussed in the claim 2 rationale, registers are used to define the ranges of the cache that are assigned to the LRU/LRR policies, see Wang [0049]; examiner also notes that the indications of multiple caching policies are given in the source code, see the claim 1 rationale relying on Cabot).
Regarding claim 4, the combination of Chen, Guirado, Cabot, and Wang teaches the one or more processors of claim 2, and the combination further teaches wherein fractional apportionment is achieved by indicating a contiguous subset of the data elements to be provided with a first cache guidance and a remainder of the data elements receive a second cache guidance (as discussed in the claim 2 rationale, registers are used to define the ranges of the cache that are assigned to the LRU/LRR policies, see Wang [0049], where in the claim 1 rationale, Chen shows that the software cache is a contiguous region of memory, see Fig. 5; examiner also notes that the indications of multiple caching policies are given in the source code, see the claim 1 rationale relying on Cabot).
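The fractional apportionment concept relied upon from Wang ([0028, 0029], [0049]) can be illustrated with a C sketch. This is an illustrative model only, not Wang’s implementation: a hypothetical `lru_ways` value (standing in for Wang’s range-defining registers) splits a set of cache ways between an LRU-managed region and a FIFO-managed region (a simplified stand-in for Wang’s LRR policy).

```c
#include <string.h>

/* Illustrative sketch only (not Wang's implementation): eight cache ways
 * fractionally apportioned by a hypothetical range value, with ways
 * [0, lru_ways) managed by LRU and the remainder managed FIFO-style,
 * standing in for Wang's LRR policy. */

#define NUM_WAYS 8

typedef struct {
    int tag[NUM_WAYS];
    unsigned long stamp[NUM_WAYS];  /* LRU: last-use time; FIFO: fill time */
    unsigned long clock;
    int lru_ways;                   /* range-register analogue: size of LRU region */
} CacheSet;

static void cache_init(CacheSet *s, int lru_ways) {
    memset(s, 0, sizeof *s);
    for (int w = 0; w < NUM_WAYS; w++) s->tag[w] = -1;  /* all ways empty */
    s->lru_ways = lru_ways;
}

/* Pick a victim within [lo, hi): the way with the smallest stamp. */
static int victim(const CacheSet *s, int lo, int hi) {
    int v = lo;
    for (int w = lo + 1; w < hi; w++)
        if (s->stamp[w] < s->stamp[v]) v = w;
    return v;
}

/* Access `tag` in the region selected by `use_lru`; returns 1 on hit, 0 on miss. */
static int access_region(CacheSet *s, int tag, int use_lru) {
    int lo = use_lru ? 0 : s->lru_ways;
    int hi = use_lru ? s->lru_ways : NUM_WAYS;
    s->clock++;
    for (int w = lo; w < hi; w++)
        if (s->tag[w] == tag) {
            if (use_lru) s->stamp[w] = s->clock;  /* LRU refreshes on use */
            return 1;                             /* FIFO stamp left at fill time */
        }
    int v = victim(s, lo, hi);                    /* miss: evict oldest, then fill */
    s->tag[v] = tag;
    s->stamp[v] = s->clock;
    return 0;
}
```

Here the single `lru_ways` boundary plays the role of Wang’s registers defining the LRU/LRR ranges: ways below the boundary follow one replacement policy and the remainder follow the other, i.e., the cache is fractionally apportioned between two policies.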
Claims 10-12, 18-20, and 26-28 are rejected according to the same rationale as claims 2-4.
Claims 7, 15, 23, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Guirado and Cabot and further in view of Fleming et al. (US 2019/0205284).
Regarding claim 7, the combination of Chen, Guirado, and Cabot teaches the one or more processors of claim 1, but fails to teach wherein the one or more indications are associated with declaration of a pointer used to access the one or more memory locations.
While Chen shows source code with object variables and pointers, Chen does not explicitly show pointers being utilized to access the cache itself.
Fleming’s disclosure is related to processor instructions utilized for a spatial accelerator, and is analogous art, as it relates to the same field of endeavor of accessing/handling data objects via source code/hardware.
As part of this disclosure, Fleming describes how C source code is utilized to cause an accelerator to write to a memory location, see [0323]. In particular, the source code provides a pointer, with the ability to write the pointer’s payload into a cache bank, see [0323] and Fig. 40.
An obvious combination can be identified: combining the declaration of a pointer for a payload with Chen’s source code. This combination would read upon the claim, as the cache policy/software implemented cache code would be associated with declared pointers that are utilized to write to the cache area itself.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine Fleming’s source code, in which a pointer’s payload is written to a cache, with Chen’s source code for a software implemented cache. Both elements are known in the art, and because both are software source code related to accessing caches, one of ordinary skill in the art would have found the combination predictable, as it merely combines the two known source code techniques.
Claims 15, 23, and 31 are rejected according to the same rationale as claim 7.
Response to Arguments
Applicant's arguments filed February 20, 2026 have been fully considered but they are moot in part and unpersuasive in part.
The arguments are moot in part because they address the previous grounds of rejection based on Chen in view of Guirado and therefore do not address the modified rationale incorporating Cabot.
The arguments are unpersuasive in part, as they present a broad, general allegation of patentability over the cited references, but a reconsideration of the art shows that Cabot provides a clear disclosure capable of rendering the claimed features obvious.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jayasena et al. (US 10,042,762) discloses providing caching policies, including in source code.
Bajie et al. (US 2022/0318144) discloses providing source code analysis and configuring caching policies.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON D HO whose telephone number is (469)295-9093. The examiner can normally be reached Mon-Fri 8:00-4:00 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.D.H./Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139