Prosecution Insights
Last updated: April 19, 2026
Application No. 19/035,930

METHOD OF REDUCING CACHE THRASHING IN A PROCESSING SYSTEM AND RELATED PROCESSING SYSTEM

Status: Non-Final OA (§103)
Filed: Jan 24, 2025
Examiner: MACKALL, LARRY T
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: MediaTek Inc.
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 85% (661 granted / 779 resolved; +29.9% vs TC avg; above average)
Interview Lift: +8.1% (moderate), across resolved cases with vs. without an interview
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 810 across all art units (31 currently pending)

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 24.8% (-15.2% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Comparisons are against Tech Center average estimates; based on career data from 779 resolved cases.
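As a sanity check, the headline figures in this report are internally consistent. The sketch below shows one way they could be derived from the raw career counts; the counts (661 granted of 779 resolved) and the +8.1% interview lift come from the report itself, while the arithmetic and rounding are assumptions about how the dashboard computes its displayed values.

```python
# Relating the dashboard's headline figures to the raw career counts.
# 661/779 and the +8.1% lift are from the report; the derivation is assumed.

granted, resolved = 661, 779

career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 84.9%, displayed as 85%

# The 93% "with interview" probability is consistent with the base rate
# plus the reported +8.1% interview lift.
with_interview = career_allow_rate + 8.1
print(f"With interview: {with_interview:.0f}%")  # 93%
```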

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Information Disclosure Statement

The Information Disclosure Statement filed on 24 January 2025 has been considered by the examiner.

Claim Objections

Applicant is advised that should claim 12 be found allowable, claim 13 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-8 and 11-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bozek et al. (Pub. No. US 2012/0226866) in view of LeMay et al. (Pub. No. US 2016/0092673). Claim 1: Bozek et al. disclose a method of reducing cache thrashing in a processing system, comprising: issuing a workload [par. 0081 – “Embodiments of the invention track virtual machine application workloads, and their processor cache hit ratios. Then based on the cache hit ratios, use dynamic migration to move virtual machine workloads with a higher processor cache demand to physical servers with larger L1, L2 and/or L3 caches. Conversely, embodiments may identify virtual machine workloads with a lower cache appetite and migrate those virtual machine workloads to physical servers with more moderate L1, L2 and/or L3 cache sizes. 
Using this methodology results in increased utilization of physical servers and increased performance of cache-sensitive virtual machine workloads. More specifically, the cache demands of a virtual machine workload, in the form of cache hit ratio data, are obtained from the processors and provided to the provisioning manager 222 and ultimately to the global provisioning manager 232 (See FIG. 5). The global provisioning manager migrates high cache demand workloads to physical servers with larger cache.”]; transmitting a memory access request associated with the workload to a first-level cache of the processing system and determining whether a first cache hit or a first cache miss occurs at the first-level cache [pars. 0006, 0020-0021, 0080 – A request is sent to the L1 cache. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; transmitting the memory access request associated with the workload to a second-level cache of the processing system in response to the first cache miss and determining whether a second cache hit or a second cache miss occurs at the second-level cache [pars. 0006, 0020-0021, 0080 – The request is sent to the L2 cache when it misses in the L1 cache. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. 
It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; transmitting the memory access request associated with the workload to a main memory of the processing system in response to the second cache miss [pars. 0006, 0020-0021, 0080 – The request is sent to main memory when it misses in the cache hierarchy. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; determining whether a relationship between a first hit rate of the first-level cache and a second hit rate of the second-level cache satisfies a predetermined criterion [par. 0091 – “For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. 
As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”]; and decreasing a value of the workload when the predetermined criterion is satisfied [par. 0091 – The VM is migrated (e.g. workload is set to zero on the previous machine). (“For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”], wherein: a storage capacity of the second-level cache is higher than a storage capacity of the first-level cache [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. 
The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”]; an access latency of the second-level cache is higher than an access latency of the first-level cache [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”]; However, Bozek et al. do not specifically disclose, issuing M threads to process a workload, wherein M is an integer larger than 1. In the same field of endeavor, LeMay et al. disclose, issuing M threads to process a workload [par. 0027 – “Although illustrated as including a single guest virtual machine 206 with a single software thread 208, it should be understood that the environment 200 may include many guest virtual machines 206, and each guest virtual machine may include many software threads 208.”]; M is an integer larger than 1 [par. 0027 – “Although illustrated as including a single guest virtual machine 206 with a single software thread 208, it should be understood that the environment 200 may include many guest virtual machines 206, and each guest virtual machine may include many software threads 208.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Bozek et al. to include multithreading, as taught by LeMay et al., in order to improve performance. Claim 2 (as applied to claim 1 above): Bozek et al. disclose, wherein: the predetermined criterion is satisfied when the second hit rate is higher than the first hit rate [par. 0091 – The criterion is met when any cache fails to meet the respective threshold ratio. In such a case, the L2 hit rate may be higher than the first hit rate. (“For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”)]. Claim 3 (as applied to claim 1 above): Bozek et al. disclose, wherein: the predetermined criterion is satisfied when the second hit rate is higher than the first hit rate by more than a predetermined value [par. 0091 – The criterion is met when any cache fails to meet the respective threshold ratio. In such a case, the L2 hit rate may be higher than the L1 hit rate by a value greater than any of the respective hit rate thresholds (e.g. a predetermined value). (“For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”)]. Claim 4 (as applied to claim 1 above): Bozek et al. disclose, wherein: the predetermined criterion is satisfied when the second hit rate is higher than a predetermined positive value [par. 0091 – The criterion is met when any cache fails to meet the respective threshold ratio. In such a case, the L2 cache may still meet the respective threshold value. (“For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent).
Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”)]. Claim 5 (as applied to claim 1 above): Bozek et al. disclose, wherein: a storage capacity of the main memory is higher than the storage capacity of the second-level cache [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower. The Level 3 cache 253 is located off the processor module 250 and has more capacity, albeit at a slower access rate, than the Level 2 cache. If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”]; an access latency of the main memory is higher than the access latency of the second-level cache [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower. The Level 3 cache 253 is located off the processor module 250 and has more capacity, albeit at a slower access rate, than the Level 2 cache. If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”]. Claim 6 (as applied to claim 1 above): Bozek et al. disclose the method, further comprising: providing, by the first-level cache, data requested by the memory access request for completing an operation associated with the memory access request in response to the first cache hit [pars. 0006, 0020-0021, 0080 – Data is returned from the level 1 cache in response to a cache hit. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]. Claim 7 (as applied to claim 1 above): Bozek et al.
disclose the method, further comprising: providing, by the second-level cache, data requested by the memory access request for completing an operation associated with the memory access request in response to the second cache hit [pars. 0006, 0020-0021, 0080 – Data is returned from the level 2 cache in response to a cache hit. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]. Claim 8 (as applied to claim 1 above): Bozek et al. disclose the method, further comprising: providing, by the main memory, data requested by the memory access request for completing an operation associated with the memory access request in response to the first cache miss and the second cache miss [pars. 0006, 0020-0021, 0080 – Data is returned from the main memory in response to a miss in the cache hierarchy. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]. Claim 11: Bozek et al. disclose a processing system which reduces cache thrashing, comprising: a first-level cache with a first storage capacity and a first access latency [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. 
The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”], and configured to: receive a memory access request associated with a workload [pars. 0006, 0020-0021, 0080 – A request is sent to the L1 cache. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; and determine whether a first cache hit or a first cache miss occurs at the first-level cache [pars. 0006, 0020-0021, 0080 – It is determined whether a hit or miss occurs. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; a second-level cache with a second storage capacity and a second access latency [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”], and configured to: receive the memory access request associated with the workload from the first-level cache in response to a first cache miss at the first-level cache [pars. 0006, 0020-0021, 0080 – The request is sent to the L2 cache when it misses in the L1 cache. 
(“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; and determine whether a second cache hit or a second cache miss occurs at the second-level cache [pars. 0006, 0020-0021, 0080 – It is determined whether a hit or miss occurs. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; and a main memory configured to receive the memory access request associated with the workload from the second-level cache in response to a second cache miss at the second-level cache [pars. 0006, 0020-0021, 0080 – The request is sent to main memory when it misses in the cache hierarchy. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; and a scheduler configured to: transmit the memory access request associated with the workload to the first-level cache [pars. 0006, 0020-0021, 0080 – A request is sent to the L1 cache. (“If an instruction can access either Level 1, 2, or 3 cache, this operation results in a cache hit. Otherwise, the processor must go to external memory (DIMMs 264) to obtain the data resulting in a much longer instruction time. 
It is most efficient if instructions are able to use information in the Level 1 cache so that the data can be accessed immediately.”)]; determine whether a relationship between a first hit rate of the first-level cache and a second hit rate of the second-level cache satisfies a predetermined criterion [par. 0091 – “For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”]; and decrease a value of the workload when the predetermined criterion is satisfied [par. 0091 – The VM is migrated (e.g. workload is set to zero on the previous machine). (“For example, in Physical Server #1, the first virtual machine VM1 has an L1 cache hit ratio equal to the number of L1 cache hits (i.e., 800) divided by the total number of memory accesses (i.e., 1000). 
Accordingly, the L1 cache hit ratio is 0.80 (alternatively expressed as 80 percent). Since there were only 200 L1 misses (i.e., 1000 memory accesses minus 800 L1 cache hits), there are only 200 potential memory accesses to the L2 cache. The L2 cache hit ratio is, therefore, equal to the number of L2 cache hits (i.e., 160) divided by the 200 memory accesses to the L2 cache. Accordingly the L2 cache hit ratio is 0.80. Similarly, the L3 cache hit ratio of 0.25 is calculated by dividing the 10 L3 cache hits divided by the 40 L2 cache misses. By comparing the class-specific cache hit ratios with the respective threshold ratios, it is seen that the L1 and L2 cache hit ratios are greater than their respective threshold ratios, but the L3 cache hit ratio is less than the respective threshold ratio. In other words, there is an L3 cache hit ratio exception. As a result, the virtual machine VM1 is identified as a candidate for migration to another physical server. The selection of an appropriate target physical server is discussed in relation to FIG. 8.”], wherein: the second storage capacity is higher than the first storage capacity [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”]; the second access latency is higher than the first access latency [fig. 6; par. 0080 – “FIG. 6 also shows the cache that is available to the processor in the server 240. The Level 1 cache 251 is local to the processor core 254 and can be accessed the fastest. The Level 2 cache 252 is still located on the processor module 250, but is further away from the core processor and although larger than the size of Level 1 cache is accessed slower.”]; However, Bozek et al. 
do not specifically disclose, a plurality of processing cores; the scheduler configured to: issue the M threads to the plurality of processing cores for processing a workload; M is an integer larger than 1. In the same field of endeavor, LeMay et al. disclose, a plurality of processing cores [par. 0021 – “The processor 120 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 120 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit.”]; the scheduler configured to: issue the M threads to the plurality of processing cores for processing a workload [par. 0027 – “Although illustrated as including a single guest virtual machine 206 with a single software thread 208, it should be understood that the environment 200 may include many guest virtual machines 206, and each guest virtual machine may include many software threads 208.”]; M is an integer larger than 1 [par. 0027 – “Although illustrated as including a single guest virtual machine 206 with a single software thread 208, it should be understood that the environment 200 may include many guest virtual machines 206, and each guest virtual machine may include many software threads 208.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Bozek et al. to include multithreading, as taught by LeMay et al., in order to improve performance. Claim 12 (as applied to claim 11 above): Claim 12, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 2 above, mutatis mutandis. Claim 13 (as applied to claim 11 above): Claim 13, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 2 above, mutatis mutandis. 
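Since the class-specific hit-ratio analysis of Bozek's par. 0091 is relied on throughout the rejections above, the calculation can be sketched as follows. The hit counts (1000 accesses; 800 L1, 160 L2, 10 L3 hits) come from the quoted paragraph; the threshold values are illustrative assumptions, since the quoted passage does not give concrete numbers for them.

```python
# Class-specific cache hit ratios per Bozek par. 0091: the hit ratio at
# each level is hits at that level divided by the requests that actually
# reached that level (i.e., the misses falling through from the level above).

def level_hit_ratios(total_accesses, hits_per_level):
    ratios = []
    reaching = total_accesses
    for hits in hits_per_level:
        ratios.append(hits / reaching)
        reaching -= hits  # misses fall through to the next level
    return ratios

# VM1 on Physical Server #1: 1000 accesses; 800 L1, 160 L2, 10 L3 hits
ratios = level_hit_ratios(1000, [800, 160, 10])
print(ratios)  # [0.8, 0.8, 0.25], matching the 0.80 / 0.80 / 0.25 in par. 0091

# Illustrative per-level threshold ratios (assumed, not from the reference):
thresholds = [0.5, 0.5, 0.5]
exceptions = [i + 1 for i, (r, t) in enumerate(zip(ratios, thresholds)) if r < t]
print(exceptions)  # [3] -> L3 hit-ratio exception; VM1 is a migration candidate
```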
Claim 14 (as applied to claim 11 above): Claim 14, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 4 above, mutatis mutandis. Claim 15 (as applied to claim 11 above): Claim 15, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 5 above, mutatis mutandis. Claim 16 (as applied to claim 11 above): Claim 16, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 6 above, mutatis mutandis. Claim 17 (as applied to claim 11 above): Claim 17, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 7 above, mutatis mutandis. Claim 18 (as applied to claim 11 above): Claim 18, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 8 above, mutatis mutandis. Claim(s) 9, 10, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bozek et al. (Pub. No. US 2012/0226866) in view of LeMay et al. (Pub. No. US 2016/0092673) as applied to claims 1 and 11 above, respectively, and further in view of Zheng et al. (Pub. No. US 2020/0065126). Claim 9 (as applied to claim 1 above): Bozek et al. and LeMay et al. disclose all the limitations above but do not specifically disclose the method, further comprising: determining whether an adjustment made to the value of M has met a predetermined condition; and increasing the value of M when the predetermined condition is met. In the same field of endeavor, Zheng et al. disclose, determining whether an adjustment made to the value of M has met a predetermined condition [par. 0067 – “At step 513, the resource management service 246 can check the last migration 242 to determine when the virtual machine 106 was last migrated from one host to another. 
To avoid the performance impacts of thrashing, where a VM 106 is repeatedly migrated between hosts in quick succession, the resource management service 246 may weigh the determination of whether or not to cause the VM 106 to migrate again. Accordingly, the resource management service 246 can then determine whether the time that the virtual machine 106 was last migrated occurred within a predefined window of time prior to the current time. If the VM 106 was last migrated outside of the predefined window of time, then the process skips to step 519. However, if the VM's 106 last migration 242 occurred within the predefined window of time, then the process proceeds to step 516.”]; and increasing the value of M when the predetermined condition is met [par. 0067 – “At step 513, the resource management service 246 can check the last migration 242 to determine when the virtual machine 106 was last migrated from one host to another. To avoid the performance impacts of thrashing, where a VM 106 is repeatedly migrated between hosts in quick succession, the resource management service 246 may weigh the determination of whether or not to cause the VM 106 to migrate again. Accordingly, the resource management service 246 can then determine whether the time that the virtual machine 106 was last migrated occurred within a predefined window of time prior to the current time. If the VM 106 was last migrated outside of the predefined window of time, then the process skips to step 519. However, if the VM's 106 last migration 242 occurred within the predefined window of time, then the process proceeds to step 516.”].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Bozek et al. and LeMay et al. to include evaluating a last migration time, as taught by Zheng et al., in order to improve performance by reducing thrashing.

Claim 10 (as applied to claim 9 above): Zheng et al.
disclose, wherein the predetermined condition is met when: the value of M has been reduced more than K times; a difference between an original value of M and a current value of N exceeds a predetermined value; or a predetermined period of time has elapsed since a first decrease of the value of M [par. 0067 – “At step 513, the resource management service 246 can check the last migration 242 to determine when the virtual machine 106 was last migrated from one host to another. To avoid the performance impacts of thrashing, where a VM 106 is repeatedly migrated between hosts in quick succession, the resource management service 246 may weigh the determination of whether or not to cause the VM 106 to migrate again. Accordingly, the resource management service 246 can then determine whether the time that the virtual machine 106 was last migrated occurred within a predefined window of time prior to the current time. If the VM 106 was last migrated outside of the predefined window of time, then the process skips to step 519. However, if the VM's 106 last migration 242 occurred within the predefined window of time, then the process proceeds to step 516.”].

Claim 19 (as applied to claim 11 above): Claim 19, directed to a processing system, is rejected for the same reasons set forth in the rejection of claim 9 above, mutatis mutandis.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Bozek et al. (Pub. No. US 2012/0226866) in view of LeMay et al. (Pub. No. US 2016/0092673) as applied to claim 11 above, and further in view of Tuan (Pub. No. US 2013/0226535).

Claim 20 (as applied to claim 11 above): Bozek et al. and LeMay et al. disclose all the limitations above but do not specifically disclose, wherein the plurality of processing cores, the scheduler and the first-level cache are implemented as a streaming multiprocessor (SM).
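The claim 9/10 logic addressed above (reducing M when thrashing is detected, then increasing it once a predetermined condition is met) can be sketched as follows. All class and parameter names, the default thresholds, and the use of a monotonic-clock cooldown are assumptions made for illustration; they are not the claimed implementation.

```python
import time

class ThreadCountController:
    # Illustrative sketch: M is decreased when cache thrashing is suspected,
    # and increased again once one of the predetermined conditions is met.

    def __init__(self, m, k_max_reductions=3, max_gap=4, cooldown_s=1.0):
        self.original_m = m            # original value of M
        self.m = m                     # current thread count
        self.reductions = 0            # how many times M has been reduced
        self.first_decrease_at = None  # timestamp of the first decrease of M
        self.k_max_reductions = k_max_reductions
        self.max_gap = max_gap
        self.cooldown_s = cooldown_s

    def decrease(self):
        # Called when thrashing is detected; never drops below one thread.
        if self.m > 1:
            self.m -= 1
            self.reductions += 1
            if self.first_decrease_at is None:
                self.first_decrease_at = time.monotonic()

    def condition_met(self):
        # Claim 10 recites three alternatives; any one of them suffices.
        reduced_more_than_k = self.reductions > self.k_max_reductions
        gap_exceeded = (self.original_m - self.m) > self.max_gap
        cooldown_elapsed = (
            self.first_decrease_at is not None
            and time.monotonic() - self.first_decrease_at >= self.cooldown_s
        )
        return reduced_more_than_k or gap_exceeded or cooldown_elapsed

    def maybe_increase(self):
        # Claim 9: increase the value of M when the condition is met.
        if self.condition_met() and self.m < self.original_m:
            self.m += 1
```

Note the structural parallel to Zheng's paragraph 0067: both gate a state change (re-migrating a VM there, restoring M here) on whether a predefined condition, such as a time window, has been satisfied.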
In the same field of endeavor, Tuan discloses, wherein the plurality of processing cores, the scheduler and the first-level cache are implemented as a streaming multiprocessor (SM) [fig. 2; par. 0054 – “FIG. 2 shows GPU architecture 200 that is suitable for implementing the concurrent simulation system of the present invention. GPU 200 may be, for example, a GPU having the Nvidia Fermi architecture. As shown in FIG. 2, GPU architecture 200 includes an integrated circuit having 16 streaming multiprocessors (e.g., streaming multiprocessor (SM) 201), second-level or "L2" cache 206, and six on-chip dynamic random access memory (DRAM) controllers 207-1 to 207-6 (collectively, "DRAM controllers 207") for accessing a global memory shared by all the SMs. DRAM controllers 207 may be, for example, configured to access a 64-bit wide memory. Each streaming processor includes 32 processor cores (e.g., processor core 202), register file 203, which includes 4096 32-bit registers, 64K-byte of memory divided between shared memory 206 and first-level (i.e., "L1") cache 204, and control circuitry including an instruction cache, warp schedulers and dispatch units. In one implementation, the 64K-byte memory may be divided into either as a 16K-byte shared memory 206 and 48K-byte cache.”].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Bozek et al. and LeMay et al. to include a streaming processor, as taught by Tuan, in order to improve performance and reduce power consumption.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Holmberg (Pub. No. US 2002/0138700) discloses, “Misses in caches can be classified into four categories: conflict, compulsory, capacity and coherence misses (see e.g. N. P.
Jouppi: Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers. The 17th International Symposium on Computer Architecture Conference proceedings (ISCA-17), 1990; internet publication http://www.research.digital.com/wrl/techreports/abstracts/TN-14.html). Conflict misses are misses that would not occur if the cache was fully-associative and had LRU replacement. Compulsory misses are misses required in any cache organization because they are the first references to an instruction or piece of data. Capacity misses occur when the cache size is not sufficient to hold data between references. Coherence misses are misses that occur as a result of invalidation to preserve multiprocessor cache consistency.” [par. 0013]

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571)270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LARRY T. MACKALL
Primary Examiner
Art Unit 2139
7 February 2026

/LARRY T MACKALL/
Primary Examiner, Art Unit 2139
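The four-way miss taxonomy quoted from Holmberg in the Conclusion (conflict, compulsory, capacity, coherence) can be made concrete with a small simulation: each miss in a direct-mapped cache is classified by comparing against a fully-associative LRU cache of the same size, exactly as the quoted definitions require. Coherence misses are omitted because the sketch models a single processor; all identifiers are illustrative and blocks are represented as integers.

```python
from collections import OrderedDict

def classify_misses(trace, num_lines):
    # Classify direct-mapped cache misses using the Jouppi/Holmberg taxonomy:
    # compulsory = first reference to a block; conflict = would have hit in a
    # fully-associative LRU cache of equal size; capacity = everything else.
    seen = set()                 # blocks referenced at least once
    lru = OrderedDict()          # fully-associative LRU cache, num_lines entries
    direct = {}                  # direct-mapped cache: index -> resident block
    counts = {"compulsory": 0, "capacity": 0, "conflict": 0}

    for block in trace:
        # Reference the fully-associative LRU cache.
        fa_hit = block in lru
        if fa_hit:
            lru.move_to_end(block)
        else:
            lru[block] = True
            if len(lru) > num_lines:
                lru.popitem(last=False)  # evict least-recently used

        # Reference the direct-mapped cache.
        idx = block % num_lines
        dm_hit = direct.get(idx) == block
        direct[idx] = block

        if dm_hit:
            continue
        if block not in seen:
            counts["compulsory"] += 1    # first reference: unavoidable
        elif fa_hit:
            counts["conflict"] += 1      # full associativity would have hit
        else:
            counts["capacity"] += 1      # cache too small to hold the data
        seen.add(block)

    return counts
```

For example, with two cache lines, the trace 0, 2, 0, 2, 0 ping-pongs two blocks that map to the same direct-mapped index, so every miss after the first references is a conflict miss; with one line, the trace 0, 1, 0 produces a capacity miss instead.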

Prosecution Timeline

Jan 24, 2025
Application Filed
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591389: MEMORY CONTROLLER AND OPERATION METHOD THEREOF FOR PERFORMING AN INTERLEAVING READ OPERATION (2y 5m to grant; granted Mar 31, 2026)
Patent 12572308: STORAGE DEVICE SUPPORTING REAL-TIME PROCESSING AND METHOD OF OPERATING THE SAME (2y 5m to grant; granted Mar 10, 2026)
Patent 12561065: PROVIDING ENDURANCE TO SOLID STATE DEVICE STORAGE VIA QUERYING AND GARBAGE COLLECTION (2y 5m to grant; granted Feb 24, 2026)
Patent 12555170: TRANSFORMER STATE EVALUATION METHOD BASED ON ECHO STATE NETWORK AND DEEP RESIDUAL NEURAL NETWORK (2y 5m to grant; granted Feb 17, 2026)
Patent 12554400: METHOD OF OPERATING STORAGE DEVICE USING HOST REQUEST BYPASS AND STORAGE DEVICE PERFORMING THE SAME (2y 5m to grant; granted Feb 17, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview (+8.1%): 93%
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 779 resolved cases by this examiner. Grant probability derived from career allow rate.
