Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 16 December 2025 has been entered.
Response to Amendment
The Amendment filed 16 December 2025 has been entered. Claims 1-20 remain pending in the application. Applicant's amendments to the Claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed 12 June 2025. Examiner further acknowledges the amendments to the claims, which result in withdrawal of the rejections under 35 U.S.C. § 112 and, upon further search and consideration, a new rejection under 35 U.S.C. § 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-6, 11, 13-15, 18-19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Busaba (U.S. Patent Pub. No. 2015/0089155), hereinafter referred to as Busaba, in view of Yamada et al. (U.S. Patent Pub. No. 2009/0007121), hereinafter referred to as Yamada, Cochcroft, Jr. (U.S. Patent No. 4,912,630), hereinafter referred to as Cochcroft, Gschwind (U.S. Patent Pub. No. 2015/0347301), and Qureshi (U.S. Patent Pub. No. 2011/0191546).
In regard to claim 1, Busaba teaches a system, comprising: at least one processor of a group of associated processors (Fig. 1); and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations (Busaba Fig. 3), comprising: receiving, from a thread of a group of threads, a read request for access to requested data; determining, based on the read request, that the requested data is unavailable in a first cache memory; based on the requested data being determined to be unavailable in the first cache memory, sending the read request for access to the requested data to storage server equipment for fulfillment of the read request (Paragraph 0255, lines 13-14 disclose use of multithreaded processors; Fig. 5 discloses a cache coherence process which achieves these limitations; Fig. 3 illustrates connected memory with a controller, e.g., storage server equipment); performing, by the storage server equipment, a snoop operation on a second cache memory, and transferring the requested data, from the storage server equipment, to the first cache memory (Busaba Paragraph 0104 discloses a multiprocessor environment where caches are interconnected to maintain coherency, and Paragraph 0202, lines 10-14 discloses snooping to update local cache lines, achieving the claimed limitation); providing access, by the thread of the group of threads, to the requested data stored in the first cache memory (Paragraph 0151 discloses setting a cache line to "shared" in response to a read request); incrementing, by a processor of the at least one processor, a counter value stored in a counter variable maintained by the processor (Fig. 11 illustrates counters for cache line tags managed by processor 1110; Paragraph 0009 discloses incrementing counters on "first communication" with a cache, e.g., read access, as well as on "qualifying contentions" (Paragraph 0275), which can include coherence misses (Paragraph 0276, lines 16-19)); and in response to the thread of the group of threads having modified the requested data to create modified requested data, writing the modified requested data to the first cache memory for future access to the modified requested data by the thread of the group of threads (Busaba Fig. 5 illustrates a process whereby modified bus packets are detected and saved to set a cache line as modified).
The previously cited references do not teach the remaining limitations of claim 1 relating to placing an inter-processor interrupt process into a hiatus state; however, Cochcroft Column 7, lines 22-38 disclose halting processor control while incrementing a counter during a burst read of an external cache. Column 7, lines 55-59 disclose returning to the previous state after the conclusion of incrementing. When combined with the previously cited disclosures, a processor can be halted upon incrementing an associated counter and restarted after completion. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Cochcroft in order to halt a processor during an incrementing operation and benefit from a cache address comparator having bus error and halt outputs for filling after a cache miss (Cochcroft Column 2, lines 8-11).
Cochcroft does not teach halting, by the processor, a scheduler process and an inter-processor interrupt process; however, Yamada Paragraph 0040, lines 1-7 teaches pausing interrupt transactions and thread scheduling in response to a determination by a CPU migration manager. When combined with the previously cited disclosures, the claimed limitation is achieved. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Yamada in order to halt specific processes when servicing a cache miss and "enable runtime migration of processors and memory modules" (Yamada Paragraph 0009).
The previously cited references do not explicitly teach an embodiment wherein the inter-processor interrupt process operating on the processor provides communication and coordination between software in execution on disparate processors comprising the at least one processor, and wherein the processor is a per-cpu processor of the at least one processor. However, Gschwind teaches utilizing inter-processor interrupts with software coordination to manipulate and synchronize cache on local and remote (i.e., external to the CPU) per-cpu processors (¶ 0099 discloses a local processor utilizing an interrupt to coordinate and synchronize cache invalidation with remote processors; Fig. 2 shows different independent CPUs cooperating in central processor complex 202). If implemented in combination with the other references, the interrupt could be halted by the CPU migration manager of Yamada, achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Gschwind in order to benefit from more frequent data synchronization between processors in a multiprocessor system (¶ 0091, lines 3-8: the disclosed synchronization method can guarantee page table entry updates, versus no guarantee of a timely update without synchronization).
The previously cited references do not explicitly teach counters maintained by each processor; however, Qureshi teaches an embodiment wherein each processor has an associated counter (see Figs. 1 and 2), achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Qureshi in order to record cache line hits or misses and "reduce memory latency by accessing memory in parallel with snooping the other processors" (Qureshi Paragraph 0003, lines 15-17).
As for claim 2, the rejection of claim 1 above addresses the majority of limitations and the previously cited references teach the system of claim 1, but the rejection of claim 1 does not address the limitation of, at a time in near contemporaneity with the receiving, from the first thread, the first read request for access to the requested data, receiving, from a second thread of a second group of threads in execution on a second processor of a second group of the at least one processor, a second read request for access to the requested data. However, Busaba Paragraphs 0147-0148 disclose pausing one or another transaction when contention occurs, which would allow for both requests to be completed in the same manner as the instant application. Additionally, Paragraph 0213, lines 17-28 disclose handling a snoop hit during a cacheable read or write by a "second bus master" (e.g., processor) by updating modified data, allowing the embodiment to achieve the claimed limitation.
Claim 2 also specifies retrieving data from a cache of another processor, but this would functionally occur in the disclosure of Busaba given the use of cache coherency and the presence of data caches in each CPU that are interconnected with a shared cache (Fig. 1 of Busaba). Additionally, the disclosure of Busaba addresses transactions between different caches (Paragraph 0265).
As for claim 3, the previously cited references teach the system of claim 2. Additionally, Qureshi teaches an embodiment wherein the operations further comprise incrementing, by the second processor, a second counter value stored in a second counter variable maintained by the second processor. Paragraph 0023, lines 1-5 disclose incrementing a counter managed by a processor (wherein each processor has a counter, see Figs. 1 and 2) when a cache miss is serviced by another processor's cache, achieving the claimed limitation.
As for claim 5, the previously cited references teach the system of claim 2. Additionally, Busaba Fig. 1 illustrates multiple CPUs having their own caches, achieving the claimed limitation.
As for claim 6, the previously cited references teach the system of claim 2. Additionally, Busaba Paragraph 0292, lines 1-7 disclose transferring data (e.g., requested data) from a shared cache to other processor caches, achieving the claimed limitation.
As for claim 11, applicant is directed to the rejection of claim 1 above, as the claims are directed to similar limitations and therefore rejected on the same rationale. However, the rejection of claim 1 does not explicitly address an embodiment wherein the storage server equipment, in response to receiving the read request, is to poll a second cache memory to determine that the shared data is unavailable in the second cache memory, and, based on the shared data being determined to be unavailable in the second cache memory, the storage server equipment is to send the shared data to the first cache memory. However, Busaba Paragraph 0265 discloses retrieving a cache line from a higher level cache or main memory after a coherence miss in another cache, achieving the claimed limitation.
As for claim 13, the previously cited references teach the method of claim 11. Additionally, Busaba discloses sending cache read requests between caches on processors (Paragraph 0292), achieving the claimed limitation.
As for claim 14, the previously cited references teach the method of claim 11. For remaining limitations, applicant is directed to the rejection of claim 5, as they are directed to the same limitations and therefore rejected on the same rationale.
As for claim 15, the previously cited references teach the method of claim 11. For remaining limitations, applicant is directed to the rejection of claim 6, as they are directed to the same limitations and therefore rejected on the same rationale.
As for claim 18, the previously cited references teach the method of claim 11. They do not teach the remaining limitations of claim 18. However, Cochcroft Column 7, lines 22-38 disclose halting processor control while incrementing a counter during a burst read of an external cache. Column 7, lines 55-59 disclose returning to the previous state after the conclusion of incrementing. When combined with previously cited disclosures, a processor can be halted upon incrementing an associated counter and restarted after completion. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Cochcroft in order to halt a processor during an incrementing operation and benefit from a cache address comparator having bus error and halt outputs for filling after a cache miss (Cochcroft Column 2, lines 8-11).
Cochcroft does not teach halting, by the processor, a scheduler process and an inter-processor interrupt process; however, Yamada Paragraph 0040, lines 1-7 teaches pausing interrupt transactions and thread scheduling in response to a determination by a CPU migration manager. When combined with the previously cited disclosures, the claimed limitation is achieved. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Yamada in order to halt specific processes when servicing a cache miss and "enable runtime migration of processors and memory modules" (Yamada Paragraph 0009).
As for claim 19, applicant is directed to the rejection of claim 11 above, as the claims are directed to the same limitations and therefore rejected on the same rationale. Additionally, see Fig. 19 of Busaba which shows a non-transitory machine-readable medium comprising instructions for carrying out disclosed embodiments.
As for claim 20, the previously cited references teach the storage medium of claim 19. For the remaining limitations of claim 20, applicant is directed to the rejection of claim 18 above, as they are directed to the same limitations and therefore rejected on the same rationale.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Busaba in view of Yamada, Cochcroft, Gschwind, Qureshi, and Chang et al. (Intl. Pub. No. 2011/002437), hereinafter referred to as Chang.
In regard to claim 4, the previously cited references teach the system of claim 3. They do not teach the remaining limitations of claim 4. However, Chang teaches storing reference counter values as part of a page cache (Chang Paragraph 0045, lines 1-2), which is dedicated to a specific "compute blade" (e.g., processing unit; see Fig. 2, compute blade 102 containing agent 217, which includes page cache 302 shown in Fig. 3), achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Chang in order to persist counter values and "facilitate the selection of [victim pages] and promote page migration" (Chang Paragraph 0044, lines 7-11).
As for claim 12, the previously cited references teach the method of claim 11. For remaining limitations, applicant is directed to the rejection of claim 4 above, as they are directed to the same limitations and therefore rejected on the same rationale.
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Busaba in view of Yamada, Cochcroft, Gschwind, Qureshi, and Davis et al. (U.S. Patent Pub. No. 2011/0172968), hereinafter referred to as Davis.
In regard to claim 7, the previously cited references teach the system of claim 2. They do not teach the remaining limitations of claim 7. However, Qureshi discloses setting an EarlyAccess policy on a processor (Paragraph 0003, lines 15-17 disclose EarlyAccess as accessing memory while snooping processors, e.g., a mode wherein data is not expected to be retrieved from other processors) if its cache miss counter is greater than 0. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Qureshi in order to change operating modes based on a counter value and "reduce memory latency by accessing memory in parallel with snooping the other processors" (Qureshi Paragraph 0003, lines 15-17).
Qureshi does not explicitly teach operating the first processor in a per-CPU state, but Davis discloses operating performance counters in a detailed mode, collecting data using a single processor in an array at a time (e.g. per CPU; Paragraph 0026, lines 1-4), achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Davis in order to track per-CPU cache misses when coherence miss frequency is low (0 counter) and avoid performing kernel-level operations when interacting with performance counts (Davis Paragraph 0018, lines 9-10).
As for claim 9, the previously cited references teach the system of claim 2. They do not teach the remaining limitations of claim 9. However, Qureshi discloses setting a LateAccess policy on a processor (Paragraph 0003, lines 9-12 disclose LateAccess as waiting for a snoop of another cache before accessing memory, e.g., a mode wherein data is expected to be retrieved from other processors) if its cache miss counter is 0. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Qureshi in order to change operating modes based on a counter value and "reduce memory latency by accessing memory in parallel with snooping the other processors" (Qureshi Paragraph 0003, lines 15-17).
Qureshi does not explicitly teach operating the first processor in an atomic state, but Davis discloses operating performance counters in a distributed mode, collecting data from all processors in an array into a single central monitor (e.g. atomic or shared state; Paragraph 0027, lines 1-2), achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Davis in order to track overall cache misses when coherence miss frequency is high (>0 counter) and avoid performing kernel-level operations when interacting with performance counts (Davis Paragraph 0018, lines 9-10).
Claims 10 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Busaba in view of Yamada, Cochcroft, Gschwind, Qureshi, and Guo (U.S. Patent Pub. No. 2007/0203960).
In regard to claim 10, the previously cited references teach the system of claim 2. They do not teach the remaining limitations of claim 10. However, Guo teaches a garbage collection unit which reclaims objects when their reference counter values reach zero (Paragraph 0082). When combined with the previous disclosures, this would reclaim cache memory, achieving the claimed limitation. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Guo in order to reclaim unused cache memory and benefit from low delay concurrent garbage collection (Paragraph 0041, line 2).
Guo does not teach an embodiment including placing in hiatus a scheduler process operational on the first processor and placing in hiatus an inter-processor interrupt process operational on the first processor. However, when combined with the disclosure of Yamada Paragraph 0040, lines 1-7, which teaches pausing interrupt transactions and thread scheduling in response to a determination by a CPU migration manager, the claimed limitation is achieved. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Yamada in order to halt processes when reclaiming unused memory and "enable runtime migration of processors and memory modules" (Yamada Paragraph 0009).
As for claim 16, the previously cited references teach the method of claim 11. They do not teach the remaining limitations of claim 16. However, Guo teaches a garbage collection unit which reclaims objects when their reference counter values reach zero (Paragraph 0082). When combined with previous disclosures, this would reclaim cache memory. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Guo in order to reclaim unused cache memory and benefit from low delay concurrent garbage collection (Paragraph 0041, line 2).
Guo does not teach a process that transfers the modified shared resource data stored in the first cache memory to the storage server equipment; however, this is a regular function of the disclosure of Busaba whenever a cache access occurs (see Fig. 5; Paragraph 0215), meaning it would also occur if the cache is accessed for reclamation, achieving the claimed limitation.
As for claim 17, the previously cited references teach the method of claim 16. Additionally, Busaba Fig. 11 illustrates counters for cache line tags managed by processor 1110 and Paragraph 0009 discloses incrementing counters on "qualifying contentions" (e.g. access by another thread; Paragraph 0275) which can include coherence misses (Paragraph 0276, lines 16-19), achieving the claimed limitation.
Response to Arguments
Applicant's arguments filed 16 December 2025 (see page 8 of the response) were considered but found unpersuasive. Amendments to the independent claims necessitated further search and consideration, wherein new reference Gschwind was found to teach the newly added limitations concerning an inter-processor interrupt process that coordinates software execution on disparate processors. Additionally, as seen in the updated rejection, Busaba was found to already teach a group of processors and Qureshi was found to already teach counter variables maintained by each processor.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAKARIA MOHAMMED BELKHAYAT whose telephone number is (571)270-0472. The examiner can normally be reached Monday through Thursday, 7:30 AM-5:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Reginald Bragdon, can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAKARIA MOHAMMED BELKHAYAT/Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139