Prosecution Insights
Last updated: April 19, 2026
Application No. 18/969,562

PREEMPTION IN A MACHINE LEARNING HARDWARE ACCELERATOR

Non-Final OA (§103, §112, §DP)
Filed: Dec 05, 2024
Examiner: SUN, MICHAEL
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 88% (679 granted / 768 resolved), above average at +33.4% vs the Tech Center average
Interview Lift: minimal, -1.6% across resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 17 applications currently pending
Career History: 785 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 36.9% (-3.1% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 768 resolved cases

Office Action

§103 §112 §DP
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . DETAILED ACTION Status of the Application This Office Action is in response to Applicant’s Continuation filed on 12/05/2024 and subsequent preliminary amendment filed 2/12/2025. Claims 21-40 are pending for this examination. Claims 1-20 were cancelled. Claims 21-40 were added. Information Disclosure Statement The information disclosure statement (IDS) submitted on 1/16/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Objections Claims 27, 34, and 40 are objected to because of the following informalities: In claim 27, line 2, Examiner believes “end-or-process” may be a typo and the intended claim language was --end-of-process--. In claim 34, line 2, Examiner believes “end-or-process” may be a typo and the intended claim language was --end-of-process--. In claim 40, lines 2-3, Examiner believes “end-or-process” may be a typo and the intended claim language was --end-of-process--. Appropriate correction is required. Claim Rejections - 35 U.S.C. § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 21-40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The term “long-running process” in claims 21, 28, and 35 is a relative term which renders the claim indefinite. The term “long-running process” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. More specifically, there is no definition in the specification that indicates what Applicants would be considering as “long-running”, thus making this a relative term that is indefinite, i.e. what one person of ordinary skill in the art would consider as “long-running” would differ from what another would consider as “long-running”. Obvious-Type Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. 
Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 21-22, 28-29, and 35 rejected on the ground of nonstatutory double patenting as being unpatentable over claim 3-4, 11-12, and 19 of U.S. Patent No. 12,197,959 (parent application s/n 18/036,506). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 21-22, 28-29, and 35 of instant Application, respectively contains every element of claims 3-4, 11-12, and 19 of U.S. Patent No. 12,197,959 (parent application s/n 18/036,506), as shown below with the difference between to two sets of claims underlined: Claims Instant Application Claims U.S. Patent No. 
12,197,959 (parent application s/n 18/036,506) Independent claim 21 A method of operating a machine learning accelerator, comprising: executing, by a scalar core directing a plurality of compute units of the machine learning accelerator, a first process in a first context, wherein the first process is a long-running process; identifying, by a job scheduler of the machine learning accelerator, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being sufficient for the second process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; switching, by the scalar core, to the first context; and resuming, by the scalar core, execution of the first process. Dependent claim 3 (Independent claim 1 AND dependent claim 3) Claim 1. A method of operating a machine learning accelerator, comprising: executing, by a scalar core directing a plurality of compute units of the machine learning accelerator, a first process in a first context, wherein the first process is a long-running process; identifying, by a job scheduler of the machine learning accelerator, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being less than a required amount for the second process: saving, by the scalar core, in-process values of the first process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; upon completion of the second process, switching, by the scalar core, to the first context; restoring, by the scalar core, the in-process values of the first process; and resuming, by the scalar core, execution of the first process. Claim 3. The method of claim 1, comprising in response to the amount of available resources being greater than the required amount for the higher priority process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing the second process; switching, by the scalar core, to the first context; and resuming execution of the first process. Analysis As seen above, the instant claims are a broader version of the claims of U.S. Patent No. 12,197,959 (parent application s/n 18/036,506), where the difference is in that the method is done in response to the amount of available resources being “sufficient” for the second process compared to U.S. Patent No. 12,197,959 (parent application s/n 18/036,506) where the method is done in response to the amount of available resources being “greater than the required amount” for a higher priority process. Overall, the language of the instant claims are a slightly different variation of dependent claim 3 seen in U.S. Patent No. 12,197,959 (parent application s/n 18/036,506) but falls within the same meaning and has all of the same steps being done, thus the instant independent claim is a broader version of the claims in U.S. Patent No. 
12,197,959 (parent application s/n 18/036,506). Independent claim 28 A system for preempting operations in a machine learning accelerator, comprising: a scalar core comprising one or more processors and configured to direct a plurality of compute units of the machine learning accelerator; a job scheduler; one or more tangible, non-transitory media operably connectable to the one or more processors and storing instructions that, when executed, cause the one or more processors to perform operations comprising: executing, by the scalar core, a first process in a first context, wherein the first process is a long-running process; identifying, by the job scheduler, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being sufficient for the second process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; switching, by the scalar core, to the first context; and resuming, by the scalar core, execution of the first process. Dependent claim 11 (Independent claim 9 AND dependent claim 11) Claim 9. A system for preempting operations in a machine learning accelerator, comprising: a scalar core comprising one or more processors and configured to direct a plurality of compute units of the machine learning accelerator; a job scheduler; one or more tangible, non-transitory media operably connectable to the one or more processors and storing instructions that, when executed, cause the one or more processors to perform operations comprising: executing, by the scalar core, a first process in a first context, wherein the first process is a long-running process; identifying, by the job scheduler, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being less than a required amount for the second process: saving, by the scalar core, in-process values of the first process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; upon completion of the second process, switching, by the scalar core, to the first context; restoring, by the scalar core, the in-process values of the first process; and resuming, by the scalar core, execution of the first process. Claim 11. The system of claim 9, comprising in response to the amount of available resources being greater than the required amount for the higher priority process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing the second process; switching, by the scalar core, to the first context; and resuming execution of the first process. Analysis As seen above, the instant claims are a broader version of the claims of U.S. Patent No. 12,197,959 (parent application s/n 18/036,506), where the difference is in that the method is done in response to the amount of available resources being “sufficient” for the second process compared to U.S. Patent No. 
12,197,959 (parent application s/n 18/036,506) where the method is done in response to the amount of available resources being “greater than the required amount” for a higher priority process. Overall, the language of the instant claims are a slightly different variation of dependent claim 11 seen in U.S. Patent No. 12,197,959 (parent application s/n 18/036,506) but falls within the same meaning and has all of the same steps being done, thus the instant independent claim is a broader version of the claims in U.S. Patent No. 12,197,959 (parent application s/n 18/036,506). Independent claim 35 A non-transitory computer readable storage medium storing instructions that, when executed by at least one processor, cause at least one processor of a machine learning accelerator to perform operations comprising: executing, by a scalar core directing a plurality of compute units of the machine learning accelerator, a first process in a first context, wherein the first process is a long-running process; identifying, by a job scheduler, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being sufficient for the second process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; switching, by the scalar core, to the first context; and resuming, by the scalar core, execution of the first process. Dependent claim 19 (Independent claim 17 AND dependent claim 19) Claim 17. A non-transitory computer readable storage medium storing instructions that, when executed by at least one processor, cause at least one processor of a machine learning accelerator to perform operations comprising: executing, by a scalar core directing a plurality of compute units of the machine learning accelerator, a first process in a first context, wherein the first process is a long-running process; identifying, by a job scheduler, that a second process is queued, wherein the second process has a higher priority than a priority of the long-running process, and upon reaching a preemption checkpoint: determining, by the scalar core, an amount of available resources and in response to the amount of available resources being less than a required amount for the second process: saving, by the scalar core, in-process values of the first process; switching, by the scalar core, to a second context; executing, by the scalar core, the second process; upon completion of the second process, switching, by the scalar core, to the first context; restoring, by the scalar core, the in-process values of the first process; and resuming, by the scalar core, execution of the first process. Claim 19. The medium of claim 17, comprising in response to the amount of available resources being greater than the required amount for the higher priority process: pausing, by the scalar core, execution of the first process; allocating available resources to the second process; switching, by the scalar core, to a second context; executing the second process; switching, by the scalar core, to the first context; and resuming execution of the first process. Analysis As seen above, the instant claims are a broader version of the claims of U.S. Patent No. 
12,197,959 (parent application s/n 18/036,506), where the difference is in that the method is done in response to the amount of available resources being “sufficient” for the second process compared to U.S. Patent No. 12,197,959 (parent application s/n 18/036,506) where the method is done in response to the amount of available resources being “greater than the required amount” for a higher priority process. Overall, the language of the instant claims are a slightly different variation of dependent claim 19 seen in U.S. Patent No. 12,197,959 (parent application s/n 18/036,506) but falls within the same meaning and has all of the same steps being done, thus the instant independent claim is a broader version of the claims in U.S. Patent No. 12,197,959 (parent application s/n 18/036,506). Claim Rejections - 35 U.S.C. § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 21-22, 26-28, 33-35, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Ahn et al. (US 2021/0256373), herein referred to as Ahn ‘373, in view of Serebrin (US 2009/0037936), herein referred to as Serebrin ‘936. Referring to claim 21, Ahn ‘373 teaches a method (see Abstract) of operating a machine learning accelerator (see Fig. 1, accelerator 140; see Paragraphs 0044 and 0047, wherein accelerator 140 is executing neural network-based inference tasks), comprising: executing, by a core (see Paragraph 0074) directing a plurality of compute units of the machine learning accelerator (see Fig. 2, accelerator 210 with PEs 213; see Paragraph 0076), a first process in a first context (see Paragraphs 0050, 0076-0077), wherein the first process is a long-running process (see Paragraph 0050, 0076-0077; see Fig. 4); identifying, by a job scheduler of the machine learning accelerator (see Fig. 3, instruction queue; see Paragraphs 0050 and 0071-0072, wherein tasks are scheduled, i.e. there is logic circuitry for scheduling tasks), that a second process is queued (see Fig. 3, instruction queue 330 holding target instructions of tasks; see Fig. 4, wherein task 3 arrives with higher priority than previously scheduled and executing tasks that have low priority), wherein the second process has a higher priority than a priority of the long-running process (see Paragraphs 0050 and 0073), and upon reaching a preemption checkpoint (see Paragraphs 0062-0063, where accelerator can start execution of a second tasks before the first task is completed, i.e. preemption-based scheduling done at the start point of the execution of the second task): determining, by the core, an amount of available resources (see Paragraphs 0062-0063, where the resource usage information gives availability of resource) and in response to the amount of available resources being sufficient for the second process [perform the claimed method] (contingent limitation that does not need to occur, see explanation below after the 103 portion of this rejection). 
However, Ahn ‘373 does not teach the type of processor being a scalar processor. Serebrin ‘936 teaches a computer architecture where the execution core is may be a scalar core or superscalar core (see Paragraph 0036). Ahn ‘373 and Serebrin ‘936 apply as analogous prior arts as both pertain to the same field of endeavor of processor systems with execution cores that handle scheduling and execution of instructions / tasks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ahn ‘373 system as set forth above to have the core being a scalar core to execute tasks, as taught by Serebrin ‘936, as a person of ordinary skill in the art would be motivated to use scalar processors to utilize a more cost-effective processor that has lower power consumption due to the simpler complexity in processor design, have simpler programming as instructions are executed one at a time in sequential order, and be more efficient for sequential, non-specialized tasks. Claim 21 above recites the following contingent limitations: a) the required limitation of “determining, by the core, an amount of available resources”, b) a first contingent limitation of “in response to the amount of available resources being sufficient for the second process, [perform the claimed method]”, and c) a second contingent limitation that implied as a second possibility based on the contingent of what happens in response to the amount of available resources being insufficient for the second process, where the claim language here remains silent / open ended on what is to be done (there is no claim language), i.e. anything or nothing can be done in the event of insufficient resources. As such, the broadest reasonable interpretation (BRI) of the claim encompasses the method comprising steps a) and b) (method 1) and a method comprising steps a) and c) (method 2), where mapping the BRI would only require limitation a) and only one of b) or c). Thereby under the BRI, the method claim here may not require the currently claimed steps recited for the contingent limitation as it is not a required element but only one possibility based on the contingent set forth in the claim language. In this instance of claim 21, Examiner has mapped the claim according to the required limitation a) and the silent contingent of c) which is essentially an open ended limitation of nothing being done. Applicants can correct this by amending the claim language to indicate that the current claim language is a required element that must occur, or to remove the “in response” to language which directly indicates a contingent limitation. As to claim 22, Examiner points out that claim 22 recites further limitations based on the contingent limitations found in the independent claim (in this case, dependent claim 22 expands / adds upon the allocating available resources limitation which is found in only one path of the contingent limitation), and thus this would be a dependent claim upon a contingent limitation, and would also not be required under the BRI. As to claim 26, Ahn ‘373 teaches the method of claim 21, wherein the second process is executed to completion (see Paragraphs 0050 and 0060; see Fig. 4). 
As to claim 27, Ahn ‘373 teaches the method of claim 26, wherein completion of the second process is indicated by the second process returning an end-or-process pointer to the core (see Paragraphs 0050 and 0060, where after completion of the second task, an indicator of completion would inherently be returned to the core which would allow the accelerator to switch the context back to the first process to be resumed; see Fig. 4). Referring to claim 28, Ahn ‘373 teaches a system (see Fig. 1, electronic device 10) for preempting operations (see Paragraphs 0062-0063; also see Fig. 2, preemption module 211) in a machine learning accelerator (see Fig. 1, accelerator 140; see Paragraphs 0044 and 0047, wherein accelerator 140 is executing neural network-based inference tasks), comprising: a core comprising one or more processors and configured to direct a plurality of compute units of the machine learning accelerator (see Fig. 2, accelerator 210 with multiple PEs 213; see Paragraphs 0074 and 0076); a job scheduler (see Fig. 3, instruction queue; see Paragraphs 0050 and 0071-0072, wherein tasks are scheduled, i.e. there is logic circuitry for scheduling tasks); one or more tangible, non-transitory media operably connectable to the one or more processors and storing instructions that, when executed, cause the one or more processors to perform operations (see Paragraphs 0089-0090) comprising: executing, by the core (see Paragraph 0074), a first process in a first context (see Paragraphs 0050, 0076-0077), wherein the first process is a long-running process (see Paragraph 0050, 0076-0077; see Fig. 4); identifying, by the job scheduler (see Fig. 3, instruction queue; see Paragraphs 0050 and 0071-0072, wherein tasks are scheduled, i.e. there is logic circuitry for scheduling tasks), that a second process is queued (see Fig. 3, instruction queue 330 holding target instructions of tasks; see Fig. 4, wherein task 3 arrives with higher priority than previously scheduled and executing tasks that have low priority), wherein the second process has a higher priority than a priority of the long-running process (see Paragraphs 0050 and 0073), and upon reaching a preemption checkpoint (see Paragraphs 0062-0063, where accelerator can start execution of a second tasks before the first task is completed, i.e. preemption-based scheduling done at the start point of the execution of the second task): determining, by the core, an amount of available resources (see Paragraphs 0062-0063, where the resource usage information gives availability of resource) and in response to the amount of available resources being sufficient for the second process (contingent limitation that does not need to occur): pausing, by the core, execution of the first process (see Paragraphs 0050 and 0056); allocating available resources to the second process (see Paragraphs 0050 and 0054-0060; see Fig. 4); switching, by the core, to a second context (see Paragraphs 0050, 0059, and 0073); executing, by the core, the second process (see Paragraphs 0050, 0060; see Fig. 4); switching, by the core, to the first context (see Paragraphs 0050, 0060, and 0073; see Fig. 4); and resuming, by the core, execution of the first process (see Paragraphs 0050, 0060, and 0073, where after execution of the second task associated with the preemption request is completed, the context information of the first task for which execution is suspended is allowed to be executed again starting from the point at which the first task was suspended). 
However, Ahn ‘373 does not teach the type of processor being a scalar processor. Serebrin ‘936 teaches a computer architecture where the execution core is may be a scalar core or superscalar core (see Paragraph 0036). Ahn ‘373 and Serebrin ‘936 apply as analogous prior arts as both pertain to the same field of endeavor of processor systems with execution cores that handle scheduling and execution of instructions / tasks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ahn ‘373 system as set forth above to have the core being a scalar core to execute tasks, as taught by Serebrin ‘936, as a person of ordinary skill in the art would be motivated to use scalar processors to utilize a more cost-effective processor that has lower power consumption due to the simpler complexity in processor design, have simpler programming as instructions are executed one at a time in sequential order, and be more efficient for sequential, non-specialized tasks. As to claim 33, Ahn ‘373 teaches the system of claim 28, wherein the second process is executed to completion (see Paragraphs 0050 and 0060; see Fig. 4). As to claim 34, Ahn ‘373 teaches the system of claim 33, wherein completion of the second process is indicated by the second process returning an end-or-process pointer to the core (see Paragraphs 0050 and 0060, where after completion of the second task, an indicator of completion would inherently be returned to the core which would allow the accelerator to switch the context back to the first process to be resumed; see Fig. 4). Referring to claim 35, Ahn ‘373 teaches a non-transitory computer readable storage medium storing instructions (see Paragraphs 0089-0090) that, when executed by at least one processor, cause at least one processor of a machine learning accelerator to perform operations comprising: executing, by a core (see Paragraph 0074) directing a plurality of compute units of the machine learning accelerator (see Fig. 2, accelerator 210 with PEs 213; see Paragraph 0076), a first process in a first context (see Paragraphs 0050, 0076-0077), wherein the first process is a long-running process (see Paragraph 0050, 0076-0077; see Fig. 4); identifying, by a job scheduler (see Fig. 3, instruction queue; see Paragraphs 0050 and 0071-0072, wherein tasks are scheduled, i.e. there is logic circuitry for scheduling tasks), that a second process is queued (see Fig. 3, instruction queue 330 holding target instructions of tasks; see Fig. 4, wherein task 3 arrives with higher priority than previously scheduled and executing tasks that have low priority), wherein the second process has a higher priority than a priority of the long-running process (see Paragraphs 0050 and 0073), and upon reaching a preemption checkpoint (see Paragraphs 0062-0063, where accelerator can start execution of a second tasks before the first task is completed, i.e. preemption-based scheduling done at the start point of the execution of the second task): determining, by the core, an amount of available resources (see Paragraphs 0062-0063, where the resource usage information gives availability of resource) and in response to the amount of available resources being sufficient for the second process (contingent limitation that does not need to occur): pausing, by the core, execution of the first process (see Paragraphs 0050 and 0056); allocating available resources to the second process (see Paragraphs 0050 and 0054-0060; see Fig. 
4); switching, by the core, to a second context (see Paragraphs 0050, 0059, and 0073); executing, by the core, the second process (see Paragraphs 0050, 0060; see Fig. 4); switching, by the core, to the first context (see Paragraphs 0050, 0060, and 0073; see Fig. 4); and resuming, by the core, execution of the first process (see Paragraphs 0050, 0060, and 0073, where after execution of the second task associated with the preemption request is completed, the context information of the first task for which execution is suspended is allowed to be executed again starting from the point at which the first task was suspended). However, Ahn ‘373 does not teach the type of processor being a scalar processor. Serebrin ‘936 teaches a computer architecture where the execution core is may be a scalar core or superscalar core (see Paragraph 0036). Ahn ‘373 and Serebrin ‘936 apply as analogous prior arts as both pertain to the same field of endeavor of processor systems with execution cores that handle scheduling and execution of instructions / tasks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ahn ‘373 system as set forth above to have the core being a scalar core to execute tasks, as taught by Serebrin ‘936, as a person of ordinary skill in the art would be motivated to use scalar processors to utilize a more cost-effective processor that has lower power consumption due to the simpler complexity in processor design, have simpler programming as instructions are executed one at a time in sequential order, and be more efficient for sequential, non-specialized tasks. As to claim 40, Ahn ‘373 teaches the system of claim 28, wherein the second process is executed to completion (see Paragraphs 0050 and 0060; see Fig. 4), wherein completion of the second process is indicated by the second process returning an end-or-process pointer to the core (see Paragraphs 0050 and 0060, where after completion of the second task, an indicator of completion would inherently be returned to the core which would allow the accelerator to switch the context back to the first process to be resumed; see Fig. 4). Claims 29 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Ahn ‘373, in view of Serebrin ‘936 and As to claim 29, Ahn ‘373 does not specifically teach the system of claim 28, wherein allocating available resources to the higher priority process comprises assigning a start and stop address for each memory in a plurality of memories of the plurality of compute units. Shirota ‘908 teaches a computer system with a shared memory accessible to a plurality of clusters (see Abstract; see Fig. 4, memory 25, clusters 1), where a priority determination circuit is connected to each port and has a start and end address register for holding the start address and end address of an instruction that is being executed (see Fig. 4, priority circuit 24 with start & end address register 242; see Paragraph 0044). Ahn ‘373, Serebrin ‘936, and Shirota ‘908 apply as analogous prior arts as all of these arts pertain to the same field of endeavor of a computer system systems with executing instructions that utilize a memory and priority scheduling. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination Ahn ‘373 and Serebrin ‘936 system as set forth above to have registers for holding the start address and end address of instructions being executed such that allocation of resources for executing the higher priority process would include storing / assigning the start and stop addresses for each memory involved in the execution of the higher priority process, as taught by Shirota ‘908, as a person of ordinary skill in the art would be motivated to use registers to hold starting and ending / stop addresses of instructions being executed in memory as knowing the start and end addresses is fundamental to how processors operate, particularly with instruction fetching, pipelined / sequential execution, and when to advance to the next instruction. As to claim 36, Ahn ‘373 does not specifically teach the medium of claim 35, wherein allocating available resources to the higher priority process comprises assigning a start and stop address for each memory in a plurality of memories of the plurality of compute units. Shirota ‘908 teaches a computer system with a shared memory accessible to a plurality of clusters (see Abstract; see Fig. 4, memory 25, clusters 1), where a priority determination circuit is connected to each port and has a start and end address register for holding the start address and end address of an instruction that is being executed (see Fig. 4, priority circuit 24 with start & end address register 242; see Paragraph 0044). Ahn ‘373, Serebrin ‘936, and Shirota ‘908 apply as analogous prior arts as all of these arts pertain to the same field of endeavor of a computer system systems with executing instructions that utilize a memory and priority scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination Ahn ‘373 and Serebrin ‘936 system as set forth above to have registers for holding the start address and end address of instructions being executed such that allocation of resources for executing the higher priority process would include storing / assigning the start and stop addresses for each memory involved in the execution of the higher priority process, as taught by Shirota ‘908, as a person of ordinary skill in the art would be motivated to use registers to hold starting and ending / stop addresses of instructions being executed in memory as knowing the start and end addresses is fundamental to how processors operate, particularly with instruction fetching, pipelined / sequential execution, and when to advance to the next instruction. Allowable Subject Matter Claims 23-25, 30-32, and 37-39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
As to claims 23, 30, and 37, Examiner finds that prior art does not specifically teach the system and method, wherein: at compile-time for the first process: determining a maximum allowable latency for the second process; identifying data synchronization checkpoints to be used as preemption points; determining a maximum expected time delay between data synchronization checkpoints; and in response to determining the maximum time delay between data synchronization checkpoints is above a predetermined threshold: inserting preemption checkpoints in code for the first process. Examiner Comments Examiner notes that the contingent limitations is applied to the method claim above (claim 21) / process claims as a category of claims where only the mapping of one path is needed, as under current office guidance on how to handle contingent limitations other categories of claims such as systems and product claims (like independent claims 28 and 35) can still recite claim language with contingent limitations and the BRI for these claims would still require the mapping of the structure for performing the recited contingent limitations, i.e. in the above analysis of claim 21 with limitations of a), b) and c), the structure needed for doing all three paths would be needed / mapped for the system claims and the medium claims and thus they were mapped as set forth above. Relevant Prior Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lin et al. (US 2010/0050184) teaches a multitasking system and method where preemptive context switching is done to consume less system resources thereby relatively less time spent on task switching. Goodman et al. (US 2021/0224072) teaches a context switching system where preemption leads to scheduler performing a context switching to execute the preemption task and restoring afterwards, which is standard context switching in response to a priority / preemption task. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL SUN whose telephone number is (571)270-1724. The examiner can normally be reached Monday-Friday 8am-4pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL SUN/Primary Examiner, Art Unit 2183
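For orientation, the preemption flow recited in independent claim 21 (and mirrored in claims 28 and 35) can be modeled in a few lines of Python. This is an illustrative sketch only, with hypothetical names (Process, ScalarCore, JobScheduler); it is not the applicant's implementation or the accelerator's actual hardware behavior, and the conditional branch marks the contingent limitation the examiner discusses.

```python
# Illustrative sketch only: a software model of the preemption flow recited in
# claim 21, using hypothetical names (Process, JobScheduler, ScalarCore). It is
# not the applicant's implementation or the accelerator's hardware behavior.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Process:
    name: str
    priority: int
    required_resources: int


@dataclass
class JobScheduler:
    queue: list = field(default_factory=list)

    def peek_higher_priority(self, running: Process) -> Optional[Process]:
        # Identify a queued process whose priority exceeds the running process.
        candidates = [p for p in self.queue if p.priority > running.priority]
        return max(candidates, key=lambda p: p.priority) if candidates else None


@dataclass
class ScalarCore:
    available_resources: int
    active_context: str = "ctx-1"

    def on_preemption_checkpoint(self, scheduler: JobScheduler, first: Process) -> None:
        second = scheduler.peek_higher_priority(first)
        if second is None:
            return  # nothing of higher priority is queued; keep running the first process
        # Contingent limitation of claim 21: the recited steps occur only when
        # the available resources are sufficient for the second process.
        if self.available_resources >= second.required_resources:
            # pause the first process and allocate resources to the second
            self.available_resources -= second.required_resources
            self.active_context = "ctx-2"   # switch to the second context
            # ... execute the second (higher-priority) process to completion ...
            self.available_resources += second.required_resources
            self.active_context = "ctx-1"   # switch back to the first context
            # ... resume execution of the first process ...
        # The claim is silent on the insufficient-resources case, which is the
        # open-ended path the examiner maps under the broadest reasonable
        # interpretation.
```

Under this reading, making the pause/allocate/switch/resume steps unconditional, or removing the "in response to" phrasing as the Office Action suggests, would leave only the single required path for the method claim.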
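The allowable subject matter in claims 23, 30, and 37 describes a compile-time pass over the first process's code. A minimal sketch of one way such a pass could work is below; the Op type, its field names, and the greedy insertion strategy are assumptions made for illustration, not the applicant's compiler.

```python
# Illustrative sketch only: one possible reading of the compile-time pass recited
# in claims 23, 30, and 37. Op, its fields, and the greedy insertion strategy are
# assumptions made for illustration, not the applicant's compiler.
from dataclasses import dataclass
from typing import List


@dataclass
class Op:
    name: str
    duration: float                    # expected execution time of this operation
    is_sync_checkpoint: bool = False   # data-synchronization checkpoints double as preemption points


def insert_preemption_checkpoints(program: List[Op], max_allowable_latency: float) -> List[Op]:
    """Insert explicit preemption checkpoints wherever the expected delay between
    consecutive checkpoints would exceed the maximum allowable latency determined
    for a higher-priority (second) process."""
    out: List[Op] = []
    since_last_checkpoint = 0.0
    for op in program:
        if since_last_checkpoint + op.duration > max_allowable_latency:
            out.append(Op("preemption_checkpoint", 0.0, True))
            since_last_checkpoint = 0.0
        out.append(op)
        if op.is_sync_checkpoint:
            since_last_checkpoint = 0.0  # existing sync points already serve as preemption points
        else:
            since_last_checkpoint += op.duration
    return out
```

For example, a long compute region with no synchronization points would receive extra checkpoints at roughly max_allowable_latency intervals, while a region with frequent sync points would be left unchanged.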

Prosecution Timeline

Dec 05, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591434: SHADOW CACHE FOR SECURING CONDITIONAL SPECULATIVE INSTRUCTION EXECUTION
2y 5m to grant; granted Mar 31, 2026
Patent 12585612: MEMORY DEVICE WITH EMBEDDED DEEP LEARNING ACCELERATOR IN MULTI-CLIENT ENVIRONMENT
2y 5m to grant; granted Mar 24, 2026
Patent 12585598: STORAGE DEVICE WITH HARDWARE ACCELERATOR
2y 5m to grant; granted Mar 24, 2026
Patent 12572478: Method and Apparatus for Dual Issue Multiply Instructions
2y 5m to grant; granted Mar 10, 2026
Patent 12561249: PREFETCHING USING A DIRECT MEMORY ACCESS ENGINE
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 87% (-1.6%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 768 resolved cases by this examiner. Grant probability derived from career allow rate.
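As a sanity check, the headline projections follow from the counts reported in the Examiner Intelligence section; the snippet below reproduces them, though the tool's actual model may weight additional factors.

```python
# Back-of-the-envelope reproduction of the dashboard figures; the tool's actual
# model may weight factors differently.
granted, resolved = 679, 768
career_allow_rate = granted / resolved                # ~0.884, shown as 88%
interview_lift = -0.016                               # -1.6% reported interview lift
with_interview = career_allow_rate + interview_lift   # ~0.868, shown as 87%
print(f"allow rate {career_allow_rate:.1%}, with interview {with_interview:.1%}")
```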
