Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,604

MECHANISM FOR SHARING A COMMON RESOURCE IN A MULTI-THREADED ENVIRONMENT

Non-Final OA: §101, §103, §112
Filed: Apr 28, 2023
Examiner: LIN, HSING CHUN
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Texas Instruments Incorporated
OA Round: 2 (Non-Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 2-3
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% (64 granted / 108 resolved; +4.3% vs TC avg)
Interview Lift: +79.8% across resolved cases with interview (strong)
Typical Timeline: 3y 4m avg prosecution; 37 applications currently pending
Career History: 145 total applications across all art units

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 34.0% (-6.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 108 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-24 are pending in this application.

Response to Arguments

Applicant's arguments regarding the rejections of claims 1-24 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections have been withdrawn.

Applicant's arguments regarding the 35 U.S.C. 101 rejections of claims 11-14 and 19 have been fully considered but they are not persuasive. Regarding the 35 U.S.C. 101 rejection, the applicant argues the following in the remarks:

(a) It is simply impossible for a human to analyze a computer queue. A human mind cannot practically determine the maximum available transaction length of a request queue in the mind because the problem requires precise, parallel evaluation of ordering, constraints, and cumulative state across many elements. Such operations depend on exact counting and algorithmic iteration rather than intuitive reasoning. Without an explicit computational process, the necessary comparisons and updates exceed what unaided mental simulation can reliably perform.

(b) Furthermore, queue operations are performed at speeds on the scale of nanoseconds. A human mind attempting to analyze a queue updating every ten nanoseconds will be fundamentally incapable of tracking its state, since the queue changes orders of magnitude faster than human perception, working memory, and conscious reasoning can operate.

Examiner has thoroughly considered Applicant's arguments, but respectfully finds them unpersuasive for at least the following reasons:

As to point (a), the examiner considers this argument to be moot since the mental process in claim 11 is "determining a maximum available transaction length based at least in part on the plurality of requests" and the mental process has nothing to do with analyzing a computer queue.
As to point (b), the examiner considers this argument to be moot since the mental process recited in claim 11 is "determining a maximum available transaction length based at least in part on the plurality of requests" and not "determine the maximum available transaction length of a request queue".

Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-24 have been fully considered but they are moot in light of the references being applied in the current rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 18-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per claim 18: Lines 7-8 recite "receive, after the first request is preempted and rejected from the queue, a notification that the first request was preempted", but this is not supported by the specification. The specification recites in [0076] "If the shared resource doesn't support preemption and resumption, the lower priority request is rejected and the requestor may be notified" and in [0034] "If resumption is not supported, resource manager 110 may preempt and reject a currently executing lower priority request." The first request is not rejected from the queue, but rejected by a resource manager. Claims 19-24 are dependent claims of claim 18 and fail to resolve the deficiencies of claim 18, so they are rejected for the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-14 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.

As per claim 11, in step 1 of the 101 analysis, the examiner has determined that the claim is directed to a method. Therefore, the claim is directed to one of the four statutory categories of invention. In step 2A prong 1 of the 101 analysis, the examiner has determined that the claim recites a judicial exception. Specifically, the limitation "determining a maximum available transaction length based at least in part on the plurality of requests" is a mental process. Determining a maximum available transaction length is a mental process since humans can observe data to determine a duration that the shared resource is available for a transaction.
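For context on what the disputed limitation involves in practice, the determination can be read as a scan over queued, scheduled requests that tracks cumulative busy state and measures idle gaps — the element-by-element iteration the Applicant's remarks describe. The sketch below is purely illustrative: the claim recites no particular algorithm, and the slot model, names, and time units are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    start: int  # hypothetical scheduled start time (e.g., microseconds)
    end: int    # hypothetical scheduled end time

def max_available_transaction_length(queue: list[Request], horizon: int, now: int = 0) -> int:
    """Return the longest contiguous idle window given the queued requests.

    Walks the requests in start-time order, maintaining the cumulative end
    of the busy region and measuring each idle gap along the way.
    """
    longest = 0
    cursor = now
    for req in sorted(queue, key=lambda r: r.start):
        if req.start > cursor:                      # idle gap before this request
            longest = max(longest, req.start - cursor)
        cursor = max(cursor, req.end)               # cumulative busy-state update
    longest = max(longest, horizon - cursor)        # trailing gap up to the horizon
    return longest

queue = [Request(10, 25), Request(40, 55), Request(30, 35)]
print(max_available_transaction_length(queue, horizon=100))  # longest idle gap is 55..100 -> 45
```

The sort, per-element comparison, and running-maximum update are the precise bookkeeping steps the remarks contend exceed unaided mental simulation when the queue changes on nanosecond timescales.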
In step 2A prong 2 of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not integrate the judicial exceptions into a practical application for the following rationale: The limitations "adding a plurality of requests from one or more requestors to a queue for a shared resource", "adding a first request from a first requestor to the queue for the shared resource", and "notifying the first requestor that the first request exceeds the maximum available transaction length" represent insignificant, extra-solution activities. The term "extra-solution activity" can be understood as "activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim" (MPEP 2106.05(g)). The examiner has determined that these limitations are directed to mere data gathering activities, which is a category of insignificant extra-solution activities (MPEP 2106.05(g)).

In step 2B of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not recite significantly more than the abstract ideas identified above for the following rationale: The same three limitations represent insignificant, extra-solution activities. They are well-understood, routine, or conventional because they are directed to "receiving or transmitting data" or "storing and retrieving information in memory" (MPEP 2106.05(d)). These are additional elements that the courts have recognized as well-understood, routine, or conventional (MPEP 2106.05(d)). The citation of court cases in the MPEP meets the Berkheimer evidentiary burden since citation of a court case in the MPEP is one of the four types of evidentiary support that can be used to prove that the additional elements are well-understood, routine, or conventional (see Berkheimer v. HP Inc., 125 USPQ2d 1649). Thus, the limitations do not amount to significantly more than the abstract idea.

As per claim 12, it recites "notifying the first requestor of the maximum available transaction length", which is an insignificant extra-solution activity that is well-understood, routine, or conventional because it is directed to "receiving or transmitting data". Therefore, the additional element neither integrates the judicial exception into a practical application nor recites significantly more.

As per claim 13, it recites "wherein the revised request fits within the maximum available transaction length", which is an attribute of the technological environment, and "adding a revised request from the first requestor to the queue for the shared resource", which is an insignificant extra-solution activity that is well-understood, routine, or conventional because it is directed to "storing and retrieving information in memory". Therefore, the additional elements neither integrate the judicial exception into a practical application nor recite significantly more.
As per claim 14, it recites "wherein the revised request is scheduled to execute when its transaction length fits within an estimated duration of availability window", which is a mental process since scheduling can be performed in the human mind.

As per claim 19, in step 1 of the 101 analysis, the examiner has determined that the claim is directed to a system. Therefore, the claim is directed to one of the four statutory categories of invention. In step 2A prong 1 of the 101 analysis, the examiner has determined that the claim recites a judicial exception. Specifically, the limitation "determining a maximum available transaction length based at least in part on the plurality of requests" is a mental process. Determining a maximum available transaction length is a mental process since humans can observe data to determine a duration that the shared resource is available for a transaction.

In step 2A prong 2 of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not integrate the judicial exceptions into a practical application for the following rationale: The limitations "add a first request from a first requestor to a queue for a shared resource", "add a second request from a second requestor to the queue for the shared resource", "receive, after the first request is preempted and rejected from the queue, a notification that the first request was preempted", "responsive to receiving the notification, add the first request to the queue again for the shared resource", and "add a plurality of requests from one or more requestors to the queue for the shared resource" represent insignificant, extra-solution activities. The term "extra-solution activity" can be understood as "activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim" (MPEP 2106.05(g)). The examiner has determined that these limitations are directed to mere data gathering activities, which is a category of insignificant extra-solution activities (MPEP 2106.05(g)).

The limitations "wherein the first request has a first priority" and "wherein the second request has a second priority that is higher than the first priority" merely describe attributes of the technological environment in which the abstract idea is operating. The courts have identified that generally linking the use of a judicial exception to a technological environment does not integrate the judicial exception into a practical application (MPEP 2106.04(d)(I)). The limitation "a processor configured to" applies judicial exceptions on a generic computer. "Alappat's rationale that an otherwise ineligible algorithm or software could be made patent-eligible by merely adding a generic computer to the claim was superseded by the Supreme Court's Bilski and Alice Corp. decisions"; therefore, applying judicial exceptions on a processor, which is a generic computing component, does not integrate the judicial exceptions into a practical application (MPEP 2106.05(b)).

In step 2B of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not recite significantly more than the abstract ideas identified above for the following rationale: The same five limitations identified above represent insignificant, extra-solution activities. They are well-understood, routine, or conventional because they are directed to "receiving or transmitting data" or "storing and retrieving information in memory" (MPEP 2106.05(d)). These are additional elements that the courts have recognized as well-understood, routine, or conventional (MPEP 2106.05(d)). The citation of court cases in the MPEP meets the Berkheimer evidentiary burden since citation of a court case in the MPEP is one of the four types of evidentiary support that can be used to prove that the additional elements are well-understood, routine, or conventional (see Berkheimer v. HP Inc., 125 USPQ2d 1649).
Thus, the limitations do not amount to significantly more than the abstract idea. The limitations "wherein the first request has a first priority" and "wherein the second request has a second priority that is higher than the first priority" merely describe attributes of the technological environment and therefore do not amount to significantly more than the exception itself (MPEP 2106.05(h)). The limitation "a processor configured to" applies judicial exceptions on a generic computer and therefore does not provide significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu Raghu et al. (US 11219039 B2, hereinafter Kandhalu) in view of Farrell et al. (US 20130185732 A1, hereinafter Farrell). Kandhalu was cited in the IDS filed on 04/28/2023.

As per claim 1, Kandhalu teaches a method, comprising:

adding a first request from a first requestor to a queue for a shared resource, wherein the first request has a first priority (Col. 7 lines 39-40: three low priority radio commands B1, B2, and B3 from protocol stack B 306; Col. 7 lines 35-36: a queued low priority radio command; Col. 6 lines 10-12: the new radio command is low priority, the radio command scheduler 314 attempts to add the new radio command to the radio command queue; Col. 3 lines 25-27: A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks);

providing the first request to the shared resource from the queue; processing the first request at the shared resource (Fig. 7; Col. 7 lines 62-64: the low priority non-time critical radio command B1 is currently being executed by the radio driver 310; Col. 7 lines 35-36: a queued low priority radio command);

adding a second request from a second requestor to the queue for the shared resource, wherein the second request has a second priority that is higher than the first priority (Col. 7 lines 22-24: receives a new radio command A4 from protocol stack A 308. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 7 lines 40-45: At time T2, the radio command scheduler 314 receives a new radio command A4 from the protocol stack A 308 that is high priority as per the scheduling policy for the current states of the protocol stack. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 6 lines 64-66: If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority);

preempting the processing of the first request and notifying the first requestor of the preemption, providing duration of availability for the shared resource; providing the second request to the shared resource from the queue; and processing the second request at the shared resource (Col. 6 line 64-Col. 7 line 9: If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority, the radio command scheduler 314 pre-empts the queued low priority radio command to free up the specified time slot for the new radio command in the radio command queue 320 and inserts the new radio command in the radio command queue 314…the radio command scheduler 314 aborts the pre-empted radio command and notifies the protocol stack that issued the low priority radio command that the radio command has been pre-empted; Col. 5 lines 59-66: If the new radio command is a high priority command as per the scheduling policy, the currently executing radio command is a low priority command, and the start and end time parameters of the new radio command overlap the time slot of the low priority command, the scheduler 314 aborts execution of the low priority radio command and places the new radio command at the head of the radio command queue 320 to be immediately executed; Col. 6 lines 2-4: the radio command scheduler 314 notifies the protocol stack that issued the low priority radio command that the radio command was aborted; Col. 6 lines 18-20: If the time slot specified by the start time and end time parameters of the new radio command is available between two queued radio commands; Col. 7 lines 16-18: FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320; Col. 3 lines 25-27: A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks).

Kandhalu fails to teach wherein notifying the first requestor of the preemption includes providing the first requestor with duration of availability for the shared resource.

However, Farrell teaches wherein notifying the first requestor of the preemption includes providing the first requestor with duration of availability for the shared resource ([0033]: In one particular embodiment, a capability is provided in which a guest program executing on a guest CPU provisioned by a host CPU is provided a warning of expiration of a timeslice given to the guest CPU from the host CPU or of pre-emption by the host of the guest's timeslice. The warning provides a grace period that the guest CPU can use to perform a particular function, such as complete execution of a dispatchable unit; [0050]: the grace period is provided in response to expiration of the timeslice, or in response to the host pre-empting the guest; [0035]: Each processor (and/or a program, such as an operating system, executing on the processor) is given a certain amount of time, referred to as a timeslice, to share the resources; claim 8: the grace period provides a period in addition to the timeslice; [0110]: In one embodiment, a logical (guest) processor running in a timeslice on a physical processor receives a warning signal indicating a grace period, e.g., an amount of time before the logical processor will be interrupted (deallocated from the physical processor that may be shared) enabling the work being done by the logical processor to be either completed. The warning of pre-emption includes a grace period, which is a duration of availability for the shared resource since it is an extension of a timeslice, which is an amount of time that a guest processor is allocated to a shared resource.).
It would have been obvious to one having ordinary skill in the art before the effective filling date of the claimed invention to have combined Kandhalu with the teachings of Farrell to improve execution duration of time sensitive work (see Farrell [0145] In a further embodiment, one or more aspects of the invention can be used with requests from an operating system to let an individual execution thread continue to improve elapsed time of time sensitive work. That is, a thread may request or be provided additional time to perform a function.). As per claim 5, Kandhalu and Farrell teach the method of claim 1. Kandhalu teaches wherein the first request is associated with a Bluetooth® Low Energy protocol (Col. 2 lines 17-19 Embodiments of the disclosure provide for concurrent execution of multiple protocols, e.g., Bluetooth Low Energy (BLE); Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks.). As per claim 6, Kandhalu and Farrell teach the method of claim 5. Kandhalu teaches wherein the second request is associated with a Zigbee protocol (Col. 2 lines 56-58 he protocol software includes protocol stacks for the supported protocols, e.g., Thread, Zigbee; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks.). As per claim 7, Kandhalu and Farrell teach the method of claim 1. Kandhalu teaches further comprising: receiving a third request with a third priority at the queue; receiving a fourth request with a fourth priority at the queue (Col. 7 lines 16-21 FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320. 
The original schedule 500 includes three radio commands A1, A2, and A3 from protocol stack A 308 and three radio commands B1, B2, and B3 from protocol stack B 306; Col. 7 lines 37-40 The original schedule 600 includes three high priority radio commands A1, A2, and A3 from protocol stack A 308 and three low priority radio commands B1, B2, and B3 from protocol stack B 306.); when the third priority is higher than the fourth priority, complete the third request first with the shared resource; when the fourth priority is higher than the third priority, complete the fourth request first with the shared resource (Col. 6 line 64-Col. 7 line 3 If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority, the radio command scheduler 314 pre-empts the queued low priority radio command to free up the specified time slot for the new radio command in the radio command queue 320 and inserts the new radio command in the radio command queue 314; Col. 6 lines 40-45 If the new radio command is low priority, the radio command scheduler 314 cannot pre-empt the queued radio command occupying the specified time slot in favor of the new radio command as a low priority radio command cannot pre-empt either a high priority radio command; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks); and when the third priority and the fourth priority are equal, complete the third request and the fourth request with round robin scheduling (Col. 6 lines 52-60 If the new radio command is high priority and the queued radio command occupying the specified time slot is also high priority, the radio command scheduler 314 cannot pre-empt the queued radio command in the specified time slot in favor of the new radio command as a high priority radio command cannot pre-empt another high priority radio command. 
In this instance, the radio command scheduler 314 appends the new radio command to the end of the radio command queue 320). Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Farrell, as applied to claim 1 above, in view of Nikuie et al. (US 20230141986 A1 hereinafter Nikuie). As per claim 2, Kandhalu and Farrell teach the method of claim 1. Kandhalu teaches further comprising: resending the first request from the first requestor to the queue for the shared resource, wherein the first request fits within the duration of availability (Fig. 7; Col. 7 lines 3-5 The radio command scheduler 314 appends the pre-empted low priority radio command to the end of the radio command queue 320; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks; Col. 8 lines 3-9 The radio command scheduler 314 causes the radio driver 310 to abort execution of the low priority radio command B1 and places the new radio command A at the head of the radio command queue 320. Because B1 is not time critical, the radio command scheduler 314 reschedules B1 by appending B1 to the radio command queue 320, as shown in the new schedule 702; Col. 7 lines 16-18 FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320). Kandhalu and Farrell fail to teach after notifying the first requestor of the preemption, resending the first request. However, Nikuie teaches after notifying the first requestor of the preemption, resending the first request ([0014] In another example, CPU 102 may issue a series of five requests to complete a write transaction of 16 KB to flash target 104, e.g., four write transfer requests followed by a write request. 
Because write requests are low priority, a read request arriving before all five requests in the write sequence can interrupt may be scheduled by arbiter 110 immediately, thus interrupting the larger write transaction. In some examples, arbiter 110 may mark all five requests as incomplete and return them to the queue. In some examples, arbiter 110 may signal CPU 102 that the write transaction was preempted. CPU 102 may requeue or cancel the preempted write transaction.). It would have been obvious to one having ordinary skill in the art before the effective filling date of the claimed invention to have combined Kandhalu and Farrell with the teachings of Nikuie so low priority requests can be requeued to be continued (see Nikuie [0014] In another example, CPU 102 may issue a series of five requests to complete a write transaction of 16 KB to flash target 104, e.g., four write transfer requests followed by a write request. Because write requests are low priority, a read request arriving before all five requests in the write sequence can interrupt may be scheduled by arbiter 110 immediately, thus interrupting the larger write transaction. In some examples, arbiter 110 may mark all five requests as incomplete and return them to the queue. In some examples, arbiter 110 may signal CPU 102 that the write transaction was preempted. CPU 102 may requeue or cancel the preempted write transaction.). As per claim 3, Kandhalu, Farrell, and Nikuie teach the method of claim 2. Kandhalu teaches wherein resending the first request includes altering the first request to fit within the duration of availability (Fig. 7; Col. 7 lines 9-14 In each of the above instances in which the radio command scheduler 314 appends a radio command to the radio command queue 320, the radio command scheduler 314 modifies the time parameters of the appended radio command as needed to accommodate the start time and end time deferral.). Claim 4 is rejected under 35 U.S.C. 
103 as being unpatentable over Kandhalu and Farrell, as applied to claim 1 above, in view of Dignum et al. (US 20070113222 A1 hereinafter Dignum). As per claim 4, Kandhalu and Farrell teach the method of claim 1. Kandhalu and Farrell fail to teach wherein the shared resource is a cryptographic engine. However, Dignum teaches wherein the shared resource is a cryptographic engine ([0058] a single parser unit or cryptographic unit is included in an accelerator and may be shared by multiple cores and/or multiple processor threads.). It would have been obvious to one having ordinary skill in the art before the effective filling date of the claimed invention to have combined Kandhalu and Farrell with the teachings of Dignum to provide security (see Dignum [0070] Cryptographic engines 214a-214n may be configured to decrypt an encrypted portion of an XML document, encrypt a portion of an outgoing document, verify or compute a digital signature, and/or perform other security-related functions. For example, a cryptographic unit 214 may facilitate the enforcement of SSL (Secure Sockets Layer) security, web services security, XML security, IPSec or some other security scheme. The cryptographic unit may be configured to apply a cryptographic algorithm such as DES, 3DES, AES, MD5, RC4, SHA or some other algorithm now known or hereafter developed.). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu in view of Bates (US 20090049451 A1). As per claim 8, Kandhalu teaches a method, comprising: adding a first request from a first requestor to a queue for a shared resource, wherein the first request has a first priority (Col. 7 lines 39-40 three low priority radio commands B1, B2, and B3 from protocol stack B 306; Col. 7 lines 35-36 a queued low priority radio command; Col. 
3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks); providing the first request to the shared resource from the queue; processing the first request at the shared resource (Fig. 7; Col. 7 lines 62-64 the low priority non-time critical radio command B1 is currently being executed by the radio driver 310; Col. 7 lines 35-36 a queued low priority radio command); adding a second request from a second requestor to the queue for the shared resource, wherein the second request has a second priority that is higher than the first priority (Col. 7 lines 22-24 receives a new radio command A4 from protocol stack A 308. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 7 lines 40-45 At time T2, the radio command scheduler 314 receives a new radio command A4 from the protocol stack A 308 that is high priority as per the scheduling policy for the current states of the protocol stack. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 6 lines 64-66 If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority); Kandhalu fails to teach the second request also includes a hold time; when the shared resource can complete the first request within the hold time, completing the first request and then processing the second request; and when the shared resource cannot complete the first request within the hold time, preempting the first request and then processing the second request. 
However, Bates teaches the second request also includes a hold time; when the shared resource can complete the first request within the hold time, completing the first request and then processing the second request; and when the shared resource cannot complete the first request within the hold time, preempting the first request and then processing the second request ([0043] Each wait-time attribute T.sub.W1 . . . T.sub.W2 specifies how long the corresponding thread can wait before pre-empting a lower priority thread; [0026] notify a first thread running on one or more of the processors of a pre-emption by a second thread having a higher priority than the first thread; [0032] As indicated at 204 the operating system OS may wait for the first thread to wind up execution. The first thread may wind up execution by completing its execution within the allotted wait time T.sub.W; [0026] The operating system may also notify the first thread of a time limit for pre-emption (referred to herein as a pre-emption wait time attribute) associated with the second thread. The application running the first thread may yield the one the processor(s) held by the first thread to the second thread within the time limit without saving a context of the first thread if the first thread can wind up within the time limit; [0011] Co-processors and cache may be shared among different software threads; [0033] If the first thread does not wind up execution within the wait time T.sub.W, the first thread may be preempted as indicated at 208. A context switch from the first thread to the second thread may then be performed). 
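For illustration only (not part of the Office Action record), the hold-time arbitration that Bates is cited for above can be sketched roughly as follows. This is a hypothetical Python sketch: the `Request` fields and the returned labels are illustrative names, not terms drawn from any cited reference.

```python
from dataclasses import dataclass

@dataclass
class Request:
    requestor: str
    priority: int
    remaining_time: int = 0  # time units the shared resource still needs to finish
    hold_time: int = 0       # how long this request can wait before preempting

def arbitrate(current: Request, incoming: Request) -> str:
    """Decide how the shared resource handles an arriving request.

    If the incoming request has higher priority and the current request can
    finish within the incoming request's hold time, let it complete first;
    otherwise preempt it. Lower/equal-priority arrivals simply queue.
    """
    if incoming.priority <= current.priority:
        return "queue"
    if current.remaining_time <= incoming.hold_time:
        return "complete_then_process"
    return "preempt_then_process"
```

On this sketch, a higher-priority request with a generous hold time avoids the context switch entirely, which is the performance rationale the rejection attributes to Bates.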
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu with the teachings of Bates to increase performance (see Bates [0026] According to embodiments of the present invention a computer system having one or more processors coupled to a memory may implement multi-threaded processing in a way that significantly reduces the need for context switches; [0013] In addition to being detrimental to performance, context switches are often unnecessary.). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Bates, as applied to claim 8 above, in view of Dignum. As per claim 9, Kandhalu and Bates teach the method of claim 8. Kandhalu and Bates fail to teach wherein the shared resource is a cryptographic engine. However, Dignum teaches wherein the shared resource is a cryptographic engine ([0058] a single parser unit or cryptographic unit is included in an accelerator and may be shared by multiple cores and/or multiple processor threads.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Bates with the teachings of Dignum to provide security (see Dignum [0070] Cryptographic engines 214a-214n may be configured to decrypt an encrypted portion of an XML document, encrypt a portion of an outgoing document, verify or compute a digital signature, and/or perform other security-related functions. For example, a cryptographic unit 214 may facilitate the enforcement of SSL (Secure Sockets Layer) security, web services security, XML security, IPSec or some other security scheme. The cryptographic unit may be configured to apply a cryptographic algorithm such as DES, 3DES, AES, MD5, RC4, SHA or some other algorithm now known or hereafter developed.). Claim 10 is rejected under 35 U.S.C.
103 as being unpatentable over Kandhalu and Bates, as applied to claim 8 above, in view of Proejts et al. (US 11265052 B1 hereinafter Proejts). As per claim 10, Kandhalu and Bates teach the method of claim 8. Kandhalu and Bates fail to teach wherein the shared resource is an antenna. However, Proejts teaches wherein the shared resource is an antenna (Col. 22 lines 15-16 determine which wireless protocol may take priority on the shared antennas). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Bates with the teachings of Proejts to reduce utilized resources (see Proejts Col. 3 lines 45-49 sharing of antenna systems is possible between wireless protocols thus reducing the number of antennas needed to be formed within the C-cover and D-cover of the base chassis or in the display chassis.). Claims 11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu, in view of Giusto et al. (US 20140040904 A1 hereinafter Giusto), and further in view of Shari (US 6068661 A). As per claim 11, Kandhalu teaches a method, comprising: adding a plurality of requests from one or more requestors to a queue for a shared resource (Fig. 5; Col. 7 lines 16-21 FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320. The original schedule 500 includes three radio commands A1, A2, and A3 from protocol stack A 308 and three radio commands B1, B2, and B3 from protocol stack B 306; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks); adding a first request from a first requestor to the queue for the shared resource (Col. 7 lines 21-26 At time T1, the radio command scheduler 314 receives a new radio command A4 from protocol stack A 308.
The radio command scheduler 314 attempts to add A4 to the radio command queue 320 by scanning the radio command queue 320 for the time slot specified by the start time and end time parameters of A4.). Kandhalu fails to teach determining a maximum available transaction length based at least in part on the plurality of requests; notifying the first requestor that the first request exceeds the maximum available transaction length. However, Giusto teaches determining a maximum available transaction length based at least in part on the plurality of requests ([0041] A non-preemptive duration Bn(i, j) of lower-priority tasks is bounded by using the maximum non-preemptive duration, which is determined as a maximum I(M)CM over all mutexes M that can be held by any task with lower priority than Ti; [0021] For any lock M protecting a mutually exclusive shared resource, the term I(M) is employed to denote the number of tasks that access lock M, and C.sub.M is employed to represent the maximum duration for which M can be held; Equation 4, page 6, above paragraph [0056]; [0056] The first term corresponds to the maximum duration for which the mutex M can be held by any lower-priority task Tl when Ti requests M. The second term represents the maximum duration for which higher-priority tasks Th can hold the mutex M before task Ti can acquire M. Here, W'.sub.l.sup.M is the maximum global-mutex-holding time of task Tl with respect to global mutex M.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu with the teachings of Giusto to provide better resource utilization (see Giusto [0039] A priority inversion occurs when a high-priority task waits for a low-priority task to release a resource…Long durations of priority inversion thus lead to significant loss in useful system utilization.
Bounding such priority inversion is important to achieve both timing predictability and better utilization.). Kandhalu and Giusto fail to teach notifying the first requestor that the first request exceeds the maximum available transaction length. However, Shari teaches notifying the first requestor that the first request exceeds the maximum available transaction length (Col. 11 lines 36-53 If the request is not complete within the maximum time duration, the Task Loop module 356 notifies the application module 110 that the desired data is not available. Beginning in state 446, the Task Loop module 356 proceeds to state 502. In state 502, the Task Loop module 356 monitors the time duration needed to obtain a response to the asynchronous transaction. In one embodiment, the Task Loop module 356 monitors the time duration with a timer. The timer is designed to count to a maximum time duration. After reaching the maximum duration, the Task Loop module 356 is said to have "timed out." For example, if a request for data about the operation of a server fan is sent with the SnmpSendMsg function 344, the Task Loop module 356 monitors the duration needed to obtain the data about the fan. If the time to obtain the fan data exceeds a set time limit, the Task Loop module 356 notifies the application module 110 that the fan data is not available.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Giusto with the teachings of Shari to prevent delays (see Shari Col. 14 lines 18-30 Advantageously, one embodiment of the invention provides synchronous support by providing a timer which monitors when an asynchronous transaction has exceeded a time limit. Such a timer allows the synchronous interface to provide a response to a synchronous request within a set time limit.
As explained above, when the application module 110 initiates a synchronous transaction, the application module 110 may suspend execution. If the asynchronous transaction exceeds a certain amount of time, the application module 110 can appear to freeze or hang. Thus, the timer ensures that a long delay in processing a transaction will not suspend the application module 110 beyond a desired time duration.). As per claim 14, Kandhalu, Giusto, and Shari teach the method of claim 13. Kandhalu teaches wherein the revised request is scheduled to execute when its transaction length fits within an estimated duration of availability window (Fig. 7; Col. 7 lines 9-14 In each of the above instances in which the radio command scheduler 314 appends a radio command to the radio command queue 320, the radio command scheduler 314 modifies the time parameters of the appended radio command as needed to accommodate the start time and end time deferral; Col. 7 lines 16-18 FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu, Giusto, and Shari, as applied to claim 11 above, in view of Landais et al. (US 20180368202 A1 hereinafter Landais). As per claim 12, Kandhalu, Giusto, and Shari teach the method of claim 11. Kandhalu, Giusto, and Shari fail to teach further comprising: notifying the first requestor of the maximum available transaction length. However, Landais teaches further comprising: notifying the first requestor of the maximum available transaction length (claim 40 after having rejected a MT NIDD Request from the SCEF, send to the SCEF, in a notification indicating that the UE is reachable again, information indicating a maximum UE availability time). 
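As an illustrative aside (not part of the record), the notification behavior the rejections attribute to Shari and Landais — rejecting an over-length request and informing the requestor of the maximum available transaction length — might look roughly like the following hypothetical sketch; the function name and the fields of the returned notification are invented for illustration:

```python
def check_transaction_length(request_length: int, max_available_length: int) -> dict:
    """Admit a request that fits, or reject it and notify the requestor
    of the current maximum available transaction length."""
    if request_length <= max_available_length:
        return {"accepted": True}
    return {
        "accepted": False,
        "reason": "request exceeds maximum available transaction length",
        "max_available_length": max_available_length,
    }
```

In this sketch the rejection notice itself carries the limit, mirroring the distinction the Office Action draws between claim 11 (notifying that the request exceeds the limit) and claim 12 (notifying the requestor of the limit's value).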
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu, Giusto, and Shari with the teachings of Landais to improve performance (see Landais [0008] There is also a need to improve the support of Mobile-Terminated Short Message Service (MT-SMS) towards a User Equipment UE using extended idle mode DRX. [0009] Embodiments of the present invention in particular address such needs.). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu, Giusto, and Shari, as applied to claim 11 above, in view of Saeki (US 20100064151 A1). As per claim 13, Kandhalu, Giusto, and Shari teach the method of claim 11. Kandhalu teaches further comprising: adding a revised request from the first requestor to the queue for the shared resource (Fig. 7; Col. 7 lines 9-14 In each of the above instances in which the radio command scheduler 314 appends a radio command to the radio command queue 320, the radio command scheduler 314 modifies the time parameters of the appended radio command as needed to accommodate the start time and end time deferral; Col. 7 lines 39-40 three low priority radio commands B1, B2, and B3 from protocol stack B 306; Col. 7 lines 16-18 FIG. 5 is an example of inserting a radio command in an available time slot in the radio command queue 320.). Kandhalu, Giusto, and Shari fail to teach wherein the revised request fits within the maximum available transaction length. However, Saeki teaches wherein the revised request fits within the maximum available transaction length ([0122] Equation (5) is to allow a judgment on whether or not the request UPS duration time UT.sub.req desired for responding to the change request from the client unit Ci falls within a range of the maximum UPS duration time UT.sub.max.
Here, the determination unit 814 determines that the change in the power supply-distribution capacity is allowable if the request UPS duration time UT.sub.req is equal to or shorter than the maximum UPS duration time UT.sub.max.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu, Giusto, and Shari with the teachings of Saeki to prevent overutilization of resources (see Saeki [0221] This embodiment may achieve advantageous effects of effectively suppressing the supply of electric power that exceeds the power supply-distribution capacity of the facility). Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu, Giusto, and Shari, as applied to claim 11 above, in view of Bates. As per claim 15, Kandhalu, Giusto, and Shari teach the method of claim 11. Kandhalu teaches further comprising: adding a second request from a second requestor to the queue for the shared resource (Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks; Col. 7 lines 37-40 The original schedule 600 includes three high priority radio commands A1, A2, and A3 from protocol stack A 308 and three low priority radio commands B1, B2, and B3 from protocol stack B 306.). Kandhalu, Giusto, and Shari fail to teach wherein the second request includes a hold time; when the shared resource can complete a currently executing request within the hold time, completing the currently executing request and then processing the second request; and when the shared resource cannot complete the currently executing request within the hold time, preempting the currently executing request and then processing the second request.
However, Bates teaches wherein the second request includes a hold time; when the shared resource can complete a currently executing request within the hold time, completing the currently executing request and then processing the second request; and when the shared resource cannot complete the currently executing request within the hold time, preempting the currently executing request and then processing the second request ([0043] Each wait-time attribute T.sub.W1 . . . T.sub.W2 specifies how long the corresponding thread can wait before pre-empting a lower priority thread; [0032] As indicated at 204 the operating system OS may wait for the first thread to wind up execution. The first thread may wind up execution by completing its execution within the allotted wait time T.sub.W; [0026] The operating system may also notify the first thread of a time limit for pre-emption (referred to herein as a pre-emption wait time attribute) associated with the second thread. The application running the first thread may yield the one the processor(s) held by the first thread to the second thread within the time limit without saving a context of the first thread if the first thread can wind up within the time limit; [0011] Co-processors and cache may be shared among different software threads; [0033] If the first thread does not wind up execution within the wait time T.sub.W, the first thread may be preempted as indicated at 208. A context switch from the first thread to the second thread may then be performed). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu, Giusto, and Shari with the teachings of Bates to increase performance (see Bates [0026] According to embodiments of the present invention a computer system having one or more processors coupled to a memory may implement multi-threaded processing in a way that significantly reduces the need for context switches or avoids them altogether; [0013] In addition to being detrimental to performance, context switches are often unnecessary.). As per claim 16, Kandhalu, Giusto, Shari, and Bates teach the method of claim 15. Kandhalu teaches further comprising: responsive to preempting the currently executing request, notifying a requestor of the currently executing request that the currently executing request was preempted (Col. 6 line 64-Col. 7 line 9 If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority, the radio command scheduler 314 pre-empts the queued low priority radio command to free up the specified time slot for the new radio command in the radio command queue 320 and inserts the new radio command in the radio command queue 314…the radio command scheduler 314 aborts the pre-empted radio command and notifies the protocol stack that issued the low priority radio command that the radio command has been pre-empted.). As per claim 17, Kandhalu, Giusto, Shari, and Bates teach the method of claim 15. Kandhalu teaches further comprising: after processing the second request, resuming processing of the currently executing request (Col. 5 line 66-Col. 6 line 7 The scheduler 314 appends the aborted low priority radio command to the radio command queue 320 if the low priority command is not time critical; otherwise, the radio command scheduler 314 notifies the protocol stack that issued the low priority radio command that the radio command was aborted.
If the low priority radio command is pre-empted and appended to the radio command queue 320, the low priority radio command will be executed from the beginning). Claims 18 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu in view of Nikuie. As per claim 18, Kandhalu teaches a system, comprising: a processor configured to: add a first request from a first requestor to a queue for a shared resource, wherein the first request has a first priority (Col. 1 lines 59-61 a processor coupled to the memory to execute the software instructions; Col. 7 lines 39-40 three low priority radio commands B1, B2, and B3 from protocol stack B 306; Col. 7 lines 35-36 a queued low priority radio command; Col. 6 lines 10-12 the new radio command is low priority, the radio command scheduler 314 attempts to add the new radio command to the radio command queue; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks); add a second request from a second requestor to the queue for the shared resource, wherein the second request has a second priority that is higher than the first priority (Col. 7 lines 22-24 receives a new radio command A4 from protocol stack A 308. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 7 lines 40-45 At time T2, the radio command scheduler 314 receives a new radio command A4 from the protocol stack A 308 that is high priority as per the scheduling policy for the current states of the protocol stack. The radio command scheduler 314 attempts to add A4 to the radio command queue 320; Col. 6 lines 64-66 If the new radio command is high priority and the queued radio command occupying the specified time slot is low priority); receive, after the first request is preempted and rejected from the queue, a notification that the first request was preempted (Col. 
7 lines 6-9 the radio command scheduler 314 aborts the pre-empted radio command and notifies the protocol stack that issued the low priority radio command that the radio command has been pre-empted; Col. 7 lines 49-55 The radio command scheduler 314 pre-empts B2 in favor of A4, removes B2 from the queue 320…The radio command scheduler 314 also determines that B2 cannot be rescheduled because B2 is time critical and notifies the protocol stack B 306 that the radio command B2 has been pre-empted); and add the first request to the queue again for the shared resource (Col. 7 lines 4-5 appends the pre-empted low priority radio command to the end of the radio command queue 320; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks). Kandhalu fails to teach responsive to receiving the notification, add the first request to the queue. However, Nikuie teaches responsive to receiving the notification, add the first request to the queue ([0014] In another example, CPU 102 may issue a series of five requests to complete a write transaction of 16 KB to flash target 104, e.g., four write transfer requests followed by a write request. Because write requests are low priority, a read request arriving before all five requests in the write sequence may be scheduled by arbiter 110 immediately, thus interrupting the larger write transaction. In some examples, arbiter 110 may mark all five requests as incomplete and return them to the queue. In some examples, arbiter 110 may signal CPU 102 that the write transaction was preempted. CPU 102 may requeue or cancel the preempted write transaction.).
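For illustration only (not from the record), the Nikuie behavior quoted above — marking an interrupted multi-request write incomplete, returning it to the queue, and signaling the requestor — can be sketched as follows; all names (`preempt_write_transaction`, the "incomplete" tag, the notify callback) are hypothetical:

```python
from collections import deque

def preempt_write_transaction(queue: deque, in_flight_writes: list, notify) -> None:
    """Return each in-flight write request to the queue marked incomplete,
    then signal the issuing requestor that the transaction was preempted."""
    for req in in_flight_writes:
        queue.append((req, "incomplete"))  # mark incomplete and requeue
    in_flight_writes.clear()               # the arbiter no longer holds them
    notify("write transaction preempted")  # e.g., signal CPU 102
```

The requestor then decides, as in the quoted passage, whether to requeue or cancel the preempted transaction.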
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu with the teachings of Nikuie so that low priority requests can be requeued and continued later (see Nikuie [0014] In another example, CPU 102 may issue a series of five requests to complete a write transaction of 16 KB to flash target 104, e.g., four write transfer requests followed by a write request. Because write requests are low priority, a read request arriving before all five requests in the write sequence may be scheduled by arbiter 110 immediately, thus interrupting the larger write transaction. In some examples, arbiter 110 may mark all five requests as incomplete and return them to the queue. In some examples, arbiter 110 may signal CPU 102 that the write transaction was preempted. CPU 102 may requeue or cancel the preempted write transaction.). As per claim 24, Kandhalu and Nikuie teach the system of claim 18. Kandhalu teaches wherein the processor and the shared resource are integrated in a same integrated circuit (Fig. 2; Col. 3 lines 5-14 The RF core 204 includes a processor implemented as an ARM® Cortex®-M0 processor for executing software that, e.g., interfaces the analog RF and base-band circuitry, handles data transmission to and from the main CPU 202, and assembles packets for transmission based on the particular protocol corresponding to the packets. The software includes a command-based application program interface (API) used by applications executing on the main CPU 202 to communicate with the RF core 204.). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Nikuie, as applied to claim 18 above, in view of Giusto. As per claim 19, Kandhalu and Nikuie teach the system of claim 18. Kandhalu teaches wherein the processor is further configured to: add a plurality of requests from one or more requestors to the queue for the shared resource (Col. 7 lines 16-21 FIG.
5 is an example of inserting a radio command in an available time slot in the radio command queue 320. The original schedule 500 includes three radio commands A1, A2, and A3 from protocol stack A 308 and three radio commands B1, B2, and B3 from protocol stack B 306; Col. 3 lines 25-27 A dynamic multi-protocol manager (DMM) executing on the main CPU 202 provides time shared access to the radio for radio commands issued by multiple protocol stacks). Kandhalu and Nikuie fail to teach determine a maximum available transaction length based at least in part on the plurality of requests. However, Giusto teaches determine a maximum available transaction length based at least in part on the plurality of requests ([0041] A non-preemptive duration Bn(i, j) of lower-priority tasks is bounded by using the maximum non-preemptive duration, which is determined as a maximum I(M)CM over all mutexes M that can be held by any task with lower priority than Ti; [0021] For any lock M protecting a mutually exclusive shared resource, the term I(M) is employed to denote the number of tasks that access lock M, and C.sub.M is employed to represent the maximum duration for which M can be held; Equation 4, page 6, above paragraph [0056]; [0056] The first term corresponds to the maximum duration for which the mutex M can be held by any lower-priority task Tl when Ti requests M. The second term represents the maximum duration for which higher-priority tasks Th can hold the mutex M before task Ti can acquire M. Here, W'.sub.l.sup.M is the maximum global-mutex-holding time of task Tl with respect to global mutex M.).
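As an illustrative aside (not from the record), "determining a maximum available transaction length based at least in part on the plurality of requests" can be pictured as a simplified gap scan over the time slots already occupied by queued requests. This hypothetical sketch is a stand-in for intuition only; it is not Giusto's mutex-holding-time bound:

```python
def max_available_transaction_length(busy_slots: list, horizon: int) -> int:
    """Return the largest contiguous free interval up to `horizon`,
    given the (start, end) time slots held by queued requests."""
    best = cursor = 0
    for start, end in sorted(busy_slots):
        best = max(best, start - cursor)  # gap before this busy slot
        cursor = max(cursor, end)         # advance past it
    return max(best, horizon - cursor)    # trailing gap after the last slot
```

A request longer than this value cannot be scheduled without disturbing the existing queue, which is the condition that triggers the claimed notification to the requestor.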
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Nikuie with the teachings of Giusto to provide better resource utilization (see Giusto [0039] A priority inversion occurs when a high-priority task waits for a low-priority task to release a resource…Long durations of priority inversion thus lead to significant loss in useful system utilization. Bounding such priority inversion is important to achieve both timing predictability and better utilization.). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Nikuie, as applied to claim 18 above, in view of Dignum. As per claim 20, Kandhalu and Nikuie teach the system of claim 18. Kandhalu and Nikuie fail to teach wherein the shared resource is a cryptographic engine. However, Dignum teaches wherein the shared resource is a cryptographic engine ([0058] a single parser unit or cryptographic unit is included in an accelerator and may be shared by multiple cores and/or multiple processor threads.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Nikuie with the teachings of Dignum to provide security (see Dignum [0070] Cryptographic engines 214a-214n may be configured to decrypt an encrypted portion of an XML document, encrypt a portion of an outgoing document, verify or compute a digital signature, and/or perform other security-related functions. For example, a cryptographic unit 214 may facilitate the enforcement of SSL (Secure Sockets Layer) security, web services security, XML security, IPSec or some other security scheme. The cryptographic unit may be configured to apply a cryptographic algorithm such as DES, 3DES, AES, MD5, RC4, SHA or some other algorithm now known or hereafter developed.). Claim 21 is rejected under 35 U.S.C.
103 as being unpatentable over Kandhalu and Nikuie, as applied to claim 18 above, in view of Averbuch et al. (US 20020186678 A1 hereinafter Averbuch). As per claim 21, Kandhalu and Nikuie teach the system of claim 18. Kandhalu and Nikuie fail to teach wherein adding the first request again includes adding a revised first request with a shorter length. However, Averbuch teaches wherein adding the first request again includes adding a revised first request with a shorter length ([0017] the slots for the subscriber sent as the shorter first message are not discarded but are resent at a later time.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Nikuie with the teachings of Averbuch to efficiently use resources (see Averbuch [0002] provide an efficient use of air resources.). Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Nikuie, as applied to claim 18 above, in view of Farrell. As per claim 22, Kandhalu and Nikuie teach the system of claim 18. Kandhalu and Nikuie fail to teach wherein the notification includes a maximum available transaction length for the shared resource. However, Farrell teaches wherein the notification includes a maximum available transaction length for the shared resource ([0033] In one particular embodiment, a capability is provided in which a guest program executing on a guest CPU provisioned by a host CPU is provided a warning of expiration of a timeslice given to the guest CPU from the host CPU or of pre-emption by the host of the guest's timeslice.
The warning provides a grace period that the guest CPU can use to perform a particular function, such as complete execution of a dispatchable unit; [0050] the grace period is provided in response to expiration of the timeslice, or in response to the host pre-empting the guest; [0035] Each processor (and/or a program, such as an operating system, executing on the processor) is given a certain amount of time, referred to as a timeslice, to share the resources; claim 8 the grace period provides a period in addition to the timeslice; [0110] In one embodiment, a logical (guest) processor running in a timeslice on a physical processor receives a warning signal indicating a grace period, e.g., an amount of time before the logical processor will be interrupted (deallocated from the physical processor that may be shared) enabling the work being done by the logical processor to be either completed; [0051] the grace period limits the remaining time (or other period) given to the guest CPU and is not itself extendable.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Kandhalu and Nikuie with the teachings of Farrell to improve execution duration of time sensitive work (see Farrell [0145] In a further embodiment, one or more aspects of the invention can be used with requests from an operating system to let an individual execution thread continue to improve elapsed time of time sensitive work. That is, a thread may request or be provided additional time to perform a function.). Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Kandhalu and Nikuie, as applied to claim 18 above, in view of Bates. As per claim 23, Kandhalu and Nikuie teach the system of claim 18. Kandhalu and Nikuie fail to teach wherein the second request includes a hold time. However, Bates teaches wherein the second request includes a hold time ([0043] Each wait-time attribute T.sub.W1 . . .
T.sub.W2 specifies how long the corresponding thread can wait before pre-empting a lower priority thread; [0026] In particular an operating system may notify a first thread running on one or more of the processors of a pre-emption by a second thread having a higher priority than the first thread.). It would have been obvious to one having ordinary skill in the art before the effective filling date of the claimed invention to have combined Kandhalu and Nikuie with the teachings of Bates to increase performance (see Bates [0026] According to embodiments of the present invention a computer system having one or more processors coupled to a memory may implement multi-threaded processing in a way that significantly reduces the need for context switches or avoids them altogether; [0013] In addition to being detrimental to performance, context switches are often unnecessary.). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571)272-8522. The examiner can normally be reached Mon - Fri 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.L./Examiner, Art Unit 2195 /Aimee Li/Supervisory Patent Examiner, Art Unit 2195
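The timeslice warning cited from Farrell above describes a simple protocol: a guest is given a fixed timeslice, and a warning signal fires a grace period before expiry so the guest can finish its in-flight work instead of being cut off mid-unit. The following is a minimal illustrative sketch of that idea, not code from the Farrell reference; the function name and parameters (`run_guest`, `timeslice`, `grace`) are invented for illustration.

```python
def run_guest(units, timeslice, grace):
    """Dispatch work units, one tick each, within a fixed timeslice.

    A warning fires `grace` ticks before the timeslice expires; after
    the warning the guest finishes the unit already in flight and then
    yields voluntarily, rather than being pre-empted mid-unit.
    """
    done = []
    for tick, unit in enumerate(units):
        if tick >= timeslice:
            break                        # timeslice exhausted: hard pre-emption
        done.append(unit)                # the unit completes within this tick
        if tick + 1 >= timeslice - grace:
            break                        # warning received: yield in grace period
    return done
```

With a 5-tick timeslice and a 2-tick grace period, the guest completes three whole units and yields cleanly, which is the benefit Farrell's grace period is cited for.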
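Similarly, the Bates passage cited for claim 23 describes per-thread wait-time attributes: a higher-priority thread waits up to its wait-time budget before pre-empting a lower-priority thread. A hedged sketch of that rule, with all names (`should_preempt`, `priority`, `wait_time`) invented for illustration rather than taken from the Bates reference:

```python
def should_preempt(waiting, elapsed_wait, running_priority):
    """Pre-empt the running thread only when the waiting thread both
    outranks it and has exhausted its wait-time budget (the hold time
    the claim maps onto Bates's wait-time attribute)."""
    return (waiting["priority"] > running_priority
            and elapsed_wait >= waiting["wait_time"])
```

Delaying pre-emption until the budget is spent is what lets the Bates system avoid unnecessary context switches.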

Prosecution Timeline

Apr 28, 2023
Application Filed
Sep 25, 2025
Non-Final Rejection — §101, §103, §112
Dec 22, 2025
Response Filed
Mar 07, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554523
REDUCING DEPLOYMENT TIME FOR CONTAINER CLONES IN COMPUTING ENVIRONMENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12547458
PLATFORM FRAMEWORK ORCHESTRATION AND DISCOVERY
2y 5m to grant Granted Feb 10, 2026
Patent 12468573
ADAPTIVE RESOURCE PROVISIONING FOR A MULTI-TENANT DISTRIBUTED EVENT DATA STORE
2y 5m to grant Granted Nov 11, 2025
Patent 12461785
GRAPHIC-BLOCKCHAIN-ORIENTATED SHARDING STORAGE APPARATUS AND METHOD THEREOF
2y 5m to grant Granted Nov 04, 2025
Patent 12443425
ISOLATED ACCELERATOR MANAGEMENT INTERMEDIARIES FOR VIRTUALIZATION HOSTS
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
59%
Grant Probability
99%
With Interview (+79.8%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
