Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,344

RESOURCE CONTROL DEVICE, RESOURCE CONTROL SYSTEM, AND RESOURCE CONTROL METHOD

Non-Final OA: §101, §103, §112
Filed: Aug 01, 2023
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; +14.7% vs TC avg), 200 granted of 287 resolved
Interview Lift: +56.2% on resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 37 applications currently pending
Career History: 324 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
TC-average figures are estimates. Based on career data from 287 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

This office action is in response to claims filed 1 August 2023. Claims 1-8 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because it contains references to the Figures. Please remove these references. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1, 7, and 8 (line numbers correspond to claim 1):

i. In lines 7-8, the claim does not particularly point out and distinctly claim what is meant by “a plurality of priorities for each program”. The claim only establishes a single program, so it is not clear how there could be a plurality of priorities for multiple programs. For examination purposes, the examiner will interpret this as a plurality of priorities for a plurality of tasks.

ii. In line 9, the claim does not particularly point out and distinctly claim what is meant by “select a task”, because the claim separately describes a single task (“a program executes a task”) and multiple tasks (“store tasks in the user queue”) and it is not clear whether the claim selects the task executed by the program or one of the tasks stored in the user queue. For examination purposes, the examiner will interpret the program as executing multiple tasks which are stored in the user queue, and from which one is selected.

iii. In line 10, the claim does not particularly point out and distinctly claim what is meant by “the user queues”, as the claim only describes a single user queue. For examination purposes, the examiner will interpret this as the set of queues.

Regarding claim 3, lines 2-3, the claim does not particularly point out and distinctly claim what is meant by “the processor is configured to control such that a non-designated IP core is not used for each task”, since it is not clear what the processor is controlling to cause such an effect. For examination purposes, the examiner will interpret this as the processor is configured to not use a non-designated IP core for each task.

Regarding claims 2-6, they are dependent upon rejected claims and fail to resolve the deficiencies thereof. They are therefore rejected for similar rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Regarding claim 1, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites a device that selects tasks to execute by IP cores from a plurality of prioritized queues. A device is one of the four statutory categories of invention.

In step 2A, prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:

i. “set resources related to Intellectual Property (IP) cores of a Field-Programmable Gate Array (FPGA)” (a person can mentally set resources by simply evaluating the available resources, and making a judgement of which ones to set (MPEP 2106.04(a)))

ii. “create a user queue” (a person can mentally create a queue by simply making a judgement of a particular order in which tasks should be organized (MPEP 2106.04(a)))

iii. “select a task” (a person can mentally select by simply evaluating tasks and making a judgement of a particular task to select (MPEP 2106.04(a)))

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.

In step 2A, prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

iv. “A resource control device, comprising: a processor; and a memory device storing instructions that, when executed by the processor, configure the processor to” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

v. “Intellectual Property (IP) cores of a Field-Programmable Gate Array (FPGA) in which a program executes a task” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

vi. “a user queue that is a set of queues having a plurality of priorities for each program, and store tasks in the user queue” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

vii. “a task to be executed by any one of the IP cores by multi-stage scheduling in the user queue and between the user queues” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Since the claim does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined through reanalysis of the following limitations considered in step 2A prong 2, that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

iv. “A resource control device, comprising: a processor; and a memory device storing instructions that, when executed by the processor, configure the processor to” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

v. “Intellectual Property (IP) cores of a Field-Programmable Gate Array (FPGA) in which a program executes a task” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

vi. “a user queue that is a set of queues having a plurality of priorities for each program, and store tasks in the user queue” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

vii. “a task to be executed by any one of the IP cores by multi-stage scheduling in the user queue and between the user queues” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2, the additional element “select user queues from which tasks are to be taken out” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally select a queue by simply evaluating queues and making a judgement of a particular queue selection (MPEP 2106)). Further, the additional element “extract a task from a queue with the highest priority out of queues each of which has a registered task, among the selected user queues” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally extract a task from a queue by simply evaluating tasks in queues and making a judgement of a particular task (MPEP 2106)).
Regarding claim 3, the additional element “control such that a non-designated IP core is not used for each task” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 4, the additional element “secure the number of IP cores designated by the program, create and control a map in which IP cores are fixedly allocated to each program when receiving a designation of exclusive use of the IP cores” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally secure cores and create an allocation map by simply evaluating core allocation assignments and making a judgement of a mapping reflecting those allocation assignments (MPEP 2106)). Further, the additional element “wherein the processor IP core usage control unit is configured not to receive the designation if the total number of IP cores newly designated by the program exceeds the number of IP cores in the FPGA” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 5, the additional element “create a user queue for a new program each time the program is activated” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally create queues by simply making a judgement of an order for tasks to be organized in (MPEP 2106)).

Regarding claim 6, the additional element “when receiving a task from the program, select a user queue related to the program based on an identifier” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally select a task by simply evaluating identifiers and queues, and making a judgement of a particular one (MPEP 2106)). Further, the additional element “register the task to the user queue based on a task priority” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally register a task by simply evaluating the task and candidate queues, and making a judgement of a particular task to assign to a queue (MPEP 2106)).

Regarding claims 7 and 8, they comprise limitations similar to those of claim 1, and are therefore rejected for similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over MANNAR, Patent No. US 11,526,385 B1 (hereafter MANNAR), in view of WANG et al., Pub. No. US 2017/0315846 A1 (hereafter WANG846).

Regarding claim 1, MANNAR teaches the invention substantially as claimed, including:

A resource control device, comprising: a processor; and a memory device storing instructions that, when executed by the processor, configure the processor ([Column 26, Lines 12-20] A system for leveraging inactive computing resources, comprising…one or more processors; and a memory communicatively coupled to the one or more computing nodes and the one or more processors, the memory containing instructions therein that, when executed, cause the one or more processors) to:

set resources related to…cores…in which a program executes a task ([Column 20, Lines 61-63] The method 500 begins by identifying a task (i.e., “program”) to be performed by one or more computing resources (block 502). [Column 21, Lines 22-25] The method 500 may continue by creating one or more sub-tasks based upon the task (block 504). Each of the sub-tasks may generally be self-contained tasks that, when combined, result in the accomplishment of the task (i.e., each sub-task represents a “task” that, when executed, performs the overall task, or “program”). [Column 20, Lines 7-11] The metrics module 416 may receive an operating status from the computing node 402 (e.g., via the de-identifier 408) indicating that 50% of the processing power of the computing node 402 (e.g., one or more cores of the computing node 402 processor(s)) is being utilized (i.e., computing resources (nodes) used to execute the sub-tasks comprise computing “cores”));

create a user queue that is a set of queues having a plurality of priorities for each program, and store tasks in the user queue ([Column 4, Lines 41-42] The task receiver may receive a ML/AI task from a user. [Column 14, Lines 6-41] The one or more sub-tasks may then be transmitted for scheduling (e.g., via the scheduler 114). Each sub-task may be individually analyzed and prioritized for processing, and placed in a priority queue 214 accordingly…In some embodiments, the priority queue 214 may include multiple queues for tasks/sub-tasks based upon the type of task/sub-task (e.g., data prep, gird searching, training, etc.), processing requirements (e.g., robust processor, mid-level processor, etc.), and/or any other suitable categorization(s) or combinations thereof…The priority queue 214 may place any task/sub-task receiving a “now” designation at the front of the queue corresponding to that task/sub-task. The priority queue 214 may place any task/sub-task receiving an “immediate” designation at or near the front of the queue corresponding to that task/sub-task if there are no tasks/sub-tasks in the queue with a “now” designation currently in the queue (i.e., priority queue 214 represents a “user queue” because it is associated with processing requests made by users, and includes multiple queues having different priorities)); and

select a task to be executed by any one of the IP cores by multi-stage scheduling in the user queue and between the user queues ([Column 15, Lines 43-45] Once a task/sub-task reaches the front of the queue, the scheduler 216 may publish the task/sub-task 220 for processing at a node 222 (i.e., publishing a particular sub-task “selects” the task and executes the sub-task based on the multi-step scheduling process illustrated in FIG. 5)).
While MANNAR discusses scheduling tasks for execution on cores of processing resources, MANNAR does not explicitly teach: set resources related to Intellectual Property (IP) cores of a Field-Programmable Gate Array (FPGA) in which a program executes a task.

However, in analogous art that similarly schedules tasks for execution on cores of processing resources, WANG846 teaches:

set resources related to Intellectual Property (IP) cores of a Field-Programmable Gate Array (FPGA) in which a program executes a task ([0007] According to a first aspect, a task scheduling method on a heterogeneous multi-core reconfigurable computing platform is provided, where the heterogeneous multi-core reconfigurable computing platform includes multiple reconfigurable resource packages, and the method includes: when determining that a to-be-executed hardware task is in a ready state, adding the to-be-executed hardware task into a target hardware task queue corresponding to a function of the to-be-executed hardware task; reconfiguring, according to a priority of the to-be-executed hardware task and a usage status of the multiple reconfigurable resource packages, at least one reconfigurable resource package in the multiple reconfigurable resource packages into a target Intellectual Property IP core that can execute the to-be-executed hardware task, where the priority denotes an execution order of the hardware task; and executing the hardware task in the target hardware task queue by using the target IP core. [0040] It should be understood that a heterogeneous multi-core reconfigurable computing platform in the embodiments of the present invention refers to a computing system with both a general purpose processor (General Purpose Processor, “GPP” for short) and a field programmable gate array (Field Programmable Gate Array, “FPGA” for short) integrated on a single physical chip (i.e., the reconfigurable IP core used to execute the hardware task from the queue is provided by an FPGA)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined WANG846’s teaching of an FPGA providing an IP core used to execute computing tasks, with MANNAR’s teaching of a computing core used to execute computing tasks, to realize, with a reasonable expectation of success, a system that executes computing tasks, as in MANNAR, on IP cores provided by an FPGA, as in WANG846. A person having ordinary skill would have been motivated to make this combination to execute tasks in an effective manner using a reconfigurable FPGA that provides the flexibility of a general purpose processor, with the speed of an integrated circuit (WANG846 [0003]).

Regarding claim 2, MANNAR further teaches:

select user queues from which tasks are to be taken out; and extract a task from a queue with the highest priority out of queues each of which has a registered task, among the selected user queues ([Column 14, Lines 30-45] Each of the designations may indicate a relative order of processing the current task requests configured to optimize the available computing resources included within the processing workflow diagram 200 (e.g., private domain 122 and public cloud 126). The priority queue 214 may place any task/sub-task receiving a “now” designation at the front of the queue corresponding to that task/sub-task. The priority queue 214 may place any task/sub-task receiving an “immediate” designation at or near the front of the queue corresponding to that task/sub-task if there are no tasks/sub-tasks in the queue with a “now” designation currently in the queue.
The priority queue 214 may place any task/sub-task receiving a “later” designation at the front of the queue corresponding to that task/sub-task only if there are no tasks/sub-tasks with either “now” or “immediate” designations currently in the queue (i.e., sub-tasks are placed into a set of groups within the task queue based on priority, and tasks having the highest priority are taken out before tasks having lower priority). [Column 15, Lines 43-45] Once a task/sub-task reaches the front of the queue, the scheduler 216 may publish the task/sub-task 220 for processing at a node 222 (i.e., publishing a sub-task “extracts” the sub-task at the head of the task queue for execution)).

Regarding claim 6, MANNAR further teaches:

when receiving a task from the program, select a user queue related to the program based on an identifier, and register the task to the user queue based on a task priority ([Column 14, Lines 6-41] The one or more sub-tasks may then be transmitted for scheduling (e.g., via the scheduler 114). Each sub-task may be individually analyzed and prioritized for processing, and placed in a priority queue 214 accordingly…In some embodiments, the priority queue 214 may include multiple queues for tasks/sub-tasks based upon the type of task/sub-task (e.g., data prep, gird searching, training, etc.), processing requirements (e.g., robust processor, mid-level processor, etc.), and/or any other suitable categorization(s) or combinations thereof…The priority queue 214 may place any task/sub-task receiving a “now” designation at the front of the queue corresponding to that task/sub-task. The priority queue 214 may place any task/sub-task receiving an “immediate” designation at or near the front of the queue corresponding to that task/sub-task if there are no tasks/sub-tasks in the queue with a “now” designation currently in the queue (i.e., when sub-tasks are received, they are placed into a selected queue position based on priority)).
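For readers orienting themselves in the claim language, the queue mechanics the examiner reads onto claims 1, 2, and 6 (a per-program user queue holding several priority sub-queues, extraction from the highest-priority non-empty sub-queue, and a second selection stage across user queues) can be sketched roughly as below. This is an illustrative sketch only: the class and method names are ours, the "now"/"immediate"/"later" levels are borrowed from MANNAR's designations, and the round-robin choice across user queues is an assumption, not something the application or the cited references specify.

```python
from collections import deque

class UserQueue:
    """Illustrative 'user queue': a set of FIFO sub-queues, one per priority level.
    Priority level names follow MANNAR's now/immediate/later designations."""
    def __init__(self, levels=("now", "immediate", "later")):
        self.levels = levels                                  # highest priority first
        self.queues = {level: deque() for level in levels}

    def register(self, task, priority):
        """Register a task into the sub-queue matching its priority."""
        self.queues[priority].append(task)

    def extract(self):
        """Take the next task from the highest-priority non-empty sub-queue."""
        for level in self.levels:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None

class MultiStageScheduler:
    """Stage 1: choose among user queues (simple round-robin here, an assumption);
    Stage 2: extract the highest-priority task within the chosen user queue."""
    def __init__(self):
        self.user_queues = {}
        self._order = []          # round-robin order of program ids

    def queue_for(self, program_id):
        # Create a user queue the first time a program is seen (cf. claim 5)
        if program_id not in self.user_queues:
            self.user_queues[program_id] = UserQueue()
            self._order.append(program_id)
        return self.user_queues[program_id]

    def select_task(self):
        """Select the next task to hand to an execution core, or None if all empty."""
        for _ in range(len(self._order)):
            program_id = self._order.pop(0)
            self._order.append(program_id)                    # rotate
            task = self.user_queues[program_id].extract()
            if task is not None:
                return task
        return None
```

Within one user queue, "now" tasks always drain before "immediate" and "later" ones, which mirrors the extraction order the examiner attributes to the priority queue 214; the cross-queue stage is what the claims call scheduling "between the user queues".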
Regarding claims 7 and 8, they comprise limitations similar to claim 1, and are therefore rejected for similar rationale.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over MANNAR, in view of WANG846, as applied to claim 1, above, and in further view of DESAI et al., Pub. No. US 2019/0303326 A1 (hereafter DESAI).

Regarding claim 3, while MANNAR and WANG846 discuss using IP cores to perform tasks, they do not explicitly teach: a non-designated IP core is not used for each task.

However, in analogous art that similarly uses IP cores to perform tasks, DESAI teaches:

a non-designated IP core is not used for each task ([0044] Each of the sub-systems 14 is typically a block of “reusable” circuitry or logic, commonly referred to as an IP core or agent. Most IP agents are designed to perform a specific function (i.e., “task”), for example, controllers for peripheral devices such as an Ethernet port, a display driver, an SDRAM interface, a USB port, etc. Such IP agents are generally used as “building blocks” that provide needed sub-system functionality within the overall design of a complex system provided on an integrated circuit (IC), such as either an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA). By using a library of available IP agents, a chip designer can readily “bolt” together various logic functions in the design of a more complex integrated circuit, reducing design time and saving development costs…sub-system agents 14 are described above in terms of a dedicated IP core (i.e., IP cores are “designated” as being dedicated for specific functions)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined DESAI’s teaching of using IP cores that have been designated as dedicated for execution of specific functions, with the combination of MANNAR and WANG846’s teaching of using IP cores to execute jobs, to realize, with a reasonable expectation of success, a system that executes jobs, as in MANNAR and WANG846, using dedicated IP cores, as in DESAI. A person having ordinary skill would have been motivated to make this combination to reduce design time and save development costs through use of dedicated IP cores (DESAI [0044]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over MANNAR, in view of WANG846, as applied to claim 1, above, and in further view of WANG et al., Pub. No. US 2014/0013330 A1 (hereafter WANG330).

Regarding claim 4, while MANNAR and WANG846 discuss allocation of tasks to processing cores, they do not explicitly teach: secure the number of IP cores designated by the program, create and control a map in which IP cores are fixedly allocated to each program when receiving a designation of exclusive use of the IP cores, and wherein the processor IP core usage control unit is configured not to receive the designation if the total number of IP cores newly designated by the program exceeds the number of IP cores in the FPGA.

However, in analogous art that similarly teaches allocation of tasks to processing cores, WANG330 teaches:

secure the number of…cores designated by the program, create and control a map in which…cores are fixedly allocated to each program when receiving a designation of exclusive use of the…cores ([0031] The non-latency scheduler 216 may implement a semaphore 222.
In various embodiments, the semaphore is an abstract data type that provides an abstraction for controlling access by multiple threads (i.e., “programs”) to one or more resources, such as the cores 106(1)-106(N) of a multi-core processor 102. The semaphore 222 may track the numbers of concurrent threads that are executing on the cores 106(1)-106(N). Accordingly, the non-latency scheduler 216 may release a blocked thread to execute on a core when the semaphore 222 indicates that a core is available or becomes available. Thus, by using the semaphore 222, the non-latency scheduler 216 may at any time enable a predetermined number of threads to execute on the cores 106(1)-106(N), while blocking other threads from executing due to the lack of available cores (i.e., the semaphore represents a “map” which tracks the allocation of cores to threads which each have “exclusive” access to their respective core while holding the semaphore)), and

wherein the processor IP core usage control unit is configured not to receive the designation if the total number of IP cores newly designated by the program exceeds the number of IP cores in the FPGA ([0066] The request may be successful when there is an available core for the dispatcher 214 to allocate, and the dispatcher 214 may return a thread affinity for the alternative core. However, if all of the cores 106(1)-106(N) of the multi-core processor 102 are executing threads at the time of the request by the caller thread 108, the request may fail and no core may be allocated by the dispatcher 214 (i.e., not allocating the thread to the core represents not receiving an exclusive “designation” when the addition of the request would exceed the total number of available cores)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined WANG330’s teaching of failing a request to allocate a core to a thread when doing so would exceed a total number of cores, with the combination of MANNAR and WANG846’s teaching of allocating IP cores of an FPGA to tasks, to realize, with a reasonable expectation of success, a system that attempts to allocate IP cores of an FPGA to tasks, as in MANNAR and WANG846, which may fail if a total number of available cores would be exceeded, as in WANG330. A person having ordinary skill would have been motivated to make this combination to ensure processing resources are not overallocated or overutilized to the detriment of performance.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over MANNAR, in view of WANG846, as applied to claim 1, above, and in further view of LI et al., Pub. No. US 2021/0303344 A1 (hereafter LI).

Regarding claim 5, while MANNAR and WANG846 discuss placing received tasks on queues for execution, they do not explicitly teach: create a user queue for a new program each time the program is activated.

However, in analogous art that similarly teaches placing received tasks on queues for execution, LI teaches:

create a user queue for a new program each time the program is activated ([0030] Control device 110 in FIG. 1 may receive a processing request for a plurality of task sets (i.e., request for tasks “activates” the tasks) of different users. Control device 110 may create a to-be-scheduled task queue “task_queue” according to a plurality of received task sets).
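For orientation, the allocation gating that the examiner maps from WANG330 onto claim 4 (a fixed pool of cores, a map of exclusive allocations per program, and outright refusal of a designation that would exceed the pool) can be sketched roughly as follows. The class and method names are hypothetical, and a simple counter over an allocation map stands in for WANG330's semaphore 222; this is an assumption-laden illustration, not the claimed implementation or the reference's actual code.

```python
class CoreAllocator:
    """Illustrative gate over a fixed pool of cores: a program's request for
    exclusive cores is refused (the 'designation' is not received) whenever
    granting it would exceed the pool size."""
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocation_map = {}   # program_id -> number of cores held exclusively

    def allocated(self):
        """Total number of cores currently held across all programs."""
        return sum(self.allocation_map.values())

    def designate_exclusive(self, program_id, n_cores):
        """Fix n_cores to the program and return True, or return False to
        signal that the designation was not received."""
        if self.allocated() + n_cores > self.total_cores:
            return False
        self.allocation_map[program_id] = (
            self.allocation_map.get(program_id, 0) + n_cores
        )
        return True

    def release(self, program_id):
        """Return a program's cores to the pool."""
        self.allocation_map.pop(program_id, None)
```

The refusal branch plays the role of WANG330's dispatcher failing a request when all cores are busy; the `allocation_map` plays the role of the claimed map of fixedly allocated cores.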
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined LI’s teaching of creating a task queue for a plurality of task sets when a processing request is received to activate the task sets, with MANNAR and WANG846’s teaching of placing received tasks on queues for execution, to realize, with a reasonable expectation of success, a system that places received tasks on queues for execution, as in MANNAR and WANG846, where the queue was created in response to a request, as in LI. A person having ordinary skill would have been motivated to make this combination to only consume resources necessary to support task queues when the tasks are active, thereby freeing those resources up for other uses when not needed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS, whose telephone number is (571) 272-6420. The examiner can normally be reached M-F 8:30-5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Aug 01, 2023
Application Filed
Nov 25, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
Granted Sep 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview (+56.2%): 99%
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
