Prosecution Insights
Last updated: April 18, 2026
Application No. 18/056,242

TASK PROCESSING METHOD AND APPARATUS USED IN MULTI-TANK SCENARIO

Non-Final OA · §101 · §103 · §112
Filed
Nov 16, 2022
Examiner
GHAFFARI, ABU Z
Art Unit
2195
Tech Center
2100 — Computer Architecture & Software
Assignee
Montage Technology Co. Ltd.
OA Round
3 (Non-Final)
79%
Grant Probability (Favorable)
3-4
OA Rounds
3y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
533 granted / 676 resolved
+23.8% vs TC avg
Strong +47% interview lift
+47.3%
Interview Lift
across resolved cases with interview
Typical timeline
3y 4m
Avg Prosecution
44 currently pending
Career history
720
Total Applications
across all art units

Statute-Specific Performance

§101
16.8%
-23.2% vs TC avg
§103
39.9%
-0.1% vs TC avg
§102
0.1%
-39.9% vs TC avg
§112
36.8%
-3.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 676 resolved cases
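The headline figures above are internally consistent and can be cross-checked with a few lines of arithmetic. A minimal sketch (the variable names are ours, and the implied Tech Center average is derived from the stated delta, not separately reported):

```python
# Cross-check of the dashboard's examiner statistics.
granted = 533
resolved = 676

# Career allow rate: 533 / 676 ≈ 78.8%, shown rounded as 79%.
allow_rate = 100 * granted / resolved
print(round(allow_rate, 1))  # 78.8

# The "+23.8% vs TC avg" delta implies a Tech Center average near 55%.
implied_tc_avg = allow_rate - 23.8
print(round(implied_tc_avg, 1))  # 55.0
```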

Office Action

§101 §103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final Office action is responsive to the amendments filed on 03/03/2026. Claims 1-7 and 9-14 are pending.

Response to Amendment

Applicant has amended independent claims 1, 9, and 10 to include new/old limitations in a form not previously presented, necessitating a new search and consideration. Claims 4, 8, and 15 were previously canceled by Applicant.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or joint inventor regards as the invention. The following claim language is not clearly understood:

Claim 1 recites "controller configured to query … task to be executed in the task processing apparatus … if exists". It is unclear to whom the controller directs the query about whether a task exists, i.e., whether the query from the controller is sent to the host apparatus, to the scheduler, or from the scheduler to host schedulers.

Claims 9 and 10 recite elements of claim 1 and share the same deficiency as claim 1. Therefore, they are rejected for the same rationale. The remaining dependent claims 2-7 and 11-14 are also rejected due to their dependency on the rejected independent claims. Appropriate amendments to the claim language, or arguments/specification/drawings supporting the current claim language, are required to overcome the rejection under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The previous objections to claims 1-7 and 9-14 under 35 U.S.C. 101 (abstract idea) have been withdrawn. Applicant is advised to amend the claims to overcome the 101 issues by reciting the following at the end of the claim, instead of the current form in which the recited steps are duplicated in the claim: -- and generated the data processing result, wherein the task processing apparatus performing task with the host apparatus --.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 6-14 are rejected under 35 U.S.C. 103 as being unpatentable over Powers et al. (US 2009/0049443 A1, hereafter Powers) in view of Kim et al. (US 2020/0218567 A1, hereafter Kim), and further in view of Blinzer et al. (US 2013/0159664 A1, hereafter Blinzer). Powers, Kim, and Blinzer were cited in the last Office action.

As per claim 1, Powers teaches the invention substantially as claimed, including a task processing apparatus ([0047] fig. 1 computing resources 110 [0051] work units or tasks, run on computing resource), the task processing apparatus being implemented and coupled to a host apparatus via a communication interface (fig.
1 control server 105 i.e. host apparatus, pool 110 [0047] control server 105 connected via a communication network with at least one pool 110 of computing resources, server 111 desktop 112 laptop 114 nodes 116) to perform task and data interactions with the host apparatus ([0050] computing resources includes an agent [0051] job, work unit, run on computing resources, [0046] agent manages the execution of work units on its computing node [0224] agent, computing resources, requests, receives, work units list from the control server), and the task processing apparatus comprising ([0047] fig. 1 pool 110 computing resources):

a controller configured to query whether there is a data processing task to be executed in the task processing apparatus ([0052] computing resource's agent, queries, control server to identify any work units that need to be processed, agent select appropriate work unit to execute to the computing resources, agent, starts an instance, process [0248] fig. 25 control server/agent module 2535 select work units from work unit queue 2515 for execution to the processor cores), and trigger execution of the data processing task if the data processing task exists ([0219] startup and initialization phase, performed by resource pool [0235] agent receives the selected work units and initiates their execution on the computational resource [0247] computing resource, initialization work units [0052] computing resource's agent, queries, control server to identify any work units that need to be processed, agent, starts an instance, process);

at least one data processing engine configured to process operation data corresponding to the data processing task ([0047] pool 110 computing resources, server, computers, nodes within clusters [0051] job, task, work units, run on one computing resource in pool 110 [0224] agent, computing resource, receive, work unit list, attribute/requirement of the work units [0122] data required for the selected work units, transferred, to the computing resource, process the work unit) according to a configured working mode ([0137] agent, set the priority of the application [0229] agent, selects, and prioritizes work units, for executing work units [0231] agent, adjust, the concurrency attributes of a work unit [0130] hosted applications, run by agents, on the computing resources to complete work units), and generate a data processing result ([0077] application process the work unit and transfer result once the application is complete); and

at least one scheduler implemented and configured to ([0232] agent's associated computing resource, prioritizes the work units, the scheduling algorithm in use on the pool of computing resources i.e. agent acting as scheduler [0085] agent core module 715 manages the activities of the distributed processing system of the computing resource, including fetching descriptions of available work units from the control server [0086] agent core module 715 in selecting appropriate work units to execute [0089] one task of the agent is selecting appropriate work units for execution by the associated computing resource, by comparing attributes of the computing resources with requirements of a work unit):

receive a task descriptor of the data processing task from the host apparatus via the communication interface (fig. 1 control server 105 pool 110 [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0010] agent, request, work units, server, agent manage execution of work units [0224] In step 2405 of method 2400, an agent associated with a computing resource requests a list of available work units from a control server for the distributed processing system. In response to this request, the agent receives a work unit list from the control server in step 2410; the work unit list provides information on the attributes or requirements of the work units included in the work unit list. Work unit attributes can include a Work unit ID; a sequence; a name; a Job ID; one or more File Overrides; substitution attributes; priority values; an affinity; and minimum hardware, software, and application and data requirements for processing the work unit);

configure the working mode of the data processing engine based on the task descriptor after the execution of the data processing task is triggered ([0137] agent, set the priority of the application i.e. setting the working mode processing the work unit on a computing resource, priority determines how the computing resource divided its processing between primary user and the work unit [0122] once the data required for the selected work units, transferred, to the computing resource, agent executes the application, and instructs it to process the work unit, agent, executes application, application, application control object [0229] agent, selects, and prioritizes work units, for executing work units [0231] agent, adjust, the concurrency attributes of a work unit [0130] hosted applications, run by agents, on the computing resources to complete work units);

control transmission of the operation data corresponding to the data processing task from the host apparatus to the data processing engine via the communication interface ([0077] agent, responsible, transferring and installing application and data, for processing work units [0085] agent core module 715, managing activities of the distributed processing system, computing resources, fetching description of available work units from the control server [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0232] agent's associated computing resource, the scheduling algorithm in use on the pool of computing resources i.e. acting as scheduler; msgAgentcheckinresult - sent from the server to the agent, contain the job table for a pool); and

control transmission of the data processing result from the data processing engine to the host apparatus via the communication interface, after the data processing engine has completed the processing of the operation data and generated the data processing result ([0077] agent, run on each individual computing resource, coordinate, control server, agent responsible for transferring, result once the application is complete [0135] agent, determine the progress [0139] agent to determine when the application has completed processing the work unit [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0143] message communicated between control servers and agents; msgnotifywork status - to notify the server of the progress/completion of a work unit),

wherein the task processing apparatus is configured to process the data processing task by executing steps of: the scheduler receiving the task descriptor of the data processing task from the host apparatus; the controller triggering execution of the data processing task; the scheduler configuring the working mode of the data processing engine based on the task descriptor; the scheduler controlling transmission of the operation data corresponding to the data processing task from the host apparatus to the data processing engine; the data processing engine processing the operation data according to the configured working mode to generate the data processing result; and the scheduler controlling transmission of the data processing result to the host apparatus (similar mapping as above because these are steps performing the claim elements rejected above).
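For readers less familiar with the claim language, the ordered steps recited at the end of claim 1 amount to the following control flow. This is a minimal illustrative sketch only; all class and method names are hypothetical and appear nowhere in the application or the cited references:

```python
# Illustrative model of the claimed flow: the scheduler receives a task
# descriptor, the controller triggers execution, the engine processes the
# operation data in the configured working mode, and the scheduler returns
# the result to the host.

class Engine:
    def __init__(self):
        self.mode = None

    def process(self, operation_data):
        # Process operation data according to the configured working mode.
        return f"result({self.mode}:{operation_data})"

class Host:
    def __init__(self):
        self.memory = {"in": "operation-data"}  # operation data staged by host

    def fetch(self, location):
        return self.memory[location]

    def store(self, location, value):
        self.memory[location] = value

class Scheduler:
    def __init__(self, engine):
        self.engine = engine
        self.descriptor = None

    def receive_descriptor(self, descriptor):   # step 1: receive task descriptor
        self.descriptor = descriptor

    def configure_engine(self):                 # step 3: configure working mode
        self.engine.mode = self.descriptor["mode"]

    def run(self, host):                        # steps 4-6: move data, process, return
        data = host.fetch(self.descriptor["data_location"])
        result = self.engine.process(data)
        host.store(self.descriptor["result_location"], result)

class Controller:
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def poll_and_trigger(self, host):           # step 2: query and trigger if a task exists
        if self.scheduler.descriptor is not None:
            self.scheduler.configure_engine()
            self.scheduler.run(host)

host = Host()
scheduler = Scheduler(Engine())
controller = Controller(scheduler)
scheduler.receive_descriptor(
    {"mode": "fast", "data_location": "in", "result_location": "out"}
)
controller.poll_and_trigger(host)
print(host.memory["out"])  # result(fast:operation-data)
```

Note how the §112(b) ambiguity discussed above shows up even in this toy model: the controller here inspects the scheduler's state rather than querying the host apparatus, and the claim as written does not dictate which of the two is intended.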
Powers does not specifically teach the task processing apparatus being implemented as an express card or an acceleration card, or a scheduler implemented as a hardware circuit.

Kim, however, teaches a task processing apparatus being implemented as an express card or an acceleration card (fig. 1 task processing device 110/120 [0049] task processing device 110 include a processor 111 [0055] processor 111, include one or more CPUs / GPUs i.e. accelerator, e.g. a GPU that may perform GPU-accelerated computing); configure the working mode of the data processing engine based on the task descriptor ([0057] configure i.e. mode edge computing device, task processing device edges [0061] task descriptor, task processing algorithm [0079] processing device, process, task, basis of the task descriptor [0121]); and at least one data processing engine configured to process operation data corresponding to the data processing task according to a configured working mode ([0072] task processing device 110, task, processor, process the task, processing input data [0057] configure i.e. mode edge computing device, task processing device edges).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Powers with the teachings of Kim of a task processing device comprising a processor including one or more GPUs, to improve efficiency and allow the task processing apparatus to be implemented as an express card or an acceleration card in the method of Powers, as in the instant invention. The combination of analogous arts would have been obvious because substituting/adding the GPU taught by Kim to the computing resources taught by Powers would yield the expected result with improved efficiency and speed.

Powers and Kim, in combination, do not specifically teach a scheduler implemented as a hardware circuit.
Blinzer, however, teaches a scheduler implemented as a hardware circuit ([0071] system 100 also includes a hardware scheduler (HWS) 128 for selecting a process from a run list 150 for execution on APD 104). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Powers and Kim with the teachings of Blinzer of a computing system comprising a hardware scheduler for selecting a process from a run list for execution on an accelerated processing device, to improve efficiency and allow the scheduler to be implemented as a hardware circuit in the method of Powers and Kim, as in the instant invention. The combination would have been obvious because supplementing/substituting the hardware scheduler taught by Blinzer into the method of Powers and Kim would yield a predictable result with improved efficiency.

As per claim 2, Kim teaches wherein the task descriptor at least contains information indicative of ([0061] task descriptor, information): a type of the data processing task ([0061] information associated with task processing algorithm), a storage location of operation data corresponding to the data processing task ([0061] information associated with input data, information associated with a source from which input data is to be obtained), and a storage location of the data processing result generated after the processing of the data processing task is completed ([0061] information associated with an address to which a processing result is to be output).

As per claim 3, Powers teaches a task processing apparatus comprising at least one scheduler ([0232] agent, using the scheduling algorithm, i.e. acting as scheduler). Kim teaches the remaining elements of the task processing apparatus of claim 2, wherein the at least one scheduler is further configured to: configure the working mode of the data processing engine according to the information of the type of the data processing task ([0057] configure i.e. mode edge computing device, task processing device edges [0061] task descriptor, task processing algorithm [0079] processing device, process, task, basis of the task descriptor [0121]); control acquisition of the operation data from a memory of the host apparatus based on the information of the storage location of the operation data ([0061] information associated with input data, information associated with a source from which input data is to be obtained); and transmit the data processing result to the memory of the host apparatus based on the information of the storage location of the data processing result ([0061] information associated with an address to which a processing result is to be output [0072] processor, transfer, processing result, external port, output connector [0055] communication circuit 113/123, transfer a task performance result, external device).

As per claim 4, Kim teaches wherein the task descriptor further contains information indicative of an operation command required for executing the data processing task ([0061] task descriptor, information associated with task processing algorithm [0071] task descriptor, input command associated with input, processing algorithm command); and the at least one scheduler is further configured to acquire the operation command from a memory of the host apparatus based on the information of the operation command ([0072] execute, processor, processing algorithm 322, obtained processing algorithm [0071] task descriptor, input command associated with input, processing algorithm command).
As per claim 6, Powers teaches wherein the at least one data processing engine comprises a plurality of data processing engines ([0045] agent, associated, several computers, executed by a head node of a computing cluster that includes two or more computers), and the scheduler is further configured to select a specific data processing engine from the plurality of processing engines ([0045] agent coordinates the assignment of distributed computing tasks to all of the computers in the computing cluster [0086] availability, computing resources, used, agent, selecting work units [0232] scheduling algorithm, pool of computing resources [0248] select work units and distribute for execution) according to the task descriptor to execute the data processing task corresponding to the task descriptor ([0052] agent, select, work unit, execute, computing resource, based on computing resources' capabilities, processing capability, amount of memory / disk space, bandwidth, availability, [0224] agent, receives a work unit list, provides information on the attributes or requirements of the work units included in the work unit list).

As per claim 7, Kim teaches further comprising: an input buffer and an output buffer corresponding to the data processing engine (fig. 3 input connector 115 output connector 116), wherein the input buffer is configured to buffer operation data ([0072] obtain sensing data via input connector), and the output buffer is configured to buffer data processing results ([0072] transfer the processing result, via output connector 116).

As per claim 9, Powers teaches the invention substantially as claimed, including a task processing system, comprising (fig. 1 distributed processing system 100): a host apparatus (fig. 1 control server 105); and at least one task processing apparatus (fig. 1 pool 110 of computing resources) implemented and coupled to the host apparatus via a communication interface (fig. 1 control server 105 i.e.
host apparatus, pool 110 [0047] control server 105 connected via a communication network with at least one pool 110 of computing resources, server 111 desktop 112 laptop 114 nodes 116) to perform task and data interactions with the host apparatus ([0050] computing resources includes an agent [0051] job, work unit, run on computing resources, [0046] agent manages the execution of work units on its computing node [0224] agent, computing resources, requests, receives, work units list from the control server), wherein the host apparatus is configured to (fig. 1 control server 105):

receive a data processing task from a user program executed on the host apparatus ([0048] control server 105, software application, supports user control and monitoring [0051] users submits one or more jobs to the control server via administrative control 107); allocate the data processing task to a virtual function queue ([0210] queue of pending distributed computing jobs); generate a task descriptor corresponding to the data processing task ([0093] attributes specifying requirement of a work unit, work unit id, sequence, name, job id); transmit the task descriptor to the at least one task processing apparatus for execution ([0049] server 105, job manager, allocating, task, computing resource pool 110 [0077] transferring data for processing work units [0095] agent retrieves list of available work units from the control servers, Job Manager responds with a "job table"; job table includes the length of time that each work unit of a job is expected to take and the requirements of each work unit [0093] attributes specifying requirement of a work unit, work unit id, sequence, name, job id); and receive from the at least one task processing apparatus a data processing result generated after operation data is processed ([0085] agent, communicating work unit results [0077] transferring the results once the application is complete);

wherein the task processing apparatus comprises ([0047] fig. 1 pool 110 computing resources):

a controller configured to query whether there is a data processing task to be executed in the task processing apparatus ([0052] computing resource's agent, queries, control server to identify any work units that need to be processed, agent select appropriate work unit to execute to the computing resources, agent, starts an instance, process [0248] fig. 25 control server/agent module 2535 select work units from work unit queue 2515 for execution to the processor cores), and trigger execution of the data processing task if the data processing task exists ([0219] startup and initialization phase, performed by resource pool [0235] agent receives the selected work units and initiates their execution on the computational resource [0247] computing resource, initialization work units [0052] computing resource's agent, queries, control server to identify any work units that need to be processed, agent, starts an instance, process);

at least one data processing engine configured to process operation data corresponding to the data processing task ([0047] pool 110 computing resources, server, computers, nodes within clusters [0051] job, task, work units, run on one computing resource in pool 110 [0224] agent, computing resource, receive, work unit list, attribute/requirement of the work units [0122] data required for the selected work units, transferred, to the computing resource, process the work unit) according to a configured working mode ([0137] agent, set the priority of the application [0229] agent, selects, and prioritizes work units, for executing work units [0231] agent, adjust, the concurrency attributes of a work unit [0130] hosted applications, run by agents, on the computing resources to complete work units), and generate a data processing result ([0077] application process the work unit and transfer result once the application is complete); and

at least one scheduler implemented and configured to ([0232] agent's associated computing resource, prioritizes the work units, the scheduling algorithm in use on the pool of computing resources i.e. agent acting as scheduler [0085] agent core module 715 manages the activities of the distributed processing system of the computing resource, including fetching descriptions of available work units from the control server [0086] agent core module 715 in selecting appropriate work units to execute [0089] one task of the agent is selecting appropriate work units for execution by the associated computing resource, by comparing attributes of the computing resources with requirements of a work unit):

receive a task descriptor of the data processing task from the host apparatus via the communication interface (fig. 1 control server 105 pool 110 [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0010] agent, request, work units, server, agent manage execution of work units [0224] In step 2405 of method 2400, an agent associated with a computing resource requests a list of available work units from a control server for the distributed processing system. In response to this request, the agent receives a work unit list from the control server in step 2410; the work unit list provides information on the attributes or requirements of the work units included in the work unit list. Work unit attributes can include a Work unit ID; a sequence; a name; a Job ID; one or more File Overrides; substitution attributes; priority values; an affinity; and minimum hardware, software, and application and data requirements for processing the work unit);

configure the working mode of the data processing engine based on the task descriptor after the execution of the data processing task is triggered ([0137] agent, set the priority of the application i.e. setting the working mode processing the work unit on a computing resource, priority determines how the computing resource divided its processing between primary user and the work unit [0122] once the data required for the selected work units, transferred, to the computing resource, agent executes the application, and instructs it to process the work unit, agent, executes application, application, application control object [0229] agent, selects, and prioritizes work units, for executing work units [0231] agent, adjust, the concurrency attributes of a work unit [0130] hosted applications, run by agents, on the computing resources to complete work units);

control transmission of the operation data corresponding to the data processing task from the host apparatus to the data processing engine via the communication interface ([0077] agent, responsible, transferring and installing application and data, for processing work units [0085] agent core module 715, managing activities of the distributed processing system, computing resources, fetching description of available work units from the control server [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0232] agent's associated computing resource, the scheduling algorithm in use on the pool of computing resources i.e. acting as scheduler; msgAgentcheckinresult - sent from the server to the agent, contain the job table for a pool); and

control transmission of the data processing result from the data processing engine to the host apparatus via the communication interface, after the data processing engine has completed the processing of the operation data and generated the data processing result ([0077] agent, run on each individual computing resource, coordinate, control server, agent responsible for transferring, result once the application is complete [0135] agent, determine the progress [0139] agent to determine when the application has completed processing the work unit [0047] control server 105 connected via a communications network with at least one pool 110 of computing resources [0143] message communicated between control servers and agents; msgnotifywork status - to notify the server of the progress/completion of a work unit),

wherein the task processing apparatus is configured to process the data processing task by executing steps of: the scheduler receiving the task descriptor of the data processing task from the host apparatus; the controller triggering execution of the data processing task; the scheduler configuring the working mode of the data processing engine based on the task descriptor; the scheduler controlling transmission of the operation data corresponding to the data processing task from the host apparatus to the data processing engine; the data processing engine processing the operation data according to the configured working mode to generate the data processing result; and the scheduler controlling transmission of the data processing result to the host apparatus (similar mapping as above because these are steps performing the claim elements rejected above).
Powers does not specifically teach the task processing apparatus being implemented as an express card or an acceleration card; allocating the data processing task to a virtual function queue; generating a task descriptor corresponding to the data processing task according to a type of the data processing task; or a scheduler implemented as a hardware circuit.

Kim, however, teaches a task processing apparatus being implemented as an express card or an acceleration card (fig. 1 task processing device 110/120 [0049] task processing device 110 include a processor 111 [0055] processor 111, include one or more CPUs / GPUs i.e. accelerator, e.g. a GPU that may perform GPU-accelerated computing); allocate the data processing task to a virtual function queue ([0074] fig. 4 task pool 412, task descriptors 413-415); generate a task descriptor corresponding to the data processing task according to a type of the data processing task ([0074] task descriptors, input/processing/output information [0061] task, information, expressed, task descriptor, information processing algorithm fig. 4 task pool 412 task descriptor 413); configure the working mode of the data processing engine based on the task descriptor ([0057] configure i.e. mode edge computing device, task processing device edges [0061] task descriptor, task processing algorithm [0079] processing device, process, task, basis of the task descriptor [0121]); and at least one data processing engine configured to process operation data corresponding to the data processing task according to a configured working mode ([0072] task processing device 110, task, processor, process the task, processing input data [0057] configure i.e. mode edge computing device, task processing device edges).

Powers and Kim, in combination, do not specifically teach a scheduler implemented as a hardware circuit.
Blinzer, however, teaches a scheduler implemented as a hardware circuit ([0071] system 100 also includes a hardware scheduler (HWS) 128 for selecting a process from a run list 150 for execution on APD 104).

Claim 10 recites a task processing method with elements similar to claim 1. Therefore, it is rejected for the same rationale. Claim 11 recites the task processing method with elements similar to claim 2. Therefore, it is rejected for the same rationale. Claim 12 recites the task processing method with elements similar to claim 3. Therefore, it is rejected for the same rationale. Claim 13 recites the task processing method with elements similar to claim 4. Therefore, it is rejected for the same rationale. Claim 14 recites the task processing method with elements similar to claim 6, before configuring the working mode of the data processing engine based on the task descriptor (Kim [0014] select edge, task descriptor transferred i.e. selection before processing). Therefore, it is rejected for the same rationale.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Powers in view of Kim, as applied to the claims above, and further in view of George (US 2017/0031723 A1).

As per claim 5, Powers teaches wherein the controller is further configured to poll the plurality of schedulers to query whether there is a data processing task to be executed in the plurality of schedulers ([0052] computing resource's agent, queries, control server to identify any work units that need to be processed, agent select appropriate work unit to execute to the computing resources, agent, starts an instance, process [0248] fig. 25 control server/agent module 2535 select work units from work unit queue 2515 for execution to the processor cores). Powers and Kim, in combination, do not specifically teach wherein the task processing apparatus comprises a plurality of schedulers.
George, however, teaches wherein the task processing apparatus comprises a plurality of schedulers (fig. 2 host 200, schedulers 204, 206, 208; fig. 6 execution environment 600, processing units, schedulers 608). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Powers and Kim with George's teaching of a host comprising multiple schedulers, to improve efficiency and to allow the task processing apparatus to comprise a plurality of schedulers, as in the instant invention. The combination would have been obvious because substituting the processing elements of the task processing device taught by Powers and Kim with a host comprising multiple schedulers for task scheduling, as taught by George, would yield a predictable result with a reasonable expectation of success and improved efficiency, as in the instant invention. Examiner's Note: Applicant is reminded that the paragraphs cited in the references as applied to the claims above are provided for the convenience of the applicant(s); although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or discussed by the examiner. Response to Arguments: The previous interpretation under 112(f) has been withdrawn. The previous rejections under 35 USC 101 (abstract idea) have been withdrawn. Applicant's arguments filed on 03/03/2026 to overcome the 103 rejections have been fully considered, but they are moot in view of the new grounds of rejection.
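The polling behavior at issue in claim 5 can be sketched briefly. The following is a hypothetical Python illustration of a controller polling several schedulers in turn to ask whether any holds a task to be executed; the class and method names are invented for illustration and do not come from the application or the cited references.

```python
from queue import Queue, Empty

class Controller:
    """Toy controller that polls several schedulers in order,
    querying each for a pending data processing task."""
    def __init__(self, schedulers):
        # Each scheduler is modeled as a simple task queue.
        self.schedulers = schedulers

    def poll_once(self):
        # Query each scheduler in turn; return the first pending task.
        for sched in self.schedulers:
            try:
                return sched.get_nowait()
            except Empty:
                continue
        return None  # no scheduler reported a pending task
```

A usage example: with two schedulers where only the second holds a task, `poll_once` skips the empty one and returns the pending task from the second.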
In addition, responses to the arguments are provided for the sake of compact prosecution. Kim teaches the task scheduler 411 is in the master device 410: Examiner respectfully indicates that this argument is moot in view of the new grounds of rejection. Powers teaches the agent performing the scheduling function is an application running on generic computers, whereas the scheduler in claim 1 is implemented by hardware circuitry: Examiner respectfully indicates that Blinzer teaches the hardware scheduler, which can supplement the agent acting as a scheduler as taught by Powers. In contrast to Powers, the scheduler allocates tasks among multiple data processing engines within the same express card or acceleration card: Examiner respectfully indicates that the currently amended claim only requires a task processing apparatus comprising at least one data processing engine. Detailed review of Blinzer reveals that the assertion that the scheduler is implemented as a hardware circuit is not supported: Examiner respectfully indicates that Blinzer clearly teaches a hardware scheduler ([0071] fig. 1A, 128). One of ordinary skill in the art would be able to substitute the agent acting as a scheduler taught by Powers with the scheduler hardware taught by Blinzer to yield a hardware scheduler performing the limitations of the instant invention. Blinzer clearly shows that HWS 128 is located outside the accelerated processing device 104, not included within it: Examiner respectfully points out that Blinzer is cited only for the implementation of a scheduler in hardware and could be applied to the agent acting as a scheduler taught by Powers to implement the agent as a hardware scheduler; a scheduler being implemented in hardware is independent of its location.
Nowhere does Blinzer teach that HWS 128 is configured to "receive a task descriptor of the data processing task from the host apparatus via the communication interface; configure the working mode of the data processing engine based on the task descriptor after the execution of the data processing task is triggered; control transmission of the operation data corresponding to the data": Examiner respectfully indicates that the agent taught by Powers teaches these limitations and is acting as a scheduler. Conclusion: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: AMIN; MINESH B. US-20080189709-A1; CLOSSET; Arnaud US-20150012676-A1; AOYAMA; Toshikazu US-20180074853-A1; Sanghvi; Hetul US-20180189105-A1. Authorization for Internet Communication: Applicant is encouraged to submit an authorization to communicate with the Examiner via the internet by making the following statement (MPEP 502.03): "Recognizing that internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), regular postal mail, or EFS-Web using form PTO/SB/439. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABU ZAR GHAFFARI, whose telephone number is (571) 270-3799. The examiner can normally be reached Monday-Thursday, 9:00-17:00. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at 571-272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000. /ABU ZAR GHAFFARI/ Primary Examiner, Art Unit 2195

Prosecution Timeline

Nov 16, 2022
Application Filed
May 16, 2025
Non-Final Rejection — §101, §103, §112
Aug 19, 2025
Response Filed
Oct 31, 2025
Final Rejection — §101, §103, §112
Jan 04, 2026
Response after Non-Final Action
Feb 03, 2026
Applicant Interview (Telephonic)
Feb 03, 2026
Examiner Interview Summary
Mar 03, 2026
Request for Continued Examination
Mar 12, 2026
Response after Non-Final Action
Apr 02, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602264
DATA CENTER WITH ENERGY-AWARE WORKLOAD PLACEMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12596562
TECHNOLOGIES TO ALLOCATE RESOURCES TO START-UP A FUNCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12596559
TECHNIQUES FOR PERFORMING CONTINUATION WORKFLOWS BY TERMINATING VIRTUAL MACHINE BASED ON RESPONSE TIME EXCEEDING THRESHOLD
2y 5m to grant Granted Apr 07, 2026
Patent 12585493
AUTOMATED SYNTHESIS OF REFERENCE POLICIES FOR RUNTIME MICROSERVICE PROTECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12579046
FIRMWARE-BASED ORCHESTRATION OF ARTIFICIAL INTELLIGENCE (AI) PERFORMANCE PROFILES IN HETEROGENEOUS COMPUTING PLATFORMS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+47.3%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 676 resolved cases by this examiner. Grant probability derived from career allow rate.
