DETAILED ACTION
This Office action is in response to the amendment filed on 2/3/2026.
Claims 1 – 8 and 10 – 20 are amended.
Claims 1 – 20 are pending.
Claims 11 – 20 are no longer interpreted under 35 USC 112(f) in view of the amendment.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 – 4, 6, 10 – 18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al (US 20230147786, hereinafter Zhang), in view of Bao et al (US 20180373540, hereinafter Bao).
As per claim 1, Zhang discloses: A task scheduling method, executed by a terminal device, comprising:
While a game engine in the terminal device is working in a pending stage: generating, by specific task processing systems of the game engine, to-be-executed tasks which are to be processed by the specific task processing systems; transmitting the to-be-executed tasks to a task scheduling system of the game engine, wherein each specific task processing system comprises a configured task processing stage that is part of the pending stage in the game engine; (Zhang figure 2, [0140]: “An operating system determines at least one piece of first information of an application. The first information is used to indicate a first task to be executed by the application.”; [0141]: “a first API of the application is usually in a one-to-one correspondence with a task of the application, that is, one first API of the application is usually used to submit first information of one task to the operating system. When the application needs to execute a plurality of first tasks, the application may be configured with a plurality of first APIs, and submit first information of each first task to the operating system through the first API corresponding to the first task”; [0142]: “after obtaining the first information, the operating system may determine, based on the first information, the at least one first task to be executed by the application.”.)
and allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, processing threads in a central processing unit (CPU) for the to-be-executed tasks based on load conditions of the processing threads in the CPU and running association relationships of the specific task processing systems in the pending stage. (Zhang [0143]: “Step S12: The operating system allocates the at least one first task to a corresponding first task queue.”; [0146]: “For example, when there is a strong time dependency between a task A and a task B, that is, the task B can be executed only after the task A is executed, the operating system allocates the task A and the task B to the serial task queue. In addition, when there is no time dependency relationship between the task A and the task B, that is, the task A and the task B may be executed at the same time, or the other task may be executed after one task is executed, the operating system allocates the task A and the task B to the parallel task queue.”; [0177]: “The operating system determines, based on the priority of the first task queue and a task type corresponding to an idle thread in a first thread pool, a first target task queue to which a thread can be allocated. The first thread pool includes an idle thread created by the operating system based on the current load level of the operating system.”; [0181]: “The operating system allocates the idle thread in the first thread pool to the first target task queue based on a type of the first target task queue.”)
Zhang did not explicitly disclose:
wherein the specific task is configured with running dependency relationship in the configured task processing stage between the each specific task processing system and other task processing systems in the game engine;
However, Bao teaches:
wherein the specific task is configured with running dependency relationship in the configured task processing stage between the each specific task processing system and other task processing systems in the game engine; (Bao [0061])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Bao into that of Zhang so that each specific task processing system is configured with a running dependency relationship, in its configured task processing stage, with other task processing systems in the game engine. Zhang [0145] – [0146] teaches that the dependency being tracked is the time dependency between tasks. However, one of ordinary skill in the art would readily recognize that other forms of task dependency can serve as the constraint factor for the resource allocation and scheduling process, such as the dependencies between hardware components shown in Bao [0061]. Applicant has merely claimed a combination of known parts in the field to achieve the predictable result of improved task scheduling based on load conditions, and the claim is therefore rejected under 35 USC 103.
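For illustration only (this sketch is not part of the claim mapping or the record), the behavior Zhang describes in [0143] – [0146], [0177] and [0181] — routing time-dependent tasks to a serial queue, independent tasks to a parallel queue, and assigning idle threads based on load — can be sketched roughly as follows. The data structures and the least-loaded selection rule are assumptions, not citations:

```python
from collections import deque

def allocate_to_queues(tasks, depends_on):
    """Rough sketch of Zhang [0143]-[0146]: a task with a time dependency
    goes to the serial queue; an independent task goes to the parallel
    queue. `depends_on` maps a task to its prerequisite, or None."""
    serial, parallel = deque(), deque()
    for task in tasks:
        if depends_on.get(task) is not None:
            serial.append(task)
        else:
            parallel.append(task)
    return serial, parallel

def pick_thread(threads):
    """Rough sketch of Zhang [0177]/[0181]: choose an idle thread from the
    pool based on current load; here, the least-loaded idle thread."""
    idle = [t for t in threads if t["idle"]]
    if not idle:
        return None
    return min(idle, key=lambda t: t["load"])
```

The serial queue preserves execution order for dependent tasks, while the parallel queue lets the scheduler dispatch independent tasks to any idle thread.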
As per claim 2, the combination of Zhang and Bao further teach:
The method according to claim 1, wherein the game engine further comprises a plurality of entities, the plurality of entities comprise component data, and the component data comprised in each of the plurality of entities is configured with an identity document (ID) of the entity; and wherein the method further comprises: determining, by the specific task processing systems according to the component data comprised in the plurality of entities in the game engine, entities related to the to-be-executed tasks which are to be processed by the specific task processing systems as task entities, and transmitting IDs of the task entities to the task scheduling system; and transmitting, by the task scheduling system after allocating the processing threads for the to-be-executed tasks from the specific task processing systems, the IDs of the task entities to the processing threads, to cause the processing threads to acquire, based on the IDs of the task entities, the component data comprised in the task entities to execute the to-be-executed tasks. (Zhang figure 3 and [0163] [0168])
As per claim 3, the combination of Zhang and Bao further teach:
The method according to claim 1, wherein each of the specific task processing systems is further configured with a priority; and wherein allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, processing threads in the CPU for the to-be-executed tasks is further based on the priorities corresponding to the specific task processing systems. (Zhang figure 4 and [0174] – [0181])
As per claim 4, the combination of Zhang and Bao further teach:
The method according to claim 3, wherein in response to the CPU being a heterogeneous multi-core CPU, allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, the processing threads in the CPU for the to-be-executed tasks comprises: allocating, by the task scheduling system for a specific task processing system having a priority higher than a first preset level, a processing thread, running of which is supported by an optimal core in the CPU, for the to-be-executed task from the specific task processing system, the optimal core being a core having optimal processing performance in the CPU. (Zhang figure 4 and [0174] – [0181])
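For illustration only (not part of the record), the heterogeneous-CPU allocation recited in claim 4 — giving a task from a system whose priority exceeds a preset level a thread supported by the core with the best processing performance — can be sketched as follows. The threshold value, the `(core_id, performance)` representation, and the fallback rule for lower-priority systems are assumptions:

```python
def assign_core(system_priority, cores, first_preset_level=5):
    """Hypothetical sketch of claim 4: on a heterogeneous multi-core CPU,
    route work from a high-priority system to the "optimal core" (best
    processing performance); route other work to a remaining core."""
    optimal = max(cores, key=lambda c: c[1])  # core with best performance
    if system_priority > first_preset_level:
        return optimal[0]
    others = [c for c in cores if c[0] != optimal[0]]
    return (others or [optimal])[0][0]
```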
As per claim 6, the combination of Zhang and Bao further teach:
The method according to claim 3, wherein the priority of each of the specific task processing systems is determined based on at least one factor of a degree of importance of a task to be processed by each of the specific task processing systems, and a processing resource to be occupied by the task to be processed by each of the specific task processing system. (Zhang [0209])
As per claim 10, the combination of Zhang and Bao further teach:
The method according to claim 1, wherein the generating, respectively by specific task processing systems, to-be-executed tasks which are to be processed by the specific task processing systems, and transmitting the to-be-executed tasks to the task scheduling system comprises: generating, respectively by the specific task processing systems, the to-be-executed tasks which are to be processed by the specific task processing systems, and transmitting, by means of application programming interfaces (APIs), the to-be-executed tasks to the task scheduling system. (Zhang figure 2 and [0140] – [0142])
As per claim 11, it claims substantially similar limitation as claim 1 and is therefore rejected under the same rationale.
As per claim 12, it claims substantially similar limitation as claim 2 and is therefore rejected under the same rationale.
As per claim 13, it claims substantially similar limitation as claim 3 and is therefore rejected under the same rationale.
As per claim 14, it claims substantially similar limitation as claim 4 and is therefore rejected under the same rationale.
As per claim 15, it is the device variant of claim 1 and is therefore rejected under the same rationale.
As per claim 16, it is the device variant of claim 2 and is therefore rejected under the same rationale.
As per claim 17, it is the device variant of claim 3 and is therefore rejected under the same rationale.
As per claim 18, it is the device variant of claim 4 and is therefore rejected under the same rationale.
As per claim 20, it is the device variant of claim 6 and is therefore rejected under the same rationale.
Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang and Bao, and further in view of Fedorova (US 20080134184).
As per claim 5, the combination of Zhang and Bao did not teach:
The method according to claim 3, wherein in response to the CPU being a homogeneous multi-core CPU, allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, the processing threads in the CPU for the to-be-executed tasks comprises: determining, by the task scheduling system according to the load conditions of the processing threads in the CPU, a processing thread having an occupancy rate satisfying a preset condition in the CPU as a preferred processing thread; and allocating, by the task scheduling system for a specific task processing system having a priority higher than a second preset level, the preferred processing thread for the to-be-executed task from the specific task processing system.
However, Fedorova teaches:
The method according to claim 3, wherein in response to the CPU being a homogeneous multi-core CPU, allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, the processing threads in the CPU for the to-be-executed tasks comprises: determining, by the task scheduling system according to the load conditions of the processing threads in the CPU, a processing thread having an occupancy rate satisfying a preset condition in the CPU as a preferred processing thread; and allocating, by the task scheduling system for a specific task processing system having a priority higher than a second preset level, the preferred processing thread for the to-be-executed task from the specific task processing system. (Fedorova [0002])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Fedorova into that of Zhang and Bao so that, in response to the CPU being a homogeneous multi-core CPU, the allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, the processing threads in the CPU for the to-be-executed tasks based on the load conditions of the processing threads in the CPU, the priorities corresponding to the specific task processing systems, and the running association relationships of the specific task processing systems in the pending stage comprises: determining, by the task scheduling system according to the load conditions of the processing threads in the CPU, a processing thread having an occupancy rate satisfying a preset condition in the CPU as a preferred processing thread; and allocating, by the task scheduling system for a specific task processing system having a priority higher than a second preset level, the preferred processing thread for the to-be-executed task from the specific task processing system. Fedorova shows that the claimed limitations are merely commonly known and used methods in parallel task scheduling; Applicant has thus merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 USC 103.
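For illustration only (not part of the record), the homogeneous-CPU variant in claim 5 — marking a thread whose occupancy rate satisfies a preset condition as the preferred thread for high-priority work — can be sketched as follows. The occupancy cap and the lowest-occupancy tie-break are assumptions about what "satisfying a preset condition" could mean:

```python
def preferred_thread(threads, occupancy_cap=0.5):
    """Hypothetical sketch of claim 5: on a homogeneous multi-core CPU,
    a thread whose occupancy rate is below an assumed cap qualifies as
    "preferred"; return the lowest-occupancy candidate, or None if no
    thread satisfies the condition."""
    candidates = [t for t in threads if t["occupancy"] < occupancy_cap]
    if not candidates:
        return None
    return min(candidates, key=lambda t: t["occupancy"])
```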
As per claim 19, it is the device variant of claim 5 and is therefore rejected under the same rationale.
Claim(s) 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang and Bao, and further in view of Nam et al (US 20210132987, prior art part of IDS dated 5/17/2024, hereinafter Nam).
As per claim 7, the combination of Zhang and Bao further teach:
The method according to claim 1, wherein the allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, processing threads in a CPU for the to-be-executed tasks based on load conditions of the processing threads in the CPU and running association relationships of the specific task processing systems in the pending stage comprises: allocating, by the task scheduling system for the to-be-executed tasks from the specific task processing systems, processing thread sets for the to-be-executed tasks; and allocating, by the task scheduling system for each of the to-be-executed tasks, the processing threads in the processing thread set for the to-be-executed task based on the load conditions of the processing threads in the processing thread set corresponding to the to-be-executed task and the execution association relationships respectively corresponding to the to-be-executed sub-tasks in the to-be-executed task. (Zhang [0143]: “Step S12: The operating system allocates the at least one first task to a corresponding first task queue.”; [0146]: “For example, when there is a strong time dependency between a task A and a task B, that is, the task B can be executed only after the task A is executed, the operating system allocates the task A and the task B to the serial task queue. In addition, when there is no time dependency relationship between the task A and the task B, that is, the task A and the task B may be executed at the same time, or the other task may be executed after one task is executed, the operating system allocates the task A and the task B to the parallel task queue.”; [0177]: “The operating system determines, based on the priority of the first task queue and a task type corresponding to an idle thread in a first thread pool, a first target task queue to which a thread can be allocated. The first thread pool includes an idle thread created by the operating system based on the current load level of the operating system.”; [0181]: “The operating system allocates the idle thread in the first thread pool to the first target task queue based on a type of the first target task queue.”)
Zhang and Bao did not explicitly disclose:
wherein the to-be-executed tasks comprise a plurality of to-be-executed sub-tasks and execution association relationships respectively corresponding to the plurality of to-be-executed sub-tasks, and the execution association relationship is used for representing an execution dependency relationship between the corresponding to-be-executed sub-task of the execution association relationship and other to-be-executed sub-tasks in the to-be-executed tasks;
However, Nam teaches:
wherein the to-be-executed tasks comprise a plurality of to-be-executed sub-tasks and execution association relationships respectively corresponding to the plurality of to-be-executed sub-tasks, and the execution association relationship is used for representing an execution dependency relationship between the corresponding to-be-executed sub-task of the execution association relationship and other to-be-executed sub-tasks in the to-be-executed tasks; (Nam [0055]: “when a first query 200 issued from the client is a query for ordering tables T1 and T2 based on a record of column C3, the processor 130 may divide the operation corresponding to the query into one or more tasks. Specifically, the processor 130 may divide an operation for the first query 200 into a first task 210 for scanning the record of table T1, a second task 220 for scanning the record of table T2, and a third task 230 of ordering the records scanned through the first task 210 and the second task 220 based on the records of the column C3 as illustrated in FIG. 2.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Nam into that of Zhang and Bao so that the to-be-executed tasks comprise a plurality of to-be-executed sub-tasks and execution association relationships respectively corresponding to the plurality of to-be-executed sub-tasks, and the execution association relationship is used for representing an execution dependency relationship between the corresponding to-be-executed sub-task of the execution association relationship and other to-be-executed sub-tasks in the to-be-executed tasks. Nam shows that the claimed limitations are merely commonly known and used methods in parallel task scheduling; Applicant has thus merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 USC 103.
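For illustration only (not part of the record), the sub-task structure Nam describes in [0055] — e.g. an ordering step that can run only after two scan steps — is an execution dependency graph, and a valid sub-task execution order can be sketched with a standard topological sort (Kahn's algorithm). The sub-task names below echo Nam's example but are otherwise hypothetical:

```python
from collections import deque

def execution_order(subtasks, deps):
    """Hypothetical sketch of claim 7 / Nam [0055]: order sub-tasks so
    that every sub-task runs after all sub-tasks it depends on.
    `deps` maps a sub-task to the set of sub-tasks it must follow."""
    indegree = {s: len(deps.get(s, ())) for s in subtasks}
    ready = deque(s for s in subtasks if indegree[s] == 0)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for t in subtasks:  # release sub-tasks that were waiting on s
            if s in deps.get(t, ()):
                indegree[t] -= 1
                if indegree[t] == 0:
                    ready.append(t)
    return order
```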
As per claim 8, the combination of Zhang, Bao and Nam further teach:
The method according to claim 7, wherein the to-be-executed tasks further comprise priorities respectively corresponding to the plurality of to-be-executed sub-tasks; and the allocating, by the task scheduling system, the processing threads in the processing thread set for the to-be-executed sub-tasks in the to-be-executed task comprises: determining, by the task scheduling system according to the load conditions of the processing threads in the processing thread set, a processing thread having a lowest occupancy rate in the processing thread set; and allocating, by the task scheduling system for a to-be-executed sub-task having a priority higher than a third preset level, the processing thread having the lowest occupancy rate for the to-be-executed sub-task. (Zhang figure 4 and [0174] – [0181])
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang and Bao, and further in view of Shibata et al (US 20190279602, hereinafter Shibata).
As per claim 9, the combination of Zhang and Bao did not teach:
The method according to claim 1, wherein the task processing stages are divided according to processing of a game frame, and comprise a preUpdate stage, an Update stage and a postUpdate stage corresponding to the game frame; and the preUpdate stage is used for executing a preparation task required for updating the game frame, the Update stage is used for executing an update task of the game frame, and the postUpdate stage is used for executing a finishing task required after the game frame is updated.
However, Shibata teaches:
The method according to claim 1, wherein the task processing stages are divided according to processing of a game frame, and comprise a preUpdate stage, an Update stage and a postUpdate stage corresponding to the game frame; and the preUpdate stage is used for executing a preparation task required for updating the game frame, the Update stage is used for executing an update task of the game frame, and the postUpdate stage is used for executing a finishing task required after the game frame is updated. (Shibata [0053] – [0054])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Shibata into that of Zhang and Bao so that the task processing stages are divided according to processing of a game frame, and comprise a preUpdate stage, an Update stage and a postUpdate stage corresponding to the game frame; and the preUpdate stage is used for executing a preparation task required for updating the game frame, the Update stage is used for executing an update task of the game frame, and the postUpdate stage is used for executing a finishing task required after the game frame is updated. One of ordinary skill in the art can readily see that the GPU is a commonly known and used resource in computer task scheduling, and it is merely an obvious design choice to have the tasks be GPU tasks featuring multiple execution stages; the claim is therefore rejected under 35 USC 103.
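For illustration only (not part of the record), the three-stage frame division recited in claim 9 — preparation, update, and finishing work run in a fixed order per game frame — can be sketched as follows. The stage names follow the claim; the callable-task representation is an assumption:

```python
def run_frame(tasks_by_stage):
    """Hypothetical sketch of claim 9: one game frame is processed in a
    preUpdate stage (preparation), an Update stage (frame update), and a
    postUpdate stage (finishing work), executed strictly in that order."""
    log = []
    for stage in ("preUpdate", "Update", "postUpdate"):
        for task in tasks_by_stage.get(stage, []):
            log.append((stage, task()))
    return log
```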
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 – 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES M SWIFT/Primary Examiner, Art Unit 2196