Prosecution Insights
Last updated: April 18, 2026
Application No. 17/858,833

ARTIFICIAL INTELLIGENCE OPERATION PROCESSING METHOD AND APPARATUS, SYSTEM, TERMINAL, AND NETWORK DEVICE

Non-Final OA (§103, §112)
Filed: Jul 06, 2022
Examiner: PATEL, HIREN P
Art Unit: 2196
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (above average; 336 granted / 428 resolved; +23.5% vs TC avg)
Interview Lift: +37.7% (strong), based on resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 13 currently pending
Career History: 441 total applications across all art units

Statute-Specific Performance

§101: 15.4% (-24.6% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 428 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is in response to an RCE filed 01/30/2026. Claims 1, 3, 10 and 15 are currently amended via Applicant's amendment. Claims 4-6, 13, 17 and 18 have been canceled. Claims 1-3, 7-12, 14-16 and 19-20 are currently pending. Claims 1, 10 and 15 are independent claims. This Office Action is made non-final after the RCE.

Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/30/2026 has been entered.

Examiner Notes

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or explained by the Examiner. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 12/03/2025 and 03/10/2026 are acknowledged. The submissions are in compliance with the provisions of 37 CFR 1.97; accordingly, the information disclosure statements are being considered by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 7-12, 14-16 and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per claims 1, 10 and 15, each recites the limitation "wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model," which fails to comply with the written description requirement. The specification does not provide adequate written description support for this limitation. Applicant cites specification paragraphs [0116-0117] as support. These paragraphs discuss switching an AI/ML model based on "varying of an AI/ML computing power of a terminal," as illustrated in Figs. 10, 12, 14, and 15. The specification discloses that the terminal may send information about "a computing power of the terminal for performing the AI/ML task" to the network device and characterizes this as "an available computing resource for performing AI/ML task." However, there is no disclosure—express, implicit, or inherent—of computing power that is "dedicated" to the AI/ML model in the sense of reserved, exclusive, or specifically partitioned resources (e.g., dedicated CPU cores) for the AI/ML model. While the specification (paragraph [0077]) recites "the computing power of the terminal for performing the AI/ML task refers to an allocated computing resource of the terminal for performing the AI/ML operation", it does not provide any description of the structure or mechanism of such allocation, much less any "dedication" to a particular model. A POSITA would not recognize that the inventor possessed the full scope of this limitation at the time of filing. The amendment thus introduces new matter unsupported by the original disclosure.

The dependent claims 2-3, 7-9, 11-12, 14, 16, 19 and 20 are also rejected by virtue of their dependency on respective independent claim 1, 10 or 15.

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 7-12, 14-16 and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

As per claims 1, 10 and 15, the limitation "wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model" renders these claims indefinite. The specification does not define "dedicated" or describe any concrete mechanism by which computing resources of the terminal are dedicated to the AI/ML model (e.g., reserved hardware, isolation techniques, or partitioning schemes). The disclosure refers to "computing power of the terminal for performing the AI/ML task" (paragraphs [0017, 0021, 0032, 0077]), but provides no guidance on what "dedicated" adds to this concept. It is unclear whether "dedicated" requires (1) exclusive hardware reservation, (2) logical allocation for the duration of the task, or merely (3) general usage "for performing the AI/ML task." These differing interpretations lead to unclear claim scope.

For examination purposes, and to advance prosecution notwithstanding this indefiniteness, the Examiner interprets "wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model" as referring to computing resources of the terminal provided for performing the AI/ML task. This interpretation does not waive the indefiniteness rejection.
The dependent claims 2-3, 7-9, 11-12, 14, 16, 19 and 20 are also rejected by virtue of their dependency on respective independent claim 1, 10 or 15.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 9-12, 14-16 and 20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Andreas Meier (US 2019/0258243 A1) (hereinafter Meier) in view of Ahn et al. (US 2016/0119410 A1) (hereinafter Ahn) and Song et al. (US 2019/0318245 A1) (Song) and further in view of Yang (US 2019/0095212 A1) (Yang).

As per claim 1, Meier discloses (Currently amended) An artificial intelligence operation processing method, performed by a terminal (e.g. Meier: [0035][0042]), comprising: receiving, by the terminal, indication information sent by a network device, wherein the indication information is used for indicating information about an Artificial Intelligence/Machine Learning (AI/ML) task to be performed by the terminal (e.g. Meier: [Figs. 1 and 2] [0041-0042] discloses a computation module of a vehicle receives a partial task of a distributed data processing from a communication module of a central office. The data processing corresponds to a distributed machine learning algorithm. The partial task can comprise information about instructions of the partial task and information about data of the partial task.
[0049] discloses the vehicle receives the program to be processed and the data required therefor from a TSP. [0057] discloses the TSP pushes the program and data directly to the vehicle. [0074] discloses the selected vehicle receives a job from the TSP which provides the partial tasks to be performed by the selected vehicle. [Fig. 4] [0087-0088] discloses the computing system of the central office sends a job for retrieval of program and data to a vehicle, and the vehicle receives information indicating the task that should be performed by the vehicle.); wherein the indication information is used for indicating part or all of operations to be performed by the terminal in the AI/ML task and an AI/ML model used by the terminal to perform the AI/ML task (e.g. Meier: [0042] [0049] [0057] discloses providing information comprising instructions, program code and data required to perform the partial task; the information indicates which instructions to perform to process the assigned partial task. Thus, by providing information comprising instructions, program code and data required to perform the partial task, Meier implies providing information that indicates part of the operations to be performed by the terminal.).

Meier does not expressly disclose wherein the indication information used for indicating part or all of AI/ML acts to be performed by the terminal comprises a ratio between acts to be performed by the network device and the terminal in the AI/ML task; and wherein the method further comprises: switching the AI/ML model, according to varying of an AI/ML computing power of the terminal; or switching the AI/ML model, according to varying of a realizable communication rate.

However, Ahn discloses wherein the indication information used for indicating part or all of AI/ML acts to be performed by the terminal comprises a ratio between acts to be performed by the network device and the terminal in the AI/ML task (e.g.
Ahn: [0018-0019] discloses the task performing patterns may include a task allocation ratio between the host device and the selected guest device [terminal and network device]. [0072] discloses a task performing pattern includes information whether to perform the task jointly, a task allocation ratio, and information regarding the guest device to which to allocate the task. [0092-0093] discloses determining a task allocation ratio between the host device and the guest device. The task allocation ratio may be determined based on various factors. After the task allocation ratio is determined, the host device requests the guest device to perform the divided task according to the determined ratio. Also see [0027-0028] [0085][0115].).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of determining a task allocation ratio between a host device and a guest device, and including the task allocation ratio in the task performing pattern, as taught by Ahn into Meier because it would allow dividing the task between the host and the guest device according to measured network performances of the devices and also provide an indication for the guest to only download the portion of the content required to perform the fraction of the task allocated to the guest device according to the ratio (See Ahn: [0092-0093][0115]).

The combination of Meier and Ahn does not expressly disclose wherein the method further comprises: switching the AI/ML model, according to varying of an AI/ML computing power of the terminal; or switching the AI/ML model, according to varying of a realizable communication rate. However, Song discloses wherein the method further comprises: switching the AI/ML model, according to varying of an AI/ML computing capability [computing power of the terminal]; or switching the AI/ML model, according to varying capability [of a realizable communication rate] (e.g.
Song: [0127] discloses trimming the neural network model, so that a hardware resource required when the neural network model (that is, the second neural network model) delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device. [0005-0006] discloses the terminal-side device receives a second neural network model that is obtained by trimming a first neural network model such that the hardware resource required when the second neural network model runs is within an available hardware capability range of the terminal-side device. [0032] discloses receiving indication information used to indicate an available hardware resource capability of the terminal-side device. [0033] discloses trimming a first neural network model based on the available hardware resource capability of the terminal-side device, and delivering the trimmed neural network model (second model) to the terminal-side device, so that the hardware resource required when the trimmed neural network model runs on the terminal-side device is within the available hardware resource capability range of the terminal-side device. [0077-0080] discloses receiving an available hardware resource capability of the terminal-side device including a computing capability related to CPU performance information and a storage capability related to storage performance information. [0169-0176] discloses dynamically updating [switching] the neural network model on the terminal-side device. Also see [0018-0021] [0042] [0094-0104].

Thus, Song discloses switching the neural network model based on varying computing capability of the terminal-side device by trimming a first neural network model according to the resource capability range of the terminal and delivering the trimmed model (a second neural network model) to the terminal-side device. It is implied that the available hardware resource capability may be any computing resource, including the remaining computing power of the terminal device.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of dynamically updating or switching the neural network model based on the remaining or available hardware resource capability of the terminal-side device as taught by Song into the combination of Meier and Ahn because it would improve performance of processing a neural network-related application on the terminal-side device and help enhance expansion of an intelligent application capability of the terminal-side device (See Song: [0004, 0006] [0019] [0033] [0042] [0086] [0104]).

The combination of Meier, Ahn and Song strongly implies that an available hardware resource or computing capability may include varying computing power of the terminal device or varying communication rate. The combination does not expressly disclose that computing capability includes varying of an AI/ML computing power of the terminal. However, Yang discloses changing [switching] the AI/ML model, according to varying of an AI/ML computing power of the terminal (e.g. Yang: [0060-0062] expressly discloses computing capability refers to at least one of the processing capacity of a CPU [computing power], storage capacity of a memory, a bandwidth of data transmission, an amount of usable power [power of the terminal], amounts of usable hardware resources (e.g., 2 cores available for use) [varying computing power], a remaining quantity of a battery, etc. The CPU load, amounts of usable hardware and other computing capabilities/resources may dynamically change [0072]. [0082] further discloses dynamically changing [updating or switching] a neural network model in response to a change in the computing load and the computing capability of the device by changing the size or number of parallel inputs or changing a batch mode of the neural network model. [Fig.
4] [0067] discloses a hybrid manager that inputs load/capacity and outputs params (batch mode, I/O size, # of instances), which changes NN model execution. This implies changing the NN model, for example, switching from a low-batch to a high-batch model. Thus, Yang discloses changing NN parameters based on a change in the load/capacity of the device.).

The combination further discloses wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model (e.g. Song: [Abstract] [0005-0006] [0018-0019] discloses ensuring a hardware resource required when the second neural network model or the trimmed neural network model runs on the terminal-side device is within an available hardware resource capability range of the terminal-side device. [0027] [0033] discloses trimming the neural network model, so that a hardware resource (e.g., computation amount) required when the trimmed/second neural network model runs on the terminal-side device is within the available hardware resource capability range of the terminal-side device. [0077-0080] discloses an available hardware resource capability of the terminal-side device is a computing capability that is related to CPU performance of the terminal-side device. Thus, the available hardware resource of the terminal-side device is dedicated/allocated to run the trimmed/second neural network model on the terminal-side device. Yang: [0060-0062] also discloses the computing capability refers to processing capacity of a CPU, amounts of usable hardware resources (e.g., cores available for use). This available hardware resource is dedicated/allocated to a neural network model to perform processing. Thus, Song and Yang expressly disclose that computing power (CPU, core or computation amount) of the terminal-side device is dedicated/allocated to the AI/ML model.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of adaptively changing the neural network model in response to a change in the computing load and the computing capability, including a bandwidth of data transmission or remaining quantity of battery/usable power, as taught by Yang into the combination of Meier, Ahn and Song because it would enable the neural network system to have optimal performance in a computing environment based on the computing load and capability (See Yang: [0082] [0097]).

As per claim 2, the combination of Meier, Ahn, Song and Yang discloses (Previously Presented) The method according to claim 1 [See rejection to claim 1 above], further comprising: performing, by the terminal, part or all of operations in the AI/ML task to be performed according to the indication information (e.g. Meier: [0041-0042] [0045] discloses the selected vehicle executes partial tasks according to received instructions. The computation module fetches the partial task and the data of the partial task on the basis of the provided reference to the instructions and the reference to the data. The computation module can carry out the computation of the result of the partial task at least partly (or wholly) on the compute units for specialized computations. [Abstract] discloses apparatuses/methods for processing a partial task; the computation module receives a partial task and computes the partial task of the distributed data processing to obtain a result of the partial task. [0012-0013] discloses providing the partial tasks to the vehicle, and the partial tasks are processed by the compute units of the vehicle. Also see [0018] [0026-0027] [0049] [0053] [0058] [0079] [0088][Figs. 2A, 4 and related description]. Ahn: [0018-0019] also discloses the task performing patterns may include a task allocation ratio between the host device and the selected guest device [terminal and network device].
[0072] discloses a task performing pattern includes information whether to perform the task jointly, a task allocation ratio, and information regarding the guest device to which to allocate the task. [0092-0093] discloses determining a task allocation ratio between the host device and the guest device. The task allocation ratio may be determined based on various factors. After the task allocation ratio is determined, the host device requests the guest device to perform the divided task according to the determined ratio. Also see [0027-0028] [0085][0115].).

As per claim 3, the combination of Meier, Ahn, Song and Yang discloses (Currently amended) The method according to claim 1 [See rejection to claim 1 above], wherein the indication information is further used for indicating: a parameter set of the AI/ML model used by the terminal to perform the AI/ML task (e.g. Meier: [0042] discloses the partial task comprises information about instructions of the partial task and information about data of the partial task. The information about the partial task can comprise a reference to the instructions of the partial task and a reference to the data of the partial task. [0049] [0057] discloses the TSP can initiate the job in the vehicle by providing or downloading the program to be processed and the data required therefor. [0088] discloses providing a job for retrieval of program and data to the vehicle, to be processed by the vehicle. Thus, the information provided to the vehicle includes program code, data and instructions required to perform the partial tasks. Song: [0005-0008] [0024-0026] further discloses updating neural network parameters of the neural network model and delivering the updated parameters and neural network model to the terminal-side device for processing a cognitive task on the terminal-side device.).
As per claim 7, the combination of Meier, Ahn, Song and Yang discloses (Original) The method according to claim 1 [See rejection to claim 1 above], further comprising: sending, by the terminal, at least one piece of following information to the network device for generating the indication information by the network device: a computing power of the terminal for performing the AI/ML task, a storage space of the terminal for performing the AI/ML task, a battery resource of the terminal for performing the AI/ML task, or a communication requirement of the terminal for performing the AI/ML task (e.g. Meier: [0046] discloses the computation module of the vehicle is configured to provide a notification about an availability or non-availability of the vehicle to the computer of the central office. [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time. [0059] [0061] discloses receiving information about the vehicle; the information includes system capacity utilization of the computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses the vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Song: [0077-0080] discloses receiving an available hardware resource capability of the terminal-side device including a computing capability related to CPU performance information and a storage capability related to storage performance information. Also see [0094-0104].
Yang: [0060-0062] expressly discloses computing capability refers to at least one of the processing capacity of a CPU [computing power], storage capacity of a memory, a bandwidth of data transmission [communication rate], an amount of usable power [power of the terminal], a system power state, a remaining quantity of a battery, etc. Also see [0067] [0072].).

As per claim 9, the combination of Meier, Ahn, Song and Yang discloses (Original) The method according to claim 3 [See rejection to claim 3 above]. Song further discloses wherein the AI/ML model is a neural network-based model (e.g. Song: [Abstract] [0005-0007] discloses the terminal-side device receives a neural network model for processing a cognitive task on the terminal side based on the neural network model. Also see [0009-0013] [0017-0020] [0022-0023] [0068] [0074] [0085].). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of providing a neural network-based model to the terminal device as taught by Song into Meier because it would provide a technical solution in which a terminal-side-device-specific neural network model, based on the available hardware resources of the terminal-side device, processes the cognitive task. This will ensure and improve accuracy of processing the cognitive task by the terminal-side device (See Song: [0011-0013] [0023]).

As per claim 10, Meier discloses An artificial intelligence operation processing method, performed by a network device (e.g. Meier: [0035][0042]), comprising: determining, by the network device, information about an Artificial Intelligence/Machine Learning (AI/ML) task to be performed by a terminal (e.g. Meier: [0046] discloses the computation module of the vehicle is configured to provide a notification about an availability or non-availability of the vehicle to the computer of the central office.
[0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time. [0059] [0061] discloses receiving information about the vehicle; the information includes system capacity utilization of the computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses the vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. [0073] [0087] discloses task requirements may include the number of necessary compute units, a stable network connection, etc. The partial tasks to be performed by the selected vehicle are determined based on the task requirements and available capacity of the vehicle.); and sending, by the network device, indication information to the terminal, wherein the indication information is used for indicating the information about the AI/ML task to be performed by the terminal (e.g. Meier: [Figs. 1 and 2] [0041-0042] discloses a computation module of a vehicle receives a partial task of a distributed data processing from a communication module of a central office. The data processing corresponds to a distributed machine learning algorithm. The partial task can comprise information about instructions of the partial task and information about data of the partial task. [0049] discloses the vehicle receives the program to be processed and the data required therefor from a TSP. [0057] discloses the TSP pushes the program and data directly to the vehicle.
[0074] discloses the selected vehicle receives a job from the TSP which provides the partial tasks to be performed by the selected vehicle. [Fig. 4] [0087-0088] discloses the computing system of the central office sends a job for retrieval of program and data to a vehicle, and the vehicle receives information indicating the task that should be performed by the vehicle.); wherein the indication information is used for indicating part or all of operations to be performed by the terminal in the AI/ML task and an AI/ML model used by the terminal to perform the AI/ML task (e.g. Meier: [0042] [0049] [0057] discloses providing information comprising instructions, program code and data required to perform the partial task; the information indicates which instructions to perform to process the assigned partial task. Thus, by providing information comprising instructions, program code and data required to perform the partial task, Meier implies providing information that indicates part of the operations to be performed by the terminal.).

Meier does not expressly disclose wherein the indication information used for indicating part or all of AI/ML acts to be performed by the terminal comprises a ratio between acts to be performed by the network device and the terminal in the AI/ML task; and wherein the method further comprises: indicating, according to varying of an AI/ML computing power of a terminal, the terminal to switch the AI/ML model.

However, Ahn discloses wherein the indication information used for indicating part or all of AI/ML acts to be performed by the terminal comprises a ratio between acts to be performed by the network device and the terminal in the AI/ML task (e.g. Ahn: [0018-0019] discloses the task performing patterns may include a task allocation ratio between the host device and the selected guest device [terminal and network device].
[0072] discloses a task performing pattern includes information whether to perform the task jointly, a task allocation ratio, and information regarding the guest device to which to allocate the task. [0092-0093] discloses determining a task allocation ratio between the host device and the guest device. The task allocation ratio may be determined based on various factors. After the task allocation ratio is determined, the host device requests the guest device to perform the divided task according to the determined ratio. Also see [0027-0028] [0085][0115].).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of determining a task allocation ratio between a host device and a guest device, and including the task allocation ratio in the task performing pattern, as taught by Ahn into Meier because it would allow dividing the task between the host and the guest device according to measured network performances of the devices and also provide an indication for the guest to only download the portion of the content required to perform the fraction of the task allocated to the guest device according to the ratio (See Ahn: [0092-0093][0115]).

The combination of Meier and Ahn does not expressly disclose wherein the method further comprises: indicating, according to varying of an AI/ML computing power of a terminal, the terminal to switch the AI/ML model. However, Song discloses wherein the method further comprises: indicating, according to varying of an AI/ML computing capability [computing power of a terminal], the terminal to switch the AI/ML model (e.g. Song: [0127] discloses trimming the neural network model, so that a hardware resource required when the neural network model (that is, the second neural network model) delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device.
[0005-0006] discloses terminal-side device receives a second neural network model that is obtained by trimming a first neural network model such that the hardware resource required when the second neural network model runs is within an available hardware capability range of the terminal-side device. [0032] discloses receiving indication information used to indicate an available hardware resource capability of the terminal-side device. [0033] discloses trimming a first neural network model based on the available hardware resource capability of the terminal side device, and delivering the trimmed neural network model (second model) to the terminal-side device, so that the hardware resource required when the trimmed neural network model delivered to the terminal device runs is within the available hardware resource capability range of the terminal device. [0077-0080] discloses receiving an available hardware resource capability of the terminal-device including a computing capability related to CPU performance information, a storage capability related to storage performance information. [0169-0176] discloses dynamically updating [switching] neural network model on the terminal-side device. Also see [0018-0021] [0042] [0094-0104]. Thus, Song discloses switching the neural network model based on varying computing capability of terminal-device by trimming a first neural network model according to resource capability range of the terminal and delivering the trimmed model (a second neural network model) to the terminal-side device. It is implied that the available hardware resource capability may be any computing resource including remaining computing power of the terminal device.).
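The capability-aware model selection that Song is cited for can be sketched as follows. This is purely illustrative of the mechanism (a server picking a model variant whose resource demand fits within the capability the terminal reports); the variant names, cost figures, and function names are assumptions, not taken from Song.

```python
# Illustrative sketch only: pick the most capable model variant whose
# resource demand fits within the hardware resource capability reported
# by the terminal-side device. All names/figures here are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelVariant:
    name: str
    compute_cost: float   # e.g., MFLOPs required at inference (assumed unit)
    memory_cost: float    # e.g., MB of RAM required (assumed unit)

def select_variant(variants, compute_budget, memory_budget):
    """Return the most capable variant that fits the terminal's budget."""
    feasible = [v for v in variants
                if v.compute_cost <= compute_budget
                and v.memory_cost <= memory_budget]
    if not feasible:
        raise ValueError("no variant fits the reported capability")
    # Prefer the costliest feasible variant (a rough proxy for accuracy).
    return max(feasible, key=lambda v: v.compute_cost)

variants = [
    ModelVariant("full", compute_cost=900.0, memory_cost=512.0),
    ModelVariant("trimmed", compute_cost=300.0, memory_cost=128.0),
    ModelVariant("tiny", compute_cost=50.0, memory_cost=32.0),
]
# Terminal reports its available capability; the server delivers the fit.
chosen = select_variant(variants, compute_budget=400.0, memory_budget=256.0)
print(chosen.name)  # trimmed
```

Re-running the selection whenever the reported capability changes is what yields the "switching" behavior the rejection attributes to the combination.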
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of dynamically updating or switching neural network model based on remaining or available hardware resource capability of the terminal-side device as taught by Song into the combination of Meier and Ahn because it would improve performance of processing a neural network-related application on the terminal-side device, and help enhance expansion of an intelligent application capability of the terminal-side device. (See Song: [0004, 0006] [0019] [0033] [0042] [0086] [0104]). The combination of Meier, Ahn and Song strongly implies that an available hardware resource or computing capability may be any computing capability including a varying computing power of the terminal device or varying communication rate. The combination does not expressly disclose computing capability includes varying of an AI/ML computing power of the terminal; or varying of a realizable communication rate. However, Yang discloses indicating, according to varying of an AI/ML computing power of a terminal, the terminal to change [switch] the AI/ML model; (e.g. Yang: [0060-0062] expressly discloses computing capability refers to at least one of the processing capacity of a CPU [computing power], storage capacity of a memory, a bandwidth of data transmission, an amount of usable power [power of the terminal], amounts of usable hardware resources (e.g., 2 cores available for use) [varying computing power], a remaining quantity of a battery, etc. The CPU load, amounts of usable hardware and other computing capability/resources may dynamically change [0072]. [0082] further discloses dynamically changing [updating or switching] a neural network model in response to a change in the computing load and the computing capability of the device by changing size of number of parallel inputs or changing a batch mode of the neural network model. [Fig. 
4] [0067] discloses hybrid manager inputs load/capacity and outputs params (batch mode, I/O size, # of instances), which changes NN model execution. This implies changing the NN model, for example, switching from a low-batch to a high-batch model. Thus, Yang discloses changing NN parameters based on change in load/capacity of the device.). The combination further discloses wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model (e.g. Song: [Abstract] [0005-0006] [0018-0019] discloses ensuring a hardware resource required when the second neural network model or the trimmed neural network model runs on the terminal-side device is within an available hardware resource capability range of the terminal-side device. [0027] [0033] discloses trimming neural network model, so that a hardware resource (e.g., computation amount) required when the trimmed/second neural network model runs on the terminal-side device is within the available hardware resource capability range of the terminal-side device. [0077-0080] discloses an available hardware resource capability of the terminal-side device is a computing capability that is related to CPU performance of the terminal-side device. Thus, the available hardware resource of the terminal-side device is dedicated/allocated to run the trimmed/second neural network model on the terminal-side device. Yang: [0060-0062] also discloses the computing capability refers to processing capacity of a CPU, amounts of usable hardware resource (e.g., cores available for use). This available hardware resource is dedicated/allocated to a neural network model to perform processing. Thus, Song and Yang expressly disclose computing power (CPU, core or computation amount) of terminal-side device is dedicated/allocated to the AI/ML model.).
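The dynamic-switching behavior Yang is cited for can be sketched as an event-driven re-selection: when the terminal's available computing capability changes, the runtime re-picks a model configuration and switches if the choice differs. This is a hypothetical sketch of that mechanism; the tier names and core thresholds are assumptions, not from Yang.

```python
# Hypothetical sketch of capability-driven model switching: all tier
# names and thresholds are assumed for illustration only.
def pick_model(available_cores: int) -> str:
    """Map currently usable cores to a model configuration tier."""
    if available_cores >= 4:
        return "high-batch"
    if available_cores >= 2:
        return "low-batch"
    return "single-input"

def on_capability_change(current_model: str, available_cores: int):
    """Handle a capability-change event; return (model, switched)."""
    target = pick_model(available_cores)
    return target, target != current_model

# Load rises, usable cores drop from 4 to 2: switch high-batch -> low-batch.
model, switched = on_capability_change("high-batch", available_cores=2)
print(model, switched)  # low-batch True
```

The point of the sketch is the trigger, not the tiers: the model choice is re-evaluated in response to a *change* in capability, which is what the rejection reads onto "switching according to varying of an AI/ML computing power."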
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of adaptively changing neural network model in response to change in computing load and the computing capability including a bandwidth of data transmission or remaining quantity of battery/usable power as taught by Yang into the combination of Meier, Ahn and Song because it would enable the neural network system to have optimal performance in a computing environment based on the computing load and capability (See Yang: [0082] [0097]). As per claim 11, the combination of Meier, Ahn, Song and Yang discloses (Original) The method according to claim 10 [See rejection to claim 10 above], wherein determining, by a network device, information about an AI/ML task to be performed by a terminal, comprises: acquiring at least one piece of following information: a computing power of the terminal for performing the AI/ML task, a storage space of the terminal for performing the AI/ML task, a battery resource of the terminal for performing the AI/ML task, or a communication requirement of the terminal for performing the AI/ML task; and determining, by the network device according to the acquired information, the information about the AI/ML task to be performed by the terminal (e.g. Meier: [0046] discloses computation module of vehicle is configured to provide a notification about an availability or non-availability of the vehicle to computer of the central office. [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time.
[0059] [0061] discloses receiving information about vehicle, the information includes system capacity utilization of computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle and prioritization of vehicle, to be selected for partial tasks. [0064-0065] discloses vehicle may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. [0073] [0087] discloses task requirements may include the number of necessary compute units, stable network connection, etc. The partial tasks to be performed by the selected vehicle are determined based on task requirement and available capacity of the vehicle. Song: [0032] further discloses receiving indication information used to indicate an available hardware resource capability of the terminal-side device. [0077-0080] discloses receiving an available hardware resource capability of the terminal-device including a computing capability related to CPU performance information, a storage capability related to storage performance information. Yang: [0060-0062] expressly discloses computing capability refers to at least one of the processing capacity of a CPU, storage capacity of a memory, a bandwidth of data transmission [communication rate], an amount of usable power [computing power of the terminal], a system power state, a remaining quantity of a battery, etc. Also see [0067] [0072].). As per claim 12, the combination of Meier, Ahn, Song and Yang discloses (Previously Presented) The method according to claim 10 [See rejection to claim 10 above], wherein the indication information is further used for indicating: a parameter set of the AI/ML model used by the terminal to perform the AI/ML task (e.g.
Meier: [0042] discloses the partial task comprises information about instructions of the partial task and information about data of the partial task. The information about the partial task can comprise reference to the instruction of the partial task and reference to the data of the partial task. [0049] [0057] discloses TSP can initiate the job in the vehicle by providing or downloading the program to be processed and the data required therefor. [0088] discloses providing a job for retrieval of program and data to the vehicle, to be processed by the vehicle. Thus, the information provided to the vehicle includes program code, data and instructions required to perform the partial tasks. Song: [0005-0008] [0024-0026] further discloses updating neural network parameter of the neural network model and delivering the updated parameter and neural network model to terminal-side device for processing cognitive task on the terminal-side device.). As per claim 14, the combination of Meier, Ahn, Song and Yang discloses (Previously Presented) The method according to claim 10 [See rejection to claim 10 above], further comprising: after sending the indication information to the terminal, performing, by the network device, an AI/ML operation that matches an AI/ML operation performed by the terminal; wherein an AI/ML operation that matches an AI/ML operation performed by the terminal, comprising: a part of AI/ML operations of the AI/ML task are performed by the terminal, and a remaining part of the AI/ML task is performed by the network device (e.g. Meier: [Figs. 1 and 2] [0041-0042] discloses a computation module of vehicle receives a partial task of a distributed data processing from a communication module of central office. The data processing corresponds to a distributed machine learning algorithm. The partial task can comprise information about instructions of the partial task and information about data of the partial task.
[0049] discloses vehicle receives program to be processed and the data required therefor from a TSP. [0057] discloses the TSP pushes the program and data directly to the vehicle. [0074] discloses selected vehicle receives a job from the TSP which provides the partial tasks to be performed by the selected vehicle. [Fig. 4] [0087-0088] discloses computing system of central office sends a job retrieval of program and data to a vehicle and the vehicle receives information indicating the task that should be performed by the vehicle. Ahn: [0018-0019] further discloses the task performing patterns may include a task allocation ratio between the host device and the selected guest device. [0072] discloses a task performing pattern includes information whether to perform the task jointly, a task allocation ratio, and information regarding guest device to which to allocate the task. [0092-0093] discloses determining a task allocation ratio between the host device and the guest device. The task allocation ratio may be determined based on various factors. After the task allocation ratio is determined, the host device requests the guest device to perform the divided task according to the determined ratio. Also see [0027-0028] [0085][0115]. Song: [0005] also discloses cloud side device trims neural network model based on available hardware resource capability of the terminal-side device and sends it to the terminal-side device. The first neural network model is used on the cloud-side device to process cognitive computing task, and the second neural network model is used on the terminal-side device to process the cognitive computing task. [0011] discloses that the cognitive accuracy tolerance, which represents the expected accuracy of processing the computing task by the terminal side, meets the expected accuracy of the cloud-side device.
[0013] discloses matching/determining accuracy of processing the cognitive computing task by using the second neural network model delivered by the cloud-side device to the terminal-side device is consistent with accuracy corresponding to the cognitive accuracy tolerance.). As per claims 15, 16 and 20, these are apparatus/system claims having similar limitations as cited in method claims 1, 3 and 9, respectively. Thus, claims 15, 16 and 20 are also rejected under the same rationale as cited in the rejection of claims 1, 3 and 9, respectively. Claims 8 and 19 are rejected under AIA 35 U.S.C. 103 as being unpatentable over the combination of Meier, Ahn, Song and Yang in view of Pang et al. (US 2019/0327593 A1) (hereinafter Pang). As per claim 8, the combination of Meier, Ahn, Song and Yang discloses (Original) The method according to claim 1 [See rejection to claim 1 above], but does not expressly disclose wherein the indication information sent by the network device is received by receiving at least one piece of following information: Downlink Control Information (DCI), a Medium Access Control Control Element (MAC CE), high layer configuration information, or application layer control information. However, Pang discloses wherein the indication information sent by the network device is received by receiving at least one piece of following information: Downlink Control Information (DCI), a Medium Access Control Control Element (MAC CE), high layer configuration information, or application layer control information (e.g. Pang: [0076] discloses the D2D communication method includes: sending, by the network device, downlink control information to the receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device.
[0127-0128] discloses the network device sends downlink control information to the receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device. [0156] discloses receiving unit is configured to receive downlink control information sent by the network device.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine well-known method/system of D2D communication that includes sending, by the network device, downlink control information to the receiving device as taught by Pang into the combination of Meier, Ahn, Song and Yang because it would enable communication between network device and receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device (See Pang: [0076] [0127-0128]). As per claim 19, this is an apparatus/system claim having similar limitations as cited in method claim 8. Thus, claim 19 is also rejected under the same rationale as cited in the rejection of claim 8. Response to Arguments Applicant’s arguments with respect to 35 U.S.C. § 103 have been fully considered but they are not persuasive and/or moot in view of new grounds of rejection (112(a)/(b)) necessitated by the amendment. Applicant argues with respect to amended independent claims 1, 10 and 15 that the cited prior art fails to disclose “switching the AI/ML model, according to varying of an AI/ML computing power of the terminal; wherein the AI/ML computing power of the terminal is dedicated to the AI/ML model” as amended.
Specifically Applicant argues that Song “only describes that in the cloud-side, the neural network model is trimmed/updated within the scope of resource capacity, and updated neural network can still be used when transmitted to the terminal side.” Applicant asserts that the amended claims require “the model to be updated only according to the dedicated AI/ML computing power of the terminal,” which is very different from that of Song. Applicant further argues that Yang “only describes the contents of the computing capability,” which is “irrelevant to switching the AI/ML model,” and that neither Meier nor Ahn teaches model switching. Examiner’s Response (a): Examiner respectfully disagrees with Applicant’s assertion that the combination of cited references does not disclose features recited in amended independent claims 1, 10 and 15. These arguments are not persuasive. First, Song expressly teaches dynamically updating or switching a neural network model on the terminal-side device based on the terminal’s available hardware resource capability (including computing capability). See Song [0169-0176] (dynamic updating of neural network model on terminal-side); [0077-0080] (trimming model so that “a hardware resource required when the...model...runs [is] within the available hardware resource capability range of the terminal-side device”). Song receives an “available hardware resource capability of the terminal-device including a computing capability related to CPU performance information.” Id. [0077-0080]. Thus, Song teaches model switching according to the terminal’s varying computing capability, not merely static cloud-side trimming.
As discussed above in detail, Yang cures any purported gap and elaborates: “computing capability refers to at least one of the processing capacity of a CPU...a bandwidth of data transmission, an amount of usable power (computing power) of the terminal...[or] a remaining quantity of a battery”; the system “dynamically changes, updates, or switches a neural network model in response to a change in the computing load and the computing capability of the device.” See Yang [0060-0062][0072][0082] [Fig. 4 and related description]. Applicant’s assertion that Yang “only describes the contents of the computing capability” ignores implicit teaching of model changing based on varying computing load and the computing capability of the device. For example, Yang [0082] further discloses dynamically changing [updating or switching] a neural network model in response to a change in the computing load and the computing capability of the device by changing size of number of parallel inputs or changing a batch mode of the neural network model. [Fig. 4] [0067] discloses hybrid manager inputs load/capacity and outputs params (batch mode, I/O size, # of instances), which changes NN model execution. This implies changing the NN model based on varying compute load and computing capability. Applicant’s distinction based on “dedicated AI/ML computing power” is similarly unavailing. The specification itself discloses that “a computing power of the terminal for performing the AI/ML task” is “providing computing resource of the terminal for performing the AI/ML operation”. Applicant’s own disclosure thus equates task-specific computing power with provided resources for the AI/ML model. Song and Yang teach precisely this: allocating terminal computing resources (CPU capacity, usable power, etc.) to the neural network model being executed, with model selection/switching based on whether those allocated resources suffice for the model. See Song [0077-0080]; Yang [0060-0062].
Applicant’s claim does not require “exclusive” dedication (e.g., hardware isolation) or updating “only according to dedicated power” to the exclusion of other factors; it merely recites switching “according to varying of an AI/ML computing power...wherein [it] is dedicated.” The combination teaches this under the broadest reasonable interpretation consistent with the specification (computing resources allocated for the AI/ML task/model). Applicant’s argument that “neither Meier nor Ahn relates to switching the AI/ML model” is correct but immaterial. “The test for obviousness is not whether each element of the claimed invention is found in a single prior art reference, but whether ‘the claimed invention as a whole would have been obvious’ to a person of ordinary skill.” Meier and Ahn teach distributed AI/ML task processing with indication information and allocation ratios; Song and Yang teach the model-switching limitations. The motivations to combine (optimal distributed processing and resource-aware model adaptation) remain applicable. In view of the above discussion, Examiner respectfully concludes that the cited references disclose all features recited in independent claims and the amended limitations recited in independent claims do not distinguish the claimed invention from the cited prior art. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bychkovsky et al. (US 2023/0088431 A1) ([abstract] [0055-0056]) discloses updating a machine-learning model based on a resource availability condition such as processor utilization level being less than a threshold value. Kim et al.
(US 2021/0174137 A1) discloses the electronic device includes identifying whether each of one or more neural network models included in a first external device is suitable for hardware of the electronic device and whether each of the one or more neural network models identified as suitable for the hardware of the electronic device is suitable to replace the neural network models included in the electronic device, based on first device information on the hardware specifications of the electronic device, second device information on a hardware specification of the first external device, first model information on the one or more neural network models included in the first external device, and second model information on the one or more neural network models included in the electronic device. Walker (US 2020/0252682 A1) discloses “train[ing] multiple predictive machine learning models concurrently based on different criteria such as the first device's computational capability and a number of frames provided as input. The first device may adaptively select a predictive machine learning model to use based on conditions and resources available at the first device, such as computational power and a number of frames cached.” Fang et al. (Fang_2018.pdf) discloses wherein the method further comprises: switching the AI/ML model, according to varying of an AI/ML computing power of the terminal; or switching the AI/ML model, according to varying of a realizable communication rate (e.g. Fang: [Page 115, Abstract] “NestDNN enables each deep learning model to offer flexible resource-accuracy trade-offs. At runtime, it dynamically selects the optimal resource-accuracy trade-off for each deep learning model to fit the model’s resource demand to the system’s available runtime resources” [Page 116, Col.
2] “The multi-capacity model is comprised of a set of descendent models, each of which offers a unique resource-accuracy trade-off…processing latency of each descendent model is considered in a cost function. NestDNN employs a resource-aware runtime scheduler which select the optimal resource-accuracy trade-off for each deep learning model and determines the optimal amount of resources for each model.” [Page 117, Col. 2] In model profiling phase,…a profile is generated for each model including accuracy, memory footprint, and processing latency. In the online stage, the resource-aware runtime scheduler monitors events that change runtime resources and when such event is detected, the scheduler checks profile and selects the optimal descendant model. [Page 120, Col. 1, Efficient Model Switching] discloses multi-capacity model is able to switch models with little overhead. Switching independent models causes significant overhead. Figure 5 illustrates the details of model switching, for example, switching between a model with larger capability and a model with smaller capability.). Stokman et al. (US 2021/0097351 A1) discloses “[0032] In an embodiment the AI system analyzes a communication stream and detects an suspect pattern, the system can switch to a machine learning model that employs more compute power to decode the communication stream or that uses more data to analyze the communication stream.” “In an embodiment the AI system 1 switches to another trained machine learning model in state 102. As a result the power consumption in state 102 can differ from state 101.” Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hiren Patel whose telephone number is (571) 270-3366. The examiner can normally be reached on Monday-Friday 9:30 AM to 6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. If attempts to reach the above noted Examiner by telephone are unsuccessful, the Examiner’s supervisor, April Y. Blair, can be reached at the following telephone number: (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions on access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). April 3, 2026 /HIREN P PATEL/Primary Examiner, Art Unit 2196

Prosecution Timeline

Jul 06, 2022
Application Filed
Apr 18, 2025
Non-Final Rejection — §103, §112
Jul 21, 2025
Response Filed
Oct 30, 2025
Final Rejection — §103, §112
Dec 30, 2025
Response after Non-Final Action
Jan 30, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602259
RESOURCE CAPACITY MANAGEMENT IN CLOUDS
2y 5m to grant Granted Apr 14, 2026
Patent 12602251
SEMICONDUCTOR DEVICE, CONTROL METHOD FOR THE SAME, AND PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12578999
AUTOMATED RIGHTSIZING OF CONTAINERIZED APPLICATION WITH OPTIMIZED HORIZONTAL SCALING
2y 5m to grant Granted Mar 17, 2026
Patent 12572386
AUTOMATED TASK MANAGEMENT IN ANALYTICS COMPUTING SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12547444
LCS LIFE-CYCLE MANAGEMENT SYSTEM
2y 5m to grant Granted Feb 10, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+37.7%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
