DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This Office Action is in response to an RCE filed 01/29/2026.
Claims 1, 18 and 19 are currently amended via Applicant’s amendment.
Claim 9 has been canceled via previous amendment.
Claims 1-8 and 10-19 are currently pending.
This Office Action is non-final in view of the RCE.
Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 01/29/2026 has been entered.
Examiner Notes
Examiner cites particular paragraphs and figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 12/04/2025 and 03/12/2026 are acknowledged. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-16, 18 and 19 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Andreas Meier (US 2019/0258243 A1) (hereinafter Meier) in view of Song et al. (US 2019/0318245 A1) (hereinafter Song), and further in view of Kim et al. (US 2015/0305008 A1) (hereinafter Kim).
As per claim 1, (Currently amended) A method for information reporting (e.g. Meier: [Figs. 1 and 2] discloses a vehicle equipped with communication module 12 and computational module 14 that establishes communication between the vehicle and a central office server, and a central office server equipped with communication module 22 and control module 24. The method is implemented using computing apparatus comprising communication modules and computational modules.), comprising: sending, by a terminal device, artificial intelligence machine learning (AI/ML) capability information to a network device; wherein the AI/ML capability information indicates resource information of the terminal device configured for processing an AI/ML service (e.g. Meier: [0046] discloses the computation module of the vehicle is configured to provide a notification about an availability or non-availability of the vehicle [terminal] to a computer of the central office [network device]. [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time. [0059-0061] discloses sending information about the vehicle to the central office server, where the information includes system capacity utilization of the computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. [0087] discloses task requirements may include the number of necessary compute units, a stable network connection, etc. The partial tasks to be performed by the selected vehicle are determined based on the task requirements and the available capacity of the vehicle. Also see [0020].); receiving, by the terminal device, AI/ML task configuration information sent by the network device to the terminal device according to the AI/ML capability information (e.g. Meier: [Figs. 1 and 2] [0041-0042] discloses a computation module of the vehicle receives a partial task of a distributed data processing from a communication module of the central office. The data processing corresponds to a distributed machine learning algorithm. The partial task can comprise information about instructions of the partial task and information about data of the partial task. [0049] discloses the vehicle receives the program to be processed and the data required therefor from a TSP. [0057] discloses the TSP pushes the program and data directly to the vehicle. [0074] discloses the selected vehicle receives a job from the TSP which provides the partial tasks to be performed by the selected vehicle. [Fig. 4] [0087-0088] discloses the computing system of the central office sends a job for retrieval of the program and data to a vehicle, and the vehicle receives information indicating the task that should be performed by the vehicle.
The vehicle is selected for performing the task based on the vehicle’s resource information provided by the vehicle.); wherein the AI/ML capability information indicates a processing capability, an available memory space of the processing capability, and a power headroom/battery capacity of the terminal device for processing an AI/ML service, and a performance index requirement on wireless transmission of the network device by an AI/ML operation of the terminal device (e.g. Meier: [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time [processing capability]. [0059-0061] discloses receiving information about the vehicle, where the information includes system capacity utilization of the computation module [processing capability] of the vehicle, energy capacity of the vehicle [power/battery capacity], performance of the computation module of the vehicle [processing capability], connectivity of the vehicle [required wireless transmission between the vehicle and the central office device], position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Vehicles’ available CPU time and available system capacity indicate the number of operations that the vehicle is capable of completing. [0066-0067] discloses selection of a vehicle to perform a service/task on the basis of the available hardware information or on the basis of connectivity information, for example, by giving preference to transportation vehicles which are linked via WLAN or have a measured, fast connection with low latency. Thus, the vehicle’s connectivity information indicates a performance index requirement on wireless transmission, such as having a measured, fast connection with low latency.).
Meier does not expressly disclose wherein the AI/ML capability information indicates an available memory space of the processing capability; and wherein the AI/ML task configuration information comprises an identity of an AI/ML model needed by the terminal device for processing the AI/ML service.
However, Song discloses wherein the AI/ML capability information indicates a processing capability, and an available memory space of the processing capability…by an AI/ML operation of the terminal device (e.g. Song: [0078-0080] discloses terminal-side device reports the available hardware resource capability of the terminal-side device to the cloud-side device. The available hardware resource capability of the terminal device is a computing capability and a storage capability of the terminal-side device. [0100-0102] discloses message sent by the terminal-side device to the cloud-side device carries indication information used to indicate the available hardware resource capability of the terminal-side device. Specifically, a computing capability and a storage capability to run neural network service on the terminal-side device.); and wherein the AI/ML task configuration information comprises an identity of an AI/ML model needed by the terminal device for processing the AI/ML service (e.g. Song: [0170-0175] discloses a terminal-side device receives update push notification including ID of a new available version of a neural network model from the cloud-side device. [0006] discloses cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device. [0019] discloses cloud-side device delivers the trimmed neural network model to the terminal side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device. [0166-0167] discloses cloud-side device maintains log entry that includes an identifier of a neural network model, an available version number of the neural network model, etc. The cloud-side device sends an update push notification to the terminal-side device, the terminal side device may choose and update a neural network model based on the information provided by the cloud-side device. Also see [0075] [0160-0161][0164] [0192].).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method/system of transmitting capability information (e.g. available memory space for processing neural network tasks) and configuration information (e.g. a neural network trimmed based on a neural network ID) between the terminal-side device and the cloud-side device as taught by Song into Meier because it would improve the performance of processing a neural network-related application on the terminal-side device and help enhance expansion of an intelligent application capability of the terminal-side device. (See Song: [0004] [0006] [0019] [0033] [0042] [0086] [0104]).
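By way of illustration only, and not as part of the claim mapping, the following minimal Python sketch models the general mechanism summarized above, in which a cloud-side device selects (or further trims) a neural network model so that its resource needs fall within the terminal’s reported computing and storage capability; all class names, fields, and values are hypothetical and are not drawn from Song’s disclosure.

    from dataclasses import dataclass

    @dataclass
    class Capability:            # reported by the terminal-side device
        compute_gflops: float    # computing capability
        storage_mb: float        # available storage for a model

    @dataclass
    class ModelVariant:          # maintained on the cloud-side device
        model_id: str
        required_gflops: float
        required_storage_mb: float

    def select_variant(capability, variants):
        """Return the ID of the largest model variant that fits the reported capability."""
        fitting = [v for v in variants
                   if v.required_gflops <= capability.compute_gflops
                   and v.required_storage_mb <= capability.storage_mb]
        if not fitting:
            return None  # nothing fits; the cloud side would trim further
        best = max(fitting, key=lambda v: v.required_gflops)
        return best.model_id

    if __name__ == "__main__":
        cap = Capability(compute_gflops=2.0, storage_mb=50.0)
        variants = [ModelVariant("model-full", 8.0, 200.0),
                    ModelVariant("model-trimmed", 1.5, 40.0)]
        print(select_variant(cap, variants))  # prints "model-trimmed"

In this sketch the terminal’s report plays the role of the capability information, and the returned identifier plays the role of the model identity delivered in the task configuration.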
As discussed above, Meier discloses sending, by a terminal device, capability information to a network device (e.g. Meier: [0047] [0059-0061] [0064-0067] discloses the vehicle [terminal device] sends available CPU time [processing capability], energy capacity [power/battery capacity], connectivity information [wireless transmission; latency] of the vehicle, and the number of operations that the vehicle is capable of completing.). Song further discloses different trigger conditions for sending the message from the terminal-side device to a cloud-side device, including a trigger condition that sends the message when a required hardware resource exceeds the available hardware resource capability range of the terminal-side device (e.g. Song: [0091-0097]).
The combination of Meier and Song does not expressly disclose sending, by a terminal device, capability information when an available computing power, a transmission rate, and a delay requirement of the terminal device vary.
However, Kim discloses sending capability information to a network device when an available computing power, a transmission rate, and a delay requirement of the terminal device vary (e.g. Kim: [0093] discloses an information reporting interface, including “Observe” and “Notify.” The interface is used to transmit a new value related to a resource on the client to the server. “Observe” is used by the server to observe a specific resource when the server is interested in changes to that resource. “Notify” is used to notify the server, in accordance with observation condition attributes set through “Write Attribute,” when the observation condition attributes are satisfied. [0113-0114] discloses the client device notifies its resource values when the value is above the number specified in a parameter, below the number specified in the parameter, or when the value has changed by more than the number specified in the parameter relative to the last reported resource value. The “Step” parameter triggers a Notify when a resource value changes [varies] by a configured amount, e.g., when the available computing power or the transmission rate changes by more than the configured step. The “Greater Than” parameter triggers a Notify when a resource value exceeds a threshold, e.g., when the transmission rate rises above a network-configured minimum. The “Less Than” parameter triggers a Notify when a resource value falls below a threshold, e.g., when a delay requirement cannot be met due to increased latency. Thus, the client can notify its resource values when multiple conditions are met. Multiple conditions form a set controlling when the M2M client [terminal device] sends a Notify (capability information) to the server [network device], namely only when resource values vary according to the configured thresholds, step, and timing. Kim teaches general event-driven reporting in which the network configures observation attributes on any numeric resource, including terminal capability parameters such as computation, and the terminal sends a capability update (Notify) when the resource values vary per the configuration; this corresponds to sending capability information when the computing power, transmission rate, and delay requirement vary. Thus, Kim discloses a general client-server framework in which multiple observation condition attributes can be configured together (forming a set of criteria), and the client sends updated resource values (via Notify) only when those configured observation conditions are met.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the Observe/Notify mechanism with configurable observation attributes, as taught by Kim, to the capability reporting scenario of Meier and Song because it would reduce signaling overhead: Meier discloses regularly sending capacity utilization information, and a POSITA would recognize that constantly sending such information consumes computing resources. Kim’s Observe/Notify mechanism with configurable thresholds and step values provides a well-known technique for triggering reporting only when meaningful changes occur, thereby reducing unnecessary signaling. A POSITA would apply Kim’s teaching by setting observation attributes on the terminal’s computing power, transmission rate, and delay requirement; the terminal would then send capability information (Notify) when those resource values change so as to satisfy the configured attributes. Under KSR, such a combination represents an obvious variation and assembly of known elements.
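By way of illustration only, and not as part of the claim mapping, the following minimal Python sketch models the threshold/step-triggered reporting discussed above, in which a client sends a Notify only when a configured “Greater Than,” “Less Than,” or “Step” observation condition is satisfied; the data model and values are hypothetical and are not drawn from Kim’s disclosure.

    class ObservedResource:
        """One observed resource with observation condition attributes set by the server."""

        def __init__(self, name, greater_than=None, less_than=None, step=None):
            self.name = name
            self.greater_than = greater_than  # report when the value exceeds this threshold
            self.less_than = less_than        # report when the value falls below this threshold
            self.step = step                  # report when the value changes by at least this amount
            self.last_reported = None

        def update(self, value):
            """Return a Notify payload if any configured condition is met, else None."""
            trigger = self.last_reported is None  # first observation is always reported
            if self.greater_than is not None and value > self.greater_than:
                trigger = True
            if self.less_than is not None and value < self.less_than:
                trigger = True
            if (self.step is not None and self.last_reported is not None
                    and abs(value - self.last_reported) >= self.step):
                trigger = True
            if trigger:
                self.last_reported = value
                return {"resource": self.name, "value": value}  # Notify sent to the server
            return None  # no report, reducing signaling

    if __name__ == "__main__":
        computing_power = ObservedResource("available_gflops", less_than=1.0, step=0.5)
        for sample in (2.0, 2.1, 1.4, 0.8):
            notify = computing_power.update(sample)
            if notify:
                print("Notify ->", notify)

Only the samples that satisfy the configured attributes (the first value, the 0.6 step change, and the drop below 1.0) produce a Notify, which is the signaling reduction relied on in the rationale above.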
As per claim 2, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein (e.g. Meier: [Figs. 1 and 2] [0041-0042] discloses a computation module of the vehicle receives a partial task of a distributed data processing from a communication module of the central office. The data processing corresponds to a distributed machine learning algorithm. The partial task can comprise information about instructions of the partial task and information about data of the partial task. [0049] discloses the vehicle receives the program to be processed and the data required therefor from a TSP. [0057] discloses the TSP pushes the program and data directly to the vehicle. Song: [0005] further discloses that, in response to receiving a request message from the terminal-side device, the cloud-side device trims the requested first neural network model and sends the trimmed/second neural network model to the terminal-side device. The first neural network model is trimmed based on the available hardware resource capability range of the terminal-side device such that, when the second neural network model runs, it is within an available hardware resource capability range of the terminal-side device.).
As per claim 3, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein the resource information used by the terminal device for processing the AI/ML service comprises at least one information piece of: a processing ability of the terminal device for the AI/ML service; information of an AI/ML model stored in the terminal device for the AI/ML service; information of a storage space of the terminal device for storing an AI/ML model; an amount of training data stored in the terminal device for an AI/ML training task; information of a storage space of the terminal device for storing training data; a performance index requirement on wireless transmission of the network device by an AI/ML operation of the terminal device; a power headroom of the terminal device for an AI/ML operation; or a battery capacity of the terminal device for an AI/ML operation (e.g. Meier: [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time [processing capability]. [0059-0061] discloses receiving information about the vehicle, where the information includes system capacity utilization of the computation module [processing capability] of the vehicle, energy capacity of the vehicle [power/battery capacity], performance of the computation module of the vehicle [processing capability], connectivity of the vehicle [required wireless transmission between the vehicle and the central office device], position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Vehicles’ available CPU time and available system capacity indicate the number of operations that the vehicle is capable of completing. [0066-0067] discloses selection of a vehicle to perform a service/task on the basis of the available hardware information or on the basis of connectivity information, for example, by giving preference to transportation vehicles which are linked via WLAN or have a measured, fast connection with low latency. Thus, the vehicle’s connectivity information indicates a performance index requirement on wireless transmission, such as having a measured, fast connection with low latency. Song: [0078-0080] discloses the terminal-side device reports the available hardware resource capability of the terminal-side device to the cloud-side device. The available hardware resource capability of the terminal device is a computing capability and a storage capability of the terminal-side device. [0100-0102] discloses the message sent by the terminal-side device to the cloud-side device carries indication information used to indicate the available hardware resource capability of the terminal-side device. Specifically, a computing capability and a storage capability to run a neural network service on the terminal-side device.).
As per claim 4, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 3 [See rejection to claim 3 above], wherein the processing ability of the terminal device for the AI/ML service comprises a number of AI/ML operations which are capable of being completed by the terminal device per unit time (e.g. Meier: [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time. [0059-0061] discloses receiving information about the vehicle, where the information includes system capacity utilization of the computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Vehicles’ available CPU time and available system capacity indicate the number of operations that the vehicle is capable of completing. Song: [0078-0079] discloses the computing capability is related to CPU performance of the terminal-side device. The hardware resource information may include CPU performance information. [0101-0102] discloses measuring CPU performance capability. [0009] also discloses measuring accuracy of processing the cognitive task on the terminal-side device and comparing it to the expected accuracy of processing the task.).
As per claim 5, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein the information of the AI/ML model stored in the terminal device for the AI/ML service comprises: a list of AI/ML models stored in the terminal device; a list of AI/ML models newly added to the terminal device; or a list of AI/ML models deleted from the terminal device (e.g. Song: [0129-0130] further discloses delivering a plurality of neural network models to the terminal-side device. The neural network models with different degrees of cognitive accuracy are pre-stored on the terminal-side device. [0160-0161] discloses a terminal-side registration module maintains a correspondence between an identifier (ID) of a neural network model, an identifier of the terminal-side device, and an IP address of the CDN DNS node.).
As per claim 6, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein the AI/ML task configuration information further comprises at least one information piece of: an identity of an AI/ML task to be performed by the terminal device; identities of some of AI/ML tasks to be performed by the terminal device; an identity of an act corresponding to an AI/ML task to be performed by the terminal device; an AI/ML model needed by the terminal device for processing the AI/ML service; an identity of an AI/ML model to be deleted from the terminal device; an AI/ML model to be trained by the terminal device; or a training parameter needed by the terminal device (e.g. Meier: [0042] discloses the partial task comprises information about instructions of the partial task and information about data of the partial task. The information about the partial task can comprise reference to the instruction of the partial task and reference to the data of the partial task. [0049] [0057] discloses the TSP can initiate the job in the vehicle by providing or downloading the program to be processed and the data required therefor. [0087] discloses the task comprises further information, e.g., a priority or the number of compute units required. The TSP can store this task under an ID (identification number) and can allocate resources. Meier clearly discloses using a task identifier, which implies that the partial tasks that are allocated to a selected vehicle may comprise a corresponding task identification number. Song: [0006] discloses the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device, so that a neural network model that originally runs on the cloud-side device with a strong computing capability can also be applicable to the terminal-side device with relatively weak computing capability, and the terminal-side device can process the cognitive computing tasks.).
As per claim 7, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 6 [See rejection to claim 6 above], wherein the AI/ML model to be deleted is an AI/ML model that has been stored in the terminal device and does not conform to the AI/ML capability information of the terminal device, or the AI/ML model to be deleted is an AI/ML model that has been stored in the terminal device and has a matching degree with the AI/ML capability information of the terminal device less than a preset threshold (e.g. Song: [0129-0130] further discloses delivering a plurality of neural network models to the terminal-side device. The neural network models with different degrees of cognitive accuracy are pre-stored on the terminal-side device. [0160-0161] discloses a terminal-side registration module maintains a correspondence between an identifier (ID) of a neural network model, an identifier of the terminal-side device, and an IP address of the CDN DNS node. [0009] also discloses comparing cognitive accuracy of processing the cognitive task by using a neural network model on the terminal-side device and the cloud-side device, and determining whether accuracy of processing the cognitive computing task on the terminal-side device does not meet a cognitive accuracy tolerance. The cognitive accuracy tolerance represents the expected accuracy of processing the task using the neural network model on the terminal-side device.).
As per claim 8, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 6 [See rejection to claim 6 above], wherein the training parameter needed by the terminal device comprises at least one of a type of training data, a training period, or an amount of training data per round of training (e.g. Song: [0031] the cloud-side device trims the architecture component of the neural network model, so as to simplify a computation kernel of the neural network model, thereby reducing a computation amount and a required storage capacity of the neural network model in a training process. The cloud-side device trims the parameter component of the neural network model, so as to reduce a storage capacity occupied by the neural network model, and reduce a computation amount and a required storage capacity of the neural network model in a running process, so that a hardware resource required when the neural network model (that is, the second neural network model) delivered to the terminal-side device runs is within the available hardware resource capability range of the terminal-side device. See [0064-0068].).
As per claim 10, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein (e.g. Meier: [0042] discloses the partial task comprises information about instructions of the partial task and information about data of the partial task. The information about the partial task can comprise reference to the instruction of the partial task and reference to the data of the partial task. [0087] discloses the task comprises further information, e.g., a priority or the number of compute units required. The TSP can store this task under an ID (identification number) and can allocate resources. Meier clearly discloses using a task identifier, which implies that the partial tasks that are allocated to a selected vehicle may comprise a corresponding task identification number. Song: [0016-0018] discloses the request message further carries function information, where the function information is used to describe a function of processing the cognitive task, so the cloud-side device determines the first neural network based on the function information. The cloud-side device trims the neural network model based on the function information and the available hardware resource capability of the terminal device and delivers the trimmed neural network model to the terminal device for processing the cognitive computing task.).
As per claim 11, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein the AI/ML capability information further indicates information of an AI/ML model stored in the terminal device for the AI/ML service, and an available storage space of the terminal device for the AI/ML service (e.g. Meier: [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time. [0059-0061] discloses receiving information about the vehicle, where the information includes system capacity utilization of the computation module of the vehicle, energy capacity of the vehicle, performance of the computation module of the vehicle, connectivity of the vehicle, position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Song: [0129] discloses the plurality of neural network models with different degrees of cognitive accuracy are stored on the terminal-side device. [0131] discloses the terminal-side device determines, based on a to-be-processed application, a requirement of the application and adds that information to the request message. The cloud-side device obtains, based on the information provided by the terminal device, a neural network model, and then delivers to the terminal-side device a trimmed neural network model that meets the current cognitive accuracy tolerance of the terminal-side device. [0077-0079] discloses the cloud-side device trims the first neural network model to obtain a second neural network model, where a hardware resource required when the second neural network model runs is within an available hardware resource capability range of the terminal-side device. An available hardware resource capability of the terminal-side device is a computing capability and/or a storage capability of the terminal-side device. The computing capability is related to CPU performance of the terminal-side device, and the storage capability is related to storage performance of the terminal-side device. [0083] discloses a computation amount of a neural network model mentioned in this embodiment of this application refers to a data amount generated when the neural network model is used to process data, and a required storage capacity of the neural network model refers to storage space required for storing the neural network model.).
As per claim 12, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 11 [See rejection to claim 11 above], wherein the AI/ML task configuration information comprises an AI/ML model needed by the terminal device for processing the AI/ML service, or an identity of an AI/ML model to be deleted from the terminal device (e.g. Song: [0170-0175] further discloses a terminal-side device receives an update push notification including an ID of a new available version of a neural network model from the cloud-side device. [0006] [0019] disclose the cloud-side device delivers the trimmed neural network model to the terminal-side device, where a hardware resource required when the trimmed neural network model runs is within the available hardware resource capability range of the terminal-side device. [0166-0167] discloses the cloud-side device maintains a log entry that includes an identifier of a neural network model, an available version number of the neural network model, etc. The cloud-side device sends an update push notification to the terminal-side device, and the terminal-side device may choose and update a neural network model based on the information provided by the cloud-side device.).
As per claim 13, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 1 [See rejection to claim 1 above], wherein the AI/ML capability information further indicates a processing capability, an amount and a storage space of stored training data, and a power headroom/battery capacity of the terminal device for an AI/ML training task, and a performance index requirement on wireless transmission of the network device by an AI/ML operation of the terminal device (e.g. Meier: [0047] discloses a vehicle regularly sends a heartbeat to make it clear to the TSP that it is still available. The heartbeat could also be supplemented by further information, such as, e.g., available CPU time [processing capability]. [0059-0061] discloses receiving information about the vehicle, where the information includes system capacity utilization of the computation module [processing capability] of the vehicle, energy capacity of the vehicle [power/battery capacity], performance of the computation module of the vehicle [processing capability], connectivity of the vehicle [required wireless transmission between the vehicle and the central office device], position of the vehicle, expected availability of the vehicle, previous processing of a partial task by the vehicle, and prioritization of the vehicle, to be selected for partial tasks. [0064-0065] discloses vehicles may regularly report the current system capacity utilization of their relevant control units, information concerning the present energy capacity, etc. Vehicles’ available CPU time and available system capacity indicate the number of operations that the vehicle is capable of completing. [0066-0067] discloses selection of a vehicle to perform a service/task on the basis of the available hardware information or on the basis of connectivity information, for example, by giving preference to transportation vehicles which are linked via WLAN or have a measured, fast connection with low latency. Thus, the vehicle’s connectivity information indicates a performance index requirement on wireless transmission, such as having a measured, fast connection with low latency. Song: [0078-0080] discloses the terminal-side device reports the available hardware resource capability of the terminal-side device to the cloud-side device. The available hardware resource capability of the terminal device is a computing capability and a storage capability of the terminal-side device. [0100-0102] discloses the message sent by the terminal-side device to the cloud-side device carries indication information used to indicate the available hardware resource capability of the terminal-side device. Specifically, a computing capability and a storage capability to run a neural network service on the terminal-side device.).
As per claim 14, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 13 [See rejection to claim 13 above], wherein the AI/ML task configuration information comprises a training parameter needed by the terminal device (e.g. Song: [0025] [0029] the cloud-side device may first train the first neural network model to obtain the parameter component (for example, a weight parameter component) of the first neural network model, and then trim the parameter component, for example, cluster weight parameter matrices of the first neural network model, so that a storage capacity required by a trimmed parameter component is less than a storage capacity required by the untrimmed parameter component. The second neural network model is formed after the parameter component of the first neural network model is trimmed. In other words, in this implementation, architecture components of the second neural network model and the first neural network model are the same, the parameter components of the second neural network model and the first neural network model are different, and the storage capacity required by the parameter component of the second neural network model is less than the storage capacity required by the parameter component of the first neural network model. [0110] The cloud-side device trims a neural network architecture component based on an accuracy requirement of the terminal-side device, trims a neural network parameter component obtained through training, and sends a corresponding neural network component to the terminal-side device after completing trimming. This can effectively avoid wasting the computing resources and the storage resources of the terminal-side device. Also see [0031] [0065-0067] [0107] [0112].).
As per claim 15, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 14 [See rejection to claim 14 above], wherein the training parameter needed by the terminal device comprises at least one of: a type of training data, a training period, or an amount of training data per round of training (e.g. Song: [0110] discloses a neural network for image classification is trained by using 1000 categories by default, but only 20 of the 1000 categories need to be identified in an application scenario for the terminal-side device. The cloud-side device trims a neural network component based on requirement of the terminal-side device, trims a neural network parameter component and sends it to the terminal-side device. Also see [0107] [0115] [0120] [0123] [0127] [0129].).
As per claim 16, the combination of Meier, Song and Kim discloses (Previously presented) The method of claim 13 [See rejection to claim 13 above], further comprising: sending, by the terminal device, a training result of the AI/ML training task to the network device (e.g. Song: [0110] [0112] discloses if there is a new requirement for application scenario after terminal-side device reduces a quantity of categories for the neural network cognitive computing platform to 20, for example, a quantity of categories is increased to 21, the terminal-side device re-submits a perceived accuracy requirement to the cloud-side device, to trigger trimming and training of a neural network architecture and parameter of the cloud-side device and a real-time update to the terminal-side device.).
Claim 17 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Meier in view of Song and Kim and further in view of Pang et al. (US 2019/0327593 A1) (hereinafter Pang).
As per claim 17, the combination of Meier, Song and Kim discloses (Original) The method of claim 1 [See rejection to claim 1 above], but does not expressly disclose wherein the AI/ML capability information is carried in Uplink Control Information (UCI), Medium Access Control Control Element (MAC CE), or application layer control information.
However, Pang discloses wherein the AI/ML capability information is carried in Uplink Control Information (UCI), Medium Access Control Control Element (MAC CE), or application layer control information (e.g. Pang: [0076] discloses the D2D communication method includes: sending, by the network device, downlink control information to the receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device. [0127-0128] discloses the network device sends downlink control information to the receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device. [0156] discloses receiving unit is configured to receive downlink control information sent by the network device.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the well-known method/system of D2D communication that includes sending, by the network device, downlink control information to the receiving device, as taught by Pang, into the combination of Meier, Song and Kim because it would enable communication between the network device and the receiving device, where the downlink control information is used to indicate configuration information for downlink data transmission of the network device (See Pang: [0076] [0127-0128]).
As per claim 18, this is an apparatus claim having similar limitations as those cited in method claim 1. Thus, claim 18 is rejected under the same rationale as set forth in the rejection of claim 1.
As per claim 19, this is an apparatus/system claim having similar limitations as those cited in method claim 1. Thus, claim 19 is also rejected under the same rationale as set forth in the rejection of claim 1.
Response to Arguments
Applicant’s arguments with respect to the rejection under 35 U.S.C. § 101 have been fully considered and are persuasive. Therefore, the rejection under 35 U.S.C. § 101 is withdrawn.
Applicant’s arguments with respect to the rejections under 35 U.S.C. § 103 have been fully considered but are moot in view of the new ground of rejection necessitated by the amendment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tang (US 2009/0327455 A1) discloses sending resource registration information to server when resource registration in the first network equipment changes (e.g. [0053] The first detection unit 121 is adapted to detect whether the resource registration information of the first network equipment changes or not, and notify the sending unit 11 to report resource registration information contained in the first network equipment to the server when detecting that the resources contained in the first network equipment change. When the resource registration information in the network equipment 10 changes, for example, this part of resources changes from idle to occupied, the quantity of the resources changes, or the like, and the changes can all be detected by the first detection unit 121. If the triggering condition for reporting the resource registration information is that the resource registration information changes, the first detection unit 121 notifies the sending unit 11 to report the current resource registration information to the server when detecting that the resource registration information in the first network equipment 10 changes relative to the resource registration information reported last time.).
Carnahan et al. (US 2018/0337927 A1) also discloses “[0104] Auto-updater module 414 automatically updates stored data and/or agent software based on recent changes to resource utilization, availability or schedules and/or updates to software or protocols. Such updates can be pushed from another device (e.g., upon detecting a change in a resource availability or access permit) or can be received in response to a request sent by device 400. For example, device 400 can transmit a signal to another device that identifies a particular resource, and a responsive signal can identify availabilities of access to the resource.”
Hernandez et al. (US 2014/0280581 A1) discloses “[0049] providing, to device 2 200b, updated data 246 representative of resource 1 214a. In some embodiments, device 1 200a may send updated data regarding a local resource (such as resource 1 214a) to one or more of its remote devices when the properties of the local resource change (e.g., the capacity of a storage device), after device 1 200a or one of its remote devices provides instructions to control the local resource, and/or after device 1 200a or one of its remote devices relinquishes control over the local resource.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hiren Patel whose telephone number is (571) 270-3366. The examiner can normally be reached on Monday-Friday 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
If attempts to reach the above noted Examiner by telephone are unsuccessful, the Examiner’s supervisor, April Y. Blair, can be reached at the following telephone number: (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions on access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
March 17, 2026
/HIREN P PATEL/Primary Examiner, Art Unit 2196