Response to Amendment
This Office action is in response to the amendment received on January 05, 2025.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The applicant’s arguments directed to the 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103 rejections of claims 1-20 have been fully considered, but they are not persuasive (see Remarks, pp. 1-4).
Applicant argues that Kovics '330 describes an ML-split, i.e., a split of the ML-processing of a trained ML model between the network node and the UE, where each ML-split setup represents a configuration option (Par. 0005). Kovics '330 states that "the distribution of an ML-assistance between NG-RAN nodes and UE is a complicated task..." (Par. 0029). Kovics '330 further states that different UEs and different RAN nodes may have different ML processing capabilities, and that some ML functions or ML-assistance mechanisms would greatly benefit from UEs participating in the ML operations (Par. 0029).
Kovics '330 states: "...In this context "ML-assistance" for RRM functionality relies on ML implementation partially or entirely. It is assumed that the overall ML-assistance mechanism for given RRM function supports splitting between processing steps running in gNB and UEs... This is referred to as ML-split..." (Par. 0030).
Kovics '330 also states: "The 'ML-split setups' include trained ML instances for certain RRM functions to be executed in distributed manner between the gNB and UE, i.e., when the gNB and UE run different parts of the same ML-assistance mechanism..." (emphasis added, Par. 0031). Thus, Kovics '330 relates to coordinating which parts of a trained ML model are run on the UE and which parts are run on the gNB in the case of an ML-split (a split or distribution of the ML-processing of a trained ML model between the UE and the gNB). Kovics '330 does not relate to the timing of ML model adaptation or training.
Furthermore, Kovics '330 does not teach or even suggest an adaptation cycle during which the user device is to perform ML functionality adaptation, nor a validity period for which the set of ML functionality adaptation parameters are valid. The concepts of an adaptation (or training) cycle or a validity period are not contemplated by Kovics '330. The Office Action asserts that par. 0047 of Kovics '330 purportedly includes an adaptation cycle and a validity period. Kovics '330 states that "at 309, the UE sends an indication to the gNB that the ML-split is ready" and that "possible actions in the UE after receiving the ML-re-configuration RRC message include the following:
- The ML layers can be used as provided...imply no additional training...
- some ML layers may use re-training...
- In some implementations, other conditions...prevent the UE perform the re-configuration imply UE provides negative feedback
- In some implementations, the gNB generates feedback to the MSO including the status messages received from the UEs..."
However, nowhere in par. 0047 of Kovics '330 is there a mention or suggestion of a time or timing at which the UE should perform ML model adaptation or re-training, much less an adaptation cycle for ML model adaptation or re-training as required by claim 1.
Par. 0037 is also cited. However, par. 0037 of Kovics '330 describes a signaling interface between the UE and the gNB; no adaptation cycle or validity period is included.
With respect to par. 0034, Kovics '330 states: "New MAC CE 230... to trigger a specific ML-split configuration and/or activation in UE… The MAC activation CE may distinguish two different modes:
- Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods..."
However, the ML-split in Kovics '330 relates to a split of executing trained ML models. As noted above, par. 0031 states: "The 'ML-split setups' include trained ML instances for certain RRM functions to be executed in distributed manner between the gNB and UE, i.e., when the gNB and UE run different parts of the same ML-assistance mechanism..." (emphasis added, par. 0031 of Kovics '330).
The examiner has considered the arguments but respectfully disagrees. The claim language requires "a set of machine learning (ML) functionality adaptation parameters for the user device to perform adaptation of a ML functionality associated with at least one ML model that is used by the user device to perform a radio access network (RAN)-related function". Under the broadest reasonable interpretation (BRI) of the claim, the requirement is met as long as some ML functionality adaptation parameters are used by the user device to perform adaptation of an ML functionality associated with at least one ML model. Under the BRI of the claim, the adaptation may be executed in a distributed manner.
KOVÁCS teaches that the signaling includes an activation period (hence a validity period) and a periodicity (thus an adaptation cycle) (see Fig. 3, elements 308, 314; ¶ [0034], The MAC activation CE may distinguish two different modes: ■ Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods. ■ Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods.).
Finally, the UE decides on and performs adaptation of the ML functionality during that cycle when the UE meets certain conditions (see ¶ [0047], In some implementations, other conditions such as UE power consumptions, expected traffic, etc. prevent the UE perform the re-configuration imply UE provides negative feedback to gNB; ¶ [0051], Although gNB sends trigger to perform a specific reconfiguration, UE should not activate it (use it) until it meets certain conditions).
As a result, applicant’s arguments are not persuasive, and the 35 U.S.C. § 102(a)(1) rejections of claims 1-2, 4, 8, 10-11, 14-16 and 18 are maintained.
The 35 U.S.C. § 103 rejections of claims 3, 5-7, 9, 12-13, 17 and 19-20 are maintained for the reasons set forth above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 4, 8, 10-11, 14-16 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by KOVÁCS et al., WO 2022182330 A1, (hereinafter KOVÁCS).
Regarding claims 1 and 15, KOVÁCS teaches a method comprising:
confirming, by a user device with a network node, a set of machine learning (ML) functionality adaptation parameters for the user device to perform adaptation of a ML functionality associated with at least one ML model that is used by the user device to perform a radio access network (RAN)-related function (see Fig. 2, e.g., element 200, ¶ [0034], e.g., The main signaling interfaces shown in FIG. 2 are as follows. • Signaling from CN/MSO to provision 210 the gNBs and UEs with sets of ML-split configuration options ('ML-split setups'); ¶ [0006], e.g., UEs may perform a radio resource management (RRM) function through a ML-split;),
the set of ML functionality adaptation parameters indicating at least one adaptation cycle during which the user device is to perform the ML functionality adaptation and a validity period for which the set of ML functionality adaptation parameters are valid (see Fig. 3, e.g., elements 308, 309, see ¶ [0047], e.g., At 309, the UE sends an indication to the gNB that the ML-split is ready. That is, after the UEs are ML-configured or reconfigured via RRC, the ML reconfigured UEs feedback the status to the gNB via RRC. Possible actions in the UE after receiving the
ML re-configuration RRC message include the following: The ML layers can be used as provided by the MSO imply no additional ML training is required, and UE prepares/loads the model and provides positive feedback to MSO, Some ML layers may use retraining. This information can be included in the ML-split setup or determined as UE specific case-by-case imply UE provides positive feedback to gNB only after ML re-training has been finalized, see ¶ [0037], e.g., The table below summarizes a definition of a ML-split setup, see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods. Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods.); and
performing, by the user device, adaptation of the ML functionality during the at least one adaptation cycle (see Fig. 3, element 308, ¶ [0046], e.g., At 308, the UE prepares UE ML-split reconfiguration for the target RRM functionality; ¶ [0047], In some implementations, other conditions such as UE power consumptions, expected traffic, etc. prevent the UE perform the re-configuration imply UE provides negative feedback to gNB; ¶ [0051], Although gNB sends trigger to perform a specific reconfiguration, UE should not activate it (use it) until it meets certain conditions).
Regarding claims 2 and 16, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, wherein the at least one adaptation cycle comprises a plurality of adaptation cycles, wherein the performing adaptation comprises performing, by the user device, adaptation of the ML functionality during the plurality of adaptation cycles (see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods. Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods.);
the method further comprising:
using, by the user device, the at least one ML model in inference mode to perform or assist in performing the RAN-related function between the adaptation cycles (see ¶ [0056], e.g., the signaling provides the MSO updated information on the number of active UEs supporting a certain ML inference. see ¶ [0057], e.g., Operation 520 includes transmitting, to each of the plurality of UEs being served by the network node, signaling data configured to trigger respective reconfigurations of each of the plurality of UEs for performing the ML-split with the network node, the signaling data including a representation of a respective subset of the plurality of ML- split setups, the respective subset of the plurality of ML- split setups being transmitted to that UE of the plurality of UEs based on a capability of that UE to perform ML processing for the RRM function.).
Regarding claims 4 and 18, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, wherein the confirming comprises:
receiving, by the user device from the network node, the set of ML functionality adaptation parameters for performing adaptation of the ML functionality (see Fig. 3, e.g., element 307, ¶ [0045] At 307, the gNB sends the selected UE ML-split to the UE via RRC signaling.);
and transmitting, by the user device to the network node, an acknowledgement confirming that the set of ML functionality adaptation parameters are acceptable (see Fig. 3, e.g., element 309, ¶ [0047] At 309, the UE sends an indication to the gNB that the ML-split is ready. That is, after the UEs are ML-configured or reconfigured via RRC, the ML reconfigured UEs feedback the status to the gNB via RRC. Some ML layers may use retraining. This information can be included in the ML-split setup or determined as UE specific case-by-case imply UE provides positive feedback to gNB only after ML re-training has been finalized.).
Regarding claim 8, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, wherein one or more inputs to the ML functionality, which are configured by the network node, remain constant during the validity period (see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods).
Regarding claim 10, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, further comprising:
transmitting, by the user device to the network node, a capabilities response indicating that the user device has a capability to perform at least one of the following:
ML functionality or ML model adaptation;
receiving, by the user device, the set of ML functionality adaptation parameters (see Fig. 3, e.g., element 302, and 305, ¶ [0040], e.g., At 302, the UE and gNB exchange ML capabilities for the UE, ¶ [0043], e.g., At 305, the gNB sends the UE ML-split setups configurations to be used to the UE via RRC signaling. That is, each targeted gNB generates RRC signaling content/information - with a subset of ML-split configurations for the selected UEs and the corresponding triggering mode (RRC, MAC, DCI).); or
sending or providing, by the user device to the network node, the set of ML functionality adaptation parameters or a proposed or requested set of ML functionality adaptation parameters.
Regarding claim 11, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, wherein the performing, by the user device, adaptation of the ML functionality during at least one of the plurality of adaptation cycles is performed based on at least one of the following:
receiving, by the user device from the network node, a request to perform adaptation of the ML functionality; or detecting, by the user device, a need to perform adaptation of the ML functionality based on performance of the RAN-related function being less than a threshold (Fig. 3, e.g., element 307, ¶ [0045], e.g., At 307, the gNB sends the selected UE ML-split to the UE via RRC signaling.).
Regarding claim 14, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS further teaches, wherein the performing, by the user device, adaptation of the ML functionality comprises performing at least one of the following:
adapting one or more weights or biases of the at least one ML model; adapting the at least one ML model; adapting a plurality of ML models; or adapting an architecture and/or model structure of at least one ML model (see ¶ [0035], e.g., The required parameters for the trained ML assistance model includes the following: Usual ML parameters, and some implementations, RRM parameters. The allowed set of ML model splits configuration options between gNB and UE or between selected UE).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over KOVÁCS, in view of Kumar et al., US 20220377844 A1, (hereinafter Kumar).
Regarding claims 3 and 17, KOVÁCS teaches the limitations of Claims 1 and 15.
KOVÁCS teaches ML model training and ML functionality adaptation; however, it does not explicitly indicate that the UE may train the ML model based on an ML model training configuration received from the RAN, and transmit an ML model training report indicative of the trained ML model.
Kumar teaches, wherein the confirming comprises:
transmitting, by the user device to the network node, the set of ML functionality adaptation parameters for performing adaptation of the ML functionality; and
receiving, by the user device from the network node, an acknowledgement confirming that the set of ML functionality adaptation parameters are acceptable (Fig. 6, 640, 644, 652, ¶ [0092], e.g., The UE 610 may update, at 640, model weights based on each data observation at the UE 610 and transmit, at 642-644, the updated weights, model, deltas, etc., to the model repository 612 over the u-plane or to the CU-CP 606 over the c-plane via a periodic or event-based update procedure. The updated model may be uploaded, at 650a, in a container to the RAN-based ML controller 608, which may relay, at 650b, the updated model to the CU-CP 606. The CU-CP 606 may transmit, at 652, an updated model configuration to the UE 610 (e.g., based on the update associated with the model ID/descriptor).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of Kumar to include UE-initiated ML model training based on an ML model training configuration received from the RAN. Doing so would facilitate a UE training a neural network while learning the dependence of measured qualities on individual parameters, as suggested by Kumar (see ¶ [0074], e.g., The data weights may be readjusted through the process. In some aspects described herein, an encoding device (e.g., a UE) may train one or more neural networks to learn dependence of measured qualities on individual parameters.).
Claim(s) 5-7, 9, 12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over KOVÁCS, in view of KOVACS et al., WO 2021064275 A1, (hereinafter KOVACS).
Regarding claims 5 and 19, KOVÁCS teaches the limitations of Claims 1 and 15.
KOVÁCS teaches the validity period for which the ML functionality adaptation parameters are valid (see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods. Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods.); however, it does not explicitly indicate an adaptation cycle duration.
KOVÁCS does not teach but KOVACS teaches, wherein the set of ML functionality adaptation parameters comprises information indicating:
the validity period for which the ML functionality adaptation parameters are valid; a number of adaptation cycles within the validity period; and
an adaptation cycle duration for each adaptation cycle of the at least one adaptation cycle (see Pg. 19, line 21 – Pg. 20, line 26, e.g., The gNB may transmit a configuration (mlcontextConfig) to the UE for the inference conditions information reporting by indicating one or more ML-context objects (mlcontextObject) to be configured and activated. For example, Context Report configuration ID: 1; Context Reporting mode: triggered (may be triggered by HO execution) Context Reporting format: 500ms - beam-wise (long term estimate for each detected gNB radio beam). 500ms is an example of the periodicity of reporting the context information. Context Report configuration ID: 2; Context Reporting mode: periodic- 100ms (periodic report in this example with 100ms interval) Context Reporting format: 100ms - cell-wise, Context Report configuration ID: 3 Context Reporting format: 10ms - cell-wise).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KOVACS to include periodic inference cycles by the UE. Doing so would help achieve improved radio resource management actions and decisions, as suggested by KOVACS (see Pg. 11, lines 13-18, e.g., Devices employing ML-based entities, such as gNBs, UEs, and/or CN entities in a 5G system, can learn over time the essential context conditions for particular inference results. This information can help design algorithms dedicated to provide explainability and interpretability of the inference results, which in turn can be used to improve radio resource management actions and decisions).
Regarding claims 6 and 20, KOVÁCS as combined with KOVACS teaches the limitations of Claims 5 and 19.
KOVÁCS does not teach but KOVACS teaches, wherein the at least one adaptation cycle comprises a plurality of adaptation cycles,
wherein the adaptation cycle duration for the plurality of adaptation cycles comprises at least one of: an adaptation cycle duration for the plurality of adaptation cycles within the validity period,
wherein the adaptation cycle duration is the same for each of the adaptation cycles (see Pg. 19, line 21 – Pg. 20, line 26, e.g., The gNB may transmit a configuration (mlcontextConfig) to the UE for the inference conditions information reporting by indicating one or more ML-context objects (mlcontextObject) to be configured and activated. For example, Context Report Type: mlcontextReportType#3 (context ID for HO inference) Context Reporting mode: periodic- 100ms (periodic report in this example with 100ms interval));
or an average adaptation cycle duration for the adaptation cycles within the validity period.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KOVACS to include periodic inference cycles by the UE. Doing so would help achieve improved radio resource management actions and decisions, as suggested by KOVACS (see Pg. 11, lines 13-18, e.g., Devices employing ML-based entities, such as gNBs, UEs, and/or CN entities in a 5G system, can learn over time the essential context conditions for particular inference results. This information can help design algorithms dedicated to provide explainability and interpretability of the inference results, which in turn can be used to improve radio resource management actions and decisions).
Regarding claim 7, KOVÁCS teaches the limitations of Claims 1 and 15.
KOVÁCS does not teach but KOVACS teaches, wherein the at least one adaptation cycle comprises a plurality of adaptation cycles,
wherein the set of ML functionality adaptation parameters comprises at least one of the following: a number of ML functionality adaptations; a number of the adaptation cycles per ML functionality adaptation; a duration, or an average duration, of the adaptation cycles; a time period between each of the adaptation cycles; or an average time period between each of the adaptation cycles (see Pg. 19, line 21 – Pg. 20, line 26, e.g., The gNB may transmit a configuration (mlcontextConfig) to the UE for the inference conditions information reporting by indicating one or more ML-context objects (mlcontextObject) to be configured and activated. For example, Context Report Type: mlcontextReportType#3 (context ID for HO inference) Context Reporting mode: periodic- 100ms (periodic report in this example with 100ms interval)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KOVACS to include periodic inference cycles by the UE. Doing so would help achieve improved radio resource management actions and decisions, as suggested by KOVACS (see Pg. 11, lines 13-18, e.g., Devices employing ML-based entities, such as gNBs, UEs, and/or CN entities in a 5G system, can learn over time the essential context conditions for particular inference results. This information can help design algorithms dedicated to provide explainability and interpretability of the inference results, which in turn can be used to improve radio resource management actions and decisions).
Regarding claim 9, KOVÁCS teaches the limitations of Claims 1 and 15.
KOVÁCS further teaches, wherein one or more inputs to the ML functionality, which are configured by the network node, remain constant within each of the ML functionality adaptations; and
wherein the one or more inputs to the ML functionality, which are configured by the network node, are changed between two of the ML functionality adaptations during the validity period (see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods. Persistent: Network assumes the UE ML assistance is constantly employed during the activation periods.),
however, it does not explicitly indicate a plurality of ML functionality adaptations, wherein each ML functionality adaptation comprises a plurality of adaptation cycles.
KOVACS teaches, wherein performing adaptation of the ML functionality comprises:
performing a plurality of ML functionality adaptations, wherein each ML functionality adaptation comprises a plurality of adaptation cycles (see Pg. 19, line 21 – Pg. 20, line 26, e.g., The gNB may transmit a configuration (mlcontextConfig) to the UE for the inference conditions information reporting by indicating one or more ML-context objects (mlcontextObject) to be configured and activated. For example, Context Report configuration ID: 1; Context Reporting mode: triggered (may be triggered by HO execution) Context Reporting format: 500ms - beam-wise (long term estimate for each detected gNB radio beam). 500ms is an example of the periodicity of reporting the context information. Context Report configuration ID: 2; Context Reporting mode: periodic- 100ms (periodic report in this example with 100ms interval) Context Reporting format: 100ms - cell-wise, Context Report configuration ID: 3 Context Reporting format: 10ms - cell-wise).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KOVACS to include periodic inference cycles by the UE. Doing so would help achieve improved radio resource management actions and decisions, as suggested by KOVACS (see Pg. 11, lines 13-18, e.g., Devices employing ML-based entities, such as gNBs, UEs, and/or CN entities in a 5G system, can learn over time the essential context conditions for particular inference results. This information can help design algorithms dedicated to provide explainability and interpretability of the inference results, which in turn can be used to improve radio resource management actions and decisions).
Regarding claim 12, KOVÁCS teaches the limitations of Claims 1 and 15.
KOVÁCS does not teach but KOVACS teaches, further comprising:
transmitting, by the user device to the network node, a request for resources to be used by the user device during the plurality of adaptation cycles within the validity period to perform adaptation of the ML functionality (Pg. 6, lines 26-27, e.g., Different ML-based architecture options may be applied for radio resource management (RRM).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KOVACS to include ML-based architecture options to be applied for radio resource management (RRM). Doing so would help achieve improved radio resource management actions and decisions, as suggested by KOVACS (see Pg. 11, lines 13-18, e.g., Devices employing ML-based entities, such as gNBs, UEs, and/or CN entities in a 5G system, can learn over time the essential context conditions for particular inference results. This information can help design algorithms dedicated to provide explainability and interpretability of the inference results, which in turn can be used to improve radio resource management actions and decisions).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over KOVÁCS, in view of KUMAR et al., US 20240276241 A1, (hereinafter KUMAR).
Regarding claim 13, KOVÁCS teaches the limitations of Claim 1.
KOVÁCS teaches, performing adaptation of the ML functionality during the plurality of adaptation cycles (see ¶ [0034], e.g., The MAC activation CE may distinguish two different modes: Semi-persistent: UE ML-assistance is provided as per an established periodicity during the duration of the activation periods),
however, it does not explicitly indicate that adaptation of the ML functionality is performed partially in an iterative manner during each adaptation cycle of the plurality of adaptation cycles.
KUMAR teaches, wherein adaptation of the ML functionality is performed partially in an iterative manner (Fig. 5, e.g., element 500, ¶ [0116], e.g., The neural network 500 can be pre-trained to process the features from the data in the input layer 503. The forward pass, loss function, backward pass, and parameter update can be performed for one training iteration. The process can be repeated for a certain number of iterations for each set of training data until the weights of the layers are accurately tuned (e.g., meet a configurable threshold determined based on experiments and/or empirical studies)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified KOVÁCS to incorporate the teachings of KUMAR to include training iterations. Doing so would facilitate tuning the weights of the layers until they meet a configurable threshold determined based on experiments and/or empirical studies, as suggested by KUMAR (see ¶ [0116], e.g., in some cases, the neural network 500 can adjust weights of nodes using a training process called backpropagation. Backpropagation can include a forward pass, a loss function, a backward pass, and a weight update. The forward pass, loss function, backward pass, and parameter update can be performed for one training iteration. The process can be repeated for a certain number of iterations for each set of training data until the weights of the layers are accurately tuned (e.g., meet a configurable threshold determined based on experiments and/or empirical studies).).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to POONAM SHARMA whose telephone number is (571)272-6579. The examiner can normally be reached Monday thru Friday, 8:30 am-5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Bates can be reached at (571) 272-3980. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/POONAM SHARMA/Examiner, Art Unit 2472
/KEVIN T BATES/Supervisory Patent Examiner, Art Unit 2472