Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Remarks
Applicant's arguments/remarks, see page 5, filed 10/07/2025, with respect to the claim interpretation have been fully considered and are persuasive. That claim interpretation has been withdrawn.
Applicant's arguments/remarks, see 3rd ¶ of page 6 to 3rd ¶ of page 10, with respect to the 35 U.S.C. 102 rejection of claims 1, 2, 5, 6, 8, 11, 13, 14 and 16 have been fully considered but they are not persuasive.

In response to applicant's arguments/remarks, see 3rd – 4th ¶s of page 8, with respect to the independent claims, generally stating that Kou fails to disclose the limitations "transmitting, to a second ML module at a network node, the input data and the first prediction data…." and "receiving, from the second ML module of the network node….second prediction data obtained from the second ML model" because nothing in Kou discloses "the second system sends and the first system receives an accuracy metric, let alone that such an accuracy metric is based in part on prediction ….", the examiner respectfully disagrees. The examiner notes that none of the claim limitations describes "the second system sends and the first system receives an accuracy metric". For example, the limitations only describe "receiving, from the second ML module of the network node, an accuracy metric…". In other words, the limitations do not describe that "the second ML module [i.e. the second system] transmits/sends and the first ML module [i.e. the first system] receives an accuracy metric". Rather, the examiner interprets "a wireless transmit receive unit WTRU" as a system, e.g. system 100 of Fig. 1, comprised of a plurality of networked computers, e.g. computing devices 105-1, 105-n and second computing system 130, that may communicate with each other wirelessly. Note that the system 100 is a part [i.e. one unit] of the larger network 145 (Fig. 1, ¶ 0024 and ¶ 0046 – 0049).
Therefore, Kou discloses: transmitting/transmit, to a second ML module at a network node, the input data and the first prediction data (i.e. the method/system, e.g. first computing device 105 of the system 100 [i.e. a wireless transmit receive unit WTRU], may send/transmit at least some of the set of results [i.e. the first prediction data] and the corresponding input data [i.e. the input data] to local smart box [i.e. a second ML module] of the second computing system 130 [i.e. a network node]) (130 & 135 – Fig. 1, 205 & 210 – Fig. 2, ¶ 0011 and ¶ 0052),
wherein the second ML module includes a second ML model (i.e. the local smart box [i.e. the second ML module] includes the 2nd Neural network model 135, e.g. heavyweight model [i.e. a second ML model]) (130 & 135 – Fig. 1, 205 & 210 – Fig. 2, ¶ 0011, ¶ 0037, ¶ 0043 and ¶ 0052);
receiving/receive, from the second ML module of the network node, an accuracy metric based on a comparison of the transmitted first prediction data of the first ML model and second prediction data obtained from the second ML model (i.e. the method/system, e.g. the system 100 as a whole, is presented with [i.e. receive] an accuracy value [i.e. an accuracy metric] determined by the 2nd Neural network model 135 of the local smart box [i.e. the second ML module of the network node], wherein the accuracy value is obtained by comparing [i.e. based on a comparison] results of the 2nd Neural network model 135, e.g. heavyweight model [i.e. second prediction data obtained from the second ML model] to the results received from the 1st NN model, e.g. lightweight model [i.e. the transmitted first prediction data of the first ML model]) (210, 215 & 220 – Fig. 2 and ¶ 0052 - 0053).
In response to the applicant's arguments/remarks, see last ¶ of page 8 & 4th – 5th ¶s of page 9, generally stating that Kou fails to disclose "updating the first ML model based on the received accuracy metric" because the first system would not perform such an update based on an accuracy metric, no accuracy metric being sent by the second system or received by the first system, the examiner respectfully disagrees. As described above, the examiner interprets "a wireless transmit receive unit WTRU" as a system, e.g. system 100, comprised of a plurality of networked computers, e.g. computing devices 105-1, 105-n and second computing system 130, that may communicate with each other wirelessly. Note that the system 100 is a part [i.e. one unit] of the larger network 145 (Fig. 1, ¶ 0024 and ¶ 0046 – 0049).
Therefore, Kou discloses: updating/update the first ML model based on the received accuracy metric and an accuracy condition (i.e. if the accuracy value [i.e. based on the received accuracy metric] is below a threshold value [i.e. an accuracy condition], the method/system, e.g. system 100 as a whole, may update the lightweight model [i.e. the first ML model]) (220, 235 & 245 – Fig. 2 and ¶ 0053 - 0055).
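For illustration, the compare-and-update flow the examiner reads onto Kou (Fig. 2 and ¶ 0052 – 0055) can be sketched as follows. This is a minimal sketch under assumed names (accuracy_value, second_system_step, ACCURACY_THRESHOLD) and an invented threshold value; it does not reproduce Kou's actual implementation.

```python
# Minimal sketch of the flow read onto Kou (Fig. 2, paragraphs 0052-0055).
# All names and the threshold value are hypothetical, not drawn from Kou.

ACCURACY_THRESHOLD = 0.9  # assumed stand-in for the "accuracy condition"

def accuracy_value(light_preds, heavy_preds):
    """Fraction of inputs on which the lightweight model's results match
    the heavyweight model's results; stands in for the accuracy metric."""
    matches = sum(1 for l, h in zip(light_preds, heavy_preds) if l == h)
    return matches / len(light_preds)

def second_system_step(inputs, light_preds, heavyweight_model):
    # The second system re-runs the transmitted inputs through the
    # heavyweight model (the "second prediction data") and compares them
    # to the transmitted lightweight results (the "first prediction data").
    heavy_preds = [heavyweight_model(x) for x in inputs]
    return accuracy_value(light_preds, heavy_preds)

def maybe_update(accuracy, update_lightweight_model):
    # Per paragraphs 0053-0055: if the accuracy value is below the
    # threshold, the lightweight model is updated.
    if accuracy < ACCURACY_THRESHOLD:
        update_lightweight_model()
```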
In response to the applicant's arguments/remarks, see last ¶ of page 9, stating that Kou fails to disclose a method of machine learning performed by a wireless transmit receive unit, the examiner respectfully disagrees.
Kou discloses a method of machine learning performed by a wireless transmit receive unit (i.e. system 100 [i.e. a wireless transmit receive unit WTRU] comprised of a plurality of networked computers, e.g. computing devices 105-1, 105-n and second computing system 130, that may communicate with each other wirelessly, may implement training and updating of neural networks [i.e. performing machine learning]; Note that the system 100 is a part [i.e. one unit] of the larger network 145) (Fig. 1, ¶ 0024 and ¶ 0046 – 0049).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 5, 6, 8, 11, 13, 14 and 16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kou et al. (US PG PUB 20230229890), hereinafter "Kou".
Regarding Claims 1 and 11, Kou discloses:
(Claim 1) A method of machine learning performed by a wireless transmit receive unit (WTRU) (i.e. system 100 [i.e. a wireless transmit receive unit WTRU] comprised of a plurality of networked computers, e.g. computing devices 105-1, 105-n and second computing system 130, that may communicate with each other wirelessly, may implement training and updating of neural networks [i.e. performing machine learning]; Note that the system 100 is a part [i.e. one unit] of the larger network 145) (Fig. 1, ¶ 0024 and ¶ 0046 – 0049), the method comprising:
(Claim 11) A wireless transmit receive unit (WTRU) (i.e. system 100 [i.e. a wireless transmit receive unit WTRU] comprised of a plurality of networked computers, e.g. computing devices 105-1, 105-n and second computing system 130, that may communicate with each other wirelessly; Note that the system 100 is a part [i.e. one unit] of the larger network 145) (Fig. 1, ¶ 0024 and ¶ 0046 – 0049) being configured to:
implementing/implement, by a first machine learning (ML) module, a first ML model using input data to generate first prediction data (i.e. smart box edge computing device [i.e. a first machine learning ML module] may implement 1st NN model 110 [i.e. a first ML model] to generate results [i.e. first prediction data] using input data, e.g. data collected by one or more sensors) (110 – Fig. 1, ¶ 0037, ¶ 0046 and ¶ 0052),
wherein the first ML model is a production model (i.e. the 1st NN model 110 [i.e. the first ML model] is used for generating/producing results [i.e. the first ML model is a production model]) (¶ 0011 and ¶ 0046);
transmitting/transmit, to a second ML module at a network node, the input data and the first prediction data (i.e. the method/system, e.g. first computing device 105 of the system 100 [i.e. a wireless transmit receive unit WTRU], may send/transmit at least some of the set of results [i.e. the first prediction data] and the corresponding input data [i.e. the input data] to local smart box [i.e. a second ML module] of the second computing system 130 [i.e. a network node]) (130 & 135 – Fig. 1, 205 & 210 – Fig. 2, ¶ 0011 and ¶ 0052),
wherein the second ML module includes a second ML model (i.e. the local smart box [i.e. the second ML module] includes the 2nd Neural network model 135, e.g. heavyweight model [i.e. a second ML model]) (130 & 135 – Fig. 1, 205 & 210 – Fig. 2, ¶ 0011, ¶ 0037, ¶ 0043 and ¶ 0052);
receiving/receive, from the second ML module of the network node, an accuracy metric based on a comparison of the transmitted first prediction data of the first ML model and second prediction data obtained from the second ML model (i.e. the method/system, e.g. the system 100 as a whole, is presented with [i.e. receive] an accuracy value [i.e. an accuracy metric] determined by the 2nd Neural network model 135 of the local smart box [i.e. from the second ML module of the network node], wherein the accuracy value is obtained by comparing [i.e. based on a comparison] results of the 2nd Neural network model 135, e.g. heavyweight model [i.e. second prediction data obtained from the second ML model] to the results received from the 1st NN model, e.g. lightweight model [i.e. the transmitted first prediction data of the first ML model]) (210, 215 & 220 – Fig. 2 and ¶ 0052 - 0053); and
updating/update the first ML model based on the received accuracy metric and an accuracy condition (i.e. if the accuracy value [i.e. based on the received accuracy metric] is below a threshold value [i.e. an accuracy condition], the method/system, e.g. system 100, may update the lightweight model [i.e. the first ML model]) (220, 235 & 245 – Fig. 2 and ¶ 0053 - 0055); and
(Claim 11) execute the first ML model (i.e. the computing device may execute the lightweight model [i.e. the first ML model]) (110 – Fig. 1, ¶ 0037, ¶ 0046 and ¶ 0052).
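For illustration of the device-side mapping above, the four claimed steps can be sketched as follows. This is a minimal sketch with invented names (FirstMLModule, network_node.evaluate, updated_model) and an assumed threshold; it is not Kou's code.

```python
# Hypothetical device-side sketch of the four claimed steps as mapped onto
# Kou's first computing device 105. Names and structure are illustrative.

class FirstMLModule:
    """Stands in for the edge device running the lightweight model."""

    def __init__(self, model, network_node):
        self.model = model              # the "first ML model" (production model)
        self.network_node = network_node

    def step(self, input_data):
        # (1) implement the first ML model to generate first prediction data
        first_prediction = self.model(input_data)
        # (2) transmit the input data and first prediction data to the
        #     second ML module at the network node, and
        # (3) receive back an accuracy metric based on comparison with the
        #     second ML model's prediction (both folded into one call here)
        accuracy = self.network_node.evaluate(input_data, first_prediction)
        # (4) update the first ML model when the accuracy condition is met
        if accuracy < 0.9:  # assumed accuracy condition
            self.model = self.network_node.updated_model()
        return first_prediction
```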
Regarding Claim 2, Kou discloses:
executing, by the first ML module, the first ML model (i.e. smart box edge computing device [i.e. a first machine learning ML module] may execute 1st NN model 110 [i.e. a first ML model] to generate results) (110 – Fig. 1, ¶ 0037, ¶ 0046 and ¶ 0052).
Regarding Claims 5 and 13, Kou discloses:
wherein the input data are received from the WTRU (i.e. first computing device 105 [i.e. a wireless transmit receive unit (WTRU)], which may be an AI camera, captures the input data) (105 & 110 – Fig. 1, ¶ 0024, ¶ 0040 and ¶ 0046).
Regarding Claims 6 and 14, Kou discloses:
wherein the second ML model has any of: (1) a greater accuracy metric than the first ML model for a predetermined validation data set, (2) a greater number of floating-point operations, and (3) a greater memory size (i.e. the heavyweight model is likely to be more complex, thereby using more computing resources (memory, computing time, energy, processing power, etc.) [i.e. a greater memory size], but it achieves higher accuracy than the lightweight model [i.e. a greater accuracy metric than the first ML model for a predetermined validation data set]) (¶ 0037).
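For illustration, the three alternative relationships recited in claims 6 and 14 can be expressed as follows; the numeric values are invented placeholders, not figures from Kou.

```python
# Illustrative check of the claimed "any of" relationships between the
# heavyweight (second) and lightweight (first) models. Values are invented.

from dataclasses import dataclass

@dataclass
class ModelStats:
    validation_accuracy: float  # accuracy on a predetermined validation set
    flops: int                  # floating-point operations per inference
    memory_bytes: int           # memory footprint of the model

lightweight = ModelStats(validation_accuracy=0.88, flops=10**8, memory_bytes=5 * 2**20)
heavyweight = ModelStats(validation_accuracy=0.97, flops=10**10, memory_bytes=500 * 2**20)

# The claim recites "any of" the three relationships, so one suffices.
assert (heavyweight.validation_accuracy > lightweight.validation_accuracy
        or heavyweight.flops > lightweight.flops
        or heavyweight.memory_bytes > lightweight.memory_bytes)
```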
Regarding Claims 8 and 16, Kou discloses:
generating/generate a dataset, wherein the dataset comprises input data associated with at least a second prediction data of the second prediction data generated by the second ML module (i.e. a training dataset may be formed by using the collected input data [i.e. comprises input data associated with at least a second prediction data] as input data and the corresponding results from the second neural network as ground truth results [i.e. a second prediction data of the second prediction data generated by the second ML module]) (¶ 0005 - 0006).
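For illustration, the dataset generation read onto Kou (¶ 0005 – 0006) can be sketched as follows; the function and argument names are hypothetical.

```python
# Minimal sketch of the dataset generation read onto Kou (paragraphs
# 0005-0006): collected inputs are paired with the heavyweight (second)
# model's results, which serve as ground-truth labels. Names are invented.

def generate_dataset(collected_inputs, heavyweight_model):
    """Return (input, label) pairs, labeling each collected input with the
    second ML module's prediction for that input."""
    return [(x, heavyweight_model(x)) for x in collected_inputs]
```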
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kou as applied to claims 1 and 11 above, and further in view of Ghanta et al. (US PG PUB 20200034665), hereinafter "Ghanta".
Regarding Claims 7 and 15, Kou discloses all the features with respect to Claim 1 as described above.
However, Kou does not explicitly disclose:
wherein the first ML model is updated by selecting a third ML model among one or more candidate ML models.
On the other hand, in the same field of endeavor, Ghanta teaches:
wherein the first ML model is updated by selecting a third ML model among one or more candidate ML models (i.e. the action module 312 is configured to trigger an action associated with the first machine learning algorithm in response to the predicted suitability of the first machine learning algorithm/model for analyzing the inference data set not satisfying a predetermined suitability threshold; the action may include selecting a machine learning model [i.e. a third ML model] from a plurality of machine learning models [i.e. one or more candidate ML models]) (¶ 0071 and ¶ 0101).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of Kou to include the feature wherein the first ML model is updated by selecting a third ML model among one or more candidate ML models, as taught by Ghanta, so that the best-suited machine learning model may be activated when the predicted suitability of the first machine learning algorithm/model for analyzing the inference data set does not satisfy a predetermined suitability threshold (¶ 0071 and ¶ 0101).
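For illustration, the substitution taught by Ghanta (¶ 0071 and ¶ 0101) can be sketched as follows; the threshold value and all names are assumptions, not Ghanta's.

```python
# Hypothetical sketch of Ghanta's teaching: when the first model's predicted
# suitability for the inference data fails a threshold, select a replacement
# (the "third ML model") from candidate models. All names are invented.

SUITABILITY_THRESHOLD = 0.8  # assumed value

def select_model(first_model, candidate_models, suitability_fn, inference_data):
    if suitability_fn(first_model, inference_data) >= SUITABILITY_THRESHOLD:
        return first_model  # suitability satisfied; keep the first model
    # otherwise activate the best-suited candidate
    return max(candidate_models, key=lambda m: suitability_fn(m, inference_data))
```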
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kou as applied to claims 8 and 16 above, and further in view of Jung (US PG PUB 20210056412), hereinafter "Jung".
Regarding Claims 9 and 17, Kou discloses all the features with respect to Claim 8 as described above.
However, Kou does not explicitly disclose:
wherein the at least second prediction data is associated with a confidence score, and wherein generating the dataset further comprises adding to the dataset the at least second prediction data, based on the confidence score associated with the at least second prediction data.
On the other hand, in the same field of endeavor, Jung teaches:
wherein the at least second prediction data is associated with a confidence score (i.e. data generated by neural network [i.e. the at least second prediction data] is associated with corresponding confidence level [i.e. confidence score]) (Abstract, ¶ 0025 and ¶ 0100), and
wherein generating the dataset further comprises adding to the dataset the at least second prediction data, based on the confidence score associated with the at least second prediction data (i.e. a training dataset may be generated by labeling the subset of candidate data in accordance with a confidence level label based on the confidence conditions) (Abstract, ¶ 0025 and ¶ 0100).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of Kou to include the feature wherein the at least second prediction data is associated with a confidence score, and wherein generating the dataset further comprises adding to the dataset the at least second prediction data, based on the confidence score associated with the at least second prediction data, as taught by Jung, so that a training dataset may be generated in accordance with the confidence levels associated with the candidate data (Abstract, ¶ 0025 and ¶ 0100).
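For illustration, the confidence gating taught by Jung can be sketched as follows; the threshold value and the names are assumptions, not Jung's.

```python
# Illustrative sketch of Jung's teaching: a second prediction is added to
# the generated dataset only if its confidence score satisfies a condition.

CONFIDENCE_THRESHOLD = 0.95  # assumed confidence condition

def add_if_confident(dataset, input_data, prediction, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        dataset.append((input_data, prediction))
    return dataset
```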
Claims 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kou as applied to claims 8 and 16 above, and further in view of Sharma et al. (US PG PUB 20220391312), hereinafter "Sharma".
Regarding Claims 10 and 18, Kou discloses all the features with respect to Claim 8 as described above.
However, Kou does not explicitly disclose:
wherein the first ML model is retrained by the first ML module, using the generated dataset.
On the other hand, in the same field of endeavor, Sharma teaches:
wherein the first ML model is retrained by the first ML module, using the generated dataset (i.e. ML prediction model [i.e. the first ML model] may be retrained by the RA platform [i.e. the first ML module] using the updated training set [i.e. the generated dataset]) (Fig. 1, Fig. 2, ¶ 0021 and ¶ 0060 - 0061).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of Kou to include the feature wherein the first ML model is retrained by the first ML module, using the generated dataset, as taught by Sharma, so that the machine learning model may be retrained upon detection of inaccuracy in the output of the machine learning model (Fig. 1, Fig. 2, ¶ 0021 and ¶ 0060 - 0061).
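For illustration, the retraining taught by Sharma can be sketched as follows; the training loop is a generic placeholder, not Sharma's RA platform.

```python
# Hypothetical sketch of retraining the first ML model on the generated
# dataset; train_step is an assumed callback performing one update.

def retrain(model, dataset, train_step, epochs=1):
    for _ in range(epochs):
        for x, y in dataset:
            train_step(model, x, y)  # e.g., one gradient step toward label y
    return model
```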
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOE MIN HLAING whose telephone number is (303)297-4282. The examiner can normally be reached Monday-Friday 9AM - 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Soe Hlaing/ Primary Examiner, Art Unit 2451