DETAILED ACTION
Applicants’ response filed 1/20/26 has been considered.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
Prior rejections are maintained and reformulated in view of amendments.
The IDS has been considered; an initialed PTO-1449 is attached.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
For example, claim 1 recites:
A Wireless Transmit/Receive Unit (WTRU) comprising: a processor configured to:
receive configuration information for a machine learning (ML) model, wherein the configuration information includes one or more parameters associated with an assessment mode, wherein at least one of the one or more parameters is associated with a dataset used to train the ML model;
receive a request for activating the assessment mode associated with the ML model;
perform measurements for the assessment mode, wherein the measurements comprise at least one of: end-to-end (E2E) performance statistics, one or more intermediate key performance indicators (KPIs), or data distribution measurements;
determine an error cause associated with the ML model based on the measurements, wherein the determined error cause includes an indication of at least one of: the data distribution measurements being out-of-distribution (OOD) with respect to the dataset used to train the ML model, or whether the error cause occurred when the WTRU was implementing the ML model; and
send one or more reports that include at least one of: an indication of the error cause, an indication of a measurement related to the error cause, or an indication of a mitigation action for the error cause.
The claim states, “…configuration information for a machine learning (ML) model, wherein the configuration information includes one or more parameters associated with an assessment mode, wherein at least one of the one or more parameters is associated with a dataset used to train the ML model…”
If the configuration information includes only a single parameter associated with the assessment mode, it is unclear how that same parameter can also be associated with the dataset used to train the ML model.
This reading would necessarily require the parameters to number more than one.
Is the ML model trained in assessment mode or is the model assessed in assessment mode?
Essential elements are missing from the claim.
Then the claim states, “…perform measurements for the assessment mode, wherein the measurements comprise at least one of: end-to-end (E2E) performance statistics, one or more intermediate key performance indicators (KPIs), or data distribution measurements; determine an error cause associated with the ML model based on the measurements, wherein the determined error cause includes an indication of at least one of: the data distribution measurements being out-of-distribution (OOD) with respect to the dataset used to train the ML model, or whether the error cause occurred when the WTRU was implementing the ML model…”
If the measurements performed in the assessment mode are at least one of E2E performance statistics, one or more intermediate KPIs, or data distribution measurements, then it is unclear how the error cause determination can be based on the data distribution measurements, which need not have been performed.
There is a disconnect between the measurements and the error cause determination.
Essential elements are missing from the claim.
Independent claim 15 is rejected for similar reasons. Dependent claims 2-14 and 16-20 are rejected at least by virtue of their dependency from a rejected base claim.
Corrections are requested.
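For illustration only, and not as part of the record: one non-limiting way the claimed out-of-distribution determination could operate is to compare the data distribution measurements against summary statistics of the dataset used to train the ML model. All names and thresholds below are hypothetical and are offered solely as a sketch of the concept under discussion:

```python
import statistics

def is_out_of_distribution(measurements, train_data, z_threshold=3.0):
    """Flag a set of measurements whose mean lies more than z_threshold
    standard deviations from the training-dataset mean (hypothetical
    criterion; the claim does not recite any particular test)."""
    mu = statistics.fmean(train_data)
    sigma = statistics.pstdev(train_data)
    if sigma == 0:
        # Degenerate training set: any deviation is out-of-distribution.
        return any(x != mu for x in measurements)
    sample_mean = statistics.fmean(measurements)
    return abs(sample_mean - mu) / sigma > z_threshold

train = [10.0, 10.5, 9.5, 10.2, 9.8]
print(is_out_of_distribution([10.1, 9.9], train))   # False (in-distribution)
print(is_out_of_distribution([25.0, 26.0], train))  # True (OOD)
```

The sketch assumes scalar measurements and a simple z-score test; a real WTRU implementation could use any distribution-distance criterion.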
It is the Examiner’s conclusion that the claims of the present application, as presented, are unclear. Applicants are encouraged to formulate claim language that clearly defines the novelty of the application. Pertinent prior art has been cited and may be applied once the claims are clear. For example, Piazentin Ono et al. (USPAP 20260010786A1) teaches (see abstract and Figure 5) methods for a machine-learning network that provide efficient, scalable, and granular analyses during validation of a machine learning model. Validation of models depends upon many factors, including the real-world application of the model, the type of model being trained, and the types of data samples it is being trained on. To provide relevant edge-case information that pertains to a user’s specific model, data slice finding techniques may be used to identify subsets of the dataset that are particularly problematic. By limiting the length of the slice description that the algorithm searches and by configuring the algorithm to target specific types of errors, users are provided with a more granular analysis that allows them to determine how, or if, they need to retrain the model.
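For illustration only: the slice-finding approach described in the cited reference can be sketched as a search over short feature-value descriptions for subsets of the dataset with elevated error rates. This is a hypothetical sketch of the general technique, not the reference’s actual implementation; all names and parameters are invented:

```python
from itertools import combinations

def find_problem_slices(rows, errors, max_len=2, min_support=2, threshold=0.5):
    """Return slice descriptions (conjunctions of up to max_len
    feature=value terms) whose error rate exceeds threshold."""
    features = sorted(rows[0].keys())
    results = []
    for k in range(1, max_len + 1):
        for feats in combinations(features, k):
            # Enumerate every value combination actually seen in the data.
            seen = {tuple(r[f] for f in feats) for r in rows}
            for vals in seen:
                idx = [i for i, r in enumerate(rows)
                       if all(r[f] == v for f, v in zip(feats, vals))]
                if len(idx) < min_support:
                    continue  # skip slices with too few samples
                rate = sum(errors[i] for i in idx) / len(idx)
                if rate > threshold:
                    results.append((dict(zip(feats, vals)), rate))
    return results

rows = [{"dev": "a", "os": "x"}, {"dev": "a", "os": "y"},
        {"dev": "b", "os": "x"}, {"dev": "b", "os": "y"}]
errors = [1, 1, 0, 0]  # 1 = model erred on this sample
print(find_problem_slices(rows, errors))  # → [({'dev': 'a'}, 1.0)]
```

Limiting `max_len` corresponds to the reference’s bounding of the slice-description length, which keeps the search tractable and the output interpretable.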
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUJTABA M CHAUDRY whose telephone number is (571)272-3817. The examiner can normally be reached Monday-Friday 9am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Albert DeCady can be reached at 571-272-3819. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MUJTABA M. CHAUDRY
Primary Examiner
Art Unit 2112
/MUJTABA M CHAUDRY/Primary Examiner, Art Unit 2112