DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Introduction
This is a final Office action in response to remarks filed on 1 December 2025. Claims 1-2, 13, 16-18, and 26 are amended. No claims are canceled or added. Claims 1-30 are pending.
Response to Arguments
Applicant’s arguments, see remarks pages 10-12, filed 1 December 2025, with respect to the rejections of claims 1-30 under 35 USC 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made incorporating newly discovered prior art Ashraf et al. (U.S. Patent 12,526,765) to teach the amended language.
Examiner notes that applicant’s remarks are directed towards the amended language and the applicability of the previously cited prior art Hasegawa et al. Examiner relies upon newly discovered Ashraf et al. to teach the amended language.
Claim Interpretation
The claims have been evaluated under the latest Patent Eligibility Guidance and are considered eligible.
Claim Objections
Claims 2-7 and 16 are objected to because of the following informalities:
Claim 2 recites “generate first location information”, “predict the first location information”, and “the predicted first UE location information”. Claim 2 also recites “generating second location information” and “the second UE location information”. Examiner recommends amending the claims to ensure consistent language when describing the first and second location information. Dependent claims 3-7 are also objected to based on their dependency on claim 2. Claims 3-6 also recite “the second location information”, and dependent claim 4 also recites “the first location information”.
Claim 16 recites “generate estimated location information” and “the predicted first UE location information”. Neither claim 16 nor parent claim 13 describes predicting a first UE location. Examiner notes that claim 16 is similar to claim 2; however, the terminology is inconsistent between the two claims. Examiner recommends amending the claims to ensure consistent language when describing the location information.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-30 are rejected under 35 U.S.C. 103 as being unpatentable over Hasegawa et al. (WO 2022/155244 A2) in view of Ashraf et al. (U.S. Patent 12,526,765).
Regarding claim 1, Hasegawa disclosed a method of wireless communication performed by a user equipment (UE) (see Hasegawa Fig. 5, [00117]: WTRU initiated training procedure for positioning | [00122]: WTRU-initiated positioning), the method comprising:
receiving, from a network entity, a configuration message associated with a first set of reference signals and a second set of reference signals (see Hasegawa Fig. 5: WTRU receives a training configuration message (#507, [00117 #4]) and PRS configuration (#509, [00117 #5]) | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report | [00132]: WTRU receives at least two PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
performing first measurements based on the first set of reference signals received from a first set of transmit/receive points (TRPs) to generate first measurement data (see Hasegawa Fig. 5 #511, [00117 #6]: WTRU performs measurements | Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
performing second measurements based on the second set of reference signals received from a second set of TRPs to generate second measurement data (see Hasegawa Fig. 5 #511, [00117 #6]: WTRU performs measurements | Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS), the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information (see Ashraf combination below); and
transmitting, to the network entity, a reporting message based on the second measurements or based on the first measurement data and the second measurement data (see Hasegawa Fig. 5: WTRU transmits inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method).
Although Hasegawa disclosed using positioning reference signals to generate measurement data (see citations and explanations above), Hasegawa did not explicitly disclose “the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information”.
However, in a related art of using ML models to improve positioning estimates (see Ashraf 8:20-27), Ashraf disclosed a method performed by UE #610 for training, running, validating, and testing an ML model for positioning (see Ashraf Fig. 6, 16:61-65). A UE determines a positioning measurement based on received reference signals (see Ashraf 10:7-24) that include sidelink positioning reference signals (see Ashraf 10:51-67) [i.e. “first measurement data”]. UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). A UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) [i.e. “the first measurement data is associated with positioning information”] and also receives reference positioning-related information (e.g., ground truth) that is to be used for testing and/or validating the ML model (see Ashraf Fig. 3, 13:2-9) [i.e. “the second measurement data associated with monitoring performance information”]. The information from the received measurement report is fed into the ML model to obtain an estimated position and then the output is used in combination with the received ground truth information to determine the performance or accuracy of the ML model (see Ashraf Fig. 3, 13:9-25). The performance determination is then sent to a network node (see Ashraf Fig. 3, 13:23-37).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
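For illustration, the performance monitoring Ashraf describes (comparing the ML model's estimated position against a reference device's ground-truth position, see Ashraf Fig. 3, 13:9-25 and 14:27-37) can be sketched as follows; the function names and the 5-meter threshold are illustrative assumptions, not values taken from Ashraf:

```python
import math

def position_error(estimated, reference):
    """Euclidean distance between an ML-estimated position and a
    ground-truth reference position, each given as (x, y) in meters."""
    return math.dist(estimated, reference)

def model_is_accurate(estimated, reference, threshold_m=5.0):
    """Return True if the estimation error is within the accuracy threshold."""
    return position_error(estimated, reference) <= threshold_m

# Example: the model predicts (10.0, 20.0); the PRU ground truth is (11.0, 22.0).
err = position_error((10.0, 20.0), (11.0, 22.0))  # sqrt(5), about 2.24 m
```

The error (or a derived metric) is what would be carried in the reporting message to the network node.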
Regarding claim 2, Hasegawa-Ashraf disclosed the method of claim 1, further comprising:
providing the first measurement data as input data to a machine learning (ML) positioning model to generate first location information, the ML positioning model configured to predict the first location information based on the first measurement data (see Hasegawa Fig. 4, [0098]: WTRU trains a machine learning model such that the input to model includes measurements obtained from PRS transmitted from different TRPs ([00101 Example 2]) and the output is an estimated position ([00101] Example 2) | [00112]: after training, the WTRU applies the ML model to process sets of inputs and produce outputs, e.g. estimated position | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates for the reference point and each of the positioning methods; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report); and
generating second location information based on the second measurement data, the second UE location information indicative of a quality of the predicted first UE location information (Ashraf disclosed a UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) and the information from the received measurement report is fed into the ML model to obtain an estimated position [i.e. “predicted first location”] (see Ashraf Fig. 3, 13:9-25). UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. The reference positioning-related information includes a true or accurate position of the reference device, a predicted output of the ML model with the assumption that the ML model is operating within a threshold level of accuracy, or an estimated position of the reference device when using the reference positioning information as inputs to the ML model (see Ashraf 14:4-26) [i.e. “generating second location information based on the second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). The performance of the ML model is determined based on a comparison of the estimated position output from the ML model [i.e. “first location”] and the reference position [i.e. “second location”] such that the difference between the two positions is an indication of the ML model’s performance [i.e. “the second UE location information indicative of a quality of the predicted first UE location information”] (see Ashraf 14:27-37). If the ML model is determined to be inaccurate, the ML model is deactivated, updated, replaced, and/or retrained (see Ashraf 14:48-59).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 3, Hasegawa-Ashraf disclosed the method of claim 2, wherein the reporting message includes the second location information (see Ashraf 14:27-35: performance indication is based on a comparison between the estimated and reference positions and includes information about the difference, error, etc.; it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the type of information, e.g., location, included in the report is a matter of implementation choice.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 4, Hasegawa-Ashraf disclosed the method of claim 2, wherein the reporting message includes a metric that is based on a comparison between the first location information and the second location information (see Ashraf 14:27-35: performance indication is based on a comparison between the estimated and reference positions and includes information about the difference, error, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 5, Hasegawa-Ashraf disclosed the method of claim 2, wherein the ML positioning model is trained based on features extracted from reference signal measurements (see Hasegawa Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS), and wherein generating the second location information comprises applying a positioning technique to the second measurement data (see Hasegawa Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates for the reference point and each of the positioning methods; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration).
Regarding claim 6, Hasegawa-Ashraf disclosed the method of claim 2, wherein generating the second location information comprises providing the second measurement data as input data to a second ML positioning model to generate the second location information (see Hasegawa [0160]: multiple types of training methods may be used), and wherein the second ML positioning model has greater complexity than the ML positioning model (see Hasegawa [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs; examiner notes that the number of TRPs affects the complexity of the system).
Regarding claim 7, Hasegawa-Ashraf disclosed the method of claim 2, further comprising:
receiving, from the network entity, an instruction message based on the reporting message (see Hasegawa Fig. 6, [00183]: determines positioning estimates for the reference point and each of the positioning methods, training continues until an expiration time or a determined weight exceeds a threshold; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report), the instruction message indicating a life cycle management action (see Hasegawa Fig. 6, [00183]: training continues until an expiration time | [00118]: model training ends when, e.g., [00118 second bullet]: the preconfigured training duration has expired; [00179]: configured PRS parameters are used until a preconfigured timer expires); and
performing the life cycle management action on the ML positioning model (see Hasegawa [00118]: model training ends when, e.g., [00118 second bullet]: the preconfigured training duration has expired; [00179]: configured PRS parameters are used until a preconfigured timer expires).
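The life cycle management behavior cited above (e.g., deactivating, updating, or retraining an inaccurate model per Ashraf 14:48-59, or ending training when a timer expires per Hasegawa [00118]) can be sketched as a threshold-based action selection; the thresholds and action names below are illustrative assumptions, not values from either reference:

```python
def select_lcm_action(error_m, retrain_threshold_m=5.0, deactivate_threshold_m=20.0):
    """Map a monitored positioning error (meters) to a life cycle management action."""
    if error_m <= retrain_threshold_m:
        return "continue"    # model is operating within the accuracy target
    if error_m <= deactivate_threshold_m:
        return "retrain"     # degraded but recoverable: update/retrain the model
    return "deactivate"      # model is too inaccurate to keep using
```

In the claimed arrangement, the network entity would derive the action from the UE's reporting message and convey it in the instruction message.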
Regarding claim 8, Hasegawa-Ashraf disclosed the method of claim 1, wherein (examiner notes that this claim describes a list that ends with “or a combination thereof” and interprets this claim according to an “or” structure such that at least one item is described):
the second set of reference signals are associated with a larger bandwidth than the first set of reference signals (see Hasegawa [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS, e.g. “PRS set A2 contains a PRS configuration with more symbols and/or bandwidth than PRS set A1”);
the second set of reference signals are associated with a different periodicity than the first set of reference signals;
the second set of TRPs include one or more different TRPs than the first set of TRPs;
the second set of TRPs include more TRPs than the first set of TRPs;
the second set of reference signals at least partially overlap with the first set of reference signals in time, frequency, or both;
the second set of reference signals are transmitted at a higher transmit (TX) power than the first set of reference signals;
the second set of reference signals are associated with a different physical frequency layer (PFL) mapping than the first set of reference signals;
or a combination thereof.
Regarding claim 9, Hasegawa-Ashraf disclosed the method of claim 1, further comprising:
prior to receiving the configuration message, transmitting, to the network entity, a capabilities message (see Hasegawa Fig. 5 #505, [00117 #3]: WTRU sends capabilities message to network before receiving the configuration information in #507 and #509) that indicates (examiner notes that this claim describes a list that ends with “or both”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) reference signal measuring capabilities (see Hasegawa [00162]: capability information sent by the WTRU includes, e.g., [00162 third bullet]: types of supported reference signals), reporting capabilities, or both.
Regarding claim 10, Hasegawa-Ashraf disclosed the method of claim 1, wherein the configuration message (see Hasegawa Fig. 5: WTRU receives a training configuration message (#507, [00117 #4]) and PRS configuration (#509, [00117 #5]) | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report | [00132]: WTRU receives at least two PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS) includes (examiner notes that this claim describes a list that ends with “or a combination thereof”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) first configuration information associated with the first set of reference signals, second configuration information associated with the second set of reference signals, measurement prioritization information (see Hasegawa [00142]: WTRU receives candidates from the network according to a prioritized order, such that the measurements and positioning model training are performed based on the highest priority reference point that is above the preconfigured threshold | [0160]: multiple types of training methods may be used, e.g., the training method to be used is determined based on a predefined priority of signals), reporting configuration information, or a combination thereof.
Regarding claim 11, Hasegawa-Ashraf disclosed the method of claim 10, wherein the measurement prioritization information indicates that the UE is to perform the second measurements according to one of:
an always measure priority setting;
a UE autonomous decision setting;
a network-configured condition setting (see Hasegawa [00142]: WTRU receives candidates from the network according to a prioritized order, such that the measurements and positioning model training are performed based on the highest priority reference point that is above the preconfigured threshold | [0160]: multiple types of training methods may be used, e.g., the training method to be used is determined based on a predefined priority of signals); or
a network request setting.
Regarding claim 12, Hasegawa-Ashraf disclosed the method of claim 10, wherein the reporting configuration information (see Hasegawa [00110]: WTRU is configured to send a report based on a (pre)configured condition) indicates (examiner notes that this claim describes a list that ends with “or a combination thereof”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) a reported information type, reporting scheduling information (see Hasegawa [00109]: WTRU transmits a training status report on a periodic basis and/or based on preconfigured events or occasions), a reporting quantity, one or more reporting conditions, or a combination thereof.
Regarding claim 13, the claim contains the limitations substantially as described in claim 1 above, except that claim 1 describes a method and claim 13 describes a user equipment for implementing the method. Hasegawa disclosed, as recited in claim 13: A user equipment (UE) configured for wireless communication (see Hasegawa Fig. 5, [00117]: WTRU initiated training procedure for positioning | [00122]: WTRU-initiated positioning | Fig. 1B, [0026]: UE), the UE comprising:
a memory storing processor-readable code (see Hasegawa [00199]: method is implemented in a computer program incorporated in a computer readable medium for execution by a processor, e.g., in a WTRU, terminal, base station, etc.); and
at least one processor coupled to the memory, the at least one processor configured to execute the processor-readable code to cause the at least one processor to (see Hasegawa [00199]: method is implemented in a computer program incorporated in a computer readable medium for execution by a processor, e.g., in a WTRU, terminal, base station, etc.):
receive, from a network entity, a configuration message associated with a first set of reference signals and a second set of reference signals (see Hasegawa Fig. 5: WTRU receives a training configuration message (#507, [00117 #4]) and PRS configuration (#509, [00117 #5]) | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report | [00132]: WTRU receives at least two PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
perform first measurements based on the first set of reference signals received from a first set of transmit/receive points (TRPs) to generate first measurement data (see Hasegawa Fig. 5 #511, [00117 #6]: WTRU performs measurements | Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
perform second measurements based on the second set of reference signals received from a second set of TRPs to generate second measurement data (see Hasegawa Fig. 5 #511, [00117 #6]: WTRU performs measurements | Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS), the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information (see Ashraf combination below); and
transmit, to the network entity, a reporting message based on the second measurement data or based on the first measurement data and the second measurement data (see Hasegawa Fig. 5: WTRU transmits inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method).
Although Hasegawa disclosed using positioning reference signals to generate measurement data (see citations and explanations above), Hasegawa did not explicitly disclose “the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information”.
However, in a related art of using ML models to improve positioning estimates (see Ashraf 8:20-27), Ashraf disclosed a method performed by UE #610 for training, running, validating, and testing an ML model for positioning (see Ashraf Fig. 6, 16:61-65). A UE determines a positioning measurement based on received reference signals (see Ashraf 10:7-24) that include sidelink positioning reference signals (see Ashraf 10:51-67) [i.e. “first measurement data”]. UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). A UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) [i.e. “the first measurement data is associated with positioning information”] and also receives reference positioning-related information (e.g., ground truth) that is to be used for testing and/or validating the ML model (see Ashraf Fig. 3, 13:2-9) [i.e. “the second measurement data associated with monitoring performance information”]. The information from the received measurement report is fed into the ML model to obtain an estimated position and then the output is used in combination with the received ground truth information to determine the performance or accuracy of the ML model (see Ashraf Fig. 3, 13:9-25). The performance determination is then sent to a network node (see Ashraf Fig. 3, 13:23-37).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 14, Hasegawa-Ashraf disclosed the UE of claim 13, wherein the at least one processor is further configured to:
transmit, to the network entity, one or more positioning messages that include the first measurement data to enable training of a machine learning (ML) positioning model at the network entity (see Hasegawa [00122 last bullet]: WTRU-assisted training; WTRU receives configuration information from the network and sends a measurement report to the network for assisting with the training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning), wherein the reporting message includes the second measurement data (see Hasegawa [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”); [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4; [0105]: WTRU determines pos1 and pos2, determines weights associated with the location/position estimate, then obtains a new position estimate by using the weighted average, e.g., 0.8*pos1+0.2*pos2; after training, the WTRU indicates to the network that the weighted average is used to determine the estimated position | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method; Fig. 6, [00183]: WTRU determines positioning estimates for the reference point and each of the positioning methods, then receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; examiner notes that the iterative process closes and restarts with item #6).
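The weighted combination Hasegawa describes at [0105] (obtaining a new position estimate from a weighted average of per-method estimates, e.g., 0.8*pos1 + 0.2*pos2) can be sketched as follows; applying the weights componentwise to (x, y) estimates is an illustrative assumption:

```python
def weighted_position(positions, weights):
    """Combine per-method (x, y) position estimates using trained weights."""
    assert len(positions) == len(weights)
    total = sum(weights)  # normalize in case the weights do not sum to 1
    x = sum(w * p[0] for w, p in zip(weights, positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, positions)) / total
    return (x, y)

# pos1 = (10.0, 0.0) and pos2 = (20.0, 10.0), with weights 0.8 and 0.2:
est = weighted_position([(10.0, 0.0), (20.0, 10.0)], [0.8, 0.2])
# est == (12.0, 2.0)
```

In the WTRU-assisted case cited above, such weights are what the UE reports to the network rather than applying them locally.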
Regarding claim 15, Hasegawa-Ashraf disclosed the UE of claim 14, wherein the at least one processor is further configured to:
receive, from the network entity, location information based on transmission of the one or more positioning messages, the location information indicating a predicted location of the UE generated by the ML positioning model (see Hasegawa [00105]: network uses weights to derive the position of the WTRU | [00122 last bullet]: WTRU-assisted training; WTRU receives from the network a configuration, new measurements related to ML/AI for positioning | [00150]: WTRU receives its own position from the network).
Regarding claim 16, Hasegawa-Ashraf disclosed the UE of claim 13, wherein the at least one processor is further configured to:
transmit, to the network entity, one or more positioning messages that include the first measurement data to enable training of a machine learning (ML) positioning model at the network entity (see Hasegawa [00122 last bullet]: WTRU-assisted training; WTRU receives configuration information from the network and sends a measurement report to the network for assisting with the training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning); and
generate estimated location information based on the second measurement data, wherein the reporting message includes the estimated location information, the estimated location information indicative of a quality of the predicted first UE location information (Ashraf disclosed a UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) and the information from the received measurement report is fed into the ML model to obtain an estimated position [i.e. “predicted first UE location”] (see Ashraf Fig. 3, 13:9-25). UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. The reference positioning-related information includes a true or accurate position of the reference device, a predicted output of the ML model with the assumption that the ML model is operating within a threshold level of accuracy, or an estimated position of the reference device when using the reference positioning information as inputs to the ML model (see Ashraf 14:4-26) [i.e. “generating estimated location information based on the second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). The performance of the ML model is determined based on a comparison of the estimated position output from the ML model [i.e. “first location”] and the reference position [i.e. “second location”] such that the difference between the two positions is an indication of the ML model’s performance [i.e. “the estimated location information indicative of a quality of the predicted first UE location information”] (see Ashraf 14:27-37). If the ML model is determined to be inaccurate, the ML model is deactivated, updated, replaced, and/or retrained (see Ashraf 14:48-59).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 17, the claim contains the limitations, substantially as claimed, as described in claim 1 above, except that claim 17 is a method performed by a network entity. Hasegawa disclosed, as recited in claim 17: A method of wireless communication performed by a network entity (see Hasegawa Fig. 5, [00117]: network entities communicate with WTRU during WTRU-initiated training procedure for positioning | [00122]: WTRU-initiated positioning includes receiving configuration information from network), the method comprising:
transmitting, to a user equipment (UE) (see Hasegawa Fig. 1B, [0026]: UE), a configuration message associated with a first set of reference signals and a second set of reference signals (see Hasegawa Fig. 5: WTRU receives from the network a training configuration message (#507, [00117 #4]) and PRS configuration (#509, [00117 #5]); examiner notes that the network transmits the configuration message(s) prior to the WTRU being able to receive them | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report | [00132]: WTRU receives at least two PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS), the first set of reference signals transmitted by a first set of transmit/receive points (TRPs) and the second set of reference signals transmitted by a second set of TRPs (see Hasegawa Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. 
one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
receiving, from the UE, one or more positioning messages that include first measurement data associated with the first set of reference signals (see Hasegawa Fig. 5: WTRU transmits to the network an inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method); and
receiving, from the UE, a reporting message based on second measurement data associated with the second set of reference signals or based on the first measurement data and the second measurement data (see Hasegawa Fig. 5: WTRU transmits to the network an inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method), the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information (see Ashraf combination below).
Although Hasegawa disclosed using positioning reference signals to generate measurement data (see citations and explanations above), Hasegawa did not explicitly disclose “the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information”.
However, in a related art of using ML models to improve positioning estimates (see Ashraf 8:20-27), Ashraf disclosed a method performed by UE #610 for training, running, validating, and testing an ML model for positioning (see Ashraf Fig. 6, 16:61-65). A UE determines a positioning measurement based on received reference signals (see Ashraf 10:7-24) that include sidelink positioning reference signals (see Ashraf 10:51-67) [i.e. “first measurement data”]. UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). A UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) [i.e. “the first measurement data is associated with positioning information”] and also receives reference-positioning related information (e.g., ground truth) that is to be used for testing and/or validating the ML model (see Ashraf Fig. 3, 13:2-9) [i.e. “the second measurement data associated with monitoring performance information”]. The information from the received measurement report is fed into the ML model to obtain an estimated position and then the output is used in combination with the received ground truth information to determine the performance or accuracy of the ML model (see Ashraf Fig. 3, 13:9-25). The performance determination is then sent to a network node (see Ashraf Fig. 3, 13:23-37).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 18, Hasegawa-Ashraf disclosed the method of claim 17, further comprising:
providing the first measurement data as input data to a machine learning (ML) positioning model to generate first UE location information, the ML positioning model configured to predict the first UE location information based on the first measurement data (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network performs input of measurement data to an ML model | see Hasegawa [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning | Fig. 4, [0098]: WTRU trains a machine learning model such that the input to model includes measurements obtained from PRS transmitted from different TRPs ([00101 Example 2]) and the output is an estimated position ([00101 Example 2]) | [00112]: after training, the WTRU applies the ML model to process sets of inputs and produce outputs, e.g. estimated position | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates for the reference point and each of the positioning methods; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report); and
generating second UE location information based on the second measurement data, the second UE location information indicative of a quality of the predicted first UE location information (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network performs generating second UE location information; also, examiner notes that “second UE location” can be interpreted as a reference location for a different UE or a second location for the same UE | Ashraf disclosed a UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) and the information from the received measurement report is fed into the ML model to obtain an estimated position [i.e. “predicted first location”] (see Ashraf Fig. 3, 13:9-25). UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. The reference positioning-related information includes a true or accurate position of the reference device, a predicted output of the ML model with the assumption that the ML model is operating within a threshold level of accuracy, or an estimated position of the reference device when using the reference positioning information as inputs to the ML model (see Ashraf 14:4-26) [i.e. “generating second location information based on the second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). The performance of the ML model is determined based on a comparison of the estimated position output from the ML model [i.e. “first location”] and the reference position [i.e. “second location”] such that the difference between the two positions is an indication of the ML model’s performance [i.e. 
“the second UE location information indicative of a quality of the predicted first UE location information”] (see Ashraf 14:27-37). If the ML model is determined to be inaccurate, the ML model is deactivated, updated, replaced, and/or retrained (see Ashraf 14:48-59).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 19, Hasegawa-Ashraf disclosed the method of claim 18,
wherein the ML positioning model is trained based on features extracted from reference signal measurements (see Hasegawa Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00176]: ML model is trained using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS | [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning), and
wherein generating the second UE location information comprises applying a positioning technique to the second measurement data (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network performs generating the second UE location information | see Hasegawa Fig. 6, [00183]: measures PRSs based on the PRS configuration, determines positioning estimates for the reference point and each of the positioning methods; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration | [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning).
Regarding claim 20, Hasegawa-Ashraf disclosed the method of claim 18,
wherein generating the second UE location information comprises providing the second measurement data as input data to a second ML positioning model to generate the second UE location information (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network performs generating the second UE location information | see Hasegawa [00160]: multiple types of training methods may be used | [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning), and
wherein the second ML positioning model has greater complexity than the ML positioning model (see Hasegawa [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. one configuration uses three TRPs whereas another configuration uses six TRPs; examiner notes that the number of TRPs affects the complexity of the system).
Regarding claim 21, Hasegawa-Ashraf disclosed the method of claim 18, further comprising:
transmitting, to the UE, the first UE location information based on receiving the one or more positioning messages (see Hasegawa [00105]: network uses weights to derive the position of the WTRU | [00122 last bullet]: WTRU-assisted training; WTRU receives from the network a configuration, new measurements related to ML/AI for positioning | [00150]: WTRU receives its own position from the network).
Regarding claim 22, Hasegawa-Ashraf disclosed the method of claim 18, further comprising:
generating a metric based on a comparison between the first UE location information and the second UE location information (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network generates the metric | see Hasegawa [00155]: identifying a difference between the estimated position and the position obtained from the GNSS module; [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4; [00105]: pos1 and pos2 are determined, then weights are determined that are associated with the location/position estimate, then a new position estimate is obtained by using the weighted average, e.g., 0.8*pos1+0.2*pos2; [00111 last bullet]: training status report includes variation in positioning accuracy measured by the difference between the intended output and the ML model output | [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning | [00105]: network uses weights to derive the position of the WTRU);
selecting a life cycle management action based on the metric (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network selects a life cycle management action | see Hasegawa Fig. 6, [00183]: training continues until an expiration time | [00118]: model training ends when, e.g., [00118 second bullet]: the preconfigured training duration has expired; [00179]: configured PRS parameters are used until a preconfigured timer expires; examiner notes that the expiration time is selected when a timer is started); and
performing the life cycle management action on the ML positioning model (examiner notes that although this claim describes a method performed by a network entity, this claim limitation includes the scope in which the UE and/or the network performs the life cycle management action | see Hasegawa [00118]: model training ends when, e.g., [00118 second bullet]: the preconfigured training duration has expired; [00179]: configured PRS parameters are used until a preconfigured timer expires).
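For illustration only, the claim 22 sequence as mapped above (generating an error metric by comparing a predicted location against a reference location, then selecting a life cycle management action based on that metric) can be sketched as follows; the Euclidean error metric, the threshold value, and all names are the examiner's own illustrative assumptions and do not appear in the cited references:

```python
import math

# Illustrative sketch only: an ML-estimated position is compared against
# a reference ("ground truth") position, and a model life-cycle action
# is selected when the resulting error metric is too large, consistent
# with the characterization of Ashraf 14:27-59 above. The distance
# metric, threshold, and action labels are hypothetical.

def position_error(estimated, reference):
    """Euclidean distance between two (x, y) positions."""
    return math.dist(estimated, reference)

def lifecycle_action(error_m, threshold_m=5.0):
    """Select a model life-cycle action from the error metric."""
    return "retrain" if error_m > threshold_m else "keep"

# Example: an estimate of (3, 4) against a reference of (0, 0) gives an
# error of 5.0, which does not exceed the hypothetical 5.0 m threshold.
```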
Regarding claim 23, Hasegawa-Ashraf disclosed the method of claim 18, further comprising:
generating a metric based on a comparison between the first UE location information and the second UE location information (see Hasegawa [00185]: network sends correction information to WTRU, e.g., [00187]: mean of timing offset, standard deviation or variance of timing offset; examiner notes that these are statistical metrics involving comparisons and that these metrics are generated before they can be sent | [00122 last bullet]: WTRU-assisted training; training at the network | [00162 last bullet]: WTRU indicates capability for supporting federated learning for positioning | [00126]: network-initiated positioning | [00105]: network uses weights to derive the position of the WTRU | [00155]: identifying a difference between the estimated position and the position obtained from the GNSS module; [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4; [00105]: pos1 and pos2 are determined, then weights are determined that are associated with the location/position estimate, then a new position estimate is obtained by using the weighted average, e.g., 0.8*pos1+0.2*pos2; [00111 last bullet]: training status report includes variation in positioning accuracy measured by the difference between the intended output and the ML model output); and
transmitting, to the UE, the metric (see Hasegawa [00185]: WTRU receives correction information from the network, e.g., [00187]: mean of timing offset, standard deviation or variance of timing offset, and updates measurements based on the correction information).
Regarding claim 24, Hasegawa-Ashraf disclosed the method of claim 17, wherein the reporting message includes the second measurement data or an estimated location that is based on the second measurement data (see Hasegawa [00105]: network uses weights to derive the position of the WTRU | [00122 last bullet]: WTRU-assisted training; WTRU receives from the network a configuration, new measurements related to ML/AI for positioning | [00150]: WTRU receives its own position from the network | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”); [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4; [00105]: WTRU determines pos1 and pos2, determines weights associated with the location/position estimate, then obtains a new position estimate by using the weighted average, e.g., 0.8*pos1+0.2*pos2; after training, the WTRU indicates to the network that the weighted average is used to determine the estimated position | [00108]: model obtains weights by an iterative method; Fig. 6, [00183]: WTRU determines positioning estimates for the reference point and each of the positioning methods, then receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration).
Regarding claim 25, Hasegawa-Ashraf disclosed the method of claim 17, wherein the reporting message includes (examiner notes that this claim describes a list that ends with “or both”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) the second measurement data, a metric that is based on a comparison between the first measurement data and the second measurement data (see Hasegawa [00155]: identifying a difference between the estimated position and the position obtained from the GNSS module; [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4; [00105]: pos1 and pos2 are determined, then weights are determined that are associated with the location/position estimate, then a new position estimate is obtained by using the weighted average, e.g., 0.8*pos1+0.2*pos2; [00111 last bullet]: training status report sent from WTRU includes variation in positioning accuracy measured by the difference between the intended output and the ML model output), or both.
Regarding claim 26, the claim contains the limitations, substantially as claimed, as described in claim 17 above, except that claim 17 describes a method and claim 26 describes a network entity for implementing the method. Hasegawa disclosed, as recited in claim 26: A network entity configured for wireless communication (see Hasegawa Fig. 5, [00117]: network entities communicate with WTRU during WTRU-initiated training procedure for positioning | [00122]: WTRU-initiated positioning includes receiving configuration information from network), the network entity comprising:
a memory storing processor-readable code (see Hasegawa [00199]: method is implemented in a computer program incorporated in a computer readable medium for execution by a processor, e.g., in a WTRU, terminal, base station, etc.); and
at least one processor coupled to the memory, the at least one processor configured to execute the processor-readable code to cause the at least one processor to (see Hasegawa [00199]: method is implemented in a computer program incorporated in a computer readable medium for execution by a processor, e.g., in a WTRU, terminal, base station, etc.):
transmit, to a user equipment (UE) (see Hasegawa Fig. 1B, [0026]: UE), a configuration message associated with a first set of reference signals and a second set of reference signals (see Hasegawa Fig. 5: WTRU receives from the network a training configuration message (#507, [00117 #4]) and PRS configuration (#509, [00117 #5]); examiner notes that the network transmits the configuration message(s) prior to the WTRU being able to receive them | Fig. 6, [00183]: WTRU receives a PRS configuration, receives and measures PRSs based on the PRS configuration, determines positioning estimates; WTRU also receives a PRS reconfiguration and measures PRSs based on the PRS reconfiguration; [00156]: training reconfiguration is received after sending the training report | [00132]: WTRU receives at least two PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS), the first set of reference signals transmitted by a first set of transmit/receive points (TRPs) and the second set of reference signals transmitted by a second set of TRPs (see Hasegawa Fig. 4, [00101 Example 2]: input to model includes measurements obtained from PRS transmitted from different TRPs | [00137 first bullet]: WTRU receives a switch pattern for switching among TRPs from which PRS are transmitted, e.g., WTRU is configured to receive PRS from 9 TRPs, however the WTRU uses PRSs from 3 TRPs to obtain a position estimate or to generate a measurement report, so the WTRU receives PRSs from 3 TRPs for a time duration, then uses PRSs from 3 different TRPs for another time period | [00170 second first-level bullet]: configuration includes specifying the number of TRPs, e.g. 
one configuration uses three TRPs whereas another configuration uses six TRPs | [00132]: WTRU receives at least two ([00134]: three; [00120]: four) PRS parameter sets from the network, e.g. default set and reconfiguration set | [00176]: WTRU is configured to train the ML model using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS);
receive, from the UE, one or more positioning messages that include first measurement data associated with the first set of reference signals (see Hasegawa Fig. 5: WTRU transmits to the network an inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method); and
receive, from the UE, a reporting message based on second measurement data associated with the second set of reference signals or based on the first measurement data and the second measurement data (see Hasegawa Fig. 5: WTRU transmits to the network an inference information message (#513, [00117 #7]) and completion indication message (#515, [00117 #8]) | [00109]: WTRU reports the status of a machine learning model training to the network (a “training status report”) | [00120]: inference data include weights for each PRS, e.g. WTRU received four PRSs and indicates to the network weights associated with each PRS, e.g. 0.1, 0.2, 0.3, and 0.4 | [00108]: WTRU presents parameters to the model and obtains weights by an iterative method), the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information (see Ashraf combination below).
Although Hasegawa disclosed using positioning reference signals to generate measurement data (see citations and explanations above), Hasegawa did not explicitly disclose “the first measurement data associated with positioning information and the second measurement data associated with monitoring performance information”.
However, in a related art of using ML models to improve positioning estimates (see Ashraf 8:20-27), Ashraf disclosed a method performed by UE #610 for training, running, validating, and testing an ML model for positioning (see Ashraf Fig. 6, 16:61-65). A UE determines a positioning measurement based on received reference signals (see Ashraf 10:7-24) that include sidelink positioning reference signals (see Ashraf 10:51-67) [i.e. “first measurement data”]. UE positioning is also performed by a reference device, e.g., positioning reference unit (PRU), that has known positioning measurements based on certain reference signals (see Ashraf 11:10-30) [i.e. “second measurement data”]. A positioning ML model is trained, tested, and validated using non-overlapping data sets (see Ashraf Fig. 2, 11:40-65). A UE receives a first measurement report including at least one positioning measurement (see Ashraf Fig. 3, 12:64-13:2) [i.e. “the first measurement data is associated with positioning information”] and also receives reference-positioning related information (e.g., ground truth) that is to be used for testing and/or validating the ML model (see Ashraf Fig. 3, 13:2-9) [i.e. “the second measurement data associated with monitoring performance information”]. The information from the received measurement report is fed into the ML model to obtain an estimated position and then the output is used in combination with the received ground truth information to determine the performance or accuracy of the ML model (see Ashraf Fig. 3, 13:9-25). The performance determination is then sent to a network node (see Ashraf Fig. 3, 13:23-37).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hasegawa and Ashraf to further describe the types of measurement data used in ML models for positioning. Including Ashraf’s teachings would improve positioning accuracy (see Ashraf 11:31-39) and also ensure that a sufficient accuracy is maintained over time as the UE’s position changes (see Ashraf 12:14-28).
Regarding claim 27, Hasegawa-Ashraf disclosed the network entity of claim 26, wherein the at least one processor is further configured to:
receive, from the UE, a capabilities message (see Hasegawa Fig. 5 #505, [00117 #3]: WTRU sends capabilities message to network before receiving the configuration information in #507 and #509) that indicates (examiner notes that this claim describes a list that ends with “or both”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) reference signal measuring capabilities at the UE (see Hasegawa [00162]: capability information sent by the WTRU includes, e.g., [00162 third bullet]: types of supported reference signals), reporting capabilities at the UE, or both, wherein the configuration message is sent based on receiving the capabilities message (see Hasegawa Fig. 5, [00117]: network sends the configuration information in #507 and #509 after the network receives WTRU capabilities message in #505).
Regarding claim 28, Hasegawa-Ashraf disclosed the network entity of claim 26, wherein the configuration message includes (examiner notes that this claim describes a list that ends with “or a combination thereof”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described) first configuration information associated with the first set of reference signals (see Hasegawa [00176]: WTRU receives configuration information from network that specifies multiple sets of PRS parameters; ML model is trained using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS, e.g. “PRS set A2 contains a PRS configuration with more symbols and/or bandwidth than PRS set A1”), second configuration information associated with the second set of reference signals, measurement prioritization information, reporting configuration information, or a combination thereof.
Regarding claim 29, Hasegawa-Ashraf disclosed the network entity of claim 28, wherein the first configuration information indicates a first set of time and frequency resources allocated to the first set of reference signals, and wherein the second configuration information indicates a second set of time and frequency resources allocated to the second set of reference signals (see Hasegawa [00176]: WTRU receives configuration information from network that specifies multiple sets of PRS parameters; ML model is trained using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS, e.g. “PRS set A2 contains a PRS configuration with more symbols and/or bandwidth than PRS set A1”).
Regarding claim 30, Hasegawa-Ashraf disclosed the network entity of claim 26, wherein (examiner notes that this claim describes a list that ends with “or a combination thereof”; as such, examiner interprets this claim according to an “or” structure such that at least one item is described):
the second set of reference signals are associated with a larger bandwidth than the first set of reference signals (see Hasegawa [00176]: ML model is trained using PRS set A1 and PRS set A2 such that each set corresponds to different numbers of time/frequency resources used for PRS, e.g. “PRS set A2 contains a PRS configuration with more symbols and/or bandwidth than PRS set A1”);
the second set of reference signals are associated with a different periodicity than the first set of reference signals;
the second set of TRPs include one or more different TRPs than the first set of TRPs;
the second set of TRPs include more TRPs than the first set of TRPs;
the second set of reference signals at least partially overlap with the first set of reference signals in time, frequency, or both;
the second set of reference signals are transmitted at a higher transmit (TX) power than the first set of reference signals;
the second set of reference signals are associated with a different physical frequency layer (PFL) mapping than the first set of reference signals;
or a combination thereof.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Angela Widhalm de Rodriguez whose telephone number is (571)272-1035. The examiner can normally be reached M-F: 6am-2:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor can be reached at (571)272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGELA WIDHALM DE RODRIGUEZ/Examiner, Art Unit 2443