Prosecution Insights
Last updated: April 19, 2026
Application No. 18/501,802

FRAMEWORK FOR AGNOSTICIZING POSITIONING MEASUREMENT REPORTS

Non-Final OA — §102, §103
Filed: Nov 03, 2023
Examiner: WU, ALEXANDER XIUYE
Art Unit: 2642
Tech Center: 2600 — Communications
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 5 (5 currently pending, across all art units)

Statute-Specific Performance

§103: 83.3% (+43.3% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
Tech Center average shown as estimate • Based on career data from 0 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement submitted on June 3, 2024 has been considered by the Examiner and made of record in the application file.

Claim Objections

Claim 9 is objected to because of the following informalities: The word “reconstruction” in “to reconstruct initially collected samples reconstruction when reporting a two dimensional or three dimensional location” makes the claim unclear and should be removed. Appropriate correction is required.

Claim Rejections – 35 USC 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 13, and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sundararajan et al. (US 20220046385 A1).

Consider claim 1, Sundararajan et al. show and disclose an apparatus comprising: at least one processor (see Figure 3A, processing system 332); and at least one memory storing instructions (Figure 3A depicts a reference UE 302 which incorporates a processing system 332 and memory 340. Functionality of the UE “may be implemented by processor and memory component(s) of the UE 302 (e.g., by execution of appropriate code and/or by appropriate configuration of processor components)” (see Figure 3A, paragraph 0140). It follows that said code would be stored in the memory, and is equivalent to instructions.)
that, when executed by the at least one processor, cause the apparatus at least to perform: receive from a network node, a message comprising a code configuration to configure collected samples measurements for reporting (in Figure 9, at process block 910, “UE 302 obtains at least one neural network function configured to facilitate positioning measurement feature processing at the UE… In some designs, the at least one neural network function may be received from a network entity” (see paragraph 0202). Said neural network function is equivalent to a code configuration, and is described as being “used to filter or process positioning measurement data into the respective positioning measurement feature(s)”. At process block 940, “UE 302 reports the processed set of positioning measurement features to a network component” (see paragraph 0205)); generate a code from the collected samples measurements in a predefined format according to the code configuration (at process block 930, “UE 302 processes the positioning measurement data into a respective set of positioning measurement features based on the at least one neural network function” (see paragraph 0204)); and report the generated code to the network node (referring back to paragraph 0205, “UE 302 reports the processed set of positioning measurement features to a network component”).

Consider claim 2, and as applied to claim 1 above, Sundararajan et al. further show and disclose an apparatus: wherein the generated code from the collected samples measurements is obtained by compressing representations of the collected samples into the predetermined format defined by the code configurations (“[a]s used herein, a positioning measurement ‘feature’ is a processed (e.g., compressed) representation of raw positioning measurement data.
In some designs, processing (e.g., or refining or compressing) of raw positioning measurement data into respective positioning measurement feature(s) may be implemented for various reasons” (see paragraph 0200)).

Consider claim 3, and as applied to claim 1 above, Sundararajan et al. further show and disclose an apparatus: wherein the collected samples are collected via a plurality of receiving antennas of the apparatus (“the positioning measurement data may include channel estimate information, such as a PDP (e.g., measured on one antenna or beam or across multiple antennas or beams)” (see paragraph 0203)).

Consider claim 4, and as applied to claim 1 above, Sundararajan et al. further show and disclose an apparatus: wherein the collected samples comprises reference signals that are used to estimate a location of the apparatus (“the positioning measurement data may be obtained by performing a set of positioning measurements on a reference signal for positioning (e.g., PRS, etc.)” (see paragraph 0203)).

Consider claim 5, and as applied to claim 4 above, Sundararajan et al. further show and disclose an apparatus: wherein the reference signals comprise positioning reference signals or sounding reference signals (PRS/SRS) (as seen above, “performing a set of positioning measurements on a reference signal for positioning (e.g., PRS, etc.)” (see paragraph 0203)).

Consider claim 13, and as applied to claim 1 above, Sundararajan et al.
further show and disclose an apparatus: wherein the apparatus comprises one of: a user terminal device (Figure 2B, UE 204 “may be referred to interchangeably as…a ‘user terminal’ or UT” (see paragraph 0094)), a transmit/receive point or a base station (“A base station… may be alternatively referred to as an access point (AP), a network node, a NodeB, an evolved NodeB (eNB), a New Radio (NR) Node B (also referred to as a gNB or gNodeB), etc.” (see paragraph 0095); referring again to Figure 2B, “Either gNB 222 or eNB 224 may communicate with UEs 204” (see paragraph 0016)), and the network node comprises a location management function (LMF) (referring again to Figure 2B, “Another optional aspect may include a LMF 270, which may be in communication with the NGC 260 to provide location assistance for UEs 204” (see paragraph 0120), where the NGC (next generation core) is one embodiment of a core network (see paragraph 0099)).

Consider claim 14, and as applied to claim 1 above, Sundararajan et al. further show and disclose an apparatus: wherein the collected sample measurements are isolated by transmit/receive point prior to generating the code (“UE-assisted positioning techniques can be used, whereby UE-measured data is reported to a network entity (e.g., location server 230, LMF 270, etc.)” (see paragraph 0169). Said location server may interface with the UE via a base station, where a “‘base station’ may refer to a single physical transmission-reception point (TRP)” (see paragraph 0096). It is implied that said UE-measured data is reported in isolation, and is transmitted via the TRP prior to positioning measurement feature processing).

Claims 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pezeshki et al. (US 20210195462 A1).

Consider claim 15, Pezeshki et al.
further show and disclose an apparatus comprising: at least one processor; and at least one memory storing instructions (“an apparatus for wireless communication by a base station…generally includes a memory, and one or more processors coupled to the memory, the one or more processors and the memory being configured to transmit, to a UE, a configuration to be used for compressing one or more measurements” (see paragraph 0010). It follows that said configuration would be stored in the memory, and is equivalent to instructions) that, when executed by the at least one processor, cause the apparatus at least to perform: send to a second apparatus in a radio network, a message comprising a code configuration (“the one or more processors and the memory being configured to transmit, to a UE, a configuration” (see paragraph 0010). “In certain aspects, the configuration may be transmitted using radio resource control (RRC) signaling” (see Figure 3, paragraph 0049)), wherein the code configuration configures the second apparatus to report collected samples measurements (“a configuration to be used for compressing one or more measurements corresponding to at least one reference signal using an AI encoder” (see paragraphs 0010, 0049, and 0059; a codeword with the compressed version of the measurements is received by the BS)); and receive from the second apparatus, a code generated in a predefined format from the collected samples measurements, according to the code configuration (“receive a codeword having a compressed version of the one or more measurements, the compressed version of the one or more measurements being in accordance with the configuration” (see paragraphs 0010 and 0050)).

Consider claim 16, and as applied to claim 15 above, Pezeshki et al.
further show and disclose an apparatus wherein the generated code from the measurements is obtained by compressing representations of collected samples into the predetermined format defined by the code configurations (“the BS 110a includes a feedback manager 112. The feedback manager 112 may be configured to indicate a configuration for compression of one or more measurements to a UE… the UE 120a includes a feedback manager 122. The feedback manager 122 may be configured to compress the one or more measurements for feedback to the BS in accordance with a configuration indicated by the BS” (see Figure 1, paragraph 0036)).

Consider claim 17, and as applied to claim 16 above, Pezeshki et al. further show and disclose wherein the apparatus is caused to de-quantize and decompress the generated code to re-construct the collected samples that are representative of a network environment of the second apparatus (“the BS may derive (i.e., de-quantize) communication parameters based on the codeword and include the AI decoder 520 having one or more AI modules 522 to decompress the codeword 612 and generate a decompressed codeword. The decompressed codeword may be used to calculate the one or more communication parameters” (see paragraph 0070). Said communication parameters may include “channel quality information (CQI), precoding matrix indicator (PMI), rank indicator (RI), reference signal received power (RSRP), or any combination thereof” (see paragraph 0050), and as such can be equated to the network environment of the UE).

Consider claim 18, and as applied to claim 17 above, Pezeshki et al. further show and disclose wherein the apparatus is configured to decompress the generated code using an auto-encoder (“an AI module (e.g., autoencoder) may be used…the BS may decompress the feedback from the UE using an AI module” (see paragraph 0030)).

Consider claim 19, and as applied to claim 18 above, Pezeshki et al.
further show and disclose wherein the auto-encoder is trained using machine learning-based training based on a direct output of the auto-encoder (“the [auto]encoder may be a fully-connected artificial neural network (ANN)…The autoencoder learns a lower-dimensional representation of data through a training process. This training may be performed using forward propagation and backpropagation…During training, an error between the input and the output may be determined, and each weight's contribution to the error may be determined. The weights may be adjusted accordingly using gradient descent to facilitate training of the autoencoder” (see paragraph 0029)).

Claim Rejections – 35 USC 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Sundararajan et al. (US 20220046385 A1) in view of Yokote (US 6430600 B1).

Consider claim 6, and as applied to claim 1 above, Sundararajan et al. fail to disclose that the code configuration consists of information elements (IE) within the message defining the predetermined format of the generated code, comprising: a size, shape, and entry type. In the same field of endeavor, Yokote discloses a data processing device wherein the code configuration consists of information elements (IE) within the message defining the predetermined format of the generated code, comprising: a size, shape, and entry type (Figure 11 depicts a feature structure for executable code which defines the format of said code. Examples of the contents of the feature structure include size, layout, and data format. The latter two may reasonably be equated to shape and entry type, respectively (see Col. 13, lines 11-54)). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al. by incorporating a system of information subcategories as disclosed by Yokote in order to simplify handling by making important properties of the code readily accessible.
Consider claim 8, and as applied to claim 6 above, Sundararajan et al. fail to disclose that the entry type of the generated code is defined at least by one or more of: a flag type for reporting, a quantizer type, a compression type and a label type. In the same field of endeavor, Yokote discloses a data processing device wherein the entry type of the generated code is defined by a compression type (further examples of the contents of the feature structure include “expandability, compressibility” (see Col. 13, line 34)). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al. by incorporating a specific compression type as disclosed by Yokote in order to signal to the base station the decompression method to apply to the generated code.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sundararajan et al. (US 20220046385 A1) in view of Yokote (US 6430600 B1) and further in view of Singh et al. (US 20230017734 A1).

Consider claim 7, and as applied to claim 6 above, Sundararajan et al., as modified by Yokote, fail to disclose that the size and shape of the generated code is defined by a matrix having M rows and N columns. In the same field of endeavor, Singh et al. disclose a machine learning technique wherein the size and shape of the generated code is defined by a matrix having M rows and N columns (“in some embodiments, the entity-code occurrence data object may be a matrix, where one of the dimensions of matrix is associated with identifiable data entities, and the other dimension is associated with defined occurrence codes” (see paragraph 0078). Said two-dimensional matrix has M rows and N columns, which by definition define its size and shape.)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al., as modified by Yokote, by arranging the code in a matrix format as disclosed by Singh in order to structure the code for simplified handling and access.

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sundararajan et al. (US 20220046385 A1) in view of Yokote (US 6430600 B1), and further in view of Wang et al. (WO 2023/064419 A1).

Consider claim 10, and as applied to claim 8 above, Sundararajan et al. and Yokote fail to disclose wherein the quantizer type causes the apparatus to perform a scalar or vector quantization on the collected samples to obtain the generated code. In the same field of endeavor, Wang et al. disclose a quantized configuration system for machine learning wherein the quantizer type causes a base station to perform a scalar or vector quantization on the collected samples to obtain the generated code (“UE quantization manager 220 receives an indication of a quantization configuration and processes ML configuration information based on the indicated quantization format … 220, for instance, uses different layer-based quantization configurations to recover different layers of a DNN (e.g., uses a first quantization configuration to recover a first ML configuration for first layer of a DNN, a second quantization configuration to recover a second ML configuration for a second layer of the DNN, and so forth)” (see Figure 2, paragraph 0023). The quantization manager additionally “uses the quantization configuration specified by the base station 120 to quantize the gradient information, which is then sent to the base station 120 as quantized ML configuration information” (see paragraph 0024), which in this instance may reasonably be equated to generated code).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al. and modified by Yokote by incorporating a quantization manager as disclosed by Wang et al. in order to select an optimal quantization type for the collected samples.

Consider claim 11, and as applied to claim 10 above, Sundararajan et al. and Yokote fail to disclose wherein the vector quantization comprises selected finite vector points or codewords that are shared when performing compression of the collected samples. In the same field of endeavor, Wang et al. disclose a quantized configuration system for machine learning wherein the vector quantization comprises selected finite vector points or codewords that are shared when performing compression of the collected samples (“the BS quantization manager 270 receives an index value that maps to an entry of a vector quantization codebook and extracts the ML configuration (or the ML configuration update) from the vector quantization codebook” (see paragraph 0033). One illustration of the said index value is as a series of 100 floating-point values, which each correspond to an ML parameter (see paragraph 0011). It is implied in the source that the referenced codebook is shared between the UE and base station). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al. and modified by Yokote by incorporating a vector quantization approach as disclosed by Wang et al. in order to ensure accurate decompression of the generated code by the receiving base station.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sundararajan et al. (US 20220046385 A1) in view of Yokote (US 6430600 B1), and further in view of Wu et al. (WO 2022/227081).
Consider claim 12, and as applied to claim 8 above, Sundararajan et al. and Yokote fail to disclose wherein the compression type causes the apparatus to perform a machine learning (ML) or non-machine learning (non-ML) on the collected samples to obtain the generated code. In the same field of endeavor, Wu et al. disclose a wireless communication technique wherein the compression type causes the apparatus to perform machine learning (ML) or non-machine learning (non-ML) techniques to compress channel information (“CSI reports or related channel information may be compressed or decompressed in accordance with a compression scheme…in some examples, machine learning techniques may be used to support one or more of such compression schemes, which may include training one or more encoders (e.g., an auto encoder), performing operations for encoding information, training one or more decoders (e.g., an auto decoder), performing operations for decoding information, or any combination thereof” (see paragraph 0038). Use of ML techniques is optional and dependent on the compression type: “Although machine learning or neural network techniques may be implemented in some CSI compression schemes, in some examples, there may be a mismatch of a channel used for training and a channel used for inference…one compression scheme may be unsuitable or otherwise less favorable for reducing reporting payload compared to another compression scheme” (see paragraph 0039)). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Sundararajan et al. and modified by Yokote by incorporating an optional ML integration as disclosed by Wu et al. in order to optimize resource usage in the compression of collected samples.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 20210195462 A1) in view of Sundararajan et al. (US 20220046385 A1).
Consider claim 20, and as applied to claim 18 above, Pezeshki et al. show and disclose the claimed invention except wherein the apparatus is further configured to estimate a location of the second apparatus based on an output of the autoencoder using a trained location estimation circuit to provide an output location estimate. In the same field of endeavor, Sundararajan et al. disclose estimating the location of a UE based on an output of the autoencoder using a trained location estimation circuit to provide an output location estimate (as is the case in Pezeshki, an AI module performs the function of and can be equated to an autoencoder. “[A] machine learning module (e.g., implemented by a processing system, such as processors 332, 384, or 394) may be configured to iteratively analyze training input data (e.g., measurements of reference signals to/from various target UEs) and to associate this training input data with an output data set (e.g., a set of possible or likely candidate locations of the various target UEs), thereby enabling later determination of the same output data set when presented with similar input data (e.g., from other target UEs at the same or similar location)” (see paragraph 0255)). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Pezeshki et al. by incorporating an apparatus configured to estimate the location of a UE based on the output of an autoencoder using a trained location estimation circuit as disclosed by Sundararajan in order to iteratively improve the accuracy of the output location estimate.

Allowable Subject Matter

Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Consider claim 9, the best prior art found during the examination of the present application, Sundararajan et al. and Pezeshki et al., fail to disclose specifically the limitation of “wherein the label type is associated with the generated code to train a block of the generated code to reconstruct initially collected samples reconstruction when reporting a two dimensional or three dimensional location, or a line of sight (LOS) probability vector of the apparatus.”

Conclusion

Any inquiry concerning this communication from the examiner should be directed to ALEXANDER WU whose telephone number is (571) 272-3360. The examiner can normally be reached Monday - Friday, 8:30 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RAFAEL PEREZ-GUTIERREZ, can be reached at (571) 272-7915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ALEXANDER WU/
Examiner, Art Unit 2642

/Rafael Pérez-Gutiérrez/
Supervisory Patent Examiner, Art Unit 2642

December 23, 2025

Prosecution Timeline

Nov 03, 2023
Application Filed
Dec 23, 2025
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
