Prosecution Insights
Last updated: April 19, 2026
Application No. 18/573,783

METHODS AND APPARATUSES FOR PROVISIONING A WIRELESS DEVICE WITH PREDICTION INFORMATION

Non-Final OA §103
Filed
Dec 22, 2023
Examiner
SHARMA, POONAM
Art Unit
2472
Tech Center
2400 — Computer Networks
Assignee
Telefonaktiebolaget LM Ericsson (publ)
OA Round
1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average); 14 granted / 16 resolved; +29.5% vs TC avg
Interview Lift: strong, +15.4% across resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 23 currently pending
Total Applications: 39 across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 16 resolved cases

Office Action

§103
DETAILED ACTION

This office action is in response to the application filing received December 7, 2023. The preliminary amendment received December 22, 2023 has been entered. The Application Data Sheet received on December 22, 2023 has been considered. Claims 1-15, 24, 26, 43 and 62-63 are pending. Claims 1-14, 24, 26, 43 and 62 are amended. Claim 63 is newly added. Claims 16-23, 25, 27-42 and 44-61 are cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements received December 22, 2023 and January 04, 2024 have been considered.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided. The abstract of the disclosure is objected to because it contains legal phraseology (i.e., "disclosure" in the first and last sentence). A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-7, 13, 24, 26, 43 and 62-63 are rejected under 35 U.S.C. 103 as being unpatentable over Isaksson et al., WO 2019172813 A1 (hereinafter Isaksson), in view of Valentina et al., IEEE, Learning the CSI Denoising and Feedback Without Supervision (hereinafter Valentina).

Regarding claims 1, 24 and 62, Isaksson teaches a computer-implemented method, performed by a first network node, for provisioning a wireless device with prediction information for allowing the wireless device to predict a radio signal measurement between the wireless device and a base station (see Fig. 3, elements 12, 10, 305; Pg.
19, lines 21-24, e.g., The method actions performed by the radio network node such as the first radio network node 12 or the stand-alone network node 15 for managing communication in the wireless communications network; see Pg. 20, lines 13-15, e.g., Action 605. The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model; see Pg. 16, line 32 – Pg. 17, line 9, e.g., A machine learning algorithm that can cope well with time series, e.g. LSTM, or Gated recurrent unit (GRU) may be used as the model. … The output may be the predicted next beam or beams with the predicted strongest RSRP with some conditional probabilities, … o Next beam quality (for example RSRP), strongest cell (on secondary carrier) stronger than the serving cell (to replace A3/A5) …)), the method comprising: receiving, from the wireless device, wireless device capability information (see Pg. 20, lines 7-9, e.g., Action 603. The radio network node may receive the capability indication from the wireless communication device 10, wherein the capability indication indicates the capability, of the wireless communication device 10, of supporting one or more models.); obtaining, based on the wireless device capability information, a model for predicting a radio signal measurement (see Pg. 20, lines 10-12, e.g., Action 604. The radio network node may select the model out of the number of models based on the capability, of the wireless communication device 10, of supporting one or more models and/or a position of the wireless communication device 10; see Pg. 17, lines 13-17, e.g., The model may be a neural network wherein inputs of the neural network are based on time series such as a machine learning algorithm that copes well with time series such as a recurrent neural network (RNN) model, e.g. a LSTM or a Gated recurrent unit (GRU). Other types of models may alternatively be used.); and transmitting an indication of the prediction information to the wireless device (see Pg. 20, lines 13-15, e.g., Action 605. The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model.). However, Isaksson does not explicitly teach wherein the prediction information comprises a denoising autoencoder and at least one candidate noising pattern for predicting a radio signal measurement. Valentina teaches wherein the prediction information comprises a denoising autoencoder and at least one candidate noising pattern for predicting a radio signal measurement (see Fig. 1, e.g., Training of the autoencoder at the BS; see Section II, SYSTEM ARCHITECTURE, e.g., First, an autoencoder g(f(·)) is trained at the BS based solely on noisy UL data ~H_UL, which is supposed to be collected during the standard UL operation of the BS in advance. Here f denotes the encoder with its parameters and g denotes the decoder with its parameters, see Fig. 1a.; see Section V, SIMULATIONS, e.g., The autoencoder neural network has been implemented with Tensorflow [26] and single-precision has been utilized for the training.
We consider mini-batches of 64 samples and we use the Adam optimization algorithm [27] to tune the hyperparameters _ and _ of the neural network; see Section I, e.g., Thus, the core idea of our scheme is that the neural network encoder trained on UL data at the BS can be applied to DL data without any further adaptation, from any mobile device to which the encoder is offloaded. Training on the MT is no longer necessary at all, making it possible to quickly update the encoder on the MT at any time and place, e.g., when moving from one cell to another or for different locations in the cell.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified model and the one or more trained model parameters for the model of Isaksson to incorporate the teachings of Valentina to include denoising autoencoder and at least one candidate noising pattern. Doing so would facilitate in achieving unsupervised training of the autoencoder conducted at the BS solely based on noisy UL training data and reducing immense effort of collecting DL data at the BS to enable the training as suggested by Valentina (see Section I, e.g., a novel method which is again based on the autoencoding concept. However, motivated by the results in [17], the unsupervised training of the autoencoder is conducted at the BS solely based on noisy UL training data, thus avoiding the issue that collecting DL data at the BS to enable the training otherwise would require an immense effort with respect to the overall network traffic.). Regarding claim 2, Isaksson as combined with Valentina teaches the limitations of Claim 1. Isaksson further teaches, wherein the wireless device is preconfigured with a plurality of denoising autoencoders and associated candidate noising patterns, and wherein transmitting the indication of the prediction information to the wireless device comprises transmitting control information, based on the wireless device capability, to the wireless device (see Pg. 20, lines 13-24, e.g., The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model), wherein the control information is configured to identify a denoising autoencoder and associated noising pattern preconfigured at the wireless device (Pg. 13, lines 23-25, e.g., Action 306. The wireless communication device 10 thus selects the model based on the indicator e.g. from a list with indexed models already preconfigured at the wireless communication device 10.). Regarding claim 3, Isaksson as combined with Valentina teaches the limitations of Claim 1. Isaksson further teaches, wherein transmitting the indication of the prediction information to the wireless device comprises transmitting the prediction information to the wireless device (see Pg. 20, lines 13-24, e.g., Action 605. The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model; The radio network node may provide different indicators and/or different one or more trained model parameters for different beams, cells or regions of cells. The one or more trained model parameters may comprise one or more weights for the model indicated by the indicator.). Regarding claim 4, Isaksson as combined with Valentina teaches the limitations of Claim 3. 
Isaksson further teaches wherein transmitting the prediction information to the wireless device comprises transmitting a unicast transmission, broadcast transmission or a multicast transmission (see Pg. 13, lines 11-12, e.g., wireless communication devices in a certain area may use the same model (note: implicitly implied); see Pg. 11, lines 4-11, e.g., The radio network node instructs the wireless communication device 10 to forward the indicator indicating the model to a list of wireless communication devices. This can be done at the same time, or at a later time. A benefit of using D2D communication is that possibly less energy can be used to transmit to a wireless communication device that is close-by, rather than using the wireless communications network. Also, wireless communication devices that are close to each other are likely to benefit from using similar or the same model.).

Regarding claim 5, Isaksson as combined with Valentina teaches the limitations of Claim 4. Isaksson further teaches wherein transmitting the prediction information to the wireless device comprises transmitting a broadcast transmission or a multicast transmission and wherein the denoising autoencoder and/or the at least one candidate noising pattern are obtained based on wireless device capability information received from a plurality of wireless devices (see Pg. 9, lines 17-26, e.g., the model is trained at a radio network node, such as the first radio network node 12, or a standalone network node 15, with received data from one or more wireless communication devices. The received data may comprise: … capability of supporting one or more models of the one or more wireless communication devices;).

Regarding claim 6, Isaksson as combined with Valentina teaches the limitations of Claim 3. Isaksson does not teach but Valentina teaches wherein obtaining the denoising autoencoder and/or the at least one candidate noising pattern comprises training the denoising autoencoder to predict a radio signal measurement based on each of the at least one candidate noising pattern (see Section IV, AUTOENCODER, e.g., An autoencoder is a neural network that is trained in an unsupervised fashion to reconstruct its input. It has been introduced in [21] and its purpose is to find a compact representation of the data. The autoencoder consists of two parts: an encoder function f with its hyperparameters and a decoder function g with its hyperparameters... The encoder and decoder architectures are described in Tables I and II; see Section V, SIMULATIONS, e.g., The autoencoder neural network has been implemented with Tensorflow [26] and single-precision has been utilized for the training. We consider mini-batches of 64 samples and we use the Adam optimization algorithm [27] to tune the hyperparameters of the neural network. The weights are updated in order to minimize an empirical risk function based on the least-squares loss function … The UL-trained encoder is then used at each MT to generate the codeword z_DL from the noisy DL CSI estimate ~H_DL. The codeword is then sent to the BS, which uses the UL-trained decoder to obtain a clean version of the DL CSI, Ĥ_DL ≈ H_DL.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the model and the one or more trained model parameters for the model of Isaksson to incorporate the teachings of Valentina to include training the denoising autoencoder to predict a radio signal measurement based on each of the at least one candidate noising pattern. Doing so would facilitate in achieving unsupervised training of the autoencoder conducted at the BS solely based on noisy UL training data and reducing immense effort of collecting DL data at the BS to enable the training as suggested by Valentina (see Section I, e.g., a novel method which is again based on the autoencoding concept. However, motivated by the results in [17], the unsupervised training of the autoencoder is conducted at the BS solely based on noisy UL training data, thus avoiding the issue that collecting DL data at the BS to enable the training otherwise would require an immense effort with respect to the overall network traffic.).

Regarding claim 7, Isaksson as combined with Valentina teaches the limitations of Claim 6. Isaksson does not teach but Valentina teaches wherein training the denoising autoencoder to predict a radio signal measurement based on each of the at least one candidate noising pattern comprises: obtaining a plurality of sets of radio signal measurements between the wireless device and the base station; applying each of a plurality of initial noising patterns to one or more of the plurality of sets of radio signal measurements to generate a noised dataset (see Section III, DATASET DESCRIPTION, Section IV, AUTOENCODER and Section V, SIMULATIONS, e.g., The encoder and decoder architectures are described in Tables I and II. Therefore, the dataset is split into three groups of 48 × 10³, 6 × 10³ and 6 × 10³ samples, where each sample consists of the three matrices H_UL, H_DL-120, and H_DL-480 ∈ C^(N_a×N_c). Note again that although the training of the autoencoder at the BS is based solely on the UL CSI, it still covers the distribution of the unseen DL CSI as well, since the UL and DL data ultimately follow the same propagation scenario, cf. [17]. With respect to testing, only the test set of the two DL CSI datasets (DL@120, 480) will be used.), wherein each of the plurality of initial noising patterns masks at least one radio signal measurement when applied to a set of radio signal measurements; training the denoising autoencoder to predict the at least one masked radio signal measurement for each initial noising pattern (see Section III, DATASET DESCRIPTION, Section IV, AUTOENCODER and Section V, SIMULATIONS, e.g., The encoder and decoder architectures are described in Tables I and II.); determining a respective reconstruction error of the denoising autoencoder associated for each respective initial noising pattern; and identifying the at least one candidate noising pattern from the plurality of initial noising patterns based on the respective reconstruction errors associated with the plurality of initial noising patterns (see Section V, SIMULATIONS, e.g., The autoencoder neural network has been implemented with Tensorflow [26] and single-precision has been utilized for the training. We consider mini-batches of 64 samples and we use the Adam optimization algorithm [27] to tune the hyperparameters of the neural network. The weights are updated in order to minimize an empirical risk function based on the least-squares loss function; After the training, we measure the quality of the unsupervised denoising in terms of normalized mean square error and cosine similarity, where … their corresponding versions at the decoder output; We further compare the results achieved with the UL-trained autoencoder with two methods that serve as a reference.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the model and the one or more trained model parameters for the model of Isaksson to incorporate the teachings of Valentina to include training the denoising autoencoder to predict a radio signal measurement based on each of the at least one candidate noising pattern. Doing so would facilitate in achieving unsupervised training of the autoencoder conducted at the BS solely based on noisy UL training data and reducing immense effort of collecting DL data at the BS to enable the training as suggested by Valentina (see Section I, e.g., a novel method which is again based on the autoencoding concept. However, motivated by the results in [17], the unsupervised training of the autoencoder is conducted at the BS solely based on noisy UL training data, thus avoiding the issue that collecting DL data at the BS to enable the training otherwise would require an immense effort with respect to the overall network traffic.).

Regarding claim 13, Isaksson as combined with Valentina teaches the limitations of Claim 1. Isaksson further teaches wherein obtaining the denoising autoencoder and/or the at least one candidate noising pattern comprises selecting, based on the wireless device capability information, the denoising autoencoder and/or the at least one candidate noising pattern from a plurality of pre-trained denoising autoencoders each associated with at least one predetermined candidate noising pattern (see Pg. 20, lines 10-12, e.g., Action 604. The radio network node may select the model out of the number of models based on the capability, of the wireless communication device 10, of supporting one or more models and/or a position of the wireless communication device 10.).

Regarding claims 26, 43 and 63, Isaksson teaches a computer-implemented method, performed by a wireless device, for obtaining prediction information for allowing the wireless device to predict a radio signal measurement between the wireless device and a base station (see Fig. 3, elements 12, 10, 305; Pg. 19, lines 21-24, e.g., The method actions performed by the radio network node such as the first radio network node 12 or the stand-alone network node 15 for managing communication in the wireless communications network; see Pg. 20, lines 13-15, e.g., Action 605. The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model; see Pg. 16, line 32 – Pg. 17, line 9, e.g., A machine learning algorithm that can cope well with time series, e.g. LSTM, or Gated recurrent unit (GRU) may be used as the model. … The output may be the predicted next beam or beams with the predicted strongest RSRP with some conditional probabilities, … o Next beam quality (for example RSRP), strongest cell (on secondary carrier) stronger than the serving cell (to replace A3/A5) …)), the method comprising: transmitting to a first network node, wireless device capability information (see Pg.
20, lines 7-9, e.g., Action 603. The radio network node may receive the capability indication from the wireless communication device 10, wherein the capability indication indicates the capability, of the wireless communication device 10, of supporting one or more models.); and receiving from the first network node, an indication of the prediction information (see Pg. 20, lines 13-15, e.g., Action 605. The radio network node further provides to the wireless communication device 10, the indicator indicating the model and the one or more trained model parameters for the model.), wherein the prediction information is obtained based on the wireless device capability information (see Pg. 20, lines 10-12, e.g., Action 604. The radio network node may select the model out of the number of models based on the capability, of the wireless communication device 10, of supporting one or more models and/or a position of the wireless communication device 10; see Pg. 17, lines 13-17, e.g., The model may be a neural network wherein inputs of the neural network are based on time series such as a machine learning algorithm that copes well with time series such as a recurrent neural network (RNN) model e.g. a LSTM or a Gated recurrent unit (GRU). Other types of models may alternatively be used.); and however, it does not explicitly teach wherein the prediction information comprises a denoising autoencoder and at least one candidate noising pattern and a denoising autoencoder and at least one candidate noising pattern for predicting a radio signal measurement. Valentina teaches, wherein the prediction information comprises a denoising autoencoder and at least one candidate noising pattern, and for predicting a radio signal measurement (see Valentina et al. Fig. 1, Training of the autoencoder at the BS, see Section II, SYSTEM ARCHITECTURE, e.g., First, an autoencoder g_¹ f _¹_ºº is trained at the BS based solely on noisy UL data ˜HUL, which is supposed to be collected during the standard UL operation of the BS in advance. The f _ denotes the encoder with parameters _ and g_ denotes the decoder with parameters _, see Fig. 1a.; see Section V, SIMULATIONS, e.g., The autoencoder neural network has been implemented with Tensorflow [26] and single-precision has been utilized for the training. We consider mini-batches of 64 samples and we use the Adam optimization algorithm [27] to tune the hyperparameters _ and _ of the neural network; see Section I, e.g., Thus, the core idea of our scheme is that the neural network encoder trained on UL data at the BS can be applied to DL data without any further adaptation, from any mobile device to which the encoder is offloaded. Training on the MT is no longer necessary at all, making it possible to quickly update the encoder on the MT at any time and place, e.g., when moving from one cell to another or for different locations in the cell.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified model and the one or more trained model parameters for the model of Isaksson to incorporate the teachings of Valentina to include denoising autoencoder and at least one candidate noising pattern. 
Doing so would facilitate in achieving unsupervised training of the autoencoder conducted at the BS solely based on noisy UL training data and reducing immense effort of collecting DL data at the BS to enable the training as suggested by Valentina (see Section I, e.g., a novel method which is again based on the autoencoding concept. However, motivated by the results in [17], the unsupervised training of the autoencoder is conducted at the BS solely based on noisy UL training data, thus avoiding the issue that collecting DL data at the BS to enable the training otherwise would require an immense effort with respect to the overall network traffic.). Claim(s) 8-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Isaksson in view of Valentina et al., and in further view of Scheideler et al., US 12039455 B2, (hereinafter Scheideler). Regarding claim 8, Isaksson as combined with Valentina teaches the limitations of Claim 7. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein the at least one masked radio signal measurement for each initial noising pattern is masked with a defined value that is the same for each of the plurality of initial noising patterns (see Col. 9, lines 44-53, e.g., While the function by P(S) provides the statement “is real” vs. “is fake”, the probability function P(k,S) (see 512) provides a probability that certain sample belongs to category k 516. Similar to an adversarial process that generates improved fake samples and discriminates better between real and fake, the generator aims for an evenly distributed result of P(k,S) (meaning the each P(k,S) for k=1, . . . , K has the same value P(k,S)=1/K), the discriminator aims to have P(k,S) for one k close to 1 while for all other categories P(k,S) is close to 0.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include each initial noising pattern to be masked with a defined value that is the same for each of the plurality of initial noising patterns. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Regarding claim 9, Isaksson as combined with Valentina teaches the limitations of Claim 7. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein the step of identifying the at least one candidate noising patterns comprises: for each initial noising pattern: determining whether the reconstruction error associated with the initial noising pattern meets an accuracy criterion (see Col. 11, lines 6-15, e.g., Following best practice, the decrease in the reconstruction error (i.e., cost function) is observed, i.e., the training is stopped if, for a predefined number of additional samples that were fed in after the last measurement of the reconstruction error, the reconstruction error has not decreased by a predefined value. 
The predefined number of samples and/or the predefined value can be relative (e.g., 10% more samples, decrease by 1%) or absolute (e.g., 10,000 samples, decrease by 0.01).); and responsive to the reconstruction error associated with the initial noising pattern meeting the accuracy criterion, identifying the initial noising pattern as one of the at least one candidate noising patterns (see Col. 12, lines 17-34, e.g., Thus, FIG. 10 shows also how a detection range of the auto-encoder fits into the context of FIG. 9 and how a detection of samples can be performed. In the evaluation process, malicious codes samples by category 1002 and benevolent software code pattern 1004 are fed into the CV auto-encoder 1006. As the input is known, a statistical analysis is created comprising the ratings and confidence levels 1008 for true positives with the correct category, true positives with the wrong category, false positives by category, true negatives, and false negatives. As stated in stage 3 (stage 3, 206, FIG. 2), in the implementation using one instance of a GAN(G,D) with its generator and discriminator is fully developed and utilizing the fully developed generator for AE training, improving the AE is only possible in a limited way. In case the AE does not detect the required rate of true positives in the right category, the AE will be fed with more synthetic samples of the said category to reduce the reconstruction error.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include reconstruction error associated with the initial noising pattern meets an accuracy criterion. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Regarding claim 10, Isaksson as combined with Valentina teaches the limitations of Claim 7. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein the prediction information further comprises the respective reconstruction errors associated with each respective candidate noising pattern (see Col. 11, lines 16-25, e.g., In another embodiment, the model trained with the synthetic data is a conditional variational auto-encoder (CVAE) 710. This allows the model to get input based on both, the category of the malware (inserted in the latent space) and the sample itself. The advantage of this method is that also during inference (i.e., in detection mode) an incoming sample can be tested against different categories of malware at the same time and for each category the result will give a (slightly) different reconstruction resulting from the decoder, hence, a different reconstruction error.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include prediction information further comprises the respective reconstruction errors associated with each respective candidate noising pattern. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Regarding claim 11, Isaksson as combined with Valentina teaches the limitations of Claim 7. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein training the denoising autoencoder to predict the at least one masked radio signal measurement for each initial noising pattern comprises: transmitting the denoising autoencoder and the plurality of initial noising patterns to the wireless device, wherein the wireless device is configured to apply the plurality of initial noising patterns to a second plurality of sets of radio signal measurements between the wireless device and the base station to generate a second noised dataset (see Col. 14, lines 42-53, e.g., the processor 1302 also to generate—e.g., by a generator or by triggering a generator unit 1310—additional synthetic code patterns by feeding additional random code samples to the trained first machine-learning system, and train a second machine-learning system 1312, thereby building a second machine-learning model using the generated additional synthetic code patterns and benevolent code patterns as training data until the second machine-learning system is enabled to predict malicious ones of the additional synthetic code patterns and a related categories for the additional synthetic code patterns.), and train the denoising autoencoder to predict at least one masked radio signal measurement for each initial noising pattern from the second noised dataset; receiving, from the wireless device, an updated denoising autoencoder and updated respective reconstruction errors of the denoising autoencoder associated for each respective initial noising pattern based on the wireless device training (see Col. 
14, lines 54-65, e.g., Furthermore, stored program code portions, that, if executed by the processor 1302, enable the processor 1302 to determine—e.g., by a statistical determinator 1314 or by triggering it—a statistical distribution of the predicted malicious ones of the additional synthetic code patterns and related categories for the additional synthetic code patterns, and determine—e.g., using a quality determinator 1316 or triggering it—a quality value of the training of the second machine-learning system, wherein the quality value is a function of an ability of the second machine-learning system to predict correctly that one of the aggregated categorized known malware patterns is malware of a certain category.); and identifying the at least one candidate noising pattern from the plurality of initial noising patterns based on the updated respective reconstruction errors associated with the plurality of initial noising patterns (see Col. 11, lines 26-32, e.g., This additional dimension (i.e., the malware category k) can be used to enhance the precision of the malware detection, by having a tighter threshold for positive detection on the class that has the lowest reconstruction error. In addition, as a byproduct, the CVAE can identify the malware class (again, as the class that will produce the smallest reconstruction error).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include training the denoising autoencoder to predict at least one masked radio signal measurement for each initial noising pattern from the second noised dataset. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Regarding claim 12, Isaksson as combined with Valentina teaches the limitations of Claim 1. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein each candidate noising pattern is a unique configuration to mask one or more radio signal measurements in a set of radio signal measurements (see Col. 4, lines 11-13, e.g., The term ‘additional synthetic code patterns’ may denote synthetic code patterns being generated with a completely trained first machine-learning system; see Col. 4, lines 35-38, e.g., The term ‘statistical distribution’ may denote here the number, or relative number, of predicted malware, i.e., synthetically generated code patterns, in each of the predefined categories.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include where noising pattern is a unique configuration to mask one or more radio signal measurements. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 
1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Regarding claim 12, Isaksson as combined with Valentina teaches the limitations of Claim 1. Isaksson as improved by Valentina does not teach but Scheideler teaches, wherein the step of obtaining the at least one candidate noising pattern further comprises obtaining the at least one candidate noising pattern based on predicted power saving performances for the wireless device associated with each candidate noising pattern (Depending on the kind and extent of the changes of the internal and external environment, the existing ML model is discarded (fresh start) or is augmented with new input data. A fresh start has the advantage that the model is leaner as it does not contain malware patterns which are no longer applicable, for instance when a vulnerability does not exist anymore (e.g. after OS patching) and the corresponding malware cannot do any harm. Training the existing model with additional code requires less computational effort and is advisable when the environment is susceptible for a new kind or version of malware; see Col. 6, lines 26-38, e.g., In stage 3 206, the evaluation and deployment stage, the performance and gaps in the machine-learning model is determined 222 and a decision is made whether a desired confidence level is already achieved, 224. If that is not the case—case “N”—the process returns back to stage 2, in particular the curation of the samples, 218. In case the desired confidence level is achieved—case “Y”—the machine-learning model is deployed, 226. Last but not least, in stage 4, 208, the re-evaluation stage, criteria for a re-evaluation are checked, 228, in intervals, and if that is the case—case “Y”—the process returns back to see whether there are new populated point durability database entries.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified improved model with denoising autoencoder and at least one candidate noising pattern of Isaksson as combined with Valentina to include candidate noising pattern further comprises obtaining the at least one candidate noising pattern based on predicted power saving performances. Doing so would facilitate in achieving determining a quality value of the training as suggested by Scheideler (see Col. 1, lines 44-51, e.g., The exemplary embodiments may further include training a second machine-learning system by using benevolent code patterns and the generated additional synthetic code patterns as training data. The exemplary embodiments may include determining a statistical distribution of predicted malicious code patterns and related categories, and determining a quality value of the training of the second machine-learning system.). Allowable Subject Matter Claim 15, is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
WO2022050461A1 issued to LIM et al. US 20240077584 A1 issued to YOO et al. Any inquiry concerning this communication or earlier communications from the examiner should be directed to POONAM SHARMA whose telephone number is (571)272-6579. The examiner can normally be reached Monday thru 8:30-5:30 pm, ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Bates can be reached at (571) 272-3980. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /POONAM SHARMA/Examiner, Art Unit 2472 /KEVIN T BATES/Supervisory Patent Examiner, Art Unit 2472
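
For orientation, the rejected independent claims describe a network node that receives device capability information, obtains a denoising autoencoder together with candidate noising patterns (masks over radio signal measurements), screens the patterns by reconstruction error, and transmits an indication of the prediction information to the device. The sketch below only illustrates that flow under stated assumptions: the toy linear autoencoder, the synthetic RSRP data, the threshold, and every helper name (make_measurements, train_dae, select_candidate_patterns, provision) are hypothetical and are not taken from the application or from the Isaksson, Valentina, or Scheideler references.

```python
# Illustrative only: a toy linear "denoising autoencoder" plus masking noising
# patterns, selected by reconstruction error and provisioned to a device.
# Shapes, thresholds and helper names are assumptions, not from the application.
import numpy as np

rng = np.random.default_rng(0)
N_MEAS, HIDDEN = 8, 4      # measurements per set / bottleneck width (assumed)
LR, EPOCHS = 0.05, 2000
FILL = 0.0                 # one defined masking value for every pattern (cf. claim 8)
ERR_THRESHOLD = 0.2        # accuracy criterion on reconstruction error (assumed)

def make_measurements(n_sets):
    """Synthetic stand-in for sets of radio signal measurements (e.g. beam RSRPs in dBm)."""
    base = -80.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, N_MEAS))
    shadow = rng.normal(0.0, 6.0, size=(n_sets, 1))   # shared per-set shadow fading
    return base + shadow + rng.normal(0.0, 1.0, size=(n_sets, N_MEAS))

def train_dae(x, mask):
    """Train a tiny linear autoencoder to reconstruct the measurements hidden by `mask`."""
    x_in = x.copy()
    x_in[:, mask] = FILL                               # apply the noising pattern
    w1 = rng.normal(0.0, 0.1, (N_MEAS, HIDDEN))
    w2 = rng.normal(0.0, 0.1, (HIDDEN, N_MEAS))
    for _ in range(EPOCHS):                            # plain gradient descent on MSE
        h = x_in @ w1
        g = 2.0 * ((h @ w2) - x) / x.shape[0]
        w1 -= LR * (x_in.T @ (g @ w2.T))
        w2 -= LR * (h.T @ g)
    x_hat = (x_in @ w1) @ w2
    err = float(np.mean((x_hat[:, mask] - x[:, mask]) ** 2))   # error on masked entries only
    return (w1, w2), err

def select_candidate_patterns(data, patterns):
    """Keep only the noising patterns whose reconstruction error meets the criterion (cf. claim 9)."""
    kept = []
    for idx, mask in enumerate(patterns):
        model, err = train_dae(data, mask)
        if err <= ERR_THRESHOLD:
            kept.append({"index": idx, "mask": mask, "error": err, "model": model})
    return kept

def provision(capability, candidates):
    """Build the 'indication of the prediction information' for one device (cf. claims 1-2, 13)."""
    if not capability.get("supports_dae") or not candidates:
        return None                                    # fall back to ordinary measurements
    best = max(candidates, key=lambda c: int(c["mask"].sum()))   # skip the most measurements
    return {"autoencoder_index": best["index"],
            "noising_pattern": best["mask"].astype(int).tolist(),
            "reconstruction_error": best["error"]}     # reporting the error loosely mirrors claim 10

if __name__ == "__main__":
    raw = make_measurements(256)
    data = (raw - raw.mean(axis=0)) / raw.std(axis=0)  # normalise per measurement index
    patterns = [np.eye(N_MEAS, dtype=bool)[1],         # mask beam 1
                np.eye(N_MEAS, dtype=bool)[4],         # mask beam 4
                np.isin(np.arange(N_MEAS), [2, 5])]    # mask beams 2 and 5
    print(provision({"supports_dae": True}, select_candidate_patterns(data, patterns)))
```

Masked entries are filled with a single fixed value for every pattern, and the kept patterns carry their reconstruction errors with them, which is roughly the shape of the limitations the Office Action addresses with Scheideler; the cited references themselves train on CSI matrices rather than on masked RSRP vectors.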

Prosecution Timeline

Dec 22, 2023: Application Filed
Feb 12, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588071: APPARATUS AND METHOD OF WIRELESS COMMUNICATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12581344: MULTIPLE ACCESS POINT WIFI SOUNDING (2y 5m to grant; granted Mar 17, 2026)
Patent 12581518: METHOD AND DEVICE FOR RESOURCE DETERMINATION (2y 5m to grant; granted Mar 17, 2026)
Patent 12574278: PHASE NOISE SUPPRESSION METHOD AND RELATED APPARATUS (2y 5m to grant; granted Mar 10, 2026)
Patent 12563511: PHYSICAL LAYER SYNCHRONIZATION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+15.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
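
The note above says the grant probability is derived from the career allow rate. Below is a minimal sketch of how the headline figures could be reproduced from the counts shown in this report (14 granted of 16 resolved, +15.4% interview lift); the additive lift and the cap at 99% are assumptions about the dashboard's arithmetic, not documented behaviour.

```python
# Hypothetical reconstruction of this panel's arithmetic; the additive interview
# lift and the 99% cap are assumptions about the dashboard, not documented behaviour.
granted, resolved = 14, 16                 # career counts shown under Examiner Intelligence
interview_lift = 0.154                     # +15.4% lift reported for this examiner

allow_rate = granted / resolved            # 0.875
grant_probability = round(allow_rate * 100)                            # -> 88
with_interview = min(round((allow_rate + interview_lift) * 100), 99)   # -> 99

print(f"{grant_probability}% base, {with_interview}% with interview")
```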
