Prosecution Insights
Last updated: April 19, 2026
Application No. 18/229,886

METHOD FOR TEACHING AN ELECTRONIC COMPUTING DEVICE, A COMPUTER PROGRAM PRODUCT, A COMPUTER-READABLE STORAGE MEDIUM AS WELL AS AN ELECTRONIC COMPUTING DEVICE

Non-Final Office Action: §102, §103, §112
Filed: Aug 03, 2023
Examiner: ALABI, OLUWATOSIN O
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Siemens Aktiengesellschaft
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 8m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (116 granted / 199 resolved; +3.3% vs TC avg)
Interview Lift: +26.3% on resolved cases with interview
Typical Timeline: 3y 8m average prosecution; 45 applications currently pending
Career History: 244 total applications across all art units

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 199 resolved cases.

Office Action

Rejections under 35 U.S.C. §§ 102, 103, and 112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit of prior-filed EP Application No. 22189108.8, filed August 5, 2022, which is acknowledged.

Drawings

The drawings were received on 08/03/2023. These drawings are acceptable.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/03/2023 has been considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the claim recites the limitation “training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm,” which is indefinite because partial derivatives are taken of equations, and training data is fitted not to an algorithm but to a model; an algorithm is considered a set of steps/processes, and it is unclear how a partial derivative is taken of a set of steps/processes. The limitation thus renders the claim indefinite, as one of ordinary skill in the art would be unable to ascertain the intended scope.
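For technical context on the dispute: in curve-fitting practice, partial derivatives are taken of a parametric model's output with respect to its parameters, which is one plausible reading of the claim language. A minimal sketch, using the log-distance path loss model cited later in this action (all names and values here are illustrative, not taken from the application):

```python
import math

# Log-distance path loss model: P(d) = A - 10*n*log10(d),
# where A is the RSS at 1 m and n is the path loss exponent.
# The "partial derivatives" are of the model output with respect
# to its parameters (A, n) -- not of an algorithm's steps, which
# is the distinction the rejection draws.

def rss(A, n, d):
    return A - 10.0 * n * math.log10(d)

def drss_dA(A, n, d):
    return 1.0  # dP/dA

def drss_dn(A, n, d):
    return -10.0 * math.log10(d)  # dP/dn

# Finite-difference check of the analytic partials at d = 10 m:
A, n, d, h = -40.0, 2.0, 10.0, 1e-6
num_dA = (rss(A + h, n, d) - rss(A - h, n, d)) / (2 * h)
num_dn = (rss(A, n + h, d) - rss(A, n - h, d)) / (2 * h)
assert abs(num_dA - drss_dA(A, n, d)) < 1e-6
assert abs(num_dn - drss_dn(A, n, d)) < 1e-4
```

Such parameter partials are exactly what populate the columns of a Jacobian matrix in least-squares fitting, which may explain the examiner's broad construction of "any modeling process."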
Examiner interprets any modeling process as within the scope of the claim limitation. Claims 2-14 do not resolve the deficiencies noted above for claim 1 and thus are rejected under the same rationale.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-8 and 12-14 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Wirola et al. (US 20180164400, hereinafter ‘Ola’).

Regarding independent claim 1, Ola teaches a method for teaching an electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment, the method comprising: (in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model [a machine learning algorithm for predicting a position-based propagation of radio waves in an environment]. The radio propagation model may for instance be the log-distance path loss model. The log-distance path loss model predicts the path loss of a signal over distance [predicting a position-based propagation of radio waves in an environment]. This model is particularly advantageous for indoor areas or densely populated areas.
However, other radio propagation models or combinations thereof may also be used. As an example, the first location-specific function may be defined as…) providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment; (in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model [providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment]. The radio propagation model may for instance be the log-distance path loss model [providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment]. The log-distance path loss model predicts the path loss of a signal over distance. This model is particularly advantageous for indoor areas or densely populated areas. However, other radio propagation models or combinations thereof may also be used. As an example, the first location-specific function may be defined as… [0037] According to an exemplary embodiment of the method according to the first aspect, a-priori information [the mathematical model comprises at least a physical model for the position-based propagation in the environment] is imposed on at least a part of the model parameters associated with the first location-specific function. 
For instance, this may prevent the model parameters from taking physically impossible values [the mathematical model comprises at least a physical model for the position-based propagation in the environment]…) generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain; (in [0134] Embodiments of different aspects of the invention may therefore: [0135] decrease the requirements of the learning data [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain] collection campaigns, as less data can be collected in less parts of the building; [0136] enable interpolation and extrapolation in 3D indoor scenario with floor losses; [0137] perform interpolation and extrapolation jointly without a need to make a difference between them; [0138] consider the shadowing in the estimation process; [0139] allows the estimates of the location-specific quantity (e.g. RSS estimates) to follow a locally varying shadowing function, and thus, to mimic building floor plans and different physical obstacles in the signal path; [0140] deliver uncertainties for the estimates of the location-specific quantity (e.g. RSS estimates); [0141] improve the subsequent positioning processes resulting in better position estimates; take measurement covariance into account; [0142] not require pre-processing or whitening of learning data regarding nearby measurement (e.g. mapping multiple RSS measurements into a single fingerprint)[ generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain]; [0143] not require heuristic parameters in the design; [0144] benefit from the fact that the methods may be based on the assumptions on the real life physical radio propagation environment.) 
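The Gauss-Newton fitting described in Ola's [0107]-[0113], which the examiner maps to the "partial derivative" limitation, can be sketched as follows. This is a simplified illustration with synthetic data: the a-priori terms m_0 and Σ_0 of Ola's Equation (8) and the measurement covariance Σ_RSS are omitted, and all variable names are ours, not Ola's.

```python
import numpy as np

# Synthetic "training data": RSS measurements P_i at distances d_i,
# generated from a ground-truth log-distance model plus noise.
rng = np.random.default_rng(0)
A_true, n_true = -40.0, 2.5
d = rng.uniform(1.0, 50.0, size=200)
P = A_true - 10.0 * n_true * np.log10(d) + rng.normal(0.0, 1.0, size=200)

# Gauss-Newton: iterate m <- m + (J^T J)^{-1} J^T r, where J holds the
# analytic partial derivatives of the model w.r.t. the parameters.
m = np.array([-30.0, 1.0])  # initial guess for (A, n)
for _ in range(20):
    pred = m[0] - 10.0 * m[1] * np.log10(d)
    r = P - pred                                 # residual vector
    J = np.column_stack([np.ones_like(d),        # dP/dA
                         -10.0 * np.log10(d)])   # dP/dn
    m = m + np.linalg.solve(J.T @ J, J.T @ r)

# m should now be close to (A_true, n_true).
```

Ola's full procedure additionally regularizes each update with parameter priors and weights the residuals by the RSS covariance, per Equation (8).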
training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm; (in [0107] Non-linear least square problems can be solved iteratively by using the Gauss-Newton algorithm for example in [1]. This requires formulation of the Jacobian matrix J of size N×6. More generally speaking, if the number of model parameters for the first location-specific function is M, the Jacobian matrix has the size N×M. The Jacobian matrix defines the analytical partial derivatives [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm] of the trend function in Equation (6) as … [0108] The above partial derivatives may be derived analytically [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm]. Then, it is possible to run the below algorithm. However, it is also possible to evaluate the Jacobian matrix numerically during the estimation process. The Gauss-Newton algorithm can be described as follows: [0109] 1. Define the a-priori estimates for each model parameter and substitute those in the vector m_0 of length 6. Furthermore, approximate the parameter covariance matrix Σ_0 of size 6×6 to describe parameter uncertainties and dependencies. This corresponds to step 403 in chart 400 in FIG. 4. [0110] 2. Define the a-priori values to be the first parameter estimate as m_est = m_0 and Σ_est = Σ_0. [0111] 3. Compute partial derivatives [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm] in the Jacobian matrix J by using the current estimate values m_est. [0112] 4. Compute the next step for the parameter estimate update as

Δm = −(Σ_0^−1 + J^T Σ_RSS J)^−1 (Σ_0^−1 (m_est − m_0) + J^T Σ_RSS r),  (8)

[0113] where Σ_RSS is a covariance matrix of the RSS measurements (discussed later on), and r is the vector of error between the current model trend estimate G_est(x_i) and the measurements P_i(x_i) defined by…)

and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in [0051] According to an exemplary embodiment of the method according to the first aspect, the second location-specific function is a weighted linear combination of the residuals of the obtained measurement data on the location-specific quantity. For instance, the determination of the model parameters of the second location-specific function is based on a method of at least one of interpolation and extrapolation for which the interpolated/extrapolated values are modeled by a Gaussian process governed by prior covariances. For instance, the determination of the model parameters of the second location-specific function is based on Kriging, also called Gaussian process regression. As an example, the estimated second location-specific function may be written as

Ψ_est(x) = Σ_{i=1…N} w_i(x, x_i) Ψ(x_i)

[and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], where w_i can be seen as the weighting factor, which is for instance based on the covariance function [and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]...; And in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model. The radio propagation model may for instance be the log-distance path loss model.
The log-distance path loss model predicts the path loss of a signal over distance [and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]. This model is particularly advantageous for indoor areas or densely populated areas. However, other radio propagation models or combinations thereof may also be used. As an example, the first location-specific function may be defined as … [0044] Preferably, the determination of the model parameters associated with the first location-specific function comprises a non-linear least squares algorithm. For example the algorithm may be a weighted non-linear least square algorithm [and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]. For instance, the algorithm accounts for the covariance of the measurement data. As an example, the determination of the model parameters associated with the first location-specific function is based on a Gauss-Newton algorithm.) Regarding claim 2, the rejection of claim 1 is incorporated and Ola further teaches the method according to claim 1, wherein training data additionally comprises second training data of propagation measurements, and training the machine learning algorithm additionally comprises fitting the second training data to weighted sums of evaluations of the machine learning algorithm. (in [0044] Preferably, the determination of the model parameters associated with the first location-specific function comprises a non-linear least squares algorithm. For example the algorithm may be a weighted non-linear least square algorithm [wherein training data additionally comprises second training data of propagation measurements]. For instance, the algorithm accounts for the covariance of the measurement data. 
As an example, the determination of the model parameters associated with the first location-specific function is based on a Gauss-Newton algorithm… [0050] According to an exemplary embodiment of the method according to the first aspect, the determining of the model parameters of the second location-specific function is based on residuals of at least a part of the obtained measurement data on the location-specific quantity [wherein training data additionally comprises second training data of propagation measurements]... Thus, the residuals can be seen as residual measurement data. As an example, the residuals may be defined as … [0051] According to an exemplary embodiment of the method according to the first aspect, the second location-specific function [wherein training data additionally comprises second training data of propagation measurements] is a weighted linear combination of the residuals of the obtained measurement data on the location-specific quantity [wherein training data additionally comprises second training data of propagation measurements]. For instance, the determination of the model parameters of the second location-specific function is based on a method of at least one of interpolation and extrapolation for which the interpolated/extrapolated values are modeled by a Gaussian process governed by prior covariances [training the machine learning algorithm additionally comprises fitting the second training data to weighted sums of evaluations of the machine learning algorithm]. For instance, the determination of the model parameters of the second location-specific function is based on Kriging, also called Gaussian process regression. 
As an example, the estimated second location-specific function may be written as…)

Regarding claim 3, the rejection of claim 1 is incorporated and Ola further teaches the method according to claim 1, wherein the mathematical model comprises at least additionally a transmission power parameter and a free-space parameter of the propagation loss. (in [0039] For instance, the transmission power of the transmitter may have a minimum transmission power in order to allow for a technically sensible use. Thus, the a-priori information may provide a lower limit for respective model parameters [wherein the mathematical model comprises at least additionally a transmission power parameter and a free-space parameter of the propagation loss]. As a further example, the a-priori information may be limited by empirical values. Empirical values may be taken from the literature, for example. For instance, a-priori information in form of literature values may be chosen for model parameters such as a path loss exponent [a transmission power parameter and a free-space parameter of the propagation loss], which depends on the radio propagation environment. [0040] … However, due to (direct) a-priori information available for one or more other model parameters (as explained above) and due to the interconnection of the model parameters via the model, this may indirectly also provide indirect a-priori information on the one or more model parameters, where there is no direct a-priori information available. By using (direct) a-priori information for the other model parameters associated with the first location-specific function, a determination of all model parameters associated with the first location-specific function can be achieved even without (direct) a-priori knowledge on all model parameters...; And in [0101] Assuming that N RSS measurements P_i(x_i) are obtained from locations x_i with i = 0 . . . N−1, the estimation of the unknown parameters can be performed with a non-linear least squares algorithm. However, without providing any a-priori information on the estimated parameters, the estimate values might become senseless with respect to the known physical radio propagation environment and typical radio network parameters. For example, the transmission power of radio transceivers is known to be set at certain level so that some sensible coverage area is achieved, but the maximum values restricted in legislation are not exceeded. Nonetheless, one or more of the following a-priori information on the unknown parameters may be available: [0102] 1. Regarding model parameter A (the RSS at 1 m distance): the model parameter A is highly dependent on the transmission power [transmission power parameter], which is restricted based on the legislation and/or minimum coverage area requirements. If free space path loss is assumed for the first meter, the model parameter A can be estimated [a free-space parameter of the propagation loss] as the effective transmission power minus the 1 meter path loss, for example. [0103] 2. Regarding model parameter n (path loss exponent): n is dependent on the radio propagation environment. Typical values for indoor and outdoor case can be found from the literature, for example. [0104] 3. Regarding the model parameter L_f (floor loss, only required in indoor scenario): L_f is dependent on the building and the floor plan. Typical values can be found from the literature, for example.)

Regarding claim 7, the rejection of claim 1 is incorporated and Ola further teaches the method according to claim 1, wherein for the propagation field at least one physical parameter of the environment is predefined. (in [0028] For instance, the first location-specific function may be based on at least one model parameter. For instance, the second location-specific function may also be based on at least one model parameter.
Preferably, the first and second location-specific functions each depend on multiple model parameters.)

Regarding claim 8, the rejection of claim 7 is incorporated and Ola further teaches the method according to claim 7, wherein the at least one physical parameter defines environment geometry information via transmission coefficients. (in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model. The radio propagation model may for instance be the log-distance path loss model. The log-distance path loss model predicts the path loss of a signal over distance [wherein the at least one physical parameter defines environment geometry information via transmission coefficients]. This model is particularly advantageous for indoor areas or densely populated areas. However, other radio propagation models or combinations thereof may also be used. As an example, the first location-specific function may be defined as… [0036] In this case, the first location-specific function depends on the location x and has three model parameters, wherein A is the path loss or the observed power level at a reference distance (e.g. the power level (e.g. in dB ref 1 mW) at, for example, one meter distance from the transmitter), n is the path loss exponent and x_AP is the location (for instance in three dimensions) [wherein the at least one physical parameter defines environment geometry information via transmission coefficients] of the transmitter transmitting the signal (e.g. an access point or a base station). )

Regarding claim 12, the rejection of claim 1 is incorporated and Ola further teaches a computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 1.
(in [0146] Any presented connection in the described embodiments is to be understood in a way that the involved components are operationally coupled. Thus, the connections can be direct or indirect with any number or combination of intervening elements, and there may be merely a functional relationship between the components [a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 1]. [0147] Further, as used in this text, the term ‘circuitry’ refers to any of the following: [0148] (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) [0149] (b) combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone, to perform various functions) and … [0152] Any of the processors mentioned in this text, in particular but not limited to processors 20 and 30 of FIGS. 2 and 3, could be a processor of any suitable type. Any processor may comprise but is not limited to one or more microprocessors, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAS), one or more controllers, one or more application-specific integrated circuits (ASICS), or one or more computer(s). The relevant structure/hardware has been programmed in such a way to carry out the described function. 
[0153] Moreover, any of the actions described or illustrated herein may be implemented using executable instructions in a general-purpose or special-purpose processor [a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 1] and stored on a computer-readable storage medium (e.g., disk, memory, or the like) to be executed by such a processor. References to ‘computer-readable storage medium’ should be understood to encompass specialized circuits such as FPGAs, ASICs, signal processing devices, and other devices.; And the rejection of claim 1 is incorporated.) Regarding claim 13, the rejection of claim 12 is incorporated and Ola further teaches a computer-readable storage medium comprising at least the computer program product according to claim 12. (in [0146] Any presented connection in the described embodiments is to be understood in a way that the involved components are operationally coupled. Thus, the connections can be direct or indirect with any number or combination of intervening elements, and there may be merely a functional relationship between the components [a computer-readable storage medium comprising at least the computer program product according to claim 12]. [0147] Further, as used in this text, the term ‘circuitry’ refers to any of the following: [0148] (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) [0149] (b) combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone, to perform various functions) and … [0152] Any of the processors mentioned in this text, in particular but not limited to processors 20 and 30 of FIGS. 
2 and 3, could be a processor of any suitable type. Any processor may comprise but is not limited to one or more microprocessors, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAS), one or more controllers, one or more application-specific integrated circuits (ASICS), or one or more computer(s). The relevant structure/hardware has been programmed in such a way to carry out the described function. [0153] Moreover, any of the actions described or illustrated herein may be implemented using executable instructions in a general-purpose or special-purpose processor and stored on a computer-readable storage medium [a computer-readable storage medium comprising at least the computer program product according to claim 12] (e.g., disk, memory, or the like) to be executed by such a processor. References to ‘computer-readable storage medium’ should be understood to encompass specialized circuits such as FPGAs, ASICs, signal processing devices, and other devices.; And the rejection of claim 12 is incorporated.) Regarding claim 14, the rejection of claim 1 is incorporated and Ola further teaches an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one machine learning algorithm, wherein the machine learning algorithm is trained by the method according to claim 1. (in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model [an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one machine learning algorithm, wherein the machine learning algorithm is trained by the method according to claim 1]. 
The radio propagation model may for instance be the log-distance path loss model. The log-distance path loss model predicts the path loss of a signal over distance [wherein the machine learning algorithm is trained by the method according to claim 1]. This model is particularly advantageous for indoor areas or densely populated areas. However, other radio propagation models or combinations thereof may also be used. As an example, the first location-specific function may be defined as…; And in (in [0146] Any presented connection in the described embodiments is to be understood in a way that the involved components are operationally coupled. Thus, the connections can be direct or indirect with any number or combination of intervening elements, and there may be merely a functional relationship between the components [an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one machine learning algorithm…]. [0147] Further, as used in this text, the term ‘circuitry’ refers to any of the following: [0148] (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) [0149] (b) combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone, to perform various functions) and… [0152] Any of the processors mentioned in this text, in particular but not limited to processors 20 and 30 of FIGS. 2 and 3, could be a processor of any suitable type. 
Any processor may comprise but is not limited to one or more microprocessors, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAS), one or more controllers, one or more application-specific integrated circuits (ASICS), or one or more computer(s). The relevant structure/hardware has been programmed in such a way to carry out the described function. [0153] Moreover, any of the actions described or illustrated herein may be implemented using executable instructions in a general-purpose or special-purpose processor [an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one machine learning algorithm…] and stored on a computer-readable storage medium (e.g., disk, memory, or the like) to be executed by such a processor. References to ‘computer-readable storage medium’ should be understood to encompass specialized circuits such as FPGAs, ASICs, signal processing devices, and other devices.; And the rejection of claim 1 is incorporated.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 9-11 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Feinmesser et al. 
(US 10051423 hereinafter ‘Fein’) in view of Omi et al. (US 20240038076, hereinafter ‘Omi’). Regarding independent claim 1, Fein teaches a method for teaching an electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment, the method comprising: (in 1:51-64: In some embodiments, a wireless device may be equipped with a pretrained convolutional neural network (CNN) [a machine learning algorithm for predicting a position-based propagation of radio waves in an environment]. In some embodiments, the wireless device may be configured to perform a coarse time of arrival (TOA) estimation on wireless communications received from a remote device to generate an estimated impulse response. The wireless device may further store a transmission time value associated with the received wireless communications. The estimated impulse response may be input to the CNN, which may calculate a line of sight (LOS) estimate [predicting a position-based propagation of radio waves in an environment] using a pretrained set of CNN parameters. In some embodiments, the wireless device may determine a range between the wireless device and the remote device based on the LOS estimate [predicting a position-based propagation of radio waves in an environment] and the transmission time value.) providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment; (in 1:51-64: In some embodiments, a wireless device may be equipped with a pretrained convolutional neural network (CNN) [providing a mathematical model for the position-based propagation]. In some embodiments, the wireless device may be configured to perform a coarse time of arrival (TOA) estimation on wireless communications received from a remote device to generate an estimated impulse response. 
The wireless device may further store a transmission time value associated with the received wireless communications. The estimated impulse response may be input to the CNN, which may calculate a line of sight (LOS) estimate using a pretrained set of CNN parameters [wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment]. In some embodiments, the wireless device may determine a range between the wireless device and the remote device based on the LOS estimate and the transmission time value; And in 16:45-53: In these embodiments, the CNN that calculates the LOS estimate may receive the multipath environment classifier and calculate the LOS estimate using a set of CNN parameters that correspond to the multipath environment classifier [wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment]. In some embodiments, this may improve the computational burden and accuracy of the LOS calculation, as the CNN parameters may be customized for the particular local environment [wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment].) generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain; (in 16:45-53: In these embodiments, the CNN that calculates the LOS estimate may receive the multipath environment classifier and calculate the LOS estimate using a set of CNN parameters that correspond to the multipath environment classifier. 
In some embodiments, this may improve the computational burden and accuracy of the LOS calculation, as the CNN parameters may be customized for the particular local environment [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain]…; And in 16: 13-22: In some embodiments, several CNN configurations may be created for a plurality of multipath environments or multipath environment classes. As described above, the training of the CNN may be separately performed for each of a plurality of training data sets corresponding to a plurality of different multipath environments and/or multipath environment classes [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain], leading to a plurality of trained CNN configurations for each of the plurality of different multipath environments and/or multipath environment classes…) training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm; (in 11:47-61: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate. The LOS estimate may be compared to the (known) LOS associated with the training data to compute a loss function. In some embodiments, a gradient function may be calculated in the space of weight and bias functions. Backpropagation may be employed, wherein the gradient function is used to adjust the weight and bias functions in such a way as to minimize the loss function [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm]. 
After completion of back propagation, the training process may iteratively repeat in a series of subsequent epochs, wherein each epoch recovers a LOS estimate from training data, performs backpropagation, and adjusts the weight and bias functions to reduce (or minimize) a loss function.) and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in 11:47-61: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate… After completion of back propagation, the training process may iteratively repeat in a series of subsequent epochs, wherein each epoch recovers a LOS estimate from training data [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], performs backpropagation [weighted sum of multiple evaluations of the trained machine learning algorithm], and adjusts the weight and bias functions to reduce (or minimize) a loss function. 7:11-21: The CNN 310 may be preconfigured with a trained set of CNN parameters. In some embodiments, the CNN may be preconfigured with a plurality of sets of CNN parameters, and it may be configured to receive instructions from the processing element that determine which set of CNN parameters to use in a particular calculation or wireless environment. As described in further detail below, the sets of CNN parameters may include trained weight functions […by a weighted sum of multiple evaluations of the trained machine learning algorithm] and bias functions to use in calculating an estimated line-of-sight from an impulse response estimate...; And in 14:6-13: The method may then proceed to run the CNN on the received data and using the preconfigured set of training parameters to calculate a LOS estimate. For example, as illustrated in FIG. 
9, the CNN input may be processed through a series of convolutional filter layers and subsequent dense layers using the pretrained CNN configuration to generate an output LOS estimate [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]...) Fein teaches the use of a neural network as a mathematical model for making estimates and predictions regarding communication signals. One of ordinary skill in the art knows that training neural networks for making predictions involves the use of partial derivatives as part of the gradient/backpropagation algorithms, as noted above, in 11:48-56: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate. The LOS estimate may be compared to the (known) LOS associated with the training data to compute a loss function. In some embodiments, a gradient function may be calculated in the space of weight and bias functions. Backpropagation may be employed, wherein the gradient function […a partial derivative of the machine learning algorithm] is used to adjust the weight and bias functions in such a way as to minimize the loss function. Omi expressly teaches the use of partial derivatives as part of the gradient/backpropagation algorithms, in [0186] Once the received power levels from the symmetric points become the same, the main beam direction is confirmed… FIG. 4b describes, by way of example only but the invention is not so limited, a gradient algorithm that may be used for beam localization and/or main beam centre estimation of the AUT 106… The gradient ascent algorithm for beam localisation and estimation of the main beam centre of an AUT 106 is one of many examples of an iterative feedback algorithm/system, as previously described with reference to FIG. 3 in step 304, that may be used to estimate the main beam centre of the AUT 106….
Gradient ascent may be used for iterative optimization by repeating the formula below until convergence: θ ← θ − α∇J(θ) (Equation (1)), which is equivalent to the componentwise update θ_i ← θ_i − α ∂J(θ)/∂θ_i (Equation (2)), where α is the gradient descent/ascent step size […a partial derivative of the machine learning algorithm]. [0188] … When J(G) is considered as the function of radiation pattern, the partial derivative of Equation (2) […a partial derivative of the machine learning algorithm] is discretely obtained from the measurements from the two points: x_est and x_i. Then the gradient vector is obtained in the plane defined by θ_1,2. By decomposing this vector into the az-el plane, the gradient ascent, namely new beam estimation x_est_new, can be calculated. The gradient ascent for beam localisation methodology/algorithm is provided as follows:… Additionally, Omi teaches, in [0236] The algorithm further comprises a multisensor fusion technique for combining information and/or RF radiation measurement data from different sources. Multisensor fusion may lead to enhanced data authenticity and availability. It can further improve the reliability and robustness, and increase the confidence as well as extend the spatial and temporal coverage. The multisensor fusion technique may include one or more Kalman filters and RF radiation measurement data obtained from the multiple RF sensor modules as the payload of the one or more aircraft.
Different types of fusion architecture may be applied, such as (1) wherein RF radiation measurement data from each RF sensor module is combined and the posteriori state vector is estimated from the fused RF radiation measurement data, (2) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF sensor modules, wherein the posteriori state vectors are centrally fused using a weighted sum [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], and wherein each Kalman filter uses its own posteriori state vector estimation for the prediction stage in the next time step, or (3) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF modules, wherein the posteriori state vectors are centrally fused using a weighted sum and a fused estimation is used as input for the prediction stage in the next time step… Fein and Omi are analogous art because both involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in wireless communications systems using neural networks as disclosed by Omi with the method of developing information processing and modeling techniques in wireless communications systems as disclosed by Fein. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Fein and Omi as noted above; doing so allows for data modeling and a multisensor fusion technique for combining information and/or RF radiation measurement data from different sources (Omi, [0236]).
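The gradient update quoted above from Omi (Equations (1)-(2)) can be sketched numerically as follows. This is an illustration only, not code from either reference: the quadratic loss, step size, iteration count, and finite-difference helper are assumptions chosen for the example.

```python
import numpy as np

def partial_derivative(J, theta, i, h=1e-6):
    """Central finite-difference estimate of dJ/dtheta_i, mirroring the
    componentwise partial derivative in Equation (2)."""
    e = np.zeros_like(theta)
    e[i] = h
    return (J(theta + e) - J(theta - e)) / (2 * h)

def gradient_descent(J, theta, alpha=0.1, steps=200):
    """Repeat theta_i <- theta_i - alpha * dJ/dtheta_i (Equation (2));
    flipping the sign of alpha would give gradient ascent instead."""
    for _ in range(steps):
        grad = np.array([partial_derivative(J, theta, i)
                         for i in range(theta.size)])
        theta = theta - alpha * grad
    return theta

# Toy loss (an assumption for illustration): J(theta) = ||theta - target||^2,
# which is minimized at theta = target.
target = np.array([1.0, -2.0])
J = lambda th: float(np.sum((th - target) ** 2))
theta_hat = gradient_descent(J, np.zeros(2))
```

For this convex toy loss the iterates contract toward the minimizer at a fixed rate, so `theta_hat` lands very close to `target`; in the cited art the same update is applied to a loss over neural-network weight and bias functions rather than to a toy quadratic.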
Regarding claim 9, the rejection of claim 1 is incorporated and Fein in combination with Omi teaches the method according to claim 1, wherein the machine learning algorithm is provided as a neural network. (in 1:51-64: In some embodiments, a wireless device may be equipped with a pretrained convolutional neural network (CNN) [wherein the machine learning algorithm is provided as a neural network]. In some embodiments, the wireless device may be configured to perform a coarse time of arrival (TOA) estimation on wireless communications received from a remote device to generate an estimated impulse response. The wireless device may further store a transmission time value associated with the received wireless communications. The estimated impulse response may be input to the CNN, which may calculate a line of sight (LOS) estimate using a pretrained set of CNN parameters. In some embodiments, the wireless device may determine a range between the wireless device and the remote device based on the LOS estimate and the transmission time value. ) Regarding claim 10, the rejection of claim 1 is incorporated and Fein in combination with Omi teaches a method for using the electronic computing device trained according to claim 1, wherein the position-based propagation in the environment is predicted by optimizing parameters of the neural network by minimizing a loss. (in 11:48-61: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate. The LOS estimate may be compared to the (known) LOS associated with the training data to compute a loss function. In some embodiments, a gradient function may be calculated in the space of weight and bias functions. Backpropagation may be employed, wherein the gradient function is used to adjust the weight and bias functions in such a way as to minimize the loss function. 
After completion of back propagation, the training process may iteratively repeat in a series of subsequent epochs, wherein each epoch recovers a LOS estimate from training data, performs backpropagation, and adjusts the weight and bias functions to reduce (or minimize) a loss function [wherein the position-based propagation in the environment is predicted by optimizing parameters of the neural network by minimizing a loss]. ) Regarding claim 11, the rejection of claim 10 is incorporated and Fein in combination with Omi teaches the method according to claim 10, wherein a propagation field simulation is evaluated by the derivative of the machine learning algorithm (in 11:47-61: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate. The LOS estimate may be compared to the (known) LOS associated with the training data to compute a loss function. In some embodiments, a gradient function may be calculated in the space of weight and bias functions. Backpropagation [wherein a propagation field simulation is evaluated by the derivative of the machine learning algorithm] may be employed, wherein the gradient function is used to adjust the weight and bias functions in such a way as to minimize the loss function. After completion of back propagation, the training process may iteratively repeat in a series of subsequent epochs, wherein each epoch recovers a LOS estimate from training data, performs backpropagation, and adjusts the weight and bias functions to reduce (or minimize) a loss function.) and/or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm. 
(in 11:47-61: In some embodiments, the final layer of the CNN may include a single filter, and the single filter may include the LOS time estimate… After completion of back propagation, the training process may iteratively repeat in a series of subsequent epochs, wherein each epoch recovers a LOS estimate from training data [or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm], performs backpropagation [or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm], and adjusts the weight and bias functions to reduce (or minimize) a loss function. 7:11-21: The CNN 310 may be preconfigured with a trained set of CNN parameters. In some embodiments, the CNN may be preconfigured with a plurality of sets of CNN parameters, and it may be configured to receive instructions from the processing element that determine which set of CNN parameters to use in a particular calculation or wireless environment. As described in further detail below, the sets of CNN parameters may include trained weight functions […by the weighted sum of multiple evaluations of the machine learning algorithm] and bias functions to use in calculating an estimated line-of-sight from an impulse response estimate...; And in 14:6-13: The method may then proceed to run the CNN on the received data and using the preconfigured set of training parameters to calculate a LOS estimate. For example, as illustrated in FIG. 
9, the CNN input may be processed through a series of convolutional filter layers and subsequent dense layers using the pretrained CNN configuration to generate an output LOS estimate [or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm]…) Additionally, Omi expressly teaches the use of partial derivatives as part of the gradient/backpropagation algorithms, in [0186] Once the received power levels from the symmetric points become the same, the main beam direction is confirmed… FIG. 4b describes, by way of example only but the invention is not so limited, a gradient algorithm that may be used for beam localization and/or main beam centre estimation of the AUT 106… The gradient ascent algorithm for beam localisation and estimation of the main beam centre of an AUT 106 is one of many examples of an iterative feedback algorithm/system, as previously described with reference to FIG. 3 in step 304, that may be used to estimate the main beam centre of the AUT 106…. Gradient ascent may be used for iterative optimization by repeating the formula below until convergence: θ ← θ − α∇J(θ) (Equation (1)), which is equivalent to the componentwise update θ_i ← θ_i − α ∂J(θ)/∂θ_i (Equation (2)), where α is the gradient descent/ascent step size [wherein a propagation field simulation is evaluated by the derivative of the machine learning algorithm]. [0188] … When J(G) is considered as the function of radiation pattern, the partial derivative of Equation (2) [wherein a propagation field simulation is evaluated by the derivative of the machine learning algorithm] is discretely obtained from the measurements from the two points: x_est and x_i. Then the gradient vector is obtained in the plane defined by θ_1,2. By decomposing this vector into the az-el plane, the gradient ascent, namely new beam estimation x_est_new, can be calculated.
The gradient ascent for beam localisation methodology/algorithm is provided as follows:… Furthermore, Omi teaches, in [0236] The algorithm further comprises a multisensor fusion technique for combining information and/or RF radiation measurement data from different sources. Multisensor fusion may lead to enhanced data authenticity and availability. It can further improve the reliability and robustness, and increase the confidence as well as extend the spacial and temporal coverage. The multisensor fusion technique may include one or more Kalman filters and RF radiation measurement data obtained from the multiple RF sensor modules as the payload of the one or more aircraft. Different types of fusion architecture may be applied, such as (1) wherein RF radiation measurement data from each RF sensor module is combined and the posteriori state vector is estimated from the fused RF radiation measurement data, (2) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF sensor modules, wherein the posteriori state vectors are centrally fused using a weighted sum [or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm.], and wherein each Kalman filter uses its own posteriori state vector estimation for the prediction stage in the next time step, or (3) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF modules, wherein the posteriori state vectors are centrally fused using a weighted sum and a fused estimation [or the propagation loss is evaluated by the weighted sum of multiple evaluations of the machine learning algorithm] is used as input for the prediction stage in the next time step… It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Fein and Omi for the same reasons disclosed above. 
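The weighted-sum fusion of per-sensor state estimates described in Omi [0236] can be illustrated with a minimal sketch. Neither reference provides this code; the inverse-variance weighting rule and the example numbers are assumptions for illustration, and Omi's architecture fuses Kalman-filter posteriori state vectors, which this sketch abstracts to plain per-sensor estimates.

```python
import numpy as np

def fuse_weighted_sum(estimates, variances):
    """Fuse per-sensor state estimates via a weighted sum.
    Weights are inverse variances (more confident sensors count more),
    normalized so they sum to 1."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w = w / w.sum()
    return np.average(np.asarray(estimates, dtype=float), axis=0, weights=w)

# Three hypothetical sensors estimating the same 2-D state,
# with differing measurement confidence (variances are assumptions).
estimates = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
variances = [0.1, 0.4, 0.4]
fused = fuse_weighted_sum(estimates, variances)
```

Here the two less confident sensors deviate symmetrically from the first, so the fused estimate coincides with the most confident sensor's reading of [1.0, 2.0]; in architectures (2) and (3) quoted from Omi, the fused vector would additionally feed the prediction stage of the next time step.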
Regarding claim 15, the rejection of claim 10 is incorporated and Fein in combination with Omi teaches an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one trained machine learning algorithm, wherein the electronic computing device is configured for performing the method according to claim 10. (in 1:51-64: In some embodiments, a wireless device may be equipped with a pretrained convolutional neural network (CNN) [an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one trained machine learning algorithm, wherein the electronic computing device is configured for performing the method according to claim 10]. In some embodiments, the wireless device may be configured to perform a coarse time of arrival (TOA) estimation on wireless communications received from a remote device to generate an estimated impulse response. The wireless device may further store a transmission time value associated with the received wireless communications. The estimated impulse response may be input to the CNN, which may calculate a line of sight (LOS) estimate using a pretrained set of CNN parameters. In some embodiments, the wireless device may determine a range between the wireless device and the remote device based on the LOS estimate [an electronic computing device for predicting a propagation of radio waves in an environment, comprising at least one trained machine learning algorithm, wherein the electronic computing device is configured for performing the method according to claim 10] and the transmission time value.; The rejection of claim 10 is incorporated.) 
Regarding claim 16, the rejection of claim 10 is incorporated and Fein in combination with Omi teaches a computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 10. (in 1:51-64: In some embodiments, a wireless device may be equipped with a pretrained convolutional neural network (CNN) [a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 10]. In some embodiments, the wireless device may be configured to perform a coarse time of arrival (TOA) estimation on wireless communications received from a remote device to generate an estimated impulse response. The wireless device may further store a transmission time value associated with the received wireless communications. The estimated impulse response may be input to the CNN, which may calculate a line of sight (LOS) estimate using a pretrained set of CNN parameters. In some embodiments, the wireless device may determine a range between the wireless device and the remote device based on the LOS estimate and the transmission time value; And in 3:17-39: Memory Medium—Any of various types of non-transitory computer accessible memory devices or storage devices [a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 10]. 
The term “memory medium” is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; a non-volatile memory such as a Flash, magnetic media, e.g., a hard drive, or optical storage; registers, or other similar types of memory elements, etc… The memory medium may store program instructions (e.g., embodied as computer programs) that may be executed by one or more processors…; The rejection of claim 10 is incorporated.) Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Feinmesser et al. (US 10051423, hereinafter ‘Fein’) in view of Omi et al. (US 20240038076, hereinafter ‘Omi’) in further view of Kaltiokallio et al. (NPL: “A Three-State Received Signal Strength Model for Device-Free Localization”, hereinafter ‘Kal’). Regarding claim 4, the rejection of claim 1 is incorporated. While Fein in combination with Omi teaches the process and system for modeling wireless network communication among wireless devices using neural network models, Fein and Omi do not expressly teach the machine learning process including the method according to claim 1, wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field. Kal does expressly teach the machine learning process including the method according to claim 1, wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field. (pg. 9232: … 3) Shadowing Model: RF signals can diffract, scatter, reflect and attenuate upon contact with the person making it a demanding task to accurately model human-induced RSS changes in shadowing state. However, the modeling effort can be considerably simplified by assuming transmission through the human body as the dominating effect.
In this case, attenuation can be represented by a line integral of the attenuation field along a straight line from TX to RX as visualized in Fig. 3(b). The total attenuation along the line [wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field] [equation image] caused by attenuation field ρ(x,y) as illustrated in Fig. 3(b), can be written as [23, Ch. 3] [equation image] [wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field] where δ(·) is the Dirac delta function. In this paper, the cross section of a human is modeled as an ellipse with uniform electrical properties, i.e., ρ(x, y)=ρ. For such a geometry and properties, the closed form solution of Eq. (19) is where A and B are the semi-minor and semi-major axis of the ellipse and a²(ω)=A² cos²(ω)+B² sin²(ω). The above formulations are closely related to the Radon transform which is widely utilized in computerized tomographic imaging (CTI) [23, Ch. 3]. Kal, Fein and Omi are analogous art because all involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in wireless communications systems using statistical and deterministic models as disclosed by Kal with the method of developing information processing and modeling techniques in wireless communications systems as collectively disclosed by Omi and Fein.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kal, Fein and Omi as noted above; doing so allows for developing models for modeling temporal RSS changes that improve the localization accuracy while the spatial extent of a link’s sensing region is increased (Kal, pg. 9226, right col.). Regarding claim 5, the rejection of claim 1 is incorporated and Kal further teaches the method according to claim 1, wherein a dimensionality of a-the calculation of the propagation loss reduced by using a Radon transformation. (pg. 9232: … 3) Shadowing Model: RF signals can diffract, scatter, reflect and attenuate upon contact with the person making it a demanding task to accurately model human-induced RSS changes in shadowing state. However, the modeling effort can be considerably simplified by assuming transmission through the human body as the dominating effect. In this case, attenuation can be represented by a line integral of the attenuation field along a straight line from TX to RX as visualized in Fig. 3(b) [wherein a dimensionality of a-the calculation of the propagation loss reduced by using a Radon transformation]. The total attenuation along the line [equation image] caused by attenuation field ρ(x,y) as illustrated in Fig. 3(b), can be written as [23, Ch. 3] [equation image] where δ(·) is the Dirac delta function. In this paper, the cross section of a human is modeled as an ellipse with uniform electrical properties, i.e., ρ(x, y)=ρ. For such a geometry and properties, the closed form solution of Eq. (19) is where A and B are the semi-minor and semi-major axis of the ellipse and a²(ω)=A² cos²(ω)+B² sin²(ω) [wherein a dimensionality of a-the calculation of the propagation loss reduced by using a Radon transformation].
The above formulations are closely related to the Radon transform [wherein a dimensionality of a-the calculation of the propagation loss reduced by using a Radon transformation] which is widely utilized in computerized tomographic imaging (CTI) [23, Ch. 3]…. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Kal, Fein and Omi for the same reasons disclosed above. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wirola et al. (US 20180164400, hereinafter ‘Ola’) in view of Frank et al. (US 10090901, hereinafter ‘Frank’). Regarding claim 6, the rejection of claim 1 is incorporated. While Ola teaches the process and system for modeling wireless network communication among wireless devices using neural network models, Ola does not expressly teach the machine learning process including the method according to claim 1, wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field. Frank does expressly teach the machine learning process including the method according to claim 1, wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field. (in 12:9-60: Calculation of the Transmitted Power for General Antenna Arrays is described below. For the purposes of this analysis, consider the more general case in which the number of antenna elements is K. Furthermore, the array elements need not be required to have the same pattern, though typically it is assumed that this is the case. Let the complex vector q(θ,ϕ) of length K denote the antenna patterns for these elements, where as before, ϕ(−π/2≤ϕ≤π/2) and θ(0≤θ≤2π) denote the antenna elevation and azimuth, respectively.
If the array is driven by ideal current sources, the transmitted power is given by [wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field] [equation image] where the matrix Q is defined as [equation image] It can be noted that the Q matrix has the following properties: the dimension of the Q matrix is K×K, where K is the number of antenna elements in the transmitter array [wherein the propagation loss prediction comprises a calculation of integrals over the propagation domain are calculated as line integrals over a propagation field]; and from the definition of the Q matrix, it is apparent that the Q matrix is Hermitian so that Q^H=Q. In general, each PMI is a matrix of dimension K×L, where K is the number of antennas (or antenna ports) in the array, and L is the number of transmission layers. We assume that the elements of the antenna array are coupled so that each vector of the PMI matrix must be scaled to satisfy the same unit energy constraint. Let w denote the precoding vector for a given transmission layer, or equivalently, let w denote any column of the PMI. It then follows that the correction factor needed for this precoding vector is given by the square-root of the inverse of the corresponding transmitted energy, or equivalently, by (w^H Q w)^(−1/2),  (14) where w is the precoding vector and Q is determined by the antenna element patterns and the spacing of the antenna elements.) Frank and Ola are analogous art because both involve developing information processing and modeling techniques using mathematical models and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in wireless communications systems using statistical models as disclosed by Frank with the method of developing information processing and modeling techniques in wireless communications systems as disclosed by Ola. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Frank and Ola as noted above; doing so allows for developing models for optimizing antenna precoder selection with coupled antennas which maximizes a performance metric such as the signal-to-interference plus noise ratio and/or the link throughput, (Frank, 1:26-28 & 2:24-36). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wirola et al. (US 20180164400, hereinafter ‘Ola’) in view of Simonsson et al. (US 20130336274, hereinafter ‘Sim’). Regarding claim 8, the rejection of claim 7 is incorporated and Ola further teaches the method according to claim 7, wherein the at least one physical parameter defines environment geometry information via transmission coefficients. (in [0035] According to an exemplary embodiment of the method according to the first aspect, the first location-specific function is based on a radio propagation model. The radio propagation model may for instance be the log-distance path loss model. The log-distance path loss model predicts the path loss of a signal over distance [wherein the at least one physical parameter defines environment geometry information via transmission coefficients]. This model is particularly advantageous for indoor areas or densely populated areas. However, other radio propagation models or combinations thereof may also be used.
As an example, the first location-specific function may be defined as… [0036] In this case, the first location-specific function depends on the location x and has three model parameters, wherein A is the path loss or the observed power level at a reference distance (e.g. the power level (e.g. in dB ref 1 mW) at, for example, one meter distance from the transmitter), n is the path loss exponent and x_AP is the location (for instance in three dimensions) [wherein the at least one physical parameter defines environment geometry information via transmission coefficients] of the transmitter transmitting the signal (e.g. an access point or a base station). ) Alternately, Sim teaches the use of a criterion called geometry to identify cell-edge transceiver devices, in [0079]: An exemplary criterion to identify cell-edge transceiver devices is called geometry. The geometry G_u of a transceiver device u served by BS0 is given by [equation image] where S is the set of adjacent BSs, TxP is the transmit power [wherein the at least one physical parameter defines environment geometry information via transmission coefficients] of the considered BS, PL is the pathloss from the transceiver device u to the considered BS, and N is the receiver noise power. By subjecting the geometry parameter derived for a particular transceiver device to, for example, a threshold decision, it can be determined whether or not the particular transceiver device is located at a cell edge. Sim and Ola are analogous art because both involve developing information processing and modeling techniques using mathematical models and algorithms.
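Ola's quoted log-distance path loss model ([0035]-[0036]) can be sketched numerically. The first location-specific function itself is elided in the quotation above, so the standard log-distance form PL(x) = A + 10·n·log10(d(x, x_AP)/d0) is used here as an assumption, and the parameter values A, n, and the transmitter location are hypothetical:

```python
import numpy as np

def log_distance_path_loss(x, x_ap, A=40.0, n=3.0, d0=1.0):
    """Standard log-distance path loss (dB) at location x for a transmitter
    at x_ap: A is the observed loss at reference distance d0 (meters), and
    n is the path loss exponent. All parameter values are hypothetical."""
    d = max(np.linalg.norm(np.asarray(x, float) - np.asarray(x_ap, float)), d0)
    return A + 10.0 * n * np.log10(d / d0)

x_ap = (0.0, 0.0, 3.0)                                 # hypothetical transmitter location
near = log_distance_path_loss((1.0, 0.0, 3.0), x_ap)   # d = 1 m  -> A
far = log_distance_path_loss((10.0, 0.0, 3.0), x_ap)   # d = 10 m -> A + 10*n
print(near, far)
```

The two evaluations show the model's defining behavior: loss increases by 10·n dB per decade of distance from the transmitter, which is what makes the exponent n an environment-geometry-dependent parameter.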
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in cellular communication networks using statistical models as disclosed by Sim with the method of developing information processing and modeling techniques in wireless communications systems as disclosed by Ola. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Sim and Ola as noted above; doing so allows for developing models for optimizing efficient inter-cell interference coordination in a heterogeneous communication network, (Sim, 0010 & 0031). Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Vaknin et al. (US 20230063522, hereinafter ‘Vak’) in view of Omi. Regarding independent claim 1, Vak teaches a method for teaching an electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment, the method comprising: (in [0016] Some techniques described herein provide training and usage of a model, such as a machine learning model, to predict a CI indicator, such as a CI KPI. For example, a network node may generate models of a plurality of cellular networks. The network node may simulate radio propagation (e.g., using a predefined model for path loss, antenna coverage, or the like) for a number of UEs of a model of a cellular network, of the models of the plurality of cellular networks.
The network node may define, for one or more pairs of cells of the cellular network, one or more CI indicators based at least in part on the simulated radio propagation… Thus, the benefits of modeling radio propagation using accurate models (such as may be provided by a standards development organization), including accurate prediction of path loss and other radio propagation characteristics in a simulated cellular network, are achieved without incurring the processing resource usage of directly applying such models for determination of CI indicators. For example, by operating the machine learning model for determination of CI indicators to reconfigure or optimize a network, processing resource usage is reduced relative to modeling radio propagation using the models for path loss, antenna coverage, or the like, described above. Thus, delay in on-the-fly network management and planning is reduced and efficiency of resource usage is improved.) providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment; (in [0016] Some techniques described herein provide training and usage of a model, such as a machine learning model [providing a mathematical model for the position-based propagation], to predict a CI indicator, such as a CI KPI. For example, a network node may generate models of a plurality of cellular networks [wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment]. 
The network node may simulate radio propagation (e.g., using a predefined model for path loss, antenna coverage, or the like) for a number of UEs of a model of a cellular network, of the models of the plurality of cellular networks...; And in [0077] A statistical [providing a mathematical model for the position-based propagation] result may include, for example, one or more LOS probabilities (e.g., LOS probabilities for a plurality of UEs associated with a cell pair), one or more path loss values (e.g., path loss values for a plurality of UEs associated with a cell pair), a number or ratio of UEs (associated with a cell pair) experiencing a receive power that satisfies a threshold, or the like. Training set generator 204 may use (503) the statistical results of simulated radio propagation (as determined using the above-described models) to define one or more CI KPIs for each pair of the simulated cells. For example, a CI KPI can be determined as a function of a path loss…; And in [0037] For purpose of illustration only, the following description of prediction unit 205 is provided for an embodiment implementing a machine learning model comprising a deep neural network (DNN) [providing a mathematical model for the position-based propagation]… [0040] DNN block 212 can comprise at least one DNN network comprising a plurality of layers organized in accordance with a DNN architecture... By way of non-limiting example, the layers in DNN can be convolutional, fully connected, locally connected, pooling/subsampling, recurrent, etc.) generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain; (in [0017] In some examples, a network node, when simulating the radio propagation, may generate multiple models of cellular networks (e.g., a large number of models, as described elsewhere herein). These multiple models may have different distributions of cells, different parameters, different densities, or the like. 
The network node may simulate the radio propagation for each of the multiple models, and/or may train the machine learning model based on a training set [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain] derived from simulating the radio propagation for each of the multiple models. Thus, versatility of the machine learning model is improved, and the benefits of applying predefined models for path loss, antenna coverage, or the like to determine radio propagation are achieved while reducing processor usage relative to directly applying such models for reconfiguration or optimization of the network.) training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm; (in [0037] For purpose of illustration only, the following description of prediction unit 205 is provided for an embodiment implementing a machine learning model comprising a deep neural network (DNN)… [0040] DNN block 212 can comprise at least one DNN network comprising a plurality of layers organized in accordance with a DNN architecture... By way of non-limiting example, the layers in DNN can be convolutional, fully connected, locally connected, pooling/subsampling, recurrent, etc.) and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in [0042] In some examples, the weighting and/or threshold values of a DNN can be initially selected prior to training, and can be further iteratively adjusted or modified, which is referred to as training of the DNN. Training may be performed using a training set of data. Training may aim to achieve an improved (e.g., optimal) set of weighting and/or threshold values in the DNN. Training may involve a number of iterations.
After each iteration of training, a difference [obtaining a prediction of a propagation loss] can be determined between an actual output produced by the DNN and a target output identified by the training set of data. The difference can be referred to as an error value. The weighting and/or threshold values may be adjusted based on the difference, such as to reduce the error value [by a weighted sum of multiple evaluations of the trained machine learning algorithm]. Training can be determined to be complete when a cost function indicative of the error value is lower than a predetermined value, or when a limited change (e.g., lower than a threshold) in performance between iterations is achieved [by a weighted sum of multiple evaluations of the trained machine learning algorithm].) While Vak teaches the modeling of radio communication network signals using machine learning models, Vak does not expressly teach the use of a partial derivative as part of the machine learning model. Omi does expressly teach the use of a partial derivative as part of the machine learning model, in [0186] Once the received power levels from the symmetric points become the same, the main beam direction is confirmed… FIG. 4b describes, by way of example only but the invention is not so limited, a gradient algorithm that may be used for beam localization and/or main beam centre estimation of the AUT 106… The gradient ascent algorithm for beam localisation and estimation of the main beam centre of an AUT 106 is one of many examples of an iterative feedback algorithm/system, as previously described with reference to FIG. 3 in step 304, that may be used to estimate the main beam centre of the AUT 106…. Gradient ascent may be used for iterative optimization by repeating the formula below until convergence.
θ ← θ − α∇J(θ)  (Equation (1)), which is the set of component-wise updates θ_i ← θ_i − α ∂J(θ)/∂θ_i  (Equation (2)), where α is the gradient descent/ascent step size […a partial derivative of the machine learning algorithm]. [0188] … When J(G) is considered as the function of radiation pattern, the partial derivative of Equation (2) […a partial derivative of the machine learning algorithm] is discretely obtained from the measurements from the two points: x_est and x_i. Then the gradient vector is obtained in the plane defined by θ_1,2. By decomposing this vector into the az-el plane, the gradient ascent, namely the new beam estimation x_est_new, can be calculated. The gradient ascent for beam localisation methodology/algorithm is provided as follows:… Additionally, Omi teaches, in [0236] The algorithm further comprises a multisensor fusion technique for combining information and/or RF radiation measurement data from different sources. Multisensor fusion may lead to enhanced data authenticity and availability. It can further improve the reliability and robustness, and increase the confidence as well as extend the spacial and temporal coverage. The multisensor fusion technique may include one or more Kalman filters and RF radiation measurement data obtained from the multiple RF sensor modules as the payload of the one or more aircraft.
Different types of fusion architecture may be applied, such as (1) wherein RF radiation measurement data from each RF sensor module is combined and the posteriori state vector is estimated from the fused RF radiation measurement data, (2) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF sensor modules, wherein the posteriori state vectors are centrally fused using a weighted sum [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], and wherein each Kalman filter uses its own posteriori state vector estimation for the prediction stage in the next time step, or (3) wherein the multisensor fusion technique comprises a plurality of Kalman filters equal to the number of RF modules, wherein the posteriori state vectors are centrally fused using a weighted sum and a fused estimation is used as input for the prediction stage in the next time step… Vak and Omi are analogous art because both involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in wireless communications systems using neural networks as disclosed by Omi with the method of developing information processing and modeling techniques in wireless communications systems as disclosed by Vak. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Vak and Omi as noted above; doing so allows for enabling data modeling and a multisensor fusion technique for combining information and/or RF radiation measurement data from different sources, (Omi, 0236). Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Saxon et al.
(US 20220166530, hereinafter ‘Jeff’) in view of Kim et al. (US 20240223407, hereinafter ‘Kim’) in further view of Mohammadjafari et al. (NPL: Machine Learning-Based Radio Coverage Prediction in Urban Environments, hereinafter ‘Mo’). Regarding independent claim 1, Jeff teaches a method for teaching an electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment, the method comprising: (As depicted in Fig. 1 and in [0020] FIG. 1 is an architecture diagram of an example system 100 that can facilitate modeling radio wave propagation […electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment], in accordance with one or more embodiments. More specifically, one or more embodiments can facilitate employing a deep machine learning process to model radio wave propagation based on graphical depictions of geographic areas […electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment]. For purposes of brevity, description of like elements and/or processes employed in other embodiments is omitted. [0021] System 100 can include equipment 150 coupled to neural network 145. As depicted, equipment 150 can include memory 165, processor 160, storage device 170, as well as other components to implement and provide functions for system 100, and other embodiments described herein… [0028] In an additional example, in one or more embodiments, computer executable components 120 can include instructions that, when executed by processor 160, can facilitate performance of operations defining feature map generating component 122. 
In one or more embodiments, feature map generating component 122 can facilitate generating a feature map for a geographic area by employing neural network 145 to analyze information including graphical representation 172, the identified GIS feature, and metadata 174 about graphical representation 172. In some implementations, the generated feature map can be a map depicting estimates of the propagation of signal at locations within the geographic area depicted by graphical representation 172. The generation of feature maps and the configuration and operation of neural networks such as neural network 145 is discussed with FIGS. 2-6 below.) providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment; (in [0039] FIG. 3 is a diagram of a non-limiting graphical representation 300 for facilitating the modeling of signal propagation in a geographic area [a physical model for the position-based propagation in the environment], in accordance with one or more embodiments… [0040] Returning to the discussion of computer-executable components 120, in one or more embodiments, computer executable components 120 can include instructions that, when executed by processor 260, can facilitate performance of operations defining, siting component 124… [0041] One approach described herein can employ types of deep neural network structures [providing a mathematical model for the position-based propagation] to estimate the path loss for a given reference point across a bounding area. The estimation will function on an individual transmitter location that has already been identified and labeled in a graphical representation of the area… [0042] Identify area of impact within graphical representation 300. One or more embodiments can predict outputs across a set area surrounding the location of the emitter, and a bounding region can be defined for this area.
For example, signal point 320 can be a potential site for placement of base station equipment. Based on the feature map generated from the neural network 145 [providing a mathematical model for the position-based propagation], the estimated propagation of a signal from signal point 320 can be generated. ) generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain; (in [0035] In another example, in one or more embodiments, computer executable components 220 can include instructions that, when executed by processor 260, can facilitate performance of operations defining, neural network training component 224. As would be appreciated by one having skill in the relevant art(s), given the description herein, neural network 145 can be a broadly considered to be a function approximator. For example, given a series of training inputs paired with training outputs [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain], neural network 145 is trained to determine a function that can pair the training input values and output values in the same way that was specified by the training data [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain]. [0036] In some implementations, machine learning approaches can be used to select functions for a variety of data such that, when accessed by one or more embodiments described, e.g., train a neural network to link inputs with outputs. Thus, returning to the example implementation of neural network training component 224, in some implementations, training graphical representations can be used to train neural network 145, to be paired with training values 274. 
In this example, training values 274 can be signal propagation values associated with corresponding training graphical representations 272 […training data for the machine learning algorithm comprising a propagation field and/or a propagation domain as signal propagation values associated with graphical representations]…) training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm; ( in [0034] In an example implementation, neural network 145 can comprise a convolutional neural network that can provide results according to a deep machine learning process, e.g., discussed below with FIGS. 3-7... [0035] In another example, in one or more embodiments, computer executable components 220 can include instructions that, when executed by processor 260, can facilitate performance of operations defining, neural network training component 224. As would be appreciated by one having skill in the relevant art(s), given the description herein, neural network 145 can be a broadly considered to be a function approximator. For example, given a series of training inputs paired with training outputs, neural network 145 is trained to determine a function [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm] that can pair the training input values and output values in the same way that was specified by the training data. ) and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in [0034] In an example implementation, neural network 145 can comprise a convolutional neural network that can provide results according to a deep machine learning process, e.g., discussed below with FIGS. 3-7. 
With respect to this convolutional neural network, in one or more embodiments, matrix filtering component 222 can apply a filter to convolute data corresponding to the graphical representation and the identified feature, resulting in a matrix of weighted values, with the filter being based on the propagations of signals in conditions indicated by graphical representations 272. In approach to generating a feature map (e.g., by feature map generating component 122) is to combine features from the matrix of weighted values [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] with the graphical representation, e.g., overlay the features indicated by the weighted values over graphical representation 172… [0036] In some implementations, machine learning approaches can be used to select functions for a variety of data such that, when accessed by one or more embodiments described, e.g., train a neural network to link inputs with outputs.. Once trained (e.g., once machine learning approaches are used to estimate a function [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] linking training graphical representations 272 with training values 274), neural network 145 can be used to facilitate operations by feature map generating component 122… [0037] For example, feature map generating component 122 can provide an input to neural network 145 that is similar to the input data used to train neural network 145, e.g., graphical representation 172 can be similar to training graphical representations 272 used to train neural network 145. 
Based on graphical representation 172, neural network can provide propagation information from training values 174, e.g., by applying the estimated function of neural network 145 [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] to the graphical representation 172 input.) While Jeff teaches a supervised learning process for training a machine learning model to make radio coverage predictions using a convolutional neural network and deep learning techniques, as noted above, Jeff does not expressly teach the training processes including the claimed use of …a partial derivative of the machine learning algorithm … and … a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. Kim does expressly disclose the training processes including the claimed use of …a partial derivative of the machine learning algorithm … (in [0124] Supervised learning may use training data labeled with a correct answer and the unsupervised learning may use training data which is not labeled with a correct answer. That is, for example, in case of supervised learning for data classification, training data may be labeled with a category. The labeled training data may be input to the neural network, and the output (category) of the neural network may be compared with the label of the training data, thereby calculating the error. The calculated error is backpropagated from the neural network backward (that is, from the output layer to the input layer), and the connection weight of each node of each layer of the neural network may be updated according to backpropagation [a partial derivative of the machine learning algorithm]. Change in updated connection weight of each node may be determined according to the learning rate.
Calculation of the neural network for input data and backpropagation of the error may configure a learning cycle (epoch)… ) and … a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in [0129] Referring to FIG. 3, when an input vector x=(x1, x2, . . . , xd) is input, each component is multiplied by a weight (w1, w2, . . . , wd), and all the results are summed [a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]. After that, the entire process of applying an activation function σ(·) is called a perceptron. The huge artificial neural network structure may extend the simplified perceptron structure illustrated in FIG. 3…; And in [0123] Neural network learning is to minimize output error [a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm]. Neural network learning refers to a process of repeatedly inputting training data to a neural network, calculating the error of the output and target of the neural network for the training data, backpropagating the error of the neural network from the output layer of the neural network to an input layer in order to reduce the error and updating the weight of each node of the neural network…. [0125] The learning method may vary according to the feature of data. For example, for the purpose of accurately predicting data transmitted from a transmitter in a receiver in a communication system, learning may be performed using supervised learning rather than unsupervised learning or reinforcement learning. Examiner notes that the back-propagation algorithm obtains a prediction error as a propagation error/loss that is back-propagated to update the weights, which are summed to compute the error/loss value as a result of multiple evaluations through the layers, as depicted in Fig.
7) Kim and Jeff are analogous art because both involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and federated learning in a wireless communication system as disclosed by Kim with the method of developing information processing and modeling techniques in wireless communications systems using machine learning models as disclosed by Jeff. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kim and Jeff as noted above; doing so allows for developing and using machine learning methods that enable specific operation for removing an impact of a channel on each of the first signal and the second signal, (Kim, 0010). Alternatively, Mo teaches and … a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in Sec. II: … Many recent studies focused on machine learning models for propagation prediction. Neural network models have been frequently used for path loss prediction [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], and shown to outperform specific empirical and deterministic models including Okumura-Hata and Egli models …; And in Sec. II(B): … Multi-layer perceptron is a non-linear non-parametric feed forward network composed of an input layer X, one or more hidden layers Z and one output layer Y, as illustrated in Figure 1. Each layer has a number of units, which defines the topology of the network.
Weight values, w_ij, are specified between the nodes i and j in consecutive layers in the network, and the nodes in the hidden layers apply non-linear transformation through activation functions on the weighted linear combination […by a weighted sum of multiple evaluations of the trained machine learning algorithm] of the inputs x1,...,xk… We also consider deep neural networks in our analysis, which are generalizations of MLPs with a large number of hidden layers and hidden units. Deep neural networks have been shown to perform well for various complex problems…) Mo, Kim and Jeff are analogous art because all involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques making machine-learning based radio coverage predictions as disclosed by Mo with the method of developing information processing and modeling techniques in wireless communications systems using machine learning models as collectively disclosed by Kim and Jeff. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Mo, Kim, and Jeff as noted above; doing so allows for developing and using machine learning methods that are highly effective for the coverage prediction task, (Mo, Abstract). Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Saxon et al. (US 20220166530, hereinafter ‘Jeff’) in view of Mohammadjafari et al. (NPL: Machine Learning-Based Radio Coverage Prediction in Urban Environments, hereinafter ‘Mo’).
Regarding independent claim 1, Jeff teaches a method for teaching an electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment, the method comprising: (As depicted in Fig. 1 and in [0020] FIG. 1 is an architecture diagram of an example system 100 that can facilitate modeling radio wave propagation […electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment], in accordance with one or more embodiments. More specifically, one or more embodiments can facilitate employing a deep machine learning process to model radio wave propagation based on graphical depictions of geographic areas […electronic computing device including at least a machine learning algorithm for predicting a position-based propagation of radio waves in an environment]. For purposes of brevity, description of like elements and/or processes employed in other embodiments is omitted. [0021] System 100 can include equipment 150 coupled to neural network 145. As depicted, equipment 150 can include memory 165, processor 160, storage device 170, as well as other components to implement and provide functions for system 100, and other embodiments described herein… [0028] In an additional example, in one or more embodiments, computer executable components 120 can include instructions that, when executed by processor 160, can facilitate performance of operations defining feature map generating component 122. In one or more embodiments, feature map generating component 122 can facilitate generating a feature map for a geographic area by employing neural network 145 to analyze information including graphical representation 172, the identified GIS feature, and metadata 174 about graphical representation 172. 
In some implementations, the generated feature map can be a map depicting estimates of the propagation of signal at locations within the geographic area depicted by graphical representation 172. The generation of feature maps and the configuration and operation of neural networks such as neural network 145 is discussed with FIGS. 2-6 below.) providing a mathematical model for the position-based propagation, wherein the mathematical model comprises at least a physical model for the position-based propagation in the environment; (in [0039] FIG. 3 is a diagram of a non-limiting graphical representation 300 for facilitating the modeling of signal propagation in a geographic area [a physical model for the position-based propagation in the environment], in accordance with one or more embodiments… [0040] Returning to the discussion of computer-executable components 120, in one or more embodiments, computer executable components 120 can include instructions that, when executed by processor 260, can facilitate performance of operations defining siting component 124… [0041] One approach described herein can employ types of deep neural network structures [providing a mathematical model for the position-based propagation] to estimate the path loss for a given reference point across a bounding area. The estimation will function on an individual transmitter location that has already been identified and labeled in a graphical representation of the area… [0042] Identify area of impact within graphical representation 300. One or more embodiments can predict outputs across a set area surrounding the location of the emitter, and a bounding region can be defined for this area. For example, signal point 320 can be a potential site for placement of base station equipment. Based on the feature map generated from the neural network 145 [providing a mathematical model for the position-based propagation], the estimated propagation of a signal from signal point 320 can be generated. 
) generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain; (in [0035] In another example, in one or more embodiments, computer executable components 220 can include instructions that, when executed by processor 260, can facilitate performance of operations defining neural network training component 224. As would be appreciated by one having skill in the relevant art(s), given the description herein, neural network 145 can be broadly considered to be a function approximator. For example, given a series of training inputs paired with training outputs [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain], neural network 145 is trained to determine a function that can pair the training input values and output values in the same way that was specified by the training data [generating training data for the machine learning algorithm comprising a propagation field and/or a propagation domain]. [0036] In some implementations, machine learning approaches can be used to select functions for a variety of data such that, when accessed by one or more embodiments described, e.g., train a neural network to link inputs with outputs. Thus, returning to the example implementation of neural network training component 224, in some implementations, training graphical representations can be used to train neural network 145, to be paired with training values 274. 
In this example, training values 274 can be signal propagation values associated with corresponding training graphical representations 272 […training data for the machine learning algorithm comprising a propagation field and/or a propagation domain as signal propagation values associated with graphical representations]…) training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm; (in [0034] In an example implementation, neural network 145 can comprise a convolutional neural network that can provide results according to a deep machine learning process, e.g., discussed below with FIGS. 3-7... [0035] In another example, in one or more embodiments, computer executable components 220 can include instructions that, when executed by processor 260, can facilitate performance of operations defining neural network training component 224. As would be appreciated by one having skill in the relevant art(s), given the description herein, neural network 145 can be broadly considered to be a function approximator. For example, given a series of training inputs paired with training outputs, neural network 145 is trained to determine a function [training the machine learning algorithm by fitting the training data to a partial derivative of the machine learning algorithm] that can pair the training input values and output values in the same way that was specified by the training data. ) and obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in [0034] In an example implementation, neural network 145 can comprise a convolutional neural network that can provide results according to a deep machine learning process, e.g., discussed below with FIGS. 3-7. 
With respect to this convolutional neural network, in one or more embodiments, matrix filtering component 222 can apply a filter to convolute data corresponding to the graphical representation and the identified feature, resulting in a matrix of weighted values, with the filter being based on the propagations of signals in conditions indicated by graphical representations 272. One approach to generating a feature map (e.g., by feature map generating component 122) is to combine features from the matrix of weighted values [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] with the graphical representation, e.g., overlay the features indicated by the weighted values over graphical representation 172… [0036] In some implementations, machine learning approaches can be used to select functions for a variety of data such that, when accessed by one or more embodiments described, e.g., train a neural network to link inputs with outputs. Once trained (e.g., once machine learning approaches are used to estimate a function [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] linking training graphical representations 272 with training values 274), neural network 145 can be used to facilitate operations by feature map generating component 122… [0037] For example, feature map generating component 122 can provide an input to neural network 145 that is similar to the input data used to train neural network 145, e.g., graphical representation 172 can be similar to training graphical representations 272 used to train neural network 145. 
Based on graphical representation 172, neural network can provide propagation information from training values 174, e.g., by applying the estimated function of neural network 145 [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm] to the graphical representation 172 input.) While Jeff teaches a supervised learning process for training a machine learning model for making radio coverage predictions using a convolutional neural network and deep learning techniques, as noted above, Jeff does not expressly teach the training processes including the claimed use of …a partial derivative of the machine learning algorithm … and … a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. Mo does expressly disclose the training processes including the claimed use of … a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm. (in Sec. II: … Many recent studies focused on machine learning models for propagation prediction. Neural network models have been frequently used for path loss prediction [obtaining a prediction of a propagation loss by a weighted sum of multiple evaluations of the trained machine learning algorithm], and shown to outperform specific empirical and deterministic models including Okumura-Hata and Egli models …; And in Sec. II(B): … Multi-layer perceptron is a non-linear non-parametric feed forward network composed of an input layer X, one or more hidden layers Z and one output layer Y, as illustrated in Figure 1. Each layer has a number of units, which defines the topology of the network. 
Weight values, w_ij, are specified between the nodes i and j in consecutive layers in the network, and the nodes in the hidden layers apply non-linear transformation through activation functions on the weighted linear combination […by a weighted sum of multiple evaluations of the trained machine learning algorithm] of the inputs x_1,...,x_k… We also consider deep neural networks in our analysis, which are generalizations of MLPs with a large number of hidden layers and hidden units. Deep neural networks have been shown to perform well for various complex problems…) Mo and Jeff are analogous art because both involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques making machine-learning based radio coverage predictions as disclosed by Mo with the method of developing information processing and modeling techniques in wireless communications systems using machine learning models as disclosed by Jeff. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Mo and Jeff as noted above; doing so allows for developing and using machine learning methods that are highly effective for the coverage prediction task (Mo, Abstract). While Mo teaches processes for training neural networks, Mo does not expressly teach the training processes including the claimed use of …a partial derivative of the machine learning algorithm … Omi does expressly teach the training processes including the claimed use of …a partial derivative of the machine learning algorithm … in [0186] Once the received power levels from the symmetric points become the same, the main beam direction is confirmed… FIG. 
4b describes, by way of example only but the invention is not so limited, a gradient algorithm that may be used for beam localization and/or main beam centre estimation of the AUT 106… The gradient ascent algorithm for beam localisation and estimation of the main beam centre of an AUT 106 is one of many examples of an iterative feedback algorithm/system, as previously described with reference to FIG. 3 in step 304, that may be used to estimate the main beam centre of the AUT 106…. Gradient ascent may be used for iterative optimization by repeating the formula below until convergence: θ ← θ − α∇J(θ)   Equation (1), which is the set of component-wise updates: θ_i ← θ_i − α ∂J(θ)/∂θ_i   Equation (2), where α is the gradient descent/ascent step size […a partial derivative of the machine learning algorithm]. [0188] … When J(G) is considered as the function of radiation pattern, the partial derivative of Equation (2) […a partial derivative of the machine learning algorithm] is discretely obtained from the measurements from the two points: x_est and x_i. Then the gradient vector is obtained in the plane defined by θ_1,2. By decomposing this vector into the az-el plane, the gradient ascent, namely the new beam estimation x_est_new, can be calculated. The gradient ascent for beam localisation methodology/algorithm is provided as follows:… Omi, Mo and Jeff are analogous art because all involve developing information processing and modeling techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information processing and modeling techniques in wireless communications systems using neural networks as disclosed by Omi with the method of developing information processing and modeling techniques in wireless communications systems as collectively disclosed by Mo and Jeff. 
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Omi, Mo and Jeff as noted above; doing so allows for enabling data modeling and multi-sensor fusion technique for combining information and/or RF radiation measurement data from different sources, (Omi, 0236). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wolfle et al. (NPL: “Field strength prediction in indoor environments with neural networks”): teaches training and using an artificial neural network based model for the prediction of the electric field strength inside buildings. Castro-Gonzalez et al. (US 20190139221): teaches using a Radon transformation, in [0128] At step 270 of method 200, a Radon transform is performed on the spatiotemporal profiles generated in step 260 according to some embodiments. The Radon transform maps 2D lines to peaks located at particular positions so as to identify events and their associated parameters. Event detection using Radon transform is not only convenient but robust to noise… [0130] In some embodiments, f(x) is a discrete image, in which case a discrete-domain Radon transform R^0 involving a finite amount of radial-coordinate values and of angles in [0; π] may be used. The largest radial-coordinate value can correspond to half of the image-diagonal length. The profile in Radon-domain ĝ is thus represented as: … [0132] FIG. 7 illustrates the application of the Radon transform on the spatiotemporal profile shown in FIGS. 6A-6B according to some embodiments. The projection angle θ and radial coordinate R are shown in FIG. 7. The solid arrow illustrates the direction for trajectory-line detection. Considering a predefined set of angles θ and radial coordinates R, the Radon transform computes projections of the spatiotemporal-profile intensities (i.e., the intensities in the non-transformed domain, as shown in FIG. 7)… [0133] FIG. 
8A shows the Radon transform ĝ of the map in FIG. 7 according to some embodiments. The horizontal and vertical axes correspond to the projection angle and to the associated normalized radial coordinate, respectively. In FIG. 8A, several peaks may be readily identified. Each peak may correspond to the linear trajectories observed in the spatial domain (e.g., FIG. 6B). These peaks may be identified and located based on, for example, local-maxima detection within a window of odd pixel size S_w×S_w. In some embodiments, the local-maxima are limited to those greater than a threshold τ_m. In some embodiments, the maxima locations are limited to locations with angular values within the range [0; π/2], thereby imposing one single flow direction inside the capillary. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129

Prosecution Timeline

Aug 03, 2023
Application Filed
Sep 12, 2023
Response after Non-Final Action
Mar 08, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579409
IDENTIFYING SENSOR DRIFTS AND DIVERSE VARYING OPERATIONAL CONDITIONS USING VARIATIONAL AUTOENCODERS FOR CONTINUAL TRAINING
2y 5m to grant Granted Mar 17, 2026
Patent 12572814
ARTIFICIAL NEURAL NETWORK BASED SEARCH ENGINE CIRCUITRY
2y 5m to grant Granted Mar 10, 2026
Patent 12561570
METHODS AND ARRANGEMENTS TO IDENTIFY FEATURE CONTRIBUTIONS TO ERRONEOUS PREDICTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12547890
AUTOREGRESSIVELY GENERATING SEQUENCES OF DATA ELEMENTS DEFINING ACTIONS TO BE PERFORMED BY AN AGENT
2y 5m to grant Granted Feb 10, 2026
Patent 12536478
TRAINING DISTILLED MACHINE LEARNING MODELS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
85%
With Interview (+26.3%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 199 resolved cases by this examiner. Grant probability derived from career allow rate.
