Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. 2. Claims 1-20 are pending in this Office action. This action is responsive to Applicant’s application filed 09/20/203. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims under 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of 35 U.S.C. 103(c) and potential 35 U.S.C. 102(e), (f) or (g) prior art under 35 U.S.C. 103(a). 3. Claims 1-2, 4-10, and 12-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Varadan et al. (US Patent Publication No. 2023/0181121 A1, hereinafter “Varadan”) in view of Ben simhon et al. (US Patent Publication No. 2016/0210556 A1, hereinafter “Ben simhon”), Vigano et al. (US Patent Publication No. 2017/0011318 A1, hereinafter “Vigano”) and Yamaguchi et al. (US Patent Publication No. 
2020/0311576 A1, hereinafter “Yamaguchi”). As to Claim 1, Varadan teaches the claimed limitations: “A method, comprising:” as systems and methods to manage and predict post-surgical recovery (paragraph 0002). “Translating, by the device, the time series data to vectors that capture patterns over a time period of the time series data” as the outputs following different methods depend on the methods applied and their mathematical constraints in terms of the number of dimensions or length of a numeric vector or array that is possible as an output for each method. For Method, the output is a count of the number of deviations or the extent of deviation from a known pattern of recovery, which indicates that it could be a quantity reported as deviated or not at continuously measured instants of time (paragraph 0121). The optimal number of clusters may be chosen using methods such as the elbow point of the plot of number of clusters vs. total within-cluster sum of squares, the gap statistic method, the silhouette method, or the sum of squares method. There is a plurality of approaches to unsupervised learning to translate transformed inputs into metrics. Preferred methods are k-means clustering and hierarchical clustering as applied in Method (paragraph 0123). In Method 6, after the input data is obtained, it is processed for anomaly detection using anomaly detection algorithms such as isolation forest, local outlier factor, robust covariance, and One-Class support vector machines. Preferably, isolation forests are used for anomaly detection. The anomaly-detected data is then compared with additional data obtained from a historic library of recovery patterns (paragraphs 0139, 0150). 
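For illustration only, and not part of any cited disclosure, the elbow-point selection of the number of clusters described above (plotting number of clusters against total within-cluster sum of squares) can be sketched as follows; the toy data and the minimal 1-D k-means routine are hypothetical:

```python
import random

def kmeans_1d(xs, k, iters=50, seed=0):
    """Minimal 1-D k-means; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(xs, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        assign = [min(range(k), key=lambda j: (x - centroids[j]) ** 2) for x in xs]
        # Move each centroid to the mean of its members.
        for j in range(k):
            members = [x for x, a in zip(xs, assign) if a == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, assign

def wcss(xs, centroids, assign):
    """Total within-cluster sum of squares."""
    return sum((x - centroids[a]) ** 2 for x, a in zip(xs, assign))

# Two well-separated groups: WCSS drops sharply from k=1 to k=2,
# then flattens -- the "elbow" suggests k=2.
data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
curve = []
for k in (1, 2, 3):
    c, a = kmeans_1d(data, k)
    curve.append(wcss(data, c, a))
```

The elbow is read off the curve: the drop from k=1 to k=2 is large, while the drop from k=2 to k=3 is small.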
“Select, by the device, weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances” as in Method 6, after the input data is obtained, it is processed for anomaly detection using anomaly detection algorithms such as isolation forest, local outlier factor, robust covariance, and One-Class support vector machines. The historic library assumes that the selected input data across all methods is available for consumption for the implementation of this method except from patients who were observed prior to these methods. Distance measures are established methods to compare multi-dimensional data to quantify the distance between them to ascertain whether they are similar or different. This may be accomplished using a threshold for the distance computed. The mathematical functions to compute distance may be any one or combination of Euclidean, Manhattan, Mahalanobis, Minkowski, Hamming, and cosine distance. Preferably, Minkowski is used in the exemplary method. The patterns of occurrence of anomalies are then transformed using a mathematical model into a quantitative one- or higher-dimensional metric. Iteratively, the network is presented with the chosen input data and, through computations specified by the architecture of the neural network, produces an output, in this case an estimate of the assessment of a post-surgical patient, which is compared against a simultaneously measured value to compute an error that is then used to update the weights or parameters of the neural network during training. These networks are specifically suitable for time series data inputs and predictions that need to be made using time series data as inputs (paragraph 0139). 
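As an illustrative aside (the vectors and strings below are hypothetical and not drawn from any cited reference), several of the distance functions named in the quoted passage can be sketched in a few lines:

```python
import math

def euclidean(u, v):
    # Straight-line (L2) distance between two equal-length vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    # Sum of absolute coordinate differences (L1 distance).
    return sum(abs(a - b) for a, b in zip(u, v))

def minkowski(u, v, p):
    # Generalizes Manhattan (p=1) and Euclidean (p=2).
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1 / p)

def hamming(u, v):
    # Number of positions at which two equal-length sequences differ.
    return sum(a != b for a, b in zip(u, v))

u, v = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
```

Note that Minkowski distance with p=2 reduces exactly to Euclidean distance, which is why a single Minkowski implementation can stand in for several of the listed measures.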
“Apply, by the device, the weights to the Hamming distances, the vector Euclidean distances, and the Euclidean distances to generate weighted Hamming distances, weighted vector Euclidean distances, and weighted Euclidean distances” as distance measures are established methods to compare multi-dimensional data to quantify the distance between them to ascertain whether they are similar or different. This may be accomplished using a threshold for the distance computed. The mathematical functions to compute distance may be any one or combination of Euclidean, Manhattan, Mahalanobis, Minkowski, Hamming, and cosine distance. Preferably, Minkowski is used in the exemplary method. The patterns of occurrence of anomalies are then transformed using a mathematical model into a quantitative one- or higher-dimensional metric. As the patterns of occurrence of anomalies are a series in time showing when and how frequently anomalies were detected, the time-series data can be transformed using a model to represent the data such as Autoregressive (AR), Autoregressive Moving Average (ARMA), Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving Average (SARIMA), Generalized Autoregressive Conditional Heteroskedasticity (GARCH), or Vector Autoregression (VAR), and the parameters of such models with a chosen order will be a numerical array or a sequence of numbers (paragraphs 0138-0139). Supervised methods result in classification outputs that could be indicative of different levels of recovery or trajectories toward recovery, as opposed to continuous-valued assessments. These methods may include but are not limited to logistic regression, support vector machine classifiers, and tree-based classifier methods (paragraph 0143). 
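For illustration (the per-pair component distances and the weights below are hypothetical), applying weights to the three distance families as recited in the claim amounts to a weighted linear combination of the components for each pair of series:

```python
def weighted_distance(d_hamming, d_vector_euclid, d_euclid, weights):
    # Linear combination of the three per-pair distance components.
    w_h, w_v, w_e = weights
    return w_h * d_hamming + w_v * d_vector_euclid + w_e * d_euclid

# Hypothetical per-pair components: (Hamming, vector Euclidean, Euclidean).
pairs = {("s1", "s2"): (3, 2.5, 4.0), ("s1", "s3"): (1, 0.5, 1.2)}
weights = (0.2, 0.5, 0.3)
weighted = {pair: weighted_distance(*d, weights) for pair, d in pairs.items()}
```

The resulting weighted distances can then be fed to any clustering model that accepts a precomputed pairwise distance.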
“Process, by the device, the time series data, the weighted Hamming distances, the weighted vector Euclidean distances, and the weighted Euclidean distances, with a clustering model, to generate clusters for the time series data” as although feature extraction and/or feature selection steps can be conducted outright in the process of the present invention, in certain methods a time series regression neural network is used. For example, neural networks require initial values of weights, which for a given architecture is a parameter. Gaussian process methods require an initial choice of kernel function, which is one of a list of available options such as the radial basis function or the Matérn kernel (paragraph 0099). The output layer of a neural network produces a regression output, i.e., a continuous-valued output. The output layer is defined by either a single neuron in the case of separate models for the assessment, e.g., systolic and diastolic blood pressures or other physical parameters, or two neurons if, e.g., systolic and diastolic pressures and/or other physical parameters are estimated using separate models. Each neuron multiplies the weights (W1, W2, W3, and so on) assigned to each connection from the previous layers, sums the values, and finally applies a linear scaling or multiplicative factor to the sum to calculate an output (paragraph 0110). A self-organizing map is an unsupervised machine learning technique used to create a representation of a higher dimensional data set while preserving the information on the similarity of data points such that similar data points appear close to each other. For example, in the case of the exemplary feature set with features extracted in minute-level granularity observations as listed in Table 1, the data could be represented as clusters of observations that are similar. FIG. 6 is such a map-like visualization of clusters (paragraph 0133). 
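As a sketch only (the distance values and the merge threshold are hypothetical, and single-linkage merging is just one choice of clustering model), a clustering model can operate directly on precomputed weighted pairwise distances, for example via union-find:

```python
def single_linkage_clusters(dist, n, threshold):
    """Greedy single-linkage: merge items whose pairwise distance is
    within `threshold`. `dist` maps (i, j) with i < j to a distance."""
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for (i, j), d in sorted(dist.items(), key=lambda kv: kv[1]):
        if d <= threshold:
            parent[find(i)] = find(j)   # merge the two clusters
    return [find(i) for i in range(n)]

# Weighted pairwise distances for four series: 0,1 close; 2,3 close.
dist = {(0, 1): 0.2, (0, 2): 5.0, (0, 3): 5.1,
        (1, 2): 4.9, (1, 3): 5.2, (2, 3): 0.3}
labels = single_linkage_clusters(dist, 4, threshold=1.0)
```

Each entry of `labels` is a cluster representative, so two series share a cluster exactly when their labels match.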
“Perform, by the device, one or more actions based on the clusters” as the clustering methods then assign cluster numbers to each cluster. Once each feature is assigned to a cluster, the total number of feature instances belonging to each cluster is counted and referred to as cluster membership. This calculation is performed as part of translation of transformed data into a metric. The quantified change metric, which is the same as the patient status assessment, is the ratio of the difference in cluster membership to the total number of observations for each cluster. Thus, a patient status assessment as a single number or metric is obtained (paragraphs 0025, 0133). Varadan does not explicitly teach the claimed limitation “calculating, by a device, Hamming distances for binary data representing time series data”. Ben simhon teaches a block turbo code is decoded using a soft-in soft-out decoding algorithm and is configured by a Chase II algorithm which generates candidate codewords from p least reliable bits (LRBs) and then finds an optimal codeword therefrom and an extrinsic information calculating part which converts the optimal codeword to a soft output. A series of processes follows for generating candidate codewords and then applying a hard-input hard-output decoding algorithm to each of the codewords (paragraph 0003). In order to reduce the complexity, when the syndrome is “0”, the ML codeword which is a result of the Chase II algorithm is replaced by the hard decision vector R.sup.H. This is because a codeword having the smallest Euclidean distance among 2.sup.P candidate codewords is always the hard decision vector R.sup.H. From this, the decoding complexity may be reduced in proportion to the rate of the zero syndrome. Therefore, it is assumed that the hard decision vector R.sup.H does not have an error and the extrinsic information of the j-th bit position may be calculated by Equation (paragraphs 0055, 0066). 
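The cluster-membership change metric attributed to Varadan above can be illustrated with a short sketch; the exact form of the ratio is an assumption made for this illustration, and the cluster assignments are hypothetical:

```python
from collections import Counter

def membership(assignments):
    # Count of feature instances per cluster ("cluster membership").
    return Counter(assignments)

def change_metric(before, after):
    """Single-number status metric: total absolute change in cluster
    membership divided by the total number of observations. The precise
    form of Varadan's ratio is an assumption for this sketch."""
    total = sum(after.values())
    clusters = set(before) | set(after)
    return sum(abs(after.get(c, 0) - before.get(c, 0)) for c in clusters) / total

before = membership([0, 0, 1, 1, 1, 2])
after = membership([0, 1, 1, 1, 1, 2])
```

Here one observation has moved from cluster 0 to cluster 1, so the metric reflects two unit changes in membership over six observations.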
Varadan does not explicitly teach the claimed limitation “calculating, by the device, vector Euclidean distances for the vectors”. Vigano teaches a contractor recommendation system shown to include a distance calculator. The distance calculator is shown receiving the set of recommended contractors from the contractor recommender and the selected contractor from the communications interface. The distance calculator may be configured to determine a similarity between the selected contractor and each of the recommended contractors. In various embodiments, the similarity may be expressed as a distance (e.g., a cosine distance, a Euclidean distance, a Hamming distance, etc.) between the selected contractor and each of the recommended contractors. In some embodiments, the distance calculator determines a similarity between multiple contractors selected by the building owner or a cluster of building owners. The similarity may include a contractor attribute that is the same or similar among the selected contractors (paragraph 0182). The distance calculator generates a vector of attributes for each of the recommended contractors and a vector A.sub.s of attributes for the selected contractor (paragraphs 0183, 0202, 0204). Varadan does not explicitly teach the claimed limitation “calculate, by the device, Euclidean distances for the time series data”. Yamaguchi teaches a distance between a time series subsequence that is a section of the length L from each offset location and the feature waveform k is calculated. Then, a smallest distance is determined as the distance between the time series data sequence i and the feature waveform k. The smaller the distance is, the more closely the feature waveform k fits the time series data sequence. Euclidean distance is used for the distance. However, any type of distance may be used as long as the distance can evaluate degrees of fittingness between waveforms (paragraphs 0043-0046). 
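The fitting distance described for Yamaguchi can be illustrated with a short sketch (the series and the feature waveform below are hypothetical): the distance between a series and a feature waveform is taken as the smallest Euclidean distance over all equal-length subsequences.

```python
import math

def waveform_fit_distance(series, waveform):
    """Smallest Euclidean distance between a feature waveform and any
    equal-length subsequence of the series."""
    L = len(waveform)
    best = math.inf
    for offset in range(len(series) - L + 1):
        window = series[offset:offset + L]
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(window, waveform)))
        best = min(best, d)
    return best

series = [0.0, 0.1, 1.0, 2.0, 1.0, 0.1, 0.0]
waveform = [1.0, 2.0, 1.0]   # a peak-shaped feature waveform
```

Because the peak shape occurs exactly within the series, the fitting distance here is zero; a flat series with no peak would score strictly worse.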
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Varadan, Ben simhon, Vigano and Yamaguchi before him/her, to modify Varadan to convert the time series data into binary data because that would allow for problems to be quickly identified and resolved, hopefully before affecting business results, such as losing users, missing revenue, or decreasing productivity, as taught by Ben simhon (paragraph 0004). Or to calculate vector Euclidean distances for the vectors because that detects a fault in the building equipment based on the data received from the building equipment; the MSPR platform may include an alert generator that generates an alert for the building owner in response to detecting the fault, as taught by Vigano (paragraph 0009). Or to calculate Euclidean distances for the time series data because that would provide a classification model learned such that a performance indicator is optimized, as taught by Yamaguchi (paragraph 0007). As to Claim 2, Varadan does not explicitly teach the claimed limitation “converting the time series data into the binary data by converting the time series data to binary strings that capture undulations over the time period of the time series data”. 
Ben simhon teaches the similarity analysis for alternative metrics begins where control determines a binary representation for a first alternative metric; times at which the value of the alternative metric changes may be encoded. In other words, the time series data for the first alternative metric is converted to a set of time values where each time value indicates the amount of time between successive changes to the value of the first alternative metric (paragraphs 0127-0131). The seasonal trend identification module may also determine that twenty-four-hour cycles are present where the expected value of the metric may differ depending on the time of day. In various implementations, these seasonal trends may be completely algorithmic, with no preset information provided to the seasonal trend identification module about the existence of seven-day weeks, twenty-four-hour days, etc. (paragraphs 0064, 0068-0069). As to Claim 4, Varadan teaches the claimed limitations: “Wherein translating the time series data to the vectors comprises: calculating an average of values for each of a plurality of time steps of the time series data; determining a deviation of each value of the time series data and the average; and generating the vectors based determining the deviation of each value of the time series data and the average” as (paragraphs 0019, 0040, 0051, 0097, 0139). Ben simhon teaches (paragraph 0094). Yamaguchi teaches (paragraph 0061). As to Claim 5, Varadan teaches the claimed limitations: “Wherein the vector Euclidean distances represent distances between the vectors” as (paragraph 0139). Yamaguchi teaches (paragraphs 0087, 0186). 
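For illustration only, one plausible reading of claim 2's binary undulation encoding and claim 4's deviation-based vectors can be sketched as follows; the specific encodings chosen here (rise-vs-fall bits, deviation from the series mean) are assumptions for this sketch, not the applicant's disclosed implementation:

```python
def undulation_bits(series):
    """Encode a series as a binary string: 1 where the value rises or
    holds, 0 where it falls (an assumed reading of 'binary strings that
    capture undulations')."""
    return "".join("1" if b >= a else "0" for a, b in zip(series, series[1:]))

def deviation_vector(series):
    """Deviation of each value from the series average (claim 4 style)."""
    mean = sum(series) / len(series)
    return [x - mean for x in series]

s = [1.0, 3.0, 2.0, 2.0, 5.0]
```

The binary string is suitable for Hamming-distance comparison between series, while the deviation vector is suitable for vector Euclidean distances; deviations from the mean always sum to zero.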
As to Claim 6, Varadan does not explicitly teach the claimed limitation “Wherein selecting the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances comprises: selecting the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances to decrease intra-cluster variance for the time series data”. Ben simhon teaches (paragraphs 0043, 0090, 0094, 0107, 0111). As to Claim 7, Varadan does not explicitly teach the claimed limitation “Selecting the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances comprises: selecting weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances to increase inter-cluster variance for the time series data”. Ben simhon teaches (paragraphs 0043, 0072, 0090, 0107). As to Claim 8, it is rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claim 1. In addition, Varadan teaches systems and methods to manage and predict post-surgical recovery (paragraph 0002). Therefore, this claim is rejected for at least the same reasons as claim 1. As to Claim 9, Varadan teaches the claimed limitations: “wherein the one or more processors, to select the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances, are configured to: calculate weighted distances for different combinations of the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances; determine intra-cluster variances for the weighted distances; and select the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances based on the intra-cluster variances” as (paragraphs 0025, 005, 0050, 0096, 0135, 0137, 0139). Ben simhon teaches (paragraphs 0107, 0131). 
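The weight-selection scheme recited in claims 9 and 10 (iterate weight combinations, evaluate an intra-cluster variance for each, and apply a convergence criterion) can be sketched for illustration; the weight grid, the toy variance function standing in for "re-cluster under these weights and measure intra-cluster variance," and the patience-style stopping rule are all hypothetical:

```python
def select_weights(candidates, variance_of, tol=1e-6, patience=25):
    # Iterate weight combinations, keep the lowest intra-cluster
    # variance, and stop once `patience` successive candidates fail to
    # improve by more than `tol` (the convergence criterion).
    best_w, best_v, stale = None, float("inf"), 0
    for w in candidates:
        v = variance_of(w)
        if best_v - v > tol:
            best_w, best_v, stale = w, v, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_w, best_v

# Toy stand-in objective: variance is smallest at weights (0.2, 0.5, 0.3).
target = (0.2, 0.5, 0.3)
def toy_variance(w):
    return sum((a - b) ** 2 for a, b in zip(w, target))

# Grid of weight triples summing to 1, in steps of 0.1.
grid = [(h / 10, v / 10, 1 - (h + v) / 10)
        for h in range(11) for v in range(11 - h)]
best_weights, best_variance = select_weights(grid, toy_variance)
```

In a full system, `variance_of` would cluster the series under the candidate weights and return the resulting intra-cluster variance; the sketch only shows the search-and-converge loop around it.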
As to Claim 10, Varadan teaches the claimed limitations: “wherein the one or more processors, to select the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances, are configured to: iterate different combinations of the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances to generate intra-cluster variances; and apply a convergence criterion to the intra-cluster variances to select the weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances” as (paragraphs 0025, 005, 0050, 0096, 0135, 0137, 0139). Ben simhon teaches (paragraphs 0107, 0131). As to Claim 12, Varadan teaches the claimed limitations: “Wherein the one or more processors, to perform the one or more actions, are configured to: identify retail store segments based on the clusters and sales patterns provided by the time series data” as (paragraphs 0027, 0135-0137). Yamaguchi teaches (paragraph 0097, claim 16). As to Claim 13, Varadan does not explicitly teach the claimed limitation “Wherein the one or more processors, to perform the one or more actions, are configured to: identify a product or a service based on the clusters and sales and revenues provided by the time series data”. 
Vigano teaches (paragraphs 0151, 0212). As to Claim 14, Varadan teaches the claimed limitations: “Wherein the one or more processors, to perform the one or more actions, are configured to one or more of: forecast energy consumption for cell towers based on the clusters; or forecast network capacities for cell towers based on the clusters” as (paragraphs 0051-0052, 0091, 0110, 0123, 0139, 0143). As to Claim 15, Varadan teaches the claimed limitations: “A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to:” as the system may detect, process and report various physiologic and chemical parameters (paragraph 0014). The dimensionality is practically limited by the computational hardware and the available memory. Once transformed, the higher dimensional data may be used to train convolutional neural networks or one of a family of recurrent neural networks, inclusive of but not limited to long short-term memory (LSTM) networks (paragraph 0260). “Receive time series data” as the system utilizes Artificial Intelligence (AI) and allows physicians to remotely track their patients after surgery. Patients receive personalized daily care reminders, educational resources, and an AI-driven check-in tool that captures their physical and emotional symptoms (paragraph 0005). A method is provided for improving a patient's recovery using assessment predictions generated during perioperative care, wherein the assessment predictions are further configured as inputs to develop a time series forecasting model (paragraph 0040). 
“Translate the time series data to vectors that capture patterns over a time period of the time series data” as the outputs following different methods depend on the methods applied and their mathematical constraints in terms of the number of dimensions or length of a numeric vector or array that is possible as an output for each method. For Method, the output is a count of the number of deviations or the extent of deviation from a known pattern of recovery, which indicates that it could be a quantity reported as deviated or not at continuously measured instants of time (paragraph 0121). The optimal number of clusters may be chosen using methods such as the elbow point of the plot of number of clusters vs. total within-cluster sum of squares, the gap statistic method, the silhouette method, or the sum of squares method. There is a plurality of approaches to unsupervised learning to translate transformed inputs into metrics. Preferred methods are k-means clustering and hierarchical clustering as applied in Method (paragraph 0123). In Method 6, after the input data is obtained, it is processed for anomaly detection using anomaly detection algorithms such as isolation forest, local outlier factor, robust covariance, and One-Class support vector machines. Preferably, isolation forests are used for anomaly detection. The anomaly-detected data is then compared with additional data obtained from a historic library of recovery patterns (paragraphs 0139, 0150). “Select weights for the Hamming distances, the vector Euclidean distances, and the Euclidean distances” as in Method 6, after the input data is obtained, it is processed for anomaly detection using anomaly detection algorithms such as isolation forest, local outlier factor, robust covariance, and One-Class support vector machines. 
The historic library assumes that the selected input data across all methods is available for consumption for the implementation of this method except from patients who were observed prior to these methods. Distance measures are established methods to compare multi-dimensional data to quantify the distance between them to ascertain whether they are similar or different. This may be accomplished using a threshold for the distance computed. The mathematical functions to compute distance may be any one or combination of Euclidean, Manhattan, Mahalanobis, Minkowski, Hamming, and cosine distance. Preferably, Minkowski is used in the exemplary method. The patterns of occurrence of anomalies are then transformed using a mathematical model into a quantitative one- or higher-dimensional metric. Iteratively, the network is presented with the chosen input data and, through computations specified by the architecture of the neural network, produces an output, in this case an estimate of the assessment of a post-surgical patient, which is compared against a simultaneously measured value to compute an error that is then used to update the weights or parameters of the neural network during training. These networks are specifically suitable for time series data inputs and predictions that need to be made using time series data as inputs (paragraph 0139). “Apply the weights to the Hamming distances, the vector Euclidean distances, and the Euclidean distances to generate weighted Hamming distances, weighted vector Euclidean distances, and weighted Euclidean distances” as distance measures are established methods to compare multi-dimensional data to quantify the distance between them to ascertain whether they are similar or different. This may be accomplished using a threshold for the distance computed. The mathematical functions to compute distance may be any one or combination of Euclidean, Manhattan, Mahalanobis, Minkowski, Hamming, and cosine distance. 
Preferably, Minkowski is used in the exemplary method. The patterns of occurrence of anomalies are then transformed using a mathematical model into a quantitative one- or higher-dimensional metric. As the patterns of occurrence of anomalies are a series in time showing when and how frequently anomalies were detected, the time-series data can be transformed using a model to represent the data such as Autoregressive (AR), Autoregressive Moving Average (ARMA), Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving Average (SARIMA), Generalized Autoregressive Conditional Heteroskedasticity (GARCH), or Vector Autoregression (VAR), and the parameters of such models with a chosen order will be a numerical array or a sequence of numbers (paragraphs 0138-0139). Supervised methods result in classification outputs that could be indicative of different levels of recovery or trajectories toward recovery, as opposed to continuous-valued assessments. These methods may include but are not limited to logistic regression, support vector machine classifiers, and tree-based classifier methods (paragraph 0143). “Process the time series data, the weighted Hamming distances, the weighted vector Euclidean distances, and the weighted Euclidean distances, with a clustering model, to generate clusters for the time series data” as although feature extraction and/or feature selection steps can be conducted outright in the process of the present invention, in certain methods a time series regression neural network is used. For example, neural networks require initial values of weights, which for a given architecture is a parameter. Gaussian process methods require an initial choice of kernel function, which is one of a list of available options such as the radial basis function or the Matérn kernel (paragraph 0099). The output layer of a neural network produces a regression output, i.e., a continuous-valued output. 
The output layer is defined by either a single neuron in the case of separate models for the assessment, e.g., systolic and diastolic blood pressures or other physical parameters, or two neurons if, e.g., systolic and diastolic pressures and/or other physical parameters are estimated using separate models. Each neuron multiplies the weights (W1, W2, W3, and so on) assigned to each connection from the previous layers, sums the values, and finally applies a linear scaling or multiplicative factor to the sum to calculate an output (paragraph 0110). A self-organizing map is an unsupervised machine learning technique used to create a representation of a higher dimensional data set while preserving the information on the similarity of data points such that similar data points appear close to each other. For example, in the case of the exemplary feature set with features extracted in minute-level granularity observations as listed in Table 1, the data could be represented as clusters of observations that are similar. FIG. 6 is such a map-like visualization of clusters (paragraph 0133). “Perform one or more actions based on the clusters” as the clustering methods then assign cluster numbers to each cluster. Once each feature is assigned to a cluster, the total number of feature instances belonging to each cluster is counted and referred to as cluster membership. This calculation is performed as part of translation of transformed data into a metric. The quantified change metric, which is the same as the patient status assessment, is the ratio of the difference in cluster membership to the total number of observations for each cluster. Thus, a patient status assessment as a single number or metric is obtained (paragraphs 0025, 0133). 
Varadan does not explicitly teach the claimed limitation “convert the time series data into binary data”. Ben simhon teaches the similarity analysis for alternative metrics begins where control determines a binary representation for a first alternative metric; times at which the value of the alternative metric changes may be encoded. In other words, the time series data for the first alternative metric is converted to a set of time values where each time value indicates the amount of time between successive changes to the value of the first alternative metric (paragraph 0127). Varadan does not explicitly teach the claimed limitation “calculate Hamming distances for the binary data, wherein each of the Hamming distances represents a quantity of bit positions in which two bits, of the binary data, are different”. Heo teaches a block turbo code is decoded using a soft-in soft-out decoding algorithm and is configured by a Chase II algorithm which generates candidate codewords from p least reliable bits (LRBs) and then finds an optimal codeword therefrom and an extrinsic information calculating part which converts the optimal codeword to a soft output. A series of processes follows for generating candidate codewords and then applying a hard-input hard-output decoding algorithm to each of the codewords (paragraph 0003). In order to reduce the complexity, when the syndrome is “0”, the ML codeword which is a result of the Chase II algorithm is replaced by the hard decision vector R.sup.H. This is because a codeword having the smallest Euclidean distance among 2.sup.P candidate codewords is always the hard decision vector R.sup.H. From this, the decoding complexity may be reduced in proportion to the rate of the zero syndrome. Therefore, it is assumed that the hard decision vector R.sup.H does not have an error and the extrinsic information of the j-th bit position may be calculated by Equation (paragraphs 0055, 0066). 
Varadan does not explicitly teach the claimed limitation “calculate vector Euclidean distances for the vectors”. Vigano teaches a contractor recommendation system shown to include a distance calculator. The distance calculator is shown receiving the set of recommended contractors from the contractor recommender and the selected contractor from the communications interface. The distance calculator may be configured to determine a similarity between the selected contractor and each of the recommended contractors. In various embodiments, the similarity may be expressed as a distance (e.g., a cosine distance, a Euclidean distance, a Hamming distance, etc.) between the selected contractor and each of the recommended contractors. In some embodiments, the distance calculator determines a similarity between multiple contractors selected by the building owner or a cluster of building owners. The similarity may include a contractor attribute that is the same or similar among the selected contractors (paragraph 0182). The distance calculator generates a vector of attributes for each of the recommended contractors and a vector A.sub.s of attributes for the selected contractor (paragraphs 0183, 0202, 0204). Varadan does not explicitly teach the claimed limitation “calculate Euclidean distances for the time series data”. Yamaguchi teaches a distance between a time series subsequence that is a section of the length L from each offset location and the feature waveform k is calculated. Then, a smallest distance is determined as the distance between the time series data sequence i and the feature waveform k. The smaller the distance is, the more closely the feature waveform k fits the time series data sequence. Euclidean distance is used for the distance. However, any type of distance may be used as long as the distance can evaluate degrees of fittingness between waveforms (paragraphs 0043-0046). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Varadan, Ben simhon, Heo, Vigano, and Yamaguchi before him/her, to modify Varadan to convert the time series data into binary data, because that would allow problems to be quickly identified and resolved, hopefully before affecting business results such as losing users, missing revenue, or decreasing productivity, as taught by Ben simhon (paragraph 0004). Or because, since a codeword having the smallest Euclidean distance among the candidate codewords is always the hard decision vector, the decoding complexity may be reduced in proportion to the rate of the zero syndrome; therefore, it is assumed that the hard decision vector does not have an error and the extrinsic information of the j-th bit position may be calculated by Equation, as taught by Heo (paragraph 0055). Or to calculate vector Euclidean distances for the vectors, because that detects a fault in the building equipment based on the data received from the building equipment, and the MSPR platform may include an alert generator that generates an alert for the building owner in response to detecting the fault, as taught by Vigano (paragraph 0009). Or to calculate Euclidean distances for the time series data, because that would provide a classification model learned such that a performance indicator is optimized, as taught by Yamaguchi (paragraph 0007). As to claims 16-20, they are rejected under 35 U.S.C. 103(a) because the limitations therein have substantially the same scope as claims 2, 4, 6-7, and 9-10. In addition, Varadan teaches systems and methods to manage and predict post-surgical recovery (paragraph 0002). Therefore, these claims are rejected for at least the same reasons as claims 2, 4, 6-7, and 9-10. 4. Claim 3 is rejected under 35 U.S.C. 103(a) as being unpatentable over Varadan et al. (US Patent Publication No. 2023/0181121 A1, hereinafter “Varadan”) in view of Ben simhon et al. (US Patent Publication No. 
2016/0210556 A1, hereinafter “Ben simhon”), Heo et al. (US Patent Publication No. 2020/0083911 A1, hereinafter “Heo”), Vigano et al. (US Patent Publication No. 2017/0011318 A1, hereinafter “Vigano”) and Yamaguchi et al. (US Patent Publication No. 2020/0311576 A1, hereinafter “Yamaguchi”). As to Claim 3, Varadan does not explicitly teach the claimed limitation “wherein each of the Hamming distances represents a quantity of bit positions in which two bits, of the binary data, are different”. Heo teaches a block turbo code that is decoded using a soft-in soft-out decoding algorithm and is configured by a Chase II algorithm, which generates candidate codewords from p least reliable bits (LRBs) and then finds an optimal codeword therefrom, and an extrinsic information calculating part, which converts the optimal codeword to a soft output. A series of processes is performed for generating candidate codewords and then applying a hard-input hard-output decoding algorithm to each of the codewords (paragraph 0003). In order to reduce the complexity, when the syndrome is “0”, the ML codeword, which is a result of the Chase II algorithm, is replaced by the hard decision vector R^H. This is because a codeword having the smallest Euclidean distance among the 2^P candidate codewords is always the hard decision vector R^H. From this, the decoding complexity may be reduced in proportion to the rate of the zero syndrome. Therefore, it is assumed that the hard decision vector R^H does not have an error and the extrinsic information of the j-th bit position may be calculated by Equation (paragraphs 0055, 0066). 5. Claim 11 is rejected under 35 U.S.C. 103(a) as being unpatentable over Varadan et al. (US Patent Publication No. 2023/0181121 A1, hereinafter “Varadan”) in view of Ben simhon et al. (US Patent Publication No. 2016/0210556 A1, hereinafter “Ben simhon”), Servajean et al. (US Patent No. 11,924,048 B2, hereinafter “Servajean”), Vigano et al. (US Patent Publication No. 
2017/0011318 A1, hereinafter “Vigano”) and Yamaguchi et al. (US Patent Publication No. 2020/0311576 A1, hereinafter “Yamaguchi”). As to Claim 11, Varadan does not explicitly teach the claimed limitation “wherein the one or more processors, to perform the one or more actions, are configured to: identify similar cell towers based on the clusters and network traffic provided by the time series data”. Servajean teaches that a clustering process is performed to cluster the set of time series into a plurality of clusters, each constituting a subset of the set. In one embodiment, each cluster is defined based on a random division of the set of time series. In a preferred embodiment, each cluster is defined based on an autoencoder as input to a clustering algorithm such as k-means. For example, an autoencoder can be employed to convert a time series to a feature vector on which basis clustering is performed. Thus, for the set of time series, each time series can be converted to a feature vector as input to a clustering algorithm such as k-means. In this way, time series with common features determined by the autoencoder can be clustered together. In one embodiment, such clustering results in devices having similar network communication characteristics being clustered together (column 3, lines 36-51). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Varadan, Ben simhon, Servajean, Vigano, and Yamaguchi before him/her, to modify Varadan to convert the time series data into binary data, because that would allow problems to be quickly identified and resolved, hopefully before affecting business results such as losing users, missing revenue, or decreasing productivity, as taught by Ben simhon (paragraph 0004). 
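For illustration only, the clustering step Servajean describes can be sketched as follows: each time series is converted to a feature vector and k-means groups series with similar characteristics, such as cell towers with similar traffic. In this hypothetical sketch, simple summary statistics stand in for an autoencoder's encoding, and all names are illustrative.

```python
import math

def features(series):
    """Stand-in for an autoencoder encoding: mean and spread of the series."""
    m = sum(series) / len(series)
    s = math.sqrt(sum((v - m) ** 2 for v in series) / len(series))
    return (m, s)

def kmeans(vectors, k, iters=20):
    """Plain k-means with deterministic farthest-point initialization;
    returns a cluster label for each vector."""
    centers = [vectors[0]]
    while len(centers) < k:  # pick the point farthest from existing centers
        centers.append(max(vectors,
                           key=lambda v: min(math.dist(v, c) for c in centers)))
    for _ in range(iters):
        labels = [min(range(k), key=lambda i: math.dist(v, centers[i]))
                  for v in vectors]
        for i in range(k):  # move each center to the mean of its members
            members = [v for v, lab in zip(vectors, labels) if lab == i]
            if members:
                centers[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels

# Hypothetical per-tower traffic series: two low-traffic, two high-traffic.
traffic = [[10, 12, 11], [11, 10, 12], [95, 100, 98], [99, 97, 101]]
labels = kmeans([features(t) for t in traffic], k=2)
print(labels)  # [0, 0, 1, 1] -- similar-traffic series cluster together
```

The low-traffic and high-traffic series end up in separate clusters, which is the sense in which devices (here, towers) with similar network characteristics are clustered together.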
Or because of reconstruction error information generated by the autoencoder tester and/or determined by the statistical model generator, as taught by Servajean (column 4, lines 28-40). Or to calculate vector Euclidean distances for the vectors, because that detects a fault in the building equipment based on the data received from the building equipment, and the MSPR platform may include an alert generator that generates an alert for the building owner in response to detecting the fault, as taught by Vigano (paragraph 0009). Or to calculate Euclidean distances for the time series data, because that would provide a classification model learned such that a performance indicator is optimized, as taught by Yamaguchi (paragraph 0007). Examiner’s Note Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. 
See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 CFR 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Hwa, whose telephone number is 571-270-1285 and whose email address is james.hwa@uspto.gov. The examiner can normally be reached 9:00 am – 5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 03/30/2026 /SHYUE JIUNN HWA/ Primary Examiner, Art Unit 2156