Prosecution Insights
Last updated: April 19, 2026
Application No. 17/382,723

Multi-Level Time Series Forecaster

Final Rejection (§102, §103)
Filed
Jul 22, 2021
Examiner
BOSTWICK, SIDNEY VINCENT
Art Unit
2124
Tech Center
2100 — Computer Architecture & Software
Assignee
Ciena Corporation
OA Round
6 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 7-8
Time to Grant: 4y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 52% (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: +38.2% on resolved cases with interview (strong)
Avg Prosecution: 4y 7m (typical timeline)
Currently Pending: 68
Total Applications: 204 (career history, across all art units)

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Tech Center averages are estimates; based on career data from 136 resolved cases.
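The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch that reproduces them from the counts listed in this report (the numbers come from this report, not from an external data source):

```python
# Career allow rate from the reported counts (71 granted of 136 resolved).
granted, resolved = 71, 136
allow_rate = granted / resolved            # ~0.522, reported as 52%

# Statute-specific rates vs. an estimated Tech Center average: the report
# lists the examiner's rate and the delta, so the implied TC estimate is
# rate - delta.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (24.4, -15.6),
    "103": (40.9, +0.9),
    "102": (12.0, -28.0),
    "112": (21.9, -18.1),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

print(f"career allow rate: {allow_rate:.1%}")
print(tc_avg)
```

Note that all four statute rows imply the same Tech Center estimate (40.0%), which is consistent with the single "Black line" average the chart displayed.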

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicant's Amendment filed on January 28, 2026, in which claims 1, 2, 12, 13, and 19-21 are currently amended. Claims 1-21 are currently pending.

Response to Arguments

Applicant's arguments with respect to the rejection of claims 1-21 under 35 U.S.C. 102/103 based on the amendment have been considered but are not persuasive.

With respect to Applicant's arguments on p. 8 of the Remarks submitted 1/28/2026 that "McDonnell does not disclose, teach, or fairly suggest classifying any time series data using a trained time series classifier trained using a second partition of the time series data or selecting one or more trained forecasters trained using a first partition of the time series data from a plurality of forecasters to make a forecast based on the time series data using a correlation analyzer that correlates classified time series data types to forecaster types", Examiner respectfully disagrees. McDonnell discloses ([Col. 7 l. 1-10] "Generally, the data set 110 provided to the data agnostic model builder 122 is pre-processed to some extent, such as to exclude outlier values, to select or label a particular subset of the data" [Col. 14 l. 15-20] "As a specific example, the fitness calculator 130 selects a subset of the data set 110 for use as validation data (also referred to herein as test data)" test data/validation data interpreted as the second partition. [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set". Training data interpreted as the first partition.) and [Col. 7 l. 64-Col. 8 l.
10] "The data set analyzer 124 uses heuristics, a data classifier, or both, to determine characteristics of the input data that indicate a data type of the input data. For example, the data set 110 could include time-series data, text, image data, other data types, or combinations thereof (e.g., time-series data with associated text labels)" [Col. 20 l. 65-Col. 21 l. 6] "in FIG. 4, the operations performed iteratively also include, at 406, providing the matrix representations as input to a relative fitness estimator to generate estimated fitness data for neural networks of the population." [Col. 22 l. 29-45] "the data set analyzer 124 evaluates the data set 110 of FIGS. 1 and 2 to determine characteristics of the data set 110 and selects the parameters 118 based on the characteristics of the data set 110. The parameters 118 can include, for example, architectural parameters that are used to guide generation of the initial population" McDonnell explicitly states that the data set analyzer analyzes input data for characteristics including data type. This data is then used to "guide generation of the initial population" of forecasters, where a forecaster whose generation was guided by a respective data type is interpreted as a forecaster type. See also FIG. 1, which shows that the relative fitness estimator 134 uses data set analyzer 124 such that data set analyzer 124 is interpreted as synonymous with a correlation analyzer.).

With respect to Applicant's arguments on p. 8 of the Remarks submitted 1/28/2026 that "McDonnell does not disclose, teach, or fairly suggest that one or more trained forecasters are trained using representative time series data correlated to time series data using any patterns", Examiner respectfully disagrees. While the instant claims do not limit "patterns", which is a broad term, McDonnell explicitly states that the method is performed using patterns (McDonnell [Col. 22 l.
6-16] "the method 400 provides an automated method of generating a neural network (e.g., software or software configuration information) that can be used for a variety of purposes, such as state labeling, state or value prediction, pattern recognition, etc. Using the relative fitness estimator 134 to rank neural networks of a population (e.g., the population 302 of FIG. 3) can significantly reduce computing resources required to automatically generate a neural network relative to model building processes that calculate fitness of the neural networks of a population in each iteration.").

With respect to Applicant's arguments on p. 9 of the Remarks submitted 1/28/2026 that "McDonnell does not disclose, teach, or fairly suggest that any time series data is classified using a trained time series classifier trained using a second partition of the time series data while one or more trained forecasters are in an inference mode and making predictions, not training", Examiner respectfully disagrees. McDonnell discloses ([Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set" [Col. 5 l. 39-50] "a fitness value can be calculated for each of the neural networks (e.g., using traditional fitness calculations as described above). The fitness value for a particular neural network can be used along with the matrix representation of the particular neural network as training data to train the relative fitness estimator." [Col. 11 l. 3-13] "using the relative fitness estimator 134, rather than the fitness calculator 130, to estimate the relative fitness of the neural networks of each population during each epoch after the initial epoch" McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition). See also FIG.
5, where model fitness is explicitly calculated through inference step 510 before training step 514. McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition) before being trained using the result of the evaluation (see FIG. 5, steps 510 through 514), and then repeats the process using the trained model).

For at least these reasons and those further detailed below, Examiner asserts that it is reasonable and appropriate to maintain the rejection in view of McDonnell.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 10, 12, 13, 18, 19, 20, and 21 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by McDonnell (US 10685286 B1).

[Image: FIG. 5 of US10685286B1]

Regarding claim 1, McDonnell teaches A non-transitory computer-readable medium configured to store a program executable by a processing system, the program including instructions configured to cause the processing system to: ([Col. 24 l.
33-35] "The memory device(s) 610 store instructions 612 that are executable by the processor(s) 602 to perform various operations and functions")

obtain time series data generated by one or more sensors of a network; ([Col. 6 l. 41-54] "if the data set used to generate the neural network is time series data captured by a particular sensor coupled to a particular system, the neural network can be used to predict future sensor data values or future states of the particular system based on real-time sensor data from the particular sensor (or from similar sensors associated with other systems). Thus, the neuroevolutionary process automates creation of a software tool (e.g., a neural network) that can be used for a variety of purposes depending on the input data set used.")

and partition the time series data into a first partition and second partition ([Col. 7 l. 1-10] "Generally, the data set 110 provided to the data agnostic model builder 122 is pre-processed to some extent, such as to exclude outlier values, to select or label a particular subset of the data" [Col. 14 l. 15-20] "As a specific example, the fitness calculator 130 selects a subset of the data set 110 for use as validation data (also referred to herein as test data)" test data/validation data interpreted as the second partition. [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set". Training data interpreted as the first partition.)

classify the time series data using a trained time series classifier trained using the second partition of the time series data; and ([Col. 8 l. 5-10] "the neural network 136 is to be configured to classify input data or to predict a future state or value" [Col. 3 l.
25-30] "Fitness is evaluated (or calculated) by providing test data as input to the neural network and comparing an output of the neural network to an expected result" McDonnell explicitly states that the model is evaluated using a test/validation subset of the data (the second partition) before being trained using the result of the evaluation (see FIG. 5, steps 510 through 514). McDonnell explicitly anticipates configuring/training the model for classification: [Col. 6 l. 55-Col. 7 l. 12] "after the data set 110 has been processed to train a neural network 136, data from a real-time data stream, such as a second data set 138, can be provided to the neural network 136 for analysis")

select one or more trained forecasters trained using the first partition of the time series data from a plurality of forecasters to make a forecast based on the time series data ([Abstract] "The operations also include providing the matrix representations as input to a relative fitness estimator that is trained to generate estimated fitness data for neural networks of the population. The estimated fitness data are based on expected fitness of neural networks predicted by the relative fitness estimator. The operations further include generating, based on the estimated fitness data, a subsequent population of neural networks. The method also includes, when a termination condition is satisfied, outputting data identifying a neural network as a candidate neural network." [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set" Outputting a candidate neural network is interpreted as synonymous with selecting one or more trained forecasters. See also the flow chart of FIG. 4.
McDonnell explicitly states that the model is trained using a training subset of the data (training data set/the first partition))

using a correlation analyzer that correlates classified time series data types to forecaster types; ([Col. 7 l. 64-Col. 8 l. 10] "The data set analyzer 124 uses heuristics, a data classifier, or both, to determine characteristics of the input data that indicate a data type of the input data. For example, the data set 110 could include time-series data, text, image data, other data types, or combinations thereof (e.g., time-series data with associated text labels)" [Col. 20 l. 65-Col. 21 l. 6] "in FIG. 4, the operations performed iteratively also include, at 406, providing the matrix representations as input to a relative fitness estimator to generate estimated fitness data for neural networks of the population." [Col. 22 l. 29-45] "the data set analyzer 124 evaluates the data set 110 of FIGS. 1 and 2 to determine characteristics of the data set 110 and selects the parameters 118 based on the characteristics of the data set 110. The parameters 118 can include, for example, architectural parameters that are used to guide generation of the initial population" McDonnell explicitly states that the data set analyzer analyzes input data for characteristics including data type. This data is then used to "guide generation of the initial population" of forecasters, where a forecaster whose generation was guided by a respective data type is interpreted as a forecaster type. See also FIG. 1, which shows that the relative fitness estimator 134 uses data set analyzer 124 such that data set analyzer 124 is interpreted as synonymous with a correlation analyzer.)

monitor an accuracy of the forecast; ([Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set" [Col. 5 l.
39-50] "a fitness value can be calculated for each of the neural networks (e.g., using traditional fitness calculations as described above). The fitness value for a particular neural network can be used along with the matrix representation of the particular neural network as training data to train the relative fitness estimator." The fitness value monitors the accuracy of the generated forecaster's (model's) forecast)

and based on the monitored accuracy of the forecast, obtain modified time series data and ([Col. 6 l. 55-Col. 7 l. 12] "the data set 110 provided to the data agnostic model builder 122 is pre-processed to some extent, such as to exclude outlier values, to select or label a particular subset of the data, to normalize values, etc. Thus, the data set 110 is usually obtained from a memory (such as the database 106) rather than from a real-time data stream as may be output from the sensor 104 or the medical device 108. However, after the data set 110 has been processed to train a neural network 136, data from a real-time data stream, such as a second data set 138, can be provided to the neural network 136 for analysis" McDonnell explicitly discloses that the training is based on the monitored accuracy of the forecast)

re-train the trained time series classifier and the one or more trained forecasters using the modified time series data. ([Col. 6 l. 55-Col. 7 l. 12] "the data set 110 provided to the data agnostic model builder 122 is pre-processed to some extent" [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value" [Col. 6 l. 2-12] "the relative fitness estimator can be used to evaluate candidate neural networks during each epoch.
Each epoch includes a particular number of candidate neural networks produced via various evolutionary operations (e.g., crossover and mutation operations) that are performed on the candidate neural networks of a preceding epoch" Each epoch is interpreted as training such that a subsequent training epoch is interpreted as re-training based on the evaluated fitness (monitored accuracy). McDonnell explicitly states that the training data is pre-processed (modified).)

Regarding claim 2, McDonnell teaches The non-transitory computer-readable medium of claim 1, wherein the time series data has patterns therein (McDonnell [Col. 22 l. 6-16] "the method 400 provides an automated method of generating a neural network (e.g., software or software configuration information) that can be used for a variety of purposes, such as state labeling, state or value prediction, pattern recognition, etc. Using the relative fitness estimator 134 to rank neural networks of a population (e.g., the population 302 of FIG. 3) can significantly reduce computing resources required to automatically generate a neural network relative to model building processes that calculate fitness of the neural networks of a population in each iteration.") and the one or more trained forecasters are trained using representative time series data correlated to the time series data using the patterns (McDonnell [Col. 20 l. 18-33] "Using the data agnostic model builder 122 described with reference to FIGS.
2 and 3 significantly reduces the amount of computing resources used to generate a neural network (e.g., the neural network 136) via the neuroevolutionary process by using the relative fitness estimator 134, rather than the fitness calculator 130, to estimate the relative fitness of the neural networks of each population during each epoch after the initial epoch" An epoch after the initial epoch is interpreted as synonymous with reperforming the training to determine the one or more forecasters from the plurality of forecasters.)

Regarding claim 10, McDonnell teaches The non-transitory computer-readable medium of claim 1, wherein selecting includes selecting a same forecaster with different parameters. (McDonnell [Col. 1 l. 11-17] "a genetic algorithm applies neuroevolutionary techniques over multiple epochs to evolve candidate neural networks to model a training data set." [Col. 2 l. 64-Col. 3 l. 17] ""Evolutionary processes" are performed on the members of a population in one epoch to generate a population for a subsequent epoch. The evolutionary processes performed include, for example, "mutation", which involves changing one or more features of a neural network (or a set of neural networks referred to as a genus); "cross-over", which involves combining features of two or more neural networks (e.g., "parent" neural networks) to form a new neural network (e.g., a "child" neural network); and "extinction", which involves dropping one or more neural networks from the population" [Col. 14 l. 1-9] "each of the neural networks of the initial population 204 and of each subsequent population (e.g., during later epochs) includes the same input layer and the same output layer" Evolving a candidate neural network through mutation is interpreted as selecting a same forecaster with different parameters.)

Regarding claims 12, 13, and 18, claims 12, 13, and 18 are directed towards a system capable of performing the same methods as the computer-readable media in claims 1, 2, and 10.
Therefore, the rejections applied to claims 1, 2, and 10 also apply to claims 12, 13, and 18. Claims 12, 13, and 18 recite the additional elements "one or more processors; and a memory in communication with the one or more processors, the memory configured to store instructions for detecting outliers of network data" (McDonnell [Col. 1 l. 58-Col. 2 l. 14] "a computing device includes a processor and a memory storing instructions that are executable by the processor to cause the processor to iteratively perform a set of operations until a termination condition is satisfied [...] The instructions are further executable by the processor to cause the processor to, based on a determination that the termination condition is satisfied, output data identifying one or more neural networks of a final population of neural networks as a candidate neural network").

Regarding claim 19, claim 19 is directed toward the method performed by the computer-readable media of claim 1. Therefore, the rejection applied to claim 1 also applies to claim 19.

Regarding claim 20, McDonnell teaches The non-transitory computer-readable medium of claim 1, wherein the time series data is classified using the trained time series classifier trained using the second partition of the time series data while the one or more trained forecasters are in an inference mode and making predictions, not training (McDonnell [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set" [Col. 5 l. 39-50] "a fitness value can be calculated for each of the neural networks (e.g., using traditional fitness calculations as described above). The fitness value for a particular neural network can be used along with the matrix representation of the particular neural network as training data to train the relative fitness estimator." [Col. 11 l.
3-13] "using the relative fitness estimator 134, rather than the fitness calculator 130, to estimate the relative fitness of the neural networks of each population during each epoch after the initial epoch" McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition). See also FIG. 5, where model fitness is explicitly calculated through inference step 510 before training step 514. McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition) before being trained using the result of the evaluation (see FIG. 5, steps 510 through 514), and then repeats the process using the trained model.)

Regarding claim 21, McDonnell teaches The system of claim 12, wherein the time series data is classified using the trained time series classifier trained using the second partition of the time series data while the one or more trained forecasters are in an inference mode and making predictions, not training (McDonnell [Col. 1 l. 18-32] "The accuracy and/or reliability of a neural network can be summarized using a fitness value, which indicates how closely output of the neural network matches an expected output determined based on the training data set" [Col. 5 l. 39-50] "a fitness value can be calculated for each of the neural networks (e.g., using traditional fitness calculations as described above). The fitness value for a particular neural network can be used along with the matrix representation of the particular neural network as training data to train the relative fitness estimator." [Col. 11 l. 3-13] "using the relative fitness estimator 134, rather than the fitness calculator 130, to estimate the relative fitness of the neural networks of each population during each epoch after the initial epoch" McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition). See also FIG.
5, where model fitness is explicitly calculated through inference step 510 before training step 514. McDonnell explicitly states that the model is evaluated using a test subset of the data (the second partition) before being trained using the result of the evaluation (see FIG. 5, steps 510 through 514), and then repeats the process using the trained model.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-5, 7, 9, and 14-17 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of McDonnell and Ryan (US 2019/0379589 A1).

Regarding claim 3, McDonnell teaches The non-transitory computer-readable medium of claim 1. However, McDonnell doesn't explicitly teach wherein the time series data includes any of Signal-to-Noise Ratio, Bit Error Rate, packet losses, and packet counts.
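The limitation at issue in these claims (a correlation analyzer routing a classified time series type, such as SNR or packet loss, to a forecaster trained for that type) can be sketched in minimal form. This is a hypothetical illustration of the claimed arrangement; none of the names or heuristics below come from McDonnell, Ryan, or the application:

```python
# Hypothetical sketch: route a classified KPI series to a forecaster
# trained for that data type, as in the claimed correlation analyzer.
from statistics import fmean

def classify(series):
    """Toy stand-in for a trained time series classifier: label the
    series by its scale (SNR-like values are large, loss rates small)."""
    return "snr" if fmean(series) > 1.0 else "packet_loss"

FORECASTERS = {  # data type -> forecaster type (toy models)
    "snr": lambda s: fmean(s[-3:]),        # short moving average
    "packet_loss": lambda s: max(s[-3:]),  # pessimistic recent maximum
}

def forecast(series):
    """Correlate the classified data type to a forecaster and predict."""
    kind = classify(series)
    return kind, FORECASTERS[kind](series)

kind, value = forecast([21.0, 20.5, 20.1, 19.8])  # an SNR-like series
```

The mapping table is the whole point of the limitation: the classifier's output selects which trained model makes the prediction, rather than one model handling all data types.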
Ryan, in the same field of endeavor, teaches The non-transitory computer-readable medium of claim 1, wherein the time series data includes any of Signal-to-Noise Ratio, Bit Error Rate, packet losses, and packet counts. ([¶0062] "FIG. 2 is a graph 20 of time-series data where Signal-to-Noise Ratio (SNR) measurements are taken over time. A pattern detection model that is modeled from the historical training data can be used with new data for predicting when the SNR curve crosses over a threshold 22. Using the pattern detection model, new data can be plotted, and patterns may be detected to predict when the SNR in the future may cross the threshold 22").

McDonnell as well as Ryan are directed towards processing time series data with neural networks. Therefore, McDonnell and Ryan are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of McDonnell with the teachings of Ryan by applying the neural architecture search in McDonnell to the data types to find an optimized model for the commercial application described in Ryan. Ryan provides as additional motivation for combination ([¶0061] "it has been discovered that patterns in the time-series may show up as an object in the image generated from the time-series data. By using object detection methods, it is possible to detect patterns in the data"). This motivation for combination also applies to the remaining claims which depend on this combination.

Regarding claim 4, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 3, wherein when the time series data is Signal-to-Noise Ratio, using a first model trained for Signal-to-Noise Ratio patterns in the time series data. (Ryan [¶0062] "FIG. 2 is a graph 20 of time-series data where Signal-to-Noise Ratio (SNR) measurements are taken over time.
A pattern detection model that is modeled from the historical training data can be used with new data for predicting when the SNR curve crosses over a threshold 22. Using the pattern detection model, new data can be plotted, and patterns may be detected to predict when the SNR in the future may cross the threshold 22").

Regarding claim 5, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 4, wherein when the time series data is another Signal-to-Noise Ratio, using a second model trained for other Signal-to-Noise Ratio patterns in the time series data. (Ryan [¶0055] "Examples of anomaly detection may include drops in SNR due to thunder strikes, detection of traffic pattern shifts (from packet counter data and call admission control data), network intrusion detection (from an examination of packet counter data), equipment failure prediction (from performance monitoring data), etc. Pattern detection for anomaly detection associates labeled anomaly periods with the anomalous measurements in the time-series" [¶0062] "FIG. 2 is a graph 20 of time-series data where Signal-to-Noise Ratio (SNR) measurements are taken over time. A pattern detection model that is modeled from the historical training data can be used with new data for predicting when the SNR curve crosses over a threshold 22. Using the pattern detection model, new data can be plotted, and patterns may be detected to predict when the SNR in the future may cross the threshold 22").

Regarding claim 7, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 3, wherein when the time series data is Bit Error Rate, using a first model trained for Bit Error Rate patterns in the time series data. (Ryan [¶0069] "The software applications of the present systems and methods may use relevant Performance Monitoring (PM) data along with other data to describe the behavior of a telecommunications network."
and [¶0070] "Examples of PM data include, without limitation, optical layer data [...] The optical layer data can include [...] Bit Error Rate (BER)").

Regarding claim 9, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 3, wherein when the time series data is packet count, using a first model trained for packet count patterns in the time series data. (Ryan [¶0055] "Examples of anomaly detection may include [...] detection of traffic pattern shifts (from packet counter data and call admission control data), network intrusion detection (from an examination of packet counter data)").

Regarding claims 14-17, claims 14-17 are directed towards a system that is substantially similar to claims 3-5 and 7. Therefore, the rejection applied to claims 3-5 and 7 also applies to claims 14-17.

Claim 6 is rejected under 35 U.S.C. §103 as being unpatentable over the combination of McDonnell and Ryan, and in further view of Ngo ("Deep Learning Based Prediction of Signal-to-Noise Ratio (SNR) for LTE and 5G Systems", 2020).

Regarding claim 6, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 5. However, the combination of McDonnell and Ryan doesn't explicitly teach wherein when the time series data is a further Signal-to-Noise Ratio, using a third model trained for further Signal-to-Noise Ratio patterns in the time series data.

Ngo, in the same field of endeavor, teaches The non-transitory computer-readable medium of claim 5, wherein when the time series data is a further Signal-to-Noise Ratio, using a third model trained for further Signal-to-Noise Ratio patterns in the time series data ([Abstract] "Both time-domain and frequency domain signal grids are evaluated as inputs for SNR prediction" [Introduction] "To achieve the above goals of prediction accuracy and latency, we investigate four techniques. First, OFDM signaling offers time-domain and frequency-domain signal receptions.
Both of these forms of signals are evaluated and compared for prediction input selection. Second, the numerical properties of SNR prediction, including range, resolution, and ordering (e.g., comparison), are taken advantage of. Third, temporal diversity is exploited where every prediction is based on a sequence of time-series data inputs." The frequency-domain signal grid SNR is interpreted as a further SNR.).

The combination of McDonnell and Ryan as well as Ngo are directed towards time-series forecasts for network environments. Therefore, the combination of McDonnell and Ryan as well as Ngo are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of McDonnell and Ryan with the teachings of Ngo by including time-domain data of 5G and LTE networks in a system trained to detect patterns in network data. Ngo provides the additional reason for combination ([Introduction] "Signal to noise ratio (SNR) is an essential parameter in wireless communication. Its accurate and timely knowledge is even more critical in 5G technology and beyond to satisfy the demanding quality of services (QoS)"). In other words, it would have been obvious to train the system to detect patterns in wireless 5G and LTE networks in order to increase the range of environments in which the system can detect patterns, specifically for 5G technology where it is "critical" to satisfy quality of service.

Claim 8 is rejected under 35 U.S.C. §103 as being unpatentable over the combination of McDonnell and Ryan, and further in view of Hariharan (US 20220022061 A1).

Regarding claim 8, the combination of McDonnell and Ryan teaches The non-transitory computer-readable medium of claim 3. However, the combination of McDonnell and Ryan doesn't explicitly teach wherein when the time series data is packet losses, using a first model trained for packet losses patterns in the time series data.
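The SNR forecasting scenario cited from Ryan's ¶0062 above (predicting when a degrading SNR curve will cross a threshold) can be illustrated with a simple linear extrapolation. This is a hedged stand-in, not Ryan's pattern detection model: it fits a least-squares line to recent samples and extrapolates the crossing step.

```python
# Toy illustration of Ryan's threshold-crossing scenario: fit a line to
# recent SNR samples and extrapolate when it crosses the threshold.
# Not Ryan's model; a least-squares stand-in for illustration only.
def crossing_step(snr, threshold):
    n = len(snr)
    xs = range(n)
    mx, my = sum(xs) / n, sum(snr) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, snr)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope >= 0:
        return None  # SNR not degrading; no predicted crossing
    intercept = my - slope * mx
    return (threshold - intercept) / slope  # sample index of crossing

# SNR dropping ~0.5 dB per sample from 25 dB; threshold at 20 dB.
samples = [25.0, 24.5, 24.0, 23.5, 23.0]
step = crossing_step(samples, 20.0)  # -> 10.0
```

A real deployment would use a trained model over many KPI patterns rather than a single line fit, but the output is the same kind of quantity Ryan's ¶0062 describes: a predicted future time at which the SNR crosses the threshold.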
Hariharan, in the same field of endeavor, teaches wherein when the time series data is packet losses, using a first model trained for packet losses patterns in the time series data ([¶0029] "The performance model can be pre-trained to output throughput based on session features. The performance model can be a neural network, in an example. The model can be trained across user sessions by applying machine learning algorithms to a large set of telemetry data. This can tune the performance model over time for predicting the performance values based on session features. In one example, the session features used as inputs to the model can include downlink channel quality (“CQI”), uplink channel quality (“SINR”), signal strength, power parameters on the cell's control and data channels, cell load, number of antennas present, and packet loss rates, among others."). The combination of McDonnell and Ryan, as well as Hariharan, are directed towards using neural networks for processing time series data. Therefore, the combination of McDonnell and Ryan, as well as Hariharan, are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of McDonnell and Ryan with the teachings of Hariharan by using the packet loss data as a data type in the neural architecture search of McDonnell. Hariharan provides additional motivation for combination ([¶0030] "The observed performance value for the user session can be used as a comparison point to determine potential improvements based on adjustments to the power control parameters at the base station").

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/SIDNEY VINCENT BOSTWICK/
Examiner, Art Unit 2124

/MIRANDA M HUANG/
Supervisory Patent Examiner, Art Unit 2124
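The architecture the rejection maps onto McDonnell and Ryan — classify the incoming time series with a trained classifier, then use a type-to-forecaster mapping (the claimed "correlation analyzer") to select which trained forecaster runs — can be sketched as follows. This is a minimal illustrative sketch, not the applicant's or any reference's implementation; all names and the toy threshold classifier are hypothetical, and the real claim contemplates a classifier trained on a second data partition and forecasters trained on a first partition.

```python
# Hypothetical sketch of the claimed two-stage forecaster.
# Stage 1: a classifier labels the time series type.
# Stage 2: a type -> forecaster lookup selects the model to apply.
from statistics import mean


def classify_series(series):
    """Toy stand-in for the claimed trained time series classifier.

    A real implementation would be trained on a held-out (second)
    partition of the data; here we simply threshold the mean.
    """
    return "packet_count" if mean(series) > 100 else "snr"


def naive_forecast(series):
    # Trivial per-type forecaster: repeat the last observed value.
    # Each real forecaster would be trained on the first partition.
    return series[-1]


# Stand-in for the claimed "correlation analyzer": a mapping from
# classified data type to the forecaster trained for that type.
FORECASTER_BY_TYPE = {
    "packet_count": naive_forecast,
    "snr": naive_forecast,
}


def forecast(series):
    kind = classify_series(series)
    return FORECASTER_BY_TYPE[kind](series)


print(forecast([120, 130, 125]))  # classified as packet_count
```

In a real system the dictionary would hold distinct models (e.g., one trained on packet-count patterns, another on SNR patterns), which is the dispatch-by-data-type behavior recited in claims 3-9.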

Prosecution Timeline

Jul 22, 2021
Application Filed
Oct 22, 2024
Non-Final Rejection — §102, §103
Dec 12, 2024
Response after Non-Final Action
Dec 12, 2024
Response Filed
Jan 23, 2025
Response Filed
Feb 21, 2025
Final Rejection — §102, §103
Apr 17, 2025
Response after Non-Final Action
May 28, 2025
Request for Continued Examination
Jun 01, 2025
Response after Non-Final Action
Jun 21, 2025
Non-Final Rejection — §102, §103
Jul 31, 2025
Response Filed
Sep 04, 2025
Final Rejection — §102, §103
Nov 03, 2025
Response after Non-Final Action
Nov 10, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Nov 19, 2025
Non-Final Rejection — §102, §103
Jan 28, 2026
Response Filed
Mar 04, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604
SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12547878
Highly Efficient Convolutional Neural Networks
2y 5m to grant Granted Feb 10, 2026
Patent 12536426
Smooth Continuous Piecewise Constructed Activation Functions
2y 5m to grant Granted Jan 27, 2026
Patent 12518143
FEEDFORWARD GENERATIVE NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12505340
STASH BALANCING IN MODEL PARALLELISM
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
52%
Grant Probability
90%
With Interview (+38.2%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
