Prosecution Insights
Last updated: April 19, 2026
Application No. 17/884,165

RECURRENT NEURAL NETWORKS WITH GAUSSIAN MIXTURE BASED NORMALIZATION

Non-Final OA: §101, §103, §112
Filed: Aug 09, 2022
Examiner: JABLON, ASHER H.
Art Unit: 2127
Tech Center: 2100 (Computer Architecture & Software)
Assignee: The Bank of New York Mellon
OA Round: 1 (Non-Final)

Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 4y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 44% (40 granted / 90 resolved; -10.6% vs TC avg)
Interview Lift: +43.9% on resolved cases with interview (strong)
Typical Timeline: 4y 6m avg prosecution; 25 currently pending
Career History: 115 total applications across all art units

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 26.9% (-13.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 90 resolved cases.

Office Action

Rejections under §101, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 501, 510, 512, 540, 541 in Fig. 5. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: Examiner recommends updating the placeholder application number in the first paragraph of the specification on page 1. Appropriate correction is required.

Claim Objections

Claims 1, 7-8, 10, 16, and 21 are objected to because of the following informalities: In the final line of claims 1, 10, and 16, Examiner suggests changing "magnitude" to "a magnitude". The final line of each claim is missing a period. In claim 7, line 2 and claim 8, line 2, the term "gaussian" should be capitalized. In claim 21, the end of line 1 on page 34 should recite "and". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4, 11-13, and 17-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In claim 2, the limitations in lines 5-6 render the claim indefinite. It is unclear what "an outcome" and "an output" are and what produced said outcome and output. It is unclear how the outcome is related to the normalized data values. It is unclear how comparing the outcome to the output is used to predict the directionality of the time series data. Examiner treats claim 2 to mean processing the normalized data values into an outcome, comparing the outcome to some other data, and predicting the directionality based on the comparison. Claims 3-4 are rejected for failing to cure the deficiencies of claim 2.

Claim 4 is rendered indefinite for the following reasons. Based on claims 2 and 4, it is unclear if "the reconstructed input being used to make the prediction" from claim 4 corresponds to either of "an outcome" or "an output" from claim 2, which are used to predict the directionality. Claim 4 recites the limitation "the direction" in line 3. There is insufficient antecedent basis for this limitation in the claim. It is unclear if "the direction" is supposed to recite "the directionality", which has antecedent basis from claim 1.
Examiner treats "the direction" as "the directionality", and Examiner treats the reconstructed input as "an outcome". Claim 11 is rejected because it recites the same indefinite limitations as claim 2, and claim 13 is rejected because it recites the same indefinite limitations as claim 4. Claims 12-13 are rejected for failing to cure the deficiencies of claim 11. Claim 17 is rejected because it recites the same indefinite limitations as claim 2, and claim 19 is rejected because it recites the same indefinite limitations as claim 4. Claims 18-19 are rejected for failing to cure the deficiencies of claim 17.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-9 recite a system comprising a processor (a system), claims 10-15 recite a method, claims 16-20 recite a non-transitory computer readable medium (a product), and claims 21-23 recite a system comprising a processor (a system). Each of a system, a method, and a product falls under one of the four statutory categories of patent eligible subject matter.

Claim 1

Step 2A Prong 1: "Decompose the time series of data into a plurality of clusters to generate the mixture model, each cluster from among the plurality of clusters comprising a normal distribution of a respective subset of the plurality of data values from the time series of data" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
"For each data value in the time series of data: identify a corresponding cluster, from among the plurality of clusters of the mixture model, against which the data value is to be normalized" is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.

"Determine a normalization value for the corresponding cluster" is a mathematical calculation. In specification paragraph [0035], Equation 3 discloses a formula for normalizing a data value; determining a normalization value includes calculating a mean or variance of the closest cluster.

"Normalize the data value based on the normalization value" is a mathematical calculation. In specification paragraph [0035], Equation 3 discloses a formula for normalizing a data value.

Step 2A Prong 2: "A mixture model that approximates a non-normal distribution of sequential data" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). "A machine-learning model trained on one or more sets of time series of data" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). "A processor programmed to: access a time series of data having a plurality of data values that exhibit a non-normal distribution, each data value from among the plurality of data values corresponding to a point in time in the time series of data" amounts to an insignificant extra-solution activity under MPEP 2106.05(g). A processor is a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). "Provide the normalized data values to the machine-learning model trained to predict a directionality of the time series of data" amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f). Using the machine-learning model amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f).
The additional elements as disclosed above, alone or in combination, do not integrate the abstract ideas into a practical application, as they are mere insignificant extra-solution activities, as disclosed, in combination with generic computer functions that are implemented to perform the abstract ideas disclosed above. The claim is directed to an abstract idea.

Step 2B: "A mixture model that approximates a non-normal distribution of sequential data" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). "A machine-learning model trained on one or more sets of time series of data" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). "A processor programmed to: access a time series of data having a plurality of data values that exhibit a non-normal distribution, each data value from among the plurality of data values corresponding to a point in time in the time series of data" is analogous to retrieving information from memory, which the courts have recognized as a well-understood, routine, conventional activity under MPEP 2106.05(d)(II). A processor is a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). "Provide the normalized data values to the machine-learning model trained to predict a directionality of the time series of data" amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f). Using the machine-learning model amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f).

The additional elements as disclosed above, in combination with the abstract ideas, are not sufficient to amount to significantly more than the abstract ideas, as they are well-understood, routine and conventional activities, as disclosed, in combination with generic computer functions that are implemented to perform the abstract ideas disclosed above.
The claim is not patent eligible.

Claim 2 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. "Compare the outcome to an output" is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. "Predict the directionality based on the comparison" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: "The machine-learning model comprises an autoencoder" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). "To generate, using the machine-learning model, the prediction, the processor is further programmed to: encode, by the autoencoder, an outcome based on the normalized data values" amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f). The claim is not patent eligible.

Claim 3 incorporates the rejection from claim 2. Step 2A Prong 1: The abstract ideas from claim 2 are incorporated. "Generate a compressed input based on the normalized data values that is a reduced version of the time series of data" is a mathematical calculation and a judgement mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: "The autoencoder comprises a trained encoder" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 4 incorporates the rejection from claim 3. Step 2A Prong 1: The abstract ideas from claim 3 are incorporated. "Generate a reconstructed input from the compressed input generated by the encoder" is a mathematical calculation and a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
"Using the reconstructed input to make the prediction of the direction and/or magnitude" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: "The autoencoder comprises a trained decoder" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 5 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. "Identify the corresponding normal distribution" is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. "Determine a distance between the data value to each normal distribution from among the plurality of normal distributions" is a mathematical calculation. In specification paragraph [0035], the sentence in lines 5-6 discloses the distance may be a difference between the particular data value and the mean of the k-cluster. "Select the corresponding normal distribution that is closest to the data value based on the determined distances" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 6 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. "Identify a number of the plurality of clusters to be used" is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. "The time series of data is approximated based on the plurality of clusters" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 7 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. "Decompose the time series of data into a gaussian mixture comprising overlapping clusters of normal distributions" is a judgement and evaluation mental process based on a mathematical calculation which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 8 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. "Decompose the time series of data into a gaussian mixture comprising non-overlapping clusters of normal distributions" is a judgement and evaluation mental process based on a mathematical calculation which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 9 incorporates the rejection from claim 1. Step 2A Prong 1: The abstract ideas from claim 1 are incorporated. Determining the normalization value by determining a mean or variance of the corresponding cluster is a mathematical calculation. In specification paragraph [0035], Equation 3 discloses a normalization value is a mean or variance of the closest cluster. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.
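For readers following the §101 analysis, the cluster-based normalization recited in claims 1 and 5-9 can be sketched in a few lines of Python. This is a hypothetical illustration only, not the applicant's implementation: a crude quantile split stands in for fitting a Gaussian mixture model, and "Equation 3" of paragraph [0035] is approximated as z-scoring each value against the closest cluster's mean and standard deviation.

```python
import numpy as np

def decompose(values, k=3):
    """Crudely split a 1-D series into k clusters of (mean, std).
    A stand-in for fitting a Gaussian mixture model (claim 1's
    'plurality of clusters', each a normal distribution)."""
    edges = np.quantile(values, np.linspace(0, 1, k + 1))
    clusters = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        members = values[(values >= lo) & (values <= hi)]
        clusters.append((members.mean(), members.std() + 1e-9))
    return clusters

def normalize(values, clusters):
    """For each data value: pick the closest cluster by distance to
    its mean (claim 5), take that cluster's mean/std as the
    normalization value (claim 9), and z-score against it
    (approximating Equation 3 of paragraph [0035])."""
    out = []
    for x in values:
        mu, sigma = min(clusters, key=lambda c: abs(x - c[0]))
        out.append((x - mu) / sigma)
    return np.array(out)

# Toy bimodal (non-normal) series: two regimes with different scales.
series = np.concatenate([np.random.default_rng(0).normal(0, 1, 200),
                         np.random.default_rng(1).normal(8, 2, 200)])
z = normalize(series, decompose(series))
```

The normalized values `z` would then be the input to the claimed machine-learning model; the point of per-cluster normalization is that each regime is rescaled by its own statistics rather than by global ones.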
Claims 10-15 each recite a method which incorporates the same features as the system of claims 1-6, respectively, and are therefore rejected for at least the same reasons.

Claim 16 recites a product which incorporates the same features as the system of claim 1 and is therefore rejected for at least the same reasons. In Step 2A Prong 2 and Step 2B, "a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, programs the processor" amounts to generic computer components for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claims 17-20 each recite a product which incorporates the same features as the system of claims 2-5, respectively, and are therefore rejected for at least the same reasons.

Claim 21

Step 2A Prong 1: "Merge the output from each of the plurality of RNNs" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. "Generate a prediction based on the merged output" is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. The claim recites an abstract idea.

Step 2A Prong 2 and Step 2B: "A plurality of recursive neural networks (RNNs) configured to operate in parallel to collectively form a parallel neural network architecture, each neural network from among the plurality of RNNs comprising: an input layer that receives a time series of data, one or more RNN layers, and one or more dense layers" amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
"A processor programmed to: provide each RNN, from among the plurality of RNNs, with a respective time series of data, each respective time series of data comprising sequential data values that vary independently of one another over time" amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f). "Obtain an output from a last one of the one or more dense layers of each RNN" amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f).

The additional elements as disclosed above, alone or in combination, do not integrate the abstract ideas into a practical application, as they are mere generic computer functions that are implemented to perform the abstract ideas disclosed above. The claim is directed to an abstract idea. The additional elements as disclosed above, in combination with the abstract ideas, are not sufficient to amount to significantly more than the abstract ideas, as they are generic computer functions that are implemented to perform the abstract ideas disclosed above. The claim is not patent eligible.

Claim 22 incorporates the rejection from claim 21. Step 2A Prong 1: The abstract ideas from claim 21 are incorporated. "Generate a mixture model for each of the respective time series of data, the mixture model comprising a plurality of clusters of normal distributions that together approximates the respective time series of data" is a judgement and evaluation mental process based on mathematical calculations which can reasonably be performed in the human mind with the aid of pencil and paper. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 23 incorporates the rejection from claim 22. Step 2A Prong 1: The abstract ideas from claim 22 are incorporated.
"Normalize values of each of the respective time series of data based on the mixture model generated for the respective time series of data" is a mathematical calculation. In specification paragraph [0035], Equation 3 discloses a formula for normalizing a data value. Step 2A Prong 2 and Step 2B: The processor amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 5-7, 9-10, 14-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pallath et al. (US 20180150547 A1) in view of Abbaszadeh et al. (US 20200067969 A1) and Ouyang et al. (US 20190036795 A1).

Regarding claim 1, Pallath teaches:

A system, comprising: … a machine-learning model trained on one or more sets of time series of data; and ([0062]-[0069] discloses a trained Random Forest algorithm)

a processor programmed to: ([0104], lines 1-2)

access a time series of data having a plurality of data values that exhibit a non-normal distribution, each data value from among the plurality of data values corresponding to a point in time in the time series of data; ([0023], lines 1-5; [0027] on page 3, col. 1, lines 1-12; [0040] discloses receiving time series data that exhibit non-normal underlying structures.)

decompose the time series of data into a plurality of clusters to generate the … for each data value in the time series of data: identify a corresponding cluster, … ([0048], lines 1-3)

provide the … generate, using the machine-learning model, a prediction relating to the directionality and/or magnitude of the time series of data. ([0037], final 2 lines and [0097]. A "directionality" is an increase or decrease in the usage relative to the previous time points, and a "magnitude" is a value of the usage.)
However, Pallath does not explicitly teach: generate the mixture model; identify a corresponding cluster, from among the plurality of clusters of the mixture model, against which the data value is to be normalized, determine a normalization value for the corresponding cluster, and normalize the data value based on the normalization value; provide the normalized data values.

But Abbaszadeh teaches: generate the mixture model, each cluster from among the plurality of clusters comprising a normal distribution ([0096] and [0097], lines 1-10)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used Abbaszadeh's Gaussian mixture model (GMM) for Pallath's clustering. A motivation for the combination is that a GMM may allow for the building of a complex probability distribution from a linear superposition of simpler components. Gaussian distributions may be the most common choice as mixture components because of the mathematical simplicity of parameter estimation as well as their ability to perform well in many situations.
(Abbaszadeh, [0096])

However, Pallath and Abbaszadeh do not explicitly teach: identify a corresponding cluster, from among the plurality of clusters of the mixture model, against which the data value is to be normalized, determine a normalization value for the corresponding cluster, and normalize the data value based on the normalization value; provide the normalized data values.

But Ouyang teaches: identify a corresponding cluster, from among the plurality of clusters …, determine a normalization value for the corresponding cluster, and normalize the data value based on the normalization value; ([0080]-[0082] and reference claim 6, lines 1-7)

provide the normalized data values ([0083], lines 1-2 and reference claim 6, lines 8-9)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have standardized every data point in a particular cluster in Pallath and Abbaszadeh, and to have applied the standardized data points. A motivation for the combination is that standardization makes the data points within a cluster more comparable to one another.

Regarding claim 5, the combination of Pallath, Abbaszadeh, and Ouyang teaches the system of claim 1, and Pallath teaches:

wherein to identify the corresponding normal distribution, the processor is programmed to: determine a distance between the data value to each normal distribution from among the plurality of normal distributions; and ([0048], lines 3-14; [0051], lines 1-6)

select the corresponding normal distribution that is closest to the data value based on the determined distances. ([0030] and [0051], lines 1-6)

Regarding claim 6, the combination of Pallath, Abbaszadeh, and Ouyang teaches the system of claim 1, and Pallath teaches: wherein the processor is further programmed to: identify a number of the plurality of clusters to be used, wherein the time series of data is approximated based on the plurality of clusters.
(All of [0047] and [0048], lines 1-6)

Regarding claim 7, the combination of Pallath, Abbaszadeh, and Ouyang teaches the system of claim 1, and Pallath teaches: wherein to decompose the time series of data, the processor is further programmed to decompose the time series of data into …

However, Pallath does not explicitly teach: a gaussian mixture comprising overlapping clusters of normal distributions.

But Abbaszadeh teaches: a gaussian mixture comprising overlapping clusters of normal distributions. ([0098], lines 1-3)

A motivation for the combination is the same as the motivation given for claim 1.

Regarding claim 9, the combination of Pallath, Abbaszadeh, and Ouyang teaches the system of claim 1, wherein, to determine the normalization value, the processor is programmed to: Pallath teaches: determine a mean or variance of the corresponding cluster. ([0029], lines 18-end and [0030] discloses determining a centroid of a corresponding cluster, which is a mean value.) Abbaszadeh at [0097], lines 4-10 discloses that a centroid is a mean. Determining the normalization value is recited as an intended effect of determining a mean. In the combination of references, Pallath's system determines a centroid, and then Ouyang's system determines the normalization value.

Claims 10 and 14-15 each recite a method which incorporates the same features as the system of claims 1 and 5-6, respectively, and are therefore rejected for at least the same reasons.

Claim 16 recites a product which incorporates the same features as the system of claim 1 and is therefore rejected for at least the same reasons. Pallath teaches: A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, programs the processor to: ([0104], lines 4-8)

Claim 20 recites a product which incorporates the same features as the system of claim 5 and is therefore rejected for at least the same reasons.

Claims 2-4, 11-13, and 17-19 are rejected under 35 U.S.C.
103 as being unpatentable over Pallath et al. (US 20180150547 A1) in view of Abbaszadeh et al. (US 20200067969 A1), Ouyang et al. (US 20190036795 A1), and Cheng et al. (US 20160093048 A1).

Regarding claim 2, the combination of Pallath, Abbaszadeh, and Ouyang teaches the system of claim 1, and Pallath teaches:

… and wherein to generate, using the machine-learning model, the prediction, the processor is further programmed to: encode, … compare the outcome to an output; and ([0051] discloses refining the number of clusters by increasing a number of clusters while reducing a distance between each cluster. An "outcome" is a first cluster, an "output" is a second cluster, and the comparison is a distance between the clusters.)

predict the directionality based on the comparison. ([0052]-[0056], where predicting the future data points in the time series is based on the optimized clustering algorithm.)

However, Pallath and Abbaszadeh do not explicitly teach: wherein the machine-learning model comprises an autoencoder; encode, by the autoencoder, an outcome based on the normalized data values.

But Ouyang teaches: an outcome based on the normalized data values; ([0080]-[0083] and reference claim 6)

A motivation for the combination is the same as the motivation given for claim 1.

However, Pallath, Abbaszadeh, and Ouyang do not explicitly teach: wherein the machine-learning model comprises an autoencoder, and encode, by the autoencoder, an outcome.

But Cheng teaches: wherein the machine-learning model comprises an autoencoder, and encode, by the autoencoder, an outcome ([0018])

It would have been obvious to a person having ordinary skill in the art to have encoded Pallath's symbols using Cheng's autoencoder. In the combination of references, Examiner treats the machine-learning model as Pallath's Random Forest algorithm in combination with Cheng's autoencoder, and the normalized data values are provided to the Random Forest portion of the machine-learning model.
A motivation for the combination is that Cheng's encoder can be trained to reduce the dimensionality of input data in an unsupervised manner.

Regarding claim 3, the combination of Pallath, Abbaszadeh, Ouyang, and Cheng teaches the system of claim 2.

However, Pallath and Abbaszadeh do not explicitly teach: wherein the autoencoder comprises an encoder trained to generate a compressed input based on the normalized data values.

But Ouyang teaches: based on the normalized data values ([0080]-[0083] and reference claim 6)

A motivation for the combination is the same as the motivation given for claim 1.

However, Pallath, Abbaszadeh, and Ouyang do not explicitly teach: wherein the autoencoder comprises an encoder.

But Cheng teaches: wherein the autoencoder comprises an encoder ([0018], lines 4-10)

A motivation for the combination is the same as the motivation given in claim 2.

Regarding claim 4, the combination of Pallath, Abbaszadeh, Cheng, and Ouyang teaches the system of claim 3, and Pallath teaches:

make the prediction of the direction and/or magnitude. ([0037], final 2 lines and [0097]. A "directionality" is an increase or decrease in the usage relative to the previous time points, and a "magnitude" is a value of the usage.)

However, Pallath, Abbaszadeh, and Ouyang do not explicitly teach: wherein the autoencoder comprises a decoder trained to generate a reconstructed input from the compressed input generated by the encoder, the reconstructed input being used to make the prediction.

But Cheng teaches: wherein the autoencoder comprises a decoder trained to generate a reconstructed input from the compressed input generated by the encoder, the reconstructed input being used to make the prediction ([0018], lines 10-16)

A motivation for the combination is the same as the motivation given in claim 2.
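The encode / compress / reconstruct pipeline at issue in claims 2-4 can be illustrated with a linear stand-in for a trained autoencoder. This is a minimal sketch under stated assumptions, not Cheng's or the applicant's disclosed network: a PCA-style projection plays the role of the learned encoder, and its transpose plays the decoder; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # toy matrix of normalized data values

# 'Encoder': project onto the top-2 principal directions, giving the
# compressed input of claim 3 (a reduced version of the input data).
X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
W = Vt[:2].T                    # 8-dim input -> 2-dim bottleneck
compressed = X_c @ W

# 'Decoder': map the bottleneck back up, giving the reconstructed
# input of claim 4.
reconstructed = compressed @ W.T + X.mean(axis=0)

# The reconstruction (or its error) could then feed a downstream
# directionality prediction, per the claims' "outcome"/"output" framing.
err = np.mean((X - reconstructed) ** 2)
```

A trained nonlinear autoencoder would replace the fixed SVD projection with learned encoder and decoder weights, but the data flow (input, bottleneck, reconstruction) is the same.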
Claims 11-13 each recite a method which incorporates the same features as the system of claims 2-4, respectively, and are therefore rejected for at least the same reasons. Claims 17-19 each recite a product which incorporates the same features as the system of claims 2-4, respectively, and are therefore rejected for at least the same reasons.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Pallath et al. (US 20180150547 A1) in view of Abbaszadeh et al. (US 20200067969 A1), Ouyang et al. (US 20190036795 A1), and Parlak et al. (US 20200159925 A1).

Regarding claim 8, the combination of Pallath, Abbaszadeh, and Ouyang teaches: The system of claim 1,

Pallath teaches: wherein to decompose the time series of data, the processor is further programmed to decompose the time series of data into

Abbaszadeh at [0098], lines 1-3 teaches “GMM is a soft clustering method (i.e., overlapping clusters)”. Therefore, Pallath and Abbaszadeh do not explicitly teach: a gaussian mixture comprising non-overlapping clusters of normal distributions.

But Parlak teaches: a gaussian mixture comprising non-overlapping clusters of normal distributions. ([0032], lines 1-10)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used Parlak’s GMM to perform hard clustering. A motivation for the combination is to assign a data point to exactly one cluster. (Parlak, [0032])

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Ranjan et al. (US 20220103444 A1) in view of Shamir et al. (US 20210158156 A1).
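The soft-versus-hard distinction the claim-8 rejection turns on, between a GMM with overlapping clusters (as Examiner reads Abbaszadeh) and one assigning each point to exactly one cluster (as Examiner reads Parlak), can be illustrated with a toy two-component one-dimensional mixture. All parameters below are invented for this sketch and come from neither reference.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated Gaussian populations standing in for time-series values.
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(6.0, 1.0, 100)])

means = np.array([0.0, 6.0])
stds = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def gauss_pdf(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Soft clustering: every point gets a responsibility for every component,
# so the clusters overlap (the reading attributed to Abbaszadeh [0098]).
dens = weights * gauss_pdf(x[:, None], means, stds)   # shape (200, 2)
resp = dens / dens.sum(axis=1, keepdims=True)

# Hard clustering: each point is assigned to exactly one component, giving
# non-overlapping clusters (the reading attributed to Parlak [0032]).
hard = resp.argmax(axis=1)
```

The same fitted mixture supports both readings: `resp` keeps the fractional memberships, while `hard` collapses them so that each data point belongs to exactly one cluster.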
Regarding claim 21, Ranjan teaches: each neural network a processor programmed to: provide each RNN, from among the plurality of RNNs, with a respective time series of data, each respective time series of data comprising sequential data values that vary independently of one another over time; ([0047] and [0121]-[0127] disclose providing time-series network model 700 with multivariate time-series data 702. The data values are voltage, temperature, etc. over time. The model 700 is an RNN because it includes recurrent layers. The data values of voltage and temperature vary independently from one another at least because they are different types of measurements.) obtain an output from a last one of the one or more dense layers of each RNN; ([0127], final 3 lines) … generate a prediction based on the

However, Ranjan does not explicitly teach: A system, comprising: a plurality of recursive neural networks (RNNs) configured to operate in parallel to collectively form a parallel neural network architecture, each neural network from among the plurality of RNNs comprising: layers … merge the output from each of the plurality of RNNs; generate a prediction based on the merged output.

But Shamir teaches: A system, comprising: a plurality of recursive neural networks (RNNs) configured to operate in parallel to collectively form a parallel neural network architecture, each neural network from among the plurality of RNNs comprising [layers] ([0043], lines 4-8, [0050], lines 1-5, [0067], lines 1-5, and [0088], lines 3-12 disclose three RNNs 22a-c configured to operate in parallel, each RNN comprising layers.) … merge the output from each of the plurality of RNNs; ([0050], lines 1-5 discloses aggregating the respective outputs to generate the ensemble output.) generate a prediction based on the merged output. ([0050], lines 1-5 discloses aggregating the respective outputs to generate the ensemble output, which is a prediction.)
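The claim-21 arrangement as mapped above (a plurality of RNNs operating in parallel, each fed its own time series, with the outputs merged into a single prediction) can be sketched with toy Elman cells. The weights are random and untrained, and every name and size here is invented; nothing is taken from Ranjan or Shamir.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rnns, seq_len, hidden = 3, 20, 4

def run_rnn(series, W_in, W_h, W_out):
    """One tiny recurrent network: a recurrent layer plus a final dense output."""
    h = np.zeros(hidden)
    for x_t in series:
        h = np.tanh(W_in * x_t + W_h @ h)   # recurrent update over time steps
    return W_out @ h                        # output of the last dense layer

outputs = []
for _ in range(n_rnns):
    # Each parallel RNN receives its own time series (e.g. voltage,
    # temperature, ... in the Ranjan mapping).
    series = rng.normal(size=seq_len)
    W_in = rng.normal(size=hidden)
    W_h = rng.normal(scale=0.3, size=(hidden, hidden))
    W_out = rng.normal(size=hidden)
    outputs.append(run_rnn(series, W_in, W_h, W_out))

merged = float(np.mean(outputs))      # merge the parallel outputs (simple average)
prediction = float(np.sign(merged))   # e.g. a directionality read off the merged output
```

Averaging is only one possible merge; Shamir's aggregation into an ensemble output could equally be a weighted sum or a learned combining layer.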
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have generated an ensemble of Ranjan’s RNNs based on Shamir’s techniques. A motivation for the combination is that use of an ensemble improves the reproducibility of the models, so that the predictions of two independently trained ensembles diverge less from one another. (Shamir, [0038])

Claims 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ranjan et al. (US 20220103444 A1) in view of Shamir et al. (US 20210158156 A1) and Abbaszadeh et al. (US 20200067969 A1).

Regarding claim 22, the combination of Ranjan and Shamir teaches: The system of claim 21, wherein the processor is further programmed to:

However, Ranjan and Shamir do not explicitly teach: generate a mixture model for each of the respective time series of data, the mixture model comprising a plurality of clusters of normal distributions that together approximates the respective time series of data.

But Abbaszadeh teaches: generate a mixture model for each of the respective time series of data, the mixture model comprising a plurality of clusters of normal distributions that together approximates the respective time series of data. ([0070], lines 6-10, [0096] and [0097], lines 1-10)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have clustered Ranjan and Shamir’s time series data based on Abbaszadeh’s GMM. A motivation for the combination is that a GMM may allow for the building of a complex probability distribution from a linear superposition of simpler components. Gaussian distributions may be the most common choice as mixture components because of the mathematical simplicity of parameter estimation as well as their ability to perform well in many situations.
(Abbaszadeh, [0096])

Regarding claim 23, the combination of Ranjan, Shamir, and Abbaszadeh teaches: The system of claim 22, wherein the processor is further programmed to:

Ranjan teaches: normalize values of each of the respective time series of data ([0093], lines 7-14)

However, Ranjan and Shamir do not explicitly teach: normalize values of each of the respective time series of data based on the mixture model generated for the respective time series of data.

Abbaszadeh teaches a mixture model generated for each respective time series of data at [0096] and [0097], lines 1-10. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied Ranjan’s standardization to the time series data belonging to each of Abbaszadeh’s clusters. A motivation for the combination is the same as the motivation given for claim 22.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Asher H. Jablon, whose telephone number is (571)270-7648. The examiner can normally be reached Monday - Friday, 9:00 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at (571)270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.H.J./
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

Aug 09, 2022
Application Filed
Feb 09, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572794
SYSTEM AND METHOD FOR AUTOMATED OPTIMAZATION OF A NEURAL NETWORK MODEL
2y 5m to grant Granted Mar 10, 2026
Patent 12456047
Distilling from Ensembles to Improve Reproducibility of Neural Networks
2y 5m to grant Granted Oct 28, 2025
Patent 12450493
DIMENSION REDUCTION IN THE CONTEXT OF UNSUPERVISED LEARNING
2y 5m to grant Granted Oct 21, 2025
Patent 12437198
CLASSIFICATION OF A NON-MONETARY DONATION BASED ON MACHINE LEARNING
2y 5m to grant Granted Oct 07, 2025
Patent 12437215
DEVICE, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EXECUTING INFERENCE USING INPUT SIGNAL
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
88%
With Interview (+43.9%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 90 resolved cases by this examiner. Grant probability derived from career allow rate.
