Prosecution Insights
Last updated: April 19, 2026
Application No. 17/383,289

Machine Learning Portfolio Simulating and Optimizing Apparatuses, Methods and Systems

Status: Final Rejection (§103)
Filed: Jul 22, 2021
Examiner: FIGUEROA, KEVIN W
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: FMR LLC
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 0m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 70% (252 granted / 362 resolved) — above average, +14.6% vs TC avg
Interview Lift: strong, +21.0% higher allowance on resolved cases with interview
Typical Timeline: 4y 0m average prosecution; 20 applications currently pending
Career History: 382 total applications across all art units

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center average figures are estimates. Based on career data from 362 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Response to Arguments

Applicant’s arguments have been fully considered. The claim objections and 112(b) rejections have been withdrawn due to claim amendments. Applicant’s arguments regarding the 101 rejections have been fully considered; however, in light of new guidance, those rejections are withdrawn. Applicant’s arguments regarding the 103 rejection have been fully considered. In light of the claim amendments, the rejection is updated to show how the current references still read on the claim language.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim(s) 1, 8-9, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buehler, Hans, et al., "A data-driven market simulator for small data environments," in view of Zhu et al., US 2021/0027379.

Regarding claim 1, Buehler teaches “a machine learning portfolio generating apparatus” (abstract: “A deep-learning neural network can be trained to model a probability distribution of the asset-price trends for a future time period using a training data set, which can include asset-price trends of a plurality of assets over a past time period and a latent vector sampled from a prior distribution associated with the asset-price trends of a plurality of assets”);

“generate a set of simulated market scenarios data structures” (abstract: “Neural network based data-driven market simulation unveils a new and flexible way of modelling financial time series, without imposing assumptions on the underlying stochastic dynamics”);

“with a variational autoencoder [via a network accessible server cloud]” (pg. 2, last ¶: “we generate synthetic market scenarios using variational autoencoders both in the signature-based setting and in the standard returns-based setting”);

“in which the variational autoencoder data structure is structured as including set of latent variables generated via a neural network encoder, and” (pg. 15: “The encoder network has one hidden layer and two latent layers, with 50 nodes on the hidden layer.”);

“in which the set of latent variables are simulated with neural networks as decoder” (pg. 15: “The decoder network has one hidden layer with 50 units and activation function leaky (parametric) ReLU with parameter α = 0.3”);

“such that decoding of the simulated market scenarios data structures follow dynamic dependencies and volatilities of historical market risk factors” (pg. 13, Step 3(c): “In the refined version we also calculate and store relevant market conditions such as current level of volatility, current level of the index, signature of the previous path segment”);

“distribute distribution and dependency joint distribution structures with a transfer layer between the encoder and the decoder in which the latent space variables are structured take adopting the distribution and dependency joint distribution structures” (pg. 15: “The encoder network has one hidden layer and two latent layers,” in which the latent layers allow latent space variables to take on distributions and are considered the transfer layer after the encoding);

“tuning a transfer layers codec in which the transfer layers codec is the encoder and the decoder and the transfer layer in between, in which the tuning is structured including a number of the latent space variables, a number of neurons in the transfer layers codec, a number of layers of the transfer layers codec in which the tuning is structure as optimizing an overall fit between the set of simulated market scenarios data structures and a set of historical market scenarios.” (previous citation: “The encoder: The encoder network has one hidden layer and two latent layers, with 50 nodes on the hidden layer. The activation function is a leaky (parametric) ReLU with parameter α = 0.3. The decoder: The decoder network has one hidden layer with 50 units and activation function leaky (parametric) ReLU”).

The Buehler reference has been addressed above. More specifically, Zhu teaches “at least one memory; a component collection stored in the at least one memory; any of at least one processor disposed in communication with the at least one memory the any of at least one processor executing processor-executable instructions from the component collection, storage of the component collection structured with processor-executable instructions comprising:” (Zhu [0003]: “A system, in one aspect, may include a hardware processor. A memory device may be coupled with the hardware processor. The hardware processor operable to create a training data set including at least asset-price trends of a plurality of assets over a past time period and a latent vector sampled from a prior distribution”);

“[…] via a network accessible server cloud […]” (Zhu [0027]: “FIG. 2 is a diagram illustrating an example of computer system architecture that can implement generative network portfolio management system in one embodiment. The system 100 can be a cloud-based system, such as a computing system being implemented by a cloud-computing platform”).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to combine the teachings of Buehler with those of Zhu, since a combination of known methods would yield predictable results. As shown in Zhu, cloud-based systems are known in the art, in particular with portfolio simulations using variational autoencoders. Thus the system of Buehler operating in a cloud environment would work in a predictable manner, since such systems are known to exist and work.
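For orientation, the encoder/decoder architecture the rejection quotes from Buehler (one 50-unit hidden layer, two latent layers for mean and log-variance, leaky ReLU with α = 0.3) is a standard variational autoencoder. Below is a minimal PyTorch sketch; the input and latent dimensions are illustrative assumptions, not taken from Buehler's code.

```python
# Minimal VAE sketch matching the quoted Buehler architecture: encoder with
# one 50-unit hidden layer, two latent layers (mean and log-variance), and a
# decoder with one 50-unit hidden layer, all with leaky ReLU (alpha = 0.3).
# Input size (n_features) and latent size (n_latent) are illustrative.
import torch
import torch.nn as nn

class MarketVAE(nn.Module):
    def __init__(self, n_features=20, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 50), nn.LeakyReLU(0.3))
        self.mu = nn.Linear(50, n_latent)       # latent layer 1: mean
        self.log_var = nn.Linear(50, n_latent)  # latent layer 2: log-variance
        self.decoder = nn.Sequential(nn.Linear(n_latent, 50), nn.LeakyReLU(0.3),
                                     nn.Linear(50, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterize
        return self.decoder(z), mu, log_var

# Simulation step: sample latent vectors from the prior and decode them.
model = MarketVAE()
with torch.no_grad():
    scenarios = model.decoder(torch.randn(1000, 8))  # 1000 simulated scenarios
```

The last two lines are the step the claim language maps onto in the rejection: sampling latent variables from the prior and decoding them into simulated market scenarios.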
Regarding claim 8, the Buehler and Zhu references have been addressed above. Zhu further teaches “filter, the set of simulated market scenarios data structures associated with a time period length based on specified ranges of allowable values for specified customized market factors” (see Zhu claims 4-6: “wherein the training and the portfolio optimization is performed based on configurable parameters […] wherein at least some of the configurable parameters are received from user input […] wherein at least some of the configurable parameters are adjustable”; the parameters include the time periods, which are therefore configurable, i.e., filtering out specific time ranges).

Regarding claim 9, the Buehler and Zhu references have been addressed above. Zhu further teaches “further, comprising: filter, the set of simulated market scenarios data structures associated with a time period length based on specified business cycle settings” (previous citation; specified business cycle settings are arbitrary settings).

Regarding claim 15, the Buehler and Zhu references have been addressed above. Buehler further teaches “further, comprising: in which the set of simulated market scenarios data structures are further generated with a set of multi-variate mixture data structures” (pg. 12: “Given a financial data stream” and footnote 10: “We assume the data stream to be univariate for notational simplicity but it is straightforward to extend the methodology to multivariate data streams too.”).

Claim(s) 2-3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buehler and Zhu in view of Hsu et al., US 2012/0246094.

Regarding claim 2, the Buehler and Zhu references have been addressed above. While the references have been addressed above, Hsu more specifically teaches “determine, a historical data set” (Hsu [0098]: “the mean-variance optimized portfolio based on 5-year historical averages,” i.e., a historical data set); “a rolling window period length” ([0259]: “we choose rolling windows of length 5 years (60 months) for estimation”); and “a set of market factors” (abstract: “receiving data about a plurality of monthly returns for multiple years for a universe of asset classes; receiving data about investment returns; extracting a plurality of orthogonal risk factors, at least one factor characteristic, and an asset class-factor translation matrix by principal component analysis (PCA) from the data about the universe of asset classes”); “determine, a set of rolling window periods with the historical data set and the rolling window period length” (Hsu [0098]: “the mean-variance optimized portfolio based on 5-year historical averages”); and “calculate, for each market factor from the set of market factors, for each rolling window period from the set of rolling window periods, a change to the respective market factor during the respective rolling window period” (Hsu [0140]: “as the risk factors may change over time, a revised investible portfolio may be provided to the portfolio manager”); “each historical market scenario from the set of historical market scenarios structured comprising calculated changes to the set of market factors during a rolling window period” (Hsu [0021]: “the system, method or computer program product may be adapted where the rebalancing may include electronically rebalancing periodically, which may include at least one of: rebalancing annually; rebalancing by accounting period; rebalancing monthly; rebalancing quarterly; or rebalancing biannually.”).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to combine the teachings of Buehler and Zhu with those of Hsu, since a combination of known methods would yield predictable results. As shown in Hsu, it is known in the art of portfolio optimization to choose specific window periods (rolling windows) for better optimization. As such, combining these techniques with the references above would allow for better portfolio simulation.

Regarding claim 3, the Buehler, Zhu, and Hsu references have been addressed above. Hsu further teaches “further, comprising: determine, a delta between values of the market factor at a beginning time point and an ending time point of the rolling window period” (Hsu [0259]: “For this reason we choose rolling windows of length 5 years (60 months) for estimation. Rebalancing is done at the end of each quarter; monthly adjustments create too much turnover, even for some benchmarks, whereas annual changes would likely hinder the timing ability of some strategies”).
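The rolling-window limitation at the center of claims 2-3 (a delta between a factor's value at the beginning and end of each window) is straightforward to express in pandas. A hypothetical sketch follows; only the 60-month window length comes from the Hsu citation, while the factor names and synthetic data are placeholders.

```python
# Hypothetical rolling-window delta computation for claims 2-3. Only the
# 60-month ("5 years (60 months)") window length comes from the Hsu citation;
# the factor names and synthetic data are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2000-01-01", periods=240, freq="MS")  # 20 years, monthly
factors = pd.DataFrame(rng.normal(size=(240, 3)).cumsum(axis=0),
                       index=idx, columns=["rates", "equity", "credit"])

window = 60  # rolling window period length, in months
# Delta for each factor over each window: value at the window's ending time
# point minus value at its beginning time point.
deltas = (factors - factors.shift(window - 1)).dropna()
# Each row of `deltas` is one historical market scenario: the calculated
# changes to the set of market factors during one rolling window period.
print(deltas.head())
```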
Claim(s) 5-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buehler and Zhu in view of Wilhelmsson, Per, "Hierarchical Clustering of Time Series using Gaussian Mixture Models and Variational Autoencoders" [herein Wil].

Regarding claim 5, the Buehler and Zhu references have been addressed above. Buehler further teaches “comprising: structure a deep learning neural network for a time period bucket” (abstract: “Neural network based data-driven market simulation unveils a new and flexible way of modelling financial time series”). The references, however, do not explicitly teach a Gaussian mixture. Wil teaches “the trained deep learning neural network is trained to generate a set of Gaussian mixture latent variables” (Wil abstract: “comprised of a variational autoencoder to compress the series and a Gaussian mixture model to merge them into an appropriate cluster hierarchy”).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to combine the teachings of Buehler and Zhu with those of Wil since “the agglomerative hierarchical Gaussian mixture model delivers powerful results without any tuning or extra preprocessing of the data (other than the dimensionality reduction from the VAE). Including a likelihood framework into a clustering algorithm is probably the most intuitive and easy way of dealing with the problem of finding the right number of clusters in the data” (Wil pg. 63). Therefore, by combining these techniques, one would have more optimal learning.

Regarding claim 6, the Buehler, Zhu, and Wil references have been addressed above. Buehler further teaches “further, comprising: in which cloud computing clusters structured generating simulated market scenarios data structures for the time period bucket, with the trained deep learning neural network associated with the time period bucket” (abstract: “We give a brief overview of currently used generative modelling approaches and performance evaluation metrics for financial time series, and address some of the challenges to achieve good results in the latter. We also contrast some classical approaches of market simulation with simulation based on generative modelling and highlight some advantages and pitfalls of the new approach”).

Regarding claim 7, the Buehler, Zhu, and Wil references have been addressed above. Buehler further teaches “further, comprising: in which the time period bucket are structured including to: generate, a set of random values for the set of Gaussian mixture latent variables” (pg. 24 (B.2): “The basic functioning of a VAE is the following: Given one random variable z with one distribution, we can create another random variable X = g(z) with very different distribution”); and “generate a simulated market scenario, from the simulated market scenarios data structures for the time period bucket, from the generated set of random values using a neural network decoder of the trained deep learning neural network associated with the time period bucket” (pg. 15, 3.2.3: “Conditional Variational Autoencoder, adapting to specific market conditions. In order to further accommodate to the non-stationarity of the data, we now refine the VAE to certain specific market conditions: a Conditional Variational Autoencoder (CVAE),” which would use all the previous data).
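For claims 5 and 7, the claimed generation step is: draw random values for Gaussian mixture latent variables, then decode them. A short sketch of that sampling step follows; the mixture weights, means, and scales are placeholders standing in for values a model would learn, not figures from Wil or Buehler.

```python
# Sketch of claim 7's generation step: draw random values for Gaussian
# mixture latent variables, then decode them. The mixture parameters below
# are placeholders standing in for values a trained model would provide.
import torch

n_latent, n_components = 8, 3
weights = torch.tensor([0.5, 0.3, 0.2])            # mixture weights
means = torch.randn(n_components, n_latent)        # per-component means
scales = 0.5 + torch.rand(n_components, n_latent)  # per-component std devs

def sample_gmm_latents(n):
    comp = torch.multinomial(weights, n, replacement=True)  # pick a component
    return means[comp] + scales[comp] * torch.randn(n, n_latent)

z = sample_gmm_latents(1000)
# scenarios = model.decoder(z)  # decode with a trained VAE decoder, as above
```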
Claim(s) 11-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buehler and Zhu in view of the Amazon SageMaker Developer Guide (as of 03/08/2021) [herein SageMaker].

Regarding claim 11, the Buehler and Zhu references have been addressed above. While the references generally teach a cloud environment, SageMaker more specifically teaches “further, comprising: apply cloud service, Amazon Web Services (AWS) SageMaker, training, tune, and deploy deep learning models in a parallel and distributed way on multiple instances and multiple GPUs” (SageMaker pg. 1: “Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don't have to manage servers”).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to combine the teachings of Buehler and Zhu with that of SageMaker, since SageMaker is known in the art to be a robust cloud-based machine learning service that allows building, training, and deployment of machine learning models.

Regarding claim 12, the Buehler, Zhu, and SageMaker references have been addressed. SageMaker further teaches “further, comprising: manage a machine learning pipeline including any of: a machine learning service, SageMaker, in AWS to manage machine learning pipeline” (pg. 1: “Create and manage machine learning pipelines integrated directly with SageMaker jobs.”).

Regarding claim 13, the Buehler, Zhu, and SageMaker references have been addressed. SageMaker further teaches “further, comprising: structure SageMaker supporting collaboration between developers and data scientists” (previous citation; SageMaker is a cloud-based platform that allows user collaboration).

Regarding claim 14, the Buehler, Zhu, and SageMaker references have been addressed. SageMaker further teaches “further, comprising: in which SageMaker for parallel market scenario simulation structured as: create a SageMaker notebook instance with specific lifecycle structure, permissions and encryption, and network settings” (SageMaker pg. 59: “An Amazon SageMaker notebook instance is a fully managed machine learning (ML) Amazon Elastic Compute Cloud (Amazon EC2) compute instance that runs the Jupyter Notebook App,” which would have the parameters as claimed); “upload input data to S3 by providing a S3 path” (pg. 9: “The URL of the Amazon Simple Storage Service (Amazon S3) bucket where you've stored the training data.”); “structure a training job as an estimator by providing arguments including any: training script entry point, SageMaker execution role, number and type of training instance, security key, and a set of hyperparameters” (pg. 9: “To train a model in SageMaker, you create a training job. The training job includes the following information: • The URL of the Amazon Simple Storage Service (Amazon S3) bucket where you've stored the training data. • The compute resources that you want SageMaker to use for model training. Compute resources are ML compute instances that are managed by SageMaker. • The URL of the S3 bucket where you want to store the output of the job. • The Amazon Elastic Container Registry path where the training code is stored. For more information, see Docker Registry Paths for SageMaker Built-in Algorithms (p. 658).”); “trigger the training job by launching a docker container on EC2 instances with prebuilt SageMaker docker images and downloading the input data from the specified S3 path starting the training process” (pg. 1398: “Amazon SageMaker makes extensive use of Docker containers for build and runtime tasks. SageMaker provides prebuilt Docker images for its built-in algorithms and the supported deep learning frameworks used for training and inference. Using containers, you can train machine learning algorithms and deploy models quickly and reliably”); “repeat the training job on market scenarios with different delta length structured as multiple training jobs triggerable together and trained on multiple instances in parallel” (pg. 1045: “Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose.”); “deploy models as multiple SageMaker endpoints by specifying instance type and number of instances used for hosting the endpoints” (pg. 1187: “To create an endpoint that can host multiple models, use multi-model endpoints. Multi-model endpoints provide a scalable and cost-effective solution to deploying large numbers of models. They use a shared serving container that is enabled to host multiple models”); and Buehler further teaches “simulate market scenarios with different delta length with the SageMaker endpoints” (pg. 11: “The first term captures drift – i.e. the increment of a price path over a period of time. The second term indicates the volatility over the period of time (through the Lévy area)”; periods of time imply different lengths).
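The claim 14 workflow the rejection walks through (upload input data to S3, structure a training job as an estimator, trigger parallel jobs, deploy endpoints) corresponds to the SageMaker Python SDK. A hedged sketch under stated assumptions: the bucket, IAM role ARN, entry-point script, and hyperparameters below are placeholders, and actually running it requires AWS credentials.

```python
# Hedged sketch of the claim 14 workflow using the SageMaker Python SDK.
# The bucket, role ARN, entry-point script, and hyperparameters are
# placeholders; running this requires AWS credentials and service quota.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
train_s3 = session.upload_data("train.csv", bucket="my-bucket",
                               key_prefix="market-vae/input")  # upload to S3

estimator = PyTorch(
    entry_point="train_vae.py",     # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=2,               # distributed training across instances
    instance_type="ml.p3.2xlarge",  # GPU training instances
    framework_version="1.13",
    py_version="py39",
    hyperparameters={"latent_dim": 8, "epochs": 100, "delta_months": 60},
)

# Launch without blocking, so several jobs (e.g. one per delta length) can
# be triggered together and trained on multiple instances in parallel.
estimator.fit({"train": train_s3}, wait=False)

# Later, once a job has completed, deploy the model behind an endpoint by
# specifying the instance type and number of hosting instances:
# predictor = estimator.deploy(initial_instance_count=1,
#                              instance_type="ml.m5.xlarge")
```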
Allowable Subject Matter

It is noted that no individual claim feature renders the claims as a whole patentable. Each limitation indicated as allowable renders the claim patentable only when taken in combination with the other claim limitations. No prior art has been cited for claims 4, 10, and 16.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN W FIGUEROA, whose telephone number is (571) 272-4623. The examiner can normally be reached Monday-Friday, 10AM-6PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MIRANDA HUANG, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Kevin W Figueroa/
KEVIN W FIGUEROA
Primary Examiner, Art Unit 2124

Prosecution Timeline

Jul 22, 2021: Application Filed
Feb 27, 2025: Non-Final Rejection (§103)
Sep 05, 2025: Response Filed
Dec 13, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586093: SYSTEMS AND METHODS FOR FACILITATING NETWORK CONTENT GENERATION VIA A DYNAMIC MULTI-MODEL APPROACH. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12573477: MOLECULAR STRUCTURE ACQUISITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12570281: METHOD FOR EVALUATING DRIVING RISK LEVEL IN TUNNEL BASED ON VEHICLE BUS DATA AND SYSTEM THEREFOR. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12554964: CIRCUIT FOR HANDLING PROCESSING WITH OUTLIERS. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12547873: METHOD AND APPARATUS WITH NEURAL NETWORK INFERENCE OPTIMIZATION IMPLEMENTATION. Granted Feb 10, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 91% (+21.0%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate

Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
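The "with interview" figure appears to be the career allow rate plus the interview lift, added in percentage points; a one-line sanity check of that assumption:

```python
# Assumption, not stated by the source: "With Interview: 91%" looks like the
# 70% career allow rate plus the +21.0-point interview lift.
print(f"{0.70 + 0.21:.0%}")  # -> 91%
```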
