Prosecution Insights
Last updated: April 19, 2026
Application No. 18/302,712

SYSTEMS AND METHODS OF DETERMINING DYNAMIC TIMERS USING MACHINE LEARNING

Status: Non-Final OA (§103)
Filed: Apr 18, 2023
Examiner: NGUYEN, NHAT HUY T
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nasdaq Inc.
OA Round: 1 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 54% (185 granted / 341 resolved; -0.7% vs TC avg)
Interview Lift: +25.1% (strong; from resolved cases with an interview)
Typical Timeline: 3y 5m average prosecution; 59 applications currently pending
Career History: 400 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 341 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 7-9, 12-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al. (U.S. 2012/0089497, hereinafter Taylor) in view of Arik et al. (U.S. 2023/0110117, hereinafter Arik).

As to Claim 1, Taylor teaches a method of training a neural network, the method comprising:

provisioning, as part of a distributed computer system, a memory buffer (Taylor, ¶0019 lines 1-3: order book feeds) that is concurrently readable (Taylor, ¶0206 lines 2-6: "a parallel set of tree traversal engines can operate in parallel and interleave their accesses to memory. Furthermore, the SVU module may optionally cache recently accessed tree nodes in on-chip memory in order to further reduce memory read latency") and writable by a plurality of separate worker instances (Taylor, ¶0148 lines 1-14: "parallel engines update and maintain the order and price aggregation data structures in parallel. In one embodiment, the data structures are maintained in the same physical memory. In this case, the one or more order engines (worker instances) and one or more price engines (worker instances) interleave their accesses to memory, masking the memory access latency of the memory technology and maximizing throughput of the system.");

in a data preparation phase: executing, on the distributed computer system, the plurality of separate worker processes that concurrently perform generation of training data for each one of a plurality of identifiers, where each one of the plurality of separate worker processes perform, for a respective one of the plurality of identifiers (Taylor, ¶0148 lines 1-14, quoted above), at least:

(a1) obtaining, for the respective one of the plurality of identifiers, a plurality of tuples that each include: a current state, an action, a calculated reward, and a next state, wherein the current state and the next state are represented as n-dimensional vectors that are each based on state data from a data transaction processing system, where n is a number of features used in training the neural network (Taylor, ¶0019 lines 10-14, ¶0017 lines 16-21, Fig. 2(a): "The Options Price Reporting Authority (OPRA) feed is the most significant source of derivatives market data, and it belongs to the class of feeds known as "level 1" feeds. Level 1 feeds report quotes (current state), trades (an action), trade cancels (next state) and corrections (a calculated reward), and a variety of summary events". Fig. 2(a) shows an example of order book feeds); and

(a2) writing, to the memory buffer, the plurality of tuples that have been obtained, wherein obtained tuples from multiple ones of the plurality of separate worker processes are written to the memory buffer concurrently (Taylor, ¶0148 lines 1-14, quoted above).

Taylor may not explicitly disclose: storing a first neural network and a second neural network; in a training phase: performing a plurality of iterations that each include at least: (b1) sampling, from across the memory buffer that includes tuples loaded via the data preparation phase, a batch of tuples, (b2) for each tuple in the sampled batch of tuples, generating a target Q-value, (b3) calculating, based at least on the target Q-values for each tuple in the sampled batch of tuples, a loss value for the sampled batch of tuples, and (b4) updating weights of the first neural network by performing gradient descent using the calculated loss value; and after performing the plurality of iterations, updating weights of the second neural network based on a combination of the weights from the first neural network and weights of a prior instance of the second neural network, wherein the data preparation phase and the training phase are performed on state data of the data transaction processing system that is associated with a plurality of different operational periods.
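Limitations (a1) and (a2), as mapped above, describe parallel experience collection into a shared replay buffer. Below is a minimal sketch of that reading in Python; every name and size is a hypothetical illustration, not something taken from the application or the cited references.

# Sketch of claim 1's data-preparation phase, read as parallel collection
# of (current state, action, calculated reward, next state) tuples into a
# shared, concurrently writable buffer. All names are hypothetical.
import multiprocessing as mp
import random

N_FEATURES = 4  # "n" in the claim; claim 10 recites at least 100

def worker(identifier, buffer, n_steps=100):
    # One worker process per identifier (a1): generate tuples and write
    # them to the shared memory buffer concurrently with other workers (a2).
    rng = random.Random(identifier)
    state = [rng.random() for _ in range(N_FEATURES)]  # n-dimensional vector
    for _ in range(n_steps):
        action = rng.randrange(3)                      # e.g. a timer choice
        reward = rng.uniform(-1.0, 1.0)                # calculated reward
        next_state = [rng.random() for _ in range(N_FEATURES)]
        buffer.put((state, action, reward, next_state))
        state = next_state

if __name__ == "__main__":
    buffer = mp.Queue()  # shared buffer: writable by all workers at once
    workers = [mp.Process(target=worker, args=(ident, buffer))
               for ident in ("id-001", "id-002", "id-003")]
    for p in workers:
        p.start()
    replay = [buffer.get() for _ in range(3 * 100)]    # drain before join
    for p in workers:
        p.join()
    print(len(replay), "tuples collected")

A multiprocessing queue stands in for the claimed memory buffer here because several worker processes can write to it while a trainer reads from it concurrently.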
Arik teaches: storing a first neural network (Arik, ¶0061 lines 1-4, ¶0087: the encoder model is trained and stored) and a second neural network (Arik, ¶0061 lines 1-4, ¶0088: the backcast model is trained and stored); in a training phase: performing a plurality of iterations (Arik, ¶0060 lines 4-6: multiple iterations are applied to update the models) that each include at least:

(b1) sampling, from across the memory buffer that includes tuples loaded via the data preparation phase, a batch of tuples (Arik, ¶0049 last 3 lines: "For example, the observed data 210 can be financial trading data, with different entities representing different companies buying and selling financial assets on the observed market."),

(b2) for each tuple in the sampled batch of tuples, generating a target Q-value (Arik, ¶0089: "updates weights for one or both of the encoder and backcast machine learning models, based on error values between one or more data points of the backcast and the one or more data points of the masked portion of the time window,"),

(b3) calculating, based at least on the target Q-values for each tuple in the sampled batch of tuples, a loss value for the sampled batch of tuples (Arik, ¶0089, quoted above), and

(b4) updating weights of the first neural network by performing gradient descent using the calculated loss value (Arik, ¶0086 last 3 lines: "update the weights of the encoder, backcast decoder, and forecast decoder using batch gradient descent with weight updates");

and after performing the plurality of iterations, updating weights of the second neural network based on a combination of the weights from the first neural network and weights of a prior instance of the second neural network (Arik, ¶0089: "updates weights (update the weights of the prior instance of the second neural network) for one or both of the encoder (first model) and backcast machine learning models (second model), based on error values between one or more data points of the backcast and the one or more data points of the masked portion of the time window,"), wherein the data preparation phase and the training phase are performed on state data of the data transaction processing system that is associated with a plurality of different operational periods (Arik, ¶0079, Fig. 4B: the system operates on different time-series datasets stored in memory).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Taylor to be the prediction system taught by Arik, with a reasonable expectation of success. The motivation would be to "provide for self-adapting forecasting (SAF) during the training and execution of machine learning models trained for multi-horizon forecasting on time-series data" (Arik, ¶0026 lines 1-4).

As to Claim 3, in addition to Claim 1, Taylor in view of Arik teaches wherein the memory buffer includes a first memory buffer (Arik, ¶0087: the encoder model is trained and stored) and a second memory buffer (Arik, ¶0088: the backcast model is trained and stored).
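The training-phase limitations (b1)-(b4) and the final weight-combination step track a conventional deep Q-learning loop with a soft-updated target network. Below is a minimal sketch under that reading, assuming PyTorch; the architecture, sizes, and hyperparameters are hypothetical, not taken from the record.

# Sketch of claim 1's training phase read as deep Q-learning: sample a
# batch (b1), form target Q-values (b2), compute a loss (b3), take a
# gradient-descent step on the first network (b4), then softly combine
# weights into the second network. Hypothetical illustration only.
import random
import torch
import torch.nn as nn

n_features, n_actions, gamma, tau = 4, 3, 0.99, 0.05
online = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                       nn.Linear(32, n_actions))      # first neural network
target = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                       nn.Linear(32, n_actions))      # second neural network
target.load_state_dict(online.state_dict())
opt = torch.optim.SGD(online.parameters(), lr=1e-3)   # gradient descent

def train_iteration(replay, batch_size=32):
    batch = random.sample(replay, batch_size)                     # (b1)
    s, a, r, s2 = zip(*batch)
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    with torch.no_grad():
        q_target = r + gamma * target(s2).max(dim=1).values       # (b2)
    q_pred = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_pred, q_target)               # (b3)
    opt.zero_grad(); loss.backward(); opt.step()                  # (b4)

def soft_update():
    # New target weights are a combination of the online weights and the
    # target network's prior weights (Polyak averaging).
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p_o)

if __name__ == "__main__":
    # Toy replay data standing in for tuples from the data-preparation phase.
    replay = [([0.0] * n_features, 1, 0.5, [0.1] * n_features)
              for _ in range(64)]
    for _ in range(10):          # the claimed plurality of iterations
        train_iteration(replay)
    soft_update()                # then update the second network's weights

The ordering mirrors the claim: a plurality of train_iteration calls, then a single soft_update that combines the first network's weights with the second network's prior weights.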
As to Claim 4, in addition to Claim 3, Taylor in view of Arik teaches wherein the first memory buffer is associated with a first component of a reward function (Arik, ¶0087: the encoder model is trained and stored) and the second memory buffer is associated with a second component of the reward function (Arik, ¶0088: the backcast model is trained and stored).

As to Claim 7, in addition to Claim 1, Taylor in view of Arik teaches wherein the data transaction processing system is a simulation system, the method further comprising: executing a simulated matching process by the simulation system that simulates how data transaction requests are processed based on a dynamic timer value (Arik, ¶0077 lines 1-4: "The SAF system generates a forecast of one or more data points at one or more future points in time using the encoded representation of the time window, according to block 440A.").

As to Claim 8, in addition to Claim 7, Taylor in view of Arik teaches further comprising: selecting, as part of the data preparation phase, a timer value to provide to the simulation system as the dynamic timer value (Arik, ¶0032 lines 4-9, ¶0057 lines 4-7: "The window length can vary in length, up to the initial timestep (timestep zero) of the time-series data. The window length can be predetermined or received as input at prediction time").

As to Claim 9, in addition to Claim 8, Taylor in view of Arik teaches wherein the timer value is selected based on a most recent version of the first neural network or the second neural network produced from a prior iteration of the training phase (Arik, ¶0032 lines 4-9, ¶0057 lines 4-7, quoted above).

As to Claim 12, in addition to Claim 1, Taylor in view of Arik teaches wherein each of the plurality of different operational periods corresponds to a different operational day that the data transaction processing system has operated (Arik, ¶0036 last 5 lines, ¶0051 last 3 lines: "A model trained for multi-horizon forecasting can receive input time-series data time windows. A time window is defined as a range of timesteps").

As to Claim 13, Taylor teaches a computer system comprising: a processing system comprising instructions that are configured to, when executed by at least one hardware processor included with the processing system, cause the at least one hardware processor (Taylor, ¶0075 lines 1-2: processor 812 and RAM) to perform operations comprising: the remaining limitations are rejected for the same reasons as Claim 1.

Claims 15, 16, 17, 19, and 20 are rejected for the same reasons as Claims 3, 7, 12, 1, and 4, respectively.

Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor in view of Arik, and further in view of Gadanho et al. (U.S. 2010/0070436, hereinafter Gadanho).

As to Claim 2, in addition to Claim 1, Taylor in view of Arik may not explicitly disclose wherein, as part of (a2), only tuples with a calculated reward that is non-zero are written to the memory buffer.
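Read against the sketches above, claim 2's added limitation is a reward-based filter at the (a2) write step. A hypothetical one-function illustration (names are illustrative, not from the record):

def write_if_rewarded(buffer, tup):
    # Claim 2 reading: write the tuple to the memory buffer only when its
    # calculated reward is non-zero. tup = (state, action, reward, next_state).
    state, action, reward, next_state = tup
    if reward != 0:
        buffer.put(tup)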
Gadanho teaches: wherein, as part of (a2), only tuples with a calculated reward that is non-zero are written to the memory buffer (Gadanho, ¶0077 last 2 lines: "All examples having confidence values of zero or below are then removed from the training set"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the dataset of Taylor in view of Arik to be the training set taught by Gadanho, with a reasonable expectation of success. The motivation would be that "the combined preference value is generated in response to the confidence values of the duplicates. For example, the model processor 115 may discard all preferences that have a value below a given threshold (and e.g. average the rest) or may simply select the preference value corresponding to the highest confidence value" (Gadanho, ¶0070).

Claim 14 is rejected for the same reasons as Claim 2.

Claims 5-6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor in view of Arik, and further in view of Keskar et al. (U.S. 2019/0251431, hereinafter Keskar).

As to Claim 5, in addition to Claim 1, Taylor in view of Arik may not explicitly disclose wherein the plurality of iterations is no more than a total number of the plurality of identifiers. Keskar teaches: wherein the plurality of iterations is no more than a total number of the plurality of identifiers (Keskar, ¶0065 lines 13-16: "the location and placement of the joint training strategy intervals may be based on a number of training iterations (e.g., a number of training samples presented to the system) for each task type"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the iteration module of Taylor in view of Arik to be the iteration module taught by Keskar, with a reasonable expectation of success. The motivation would be that "the performance metrics rapidly improve to values that are better than the performance metrics of the joint training strategy only approach of FIG. 9B and more closely reach the performance metrics of the separately trained versions of system 300 in FIG. 9A" (Keskar, ¶0071 last 5 lines).

As to Claim 6, in addition to Claim 5, Taylor in view of Arik and Keskar teaches wherein the plurality of iterations is equal to the total number of the plurality of identifiers (Keskar, ¶0065 lines 13-16, quoted above).

Claim 18 is rejected for the same reasons as Claim 6.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Taylor in view of Arik, and further in view of Karnagel et al. (U.S. 2020/0327357, hereinafter Karnagel). As to Claim 10, in addition to Claim 1, Taylor in view of Arik may not explicitly disclose wherein the number of features is at least 100. Karnagel teaches: wherein the number of features is at least 100 (Karnagel, ¶0034 lines 1-3: the system accommodates hundreds or thousands of features). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features of Taylor in view of Arik to be the features taught by Karnagel, with a reasonable expectation of success. The motivation would be to allow the system to accommodate rich sample data (Karnagel, ¶0034 lines 1-3).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Taylor and Arik in view of Karnagel, and further in view of Yang et al. (U.S. 2019/0311246, hereinafter Yang). As to Claim 11, in addition to Claim 10, Taylor in view of Arik and Karnagel may not explicitly disclose wherein a number of weights within the first neural network and the second neural network is at least 30,000. Yang teaches wherein a number of weights within the first neural network and the second neural network is at least 30,000 (Yang, ¶0018 lines 11-14: "Training a CNN model may require significant amount of computing power, even with a physical AI chip because a CNN model may include tens of thousands of weights."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the weights of Taylor in view of Arik and Karnagel to be the weights taught by Yang, with a reasonable expectation of success. The motivation would be that "the weights in the CNN can easily vary and be loaded into the virtual AI chip without the cost associated with a physical AI chip" (Yang, ¶0018 lines 7-9).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Curtis (U.S. 2022/0114664) teaches a machine learning system to forecast, recommend, and trade securities.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHAT HUY T NGUYEN, whose telephone number is (571) 270-7333. The examiner can normally be reached M-F, 12:00-8:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHAT HUY T NGUYEN/
Primary Examiner, Art Unit 2147
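For a sense of scale on claims 10 and 11 (at least 100 features; at least 30,000 weights), note that even a small fully connected Q-network over 100 input features clears the 30,000-weight threshold. A quick hypothetical check in Python, with all layer sizes illustrative:

# Parameter count for a hypothetical 100-feature, two-hidden-layer network.
n_in, hidden, n_out = 100, 128, 10
weights = n_in * hidden + hidden * hidden + hidden * n_out  # 30,464
biases = hidden + hidden + n_out                            # 266
print(weights + biases)                                     # 30,730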

Prosecution Timeline

Apr 18, 2023: Application Filed
Jan 23, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530116: MEDIA CAPTURE LOCK AFFORDANCE FOR GRAPHICAL USER INTERFACE (granted Jan 20, 2026; 2y 5m to grant)
Patent 12504866: AUTOMATED TAGGING OF CONTENT ITEMS (granted Dec 23, 2025; 2y 5m to grant)
Patent 12489720: INFERRING ASSISTANT ACTION(S) BASED ON AMBIENT SENSING BY ASSISTANT DEVICE(S) (granted Dec 02, 2025; 2y 5m to grant)
Patent 12463859: ENABLING AN OPERATOR TO RESOLVE AN ISSUE ASSOCIATED WITH A 5G WIRELESS TELECOMMUNICATION NETWORK USING AR GLASSES (granted Nov 04, 2025; 2y 5m to grant)
Patent 12443419: ADJUSTING EMPHASIS OF USER INTERFACE ELEMENTS BASED ON USER ATTRIBUTES (granted Oct 14, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54%
With Interview: 79% (+25.1%)
Median Time to Grant: 3y 5m
PTA Risk: Low

Based on 341 resolved cases by this examiner. Grant probability is derived from the career allow rate.
