Prosecution Insights
Last updated: April 19, 2026
Application No. 18/302,671

SYSTEMS AND METHODS OF DETERMINING DYNAMIC TIMERS USING MACHINE LEARNING

Non-Final OA (§102, §103)
Filed: Apr 18, 2023
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nasdaq Inc.
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% (strong), comparing resolved cases with vs. without interview
Avg Prosecution: 3y 6m (typical timeline); 60 applications currently pending
Total Applications: 367 across all art units (career history)

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 307 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 04/18/2023. Claims 1-20 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statement (IDS(s)) submitted on 01/17/2024 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1, 2, 9, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dalmiya et al. (U.S. Patent Application Pub. No. US 20210314270 A1).

Claim 1: Dalmiya teaches a computer system comprising: memory configured to store (i.e. The apparatus generally includes at least one processor and a memory coupled with the at least one processor. The processor and the memory are configured to input one or more parameters to a machine learning algorithm; para. [0007]): a trained neural network (i.e.
Training system 830 generally includes a predictive model training manager 832 that uses training data to generate predictive model 824 for predicting a packet buffering duration. Predictive model 824 may be generated based, at least in part, on the information in training repository 815; para. [0074, 0075]), and a plurality of features (i.e. when using a machine learning algorithm, training system 830 generates vectors from the information in training repository 815. In some examples, training repository 815 stores vectors. In some examples, the vectors map one or more features to a label; para. [0083]) that are used by the trained neural network (i.e. In some examples, the machine learning (e.g., used by the training system 830) is performed using a neural network; para. [0080]); a processing system comprising instructions that are configured to, when executed by at least one hardware processor included with the processing system, cause the at least one hardware processor to perform operations comprising (i.e. The apparatus generally includes at least one processor and a memory coupled with the at least one processor. The processor and the memory are configured to input one or more parameters to a machine learning algorithm; para. [0007]): receiving data messages that indicate state changes (i.e. FIG. 6, a PDCP transmitting entity 606 sends PDUs indexed 1-8 in a PDCP transmit buffer 608 to a PDCP receiving entity 602 (e.g., such as receiving node 500 in FIG. 5). PDU 1 and PDU 5 are sent over an LTE link 610 and the PDUs 2-4 and 6-8 are sent over an NR link 612. In the example in FIG. 6, the PDUs 1 and 5 over the LTE link 610 are lost over the air (e.g., due to physical BLER in LTE link 610), while the PDUs 2-4 and 6-8 over NR link 612 reach PDCP receiving entity 602 successfully. The PDU 2-4 and 6-8 and all later PDUs, therefore, are cached in PDCP receiving entity's PDCP reordering buffer 604 until the PDCP reordering timer expires; para. 
[0066], state change driven by message arrival) in how a transaction processing system (i.e. a receiving node; para. [0032]) has processed previously submitted data transaction requests (i.e. a receiving node buffers received packets. For example, packet buffering can be used for hybrid automatic repeat request (HARQ) systems and/or for handling out-of-order packets. For example, in some cases, packets may not be received in the correct order (e.g., the packet sequence numbers (SNs) of a transmission, such as a transport block (TB) may not be received sequentially) and/or some packets may not be received, may not be successfully decoded, and/or may not be successfully processed; para. [0032]), the transaction processing system (i.e. the receiving node; para. [0033]) configured to process some data transaction requests using a timer (i.e. the receiving node may be configured with a timer or duration for buffering packets. The timer may allow a duration for buffering packets during which missing packets can be retransmitted by a transmitting node and received at the receiving node. The receiving node may be configured to wait for expiry of the timer before taking further actions. For example, the receiving node may wait for expiry of the timer before sending the received, and buffered, packets to upper layers and/or before sending a negative acknowledgment to the transmitting node of missing packets; para. [0033, 0058]), the system uses multiple configured timers during processing of received PDUs; based on the received data messages (i.e. The PDU 2-4 and 6-8 and all later PDUs, therefore, are cached in PDCP receiving entity's PDCP reordering buffer 604 until the PDCP reordering timer expires; para. [0066]), generating feature values for the plurality of features and populating a buffer with the generated feature values (i.e. 
In some examples, when using a machine learning algorithm, training system 830 generates vectors from the information in training repository 815. In some examples, training repository 815 stores vectors. In some examples, the vectors map one or more features to a label. For example, the features may correspond to various candidate durations, buffering capabilities, and/or other factors discussed above. The label may correspond to the predicted likelihoods of receiving a missing packet and/or selected packet buffering duration(s). Predictive model training manager 832 may use the vectors to train predictive model 824 for node 820. As discussed above, the vectors may be associated with weights in the machine learning algorithm; para. [0060, 0075, 0083, 0086, 0096]), the system teaches generating vectors from PDUs because received PDUs drive buffer and timer behavior (e.g., reception, reordering, missing-PDU handling), that behavior is recorded as historical numerical parameters (buffer history, timer expiry history), and those stored historical feature values are organized into vectors mapping features to labels for use by the ML model; retrieving the generated feature values from the buffer (i.e. the input parameters may include historical values (e.g., stored past values of the parameters) associated with the one or more parameters; para. [0096], previously stored feature values are retrieved as inputs to the ML model) and performing machine learning inference using the generated feature values to generate an output signal (i.e. Output of the machine learning algorithm can include a time duration, per radio bearer, to buffer packets; para. [0098], ML inference producing an output based on the input parameters), wherein the output signal corresponds to one of a plurality of possible actions to take in connection with a dynamic timer value (i.e. 
After packet buffer manager 922 uses predictive model 924 to determine the packet buffer duration, receiving node 920 applies determined packet buffer duration 925. In some examples, packet buffer manager 922 updates, replaces, and/or overrides a configured timer value; para. [0099]), ML output corresponds to one of multiple possible actions on the dynamic timer; and communicating, based on the output signal, a timer message to the transaction processing system to change a duration for which at least newly activated instances of the timer last (i.e. After packet buffer manager 922 uses predictive model 924 to determine the packet buffer duration, receiving node 920 applies determined packet buffer duration 925. In some examples, packet buffer manager 922 updates, replaces, and/or overrides a configured timer value; para. [0099]).

Claim 2: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches wherein the operations further comprise: repeatedly (i.e. Operations 1000 may begin, at 1005, by dynamically determining one or more time durations to buffer packets; para. [0103]) performing: (a) generating updated feature values based on newly received data messages, and storing the updated feature values to the buffer (i.e. Training repository 815 may include training data obtained before and/or after deployment of node 820. Node 820 may be trained in a simulated communication environment (e.g., in field testing, drive testing) prior to deployment of node 820. For example, various buffer history information can be stored to obtain training information related to the estimates, predictions, etc; para. [0075, 0076]); and (b) performing the machine learning inference based on the updated feature values that are stored within the buffer to generate further output signals from the trained neural network (i.e. Training repository 815 may include training data obtained before and/or after deployment of node 820. 
Node 820 may be trained in a simulated communication environment (e.g., in field testing, drive testing) prior to deployment of node 820. For example, various buffer history information can be stored to obtain training information related to the estimates, predictions, etc; para. [0075, 0076]).

Claim 9: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches wherein the machine learning inference is performed for a plurality of identifiers based on differently calculated feature values (i.e. training repository 815 stores vectors. In some examples, the vectors map one or more features to a label. For example, the features may correspond to various candidate durations, buffering capabilities, and/or other factors discussed above. The label may correspond to the predicted likelihoods of receiving a missing packet and/or selected packet buffering duration(s). Predictive model training manager 832 may use the vectors to train predictive model 824 for node 820; para. [0083-0086]).

Claim 10: Dalmiya teaches the computer system of claim 9. Dalmiya further teaches wherein the same trained neural network is used for the machine learning inference for the differently calculated feature values to thereby obtain output signals from the same trained neural network that indicate different changes to different dynamic timer values (i.e. Training system 830 generally includes a predictive model training manager 832 that uses training data to generate predictive model 824 for predicting a packet buffering duration. Predictive model 824 may be generated based, at least in part, on the information in training repository 815; para. [0074]).

Claim Rejections – 35 USC § 103

6. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Kesavan et al. (U.S. Patent Application Pub. No. US 20230071606 A1).

Claim 3: Dalmiya teaches the computer system of claim 2. Dalmiya further teaches wherein (b) is repeatedly performed (i.e. dynamically determining the one or more time durations to buffer packets comprises redetermining the time duration at different times; para. [0169]). Dalmiya does not explicitly teach at least once every minute. However, Kesavan teaches wherein (b) is repeatedly performed at least once every minute (i.e. As an example for using the model, the following inference operations may be performed at a period of a minute or so (e.g., twice per minute, once per minute, once every ten minutes, once per hour, or the like); para. [0018, 0199]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Kesavan. One would have been motivated to make this modification because applying timing constraints to repeated ML inference supports responsiveness and efficiency.

Claim 4: Dalmiya and Kesavan teach the computer system of claim 3. 
Dalmiya further teaches wherein the feature values that are stored into the buffer are continuously updated, between each machine learning inference that is performed (b), based on newly received data messages (i.e. This information can be stored in training repository 815. After deployment, training repository 815 can be updated to include feedback associated with packet buffering durations used by node 820. The training repository can also be updated with information from other BSs and/or other UEs, for example, based on learned experience by those BSs and UEs, which may be associated with packet buffering performed by those BSs and/or UEs; para. [0076, 0083, 0096, 0169]).

Claim 5: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches wherein the dynamic timer value changes at least times over an operational period of the computer system (i.e. dynamically determining the one or more time durations to buffer packets comprises redetermining the time duration at different times; para. [0103, 0169]). Dalmiya does not explicitly teach at least 100 times over an operational period. However, Kesavan teaches wherein the value changes at least 100 times (i.e. As an example for using the model, the following inference operations may be performed at a period of a minute or so (e.g., twice per minute, once per minute, once every ten minutes, once per hour, or the like); para. [0018, 0199]) over an operational period of the computer system (i.e. if a node has had a failure, the labelling indicates the time that the node failed and captures server parameters of a few hours or days before the failure; para. [0140]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Kesavan. One would have been motivated to make this modification because applying timing constraints to repeated ML inference supports responsiveness and efficiency.

8. 
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Wu et al. (U.S. Patent Application Pub. No. US 20220191775 A1).

Claim 6: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches wherein the plurality of possible actions include at least one to the dynamic timer value (i.e. Output of the machine learning algorithm can include a time duration, per radio bearer, to buffer packets. Output of the machine learning algorithm can include a predicted time duration to wait for a specified portion of missing PDUs. The portion of missing PDUs may include a number of packets to maintain a maximum application throughput; para. [0098, 0099]). Dalmiya does not explicitly teach at least one decrease to the timer value, at least one increase to the timer value, and no change to the timer value. However, Wu teaches wherein the plurality of possible actions include at least one decrease to the dynamic timer value, at least one increase to the dynamic timer value, and no change to the dynamic timer value (i.e. the search timer in regions II and III may be a sequence of {10, 5, 2, 2, 5, 10, . . . } time units, which includes increasing, unchanged, and decreasing search timer values corresponding to the change in the probability values; para. [0036]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Wu. One would have been motivated to make this modification because it ensures a complete action set for adaptive timer control, improving flexibility and stability.

Claim 7: Dalmiya and Wu teach the computer system of claim 6. Dalmiya further teaches wherein the plurality of possible actions include at least two different amounts in the dynamic timer value (i.e. 
the ML algorithm may output predicted optimized duration(s) to buffer packets and/or one or more parameters that can be used by packet buffer manager 922 to select/determine the duration to buffer packets; para. [0083, 0086]). Dalmiya does not explicitly teach decrease in the timer value and increases to the dynamic timer value. However, Wu further teaches wherein the plurality of possible actions include at least two different amounts for decrease in the dynamic timer value and at least two different increases to the dynamic timer value (i.e. the UE searches with increasing frequency when the probability trends upwards (as in regions I and II), and with decreasing frequency when the probability trends downwards (as in regions III and IV). The UE may use a search timer to set the time for starting each search duration. The search timer may be configured by the length of a search duration plus the length of a non-search duration that immediately follows the search duration. The length of the search timer indicates how frequently the UE searches. For example, the search timer in region I may be a sequence of {50, 40, 30, . . . } time units, and in region IV may be a sequence of {30, 40, 50, . . . } time units, wherein each time unit may be a minute, a second, a millisecond, or the like. That is, in region I the first search duration and the first non-search duration last 50 time units, and the second search duration and the second non-search duration last 40 time units, etc. Similarly, the search timer in regions II and III may be a sequence of {10, 5, 2, 2, 5, 10, . . . } time units, which includes increasing, unchanged, and decreasing search timer values corresponding to the change in the probability values; para. [0036, 0037]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Wu. 
One would have been motivated to make this modification because it ensures a complete action set for adaptive timer control, improving flexibility and stability.

9. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Ohtani (U.S. Patent Application Pub. No. US 20090248804 A1).

Claim 8: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches wherein a corresponding timer update message is communicated based on each output signal (i.e. After packet buffer manager 922 uses predictive model 924 to determine the packet buffer duration, receiving node 920 applies determined packet buffer duration 925; para. [0099]). Dalmiya does not explicitly teach including any output signal that indicates no change in the value. However, Ohtani teaches wherein a corresponding timer update message is communicated based on each output signal, including any output signal that indicates no change in the dynamic timer value (i.e. The first and second embodiments do not mention a technique to periodically update information (such as "Status" and "Availability") stored in storage parts by each of the proxy servers 200, but the present invention is not limited to this. For example, the proxy server 200 may periodically acquire "Status" and "Availability" to update such information by referring to a timer to periodically exchange status information of the server with the other proxy server 200 and to retry access to the server 300; para. [0242]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Ohtani. One would have been motivated to make this modification because it ensures synchronization by communicating all timer states, including no change.

10. Claims 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Anbarani (U.S. Patent Application Pub. No. 
US 20060146711 A1).

Claim 11: Dalmiya teaches the computer system of claim 1. Dalmiya further teaches where the operations further comprise: executing a stability process based on at least some of the data messages (i.e. According to certain aspects, a machine learning algorithm may be enabled once the machine learning algorithm has been trained to a satisfactory level. For example, the machine learning algorithm may be enabled or use based at least in part on reaching a threshold rate of the machine learning algorithm successfully predicting time durations for receiving missed packets. The machine learning algorithm may be enabled per use on a per radio bearer basis. The machine learning algorithm may be enabled based at least in part on an application type attached to the radio bearer; para. [0097]). Dalmiya does not explicitly teach to calculate a stability metric based on at least some of the data messages; and based on determination that the calculated stability metric violates a stability threshold, communicating a message to the system to change to a maximum value that is greater than or equal to any other value used for. However, Anbarani teaches executing a stability process to calculate a stability metric based on at least some of the data messages; and based on determination that the calculated stability metric violates a stability threshold, communicating a timer update message to the transaction processing system to change the timer to a maximum value that is greater than or equal to any other value used for the timer (i.e. The stability metric is used to dynamically change the time interval between mutations. This interval may be set to a fixed value, and auto-tuning would function well enough without the stability metric. However, in order to decide the value of the time interval, a compromise would have to be made between a short interval to speed up the optimization process and a long interval to keep the network traffic disruption small. 
The range of the stability metric should be chosen in a range between a minimum and a maximum desired time interval between mutations. The initial value should be the minimum value, and every time a mutation is accepted, the stability value should be reset to that minimum. Every time a mutation is rejected, the stability value is doubled until it reaches the defined maximum; para. [0040]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Anbarani. One would have been motivated to make this modification because it prevents instability by bounding timer values.

Claim 15: Dalmiya and Anbarani teach the computer system of claim 11. Dalmiya further teaches where the operations further comprise: storing data from the at least some of the data messages (i.e. The receiving node may be configured to: when a PDCP Data PDU is received from lower layers, and if the received PDCP Data PDU with COUNT value=RCVD_COUNT is not discarded already, the receiving PDCP entity stores the resulting PDCP service data unit (SDU) in the reception buffer; para. [0060]) to a rolling buffer (i.e. The receiving node may have a configured timer for packet buffering. In some systems, the PDCP reordering buffer is configured with a reordering timer and the RLC reassembly buffer is configured with a reassembly timer. The timer may provide a duration in which missing packets can be retransmitted by the transmitter and/or enough time for out-of-order packets to reach the receiver side; para. [0058, 0066]).

11. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Anbarani, and further in view of Atkinson (U.S. Patent Application Pub. No. US 20230286225 A1).

Claim 12: Dalmiya and Anbarani teach the computer system of claim 11. 
Dalmiya further teaches where the operations further comprise: calculating the stability based on performing a process on a tracked previous value (i.e. the input parameters may include historical values (e.g., stored past values of the parameters) associated with the one or more parameters; para. [0096]) to determine a threshold of the tracked previous value as unstable (i.e. According to certain aspects, a machine learning algorithm may be enabled once the machine learning algorithm has been trained to a satisfactory level; para. [0097]). Dalmiya does not explicitly teach calculating the stability threshold based on performing a bisectional process on a tracked previous value to determine a threshold. However, Anbarani further teaches calculating the stability threshold based on performing a process on a tracked previous value to determine a threshold of the tracked previous value as unstable (i.e. The stability metric is used to dynamically change the time interval between mutations. This interval may be set to a fixed value, and auto-tuning would function well enough without the stability metric. However, in order to decide the value of the time interval, a compromise would have to be made between a short interval to speed up the optimization process and a long interval to keep the network traffic disruption small. The range of the stability metric should be chosen in a range between a minimum and a maximum desired time interval between mutations. The initial value should be the minimum value, and every time a mutation is accepted, the stability value should be reset to that minimum. Every time a mutation is rejected, the stability value is doubled until it reaches the defined maximum; para. [0040]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Anbarani. 
One would have been motivated to make this modification because it prevents instability by bounding timer values. However, Atkinson teaches calculating the stability threshold based on performing a bisectional process on a tracked previous value to determine a threshold percentage of the tracked previous value as unstable (i.e. The foregoing methods include incrementing the lower bound threshold until a preliminary ply design is found to satisfy all design requirements. In alternative embodiments, however, the lower bound threshold can be varied through iterations in accordance with bisection search techniques. For example, the lower bound can initially be set to 100 percent, then to 50 percent, then to 75 percent, then to 62.5 percent, and continue this way in accordance with bisection search techniques, such as for a predetermined number of iterations, with a preliminary ply designed in accordance with the lowest lower bound threshold that is still found to satisfy all design requirements being designated as the final ply design. In other embodiments, other methods can be used, such as gradient descent techniques or simplex algorithms; para. [0030]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Dalmiya and Anbarani to include the feature of Atkinson. One would have been motivated to make this modification because it ensures thresholds are calculated consistently, reducing instability in the system.

12. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Anbarani, and further in view of Arar et al. (U.S. Patent Application Pub. No. US 20180323643 A1).

Claim 13: Dalmiya and Anbarani teach the computer system of claim 11. Dalmiya does not explicitly teach executing the stability process at least once every second. However, Anbarani further teaches executing the stability process at least once (i.e. 
The stability metric is used to dynamically change the time interval between mutations. This interval may be set to a fixed value, and auto-tuning would function well enough without the stability metric. However, in order to decide the value of the time interval, a compromise would have to be made between a short interval to speed up the optimization process and a long interval to keep the network traffic disruption small. The range of the stability metric should be chosen in a range between a minimum and a maximum desired time interval between mutations. The initial value should be the minimum value, and every time a mutation is accepted, the stability value should be reset to that minimum. Every time a mutation is rejected, the stability value is doubled until it reaches the defined maximum; para. [0040]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dalmiya to include the feature of Anbarani. One would have been motivated to make this modification because it prevents instability by bounding timer values. However, Arar teaches executing the stability process at least once every second (i.e. Method 100 further includes performing a process. See operation 106. Depending on the approach, the process may be performed more than once, e.g., periodically, upon receiving user input, preconfigured settings, depending on a result of the process, etc. It should be noted that “periodically” as used herein may include every second, several seconds, minute, two or more minutes, hour, two or more hours, day, week, month, etc., or any other desired frequency of reoccurring intervals; para. [0032]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Dalmiya and Anbarani to include the feature of Arar. 
One would have been motivated to make this modification because it ensures rapid detection of instability in highly dynamic systems.

13. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Anbarani, Arar, and further in view of Amaitis et al. (U.S. Patent Application Pub. No. US 20150157947 A1).

Claim 14: Dalmiya, Anbarani, and Arar teach the computer system of claim 13. Dalmiya does not explicitly teach performed no more than once every 15 seconds. However, Amaitis teaches performed no more than once every 15 seconds (i.e. Such a validating may occur continuously, periodically (e.g., every 5 seconds, every 15 seconds, every minute, every 5 minutes, every hour, etc.), randomly, on demand, and so on; para. [0332]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Dalmiya, Anbarani, and Arar to include the feature of Amaitis. One would have been motivated to make this modification because it prevents excessive computation or messaging overhead.

14. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Dalmiya in view of Anbarani, and further in view of Hu (U.S. Patent Application Pub. No. US 20190236955 A1).

Claim 16: Dalmiya and Anbarani teach the computer system of claim 15. Dalmiya further teaches wherein the rolling buffer holds between second and seconds of data (i.e. the receiving node may be configured with a timer or duration for buffering packets; para. [0033]). Dalmiya does not explicitly teach holds between 1 second and 10 seconds of data. However, Hu teaches wherein the rolling buffer holds between 1 second and 10 seconds of data (i.e. The autonomous vehicle repeats this process during each subsequent scan cycle to generate a sequence of timestamped scan images and stores these scan images in local memory (e.g., in a ten-second rolling buffer); para. [0087]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Dalmiya and Anbarani to include the feature of Hu. One would have been motivated to make this modification because it limits memory usage while maintaining sufficient history.

15. Claims 17-20 are similar in scope to Claims 1, 4, and 11 and are rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Alam et al. (Pub. No. US 20220236845 A1): “The dynamic screen timeout also depends on the screen content or the content of the data items presented through the display 106 of the electronic device 100 and the relation of the user with the content and the amount of time the user spends on the data item presented on the display 106 before scrolling ahead automatically is calculated using a supervised machine learning by the processor 102.”

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303)297-4266. The examiner can normally be reached Monday - Thursday, 8:00 am - 5:00 pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached on 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141
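The scheme quoted from Anbarani's paragraph [0040] in the rejection above amounts to a bounded exponential backoff on the mutation interval: start at the minimum, reset to the minimum when a mutation is accepted, and double on each rejection up to a fixed maximum. A minimal illustrative sketch, with all class and method names our own (hypothetical, not from any cited reference):

```python
class MutationIntervalTuner:
    """Bounded exponential backoff on the interval between mutations,
    following the scheme described in Anbarani's para. [0040]:
    start at the minimum interval, reset to the minimum whenever a
    mutation is accepted, and double on each rejection until the
    defined maximum is reached."""

    def __init__(self, min_interval: float, max_interval: float):
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.interval = min_interval  # initial value is the minimum

    def on_mutation_accepted(self) -> float:
        # Acceptance restores the fast (minimum) mutation rate.
        self.interval = self.min_interval
        return self.interval

    def on_mutation_rejected(self) -> float:
        # Double the interval, but never exceed the defined maximum.
        self.interval = min(self.interval * 2, self.max_interval)
        return self.interval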

Prosecution Timeline

Apr 18, 2023
Application Filed
Dec 12, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant · Granted Mar 17, 2026
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant · Granted Mar 17, 2026
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
2y 5m to grant · Granted Mar 10, 2026
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
92%
With Interview (+31.8%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
