Prosecution Insights
Last updated: April 19, 2026
Application No. 18/515,576

METHOD, APPARATUS AND SYSTEM FOR AUTOMATICALLY SCALING DYNAMIC COMPUTING RESOURCES BASED ON PREDICTED TRAFFIC PATTERNS

Non-Final OA (§101, §103)
Filed: Nov 21, 2023
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Foundation of Soongsil University-Industry Cooperation
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (545 granted / 705 resolved; +19.3% vs TC avg)
Interview Lift: +35.4% among resolved cases with an interview
Typical Timeline: 3y 4m average prosecution (25 applications currently pending)
Career History: 730 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 705 resolved cases.

Office Action

Grounds: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Detailed Action

This Office Action is in response to the instant Application 18/515,576, filed on 11/21/2023. Claims 1-15 are pending. This Office Action is Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/18/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. — An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) creates a rebuttable presumption that the claim limitation is not to be treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; that presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word “means” (or “step”) are not being so interpreted, except as otherwise indicated. This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such limitations are: the “prediction API” and the “prediction-based autoscaler” in claim 1.

Because these limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations so interpreted, applicant may: (1) amend the claim limitations to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. Claims 1, 8, 9 and 15 are rejected as being directed to an abstract idea that is neither integrated into a practical application nor significantly more than the abstract idea itself.

Regarding claim 1, and similarly claims 8, 9 and 15: the claim recites the limitations “predicting resource …,” “calculating required resource …,” and “generating a template ….” Broadly interpreted, these steps are directed to mental processes, as they could be performed in the human mind. The claims therefore recite an abstract idea. That abstract idea is not integrated into a practical application because the claims recite no other active steps that would integrate it. The remaining operations are recited at a high level of generality as data gathering and processing, which is a form of insignificant extra-solution activity.
It is also noted that the claims recite additional elements (i.e., a system, processing circuitry, a processor, memory, etc.). These additional elements are recited at a high level of generality (i.e., as a generic computing device performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they impose no meaningful limits on practicing the abstract idea. Nor do the additional elements, considered individually or as an ordered combination, amount to significantly more than the judicial exception: they perform generic computer functions routinely used in the information technology field. The claims are therefore directed to non-statutory subject matter.

Regarding claims 2-7 and 10-14: these claims are also rejected under 35 U.S.C. 101 for the same reasons addressed above, as they recite the abstract idea and do not positively recite any other operations that would integrate it into a practical application or amount to significantly more.

Regarding claim 15: claim 15 is further rejected under 35 U.S.C. 101 because it recites a “computer-readable recording medium.” Under Ex parte Mewherter, 107 USPQ2d 1857, 1862 (PTAB 2013) (precedential) (holding a recited machine-readable storage medium ineligible under 35 U.S.C. § 101 because it encompassed transitory media), the scope of the recited “computer-readable recording medium” encompasses transitory media such as signals or carrier waves where, as here, the specification does not limit the medium to non-transitory forms. The examiner respectfully suggests amending the claim to recite “a non-transitory computer-readable storage medium” or “a computer-readable storage device” to make the claim statutory under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 7-11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dwivedi et al. (US 2022/0269548) in view of Ramanathan et al. (US 2022/0116289).

As per claim 1, Dwivedi teaches a system for automatically scaling dynamic computing resource based on a predicted traffic pattern comprising: a prediction API for predicting resource according to a traffic request for each cluster configured on a workload server using a pre-trained machine learning model (Dwivedi, Paragraph 0141: “In at least one embodiment, data center 1100 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 1100. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 1100 by using weight parameters calculated through one or more training techniques described herein.”).

Dwivedi fails to teach a prediction-based autoscaler for calculating required resource by comparing available resource for each cluster with the predicted resource, selecting an optimal flavor according to the calculated required resource, and generating a template corresponding to the selected optimal flavor. However, in an analogous art, Ramanathan teaches such a prediction-based autoscaler (Ramanathan, Fig. 10A and Paragraphs 0123-0125: “The edge provider device may be part of an edge management software component responsible for management and maintenance of the edge cluster. The edge provider device may communicate with an edge cluster node auto-scaler. The edge node instance auto-scaler may use a standard API to talk to the edge provider device to add or remove nodes to or from the edge cluster. The edge node instance auto-scaler may obtain a current configuration of the edge cluster from the edge provider device. A REST interface may be used between the two, such that these components may be run anywhere in the network. The edge provider device may group the nodes based on capabilities and features of the nodes, for example to create “nodegroups”. For example, nodes with specific hardware accelerators, CPU cores etc., may be grouped into each nodegroup. The edge provider device may provide a resource template to the edge node instance auto-scaler. The node instance auto-scaler may use a template during scale up to identify a nodegroup from which the nodes are to be selected to satisfy requirements of a pending application instances in the cluster (e.g., an application instance to be scheduled).”). It would have been obvious to a person of ordinary skill in the art, as of the earliest effective filing date, to combine Ramanathan’s adaptive cloud autoscaling with Dwivedi’s profiling and performance monitoring of distributed computational pipelines, because the combination offers the ability to serve and respond to multiple applications in real time and to meet their ultra-low-latency requirements.

As per claim 2, Dwivedi in combination with Ramanathan teaches the system of claim 1. Dwivedi further teaches wherein the pre-trained machine learning model is a Bi-LSTM model based on a recurrent neural network (Dwivedi, Paragraph 0114: “In at least one embodiment, and without limitation, machine learning models used by system 1000 may include machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (Knn), K means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, Long/Short Term Memory (LSTM), Bi-LSTM, Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models.”).

As per claim 3, Dwivedi in combination with Ramanathan teaches the system of claim 1. Dwivedi further teaches wherein the traffic request and the available resource are received by calling a monitoring system, wherein the monitoring system monitors traffic requests flowing into the workload server by segmenting them for each cluster (Dwivedi, Paragraph 0018: “Existing methods and approaches are limited to monitoring individual physical nodes, where a node’s main memory usage, GPU memory usage, CPU/GPU utilization, network bandwidth utilization, network traffic data, input/output (I/O) traffic data, and the like, are measured.” and Paragraph 0025: “System 100 may include a workflow server 150 that may be a separate computing device, or a part of another computing device such as metric collection server 101, or one of computing devices 102. Workflow server 150 may be used to set up and configure execution of various pipelines and may further be used to control data flows and direct ongoing execution of sub-tasks and other components of pipelines. Workflow server 150 may include pipeline manager 152 and a workflow orchestration engine 152.”).
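For orientation, the claim 1 steps the rejection addresses — predicting resource, calculating required resource by comparison with available resource, selecting an optimal flavor, and generating a template — describe a control loop that can be sketched in a few lines. The sketch below is a minimal editorial illustration, not code from the application or the cited references: every name in it (Flavor, predict_resource, generate_template) and the use of vCPUs as the resource unit are hypothetical assumptions.

```python
# Minimal illustrative sketch of the scaling loop recited in claim 1.
# All names and the vCPU resource unit are hypothetical assumptions;
# this is NOT code from the application or from Dwivedi/Ramanathan.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass(frozen=True)
class Flavor:
    name: str
    cpu: float       # vCPUs provided by one instance of this flavor
    memory_gb: float

def select_optimal_flavor(required_cpu: float, flavors: Sequence[Flavor]) -> Flavor:
    """Pick the smallest flavor that covers the requirement (largest if none does)."""
    fitting = [f for f in flavors if f.cpu >= required_cpu]
    return min(fitting, key=lambda f: f.cpu) if fitting else max(flavors, key=lambda f: f.cpu)

def generate_template(flavor: Flavor, cluster: str) -> dict:
    """Scaling template for the cluster management program; the shape is illustrative."""
    return {"cluster": cluster, "flavor": flavor.name,
            "cpu": flavor.cpu, "memory_gb": flavor.memory_gb}

def autoscale(cluster: str,
              predict_resource: Callable[[str], float],  # stands in for the prediction API
              available_cpu: float,                      # from the monitoring system
              flavors: Sequence[Flavor]) -> Optional[dict]:
    predicted = predict_resource(cluster)              # "predicting resource ..." (claim 2 says Bi-LSTM)
    required = max(0.0, predicted - available_cpu)     # "calculating required resource ..."
    if required == 0.0:
        return None                                    # available capacity suffices; no scale-up
    flavor = select_optimal_flavor(required, flavors)  # "selecting an optimal flavor ..."
    return generate_template(flavor, cluster)          # "generating a template ..."
```

For example, with flavors of 2, 4 and 8 vCPUs, a predicted demand of 9 vCPUs against 4 available yields a 5-vCPU shortfall and selects the 8-vCPU flavor — which also illustrates why the examiner characterizes the compare/calculate/select steps as performable, at this level of generality, as mental processes.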
As per claim 7, Dwivedi in combination with Ramanathan teaches the system of claim 1. Ramanathan further teaches wherein the template is compatible with a currently operating cluster management program (Ramanathan, Fig. 10A and Paragraphs 0123-0125, quoted above with respect to claim 1). The motivation to combine is the same as set forth for claim 1.

Regarding claims 8, 9 and 15: these claims are directed to an apparatus, a method, and a computer-readable recording medium associated with the system of claim 1. They are of similar scope to claim 1 and are therefore rejected under the same rationale. Claims 10 and 11 are directed to methods corresponding to claims 2 and 3, respectively, are similar in scope, and are rejected under the same rationale.

Claims 4-6 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Dwivedi et al. (US 2022/0269548) and Ramanathan et al. (US 2022/0116289), and further in view of Khalid (US 11,646,935).

As per claim 4, Dwivedi in combination with Ramanathan teaches the system of claim 3, but fails to teach wherein information on a collected traffic request is transmitted to a traffic mesh when a trigger point is activated in the monitoring system and a resource utilization rate of a given cluster is greater than or equal to a preset value. However, in an analogous art, Khalid teaches this limitation (Khalid, Col. 4 Line 66 - Col. 5 Line 27: “In step 320, the SNMP manager 224 of the network controller 130 monitors the status of the mesh network 110 by receiving and processing information from the end nodes 120 via links 134 and from the routers 112 via links 132. The network monitoring of step 320 involves querying the routers 112 and end nodes 120 for performance, CPU usage, loading, etc., and monitoring dynamic network changes such as surges in demand, looking for better paths to prioritize certain traffic to meet the QoS demand, etc. Although not explicitly represented in FIG. 1, in typical implementations, the communication system 100 will have a number of port monitors distributed throughout the mesh network 110, where the port monitors (i) monitor the traffic at the different network interfaces of the routers 112 and end nodes 120 and (ii) feed corresponding traffic information back to the network controller 130 for use in performing the network monitoring of step 320. Various network thresholds are measured using traps, and the information is shared via master information blocks (MIBs). MIBs and traps are a part of the standard SNMP protocol, and the routers 112 are polled by SNMP agents (implemented either at the routers or at port monitors associated with those routers) for their status and current configuration, as needed. Routers 112 have specific settings/configurations to define certain flows. If the network controller 130 determines that a path should be changed, the network controller 130 will need current configurations of the corresponding routers 112 which the network controller 130 can determine using the SNMP protocol.”). It would have been obvious to a person of ordinary skill in the art, as of the earliest effective filing date, to combine Khalid’s automated provisioning and configuration for dynamically loaded NFV- and SDN-based networks with Dwivedi’s profiling and performance monitoring of distributed computational pipelines, because the combination allows the system to adjust automatically to load changes in a mesh network.

As per claim 5, Dwivedi in combination with Ramanathan and Khalid teaches the system of claim 4. Khalid further teaches wherein the trigger point is SNMP_exporter, wherein the SNMP_exporter is initially in an off state, and when the resource utilization rate is greater than or equal to a preset first threshold, the SNMP_exporter is changed to an on state and the information on the traffic request is stored in a database of the monitoring system (Khalid, Col. 4 Line 66 - Col. 5 Line 27, quoted above with respect to claim 4). The motivation to combine is the same as set forth for claim 4.

As per claim 6, Dwivedi in combination with Ramanathan and Khalid teaches the system of claim 5. Khalid further teaches wherein, if the resource utilization rate is greater than or equal to a second threshold greater than the first threshold, the information on the traffic request stored in the database is transmitted to a storage for training the machine learning model (Khalid, Col. 4 Line 66 - Col. 5 Line 27, quoted above). The motivation to combine is the same as set forth for claim 4.
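Claims 4-6 together recite a two-threshold trigger: the SNMP_exporter switches on and logs traffic-request information when utilization reaches a first threshold, and the logged information is forwarded to training storage at a higher second threshold. A minimal editorial sketch of that behavior follows; the threshold values and all names are hypothetical assumptions (the claims recite no specific numbers), and this is not code from the application or from Khalid.

```python
# Minimal illustrative sketch of the two-threshold trigger recited in
# claims 4-6. Threshold values and names are hypothetical assumptions.
FIRST_THRESHOLD = 0.70   # exporter switches on; traffic-request info is logged
SECOND_THRESHOLD = 0.90  # logged info is forwarded to model-training storage

class MonitoringSystem:
    def __init__(self) -> None:
        self.exporter_on = False            # SNMP_exporter is initially off
        self.database: list[dict] = []      # monitoring-system database
        self.training_storage: list[dict] = []

    def observe(self, utilization: float, traffic_request: dict) -> None:
        if utilization >= FIRST_THRESHOLD:
            self.exporter_on = True                # trigger point activated
            self.database.append(traffic_request)  # store request info in the database
        if utilization >= SECOND_THRESHOLD:
            # at the second, higher threshold, move accumulated request info
            # to the storage used to (re)train the traffic-prediction model
            self.training_storage.extend(self.database)
            self.database.clear()
```

The design point the claims turn on is that export is conditional rather than always-on: nothing is logged below the first threshold, and training data is only accumulated under sustained high utilization.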
Regarding claims 12-14: claims 12, 13 and 14 are directed to methods corresponding to claims 4, 5 and 6, respectively, are similar in scope, and are therefore rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO, whose telephone number is (571) 272-2661. The examiner can normally be reached Mon-Fri, 8am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RODERICK TOLENTINO/
Primary Examiner, Art Unit 2439

Prosecution Timeline

Nov 21, 2023
Application Filed
Mar 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907
SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592915
INFERENCE-BASED SELECTIVE FLOW INSPECTION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12580946
SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12580948
CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572632
SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77% (99% with interview, +35.4% lift)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
