Prosecution Insights
Last updated: April 19, 2026
Application No. 18/844,047

METHOD AND DEVICE FOR MONITORING OF NETWORK EVENTS

Final Rejection — §101, §103
Filed: Sep 04, 2024
Examiner: WONG, ERIC TAK WAI
Art Unit: 3693
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Vocalink Limited
OA Round: 2 (Final)
Grant Probability: 51% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 1m
With Interview: 64%

Examiner Intelligence

Career Allow Rate: 51% (266 granted / 523 resolved; -1.1% vs TC avg)
Interview Lift: +13.3% for resolved cases with interview (moderate lift)
Avg Prosecution: 4y 1m (typical timeline)
Total Applications: 573 across all art units (50 currently pending)

Statute-Specific Performance

§101: 31.3% (-8.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 523 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

The claims filed 12/15/2025 are entered. Claims 1-2 and 4-21 are pending. Claims 1, 4, 8, and 14-15 are currently amended. Claims 5, 7, 9, 11-13 are previously presented. Claims 2, 6, and 10 are original. Claims 16-21 are new.

Response to Arguments

Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive.

35 U.S.C. 101

Applicant's arguments regarding the rejection of claims 1-2 and 4-21 under 35 U.S.C. 101 have been considered but are not persuasive. Regarding representative claim 1, Applicant argues that the claim does not recite an abstract idea under Step 2A Prong 1 of the eligibility framework. More specifically, Applicant argues that applying a machine learning model and taking corrective action, including re-training the machine learning model, cannot reasonably be performed in the human mind, nor does it involve a mathematical concept (see Remarks, pg. 9).

The argument that the claimed features cannot reasonably be performed in the human mind is not persuasive because the rejection does not identify the grouping of "Mental Processes". The argument that the claim does not recite "Mathematical Concepts" is also not persuasive. The claim limitations delineated in the rejection are drawn to determining feature distributions and comparing distributions, and therefore set forth or describe mathematical data analysis, which falls under the abstract idea grouping of "Mathematical Concepts".

Applicant further argues that the claim as a whole integrates the abstract idea into a practical application under Step 2A Prong 2 of the framework. More specifically, Applicant argues: …Claim 1 reflects a specific technological solution to a technological problem.
Applicant's specification explains that the disclosure is intended to solve negative impacts on machine learning models from nefarious actors changing their behavior over time. (See, Para. [0003]). Previous approaches included regularly scheduled re-training of the models, but "regularly scheduling re-training of the machine learning model [uses] resources both time and computational that are unnecessary if, in fact, the re-training is not required at [a] particular time." (Para. [0004]). Independent claim 1 provides a technical improvement by "applying a machine learning model to a dataset relating to activity in a network" and taking corrective action only when needed - "based on determining [an] unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates feature drift" rather than regularly scheduled re-training. The solution provided by the disclosure retrains only when there is an indication of feature drift rather than regularly scheduled re-training, as described in Para. [0004], [0048], and [0059].

The argument is not persuasive. The specification describes the features at a high level of generality in a manner which does not convey a technological improvement to one of ordinary skill in the art. Here, applying a model, monitoring drift, and retraining a model when there is drift does not go beyond generic machine learning operations. There is no technological improvement in computers or machine learning itself. The machine learning is instead merely used as a tool to implement the abstract idea.
Applicant further argues: …"taking corrective action based on determining the unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates feature drift; wherein the corrective action includes re-training the machine learning model based on an updated training dataset that takes into account the feature drift" as recited in claim 1 are not mere instructions to implement the solution on a computer, but tangible, inventive steps representing a particular way of solving the technical problem of regularly scheduled model re-training and integrate the alleged abstract idea into a practical application.

The argument is not persuasive. For reasons discussed above, the additional elements drawn to applying a model, monitoring drift, and retraining the model are not indicative of a technological improvement. The machine learning is instead merely used as a tool to implement the abstract idea. With regard to Applicant's argument that the claim recites tangible steps, eligibility is not evaluated based on whether the claim recites a "useful, concrete, and tangible result," State Street Bank, 149 F.3d 1368, 1374, 47 USPQ2d 1596, 1602 (Fed. Cir. 1998) (quoting In re Alappat, 33 F.3d 1526, 1544, 31 USPQ2d 1545, 1557 (Fed. Cir. 1994)), as this test has been superseded.

Applicant further argues that the claim recites an inventive concept under Step 2B of the framework. More specifically, Applicant argues: The ordered combination of elements in Claim 1 is not routine, conventional, or generic. In contrast, Claim 1 includes features that are not field-of-use limitations, but specific improvements to taking a targeted corrective action rather than regularly retraining a model with cases such as DDR Holdings and McRO.
For example: "taking corrective action based on determining the unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates feature drift; [and] wherein the corrective action includes re-training the machine learning model based on an updated training dataset that takes into account the feature drift." (See Para. [0059]). These features are not "well-understood, routine, or conventional." Rather, they are specific steps for taking corrective action to re-train the machine learning model. Again, the Memo reminds examiners that if eligibility is a "close call," a rejection under §101 should only be made where it is more likely than not that the claim is ineligible. The Memo further emphasizes that where claims provide a "technological solution to a technological problem," they satisfy Step 2B. As such, Applicant submits that given the architecture, the demonstrated improvements, and the inventive concept of Claim 1, Claim 1 falls squarely within the type of eligible computer-implemented technology contemplated by the Office's current guidance.

The argument is not persuasive. For reasons discussed above, the additional elements drawn to applying a model, monitoring drift, and retraining the model do not provide a technological improvement. The machine learning is instead merely used as a tool to implement the abstract idea. Furthermore, as described in the rejection herein, no additional element or combination of elements is other than what is well-understood, routine, conventional activity in the field. The additional elements simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d). Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. For the above reasons, the rejections under 35 U.S.C. 101 are maintained herein.

35 U.S.C. 102 and 103

Applicant's arguments regarding the prior rejections under 35 U.S.C. 102 and 35 U.S.C. 103 in view of the Umesh reference (US 2023/0124621 A1) have been considered but are moot in view of the new grounds of rejection necessitated by the current amendment presented herein.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2 and 4-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1-2 and 4-21 are directed to a system, method, or a non-transitory computer product, and thus fall within the statutory categories of invention. (Step 1: YES).

Step 2A - Prong 1

The Examiner has identified independent method claim 1 as the claim that represents the claimed invention for analysis and is similar to product claim 14 and system claim 15. Claim 1 recites the limitations of:

1.
(Original) A computer implemented method of monitoring a machine learning model for enabling identification of fraudulent activity in a network, and providing corrective actions, the method comprising:

applying a machine learning model to a dataset relating to activity in a network, the dataset comprising a plurality of features as inputs into the machine learning model, the machine learning model having been trained on a training dataset to identify fraudulent activity in a network;

monitoring a feature of the plurality of features to determine drift of the feature from an expected value indicating incorrect identification of whether activity is fraudulent, wherein the step of monitoring comprises: determining a feature distribution for the feature at a current time; comparing the determined feature distribution to a reference feature distribution for the feature, the reference feature distribution determined from the training dataset; determining an unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates feature drift; and

taking corrective action based on determining the unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates feature drift, wherein the corrective action includes re-training the machine learning model based on an updated training dataset that takes into account the feature drift.

These limitations, under their broadest reasonable interpretation, cover performance of the limitation as "Certain Methods of Organizing Human Activity". The claim limitations delineated in bold above recite a fundamental economic practice (mitigating risk), as they pertain to identification of fraudulent activity, which in view of the specification includes payment transactions.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a fundamental economic practice, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas.

These limitations, under their broadest reasonable interpretation, also cover performance of the limitation as "Mathematical Concepts". The claim limitations delineated in bold above recite mathematical relationships/calculations, as they pertain to comparing feature distributions, which in view of the specification includes calculations with methods such as Jensen-Shannon divergence. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as mathematical relationships/calculations, then it falls within the "Mathematical Concepts" grouping of abstract ideas.

Accordingly, the claim recites an abstract idea. The machine learning model in claim 1 is just applying generic computer components to the recited abstract limitations. The recitation of generic computer components in a claim does not necessarily preclude that claim from reciting an abstract idea. Claims 14 and 15 are also abstract for similar reasons. (Step 2A-Prong 1: YES. The claims recite an abstract idea)

The limitations are considered together as a single abstract idea for Step 2A Prong Two and Step 2B rather than as a plurality of separate abstract ideas to be analyzed individually (see MPEP 2106.04).

Step 2A - Prong 2

This judicial exception is not integrated into a practical application.
In particular, the claims recite the additional elements of:

Claim 1: computer-implemented (preamble); machine learning model
Claim 14: claim 1 additional elements; non-transitory computer-readable storage medium
Claim 15: claim 14 additional elements; data processing device comprising one or more processors

The computer hardware/software is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The machine learning model is recited in conjunction with applying the model to detect fraudulent activity, monitoring/determining feature drift, and re-training the model. However, these elements are generic machine learning operations and are described in the specification at a high level of generality which does not convey a technical improvement in machine learning models to one of ordinary skill in the art. Thus, the computer hardware/software, even in conjunction with the claimed machine learning features, is still merely used as a tool to implement the abstract idea.

Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are at a high level of generality. Therefore, claims 1, 14, and 15 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application)

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Applicant's specification pg. 18 about implementation using general purpose or special purpose computing devices and MPEP 2106.05(f), where applying a computer as a tool is not indicative of significantly more.

Furthermore, no additional element or combination of elements is other than what is well-understood, routine, conventional activity in the field. The additional elements simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d). For example, it is well-understood, routine, conventional activity to monitor drift in machine learning models.[1][2] Accordingly, these additional elements do not change the outcome of the analysis, when considered separately and as an ordered combination. Thus, claims 1, 14, and 15 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more)

Dependent Claims

Dependent claims 2 and 4-21 further define the abstract idea that is present in independent claims 1 and 15. The claims correspond to "Certain Methods of Organizing Human Activity" and "Mathematical Concepts" and recite an abstract idea for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea without significantly more.
Thus, claims 1-2 and 4-21 are not patent-eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-8, 12, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Walters (US 2020/0012900 A1) in view of Hines (US 2022/0383038 A1).

Regarding claims 1, 14, and 15, Walters discloses a computer-implemented method, associated computer-readable medium, and associated data processing device, for monitoring a machine learning model for enabling identification of fraudulent activity in a network, and providing corrective actions, the method comprising: applying a machine learning model to a dataset relating to activity in a network, the dataset comprising a plurality of features as inputs into the machine learning model, the machine learning model having been trained on a training dataset to identify fraudulent activity in a network (para.
0068); monitoring data to determine drift from an expected value indicating incorrect identification of whether activity is fraudulent (see para. 0191, 0194, 0196), wherein the step of monitoring comprises: determining a current data metric at a current time (see para. 0191, 0194, 0196); comparing the determined current data metric to a reference data metric (see para. 0191, 0194, 0196); determining an unexpected discrepancy between the determined current data metric and the reference data metric that indicates feature drift (see para. 0191, 0194, 0196); and taking corrective action based on determining the unexpected discrepancy between the determined metric and the reference data metric that indicates feature drift (see para. 0197), wherein the corrective action includes re-training the machine learning model (see para. 0197).

Walters does not explicitly disclose, but Hines teaches comparing feature distributions against reference feature distributions determined from the training dataset to determine feature data drift and retraining based on an updated training dataset that takes into account the drift (see para. 0030-0031, 0034-0035, wherein the representations/embeddings are learned feature vectors of the model inputs). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method, product, and device of Walters to include the feature of Hines. One of ordinary skill in the art would have been motivated to make the modification to improve performance of the model and the device using the model (see Hines, para. 0028).

Regarding claims 4 and 16, Walters discloses applying the re-trained machine learning model to a dataset relating to activity in a network; and identifying a fraudulent activity in a network from the re-trained machine learning model (see para. 0068, wherein the re-trained model is used for the same purpose).
Regarding claims 5 and 17, Walters discloses generating an alert that incorrect identification of whether activity is fraudulent has occurred based on the determining of the unexpected discrepancy (see para. 0068).

Regarding claims 6 and 18, Walters discloses taking corrective action only when the unexpected discrepancy is above or below a threshold value (see para. 0196).

Regarding claims 7 and 19, Hines teaches wherein the step of monitoring a feature comprises: tracking a change in the feature distribution for the feature over time through performing the step of: determining a feature distribution for the feature at various points in time and comparing the determined feature distribution at each point in time to the reference feature distribution for the feature to generate a comparison value at each time; and wherein the step of determining an unexpected discrepancy between the determined feature distribution and the reference feature distribution that indicates drift further comprises: comparing the comparison value over time to determine change in the feature distribution indicating drift (see para. 0030-0035).

Regarding claim 8, Walters discloses wherein the various points in time are daily (see para. 0192).

Regarding claim 12, Walters discloses wherein the network is a financial network, and wherein the activity is payment transactions and the fraudulent activity is a fraudulent payment transaction (see para. 0068).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Walters (US 2020/0012900 A1) in view of Hines (US 2022/0383038 A1), further in view of Badawy (US 11,295,241 B1).

Regarding claim 2, Walters does not explicitly disclose, but Badawy teaches wherein the step of comparing comprises using a Jensen-Shannon divergence calculated based on the determined feature distribution and the reference feature distribution (see col. 4, ll. 4-22).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Walters to include the feature taught by Badawy. One of ordinary skill in the art would have been motivated to make the modification to determine the drift measure using existing drift detection models (see Badawy, col. 4, ll. 4-22).

Claims 9, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Walters (US 2020/0012900 A1) in view of Hines (US 2022/0383038 A1), further in view of Liu (US 8,326,575 B1).

Regarding claims 9 and 20, Walters does not explicitly disclose, but Liu teaches wherein the reference feature distribution determined from the training dataset is determined by: arranging data from the training dataset relating to the feature in a series of buckets, each bucket representing a range of values for such feature in the training dataset (see col. 1, Table 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Walters to include the feature of Liu. One of ordinary skill in the art would have been motivated to make the modification to characterize a change between a typical model score distribution based on a typical development data set and the model score distribution based on a typical validation data set (see Liu, col. 1, ll. 41-46).

Regarding claim 11, Walters does not explicitly disclose, but Liu teaches wherein the determined feature distribution is determined by: arranging data from the dataset relating to the feature in a series of buckets, each bucket representing a range of values for such feature (see col. 1, Table 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Walters to include the feature of Liu.
One of ordinary skill in the art would have been motivated to make the modification to characterize a change between a typical model score distribution based on a typical development data set and the model score distribution based on a typical validation data set (see Liu, col. 1, ll. 41-46).

Claims 10 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Walters (US 2020/0012900 A1) in view of Hines (US 2022/0383038 A1), further in view of Liu (US 8,326,575 B1), further in view of Axelrod (US 2017/0231521 A1).

Regarding claims 10 and 21, Liu teaches arranging remaining data equally between remaining buckets of the series of buckets (see Liu, col. 1, Table 1). As discussed above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Walters to include the feature of Liu. One of ordinary skill in the art would have been motivated to make the modification to characterize a change between a typical model score distribution based on a typical development data set and the model score distribution based on a typical validation data set (see Liu, col. 1, ll. 41-46).

Walters/Liu do not explicitly disclose, but Axelrod teaches wherein the step of arranging data further comprises arranging the data below the 2nd percentile in a first bucket, and the data above the 98th percentile in a second bucket (see para. 0012). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Walters further to include the feature taught by Axelrod. One of ordinary skill in the art would have been motivated to make the modification to identify data points in extreme percentiles (see Axelrod, para. 0012).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Walters (US 2020/0012900 A1) in view of Hines (US 2022/0383038 A1), further in view of Sampaio (US 2022/0222670 A1).
Regarding claim 13, Walters does not explicitly disclose, but Sampaio teaches wherein the plurality of features comprise at least one of: number of transactions from or to an account in a period of time, time of transaction, characteristic of account to which a transaction is made (see para. 0034). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Walters to include the feature of Sampaio. One of ordinary skill in the art would have been motivated to make the modification because these are profile features used in fraud detection (see Sampaio, para. 0034).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Hearty (US 2022/0172215 A1) discloses methods and systems of providing fraud prediction services. One method includes receiving a request from a resource provider network and generating a set of features associated with the request. The method also includes accessing a fraud prediction model from a model database and applying the fraud prediction model to the set of features. The method also includes determining, with an electronic processor, a fraud prediction for the request based on the application of the fraud prediction model to the set of features. The method also includes generating and transmitting, with the electronic processor, a response to the request, the response including the fraud prediction.

Dwivedi (US 2020/0364551 A1) discloses a computer system programmed to assemble a plurality of synthetic datasets and blend those synthetic datasets into a synthesized dataset. An evaluation is then performed to determine whether an existing model should be associated with the synthesized dataset or a new model should be trained from an existing model using the synthesized dataset.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC T WONG whose telephone number is (571)270-3405. The examiner can normally be reached 9am-5pm M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael W Anderson, can be reached at 571-270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC T WONG/
Primary Examiner, Art Unit 3693

[1] US 2022/0027749, "MACHINE LEARNING MODEL MONITORING" - [0013]: For example, conventional monitors may include bias detection monitors that detect whether a machine learning model processes some datasets in a way that does not reflect accurate data and/or true correlations (e.g., such that the machine learning model has a bias either for or against some portions of this data and/or some determinations that are related to this data). For another example, conventional monitors may include drift detection monitors that detect a change in one or more relationships between input data and output data over time. Conventional systems may have these and many other monitors that work independently and/or together to track and analyze the algorithms and performance of models over time. In some examples, a conventional system may include a plurality of monitors that provide data on the performance of a model to determine a "healthiness" of this model (e.g., where a healthiness of a machine learning model relates to an accuracy, repeatability, and/or reliability with which the machine learning model characterizes and/or utilizes data).

[2] US 2025/0053618, "NODE AND METHODS PERFORMED THEREBY FOR HANDLING DRIFT IN DATA" - Fig. 1; [0040]: Although drift detection has been a widely studied research problem, conventional drift detectors are run on feature(s), target(s) or model error metric(s) independently, and hence may not be able to capture drift in scenarios where such co-relations may exist. Embodiments herein may specifically focus on the occurrence of drift in such network management scenarios. Of particular interest may be cases where the drift in feature KPI(s) at a given time instant may be dependent on the drift in target KPI(s) of previous time instants.
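The drift-monitoring mechanism at the center of the claims and the cited art (arranging a feature's values into buckets representing value ranges, comparing the current feature distribution to a reference distribution derived from the training set via Jensen-Shannon divergence, and re-training only when the divergence indicates drift) can be sketched in a few lines. This is an illustrative sketch only, not code from the application or any cited reference; the function names, the bucket edges, and the 0.1 threshold are all hypothetical.

```python
# Illustrative sketch of bucketed-distribution drift detection with
# Jensen-Shannon divergence. All names and thresholds are hypothetical.
import math

def bucket_distribution(values, edges):
    """Arrange values into buckets bounded by `edges`; return normalized counts.

    With n edges there are n+1 buckets: below the first edge, between
    consecutive edges, and above the last edge.
    """
    counts = [0] * (len(edges) + 1)
    for v in values:
        i = sum(1 for e in edges if v > e)  # index of the bucket v falls in
        counts[i] += 1
    total = len(values)
    return [c / total for c in counts]

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log((x + eps) / (y + eps)) for x, y in zip(a, b))
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def needs_retraining(train_values, live_values, edges, threshold=0.1):
    """Flag re-training only when the live distribution has drifted from
    the reference distribution computed from the training dataset."""
    reference = bucket_distribution(train_values, edges)
    current = bucket_distribution(live_values, edges)
    return js_divergence(reference, current) > threshold
```

In this pattern, the reference distribution is computed once from the training dataset and the check runs periodically (e.g., daily, per the claim 8 discussion), so re-training is triggered only on detected drift rather than on a fixed schedule.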

Prosecution Timeline

Sep 04, 2024
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103
Nov 05, 2025
Interview Requested
Nov 12, 2025
Applicant Interview (Telephonic)
Nov 13, 2025
Examiner Interview Summary
Dec 15, 2025
Response Filed
Mar 17, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561687: Decentralized Digital Identity Exchange for Fraud Detection
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12530721: COMPUTER SYSTEM AND A COMPUTERIZED METHOD FOR CENTRAL COUNTERPARTY LIMIT MANAGEMENT
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12469085: Optimized Inventory Analysis for Insurance Purposes
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12469086: SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR FACILITATING TREATMENT OF A VEHICLE DAMAGED IN A CRASH
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12423672: AUTHENTICATION SYSTEMS AND METHODS USING LOCATION MATCHING
Granted Sep 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 51%
With Interview: 64% (+13.3%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
