Prosecution Insights
Last updated: April 19, 2026
Application No. 17/337,140

PREDICTING FUTURE EVENTS OF PREDETERMINED DURATION USING ADAPTIVELY TRAINED ARTIFICIAL-INTELLIGENCE PROCESSES

Status: Non-Final Office Action (§101, §103, double patenting)
Filed: Jun 02, 2021
Examiner: RIFKIN, BEN M
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Toronto-Dominion Bank
OA Round: 3 (Non-Final)

Grant Probability: 44% (Moderate); 59% with interview
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 12m
Examiner Intelligence

Career Allow Rate: 44% (139 granted / 317 resolved; -11.2% vs TC avg)
Interview Lift: +15.6% (strong), measured across resolved cases with an interview
Typical Timeline: 4y 12m average prosecution; 38 applications currently pending
Career History: 355 total applications across all art units
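The interview figures above are simple rate arithmetic. As a sketch (the split of the 317 resolved cases into interviewed vs. non-interviewed groups is not given on this page, so the with-interview rate is reconstructed from the reported lift rather than from raw counts):

```python
# Reconstructing the dashboard's interview figures from its reported numbers.
# Only the grant counts (139 / 317) and the +15.6% lift appear on the page;
# per-group case counts are not shown.

granted, resolved = 139, 317
career_allow_rate = granted / resolved  # ~0.438, displayed as "44%"

interview_lift = 0.156  # reported "+15.6%"
with_interview_rate = career_allow_rate + interview_lift  # ~0.594, "59% With Interview"
```

Under this reading, the "59% With Interview" headline is the career allow rate plus the reported lift, rounded to whole percentage points.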

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates. Figures based on career data from 317 resolved cases.
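The "vs TC avg" deltas can be inverted to recover the implied Tech Center averages, assuming each delta is simply the examiner's rate minus the TC average (an assumption; the page does not define the delta):

```python
# statute: (examiner's rate %, reported delta vs TC avg %), as shown above
stats = {
    "101": (21.8, -18.2),
    "103": (42.8, +2.8),
    "102": (7.8, -32.2),
    "112": (18.1, -21.9),
}

# If delta = examiner_rate - tc_average, then tc_average = examiner_rate - delta.
implied_tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
```

Each implied average comes out to 40.0%, which suggests the page benchmarks every statute against a single rounded Tech Center figure rather than per-statute averages.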

Office Action

Grounds of rejection: §101, §103, double patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, Application No. 17/337,140, has a total of 22 claims pending, of which claims 10 and 21 have been cancelled.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-9, 11-20, and 22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 of copending Application No. 17528362 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because each of the limitations of the pending claims can be met by the claims of 17528362. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim chart (instant application vs. reference application 17528362):

Instant application: An apparatus comprising
Reference (17528362), claim 1: An apparatus, comprising:

Instant application: a memory storing instructions
Reference (17528362), claim 1: a memory storing instructions

Instant application: a communications interface;
Reference (17528362), claim 1: a communications interface

Instant application: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:
Reference (17528362), claim 1: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:

Instant application: receive an identifier of a customer from a computing system via the communications interface, and based on the identifier, obtain, from the memory, elements of first interaction data that characterize the customer during an extraction temporal interval
Reference (17528362), claim 1: an identifier associated with a customer from a computing system, and based on the received identifier, obtain, from the memory, first elements of consolidated data associated with a first temporal interval and with the received identifier

Instant application: generate an input dataset based on elements of interaction data associated with the identifier and the extraction interval
Reference (17528362), claim 1: generate a first input dataset based on elements of first interaction data associated with a first temporal interval
Examiner's comment: Both references deal with time-based data; here, the temporal interval is the same as the extraction interval, as both are defined over time.

Instant application: perform operations that apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval;
Reference (17528362), claim 1: based on an application of a trained first artificial intelligence process to the first input dataset, generate output data representative of a predicted likelihood of an occurrence of each of a plurality of target events during a second temporal interval, the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval
Examiner's comment: The use of multiple events versus a single event, and the differing references to various time intervals within the target interval, are obvious variations of the same idea. The only differences here are that the first interval comes before the second in the reference application, there is slightly more detail as to the timing of events, and a buffer interval is used. As this all deals with time, a gap of time between events would be common when looking at different time points.

Instant application: transmit the identifier and at least a portion of the generated output data to the computing system via the communications interface, the computing system being configured to perform operations that obtain, from a data repository, second interaction data associated with the customer and the event based on the identifier and that modify the second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval.
Reference (17528362), claim 1: transmit at least a portion of the output data to a computing system via the communications interface, the computing system being configured to generate, based on the portion of the output data, notification data associated with the predicted likelihood of the occurrence of at least one of the target events and to provision the notification data to a device.
Examiner's comment: Here, the operations would be providing a notification to the device, with the notification being the proposed modification.

As shown above, each limitation of the instant application is met by reference application 17528362, and the claim is therefore provisionally rejected under obviousness-type double patenting. As per claims 2-9, 11-20, and 22, these claims are rejected for reasons similar to claim 1 over claims 1-11 of reference application 17528362.

Double Patenting

Claims 1-9, 11-20, and 22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 of copending Application No. 17726184 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because each of the limitations of the pending claims can be met by the claims of 17726184. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim chart (instant application vs. reference application 17726184):

Instant application: An apparatus comprising
Reference (17726184), claim 1: An apparatus, comprising:

Instant application: a memory storing instructions
Reference (17726184), claim 1: a memory storing instructions

Instant application: a communications interface;
Reference (17726184), claim 1: a communications interface

Instant application: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:
Reference (17726184), claim 1: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:

Instant application: receive an identifier of a customer from a computing system via the communications interface, and based on the identifier, obtain, from the memory, elements of first interaction data that characterize the customer during an extraction temporal interval
Reference (17726184), claim 1: receive, via the communications interface, a first identifier associated with a customer from a computing system, and based on the first identifier, obtain elements of first interaction data associated with the customer from a portion of a data repository

Instant application: generate an input dataset based on elements of interaction data associated with the identifier and the extraction interval
Reference (17726184), claim 1: generate an input dataset based on elements of first interaction data … the elements of first interaction data characterizing an occurrence of a first event during a first temporal interval
Examiner's comment: Both references deal with time-based data; here, the temporal interval is the same as the extraction interval, as both are defined over time.

Instant application: perform operations that apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval;
Reference (17726184), claim 1: apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of a second event during a second temporal interval, the second event being associated with the first event, and the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval
Examiner's comment: The use of multiple events versus a single event, and the differing references to various time intervals within the target interval, are obvious variations of the same idea. The only differences here are that the first interval comes before the second in the reference application, there are discussions of when particular events happen within the time period, and a buffer interval is used. As this all deals with time, a gap of time between events would be common when looking at different time points.

Instant application: transmit the identifier and at least a portion of the generated output data to the computing system via the communications interface, the computing system being configured to perform operations that obtain, from a data repository, second interaction data associated with the customer and the event based on the identifier and that modify the second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval.
Reference (17726184), claim 1: transmit… at least a portion of the output data to a computing system via the communications interface, the computing system being configured to perform one or more operations associated with the first identifier and in accordance with the portion of the output data, and the one or more operations being associated with a reduction in the predicted likelihood of the occurrence of the second event during the second temporal interval.

As shown above, each limitation of the instant application is met by reference application 17726184, and the claim is therefore provisionally rejected under obviousness-type double patenting. As per claims 2-9, 11-20, and 22, these claims are rejected for reasons similar to claim 1 over claims 1-11 of reference application 17726184.

Double Patenting

Claims 1-9, 11-20, and 22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of copending Application No. 17681215 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because each of the limitations of the pending claims can be met by the claims of 17681215. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim chart (instant application vs. reference application 17681215):

Instant application: An apparatus comprising
Reference (17681215), claim 1: An apparatus, comprising:

Instant application: a memory storing instructions
Reference (17681215), claim 1: a memory storing instructions

Instant application: a communications interface;
Reference (17681215), claim 1: a communications interface

Instant application: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:
Reference (17681215), claim 1: at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to:

Instant application: receive an identifier of a customer from a computing system via the communications interface, and based on the identifier, obtain, from the memory, elements of first interaction data that characterize the customer during an extraction temporal interval
Reference (17681215), claim 6: wherein the first interaction data comprises a customer identifier associated with a customer and a temporal identifier associated with the first temporal interval

Instant application: generate an input dataset based on elements of interaction data associated with the identifier and the extraction interval
Reference (17681215), claim 1: generate an input dataset based on elements of first interaction data associated with a first temporal interval
Examiner's comment: Both references deal with time-based data; here, the temporal interval is the same as the extraction interval, as both are defined over time.

Instant application: perform operations that apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval;
Reference (17681215), claim 1: apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data indicative of a predicted likelihood of an occurrence of each of a plurality of targeted events during a second temporal interval, the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval;
Examiner's comment: The use of multiple events versus a single event, and the differing references to various time intervals within the target interval, are obvious variations of the same idea. The only differences here are that the first interval comes before the second in the reference application, there are various descriptions of where events occur within the time period, and a buffer interval is used. As this all deals with time, a gap of time between events would be common when looking at different time points.

Instant application: transmit the identifier and at least a portion of the generated output data to the computing system via the communications interface, the computing system being configured to perform operations that obtain, from a data repository, second interaction data associated with the customer and the event based on the identifier and that modify the second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval.
Reference (17681215), claim 1: transmit the output data to a computing system via the communications interface, the computing system being configured to transmit digital content to the device based on at least a portion of the output data.
Examiner's comment: Here, the digital content would be an example of the potential operations one could perform in response to making predictions about a user, including modifications to data for the system.

As shown above, each limitation of the instant application is met by reference application 17681215, and the claim is therefore provisionally rejected under obviousness-type double patenting. As per claims 2-9, 11-20, and 22, these claims are rejected for reasons similar to claim 1 over claims 1-12 of reference application 17681215.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9, 11-20, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 1 is a machine-type claim. Claim 12 is a process-type claim. Claim 20 is a manufacture-type claim. Therefore, the pending claims are directed to either a process, machine, manufacture, or composition of matter.
As per claim 1:

2A Prong 1: "Generate an input dataset based on the elements of first interaction data associated with the identifier and the extraction interval": a user mentally or with pencil and paper assembles the interaction data based on the identifier and the associated extraction interval. "Perform operations that apply a … process to the input dataset, and based on the application of the … process to the input dataset, generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval, and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval": the user mentally or with pencil and paper uses a process to look at previous information and then make a prediction about future events. "Perform operations that modify second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval": the user, mentally or with pencil and paper, takes an action based upon the prediction they made in order to make the occurrence of the event less likely.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: "a memory", "a communications interface", "at least one processor", "the memory", "the communications interface", "the at least one processor", "a computing system", "the computing system" (mere instructions to apply the exception using a generic computer component); "a trained artificial intelligence process" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Examiner's note: the machine learning here is generic, off-the-shelf machine learning. Any training is inherently required in the use of a machine learning algorithm, and this claim has no detail or aspects that move this machine learning model beyond a generic, off-the-shelf machine learning model.); "receive the identifier of a customer from a computing system via the communications interface and based on the identifier, obtain, from the memory, elements of first interaction data that characterize the customer during an extraction temporal interval" and "transmit the identifier and at least a portion of the generated output data to the computing system via the communications interface, the computing system being configured to perform operations that obtain, from a data repository, second interaction data associated with the customer and the event based on the identifier…" (adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are those identified under 2A Prong 2: the generic computer components are mere instructions to apply the exception using a generic computer component; the "trained artificial intelligence process" amounts to mere instructions to implement an abstract idea on a computer (see MPEP 2106.05(f) and the Examiner's note above regarding generic, off-the-shelf machine learning); and, for the receiving and transmitting steps, MPEP 2106.05(d)(II) indicates that merely "receiving or transmitting data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed transmitting step is well-understood, routine, conventional activity is supported under Berkheimer.
As per claims 2-3 and 6-8: these claims contain additional generic machine learning aspects, generating aspects, and mental steps of determining, examining the data, and responding to the data, and are rejected similarly to claim 1. As per claims 4, 9, and 11: these contain additional mental steps of making a prediction and dealing with incoming data, and are rejected similarly to claim 1 above. As per claim 5: this denotes additional generic machine learning aspects and is rejected similarly to claim 1 above. As per claim 22: this claim contains additional generic computer hardware and mental steps relative to claim 1, and is rejected for reasons similar to claim 1.

As per claim 12:

2A Prong 1: "generating … an input dataset based on the elements of first interaction data associated with the identifier and the extraction interval": a user mentally or with pencil and paper assembles the interaction data based on the identifier and the associated extraction interval. "Performing operations … that apply a … process to the input dataset … based on the application of the … process to the input dataset, generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval": the user mentally or with pencil and paper uses a process to look at previous information and then make a prediction about future events.
"… modify the second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval": the user, mentally or with pencil and paper, takes an action based upon the prediction they made to reduce the likelihood of the event occurring in the second portion of the timeframe.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: "a computer", "at least one processor", "the at least one processor", "a computing system", "the computing system" (mere instructions to apply the exception using a generic computer component); "a trained artificial intelligence process" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Examiner's note: the machine learning here is generic, off-the-shelf machine learning. Any training is inherently required in the use of a machine learning algorithm, and this claim has no detail or aspects that move this machine learning model beyond a generic, off-the-shelf machine learning model.); "receiving an identifier of a customer from a computing system using the at least one processor and based on the identifier, obtaining, using the at least one processor, elements of first interaction data that characterize the customer during an extraction temporal interval from a data repository" and "transmitting…the identifier and at least a portion of the generated output data to a computing system via the communications interface, the computing system being configured to perform operations that obtain, from an additional data repository, second interaction data associated with the customer and the event based on the identifier…" (adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are those identified under 2A Prong 2: the generic computer components are mere instructions to apply the exception using a generic computer component; the "trained artificial intelligence process" amounts to mere instructions to implement an abstract idea on a computer (see MPEP 2106.05(f) and the Examiner's note above regarding generic, off-the-shelf machine learning); and, for the receiving and transmitting steps, MPEP 2106.05(d)(II) indicates that merely "transmitting data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed transmitting step is well-understood, routine, conventional activity is supported under Berkheimer.

As per claims 13-14 and 17-18: these claims contain additional generic machine learning aspects, generating aspects, and mental steps of determining, examining the data, and responding to the data, and are rejected similarly to claim 12. As per claims 15 and 19: these contain additional mental steps of making a prediction and dealing with incoming data, and are rejected similarly to claim 12 above. As per claim 16: this denotes additional generic machine learning aspects and is rejected similarly to claim 12 above.

As per claim 20:

2A Prong 1: "generating an input dataset based on the elements of first interaction data associated with the identifier and the extraction interval": a user mentally or with pencil and paper assembles the interaction data based on the identifier and the associated extraction interval.
“performing operations that apply a … process to the input dataset and based on an application of the … process to the input dataset, that generate output data representative of a predicted likelihood of (i) a non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval, the target interval being subsequent to the extraction interval, the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration within the second portion of the target interval” The user mentally or with pencil and paper uses a process to look at previous information and then make a prediction about future events. “… modify the second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the occurrence of the first event during the second portion of the target interval” The user, mentally or with pencil and paper, takes an action based upon the prediction they made to reduce the likelihood of the event in the second portion of the timeframe. 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: “tangible non-transitory computer-readable medium”, “at least one processor”, “the at least one processor”, “a computing system”, “the computing system” (mere instructions to apply the exception using a generic computer component); “a trained artificial intelligence process” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: The machine learning here is generic, off-the-shelf machine learning.
Any training is inherently required in the use of a machine learning algorithm, and this claim has no detail or aspects that elevate this machine learning model beyond a generic, off-the-shelf machine learning model). “receiving an identifier of a customer from a computing system, and based on the identifier, obtaining elements of first interaction data that characterize the customer during an extraction temporal interval from a data repository”, “transmitting the identifier and at least a portion of the generated output data to a computing system via the communications interface, the computing system being configured to perform operations that obtain, from an additional data repository, second interaction data associated with the customer and the event based on the identifier, and… ” (Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: “tangible non-transitory computer-readable medium”, “at least one processor”, “the at least one processor”, “a computing system”, “the computing system” (mere instructions to apply the exception using a generic computer component); “a trained artificial intelligence process” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: The machine learning here is generic, off-the-shelf machine learning. Any training is inherently required in the use of a machine learning algorithm, and this claim has no detail or aspects that elevate this machine learning model beyond a generic, off-the-shelf machine learning model).
“receiving an identifier of a customer from a computing system, and based on the identifier, obtaining elements of first interaction data that characterize the customer during an extraction temporal interval from a data repository”, “transmitting the identifier and at least a portion of the generated output data to a computing system via the communications interface, the computing system being configured to perform operations that obtain, from an additional data repository, second interaction data associated with the customer and the event based on the identifier, and… ” (MPEP 2106.05(d)(II) indicates that merely “transmitting data” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Accordingly, a conclusion that the claimed transmitting step is well-understood, routine, conventional activity is supported under Berkheimer).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claims 1-4, 8, 11-15, 19, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Dorai et al (US 20170116531 A1) in view of Guy et al (US 20050154664 A1), Zeng et al (“Using Predictive Analysis to Improve Invoice-to-Cash Collection”) and Lawrence et al (US 7389265 B2). As per claims 1, 12 and 20, Dorai discloses, “An apparatus comprising:” (Pg.3, particularly paragraph 0050; EN: this denotes the hardware to run the system). “A memory storing instructions” (Pg.3, particularly paragraph 0050; EN: this denotes the hardware to run the system). “a communications interface” (Pg.3, particularly paragraph 0050; EN: this denotes the hardware to run the system). “at least one processor coupled to the memory and the communications interface, the at least one processor configured to execute the instructions to” (Pg.3, particularly paragraph 0050; EN: this denotes the hardware to run the system). “receive an identifier of a customer” (pg.5, particularly paragraph 0073; EN: this denotes receiving information from the customer about their situation, which will include identifying the customer, as the system acknowledges them to be a customer of the mortgage servicer, which requires identification). “From a computing system via the communications interface” (Pg.3, particularly paragraph 0047; EN: this denotes clients connecting to the system). “and based on the identifier, obtain, from the memory, elements of first interaction data that characterize the customer during an extraction temporal interval” (pg.7, particularly paragraph 0117; EN: this denotes taking in customer data and using it to make predictions about the customer. The extraction interval is the time period during which the customer’s actions are being monitored). “generate an input dataset based on elements of interaction data …” (pg.5, particularly paragraph 0065-0066; EN: this denotes monitoring the customer’s behavior over time.
Here the extraction interval is the time period where the customer’s action is monitored, and the interaction data is whatever is being monitored). “perform operations that apply a trained artificial intelligence process to the input dataset” (pg.5, particularly paragraph 0078; EN: this denotes using a trained model to predict payment patterns). “based on an application of a trained artificial intelligence” (pg.5, particularly paragraph 0067; EN: this denotes using machine learning to perform the prediction). “process to the input dataset, generate output data representative of a predicted likelihood of …(ii) an occurrence of the first event… of the target interval” (Pg.5, particularly paragraph 0076; EN: this denotes looking for late payment patterns and the like. Here the predicted event is late payment). “the target interval being subsequent to the extraction interval” (Pg.5, particularly paragraph 0076; EN: The future events occur after the previous historical events used to predict the future events). “… modify second interaction data in accordance with the portion of the output data, the modification to the second interaction data reducing the predicted likelihood of the first event during the second portion of the target interval” (Pg.5, particularly paragraph 0070; EN: this denotes offering different reactions based upon what the system detects and using that to keep payments current). 
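As a purely illustrative sketch, not the applicant's actual implementation, the pipeline the rejection maps onto Dorai (extraction-interval features in, likelihood outputs out, followed by a mitigating modification) can be reduced to a few lines. Every name here is invented, the fixed logistic score is a placeholder for any generic trained model, and the 0.5 treatment threshold is an assumption for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    p_no_event_first_portion: float   # (i) non-occurrence during the first portion
    p_event_second_portion: float     # (ii) occurrence during the second portion

def predict(interaction_features):
    """Placeholder for a trained model: squashes an aggregate feature
    (e.g., a count of prior late payments) into a likelihood."""
    total = sum(interaction_features)
    score = total / (1.0 + total)
    return Prediction(p_no_event_first_portion=1.0 - score,
                      p_event_second_portion=score)

def apply_treatment(pred, threshold=0.5):
    """The 'modify the second interaction data' step: decide whether to
    trigger an intervention meant to reduce the predicted likelihood."""
    return pred.p_event_second_portion >= threshold
```

The sketch makes the examiner's point concrete: nothing in it depends on any particular model architecture, which is why the rejection treats the "trained artificial intelligence process" as generic.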
However, Dorai fails to explicitly disclose, “elements of first interaction data associated with the identifier and the extraction interval”, “predicted likelihood of (i) non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval”, “the second portion of the target interval being separated from the extraction interval by the first portion of the target interval, and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval”, and “Transmit the identifier and at least a portion of the generated output data to the computer system via the communications interface, the computing system being configured to perform operations that…” Guy discloses, “elements of first interaction data associated with the identifier and the extraction interval” (Pg.16, particularly paragraph 0181; EN: this denotes credit reporting groups keeping only 36 months of delinquency histories. When combined with Dorai, this denotes having specific intervals of payment histories and the like for the system to consider). Zeng discloses, “predicted likelihood of (i) non-occurrence of a first event during a first portion of a target interval and (ii) an occurrence of the first event during a second portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: this denotes various intervals for late payment. An example of non-occurrence of a first event is not paying in the first 1-30 days (a first interval) and paying 31-60 days late (second interval), with the payment being the event).
“The second portion of the target interval being separated from the extraction interval by the first portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: This denotes the 31-60 days late payment interval after the 1-30 days late payment interval). “and the occurrence of the first event being associated with a temporal duration that exceeds a threshold temporal duration within the second portion of the target interval” (Pg.1045, particularly C1, first paragraph section 3; EN: this denotes the threshold being the payment being late by at least 31 days). Lawrence discloses, “Transmit the identifier and at least a portion of the generated output data to the computer system via the communications interface, the computing system being configured to perform operations that…” (C9, particularly L32-40; EN: this denotes the server/client process being something that can overlap. When combined with Dorai, this shows it would be obvious to one of ordinary skill in the art at the time of filing to allow actions to be performed server or client side, or both, as needed by the system). Dorai and Guy are analogous art because both involve mortgages. Before the effective filing date it would have been obvious to one skilled in the art of mortgages to combine the work of Dorai and Guy in order to look at particular intervals of payment histories for users. The motivation for doing so would be because “many credit reporting bureaus maintain a delinquency history of up to 36 months” (Guy, Pg.16, paragraph 0181) or, in the case of Dorai, to allow the system to target data that has been found to be relevant to determining people’s creditworthiness when it comes to mortgages and the like. Therefore before the effective filing date it would have been obvious to one skilled in the art of mortgages to combine the work of Dorai and Guy in order to look at particular intervals of payment histories for users.
Dorai and Zeng are analogous art because both involve payment prediction. Before the effective filing date it would have been obvious to one skilled in the art of payment prediction to combine the work of Dorai and Zeng in order to look at particular intervals of payment for users. The motivation for doing so would be to “build models for predicting the payment outcomes of newly created invoices, thus enabling customized collection actions tailored for each invoice or customer” (Zeng, Abstract) or, in the case of Dorai, to allow the system to consider late payment history of particular customers and predict time of payment in order to better respond to the customers. Therefore before the effective filing date it would have been obvious to one skilled in the art of payment prediction to combine the work of Dorai and Zeng in order to look at particular intervals of payment for users. Dorai and Lawrence are analogous art because both involve financial risk analysis. Before the effective filing date it would have been obvious to one skilled in the art of financial risk analysis to combine the work of Dorai and Lawrence in order to allow tasks of the Dorai reference to be performed client and server side as needed. The motivation for doing so would be that “the risk server 102 may, for example, be included within and/or as a part of a device including one or more of the user devices 106a-n” (Lawrence, C9, L32-40) or, in the case of Dorai, to allow the system to implement its steps where it is most convenient for the system to do so, whether that is client side or server side. Therefore before the effective filing date it would have been obvious to one skilled in the art of financial risk analysis to combine the work of Dorai and Lawrence in order to allow tasks of the Dorai reference to be performed client and server side as needed.
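For illustration only, the two-portion target interval as the rejection reads it onto Zeng can be made concrete in a short sketch. The 1-30 and 31-60 day boundaries are Zeng's example delinquency buckets, not claim limitations, and the function name is invented:

```python
def target_interval_portion(days_late):
    """Map a payment delay (in days) onto the two-portion target interval
    as characterized in the rejection, using Zeng's example buckets."""
    if 1 <= days_late <= 30:
        # Event occurs during the first portion of the target interval.
        return "first"
    if 31 <= days_late <= 60:
        # (i) non-occurrence during days 1-30, (ii) occurrence here: the
        # second portion, separated from the extraction interval by the
        # first portion.
        return "second"
    return None  # on time, or beyond the modeled target interval
```

A payment made 45 days late thus satisfies both prongs of the claimed likelihood: non-occurrence in the first portion and occurrence in the second.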
As per claims 2 and 13, Dorai discloses, “obtain (i) one or more parameters that characterize the trained artificial intelligence process” (Pg.5, particularly paragraph 0067; EN: this denotes using the data to train a machine learning algorithm. Here the parameters are the trained parameters of the machine learning algorithm). “and (ii) data that characterizes a composition of the input data” (pg.5, particularly paragraph 0066; EN: this denotes taking in relevant data from the input data in order to characterize the customer in relation to their mortgages or other financial packages). “generate the input dataset in accordance with the data that characterizes it” (Pg.5, particularly paragraph 0067; EN: this denotes using the data to train a machine learning algorithm). “apply the trained artificial intelligence process to the input dataset in accordance with the one or more parameters” (Pg.5, particularly paragraph 0067; EN: this denotes using the trained algorithm to make predictions about the customer). As per claims 3 and 14, Dorai discloses, “based on the data that characterizes the composition, perform operations that at least one of extract a first feature value from the interaction data or compute a second feature value based on the first feature value;” (Pg.5, particularly paragraph 0069; EN: this denotes looking at different methods to predict for the customers. One such feature is changes in behavior among the data). “Generate the input dataset based on at least one of the extracted first feature value or the computed second feature value” (Pg.5, particularly paragraph 0067; EN: this denotes using the data to train a machine learning algorithm). As per claims 4 and 15, Dorai discloses, “wherein the output data comprises a numerical score indicative of the predicted likelihood of the occurrence of the first event…” (Pg.5, particularly paragraph 0067; EN: this denotes the outputs having confidences associated with the various events).
Zeng discloses, “during the second portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: this denotes various intervals for late payment. An example of non-occurrence of a first event is not paying in the first 1-30 days (a first interval) and paying 31-60 days late (second interval), with the payment being the event). As per claim 8, Dorai discloses, “the first interaction data is associated with a plurality of customers” (pg.1, particularly paragraph 0016; EN: this denotes the system being able to work with multiple different people). “The at least one processor is further configured to execute the instructions to” (Pg.3, particularly paragraph 0050; EN: this denotes the hardware to run the system). “generate input datasets based on the interaction data, each of the input datasets being associated with a corresponding one of the customers” (pg.5, particularly paragraph 0065-0066; EN: this denotes monitoring the customer’s behavior over time. Here the extraction interval is the time period where the customer’s action is monitored, and the interaction data is whatever is being monitored). “Based on the application of the trained artificial intelligence” (pg.5, particularly paragraph 0067; EN: this denotes using machine learning to perform the prediction). “process to each of the input datasets and generate a corresponding element of the output data” (pg.5, particularly paragraph 0066; EN: this denotes making future predictions such as missed payments or disengagement continuing based upon life events). “representative of a predicted likelihood of a corresponding occurrence” (Pg.5, particularly paragraph 0067; EN: this denotes the outputs having confidences associated with the various events). “of the first event during the first portion of the target interval” (Pg.5, particularly paragraph 0065-0066; EN: the target interval here is the time after a detected life event).
“each of the elements of the output data includes a numerical score indicative of the predictive likelihood of the corresponding occurrence of the first event for a corresponding one of the customers” (Pg.5, particularly paragraph 0067; EN: this denotes the outputs having confidences associated with the various events). As per claim 11, Dorai discloses, “wherein the computing system is further configured to perform operations that implement one or more treatment processes in accordance based on the portion of the output data, the implementation of the one or more treatment processes reducing the predicted likelihood of the occurrence of the first event …” (Pg.5, particularly paragraph 0070; EN: this denotes taking action to prevent the delinquency when potential delinquency is detected). Zeng discloses, “during the second portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: this denotes various intervals for late payment. An example of non-occurrence of a first event is not paying in the first 1-30 days (a first interval) and paying 31-60 days late (second interval), with the payment being the event). As per claim 19, Dorai discloses, “the computing system is further configured to perform operations that implement one or more treatment processes in accordance based on the portion of the output data, the implementation of the one or more treatment processes reducing the predicted likelihood of the occurrence of the first event …” (Pg.5, particularly paragraph 0070; EN: this denotes taking action to prevent the delinquency when potential delinquency is detected). Zeng discloses, “during the first portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: this denotes various intervals for late payment. An example of non-occurrence of a first event is not paying in the first 1-30 days (a first interval) and paying 31-60 days late (second interval), with the payment being the event).
As per claim 22, Dorai discloses, “wherein the at least one processor is further configured to execute the instructions to perform operations… that apply the trained artificial intelligence process to the input dataset” (pg.5, particularly paragraph 0078; EN: this denotes using a trained model to predict payment patterns). “and that generate output data representative of the predicted likelihood of … (ii) an occurrence of the first event….” (Pg.5, particularly paragraph 0076; EN: this denotes looking for late payment patterns and the like. Here the predicted event is late payment). Zeng discloses, “(i) the non-occurrence of the first event during the first portion of the target interval and (ii) an occurrence of the first event during a second portion of the target interval” (pg.1045, particularly C1, first paragraph section 3; EN: this denotes various intervals for late payment. An example of non-occurrence of a first event is not paying in the first 1-30 days (a first interval) and paying 31-60 days late (second interval), with the payment being the event). Lawrence discloses, “… across a plurality of distributed computing components interconnected across a communications network” (C7, particularly L60-68; C8, particularly L1-3; EN: this denotes distributing processing across multiple servers as needed). Dorai and Lawrence fail to explicitly disclose, “in parallel”; however, the Examiner takes official notice that one of ordinary skill in the art of computer processing at the time of filing would know that allowing processing to proceed in parallel across distributed devices is a well-known process of speeding up or efficiently performing data processing in order to increase the speed and/or efficiency of a system running a computer program.

Claim Rejections - 35 USC § 103

Claims 5 and 16 are rejected under 35 U.S.C.
103 as being unpatentable over Dorai et al (US 20170116531 A1) in view of Guy et al (US 20050154664 A1), Zeng et al (“Using Predictive Analysis to Improve Invoice-to-Cash Collection”) and Lawrence et al (US 7389265 B2) and further in view of Ye et al (“Stochastic Gradient Boosted Distributed Decision Trees”). As per claims 5 and 16, Dorai discloses, “wherein the trained artificial intelligence process comprises a trained … process” (Pg.5, particularly paragraph 0067; EN: this denotes using the data to train a machine learning algorithm). However, Dorai fails to explicitly disclose, “gradient boosted decision tree.” Ye discloses, “gradient boosted decision tree” (abstract; EN: this denotes the use of a gradient boosted decision tree in machine learning). Dorai and Ye are analogous art because both involve machine learning. Before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Ye in order to make use of gradient boosted decision trees. The motivation for doing so would be because they are “adaptable, easy to interpret, and produce[] highly accurate models” (Ye, Abstract). Therefore before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Ye in order to make use of gradient boosted decision trees.

Claim Rejections - 35 USC § 103

Claims 6-7 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Dorai et al (US 20170116531 A1) in view of Guy et al (US 20050154664 A1), Zeng et al (“Using Predictive Analysis to Improve Invoice-to-Cash Collection”) and Lawrence et al (US 7389265 B2) and further in view of Holzheimer et al (US 20200327821 A1).
As per claims 6 and 17, Dorai discloses, “obtain elements of … interaction data, each of the elements of the … interaction data comprising a temporal identifier associated with a temporal interval” (pg.1, particularly paragraph 0006-0009; EN: this denotes the data being kept in relation to a person over time including time series data about that person. Here the temporal identifier of that time series is the customer’s profile). However, Dorai fails to explicitly disclose, “additional interaction data” and “based on the temporal identifiers, determine that a first subset of the elements of the additional interaction data are associated with a prior training interval, and that a second subset of the elements of the additional interaction data are associated with a prior validation interval”, “generating a plurality of training datasets based on corresponding portions of the first subset and perform operations that train the artificial intelligence process based on the training sets” Holzheimer discloses, “additional interaction data” (Pg.5, particularly paragraph 0128; EN: this denotes taking in additional data from the user and using it to update the machine learning model). “based on the temporal identifiers, determine that a first subset of the elements of the additional interaction data are associated with a prior training interval, and that a second subset of the elements of the additional interaction data are associated with a prior validation interval” (Pg.6, particularly paragraph 0135; EN: this denotes breaking up the data into test and training data sets related to the user profile. Here the test dataset is the validation data). “generate a plurality of training datasets based on corresponding portions of the first subset and perform operations that train the artificial intelligence process based on the training datasets” (pg.6, particularly paragraph 0136; EN: this denotes using the newly created training data to train).
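The first-subset/second-subset partition recited in claims 6-7 and mapped to Holzheimer amounts to splitting elements on their temporal identifiers. As an illustrative sketch only, with an invented cutoff date, tuple layout, and function name:

```python
from datetime import date

def split_by_temporal_identifier(elements, cutoff):
    """Partition (temporal_identifier, payload) elements: identifiers before
    the cutoff fall in the prior training interval; the rest fall in the
    prior validation interval."""
    training = [e for e in elements if e[0] < cutoff]
    validation = [e for e in elements if e[0] >= cutoff]
    return training, validation

# Hypothetical interaction-data elements keyed by date.
elements = [(date(2020, 1, 15), "txn-a"),
            (date(2020, 6, 1), "txn-b"),
            (date(2021, 2, 3), "txn-c")]
training, validation = split_by_temporal_identifier(elements, date(2021, 1, 1))
```

Splitting on time rather than at random is what distinguishes the claimed "prior training interval" and "prior validation interval" from an ordinary shuffled train/test split.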
Dorai and Holzheimer are analogous art because both involve machine learning. Before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Holzheimer in order to update models with new data. The motivation for doing so would be to “develop updated demographic preference patterns” (Holzheimer, Pg.5, paragraph 0128) or in the case of Dorai, allow the system to keep their prediction system updated as new data presents itself related to their customers. Therefore before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Holzheimer in order to update models with new data. As per claims 7 and 18, Holzheimer discloses, “generate a plurality of validation datasets based on portions of the second subset” (Pg.6, particularly paragraph 0135; EN: this denotes breaking up the data into test and training data sets related to the user profile. Here the test dataset is the validation data). “perform the operations that apply the trained artificial intelligence process to the plurality of validation datasets, and generate additional elements of output data based on the application of the trained artificial intelligence process to the plurality of validation datasets” (pg.6, particularly paragraph 0137; EN: this denotes using the test sets to evaluate the training of the system). “compute one or more validation metrics based on the additional elements of output data” (pg.6, particularly paragraph 0137; EN: this denotes using the test sets to evaluate the training of the system). “based on determined consistency between the one or more validation metrics… , validate the trained artificial intelligence process” (Pg.6, particularly paragraph 0137; EN: this denotes using the test sets to evaluate the training of the system). 
While the Holzheimer reference fails to explicitly disclose “consistency between the one or more validation metrics and a threshold condition,” this is nonetheless rendered obvious. The Examiner takes Official Notice that it would be obvious to one of ordinary skill in the art of machine learning at the time of filing that when using test data to test a machine learning algorithm, the goal would be to reach a certain level of accuracy in order to confirm that the machine learning algorithm is properly trained, as this would allow the system to be confident their machine learning algorithm will respond with a required level of accuracy in its predictions.

Claim Rejections - 35 USC § 103

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Dorai et al (US 20170116531 A1) in view of Guy et al (US 20050154664 A1), Zeng et al (“Using Predictive Analysis to Improve Invoice-to-Cash Collection”) and Lawrence et al (US 7389265 B2) and further in view of Kotsiantis et al (“Data Preprocessing for Supervised Learning”). As per claim 9, Dorai fails to explicitly disclose, “Perform operations that filter the first interaction data in accordance with one or more filtration criteria and generate the input dataset based on at least a portion of the filtered interaction data.” Kotsiantis discloses, “Perform operations that filter the first interaction data in accordance with one or more filtration criteria and generate the input dataset based on at least a portion of the filtered interaction data” (The entire document, but particularly paragraphs 111-112, section II, and Table 1; EN: this denotes various methods of filtering the data in order to remove illegal values, misspellings, and other errors in training data). Dorai and Kotsiantis are analogous art because both involve machine learning.
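The claim 9 filtration step, as read onto Kotsiantis's preprocessing survey, can be sketched as applying a list of predicates before dataset generation. This is an illustration only; the criteria below are invented examples of the "illegal values" Kotsiantis discusses, and the record layout is hypothetical:

```python
def filter_interaction_data(records, criteria):
    """Keep only records that satisfy every filtration criterion."""
    return [r for r in records if all(passes(r) for passes in criteria)]

# Hypothetical criteria: drop illegal (negative) amounts and orphaned records.
criteria = [
    lambda r: r.get("amount", -1.0) >= 0.0,
    lambda r: r.get("customer_id") is not None,
]
cleaned = filter_interaction_data(
    [{"customer_id": 7, "amount": 100.0},
     {"customer_id": None, "amount": 50.0},
     {"customer_id": 8, "amount": -5.0}],
    criteria,
)
```

Only the first record survives both criteria; the input dataset would then be generated from the cleaned portion, as the claim recites.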
Before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Kotsiantis in order to filter data for machine learning. The motivation for doing so would be to use “well known algorithms for each step of data-preprocessing so that one achieves the best performance for their dataset” (Kotsiantis, Abstract) or, in the case of Dorai, to allow the system to correct any errors or other problems with the input to optimize it for training the machine learning algorithm. Therefore before the effective filing date it would have been obvious to one skilled in the art of machine learning to combine the work of Dorai and Kotsiantis in order to filter data for machine learning.

Response to Arguments

On pg. 22, the Applicant argues, regarding the rejection of the independent claims under 35 U.S.C. § 101: As an initial point, Applicant submits that the Office’s summary of the elements recited by Applicant's independent claims mischaracterizes the actual language recited by the claims and is inconsistent with the Office’s current examination processes. Indeed, Applicant's claims do not recite “applying a. . . process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of a first event during a first portion of a target interval,” as alleged by the Office; instead, Applicant's independent claims, in unamended form, recite “apply a trained artificial intelligence process to the input dataset, and based on the application of the trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of a first event during a first portion of a target interval.” Id. (emphases added).
The Office’s apparent abstraction of the “application of a trained artificial-intelligence process” from Applicant's independent claims during its analysis under Prong One of Revised Step 2A of the Alice/Mayo test finds no support within the Office’s current examination processes, and Applicant submits that the abstraction of the “application of a trained artificial-intelligence process,” which is not a mental process, from Applicant's independent claims is plainly inconsistent with these procedures, which require the Office to “[i]dentify the specific limitation(s) in the claim under examination (individually or in combination) that the examiner believes recites an abstract idea.” 2019 Guidance, 84 Fed. Reg. 4, p. 54. As the Office’s analysis under Prong One of Revised Step 2A of the Alice/Mayo test fails to rely on the specific limitations of Applicant's claims, the Office’s analysis is inconsistent with the Office’s current examination procedures and the rejection of Applicant's claims under 35 U.S.C. §101 is improper and should be withdrawn. In response, the Examiner maintains the rejection as shown above. Prong 1 part 1 identifies the abstract idea of the claimed invention. The recitation of a generic “trained artificial intelligence process” is not itself treated as part of the mental process or abstract idea; Applicant's claims are no different than stating that the process is performed using a processor or memory. The use of generic computer hardware, generic machine learning models, or extra-solution activity like receiving or transmitting data is not considered part of the abstract idea under Prong 1, and is addressed under Prong 2A/2B as shown above. Therefore the rejection is maintained as shown above. On pgs. 21-22, the Applicant further argues, regarding the rejection under 35 U.S.C.
§ 101: Further, even assuming that the Office’s summary of Applicant's independent claims reflects accurately the actual language recited by these claims, which it does not, the Office fails to provide reasoning sufficient to support its conclusion that Applicant's independent claims recite a patent-ineligible mental process. When interpreting Applicant's claims under current Office examination practice, the Office is required to afford Applicant's claims their broadest reasonable interpretation consistent with Applicant's Specification. See M.P.E.P. § 2111. Here, despite conclusory, unsupported assertions, the Office’s analysis of Applicant's claims under Prong One of Revised Step 2A of the Alice/Mayo test does not, and cannot, identify any portion of Applicant’s Specification that would support its conclusion that a user could perform, via pen and paper or in the mind, any of the actual elements recited by Applicant's independent claims. See, e.g., id. Further, the claimed combination of elements recited by Applicant's independent claims encompasses artificial intelligence in a manner that cannot be performed practically in the human mind and, as such, Applicant's independent claims cannot recite a patent-ineligible mental process in accordance with the Office’s examination guidelines. See, e.g., Memorandum entitled “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. § 101,” issued on August 4, 2025, p. 2 (hereinafter, the “August 4th Memorandum”).

In response, the Examiner maintains the rejection as shown above. Simply because Applicant claims that the action is performed by a machine learning model does not prevent the actions from being considered an abstract idea that could be performed within the human mind or with pencil and paper.
The claims contain no details or limitations which cause the performance of the “trained artificial intelligence process” to be anything more than a generic, off-the-shelf machine learning model, and thus are no different than the abstract idea being implemented on generic computer hardware in order to “apply it.” Therefore, the rejection is maintained as shown above.

On pg. 24, the Applicant further argues, with regard to the rejection under 35 U.S.C. § 101: Contrary to the Office’s assertions, the elements recited by Applicant's independent claims, when considered as a whole even in unamended form, provide a specific, technological improvement to existing, computer-implemented predictive processes that ingest, operate on, and process increasingly large volumes of interaction data and, as such, integrate any allegedly recited abstract idea into a patent-eligible, practical application. See, e.g., Applicant's Amendment filed April 27, 2025, pp. 18-20 (citing Applicant's Specification, ¶¶ [0020]-[0023]). Nonetheless, without acquiescing to the propriety of the Office’s characterization of Applicant's claims, and solely to advance prosecution, Applicant amends independent claims 1, 12, and 20 to even further clarify the claimed subject matter. For at least the reasons set forth below, Applicant submits that independent claims 1, 12, and 20, as amended herein, integrate any allegedly recited abstract idea into a patent-eligible, practical application under Prong Two of Step 2A of the Alice/Mayo test and, as such, represent patent-eligible subject matter under 35 U.S.C. § 101.

In response, the Examiner maintains the rejection as shown above. Applicant appears to argue that the use of generic computer hardware and machine learning models is somehow a technological improvement, but fails even to describe what technology is being improved.
Further, the volume of data being processed does not denote a technological improvement; the use of a processor and other computer components allows the processing of large volumes of data, but denotes nothing more than the use of generic, off-the-shelf computer hardware. Therefore, the rejection is maintained as shown above.

On pp. 25-26, the Applicant argues, with regard to the rejection under 35 U.S.C. § 101: These quoted elements, recited similarly by each of Applicant's amended independent claims and considered individually and as a whole in accordance with the Office’s current examination procedures, represent a specific, technological improvement to computing systems and environments that implement existing, computer-implemented predictive processes, which often require an iterative application of machine learning or artificial intelligence processes to corresponding sets of input data in an effort to predict and characterize future occurrences and non-occurrences of events during target temporal intervals. See, e.g., Applicant's Specification, ¶¶ [0019]-[0023]. Indeed, by performing operations that apply the trained artificial intelligence process to an input dataset and that dynamically generate the output data indicating a predicted likelihood of both a non-occurrence of a first event during a first portion of a target interval and an occurrence of the first event during a second portion of the target interval, the specific, technological solution provided by Applicant's independent claims reduces a number of discrete computational operations, and as such, an amount of computational resources, required to generate the claimed output data when compared to existing, computer-implemented predictive solutions that require an iterative application of machine learning or artificial intelligence processes to corresponding sets of input data. See, e.g., id.

In response, the Examiner maintains the rejection as shown above.
Once again, Applicant fails to denote what technology is actually being improved. Predicting the occurrence of events during particular target temporal intervals is a mental step, as any person can look at data and predict what may or may not happen at a future time. Merely adding generic computer hardware and generic machine learning models does not improve the hardware or the machine learning model; it improves only the abstract idea, and making an abstract idea better or more efficient does not render the claimed invention significantly more than the abstract idea itself. The rejection is therefore maintained as shown above. Applicant's remaining arguments with respect to claims 1-9, 11-20, and 22 have been considered but are moot in view of the new ground(s) of rejection or are repetitions of the above arguments, with the rejections maintained for similar reasons.

Conclusion

The Examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c). Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEN M RIFKIN, whose telephone number is (571) 272-9768. The examiner can normally be reached Monday-Friday, 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEN M RIFKIN/
Primary Examiner, Art Unit 2123

Prosecution Timeline

Jun 02, 2021
Application Filed
Oct 17, 2024
Non-Final Rejection — §101, §103, §DP
Apr 10, 2025
Interview Requested
Apr 24, 2025
Examiner Interview Summary
Apr 24, 2025
Applicant Interview (Telephonic)
Apr 29, 2025
Response Filed
Jun 11, 2025
Final Rejection — §101, §103, §DP
Aug 13, 2025
Response after Non-Final Action
Sep 15, 2025
Request for Continued Examination
Sep 16, 2025
Interview Requested
Sep 23, 2025
Applicant Interview (Telephonic)
Sep 23, 2025
Examiner Interview Summary
Sep 29, 2025
Response after Non-Final Action
Jan 13, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541685
SEMI-SUPERVISED LEARNING OF TRAINING GRADIENTS VIA TASK GENERATION
2y 5m to grant Granted Feb 03, 2026
Patent 12455778
SYSTEMS AND METHODS FOR DATA STREAM SIMULATION
2y 5m to grant Granted Oct 28, 2025
Patent 12236335
SYSTEM AND METHOD FOR TIME-DEPENDENT MACHINE LEARNING ARCHITECTURE
2y 5m to grant Granted Feb 25, 2025
Patent 12223418
COMMUNICATING A NEURAL NETWORK FEATURE VECTOR (NNFV) TO A HOST AND RECEIVING BACK A SET OF WEIGHT VALUES FOR A NEURAL NETWORK
2y 5m to grant Granted Feb 11, 2025
Patent 12106207
NEURAL NETWORK COMPRISING SPINTRONIC RESONATORS
2y 5m to grant Granted Oct 01, 2024
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
59%
With Interview (+15.6%)
4y 12m
Median Time to Grant
High
PTA Risk
Based on 317 resolved cases by this examiner. Grant probability derived from career allow rate.
