Prosecution Insights
Last updated: April 19, 2026
Application No. 17/681,237

PREDICTING OCCURRENCES OF TARGETED CLASSES OF EVENTS USING TRAINED ARTIFICIAL-INTELLIGENCE PROCESSES

Final Rejection (§103, §DP)
Filed: Feb 25, 2022
Examiner: HASTY, NICHOLAS
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Toronto-Dominion Bank
OA Round: 2 (Final)

Grant Probability: 51% (Moderate)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 51% (grants 51% of resolved cases: 178 granted / 348 resolved; -3.9% vs TC avg)
Interview Lift: +32.3% (strong lift for resolved cases with interview)
Avg Prosecution: 4y 8m (typical timeline; 31 currently pending)
Total Applications: 379 (career history, across all art units)

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 68.5% (+28.5% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 1.4% (-38.6% vs TC avg)
Deltas are vs Tech Center average estimates • Based on career data from 348 resolved cases
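The headline figures above are simple ratios, and a minimal arithmetic sketch shows how they appear to relate. The formulas below are editorial assumptions inferred from the displayed numbers, not the analytics provider's documented methodology.

```python
# Editorial sketch: reproduce the headline examiner statistics above.
# The formulas are assumptions inferred from the displayed figures.

granted, resolved = 178, 348

# Career allowance rate: granted / resolved.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 51.1%, displayed as 51%

# "Interview lift" appears to be the percentage-point gap between the
# with-interview grant probability (the 83% card) and the baseline rate;
# the dashboard's +32.3% suggests a slightly different underlying sample.
with_interview = 0.83  # assumed from the "With Interview: 83%" card
lift_pts = (with_interview - allow_rate) * 100
print(f"Interview lift: {lift_pts:+.1f} pts")
```

The small gap between the computed lift and the displayed +32.3% is expected if the vendor computes the baseline only over cases without an interview rather than over all resolved cases.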

Office Action

Grounds of rejection: §103 (obviousness) and nonstatutory double patenting (§DP)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to communications: Amendment filed on 9/25/2025. Claims 1-7, 9-16, and 18-22 are pending. Claims 1, 12, and 20 are independent. Claims 8 and 17 are newly canceled. Claims 21-22 are newly added. The previous rejection of claims 1-7, 9-16, and 18-20 under 35 USC § 103 has been withdrawn in view of the amendment.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-3, 5, 9-11, 12-14, and 18-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 6, and 9-11 of copending Application No. 17/180,745 (reference application) in view of Vanderveld et al. (US 11,188,940).
Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 5, and 9-11 of the current application are anticipated by claims 1-3, 6, and 9-11 of the reference application, as shown by the claim chart below. Application ’745 does not explicitly recite each of the targeted classes of events being associated with a numerical class identifier, the output data comprising the numerical class identifier, the output data comprising the numerical class identifier of the corresponding one of the targeted classes of events. However, Vanderveld et al. discloses each of the targeted classes of events being associated with a numerical class identifier, the output data comprising the numerical class identifier, the output data comprising the numerical class identifier of the corresponding one of the targeted classes of events (Vanderveld et al. col24 ln35-52, numerical identifier associates events with cohort). It would have been obvious to one of ordinary skill in the art to modify the claimed invention of application ’745 with the prediction system of Vanderveld et al. in order to identify what promotions lead to consumer purchases (Vanderveld et al. col1 ln17-26).

Claims 12, 13, 14, 18, and 19 are directed towards a method but recite the same limitations as apparatus claims 1, 3, 5, 10, and 11. Thus, claims 12-14 and 18-19 are rejected along the same rationale as claims 1, 3, 5, 10, and 11, as shown in the claim chart below. Claim 20 is directed towards a non-transitory computer-readable medium, but recites the same limitations as apparatus claim 1. Thus, claim 20 is rejected along the same rationale as claim 1, as shown in the claim chart below. This is a provisional nonstatutory double patenting rejection.

Claim chart (current application vs. Application 17/180,745, claims filed 7/24/2025):
Current application, claim 1:

An apparatus, comprising: a memory storing instructions; a communications interface; and at least one processor coupled to the memory and the communications interface, the at least one processor being configured to execute the instructions to: receive, via the communications interface, an identifier associated with a customer from a computing system, and obtain, from the memory, elements of the first interaction data associated with a first temporal interval based on the received identifier; generate an input dataset based on elements of first interaction data associated with the first temporal interval; based on an application of a trained artificial intelligence process to the input dataset, generate output data indicative of an expected occurrence of an event associated with a corresponding one of a plurality of targeted events during a second temporal interval, each of the targeted classes of events being associated with a numerical class identifier, the output data comprising the numerical class identifier, the output data comprising the numerical class identifier of the corresponding one of the targeted classes of events, and the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval; and transmit at least a portion of the output data to a computing system via the communications interface, the computing system being configured to transmit digital content to a device associated with the expected occurrence based on the portion of the output data.

Application 17/180,745, claim 1:

An apparatus, comprising: a plurality of distributed computing components interconnected across a communications network, each of the distributed computing components comprising a processor coupled to a memory storing instructions and to a communications interface and the processor of at least one of the distributed computing components being configured to execute the instructions to: receive, via the communications interface of the at least one of the distributed computing components, a plurality of identifiers from a computing system across the communications network, each of the identifiers being associated with a device in communications with the computing system; obtain, from a data repository, elements of first interaction data associated with the identifiers and with one or more temporal intervals, and detect a change in a composition of the elements of first interaction data associated with at least a first one of the temporal intervals; based on the detected change in the composition, generate, for each of the identifiers, an input dataset based on corresponding ones of the elements of first interaction data associated with the first temporal interval; perform operations, in parallel across the distributed computing components, that apply a trained artificial intelligence process to each of the input datasets in accordance with a value of at least one process parameter, and that based on the application of the trained artificial intelligence process to each of the input datasets, generate, in real-time upon receipt of the plurality of identifiers, a corresponding element of output data representative of a predicted likelihood of an occurrence of an event during a second temporal interval, the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval; transmit at least a subset of the elements of output data and corresponding ones of the identifiers across the communications network to the computing system via the communications interface of at least one of the distributed computing components, the computing system being configured to: at least one of modify an element of second interaction data associated with at least one of the identifiers based on the corresponding element of output data, or generate an additional element of the second interaction data associated with the at least one of the identifiers based on the corresponding element of output data; transmit, via the communications interface of the at least one of the distributed computing components, data characterizing the at least one of the modification or the generation across the communications network to a device associated with the at least one of the identifiers; based on at least the elements of output data, compute a value of one or more metrics characterizing the application of the application of the trained artificial intelligence process to each of the input datasets; and determine an inconsistency between the one or more metric values and at least one threshold condition, and perform operations that modify the at least one process parameter value in accordance with the determined inconsistency.

Current application, claim 2:

The apparatus of claim 1, wherein the at least one processor is further configured to: receive at least a portion of the first interaction data from the computing system via the communications interface; and store the portion of the first interaction data within the memory.

Application 17/180,745, claim 2:
The apparatus of claim 1, wherein the processor of the at least one of the distributed components is further configured to execute the instructions to: receive at least a portion of the first interaction data from the computing system via the communications interface of the at least one of the distributed computing components; and store the received portion of the first interaction data within the data repository.

Current application, claim 3:

The apparatus of claim 1, wherein the at least one processor is further configured to: obtain (i) one or more parameters that characterize the trained artificial intelligence process and (ii) data that characterizes a composition of the input dataset; generate the input dataset in accordance with the data that characterizes the composition; and apply the trained artificial intelligence process to the input dataset in accordance with the one or more parameters.

Application 17/180,745, claim 3:

The apparatus of claim 1, wherein the processor of the at least one of the distributed components is further configured to execute the instructions to: obtain (i) a value of one or more process parameters that characterize the trained artificial intelligence process and (ii) composition data that characterizes a composition of the input dataset of the trained artificial intelligence process; generate each of the input datasets in accordance with the composition data; and perform the operations in parallel that apply the trained artificial intelligence process to each of the input datasets in accordance with the one or more process parameter values.

Current application, claim 5:

The apparatus of claim 1, wherein the trained artificial intelligence process comprises a trained, gradient-boosted, decision-tree process.

Application 17/180,745, claim 6:

The apparatus of claim 1, wherein the trained artificial intelligence process comprises a trained, gradient-boosted, decision-tree process.

Current application, claim 9:

The apparatus of claim 1, wherein: the first interaction data is associated with a plurality of customers; and the at least one processor is further configured to execute the instructions to: generate a plurality of input datasets based on the first interaction data, each of the plurality of input datasets being associated with a corresponding one of the customers; apply the trained artificial intelligence process to each of the plurality of input datasets, and based on the application of the trained artificial intelligence to each of the plurality of input datasets, generate elements of the output data indicative of expected occurrences of corresponding ones of the targeted events involving the corresponding one of the customers during the second temporal interval; and perform operations that sort the elements of output data and transmit at least a portion of the sorted elements of output data to the computing system via the communications interface.

Application 17/180,745, claim 8:

The apparatus of claim 1, wherein: the first interaction data is associated with a plurality of customers; and the processor of the at least one of the distributed computing components is further configured to execute the instructions to: generate the input datasets based on the corresponding elements of first interaction data associated with the first temporal interval, each of the plurality of input datasets being associated with a corresponding one of the customers; and perform the operations in parallel that apply the trained artificial intelligence process to each of the input datasets, and that based on the application of the trained artificial intelligence to each of the plurality of input datasets, generate the corresponding element of the output data representative of the predicted likelihood of the occurrence of the event involving the corresponding one of the customers during the second temporal interval.

Application 17/180,745, claim 9:
(Previously Presented) The apparatus of claim 8, wherein: each of the generated elements of output data includes a numerical score indicative of the predicted likelihood of the occurrence of the event involving the corresponding one of the customers; and the processor of the at least one of the distributed components is further configured to execute the instructions to: perform operations that associate the elements of output data and the corresponding ones of the identifiers, and that rank pairs of the associated elements of output data and the corresponding identifiers based on the numerical scores; and transmit at least a portion of the ranked pairs of the associated elements of output data and the corresponding identifiers across the communications network to the computing system via the communications interface of the at least one of the distributed computing components.

Current application, claim 10:

The apparatus of claim 1, wherein the at least one processor is further configured to execute the instructions to: obtain elements of second interaction data and elements of targeting data, each of the elements of the second interaction data comprising a temporal identifier associated with a temporal interval, and the elements of targeting data identifying the targeted events; based on the temporal identifiers, determine that a first subset of the elements of the second interaction data are associated with a prior training interval, and that a second subset of the elements of the second interaction data are associated with a prior validation interval; and generate a plurality of training datasets based on corresponding portions of the first subset, and perform operations that train the artificial intelligence process based on the training datasets and on the targeting data.

Application 17/180,745, claim 10:

The apparatus of claim 1, wherein the processor of the at least one of the distributed components is further configured to execute the instructions to: obtain elements of third interaction data, each of the elements of the third interaction data comprising a temporal identifier associated with a temporal interval; based on the temporal identifiers, determine that a first subset of the elements of the third interaction data are associated with a prior training interval, and that a second subset of the elements of the third interaction data are associated with a prior validation interval; and generate a plurality of training datasets based on corresponding portions of the first subset, and perform operations that train the artificial intelligence process based on the training datasets.

Current application, claim 11:

The apparatus of claim 10, wherein the at least one processor is further configured to execute the instructions to: generate a plurality of the validation datasets based on portions of the second subset; apply the trained artificial intelligence process to the plurality of validation datasets, and generate additional elements of output data based on the application of the trained artificial intelligence process to the plurality of validation datasets; compute one or more validation metrics based on the additional elements of output data; and based on a determined consistency between the one or more validation metrics and a threshold condition, validate the trained artificial intelligence process.

Application 17/180,745, claim 11:
(Original) The apparatus of claim 10, wherein the processor of the at least one of the distributed components is further configured to execute the instructions to: generate a plurality of the validation datasets based on portions of the second subset; apply the trained artificial intelligence process to the plurality of validation datasets, and generate additional elements of output data based on the application of the trained artificial intelligence process to the plurality of validation datasets; compute one or more validation metrics based on the additional elements of output data; and based on a determined consistency between the one or more validation metrics and a threshold condition, validate the trained artificial intelligence process.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7, 9-13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nandan et al. (US 10,937,089) in view of Horvitz (US 2014/0304211) and Vanderveld et al. (US 11,188,940).

Regarding claim 1, Nandan et al. discloses an apparatus, comprising: a memory storing instructions (Nandan et al. col12 ln9-16); a communications interface (Nandan et al. col17 ln43-60); and at least one processor coupled to the memory and the communications interface (Nandan et al.
col11 ln61-67), the at least one processor being configured to execute the instructions to: generate an input dataset based on elements of first interaction data associated with the first temporal interval (Nandan et al. fig.5 501, col13 ln14-37, generate data associated with a subject from different data sources); based on an application of a trained artificial intelligence process to the input dataset, generate output data indicative of an expected occurrence of an event associated with a corresponding one of a plurality of targeted classes of events during a second temporal interval, and the second temporal interval being subsequent to the first temporal interval and being separated from the first temporal interval by a corresponding buffer interval (Nandan et al. col13 ln49-65, use analysis of data to generate prediction of event occurring at future time including project data, timeframe, and accuracy rating).

Nandan et al. does not explicitly disclose transmit at least a portion of the output data to the computing system via the communications interface, the computing system being configured to transmit digital content to a device associated with the expected occurrence based on the portion of the output data. However, Horvitz discloses transmit at least a portion of the output data to a computing system via the communications interface, the computing system being configured to transmit digital content to a device associated with the expected occurrence based on the portion of the output data (Horvitz para[0085], alerting component sends information about predicted event to particular device of user). It would have been obvious to one of ordinary skill in the art before the filing date of the invention to have combined the prediction system of Nandan et al. with the prediction notification system of Horvitz in order to notify and guide users through unexpected and anomalous events (Horvitz para[0067]).

Nandan et al. does not explicitly disclose receive, via the communications interface, an identifier associated with a customer from a computing system, and obtain, from the memory, elements of the first interaction data associated with a first temporal interval based on the received identifier; each of the targeted classes of events being associated with a numerical class identifier, the output data comprising the numerical class identifier, the output data comprising the numerical class identifier of the corresponding one of the targeted classes of events. However, Vanderveld et al. discloses receive, via the communications interface, an identifier associated with a customer from a computing system, and obtain, from the memory, elements of the first interaction data associated with a first temporal interval based on the received identifier (Vanderveld et al. col24 ln8-21, receives historical data associated with a first consumer); each of the targeted classes of events being associated with a numerical class identifier, the output data comprising the numerical class identifier, the output data comprising the numerical class identifier of the corresponding one of the targeted classes of events (Vanderveld et al. col24 ln35-52, numerical identifier associates events with cohort). It would have been obvious to one of ordinary skill in the art before the filing date of the invention to have combined the prediction system of Nandan et al. with the prediction system of Vanderveld et al. in order to identify what promotions lead to consumer purchases (Vanderveld et al. col1 ln17-26).

Regarding claim 2, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein the at least one processor is further configured to: receive at least a portion of the first interaction data from the computing system via the communications interface (Nandan et al.
col17 ln43-60, receive interaction data via data access interface); and store the portion of the first interaction data within the memory (Nandan et al. col17 ln43-60, stores received data in local data cache).

Regarding claim 3, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein the at least one processor is further configured to: obtain (i) one or more parameters that characterize the trained artificial intelligence process and (ii) data that characterizes a composition of the input dataset (Nandan et al. col4 ln38-53, generates parameters for classifiers based on input training dataset); generate the input dataset in accordance with the data that characterizes the composition (Nandan et al. col4 ln54-65, mines and transforms data to induce more accurate classifiers); and apply the trained artificial intelligence process to the input dataset in accordance with the one or more parameters (Nandan et al. col6 ln33-51, apply variables to machine learning model).

Regarding claim 4, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 3, wherein the at least one processor is further configured to: based on the data that characterizes the composition, perform operations that at least one of extract a first feature value from the first interaction data or compute a second feature value based on the first feature value (Nandan et al. col10 ln51-67, extract features from data); and generate the input dataset based on at least one of the extracted first feature value or the computed second feature value (Nandan et al. col11 ln46-60, generate input for machine learning techniques from extracted data).

Regarding claim 6, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein: the first interaction data is associated with the customer (Nandan et al. col14 ln32-39, data associated with customer “Jon”); the event comprises an acquisition event associated with the customer, and the acquisition event is associated with the corresponding one of the plurality of targeted classes of events (Nandan et al. col14 ln50-59, acquisition events include purchasing a car or a house); and the plurality of targeted classes of events comprises a first targeted class, a second targeted class, and a third targeted class, the first targeted class being associated with a failure of the customer to acquire a first product or a second product, the second targeted class being associated with an acquisition of the first product by the customer, and the third targeted class being associated with an acquisition of the second product by the customer (Nandan et al. col15 ln4-23, acquisition events also include the customer being unable to purchase a new car).

Regarding claim 7, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein: the first interaction data comprises the identifier associated with the customer and a temporal identifier associated with the first temporal interval (Nandan et al. col5 ln16-27, data includes vendor identifier associated with historic procurement data); and the at least one processor is further configured to execute the instructions to: obtain the elements of the first interaction data from a portion of the memory based on the received customer identifier (Nandan et al. col5 ln36-60, retrieve stored information using identifier).

Regarding claim 9, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein: the first interaction data is associated with a plurality of customers (Nandan et al.
col14 ln32-39, data associated with customers such as “Jon”); and the at least one processor is further configured to execute the instructions to: generate a plurality of input datasets based on the first interaction data, each of the plurality of input datasets being associated with a corresponding one of the customers (Nandan et al. col14 ln30-50, input data sets associated with customers from data sources); apply the trained artificial intelligence process to each of the plurality of input datasets, and based on the application of the trained artificial intelligence to each of the plurality of input datasets, generate elements of the output data indicative of expected occurrences of events associated with corresponding ones of the targeted classes of events involving the corresponding one of the customers during the second temporal interval (Nandan et al. col15 ln24-49, predictive analytics identifies Jon will purchase a car in the next month or so); and perform operations that sort the elements of output data and transmit at least a portion of the sorted elements of output data to the computing system via the communications interface (Nandan et al. col14 ln46-49, outputs information to recommend loan information or car dealerships).

Regarding claim 10, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein the at least one processor is further configured to execute the instructions to: obtain elements of second interaction data and elements of targeting data, each of the elements of the second interaction data comprising a temporal identifier associated with a temporal interval, and the elements of targeting data identifying the targeted classes of events (Nandan et al. col15 ln50-67, obtain historical marriage statistics for customer’s age group); based on the temporal identifiers, determine that a first subset of the elements of the second interaction data are associated with a prior training interval, and that a second subset of the elements of the second interaction data are associated with a prior validation interval (Nandan et al. col15 ln50-67, determines that the customer’s age has not reached the average marriage age and determines the customer has a low probability of getting married soon); and generate a plurality of training datasets based on corresponding portions of the first subset, and perform operations that train the artificial intelligence process based on the training datasets and on the targeting data (Nandan et al. col15 ln61 to col16 ln7, generates additional training data to update the model as more data becomes available).

Regarding claim 11, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 10, wherein the at least one processor is further configured to execute the instructions to: generate a plurality of the validation datasets based on portions of the second subset (Nandan et al. col9 ln21-37, use historical data to generate validation sets); apply the trained artificial intelligence process to the plurality of validation datasets, and generate additional elements of output data based on the application of the trained artificial intelligence process to the plurality of validation datasets (Nandan et al. col8 ln61-67, decision tree is analyzed to identify multicollinearity of predictive variables); compute one or more validation metrics based on the additional elements of output data (Nandan et al. col9 ln29-48, compute validation metrics (predictive variables) by transforming multicollinear variables); and based on a determined consistency between the one or more validation metrics and a threshold condition, validate the trained artificial intelligence process (Nandan et al.
col9 ln49-62, use validation metric (predictive variable) to validate ensemble classifier).

Claims 12, 13, 15-16, and 18-19 recite substantially similar limitations to claims 1, 3, 6-7, and 10-11. Thus claims 12, 13, 15-16, and 18-19 are rejected along the same rationale as claims 1, 3, 6-7, and 10-11. Claim 20 recites substantially similar limitations to claim 1. Thus claim 20 is rejected along the same rationale as claim 1.

In regards to claim 21, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein the at least one processor is further configured to execute the instructions to perform operations that generate the input dataset, generate the output data indicative of the expected occurrence of the event, and transmit the portion of the output data to the computing system in real time upon receipt of the identifier from the computing system (Vanderveld et al. col ln65 to col16 ln15, data received from promotion and marketing service provided in real time). It would have been obvious to one of ordinary skill in the art before the filing date of the invention to have combined the prediction system of Nandan et al. with the prediction system of Vanderveld et al. in order to identify what promotions lead to consumer purchases (Vanderveld et al. col1 ln17-26).

In regards to claim 22, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1, wherein: the event associated with the corresponding one of the plurality of targeted classes of events is associated with a product (Vanderveld et al. col23 ln52-67, associated with purchase of products); and the digital content comprises a deep link associated with a pre-populated portion of a corresponding digital interface of an application for the product (Vanderveld et al. col23 ln52-67, provides digital promotion of products associated with an application (camping)).
It would have been obvious to one of ordinary skill in the art before the filing date of the invention to have combined the prediction system of Nandan et al. with the prediction system of Vanderveld et al. in order to identify what promotions lead to consumer purchases (Vanderveld et al. col1 ln17-26).

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Nandan et al. in view of Horvitz, Vanderveld et al., and Yates et al. (US 2018/0075367). In regards to claim 5, Nandan et al. as modified by Horvitz and Vanderveld et al. discloses the apparatus of claim 1. Nandan et al. does not explicitly disclose wherein the trained artificial intelligence process comprises a trained, gradient-boosted, decision-tree process. However, Yates et al. substantially discloses wherein the trained artificial intelligence process comprises a trained, gradient-boosted, decision-tree process (Yates et al. para[0051], uses gradient-boosted decision-trees as machine-learned models). It would have been obvious to one of ordinary skill in the art before the filing date of the invention to have combined the event prediction system of Nandan et al. with the response prediction system of Yates et al. in order to identify how a user will react to a notification or advertisement (Yates et al. para[0002]). Claim 14 recites substantially similar limitations to claim 5. Thus claim 14 is rejected along the same rationale as claim 5.

Response to Arguments

Applicant’s arguments with respect to claims 1-7, 9-16, and 18-20 have been considered but are moot because the arguments do not apply to the current rejection.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS HASTY whose telephone number is (571)270-7775. The examiner can normally be reached Monday-Friday 8:30am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached at (571)270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.H/
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141
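For readers less familiar with the claim language, the training-and-validation scheme recited in claims 10 and 11 (partition interaction data by temporal identifier into a prior training interval and a later prior validation interval, train on the first subset, compute a validation metric on the second, and validate the process against a threshold condition) can be sketched in a few lines of Python. Everything below is hypothetical illustration: the records, boundary date, stand-in model, and 0.5 accuracy threshold are invented for the sketch and are not drawn from the claims or the cited references.

```python
from datetime import date

# Hypothetical interaction records: (temporal identifier, feature, label).
records = [
    (date(2020, 1, 15), 0.2, 0),
    (date(2020, 6, 1), 0.7, 1),
    (date(2021, 2, 10), 0.4, 0),
    (date(2021, 8, 5), 0.9, 1),
]

# Partition by temporal identifier into a prior training interval and a
# later, non-overlapping prior validation interval (boundary is assumed).
boundary = date(2021, 1, 1)
training = [r for r in records if r[0] < boundary]
validation = [r for r in records if r[0] >= boundary]

# Stand-in "trained process": predict the positive class when the feature
# meets a cutoff taken from the mean feature value of training positives.
positives = [f for _, f, y in training if y == 1]
cutoff = sum(positives) / len(positives)

def predict(feature):
    return 1 if feature >= cutoff else 0

# Compute a validation metric (simple accuracy here) on the validation
# subset, and validate the trained process only when the metric is
# consistent with a threshold condition.
correct = sum(predict(f) == y for _, f, y in validation)
accuracy = correct / len(validation)
THRESHOLD = 0.5
validated = accuracy >= THRESHOLD
```

The point of the temporal split, as claimed, is that validation data postdates all training data, so the metric estimates performance on genuinely later interactions rather than a random holdout.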

Prosecution Timeline

Feb 25, 2022
Application Filed
Mar 22, 2025
Non-Final Rejection — §103, §DP
Sep 25, 2025
Response Filed
Feb 13, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579517
AUTOMATED DESCRIPTION GENERATION FOR JOB POSTING
2y 5m to grant · Granted Mar 17, 2026
Patent 12578840
Devices, Methods, and Graphical User Interfaces for Navigating, Displaying, and Editing Media Items with Multiple Display Modes
2y 5m to grant · Granted Mar 17, 2026
Patent 12561605
USER INTERFACE MANAGEMENT FRAMEWORK
2y 5m to grant · Granted Feb 24, 2026
Patent 12547291
Tree Frog Computer Navigation System for the Hierarchical Visualization of Data
2y 5m to grant · Granted Feb 10, 2026
Patent 12536468
MODEL TRAINING METHOD, SHORT MESSAGE AUDITING MODEL TRAINING METHOD, SHORT MESSAGE AUDITING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 51%
With Interview: 83% (+32.3%)
Median Time to Grant: 4y 8m
PTA Risk: Moderate
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
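Taken at face value, the "With Interview" projection appears to be the career allow rate plus the observed interview lift; a quick arithmetic check (treating the dashboard's rounded figures as exact is an assumption):

```python
# Figures from the dashboard above.
base_grant_probability = 0.51   # career allow rate: 178 granted / 348 resolved
interview_lift = 0.323          # observed lift in resolved cases with interview

# Additive model assumed by the dashboard: 51% + 32.3% = 83.3%, shown as 83%.
with_interview = base_grant_probability + interview_lift
with_interview_pct = round(with_interview * 100)  # → 83
```

Note that 178/348 is closer to 51.1%, so the displayed 51% is itself rounded; the additive model is an inference from the numbers lining up, not a documented formula.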
