Prosecution Insights
Last updated: April 19, 2026
Application No. 18/217,356

Machine Learning Model for Predicting Likelihoods of Events on Multiple Different Surfaces of an Online System

Non-Final OA — §103
Filed: Jun 30, 2023
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Maplebear Inc. (DBA Instacart)
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (342 granted / 432 resolved; +24.2% vs TC avg)
Interview Lift: +26.6% among resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 37 currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 432 resolved cases
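As a sanity check on these figures, each "vs TC avg" delta appears to be the examiner's statute-specific rate minus the Tech Center average. That interpretation is an assumption (the dashboard does not define the metric), but it can be tested with a few lines of arithmetic:

```python
# Examiner's statute-specific rates and their deltas vs the Tech Center
# average, in percent, as shown on the dashboard. The subtraction below
# assumes delta = examiner_rate - tc_avg, which the page does not state.
examiner = {"§101": 12.2, "§103": 63.7, "§102": 11.9, "§112": 5.4}
delta    = {"§101": -27.8, "§103": 23.7, "§102": -28.1, "§112": -34.6}

implied_tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(implied_tc_avg)
```

Under this reading, every statute implies the same 40.0% baseline, which suggests the deltas may be computed against a single overall average rather than per-statute Tech Center averages.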

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented in the case.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over TSOY et al. (US 20220327134 A1, hereinafter Tsoy) in view of Cetin et al. (US 20110246286 A1, hereinafter Cetin).

As to independent claim 1, Tsoy teaches a method, performed at a computer system comprising a processor and a computer-readable medium, comprising: [processor and memory ¶25] accessing a machine learning model trained to predict a likelihood of a target event given a display of a content item by an online system to a user in a presentation context of a plurality of different presentation contexts, wherein the machine learning model is trained by: [accesses an ML algorithm (model) to predict a score used as a likelihood of interaction (event) ¶10, ¶99 "A relevance score indicating a predicted likelihood that the user will interact with a content element may be determined for each of the content elements 301-05"], [dataset includes ranking and positioning presentation contexts ¶105]

obtaining a plurality of training examples, wherein each training example comprises a context-dependent set of training features associated with a previous display of a content item and a label indicating whether the target event occurred, [records of previous interactions used for training with labels including view events and attributes ¶105, ¶10 "Records of previous user interactions with interfaces displaying content elements may be used as training data for the MLA. A training dataset may be generated, where each content element in the training dataset is assigned a label."… "a number of times the content element has been viewed, a user rating corresponding to the content element, and/or any other attributes of the content element"]

updating the machine learning model, for each of the training examples, based on an error between a prediction by the machine learning model of whether the target event occurred and a label indicating whether the target event occurred; [adjusts/retrains MLA (updates) based on a difference between score and label (error, i.e., how far off the prediction was) ¶136-137 "The predicted relevance score may then be compared to the label for the content element that was determined at step 540. A difference between the predicted relevance score and the label may be determined, and the MLA may be adjusted based on the difference."]

obtaining a set of input features associated with an opportunity to present content by the online system to a viewing user in a presentation context of the opportunity; [Fig. 6 605 receives content to use as input for a user output ¶138-147 "set of content elements may be received. Any number and/or type of content elements may be received"]

applying the machine learning model to the obtained set of input features to predict a likelihood of the target event given a display of a candidate content item by the online system to the viewing user in the presentation context; [predicts according to received content and ranks for positioning using MLA Fig. 6 610 ¶143-145 "a relevance score may be predicted for each of the content elements received at step 605."]

generating a user interface in the presentation context that selectively includes the candidate content item based on the predicted likelihood; and [Fig. 6 620 step for generating interface accordingly ¶147 "an interface with the content elements may be output, such as the interface 300. The interface may display the content elements in their ranked order"]

sending the user interface to a device of the viewing user, the sending causing the device of the viewing user to display the user interface. [user can view and scroll interface accordingly ¶147, ¶7]

Tsoy does not specifically teach wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context.

However, Cetin teaches wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and [features are based on presentation context (query, ad, user and location) ¶30 "features 308 may be derived using any information available in the context of the search, including the query, ad, user, and location"] wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, and [features are derived according to context information (context dependent) including unknown/missing (inapplicable) features Fig. 4, ¶30 "The predictor may have certain features or data available for calculating a click probability. For each advertisement and/or query, there may be features that are unknown and may need to be estimated for accurately determining the click probability. In one example, a new ad that has never been shown would not have any past CTR information and would have several missing features. The click-prediction features 308 may be derived using any information available in the context of the search"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training features of Tsoy by incorporating the limitation wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, as disclosed by Cetin, because both techniques address the same field of machine learning, and incorporating Cetin into Tsoy provides models with more accurate and reliable predictions in different environments [Cetin ¶10].

As to dependent claim 2, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein the first presentation context comprises a search interface, and the second presentation context comprises a browsing page. [Cetin search results ¶12, browser ¶13], [Tsoy content feed Fig. 3 (browsing page) ¶97]

As to dependent claim 3, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein the first presentation context and the second presentation context generate different input features. [Cetin query and search context driven features ¶30, ¶35]

As to dependent claim 4, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein obtaining the plurality of training examples features comprises deriving the context-dependent sets of training features from historically observed data associated with presenting content in at least the first presentation context and the second presentation context. [Tsoy records of previous interactions used for training ¶105, ¶10]

As to dependent claim 5, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein obtaining the plurality of training examples features comprises deriving the context-dependent sets of training features from one or more of: user data, item data, event occurrences, and scoring metrics associated with historical orders of the online system. [Cetin item data (ad data) and records with impressions, clicks (occurrences, metrics) ¶16]

As to dependent claim 6, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein obtaining the set of input features comprises masking a subset of the input features relating to data that is inapplicable to the presentation context. [Cetin unknown/missing (inapplicable) features can be imputed ¶30, ¶37 "In cases with missing data, the generative model infers a posterior distribution over the missing values, and imputes them"]

As to dependent claim 7, the rejection of claim 1 is incorporated. Tsoy and Cetin further teach wherein the target event comprises a selection of the candidate content item.
[Tsoy predict a score used as a likelihood of interaction (event) ¶10, ¶99 "A relevance score indicating a predicted likelihood that the user will interact with a content element may be determined for each of the content elements 301-05"], [dataset includes ranking and location contexts ¶105]

As to independent claim 9, Tsoy teaches a non-transitory computer-readable storage medium storing instructions executable by one or more processors for performing steps including: [processor, memory and instruction ¶25] accessing a machine learning model trained to predict a likelihood of a target event given a display of a content item by an online system to a user in a presentation context of a plurality of different presentation contexts, wherein the machine learning model is trained by: [accesses an ML algorithm (model) to predict a score used as a likelihood of interaction (event) ¶10, ¶99 "A relevance score indicating a predicted likelihood that the user will interact with a content element may be determined for each of the content elements 301-05"], [dataset includes ranking and positioning presentation contexts ¶105]

obtaining a plurality of training examples, wherein each training example comprises a context-dependent set of training features associated with a previous display of a content item and a label indicating whether the target event occurred, [records of previous interactions used for training with labels including view events and attributes ¶105, ¶10 "Records of previous user interactions with interfaces displaying content elements may be used as training data for the MLA. A training dataset may be generated, where each content element in the training dataset is assigned a label."… "a number of times the content element has been viewed, a user rating corresponding to the content element, and/or any other attributes of the content element"]

updating the machine learning model, for each of the training examples, based on an error between a prediction by the machine learning model of whether the target event occurred and a label indicating whether the target event occurred; [adjusts/retrains MLA (updates) based on a difference between score and label (error, i.e., how far off the prediction was) ¶136-137 "The predicted relevance score may then be compared to the label for the content element that was determined at step 540. A difference between the predicted relevance score and the label may be determined, and the MLA may be adjusted based on the difference."]

obtaining a set of input features associated with an opportunity to present content by the online system to a viewing user in a presentation context of the opportunity; [Fig. 6 605 receives content to use as input for a user output ¶138-147 "set of content elements may be received. Any number and/or type of content elements may be received"]

applying the machine learning model to the obtained set of input features to predict a likelihood of the target event given a display of a candidate content item by the online system to the viewing user in the presentation context; [predicts according to received content and ranks for positioning using MLA Fig. 6 610 ¶143-145 "a relevance score may be predicted for each of the content elements received at step 605."]

generating a user interface in the presentation context that selectively includes the candidate content item based on the predicted likelihood; and [Fig. 6 620 step for generating interface accordingly ¶147 "an interface with the content elements may be output, such as the interface 300. The interface may display the content elements in their ranked order"]

sending the user interface to a device of the viewing user, the sending causing the device of the viewing user to display the user interface. [user can view and scroll interface accordingly ¶147, ¶7]

Tsoy does not specifically teach wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context.

However, Cetin teaches wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and [features are based on presentation context (query, ad, user and location) ¶30 "features 308 may be derived using any information available in the context of the search, including the query, ad, user, and location"] wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, and [features are derived according to context information (context dependent) including unknown/missing (inapplicable) features Fig. 4, ¶30 "The predictor may have certain features or data available for calculating a click probability. For each advertisement and/or query, there may be features that are unknown and may need to be estimated for accurately determining the click probability. In one example, a new ad that has never been shown would not have any past CTR information and would have several missing features. The click-prediction features 308 may be derived using any information available in the context of the search"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training features of Tsoy by incorporating the limitation wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, as disclosed by Cetin, because both techniques address the same field of machine learning, and incorporating Cetin into Tsoy provides models with more accurate and reliable predictions in different environments [Cetin ¶10].

As to dependent claim 10, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein the first presentation context comprises a search interface, and the second presentation context comprises a browsing page. [Cetin search results ¶12, browser ¶13], [Tsoy content feed Fig. 3 (browsing page) ¶97]

As to dependent claim 11, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein the first presentation context and the second presentation context generate different input features. [Cetin query and search context driven features ¶30, ¶35]

As to dependent claim 12, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein obtaining the plurality of training examples features comprises deriving the context-dependent sets of training features from historically observed data associated with presenting content in at least the first presentation context and the second presentation context. [Tsoy records of previous interactions used for training ¶105, ¶10]

As to dependent claim 13, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein obtaining the plurality of training examples features comprises deriving the context-dependent sets of training features from one or more of: user data, item data, event occurrences, and scoring metrics associated with historical orders of the online system. [Cetin item data (ad data) and records with impressions, clicks (occurrences, metrics) ¶16]

As to dependent claim 14, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein obtaining the set of input features comprises masking a subset of the input features relating to data that is inapplicable to the presentation context. [Cetin unknown/missing (inapplicable) features can be imputed ¶30, ¶37 "In cases with missing data, the generative model infers a posterior distribution over the missing values, and imputes them"]

As to dependent claim 15, the rejection of claim 9 is incorporated. Tsoy and Cetin further teach wherein the target event comprises a selection of the candidate content item.
[Tsoy predict a score used as a likelihood of interaction (event) ¶10, ¶99 "A relevance score indicating a predicted likelihood that the user will interact with a content element may be determined for each of the content elements 301-05"], [dataset includes ranking and location contexts ¶105]

As to independent claim 17, Tsoy teaches a system, comprising: one or more processors; and a non-transitory computer-readable storage medium storing instructions executable by one or more processors for performing steps including: [system, processor, memory and instruction ¶25] accessing a machine learning model trained to predict a likelihood of a target event given a display of a content item by an online system to a user in a presentation context of a plurality of different presentation contexts, wherein the machine learning model is trained by: [accesses an ML algorithm (model) to predict a score used as a likelihood of interaction (event) ¶10, ¶99 "A relevance score indicating a predicted likelihood that the user will interact with a content element may be determined for each of the content elements 301-05"], [dataset includes ranking and positioning presentation contexts ¶105]

obtaining a plurality of training examples, wherein each training example comprises a context-dependent set of training features associated with a previous display of a content item and a label indicating whether the target event occurred, [records of previous interactions used for training with labels including view events and attributes ¶105, ¶10 "Records of previous user interactions with interfaces displaying content elements may be used as training data for the MLA. A training dataset may be generated, where each content element in the training dataset is assigned a label."… "a number of times the content element has been viewed, a user rating corresponding to the content element, and/or any other attributes of the content element"]

updating the machine learning model, for each of the training examples, based on an error between a prediction by the machine learning model of whether the target event occurred and a label indicating whether the target event occurred; [adjusts/retrains MLA (updates) based on a difference between score and label (error, i.e., how far off the prediction was) ¶136-137 "The predicted relevance score may then be compared to the label for the content element that was determined at step 540. A difference between the predicted relevance score and the label may be determined, and the MLA may be adjusted based on the difference."]

obtaining a set of input features associated with an opportunity to present content by the online system to a viewing user in a presentation context of the opportunity; [Fig. 6 605 receives content to use as input for a user output ¶138-147 "set of content elements may be received. Any number and/or type of content elements may be received"]

applying the machine learning model to the obtained set of input features to predict a likelihood of the target event given a display of a candidate content item by the online system to the viewing user in the presentation context; [predicts according to received content and ranks for positioning using MLA Fig. 6 610 ¶143-145 "a relevance score may be predicted for each of the content elements received at step 605."]

generating a user interface in the presentation context that selectively includes the candidate content item based on the predicted likelihood; and [Fig. 6 620 step for generating interface accordingly ¶147 "an interface with the content elements may be output, such as the interface 300. The interface may display the content elements in their ranked order"]

sending the user interface to a device of the viewing user, the sending causing the device of the viewing user to display the user interface. [user can view and scroll interface accordingly ¶147, ¶7]

Tsoy does not specifically teach wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context.

However, Cetin teaches wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and [features are based on presentation context (query, ad, user and location) ¶30 "features 308 may be derived using any information available in the context of the search, including the query, ad, user, and location"] wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, and [features are derived according to context information (context dependent) including unknown/missing (inapplicable) features Fig. 4, ¶30 "The predictor may have certain features or data available for calculating a click probability. For each advertisement and/or query, there may be features that are unknown and may need to be estimated for accurately determining the click probability. In one example, a new ad that has never been shown would not have any past CTR information and would have several missing features. The click-prediction features 308 may be derived using any information available in the context of the search"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training features of Tsoy by incorporating the limitation wherein the context-dependent set of training features associated with a first presentation context has at least one training feature in common with the context-dependent set of training features associated with a second presentation context, and wherein the context-dependent set of training features associated with the first presentation context has at least one training feature that is inapplicable to the context-dependent set of training features associated with the second presentation context, as disclosed by Cetin, because both techniques address the same field of machine learning, and incorporating Cetin into Tsoy provides models with more accurate and reliable predictions in different environments [Cetin ¶10].

As to dependent claim 18, the rejection of claim 17 is incorporated. Tsoy and Cetin further teach wherein the first presentation context comprises a search interface, and the second presentation context comprises a browsing page. [Cetin search results ¶12, browser ¶13], [Tsoy content feed Fig. 3 (browsing page) ¶97]

As to dependent claim 19, the rejection of claim 17 is incorporated. Tsoy and Cetin further teach wherein the first presentation context and the second presentation context generate different input features.
[Cetin query and search context driven features ¶30, ¶35]

As to dependent claim 20, the rejection of claim 17 is incorporated. Tsoy and Cetin further teach wherein obtaining the plurality of training examples features comprises deriving the context-dependent sets of training features from historically observed data associated with presenting content in at least the first presentation context and the second presentation context. [Tsoy records of previous interactions used for training ¶105, ¶10]

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tsoy in view of Cetin, as applied in the rejection of claims 1 and 9 above, and further in view of Harper et al. (US 20230044574 A1, hereinafter Harper).

As to dependent claim 8, the rejection of claim 1 is incorporated. Tsoy and Cetin do not specifically teach identifying a superset of training features representing a union of the context-dependent sets of training features; and training the machine learning model based on the superset in which training features of the superset that are inapplicable to a training dataset are masked with respect to the training dataset.

However, Harper teaches identifying a superset of training features representing a union of the context-dependent sets of training features; and [generate a superset via a union of data ¶4 "a superset may be generated from a union of data points from an open dataset and a closed dataset."] training the machine learning model based on the superset in which training features of the superset that are inapplicable to a training dataset are masked with respect to the training dataset. [simulates/drops missing data when training (mask) ¶4 "closed dataset is modified to simulate data missingness of a target open claims dataset.
These modified datasets can be generated by selectively dropping certain datapoints and/or features from closed datasets that include comprehensive patient data"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training features of Tsoy and Cetin by incorporating the identifying of a superset of training features representing a union of the context-dependent sets of training features, and the training of the machine learning model based on the superset in which training features of the superset that are inapplicable to a training dataset are masked with respect to the training dataset, as disclosed by Harper, because all techniques address the same field of machine learning, and incorporating Harper into Tsoy and Cetin improves the predictive capacity while minimizing loss [Harper ¶91].

As to dependent claim 16, the rejection of claim 9 is incorporated. Tsoy and Cetin do not specifically teach identifying a superset of training features representing a union of the context-dependent sets of training features; and training the machine learning model based on the superset in which training features of the superset that are inapplicable to a training dataset are masked with respect to the training dataset.

However, Harper teaches identifying a superset of training features representing a union of the context-dependent sets of training features; and [generate a superset via a union of data ¶4 "a superset may be generated from a union of data points from an open dataset and a closed dataset."] training the machine learning model based on the superset in which training features of the superset that are inapplicable to a training dataset are masked with respect to the training dataset. [simulates/drops missing data when training (mask) ¶4 "closed dataset is modified to simulate data missingness of a target open claims dataset. These modified datasets can be generated by selectively dropping certain datapoints and/or features from closed datasets that include comprehensive patient data"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training features of Tsoy and Cetin by incorporating these limitations, as disclosed by Harper, because all techniques address the same field of machine learning, and incorporating Harper into Tsoy and Cetin improves the predictive capacity while minimizing loss [Harper ¶91].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Collins et al. (US 9805102 B1) teaches selecting content based on context or display slots (see Col. 4, ln. 16-45).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT, whose telephone number is (571) 272-9919. The examiner can normally be reached M-F 8:30-5 PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143
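The limitation in dispute (one model trained across presentation contexts whose feature sets overlap but are not identical, with inapplicable features masked, plus the claim 8/16 superset-union step) can be sketched in a few lines. This is an illustrative reconstruction of the technique the claims describe, not the applicant's or any cited reference's actual implementation; the feature names, context names, and the toy logistic model are all hypothetical.

```python
# Sketch: train one model over a superset of context-dependent features.
# The superset is the union of per-context feature sets (cf. claims 8/16),
# and features inapplicable to an example's context are masked to zero
# (cf. claims 6/14). Names and the model choice are illustrative only.
import math

CONTEXT_FEATURES = {
    "search": ["query_match", "item_popularity"],   # query_match: search-only
    "browse": ["scroll_depth", "item_popularity"],  # scroll_depth: browse-only
}
# Superset of training features: union of the context-dependent sets.
SUPERSET = sorted(set().union(*CONTEXT_FEATURES.values()))

def vectorize(example):
    """Map an example onto the superset, masking inapplicable features to 0."""
    applicable = CONTEXT_FEATURES[example["context"]]
    return [example["features"].get(f, 0.0) if f in applicable else 0.0
            for f in SUPERSET]

def train(examples, epochs=50, lr=0.1):
    """Toy logistic model updated from the error between prediction and label."""
    w = [0.0] * len(SUPERSET)
    for _ in range(epochs):
        for ex in examples:
            x = vectorize(ex)
            pred = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            err = pred - ex["label"]  # error between prediction and label
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Usage: examples from two contexts share item_popularity but not the rest.
examples = [
    {"context": "search", "features": {"query_match": 1.0, "item_popularity": 0.5}, "label": 1},
    {"context": "browse", "features": {"scroll_depth": 0.8, "item_popularity": 0.2}, "label": 0},
]
weights = train(examples)
```

Because a masked feature contributes zero to both the prediction and the gradient, its weight is only shaped by examples from contexts where it applies, which is the practical effect of the "inapplicable feature" limitation.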

Prosecution Timeline

Jun 30, 2023
Application Filed
Mar 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation Based on Machine Learning
2y 5m to grant; granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant; granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant; granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant; granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
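The projection figures appear to follow from the career counts shown above: the grant probability as a simple granted/resolved ratio, and the with-interview figure as the base rate plus the interview lift, capped at 99%. The cap and the additive formula are assumptions; the dashboard does not document how it combines these numbers.

```python
# Hedged reconstruction of the projection card's arithmetic from the
# examiner's career counts. The with-interview formula (base + lift,
# capped at 99) is an assumption, not documented by the dashboard.
granted, resolved = 342, 432
grant_probability = round(100 * granted / resolved)  # 342/432 ≈ 79%

interview_lift = 26.6  # percentage points, per the examiner card
with_interview = min(99, round(grant_probability + interview_lift))
print(grant_probability, with_interview)  # 79 99
```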
