Prosecution Insights
Last updated: April 19, 2026
Application No. 18/803,185

Performance Optimization System and Method for a Client Advertising Campaign

Non-Final OA (§101, §103)
Filed
Aug 13, 2024
Examiner
DURAN, ARTHUR D
Art Unit
3622
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Loopme Limited
OA Round
5 (Non-Final)
16%
Grant Probability
At Risk
5-6
OA Rounds
6y 0m
To Grant
41%
With Interview

Examiner Intelligence

Grants only 16% of cases
16%
Career Allow Rate
67 granted / 427 resolved
-36.3% vs TC avg
Strong +26% interview lift
+25.7%
Interview Lift
Among resolved cases with interview
Typical timeline
6y 0m
Avg Prosecution
36 currently pending
Career history
463
Total Applications
across all art units

Statute-Specific Performance

§101
27.4%
-12.6% vs TC avg
§103
48.9%
+8.9% vs TC avg
§102
12.7%
-27.3% vs TC avg
§112
8.1%
-31.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 427 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-5, 7-11, 14-26, 30, 32, 33 have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/28/26 has been entered.

Election/Restrictions

Applicant’s election without traverse of Group I, claims 1-31, in the reply filed on 9/5/25 is acknowledged.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot in view of the new ground(s) of rejection. On 1/28/26, Applicant amended the independent claims. Applicant’s remarks address this added amended feature. See the new citations and motivation added to the 103 below that address these new features. Also, in Applicant’s Remarks dated 1/28/26, on page 17 Applicant states, “No teaching, anticipation or suggestion appears anywhere in Skudlark nor in Shor, considered singly or in combination, of the recitation in amended independent claims 1 and 30 of a POS system that determines the POS score in real time.” Examiner notes that this remark is in regards to the feature, “a machine learning platform configured to use machine learning to determine the POS score in real time”. See the new 103 obviousness statement and explanation in the rejection below that addresses this real time feature. On page 18, Applicant states that the prior art does not disclose the survey feature. However, it is the prior art combination that renders obvious the actually claimed features.
And, Shor clearly discloses that the survey is related to ads and multiple ads [7, 19, 24]. Also, see the additional citations below concerning campaigns. On page 18, Applicant states that the prior art teaches away from requiring consent. Examiner notes that this remark is in regards to this feature of representative independent claim 30, “wherein a relatively small number of survey responses are needed with the consent of the end user to build a predictive model that can then be used to target all end users, whether those end users provide their consent or not.” However, it is the combination of prior art that renders obvious the actually claimed features. And, Shor discloses taking a sample of surveyed users to extrapolate to a larger unsurveyed group. The sampled users provide their consent via the survey and direct answers to questions on personal items (Shor discloses that the survey takers provide detailed user info like name and birthdate [21] and PII [26]). So, this group in Shor provides consent via their answers to PII and personal info questions. Then, in Shor there is the extrapolation to the unsurveyed group who have not provided PII-type info. This also makes the combination of Skudlark and Shor possible since both protect PII and anonymous users. Also, the 101 is still found to apply. The machine learning is considered generic. The predictive model is considered generic. No new additional elements beyond the generic have been added. See the 101 below. Also, the prediction server in claims 1, 30 is interpreted as a physical server, so claim 1 is not interpreted as software per se (see Applicant Spec at “[8]… the prediction server comprising a server configured to receive an advertisement request from a client demand-side platform (DSP)”).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Independent Claims 1, 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are in a statutory category of invention. However, the claims recite a configured to store data usable to determine the POS score in real time; determine the POS score, an advertisement request from a client demand-side platform (DSP), the prediction server further configured to create a prediction request from the advertisement request by selecting relevant prediction request data from the advertisement request, the prediction request data comprising end user data, copying the relevant prediction request data to the prediction request, sending the prediction request, score the prediction request to determine a likelihood to influence an end user by exposing the end user to the brand advertisement; and a prediction request log; a model builder configured to build a predictive model usable to determine the POS score, a model scorer operably connected to the model builder, the model scorer configured to receive the predictive model from the model builder, the model scorer further configured to use the predictive model to determine the POS score, wherein the POS score comprises an end user engagement metric predicting end user engagement with an advertisement, wherein the POS score further comprises one or more of a brand awareness score, a purchase intent score, a brand consideration score, an estimate of whether an end user receiving an advertisement is likely to be influenced by exposure to the brand advertisement, and another metric configured to estimate awareness of an end user of an advertised brand, wherein the prediction server is further configured
to add the POS score to the prediction request, creating a scored prediction request, send the scored prediction request to the client DSP, log the scored prediction request in the prediction request log, wherein the prediction request log is configured to log the scored prediction request, a profile store comprising a plurality of profiles of end users, the POS data platform further comprising a model store configured to store predictive models that the model builder builds, wherein the performance optimization system is operably connected to a client demand-side platform (DSP), wherein the client DSP comprises an entity configured to do one or more of run an advertising campaign directly as an advertiser and run the advertising campaign on behalf of an advertiser, wherein the model scorer scores the prediction request without using the end user data, wherein the model scorer scores the prediction request without using personally identifiable information (PII) regarding the end user, wherein the prediction request comprises a request for a POS score from a client DSP to the prediction server, wherein the model builder builds the model using customer engagement data, wherein the customer engagement data comprises survey responses and the survey responses comprising an end user’s response to a customer engagement campaign, wherein a relatively small number of survey responses are needed with the consent of the end user to build a predictive model that can then be used to target all end users, whether those end users provide their consent or not.

This is considered in the Abstract Idea grouping of certain methods of organizing human activity - advertising, marketing or sales activities or behaviors. This judicial exception is not integrated into a practical application because the claim is directed to an abstract idea with additional generic computer elements.
The additional elements are considered a POS data platform, a machine learning platform configured to use machine learning, a prediction server operably connected to the machine learning platform, the prediction server, a predictive model. These are considered generic. The machine learning and predictive model are considered generic. The generically recited computer elements do not add a practical application or meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations only perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d). Also, the additional hardware elements are: (i) mere instructions to implement the idea on a computer, and/or (ii) recitation of generic computer structure that serves to perform generic computer functions. Viewed separately or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. The claim does not provide significantly more than the identified abstract idea, in that there is no improvement to another technology or technical field, no improvement to the functioning of a computer, no application with, or by use of a particular machine, no transformation or reduction of a particular article to a different state or thing, no specific limitation other than what is well-understood, routine and conventional in the field, no unconventional step that confines the claim to a particular useful application, or meaningful limitations that amount to more than generally linking the use of the abstract idea to a particular technological environment.
Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Dependent claims 2-5, 7-11, 14-26, 32, 33 are not considered directed to any additional non-abstract claim elements. The features in these claims are considered generic. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above. While these descriptive elements may provide further helpful description for the claimed invention, these elements do not confer subject matter eligibility to the invention since their individual and combined significance is still not more than the abstract concepts identified in the claimed invention. Hence, these dependent claims are also rejected under 101. Please see the 35 USC 101 section at the Examination Guidance and Training Materials page on the USPTO website.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-11, 14-26, 30, 32, 33 are rejected under 35 U.S.C. 103 as being unpatentable over Skudlark (20100057560) in view of Shor (20200184510).

Claim 30, 1 (independent) and also dependent claims 2, 3, 5, 6, 8-15, 20-24, 26, 27: Note that independent claim 30 includes the features of independent claim 1 and also some of claim 1’s dependents. So, Claim 30 is the first claim listed as rejected.
Skudlark discloses a performance optimization system (POS) comprising: a POS data platform configured to store data usable to determine the POS score (Fig. 1); a machine learning platform configured to use machine learning (Fig. 1 and see neural networks and machine learning at [78]) and also determine the POS score (see score at [49]) and also uses real time data for the POS score (see real time and score at [94]). Skudlark does not explicitly disclose the machine learning determining the score or the machine learning platform operably connected to the POS data platform or to determine the POS score in real time. However, Skudlark discloses a plurality of models and scoring and predicting and data [49, 53, 62, 75, 94, 98]. And, Skudlark discloses the models connected to the scoring data (Fig. 1). And, Skudlark discloses using machine learning for predicting and models [78]. Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Skudlark’s models and machine learning to Skudlark’s models and score so that Skudlark can use machine learning models to score. One would have been motivated to do this in order to better score using available computational techniques. And, in further regards to determine the POS score in real time, Skudlark does not explicitly disclose to determine the POS score in real time. However, Skudlark discloses using real time data for the POS score (see real time and score at [94]). And, Skudlark further discloses that customer real time data is used to create a customer behavior predictor used to estimate customer likely ad response [0014]. And, this predictor presents a score for the ad appeal [49, 55]. Hence, real time data are used to create a customer behavior predictor that generates a score for customer likely response to advertisements. In another embodiment, Skudlark discloses that customer data is used to generate a score for the ad [75].
And, customer data includes real time customer data [14]. In another embodiment, Skudlark discloses live/realtime/current customer information like current weather and events is used to assess responsiveness in a particular environment/current weather/current event [113, 1]. And, responsiveness to ads can be scored [98]. Hence, current responsiveness to an ad can be scored based on current environment/weather in real time. And, Skudlark discloses that real time customer info can be used to refine the predictor and model [100, 92] and this predictor and model produces a score [49, 55]. And, Skudlark discloses that there are millions of sites and customers for a particular ad and that these particular ads need to be scored for the particular sites and customer combos [75]. And, at [94], Skudlark discloses that real time data are used to refine the model and that customer response may be correlated with indicia to generate indication of customer interest and that these indications may be expressed as a score (“[94]… the real time data are used to create and refine the model 154…. Customer response and access to the various services may be correlated with the indicia surrounding the services to generate indications of customer interest. These indications may be expressed in the form of interest scores, for example.”). Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Skudlark’s realtime data used to refine the model to produce a score to Skudlark’s site and customer combo that needs a score now for the customer and site combo that needs an ad so that a score can now, or in real time, be provided based on current or real time data and conditions. One would have been motivated to do this in order to better present an ad now based on the current site and customer and the current customer data and conditions/environment.
Skudlark further discloses a prediction server operably connected to the machine learning platform, the prediction server comprising a server configured to receive an advertisement request from a client demand-side platform (DSP) (see Fig. 1 and predictor server at item 152 which responds to the ad request from the end user devices in Fig. 1), the prediction server further configured to create a prediction request from the advertisement request by selecting relevant prediction request data from the advertisement request (see Fig. 1 and predictor server at item 152 which responds to the ad request from the end user devices in Fig. 1 and uses the model 154 and the ad data at 130; see [97, 54, 56]), the prediction request data comprising end user data (see Fig. 1 and predictor server at item 152 uses the model 154 and the customer profile data at 162, see Fig. 2 with predictor 152 connected to user data at 132 and 146), the prediction server then copying the relevant prediction request data to the prediction request, the prediction server sending the prediction request to the machine learning platform (see neural networks and machine learning at [78] and see machine generated filter at Fig. 2 item 215, see predict and advertisements and responsiveness and influence at [81]), the prediction server further configured to score the prediction request to determine a likelihood to influence an end user by exposing the end user to the brand advertisement (see predict and advertisements and responsiveness and influence at [81], see score and predict and ad at [49, 53, 55, 98], see score at [53]); and a prediction request log operably connected to the prediction server (see report, predict, score at [49], where report reads on log). In further regards to claim 1, the prediction request log feature is found in the following features of claim 30. 
In further regards to dependent claims 2, 3, 5, the copy feature, end user data feature, and send the prediction request features are found in the prediction server clause above. Skudlark further discloses wherein the machine learning platform comprises a model builder configured to build a predictive model usable to determine the POS score (see model and score and predict at [49, 53, 62]; note neural networks and machine learning at [78]), the machine learning platform further comprising a model scorer operably connected to the model builder, the model scorer configured to receive the predictive model from the model builder, the model scorer further configured to use the predictive model to determine the POS score, wherein the POS score comprises an end user engagement metric predicting end user engagement with an advertisement (Figs. 1, 2, [49, 53, 55, 62, 81, 98]). Skudlark further discloses an estimate of whether an end user receiving an advertisement is likely to be influenced by exposure to the brand advertisement (“[0014] Customer static data and customer real time data are used to create a customer behavior predictor used to estimate a customer's likely response to advertisements and other content,”; also note predict and advertisements and responsiveness and influence at [81]). Skudlark does not explicitly disclose wherein the POS score further comprises one or more of a brand awareness score, a purchase intent score, a brand consideration score, an estimate of whether an end user receiving an advertisement is likely to be influenced by exposure to the brand advertisement, and another metric configured to estimate awareness of an end user of an advertised brand.
However, Skudlark discloses scores related to ads and ad appeal [49] and also interest scores [94] and scores for response advertiser is trying to elicit [98] and tracking interest in particular brands or products that may reflect interest in a purchase and how this correlates to info/ad interest (see Honda and automobile and purchase and interest at [71]). Also, Skudlark further discloses scores and a likelihood to influence an end user by exposing the end user to the brand advertisement (see score and predict and ad at [49, 53, 55, 98], see score at [53]) and also predict level of interest and scores [53, 55] and likelihood and scores [75]. Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Skudlark’s brand/product and purchase interest and info of interest and estimate likely to influence to Skudlark’s variety of scores related to ad influence or response so that brand awareness scores, purchase intent scores, brand consideration scores can be scored. One would have been motivated to do this in order to better present a useful score. Skudlark further discloses wherein the prediction server is further configured to add the POS score to the prediction request, creating a scored prediction request, wherein the prediction server is further configured to send the scored prediction request to the client DSP (see predict and score at [49, 53, 55, 98]; see network connections in Figs.
1, 2 to client), wherein the prediction server is further configured to log the scored prediction request in the prediction request log, wherein the prediction request log is configured to log the scored prediction request (see report, predict, score at [49], where report reads on log), wherein the POS data platform comprises a profile store comprising a plurality of profiles of end users, the POS data platform further comprising a model store configured to store predictive models that the model builder builds (see profile and predictor at Figs. 1, 2; see score and customer information at [53, 62]), wherein the performance optimization system is operably connected to a client demand-side platform (DSP), wherein the client DSP comprises an entity configured to do one or more of run an advertising campaign directly as an advertiser and run the advertising campaign on behalf of an advertiser (see Fig. 1 and advertisement manager 166 and/or advertisement service 170), wherein the model scorer scores the prediction request without using the end user data (see anonymize at [50, 57]; see [66] where customer information is removed and anonymous information is used, so end user data, such as particular user information, is not used in the prediction or score), wherein the model scorer scores the prediction request without using personally identifiable information (PII) regarding the end user (see anonymized at Abstract and [37, 46, 50]), wherein the prediction request comprises a request for a POS score from a client DSP to the prediction server, wherein the model builder builds the model using customer engagement data (see customer and predict at Figs. 1, 2; see customer and score at [49, 53, 55, 98]). Skudlark further discloses wherein the customer engagement data comprises survey responses (see questions and questionnaires and preferences which reads on survey at [36, 38], and questions and customer interests at [87]) and also advertising and campaign details [26]. 
Skudlark does not explicitly disclose the survey responses comprising an end user’s response to a customer engagement campaign or wherein a relatively small number of survey responses are needed with the consent of the end user to build a predictive model that can then be used to target all end users, whether those end users provide their consent or not. However, Examiner notes that Applicant Spec at [100] was found relevant for this feature. And, Skudlark discloses questions for customer preferences and interests [36, 38, 87] and that these go into the customer profile [36, 87]. And, Skudlark discloses that the customer profile tracks behavior and interest in a product [79] and that the profile is used to develop a model for response to advertising to better present ads aligning with interests [12, 13] and that the profile has customer interactions and behaviors [13] and profiles and measuring response to ads [28] and also that ads are in a campaign and targeting profiles for ad campaigns [26] and also using machine learning [78] and targeting [26, 80]. And, Shor further discloses wherein the customer engagement data comprises survey responses and also that the survey responses comprise an end user’s response to a customer engagement campaign (see survey and ad at [14, 19]). And, Shor discloses that the survey is related to ads and multiple ads [7, 19, 24]. And, Shor also further discloses wherein a relatively small number of survey responses are needed with the consent of the end user to build a predictive model that can then be used to target all end users, whether those end users provide their consent or not (Shor shows surveying users on an ad response [14, 19] and then taking that survey sample to build a predictive model [18-20] where that small survey group is used to find a larger unsurveyed group to target [26, 28-29] and also Figs.
1, 3; also, Examiner interprets that the surveyed users are providing consent via their answers to direct questions, meanwhile, the unsurveyed, target users do not provide any direct consent since they are not surveyed). Shor also discloses using machine learning [20] and targeting (Abstract, [7]). Also, Shor discloses taking a sample of surveyed users to extrapolate to a larger unsurveyed group. The sampled users provide their consent via the survey and direct answers to questions on personal items (Shor discloses that the survey takers provide detailed user info like name and birthdate [21] and PII [26]). So, this group in Shor provides consent via their answers to PII and personal info questions. Then, in Shor there is the extrapolation to the unsurveyed group who have not provided PII-type info. This also makes the combination of Skudlark and Shor possible since both protect PII and anonymous users. Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Shor’s surveys and models and machine learning for ad(s) targeting to Skudlark’s profiles and questions on preferences/interests and Skudlark’s ad responses and behavior and models and ad campaigns and machine learning for targeting. One would have been motivated to do this in order to better track ad response and interests in order to better use models and machine learning to target. In regards to dependent claims 6, 8, 9, 10, 11, the POS score feature and DSP feature and log feature and profile store feature and profile/model store features are found in the clause preceding. In further regards to dependent claims 12, 13, 14, 15, 20-24, 26, 27, the features are found in the clause preceding.

Claim 4.
Skudlark further discloses the performance optimization system of claim 3, wherein the end user data comprises one or more of end user personal data, end user device data, contextual data, advertisement spot data, website data, mobile app data, network data, and privacy data (see profile at Figs. 1, 2, see interactions and behavior at [13], see demographic, geographic, behavior, context data at [34]).

Claim 7. Skudlark does not explicitly disclose the performance optimization system of claim 6, the machine learning platform comprising both the model builder and the model scorer. However, Skudlark discloses using models for building and scoring (see model building, creating at [12, 14] and model and score at [49]) and also using machine learning techniques ([78]). Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Skudlark’s machine learning to Skudlark’s model building and model scoring. One would have been motivated to do this in order to better score using available machine learning and computational techniques.

Claim 16. Skudlark further discloses the performance optimization system of claim 15, wherein the client DSP comprises an entity doing one or more of running an advertising campaign directly as an advertiser and running the advertising campaign on behalf of an advertiser (see advertiser at Fig. 1).

Claim 17. Skudlark further discloses the performance optimization system of claim 1, wherein the performance optimization system is operably connected to an analytics end user, the analytics end user comprising an end user of data collected by the performance optimization system (see Fig. 1, 2 with end user data, and see report at [49]).

Claim 18.
Skudlark further discloses the performance optimization system of claim 17, wherein the performance optimization system provides analytics data to the analytics end user using one or more of the scored prediction request and the end user profiles stored in the profile store (see Fig. 1, 2 with predictor and profile and see report at [49]).

Claim 19. Skudlark further discloses the performance optimization system of claim 17, wherein the client DSP comprises the analytics end user (see Fig 1 and see report at [49]).

Claim 25. Skudlark further discloses the performance optimization system of claim 1, wherein the prediction server receives the advertisement request directly from an SSP (see advertiser at Fig. 1).

Claims 32, 33. Skudlark does not explicitly disclose the performance optimization system of claim 1, wherein the performance optimization system effectively targets advertisements to end users when a number of end users providing consent is as low as ten percent (10%). However, the prior art combination renders obvious the survey and predictive model features above. And, Shor further discloses using a sample of surveyed users to target a bigger, unsurveyed group (Figs. 1, 3; [18-20, 26-29]). And, Shor further discloses that the size of the survey sample group compared to the unsurveyed target group can vary in order to more precisely target or in order to have a broader reach [5]. And, the MPEP states that a change in size/proportion is obvious (MPEP 2144.04.IV.A) and that ranges, amounts, and proportions are obvious (MPEP 2144.05). Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to add Shor’s range of sample size such that it can be as low as 10% to Skudlark’s ad responses and behavior and models and machine learning and targeting.
One would have been motivated to do this in order to better use models and machine learning to target and better find a balance between targeting accuracy and reach (as Shor discloses at [5]).

Conclusion

The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Shor and the other cited art disclose relevant features.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARTHUR DURAN whose telephone number is (571)272-6718. The examiner can normally be reached Mon-Thurs, 7-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ilana Spar, can be reached at (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ARTHUR DURAN/
Primary Examiner, Art Unit 3621
3/19/2026

Prosecution Timeline

Aug 13, 2024
Application Filed
Sep 18, 2025
Non-Final Rejection — §101, §103
Oct 02, 2025
Examiner Interview Summary
Oct 02, 2025
Applicant Interview (Telephonic)
Oct 03, 2025
Response Filed
Oct 14, 2025
Final Rejection — §101, §103
Oct 28, 2025
Request for Continued Examination
Nov 06, 2025
Response after Non-Final Action
Dec 02, 2025
Non-Final Rejection — §101, §103
Dec 30, 2025
Response Filed
Jan 14, 2026
Final Rejection — §101, §103
Jan 28, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555134
SYSTEM OF DETERMINING ADVERTISING INCREMENTAL LIFT
2y 5m to grant Granted Feb 17, 2026
Patent 12536510
METHOD AND SYSTEM FOR ONLINE MATCHMAKING AND INCENTIVIZING USERS FOR REAL-WORLD ACTIVITIES
2y 5m to grant Granted Jan 27, 2026
Patent 12499472
SYSTEM FOR REPLACING ELEMENTS IN CONTENT
2y 5m to grant Granted Dec 16, 2025
Patent 12482010
METHODS AND APPARATUS TO MONITOR CONSUMER BEHAVIOR ASSOCIATED WITH LOCATION-BASED WEB SERVICES
2y 5m to grant Granted Nov 25, 2025
Patent 12462274
DIGITAL PROMOTION PROCESSING SYSTEM FOR DETERMINING A VISUAL CHARACTERISTIC FOR A SINGLE COMBINED DIGITAL PROMOTION AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
16%
Grant Probability
41%
With Interview (+25.7%)
6y 0m
Median Time to Grant
High
PTA Risk
Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
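The interview-adjusted figure above appears to be simple additive arithmetic: the career allow rate (67 granted / 427 resolved, about 15.7%) plus the observed +25.7% interview lift comes to roughly 41%. A minimal sketch of that assumed calculation (the function name and the probability cap are illustrative, not from the source):

```python
def adjusted_grant_probability(base_rate: float, interview_lift: float) -> float:
    """Add an observed interview lift to a base allow rate, capped to [0, 1].

    Assumption: the dashboard's "With Interview" figure is an additive
    adjustment of the career allow rate; the cap is a safety bound.
    """
    return min(max(base_rate + interview_lift, 0.0), 1.0)

# Figures from this examiner's career data: 67 grants out of 427 resolved
# cases, with a +25.7% lift observed among cases that had an interview.
base = 67 / 427          # ~0.157 career allow rate
lift = 0.257             # observed interview lift

with_interview = adjusted_grant_probability(base, lift)
print(round(with_interview * 100))  # 41, matching the "With Interview" figure
```

If the adjustment is instead multiplicative or modeled from case-level data, the displayed 41% would be derived differently; this sketch only reproduces the additive reading of the numbers shown.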
