Prosecution Insights
Last updated: April 18, 2026
Application No. 18/826,603

DIMENSIONALITY REDUCTION OF MULTI-ATTRIBUTE CONSUMER PROFILES

Status: Final Rejection (§101)
Filed: Sep 06, 2024
Examiner: POINVIL, FRANTZY
Art Unit: 3693
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Insurance Zebra Inc.
OA Round: 2 (Final)

Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 79% (grants 79%, above average; 756 granted / 953 resolved; +27.3% vs TC avg)
Interview Lift: +16.4% (strong; measured across resolved cases with interview)
Avg Prosecution: 3y 0m (42 currently pending)
Total Applications: 995 (career history, across all art units)
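As a sanity check, the headline figures above are self-consistent: the 79% career allow rate is 756/953, and adding the +16.4-point interview lift reproduces the 96% "With Interview" figure. A minimal sketch, assuming the lift is simply additive (the tool's actual model is not disclosed):

```python
# Sanity check of the dashboard figures above. The additive-lift model is an
# assumption for illustration; the tool's actual formula is not disclosed.
granted, resolved = 756, 953
base_rate = 100 * granted / resolved           # career allowance rate (~79.3%)
with_interview = min(base_rate + 16.4, 100.0)  # assumed additive interview lift

print(round(base_rate), round(with_interview))  # 79 96
```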

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 23.4% (-16.6% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 953 resolved cases
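The "vs TC avg" deltas above are all consistent with a single Tech Center average estimate of 40.0% (e.g., 38.1% − 40.0% = −1.9%). A quick check, noting that the 40.0% figure is inferred from the deltas rather than stated directly:

```python
# Recompute the "vs TC avg" deltas shown above. TC_AVG is inferred from the
# deltas themselves (each rate minus 40.0 matches the printed figure); it is
# not stated anywhere in the dashboard.
TC_AVG = 40.0
rates = {"101": 38.1, "103": 23.4, "102": 17.3, "112": 6.1}

for statute, rate in rates.items():
    delta = round(rate - TC_AVG, 1)
    print(f"\u00a7{statute}: {rate}% ({delta:+}% vs TC avg)")
```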

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 3/9/2026 have been fully considered but they are not persuasive. Applicant's representative argues that the claims recite patentable subject matter in light of Ex parte Desjardins (Appeal No. 2024-000567, Sept. 26, 2025), a decision concerning claims that recited improved AI/ML techniques. See id. at 2-3. Applicant's representative further indicates that the decision emphasized the Federal Circuit's teaching in Enfish that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." Id. at 8 (citing Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339 (Fed. Cir. 2016)). In support of this argument, applicant's representative cites Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339 (Fed. Cir. 2016), and McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1307-08 (Fed. Cir. 2016).
In response, the instant independent claims use an artificial intelligence or "machine learning" model to determine and generate scores based on input values within a range of input values for an attribute, for different input values within respective ranges of input values for at least some other attributes in the set of attributes, "to determine a plurality of determined changes between different scores among the second scores output by the model responsive to differences between the different ones of the input values within the range of input values for the attribute for the corresponding ones of the different input values within the respective ranges of input values of the at least some other attributes in the set of attributes," as found in independent claims 21 and 41. These functions are not a technological implementation or improvement of a technological field. Applicant is directed to, e.g., McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-1315 (Fed. Cir. 2016) (finding claims not abstract because they "focused on a specific asserted improvement in computer animation"). Applicant is reminded that a system, apparatus, machine, or method for performing business, however novel, useful, or commercially successful, is not patentable apart from the means for making the system practically useful or carrying it out. The applicant is making use of generic devices to finally send or transmit the results to a computing device. Accordingly, the additional elements (such as a generic artificial intelligence model and one or more servers) do not improve (1) the server(s) or machine learning model, or (2) another technology or technical field. See Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(a)). Rather, the above-noted additional elements merely (1) apply the abstract idea on a computer; (2) include instructions to implement the abstract idea on a computer (computing device or system); or (3) use the computer as a tool to perform the abstract idea.
See Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05). Therefore, the recited additional elements do not integrate the abstract idea into a practical application when reading the claims. None of the steps, functions, and/or elements recited in the claims provide, and nowhere does the applicant's specification show, any description or explanation as to how the claimed artificial intelligence model or server(s) are intended to provide: (1) a "solution . . . necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks," as explained by the Federal Circuit in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1257 (Fed. Cir. 2014); (2) "a specific improvement to the way computers operate," as explained in Enfish, 822 F.3d at 1336; or (3) an "unconventional technological solution ... to a technological problem" that "improve[s] the performance of the system itself," as explained in Amdocs (Israel) Ltd. v. Openet Telecom, Inc., 841 F.3d 1288, 1299-1300 (Fed. Cir. 2016). Accordingly, the applicant's arguments in this respect are not convincing. Further with respect to Enfish, reliance on a processor with a memory, such as a server or a machine learning model without specific structures, to perform its routine tasks even more accurately is not sufficient to transform a claim into patent-eligible subject matter, as noted in Alice, 134 S. Ct. at 2359. As indicated by the court, "use of a computer to create electronic records, track multiple transactions and issue simultaneous instructions" was not an inventive concept. Neither the claims nor the applicant's specification supports, provides, or claims any specifically inventive technology or algorithm for performing the claimed functions. As noted in the applicant's specification, there is no specific structure or computer component to perform the claimed functions.
The generic machine learning model and/or server can be any known server, computer processor, or software or hardware component. However, there is no specific or new algorithm noted in the applicant's specification to generate the claimed functions. Furthermore, there is no showing or description of the generating or determining function effecting specific improvements to the server or artificial intelligence model. There is likewise a lack of evidence that the claims improve the manner in which the server or machine learning model performs the claimed functions, in the way the claims in Enfish performed their claimed invention via a "self-referential table" for a computer database. Applicant is referred to Enfish, 822 F.3d at 1327, 1337. Hence, there is no significant improvement to the claimed server or machine learning model or to the architecture of the overall system. The elements together execute in routinely and conventionally accepted coordinated manners and interact with their partner elements to achieve an overall outcome which, similarly, is merely the combined and coordinated execution of generic computer functionalities that are well-understood, routine, and conventional activities previously known to the industry. Accordingly, the applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-43 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Subject Matter Eligibility Standard

When considering subject matter eligibility under 35 U.S.C.
101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. Specifically, claim 21 is directed to a machine-readable medium and claim 41 is directed to a method. Each of the claims falls under one of the four statutory classes of invention. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea).

Step 2A, Prong One: the claims recite the following limitations, which are understood to recite an abstract idea absent the bolded limitations.

Claim 21 recites: A tangible, non-transitory, machine readable medium storing instructions that when executed by one or more processors effectuate operations comprising: determining, with one or more servers, for different attributes in a set of attributes, contributed differences on scores output by a machine learning model based on a combination of values obtained for respective ones of the different attributes in the set of attributes, wherein a contributed difference by an attribute in the set of attributes is determined based on the model by: generating, with the model, first scores based on different input values within a range of input values for the attribute to determine a plurality of measured changes between different scores among the first scores output by the model responsive to differences between corresponding ones of the different input values within the range of input values for the attribute; and generating, with the model, second scores based on different ones of the input values within the range of input values for the attribute for different input values within respective ranges of input values for at least some other attributes in the set of attributes to determine a plurality of determined changes between different scores among the second scores output by the model responsive to differences between
the different ones of the input values within the range of input values for the attribute for the corresponding ones of the different input values within the respective ranges of input values of the at least some other attributes in the set of attributes; receiving, with one or more servers, a request, from a user computing device, to access a comparison application; sending, with one or more servers executing the comparison application, in response to receiving the request to access the comparison application, one or more user interfaces corresponding to the comparison application to the user computing device via a network, the one or more user interfaces having a plurality of user inputs configured to receive user-entered attributes within the set of attributes and return the user-entered attributes to the comparison application; receiving, with one or more servers executing the comparison application, user input values for four or more attributes within the set of attributes from the user computing device via the one or more user interfaces; determining, with one or more servers executing the comparison application, respective amounts of effect of the four or more attributes of the user on a score output by the model for the user based on the respective user input values and the contributed differences determined for at least the four or more attributes on scores output by the model; and sending, with one or more servers executing the comparison application, to the user computing device, via the network, instructions to present a subsequent user interface with visual elements indicating the respective amounts of effect of the respective attributes on the score of the model for the user, wherein the instructions cause presentation of the subsequent user interface, and wherein three or more of the visual elements indicate three or more respective amounts of effect of three or more of the attributes of the user on the score. 
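As an editorial aid for parsing the dense claim language above: the "contributed difference" of claim 21 reads on a one-at-a-time sensitivity sweep, varying one attribute across its range while the others are fixed (the "first scores"), then repeating the sweep at other settings of the remaining attributes (the "second scores"), and summarizing the resulting score changes. A minimal sketch, illustrative only and not the applicant's disclosed implementation (all names are hypothetical):

```python
# Illustrative sketch only -- not the applicant's disclosed implementation.
# Sweep one attribute across its range, then repeat the sweep at other
# settings of the remaining attributes, and summarize the score changes.

def contributed_difference(model, baseline, attribute, attr_range, other_settings):
    """Mean absolute change in the model's score as `attribute` is swept."""
    # First scores: vary only the target attribute ("measured changes").
    first = [model({**baseline, attribute: v}) for v in attr_range]
    changes = [b - a for a, b in zip(first, first[1:])]

    # Second scores: same sweep at different settings of the other
    # attributes ("determined changes").
    for setting in other_settings:
        second = [model({**baseline, **setting, attribute: v}) for v in attr_range]
        changes.extend(b - a for a, b in zip(second, second[1:]))

    return sum(abs(c) for c in changes) / len(changes)

# Toy linear scoring model standing in for the claimed machine learning model.
score = lambda x: 2.0 * x["age"] + 0.5 * x["miles"]
effect = contributed_difference(
    score, {"age": 30, "miles": 10}, "age", [20, 30, 40],
    [{"miles": 5}, {"miles": 20}],
)
print(effect)  # each 10-year age step moves the score by 20.0
```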
Claim 22 recites: classifying at least some of the attributes of the user based on the respective amount of effect of the respective attribute on the score output by the model for the user; and determining the visual elements for corresponding ones of the three or more attributes of the user based on a respective result of a respective classification.

Claim 23 recites: classifying comprises assigning an ordinal classification to the respective attributes, the ordinal classification of an attribute based on the respective amount of effect of the user input value on the score relative to other possible user input values or a determined distribution of user input values of other users.

Claim 24 recites: assigning ordinal classifications comprises assigning different ordinal classifications to at least some attributes and assigning the same ordinal classification to at least some attributes.

Claim 25 recites: wherein: at least some ordinal classifications scale linearly in stepwise fashion with at least some attribute values.

Claim 26 recites: wherein: classifying comprises assigning a respective letter grade to each of the at least three attributes based on the respective amount of effect of the user input value relative to other possible user input values or a determined distribution of user input values of other users; and determining visual elements comprises instructing the user computing device to display assigned letter grades in association with labels identifying graded attributes.
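The ordinal "letter grade" classification of claims 23-26 can be sketched as grading each attribute by where the user's amount of effect falls within a distribution of other users' effects. The percentile cutoffs below are assumptions for illustration; the claims do not fix particular thresholds:

```python
# Illustrative sketch of the ordinal classification recited in claims 23-26.
# The percentile cutoffs are assumed for illustration only.
from bisect import bisect_left

def letter_grade(effect, population_effects):
    """Assign A-F based on the effect's percentile rank in the population."""
    sorted_pop = sorted(population_effects)
    pct = bisect_left(sorted_pop, effect) / len(sorted_pop)
    # Stepwise cutoffs (cf. claim 25's stepwise, linear scaling).
    for threshold, grade in [(0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D")]:
        if pct >= threshold:
            return grade
    return "F"

population = list(range(100))  # toy distribution of effect values
print(letter_grade(95, population), letter_grade(50, population),
      letter_grade(3, population))  # A C F
```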
Claim 27 recites: ranking the attributes based on the respective amounts of effect of the respective attributes on the score of the model for the user; and selecting attributes above a threshold rank for inclusion in the subsequent user interface with the visual elements indicating the respective amounts of effect, wherein attributes below the threshold ranking are not displayed in the subsequent user interface with the visual elements indicating the respective amounts of effect of the respective attributes on the score for the user.

Claim 28 recites: ranking the attributes based on the respective amounts of effect of the respective attributes on the score of the model for the user, wherein the instructions to present the subsequent user interface with visual elements indicating the respective amounts of effect of the respective attributes on the score for the user comprises: instructing the user computing device to display identifiers of at least some of the attributes in ranked order.

Claim 29 recites: determining respective amount of effects of attributes on the score of the model for the user comprises, for a given attribute, estimating an amount of effect of the given attribute toward the score for the user and comparing the estimated amount of effect to a distribution of amounts of effect of the given attribute to scores output by the model for a group of users.

Claim 30 recites: determining respective amount of effect of attributes on the score of the model for the user comprises, for a given attribute, determining a partial derivative of the score for the user with respect to the given attribute.
Claim 31 recites: wherein determining respective amounts of effect of attributes on the score of the model for the user comprises: accessing the model that outputs the scores based on a weighted sum of the attributes; calculating a plurality of products of respective attributes and respective weights of the model corresponding to the respective attributes; and determining the respective amounts based on calculated respective products corresponding to the respective attributes.

Claim 32 recites: wherein determining respective amounts of effect of attributes on the score of the model for the user comprises, for a given attribute: accessing a weight applied to the given attribute in the model; comparing the given attribute to a distribution of the given attribute in a population to determine a value indicative of percentage of the population that has an instance of the given attribute is larger than the given attribute of the user; and determining a respective amount of effect of the given attribute on the score for the user based on both the weight and the value indicative of percentage of the population that has an instance of the given attribute is larger than the given attribute of the user.

Claim 33 recites: selecting and grading a subset of the attributes that have a larger amount of effect on the score for the user than unselected attributes among the four or more attributes.
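The two "amount of effect" computations recited above admit short sketches: claim 30's partial derivative of the score with respect to an attribute (approximated here by a central finite difference), and claim 31's per-attribute weight-times-value products for a weighted-sum model. Function and attribute names are hypothetical:

```python
# Illustrative sketches of claims 30 and 31; names are hypothetical.

def partial_effect(model, values, attribute, eps=1e-6):
    """Claim 30 sketch: approximate d(score)/d(attribute) by central difference."""
    up = {**values, attribute: values[attribute] + eps}
    down = {**values, attribute: values[attribute] - eps}
    return (model(up) - model(down)) / (2 * eps)

def weighted_sum_effects(weights, values):
    """Claim 31 sketch: each attribute's product of weight and value."""
    return {attr: weights[attr] * values[attr] for attr in weights}

model = lambda x: 3.0 * x["age"] + 0.25 * x["miles"]  # toy weighted-sum model
vals = {"age": 40.0, "miles": 12000.0}

print(round(partial_effect(model, vals, "age"), 3))   # recovers the age weight
print(weighted_sum_effects({"age": 3.0, "miles": 0.25}, vals))
```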
Claim 34 recites: wherein: the one or more user interfaces and the subsequent user interface are webpages; the subsequent user interface presents four or more of the attributes as scoring factors presented adjacent the score for the user, each scoring factor being visually associated with an identifier of an ordinal classification indicating whether the respective scoring factor raises or lowers the score for the user, wherein changes in the scores output by the model are indicative of changes in a price of insurance for the user; and the ordinal classifications are determined based on a scoring factor model that is calibrated based on a plurality of calibration records obtained by query in a pricing analytics application before receiving the request to access the comparison application.

Claim 35 recites: wherein: the three or more attributes are classified into ordinal categories according to three or more different scales by which values are binned; the score is indicative of a price of automotive insurance; and the received attributes comprise at least seven of the following: gender, marital status, age, driving history, credit rating, current insurance status, home ownership status, annual miles driven, geolocation, make of vehicle to be insured, model of vehicle to be insured, or year of vehicle to be insured.

Claim 36 recites: wherein: determining respective amounts of effect of the respective attributes on the score of the model for the user comprises steps for determining respective amounts of effect of respective attributes on scores indicative of a price of insurance based on the contributed differences of the respective attributes on the scores output by the model.
Claim 37 recites: steps for classifying respective amounts of effect of respective attributes on scores output by the model; and steps for determining which attributes to present to the user in a report indicative of which attributes have larger amounts of effect on the score for the user output by the model than other attributes.

Claim 38 recites: the operations comprising: sending a plurality of insurance options to the user computing device for presentation to the user.

Claim 39 recites: wherein: each of the insurance options is associated with an address of a server of a respective insurance provider of the respective insurance option.

Claim 40 recites: wherein: classifying comprises assigning an ordinal classification to the respective attributes, the ordinal classification of an attribute based on the respective amount of effect of the user input value on the score relative to other possible user input values or a determined distribution of user input values of other users; determining respective amounts of effect of attributes on the score of the model for the user comprises: accessing the model that outputs the scores based on a weighted sum of the attributes, calculating a plurality of products of respective attributes and respective weights of the model corresponding to the respective attributes, and determining the respective amounts based on calculated respective products corresponding to the respective attributes; the one or more user interfaces and the subsequent user interface are webpages; the subsequent user interface presents four or more of the attributes as scoring factors presented adjacent the score for the user, each scoring factor being visually associated with an identifier of an ordinal classification indicating whether the respective scoring factor raises or lowers the score for the user, wherein changes in the scores output by the model are indicative of changes in a price of insurance for the user; and the ordinal classifications are
determined based on a scoring factor model that is calibrated based on a plurality of calibration records obtained by query in a pricing analytics application before receiving the request to access the comparison application.

Claim 41 recites: A method, comprising: determining, with one or more servers, for different attributes in a set of attributes, contributed differences on scores output by a machine learning model based on a combination of values obtained for respective ones of the different attributes in the set of attributes, wherein a contributed difference by an attribute in the set of attributes is determined based on the model by: generating, with the model, first scores based on different input values within a range of input values for the attribute to determine a plurality of measured changes between different scores among the first scores output by the model responsive to differences between corresponding ones of the different input values within the range of input values for the attribute; and generating, with the model, second scores based on different ones of the input values within the range of input values for the attribute for different input values within respective ranges of input values for at least some other attributes in the set of attributes to determine a plurality of determined changes between different scores among the second scores output by the model responsive to differences between the different ones of the input values within the range of input values for the attribute for the corresponding ones of the different input values within the respective ranges of input values of the at least some other attributes in the set of attributes; receiving, with one or more servers, a request, from a user computing device, to access a comparison application; sending, with one or more servers executing the comparison application, in response to receiving the request to access the comparison application, one or more user interfaces corresponding to the
comparison application to the user computing device via a network, the one or more user interfaces having a plurality of user inputs configured to receive user-entered attributes within the set of attributes and return the user-entered attributes to the comparison application; receiving, with one or more servers executing the comparison application, user input values for four or more attributes within the set of attributes from the user computing device via the one or more user interfaces; determining, with one or more servers executing the comparison application, respective amounts of effect of the four or more attributes of the user on a score output by the model for the user based on the respective user input values and the contributed differences determined for at least the four or more attributes on scores output by the model; and sending, with one or more servers executing the comparison application, to the user computing device, via the network, instructions to present a subsequent user interface with visual elements indicating the respective amounts of effect of the respective attributes on the score of the model for the user, wherein the instructions cause presentation of the subsequent user interface, and wherein three or more of the visual elements indicate three or more respective amounts of effect of three or more of the attributes of the user on the score.

Claim 42 recites: the operations comprising: training the model, wherein training the model comprises performing a sequence of dimension selections and splits at least in part by dividing a parameter space of the model into different regions.

Claim 43 recites: wherein each of the regions corresponds to different insurability scores.

In removing the bolded additional elements, it is noted that the remaining limitations, under their broadest reasonable interpretation, fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas, enumerated in MPEP 2106.
04(a)(2), such as fundamental economic principles or practices (including hedging, insurance, managing risk), because they amount to limitations specifying functions for determining respective amounts of effect of the four or more attributes of the user on a score output for the user, based on the respective user input values and the contributed differences determined for at least the four or more attributes on scores output, for insurance purposes.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. In particular, the claims recite the above-noted bolded limitations, understood to be additional limitations. These limitations, performing steps using one or more servers, a computing device, and a machine learning model for presenting data on a user interface using a comparison application via a network, merely amount to instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f); see also applicant's specification, which guides interpretation of these claim features by describing implementation with generic commercially available devices or any machine capable of executing a set of instructions. The computer system and machine learning model are similarly understood, in light of applicant's specification, as mere usage of any arrangement of generic computers and hardware intermediate components, potentially using networks to communicate between systems. Performance of a receiving step by a computer processor or server amounts to insignificant extra-solution activity of data gathering - see MPEP 2106.05(g). Performing the steps by computer processor hardware, with scores for insurance comparison purposes, and using a machine learning model merely limits the abstraction to the computer field by execution on generic computers - see MPEP 2106.05(h).
As noted in MPEP 2106.04(d), limitations which amount to instructions to implement an abstract idea on a computer or merely using a computer as a tool, limitations which amount to insignificant extra-solution activity, and limitations which amount to generally linking to a particular technological environment do not integrate a judicial exception into a practical application. While the claims do not specify any particular manner of receiving and sending information, the breadth of the limitations reasonably includes generating scores using a comparison algorithm and sending data and results by communicating between devices over a network. Reciting "servers" and "machine learning model" is understood to be similar to Alappat, which, as noted in MPEP 2106.05(b)(I), is superseded; the correct analysis is to look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. The claims in the instant application are performed by one or more servers and a machine learning model, sending scores and results using a comparison algorithm via a network. Consideration of these steps or functions as a combination does not change the analysis, as they do not add anything compared to when the steps are considered separately. The claims recite a particular sequence of determining scores for insurance. Performance of these steps or functions technologically does not present a meaningful limit to the scope of the claim which would reasonably integrate the abstraction into a practical application.

Step 2B: The elements discussed above with respect to the practical application in Step 2A, Prong Two are equally applicable to consideration of whether the claims amount to significantly more. Accordingly, the claims fail to recite additional elements which, when considered individually and in combination, amount to significantly more.
Reconsideration of these elements identified as insignificant extra-solution activity as part of Step 2B does not change the analysis. Sending and receiving information by computer hardware amounts to receiving and sending information over a network, which has been recognized by the courts as well-understood, routine, and conventional (see MPEP 2106.05(d)(II), citing Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).

Independent claims 21 and 41: Independent claim 41 recites the same limitations as claim 21. The same reasons discussed above with respect to claim 21 are equally applicable to claim 41. These claimed elements, as also found in the dependent claims, are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic component. In processing the claims, it is noted that the recitation of these additional elements does not impact the analysis because these elements in combination amount only to a general purpose computer performing basic or routine computer functions. These claimed elements are noted to be a generic computer for collecting data, storing data, generating data, and sending data, and performing routine and conventional functions. These additional elements do not overcome the analysis, as they are merely additional elements which amount to instructions applied to the generic computer or server.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claimed elements are also seen as generic computer components for receiving data, generating data, and determining data, thus performing generic functions without an inventive concept, as they do not amount to significantly more than the abstract idea. The claimed additional elements are interpreted as being recited at a high level of generality, even where the claims recite them in the affirmative. The type of data being manipulated does not impose meaningful limitations or render the idea less abstract. Looking at the elements as a combination, the elements do not add anything more than the elements analyzed individually. Therefore, the claims do not amount to significantly more than the abstract idea itself. Applicant is reminded that a statutory claim would recite an automated machine-implemented method or system with specific structures for performing the claimed invention so as to provide an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. The claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of a computer itself; and the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment. Reliance on a computer or processor to perform its routine tasks even more accurately is not sufficient to transform a claim into patent-eligible subject matter, as noted in Alice, 134 S. Ct. at 2359.
As indicated by the court, "use of a computer to create electronic records, track multiple transactions and issue simultaneous instructions" was not an inventive concept. Neither the claims nor the applicant's specification supports, provides, or claims any specifically inventive technology or algorithm for performing the claimed functions. Therefore, the recited additional elements do not integrate the abstract idea into a practical application when reading the claims. Independent claims 21 and 41 do not contain structures that provide significantly more than the abstract idea. The dependent claims, when analyzed and each taken as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Accordingly, claims 21-43 are directed to an abstract idea.

Allowable Subject Matter

The following is an examiner's statement of reasons for allowance: The prior art taken alone or in combination, and as argued by the applicant, failed to teach or suggest: "sending, with one or more servers executing the comparison application, in response to receiving the request to access the comparison application, one or more user interfaces corresponding to the comparison application to the user computing device via a network, the one or more user interfaces having a plurality of user inputs configured to receive user-entered attributes within the set of attributes and return the user-entered attributes to the comparison application, receiving, with one or more servers executing the comparison application, user input values for four or more attributes within the set of attributes from the user computing device via the one or more user interfaces, determining, with one or more servers executing the comparison application, respective amounts of effect of the four or more attributes of the user on a score output by the model for the user based on the respective
user input values and the contributed differences determined for at least the four or more attributes on scores output by the model, and sending, with one or more servers executing the comparison application, to the user computing device, via the network, instructions to present a subsequent user interface with visual elements indicating the respective amounts of effect of the respective attributes on the score of the model for the user, wherein the instructions cause presentation of the subsequent user interface, and wherein three or more of the visual elements indicate three or more respective amounts of effect of three or more of the attributes of the user on the score” as recited in independent claim 21 and as similarly recited in independent claim 41.

Fossier (US 20240394801 A1) discloses a method and system for machine-learning analysis of medical-claim payor class coverage. The machine-learning analysis may predict whether a third-party payor class is the proper payor class for a medical claim sent to a health insurance company or directly to the patient. The system may communicate payor class determinations between devices in a computer network. A payor class determination may be based on a computed likeliness score from a trained machine-learning model. The analysis may include identifying medical codes from historical medical claims, standardizing the medical codes, screening the historical medical claims based on the medical codes, training a model based on the standardized medical codes and corresponding payor class determinations, and applying the model to new medical claims to generate predictions and determinations.

Fogarty et al. (US 12125067 B1) disclose a computer system which includes memory hardware configured to store a machine learning model, historical feature vector inputs, and computer-executable instructions, and processor hardware configured to execute the instructions. 
The instructions include training a first machine learning model with the historical feature vector inputs to generate a title score output, and training a second machine learning model with the historical feature vector inputs to generate a background score output. For each entity in a set, the instructions include processing a title feature vector input with the first machine learning model, and processing a background feature vector with the second machine learning model, to generate a title score output and a background score output, each indicative of a likelihood that the entity is a decision entity. The instructions include automatically distributing structured campaign data to the entity based on the title score output and the background score output.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANTZY POINVIL whose telephone number is (571)272-6797. The examiner can normally be reached M-Th 7:00AM to 5:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Anderson can be reached at 571-270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /FRANTZY POINVIL/Primary Examiner, Art Unit 3693 March 17, 2026
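For context on the two-model scoring scheme described in the Fogarty reference (train one model on title features and one on background features, then distribute campaign data only to entities both models flag as likely decision entities), a minimal sketch follows. The averaging "models", the `Entity` class, and the threshold are illustrative stand-ins, not the patent's actual implementation:

```python
from dataclasses import dataclass

def title_model(title_features: list[float]) -> float:
    """Stand-in for the first trained model: a 0-1 likelihood that the
    entity is a decision entity, derived here from title features."""
    return min(sum(title_features) / len(title_features), 1.0)

def background_model(background_features: list[float]) -> float:
    """Stand-in for the second trained model: a 0-1 likelihood derived
    here from background features."""
    return min(sum(background_features) / len(background_features), 1.0)

@dataclass
class Entity:
    name: str
    title_features: list[float]
    background_features: list[float]

def should_distribute(entity: Entity, threshold: float = 0.5) -> bool:
    """Distribute structured campaign data only when both score outputs
    indicate the entity is likely a decision entity."""
    title_score = title_model(entity.title_features)
    background_score = background_model(entity.background_features)
    return title_score >= threshold and background_score >= threshold
```

In this sketch an entity with a strong title score but a weak background score is skipped, mirroring the claim's requirement that distribution be based on both score outputs.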

Prosecution Timeline

Sep 06, 2024
Application Filed
Nov 25, 2025
Non-Final Rejection — §101
Mar 09, 2026
Response Filed
Apr 03, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548000
SOCIAL MEDIA MARKETPLACE
2y 5m to grant Granted Feb 10, 2026
Patent 12536543
SYSTEM AND METHOD FOR SUSPENDING ACCESS TO ACCOUNTS DUE TO INCAPACITY OF USER
2y 5m to grant Granted Jan 27, 2026
Patent 12530663
SYSTEM AND METHOD FOR PAYMENT PLATFORM SELF-CERTIFICATION FOR PROCESSING FINANCIAL TRANSACTIONS WITH PAYMENT NETWORKS
2y 5m to grant Granted Jan 20, 2026
Patent 12499437
MULTI-SIGNATURE VERIFICATION NETWORK
2y 5m to grant Granted Dec 16, 2025
Patent 12450664
ASSESSING PROPERTY DAMAGE USING A 3D POINT CLOUD OF A SCANNED PROPERTY
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
96%
With Interview (+16.4%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 953 resolved cases by this examiner. Grant probability derived from career allow rate.
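The interview-adjusted figure above appears to follow from the examiner's career allow rate (756 grants out of 953 resolved cases, about 79%) plus the observed +16.4-point interview lift. A sketch of that apparent arithmetic (the function name is illustrative; the site's actual model is not published):

```python
def interview_adjusted_probability(granted: int, resolved: int, lift_pct: float) -> float:
    """Career allow rate in percent plus the interview lift, capped at 100.
    Illustrative only; the actual projection model is not published."""
    base_rate = 100.0 * granted / resolved  # 756/953 ~= 79.3%
    return min(base_rate + lift_pct, 100.0)

# 756 grants / 953 resolved, +16.4 percentage points with an interview
print(round(interview_adjusted_probability(756, 953, 16.4)))  # → 96
```

Note that the unrounded allow rate (79.3%) plus 16.4 rounds to the displayed 96%; starting from the rounded 79% would give 95.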
