Prosecution Insights
Last updated: April 19, 2026
Application No. 18/600,252

APPARATUS AND A METHOD FOR THE GENERATION OF UNIQUE SERVICE DATA

Status: Final Rejection (§101)
Filed: Mar 08, 2024
Examiner: HENRY, MATTHEW D
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The Strategic Coach Inc.
OA Round: 8 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 9-10
To Grant: 3y 2m
With Interview: 52%

Examiner Intelligence

Career Allow Rate: 30% (126 granted / 417 resolved; -21.8% vs TC avg). Grants only 30% of cases.
Interview Lift: +21.4% in resolved cases with interview (a strong lift, with vs. without interview).
Avg Prosecution: 3y 2m typical timeline; 48 applications currently pending.
Total Applications: 465 career history, across all art units.
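As a sanity check, the headline examiner metrics above reduce to simple arithmetic over resolved-case counts. The sketch below is illustrative only and not the dashboard's actual code; the 126 granted / 417 resolved, 52%, and +21.4% figures come from this page, while the "without interview" rate is back-derived for illustration.

```python
# Illustrative sketch (not the dashboard's actual code) of how the headline
# examiner metrics reduce to arithmetic over resolved-case counts.

def allowance_rate(granted, resolved):
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with, rate_without):
    """Percentage-point lift in allowance rate when an interview was held."""
    return rate_with - rate_without

career = allowance_rate(126, 417)            # shown on the page as 30%
lift = interview_lift(52.0, 52.0 - 21.4)     # shown as +21.4% lift

print(f"Career allow rate: {career:.1f}%")   # Career allow rate: 30.2%
print(f"Interview lift: +{lift:.1f} points")
```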

Statute-Specific Performance

§101: 43.3% (+3.3% vs TC avg)
§103: 31.4% (-8.6% vs TC avg)
§102: 5.5% (-34.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Tech Center averages are estimates • Based on career data from 417 resolved cases

Office Action

§101
DETAILED ACTION

Status of Claims

This Final Office Action is responsive to Applicant's reply filed 2/10/2026. Claims 1 and 11 have been amended. Claims 1-6, 8-16, and 18-20 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Applicant's amendments have been fully considered, but do not overcome the previously pending 35 USC 101 rejections.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. The Examiner notes that Applicant did not invent machine learning and did not invent OCR (as indicated in Applicant's specification), but rather is using these known computer functionalities to implement the abstract idea. The Examiner further notes there is no recited improvement to the machine learning or OCR. With regard to the limitations of claims 1-6, 8-16, and 18-20, Applicant argues that the claims are patent eligible under 35 USC 101 because the pending claims are not directed toward an abstract idea. The Examiner respectfully disagrees. The Examiner has already set forth a prima facie case under 35 USC 101. The Examiner has clearly pointed out the limitations directed towards the abstract idea, what the additional elements are and why they do not integrate the abstract idea into a practical application, and why the additional elements and remaining limitations do not amount to significantly more than the abstract idea. The Examiner asserts the MPEP provides a non-exhaustive list of examples of abstract ideas, rather than a list of every abstract idea, where generating service data, such as for identifying consumer preferences, market share, and/or profitability, amounts to commercial interactions, where the steps taken by humans amount to managing how humans interact, which is Organizing Human Activity.
The Examiner notes that Applicant is using a fuzzy matching process and tokenizing keywords where Applicant's own specification (Paragraph 0047) admits that these are known algorithms being used to implement the abstract idea. Applicant's arguments are not persuasive.

Applicant points to Example 39 and states the claims are eligible. The Examiner respectfully disagrees. The Examiner asserts Example 39 did not recite an abstract idea because it is all basic computer functionality. There is no actual invention in Example 39, but rather generic use of a neural network, which is completely unrelated to Applicant's claimed limitations. Applicant's arguments are not persuasive.

Applicant argues the claims are eligible under prong 2. The Examiner respectfully disagrees. The Examiner has clearly shown why each element individually and in combination does not integrate the abstract idea into a practical application. Applicant is merely throwing well-known algorithms and generic machine learning into the claims and alleging the claims are eligible. The Applicant does not point out how use of these algorithms and generic machine learning improves the computer itself. The Examiner points to Page 2 of the McRO-Bascom Memo from December 2016: "The McRO court indicated that it was the incorporation of the particular claimed rules in computer animation 'that improved [the] existing technological process', unlike cases such as Alice where a computer was merely used as a tool to perform an existing process." The Applicant's claims are geared toward generating service data from keywords with generic use of machine learning and OCR, where these techniques are merely being applied/calculated in a computing environment. Simply applying machine learning and OCR to a specific technical environment (e.g.
the computers/Internet) does not account for significantly more than the abstract idea because it does not solve a problem rooted in computer technology, nor does it improve the functioning of the computer itself, because it is merely making a determination based on rules and/or mathematical relationships (e.g. generic machine learning) to output to a user. The Applicant's claimed limitations do not appear to bring about any improvement in the operation or functioning of a computer per se, or to improve computer-related technology by allowing computer performance of a function not previously performable by a computer (see page 2 of the McRO-Bascom memo). The solution appears to be more of a business-driven solution rather than a technical one. In addition, in McRO there was no evidence that the process previously used by animators is the same as the process required by the claims. The Applicant's claimed limitations and originally filed specification provide no evidence that the claimed process/functions are any different than what would be done without a computer, where there are no adjustments to the mental process to accommodate implementation by computers. Applicant's arguments are not persuasive.

Applicant further argues the claims improve the technology in view of the Desjardins memo. The Examiner respectfully disagrees. The Examiner notes that the sanitizing of data is recited as merely filtering out data "having a signal to noise ratio below a threshold value" (Paragraph 0068), after which the claims generically recite "train a score machine learning model as a function of the sanitized score training data" and further write out longhand OCR functionality as stated in Paragraph 0024 of Applicant's specification. The details are recited at such a high level of generality that they merely add the words "apply it" with the judicial exception.
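The sanitization step discussed here is, as claimed, a threshold filter on signal-to-noise ratio. A minimal sketch follows, assuming SNR is computed as mean over standard deviation; the application gives no formula, so that convention (and the sample data) is an assumption for illustration only.

```python
# Hedged sketch of the claimed "sanitize score training data" step: drop any
# training entry whose signal-to-noise ratio falls below a threshold.
# SNR is ASSUMED to be mean/stdev of the entry's samples; the application
# does not specify a formula.
import statistics

def snr(values):
    """Assumed SNR convention: mean divided by sample standard deviation."""
    return statistics.mean(values) / statistics.stdev(values)

def sanitize(entries, threshold):
    """Keep only entries whose SNR meets or exceeds the threshold."""
    return [e for e in entries if snr(e) >= threshold]

noisy = [10.0, -9.0, 11.0, -10.0]   # mean near zero, wide spread: low SNR
clean = [10.0, 10.5, 9.5, 10.2]     # tight spread around the mean: high SNR
kept = sanitize([noisy, clean], threshold=1.0)
assert kept == [clean]              # the noisy entry is removed
```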
The Examiner notes that this noise analysis is only mentioned once in Paragraph 0068 of Applicant's specification and OCR is recited as implementing an OCR algorithm, with no recited improvement to the technology, but rather mere use of the technology on a general purpose computer to implement the abstract idea. The Examiner has clearly analyzed each claim limitation individually and as an ordered combination and shown why the claims are ineligible in the rejection below. Applicant's arguments are not persuasive.

Applicant copies and pastes large amounts of text in reference to the Desjardins memo, but does not state what the improvement in the claims actually is. In addition, the specification makes no mention of any type of improvement to the computer itself. Applicant's arguments are not persuasive.

Applicant argues the claims are eligible under Step 2B. The Examiner respectfully disagrees. The Examiner specifically asks how the use of a fuzzy matching algorithm for tokenizing keyword sets is non-generic. A simple Google search shows that Applicant is using a generic fuzzy algorithm and generic tokenization of keywords (see Google: "Tokenization refers to converting sensitive or complex data into smaller, manageable, or non-sensitive units depending on the context. It is widely used in data security to protect sensitive information and in Natural Language Processing (NLP) to segment text for analysis"). Applicant's claims recite nothing beyond generic use of these well known algorithms and technology for implementing the claimed abstract idea, where Applicant's specification supports and recites no details beyond generic use. The Examiner refers to the McRO-Bascom response above for further details. Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. In the instant case (Step 1), claims 11-16 and 18-20 are directed toward a process and claims 1-6 and 8-10 are directed toward a system, which are statutory categories of invention.
Additionally (Step 2A Prong One), the independent claims are directed toward an apparatus for generating unique service data, wherein the apparatus comprises: at least a processor; and a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to: generate a plurality of user data; receive a plurality of entity data, wherein the plurality of entity data comprises demand data comprising entity records including at least data reflecting industry economics including analysis of macroeconomic indicators that influence industry demand in a geographic area wherein at least a portion of each of the entity records are converted into a machine encoded text using an optical character reader (OCR), wherein converting the at least a portion of each of the entity records into the machine-encoded text comprises converting images of text in the at least a portion of each of the entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of each of the entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel by pixel basis; receive score training data, wherein the score training data comprises entity data inputs correlated to entity score outputs; sanitize the score training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, wherein sanitizing the score training data comprises: determining by the dedicated hardware unit that at least one training data entry of the score training data has a signal to noise ratio below a threshold value; and removing the at least one training data entry from the score training data to create sanitized 
score training data; train a score machine learning model as a function of the sanitized score training data; generate an entity score as a function of the plurality of entity data, using the trained score machine learning model; identify one or more user clusters as a function of the user data and the entity score generated using the score machine learning model, wherein identifying the one or more user clusters as a function of the user data comprises extracting one or more user keyword sets from each of the one or more user clusters, wherein the one or more clusters comprises a graphical representation of the entity score generated using the score machine learning model; identify one or more entity clusters as a function of the entity data, wherein identifying the one or more entity clusters as a function of the entity data comprises extracting one or more entity keyword sets from each of the one or more entity clusters; generate unique service data as a function of a comparison of the one or more user clusters to the one or more entity clusters, wherein the comparison comprises tokenizing keyword sets associated with the one or more user clusters and the one or more entity clusters and comparing the tokenized keyword sets using a fuzzy matching process including applying at least one fuzzy matching algorithm and determining similarity scores relative to a similarity threshold, wherein generating the unique service data utilizes a service machine learning model generated by creating an artificial neural network and comprises: receiving a service training data set, wherein the service training data set comprises the one or more user clusters, the one or more entity clusters, and the entity score generated by the trained score machine learning model as input correlated to examples of unique service data as an output; iteratively updating the service machine learning model with past outputs of the unique service data and additional market feedback, by: assigning
and adjusting weighted parameters between interconnected nodes of the neural network based on correlations between clustered keyword sets, wherein the correlations are determined using the similarity scores generated by the fuzzy matching process relative to the similarity threshold, and corresponding examples of unique service data in the service training data set; and performing an auditing process, wherein the processor: receives user feedback indicating suboptimal predictive performance; determines an accuracy score based on the user feedback; compares past outputs of the service machine learning model to convergence criteria including the accuracy score; and reconfigures weighted connections between nodes of the neural network based on the results of the comparison to promote convergence with desired output values of the service machine learning model; training the service machine learning model; and outputting the unique service data using the trained service machine learning model, wherein the unique service data comprises a service score, wherein the service score integrates a product's value and consumer feedback; display the unique service data using a display device (claims 1 and 11) (Organizing Human Activity), which are considered to be abstract ideas (See MPEP 2106.05). The steps/functions disclosed above and in the independent claims are directed toward the abstract idea of Organizing Human Activity because the claimed limitations are analyzing entity and user data to identify clusters by extracting keywords using a machine learning model, which is then trained to generate and display unique service data including a score, which is managing how humans interact for commercial purposes. Dependent claims 2-6, 8-10, 12-16, and 18-20 further narrow the abstract idea identified in the independent claims, where any additional elements introduced are discussed below.
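The comparison step recited in the independent claims (tokenize keyword sets, fuzzy-match tokens, keep pairs whose similarity score clears a threshold) can be sketched generically. Here difflib's ratio() stands in for the unspecified "at least one fuzzy matching algorithm", and the keyword strings are invented; none of this comes from the application.

```python
# Hedged sketch of the claimed cluster-comparison step. difflib's ratio()
# is a stand-in fuzzy matcher; the claim does not name an algorithm.
from difflib import SequenceMatcher

def tokenize(keyword_set):
    """Naive tokenization assumption: lowercase, split on whitespace."""
    return keyword_set.lower().split()

def fuzzy_matches(user_keywords, entity_keywords, threshold=0.8):
    """Token pairs whose similarity score meets the similarity threshold."""
    matches = []
    for u in tokenize(user_keywords):
        for e in tokenize(entity_keywords):
            score = SequenceMatcher(None, u, e).ratio()
            if score >= threshold:
                matches.append((u, e, round(score, 2)))
    return matches

# "servces" is a deliberate misspelling the fuzzy matcher still catches.
pairs = fuzzy_matches("consulting service", "consulting servces pricing")
assert ("consulting", "consulting", 1.0) in pairs
```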
Step 2A Prong Two: In this application, even if not directed toward the abstract idea, the independent claims additionally recite "an apparatus, wherein the apparatus comprises: at least a processor; and a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to: generate a plurality of user data; receive a plurality of entity data; wherein at least a portion of each of the entity records are converted into a machine encoded text using an optical character reader (OCR); and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel by pixel basis; using a dedicated hardware unit comprising circuitry; using the trained score machine learning model; utilizes a service machine learning model generated by creating an artificial neural network: iteratively updating the service machine learning model; using the trained service machine learning model; using a display device (claims 1 and 11)", which are additional elements that would not integrate the judicial exception (e.g. abstract idea) into a practical application because the claimed structure is recited at such a high level of generality that it merely adds the words "apply it" with the judicial exception and amounts to mere instructions to implement an abstract idea on a computer (See MPEP 2106.05). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. Even when viewed in combination, the additional elements in the claims do no more than use the computer components as a tool. There is no change to the computer or other technology that is recited in the claim, and thus the claims do not improve computer functionality or other technology.
The Examiner further notes that the claimed “train a score machine learning model; utilizes a service machine learning model generated by creating an artificial neural network; iteratively training the service machine learning model using service training data; using a trained service machine learning model; using a score machine learning model trained using score training data (claims 1 and 11)” are so generically recited (e.g. no technical features of the machine learning) that the machine learning merely adds the words apply it with the judicial exception (See MPEP 2106.05) because they are recited at such a high level of generality. For example, Paragraph 0031 of Applicant’s specification states “The machine-learning model may be performed using, without limitation, linear machine-learning models such as without limitation logistic regression and/or naive Bayes machine-learning models, nearest neighbor machine-learning models such as k-nearest neighbors machine-learning models, support vector machines, least squares support vector machines, fisher’s linear discriminant, quadratic machine-learning models, decision trees, boosted trees, random forest machine-learning model, and the like”, which shows how high level the machine learning actually is. 
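To make concrete how high-level the quoted spec passage is, one of the model families it lists (nearest neighbors) yields a complete "score machine learning model" in a few lines. This is an editorial illustration only; the feature tuples and entity scores below are invented placeholders, not data from the application.

```python
# Illustrative 1-nearest-neighbor "score model" over entity-data inputs,
# showing how generic the recited model families are. All values invented.
import math

def train(score_training_data):
    """'Training' a nearest-neighbor model is just storing the data."""
    return list(score_training_data)

def predict(model, entity_features):
    """Return the score of the closest stored entity-data input."""
    def dist(row):
        return math.dist(row[0], entity_features)
    features, score = min(model, key=dist)
    return score

# (feature vector, entity score) pairs -- hypothetical placeholders
model = train([((1.0, 2.0), 75), ((8.0, 9.0), 40)])
assert predict(model, (1.2, 2.1)) == 75
```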
The Examiner further asserts “wherein at least a portion of each of the entity records are converted into a machine encoded text using an optical character reader (OCR), wherein converting the at least a portion of each of the entity records into the machine-encoded text comprises converting images of text in the at least a portion of each of the entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of each of the entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel by pixel basis (claims 1 and 11)” is recited at such a high level of generality it merely adds the words apply it with the judicial exception (See MPEP 2106) because the Applicant is merely reciting generic OCR functionality as recited in Paragraph 0024 of Applicant’s specification which specifically states “an OCR process may include a feature extraction process … General techniques of feature detection in computer vision are applicable to this type of OCR. In some embodiments, machine-learning processes like nearest neighbor classifiers (e.g., k-nearest neighbors algorithm) can be used to compare image features with stored glyph features and choose a nearest match. OCR may employ any machine-learning process described in this disclosure, for example machine-learning processes described with reference to FIGS. 5-7. Exemplary non-limiting OCR software includes Cuneiform and Tesseract. Cuneiform is a multi-language, open-source optical character recognition system originally developed by Cognitive Technologies of Moscow, Russia. 
Tesseract is free OCR software originally developed by Hewlett-Packard of Palo Alto, California, United States", showing how the OCR functionality is just generic use of OCR that is well-known to one of ordinary skill in the art and is just software being applied to implement the abstract idea. In addition, dependent claims 2-6, 8-10, 12-16, and 18-20 further narrow the abstract idea, and dependent claims 10 and 20 additionally recite "using a chatbot", which are additional elements that do not integrate the judicial exception (e.g. abstract idea) into a practical application because the claimed software component merely adds the words "apply it" with the judicial exception and amounts to mere instructions to implement an abstract idea on a computer (See MPEP 2106.05).

Step 2B: When analyzing the additional element(s) and/or combination of elements in the claim(s) other than the abstract idea per se, the claim limitations amount to no more than a general link of the use of an abstract idea to a particular technological environment and merely amount to the application of, or instructions to apply, the abstract idea on a computer (See MPEP 2106.05).
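For reference, the "matrix matching" OCR step the claims recite (compare pre-processed image pixels to a stored glyph on a pixel-by-pixel basis) reduces to a bitmap comparison. The 3x3 glyph bitmaps below are toy placeholders for illustration; production OCR engines such as Tesseract work very differently.

```python
# Toy sketch of pixel-by-pixel "matrix matching" OCR. The patch is assumed
# already pre-processed (de-skewed, binarized); glyph bitmaps are invented.

GLYPHS = {  # hypothetical stored glyph bitmaps (1 = ink, 0 = background)
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

def pixel_match(patch, glyph):
    """Count of agreeing pixels between patch and glyph, pixel by pixel."""
    return sum(p == g for prow, grow in zip(patch, glyph)
                      for p, g in zip(prow, grow))

def recognize(patch):
    """Best-matching stored glyph for the patch."""
    return max(GLYPHS, key=lambda ch: pixel_match(patch, GLYPHS[ch]))

patch = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 1]]   # an "I" with one stray ink pixel
assert recognize(patch) == "I"
```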
Further, the method and system claims 1-6, 8-16, and 18-20 recite "an apparatus, wherein the apparatus comprises: at least a processor; and a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to: generate a plurality of user data; receive a plurality of entity data; wherein at least a portion of each of the entity records are converted into a machine encoded text using an optical character reader (OCR); and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel by pixel basis; using a dedicated hardware unit comprising circuitry; using the trained score machine learning model; utilizes a service machine learning model generated by creating an artificial neural network: iteratively updating the service machine learning model; using the trained service machine learning model; using a display device (claims 1 and 11)"; however, these elements merely facilitate the claimed functions at a high level of generality, perform conventional functions, and are considered to be general purpose computer components, which is supported by Applicant's specification in Paragraphs 0024, 0031, and 0115-0118 and Figures 1 and 10. The Applicant's claimed additional elements are mere instructions to implement the abstract idea on a general purpose computer and generally link the use of an abstract idea to a particular technological environment. When viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. In addition, claims 2-6, 8-10, 12-16, and 18-20 further narrow the abstract idea identified in the independent claims.
The Examiner notes that the dependent claims merely further define the data being analyzed and how the data is being analyzed. Similarly, claims 10 and 20 additionally recite "using a chatbot", which are additional elements that do not amount to significantly more than the abstract idea because the claimed structure merely amounts to the application or instructions to apply the abstract idea on a computer and does not move beyond a general link of the use of an abstract idea to a particular technological environment (See MPEP 2106.05). The additional limitations of the independent and dependent claim(s) when considered individually and as an ordered combination do not amount to significantly more than the abstract idea. The examiner has considered the dependent claims in a full analysis including the additional limitations individually and in combination as analyzed in the independent claim(s). Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Allowable over 35 USC 103

Claims 1-6, 8-16, and 18-20 are allowable over the prior art, but remain rejected under §101 for the reasons set forth above. Claims 1-6, 8-16, and 18-20 disclose a system and method for analyzing entity and user data to identify clusters by extracting keywords using a machine learning model, which is then trained to generate and display unique service data including a score, where the machine learning is iteratively updated based on user feedback to convergence criteria to produce an accuracy score.

Regarding a possible 103 rejection, the closest prior art of record is:

Cook (US 2024/0126794 A1), which discloses generating a digital assistant based on user queries.
Shariff et al. (US 2019/0012682 A1), which discloses forecasting demand using real time demand.
Ganti et al. (US 2019/0172082 A1), which discloses dynamic pricing systems.
The prior art of record neither teaches nor suggests all particulars of the limitations as recited in claims 1-6, 8-16, and 18-20, such as analyzing entity and user data to identify clusters by extracting keywords using a machine learning model, which is then trained to generate and display unique service data including a score, where the machine learning is iteratively updated based on user feedback to convergence criteria to produce an accuracy score. While individual features may be known per se, there is no teaching or suggestion absent applicants’ own disclosure to combine these features other than with impermissible hindsight and the combination/arrangement of features are not found in analogous art. Specifically the claimed “an apparatus for generating unique service data, wherein the apparatus comprises: at least a processor; and a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to: generate a plurality of user data; receive a plurality of entity data, wherein the plurality of entity data comprises demand data comprising entity records including at least data reflecting industry economics including analysis of macroeconomic indicators that influence industry demand in a geographic area wherein at least a portion of each of the entity records are converted into a machine encoded text using an optical character reader (OCR), wherein converting the at least a portion of each of the entity records into the machine-encoded text comprises converting images of text in the at least a portion of each of the entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of each of the entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels 
of the pre-processed images to pixels of a stored glyph on a pixel by pixel basis; receive score training data, wherein the score training data comprises entity data inputs correlated to entity score outputs; sanitize the score training data using a dedicated hardware unit comprising circuitry configured to perform signal processing operations, wherein sanitizing the score training data comprises: determining by the dedicated hardware unit that at least one training data entry of the score training data has a signal to noise ratio below a threshold value; and removing the at least one training data entry from the score training data to create sanitized score training data; train a score machine learning model as a function of the sanitized score training data; generate an entity score as a function of the plurality of entity data, using the trained score machine learning model; identify one or more user clusters as a function of the user data and the entity score generated using the score machine learning model, wherein identifying the one or more user clusters as a function of the user data comprises extracting one or more user keyword sets from each of the one or more user clusters, wherein the one or more clusters comprises a graphical representation of the entity score generated using the score machine learning model; identify one or more entity clusters as a function of the entity data, wherein identifying the one or more entity clusters as a function of the entity data comprises extracting one or more entity keyword sets from each of the one or more entity clusters; generate unique service data as a function of a comparison of the one or more user clusters to the one or more entity clusters, wherein the comparison comprises tokenizing keyword sets associated with the one or more user clusters and the one or more entity clusters and comparing the tokenized keyword sets using a fuzzy matching process including applying at least one fuzzy matching algorithm and 
determining similarity scores relative to a similarity threshold, wherein generating the unique service data utilizes a service machine learning model generated by creating an artificial neural network and comprises: receiving a service training data set, wherein the service training data set comprises the one or more user clusters, the one or more entity clusters, and the entity score generated by the trained score machine learning model as input correlated to examples of unique service data as an output; iteratively updating the service machine learning model with past outputs of the unique service data and additional market feedback, by: assigning and adjusting weighted parameters between interconnected nodes of the neural network based on correlations between clustered keyword sets, wherein the correlations are determined using the similarity scores generated by the fuzzy matching process relative to the similarity threshold, and corresponding examples of unique service data in the service training data set; and performing an auditing process, wherein the processor: receives user feedback indicating suboptimal predictive performance; determines an accuracy score based on the user feedback; compares past outputs of the service machine learning model to convergence criteria including the accuracy score; and reconfigures weighted connections between nodes of the neural network based on the results of the comparison to promote convergence with desired output values of the service machine learning model; training the service machine learning model; and outputting the unique service data using the trained service machine learning model, wherein the unique service data comprises a service score, wherein the service score integrates a product's value and consumer feedback; display the unique service data using a display device (as required by independent claims 1-6, 8-16, and 18-20)", thus rendering claims 1-6, 8-16, and 18-20 as allowable over the prior art.
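The iterative audit-and-retrain loop in the claim language above (derive an accuracy score from user feedback, compare past outputs to convergence criteria, reconfigure weighted connections) can be sketched under heavy assumptions: a single weight and a least-squares-style adjustment stand in for a real neural network, and every number below is invented for illustration.

```python
# Assumption-laden sketch of the claimed audit-and-retrain loop. A single
# weighted connection replaces a neural network; real training would use
# backpropagation over many weights.

def accuracy_score(outputs, feedback):
    """Fraction of past outputs the user feedback marked as correct."""
    return sum(o == f for o, f in zip(outputs, feedback)) / len(outputs)

def audit_and_retrain(weight, inputs, targets, lr=0.1, min_accuracy=1.0,
                      max_rounds=100):
    """Adjust the weight until outputs meet the convergence criterion."""
    for _ in range(max_rounds):
        outputs = [round(weight * x) for x in inputs]
        if accuracy_score(outputs, targets) >= min_accuracy:
            break  # convergence criteria met: stop reconfiguring
        # reconfigure the weighted connection toward the desired outputs
        weight += lr * sum((t - weight * x) * x
                           for x, t in zip(inputs, targets)) / len(inputs)
    return weight

w = audit_and_retrain(0.1, inputs=[1.0, 2.0, 3.0], targets=[2, 4, 6])
assert [round(w * x) for x in [1.0, 2.0, 3.0]] == [2, 4, 6]
```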
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record but not relied upon, which is considered pertinent to applicant's disclosure, is listed on the attached PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW D HENRY whose telephone number is (571)270-0504. The examiner can normally be reached on Monday-Thursday 9AM-5PM. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW D HENRY/
Primary Examiner, Art Unit 3625

Prosecution Timeline

Mar 08, 2024
Application Filed
May 02, 2024
Non-Final Rejection — §101
May 16, 2024
Interview Requested
May 29, 2024
Examiner Interview Summary
May 29, 2024
Applicant Interview (Telephonic)
Aug 06, 2024
Response Filed
Aug 12, 2024
Final Rejection — §101
Aug 30, 2024
Request for Continued Examination
Sep 03, 2024
Response after Non-Final Action
Nov 07, 2024
Non-Final Rejection — §101
Dec 12, 2024
Interview Requested
Dec 18, 2024
Examiner Interview Summary
Dec 18, 2024
Applicant Interview (Telephonic)
Jan 06, 2025
Response Filed
Jan 28, 2025
Final Rejection — §101
Apr 02, 2025
Request for Continued Examination
Apr 07, 2025
Response after Non-Final Action
May 06, 2025
Non-Final Rejection — §101
Aug 06, 2025
Examiner Interview Summary
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 11, 2025
Response Filed
Aug 26, 2025
Final Rejection — §101
Oct 31, 2025
Request for Continued Examination
Nov 08, 2025
Response after Non-Final Action
Jan 13, 2026
Non-Final Rejection — §101
Jan 22, 2026
Interview Requested
Feb 10, 2026
Response Filed
Mar 17, 2026
Final Rejection — §101 (current)
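The round count and pendency shown elsewhere on this page can be reproduced directly from the timeline above. A small sketch (dates copied from the timeline; variable names are illustrative):

```python
from datetime import date

# Rejections from the timeline above: one entry per office-action round.
rejections = [
    (date(2024, 5, 2),  "Non-Final"),
    (date(2024, 8, 12), "Final"),
    (date(2024, 11, 7), "Non-Final"),
    (date(2025, 1, 28), "Final"),
    (date(2025, 5, 6),  "Non-Final"),
    (date(2025, 8, 26), "Final"),
    (date(2026, 1, 13), "Non-Final"),
    (date(2026, 3, 17), "Final"),
]
filed = date(2024, 3, 8)                          # Application Filed

oa_rounds = len(rejections)                       # 8, matching "OA Round 8"
pendency_days = (rejections[-1][0] - filed).days  # elapsed days to the current action
```

The application is about two years into prosecution against a projected median of 3y 2m to grant for this examiner.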

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12468854
SECURE PLATFORM FOR THE DISSEMINATION OF DATA
2y 5m to grant Granted Nov 11, 2025
Patent 12307472
System and Methods for Generating Market Planning Areas
2y 5m to grant Granted May 20, 2025
Patent 12271846
DISPATCH ADVISOR TO ASSIST IN SELECTING OPERATING CONDITIONS OF POWER PLANT THAT MAXIMIZES OPERATIONAL REVENUE
2y 5m to grant Granted Apr 08, 2025
Patent 12229707
INTUITIVE AI-POWERED WORKER PRODUCTIVITY AND SAFETY
2y 5m to grant Granted Feb 18, 2025
Patent 12205056
SYSTEMS AND METHODS FOR PASSENGER PICK-UP BY AN AUTONOMOUS VEHICLE
2y 5m to grant Granted Jan 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
30%
Grant Probability
52%
With Interview (+21.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 417 resolved cases by this examiner. Grant probability derived from career allow rate.
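The headline figures in this panel follow from the stated counts. A quick check (the 52% with-interview rate is taken as given; the dashboard's exact +21.4% lift is presumably computed against the non-interviewed subset rather than the overall baseline, so the simple difference below is only an approximation):

```python
# Stated counts: "126 granted / 417 resolved" for this examiner.
granted, resolved = 126, 417

career_allow_rate = granted / resolved             # ~0.302, displayed as 30%
with_interview = 0.52                              # stated grant rate with interview
lift_points = with_interview - career_allow_rate   # ~0.22, vs. the shown +21.4%
```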
