Prosecution Insights
Last updated: April 19, 2026
Application No. 17/491,495

ARTIFICIAL INTELLIGENCE BASED COMPLIANCE DOCUMENT PROCESSING

Final Rejection — §103, §112
Filed: Sep 30, 2021
Examiner: MAUNI, HUMAIRA ZAHIN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intuit Inc.
OA Round: 4 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (6 granted / 16 resolved; -17.5% vs TC avg)
Interview Lift: +66.7% among resolved cases with interview
Avg Prosecution: 4y 6m (typical timeline)
Total Applications: 55 across all art units (39 currently pending)
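The headline figures above follow directly from the raw counts. The sketch below reproduces the arithmetic; the with/without-interview split rates are assumptions chosen only to illustrate the formula behind the displayed +66.7% lift, since the underlying per-interview counts are not shown on this page.

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 6, 16
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # Career allow rate: 38%

# Interview lift = relative improvement of the allow rate when an
# interview was held. These split rates are ASSUMED values, picked
# only so the formula yields the displayed +66.7%.
rate_without = 0.30  # assumed allow rate without interview
rate_with = 0.50     # assumed allow rate with interview
lift = rate_with / rate_without - 1
print(f"Interview lift: {lift:+.1%}")  # Interview lift: +66.7%
```

Any pair of split rates in a 5:3 ratio produces the same +66.7% figure; only the ratio, not the absolute levels, is pinned down by the dashboard.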

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 16 resolved cases
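Each per-statute delta above implies a Tech Center average estimate of rate minus delta. A short sketch, with values transcribed from the table above, recovers those implied averages:

```python
# Per-statute rates and deltas vs the Tech Center average, as shown
# above. The implied TC average estimate is: tc_avg = rate - delta.
stats = {
    "101": (35.9, -4.1),
    "103": (40.2, +0.2),
    "102": (10.9, -29.1),
    "112": (13.0, -27.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"Sec. {statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg {tc_avg:.1f}%)")
```

All four rows back out to the same implied TC average of 40.0%, which suggests a single blended benchmark rather than per-statute averages.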

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-8, 11-18, and 21-24 remain pending in the application.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-8, 11-18, and 21-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 11 recite “wherein the training is completed instantaneously in response to obtaining the compliance document and the seed data”, “wherein the automatic completion of the compliance document is completed instantaneously in response to training the AI model”, and “wherein the reconverting is completed instantaneously in response to receiving the feedback.” The definition of instantaneous is “done, occurring, or acting without any perceptible duration of time”, “done without any delay being purposely introduced”, or “occurring or present at a particular instant” (Merriam-Webster). The specification fails to describe the instantaneous completion of the above subject matter, specifically how training, reconverting, and automatic completion are carried out instantaneously. Dependent claims 2-8, 12-18, and 21-24 inherit the deficiency from their respective independent claims and therefore are rejected on the same basis.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8, 11-18, and 21-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 11 recite “wherein the training is completed instantaneously in response to obtaining the compliance document and the seed data”, “wherein the automatic completion of the compliance document is completed instantaneously in response to training the AI model”, and “wherein the reconverting is completed instantaneously in response to receiving the feedback.” It is unclear whether the claims reciting “instantaneous” completion refer to completing in direct response to, or completing in real time, as disclosed in specification paragraph 52. Dependent claims 2-8, 12-18, and 21-24 inherit the deficiency from their respective independent claims and therefore are rejected on the same basis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-14, 16-18, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee et al. (Pub. No.: US 2018/0032497 A1), hereafter Mukherjee, in view of Wang et al. ("Want To Reduce Labeling Cost? GPT-3 Can Help"), hereafter Wang.

Regarding claim 1, Mukherjee discloses: A computer-implemented method for automatically completing a compliance document, the method performed by one or more processors of a computing system and comprising (Mukherjee, paragraph 0111, last 4 lines “electronic document preparation system 111 quickly provides functionality that electronically complete the data fields of the new and/or updated form as part of preparing a financial document.”, and paragraph 0308, lines 3-5 “The system includes at least one processor at least one memory coupled to the at least one processor” teaches automatically completing a compliance document by one or more processors of a computing system), obtaining a compliance document including a number of completable portions (Mukherjee, paragraph 0052, lines 1-4 “embodiments of the present disclosure receive natural language textual form data related to a new and/or updated form having data fields which generally are to be completed” teaches receiving a compliance document including a number of completable portions, i.e., natural language textual form data related to a new and/or updated form having data fields), obtaining a seed data associated with the compliance document, wherein the seed data includes a plurality of sample text inputs and a plurality of sample computer readable operations associated with the plurality of sample text inputs (Mukherjee, paragraph 0139, last 4 lines "utilize the historical tax return data to gather or generate relevant training set data 122 that can be used by machine learning module 113" teaches obtaining associated seed data with a plurality of text inputs and paragraph 0134, lines 22-24 “acceptable machine-executable functions related to data fields of the historical forms” teaches sample computer readable operations associated with the plurality of sample text inputs found in training set data), parsing, based on executing a parsing
engine of the computing system, text in the compliance document into one or more text segments (Mukherjee, Fig. 1 and paragraph 0015, lines "one or more segments of the sentence data are isolated" teaches parsing text in the compliance document into segments using a parsing engine in Fig. 1), processing the seed data, wherein processing the seed data includes training, using a … learning technique in conjunction with the seed data, an artificial intelligence (AI) model to convert text input into computer readable operation output (Mukherjee, ¶[0095] and ¶[0101] teaches training a machine learning model to learn to convert text input to computer readable operation output using machine learning techniques in conjunction with seed data), wherein the training is completed instantaneously in response to obtaining the compliance document and the seed data (Examiner’s Note: “instantaneously” is understood to be equivalent to “in real time”, as per ¶[0052] of the specification) (Mukherjee, Fig. 1, paragraph 0068, and paragraph 0097, lines 6-9 "In one embodiment, third party computing environment 140 is configured to automatically transmit financial data to electronic document preparation system 111" teaches a user computing environment 140 where document processing is performed in real time with the learning and obtaining of the document, performed quickly in order to retain user attention/interest), converting, using the trained AI model, the one or more text segments into one or more computer readable operations, each of the one or more computer readable operations for automatically completing one of the completable portions of the compliance document (Mukherjee, paragraph 0052, lines 6-10 "These embodiments utilize machine learning to parse and otherwise analyze natural language in a unique way and thereby correctly determine and learn one or more machine executable functions equivalent to or otherwise represented by the instructions for each data field." and Fig.
1, elements 114, 113 teaches converting text segments to machine executable functions as computer readable operations, generated by a machine learning model using seed data in modules 114 and 113), automatically completing, using the one or more computer readable operations, at least one of the completable portions of the compliance document (Mukherjee, paragraph 0066, lines 2-10 "in preparing documents related to one or more forms that include data fields which are intended to be completed … machine-executable functions to be executed by a computing processor in the context of an electronic document preparation system." and paragraph 0111 teaches completing portions of the compliance document based on computer readable operations), wherein the automatic completion of the compliance document is completed instantaneously in response to training the AI model (Mukherjee, Fig. 1, and paragraphs 0068 and 0097, lines 6-9 "In one embodiment, third party computing environment 140 is configured to automatically transmit financial data to electronic document preparation system 111" teaches a user computing environment 140 where document processing is performed in real time with the training of the AI model, and the automatic completion of the document is completed in response to training), receiving feedback about one of the completed portions of the compliance document (Mukherjee, paragraph 0116 teaches receiving feedback about completed portions of the document from an expert after form data is provided to the document preparation system), adjusting the seed data based on the feedback (Mukherjee, paragraph 0116 teaches adjusting the seed data, i.e.
training data, based on expert feedback), retraining the AI model, using the … learning technique in conjunction with the adjusted seed data, to more accurately convert text input into computer readable operation output (Mukherjee, paragraph 0116 teaches the machine learning module using learning techniques to retrain the AI model in conjunction with the expert-feedback-adjusted seed data), reconverting, using the retrained AI model, the particular one of the text segments into an updated computer readable operation, wherein the reconverting is completed instantaneously in response to receiving the feedback (Mukherjee, Fig. 1, paragraphs 0052, 0116 and 0111 teaches reconverting the text segments into an updated computer readable operation upon retraining of the AI model in response to receiving the feedback, as the machine learning module iteratively trains the model based on the feedback and updated fields).

While Mukherjee teaches processing the seed data, wherein processing the seed data includes training, using a … learning technique in conjunction with the seed data, an artificial intelligence (AI) model to convert text input into computer readable operation output, they do not explicitly teach the AI model using few-shot learning. Wang teaches: AI model using few-shot learning (Wang, page 1, introduction, paragraph 1, last two lines "state-of-the-art few shot learner" and Figure 2 teaches models that use few-shot learning). Mukherjee and Wang are analogous art because they are from the same field of endeavor, natural language processing and machine learning models. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee to include using few-shot learning, based on the teachings of Wang. The motivation for doing so would have been to obtain "better performance with limited … budget" (Wang, abstract, lines 19-20).
While Mukherjee teaches retraining the AI model, using the … learning technique in conjunction with the adjusted seed data, to more accurately convert text input into computer readable operation output, they do not disclose doing so using a few-shot learning technique. Wang discloses: retraining … AI model, using the few-shot learning technique in conjunction with … adjusted … data… (Wang, Figures 2 and 3 teach retraining an AI model using a few-shot learning technique in conjunction with adjusted training data). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee to include retraining … AI model, using the few-shot learning technique in conjunction with … adjusted … data… based on the teachings of Wang. The motivation for doing so would have been to obtain "better performance with limited … budget" (Wang, abstract, lines 19-20).

Regarding claim 2, Mukherjee, in view of Wang, discloses the method of claim 1. Mukherjee further discloses: obtaining user data associated with the compliance document (Mukherjee, paragraph 0090, lines 5-7, "for one or more historical users of electronic document preparation system 111, data representing one or more items associated with various users"), completing the compliance document further based on the user data (Mukherjee, paragraph 0066, last 5 lines "Once the electronic document preparation system has learned machine-executable functions that produce the required data entries for the data fields, the electronic document preparation system can assist individual users in electronically completing the form." and Fig. 1 teaches completing the compliance document based on the user data collected as historical data).
Regarding claim 3, Mukherjee, in view of Wang, discloses the method of claim 1. Mukherjee further discloses: providing at least one computer readable operation of the one or more computer readable operations to one or more reviewers, wherein the feedback is received (Mukherjee, paragraph 0077, lines 1-4 "Electronic document preparation system 111 request, in one embodiment, input from an expert to approve at least one of the acceptable candidate machine-executable functions." teaches expert review and feedback of computer readable operations).

Regarding claim 4, Mukherjee, in view of Wang, discloses the method of claim 3. Mukherjee further discloses: the at least one computer readable operation includes an undefined operation based on a text segment that is unknown to the AI model based on the seed data (Mukherjee, paragraph 0109, lines 2-4 "an acceptable candidate machine-executable function for data fields of the new and/or updated form that needed to be learned" teaches an undefined operation based on a new/updated form as a text segment that is unknown to the AI model), the feedback includes a suggested computer readable operation associated with the text segment (Mukherjee, paragraph 0107, lines 15-16 "If the candidate machine-executable function is approved by the expert or other personnel," teaches feedback of a suggested computer readable operation associated with the text segment, as approval of an operation can be construed as a suggestion of that operation), adjusting the seed data includes: adding a sample text input corresponding to the text segment (Mukherjee, paragraph 0072, lines 10-11 "identify possible dependencies by receiving data from an expert" and paragraph 0071, lines 3-5 "These dependencies can include one or more data values from other data fields of the new and/or updated form" teaches receiving sample text inputs from experts), adding a sample computer readable operation associated with the added sample text input and corresponding to the
suggested computer readable operation (Mukherjee, paragraph 0073, lines 2-4 "generates, for each data field to be learned, one or more candidate machine-executable functions based on the one or more dependencies" teaches generating candidate machine-executable functions as adding sample computer readable operations, based on the one or more dependencies as the added sample text input and corresponding to the suggested computer readable operation).

Regarding claim 6, Mukherjee, in view of Wang, discloses the method of claim 1. Mukherjee further discloses: parsing the text is based on one or more of: punctuation, wherein each text segment is a separate sentence (Mukherjee, paragraph 0123, lines 3-4 "plurality of separate sentences"), one or more line breaks, wherein each text segment is a line of text (Mukherjee, paragraph 0080, lines 7-8 "prepare one or more lines of the form").

Regarding claim 7, Mukherjee, in view of Wang, discloses the method of claim 1. Mukherjee further discloses: obtaining a user indication of a portion of the text in the compliance document, wherein parsing the text is based on the user indication (Mukherjee, paragraph 0116, lines 2-5 "a user ... can input an indication of which data fields of the new and/or updated form should be learned by machine learning module").

Regarding claim 8, Mukherjee, in view of Wang, discloses the method of claim 1. Wang further discloses: the AI model includes a Generative Pre-Trained Transformer 3 (GPT-3) model, wherein the GPT-3 model includes a davinci model (Wang, page 6, Section 3.2, paragraph 1, lines 1-2 "For GPT-3 labeling API, we select the largest version Davinci"). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee to include a Generative Pre-Trained Transformer 3 (GPT-3) model, wherein the GPT-3 model includes a davinci model based on the teachings of Wang.
The motivation for doing so would have been to obtain "better performance with limited … budget" (Wang, abstract, lines 19-20).

Claims 11-14 are found to be substantially similar to claims 1-4, and thus are rejected on the same basis as claims 1-4. Claims 16-18 are found to be substantially similar to claims 6-8, and thus are rejected on the same basis as claims 6-8.

Regarding claim 21, Mukherjee, in view of Wang, discloses the method of claim 1. Mukherjee further discloses: providing a service interface accessible by a user via a web browser or a client portal, wherein obtaining the compliance document includes electronically receiving the compliance document from the user via the service interface (Mukherjee, ¶[0110] and ¶[0079] teaches interface module 112 as a service interface accessible by users, where obtaining the compliance document includes electronically receiving the compliance document from the user via the service interface), electronically transmitting the completed compliance document to the user via the service interface (Mukherjee, Fig. 1 and ¶[0111] and ¶[0113] teaches electronically transmitting the completed compliance document to the user via the service interface).

Regarding claim 22, Mukherjee, in view of Wang, discloses the method of claim 1. Wang further discloses: hosting the AI model, wherein the AI model is pretrained using a general corpus (Wang, page 3, right column, section GPT-3 Labeling, lines 1-3 “GPT-3 (Brown et al., 2020) is a large-scale pretrained language model, and we use the largest model, Davinci, from OpenAI to label data” and page 6, Section 3.2, paragraph 1, lines 1-3 “For GPT-3 labeling API, we select the largest version Davinci. Our in-house NLG model is initialized by PEGASUSlarge” teaches hosting the AI model, wherein the AI model is pretrained using a general corpus).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee to include hosting the AI model, wherein the AI model is pretrained using a general corpus, based on the teachings of Wang. The motivation for doing so would have been to obtain "better performance with limited … budget" (Wang, abstract, lines 19-20). Claims 23-24 are found to be substantially similar to claims 21-22, and thus are rejected on the same basis as claims 21-22.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee et al. (Pub. No.: US 2018/0032497 A1), hereafter Mukherjee, in view of Wang et al. ("Want To Reduce Labeling Cost? GPT-3 Can Help"), hereafter Wang, in further view of Begun et al. (Pub. No.: US 2021/0081613 A1), hereafter Begun.

Regarding claim 5, Mukherjee, in view of Wang, discloses the method of claim 4. Mukherjee further discloses: providing, to the AI model, the one or more text segments including the text segment that is previously unknown to the AI model based on the seed data (Mukherjee, Fig. 1, system 111 teaches the AI model in module 113, and paragraph 0102, lines 7-11 "training set data 122 includes copies of the form that have a data entry in the data field that corresponds to the data field of the new and/or updated form currently being analyzed and learned by the machine learning module 113." teaches providing, to the AI model, the one or more text segments including the text segment that is previously unknown to the AI model based on the seed data), providing the … seed data to the AI model (Mukherjee, paragraph 0069, lines 8-9 “The training set data can include historical data” teaches historical training data for a model as seed data for an AI model), obtaining, from the AI model, one or more updated computer readable operations that are generated by the AI model based on...
seed data (Mukherjee, paragraph 0070, lines 5-8 "utilizes machine learning in combination with the training set data to learn machine-executable functions that determine data entries for the data fields of the new and/or updated form." teaches obtaining updated operations generated by machine learning models based on seed data in the training set), the one or more updated computer readable operations includes a new computer readable operation associated with the text segment previously unknown to the AI model based on the seed data, wherein: the new computer readable operation is generated by the AI model from the previously unknown text segment using the ... seed data (Mukherjee, Fig. 1 teaches an AI model in machine learning module 113 that learns new computer readable operations associated with previously unknown text segments from new form data 119 using historical data as seed data), the new computer readable operation differs from the undefined operation (Mukherjee, paragraph 0008, lines 6-8 "to determine, generate and update machine-executable functions associated with a document preparation system" teaches updated operations differing from generated undefined operations), …one or more updated computer readable operations for completing the compliance document (Mukherjee, paragraph 0066, lines 8-10 "machine-executable functions to be executed by a computing processor in the context of an electronic document preparation system." teaches one or more computer readable operations for completing the compliance document). Mukherjee teaches providing the … seed data to the AI model, and the new computer readable operation is generated by the AI model from the previously unknown text segment using the ... seed data, but does not teach providing … adjusted…data to the AI model and the AI model…using the adjusted…data.
Mukherjee teaches one or more updated computer readable operations for completing the compliance document, but does not explicitly teach storing data during various stages of document processing. Mukherjee, in view of Wang, teaches using few-shot learning, but does not teach using few-shot learning with adjusted data. Begun teaches: providing … adjusted…data to the AI model and the AI model…using the adjusted…data (Begun, paragraph 0194, lines 1-5 “The dispatcher is a methodology for connecting user feedback on the combined output of a number of ML models and non-ML algorithms back to the particular learning models 120 that can learn from the feedback.” teaches providing adjusted data from user feedback to the AI model and the AI model using the adjusted data), few-shot learning using … adjusted … data (Begun, paragraph 0110, lines 4-6 "Few-shot structure learning takes care of creating a machine learning model relying on feedback provided by the user" teaches few-shot learning using adjusted data from user feedback), storing (Begun, Fig. 1 teaches Document store 110, which stores data during various stages of document processing). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee, in view of Wang, to include providing adjusted data to the AI model and the AI model using the adjusted data, based on the teachings of Begun. The motivation for doing so would have been to allow "models to improve from user feedback" (Begun, paragraph 0195, lines 1-2). It would also have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee, in view of Wang, to include few-shot learning using adjusted data, based on the teachings of Begun.
The motivation for doing so would have been to “improve machine learning models and … avoid requiring large numbers of review steps or corrections … in order to minimize the amount of user action required.” (Begun, paragraph 0037, lines 6-14). It would also have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mukherjee, in view of Wang, to include storing data, based on the teachings of Begun. The motivation for doing so would have been to enable more efficient and less expensive business data flows and enhance quality assurance, consistency, and reporting (Begun, paragraph 0042, lines 5-7). Claim 15 is found to be substantially similar to claim 5, and thus is rejected on the same basis as claim 5.

Response to Arguments

Applicant's arguments filed 10/08/2025 have been fully considered with regard to the §101 rejection, and they are found persuasive. The rejections have been withdrawn. Applicant's arguments filed 10/08/2025 have been fully considered with regard to the §103 rejection, but they are not persuasive. The applicant asserts on page 14 of the remarks that the amended claim 1 is not taught by the prior art, specifically such that “the training is completed instantaneously in response to obtaining the compliance document and the seed data”. The examiner respectfully disagrees; Mukherjee teaches training in response to the obtaining of the compliance document and the seed data in Fig.
1, paragraph 0068, last 7 lines “Electronic document preparation system 111 of the present disclosure advantageously utilizes machine learning in addition to training set data in order to quickly and efficiently learn machine-executable functions related to data fields of a form and incorporate those machine-executable functions into electronic document preparation system 111.”, and paragraph 0097, lines 6-9 "In one embodiment, third party computing environment 140 is configured to automatically transmit financial data to electronic document preparation system 111", which discloses a user computing environment 140 where document processing is performed in real time with the learning and obtaining of the document, performed quickly in order to retain user attention/interest. The examiner notes that “instantaneously” is understood to be equivalent to “in real time”, as per ¶[0052] of the specification. Furthermore, model training can never be considered truly instantaneous, regardless of the short time taken by few-shot learning and/or GPT-3 models. Claims substantially similar to claim 1 are rejected on the same basis as claim 1. Claims dependent on independent claims do not overcome the deficiencies of the rejected independent claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141 /HOPE C SHEFFIELD/Primary Examiner, Art Unit 2141
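For readers tracking the claim language through the rejection, the claim 1 pipeline that the office action maps onto Mukherjee and Wang (parse document text into segments, convert each segment into a computer-readable operation with a few-shot model, auto-complete the form, then adjust the seed data and retrain on reviewer feedback) can be sketched as below. Every class, function, and operation string here is hypothetical: this is a minimal illustration of the claimed flow, not the applicant's or either reference's implementation.

```python
# Hypothetical sketch of the claim 1 flow: parse -> few-shot convert ->
# auto-complete -> feedback -> adjust seed data -> retrain -> reconvert.
import re
from dataclasses import dataclass, field

@dataclass
class SeedData:
    # (sample text input, sample computer-readable operation) pairs
    examples: list = field(default_factory=list)

    def add(self, text, operation):
        self.examples.append((text, operation))

class FewShotModel:
    """Stand-in for a few-shot learner (e.g. a GPT-3-style labeler)."""
    def __init__(self):
        self.lookup = {}

    def train(self, seed: SeedData):
        # "Training" here is just conditioning on the seed examples,
        # which is why prompt-based few-shot setups are near-instant.
        self.lookup = {t.lower(): op for t, op in seed.examples}

    def convert(self, segment: str) -> str:
        return self.lookup.get(segment.lower(), "UNDEFINED_OPERATION")

def parse_segments(text: str) -> list:
    # Claim 6 parses on punctuation (sentences) or line breaks (lines).
    return [s.strip() for s in re.split(r"[.\n]", text) if s.strip()]

seed = SeedData()
seed.add("enter total wages", "SUM(field_1, field_2)")
model = FewShotModel()
model.train(seed)

doc = "Enter total wages. Enter filing status"
ops = [model.convert(seg) for seg in parse_segments(doc)]
print(ops)  # ['SUM(field_1, field_2)', 'UNDEFINED_OPERATION']

# Feedback loop: a reviewer supplies the missing operation, the seed
# data is adjusted, the model is retrained, and the segment reconverted.
seed.add("enter filing status", "SELECT(status_options)")
model.train(seed)
print(model.convert("Enter filing status"))  # SELECT(status_options)
```

The dispute over "instantaneously" centers on the `train` step: in a prompt-conditioning setup like this one there is no gradient-descent phase, but as the examiner notes, even that step has some nonzero duration.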

Prosecution Timeline

Sep 30, 2021
Application Filed
Nov 19, 2024
Non-Final Rejection — §103, §112
Feb 07, 2025
Response Filed
Feb 26, 2025
Final Rejection — §103, §112
Apr 23, 2025
Examiner Interview Summary
Apr 23, 2025
Applicant Interview (Telephonic)
Apr 24, 2025
Response after Non-Final Action
Jun 03, 2025
Request for Continued Examination
Jun 08, 2025
Response after Non-Final Action
Jul 01, 2025
Non-Final Rejection — §103, §112
Oct 01, 2025
Applicant Interview (Telephonic)
Oct 01, 2025
Examiner Interview Summary
Oct 08, 2025
Response Filed
Dec 05, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585969
GENERATING CONFIDENCE SCORES FOR MACHINE LEARNING MODEL PREDICTIONS
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 38%
With Interview: 99% (+66.7%)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
