Prosecution Insights
Last updated: April 19, 2026
Application No. 19/169,288

METHOD AND SYSTEM FOR INTENT-BASED ACTION RECOMMENDATIONS AND/OR FULFILLMENT IN A MESSAGING PLATFORM

Non-Final OA: §103, §DP
Filed
Apr 03, 2025
Examiner
OBERLY, VAN HONG
Art Unit
2166
Tech Center
2100 — Computer Architecture & Software
Assignee
Orangedot Inc.
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 75%, above average (456 granted / 608 resolved; +20.0% vs TC avg)
Interview Lift: strong, +15.5% for resolved cases with interview
Typical Timeline: 3y 2m average prosecution (11 applications currently pending)
Career History: 619 total applications across all art units
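The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check on the numbers shown on this page (reading the "+20.0% vs TC avg" delta as a plain percentage-point difference, which this page does not explicitly confirm):

```python
# Verify the career allow rate and the implied Tech Center average
# from the counts displayed on this page.
granted = 456
resolved = 608

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 75.0%

# "+20.0% vs TC avg" read as a percentage-point difference:
implied_tc_avg = allow_rate - 0.200
print(f"Implied TC average: {implied_tc_avg:.1%}")   # 55.0%
```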

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 608 resolved cases
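If each "vs TC avg" delta above is a percentage-point difference between the examiner's statute-specific rate and the Tech Center estimate (an assumption; the page does not state its methodology), the implied TC baselines can be recovered directly from the figures shown:

```python
# Recover the implied Tech Center baseline behind each delta,
# assuming delta = examiner_rate - tc_average (percentage points).
# Rates and deltas are copied from the table above.
stats = {
    "101": (7.9, -32.1),
    "103": (58.6, +18.6),
    "102": (21.9, -18.1),
    "112": (3.7, -36.3),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"Sec. {statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

With the figures shown, every implied baseline comes out to 40.0%, suggesting the dashboard measures each statute against a common reference value rather than per-statute averages.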

Office Action

§103, §DP
DETAILED ACTION

The Action is responsive to Applicant’s Application filed April 3, 2025. Please note claims 1-20 are pending.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.

Drawings

The drawings, filed April 3, 2025, are considered in compliance with 37 CFR 1.81 and accepted.

Information Disclosure Statement

The information disclosure statement filed May 20, 2025 fails to comply with 37 CFR 1.98(a)(1), which requires the following: (1) a list of all patents, publications, applications, or other information submitted for consideration by the Office; (2) U.S. patents and U.S. patent application publications listed in a section separately from citations of other documents; (3) the application number of the application in which the information disclosure statement is being submitted on each page of the list; (4) a column that provides a blank space next to each document to be considered, for the examiner’s initials; and (5) a heading that clearly indicates that the list is an information disclosure statement. The information disclosure statement has been placed in the application file, but the information referred to therein has not been considered because it is empty.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s).
See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, 6, 11, 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 10, 12, 18, 19 of U.S. Patent No. 12,292,912. Although the claims at issue are not identical, they are not patentably distinct from each other because:

Instant Application 19/169,288 — Claim 1:
A method comprising: receiving a set of messages of a conversation thread, the set of messages comprising: a first message posted to the conversation thread by a first user; and a second message posted to the conversation thread by a second user, wherein at least one of the first message and the second message comprises a query message; receiving an action selection from at least one of the first user and the second user; at a dialog manager comprising a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM); at the LLM: receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt; generating an edited message upon receiving edits to the response message; posting the edited message to the conversation thread; and refining the first set of trained models with the response message and the edited message.

U.S. Patent No. 12,292,912 — Claim 1:
A method comprising: receiving a set of messages of a conversation thread, the set of messages comprising: at least one message posted to the conversation thread by the user; and a query message, wherein the query message was posted to the conversation thread by a second user; receiving an action selection from a user; at a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM); at the LLM: receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt; and providing the response message to the user; after providing the response message to the user, receiving an edited message from the user; and providing the edited message, and not the response message, to the second user.

Instant Application — Claim 2:
The method of claim 1, further comprising, before receiving the action selection: at the first set of trained models, determining a plurality of action selection options, the plurality of action selection options comprising the action selection; and providing the plurality of action selection options to the user.

U.S. Patent No. 12,292,912 — Claim 3:
The method of Claim 1, further comprising, before receiving the action selection from the user: at the first set of trained models, determining a plurality of action selection options, the plurality of action selection options comprising the action selection; and providing the plurality of action selection options to the user.

Instant Application — Claim 6:
The method of Claim 1, further comprising, after posting the edited message: receiving a second query message, the second query message different from the query message; at the first set of trained models, based on the second query message, determining a second action prompt for the LLM, the second action prompt different from the action prompt; at the LLM: receiving the second action prompt and the second query message; and in response to receiving the second action prompt and the second query message, generating a second response message, the second response message responsive to the second query message and to the action prompt, wherein generating the second response message comprises: at a classifier model of the first set of trained models, based on at least the second query message, selecting a selected message from a corpus of predetermined messages, wherein the second response message is determined based on the selected message.

U.S. Patent No. 12,292,912 — Claim 12:
The method of Claim 11, further comprising, after providing the response message to the user: receiving a second query message from the user, the second query message different from the query message; at the first set of trained models, based on the second query message and the action selection, determining a second action prompt for the LLM, the second action prompt different from the action prompt; at the LLM: receiving the second action prompt and the second query message; and in response to receiving the second action prompt and the second query message, generating a second response message, the second response message responsive to the second query message and to the action prompt; and providing the second response message to the user.

Instant Application — Claim 11:
A method comprising: receiving a set of messages of a conversation thread, the set of messages comprising: a first message posted to the conversation thread by a first user; and a second message posted to the conversation thread by a second user, wherein at least one of the first message and the second message comprises a query message; receiving an action selection from at least one of the first user and the second user; at a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM); at the LLM: receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt, wherein generating the response message comprises: at a classifier model of the first set of trained models, based on at least the query message, selecting a selected message from a corpus of predetermined messages, wherein the response message is determined based on the selected message; and posting the response message to the conversation thread.

U.S. Patent No. 12,292,912 — Claim 1:
A method comprising: receiving a set of messages of a conversation thread, the set of messages comprising: at least one message posted to the conversation thread by the user; and a query message, wherein the query message was posted to the conversation thread by a second user; receiving an action selection from a user; at a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM); at the LLM: receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt; and providing the response message to the user; after providing the response message to the user, receiving an edited message from the user; and providing the edited message, and not the response message, to the second user.

Instant Application — Claim 17:
The method of claim 11, wherein determining the action prompt is performed at a generative model of the first set of trained models.

U.S. Patent No. 12,292,912 — Claim 10:
The method of Claim 9, wherein determining the action prompt is performed at a generative model of the first set of trained models.

Instant Application — Claim 18:
The method of claim 11, wherein the set of messages further comprises at least one message posted to the conversation thread by a non-human agent.

U.S. Patent No. 12,292,912 — Claim 18:
The method of Claim 1, wherein the set of messages further comprises at least one message posted to the conversation thread by a non-human agent.

Instant Application — Claim 19:
The method of claim 11, wherein the response message is posted to the conversation thread in near-real time in response to receiving the action selection and the set of messages.

U.S. Patent No. 12,292,912 — Claim 19:
The method of Claim 1, wherein the response message is provided to the user in near-real time in response to receiving the action selection and the set of messages.

Instant Application — Claim 20:
A system of a digital mental health platform, the system comprising: a messaging platform accessible through a set of user interfaces of a set of user devices; a set of models stored in non-transitory computer-readable media and structured to perform a set of actions, the set of models comprising: a dialog manager comprising a first set of trained models and a large language model (LLM), wherein the set of actions comprises: at the dialog manager, receiving a set of messages of a conversation thread from the messaging platform, the set of messages comprising: a first message posted to the conversation thread by a first user; and a second message posted to the conversation thread by a second user, wherein at least one of the first message and the second message comprises a query message; from the messaging platform, receiving an action selection from at least one of the first user and the second user; at a dialog manager, determining an action prompt for the large language model (LLM) based on the action selection; at the LLM, receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt; generating an edited message upon receiving edits to the response message within the messaging platform; and posting the edited message to the conversation thread.

U.S. Patent No. 12,292,912 — Claim 1:
A method comprising: receiving a set of messages of a conversation thread, the set of messages comprising: at least one message posted to the conversation thread by the user; and a query message, wherein the query message was posted to the conversation thread by a second user; receiving an action selection from a user; at a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM); at the LLM: receiving the action prompt and the set of messages; and in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt; and providing the response message to the user; after providing the response message to the user, receiving an edited message from the user; and providing the edited message, and not the response message, to the second user.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 5-6, 11, 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mielke et al. (US Pub. No. 2023/0135179) further in view of Shevchenko et al. (US Pub. No. 2023/0325590).

Regarding claim 1, Mielke teaches a method comprising:
‘receiving a set of messages of a conversation thread, the set of messages comprising: a first message posted to the conversation thread by a first user’ as picking up conversation threads of a user and managing conversations between the user and an assistant system (¶0088, 69)
‘wherein at least one of the first message and the second message comprises a query message’ as receiving at the chatbot queries from a user (¶0134)
‘receiving an action selection from at least one of the first user and the second user’ as an action logger to log actions received from a user (¶0054)
‘at a dialog manager comprising a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM)’ as at a set of large pre-trained language models, and determining responses to be input prompts of the language model (¶0262)
‘at the LLM: receiving the action prompt and the set of messages’ as receiving commands from an action selector along with dialog information from the conversation (¶0108-109)
‘in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt’ as the response generation component using the language models to generate outputs based on the conversation and the actions (¶0112-115)
‘posting the edited message to the conversation thread’ as rendering outputs generated by the assistant system to the user (¶0043)

Mielke fails to explicitly teach:
‘a second message posted to the conversation thread by a second user’
‘generating an edited message upon receiving edits to the response message’
‘refining the first set of trained models with the response message and the edited message’

Shevchenko teaches:
‘a second message posted to the conversation thread by a second user’ as a second message from a second user in a conversation thread (¶0218)
‘generating an edited message upon receiving edits to the response message’ as editing user messages based on feedback (¶0159)
‘refining the first set of trained models with the response message and the edited message’ as re-training or tuning models based on user interactions, including feedback to messages (¶0167)

It would have been obvious to one of ordinary skill in the art at the time that the present invention was effectively filed to modify the teachings of the cited references because Shevchenko’s teachings would have allowed Mielke’s system to improve efficiency and effectiveness of communications (¶0003).

Regarding claim 5, Shevchenko teaches further comprising:
‘receiving, at an action server comprising a second set of trained models, a context of the conversation thread’ as receiving contextual information of a conversation (¶0146)
‘instructions comprising a persona prompt and a language in which responses for the conversation thread should be generated’ as AI predicting perception and language and style based on contextual information (¶0146)

Regarding claim 6, Mielke and Shevchenko teach further comprising, after posting the edited message:
‘receiving a second query message, the second query message different from the query message’ as a plurality of queries (Mielke ¶0134)
‘at the first set of trained models, based on the second query message, determining a second action prompt for the LLM, the second action prompt different from the action prompt’ as at a set of large pre-trained language models, and determining responses to be input prompts of the language model (Mielke ¶0262)
‘at the LLM: receiving the second action prompt and the second query message’ as receiving commands from an action selector along with dialog information from the conversation (Mielke ¶0108-109)
‘in response to receiving the second action prompt and the second query message, generating a second response message, the second response message responsive to the second query message and to the action prompt’ as the response generation component using the language models to generate outputs based on the conversation and the actions (Mielke ¶0112-115)
‘wherein generating the second response message comprises: at a classifier model of the first set of trained models, based on at least the second query message, selecting a selected message from a corpus of predetermined messages, wherein the second response message is determined based on the selected message’ as presenting templates or completion options for a response (Shevchenko ¶0167-168)

Regarding claim 11, Mielke teaches a method comprising:
‘receiving a set of messages of a conversation thread, the set of messages comprising: a first message posted to the conversation thread by a first user’ as picking up conversation threads of a user and managing conversations between the user and an assistant system (¶0088, 69)
‘wherein at least one of the first message and the second message comprises a query message’ as receiving at the chatbot queries from a user (¶0134)
‘receiving an action selection from at least one of the first user and the second user’ as an action logger to log actions received from a user (¶0054)
‘at a first set of trained models, based on the action selection, determining an action prompt for a large language model (LLM)’ as at a set of large pre-trained language models, and determining responses to be input prompts of the language model (¶0262)
‘at the LLM: receiving the action prompt and the set of messages’ as receiving commands from an action selector along with dialog information from the conversation (¶0108-109)
‘in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt’ as the response generation component using the language models to generate outputs based on the conversation and the actions (¶0112-115)
‘posting the response message to the conversation thread’ as rendering outputs generated by the assistant system to the user (¶0043)

Mielke fails to explicitly teach:
‘a second message posted to the conversation thread by a second user’
‘wherein generating the response message comprises: at a classifier model of the first set of trained models, based on at least the query message, selecting a selected message from a corpus of predetermined messages, wherein the response message is determined based on the selected message’

Shevchenko teaches:
‘a second message posted to the conversation thread by a second user’ as a second message from a second user in a conversation thread (¶0218)
‘wherein generating the second response message comprises: at a classifier model of the first set of trained models, based on at least the second query message, selecting a selected message from a corpus of predetermined messages, wherein the second response message is determined based on the selected message’ as presenting templates or completion options for a response (Shevchenko ¶0167-168)

It would have been obvious to one of ordinary skill in the art at the time that the present invention was effectively filed to modify the teachings of the cited references because Shevchenko’s teachings would have allowed Mielke’s system to improve efficiency and effectiveness of communications (¶0003).

Regarding claim 16, Shevchenko teaches further comprising:
‘receiving, at an action server comprising a second set of trained models, a context of the conversation thread’ as receiving contextual information of a conversation (¶0146)
‘instructions comprising a persona prompt according to which generated messages should be posted to the conversation thread’ as AI predicting perception and language and style based on contextual information (¶0146)

Regarding claim 17, Mielke teaches ‘wherein determining the action prompt is performed at a generative model of the first set of trained models’ (¶0262).

Regarding claim 18, Mielke teaches ‘wherein the set of messages further comprises at least one message posted to the conversation thread by a non-human agent’ (¶0134).

Regarding claim 19, Shevchenko teaches ‘wherein the response message is posted to the conversation thread in near-real time in response to receiving the action selection and the set of messages’ as updating conversation threads in real time (¶0209).

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mielke et al. (US Pub. No. 2023/0135179) and Shevchenko et al. (US Pub. No. 2023/0325590) further in view of Sengupta et al. (US Pub. No. 2023/0063131).

Regarding claim 2, Mielke fails to explicitly teach further comprising, before receiving the action selection from the user:
‘at the first set of trained models, determining a plurality of action selection options, the plurality of action selection options comprising the action selection’
‘providing the plurality of action selection options to the user’

Sengupta teaches:
‘at the first set of trained models, determining a plurality of action selection options, the plurality of action selection options comprising the action selection’ as a group of actions that may be selected by a virtual agent (¶0060)
‘providing the plurality of action selection options to the user’ as an action space to present actions to a user (¶0060)

It would have been obvious to one of ordinary skill in the art at the time that the present invention was effectively filed to modify the teachings of the cited references because Sengupta’s teachings would have allowed the combination of Mielke and Shevchenko to reduce operational costs and improve flexibility (¶0002).

Claim(s) 7-10, 12-13, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mielke et al. (US Pub. No. 2023/0135179) and Shevchenko et al. (US Pub. No. 2023/0325590) further in view of Rooks (US Pub. No. 2004/0010420).

Regarding claim 7, Mielke fails to explicitly teach ‘wherein the action selection pertains to initiating a quiz.’ Rooks teaches ‘wherein the action selection pertains to initiating a quiz’ (¶0037). It would have been obvious to one of ordinary skill in the art at the time that the present invention was effectively filed to modify the teachings of the cited references because Rooks’ teachings would have allowed the combination of Mielke and Shevchenko to increase flexibility in various domains (¶0005).

Regarding claim 8, Rooks teaches ‘wherein the conversation thread is provided within a messaging platform of a digital mental health platform’ as communications within a health management program (¶0028).

Regarding claim 9, Rooks teaches ‘wherein the first user is a therapy-providing entity’ as a therapist (¶0017, 20).

Regarding claim 10, Rooks teaches ‘further comprising providing at least one of the first user and the second user an option to schedule a therapy session, through the dialog manager’ as scheduling sessions (¶0024).

Regarding claim 12, Rooks teaches ‘wherein the conversation thread is provided within a messaging platform of a digital mental health platform’ as communications within a health management program (¶0028).

Regarding claim 13, Rooks teaches ‘wherein the first user is a therapy-providing entity’ as a therapist (¶0017, 20).

Regarding claim 20, Mielke, Shevchenko and Rooks teach a system of a digital mental health platform, the system comprising:
‘a messaging platform accessible through a set of user interfaces of a set of user devices’ as a messaging application (Mielke ¶0041)
‘a set of models stored in non-transitory computer-readable media and structured to perform a set of actions’ as at a set of large pre-trained language models (Mielke ¶0262)
‘the set of models comprising: a dialog manager comprising a first set of trained models and a large language model (LLM)’ as at a set of large pre-trained language models, and determining responses to be input prompts of the language model (Mielke ¶0262)
‘wherein the set of actions comprises: at the dialog manager, receiving a set of messages of a conversation thread from the messaging platform’ as picking up conversation threads of a user and managing conversations between the user and an assistant system (Mielke ¶0088, 69)
‘the set of messages comprising: a first message posted to the conversation thread by a first user’ (Mielke ¶0088, 69)
‘a second message posted to the conversation thread by a second user, wherein at least one of the first message and the second message comprises a query message’ as a second message from a second user in a conversation thread (Shevchenko ¶0218) and receiving at the chatbot queries from a user (Mielke ¶0134)
‘from the messaging platform, receiving an action selection from at least one of the first user and the second user’ as an action logger to log actions received from a user (Mielke ¶0054)
‘at a dialog manager, determining an action prompt for the large language model (LLM) based on the action selection’ as at a set of large pre-trained language models, and determining responses to be input prompts of the language model (Mielke ¶0262)
‘at the LLM, receiving the action prompt and the set of messages’ as receiving commands from an action selector along with dialog information from the conversation (Mielke ¶0108-109)
‘in response to receiving the action prompt and the set of messages, generating a response message, the response message responsive to the query message and to the action prompt’ as the response generation component using the language models to generate outputs based on the conversation and the actions (Mielke ¶0112-115)
‘generating an edited message upon receiving edits to the response message within the messaging platform’ as editing user messages based on feedback (Shevchenko ¶0159)
‘posting the edited message to the conversation thread’ as rendering outputs generated by the assistant system to the user (Mielke ¶0043)

Examiner’s Note

Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 CFR 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN OBERLY whose telephone number is (571) 272-7025. The examiner can normally be reached Monday - Friday, 7:30am-4pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah, can be reached at (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN H OBERLY/
Primary Examiner, Art Unit 2166
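For orientation, the claim limitations mapped in the §103 rejection describe a pipeline: the dialog manager receives thread messages and an action selection, builds an action prompt, the LLM generates a response to the query and prompt, and the (possibly user-edited) message is posted back to the thread. The sketch below illustrates that flow only; every name in it (Message, determine_action_prompt, etc.) is hypothetical and does not come from the application, Mielke, or Shevchenko.

```python
# Illustrative sketch of the claimed intent-based action flow.
# All names and templates are invented for this example.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Message:
    user: str
    text: str

def determine_action_prompt(action_selection: str) -> str:
    # Dialog manager: map the user's action selection to an LLM prompt.
    templates = {
        "summarize": "Summarize the conversation and answer the query.",
        "schedule": "Propose a meeting time that answers the query.",
    }
    return templates.get(action_selection, "Answer the query.")

def generate_response(action_prompt: str, thread: List[Message]) -> str:
    # Stand-in for the LLM call: pairs the action prompt with the query message.
    query = next(m.text for m in thread if m.text.endswith("?"))
    return f"[{action_prompt}] Responding to: {query}"

def post_with_edits(response: str, edits: Optional[str]) -> str:
    # If the user edits the generated response, post the edited message instead.
    return edits if edits is not None else response

thread = [
    Message("user_a", "Can we meet Friday?"),
    Message("user_b", "Let me check."),
]
prompt = determine_action_prompt("schedule")
draft = generate_response(prompt, thread)
posted = post_with_edits(draft, edits="How about Friday at 3pm?")
```

Here the user's edit replaces the generated draft, matching the claim's "generating an edited message ... posting the edited message" steps.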

Prosecution Timeline

Apr 03, 2025
Application Filed
Feb 13, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602404
DATA LABELING WORK SUPPORT APPARATUS, DATA LABELING WORK SUPPORT METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12602418
INTELLIGENT QUERY DECOMPOSITION, SPECIALIZED MODEL ROUTING, AND HIERARCHICAL AGGREGATION WITH CONFLICT RESOLUTION
2y 5m to grant Granted Apr 14, 2026
Patent 12596526
A COMPUTER-IMPLEMENTED METHOD AND A DATA PROCESSING HARDWARE FOR PROCESSING SENSOR DATA POINTS
2y 5m to grant Granted Apr 07, 2026
Patent 12591628
ASSISTANT SYSTEM, ASSISTANT METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12572572
Information Retrieval Using an Augmented Query Produced by Graph Convolution
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
90%
With Interview (+15.5%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 608 resolved cases by this examiner. Grant probability derived from career allow rate.
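The headline figures follow directly from the career counts shown above. A quick arithmetic check, assuming (as the note states) that grant probability is the career allow rate and that the with-interview figure adds the 15.5-point lift:

```python
# Sanity-check the dashboard's headline figures from its raw counts.
granted, resolved = 456, 608            # career grants / resolved cases
allow_rate = granted / resolved          # 456/608 = 0.75, the 75% grant probability
interview_lift = 0.155                   # +15.5 points observed with interviews
with_interview = allow_rate + interview_lift  # ~0.905, displayed as 90%
```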
