Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,404

OMNI-CHANNEL ARTIFICIAL INTELLIGENCE (AI) CHATBOT FOR MEDICAL DIAGNOSIS AND MEDICAL TREATMENT

Non-Final OA §101 §103
Filed
Jun 12, 2023
Examiner
EVANS, TRISTAN ISAAC
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
International Business Machines Corporation
OA Round
3 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 36% (grants only 36% of cases; 17 granted / 47 resolved; -15.8% vs TC avg)
Interview Lift: +54.2% (strong lift in resolved cases with interview vs. without)
Avg Prosecution: 3y 8m typical timeline (27 currently pending)
Career History: 74 total applications across all art units

Statute-Specific Performance

§101: 41.7% (+1.7% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 47 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

In the amended claims received 09 October 2025, the following occurred: claims 1, 6, 8, 13, 15, and 20 were amended; claims 5, 12, and 19 were canceled; and claims 21-23 were newly added. Claims 1-4, 6-11, 13-18, and 20-23 are pending and are rejected herein.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 9, 2025 has been entered.

Priority

This application does not claim priority to another application and has a filing date of 12 June 2023.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-11, 13-18, and 20-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: The Statutory Categories

Claims 1, 8, and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a computer-implemented method, a computer program product, and a computer system comprising: one or more processors, one or more computer-readable memories, and one or more computer-readable, tangible storage devices with stored program instructions.
All are within a statutory class for subject matter eligibility purposes.

Step 2A Prong One: The Abstract Idea

The limitations of (claim 1 being representative):

[…] training each pair of a plurality of pairs of Artificial Intelligence (AI) […entities…] and data analyzer for a different disease, wherein the training includes calibrating weights for each AI […entity…] with forward propagation and backward propagation; selecting a particular pair of AI […entities…] and data analyzer based on a particular disease; and under the control of the AI […entities…] and the data analyzer retrieving preference data for a participant, wherein the preference data indicates an order of a plurality of channels of communication to try specified by the participant with a corresponding period of time for contact; identifying a first channel of the plurality of channels of communication and a corresponding first period of time for contact using the preference data; initiating a conversation by attempting to contact the participant using the first channel and during the period of time; and in response to the participant accepting the contact, converting survey questions to natural language comprising a preferred native language of the participant; interacting with the participant using the first channel to receive survey answers to the survey questions in the natural language comprising the preferred native language of the participant; analyzing the survey answers; outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication;…

as drafted is a process that, under the broadest reasonable interpretation, covers a certain method of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for the recitation of generic computer components.
That is, other than reciting (claim 8) a computer program product comprising a computer readable storage medium having program instructions executable by a processor, and (claim 15) a computer system comprising: one or more processors, one or more computer readable memories, and one or more computer readable, tangible storage devices, the claimed invention amounts to managing personal behavior or interaction between people (i.e., a person following a series of rules or steps). For example, but for the various general-purpose computer elements, the claims encompass receiving answers to questions from a patient and determining a diagnosis and treatment. The Examiner notes that “certain methods of organizing human activity” includes a person's interaction with a computer (MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

The claim further recites “training each pair of a plurality of pairs of Artificial Intelligence (AI) chatbot and data analyzer for a different disease, wherein the training includes calibrating weights for each AI chatbot; selecting a particular pair of AI chatbot and data analyzer based on the disease; and under control of the AI chatbot and the data analyzer, …”. When given its broadest reasonable interpretation in light of the disclosure, the training of a machine learning model represents the creation of mathematical interrelationships between data. As such, the training of the machine learning model represents a mathematical concept that is interpreted to be part of the identified abstract idea, supra. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes.
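As an editorial illustration (not part of the record), the "plurality of pairs" limitation quoted above reduces to a per-disease registry: one (chatbot, analyzer) pair is trained and stored per disease, and the pair is looked up by disease at selection time. The class names, disease labels, and `select_pair` function below are hypothetical, not drawn from the application.

```python
# Hypothetical sketch of claim 1's per-disease pairing: each disease maps
# to its own trained (AI chatbot, data analyzer) pair, and selection is a
# dictionary lookup keyed by the particular disease.

class Chatbot:
    """Stand-in for a per-disease AI chatbot."""
    def __init__(self, disease):
        self.disease = disease

class DataAnalyzer:
    """Stand-in for the data analyzer paired with that chatbot."""
    def __init__(self, disease):
        self.disease = disease

# One (chatbot, analyzer) pair per disease; disease names are illustrative.
PAIRS = {
    disease: (Chatbot(disease), DataAnalyzer(disease))
    for disease in ("diabetes", "hypertension", "asthma")
}

def select_pair(disease):
    """Select the particular pair of AI chatbot and data analyzer
    based on a particular disease."""
    return PAIRS[disease]

chatbot, analyzer = select_pair("asthma")
```

The point of the sketch is only that "selecting a particular pair ... based on a particular disease" is, structurally, a keyed lookup over pre-trained pairs.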
Step 2A Prong Two: The Practical Application

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of (claim 8) a computer program product comprising a computer readable storage medium having program instructions executable by a processor, and (claim 15) a computer system comprising: one or more processors, one or more computer readable memories, and one or more computer readable, tangible storage devices that implements the abstract idea. These additional elements are not exclusively described by the applicant and are recited at a high level of generality (i.e., a generic general-purpose computer or components thereof) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The independent claim also recites “and sending the prescription medication to the participant.” MPEP 2106.05(f) indicates that a consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two, or recites significantly more than a judicial exception in Step 2B, is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. The identified additional element is no more than a mere recitation of the words “apply it” (or an equivalent) and/or instructions to implement an abstract idea or other exception on a computer, and therefore cannot provide a practical application.
Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim also recites an AI chatbot. The AI chatbot generally links the judicial exception to a particular technological environment. Additional elements that generally link the judicial exception to a particular technological environment or field of use cannot serve to integrate the exception into a practical application. See MPEP 2106.04(d)(1), Relevant Considerations for Evaluating Whether Additional Elements Integrate a Judicial Exception into a Practical Application, and MPEP 2106.05(h).

The claim further recites the additional element of using the trained machine learning model. This represents mere instructions to implement the abstract idea on a generic computer. Implementing an abstract idea using a generic computer or components thereof does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Alternatively or in addition, the implementation of the trained machine learning model merely confines the use of the abstract idea (i.e., the trained model) to a particular technological environment or field of use and thus fails to add an inventive concept to the claims.

Step 2B: Significantly More

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a general-purpose computer (and/or components thereof) to perform the noted steps amounts to no more than mere instructions to apply an exception using a generic computer component and cannot provide an inventive concept (“significantly more”).
The claims also recite (claims 1, 8, and 15) an artificial intelligence chatbot. The artificial intelligence chatbot generally links the judicial exception to a particular technological environment (i.e., field of use). Additional elements that generally link the judicial exception to a particular technological environment or field of use cannot serve to provide significantly more. See MPEP 2106.04(d)(1), Relevant Considerations for Evaluating Whether Additional Elements Integrate a Judicial Exception into a Practical Application, and MPEP 2106.05(h).

The independent claim also recites “and sending the prescription medication to the participant.” MPEP 2106.05(f) indicates that a consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two, or recites significantly more than a judicial exception in Step 2B, is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. The identified additional element is no more than a mere recitation of the words “apply it” (or an equivalent) and/or instructions to implement an abstract idea or other exception on a computer, and therefore cannot provide a practical application or significantly more. Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using the trained machine learning model was found to represent mere instructions to implement the abstract idea on a generic computer and/or to confine the use of the abstract idea (i.e., the trained model) to a particular technological environment or field of use.
This has been re-evaluated under the “significantly more” analysis and determined to be insufficient to provide significantly more. MPEP 2106.05(I) indicates that mere instructions to implement the abstract idea on a generic computer and/or confining the use of the abstract idea to a particular technological environment or field of use cannot provide significantly more.

Dependent Claims and Dependent Additional Elements

Claims 2-4, 6-11, 13-14, 16-18, and 20-23 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims are subject matter eligible, even when considered individually or as an ordered combination.

Claims 2, 9, and 16 merely describe training via medical knowledge and clinical staff interactions with other participants. Claims 3, 10, and 17 merely describe the plurality of channel modalities. Claims 4, 11, and 18 merely describe attempting to contact the participant using one or more other channels of the plurality of channels of communication for a predetermined number of attempts. Claims 6, 13, and 20 merely describe converting a survey answer of the survey answers received from the participant into a survey answer embedding using a Natural Language Processing (NLP) model, comparing the survey answer embedding with embeddings of a plurality of multiple choice answers associated with the survey question, and selecting a multiple choice answer from the plurality of multiple choice answers having an embedding that is closest to the survey answer embedding.
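For illustration only, the embedding-match step recited in claims 6, 13, and 20 (embed the free-text answer, compare against the embeddings of the multiple-choice answers, pick the closest) can be sketched as follows. The `embed()` function here is a toy bag-of-words stand-in for a real trained NLP embedding model, and all names are hypothetical, not from the application.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an NLP embedding: a bag-of-words count vector.
    A real system would use a trained sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_answer(free_text, choices):
    """Select the multiple-choice answer whose embedding is closest
    to the embedding of the participant's free-text survey answer."""
    answer_vec = embed(free_text)
    return max(choices, key=lambda c: cosine(answer_vec, embed(c)))

choices = ["no pain", "mild pain", "severe pain"]
match_answer("the pain is mild", choices)  # -> "mild pain"
```

The sketch shows why the examiner characterizes this step as a similarity comparison over vectors: the limitation is satisfied by any nearest-neighbor lookup in embedding space.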
Claims 7 and 14 merely describe that the analysis result is selected from a group comprising: the medical diagnosis and the medical treatment for the participant, an effectiveness of the medical treatment for the participant, a recommendation of other data to collect from the participant, transcription by automatic speech recognition (ASR), clinical information extraction, natural language processing (NLP), and acoustic analysis for cognitive decline and neurodegenerative disease assessments. Claims 21-23 merely describe that the AI chatbot is trained with conversion knowledge to convert between languages.

The dependent claims contain a variety of additional elements, including an AI chatbot. This additional element was analyzed and rejected as it was in the independent claims. The dependent claims recite a plurality of channels of communication including social media and web applications, which both generally link the judicial exception to a particular technological environment. Additional elements that generally link the judicial exception to a particular technological environment or field of use cannot serve to integrate the exception into a practical application or provide significantly more. See MPEP 2106.04(d)(1), Relevant Considerations for Evaluating Whether Additional Elements Integrate a Judicial Exception into a Practical Application, and MPEP 2106.05(h).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 7-9, 11, 14-16, 18, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over KR 10-24444460 B1 (hereafter Yeon) in view of US 11545141 B1 (hereafter Poddar), further in view of US 2023/0326577 A1 (hereafter Prince), and further in view of “Android based chatbot application using back propagation neural network to help the first treatment of children's disease” (hereafter Muklason).

Regarding Claim 1

Yeon teaches: A computer implemented method, comprising operations for: training each pair of a plurality of pairs of Artificial Intelligence (AI) chatbot and data analyzer for a different disease, [Yeon teaches at pg.
3 that, in order to achieve the above object, in the method for providing a chatbot service for artificial intelligence-based symptom and disease matching performed by a device according to an embodiment of the present invention, the method includes the steps of receiving natural language-based symptom input information from a user, extracting a symptom keyword based on the symptom input information, tokenizing the symptom keyword, standardizing the tokenized symptom keyword to extract a symptom standard keyword, matching a disease based on the symptom standard keyword, and providing the matched disease information to the user. Yeon teaches at pg. 3 that, in this case, the tokenizing will be performed using at least one of Soynlp, koNLpy, and a tokenizer. The tokenizer and the chatbot service taught by Yeon are interpreted as an Artificial Intelligence (AI) chatbot and data analyzer pair for a different disease.]

[…] selecting a particular pair of AI chatbot and data analyzer from the plurality of pairs of AI chatbot and data analyzer based on a particular disease; [Yeon teaches at pg. 3 that, at this time, in the method for providing a chatbot service for symptom and disease matching based on artificial intelligence according to an embodiment of the present invention, a weight corresponding to the symptom standard keyword for each symptom standard keyword can be output through the first machine learning model for disease matching. Yeon teaches at pg. 3 that, in this case, the matching will include matching the disease through a second machine learning model that outputs the disease and disease possibility that are matched with the symptom standard keyword and the weight as inputs. Collectively, this is selecting a particular pair of AI chatbot and data analyzer (interpreted here to be the first and second machine learning models) from the plurality of pairs of AI chatbot and data analyzer based on a particular disease.
The plurality of pairs of AI chatbots and data analyzers consist here of the first model selecting the second machine learning model corresponding to the keyword weight and symptom.]

[…] Yeon may not explicitly teach: wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation; and under control of the AI chatbot and data analyzer, retrieving preference data for a participant, wherein the preference data indicates an order of a plurality of channels of communication to try specified by the participant with a corresponding period of time for contact; identifying a first channel of the plurality of channels of communication and a corresponding first period of time for contact using the preference data; initiating a conversation by attempting to contact the participant using the first channel and during the period of time; and in response to the participant accepting the contact, converting survey questions to natural language comprising a preferred native language of the participant; interacting with the participant using the first channel to receive survey answers to the survey questions in the natural language comprising the preferred native language of the participant; analyzing the survey answers; outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication; and sending the prescription medication to the participant.

Poddar teaches: […] and under control of the AI chatbot and data analyzer, retrieving preference data for a participant, wherein the preference data indicates an order of a plurality of channels of communication to try specified by the participant with a corresponding period of time for contact; [Poddar teaches at col.
9, lines 22-29, that when the virtual conversation agent did not initiate the conversation by phone ('No'), the first part of the omni-channel orchestrated conversation process continues ahead to a step at which the virtual conversation agent sends a follow-up text message or email message (in this case, no call or voicemail, since a voicemail message was already left from the initial phone call to the human candidate) requesting a response from the human candidate within the particular time frame. Poddar teaches at col. 9, lines 29-34, that when the virtual conversation agent affirmatively did initiate the conversation by phone ('YES'), then the first part of the omni-channel orchestrated conversation process 100 proceeds to a step at which the virtual conversation agent leaves a voicemail message if the human candidate does not pick up the phone (at 150). Poddar teaches at col. 9, lines 34-41, that the omni-channel orchestrated conversation process 100 continues forward to the step at which the virtual conversation agent sends a follow-up text message or email message (in this case, no call or voicemail, since a voicemail message was already left from the initial phone call to the human candidate) requesting a response from the human candidate within the particular time frame (at 160). These teachings establish an order of channels of the plurality of channels of communication. The particular time frame is interpreted as the period of time for contact for each of the channels. Poddar teaches at col. 15, lines 54-57, that in some embodiments the invention's processes are stored in the system memory, the permanent storage device, and/or the read-only memory.
]

identifying a first channel of the plurality of channels of communication and a corresponding first period of time for contact using the preference data; [Poddar teaches at col. 4, lines 56-59, that in some embodiments the virtual conversation agent is used to conduct conversations across different channels such as phone call, email, and text in lieu of providing direct contact with a human being. Poddar teaches at col. 4, lines 34-39, that in some embodiments the omni-channel orchestrated conversation process for conducting real-time contextual and fluid conversation with a human by a virtual conversation agent involves an email-based conversation between the human and the virtual conversation agent. Poddar teaches at Figure 1, Item 160, that a bot sends a follow-up text, email, or phone call to the candidate and requests a response within a timeframe. The timeframe is interpreted as the period of time for contact.]

initiating a conversation by attempting to contact the participant using the first channel and during the period of time; [Poddar teaches at col. 9, lines 29-34, that when the virtual conversation agent affirmatively did initiate the conversation by phone ('YES'), then the first part of the omni-channel orchestrated conversation process 100 proceeds to a step at which the virtual conversation agent leaves a voicemail message if the human candidate does not pick up the phone (at 150). Poddar teaches at col. 4, lines 56-59, that in some embodiments the virtual conversation agent is used to conduct conversations across different channels such as phone call, email, and text in lieu of providing direct contact with a human being. Poddar teaches at col. 4, lines 34-39, that in some embodiments the omni-channel orchestrated conversation process for conducting real-time contextual and fluid conversation with a human by a virtual conversation agent involves an email-based conversation between the human and the virtual conversation agent.]
and in response to the participant accepting the contact, converting survey questions to natural language comprising a preferred native language of the participant; [Poddar teaches at col. 17, lines 5-7, capturing and converting live speech from the telephonic speech conversation to structured text used to understand the intent of the human user…]

interacting with the participant using the first channel to receive survey answers to the survey questions in the natural language comprising the preferred native language of the participant; [Poddar teaches at Figure 3, Item 340, that the bot determines an appropriate response or next question (interpreted to be a survey question for a participant) based on the candidate's prior response, and at Item 350 that the bot analyzes the user response and asks contextual follow-up questions. This teaches interacting with the participant using the first channel to receive survey answers. Poddar teaches at Figure 3, Item 310, that live speech of the candidate is converted to text (interpreted to be natural language) and fed into a natural language understanding AI model to understand candidate intent.]

analyzing the survey answers; [Poddar teaches at Figure 3, Item 310, that live speech of the candidate is converted to text and fed into a natural language understanding AI model to understand candidate intent.]

[…].
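As an editorial sketch (not from either reference), the claimed preference-driven contact flow that the Poddar mapping above addresses amounts to: store the participant's ordered channel list with a time window per channel, then identify the first channel whose window covers the current time. The data layout and `first_available_channel` function below are hypothetical.

```python
from datetime import time

# Hypothetical preference data: an ordered list of channels to try,
# each with a corresponding period of time for contact.
preferences = [
    {"channel": "phone", "window": (time(9, 0), time(12, 0))},
    {"channel": "text",  "window": (time(12, 0), time(18, 0))},
    {"channel": "email", "window": (time(0, 0),  time(23, 59))},
]

def first_available_channel(now, prefs):
    """Identify the first channel, in the participant's preferred order,
    whose contact window covers the current time."""
    for pref in prefs:
        start, end = pref["window"]
        if start <= now <= end:
            return pref["channel"]
    return None

first_available_channel(time(10, 30), preferences)  # -> "phone"
```

Because the list is ordered by the participant's stated preference, a failed or rejected contact attempt naturally falls through to the next channel on the list, which is the fallback behavior the claim 4 mapping later relies on.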
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to modify the method of providing a chatbot service for symptom and disease matching based on AI of Yeon with the omni-channel orchestrated conversation system and virtual conversation agent for real-time contextual and orchestrated omni-channel conversation with a human, and the omni-channel orchestrated conversation process for conducting real-time contextual and fluid conversation with the human by the virtual conversation agent, of Poddar, with the motivation of addressing limitations which include, non-exhaustively: the inability to hold a real-time, spontaneous, and contextually relevant telephonic speech conversation with a human (e.g., not being able to engage in a real-time telephonic screening of a candidate for employment); the inability to effectively handle interruptions by the human speaker during conversation; the lack of domain-specific conversational data for language processing; the inability to engage in non-linear conversations; and the inability to ask contextually relevant follow-up questions when responses from the human speaker are not sufficiently clear, detailed, and/or explained (Poddar at col. 1, lines 36-47).

Yeon/Poddar may not explicitly teach: wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation; […] outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication; and sending the prescription medication to the participant.

Prince teaches: […] outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication; and sending the prescription medication to the participant.
[Prince teaches at the Abstract that the health platform ingests a first data set from a first medical diagnostics assessment of a patient and a second data set of identifying factors associated with the patient. Prince teaches at Figure 3, Item 308, generating, based on the application of the rule engine to the ingested data, a recommendation, interpreted to be a medical treatment. Prince teaches at para. [0036] that the rules engine ingests the first data set (e.g., the medical diagnostic assessment) to determine at least one particular state (e.g., PTSD, anxiety, depression, etc.) of the patient and a corresponding level of severity on a numerical scale (e.g., 1 to 7). Prince teaches at Figure 5, Item 500, the “state” column, which is interpreted to be outputting the diagnosis.]

Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to further modify the combined method of Yeon and Poddar with the artificial intelligence mental health diagnostic system and method of Prince, with the motivation of more efficiently generating a course of treatment for a mental health concern.

Yeon/Poddar/Prince may not explicitly teach: wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation; […]

Muklason teaches: wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation; [Muklason teaches at pg.
15 that the algorithm calculates the output of all neurons in each layer, which is forwarded to the next layer until the output of the last layer is obtained. Muklason teaches at pg. 15 that this process is called the forward pass. This teaches wherein the training includes calibrating weights for each AI chatbot with forward propagation. Muklason teaches that MLP training itself is a procedure in which the value for an individual weight is determined in such a way that the modeled network relationship can be solved accurately. Muklason teaches at pg. 15 that the purpose of MLP training is to find the combination of weights that produces the smallest error rate. Muklason teaches at pg. 15 measuring the error contribution of each layer by passing through each layer in reverse or backwards. This teaches calibrating weights with backward propagation.] […]

Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to further modify the combined method of Yeon, Poddar, and Prince with the Android-based chatbot application using a back propagation neural network to help the first treatment of children's diseases of Muklason, with the motivation of improving the infant mortality rate (IMR), which data from the Indonesian demographic and health survey in 2017 showed reached 24 deaths out of 1000 live births (Muklason at the Abstract).
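As a minimal editorial sketch of the forward-pass/backward-pass weight calibration Muklason describes, the following trains a single sigmoid neuron (rather than a full multi-layer MLP) on logical OR. Each update does a forward propagation to compute the output, then a backward propagation of the squared-error gradient to adjust the weights. All names and hyperparameters are illustrative, not taken from Muklason.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, lr=0.5):
    """Calibrate weights with forward and backward propagation
    for a single sigmoid neuron (a one-layer stand-in for an MLP)."""
    random.seed(0)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Forward propagation: inputs -> neuron output.
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # Backward propagation: gradient of squared error
            # through the sigmoid, back to each weight.
            grad = (out - target) * out * (1.0 - out)
            w[0] -= lr * grad * x[0]
            w[1] -= lr * grad * x[1]
            b -= lr * grad
    return w, b

# Learn logical OR: the smallest-error weight combination Muklason's
# MLP training aims for, here on a trivially separable task.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
```

In a real MLP the backward pass repeats this gradient step layer by layer in reverse, which is the "measuring the error contribution of each layer" language the examiner quotes.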
Regarding Claims 8 and 15
Due to their similarity to claim 1, claims 8 and 15 are similarly analyzed and rejected in a manner consistent with the rejection of claim 1.
Regarding Claim 2
Yeon/Poddar/Prince/Muklason teach the computer-implemented method of claim 1. Yeon/Poddar/Prince/Muklason further teach: wherein the AI chatbot comprises an AI chatbot that is trained on medical knowledge and clinical staff interactions with other participants. [Prince teaches at para. [0017] that the platform will receive a survey (e.g., a survey, medical diagnostics test, etc.) from a patient using the client device running the application. Prince teaches at para. [0035] that, in the health platform, applying the rule engine includes utilizing machine learning techniques (e.g., artificial intelligence, neural networks, natural language processing, etc.) on first and second data sets, with reference to Figs. 4 and 5. Prince teaches at para. [0035] that the health platform uses a machine learning model to incorporate the historical data (e.g., ingested data from the patient, including identifying information and historical static interactive assessments) stored in big data in the application of the rules engine. At para. [0036] Prince teaches that the rules engine ingests the first data set (e.g., the medical diagnostic assessment) to determine at least one particular state (e.g., PTSD, anxiety, depression, etc.) of the patient and a corresponding level of severity based on a numerical scale (e.g., 1 to 7). Prince teaches at para. [0029] that any additional data, including changes to the patient assessment and results from the recommended treatments, will be added to big data 108 and further analyzed, resulting in refinement and improvement of the AI engine (e.g., machine learning model) over time.]
Regarding Claims 9 and 16
Due to their similarity to claim 2, claims 9 and 16 are similarly analyzed and rejected in a manner consistent with the rejection of Claim 2.
Regarding Claim 4
Yeon/Poddar/Prince/Muklason teach the computer-implemented method of claim 1. Yeon/Poddar/Prince/Muklason further teach: further comprising operations for: in response to the participant rejecting the contact, attempting to contact the participant using one or more other channels of the plurality of channels of communication for a predetermined number of attempts. [Poddar teaches at Figure 2 Item 220 that, with the conversation continuing via text or email, the bot tries to converge the conversation into a phone call. Poddar teaches at Figure 3 Item 390 that the bot automatically sends a follow up text/email to the candidate with information to reconnect if the call was closed (inadvertently) too soon. These teachings are interpreted as attempting to contact the participant using one or more other channels of the plurality of channels of communication for a predetermined number of attempts. The predetermined number of attempts is interpreted to be one.]
Regarding Claims 11 and 18
Due to their similarity to Claim 4, Claims 11 and 18 are similarly analyzed and rejected in a manner consistent with the rejection of Claim 4.
Regarding Claim 7
Yeon/Poddar/Prince/Muklason teach the computer-implemented method of claim 1. Yeon/Poddar/Prince/Muklason further teach: wherein the analysis result is selected from a group comprising: the medical diagnosis and the medical treatment for the participant, an effectiveness of the medical treatment for the participant, a recommendation of other data to collect from the participant, transcription by Automatic Speech Recognition (ASR), clinical information extraction, Natural Language Processing (NLP), and acoustic analysis for cognitive decline and neurodegenerative disease assessments. [Prince teaches at the Abstract that the health platform ingests a first data set from a first medical diagnostics assessment of a patient and a second data set of identifying factors associated with the patient.
Prince teaches at Figure 3 Item 308 generating, based on the application of the rule engine to the ingested data, a recommendation, interpreted to be a medical treatment. Prince teaches at para. [0036] that the rules engine ingests the first data set (e.g., the medical diagnostic assessment) to determine at least one particular state (e.g., PTSD, anxiety, depression, etc.) of the patient and a corresponding level of severity on a numerical scale (e.g., 1 to 7). Prince teaches at Figure 5 Item 500 the “state” column, which is interpreted to be outputting the diagnosis.]
Regarding Claim 14
Due to its similarity to Claim 7, Claim 14 is similarly analyzed and rejected in a manner consistent with the rejection of Claim 7.
Regarding Claim 21
The computer implemented method of claim 1, wherein the AI chatbot is trained with conversion knowledge to convert between languages. [Yeon teaches at pg. 12 that disease name translation and integration will be carried out with the disease related tasks mentioned above. Yeon teaches at pg. 11 English-Korean translation. Yeon teaches at pg. 11 disease translation, and that the disease name of the raw data will be written in English. Yeon teaches at pg. 11 that if the predicted disease name appears in English, intuitive understanding is often impossible, and it is judged that there will be inconvenience even when the user directly searches for the disease, so it can be translated into Korean and unified. Yeon teaches at pg. 10, referring to Fig. 4, that the format of data for training the machine learning model is different, so a preprocessing process is required. This preprocessing process is interpreted as wherein the AI chatbot is trained with conversion knowledge to convert between languages.]
Regarding Claims 22 and 23
Due to their similarity to Claim 21, Claims 22 and 23 are similarly analyzed and rejected in a manner consistent with the rejection of Claim 21.
Claims 3, 10, and 17 are rejected under 35 U.S.C.
103 as being unpatentable over KR 10-2444460 B1 (hereafter Yeon) in view of US 11545141 B1 (hereafter Poddar) in view of US 2023/0326577 A1 (hereafter Prince) in view of Android based chatbot application using back propagation neural network to help the first treatment of children’s disease (hereafter Muklason) further in view of US 20240289362 A1 (hereafter Williams).
Regarding Claim 3
Yeon/Poddar/Prince/Muklason teach the computer-implemented method of claim 1. Yeon/Poddar/Prince/Muklason further teach: wherein the plurality of channels of communication comprise phone calls, text messages, [Poddar teaches at column 4, lines 56-59, that in some embodiments the virtual conversation agent is used to conduct conversations across different channels such as phone call, email, and text in lieu of providing direct contact with a human being.]
Yeon/Poddar/Prince/Muklason may not explicitly teach: video calls, social media, and web applications.
Williams teaches: video calls, social media, and web applications. [Williams at claim 20 teaches that the indication of the user identity includes at least one of (i) a phone call, (ii) a video call, (iii) a text message, or (iv) an email, and that the personalized dialogue output for the user includes a summary of one or more predetermined call topics, wherein the summary is personalized. The personalized summary is interpreted as social media and a web application.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the method of providing chatbot service for symptom and disease matching based on AI of Yeon with the omni-channel orchestrated conversation system and virtual conversation agent of Poddar, the artificial intelligence mental health diagnostic system and method of Prince, the Android based chatbot application using a back propagation neural network to help the first treatment of children’s diseases of Muklason, and the systems and methods for analysis of user telematics data using generative AI of Williams, with the motivation of addressing current systems for analyzing and accessing data that are cumbersome and/or difficult to understand for the user (Williams at para. [0003]).
Regarding Claims 10 and 17
Due to their similarity to Claim 3, Claims 10 and 17 are similarly analyzed and rejected in a manner consistent with the rejection of Claim 3.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over KR 10-2444460 B1 (hereafter Yeon) in view of US 11545141 B1 (hereafter Poddar) in view of US 2023/0326577 A1 (hereafter Prince) in view of Android based chatbot application using back propagation neural network to help the first treatment of children’s disease (hereafter Muklason) further in view of Dudechenko (Comparison of Word Embeddings for Extraction from Medical Records) further in view of US 2020/0349593 A1 (hereafter Whiting).
Regarding Claim 6
Yeon/Poddar/Prince/Muklason teach the computer-implemented method of claim 1.
Yeon/Poddar/Prince/Muklason further teach: further comprising operations for: converting a survey answer of the survey answers received from the participant into a survey answer embedding using a Natural Language Processing (NLP) model; [Prince teaches at para. [0017] that the platform will receive a survey (e.g., a survey, medical diagnostics test, etc.) from a patient using the client device running the application. Prince teaches at para. [0035] that, in the health platform, applying the rule engine includes utilizing machine learning techniques (e.g., artificial intelligence, neural networks, natural language processing, etc.) on first and second data sets, with reference to Figs. 4 and 5. Prince teaches at para. [0035] that the health platform uses a machine learning model to incorporate the historical data (e.g., ingested data from the patient, including identifying information and historical static interactive assessments) stored in big data in the application of the rules engine. At para. [0036] Prince teaches that the rules engine ingests the first data set (e.g., the medical diagnostic assessment) to determine at least one particular state (e.g., PTSD, anxiety, depression, etc.) of the patient and a corresponding level of severity based on a numerical scale (e.g., 1 to 7). Prince teaches at para. [0029] that any additional data, including changes to the patient assessment and results from the recommended treatments, will be added to big data 108 and further analyzed, resulting in refinement and improvement of the AI engine (e.g., machine learning model) over time.]
Yeon/Poddar/Prince/Muklason may not explicitly teach: comparing the survey answer embedding using a Natural Language Processing (NLP) model; comparing the survey answer embedding with embeddings of a plurality of multiple choice answers associated with the survey question and selecting a multiple choice answer from the plurality of multiple choice answers having an embedding that is closest to the survey answer embedding.
Dudechenko teaches: comparing the survey answer embedding using a Natural Language Processing (NLP) model; [This limitation is equivalent to comparing word embeddings using a Natural Language Processing (NLP) model. Dudechenko teaches at the title Comparison of Word Embeddings for Extraction from Medical Records.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the method of providing chatbot service for symptom and disease matching based on AI of Yeon with the omni-channel orchestrated conversation system and virtual conversation agent of Poddar, the artificial intelligence mental health diagnostic system and method of Prince, the Android based chatbot application using a back propagation neural network to help the first treatment of children’s diseases of Muklason, the systems and methods for analysis of user telematics data using generative AI of Williams, and the comparison of word embeddings for extraction from medical records of Dudechenko, with the motivation of making data from texts available for decision support systems (Dudechenko at the Abstract).
Yeon/Poddar/Prince/Muklason/Williams/Dudechenko may not explicitly teach: comparing the survey answer embedding with embeddings of a plurality of multiple choice answers associated with a survey question of the survey questions; and selecting a multiple choice answer from the plurality of multiple choice answers having an embedding that is closest to the survey answer embedding.
Whiting teaches: comparing the survey answer embedding with embeddings of a plurality of multiple choice answers associated with a survey question of the survey questions; and selecting a multiple choice answer from the plurality of multiple choice answers having an embedding that is closest to the survey answer embedding. [Whiting teaches at the Abstract that the disclosure covers methods, systems, and computer readable media that select answer choices from potential answer choices for a digital question based on responses to other digital questions and/or embedded user data. Whiting teaches at the Abstract that the disclosed systems select answer choices from potential answer choices for a digital question based on keywords and/or sentiment values identified by analyzing a text response.]
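The mapped limitation, selecting the multiple choice answer whose embedding is closest to the survey answer embedding, amounts to a nearest-neighbor comparison in embedding space. The following is a minimal illustrative sketch with toy vectors; in practice the embeddings would come from an NLP model, and the numbers and answer labels here are invented for illustration only:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def closest_answer(answer_embedding, choice_embeddings):
    """Return the multiple choice answer whose embedding has the highest
    cosine similarity to the free-text survey answer embedding."""
    return max(
        choice_embeddings,
        key=lambda c: cosine_similarity(answer_embedding, choice_embeddings[c]),
    )

# Toy embeddings standing in for NLP model output.
choices = {
    "Never": [0.9, 0.1, 0.0],
    "Sometimes": [0.4, 0.6, 0.2],
    "Often": [0.1, 0.9, 0.5],
}
survey_answer_embedding = [0.2, 0.8, 0.4]
selected = closest_answer(survey_answer_embedding, choices)  # "Often"
```

Cosine similarity is one common choice of closeness measure; Euclidean distance over normalized vectors would behave equivalently.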
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the method of providing chatbot service for symptom and disease matching based on AI of Yeon with the omni-channel orchestrated conversation system and virtual conversation agent of Poddar, the artificial intelligence mental health diagnostic system and method of Prince, the Android based chatbot application using a back propagation neural network to help the first treatment of children’s diseases of Muklason, the systems and methods for analysis of user telematics data using generative AI of Williams, the comparison of word embeddings for extraction from medical records of Dudechenko, and the dynamic choice reference list of Whiting, with the motivation of selecting answer choices from potential answer choices for a digital question based on a response (Whiting at the Abstract).
Regarding Claims 13 and 20
Due to their similarity to Claim 6, Claims 13 and 20 are similarly analyzed and rejected in a manner consistent with the rejection of Claim 6.
Response to Arguments
35 U.S.C. 101 Argument Responses
Applicant argues that claims 1, 8, and 15 are not directed to abstract ideas. The Examiner respectfully disagrees. MPEP 2106.04(a)(2)(II) states that a claimed invention is directed to certain methods of organizing human activity if the identified claim elements contain limitations that encompass fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Additionally, the Examiner submits that the identified claim elements represent a series of rules or instructions that a person or persons, with or without the aid of a computer, would follow to train chatbots to interview patients and diagnose and treat disease. Furthermore, the Examiner submits that healthcare itself inherently represents the organization of human activity. Applicant has not pointed to anything in the claims that falls outside of this characterization. Because the claim elements fall under a series of rules or instructions that a person or persons would follow to train chatbots to interview patients and diagnose and treat disease, the claimed invention is directed to an abstract idea. The claim further recites “training each pair of plurality of pairs of Artificial Intelligence (AI) chatbot and data analyzer for a different disease, wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation;….” When given its broadest reasonable interpretation in light of the disclosure, the training of a machine learning model represents the creation of mathematical interrelationships between data. As such, the training of the machine learning model represents a mathematical concept that is interpreted to be part of the identified abstract idea, supra. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes.
Applicant argues that claims 1, 8, and 15 are patent eligible because claims 1, 8, and 15, as a whole, integrate a recited judicial exception into a practical application of that exception. The Examiner respectfully disagrees. At Step 2A Prong One, the claim recites an abstract idea, law of nature, or natural phenomenon. At Step 2A Prong Two, the claim does not recite additional elements that integrate the judicial exception into a practical application.
The judicial exception (the abstract idea) is not integrated into a practical application of the exception. For example, MPEP 2106.04(d)(2) indicates that a practical application may be present where the abstract idea effects a particular treatment or provides particular prophylaxis for a disease or medical condition. A particular treatment/prophylaxis is present where: (a) there is a particular (i.e., named/described) treatment/prophylaxis that occurs when the claim is implemented; (b) the treatment/prophylaxis has more than a nominal connection/correlation to the abstract idea; and (c) the administration is more than extra-solution activity or a field of use. The limitation added to provide a generalized treatment (no type, amount, frequency, etc. is given) for a generic condition does not provide a practical application because a particular treatment/prophylaxis is not present in the claims. In this case, the limitation was also found to be an instruction to put the abstract idea to practical use on the computer system. For example, the limitation in question reads “and outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication; and sending the prescription medication to the participant.” MPEP 2106.05(f) indicates that a consideration when determining whether a claim integrates a judicial exception into a practical application at Step 2A Prong Two, or recites significantly more than a judicial exception at Step 2B, is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer.
The identified additional element is no more than mere recitation of the words “apply it” (or an equivalent) and/or instructions to implement an abstract idea or other exception on a computer, and therefore cannot provide a practical application or significantly more. Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Additionally, there is no improvement to any other technology or to the computer presented in the claims by any measure in the MPEP.
Applicant argues that the 2019 Guidance has specifically indicated that “claims that do not recite matter that falls within these enumerated groupings of abstract ideas [mathematical concepts, certain methods of organizing human activity, or mental processes] should not be treated as reciting abstract ideas…” unless the Director of the Technology Center specifically agrees. 2019 Guidance at pg. 11, 21-22. In addition, Applicant cites the August 4, 2025 Memo, Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101, for the reminder that the mental process grouping is not without limits, and that Examiners are reminded not to expand this grouping in a manner that encompasses claim limitations that cannot practically be performed in the human mind. Applicant respectfully submits that claims 1, 8, and 15 are not directed to mathematical concepts, certain methods of organizing human activity, or mental processes, and thus that claims 1, 8, and 15 are not directed to abstract ideas.
This argument repeats the first argument; please see the first argument response. In addition, the Examiner of course respects the quoted requirement of the TC Director’s agreement.
However, in this case Applicant is reciting a chatbot that performs diagnosis and issues treatment instructions or a prescription, which is an example of interacting with/surveying, diagnosing, and treating patients, activity that typically, though not always, falls under certain methods of organizing human activity because such activities are the basis of healthcare and inherently involve patient interaction at their core. The TC Director typically looks at exceptions. Additionally, the TC Director, like the Examiner, would determine through an Alice/Mayo analysis that the abstract idea preempts a number of healthcare activities, but there is no disclosure of an improvement to technology, a particular treatment or prophylaxis, or any other measure derived from the MPEP to integrate the judicial exception into a practical application at Step 2A Prong Two.
Applicant argues that, similar to Example 39, claims 1, 8, and 15 recite training steps, and specifically that the training steps (for example, calibrating weights for each chatbot with forward propagation and backward propagation) make the claimed material not abstract and therefore patentable subject matter. The Examiner disagrees. Example 39 is an example of a specific training step that had various features, including an iterative training algorithm, and this algorithm minimized false positives, resulting in a robust facial detection model that detects faces in distorted images. Example 39 was found to be eligible at Step 2A Prong One because it did not recite an abstract idea (mathematical concepts, mental processes, nor a method of organizing human activity), a natural law, and/or a natural phenomenon. Thus, the claim was found to be eligible because it does not recite a judicial exception.
The Example 39 explanation includes the following text: As there are no bright lines between the types of judicial exceptions, and many of the concepts identified by the courts as exceptions can fall under several exceptions, MPEP 2106.04, subsection I instructs examiners to “identify… the claimed concept (the specific claim limitation(s) that the examiner believes may recite an exception)”; where a limitation aligns with at least one exception (e.g., a mathematical concept-type abstract idea, a mental process-type abstract idea, or a law of nature), it is adequate for an examiner to identify the limitation as falling under at least one judicial exception and to base further analysis on that identification. The remainder of this discussion is premised on the recited exception as an abstract idea. See MPEP 2106.04, subsection II.B.
The training step was incorporated into the abstract idea, and the additions reinforce and extend this categorization when the claim is viewed as a whole. As an additional element, alone or in combination with the other additional elements, this training does not integrate the judicial exception into a practical application of the exception because it does not meaningfully limit the application of the judicial exception to a practical application; it merely recites another judicial exception (for example, “with forward propagation and backward propagation” merely adds detail concerning the type of mathematics used; it does not integrate the judicial exception into a practical application of the exception). Note that complex abstract ideas can still be considered abstract ideas. Applicant’s recitation of mathematical concepts (for example, calibrating weights for each chatbot in training) is at least one difference between Example 39 and the instant claims and one reason that the subject matter eligibility analysis proceeded past Step 2A Prong One.
At Step 2A Prong Two, the Examiner found that the claim does not recite additional elements that integrate the judicial exception into a practical application. The claims preempt the abstract idea but fail to recite a particular treatment or prophylaxis, an improvement to the computer recited, an improvement to any other technology or technical field recited, or any other measure in the MPEP that would promote eligibility at Step 2A Prong Two.
Applicant argues that even if it were established that claims 1, 8, and 15 are directed to some abstract idea, the claims are still patent eligible. Applicant notes the analysis should take into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application. The Examiner appreciates the need to be sensitive to the features of the invention. To paraphrase the invention: the method trains pairs of AI chatbot and data analyzer for different diseases with forward propagation and backward propagation; selects chatbots based on disease; retrieves certain data indicating an order of communication channels to try and a corresponding period of time for contact; identifies the channel of contact using preference data; initiates a conversation during the first period of time and via the first mode of communication; converts survey questions to natural language comprising a preferred native language of the participant; interacts with the participant using the first channel to receive information; analyzes the survey answers; outputs an analysis result; and sends the prescription medication to the participant.
Applicant argues that, when taking into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application, claims 1, 8, and 15 cover a particular solution to a problem or a particular way to achieve a desired outcome.
A particular solution to a problem may or may not be patentable depending on a range of factors disclosed in full in the MPEP. In terms of 35 U.S.C. 101 eligibility, which is of course only one factor used to determine patentability, a claim reciting an abstract idea at Step 2A Prong One of the Alice/Mayo subject matter eligibility analysis may integrate the judicial exception into a practical application of the exception by representing an improvement via a demonstrated technical improvement to a technical problem; no such improvement is demonstrated here. In short, Applicant’s disclosure has reinforced that the recited subject matter is directed to an abstract idea. For example: The claim further recites “training each pair of plurality of pairs of Artificial Intelligence (AI) chatbot and data analyzer for a different disease, wherein the training includes calibrating weights for each AI chatbot; selecting a particular pair of AI chatbot and data analyzer based on the disease; and under control of the AI chatbot and the data analyzer,…”. When given its broadest reasonable interpretation in light of the disclosure, the training of a machine learning model represents the creation of mathematical interrelationships between data. As such, the training of the machine learning model represents a mathematical concept that is interpreted to be part of the identified abstract idea, supra. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes. The training step was incorporated into the abstract idea, and the addition reinforced and extended this categorization when the claim is viewed as a whole.
As an additional element, alone or in combination with the other additional elements, this training does not integrate the judicial exception into a practical application of the exception because it does not meaningfully limit the application of the judicial exception to a practical application; it merely recites another judicial exception (for example, “with forward propagation and backward propagation” merely adds detail concerning the type of mathematics used; it does not integrate the judicial exception into a practical application of the exception). Note that complex abstract ideas can still be considered abstract ideas. Selecting a particular pair of AI chatbot and data analyzer from the plurality of pairs of AI chatbot and data analyzer based on a particular disease; retrieving preference data; using the preference data; and converting survey questions to natural language comprising a preferred native language of the participant: these features were all part of the abstract idea and reinforced the recitation of a certain methods of organizing human activity abstract idea relevant to following rules or instructions to interact with a patient to diagnose and treat disease. Note that certain methods of organizing human activity can include interaction with computer(s).
Applicant argues that claims 1, 8, and 15 are patent eligible because, similar to Example 49, the independent claims involve outputting an analysis result, wherein the analysis result comprises a medical diagnosis and a medical treatment comprising a prescription medication, and sending the prescription medication to the participant. Thus, Applicant argues, prescription medication is delivered to the participant and the application is allowable. The Examiner respectfully disagrees. This is largely a compendium of arguments already addressed.
For example, it has been established that delivering a generic prescription on the basis of a general diagnosis to a patient is not the same as reciting a particular treatment or prophylaxis. Additionally, the Examiner does not see the analogy between Example 49, a Fibrosis Treatment, and a generalized method for a chatbot interviewing a patient and providing diagnosis and treatment information.
35 U.S.C. 103 Argument Responses
Applicant argues that Yeon does not teach nor suggest a plurality of tokenizer and chatbot service pairs for different diseases and does not teach or suggest selecting a particular pair based on a particular disease. Note that this is not the specific recitation of the claim language. Regardless, Yeon teaches at pg. 3 that, in this case, the tokenizing will be performed using at least one of Soynlp, koNLpy, and a tokenizer. Yeon teaches at pg. 3 that, in the method for providing a chatbot service for symptom and disease matching based on artificial intelligence according to an embodiment of the present invention, a weight corresponding to the symptom standard keyword, for each symptom standard keyword, can be output through the first machine learning model for disease matching. Yeon teaches at pg. 3 that the matching will include matching the disease through a second machine learning model that outputs the disease and disease possibility, with the symptom standard keyword and the weight as inputs. Collectively, this is selecting a particular pair of AI chatbot and data analyzer (interpreted here to be the first and second machine learning models) from the plurality of pairs of AI chatbot and data analyzer based on a particular disease. The plurality of pairs of AI chatbots and data analyzers is comprised of the first model selecting the second machine learning model corresponding to the keyword weight and symptom. Note that nothing in the claim indicates that the data analyzer is a tokenizer.
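The two-model pipeline attributed to Yeon above (a first model outputting a weight for each symptom standard keyword, and a second model taking the keywords and weights as inputs and outputting a disease and a disease possibility) can be sketched as follows. The keyword weights and disease profiles below are invented toy data; this is an illustration of the described structure only, not Yeon's actual models:

```python
def weight_keywords(symptom_keywords, keyword_weights):
    """First model: output a weight corresponding to each symptom
    standard keyword."""
    return {kw: keyword_weights.get(kw, 0.0) for kw in symptom_keywords}

def match_disease(weighted_keywords, disease_profiles):
    """Second model: take the keywords and weights as inputs and output
    the matched disease and a disease-possibility score."""
    scores = {
        disease: sum(weighted_keywords.get(kw, 0.0) for kw in profile)
        for disease, profile in disease_profiles.items()
    }
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1.0
    return best, scores[best] / total  # normalized "possibility"

# Toy data standing in for the two trained models.
keyword_weights = {"fever": 0.8, "cough": 0.6, "rash": 0.9}
disease_profiles = {"flu": ["fever", "cough"], "measles": ["fever", "rash"]}
weighted = weight_keywords(["fever", "cough"], keyword_weights)
disease, possibility = match_disease(weighted, disease_profiles)  # "flu"
```

The point of the sketch is only the division of labor: one component scores symptom keywords, and a second component consumes those scores to produce a disease match.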
Applicant further argues that Yeon does not teach or suggest that the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation. Please see the updated rejection, which applies additional art addressing this limitation.

Applicant argues that Prince does not cure the defects of the other cited references: Prince does not teach or suggest a plurality of pairs of AI chatbot and data analyzer for a different disease and selecting a particular pair of AI chatbot and data analyzer from the plurality of pairs based on a particular disease, and does not teach or suggest that the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation. Please see the updated rejection, which applies additional art addressing this limitation.

Applicant argues that Poddar does not cure the defects of the other cited references. Poddar describes that the omni-channel orchestrated conversation server system includes several components and functional elements, namely a virtual conversation agent, a conversation transformation module, and an artificial intelligence (AI) engine and machine learning sub-system 44 (Figure 4; col. 13, lines 20-24). However, Poddar does not teach or suggest a plurality of pairs of AI chatbot and data analyzer for a different disease and selecting a particular pair of AI chatbot and data analyzer from the plurality of pairs based on a particular disease. In addition, Poddar does not teach or suggest that the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation. Each of these specific limitations has been addressed in the arguments above and in the rejection, where new art has been added to address the amended limitations.
Applicant asserts that the combination of Yeon, Prince, and Poddar does not teach or suggest: training each pair of a plurality of pairs of Artificial Intelligence (AI) chatbot and data analyzer for a different disease, wherein the training includes calibrating weights for each AI chatbot with forward propagation and backward propagation; and selecting a particular pair of AI chatbot and data analyzer from the plurality of pairs of AI chatbot and data analyzer based on a particular disease. Please see the updated 35 U.S.C. 103 rejection, which includes new art to address the amended claim limitation.

Conclusion

The following prior art is relevant to the current claims but was not used as the basis of a rejection: Dey, Aniket. "An Integrated Approach to Non-Invasive Diagnosis of Dementia Using Natural Language Processing and Machine Learning." 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA), October 28-30, 2022, Dalian, China. This non-patent literature describes a particular application of chatbots to the medical field relevant to neurological subject matter.

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRISTAN ISAAC EVANS, whose telephone number is (571) 270-5972.
The examiner can normally be reached Mon-Thurs, 8:00am-12:00pm and 1:00pm-7:00pm, and is off Fridays. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Morgan, can be reached at 571-272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.I.E./
Examiner, Art Unit 3683

/CHRISTOPHER L GILLIGAN/
Primary Examiner, Art Unit 3683
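For readers less familiar with the machine-learning terms contested above, the disputed limitation (a plurality of pairs of AI chatbot and data analyzer, one pair trained per disease with forward propagation and backward propagation, with a particular pair selected based on a particular disease) can be illustrated with a minimal sketch. This is purely hypothetical and not part of the record or the cited art; every name here (TinyChatbotModel, select_pair, the analyzer labels) is invented for illustration.

```python
import random

class TinyChatbotModel:
    """A one-weight toy network standing in for an AI chatbot's model."""

    def __init__(self, seed):
        # Seeded initialization so the sketch is reproducible.
        self.w = random.Random(seed).uniform(-1.0, 1.0)

    def forward(self, x):
        # Forward propagation: compute the model's prediction.
        return self.w * x

    def train_step(self, x, target, lr=0.1):
        # Calibrate the weight: forward pass, then backward pass
        # (gradient of squared error with respect to w).
        pred = self.forward(x)
        grad = 2.0 * (pred - target) * x
        self.w -= lr * grad
        return (pred - target) ** 2  # loss measured before this update

# A plurality of (AI chatbot, data analyzer) pairs, one per disease.
# The data analyzer is represented by a placeholder label.
PAIRS = {
    "influenza": (TinyChatbotModel(seed=1), "influenza_symptom_tokenizer"),
    "diabetes": (TinyChatbotModel(seed=2), "diabetes_lab_value_parser"),
}

def select_pair(disease):
    """Select a particular pair based on a particular disease."""
    return PAIRS[disease]

# Training: each pair's chatbot has its weights calibrated independently.
for disease, (bot, _analyzer) in PAIRS.items():
    for _ in range(100):
        bot.train_step(x=1.0, target=0.5)
```

The dictionary keyed by disease stands in for the claimed plurality of pairs, and the gradient step inside train_step is the forward/backward weight calibration; whether a two-model pipeline such as Yeon's first and second machine learning models can be read as such a pair is exactly the interpretive dispute summarized above.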

Prosecution Timeline

Jun 12, 2023: Application Filed
Mar 17, 2025: Non-Final Rejection — §101, §103
May 01, 2025: Applicant Interview (Telephonic)
May 01, 2025: Examiner Interview Summary
May 02, 2025: Response Filed
Aug 08, 2025: Final Rejection — §101, §103
Sep 25, 2025: Interview Requested
Oct 07, 2025: Examiner Interview Summary
Oct 07, 2025: Applicant Interview (Telephonic)
Oct 09, 2025: Response after Non-Final Action
Oct 22, 2025: Request for Continued Examination
Oct 31, 2025: Response after Non-Final Action
Feb 05, 2026: Non-Final Rejection — §101, §103
Apr 06, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586684
DECISION SUPPORT TOOLS FOR REDUCING READMISSIONS OF INDIVIDUALS WITH ACUTE MYOCARDIAL INFARCTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12482555
SURGICAL DATA SYSTEM AND CLASSIFICATION
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12469604
Computer Vision Monitoring and Prediction of Ailments
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12462934
DEVICE-INSULATED MONITORING OF PATIENT CONDITION
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12462927
METHODS AND SYSTEMS TO OPTIMIZE THE UTILIZATION OF HEALTH WORKER AND ENHANCE HEALTHCARE COVERAGE FOR POPULATION TO DELIVER CRITICAL/IN-NEED HEALTHCARE SERVICES
Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 90% (+54.2%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 47 resolved cases by this examiner. Grant probability derived from career allow rate.
