Prosecution Insights
Last updated: April 19, 2026
Application No. 18/341,569

ELECTRONIC DATA VERIFICATION USING ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §102, §103, §112
Filed: Jun 26, 2023
Examiner: WOOLWINE, SHANE D
Art Unit: 2124
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Stripe, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (324 granted / 375 resolved), +31.4% vs TC avg (above average)
Interview Lift: +21.0% (allowance among resolved cases with vs. without an interview)
Typical Timeline: 2y 11m avg prosecution; 10 applications currently pending
Career History: 385 total applications across all art units
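The headline figures above can be reproduced from the raw counts; a minimal sketch of the arithmetic (variable names are ours, not from the report):

```python
# Deriving the headline examiner stats from the raw career counts
# reported above (324 granted of 375 resolved, +31.4% vs TC avg).
granted = 324
resolved = 375

allow_rate = granted / resolved * 100   # career allowance rate, %
delta_vs_tc = 31.4                      # reported delta vs Tech Center average
tc_avg = allow_rate - delta_vs_tc       # implied TC 2100 average, %

print(f"Career allow rate: {allow_rate:.1f}%")  # 86.4%, displayed as 86%
print(f"Implied TC average: {tc_avg:.1f}%")     # 55.0%
```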

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 375 resolved cases.
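The per-statute deltas all imply the same Tech Center baseline, which can be checked directly. The report does not define the denominator behind each rate, so treat this as arithmetic on the displayed figures only:

```python
# Recovering the Tech Center baseline implied by the per-statute deltas
# above (rate - delta = baseline). Dictionary names are ours.
rates = {"101": 13.6, "102": 17.9, "103": 46.3, "112": 12.2}
deltas = {"101": -26.4, "102": -22.1, "103": 6.3, "112": -27.8}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]
    print(f"§{statute}: implied TC average = {baseline:.1f}%")

# Every statute implies the same 40.0% baseline, so the "Tech Center
# average estimate" appears to be a single overall figure.
```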

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 16 recites the limitation "The system of claim 1, wherein the set of instructions further cause the processor to" in line one of the claim. There is insufficient antecedent basis for this limitation because claim 1, from which claim 16 depends, is a method claim and does not previously recite a set of instructions or a processor. Claim 16 is therefore rejected as indefinite. Because the dependency on claim 1 appears to be a typographical error, claim 16 is interpreted for the purposes of this Office action as depending from claim 9, which recites the processor and the set of instructions.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process without significantly more. In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidance. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)

Regarding claims 1, 9, and 17, taking claim 9 as exemplary:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. Claim 1 recites a method; claims 9 and 17 recite systems.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The claim recites "determine whether a correct decision can be made for the request based on a current information available and a likelihood of success; in response to the determine that the decision can be determined for the request based on the current information available, utilizing, by the system, a first model to determine a set of questions corresponding to the request, the first model previously trained using training data comprising a set of questions associated with a set of requests; utilize a second model to determine one or more predicted answers for the set of questions, the second model ingesting the set of questions determined by the first model and at least one attribute associated with the request to generate the one or more predicted answers; and in response to the determining the set of questions and the one or more predicted answers, utilize a third model to determine the decision for the request, wherein the third model receives the set of questions and the one or more predicted answers as inputs in determining the decision." This recites the mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Although claims 1, 9, and 17 recite "models", "training", a "computer-readable medium", a "processor", and the use of a "database", these elements are recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, for the same reasons: the additional elements are recited at a high level of generality and amount to no more than mere instructions to apply the exception on generic machines, which is no more than extra-solution activity. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 134 S. Ct. 2347, 2360 (2014).

For the reasons above, claims 1, 9, and 17 are rejected as being directed to non-patentable subject matter under §101.
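For readers following the eligibility analysis, the limitations quoted above describe a three-model pipeline gated by an up-front feasibility check. A minimal sketch of that control flow; every identifier here is hypothetical and illustrative only, since the application publishes no code:

```python
from typing import Callable, List, Optional

# Hypothetical stand-ins for the three trained models recited in claim 9.
QuestionModel = Callable[[str], List[str]]             # first model: request -> questions
AnswerModel = Callable[[List[str], str], List[str]]    # second model: questions + attribute -> answers
DecisionModel = Callable[[List[str], List[str]], str]  # third model: questions + answers -> decision

def decide(request: str,
           attribute: str,
           can_decide: Callable[[str], bool],
           q_model: QuestionModel,
           a_model: AnswerModel,
           d_model: DecisionModel) -> Optional[str]:
    """Sketch of the claimed flow: gate on whether a decision can be made
    from current information, then chain the three models in order."""
    if not can_decide(request):
        return None  # cf. claims 8/16: route the request to a human instead
    questions = q_model(request)             # first model: generate questions
    answers = a_model(questions, attribute)  # second model: predict answers
    return d_model(questions, answers)       # third model: decide

# Toy usage with trivial stand-in models:
decision = decide(
    "verify account data",
    "account-age:30d",
    can_decide=lambda r: True,
    q_model=lambda r: [f"Is '{r}' consistent?"],
    a_model=lambda qs, attr: ["yes" for _ in qs],
    d_model=lambda qs, ans: "approve" if all(a == "yes" for a in ans) else "review",
)
print(decision)  # approve
```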
The additional limitations of the dependent claims are addressed briefly below.

Regarding dependent claims 2, 10, and 18, taking claim 10 as exemplary: "wherein the set of instruction further cause the processor to: re-train at least one of the first model, the second model, or the third model, in accordance with an input associated with the decision." This continues to recite the abstract idea of claims 1, 9, and 17: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "model" is recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Regarding dependent claims 3, 11, and 19, taking claim 11 as exemplary: "wherein the third model ingests one or more predicted answers having a confidence score that satisfy a threshold." This continues to recite the abstract idea of claims 1, 9, and 17: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "model" is recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines, which is no more than extra-solution activity.

Regarding dependent claims 4, 12, and 20, taking claim 12 as exemplary: "wherein at least one question is generated by the first model to an impact value of a feature corresponding to a category of the at least one question." This continues to recite the abstract idea of claims 1, 9, and 17: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "model" is recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines, which is no more than extra-solution activity.

Regarding dependent claims 5 and 13, taking claim 13 as exemplary: "wherein the set of instructions further cause the processor to: display the set of questions and at least one predicted answer." This continues to recite the abstract idea of claims 1 and 9: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "display" is recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines, which is no more than extra-solution activity.

Regarding dependent claims 6 and 14, taking claim 14 as exemplary: "wherein when a confidence value of an answer is below a threshold, the system transmit a corresponding question to a computing device of a reviewer." This continues to recite the abstract idea of claims 1 and 9: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Regarding dependent claims 7 and 15, taking claim 15 as exemplary: "wherein the third model determines the decision for the request based on the set of questions, the one or more predicted answers, and at least one answer received from the computing device as inputs in determining the decision." This continues to recite the abstract idea of claims 6 and 14: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "model" and "computing device" are recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines, which is no more than extra-solution activity.

Regarding dependent claims 8 and 16, taking claim 8 as exemplary: "wherein the set of instructions further cause the processor to: transmit the request to a computing device of an employee in response to determining that the decision cannot be determined based on the current information available." This continues to recite the abstract idea of claims 1 and 9: a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The "processor" is recited at a high level of generality, amounting to no more than mere instructions to apply the exception using generic machines, which is no more than extra-solution activity.

Taken alone, the additional elements of the dependent claims above do not amount to significantly more than the above-identified judicial exception (the abstract idea). Viewing the limitations as an ordered combination adds nothing that is not already present when the elements are considered individually.
There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide a conventional computer implementation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7, 9-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Boxwell et al. (US 2020/0403945 A1, hereinafter Boxwell).

Regarding claims 1, 9, and 17, taking claim 9 as exemplary, Boxwell shows:

"A system comprising: a computer-readable medium having a set of instructions, that when executed, cause a processor to:" (Paragraph [0099]: "The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention." And in paragraph [0061]: "Hardware and software layer 60 includes hardware and software components.
Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68.")

"receive a request; determine whether a correct decision can be made for the request based on a current information available and a likelihood of success;" (Paragraph [0014]: "For example, consider a workplace environment that utilizes a chatbot that has a question answering functionality (or a question answering system). A new employee, who will not be working in the environment very long, is expected to make use of the system to perform their duties. They are also aware that several other employees have implemented particular adaptations (or domain adaptation models) with their use of the chatbot system. One of the other employees is a manager (or supervisor, etc.) who has customized their system extensively to make use of the appropriate domain "jargon" (e.g., terms of art within the field). Another employee has similarly customized their system with similar jargon and a few inside jokes related to the workplace, but has only been employed at the workplace for a short time (e.g., a month) and may have customized their system in such a way that is not helpful for other users." In paragraph [0066]: "In some embodiments, a user (e.g., a primary user) first selects or identifies which domain adaptation models (or models) they would like to utilize when interacting with (or using) the chatbot (or question answering) system. For example, when creating a user profile for or registering with the system, or perhaps via a system setting/preferences functionality, the user may be provided with a list of the available models, each of which may be associated with one or more other (or current, previous, etc.) users (e.g., users for which models have already been created)." In paragraph [0078]: "a confidence score of 90% may be determined (or calculated) for "QWE Technology" (and/or the answer generated using the model of User 2 as a whole), which corresponds to the weighting assigned to User 2's model (e.g., 0.9). Likewise, a confidence score of 50% may be determined for "QWE Guild," which corresponds to the weighting assigned to User 3's model (e.g., 0.5)." In paragraph [0095]: "A plurality of models (or domain adaptation models) is received (or retrieved) (step 504). Each of the plurality of models is associated with answering questions for a respective user. Each of the plurality of models may include at least one of synonyms, type information, and answer filters, and each of the users may be an individual (e.g., a primary user or other users). The plurality of models may be selected based on an indication received from a user (e.g., the user may provide input indicating a selection of particular ones of the available models)." And in paragraph [0097]: "An answer to a question (e.g., received from the primary user) is generated based on the plurality of models and the weighting assigned to each of the plurality of models (step 508). The generating of the answer to the question may include generating a preliminary answer to the question based on each of the plurality of models and scoring each of the preliminary answers based on the weighting assigned to the respective model.")

"in response to the determine that the decision can be determined for the request based on the current information available, utilizing, by the system, a first model to determine a set of questions corresponding to the request, the first model previously trained using training data comprising a set of questions associated with a set of requests;" (Paragraph [0095]: "A plurality of models (or domain adaptation models) is received (or retrieved) (step 504). Each of the plurality of models is associated with answering questions for a respective user. Each of the plurality of models may include at least one of synonyms, type information, and answer filters, and each of the users may be an individual (e.g., a primary user or other users). The plurality of models may be selected based on an indication received from a user (e.g., the user may provide input indicating a selection of particular ones of the available models)." In paragraph [0097]: "An answer to a question (e.g., received from the primary user) is generated based on the plurality of models and the weighting assigned to each of the plurality of models (step 508). The generating of the answer to the question may include generating a preliminary answer to the question based on each of the plurality of models and scoring each of the preliminary answers based on the weighting assigned to the respective model." And in paragraph [0066]: "In some embodiments, a user (e.g., a primary user) first selects or identifies which domain adaptation models (or models) they would like to utilize when interacting with (or using) the chatbot (or question answering) system. For example, when creating a user profile for or registering with the system, or perhaps via a system setting/preferences functionality, the user may be provided with a list of the available models, each of which may be associated with one or more other (or current, previous, etc.) users (e.g., users for which models have already been created).")

"utilize a second model to determine one or more predicted answers for the set of questions, the second model ingesting the set of questions determined by the first model and at least one attribute associated with the request to generate the one or more predicted answers;" (Paragraph [0095]: "A plurality of models (or domain adaptation models) is received (or retrieved) (step 504). Each of the plurality of models is associated with answering questions for a respective user. Each of the plurality of models may include at least one of synonyms, type information, and answer filters, and each of the users may be an individual (e.g., a primary user or other users). The plurality of models may be selected based on an indication received from a user (e.g., the user may provide input indicating a selection of particular ones of the available models)." And in paragraph [0097]: "An answer to a question (e.g., received from the primary user) is generated based on the plurality of models and the weighting assigned to each of the plurality of models (step 508). The generating of the answer to the question may include generating a preliminary answer to the question based on each of the plurality of models and scoring each of the preliminary answers based on the weighting assigned to the respective model.")

"and in response to the determining the set of questions and the one or more predicted answers, utilize a third model to determine the decision for the request, wherein the third model receives the set of questions and the one or more predicted answers as inputs in determining the decision." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models.
The second question may be the same as the (first) question. The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time.")

Regarding claims 2, 10, and 18, taking claim 10 as exemplary: Boxwell shows the method, system, and system of claims 1, 9, and 17 as claimed and specified above. And Boxwell shows "wherein the set of instruction further cause the processor to: re-train at least one of the first model, the second model, or the third model, in accordance with an input associated with the decision." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models. The second question may be the same as the (first) question. The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time.")

Regarding claims 3, 11, and 19, taking claim 11 as exemplary: Boxwell shows the method, system, and system of claims 1, 9, and 17 as claimed and specified above. And Boxwell shows "wherein the third model ingests one or more predicted answers having a confidence score that satisfy a threshold." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models. The second question may be the same as the (first) question. The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time." And in paragraph [0078]: "a confidence score of 90% may be determined (or calculated) for "QWE Technology" (and/or the answer generated using the model of User 2 as a whole), which corresponds to the weighting assigned to User 2's model (e.g., 0.9). Likewise, a confidence score of 50% may be determined for "QWE Guild," which corresponds to the weighting assigned to User 3's model (e.g., 0.5).")

Regarding claims 4, 12, and 20, taking claim 12 as exemplary: Boxwell shows the method, system, and system of claims 1, 9, and 17 as claimed and specified above. And Boxwell shows "wherein at least one question is generated by the first model to an impact value of a feature corresponding to a category of the at least one question." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models. The second question may be the same as the (first) question. The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time." And in paragraph [0078]: "a confidence score of 90% may be determined (or calculated) for "QWE Technology" (and/or the answer generated using the model of User 2 as a whole), which corresponds to the weighting assigned to User 2's model (e.g., 0.9).
Likewise, a confidence score of 50% may be determined for "QWE Guild," which corresponds to the weighting assigned to User 3's model (e.g., 0.5).")

Regarding claims 5 and 13, taking claim 13 as exemplary: Boxwell shows the method and system of claims 1 and 9 as claimed and specified above. And Boxwell shows "wherein the set of instructions further cause the processor to: display the set of questions and at least one predicted answer." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models. The second question may be the same as the (first) question. The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time." And in paragraph [0089]: "Although not shown in detail, the computing device 402 may include various user input devices that may be used by the user 412 to pose (or provide, submit, etc.) questions (or commands) to the question answering system, such as a microphone, a keyboard, mouse, touchscreen, etc., along with a display device and perhaps a speaker.")

Regarding claims 6 and 14, taking claim 14 as exemplary: Boxwell shows the method and system of claims 1 and 9 as claimed and specified above. And Boxwell shows "wherein when a confidence value of an answer is below a threshold, the system transmit a corresponding question to a computing device of a reviewer." (Paragraph [0093]: "an answer(s) 414 to the submitted question and provide the (final) answer(s) to the primary user 412 via the computing device 402 (e.g., via a display screen, speaker, etc.), as described above.")

Regarding claims 7 and 15, taking claim 15 as exemplary: Boxwell shows the method and system of claims 6 and 14 as claimed and specified above. And Boxwell shows "wherein the third model determines the decision for the request based on the set of questions, the one or more predicted answers, and at least one answer received from the computing device as inputs in determining the decision." (Paragraph [0098]: "Method 500 ends (step 510) with, for example, the generated (final) answer being provided to the user (i.e., the primary user) via (e.g., rendered by) a suitable computing device (e.g., the computing utilized by the primary user to submit the question) via, for example, a voice response, being displayed on a display device, provided via electronic communication, etc. The process may be repeated when the selected models are change, the weightings are changed, and/or a subsequent question is received. For example, a second weighting may be assigned to each of the plurality of models. An answer for a second question may be generated based on the plurality of models and the second weighting assigned to each of the plurality of models. The second question may be the same as the (first) question.
The generated answer for the second question may be different than the generated answer for the (first) question. In some embodiments, the user(s) may provide feedback related to the management of the question answering system, which may be utilized by the system to improve performance over time.")

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Boxwell in view of Salcido et al. (US 2022/0022760 A1, hereinafter Salcido).

Regarding claims 8 and 16: Boxwell shows the method and system of claims 1 and 9 as claimed and specified above. But Boxwell does not appear to explicitly recite “wherein the set of instructions further cause the processor to: transmit the request to a computing device of an employee in response to determining that the decision cannot be determined based on the current information available.”

However, Salcido teaches “wherein the set of instructions further cause the processor to: transmit the request to a computing device of an employee in response to determining that the decision cannot be determined based on the current information available.” (Paragraph [0002]: “a system is provided comprising a hardware processor executing instructions stored on a non-transitory media, the instructions for a method for providing real-time assessment of a potential risk of an infectious disease without face-to-face interaction between a human user and a human healthcare professional, the method including receiving a predetermined acceptable range for an aspect of a human user's physiological measurement data, receiving a predetermined acceptable answer for a survey response question, securely receiving an aspect of the human user's physiological measurement data, determining if the aspect of the human user's physiological measurement data is within the predetermined acceptable range, if the aspect of the human user's physiological measurement data is within the predetermined acceptable range, transmitting to the human user a survey comprising a question, if the aspect of the human user's physiological measurement data is not within the predetermined acceptable range, transmitting an active alert and not providing the human user with the survey comprising a question, and if the human user fails to transmit an acceptable answer to the survey comprising the question, transmitting an active alert.” – The failure to obtain an acceptable answer and the transmitting of an alert correspond to the claimed transmitting of the request when a decision cannot be made.)

Boxwell and Salcido are analogous art because both describe question and answer systems. Therefore, it would have been obvious to one of ordinary skill in the art at the filing date of the instant application, having the teachings of Boxwell and Salcido before him or her, to modify the teachings of Boxwell to include the teachings of Salcido in order to support requesting more information when needed in a question and answer system so as to obtain acceptable responses through alerts and thereby increase the marketability, safety, and efficiency of Boxwell (see Salcido paragraph [0002]).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Beaver (US 2020/0320134 A1), part of the prior art made of record, describes modifying a model using questions and responses of claims 1, 9, and 17 in paragraph [0009] by training “a question model and a response model using at least some of the received data; use the question model to generate a plurality of questions; for each question, assign an intent to the question; for each question, use the response model to generate a response.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE D WOOLWINE whose telephone number is (571) 272-4138. The examiner can normally be reached M-F 9:30-6:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MIRANDA HUANG, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANE D WOOLWINE/
SHANE D. WOOLWINE
Primary Examiner, Art Unit 2124

Prosecution Timeline

Jun 26, 2023
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596741
SYSTEMS AND METHODS FOR SEMANTIC CONCEPT DEFINITION AND SEMANTIC CONCEPT RELATIONSHIP SYNTHESIS UTILIZING EXISTING DOMAIN DEFINITIONS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591764
FAIRNESS ASSESSMENT FOR DEEP GENERATIVE MODELS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12567005
DETECTING ANOMALOUS DATA
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561618
ANOMALY DETECTION SYSTEM USING MULTI-LAYER SUPPORT VECTOR MACHINES AND METHOD THEREOF
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554985
OPERATIONAL NEURAL NETWORK PERFORMANCE VIA FEATURE SPACE ANALYSIS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+21.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
