Prosecution Insights
Last updated: April 19, 2026
Application No. 18/726,529

AUTOMATED SYSTEMS FOR DIAGNOSIS AND MONITORING OF STROKE AND RELATED METHODS

Final Rejection (§101, §103)
Filed: Jul 03, 2024
Examiner: GEDRA, OLIVIA ROSE
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ohio State Innovation Foundation
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 0m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 12 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 39 applications currently pending
Career History: 51 total applications across all art units

Statute-Specific Performance

§101: 39.8% (-0.2% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 12 resolved cases.
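The "vs TC avg" deltas above are simple arithmetic on the dashboard's own figures. A minimal sketch, assuming the Tech Center average (52.0% here) is inferred from the -52.0% delta, since the dashboard does not state it directly:

```python
def allow_rate_delta(granted: int, resolved: int, tc_avg_pct: float) -> tuple:
    """Return (career allow rate in %, delta vs Tech Center average in %)."""
    rate = 100.0 * granted / resolved
    return rate, rate - tc_avg_pct

# Dashboard figures: 0 granted of 12 resolved; TC average inferred as 52.0%.
rate, delta = allow_rate_delta(0, 12, 52.0)
print(rate, delta)  # 0.0 -52.0
```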

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the amendment filed on December 10, 2025. Claims 1, 17, and 31 have been amended. Claims 1-3, 6, 12-15, 17-19, 24-26, 29, and 31-35 are currently pending and have been examined. This action is made final.

Claim Objections

Claim 1 is objected to for reciting "receive, from the microphone, an audio signal from the microphone". This limitation is redundant, and appropriate correction is required. Claim 3 is objected to because it is identified as currently amended; however, it recites the same limitation as Claim 3 in the claims filed on 07/23/2024. It is therefore previously presented, and appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6, 12-15, 17-19, 24-26, 29, and 31-35 are rejected under 35 USC § 101 as being directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1 Analysis: Independent Claims 1, 17, and 31 are within the four statutory categories. Claims 1 and 17 are drawn to systems, and Claim 31 is drawn to a method. Dependent Claims 2-3, 6, 12-15, 18-19, 24-26, and 29 are also directed to a system, and Claims 32-35 are directed to a method. Therefore, the dependent claims also fall into one of the four statutory categories.
Step 2A Analysis - Prong One: Claim 1, which is representative of the inventive concept, recites the following:

A system, comprising:
an imaging device configured to capture image frames representative of a state of a patient;
a microphone configured to output a digitized audio stream; and
a computing device comprising a processor and a memory operably coupled to the processor, wherein the memory has computer-executable instructions stored thereon that, when executed by the processor, cause the processor to:
receive a sequence of images from the imaging device, the sequence of images capturing a state of a patient, the sequence of images being input to a trained computer-vision neural network to extract feature locations from the image frames;
analyze the feature locations using a machine-learning impairment-detection model trained to identify at least one of facial asymmetry, limb-trajectory deviation, or gaze-vector discontinuity to detect one or more of limb impairment, gaze impairment, or facial palsy;
assign a respective numeric score to each of the detected one or more limb impairment, gaze impairment, or facial palsy;
receive, from the microphone, an audio signal from the microphone, the audio signal capturing a voice of the patient that is synchronized with the image frames;
analyze the audio signal to detect aphasia or agnosia using features extracted by a Natural Language Processing (NLP) module and machine-learning classifier;
assign a respective numeric score to the detected aphasia or agnosia;
generate a stroke score comprising a sum of the respective numeric scores for the detected one or more of limb impairment, gaze impairment, or facial palsy and the detected aphasia or agnosia; and
output the stroke score via a display or network interface.
The limitations shown underlined above, given the broadest reasonable interpretation, cover the abstract idea of certain methods of organizing human activity because they recite managing personal behavior or relationships or interactions between people (i.e., social activities, teaching, and following rules or instructions) and/or a mental process that a neurologist would follow when testing a patient for nervous system malfunctions (in this case, receiving images, analyzing the images for impairment, assigning a numeric score, receiving audio, analyzing the audio signal, assigning a numeric score based on the audio, and generating an overall stroke score); see, e.g., MPEP 2106.04(a)(2). Any limitations not identified above as part of the abstract idea are deemed "additional elements" and will be discussed in further detail below.

Dependent Claims 2-3, 6, 12-15, 18, 25-26, 29, and 32-35 include other limitations directed to the abstract idea. For example, Claims 2, 25, and 32 recite diagnosing the patient with a stroke based on the stroke score. Claims 3, 26, and 33-34 recite assessing a severity of the stroke based on the stroke score and recommending a triage action. Claims 6, 29, and 35 recite recommending a treatment for the patient. Claims 12 and 13 recite analyzing the sequence of images and audio signal, assigning the respective scores, and generating the stroke score. Claim 14 recites applying a force, vibration, or motion to the patient. Claim 15 recites the types of stroke score that can be implemented. Claim 18 recites extracting features from the sequence of images. These limitations only serve to further narrow the abstract idea, and a claim may not preempt abstract ideas, even if the judicial exception is narrow; see, e.g., MPEP 2106.04. Additionally, any limitations in the dependent claims not addressed above are deemed additional elements to the abstract idea and will be further addressed below.
Hence, dependent Claims 2-3, 6, 12-15, 18, 25-26, 29, and 32-35 are nonetheless directed to fundamentally the same abstract idea as independent Claims 1, 17, and 31.

Step 2A Analysis - Prong Two: The abstract idea of Claims 1, 17, and 31 is not integrated into a practical application because the additional elements (i.e., the non-underlined limitations above: the imaging device, microphone, computing device, memory, the outputted audio stream being digitized, computer-vision neural network, machine-learning impairment-detection model, Natural Language Processing, display, network interface, and processor of Claim 1; the artificial intelligence modules, computing device, memory, computer-vision neural network, audio-processing neural network, and processor of Claim 17; and the imaging device, microphone, machine-learning impairment model, NLP, and computing device of Claim 31) are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply an exception using generic computer parts. For example, Applicant's specification explains that the system is a computing device (e.g., computing device 4100 of FIG. 4), such as a mobile computing device, for example, a smartphone, a tablet computer, or a laptop computer…a mobile computing device is deployable in the field, for example, by EMT…(Applicant's specification, ¶ 0066). Computing device 400 typically includes at least one processing unit 406 and system memory 404. Depending on the exact configuration and type of computing device, system memory 404 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two [0115]. In some implementations, the step of analyzing the sequence of images includes a machine learning model [0008].
As a non-limiting example, the patient's response to the physical stimulus can be verbal, such as "yes" or "I feel it", and natural language processing can be applied to the verbal response to determine whether the response is affirmative or negative [0083].

Further, the additional elements of the imaging device and the microphone were found to generally link the abstract idea to a technical environment or field of use. For example, Applicant's specification explains that the system optionally further includes an imaging device for capturing the sequence of images and a microphone for capturing the audio signal (Applicant's specification, ¶ 0020). The imaging device and microphone are operably coupled to the computing device, for example, by one or more communication links [0065]. MPEP 2106.04(d)(1) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on the abstract idea. Therefore, Claims 1, 17, and 31 are directed to an abstract idea without a practical application.

Dependent Claims 2-3, 6, 12-14, 18-19, 24-26, and 29 recite additional elements, but these limitations amount to no more than mere instructions to apply an exception. Claims 2 and 25 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to diagnose the patient with a stroke based on the stroke score. Claims 3 and 26 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to assess the severity of a stroke and recommend a triage action.
Claims 6 and 29 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to recommend a treatment for a patient. Claim 12 recites a new additional element of an expert system and specifies that the expert system analyzes the audio signal and images, assigns respective scores, and generates a stroke score. Claim 13 recites a new additional element of a trained machine learning model and specifies that the model analyzes the audio signal and images, assigns respective scores, and generates a stroke score. Claim 14 recites a new additional element of a haptic device and the previously recited memory/processor and specifies that the haptic device applies a force, vibration, or motion to the patient and that the processor controls the haptic device. Claim 24 recites a new additional element of an imaging device for capturing images and a microphone for capturing an audio signal. However, these additional elements are described only at a high level of generality and are used in their expected fashion, so they do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on the abstract idea. These limitations amount to no more than mere instructions to apply an exception and, hence, do not integrate the aforementioned abstract idea into a practical application.

Step 2B Analysis: The claims, whether considered individually or in combination, do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of the imaging device, microphone, computing device, memory, the outputted audio stream being digitized, computer-vision neural network, machine-learning impairment-detection model, Natural Language Processing, display, network interface, and processor of Claim 1; the artificial intelligence modules, computing device, memory, computer-vision neural network, audio-processing neural network, and processor of Claim 17; and the imaging device, microphone, machine-learning impairment model, NLP, and computing device of Claim 31 amount to no more than mere instructions to apply an exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). MPEP 2106.05(I)(A) indicates that merely stating "apply it" or an equivalent with the abstract idea cannot provide an inventive concept ("significantly more"). Further, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the microphone and the imaging device were considered to generally link the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the "significantly more" analysis and has been found insufficient to provide significantly more. MPEP 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, Claims 1, 17, and 31 are not patent eligible.

Dependent Claims 15 and 32-35 are similarly rejected because they further define/narrow the abstract idea and do not provide any additional elements.
Claim 15 narrows the abstract idea by specifying the types of stroke score that can be implemented. Claim 32 narrows the abstract idea by reciting diagnosing the patient with a stroke based on the stroke score. Claims 33-34 recite assessing a severity of the stroke based on the stroke score and recommending a triage action. Claim 35 recites recommending a treatment for the patient.

Dependent Claims 2-3, 6, 25-26, and 29 recite previously cited additional elements, which do not confer eligibility for the reasons stated above, and further narrow the abstract idea. Claims 2 and 25 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to diagnose the patient with a stroke based on the stroke score. Claims 3 and 26 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to assess the severity of a stroke and recommend a triage action. Claims 6 and 29 recite the previously recited memory and processor and specify that the memory has further instructions that the processor uses to recommend a treatment for a patient.

Claims 12-14 and 24 recite new additional elements. Claim 12 recites a new additional element of an expert system and specifies that the expert system analyzes the audio signal and images, assigns respective scores, and generates a stroke score. Claim 13 recites a new additional element of a trained machine learning model and specifies that the model analyzes the audio signal and images, assigns respective scores, and generates a stroke score. Claim 14 recites a new additional element of a haptic device and the previously recited memory/processor and specifies that the haptic device applies a force, vibration, or motion to the patient and that the processor controls the haptic device. Claim 24 recites a new additional element of an imaging device for capturing images and a microphone for capturing an audio signal.
Hence, Claims 2-3, 6, 12-14, 24-26, and 29 do not include any additional elements that amount to "significantly more" than the judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when the elements are considered individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation. Thus, Claims 1-3, 6, 12-15, 17-19, 24-26, 29, and 31-35 are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 12-13, 15, 26, 29, and 31-35 are rejected under 35 U.S.C.
103 as being unpatentable over O'Donovan et al. (US 20210202090 A1) in view of Tran et al. (US 20070276270 A1) and Shaked et al. (US 20210312915 A1).

Regarding Claim 1, O'Donovan discloses the following:

A system, comprising: (O'Donovan discloses a system for automated health condition scoring may include at least one communication interface to receive an audio stream and a video stream from an endpoint in proximity to a patient [0007].)

an imaging device configured to capture image frames representative of the state of a patient; (O'Donovan discloses the video receiver 113 (e.g., camera) in proximity to the patient 108 may capture one or more video frames 202 showing the patient's face, including, in one embodiment, at least the patient's eyes and lips. The video frames 202 may include a series of 2D or 3D still images (i.e., key frames)…[0060].)

a microphone… (O'Donovan discloses the patient endpoint 110 may include a patient-side audio receiver 112 (e.g., microphone)…[0057].)

and a computing device comprising a processor and a memory operably coupled to the processor, wherein the memory has computer-executable instructions stored thereon that, when executed by the processor, cause the processor to: (O'Donovan discloses the computer system 1600 includes one or more computing components in communication via a bus 1602. In one implementation, the computer system 1600 includes one or more processors 1614. A memory 1608 may include one or more memory cards and control circuits (not depicted), or other forms of removable memory, and may store various software applications including computer executable instructions, that when run on the processor 1614, implement the methods and systems set out herein [0186-187].)
receive a sequence of images from the imaging device, the sequence of images capturing a state of a patient; (O'Donovan discloses the patient endpoint 110 may include a patient-side audio receiver 112 (e.g., microphone) and a patient-side video receiver (e.g., camera) 113 [0057]. In one embodiment, the video receiver 113 (e.g., camera) in proximity to the patient 108 may capture one or more video frames 202 showing the patient's face, including, in one embodiment, at least the patient's eyes and lips. The video frames 202 may include a series of 2D or 3D still images (i.e., key frames) or may include a video stream compressed using a proprietary or standard compression scheme… [0060]. The Examiner interprets the video capturing the patient's face as showing the state of the patient.)

the sequence of images being input to a trained computer-vision neural network to extract feature locations from the image frames; (O'Donovan discloses the facial landmark detector 204 may include (or have access to via the communication network 106) a machine learning system 213, such as a deep learning neural network [0064]. The facial keypoints 205 may be annotated in one or both of 2D and 3D coordinates. In one embodiment, the facial landmark detector 204 is capable of detecting sixty-eight (68) or more different facial keypoints 205 on a human face. Moreover, the facial landmark detector 204 may be able to predict both the 2D and 3D facial keypoints 205 in a face [0065]. The Examiner interprets the facial keypoints as feature locations.)
analyze the feature locations using a machine-learning impairment-detection model trained to identify at least one of facial asymmetry, limb-trajectory deviation, or gaze-vector discontinuity to detect one or more of limb impairment, gaze impairment, or facial palsy; (O'Donovan discloses the health condition is a stroke, and the at least two different AI detectors are selected from a group consisting of a facial droop detector, an ataxia detector, and slurred speech detector [0011]. The system 600 further includes an ataxia detector 605 that automatically provides the stroke scorer 608 with a second stroke likelihood 606B based on a measurement of the patient's limb weakness… the measurement of limb weakness may be a function of the movement velocity of a particular limb of the patient 108 over a time interval during which the patient 108 is instructed to keep the limb motionless [0104]. The asymmetry detector 601 may include a facial landmark detector 204, which provides a set of facial keypoints 205 to a facial droop detector 206, which, in turn, provides the degree (and/or rate of change) of facial droop 207 to an asymmetry scorer 704…the asymmetry scorer 704 may operate in much the same way as the stroke scorer 208 of FIG. 2…i.e., facial asymmetry…the asymmetry scorer 704 may include or have access to a machine learning system [0116].)

assign a respective numeric score to each of the detected one or more limb impairment, gaze impairment, or facial palsy; (O'Donovan discloses the asymmetry detector may process the video stream to automatically determine a first stroke likelihood based on a measurement of facial droop. Concurrently…with the asymmetry detector, the ataxia detector may process the video stream to automatically determine a second stroke likelihood based on a measurement of limb weakness [0012].
The asymmetry detector 601 may include a facial landmark detector 204, which provides a set of facial keypoints 205 to a facial droop detector 206, which, in turn, provides the degree (and/or rate of change) of facial droop 207 to an asymmetry scorer 704 [0116].)

receive, from the microphone, an audio signal from the microphone, the audio signal capturing a voice of the patient (O'Donovan discloses the patient endpoint 110 may include a patient-side audio receiver 112 (e.g., microphone)…The physician endpoint 124 may likewise include a physician-side audio receiver 126…[0057]. The system 200 may further include a speech-to-text unit 216, which may convert spoken audio communicated between the patient 108 and/or physician 118 via the communication interface 203 into readable text 218. The system may distinguish among participants using voice recognition techniques [0075].)

and generate a stroke score comprising a sum of the respective numeric scores for the detected one or more of limb impairment, gaze impairment, or facial palsy… (O'Donovan discloses the asymmetry detector may process the video stream to automatically determine a first stroke likelihood based on a measurement of facial droop. Concurrently…with the asymmetry detector, the ataxia detector may process the video stream to automatically determine a second stroke likelihood based on a measurement of limb weakness. Concurrently…with the asymmetry detector and/or the ataxia detector, the dysarthria detector may process the audio stream to automatically determine a third stroke likelihood based on a measurement of slurred speech. After the first, second, and third stroke likelihoods are determined, a stroke scorer may automatically determine a stroke score for the patient based on a combination of the first, second, and third stroke likelihoods. The display interface may then display an indication of the stroke score…[0012-13].)

and output the stroke score via a display or network interface.
(O'Donovan discloses a stroke scorer may automatically determine a stroke score for the patient based on a combination of the first, second, and third stroke likelihoods. The display interface may then display an indication of the stroke score to a physician [0013].)

O'Donovan does not disclose detecting aphasia or agnosia from the audio signal, which is met by Tran:

a microphone configured to output a digitized audio stream; (Tran teaches the CPU is coupled via the bus to processor wake-up logic, one or more accelerometers to detect sudden movement in a patient, and an ADC 102 which receives speech input from the microphone. The ADC converts the analog signal produced by the microphone into a sequence of digital values representing the amplitude of the signal produced by the microphone at a sequence of evenly spaced times [0247].)

analyze the audio signal to detect aphasia or agnosia using…machine-learning classifier;…to the detected aphasia or agnosia; (Tran teaches the system can detect aphasia, including receptive aphasia and expressive aphasia. Aphasia is a cognitive disorder marked by an impaired ability to comprehend (receptive aphasia) or express (expressive aphasia) language. Exemplary embodiments are disclosed for detecting receptive aphasia by displaying text or playing verbal instructions to the user, followed by measuring the correctness and/or time delay of the response from the user [0441]. These data driven analyzers may incorporate a number of models such as…regression methods, and engineered (artificial) neural networks [0315].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O'Donovan to incorporate the detection of aphasia as taught by Tran.
This modification would create a system and method capable of early identification of stroke to prevent deaths (see Tran, ¶ 0002-3, 0010).

O'Donovan and Tran do not teach synchronizing images with audio or the use of natural language processing, which is met by Shaked:

audio…that is synchronized with the image frames; (Shaked teaches the sync engine 130 may be configured to apply various algorithms such as, as examples and without limitation, deep-learning, artificial intelligence (AI), machine learning, and the like. In an embodiment, the sync engine 130 may be configured to correlate audio and video providing, as an example, a synchronization of a human voice with recorded lip movement [0049].)

analyze the audio…using features extracted by a Natural Language Processing (NLP) module… (Shaked teaches the ASR module 161 outputs text for each detected voice channel, i.e., performs speech-to-text. The engine 160 further includes an NLP module 162 configured to apply a natural language processing (NLP) technique to parse the text provided by the ASR module 161 [0044].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O'Donovan to incorporate synchronizing images with audio and the use of natural language processing as taught by Shaked. This modification would create a system which allows for speaker isolation, direction-based speech separation, and context-driven speech applications (see Shaked, ¶ 0004).

Regarding Claim 31, this claim recites limitations that are substantially similar to those recited in Claim 1 above; thus, the same rejection applies.
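At its simplest, the synchronization Shaked describes reduces to aligning timestamps across the digitized audio stream and the video frames. A minimal sketch, assuming a shared start time; the sample rate and frame rate are assumed values, not taken from the reference:

```python
def frame_for_audio_sample(sample_index: int, sample_rate: int = 16000,
                           fps: float = 30.0) -> int:
    """Map a digitized-audio sample index to the video frame captured at the
    same instant, assuming both streams began recording together."""
    t_seconds = sample_index / sample_rate
    return int(t_seconds * fps)

# Sample 16000 at 16 kHz is t = 1.0 s, i.e., frame 30 at 30 fps.
print(frame_for_audio_sample(16000))  # 30
print(frame_for_audio_sample(8000))   # 15
```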
O'Donovan further discloses:

A computer-implemented method for automated detection of stroke, (O'Donovan discloses a method 1200 for automated stroke scoring based on a plurality of inputs generated by the detectors 601, 605, 607 of FIG. 6 [0149]. The stroke score 209 may be a probability, a percentage chance, or other indicator of likelihood…For example, an angle of zero or approximately zero may indicate a high degree of facial symmetry, for which the stroke scorer 208 might determine a low stroke score 209 suggesting that a stroke is unlikely, whereas an angle exceeding a threshold…may be given a moderate to high stroke score 209 indicating that the patient 108 likely experienced (or is undergoing) a stroke [0068].)

…analyzing the sequence of images using a machine-learning impairment model trained to detect facial asymmetry… (O'Donovan discloses the video receiver 113 (e.g., camera)…may capture one or more video frames 202 showing the patient's face, including, in one embodiment, at least the patient's eyes and lips. The video frames 202 may include a series of 2D or 3D still images (i.e., key frames)…[0060]. The health condition is a stroke, and the at least two different AI detectors are selected from a group consisting of a facial droop detector,… [0011]. The asymmetry detector 601 may include a facial landmark detector 204, which provides a set of facial keypoints 205 to a facial droop detector 206, which, in turn, provides the degree (and/or rate of change) of facial droop 207 to an asymmetry scorer 704…the asymmetry scorer 704 may operate in much the same way as the stroke scorer 208 of FIG. 2…i.e., facial asymmetry…the asymmetry scorer 704 may include or have access to a machine learning system [0116].)

Regarding Claim 2, O'Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 1 above.
O'Donovan further discloses:

wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to diagnose the patient with a stroke based on the stroke score. (O'Donovan discloses the stroke score 209 may be a probability, a percentage chance, or other indicator of likelihood, and/or a function of the calculated angle with respect to threshold 211 and/or other inputs or parameters. For example, an angle of zero or approximately zero may indicate a high degree of facial symmetry, for which the stroke scorer 208 might determine a low stroke score 209 suggesting that a stroke is unlikely, whereas an angle exceeding a threshold 211 of 2.5 degrees may be given a moderate to high stroke score 209 indicating that the patient 108 likely experienced (or is undergoing) a stroke [0068].)

Regarding Claim 32, this claim recites limitations that are substantially similar to those recited in Claim 2 above; thus, the same rejection applies.

Regarding Claim 3, O'Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 2 above. O'Donovan further discloses:

wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to assess a severity of the stroke based on the stroke score,… (O'Donovan discloses the stroke score 209 may include an indication of stroke severity based on the rate of change of the degree of facial droop 207 as determined by the facial droop detector 206. For example, if, during the course of a consultation, the patient's facial droop 207 worsens, the stroke scorer 208 may indicate that the stroke is severe and/or assess the severity of the stroke quantitatively based on the rate of change [0070].)

O'Donovan does not disclose the recommendation of a triage action, which is met by Tran:

…and recommend a triage action.
(Tran teaches the expert system recommends treatments based on the frequency or reoccurrence of similar conditions/treatment in a population. For example, strep may be determined where a sibling has strep, and the same conditions are manifested in the patient being examined, thus leading to the diagnosis of strep without having a test performed to corroborate the diagnosis [0482]. The system can differentiate pathological from benign heart murmurs, detect cardiovascular diseases or conditions that might otherwise escape attention, recommend that the patient go through for a diagnostic study such as an echocardiography or to a specialist, monitor the course of a disease and the effects of therapy, decide when additional therapy or intervention is necessary, and providing a more objective basis for the decision(s) made [0351].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the recommendation of a triage action as taught by Tran. This modification would create a system and method capable of early treatment for stroke to prevent deaths (see Tran, ¶ 0002-3, 0010). Regarding Claims 33 and 34, these claims recite limitations that are substantially similar to those recited in Claim 3 above; thus, the same rejection applies. Regarding Claim 6, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 2 above. O’Donovan further discloses: wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to… (O'Donovan discloses the computer system 1600 includes one or more computing components in communication via a bus 1602. In one implementation, the computer system 1600 includes one or more processors 1614.
A memory 1608 may include one or more memory cards and control circuits (not depicted), or other forms of removable memory, and may store various software applications including computer executable instructions, that when run on the processor 1614, implement the methods and systems set out herein [0186-187].) O’Donovan does not disclose the recommendation of a treatment which is met by Tran: …recommend a treatment for the patient. (Tran teaches the expert system recommends treatments based on the frequency or reoccurrence of similar conditions/treatment in a population [0482].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the recommendation of a treatment as taught by Tran. This modification would create a system and method capable of early treatment for stroke to prevent deaths (see Tran, ¶ 0002-3, 0010). Regarding Claim 35, this claim recites limitations that are substantially similar to those recited in Claim 6 above; thus, the same rejection applies. Regarding Claim 12, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 1 above. O’Donovan further discloses: wherein the…system is configured to analyze the sequence of images, analyze the audio signal, assign the respective scores, and/or generate the stroke score (O'Donovan discloses the asymmetry detector may process the video stream… the dysarthria detector may process the audio stream to automatically determine a third stroke likelihood based on a measurement of slurred speech. After the first, second, and third stroke likelihoods are determined, a stroke scorer may automatically determine a stroke score for the patient based on a combination of the first, second, and third stroke likelihoods. 
The display interface may then display an indication of the stroke score… [0012-13].) O’Donovan does not disclose the use of an expert system which is met by Tran: further comprising an expert system,…the expert system is configured to analyze… (Tran teaches the expert system recommends treatments based on the frequency or reoccurrence of similar conditions/treatment in a population. For example, strep may be determined where a sibling has strep, and the same conditions are manifested in the patient being examined, thus leading to the diagnosis of strep without having a test performed to corroborate the diagnosis. A person having been diagnosed with a sinus infection would typically be prescribed a strong antibiotic. Using an expert system to assist in diagnosing and prescribing treatment, the system can identify and propose the treatment of generic or standard problems in a streamlined manner and allowing professionals to focus on complex problems [0482].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the use of an expert system as taught by Tran. This modification would create a system and method capable of identifying and proposing the treatment of problems in a streamlined manner and allowing professionals to focus on complex problems. (see Tran, ¶ 0492). Regarding Claim 13, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 1 above. O’Donovan further discloses: further comprising a trained machine learning model, wherein the trained machine learning model is configured to analyze the sequence of images, analyze the audio signal, assign the respective scores, and/or generate the stroke score. 
(O'Donovan discloses the facial keypoint detector may include or make use of a machine learning system in automatically identifying the set of facial keypoints, which may include a deep learning neural network [0016]. The stroke scorer 208 may include (or have access to via the communication network 106 of FIG. 1) a machine learning system 404, such as a deep learning neural network, which may be the same as (or separate from) the machine learning system 213 shown in FIG. 2. The machine learning system 404 may combine various thresholds 211 or other inputs 402 with the degree and/or rate of change of facial droop 207 in order to determine the stroke score 209 [0088]. The body keypoints 714 may be identified using a machine learning system 213, such as a neural network, in the same manner that the facial keypoints 205 were determined in FIG. 2. The machine learning system 213 may be a component of the ataxia detector 605 or accessed remotely via the communication interface 603, as shown [0120].) Regarding Claim 15, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 1 above. O’Donovan further discloses: wherein the stroke score is a Rapid Arterial occlusion Evaluation (RACE), National Institutes of Health Stroke Score (NIHSS), Los Angeles Motor Scale (LAMS), or Cincinnati Stroke Scale. (O'Donovan discloses the degree and/or rate of change of facial droop 207 may only be one of a plurality of inputs based on the National Institutes of Health Stroke Scale (NIHSS). For example, the stroke scorer 208 may also receive as an input the patient's level of dysarthria (i.e., slurred or slow speech) or ataxia (i.e., lack of voluntary coordination of muscle movements that can include gait abnormality, and abnormalities in eye movements), each of which may be used to formulate the stroke score 209 in certain embodiments [0069].) Regarding Claim 26, O’Donovan and Shaked teach the limitations as seen in the rejection of Claim 17 above. 
O’Donovan further discloses: wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to assess a severity of the stroke based on the stroke score,… (O'Donovan discloses the stroke score 209 may include an indication of stroke severity based on the rate of change of the degree of facial droop 207 as determined by the facial droop detector 206. For example, if, during the course of a consultation, the patient's facial droop 207 worsens, the stroke scorer 208 may indicate that the stroke is severe and/or assess the severity of the stroke quantitatively based on the rate of change [0070].) O’Donovan does not disclose the recommendation of a triage action, which is met by Tran: …and recommend a triage action. (Tran teaches the expert system recommends treatments based on the frequency or reoccurrence of similar conditions/treatment in a population. For example, strep may be determined where a sibling has strep, and the same conditions are manifested in the patient being examined, thus leading to the diagnosis of strep without having a test performed to corroborate the diagnosis [0482]. The system can differentiate pathological from benign heart murmurs, detect cardiovascular diseases or conditions that might otherwise escape attention, recommend that the patient go through for a diagnostic study such as an echocardiography or to a specialist, monitor the course of a disease and the effects of therapy, decide when additional therapy or intervention is necessary, and providing a more objective basis for the decision(s) made [0351].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the recommendation of a triage action as taught by Tran.
This modification would create a system and method capable of early treatment for stroke to prevent deaths (see Tran, ¶ 0002-3, 0010). Regarding Claim 29, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 26 above. O’Donovan further discloses: wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to… (O'Donovan discloses the computer system 1600 includes one or more computing components in communication via a bus 1602. In one implementation, the computer system 1600 includes one or more processors 1614. A memory 1608 may include one or more memory cards and control circuits (not depicted), or other forms of removable memory, and may store various software applications including computer executable instructions, that when run on the processor 1614, implement the methods and systems set out herein [0186-187].) O’Donovan does not disclose the recommendation of a treatment which is met by Tran: …recommend a treatment for the patient. (Tran teaches the expert system recommends treatments based on the frequency or reoccurrence of similar conditions/treatment in a population [0482].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the recommendation of a treatment as taught by Tran. This modification would create a system and method capable of early treatment for stroke to prevent deaths (see Tran, ¶ 0002-3, 0010). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over O'Donovan, Tran, and Shaked as seen in the rejection of Claim 1 above, further in view of Tian et al. (US 20200126297 A1). Regarding Claim 14, O’Donovan, Tran, and Shaked teach the limitations as seen in the rejection of Claim 1 above. 
O’Donovan, Tran, and Shaked do not teach the following limitations met by Tian: further comprising a haptic device configured to apply a force, a vibration, or a motion to the patient, wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to control the haptic device. (Tian teaches the one or more input devices include a haptic-enabled input device 111 …that generates force, motion, and/or texture feedback to the hand(s) of the remote expert 108 in accordance with simulated physical characteristics and physical interactions that occurs at a location on the 3D human body mesh of the patient's body 197 that corresponds to the current movement and position inputs provided via the input device 111 [0057].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate the use of a haptic device to apply pressure and motion to the patient as taught by Tian. This modification would create a system and method which allows users to accomplish personalized electric muscle stimulation (see Tian, ¶ 0006). Claims 17-19 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over O’Donovan et al. (US 20210202090 A1) in view of Shaked et al. (US 20210312915 A1). Regarding Claim 17, O’Donovan discloses: A system, comprising: one or more artificial intelligence (AI) models, including at least one computer-vision neural network and at least one audio-processing neural network; (O'Donovan discloses the disclosed techniques may employ artificial intelligence (AI) using, for example, a deep learning neural network, in order to detect facial asymmetries of a patient consistent with stroke [0053].
The facial keypoint detector may include or make use of a machine learning system in automatically identifying the set of facial keypoints, which may include a deep learning neural network [0016]. The slurred speech scorer may determine the third stroke likelihood by comparing a first set of audio coefficients produced while the patient reads or repeats a pre-defined text with a second set of audio coefficients …the slurred speech scorer comprises or accesses a deep learning neural network [0022].) and a computing device comprising a processor and a memory operably coupled to the processor, wherein the memory has computer-executable instructions stored thereon that, when executed by the processor, cause the processor to: (O'Donovan discloses the computer system 1600 includes one or more computing components in communication via a bus 1602. In one implementation, the computer system 1600 includes one or more processors 1614. A memory 1608 may include one or more memory cards and control circuits (not depicted), or other forms of removable memory, and may store various software applications including computer executable instructions, that when run on the processor 1614, implement the methods and systems set out herein [0186-187].) receive a sequence of images, the sequence of images capturing a state of a patient; (O'Donovan discloses the patient endpoint 110 may include… a patient-side video receiver (e.g., camera) 113 [0057]. In one embodiment, the video receiver 113 (e.g., camera) in proximity to the patient 108 may capture one or more video frames 202 showing the patient's face, including, in one embodiment, at least the patient's eyes and lips. The video frames 202 may include a series of 2D or 3D still images (i.e., key frames) or may include a video stream compressed using a proprietary or standard compression scheme… [0060]. The Examiner interprets the video capturing the patient's face as being the state of the patient.)
receive an audio signal, the audio signal capturing a voice of the patient; (O'Donovan discloses the patient endpoint 110 may include a patient-side audio receiver 112 (e.g., microphone)…The physician endpoint 124 may likewise include a physician-side audio receiver 126… [0057]. The system 200 may further include a speech-to-text unit 216, which may convert spoken audio communicated between the patient 108 and/or physician 118 via the communication interface 203 into readable text 218. The system may distinguish among participants using voice recognition techniques [0075].) input the sequence of images and the audio signal into the one or more AI models; (O'Donovan discloses the system may further include at least two different artificial intelligence (“AI”) detectors to respectively process one or both of the audio stream and the video stream using machine learning to automatically determine at least two respective likelihoods of the patient having a health condition [0007].) and receive a stroke score, the stroke score being predicted by the one or more AI models. (O'Donovan discloses the facial keypoint detector may include or make use of a machine learning system in automatically identifying the set of facial keypoints, which may include a deep learning neural network [0016]. The stroke scorer 208 may include…a machine learning system 404, such as a deep learning neural network, which may be the same as (or separate from) the machine learning system 213 shown in FIG. 2. The machine learning system 404 may combine various thresholds 211 or other inputs 402 with the degree and/or rate of change of facial droop 207 in order to determine the stroke score 209 [0088]. The body keypoints 714 may be identified using a machine learning system 213, such as a neural network, in the same manner that the facial keypoints 205 were determined in FIG. 2.
The machine learning system 213 may be a component of the ataxia detector 605 or accessed remotely via the communication interface 603, as shown [0120].) the stroke score being representative of a detected one or more of limb impairment, gaze impairment, facial palsy, aphasia or agnosia… (O'Donovan discloses concurrently…with the asymmetry detector, the ataxia detector may process the video stream to automatically determine a second stroke likelihood based on a measurement of limb weakness…the dysarthria detector may process the audio stream to automatically determine a third stroke likelihood based on a measurement of slurred speech. After the first, second, and third stroke likelihoods are determined, a stroke scorer may automatically determine a stroke score for the patient based on a combination of the first, second, and third stroke likelihoods. The display interface may then display an indication of the stroke score… [0012-13].) O’Donovan does not disclose synchronizing images with audio, which is met by Shaked: configured to synchronize the sequence of images and the audio signal;…determined by analyzing the synchronized sequence of images and audio data. (Shaked teaches the sync engine 130 may be configured to apply various algorithms such as, as examples and without limitation, deep-learning, artificial intelligence (AI), machine learning, and the like. In an embodiment, the sync engine 130 may be configured to correlate audio and video providing, as an example, a synchronization of a human voice with recorded lip movement [0049].) It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the systems and methods for receiving audio and image data and using it to determine a stroke score as disclosed by O’Donovan to incorporate synchronizing images with audio as taught by Shaked.
This modification would create a system which allows for speaker isolation, direction-based speech separation, and context-driven speech applications (see Shaked, ¶ 0004). Regarding Claim 18, O’Donovan and Shaked teach the limitations as seen in the rejection of Claim 17 above. O’Donovan further discloses: wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to extract one or more features from the sequence of images and the audio signal, (O'Donovan discloses the computer system 1600 includes one or more processors 1614. A memory 1608 may include one or more memory cards…[0186-187].) and wherein the step of inputting the sequence of images and the audio signal into the one or more AI models comprises inputting the extracted features into the one or more AI models (O’Donovan discloses the system may further include at least two different artificial intelligence (“AI”) detectors to respectively process one or both of the audio stream and the video stream using machine learning to automatically determine at least two respective likelihoods of the patient having a health condition [0007].) Regarding Claim 19, O’Donovan and Shaked teach the limitations as seen in the rejection of Claim 17 above. O’Donovan further discloses: wherein the one or more AI models are an expert system or one or more trained machine learning models. (O'Donovan discloses the system may further include at least two different artificial intelligence (“AI”) detectors to respectively process one or both of the audio stream and the video stream using machine learning to automatically determine at least two respective likelihoods of the patient having a health condition [0007].) Regarding Claim 24, O’Donovan and Shaked teach the limitations as seen in the rejection of Claim 17 above.
O’Donovan further discloses: further comprising an imaging device for capturing the sequence of images and a microphone for capturing the audio signal (O’Donovan discloses the patient endpoint 110 may include a patient-side audio receiver 112 (e.g., microphone) and a patient-side video receiver (e.g., camera) 113 [0057].) Regarding Claim 25, O’Donovan and Shaked teach the limitations as seen in the rejection of Claim 17 above. O’Donovan further discloses: wherein the memory has further computer-executable instructions stored thereon that, when executed by the processor, cause the processor to diagnose the patient with a stroke based on the stroke score. (O'Donovan discloses the stroke score 209 may be a probability, a percentage chance or other indicator of likelihood, and/or a function of the calculated angle with respect to threshold 211 and/or other inputs or parameters. For example, an angle of zero or approximately zero may indicate a high degree of facial symmetry, which the stroke scorer 208 might determine a low stroke score 209 suggesting that a stroke is unlikely, whereas an angle exceeding a threshold 211 of 2.5 degrees may be given a moderate to high stroke score 209 indicating that the patient 108 likely experienced (or is undergoing) a stroke [0068].) Response to Arguments Regarding rejections under 35 USC 101 to Claims 1-3, 6, 12-15, 17-19, 24-26, 29, 31-35, Applicant’s arguments have been considered but are not persuasive. The rejection has been updated in light of the amendments above. Applicant argues the rejection is premised on an oversimplified characterization of the claims as directed to merely “recognizing stroke symptoms” or “analyzing patient behavior” which the Examiner equates with abstract mental processes. The amended claims expressly recite a series of concrete and technologically rooted operations that cannot be performed mentally and are not abstract in nature.
For example, the claims require an imaging device configured to capture image frames representative of a state of a patient, a microphone configured to output audio samples, a computer-vision neural network trained to extract feature locations from the image frames, and a machine-learning impairment-detection model trained to identify facial asymmetry, limb trajectory deviation, and gaze-vector discontinuity. The claims further require applying a trained machine-learning classifier to detect aphasia and agnosia. These technological operations cannot feasibly be performed by a human mind and are not mere automation of a mental step (see p. 1-2 of Applicant’s Remarks). Regarding (a), Examiner respectfully disagrees. In this case, the example of “a mental process that a neurologist should follow when testing a patient for nervous system malfunctions” is a court case (In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982)) which was also grouped into the certain methods of organizing human activity abstract ideas grouping. The Examiner did not label this case in the mental processes grouping, but as an example of a court case grouped in the same abstract idea category of certain methods of organizing human activity. The Examiner notes that MPEP 2106.04(a)(2)(II) states that certain methods of organizing human activity encompass activity of a single person, activity that involves multiple people, and certain activity between a person and a computer. The abstract idea is identified in the 101 rejection above and does not include any of the additional elements, and these steps are recited at a high level of generality such that a person could follow them to carry out the steps using a generic computer. Applicant argues under Step 2A Prong 1, the Examiner's position that the claims are directed to an abstract idea results from improperly over-generalizing the claims and disregarding the technical requirements recited therein.
Courts have repeatedly cautioned that characterizing claims at too high a level of abstraction is improper, and that claims must be evaluated based on what they actually recite. When properly considered, Applicant respectfully submits that the claims are directed to a specific technological improvement in the processing of multimodal medical sensor data and not to a mental process or fundamental human activity. Regarding (b), Examiner respectfully disagrees. The 101 rejection above explicitly underlines the abstract idea taken from the wording of the independent claim, so the abstract idea does not over-generalize the claims – it is a part of the claim. The technological aspects are not a part of the abstract idea and are deemed additional elements which are analyzed in Step 2A Prong 2 and Step 2B. The Step 2A Prong One analysis does not consider any additional elements when making the grouping and merely determines that an abstract idea is present. Applicant argues under Step 2A, Prong 2, the claims clearly integrate the subject matter into a practical application that improves computer technology itself. The claims address longstanding challenges in real-time neurological evaluation, including synchronizing heterogeneous sensor streams, extracting 3D facial and limb features, and classifying speech impairments using spectral representations. These are computational problems that arise uniquely within the context of computer-implemented medical analysis and therefore fall squarely within the holdings of Enfish, McRO, CardioNet, and DDR Holdings, each of which held that claims improving the function of a computer or solving a technology-centric problem are patent-eligible (p. 2). Regarding (c), the Examiner respectfully disagrees.
If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art (MPEP § 2106.05(a)). Based on the specification, the technology claimed appears to automate and standardize the calculation of stroke scales without human intervention required (see Applicant’s specification, ¶ 0022, 0053-54). Efficiency is not enough to amount to a practical application via an improvement to computer or technology under Step 2A Prong 2 (see MPEP § 2106.05(a)(I) examples that the courts have indicated may not be sufficient to show an improvement in computer-functionality: ii. accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir.
2016)) (also see MPEP § 2106.05(f)(2) stating “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not provide an inventive concept (Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367 (Fed. Cir. 2015)), and, thus, the combination of the generic computer components do not provide a non-conventional and non-generic arrangement of known, conventional pieces; note this is applied to Step 2B as well as Step 2A Prong 2). Applicant argues under Step 2B, the claims recite significantly more than any alleged abstract concept. The use of depth-augmented landmark extraction, timestamp synchronization, non-linear impairment-severity mapping, and mel-spectrogram-based aphasia detection constitute unconventional and non-routine technological features that amount to an inventive concept. The Examiner has not demonstrated, nor could the cited art suggest, that these specialized operations were well-understood, routine, or conventional. To the contrary, the recited architecture represents a substantial improvement in computer technology and real-time multimodal signal processing. Regarding (d), Examiner respectfully disagrees. Examiner firstly notes that the claims do not recite the use of depth augmented landmark extraction – the claims merely receive images and input the images to “a trained computer-vision neural network to extract feature locations” which are then analyzed with a “machine learning impairment-detection model” (Claim 1). This recitation does not provide an inventive concept, but the instant claims seem more analogous to "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a machine learning model, using the model as a tool to perform an abstract idea (with the abstract idea being analyzing data and generating an output), as discussed in MPEP § 2106.05(f).
In the same way, the other recited “unconventional and non-routine” technological features of “timestamp synchronization, non-linear impairment-severity mapping, and mel-spectrogram-based aphasia detection” amount to no more than “apply it” with the judicial exception. Because the “apply it” analysis was used, the Examiner is not required to show how the additional elements are well-understood, routine, and conventional activities. Regarding rejections under 35 USC 103 to Claims 1-3, 6, 12-15, 17-19, 24-26, 29, 31-35, Applicant’s arguments have been considered and are persuasive; therefore, the rejection is withdrawn. However, in light of the amendments, a new rejection has been made, rejecting Claim 1 over O’Donovan in view of Tran and Shaked. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLIVIA R GEDRA whose telephone number is (571)270-0944. The examiner can normally be reached Monday - Friday 8:00am-5:00pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter H Choi, can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLIVIA R. GEDRA/
Examiner, Art Unit 3681

/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681

Prosecution Timeline

Jul 03, 2024
Application Filed
Sep 10, 2025
Non-Final Rejection — §101, §103
Dec 10, 2025
Response Filed
Feb 10, 2026
Final Rejection — §101, §103
Apr 13, 2026
Response after Non-Final Action


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
