Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,292

CYBER-PHYSICAL SYSTEM TO ENHANCE USABILITY AND QUALITY OF TELEHEALTH CONSULTATION

Final Rejection: §101, §103, §112
Filed: Jul 31, 2024
Examiner: GO, JOHN PHILIP
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Care Constitution Corp.
OA Round: 2 (Final)
Grant Probability: 35% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 0m
Grant Probability with Interview: 80%

Examiner Intelligence

Career Allow Rate: 35% (101 granted / 290 resolved; -17.2% vs TC avg)
Interview Lift: +45.7% among resolved cases with interview
Avg Prosecution: 4y 0m
Currently Pending: 56 applications
Total Applications: 346 (across all art units)

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 290 resolved cases.
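The headline figures above can be cross-checked from the raw counts. A minimal sketch, assuming the dashboard rounds its displayed percentages (so recomputed values may differ from the shown 35% / +45.7% by a few tenths of a point) and that "vs TC avg" deltas and "interview lift" are percentage-point differences:

```python
# Hypothetical recomputation of the examiner statistics shown above.
# Assumption: displayed rates are rounded; deltas are percentage points.
granted, resolved = 101, 290
career_rate = granted / resolved               # ~34.8%, displayed as 35%
with_interview = 0.80                          # allowance rate with interview
interview_lift = with_interview - career_rate  # ~+45 pts, displayed as +45.7%
implied_tc_avg = career_rate + 0.172           # dashboard shows -17.2% vs TC avg

print(f"career allow rate:  {career_rate:.1%}")
print(f"interview lift:     {interview_lift:+.1%}")
print(f"implied TC average: {implied_tc_avg:.1%}")
```

The small mismatch between the recomputed lift (~+45.2 points) and the displayed +45.7% is consistent with the 80% with-interview figure itself being rounded.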

Office Action

Grounds: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of the Claims

Claims 1-2, 4-15, 18-19, and 22-25 are currently pending. Claims 17 and 20-21 were canceled in the Claims filed on December 5, 2025.

Information Disclosure Statement

The information disclosure statement submitted on January 29, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by Examiner.

Claim Objections

Claim 1 is objected to for the following informality: as currently amended, Claim 1 cancels the "and" corresponding to the "practitioner display" and keeps the "and" corresponding to the "practitioner microphone." However, the "and" corresponding to the "practitioner microphone" is not the final "and" of the list of limitations comprising the practitioner system. This appears to be a typographical error, and in the interest of compact prosecution, Examiner will interpret Claim 1 as maintaining the "and" after the practitioner display and canceling the "and" after the practitioner microphone.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 22-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 22, Claim 22 recites "calculating state variables…of a patient" in line 10. It is unclear whether this refers to a second, distinct patient from the previously recited "a patient" in line 2 of Claim 22. In the interest of compact prosecution, Examiner will interpret "a patient" as referring to the same patient. Appropriate correction is required.

Regarding Claims 23-25, dependent Claims 23-25 are also rejected under 35 U.S.C. 112(b) due to their dependence on independent Claim 22.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-15, 18-19, and 22-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1

Claims 1-2, 4-15, 18-19, and 22-25 are within the four statutory categories. Claims 1-2, 4-15, and 18-19 are drawn to a system for conducting a telehealth session, which is within the four statutory categories (i.e., machine). Claims 22-25 are drawn to a method for conducting a telehealth session, which is within the four statutory categories (i.e., process).
Prong 2A, Prong 1 of Step 2A

Claim 1 recites:

A cyber-physical system for conducting a telehealth session between a practitioner and a patient, the system comprising:
  a practitioner system, comprising:
    a practitioner camera configured to capture practitioner video data of the practitioner;
    a practitioner microphone configured to capture practitioner audio data of the practitioner;
    a practitioner display configured to display patient video data via a practitioner user interface; and
    a practitioner speaker configured to output patient audio data;
  a patient system, in network communication with the practitioner system, comprising:
    a patient display configured to display the practitioner video data;
    a patient speaker configured to output the practitioner audio data;
    a patient microphone configured to capture the patient audio data;
    a patient camera configured to capture the patient video data, wherein the patient camera is a remotely-controllable pan-tilt-zoom camera configured to receive control signals from said practitioner system to enable the practitioner to remotely control operation of the remotely-controllable pan-tilt-zoom camera; and
    a hardware control box that includes hardware buttons and provides functionality for the patient to initiate the telehealth session;
  a computer vision module configured to perform computer vision analysis to identify each region of interest in the patient video data; and
  a patient tracking module configured to output control signals to the pan-tilt-zoom camera to zoom in on a region of interest relevant to the examination being performed.
The underlined limitations as shown above, given the broadest reasonable interpretation, cover the abstract idea of a mathematical concept and/or a certain method of organizing human activity because they recite mathematical relationships, formulas, equations, and/or mathematical calculations (in this case, the step of performing vision analysis to identify a region of interest covers mathematical calculations) and/or managing personal behavior or relationships or interactions between people (i.e., social activities, teaching, and following rules or instructions; in this case, the steps of receiving control signals from a practitioner to enable the practitioner to remotely control the patient camera, the patient initiating the telehealth session, and performing vision analysis to identify a region of interest in the patient video cover following rules or instructions to perform a remote health consultation), e.g. see MPEP 2106.04(a)(2).

Dependent Claims 2, 4-15, and 18-19 include other limitations. For example, Claim 4 recites a practitioner activating a beeper; Claims 8-9 recite obtaining sensor data and calculating one or more state variables from the sensor data; Claim 10 recites outputting questions to patients, receiving patient answers to the questions, and calculating state variables based on the answers; Claims 11-14 recite obtaining audio and video data and analyzing the audio and video data to calculate state variables indicative of a neurological disease; Claim 15 recites forming a digital twin from the state variables, wherein the digital twin is a mathematical representation of the patient; Claim 18 recites using the digital twin to determine regions of interest; and Claim 19 recites detecting deviations between the calculated state variables and previously determined state variables from the digital twin and identifying diagnostic data based on the deviations. However, these limitations only serve to further narrow the abstract idea (and a claim may not preempt abstract ideas, even if the judicial exception is narrow, e.g. see MPEP 2106.04), and/or do not further narrow the abstract idea and instead only recite additional elements, which will be further addressed below. Hence dependent Claims 2, 4-15, and 18-19 nonetheless recite the same abstract idea as independent Claim 1.

Claim 22 recites:

A method of calculating state variables indicative of the physical, emotive, cognitive, and social state of a patient, the method comprising:
  receiving patient audio data captured by a patient microphone of a patient system for conducting a telehealth session;
  receiving patient video data captured by a patient camera of a patient system for conducting the telehealth session;
  performing audio analysis on the patient audio data;
  performing computer vision analysis on the patient video data;
  calculating state variables indicative of the physical, emotive, cognitive, and social state of a patient based on the audio analysis and computer vision analysis; and
  determining if the patient has a neurological disease based on the calculated state variables.

The underlined limitations as shown above recite the abstract idea of a mathematical concept and/or a certain method of organizing human activity because they recite mathematical relationships, formulas, equations, and/or mathematical calculations (in this case, the steps of performing audio analysis on patient audio data, vision analysis on video data, and calculating state variables based on the audio and vision analysis cover mathematical calculations) and/or managing personal behavior or relationships or interactions between people (i.e., social activities, teaching, and following rules or instructions; in this case, the steps of performing audio analysis on patient audio data, vision analysis on video data, calculating state variables based on the audio and vision analysis, and determining if the patient has a neurological disease based on the state variables cover following rules or instructions to diagnose a patient), e.g. see MPEP 2106.04(a)(2). Any limitations not identified above as part of the abstract idea are deemed "additional elements" and will be discussed in further detail below.

Dependent Claims 23-25 include other limitations. For example, Claim 23 recites that the state variables are indicative of a neurological disease; Claim 24 recites calculating additional state variables based on eye data; and Claim 25 recites outputting questions to patients, receiving patient answers to the questions, and calculating state variables based on the answers. However, these limitations only serve to further narrow the abstract idea (and a claim may not preempt abstract ideas, even if the judicial exception is narrow, e.g. see MPEP 2106.04), and/or do not further narrow the abstract idea and instead only recite additional elements, which will be further addressed below. Hence dependent Claims 23-25 nonetheless recite the same abstract idea as independent Claim 22.

Hence Claims 1-2, 4-15, 18-19, and 22-25 are directed towards the aforementioned abstract ideas.

Prong 2 of Step 2A

Claims 1 and 22 do not integrate the abstract idea into a practical application because the additional elements (i.e., the non-underlined limitations above; in this case, the hardware limitations in the form of the practitioner system, the patient system, the computer vision module, and the patient tracking module, and the steps of receiving the audio and video data) amount to no more than limitations which:

amount to mere instructions to apply an exception: for example, the recitation of the practitioner system, the patient system, and their various hardware components, which amounts to merely invoking a computer as a tool to perform the abstract idea, e.g. see [0059]-[0060] and [0133] of the as-filed Specification, and see MPEP 2106.05(f);

generally link the abstract idea to a particular technological environment or field of use: for example, the claim language of the cameras, eye tracker, and microphone, which amounts to limiting the abstract idea to the field of video conferencing, e.g. see MPEP 2106.05(h); and/or

add insignificant extra-solution activity to the abstract idea: for example, the recitation of receiving the audio and video data, which amounts to mere data gathering, and/or the recitation of the output of the control signals to zoom in on a region of interest, which amounts to an insignificant application, e.g. see MPEP 2106.05(g).

Additionally, dependent Claims 2, 4-15, 18-19, and 23-25 include other limitations, but these limitations also amount to no more than mere instructions to apply an exception (e.g. the various hardware limitations recited in dependent Claims 2, 4-7, 9, and 24), generally link the abstract idea to a particular technological environment or field of use (e.g. the specific types of data recited in dependent Claims 12-13 and 23), and/or do not include any additional elements beyond those already recited in independent Claims 1 and 22, and hence also do not integrate the aforementioned abstract idea into a practical application.
Hence Claims 1-2, 4-15, 18-19, and 22-25 do not include additional elements that integrate the judicial exception into a practical application.

Step 2B

Claims 1 and 22 do not include additional elements that are sufficient to amount to "significantly more" than the judicial exception because the additional elements (i.e., the non-underlined limitations above; in this case, the hardware limitations in the form of the practitioner system, the patient system, the computer vision module, and the patient tracking module, and the steps of receiving the audio and video data), as stated above, are directed towards no more than limitations that amount to mere instructions to apply the exception, generally link the abstract idea to a particular technological environment or field of use, and/or add insignificant extra-solution activity to the abstract idea, wherein the additional elements comprise limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by:

The present Specification expressly disclosing that the structural additional elements are well-understood, routine, and conventional in nature: [0059]-[0060] and [0133] of the as-filed Specification disclose that the additional elements (i.e., the practitioner system, the patient system, the computer vision module, and the patient tracking module) comprise a plurality of different types of generic computing systems.

Relevant court decisions: the functional limitations interpreted as additional elements are analogized to the following examples of court decisions demonstrating well-understood, routine, and conventional activities, e.g. see MPEP 2106.05(d)(II):

Receiving or transmitting data over a network, e.g. see Intellectual Ventures v. Symantec; similarly, the additional elements recite receiving data and transmitting the data over a network, for example the Internet, e.g. see [0042], [0061], and [0135] of the as-filed Specification;

Performing repetitive calculations, e.g. see Parker v. Flook and/or Bancorp Services v. Sun Life; similarly, the additional elements recite performing basic calculations (i.e., calculating the state variables) and do not impose meaningful limits on the scope of the claims; and/or

Storing and retrieving information in memory, e.g. see Versata Dev. Group, Inc. v. SAP Am., Inc.; similarly, the additional elements recite storing audio and video data, and retrieving the audio and video data from storage in order to determine a region of interest and/or determine state variables and whether the patient has a neurological disease based on the state variables.

Dependent Claims 2, 4-15, 18-19, and 23-25 include other limitations, but none of these limitations are deemed significantly more than the abstract idea because the additional elements recited in the aforementioned dependent claims similarly amount to mere instructions to apply the exception (e.g. the various hardware limitations recited in dependent Claims 2, 4-7, 9, and 24), generally link the abstract idea to a particular technological environment or field of use (e.g. the specific types of data recited in dependent Claims 12-13 and 23), and/or do not recite any additional elements not already recited in independent Claims 1 and 22, and hence do not amount to "significantly more" than the abstract idea.

Hence, Claims 1-2, 4-15, 18-19, and 22-25 do not include any additional elements that amount to "significantly more" than the judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above.
Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation. Therefore, whether taken individually or as an ordered combination, Claims 1-2, 4-15, 18-19, and 22-25 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Shaya (US 2012/0029303) in view of Hoelsaeter (US 2011/0141222).
Regarding Claim 1, Shaya teaches the following:

A cyber-physical system for conducting a telehealth session between a practitioner and a patient, the system comprising:

a practitioner system (The system includes a physician environment including a physician device, e.g. see Shaya [0021], Fig. 1.), comprising:

a practitioner camera configured to capture practitioner video data of the practitioner (The physician device includes a camera to generate physician video data, e.g. see Shaya [0037].);

a practitioner microphone configured to capture practitioner audio data of the practitioner (The physician device includes a microphone to generate physician voice (i.e. audio) data, e.g. see Shaya [0037].);

a practitioner display configured to display patient video data via a practitioner user interface (The physician device includes a display screen for viewing video images of the patient and graphical user interfaces, e.g. see Shaya [0037] and [0040].); and

a practitioner speaker configured to output patient audio data (The physician device includes a speaker for listening to voice communications (i.e. audio data) from the patient, e.g. see Shaya [0037].);

a patient system, in network communication with the practitioner system (The system includes a patient environment including a patient device in communication with the physician device over a network, e.g. see Shaya [0021].), comprising:

a patient display configured to display the practitioner video data (The patient device includes a display screen that displays video data from the physician, e.g. see Shaya [0040] and [0047].);

a patient speaker configured to output the practitioner audio data (The patient device includes a speaker to receive voice communications (i.e. audio data) from the physician, e.g. see Shaya [0037] and [0042].);

a patient microphone configured to capture the patient audio data (The patient device includes a microphone that the patient speaks into to generate voice (i.e. audio) data, e.g. see Shaya [0036].);

a patient camera configured to capture the patient video data (The patient device includes a camera that generates patient video data, e.g. see Shaya [0036].); and

a hardware control box that includes hardware buttons and provides functionality for the patient to initiate the telehealth session (The patient device includes buttons in addition to or instead of graphical icons that allow the user to initiate operations of the system, wherein the operations include initiating a video conference call (i.e. a telehealth session), e.g. see Shaya [0042], Fig. 2.).

But Shaya does not teach, and Hoelsaeter teaches, the following:

wherein the patient camera is a remotely-controllable pan-tilt-zoom camera configured to receive control signals from said practitioner system to enable the practitioner to remotely control operation of the remotely-controllable pan-tilt-zoom camera (The system comprises a videoconferencing system including a camera that can pan, tilt, and zoom to capture a target point, e.g. see Hoelsaeter [0022]-[0023], wherein the camera may adjust automatically but may also be controlled manually by an input from a user to control the zoom of the camera, e.g. see Hoelsaeter [0023], [0045]-[0047], and [0051].);

a computer vision module configured to perform computer vision analysis to identify each region of interest in the patient video data (The system performs a picture analysis process that detects target points (i.e. a region of interest), e.g. see Hoelsaeter [0023], [0035]-[0036], and [0049]-[0050].); and

a patient tracking module configured to output control signals to the pan-tilt-zoom camera to zoom in on a region of interest relevant to the examination being performed (The system performs panning, tilting, and zooming of the camera to center on the location defined by the target point, e.g. see Hoelsaeter [0023], wherein the panning, tilting, and zooming may be performed automatically or via manual user input, e.g. see Hoelsaeter [0023], [0045]-[0047], and [0051].).

Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of videoconferencing to modify Shaya to incorporate the partially automatic and partially manually controlled remote pan-tilt-zoom camera as taught by Hoelsaeter in order to increase convenience and optimize the image/video captured by the camera, e.g. see Hoelsaeter [0006]-[0009] and [0051].

Regarding Claim 4, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 1, and Shaya further teaches the following:

The system of claim 1, wherein: the control box includes a beeper configured to output an audible sound (The patient device includes a speaker that outputs sounds including voice communications from the physician (i.e. audible sounds), e.g. see Shaya [0037] and [0042].); and the system provides functionality for the practitioner to activate the beeper to help the patient locate the control box (The patient device outputs physician voice communications (i.e. the physician activates the beeper by speaking into the physician microphone, which sends the recorded physician voice to be output by the patient speaker of the patient device), e.g. see Shaya [0037] and [0042].).

Examiner further notes that the language of "to help the patient locate the control box" is deemed a statement of intended use. Statements of intended use raise a question as to the limiting effect of the language in the claims, e.g. see MPEP 2103(I)(C). That is, the intent of the limitation does not change or add any functions to the claim itself, because instead of positively reciting a function (e.g. outputting sound until the patient locates and deactivates the sound), the aforementioned language merely recites a result (i.e. helping the patient locate the control box) of an undisclosed limitation.
Regarding Claim 8, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 1, and Shaya further teaches the following:

The system of claim 1, further comprising: a sensor data classification module configured to analyze sensor data captured by the patient system and calculate one or more state variables indicative of the physical, emotive, cognitive, or social state of the patient (The patient environment further includes diagnostic devices that obtain patient diagnostic data such as ECG data, weight, blood chemistry, and/or blood pressure, any of which may be interpreted as one or more state variables indicative of the physical, emotive, cognitive, or social state of the patient, e.g. see Shaya [0021], [0023], and [0064].).

Regarding Claim 14, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, and Shaya further teaches the following:

The system of claim 8, wherein the practitioner user interface displays: the patient video data captured by the patient camera (The system enables a physician to accept a patient videoconference call, which results in the displaying of the patient video images on the physician device, e.g. see Shaya [0063].); and the sensor data captured by the patient system or the one or more state variables calculated by the sensor data classification module (The physician device retrieves and displays patient diagnostic data obtained using one or more diagnostic devices, e.g. see Shaya [0067].).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Kalipatnapu (US 2009/0240770).
Regarding Claim 2, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 1, but does not teach, and Kalipatnapu teaches, the following:

The system of claim 1, wherein the system provides functionality for the patient to initiate the telehealth session with the practitioner via: a single click of one of the hardware buttons (The system enables a conference between a plurality of entities, wherein the conference may be initiated by pushing a single button, e.g. see Kalipatnapu [0017] and [0048].); or a voice command, input via the patient microphone, that is recognized by the system using voice recognition.

Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Hoelsaeter to incorporate initiating the conference via a single button push as taught by Kalipatnapu in order to reduce the complexity required to conduct the conference, e.g. see Kalipatnapu [0002]-[0003].

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Gannon (US 2018/0356945).

Regarding Claim 5, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 4, and Shaya further teaches the following:

The system of claim 4, wherein the beeper is configured to output the audible sound via the patient speaker, the system further comprising: an audio calibration module configured to: receive audio data, captured by the patient microphone, indicative of the audible signal output by the beeper via the patient speaker (The patient device includes a microphone that captures audio data, e.g. see Shaya [0036].).

But Shaya does not teach, and Gannon teaches, the following: adjust the volume of the patient speaker or the sensitivity of the patient microphone based on the received audio data (The system automatically adjusts the volume of a speaker based on ambient noise detected by a microphone, e.g. see Gannon [0081].).

Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Hoelsaeter to incorporate adjusting the volume of the speaker as taught by Gannon in order to compensate for ambient noise levels and to facilitate simple and convenient ways of sharing media, including videoconferencing media, e.g. see Gannon [0007], [0069], and [0081].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Libbey (US 2004/0155956).

Regarding Claim 6, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 1, but does not teach, and Libbey teaches, the following:

The system of claim 1, wherein the patient camera is enclosed in a camera enclosure, the camera enclosure comprising a single one-way mirror that faces the patient and enables the patient camera to capture patient video data of the patient and prevent the patient from seeing the patient camera (The system includes a camera contained within an enclosure, wherein the enclosure includes two mirrors, one that is fully reflective and another that is a partially reflective one-way mirror, wherein the orientation of the enclosure and mirrors causes the user to not see the camera behind the glass, e.g. see Libbey [0012] and [0023]-[0025], Figs. 2-4.).

Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Hoelsaeter to incorporate placing the camera behind the mirror as taught by Libbey in order to obscure the camera from the view of the user while still enabling the user to see the image captured by the camera, e.g. see Libbey [0024], Fig. 3.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Huang (US 2019/0237204).
Regarding Claim 7, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 1, but does not teach, and Huang teaches, the following:

The system of claim 1, wherein the patient system further comprises one or more environmental sensors that capture information indicative of one or more environmental conditions (The system includes a patient environment including environmental sensors that detect environmental data, wherein the environmental data may be sent to a caregiver as part of a video conference, e.g. see Huang [0032] and [0043]-[0048], Fig. 3.).

Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Hoelsaeter to incorporate the environmental sensors as taught by Huang in order to ensure patient safety, e.g. see Huang [0032], [0034], and [0044].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Hennessey (US 2014/0184550).

Regarding Claim 9, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, but does not teach, and Hennessey teaches, the following:

The system of claim 8, wherein: the patient system further comprises an eye tracker, a thermal imaging camera, or a depth camera (The system includes an eye tracker that may be used as part of a video conference, e.g. see Hennessey [0035], [0054], and [0110], Fig. 23.); and the sensor data classification module is configured to calculate the one or more state variables based on eye tracking data captured by the eye tracker, thermal images captured by the thermal imaging camera, or three-dimensional images captured by the depth camera (The eye movement data can be classified into a number of different behaviors and/or an emotional state of the user, e.g. see Hennessey [0061].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Hoelsaeter to incorporate the eye tracking data as taught by Hennessey in order to provide behavioral insight into the user’s cognitive processes, e.g. see Hennessey [0053].

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Charvat (US 2019/0150819).

Regarding Claim 10, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, but does not teach and Charvat teaches the following: The system of claim 8, wherein: the patient system is configured to conduct a computer-assisted cognitive impairment assessment by: outputting questions for the patient via the patient display (The system displays questions to the user, e.g. see Charvat [0052], Fig. 3.); providing functionality for the patient to provide responses to the questions using the hardware buttons of the control box (The system receives patient responses via a mobile computing device including I/O devices such as a keyboard (i.e. hardware buttons), e.g. see Charvat [0032], [0052]-[0054], and [0132].); and time stamping the questions output via the patient display and the responses provided by the patient (The system timestamps the questions and the responses, e.g. see Charvat [0053]-[0055].); and the sensor data classification module is configured to calculate state variables indicative of the cognitive state of the patient based on the time-stamped questions output via the patient display and the time-stamped responses provided by the patient (The system generates a report for the user including statements about the cognitive and emotional intelligence and function of the user based on the response time, wherein the response time is calculated based on the timestamps for the questions and responses, e.g. see Charvat [0028], [0030], [0052]-[0055], [0059], and [0087].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate the questionnaire and response monitoring as taught by Charvat in order to evaluate the cognitive and emotional intelligence and function of the user, e.g. see Charvat [0087].

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Yoo (US 2018/0165062).

Regarding Claim 11, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, but does not teach and Yoo teaches the following: The system of claim 8, wherein the sensor data classification module is configured to perform audio analysis on the patient audio data to calculate one or more state variables indicative of the physical, emotive, cognitive, or social state of the patient (The system receives a patient call and analyzes patient voice data during the call to identify the patient’s emotional state, e.g. see Yoo [0096].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate the patient voice analysis as taught by Yoo in order to determine the urgency of the patient need and allocate the appropriate resources for the patient, e.g. see Yoo [0096].

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Vaughan (US 2019/0019581).
Regarding Claim 12, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, but does not teach and Vaughan teaches the following: The system of claim 8, wherein the sensor data classification module is configured to perform computer vision analysis on the patient video data to calculate one or more state variables indicative of the physical, emotive, cognitive, or social state of the patient (The system collects patient data which includes video data, e.g. see Vaughan [0015] and [0186], wherein the collected digital diagnostics data is analyzed to determine diagnosis and therapy (i.e. state variables) for neurological disorders, e.g. see Vaughan [0188]-[0191].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate the patient video analysis as taught by Vaughan in order to improve the medical, psychological, or physiological state of an individual, e.g. see Vaughan [0011].

Regarding Claim 13, the combination of Shaya, Hoelsaeter, and Vaughan teaches the limitations of Claim 12, and Vaughan further teaches the following: The system of claim 12, wherein the sensor data classification module calculates one or more state variables indicative of a neurological disease (The system analyzes the patient video in order to determine diagnosis and therapy for neurological disorders, e.g. see Vaughan [0188]-[0191].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate the patient video analysis as taught by Vaughan in order to improve the medical, psychological, or physiological state of an individual, e.g. see Vaughan [0011].

Claims 15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Hoelsaeter in view of Peterson (US 2019/0005195).
Regarding Claim 15, the combination of Shaya and Hoelsaeter teaches the limitations of Claim 8, but does not teach and Peterson teaches the following: The system of claim 8, wherein the one or more state variables are combined with previously-determined state variables to form a digital twin, the digital twin comprising a mathematical representation of the physical, emotive, cognitive, or social state of the patient (The system receives various patient data, for example sensor data, lab results, and self-reported data, e.g. see Peterson [0037]-[0039], and utilizes obtained patient data over time to create and modify a digital twin for the patient, e.g. see Peterson [0039] and [0070], wherein the digital twin forms a model that mathematically represents inputs and outputs from the patient, e.g. see Peterson [0088].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate generating the digital twin for the patient as taught by Peterson in order to aid in the diagnosis and treatment of patients, e.g. see Peterson [0058].

Regarding Claim 18, the combination of Shaya, Hoelsaeter, and Peterson teaches the limitations of Claim 15, and Peterson further teaches the following: The system of claim 15, wherein the computer vision module uses the digital twin of the patient to identify the regions of interest in the patient video data (The digital twin identifies target organs and/or body parts (i.e. regions of interest) for the patient based on a particular operation and/or issue at hand, e.g. see Peterson [0045] and [0092].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate identifying the regions of interest in the digital twin for the patient as taught by Peterson in order to aid in the diagnosis and treatment of patients, e.g. see Peterson [0058].

Regarding Claim 19, the combination of Shaya, Hoelsaeter, and Peterson teaches the limitations of Claim 18, and Peterson further teaches the following: The system of claim 18, further comprising a heuristic computer reasoning engine configured to: detect deviations between the one or more state variables calculated by the sensor data classification module and previously-determined state variables included in the digital twin of the patient (The system compares a submitted piece of data against a previously verified piece of data to determine whether the submitted data matches and/or is consistent with the previously verified data, wherein the data may be data that forms the patient digital twin, e.g. see Peterson [0076], and wherein the digital twin can be used for comparison of data, e.g. see Peterson [0043]-[0045].); or identify potentially relevant diagnostic explorations based on the digital twin of the patient and the one or more state variables calculated by the sensor data classification module (The data of the digital twin may be used to identify differences, similarities, and/or trends (i.e. potentially relevant diagnostic explorations), e.g. see Peterson [0043]-[0045].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Hoelsaeter to incorporate utilizing the digital twin for data comparison as taught by Peterson in order to aid in the diagnosis and treatment of patients, e.g. see Peterson [0058].

Claims 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Shaya in view of Vaughan.

Regarding Claim 22, Shaya teaches the following: The method of calculating state variables indicative of the physical, emotive, cognitive, or social state of a patient, the method comprising: receiving patient audio data captured by a patient microphone of a patient system for conducting a telehealth session (The system includes a patient device including a microphone that the patient speaks into to generate voice (i.e. audio) data, wherein the patient audio is used as part of a videoconference with a physician, e.g. see Shaya [0036].); receiving patient video data captured by a patient camera of a patient system for conducting the telehealth session (The patient device includes a camera that generates patient video data for the videoconference, e.g. see Shaya [0036].). But Shaya does not teach and Vaughan teaches the following: performing audio analysis on the patient audio data (The system performs an analysis by reviewing patient audio/voice data, e.g. see Vaughan [0200] and [0259].); performing computer vision analysis on the patient video data (The system collects and analyzes patient video data, e.g. see Vaughan [0186], [0200], and [0259].); calculating state variables indicative of the physical, emotive, cognitive, or social state of a patient based on the audio analysis and computer vision analysis (The analysis of the patient video and audio/voice data may be used to determine various metrics including features of interest such as facial landmarks (i.e. physical), behaviors (i.e. cognitive), and emotions (i.e. emotive), and cognitive data, e.g. see Vaughan [0200] and [0259].); and determining if the patient has a neurological disease based on the calculated state variables (The analysis may be used to diagnose the patient, e.g. see Vaughan [0200], wherein the diagnosis can include a diagnosis for neurological disorders, e.g. see Vaughan [0188]-[0191].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Shaya to incorporate the patient video and audio analysis as taught by Vaughan in order to improve the medical, psychological, or physiological state of an individual, e.g. see Vaughan [0011].

Regarding Claim 23, the combination of Shaya and Vaughan teaches the limitations of Claim 22, and Vaughan further teaches the following: The method of claim 22, wherein the state variables are indicative of a neurological disease (The system analyzes the patient video in order to determine diagnosis and therapy for neurological disorders, e.g. see Vaughan [0188]-[0191].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Shaya to incorporate the patient video analysis as taught by Vaughan in order to improve the medical, psychological, or physiological state of an individual, e.g. see Vaughan [0011].

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Vaughan in view of Hennessey.

Regarding Claim 24, the combination of Shaya and Vaughan teaches the limitations of Claim 22, but does not teach and Hennessey teaches the following: The method of claim 22, further comprising: calculating additional state variables by analyzing eye tracking data captured by an eye tracker, thermal images captured by a thermal imaging camera, or three-dimensional images captured by a depth camera (The system includes an eye tracker that may be used as part of a video conference, e.g. see Hennessey [0035], [0054], and [0110], Fig. 23, wherein the eye movement data can be classified into a number of different behaviors and/or an emotional state (i.e. additional state variables) of the user, e.g. see Hennessey [0061].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of remote conferencing to modify the combination of Shaya and Vaughan to incorporate the eye tracking data as taught by Hennessey in order to provide behavioral insight into the user’s cognitive processes, e.g. see Hennessey [0053].

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shaya and Vaughan in view of Charvat.

Regarding Claim 25, the combination of Shaya and Vaughan teaches the limitations of Claim 22, but does not teach and Charvat teaches the following: The method of claim 22, further comprising: outputting questions for the patient (The system displays questions to the user, e.g. see Charvat [0052], Fig. 3.); time stamping the questions (The system timestamps the questions and the responses to the questions, e.g. see Charvat [0053]-[0055].); receiving patient responses to the questions (The system receives patient responses to the questions, e.g. see Charvat [0032], [0052]-[0054], and [0132].); time stamping the patient responses (The system timestamps the questions and the responses to the questions, e.g. see Charvat [0053]-[0055].); and calculating additional state variables indicative of the cognitive state of the patient based on the time-stamped questions and the time-stamped patient responses (The system generates a report for the user including statements about the cognitive and emotional intelligence and function of the user based on the response time, wherein the response time is calculated based on the timestamps for the questions and responses, e.g. see Charvat [0028], [0030], [0052]-[0055], [0059], and [0087].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shaya and Vaughan to incorporate the questionnaire and response monitoring as taught by Charvat in order to evaluate the cognitive and emotional intelligence and function of the user, e.g. see Charvat [0087].

Response to Arguments

Applicant’s arguments, see Remarks, filed December 5, 2025, with respect to the rejections of Claims 17-19 under 35 U.S.C. 112(b) have been fully considered and, in combination with the claim amendments, are persuasive. The rejections of Claims 17-19 under 35 U.S.C. 112(b) have been withdrawn. However, as shown above, Claims 22-25 are nonetheless rejected under 35 U.S.C. 112(b) due to the newly amended language.

Applicant’s arguments, see Remarks, filed December 5, 2025, with respect to the rejections of Claims 1-2, 4-15, 18-19, and 22-25 under 35 U.S.C. 101 have been fully considered but are not persuasive. Applicant alleges that the claimed invention is patent eligible because it merely involves a judicial exception and because it does not recite a specific mathematical relationship, e.g. see pg. 9 of Remarks. Examiner disagrees. As shown above, the claimed limitations do not merely involve an abstract idea, but instead recite an abstract idea, specifically a mathematical concept and/or a certain method of organizing human activities. Additionally, Examiner further notes that a specific formula or equation need not be recited in order for a limitation to recite a mathematical concept, e.g. see MPEP 2106.04(a)(2)(I). For example, a mathematical concept includes a mathematical calculation, wherein a mathematical calculation includes “an act of calculating using mathematical methods to determine a variable or number,” e.g. see MPEP 2106.04(a)(2)(I)(C).
Given the broadest reasonable interpretation, performing a vision analysis to identify a region of interest in order to ultimately determine a region for a camera to zoom in on as recited by Claim 1 includes a mathematical calculation because, as defined by the Specification, the vision analysis may include a deep learning process, e.g. see [0072] of the as-filed Specification. Additionally or alternatively, as shown above, the limitations of receiving control signals from a practitioner to enable the practitioner to remotely control the patient camera, the patient initiating the telehealth session, and performing vision analysis to identify a region of interest in the patient video cover following rules or instructions to perform a remote health consultation, and hence also recite a certain method of organizing human activities.

Additionally, the steps of performing audio and vision analyses and calculating state variables based on the audio and vision analyses as recited by Claim 22 also recite mathematical calculations for the same reasons as those pertaining to Claim 1. Furthermore, as shown above, the steps of performing audio analysis on patient audio data, performing vision analysis on video data, calculating state variables based on the audio and vision analyses, and determining if the patient has a neurological disease based on the state variables also recite following rules or instructions to diagnose a patient and hence also recite a certain method of organizing human activities. For the aforementioned reasons, Claims 1-2, 4-15, 18-19, and 22-25 are rejected under 35 U.S.C. 101.

Applicant’s arguments, see Remarks, filed December 5, 2025, with respect to the rejections of Claims 1-2, 4-15, 18-19, and 22-25 under 35 U.S.C. 103 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
As stated above, the newly amended claim limitations of Claims 1 and 22 have necessitated the new grounds of rejection, and the new Hoelsaeter reference and a new ground of rejection over Shaya and Vaughan are now cited to address the newly amended claim limitations of Claims 1 and 22. Hence Claims 1-2, 4-15, 18-19, and 22-25 are rejected under 35 U.S.C. 103 for the reasons disclosed above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P GO whose telephone number is (703)756-1965. The examiner can normally be reached Monday-Friday 9am-6pm Pacific. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER H CHOI, can be reached at (469)295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN P GO/
Primary Examiner, Art Unit 3681

Prosecution Timeline

Jul 31, 2024
Application Filed
Sep 03, 2025
Non-Final Rejection — §101, §103, §112
Dec 05, 2025
Response Filed
Feb 25, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597521
SURVEY-BASED DIAGNOSIS METHOD AND SYSTEM THEREFOR
2y 5m to grant Granted Apr 07, 2026
Patent 12580078
METHOD, SERVER, AND SYSTEM INTELLIGENT VENTILATOR MONITORING USING NON-CONTACT AND NON-FACE-TO-FACE
2y 5m to grant Granted Mar 17, 2026
Patent 12548079
SYSTEMS AND METHODS FOR DETERMINING AND COMMUNICATING PATIENT INCENTIVE INFORMATION TO A PRESCRIBER
2y 5m to grant Granted Feb 10, 2026
Patent 12537108
APPARATUS AND METHOD FOR PROVIDING HEALTHCARE SERVICES REMOTELY OR VIRTUALLY WITH OR USING AN ELECTRONIC HEALTHCARE RECORD AND/OR A COMMUNICATION NETWORK
2y 5m to grant Granted Jan 27, 2026
Patent 12537080
EHR SYSTEM WITH ALERT FOOTER AND RELATED METHODS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
80%
With Interview (+45.7%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
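The projection figures above can be reproduced, approximately, under one simple assumption. The sketch below is hypothetical: the function name is illustrative, and treating the interview lift as an additive percentage-point adjustment to the career allow rate is an assumption, not the service's documented methodology.

```python
# Hypothetical reconstruction of the headline projections above.
# Assumes (not confirmed by the page) that the interview lift is an
# additive percentage-point adjustment, clamped to a valid probability.

def grant_probability(base_allow_rate: float, interview_lift: float = 0.0) -> float:
    """Combine a base allow rate with an optional interview lift, clamped to [0, 1]."""
    return min(max(base_allow_rate + interview_lift, 0.0), 1.0)

base = 101 / 290   # career allow rate: 101 granted of 290 resolved cases, about 35%
lift = 0.457       # reported lift for resolved cases with an examiner interview

print(f"baseline grant probability: {base:.0%}")
print(f"with interview: {grant_probability(base, lift):.0%}")
```

Under this additive assumption the combined figure lands near the page's "80% with interview" number; the service's actual model may differ in rounding or form.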
