Prosecution Insights
Last updated: April 19, 2026
Application No. 17/891,098

SYSTEMS AND METHODS FOR PROVIDING DIGITAL HEALTH SERVICES

Final Rejection §101 §103 §DP
Filed: Aug 18, 2022
Examiner: SITTNER, MICHAEL J
Art Unit: 3621
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Advanced Neuromodulation Systems Inc.
OA Round: 4 (Final)

Grant Probability: 11% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 9m
Grant Probability With Interview: 26%

Examiner Intelligence

Career Allow Rate: 11% (42 granted / 381 resolved; -41.0% vs TC avg)
Interview Lift: +15.4% among resolved cases with interview
Avg Prosecution: 4y 9m (47 applications currently pending)
Total Applications: 428 (across all art units)

Statute-Specific Performance

§101: 29.6% (-10.4% vs TC avg)
§103: 36.9% (-3.1% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Deltas are measured against Tech Center average estimates. Based on career data from 381 resolved cases.
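The headline figures above follow directly from the underlying counts. A minimal check, assuming (since the underlying model is not published) that the +15.4% interview lift is applied additively to the career allow rate:

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate_pct(42, 381)   # 11.02..., displayed as 11%
with_interview = base + 15.4     # 26.42..., displayed as 26% (additive assumption)

print(f"base={base:.1f}%  with_interview={with_interview:.1f}%")
```

Rounded to whole percentages, these reproduce the dashboard's 11% and 26% figures.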

Office Action

§101 §103 §DP
DETAILED ACTION

Status of Claims

The present application, filed on or after 3/16/2013, is being examined under the first inventor to file provisions of the AIA. This action is in reply to the Remarks and Amendments filed 11/13/2025. Claim 1 is amended. Claims 1-14 have been examined and are pending.

Examiner Note

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed, absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-14 of the instant application are provisionally rejected on the ground of nonstatutory double patenting as claiming an obvious variation of the invention recited in claims 1-15 of co-pending Application No. 17/891,102 (i.e., the claims filed 7/2/2025). Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons.

The Examiner notes that the lexical differences between the independent claim of the instant invention and independent claim 1 of co-pending Application No. 17/891,102 do not distinguish the substance of the invention. Under the broadest reasonable interpretation, this application resolves to at least an obvious variation of the invention of the co-pending application. Although the two claim sets do not fully mirror each other verbatim, the areas where they differ, as shown in the comparison below, amount to gaps in necessary steps for each invention. Such steps are each disclosed in both respective specifications, and each respective claim set either provides the necessary steps to fill the gaps in steps needed by the other or provides a generic summary of such steps.
The claim features of the instant application are not mutually exclusive of the claim features of co-pending Application No. 17/891,102. The invention of the instant application therefore resolves to a generic means of stating the invention as recited in the claims (filed 7/2/2025) of the 17/891,102 application. In view of this finding, as shown in the comparison below, the Examiner finds the inventions, as currently claimed per the respective independent claim 1 of each application, to be obvious variants of each other:

Instant Application No. 17/891,098, claim 1:

1. (Currently amended) A method of remotely programming an implantable medical device that provides therapy to a patient, comprising:
establishing a first communication between a patient controller (PC) device and the implantable medical device, wherein the implantable medical device provides therapy to the patient according to one or more programmable parameters, the PC device communicates signals to the implantable medical device to set or modify the one or more programmable parameters, and the PC device comprises a video camera;
establishing a video connection between the PC device and a clinician programmer (CP) device of a clinician for a remote programming session in a second communication that includes an audio/video (A/V) session;
communicating a value for a respective programmable parameter of the medical device from the CP device to the PC device during the remote programming session; and
modifying, by the PC device, the respective programming parameter of the medical device according to the communicated value from the CP device during the remote programming session;
wherein the method further comprises:
[Examiner note: a step of receiving patient physiological data is necessary before automatic analysis can be performed as claimed below, and it is obvious that receipt of patient physiological data may be from sensors, in the form of "sensor data from a wearable patient device," which is known to be common, especially if the goal is automating the activity as recited in the following limitation]
automatically analyzing, by one or more processors, patient physiological data during the remote programming session to calculate one or more metrics related to a neurological condition of the patient, wherein the automatically analyzing comprises applying a quantified noise characterization related to a non-rigid patient state for calculation of one metric of the one or more metrics that is related to patient rigidity;
providing the one or more metrics to a plurality of trained neural networks to generate patient condition metrics, with each patient condition metric representing a different condition of the patient;
generating a patient disorder metric representing a combination of the patient condition metrics; and
displaying, via the CP device, a graphical user interface (GUI) display of a patient value related to the generated patient disorder metric during the remote programming session.
[Examiner note: the aforementioned feature of "displaying" patient metrics, i.e., on a GUI, appears to be an obvious means by which to make/perform the generic intended determination as claimed in the co-pending application]

Co-pending Application No. 17/891,102, claim 1 (filed 7/2/2025):

1. (Original) A method of remotely programming an implantable medical device that provides therapy to a patient, comprising:
establishing a first communication between a patient controller (PC) device and the implantable medical device, wherein the implantable medical device provides therapy to the patient according to one or more programmable parameters, the PC device communicates signals to the implantable medical device to set or modify the one or more programmable parameters, and the PC device comprises a video camera;
establishing a video connection between the PC device and a clinician programmer (CP) device of a clinician for a remote programming session in a second communication that includes an audio/video (A/V) session;
communicating a value for a respective programmable parameter of the medical device from the CP device to the PC device during the remote programming session; and
modifying, by the PC device, the respective programming parameter of the medical device according to the communicated value from the CP device during the remote programming session;
wherein the method further comprises:
receiving data pertaining to one or more patient reported outcomes (PRO) from a patient device of the patient prior to the remote programming session;
receiving sensor data from a wearable patient device related to physiological signals and movement of the patient prior to the remote programming session;
receiving sensor data from [the] wearable patient device during the remote programming session; and
[Examiner note: obvious to perform the corresponding step from 17/891,098 to convert "sensor data" to types of "patient data" to facilitate the following step of providing such "patient data" as needed to a NN; note also that "applying a quantified noise characterization" is very broad and reads on a plurality of readily recognized obvious choices of analysis to a person of ordinary skill in this art]
providing patient data, received prior to and during the remote programming session, related to sensor data and PRO data to one or more trained neural networks
[Examiner note: obvious to perform the corresponding step from 17/891,098 to convert "patient data" to types of metrics from the already claimed neural network to facilitate the following determination of whether a change in one or more programmable parameters, i.e., metrics, induces an improvement in patient conditions]
to determine whether a change in one or more programmable parameters during the remote programming session induces an improvement in one or more patient conditions.
[Examiner note: the aforementioned feature is merely a generic description of the intended conclusion to be made/performed, which may be accomplished in the specific manner as recited per the "displaying" step now recited in the 17/891,098 application]

Claim Rejections - 35 U.S.C. § 103 (AIA)

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as obvious over John et al. (US 11,229,788 B1; hereinafter "John") in view of Burton (U.S. 2021/0169417 A1; hereinafter "Burton").

Claim 1 (Currently amended): John teaches the following:

A method of remotely programming an implantable medical device that provides therapy to a patient, comprising: establishing a first communication between a patient controller (PC) device and the implantable medical device (John, see at least Fig. 2, Summary, and [8:14-8:40], e.g.: "…For example, the neurostimulator 50 [implantable medical device] may be controlled by a user who has established wireless communication [first communication] link between it and a smartphone, tablet, laptop or other device, that may serve as type of user programmer 70 [patient controller (PC) device] (which is a device such as a computer having a control module with controller circuitry such as a processor, display, memory, power source, communication means, and other circuitry as is well known, see FIG. 2)... The neurostimulator 50 [implantable medical device] can communicate using wireless signals [i.e., the first communication]… and can send data over the internet. A user's laptop [i.e., programmer 70 / patient controller (PC)] can be provided with a software application that provides instructions to a processor for linking with, and subsequently control of, at least one neurostimulator 50 [implantable medical device] as well as serving as a display/controller device…"; neurostimulator 50 may be an "implantable" device as noted throughout John's disclosure; see also at least [11:28-65]);

wherein the implantable medical device provides therapy to the patient according to one or more programmable parameters (John, see at least [8:14-8:40], e.g.: "…the neurostimulator or the linked device can operate to notify a user (patient or administrator) by sending a visual, sonic, or other alert signal indicating a status or parameter value related to the provision of treatment [therapy] using the user interface module 80 and related alerting components 156…");

the PC device communicates signals to the implantable medical device to set or modify the one or more programmable parameters (John, see again at least Fig. 2, Summary, and [8:14-8:40], teaching: the user's "smartphone, tablet, laptop or other device, that may serve as type of user programmer 70," where such device, e.g., "A user's laptop [i.e., programmer 70 / patient controller (PC)] can be provided with a software application that provides instructions to a processor for linking with, and subsequently control of, at least one neurostimulator 50 [implantable medical device]…");

and the PC device comprises a video camera (John, see again [8:14-8:40] in view of at least [31:40-32:65]; the user's laptop [PC device] comprises multimedia hardware (e.g., "65 video camera…") [a video camera]);

establishing a second communication between the PC device and a clinician programmer (CP) device of a clinician for a remote programming session, the second communication comprising an audio/video (A/V) session, the remote programming session in the second communication occurring during the first communication (John, see citations noted supra, and also at least FIGS. 7A, 7B, and 9 and [11:28-12:18] and again [31:40-33:67], e.g.: "…The web-meeting medical assistance module 362 provides software application, and multimedia hardware (e.g. 65 video camera and microphone of a smartphone running the treatment application software of the system) which support virtual online meeting capability and also permits a medical professional to view the treatment log of a user to assess treatment history… A graphic display 20 such as an LCD visually presents information related to operation such as neurostimulation parameter values or information about compliance as shown in FIGS. 7A, 7B, power levels, elapsed time of stimulation, and treatment credit information…"; i.e., a medical professional may connect to the user's laptop [PC device], via the "web-meeting medical assistance module 362," at the same time that the user's laptop [PC device] is in communication [first communication] with the neurostimulator 50 [implantable medical device] and is providing instructions to a processor for linking with, and subsequently controlling, said neurostimulator 50 [implantable medical device]);

communicating a value for a respective programmable parameter of the medical device from the CP device to the PC device during the remote programming session (John, again, see citations noted supra, and also at least FIGS. 10A-B and [33:35-67], e.g.: "…FIG. 10A shows a menu screen of a computer system such as a computer [CP device] in a medical clinic that is connected to the internet. In an embodiment related to use of neurostimulators 51a in a clinic, the computer may serve as a user physician programmer 70'. The menu screen 170 is a user interface and includes virtual buttons that allow selection of operations related to managing one or more neurostimulators 51a… The user of the menu screen 170 may be a patient, doctor, technician, health care professional, office employee (with sufficient permissions), or anyone that manages treatment sessions with patients… Clinic staff can enter ID codes assigned to the clinic to modify, view, and selectively adjust values related to a patient account, including… programming and/or setting operating parameters of a neurostimulator 51a…"; per [37:47-38:55], the medical clinic computer [CP device] further has "…A 'Send to Device' button control 172n, and associated module, [that] allows the user to update the neurostimulator 51a and/or the programmer 70 [PC device] with the new settings…");

and modifying, by the PC device, the respective programming parameter of the medical device according to the communicated value from the CP device during the remote programming session associated with the first communication and the second communication (John, see again at least [8:14-8:40], teaching: the user's "smartphone, tablet, laptop or other device, that may serve as type of user programmer 70 [PC device]," where such device, e.g., "A user's laptop [i.e., programmer 70 / patient controller (PC)] can be provided with a software application that provides instructions to a processor for linking with, and subsequently control of, at least one neurostimulator 50 [implantable medical device]…");

wherein the method further comprises: Although John teaches the above limitations, he may not explicitly teach the minutiae as recited below.
However, regarding these features, John in view of Burton teaches the following:

automatically analyzing, by one or more processors, patient movement in video data from the A/V session to calculate one or more metrics related to a neurological condition of the patient, wherein the automatically analyzing comprises applying a quantified noise characterization related to a non-rigid patient state for calculation of one metric of the one or more metrics that is related to patient rigidity (Burton, see at least [0985] in view of [1865]-[1879]; e.g., per [1879], Burton describes that his system applies a baseline and/or subject/driver-specific (personalized) reference-level characterization analysis [applying a quantified noise characterization related to a non-rigid patient state for calculation] to a monitored period applicable to video or other face and/or eye and/or other physiological parameter monitoring analysis. Note again, per [0985], Burton states his system includes: "…means of cognitive evaluation… such as camera tracking and video analysis, of head position and/or eyes and/or blinking eyes based on interaction with audio-visual [A/V] material [automatically analyzing, by one or more processors, patient movement in video data from the A/V session] and comparative cognitive response measures (i.e. means of comparing subject/patient response [non-rigid patient state] to a specific audiovisual sequence compared to various population studies representative of calibrated [another type of quantified noise characterization] mild, moderate, severe cognitive deficiency or alertness or attention or responsiveness [again, this is related to a non-rigid patient state] etc.) [i.e., to calculate one or more metrics related to a neurological condition of the patient]…");

providing the one or more metrics to a plurality of trained neural networks to generate patient condition metrics, with each patient condition metric representing a different condition of the patient (Burton, see citations noted supra, including at least [2633], teaching his system has: "…the capability of automatic determination [generation] of the onset or incidence of health conditions/disorders [patient condition metrics]…" in view of at least [2749]-[2753], teaching, e.g.: "…Whereby said 'determination' includes (but is not limited to)… 'machine learning'… Whereby said 'machine learning' algorithms can include (but is not limited to) approaches including: clustering, decision tree learning, association rule learning, artificial neural networks [plurality of trained neural networks], etc…"; i.e., the Examiner understands that Burton's teachings regarding use of machine learning algorithms are not in a vacuum but instead pertain to his objective of making his "automatic determination of an onset or incidence of a health condition/disorder [generation of patient condition metrics]." Therefore, whether or not explicitly stated, the Examiner finds it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have used the tools disclosed by Burton to perform his disclosed "automatic determination of an onset or incidence of a health condition/disorder" [generation of patient condition metrics] via use of his "machine learning algorithms" [plurality of trained neural networks], where the observed "comparative cognitive response measures" [metrics related to a neurological condition of the patient] are understood to be fed to these "machine learning algorithms" [plurality of trained neural networks] to achieve the result which Burton teaches is enabled, i.e., the "automatic determination [generation] of the onset or incidence of health conditions/disorders [patient condition metrics]," because per MPEP 2143(I)(G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference teachings to arrive at the claimed invention renders the claim obvious. The motivation may be implicit and may be found in the knowledge of one of ordinary skill in the art, or, in some cases, from the nature of the problem to be solved. Id. at 1366, 80 USPQ2d at 1649.);

generating an ensemble patient disorder metric representing a combination of the patient condition metrics (Burton, again per at least [2167], [2582]-[2583], and [2750]-[2753], teaching his system: "…the ensemble of target 'events' [ensemble patient disorder metrics] or 'clusters' of interest can be established [generated] based on prior data recordings or earlier monitoring data…"; where "earlier monitoring data" is understood to include "the onset or incidence of health conditions/disorders" [patient condition metrics], as already noted per at least [2633]; see also at least [2867], e.g.: "…The present invention further incorporates or enables the ability to target diagnostic conditions measures to be defined in terms of biological markers or clusters/ensembles [ensemble patient disorder metrics] of different biological markers applicable to [i.e., representing] the incident of onset of health… comprising any combination of: A structured means of breaking down early onset or incident of health conditions [patient condition metrics] of interest in terms of said health condition 'related events'; whereby said health conditions can relate to any combination of cardiac, respiratory, neurological, ventilation physiological parameters, etc…");

displaying, via the CP device, a graphical user interface (GUI) display of a patient value related to the generated patient disorder metric during the remote programming session (Burton, see at least [2873], teaching: "…whereby said patterns of events includes clusters or different ensembles of events which can be reported or displayed in terms of a 3-dimensional brain view with visual identification, or validation of the incident, or early onset detection of different health conditions, or more complex scientific, medical, or technical interfaces…"; see also at least Figs. 68-70).

Therefore, the Examiner understands that the limitations in question are merely applying known techniques of Burton (who is also directed towards technology comprising implantable medical devices and means of enabling diagnostic or prognostic monitoring applicable to monitoring relevant parameters and corresponding analysis, determination, and characterization applicable to the onset or detection of events or health conditions of interest of a patient) to a known base device/method of John (directed towards a system/method of providing remote treatment to a patient with an implantable medical device) to yield predictable results.
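The analysis chain recited in amended claim 1 and mapped above (one set of metrics provided to a plurality of trained neural networks, the resulting patient condition metrics combined into a single disorder metric, and a patient value made available for display) can be sketched as follows. The function names, the toy lambda models standing in for trained networks, and the unweighted mean used as the combination are illustrative assumptions, not taken from the claims or the cited references:

```python
from typing import Callable, Dict, List

# Stand-in for a trained neural network that scores one patient condition
# from the session metrics (each model represents a different condition).
ConditionModel = Callable[[List[float]], float]

def patient_disorder_metric(
    metrics: List[float],
    condition_models: Dict[str, ConditionModel],
) -> Dict[str, float]:
    # Provide the same one or more metrics to each trained model.
    scores = {name: model(metrics) for name, model in condition_models.items()}
    # Generate one patient disorder metric as a combination of the
    # condition metrics (an unweighted mean, chosen only for illustration).
    combined = sum(scores.values()) / len(scores)
    return {**scores, "disorder_metric": combined}

# Toy stand-ins for trained networks scoring tremor and rigidity.
models: Dict[str, ConditionModel] = {
    "tremor":   lambda m: min(1.0, 0.8 * m[0]),
    "rigidity": lambda m: min(1.0, 0.5 * m[1]),
}
result = patient_disorder_metric([0.5, 1.0], models)
# result["disorder_metric"] is the mean of the tremor and rigidity scores
```

The returned patient value (here, `result["disorder_metric"]`) is what a GUI on the clinician side would surface during the remote programming session.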
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the techniques of Burton to the device/method of John, because John and Burton are analogous art in the same field of endeavor (at least A61B 5/4082, diagnosing or monitoring movement diseases, or A61B 5/0002, remote monitoring of patients using telemetry, e.g., transmission of vital signals via a communication network) and because, according to MPEP 2143(I)(C) and/or (D), the use of a known technique to improve a known device, method, or product in the same way (or one which is ready for improvement) is obvious.

Claim 2 (Original): John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following:

The method of claim 1 further comprising: overlaying one or more graphical user interface (GUI) elements over video of the patient to indicate a level or classification of one or more metrics related to the neurological condition of the patient, wherein the one or more metrics related to the neurological condition are calculated using one or more trained neural networks (Burton, see citations noted supra, and also at least Figs. 90-91, [0092]-[0093], and [4256]-[4259], e.g.: "…The sequence taken as a whole can then be represented in a type of brain schematic overlay visually showing the progressive changes and sequence or flow of brain connected data as a function of time (i.e. multiple layers and color coding or line annotations can be deployed in order to denote different coherence delays representative of fibre interconnections such as 0-2 ms, through to Auditory brainstem response (ABR) and middle latency auditory evoked potential (MLAEP), slow latency AEP (SLAEP), or long latency auditory evoked potential (LLAEP)), in order to provide the present invention's brain connectivity sequencing capabilities…"; where, per at least [2740]-[2755], said "determination" and/or notification of homogeneous groups of brain activation or health conditions/states and/or events of interest are determined by "machine learning" algorithms, which can include (but are not limited to) approaches including: clustering, decision tree learning, association rule learning, artificial neural networks, bayesian networks, inductive logic programming, support vector machines, etc.).

Therefore, the Examiner understands that the limitations in question are merely applying known techniques of Burton (who is also directed towards technology comprising implantable medical devices and means of enabling diagnostic or prognostic monitoring applicable to monitoring relevant parameters and corresponding analysis, determination, and characterization applicable to the onset or detection of events or health conditions of interest of a patient) to a known base device/method of John (directed towards a system/method of providing remote treatment to a patient with an implantable medical device) to yield predictable results.
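The claim 2 feature addressed above (GUI elements overlaid on patient video to indicate a level or classification of a neurological-condition metric) reduces, at its simplest, to mapping a metric value onto a displayed classification. A minimal sketch, with hypothetical thresholds, labels, and colors not drawn from the claims or the cited art:

```python
from dataclasses import dataclass

@dataclass
class OverlayElement:
    metric_name: str
    level: str   # classification indicated over the patient video
    color: str   # rendering hint for the GUI layer

def classify_metric(name: str, value: float) -> OverlayElement:
    # Map a unit-interval metric onto a coarse displayed level
    # (thresholds are illustrative assumptions).
    if value < 0.33:
        level, color = "low", "green"
    elif value < 0.66:
        level, color = "moderate", "yellow"
    else:
        level, color = "high", "red"
    return OverlayElement(name, level, color)

badge = classify_metric("tremor", 0.72)  # level "high", color "red"
```

A GUI layer would then draw `badge` over or beside the analyzed bodily region in the video feed.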
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the techniques of Burton to the device/method of John because John and Burton are analogous art in the same field of endeavor (at least A61B 5/4082 diagnosing or monitoring movement diseases or A61B 5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network) and because according to MPEP 2143(I) (C) and/or (D), the use of known technique to improve a known device, methods, or products in the same way (or which is ready for improvement) is obvious. Claim 3: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 2 wherein the neurological condition of the patient is related to a motor disorder of the patient (John, see at least [8:50-55], e.g.: “…According to an embodiment of the present invention, using sensor data enables the detection of a quantitative measure such as a motor evoked response…”) Claim 4: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 2 wherein the neurological condition of the patient is related to chronic pain of the patient (John, see at least [16:60-67], e.g.: “…TENS based stimulation may be used to decrease discomfort associated with foot pain as well as provide treatment in OAB…”). Claim 5: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following: The method of claim 2 wherein the one or more GUI elements are superimposed over or surrounding bodily regions automatically analyzed for patient movement (Burton, see citations noted supra, and also at least Figs. 
90-91, [0092]-[0093] and [4256]-[4259], e.g.: “…The sequence taken as a whole can then be represented in a type of brain schematic overlay visually showing the progressive changes and sequence or flow of brain connected data as a function of time (i.e. multiple layers and color coding or line annotations can be deployed in order to denote different coherence delays representative of fibre interconnections such as 0-2 ms, through to Auditory brainstem response (ABR) and middle latency auditory evoked potential (MLAEP), slow latency AEP (SLAEP), or long latency auditory evoked potential (LLAEP)), in order to provide the present invention's brain connectivity sequencing capabilities….”). Claim 6: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 2 wherein the one or more GUI elements are indicative of tremor of the patient (John, see at least [37:35-45], e.g.: “…this example, the neurostimulator 51a can be configured to provide any type of assessment instrument or survey items related to a disorder suffered by, or condition to be modified in, a patient. This may include the assessment of depression, migraine, memory, pain, sleep apnea, anxiety, hypertension, tremor, etc…”). Claim 7: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 2 wherein the one or more GUI elements are indicative of rigidity of the patient (John, see citations noted supra and also at least [17:18-44], e.g.: “…For example, stimulation may only be provided during certain sleep/arousal stages or only when the subject is experiencing restful sleep (e.g., leg movement measures remain below a selected threshold) [rigidity]...”). Claim 8: (Original) John/Burton teaches the limitations upon which this claim depends.
Furthermore, John teaches the following: The method of claim 2 wherein the one or more GUI elements are modified according to an artificial intelligence (AI) classification of patient movement (John, see citations noted supra in view of at least [27:10-20], e.g.: “…In an embodiment, the device 50 or user interface module 80 is configured with an algorithm that operates according to rules or an artificial intelligence (AI) program that is operated upon by the processor 58 that collaborates with the user interface module 80 to verbally instruct the user on how to set-up, adjust, and use the stimulator….”). Claim 9: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 2 wherein the one or more GUI elements are modified according to an artificial intelligence (AI) quantification of patient movement (John, see citations noted supra in view of at least [27:10-20], e.g.: “…In an embodiment, the device 50 or user interface module 80 is configured with an algorithm that operates according to rules or an artificial intelligence (AI) program that is operated upon by the processor 58 that collaborates with the user interface module 80 to verbally instruct the user on how to set-up, adjust, and use the stimulator….”). Claim 10: (Currently amended) John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following: The method of claim 1 further comprising: displaying calculated anatomical features that track patient movement over a display of the patient in [[the]] a first mode of operation (Burton, see citations noted supra in view of at least Fig. 44A, e.g. “eye movement sensor” and EMS Option; see also at least Fig. 57A, e.g. Physiological markers include e.g.: Body movements; per Figs. 74A-74H and [0076]: “…FIG.
74A-74H are schematic diagrams illustrating an example of the present invention's automatic (with online capability) monitoring and/or analysis capabilities and/or indications of online event discrimination and delineation between any combination of CNS, PNS, MT, BM, Ar, ArNx, eye-movement, EMG intrusion or burst event discrimination and classification,…” and per [0099]-[0110]: “…FIG. 97 is a schematic diagram illustrating the 4 stage entrainment adaptive monitoring (EAM) system; FIGS. 98A-98B are schematic diagrams illustrating a sentient state determination system incorporating reflective oculography and/or video face, head, eye-lid and eye movements;…”; see also at least [03870] in view of Figs. 100 parts C-E, etc… and associated disclosure, e.g.: [image media_image2.png omitted]). Claim 11: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following: The method of claim 10 wherein the calculated anatomical features comprise one or more features that follow limb movement of the patient (Burton, see citations noted supra including at least [0146], e.g.: “…A wearable wrist-based monitoring device incorporating a gyro-meter or positional tracking system capable of inputting to automatic incorporating means of computing gait, walking characteristics (including Parkinson's onset) including automatic analysis of long-term trending of automatic gait analysis capable of detecting fluidity of walking and manoeuvring, along with predictive assessment of associated outcomes (i.e. hint to see GP or specialist based on detected trends that may have further implications) of individual's walking (gait) [limb movement] such as inability to naturally swing arms with walking stride, short or shuffling steps and difficulties (i.e. change of motion of limbs and stride associated with manoeuvring corners).
Any combination of measures such as GPS, gyro-meter, motion, location data can be analysed as a marker of predefined events or health condition onset, or incidence (FIG. 1 RHS, etc…)…”). Claim 12: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following: The method of claim 10 wherein the calculated anatomical features comprise one or more features that follow torso movement of the patient (Burton, see citations noted supra including at least [0146] in view of at least [1551], e.g.: “…whereby such analyses can be deployed as a means of segmenting an individual motion in accordance to various typical movement categories such as (but not limited to) sitting [torso movement], walking, running, sports activity determination, stride determination, jogging, sitting, general desk activity movements etc…”). Claim 13: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John in view of Burton teaches the following: The method of claim 10 wherein the calculated anatomical features comprise one or more features that follow head movement of the patient (Burton, see at least [1849], e.g.: “…Other combinations of monitoring and analysing a range of facial features, eye-gaze and head movements [follow head movement],…”). Claim 14: (Original) John/Burton teaches the limitations upon which this claim depends. Furthermore, John teaches the following: The method of claim 1 wherein the CP device is programmed to provide one or more pop-up windows to display of one or more GUI components related to one or more neurological conditions of the patient (John, see at least [33:52-67], e.g.: “…Pop-up dialogue boxes with fields for user ID and passwords can be presented to a user for making certain selections or adjustments.
Clinic staff can enter ID codes assigned to the clinic to modify, view, and selectively adjust values related to a patient account, including… programing and/or setting operating parameters of a neurostimulator 51a. The menu screen 170 is accessible from web-based application using a physician programmer 70’ or computer…”). Response to Arguments Applicant amended claim 1 on 11/13/2025. Applicant's arguments (hereinafter “Remarks”), also filed 11/13/2025, have been fully considered but are moot in view of the new grounds of rejection necessitated by applicant's amendments. Note the citations to the prior art of Burton teaching applicant's amended and argued features. For example, as shown in the rejection above and hereinbelow for ease of reference, Burton teaches applicant's amended feature: automatically analyzing, by one or more processors, patient movement in video data from the A/V session to calculate one or more metrics related to a neurological condition of the patient, wherein the automatically analyzing comprises applying a quantified noise characterization related to a non-rigid patient state for calculation of one metric of the one or more metrics that is related to patient rigidity (Burton, see at least [0985] in view of [1865]-[1879]; e.g., per [1879], Burton describes that his system applies a baseline and/or subject/driver specific (personalized) reference-level characterization analysis [applying a quantified noise characterization related to a non-rigid patient state for calculation] to a monitored period applicable to video or other face and/or eye and/or other physiological parameter monitoring analysis.
Note again that, per [0985], Burton states his system includes: “…means of cognitive evaluation… such as camera tracking and video analysis, of head position and/or eyes and/or blinking eyes based on interaction with audio-visual [A/V] material [automatically analyzing, by one or more processors, patient movement in video data from the A/V session] and comparative cognitive response measures (i.e. means of comparing subject/patient response [non-rigid patient state] to a specific audiovisual sequence compared to various population studies representative of calibrated [another type of quantified noise characterization] mild, moderate, severe cognitive deficiency or alertness or attention or responsiveness [again, this is related to a non-rigid patient state] etc.) [i.e. to calculate one or more metrics related to a neurological condition of the patient]…”). For at least these reasons, the applicant's arguments are not persuasive and the rejection is maintained. Also note the new provisional non-statutory double patenting rejection in view of co-pending Application No. 17/891,102. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). The following prior art is made of record although not relied upon, as it is considered pertinent to applicant's disclosure: US Publication 2016/0136443 A1 to Grandhe, used in the Final Rejection of co-pending sister Application No. 17/891,102; US Publication 2020/0282218 A1 to McDonald, used in the Final Rejection of co-pending sister Application No. 17/891,102. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J SITTNER, whose telephone number is (571) 270-3984. The examiner can normally be reached M-F, ~9:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf can be reached on (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Michael J Sittner/ Primary Examiner, Art Unit 3621

Prosecution Timeline

Aug 18, 2022
Application Filed
Mar 09, 2024
Non-Final Rejection — §101, §103, §DP
Jun 14, 2024
Response Filed
Oct 07, 2024
Final Rejection — §101, §103, §DP
Jan 09, 2025
Notice of Allowance
Jun 09, 2025
Request for Continued Examination
Jun 16, 2025
Response after Non-Final Action
Aug 11, 2025
Non-Final Rejection — §101, §103, §DP
Nov 13, 2025
Response Filed
Dec 05, 2025
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561735
INFORMATION PRESENTATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12469047
METHOD AND SYSTEM FOR DETECTING FRAUDULENT USER-CONTENT PROVIDER PAIRS
2y 5m to grant Granted Nov 11, 2025
Patent 12462227
DISPENSING SYSTEM
2y 5m to grant Granted Nov 04, 2025
Patent 12456135
Systems for Integrating Online Reviews with Point of Sale (POS) OR EPOS (Electronic Point of Sale) System
2y 5m to grant Granted Oct 28, 2025
Patent 12417752
COORDINATED MULTI-VIEW DISPLAY EXPERIENCES
2y 5m to grant Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
11%
Grant Probability
26%
With Interview (+15.4%)
4y 9m
Median Time to Grant
High
PTA Risk
Based on 381 resolved cases by this examiner. Grant probability derived from career allow rate.
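The projections above reduce to simple arithmetic on the examiner's career data: the base grant probability is the career allow rate (42 granted of 381 resolved ≈ 11%), and the with-interview figure adds the observed +15.4% interview lift (≈ 26%). A minimal Python sketch, assuming the dashboard applies a straight additive adjustment (the function names are illustrative, not from any vendor API):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Naive additive interview adjustment, as the displayed figures suggest."""
    return base_pct + lift_pct

base = grant_probability(42, 381)      # 42 granted / 381 resolved
adjusted = with_interview(base, 15.4)  # +15.4% observed interview lift

print(round(base))      # prints 11
print(round(adjusted))  # prints 26
```

A more careful model would treat the interview lift as a conditional rate difference rather than a flat additive bump, but the additive form reproduces the numbers shown.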
