DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 8-10 are objected to because of the following informalities: claims 8-10 depend from new claim 15, which appears later in the claim set than claims 8-10. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3-7, and 12-17 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Zimmerman (US 20190005200 A1).
Regarding claim 1, Zimmerman discloses a system for configuring patient monitoring by a patient monitor unit, wherein the patient monitor unit is arranged in use to receive sensor data from one or more patient sensors, the patient monitor comprising a signal processing module configured to process the sensor data to generate a sensor output related to the monitored patient, the system comprising: a processor communicatively coupled to data storage storing a digital model of at least part of an anatomy of the monitored patient, the digital model configured for simulating an actual physical state of the at least part of the anatomy based on the sensor data, wherein the digital model is configured to generate a model output related to a current or predicted future clinical state of the at least part of the anatomy of the patient(The cloud computing system can further include or can be coupled with one or more other processor circuits or modules configured to perform a specific task, such as to perform tasks related to patient monitoring, diagnosis, treatment, scheduling, etc., via the digital twin 130[0141]. FIG. 12. In certain examples, the digital twin 130 of the patient 110 can be used for monitoring, diagnostics, and prognostics for the patient 110. Using sensor data in combination with historical information, current and/or potential future conditions of the patient 110 can be identified, predicted, monitored, etc., using the digital twin 130. Causation, escalation, improvement, etc., can be monitored via the digital twin 130. Using the digital twin 130, the patient's 110 physical behaviors can be simulated and visualized for diagnosis, treatment, monitoring, maintenance, etc.[0036]); wherein the processor is adapted to configure the signal processing performed by the signal processing module based at least in part on the model output, whereby the selection of sensor outputs to be generated by the signal processing module is determined based on the model outputs indicating clinical information that is most clinically relevant to a patient's condition(At block 1108, a medical event (e.g., surgery, image acquisition, real or virtual office visit, other procedure, etc.) is processed with respect to the patient digital twin 130. For example, image data, sensor data, observations, test results, etc., from a medical event is processed with respect to information and/or modeling of the patient digital twin 130. Image data can be processed to form image analysis, computer aided detection, image quality determination, etc. Sensor data can be processed to identify a value, change, difference with respect to a threshold, etc. Test results can be processed in comparison to a threshold, etc., based on the digital twin 130.[0064]. Example processor 1530 processes data received at input 1510 and generates a result that can be provided to one or more of output 1520, memory 1540, and communication interface 1550.[0115]. FIG. 15).
[Two greyscale images reproduced here: media_image1.png (638 × 440) and media_image2.png (431 × 448)]
Regarding claim 3, Zimmerman discloses the system as claimed in claim 1, wherein the digital model is operable to generate pathology output related to a current or predicted future pathology of the at least part of the anatomy of the patient, and wherein the signal processing is configured based at least in part on the pathology outputs(In certain examples, the digital twin 130 of the patient 110 can be used for monitoring, diagnostics, and prognostics for the patient 110. Using sensor data in combination with historical information, current and/or potential future conditions of the patient 110 can be identified, predicted, monitored, etc., using the digital twin 130. Causation, escalation, improvement, etc., can be monitored via the digital twin 130. Using the digital twin 130, the patient's 110 physical behaviors can be simulated and visualized for diagnosis, treatment, monitoring, maintenance, etc.[0036]. Using the digital twin 130, however, allows a person and/or system to view and evaluate a visualization of a situation (e.g., a patient 110 and associated patient problem, etc.) without translating to data and back. With the digital twin 130 in common perspective with the actual patient 110, physical and virtual information can be viewed together, dynamically and in real time (or substantially real time accounting for data processing, transmission, and/or storage delay). Rather than reading a report, a healthcare practitioner can view and simulate with the digital twin 130 to evaluate a condition, progression, possible treatment, etc., for the patient 110. In certain examples, features, conditions, trends, indicators, traits, etc., can be tagged and/or otherwise labeled in the digital twin 130 to allow the practitioner to quickly and easily view designated parameters, values, trends, alerts, etc.[0038]).
Regarding claim 4, Zimmerman discloses the system as claimed in claim 1, wherein: the digital model is operable to generate a prediction output pertaining to a predicted future state of the at least part of the anatomy, and wherein the signal processing is configured based at least in part on the prediction outputs(FIG. 13 illustrates an example application of the patient digital twin 130 to patient 110 health outcome(s). As shown in the example flow 1300 of FIG. 13, the patient digital twin 130 can be used to generate a risk profile 1302 for the patient 110. For example, based on information stored and/or otherwise modeled in the digital twin 130, the patient's 110 risk for certain conditions, diseases, etc., can be modeled to generate the patient's risk profile 1302. The risk profile 1302 can enumerate potential disease(s) and/or other condition(s) for which the patient 110 is at risk based on the digital twin 130. The digital twin 130 can be used to simulate, predict, and/or otherwise the patient's 110 risk, and that risk can be stored as the risk profile 1302. For example, based on weight, blood pressure, eating habit information, and/or other behavioral information stored in the digital twin 130, the patient's 110 risk for developing diabetes can be modeled and quantified in the risk profile 1302. As another example, the patient's 110 prior ligament history, age, and social history of playing basketball from the digital twin 130 can be used to predict the patient's 110 risk of ligament injury[0078]).
Regarding claim 5, Zimmerman discloses the system as claimed in claim 1, wherein the digital model is operable to generate outputs indicative of physiological or anatomical parameters of the patient(In certain examples, the patient digital twin 130 forms a model that can be used with a transfer function to mathematically represent or model inputs to and outputs from the patient 110 (e.g., physical changes, mental changes, symptoms, etc., and resulting conditions, effects, etc.). The transfer function helps the digital twin 130 to generate and model patient 110 attributes and/or evaluation metrics, for example. In certain examples, variation can be modeled based on analytics, etc., and modeled variation can be used to evaluate possible health outcomes for the patient 110 via the patient digital twin 130[0084]).
Regarding claim 6, Zimmerman discloses the system as claimed in claim 1, wherein the outputs generated by the signal processing module include one or more clinical parameters of the patient(Example processor 1530 processes data received at input 1510 and generates a result that can be provided to one or more of output 1520, memory 1540, and communication interface 1550. For example, example processor 1530 can take user annotation provided via input 1510 with respect to an image displayed via output 1520 and can generate a report associated with the image based on the annotation.[0115]. At block 1108, a medical event (e.g., surgery, image acquisition, real or virtual office visit, other procedure, etc.) is processed with respect to the patient digital twin 130. For example, image data, sensor data, observations, test results, etc., from a medical event is processed with respect to information and/or modeling of the patient digital twin 130. Image data can be processed to form image analysis, computer aided detection, image quality determination, etc. Sensor data can be processed to identify a value, change, difference with respect to a threshold, etc. Test results can be processed in comparison to a threshold, etc., based on the digital twin 130[0064]).
Regarding claim 7, Zimmerman discloses the system as claimed in claim 1, wherein one or more of the sensor outputs generated by the signal processing module are supplied as respective information inputs to the digital model(Sensors connected to the physical object (e.g., the patient 110) can collect data and relay the collected data 120 to the digital twin 130 (e.g., via self-reporting, using a clinical or other health information system such as a picture archiving and communication system (PACS), radiology information system (RIS), electronic medical record system (EMR), laboratory information system (LIS), cardiovascular information system (CVIS), hospital information system (HIS), and/or combination thereof, etc.). Interaction between the digital twin 130 and the patient 110 can help improve diagnosis, treatment, health maintenance, etc., for the patient 110, for example. An accurate digital description 130 of the patient 110 benefiting from a real-time or substantially real-time (e.g., accounting from data transmission, processing, and/or storage delay) allows the system 100 to predict “failures” in the form of disease, body function, and/or other malady, condition, etc.[0033]).
Regarding claim 12, Zimmerman discloses the system as claimed in claim 1, wherein the system comprises the patient monitor unit(As shown in the example of FIG. 17, a plurality of devices (e.g., information systems, imaging modalities, etc.) 1710-1712 can access a cloud 1720, which connects the devices 1710-1712 with a server 1730 and associated data store 1740. Information systems, for example, include communication interfaces to exchange information with server 1730 and data store 1740 via the cloud 1720. Other devices, such as medical imaging scanners, patient monitors, etc., can be outfitted with sensors and communication interfaces to enable them to communicate with each other and with the server 1730 via the cloud 1720[0136]).
Regarding claim 13, Zimmerman discloses a method of configuring patient monitoring by a patient monitor unit, the patient monitor unit arranged in use to receive input sensor data from one or more patient sensors, the patient monitor unit comprising a signal processing module configured to apply signal processing to the input sensor data to generate a sensor output related to the monitored patient, the method comprising: retrieving a digital model of at least part of an anatomy of the monitored patient, and developing the digital model based on input sensor data from the one or more patient sensors so as to simulate with the model an actual physical state of the at least part of the anatomy of the patient, wherein the digital model is configured to generate an output related to a current or predicted future clinical state of the at least part of the anatomy of the patient(For example, sensors associated with the patient 110 can supplement the modeled information of the patient digital twin 130, which can be stored and/or otherwise instantiated in a cloud-based computing environment for access by a plurality of systems with respect to the patient 110[0140]. The cloud computing system can further include or can be coupled with one or more other processor circuits or modules configured to perform a specific task, such as to perform tasks related to patient monitoring, diagnosis, treatment, scheduling, etc., via the digital twin 130[0141]. FIG. 12. In certain examples, the digital twin 130 of the patient 110 can be used for monitoring, diagnostics, and prognostics for the patient 110. Using sensor data in combination with historical information, current and/or potential future conditions of the patient 110 can be identified, predicted, monitored, etc., using the digital twin 130. Causation, escalation, improvement, etc., can be monitored via the digital twin 130. Using the digital twin 130, the patient's 110 physical behaviors can be simulated and visualized for diagnosis, treatment, monitoring, maintenance, etc.[0036]); and configuring the signal processing performed by the signal processing module based at least in part on the outputs from the digital model to output clinical information that is most clinically relevant to a patient's condition at any given time(At block 1108, a medical event (e.g., surgery, image acquisition, real or virtual office visit, other procedure, etc.) is processed with respect to the patient digital twin 130. For example, image data, sensor data, observations, test results, etc., from a medical event is processed with respect to information and/or modeling of the patient digital twin 130. Image data can be processed to form image analysis, computer aided detection, image quality determination, etc. Sensor data can be processed to identify a value, change, difference with respect to a threshold, etc. Test results can be processed in comparison to a threshold, etc., based on the digital twin 130.[0064]).
Regarding claim 14, Zimmerman discloses a non-transitory computer readable medium that stores a computer program product which, when executed on a processor, causes the method of claim 13 to be performed(Additionally or alternatively, the example data structures and processes of at least FIGS. 2-8, 9, 11-13, and 14 can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information)[0150]).
Regarding claim 15, Zimmerman discloses the system of claim 1, wherein the patient monitor is configured to display the clinical information(Example processor 1530 processes data received at input 1510 and generates a result that can be provided to one or more of output 1520, memory 1540, and communication interface 1550. For example, example processor 1530 can take user annotation provided via input 1510 with respect to an image displayed via output 1520 and can generate a report associated with the image based on the annotation[0115]).
Regarding claim 16, Zimmerman discloses the system of claim 15, wherein the clinical information displayed by the patient monitor is based in part on the outputs of the digital model(Example output 1520 can provide a display generated by processor 1530 for visual illustration on a monitor or the like. The display can be in the form of a network interface or graphic user interface (GUI) to exchange data, instructions, or illustrations on a computing device via communication interface 1550, for example. Example output 1520 may include a monitor (e.g., liquid crystal display (LCD), plasma display, cathode ray tube (CRT), etc.), light emitting diodes (LEDs), a touch-screen, a printer, a speaker, or other conventional display device or combination thereof[0114]).
Regarding claim 17, Zimmerman discloses the system of claim 6, wherein the one or more clinical parameters of the patient are one or more physiological or anatomical parameters of the patient(Thus, rather than a generic model, the digital twin 130 is a collection of actual physics-based, anatomically-based, and/or biologically-based models reflecting the patient 110 and his or her associated norms, conditions, etc.[0035]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2, 8-11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zimmerman in view of Rusak (US 20190005200 A1).
Regarding claim 2, Zimmerman discloses the system as claimed in claim 1, but fails to explicitly disclose wherein a selection of one or more signal processing methods applied for deriving a given one or more information outputs is determined based in part on outputs from the digital model.
However, Rusak teaches “Optionally, the adaptation to the UI outputted by the model is computed for increasing likelihood of the current medical state reaching the target medical outcome, as described herein. For example, when the current medical state is a current value of the monitored patient parameter, and the target medical outcome is a target value or target range or target threshold, the adaptation to the UI is selected to increase likelihood of reaching the target value(see attached copy, page 26, paragraph 1)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the system for generating a patient digital twin of Zimmerman with the selection feature of the UI adapting system of Rusak. Doing so would provide for selecting a specific processing method based on the outputs, or desired outputs, of the system.
Regarding claim 8, Zimmerman discloses the system as claimed in claim 15, but fails to explicitly disclose wherein a selection of clinical information displayed on the patient monitor display is configured based at least in part on outputs from the digital model, and preferably configured based on one or more outputs of the digital model indicative of a current or future clinical state of the patient.
However, Rusak teaches “At least some implementations of the systems, methods, apparatus, and/or code instructions described herein relate to the technical problem of processing a large amount of medical information for selecting relevant data for treatment of a patient(see attached copy, page 9, paragraph 7). At least some implementations of the systems, methods, apparatus, and/or code instructions described herein provide treatment analysis based on contextual data (e.g., at least the interaction journey) and/or the point data (i.e., patient parameters) that are combined. At least some implementations of the systems, methods, apparatus, and/or code instructions described herein unify the relevant data and/or information that is connected to a certain condition and enables the caregiver on hand to get in real time the detailed overview, for example, by displaying on a touch screen and/or other screens(see attached copy, page 14, paragraph 6)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the system for generating a patient digital twin of Zimmerman with the selection feature of the UI adapting system of Rusak. Doing so would provide for selecting the specific information to be displayed based on the outputs, or desired outputs, of the system.
Regarding claim 9, Zimmerman discloses the system as claimed in claim 15, but fails to explicitly disclose wherein a display size and/or display position at which each element of clinical information is displayed on the patient monitor display is determined based at least in part on outputs from the digital model, and preferably determined based on one or more outputs of the digital model indicative of a current or future clinical state of the patient.
However, Rusak teaches “The interaction journey may capture the interactions of the healthcare provider with multiple medical devices, for example, accessing the EMR to obtain a certain blood test result, then checking a medical image, then setting the ventilator to certain values, then adjusting the pulse oximeter, selecting which system (e.g., PACS, medication order system) to present on a large or standard- sized screen, touching values of blood test results on a touch screen, writing notes on a digital screen with a special pen, entering patient treatment orders into a digital patient chart, and adjusting a presentation of data on a screen (e.g., zoom-in, open app, download app, perform action in app, minimize window, open minimized window, highlight data, and arrange order of data on the screen)(see attached copy, page 21, paragraph 8)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the system for generating a patient digital twin of Zimmerman with the display orientation feature of the UI adapting system of Rusak. Doing so would provide for modifying the display size or position based on the outputs of the digital model to optimize viewing.
Regarding claim 10, Zimmerman discloses the system as claimed in claim 8, wherein the selection of clinical information displayed by the patient monitor unit and/or the display size and/or display position at which each element of clinical information is displayed by the patient monitor unit is updated recurrently based on changes in the outputs of the digital model(At block 930, the patient digital twin 130 is accessed. For example, the patient digital twin 130 can be stored on the care system 1020 and/or otherwise can be accessed via the care system 1020 (e.g., via a graphical user interface 1025 display of the care system 1020, etc.) to communicate the change and/or other scheduling of the follow-up event. Thus, a change in exam time and/or other scheduling of a follow-up exam can be incorporated in the digital twin 130 (e.g., to model patient 110 behavior leading up to the event, process information obtained/changed after the event, etc.) and ingested as part of the digital twin 130 avatar or model[0057]).
Regarding claim 11, Zimmerman discloses the system as claimed in claim 1, but fails to disclose wherein a frequency of simulation runs of the digital model is adjusted recurrently based in part on outputs of the digital model.
However, Rusak teaches “Alternatively or additionally, the model is updated based on an update received from an external entity, for example, a facility other than the one which employs the current user of the UI. The update may include a newly discovered correlation and/or newly discovered adaptation. The update may be distributed for updating sub-components of the model located at each facility, and/or a central model. At 168, one or more features described with reference to 150-166 are iterated. The iterations may be performed, for example, for continuously (and/or per event) monitoring the interaction journey and new patient parameters and/or new other data, and dynamically updating the UI based on real time outputs of the model. The iterations may be performed, for example, by monitoring the interaction journey of the user interacting with the adapted UI (created by implementing the adaptations outputted by the model for the current UI), feeding the interaction journey with the adapted UI into the model to output another adaptation to the UI, and updating the previously adapted UI according to the new adaptation. In this manner, the interaction of the user with the dynamically adapted UI created based on the model is monitored and adjusted(see attached copy, page 26, paragraph 6)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the system for generating a patient digital twin of Zimmerman with the iterations of the UI adapting system of Rusak. Doing so would provide for repeating the simulation and making adjustments based on the outputs.
Regarding claim 18, Zimmerman discloses the system of claim 11, wherein the output of the digital model is indicative of a current or predicted future clinical state of the patient determined by the digital model(In certain examples, the digital twin 130 of the patient 110 can be used for monitoring, diagnostics, and prognostics for the patient 110. Using sensor data in combination with historical information, current and/or potential future conditions of the patient 110 can be identified, predicted, monitored, etc., using the digital twin 130[0036]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA CATHERINE ANTHONY whose telephone number is (703)756-4514. The examiner can normally be reached 7:30 am - 4:30 pm, EST, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CARL LAYNO can be reached at (571) 272-4949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARIA CATHERINE ANTHONY/Examiner, Art Unit 3796
/CARL H LAYNO/Supervisory Patent Examiner, Art Unit 3796