DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
Receipt of Applicant’s Amendment filed August 6, 2025 is acknowledged.
Claims 1-20 have been amended. Claims 1-20 are pending and have been examined on their merits.
Response to Arguments
Applicant’s arguments filed August 6, 2025 have been fully considered but they are not persuasive. A response is provided below.
Applicant argues 35 U.S.C. §101 Rejections, pg. 9 of Remarks:
Regarding Step 2A, Prong One, Applicant argues that the characterization of the claims oversimplifies and misidentifies the claim, as the claim is directed towards rendering a modified GUI, citing Examples 23 and 37, which both present claims in which improved GUI inventions were found to be non-abstract. Examiner notes that the claimed subject matter of the instant application is dissimilar to that of Examples 23 and 37. Instead, the instant application recites “identifying,…, which of the plurality of vital sign categories is clinically relevant to the medical patient” (amended claim 1), which could otherwise be performed by modifying the behavior of a medical care professional monitoring the patient’s condition. The claim further recites “determining, …, a visual style for presentation of the real-time measurements of the identified vital sign category that is predicted to be comfortable for the medical patient” (amended claim 1). This determination of a visual style could otherwise be performed by modifying the behavior of a graphic designer, as increasing visual comfort is a goal of that profession. Thus, the Examiner maintains that the claims fall under the abstract idea of certain methods of organizing human activity as managing personal behaviors.
Regarding Step 2A, Prong Two, Applicant argues that, similar to Examples 23 and 37, the GUI of the instant application provides a solution by providing a dynamic and customized display. Examiner notes that Examples 23 and 37 address specific technical problems: the overlapping of information that may occur when multiple windows are displayed on limited display space, and the limited efficiency of GUIs due to the arrangement of icons, respectively. Here, the GUI addresses patient comfort, which is an abstract problem of aiding the visualization of data and is more akin to adjusting display settings based on inferred patient preference. This is supported by [0038] of Applicant’s specification, which recites: “For instance, suppose that the patient metadata indicates that the medical patient is elderly. In such case, the deep learning neural network can infer that the elderly often have an easier time viewing large font sizes and a harder time viewing small font sizes (e.g., since visual acuity generally declines with age).”
Examiner further notes that the determined visual style and the GUI do not have a functional relationship. Rather, the GUI only serves as a support for the displayed content. See MPEP §2111.05(I)(B), which recites: “For example, a claimed measuring tape having electrical wiring information thereon, or a generically claimed substrate having a picture of a golf ball thereupon, would lack a functional relationship as the claims as a whole are directed towards conveying wiring information (unrelated to the measuring tape) or an aesthetically pleasing image (unrelated to the substrate) to the reader.” Thus, the improvement is not to the GUI, but rather to the content, which is akin to graphic design.
Regarding Step 2B, Applicant argues that the claims provide significantly more by including elements that are not well-understood, routine, or conventional. Examiner notes that the consideration under Step 2B is whether the additional elements, alone or in combination (memory, processor, medical sensors, display device, standardized patient graphical user interface, machine learning model), are well-understood, routine, and conventional in the field – the novelty of the abstract idea (“apply it” of a machine learning model to perform graphic design) is not relevant under the Step 2B analysis. Here, the additional elements, alone or in combination, amount to instructions to implement the abstract idea using a general purpose computer. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357 (2014).
Applicant argues 35 U.S.C. §102 Rejections, pg. 13 of Remarks:
Examiner agrees that the amended claims overcome the prior 102 rejection. Please see the modified 103 rejection below.
Applicant argues 35 U.S.C. §103 Rejections, pg. 1 of Remarks:
Applicant argues that Sun and Miles fail to teach the amended limitations. However, new art is applied to teach the amended claims. Please see below.
Claim Objections
Claim 20 is objected to because of the following informalities:
Claim 20 recites: “at least one of protanopia, photic sneezing syndrome, photo-induced epileptic seizures”. Examiner suggests including an “or” before “photo-induced”, mimicking claim 19.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The terms “comfortable” in claims 1, 2, 9, 10, and 17 and “discomfort” in claims 2, 10, and 18 are relative terms which render the claims indefinite. The terms “comfortable” and “discomfort” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claims 2-8, 10-16, and 18-20 are further rejected by virtue of their dependency on claims 1, 9, and 17, respectively.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Subject Matter Eligibility Criteria – Step 1:
The claims recite subject matter within a statutory category as a machine, a method, and an article of manufacture (claims 1-20). Accordingly, claims 1-20 are all within at least one of the four statutory categories.
Subject Matter Eligibility Criteria – Step 2A – Prong One:
Regarding Prong One of Step 2A of the Alice/Mayo test, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception, or in other words whether a judicial exception is “set forth” or “described” in the claims. MPEP §2106.04(II)(A)(1). An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP §2106.04(a).
The Examiner has identified system claim 1 as the claim that represents the claimed invention for analysis; claim 1 is similar to method claim 9 and product claim 17.
Claim 1:
A system, comprising:
a memory configured to store computer-executable components; and
a processor that executes at least one of the computer-executable components that:
in response to a medical patient, that is being monitored in-real time via medical sensors at a medical facility, interacting with a display device of the medical facility, dynamically generates a customized patient graphical user interface for the medical patient from a standardized patient graphical user interface of the display device, wherein the dynamically generating comprises:
accessing patient metadata corresponding to the medical patient and that accesses real-time measurements from the medical sensors of a plurality of vital sign categories of the medical patient;
identifying, via execution of a machine learning model on the patient metadata, which of the plurality of vital sign categories is clinically relevant to the medical patient, thereby yielding an identified vital sign category;
determining, via the execution of the machine learning model, a visual style for presentation of the real-time measurements of the identified vital sign category that is predicted to be comfortable for the medical patient based on at least one physical attribute in the patient metadata for the medical patient in viewing the real-time measurements of the identified vital sign category;
modifying the standardized patient graphical user interface to generate the customized patient graphical user interface comprising the real-time measurements of the identified vital sign category in the visual style; and
rendering the customized patient graphical user interface on the display device.
These above limitations, under their broadest reasonable interpretation, cover performance of the limitations as certain methods of organizing human activity, namely managing personal behaviors. The claim elements are directed towards “a medical patient,…, interacting with a display device” and identifying “which of the plurality of vital sign categories is clinically relevant to the medical patient”, which falls under managing patient care. Managing patient care falls under the abstract concept of managing personal behaviors of people, as it is a human activity regularly performed by healthcare providers for their patients. The claims further recite “determining,…, a visual style for presentation”, which falls under an abstract idea as managing personal behaviors, as determining visual styles for presentation could otherwise be performed by graphic designers. It is important to note that the examples provided by the MPEP, such as social activities, teaching, and following rules or instructions, are illustrative and not an exclusive listing, and that MPEP §2106.04(a)(2)(II) states certain activity between a person and a computer may fall within the “certain methods of organizing human activity” grouping.
Accordingly, the claim recites at least one abstract idea.
Claims 9 and 17 are abstract for similar reasons.
Subject Matter Eligibility Criteria – Step 2A – Prong Two:
Regarding Prong Two of Step 2A of the Alice/Mayo test, it must be determined whether the claim as a whole integrates the recited judicial exception into a practical application. As noted at MPEP §2106.04(II)(A)(2), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” MPEP §2106.05(I)(A).
In the present case, the additional elements beyond the above-noted at least one abstract idea recited in the claim are as follows (where the bolded portions are the “additional elements” while the underlined portions continue to represent the at least one “abstract idea”):
Additional elements cited in the Claims:
A system, a memory, a processor, a display device, a customized patient graphical user interface, a standardized patient graphical user interface, a machine learning model (claim 1); a mobile computing device (claim 6); a hospital console (claim 7); a device (claim 9); a computer program product (claim 17)
The independent claims teach insignificant extra-solution activities of accessing/receiving and simply displaying data. See also MPEP 2106.05(g).
Any computing devices and their associated components (a processor, memory) that would be able to perform the method are taught at a high level of generality such that the claim elements amount to no more than mere instructions to apply the exception using any generic component capable of performing the claim limitations. [0066] of Applicant’s specification recites: “the access component 114 can electronically retrieve the patient metadata 104 from whatever computing devices (e.g., desktop computer, laptop computer, smart phone, tablet) that are responsible for maintaining, storing, or collecting the patient metadata 104.” [0027] of Applicant’s specification further recites: “Various embodiments described herein can be considered as a computerized tool (e.g., any suitable combination of computer-executable hardware or computer-executable software) that can facilitate intelligent clinical user interfaces. In various aspects, such computerized tool can comprise an access component, a model component, or a display component.” No specific technical improvements are being made to computing devices as a variety of generic computing devices are simply applied to perform the abstract idea.
Machine learning is taught at a high level of generality. [0032] of Applicant’s specification recites: “the model component of the computerized tool can electronically store, maintain, control, or otherwise access a deep learning neural network. In various aspects, the deep learning neural network can exhibit any suitable deep learning internal architecture. For example, the deep learning neural network can include any suitable numbers of any suitable types of layers (e.g., input layer, one or more hidden layers, output layer, any of which can be convolutional layers, dense layers, non-linearity layers, long short-term memory (LSTM) layers, pooling layers, batch normalization layers, or padding layers). As another example, the deep learning neural network can include any suitable numbers of neurons in various layers (e.g., different layers can have the same or different numbers of neurons as each other). As yet another example, the deep learning neural network can include any suitable activation functions (e.g., softmax, sigmoid, hyperbolic tangent, rectified linear unit) in various neurons (e.g., different neurons can have the same or different activation functions as each other). As still another example, the deep learning neural network can include any suitable interneuron connections or interlayer connections (e.g., forward connections, skip connections, recurrent connections).” [0033] of Applicant’s specification further recites: “the deep learning neural network can be configured to receive as input medical information and to produce as output various GUI-related predictions based on that inputted medical information.” No specific technical improvements are being made to the field of machine learning as any generic neural network is applied to perform the abstract idea of generating predictions.
Display devices (graphical user interface, a mobile computing device, a hospital console) are also taught at a high level of generality. [0073] of Applicant’s specification recites: “the display component of the computerized tool can electronically control any suitable electronic display (e.g., computer screen, smart phone screen). In various aspects, the display component can visually render, on that electronic display, a patient-tailored GUI, based on the one or more GUI-related inferences produced by the deep learning neural network.” [0097] of Applicant’s specification recites: “the display component 118 can cause the patient-tailored GUI 502 to be rendered on an electronic computer screen of any suitable hospital console device (e.g., a bedside hospital monitor that is near the medical patient).” No specific technical improvements are being made to display devices as generic devices are simply applied to perform the insignificant extra-solution activity of displaying data.
Memory devices (non-transitory computer-readable memory) are also taught at a high level of generality. [0169] of Applicant’s specification recites: “memory or memory components described herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), flash memory, or nonvolatile random access memory (RAM) (e.g., ferroelectric RAM (FeRAM)). Volatile memory can include RAM, which can act as external cache memory, for example. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), direct Rambus RAM (DRRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). Additionally, the disclosed memory components of systems or computer-implemented methods herein are intended to include, without being limited to including, these and any other suitable types of memory.” No specific technical improvements are being made to memory devices as generic devices are simply applied to perform the insignificant extra-solution activity of storing data.
Augmented reality is also taught at a high level of generality. [0103] of Applicant’s specification recites: “As some non-limiting examples, the patient-tailored GUI 502 can implement: any suitable data packaging, analysis, or exporting techniques (e.g., diagnostic models can analyze the real-time vital sign measurements 602); any suitable augmented reality or virtual reality techniques (e.g., if scanned or endoscopic images of the medical patient are available, they can be leveraged to construct a two-dimensional or three-dimensional virtual model of an anatomical structure of the medical patient, and such two-dimensional or three-dimensional model can be superimposed over an image or live feed that depicts the medical patient)”. No specific technical improvements are being made to augmented reality as known techniques are simply applied to overlay images without any linkage to the abstract idea of analyzing vital sign measurements.
Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application.
Looking at the additional elements as an ordered combination adds nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole with the limitations reciting the at least one abstract idea, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole does not integrate the abstract idea into a practical application of the abstract idea. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2).
The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application as set forth below:
Claims 2, 10, and 18: These claims recite wherein determining the visual style based on the patient metadata for the medical patient comprises: identifying the at least one physical attribute of the medical patient that is determined to cause discomfort to the medical patient in viewing the standardized patient graphical user interface; and based on the at least one physical attribute of the medical patient, identifying the visual style that is predicted to be comfortable for the medical patient in viewing the real-time measurements of the identified vital sign category; which teaches an abstract idea of certain methods of organizing human activity as managing personal behaviors. Graphic design is a human activity performed by graphic designers to aid in viewing of information. This claim further teaches an insignificant extra-solution activity of selecting a data source or type for manipulation.
Claims 3 and 11: These claims recite wherein the visual style comprises at least one of a font size to use, a font size to avoid, a color to use, a color to avoid, a geometric pattern to use, a geometric pattern to avoid, a screen brightness level to use, a screen brightness level to avoid, a display contrast level to use, or a display contrast level to avoid; which only serves to limit the abstract idea of graphic design.
Claims 4 and 12: These claims recite wherein the at least one of the computer-executable components further: generates an electronic alert, in response to the real-time measurements of the identified vital sign category failing to satisfy a threshold value; which teaches an abstract idea of certain methods of organizing human activity, such as alerting, and mental processes, such as comparing data to a threshold.
Claims 5 and 13: These claims recite wherein the at least one of the computer-executable components further: computes, via the execution of the machine learning model, the threshold value; which teaches an abstract idea of certain methods of organizing human activity as managing personal behaviors. Generating a threshold value for vital signs is a human activity typically performed by physicians for their patients for monitoring a patient’s state. This claim further teaches using a machine learning model at a high level of generality.
Claims 6 and 14: These claims recite wherein the display device is associated with a mobile computing device; which teaches a mobile computing device at a high level of generality.
Claims 7 and 15: These claims recite wherein the display device is associated with a hospital console which teaches a hospital console at a high level of generality.
Claims 8 and 16: These claims recite wherein the customized patient graphical user interface incorporates an augmented reality overlay superimposed over an image of the medical patient; which teaches augmented reality at a high level of generality.
Claim 19: This claim recites wherein the at least one physical attribute of the medical patient comprises at least one of an eye attribute or an age-related attribute; which only serves to further limit the type of medical patient.
Claim 20: This claim recites wherein the at least one physical attribute of the medical patient comprises at least one of protanopia, photic sneezing syndrome, photo-induced epileptic seizures; which only serves to further limit the type of medical patient.
Subject Matter Eligibility Criteria – Step 2B:
Regarding Step 2B of the Alice/Mayo test, representative independent claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for reasons the same as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
These claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to discussion of integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which:
Amount to elements that have been recognized as activities in particular fields (such as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information), MPEP §2106.05(d)(II)(i); storing and retrieving information in memory, Versata Dev. Group, MPEP §2106.05(d)(II)(iv)).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea. Dependent claims also recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-8, 10-16, and 18-20, additional limitations which amount to elements that have been recognized as activities in particular fields, e.g., performing repetitive calculations, Flook, MPEP §2106.05(d)(II)(ii); storing and retrieving information in memory, Versata Dev. Group, MPEP §2106.05(d)(II)(iv)). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Therefore, whether taken individually or as an ordered combination, claims 1-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 9-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Bulut (US 20230079951) in view of Strasser (US 20230072403).
Regarding claim 1, Bulut teaches a system, comprising:
a memory configured to store computer-executable components ([0216], “The processor arrangement may be associated with one or more storage media such as volatile and non-volatile computer memory such as RAM, PROM, EPROM, and EEPROM. The storage media may be encoded with one or more programs that, when executed on one or more processors and/or controllers, perform the required functions.”); and
a processor that executes at least one of the computer-executable components ([0120], “The processor component 66 of the digital twin 32 is arranged to receive the various model inputs 62, 64, 70 and to run the modelling to simulate a state of the patient.”) that:
in response to a medical patient, that is being monitored in-real time via medical sensors at a medical facility, interacting with a display device of the medical facility, dynamically generates a customized patient graphical user interface for the medical patient from a standardized patient graphical user interface of the display device ([0147], “The function of the patient monitor is to display clinical information (related to the monitored patient) in real-time, and in visible and easily interpretable manner (for clinicians).” [0181], “Signal ranking and importance indicators for different clinical parameters can be determined based on a current patient status determined by the digital twin. For example, a ranking in terms of clinical significance can be determined and a degree of emphasis or prominence with which the parameter or signal is displayed adapted based on this.” [0193], “The rankings may be determined and updated in real time, i.e. continuously.”). Examiner notes that a patient monitor is interacting with the patient, as it displays data about the patient. Examiner further notes that adjusting the display based on real-time updates in the rankings of important clinical signals encompasses a customization of the patient GUI from a standard GUI.
wherein the dynamically generating comprises: accessing patient metadata corresponding to the medical patient and that accesses real-time measurements from the medical sensors of a plurality of vital sign categories of the medical patient ([0117], “The personal data inputs 62 may include patient sensor data, medical imaging data, and/or patient medical record data (e.g. EMR data).” [0120], “The processor component 66 of the digital twin 32 is arranged to receive the various model inputs 62, 64, 70 and to run the modelling to simulate a state of the patient.” [0100], “the system 10 (for example the processor arrangement 22 of the system) is arranged to recurrently or continuously update or develop the patient digital model 32 based on the sensor data 58 received from the patient sensors 56 in order to keep the model as an up-to-date live replica or representation of the real-time physical state of the patient (anatomy).” [0018], “The personal digital model is configured to provide a real-time replica of the state of the patient, based on real-time sensor data. Such a digital model is known in the art as a digital twin since it simulates an actual physical state of the patient in real time or approximately real time.”);
identifying, via execution of a machine learning model on the patient metadata, which of the plurality of vital sign categories is clinically relevant to the medical patient, thereby yielding an identified vital sign category ([0018], “a digital model is known in the art as a digital twin since it simulates an actual physical state of the patient in real time or approximately real time.” [0103], “The digital twin may integrate artificial intelligence, machine learning and/or software analytics with spatial network graphs to create a ‘living’ digital simulation model of the at least portion of the patient's anatomy” [0104], “the digital twin forms a learning system that learns from itself using the sensor data provided by the one or more sensors 56. The digital twin is thus a dynamic model which dynamically develops or updates so as to provide an accurate representation of the patient's real anatomy.” [0181], “Signal ranking and importance indicators for different clinical parameters can be determined based on a current patient status determined by the digital twin. For example, a ranking in terms of clinical significance can be determined and a degree of emphasis or prominence with which the parameter or signal is displayed adapted based on this.”). Examiner notes that the digital twin represents the model component, as it comprises a machine learning model.
modifying the standardized patient graphical user interface to generate the customized patient graphical user interface comprising the real-time measurements of the identified vital sign category in the visual style ([0047], “The clinical parameters displayed on the screen may be selected to be those determined as the most important. The selection of parameters displayed on the screen may be dynamically updated at regular intervals based on change in patient clinical state as determined by the digital twin.” [0193], “The rankings may be determined and updated in real time, i.e. continuously.” [0050], “the display size and/or display position at which each element of clinical information is displayed on the patient monitor display may be determined at least in part on outputs from the digital model. Preferably it is determined based on one or more outputs of the digital model indicative of a current or future clinical state of the patient.” [0147], “The function of the patient monitor is to display clinical information (related to the monitored patient) in real-time, and in visible and easily interpretable manner (for clinicians).”). As noted above, adjusting the display based on real-time updates in the rankings of important clinical signals encompasses a customization of the patient GUI from a standard GUI.
and rendering the customized patient graphical user interface on the display device ([0047], “The clinical parameters displayed on the screen may be selected to be those determined as the most important. The selection of parameters displayed on the screen may be dynamically updated at regular intervals based on change in patient clinical state as determined by the digital twin.” [0004], “vital signs can be measured and displayed in real time on the monitor.”).
Bulut does not teach determining, via the execution of the machine learning model, a visual style for presentation of the real-time measurements of the identified vital sign category that is predicted to be comfortable for the medical patient based on at least one physical attribute in the patient metadata for the medical patient in viewing the real-time measurements of the identified vital sign category.
However, Strasser does teach determining, via the execution of the machine learning model, a visual style for presentation of the real-time measurements of the identified vital sign category that is predicted to be comfortable for the medical patient based on at least one physical attribute in the patient metadata for the medical patient in viewing the real-time measurements of the identified vital sign category (Strasser, [0071], “The healthcare system 170 receives and processes the monitored data regardless of the different formats. The healthcare system 170 analyzes the monitored data in order to determine the probability that the patient experienced, or is currently exhibiting, any impairment. More specifically, the analysis assesses the data using one or more, but not limited to, the following: machine learning algorithms, various sensory baselines, or trends of recorded data associated with the patient. Based on the analysis, the healthcare system 170 is configured to assign a probability of impairment, such as a visual, hearing, and/or a speech impairment.” [0072], “Several output presentation modifications may be defined in the event the probability exceeds the predetermined threshold. It will be appreciated that modifications to the output presentation may occur at any time over the monitoring period.
For example, in response to an elevated probability of a visual impairment, the healthcare system 170 may modify the output presentation of visual data/information text and/or graphics by one or more of the following: provide corresponding or enhanced auditory output to the patient; modify the visual output presentation, such as increasing a font size of the text or changing the brightness of the text and graphics.” [0095], “GUI configured to display a status of one or more biometrics” [0070], “The compiled data associated with biometrics, characteristics, and/or parameters of the monitored patient include, but are not limited to, sensory characteristics and measurements, such as temperature, heart rate, gait, vision, hearing, and speech, as discussed herein.”).
Bulut in view of Strasser is considered analogous art to the claimed invention because both references are in the field of visual modifications to patient monitors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bulut with Strasser for the advantage of incorporating “a certain type or style or font of visual presentation” (Strasser; [0069]).
Regarding claim 2, Bulut teaches the system of claim 1, as described above. Bulut does not teach wherein determining the visual style based on the patient metadata for the medical patient comprises: identifying the at least one physical attribute of the medical patient that is determined to cause discomfort to the medical patient in viewing the standardized patient graphical user interface; and based on the at least one physical attribute of the medical patient, identifying the visual style that is predicted to be comfortable for the medical patient in viewing the real-time measurements of the identified vital sign category.
However, the combination of Bulut in view of Strasser does teach wherein determining the visual style based on the patient metadata for the medical patient comprises:
identifying the at least one physical attribute of the medical patient that is determined to cause discomfort to the medical patient in viewing the standardized patient graphical user interface (Strasser, [0069], “determines that there is an elevated probability of at least one sensory impairment, the system 170 may be configured to modify the output format of data/information accordingly.”); and
based on the at least one physical attribute of the medical patient, identifying the visual style that is predicted to be comfortable for the medical patient in viewing the real-time measurements of the identified vital sign category (Strasser, [0072], “For example, in response to an elevated probability of a visual impairment, the healthcare system 170 may modify the output presentation of visual data/information text and/or graphics by one or more of the following: provide corresponding or enhanced auditory output to the patient; modify the visual output presentation, such as increasing a font size of the text or changing the brightness of the text and graphics.” [0095], “GUI configured to display a status of one or more biometrics” [0070], “The compiled data associated with biometrics, characteristics, and/or parameters of the monitored patient include, but are not limited to, sensory characteristics and measurements, such as temperature, heart rate, gait, vision, hearing, and speech, as discussed herein.” Bulut, [0004], “vital signs can be measured and displayed in real time on the monitor.”).
Bulut in view of Strasser is considered analogous art to the claimed invention because both references are in the field of visual modifications to patient monitors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bulut with Strasser for the advantage of incorporating “a certain type or style or font of visual presentation” (Strasser; [0069]).
Regarding claim 3, Bulut teaches the system of claims 1 and 2, as described above. Bulut does not teach wherein the visual style comprises at least one of a font size to use, a font size to avoid, a color to use, a color to avoid, a geometric pattern to use, a geometric pattern to avoid, a screen brightness level to use, a screen brightness level to avoid, a display contrast level to use, or a display contrast level to avoid.
However, Strasser does teach wherein the visual style comprises at least one of a font size to use, a font size to avoid, a color to use, a color to avoid, a geometric pattern to use, a geometric pattern to avoid, a screen brightness level to use, a screen brightness level to avoid, a display contrast level to use, or a display contrast level to avoid ([0072], “For example, in response to an elevated probability of a visual impairment, the healthcare system 170 may modify the output presentation of visual data/information text and/or graphics by one or more of the following: provide corresponding or enhanced auditory output to the patient; modify the visual output presentation, such as increasing a font size of the text or changing the brightness of the text and graphics.” [0086], “For example, when a patient experiences vision loss or motor function loss on a left side, content and/or inputs may be positioned toward a right side of the GUI.”).
Bulut in view of Strasser is considered analogous art to the claimed invention because both references are in the field of visual modifications to patient monitors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bulut with Strasser for the advantage of incorporating “a certain type or style or font of visual presentation” (Strasser; [0069]).
Regarding claim 4, Bulut in view of Strasser teaches the system of claim 1, as described above. Bulut further teaches wherein the at least one of the computer-executable components further:
generates an electronic alert, in response to the real-time measurements of the identified vital sign category failing to satisfy a threshold value ([0060], “the patient monitor unit may include an alarm for generating a sensory alarm signal to alert a user, e.g. clinician, based on the derived information outputs meeting one or more alarm criteria, such as exceeding a defined alert threshold. In advantageous examples, the alarm criteria are configured based in part on outputs from the digital twin. For example, one or more thresholds for the alarm trigger may be set based on information from the digital twin.” [0202], “The digital twin may determine one or more trigger criteria related to patient condition for triggering simulation runs, e.g. a particular threshold change in one or more clinical parameters for instance.”).
Regarding claim 5, Bulut in view of Strasser teaches the system of claims 1 and 4, as described above. Bulut further teaches wherein the at least one of the computer-executable components further: computes, via the execution of the machine learning model, the threshold value ([0060], “the patient monitor unit may include an alarm for generating a sensory alarm signal to alert a user, e.g. clinician, based on the derived information outputs meeting one or more alarm criteria, such as exceeding a defined alert threshold. In advantageous examples, the alarm criteria are configured based in part on outputs from the digital twin. For example, one or more thresholds for the alarm trigger may be set based on information from the digital twin.” [0166], “Outputs 88 of the digital twin 32 can be used to guide alarm generation by the alarm generator 77. The digital twin 32 may provide to the alarm generator 77 information 88 indicative e.g. of a current or predicted future state of the patient, predicted pathologies, or more directly of proposed alarm thresholding functions, timing of alarms, and/or intensity of alarms. The alarm generator may configure for instance criteria for the alarm, such as threshold levels for instance, and other settings of the alarm based in part on this information.” [0103], “The digital twin may integrate artificial intelligence, machine learning and/or software analytics with spatial network graphs”).
Regarding claim 6, Bulut in view of Strasser teaches the system of claim 1, as described above. Bulut further teaches wherein the display device is associated with a mobile computing device ([0049], “the patient monitor unit may comprise more than one display unit, and optionally wherein a different selection of information outputs is displayed on each of the display units.” [0060], “In either case the patient monitor unit 14 is preferably a bedside patient monitor unit. It may be a mobile or ambulatory patient monitor unit.” [0176], “The display interface may be configured to provide multiple windows or tabs, each containing different elements of clinical information.”).
Regarding claim 7, Bulut in view of Strasser teaches the system of claim 1, as described above. Bulut further teaches wherein the display device is associated with a hospital console ([0049], “the patient monitor unit may comprise more than one display unit, and optionally wherein a different selection of information outputs is displayed on each of the display units.” [0060], “In either case the patient monitor unit 14 is preferably a bedside patient monitor unit. It may be a mobile or ambulatory patient monitor unit.” [0176], “The display interface may be configured to provide multiple windows or tabs, each containing different elements of clinical information.”).
Regarding claims 9, 10, 11, 12, 13, 14, 15, 17, and 18, these claims are rejected for the same reasons as claims 1, 2, 3, 4, 5, 6, 7, 1, and 2, respectively. Bulut further teaches a computer program product ([0067], “the invention provide a computer program product comprising code (i.e. instructions) configured when run on a processor, to cause the processor to perform the method in accordance with any embodiment set out above or described below”).
Regarding claim 19, Bulut in view of Strasser teaches the system of claim 17, as described above. Bulut does not teach wherein the at least one physical attribute of the medical patient comprises at least one of an eye attribute or an age-related attribute.
However, Strasser does teach wherein the at least one physical attribute of the medical patient comprises at least one of an eye attribute or an age-related attribute ([0071], “Further, the healthcare system 170 may develop several algorithms based on anonymous patient data that can be grouped by gender, age, race, and/or health event. The probability is then compared to a predetermined threshold in S2030, and, in the event the probability exceeds the predetermined threshold, the output presentation of data/information is modified in S2040 according to predefined specifications.” [0086], “Patients may experience muscle weakness or loss of function, vision changes or loss, hearing changes or loss, mental capacity changes, etc. As such, a GUI of a user device may update to accommodate these changes in abilities, symptoms, and/or disabilities so that the user can easily and effectively interact with the system… For example, when a patient experiences vision loss or motor function loss on a left side, content and/or inputs may be positioned toward a right side of the GUI.” [0142], “‘symptoms’ of stroke onset or ‘disabilities’ as a result of a stroke event may include, but not be limited to: blurred vision;… altered smell, taste, hearing, and/or vision; drooping of eyelid (i.e., ptosis); weakness of ocular muscles; …; visual field defects;…”).
Bulut in view of Strasser is considered analogous art to the claimed invention because both references are in the field of visual modifications to patient monitors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bulut with Strasser for the advantage of “modify[ing] the output presentation of visual data/information text and/or graphics” “in response to an elevated probability of a visual impairment” (Strasser; [0072]).
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bulut (US 20230079951) in view of Strasser (US 20230072403) further in view of Sun (US 9547940).
Regarding claim 8, Bulut in view of Strasser teaches the system of claim 1, as described above. Bulut does not teach wherein the customized patient graphical user interface incorporates an augmented reality overlay superimposed over an image of the medical patient.
However, Sun does teach wherein the customized patient graphical user interface incorporates an augmented reality overlay superimposed over an image of the medical patient (Col. 7, lines 51-57, “Once the three-dimensional surface models have been registered and the position and orientation of the endoscope is known, renderings of one or more obscured organs (i.e., augmented reality images) can be combined or merged with (e.g., overlaid onto) the intra-operative image data (e.g., laparoscope video) shown to the surgeon in real time, as indicated in block 52 of FIG. 3.” Col. 8, lines 24-28, “an interface can be provided to enable the surgical staff to select one or several anatomic structures that have been either manually or semi-automatically segmented and reconstructed from pre-operative image data using a three-dimensional slicer.”). Examiner notes that, because the intra-operative image data is of the patient, the overlay on that image data encompasses a customized patient graphical user interface.
Bulut in view of Strasser further in view of Sun is considered analogous art to the claimed invention because all three references are in the field of patient monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention