Prosecution Insights
Last updated: April 19, 2026
Application No. 17/743,440

ENHANCED ELECTRONIC WHITEBOARDS FOR CLINICAL ENVIRONMENTS

Non-Final OA (§101, §103)
Filed
May 12, 2022
Examiner
WINSTON III, EDWARD B
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Hill-Rom Services, Inc.
OA Round
3 (Non-Final)
Grant Probability: 20% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 11m
Grant Probability with Interview: 52%

Examiner Intelligence

Career Allow Rate: 20% (74 granted / 370 resolved; -32.0% vs TC avg)
Interview Lift: +31.5% among resolved cases with an interview
Avg Prosecution: 4y 11m typical timeline; 35 applications currently pending
Total Applications: 405 across all art units (career history)
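The headline figures above are simple ratios over the examiner's resolved cases. As an illustrative sketch (the function and variable names are hypothetical, not from any PTO data feed; the dashboard's +31.5% lift figure presumably uses the without-interview rate rather than the overall rate, so this reproduces the rounded "+32%" headline instead):

```python
# Illustrative sketch of the examiner-statistics arithmetic.
# All names are hypothetical; inputs are the counts shown above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance for cases with an interview."""
    return rate_with - rate_without

career = allow_rate(74, 370)              # 74 granted of 370 resolved -> 20.0
lift = interview_lift(52.0, career)       # 52% with interview vs. 20% overall

print(f"Career allow rate: {career:.0f}%")
print(f"Interview lift: +{lift:.0f}%")
```

The same two-line arithmetic underlies the "-32.0% vs TC avg" delta: it is the 20% career rate minus an estimated Tech Center average near 52%.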

Statute-Specific Performance

§101: 37.1% (-2.9% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 370 resolved cases
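Each per-statute delta is just the examiner's rate minus the Tech Center average estimate; the four figures above all imply a TC average near 40%. A minimal sketch of that arithmetic, with hypothetical names (the 40% constant is inferred from the table, not an official number):

```python
# Illustrative sketch: per-statute deltas vs. an estimated Tech Center average.
# TC_AVG is back-calculated from the table above; it is not an official figure.

TC_AVG = 40.0  # estimated Tech Center average rate, in percent

examiner_rates = {"§101": 37.1, "§103": 39.2, "§102": 7.2, "§112": 15.9}

# Delta = examiner rate minus TC average, rounded to one decimal place.
deltas = {statute: round(rate - TC_AVG, 1) for statute, rate in examiner_rates.items()}

for statute, delta in deltas.items():
    print(f"{statute}: {delta:+.1f}% vs TC avg")
```

Running this reproduces the four deltas shown in the table (e.g. §102 at -32.8%).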

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The following Office action is in response to communications received August 18, 2025. Claims 1, 3-4, 6, 8, 12-14, 16 and 21 have been amended. Claim 9 has been canceled. Claim 22 has been added. Therefore, claims 1, 3-8 and 10-22 are pending and addressed below. Applicant's amendments to the claims are not sufficient to overcome the rejections set forth in the previous Office action dated May 12, 2025.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8 and 10-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Based upon consideration of all of the relevant factors with respect to the claims as a whole, the claims are directed to non-statutory subject matter and do not include additional elements that are sufficient to amount to significantly more than the judicial exception, per the following analysis: Independent Claims 1, 8 and 13 are directed to a method including outputting data associated with a patient. A displayed font size is adjusted based on a position of the patient. Based on determining that a position of a care provider is within the threshold distance of the output device or within the room associated with the patient, second display data is output, the second display data being different than the first display data.
Claim 1 recites “determine an attribute of at least one of the patient, the care provider, or a visitor of the patient; present first information about the patient; identifying a condition of the patient; causing, at a second time, and based on identifying the condition of the patient, presenting the first information about the patient to presenting second information about the patient, wherein the second information: is different than the first information, and comprises at least one of: a timer, or an instruction for treating the condition of the patient; and determining a presentation format, used to present at least one of the first information or the second information, based at least in part on the attribute.” Claim 8 recites “detect a first position of the first care provider relative to the screen; and a second position of the second care provider relative to the screen; determining that the first position of the first care provider, is at least one of: within a threshold distance from the screen, or within the room associated with the patient; determining that the second position of the second care provider is at least one of: within the threshold distance from the screen, or within the room associated with the patient; and based on determining that the second position of the second care provider is at least one of within the threshold distance of the screen or within the room associated with the patient.” Claim 13 recites “detecting, based on first location data a distance between the patient and the electronic whiteboard; adjusting a font size based on the distance between the patient; determining, based on second location data provided, that a position of a care provider is at least one of: within a threshold distance or within the room associated with the patient; and based on determining that the position of the care provider is within the threshold distance or within the room associated with the patient, causing the change from presenting data to a 
second data being different than the first." The limitations of independent Claims 1, 8 and 13, as drafted, under their broadest reasonable interpretation, cover the performance of "Certain Methods of Organizing Human Activity," which are concepts performed by managing personal behavior, relationships or interactions between people (including social activities, teaching, and following rules or instructions), but for the recitation of generic computer components. That is, other than reciting "sensor, processor, memory, electronic whiteboard display, screen, microphone array, camera, input device, transceiver, graphical user interface (GUI), output device," nothing in the claim elements precludes the steps from practically being performed by managing personal behavior, relationships or interactions between people. For example, but for the "memory communicatively coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations" language, "present" in the context of this claim encompasses the user manually showing first information about the patient. Similarly, the step of determining a presentation format, used to present at least one of the first information or the second information, covers concepts performed by managing personal behavior, relationships or interactions between people, but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers concepts performed by managing personal behavior, relationships or interactions between people, but for the recitation of generic computer components, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application.
In particular, the claims recite the additional elements of using a "sensor, processor, memory, electronic whiteboard display, screen, microphone array, camera, input device, transceiver, graphical user interface (GUI), output device" to perform all of the "outputting, adjusting, determining, causing, modifying, detecting" steps. These elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of executing computer-executable instructions for implementing the specified logical function(s)), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Claim 1 has the following additional elements (i.e., sensor, processor, memory, electronic whiteboard display). Claim 8 has the following additional elements (i.e., screen, processor, sensor, microphone, camera, input device, transceiver, memory, graphical user interface (GUI)). Claim 13 has the following additional elements (i.e., electronic whiteboard display, graphical user interface (GUI), sensor, microphone or a camera, output device). Looking to the specification, these components are described at a high level of generality (¶¶ 21 and 41; The clinical environment may include an electronic whiteboard 102 located in a room 104 associated with a patient 106. The electronic whiteboard 102 may be implemented by one or more computing devices. As used herein, the term "computing device," and its equivalents, may refer to a device including at least one processor configured to perform predetermined operations.
Examples of computing devices include mobile phones, tablet computers, personal computers, laptops, and smart televisions. In particular cases, the care provider 120 may further wear, carry, or otherwise be associated with a care provider device 124. The care provider device 124 may be a computing device. For example, the care provider device 124 may be a mobile phone, a tablet computer, a laptop computer, a personal digital assistant (PDA), or some other type of computing device. In some implementations, the care provider 120 may access the EMR of the patient 106 via the care provider device 124. For example, the care provider device 124 may execute an application that enables the care provider device 124 to receive information in the EMR of the patient 106 from the EMR server(s) 116). The use of a general-purpose computer, taken alone, does not impose any meaningful limitation on the computer implementation of the abstract idea, so it does not amount to significantly more than the abstract idea. Also, although the claims add “[storage]” steps, it is only considered as insignificant extrasolution activity. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. The combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology and their collective functions merely provide a conventional computer implementation of the abstract idea. Furthermore, the additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of generally linking the abstract idea to a particular technological environment or field of use, as the courts have found in Parker v. Flook. Therefore, there are no limitations in the claims that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception. 
It is worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 3-7, 10-12 and 14-22). Particularly, each of the dependent claims also fails to amount to "significantly more" than the abstract idea, since each dependent claim is directed to a further abstract idea and/or a further conventional computer element/function utilized to facilitate the abstract idea. Accordingly, none of the current claims implements an element, or a combination of elements, directed to an inventive concept (e.g., none of the current claims recites an element, or a combination of elements, that provides a technological improvement over the existing/conventional technology). These characteristics do not change the fundamental analogy to the abstract idea grouping of "Certain Methods of Organizing Human Activity," and, when viewed individually or as a whole, they do not add anything substantial beyond the abstract idea. Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the claims when taken as a whole are ineligible for the same reasons as the independent claims. Claims 1, 3-8 and 10-22 are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

As per Claim 1, Doyle, III et al. teach a system, comprising: -- an electronic whiteboard display mounted in a room associated with a patient or a care provider (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI); The user devices 904 may be any type of computing device such as, but not limited to, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a server computer, a thin-client device, a tablet PC, etc.
Additionally, user devices 904 may include a wearable technology device, such as a watch, wristband, earpiece, a pair of glasses, or any other suitable wearable technology. In addition, the user device may include location tracking technology, such as a real time location system (RTLS) tag. The user device 904 may include one or more processors 910 capable of processing user input. The user device 904 may also include one or more input sensors 912 for receiving user input. As is known in the art, there are a variety of input sensors 912 capable of detecting user input, such as accelerometers, cameras, microphones, or any other suitable sensor device. The user input obtained by the input sensors may be from a variety of data input types, including, but not limited to, audio data, visual data, or biometric data. Embodiments of the application on the user device 904 may be stored and executed from its memory 914. The memory 920 and the additional storage 924, both removable and non-removable, are examples of computer-readable storage media. For example, computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. As used herein, modules may refer to programming modules executed by computing systems (e.g., processors) that are part of the user device 904 or the service provider 906. The service provider 906 may also contain communications connection(s) 932 that allow the service provider 906 to communicate with a stored database, another computing device or server, user terminals, and/or other devices on the network(s) 908. The service provider 906 may also include input/output (I/O) device(s) and/or ports 934, such as for enabling connection with a keyboard, a mouse, a pen, a voice input device, a touch input device, a display, speakers, a printer, etc. As depicted in FIG. 
19, a GUI 1902 executed on an electronic whiteboard may be caused to display a number of user-specific details. In some embodiments, the type of details and/or format of presentation may be dictated by one or more configuration setting relevant to the user or another present user. For example, the service provider may cause the electronic whiteboard to display a set of user goals 1908 for a particular user that is within the vicinity of the electronic display device. Additionally, the electronic whiteboard may be caused to display one or more configuration settings 1910 that are to be used to filter/format information.); -- a sensor configured to determine an attribute of at least one of the patients, the care provider, or a visitor of the patient, wherein the attribute comprises at least one of: a distance between the electronic whiteboard display and the at least one of the patient, the care provider, or the visitor, or a language used by the at least one of the patient, the care provider, or the visitor (see Doyle, III et al. Col 14 || 55-67, Col 15 || 1-3 and Col 41 || 22-57; In example embodiment 1300, a display device 1302 is depicted as being mounted on a wall. As a first user 1304 approaches the display device 1302, he or she may be identified. In accordance with at least one embodiment, the first user 1304 may be identified as being associated with a wearable device 1306 (such as an RTLS bracelet). In accordance with at least one embodiment, the first user 1304 may be identified using facial recognition techniques. Once the first user 1304 is identified, user authorizations may be determined from an account associated with the first user 1304. In response to the first user 1304 approaching the display device 1302, or in response to a request made by the first user 1304, information may be displayed. 
In this example, a first document 1308 and a second document 1310 have been displayed on the display device 1302.); -- at least one processor communicatively coupled to the electronic whiteboard display and the sensor; (see Doyle, III et al. Col 34 || 26-44 and Col 41 || 22-57; The memory 920 and the additional storage 924, both removable and non-removable, are examples of computer-readable storage media. For example, computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. As used herein, modules may refer to programming modules executed by computing systems (e.g., processors) that are part of the user device 904 or the service provider 906. The service provider 906 may also contain communications connection(s) 932 that allow the service provider 906 to communicate with a stored database, another computing device or server, user terminals, and/or other devices on the network(s) 908. The service provider 906 may also include input/output (I/O) device(s) and/or ports 934, such as for enabling connection with a keyboard, a mouse, a pen, a voice input device, a touch input device, a display, speakers, a printer, etc.); -- memory communicatively coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising (see Doyle, III et al. Col 34 || 26-44 and Col 35 || 22-42; The memory 920 and the additional storage 924, both removable and non-removable, are examples of computer-readable storage media. 
For example, computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. As used herein, modules may refer to programming modules executed by computing systems (e.g., processors) that are part of the user device 904 or the service provider 906. The service provider 906 may also contain communications connection(s) 932 that allow the service provider 906 to communicate with a stored database, another computing device or server, user terminals, and/or other devices on the network(s) 908. The service provider 906 may also include input/output (I/O) device(s) and/or ports 934, such as for enabling connection with a keyboard, a mouse, a pen, a voice input device, a touch input device, a display, speakers, a printer, etc.). The user input obtained by the input sensors may be from a variety of data input types, including, but not limited to, audio data, visual data, and biometric data. For example, the display device 1009 may be equipped with cameras capable of utilizing facial recognition techniques. In accordance with at least one embodiment, the display device 1009 may be equipped with eye-tracking cameras capable of detecting a user's focus.: -- causing, at a first time, the electronic whiteboard display to present first information about the patient (see Doyle, III et al. Col 36 || 19-39; In accordance with at least one embodiment, service provider 1010 may provide medical-related data 1020 to a display device 1022 for presentation. 
Medical-related data may contain treatment-related documents (e.g., x-ray images, ultrasound images, magnetic resonance imaging images, etc.), biometric data (e.g., heart rate, blood pressure, glucose levels, etc.), user input data (e.g., hospital discharge date, user pain level, health care provider comments, etc.), or any other suitable information relevant to one or more users. As described above, the data chosen for display by the service provider 1010 may be obtained from a number of sources, including, but not limited to, the active unified data layer 1002, various data stores, manual input, and user devices. Although FIG. 10 depicts data from data stores 1004, 1006, and 1008 as being accessed through the active unified data layer 1002, it is envisioned that one or more data stores may be accessed directly by service provider 1010. Furthermore, although FIG. 10 depicts the service provider 1010 as being separate from display device 1022, it is envisioned that the display device 1022 may contain service provider 1010.); and -- identifying a condition of the patient (see Doyle, III et al. Figure 11 (1102, 1112, 1116, 1118)); -- causing, at a second time, and based on identifying the condition of the patient, the electronic whiteboard display to change from presenting the first information about the patient to presenting second information about the patient, wherein the second information being different than the first information and comprising at least one of a timer or an instruction for treating the condition of the patient (see Doyle, III et al. Col 41 || 22-57; FIG. 13 depicts an example of user presentation and restriction in accordance with at least one embodiment of the invention. In example embodiment 1300, a display device 1302 is depicted as being mounted on a wall. As a first user 1304 approaches the display device 1302, he or she may be identified. 
In accordance with at least one embodiment, the first user 1304 may be identified as being associated with a wearable device 1306 (such as an RTLS bracelet). In accordance with at least one embodiment, the first user 1304 may be identified using facial recognition techniques. Once the first user 1304 is identified, user authorizations may be determined from an account associated with the first user 1304. In response to the first user 1304 approaching the display device 1302, or in response to a request made by the first user 1304, information may be displayed. In this example, a first document 1308 and a second document 1310 have been displayed on the display device 1302. When a second user 1312 approaches the display device 1302, he or she may also be identified, and user authorizations may be determined from an account associated with the second user 1312. In accordance with at least one embodiment, the second user 1304 may be identified as being associated with a second wearable device 1314. In accordance with at least one embodiment, the second user 1312 may not be identifiable (e.g., the user is not wearing a bracelet or is not in the database). In that embodiment, the user authorizations for the second user 1312 may be defaulted to non-sensitive information only. In response to the second user 1312 approaching the display device 1302, information may be displayed or removed from display based on user information. In this example, the second document 1310 has been removed from display. Additionally, new information 1316 has been presented. 
In this example, both the first user 1304 and the second user 1312 must have authorization to view either first document 1308 or information 1316 in order for it to be displayed.); and -- dynamically determining a presentation format, used to present at least one of the first information or the second information via the electronic whiteboard display, based at least in part on the attribute determined by the sensor (see Doyle, III et al. Col 15 || 4-34 and Col 41 || 22-57; In example embodiment 1300, a display device 1302 is depicted as being mounted on a wall. As a first user 1304 approaches the display device 1302, he or she may be identified. In accordance with at least one embodiment, the first user 1304 may be identified as being associated with a wearable device 1306 (such as an RTLS bracelet). In accordance with at least one embodiment, the first user 1304 may be identified using facial recognition techniques. Once the first user 1304 is identified, user authorizations may be determined from an account associated with the first user 1304. In response to the first user 1304 approaching the display device 1302, or in response to a request made by the first user 1304, information may be displayed. In this example, a first document 1308 and a second document 1310 have been displayed on the display device 1302.). As per Claim 3, Doyle, III et al. teaches the system of claim 1, wherein: -- the sensor comprises a location sensor configured to detect the distance between the electronic whiteboard display, and the at least one of the patients, the care provider, or the visitor of the patient (see Doyle, III et al. 
Col 15 || 4-34 and Col 41 || 22-57); -- dynamically determining the presentation format comprises determining a size of a least one of a font or an icon, used to present the at least one of the first information or the second information, based on the distance between the electronic whiteboard display and the at least one of the patient, the care provider, or the visitor (see Doyle, III et al. Col 37 || 4-31; In accordance with at least one embodiment, information presentation may be customized to account for a user context. In accordance with at least one embodiment, data regarding user requirements, such as clinical or demographic requirements, stored by the service provider may be used to filter and/or enhance information presentation. For example, a user's eyesight information may be used to determine the font size in which data is presented to that user. In this example, a hospital user with bad eyesight may be presented with information in a larger font). As per Claim 6, Doyle, III et al. teaches the system of claim 1, further comprising: -- a transceiver configured to receive, from one or more electronic medical record (EMR) servers, EMR data associated with the patient (see Doyle, III et al. Col 4 || 6-8, Col 9 || 35-53; The medical-related data is transmitted throughout the medical provider network 100 in accordance with any suitable transmission protocol); and -- a second sensor configured to detect at least one of a vital sign of the patient, a fluid administered to the patient, or a medication administered to the patient, wherein the condition is identified based on at least one of the EMR data, the vital sign, the fluid, or the medication (see Doyle, III et al. Col 35 || 55-67 through Col 36 || 1-18; Wearable device 1016 (e.g. sensor) may provide service provider 906 with data related to the user 118, such as biometric data (e.g. heart rate), location data, or any other suitable user-related data. 
User devices 1012 and 1016 are example user device(s) 904 of FIG. 9. In accordance with at least one embodiment, service provider 1010 may also send data to user devices 1012 and 1016. For example, service provider 1010 may send educational material or medical data to user device 1012 for presentation to user 1014. In accordance with at least one embodiment, the service provider 1010 may send information to a user device when it receives an indication that the user device 1012 has met a specified condition (e.g., the user device has entered a particular area or is in the vicinity of a particular asset).). As per Claim 8, Doyle, III et al. teaches an electronic whiteboard, comprising: -- a screen physically mounted in a room associated with a patient (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI); -- at least one processor communicatively coupled to the screen; at least one sensor communicatively coupled to the at least one processor, wherein the at least one sensor: comprises at least one of: a microphone array configured to detect voices of a first care provider and a second care provider, or a camera configured to detect images of the first care provider and the second care provider, and is configured to detect: a first position of the first care provider relative to the screen; and a second position of the second care provider relative to the screen (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI)); -- an input device communicatively coupled to the at least one processor and configured to receive an input signal from the first care provider (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. 
GUI)); -- a transceiver configured to transmit, to one or more electronic medical record (EMR) servers, data based on the input signal; and memory communicatively coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising (see Doyle, III et al. Fig. 12, Col 32 || 27-46, Col 34 || 26-44, Col 35 || 22-42 and Col 50 || 32-42 (e.g. GUI)): -- determining that the first position of the first care provider, detected by the at least one sensor, is at least one of within a threshold distance from the screen or within the room associated with the patient (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI)); -- determining that the second position of the second care provider, detected by the at least one sensor, is at least one of within the threshold distance of the screen or within the room associated with the patient (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI)); -- causing the screen to present a first graphical user interface (GUI) associated with the first care provider (see Doyle, III et al. Col 34 || 26-44 and Col 41 || 22-57; FIG. 13 depicts an example of user presentation and restriction in accordance with at least one embodiment of the invention. In example embodiment 1300, a display device 1302 is depicted as being mounted on a wall. As a first user 1304 approaches the display device 1302, he or she may be identified. In accordance with at least one embodiment, the first user 1304 may be identified as being associated with a wearable device 1306 (such as an RTLS bracelet). In accordance with at least one embodiment, the first user 1304 may be identified using facial recognition techniques. 
Once the first user 1304 is identified, user authorizations may be determined from an account associated with the first user 1304. In response to the first user 1304 approaching the display device 1302, or in response to a request made by the first user 1304, information may be displayed. In this example, a first document 1308 and a second document 1310 have been displayed on the display device 1302. When a second user 1312 approaches the display device 1302, he or she may also be identified, and user authorizations may be determined from an account associated with the second user 1312. In accordance with at least one embodiment, the second user 1304 may be identified as being associated with a second wearable device 1314. In accordance with at least one embodiment, the second user 1312 may not be identifiable (e.g., the user is not wearing a bracelet or is not in the database). In that embodiment, the user authorizations for the second user 1312 may be defaulted to non-sensitive information only. In response to the second user 1312 approaching the display device 1302, information may be displayed or removed from display based on user information. In this example, the second document 1310 has been removed from display. Additionally, new information 1316 has been presented. In this example, both the first user 1304 and the second user 1312 must have authorization to view either first document 1308 or information 1316 in order for it to be displayed.); -- modifying the first GUI based on the input signal from the first care provider (see Doyle, III et al. Col 34 || 26-44 and Col 41 || 22-57; FIG. 13 depicts an example of user presentation and restriction in accordance with at least one embodiment of the invention. In example embodiment 1300, a display device 1302 is depicted as being mounted on a wall. As a first user 1304 approaches the display device 1302, he or she may be identified. 
In accordance with at least one embodiment, the first user 1304 may be identified as being associated with a wearable device 1306 (such as an RTLS bracelet). In accordance with at least one embodiment, the first user 1304 may be identified using facial recognition techniques. Once the first user 1304 is identified, user authorizations may be determined from an account associated with the first user 1304. In response to the first user 1304 approaching the display device 1302, or in response to a request made by the first user 1304, information may be displayed. In this example, a first document 1308 and a second document 1310 have been displayed on the display device 1302. When a second user 1312 approaches the display device 1302, he or she may also be identified, and user authorizations may be determined from an account associated with the second user 1312. In accordance with at least one embodiment, the second user 1304 may be identified as being associated with a second wearable device 1314. In accordance with at least one embodiment, the second user 1312 may not be identifiable (e.g., the user is not wearing a bracelet or is not in the database). In that embodiment, the user authorizations for the second user 1312 may be defaulted to non-sensitive information only. In response to the second user 1312 approaching the display device 1302, information may be displayed or removed from display based on user information. In this example, the second document 1310 has been removed from display. Additionally, new information 1316 has been presented. 
In this example, both the first user 1304 and the second user 1312 must have authorization to view either first document 1308 or information 1316 in order for it to be displayed.); and -- based on determining that the second position of the second care provider is at least one of within the threshold distance of the screen or within the room associated with the patient, causing the screen to change from presenting the first GUI associated with the first care provider to presenting a second GUI associated with the second care provider, the second GUI being different than the first GUI (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI)). As per Claim 10, Doyle, III et al. teaches the electronic whiteboard of claim 8, the data being first data, wherein: -- the transceiver is further configured to periodically receive, from the one or more EMR servers, second data indicating a condition of the patient, and at least one of the first GUI or the second GUI indicates the condition of the patient (see Doyle, III et al. Fig. 12, Col 7 || 11-24 and Col 50 || 32-42 (e.g. GUI). As per Claim 11, Doyle, III et al. teaches the electronic whiteboard of claim 8, wherein: -- the first GUI indicates first information and second information about the patient, and the second GUI indicates the first information without indicating the second information (see Doyle, III et al. Fig. 12, Col 7 || 11-24 and Col 50 || 32-42 (e.g. GUI). As per Claim 12, Doyle, III et al. 
teaches the electronic whiteboard of claim 8, wherein the operations further comprise: -- determining that the first position of the first care provider and the second position of the second care provider are at least one of greater than the threshold distance from the screen or outside of the room of the patient; and based on determining that the first position of the first care provider and the second position of the second care provider are at least one of greater than the threshold distance from the screen or outside of the room of the patient: causing the screen to present a third GUI; or causing the screen to enter a dormant state displaying an absence of information about the patient (see Doyle, III et al. Fig. 12, Col 32 || 27-46, Col 34 || 26-44, Col 35 || 22-42 and Col 50 || 32-42 (e.g. GUI). As per Claim 13, Doyle, III et al. teaches a method, comprising: -- presenting, by a display of an electronic whiteboard mounted in a room associated with a patient, a first Graphical User Interface (GUI) associated with the patient (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 41 || 22-57, Col 49 || 55-67 and Col 50 || 1-8, 32-42 , Col 49 || 55-67 and Col 50 || 1-8, 32-42 (e.g. GUI)); -- detecting, based on first location data provided by a location sensor communicatively coupled with the electronic whiteboard, a distance between the patient and the electronic whiteboard, wherein the location sensor comprises at least one of a microphone or a camera (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 35 || 22-42 and Col 50 || 32-42 (e.g. GUI)); -- adjusting a font size of the first GUI, presented by the display of the electronic whiteboard, based on the distance between the patient and the electronic whiteboard detected by the location sensor (see Doyle, III et al. 
Col 37 || 4-31); -- determining, based on second location data provided by the location sensor, that a position of a care provider is at least one of: within a threshold distance of the electronic whiteboard, output device or within the room associated with the patient (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 41 || 22-57, Col 49 || 55-67 and Col 50 || 1-8, 32-42 and Col 50 || 32-42 (e.g. GUI)); and -- based on determining that the position of the care provider is within the threshold distance of the electronic whiteboard or within the room associated with the patient, causing the display of the electronic whiteboard to change from presenting the first GUI to presenting, a second GUI, the second GUI being different than the first GUI (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 41 || 22-57, Col 49 || 55-67 and Col 50 || 1-8, 32-42 , Col 49 || 55-67 and Col 50 || 1-8, 32-42 (e.g. GUI)). As per Claim 14, Doyle, III et al. teaches the method of claim 13, further comprising: -- identifying, based on at least one of an electronic medical record (EMR) of the patient or one or more vital signs of the patient, a condition of the patient (see Doyle, III et al. Col 8 || 60-67; In some examples, the one or more servers of the medical care facility 110 share medical-related data directly with a record service (not shown), and the record service makes the medical-related data available to the transformative integration engine 102 and/or the transaction management engine 104. Once an electronic medical record is updated at the medical care facility 110, an indication of the update may be provided to the record service); and -- causing based on identifying the condition of the patient, the display of the electronic whiteboard to present, a third GUI associated with the condition of the patient (see Doyle, III et al. Fig. 12 [e.g. 
mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 49 || 55-67 and Col 50 || 1-8, 32-42, Col 49 || 55-67 and Col 50 || 1-8, 32-42 (e.g. GUI)). As per Claim 16, Doyle, III et al. teaches the method of claim 13, further comprising: -- receiving, by an input device or the location sensor, an input signal from the patient or the care provider (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44 and Col 50 || 32-42 (e.g. GUI); The user devices 904 may be any type of computing device such as, but not limited to, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a server computer, a thin-client device, a tablet PC, etc. Additionally, user devices 904 may include a wearable technology device, such as a watch, wristband, earpiece, a pair of glasses, or any other suitable wearable technology. In addition, the user device may include location tracking technology, such as a real time location system (RTLS) tag. The user device 904 may include one or more processors 910 capable of processing user input. The user device 904 may also include one or more input sensors 912 for receiving user input. As is known in the art, there are a variety of input sensors 912 capable of detecting user input, such as accelerometers, cameras, microphones, or any other suitable sensor device. The user input obtained by the input sensors may be from a variety of data input types, including, but not limited to, audio data, visual data, or biometric data.); and -- updating an emergency medical record (EMR) of the patient based on the input signal (see Doyle, III et al. Col 8 || 60-67; In some examples, the one or more servers of the medical care facility 110 share medical-related data directly with a record service (not shown), and the record service makes the medical-related data available to the transformative integration engine 102 and/or the transaction management engine 104. 
Once an electronic medical record is updated at the medical care facility 110, an indication of the update may be provided to the record service). As per Claim 19, Doyle, III et al. teaches the method of claim 13, wherein the first GUI comprises at least one of an identity of the care provider, contact information associated with the care provider, an ambulation instruction, educational materials about a condition of the patient, a care schedule of the patient, or a game related to the condition of the patient (see Doyle, III et al. Col 43 || 17-23; At 1412, the interface layer module 1410 may identify the presenter of the information. The presenter may be identified based on RTLS data, facial recognition, a user login, or any number of suitable identifications means. Once the presenter is identified, presenter requirements (a set of user requirements associated with a presenter) for that presenter may be identified). As per Claim 20, Doyle, III et al. teaches the method of claim 13, wherein the second GUI comprises at least one of contact information of the patient, one or more care goals of the patient, a pain scale of the patient, one or more timers, diagnostic information about the patient, one or more tasks to be completed by the care provider, one or more conditions of the patient, one or more vital signs of the patient, or a discharge plan of the patient (see Doyle, III et al. Col 39 || 34-44; FIG. 12 shows a diagram 1200 which depicts an example of setting and automatic tracking of user goals in accordance with at least one embodiment of the invention. In example embodiment 1200, a display device 1202 is used to depict one or more user goals 1204. The display device 1202 may also depict a progress indicator 1206 associated with one or more of the user goals 1204. 
The progress indicator 1206 may be any type of indicator, such as a progress bar, a percentage, a notification of remaining time, or any other suitable notice that indicates that a goal has been at least partially completed). As per Claim 22, Doyle, III et al. teaches the electronic whiteboard of claim 8, wherein the at least one sensor: comprises the microphone array, and is configured to detect the first position of the first care provider and the second position of the second care provider, relative to the screen, based on magnitudes of the voices of the first care provider and the second care provider detected by the microphone array (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 41 || 22-57, Col 49 || 55-67 and Col 50 || 1-8, 32-42 , Col 49 || 55-67 and Col 50 || 1-8, 32-42 (e.g. GUI)). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 4-5, 17-18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Doyle, III et al. as applied to claims 1, 3, 6 8, 10-14, 16, 19-20 and 22 above, and further in view of Pub. No.: US 20120293422 A1 to Collins et al. As per Claim 4, Doyle, III et al. 
teaches the system of claim 1, wherein the sensor comprises a microphone, configured to capture voice data associated with a voice of at least one of the patient, the care provider or the visitor, the voice data indicates the language used by the at least one of the patient, the care provider, or the visitor, and dynamically determining the presentation format comprises (see Doyle, III et al. Fig. 12 [e.g. mounted display 1202], Col 32 || 27-46, Col 34 || 26-44, Col 39 || 18-33, Col 41 || 22-57, Col 49 || 55-67 and Col 50 || 1-8, 32-42 , Col 49 || 55-67 and Col 50 || 1-8, 32-42 (e.g. GUI)); Doyle, III et al. fails to teach: -- using the language to present the at least one of the first information or the second information via the electronic whiteboard display. Collins et al. teaches an audio interface including a microphone 116 coupled to a voice recognition module (112 of FIG. 1) and a speaker coupled to a voice synthesizer (110 of FIG. 1). For example, if the patient is speaking, but the EMT does not recognize the native language of the patient, the EMT can press a "search" icon 212 on the touchscreen 102, wherein the microphone and voice recognition module can collect and provide a speech sample of the patient to the processor (104 of FIG. 1), which can compare the speech sample to known language samples pre-stored in the memory (106 of FIG. 1) in order to determine the native language of the patient. Alternatively, if the EMT recognizes (but does not understand) the patient's native language, the EMT can press a "language" icon 218 on the touchscreen 102, wherein the EMT can be presented a list of languages of pre-stored dialogue in the memory of the tablet, or the EMT can type in the recognized language using a keypad (not shown) of the tablet. The processor will direct the voice synthesizer to then ask the patient if the selected language is their native language. 
This instruction can be provided through the speaker 114, and can additionally be provided as text 224, with a translation that can be understood by the EMT, e.g. "Parlez-vous francais? Do you speak French?" Responses from the patient can also be recognized and shown on the touchscreen as text 224, with a translation that can be understood by the EMT (see Collins et al. paragraph 16). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include systems/methods as taught by reference Collins et al. with the systems/methods as taught by reference Doyle, III et al. with the motivation of providing a voice recognition module, thereby delivering communication techniques that allows a patient which speaks a different language than the provider and can additionally be provided as text, with a translation that can be understood by the (see Collins et al. paragraphs 10 and 16). As per Claim 5, Doyle, III et al., and Collins et al. teaches the system of claim 4, wherein: -- the voice of the care provider (see Doyle, III et al. Col 36 || 49-67; For example, a particular presenter may wish to present information in a particular order when speaking with users. In this example, the presenter could customize the order and/or style that the information is presented in and each time that the presenter presents this information, the customization will be applied. The presenter may also customize the level of detail that is presented to the user. In addition to formats and presentation styles, this customization may also include specific gesture or voice activation customizations. For example, a presenter may like to use a specific hand gesture in order to highlight a specific section of a document such as an x-ray image or a chart that a user should focus on. Customization of these gestures or commands will be described in greater detail with relation to FIG. 11 below. 
In accordance with at least one embodiment, information or actions may be associated with voice commands or keywords. For example, a presenter may wish to have the system present a definition or additional educational material when he mentions a specific medical condition or term.); and -- the instruction causes a transceiver of the system to transmit, to one or more electronic medical record (EMR) servers, EMR data associated with the patient and based on the voice data (see Doyle, III et al. Col 4 || 6-8, Col 8 || 65-67, Col 9 || 1-3 and 35-53; The medical-related data is transmitted throughout the medical provider network 100 in accordance with any suitable transmission protocol), -- the microphone is configured to capture the voice data as the care provider is treating the patient (see Doyle, III et al. Col 36 || 49-67; For example, a particular presenter may wish to present information in a particular order when speaking with users. In this example, the presenter could customize the order and/or style that the information is presented in and each time that the presenter presents this information, the customization will be applied. The presenter may also customize the level of detail that is presented to the user. In addition to formats and presentation styles, this customization may also include specific gesture or voice activation customizations. For example, a presenter may like to use a specific hand gesture in order to highlight a specific section of a document such as an x-ray image or a chart that a user should focus on. Customization of these gestures or commands will be described in greater detail with relation to FIG. 11 below. In accordance with at least one embodiment, information or actions may be associated with voice commands or keywords. For example, a presenter may wish to have the system present a definition or addition
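As a reading aid only (not part of the record), the proximity-driven display behavior recited in claims 8 and 13 can be sketched in a few lines: font size scales with the patient's distance from the whiteboard, and the display switches to a provider GUI when a care provider comes within a threshold distance or enters the room. The threshold value, base font size, and clamping range below are illustrative assumptions; neither the application nor the Doyle reference recites specific numbers.

```python
THRESHOLD_M = 2.0      # assumed threshold distance in meters; the claims recite no value
BASE_FONT_PT = 18      # assumed base font size at a 1 m viewing distance

def font_size_for_distance(distance_m: float) -> int:
    """Scale font size with viewer distance, clamped to a legible range."""
    return max(12, min(72, round(BASE_FONT_PT * distance_m)))

def select_gui(provider_distance_m: float, provider_in_room: bool) -> str:
    """Switch from the patient GUI to a provider GUI when the provider is
    within the threshold distance of the whiteboard or inside the room."""
    if provider_distance_m <= THRESHOLD_M or provider_in_room:
        return "provider_gui"
    return "patient_gui"
```

The sketch captures only the claimed decision logic; the claims tie these steps to specific hardware (location sensor, microphone array, EMR transceiver) that the rejection maps to Doyle's Figs. 12-13.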

Prosecution Timeline

May 12, 2022
Application Filed
Nov 14, 2024
Non-Final Rejection — §101, §103
Feb 14, 2025
Response Filed
May 03, 2025
Final Rejection — §101, §103
Aug 08, 2025
Applicant Interview (Telephonic)
Aug 08, 2025
Examiner Interview Summary
Aug 18, 2025
Request for Continued Examination
Aug 29, 2025
Response after Non-Final Action
Dec 05, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592309
AUTOMATED DETECTION OF LUNG CONDITIONS FOR MONITORING THORACIC PATIENTS UNDERGOING EXTERNAL BEAM RADIATION THERAPY
2y 5m to grant Granted Mar 31, 2026
Patent 12548648
A METHOD OF TREATMENT OR PROPHYLAXIS
2y 5m to grant Granted Feb 10, 2026
Patent 12488878
Aligning Image Data of a Patient with Actual Views of the Patient Using an Optical Code Affixed to the Patient
2y 5m to grant Granted Dec 02, 2025
Patent 12205698
ADVISING DIABETES MEDICATIONS
2y 5m to grant Granted Jan 21, 2025
Patent 12046350
METHODS AND SYSTEMS FOR CALCULATING AN EDIBLE SCORE IN A DISPLAY INTERFACE
2y 5m to grant Granted Jul 23, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
20%
Grant Probability
52%
With Interview (+31.5%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
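The headline projections follow from simple arithmetic on the examiner's career counts, which can be reproduced as below. The counts and the +31.5% lift are taken from this page; treating the lift as a simple additive adjustment is an assumption about how the tool combines them, not a statistical claim.

```python
granted, resolved = 74, 370          # career counts shown above
allow_rate = granted / resolved      # baseline grant probability: 0.20
interview_lift = 0.315               # reported lift from conducting an interview

# Assumed additive model: baseline plus lift, about 0.515, shown as 52%
with_interview = allow_rate + interview_lift

print(f"baseline {allow_rate:.0%}, with interview {with_interview:.0%}")
```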
