Prosecution Insights
Last updated: April 19, 2026
Application No. 19/041,771

SYSTEM AND APPARATUS FOR REMOTE MONITORING AND COMMUNICATION

Non-Final Office Action (§102, §103, Double Patenting)

Filed: Jan 30, 2025
Examiner: TRAN, LOI H
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Artisight, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 10m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% (394 granted / 611 resolved; +6.5% vs. Tech Center average)
Interview Lift: +23.6% (strong; allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 10m average prosecution; 25 applications currently pending
Career History: 636 total applications across all art units
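As a quick arithmetic check (an illustrative sketch, not part of the report), the career allow rate shown above follows directly from the granted/resolved counts:

```python
# Allowance-rate arithmetic behind the card above (counts taken from the page).
granted, resolved = 394, 611
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 64.5%, shown rounded to 64% on the card
```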

Statute-Specific Performance

§101: 6.3% (-33.7% vs. Tech Center average)
§103: 54.9% (+14.9% vs. Tech Center average)
§102: 14.8% (-25.2% vs. Tech Center average)
§112: 12.5% (-27.5% vs. Tech Center average)

Tech Center averages are estimates. Based on career data from 611 resolved cases.

Office Action

Grounds: §102, §103, double patenting
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 102

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

If the exception under 35 U.S.C. 102(b)(2)(C) is properly invoked, a disqualified U.S. patent document is not prior art under 35 U.S.C. 102(a)(2) as of its effectively filed date (for both anticipation and obviousness rejections), but it may still be used as prior art under 35 U.S.C. 102(a)(1) as of its publication or issue date. In addition, the examiner may make a subsequent, new double patenting rejection based upon the disqualified reference. See MPEP 717.02 and 2154.02.

4. Claims 1-4, 7-8, 12, 14-15, 18, 20, 23-26, and 28-30 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Shelton (US Publication 2023/0064821).
Regarding claim 1, Shelton discloses a remote communication device for facilitating communication between remote users (Shelton, fig. 1, ref. 106, hub; fig. 3; para. 1052, "surgical hub 106"; fig. 81, ref. 5104, "Surgical Hub") comprising: a plurality of network interfaces, each network interface configured to connect with a respective external device of one or more external devices (Shelton, fig. 1, ref. 106, hub; fig. 3; para. 1052, "surgical hub 106"; fig. 81, ref. 5104, "Surgical Hub"); one or more network communication processors, configured to communicate data with the one or more external devices connected to the plurality of network interfaces (Shelton, fig. 1, ref. 106, ref. 108-112; fig. 3; para. 1052, the surgical system 102 includes a visualization system 108, a robotic system 110, and a handheld intelligent surgical instrument 112, which are configured to communicate with one another and/or the hub 106; fig. 81, ref. 5126, "modular devices", "patient monitoring devices"); and one or more machine learning models configured to process one or more data inputs received from the one or more external devices (Shelton, para. 1566, the situational awareness system of the surgical hub 5104 can be configured to derive the contextual information from the data received from the data sources 5126 in a variety of different ways. In one exemplification, the situational awareness system includes a pattern recognition system, or machine learning system (e.g., an artificial neural network), that has been trained on training data to correlate various inputs (e.g., data from databases 5122, patient monitoring devices 5124, and/or modular devices 5102) to corresponding contextual information regarding a surgical procedure. In other words, a machine learning system can be trained to accurately derive contextual information regarding a surgical procedure from the provided inputs).
Regarding claim 2, Shelton discloses the device of claim 1, wherein: the device further comprises a memory unit configured to store data received from the one or more external devices (Shelton, fig. 3, ref. 134 and para. 1067, storage array).

Regarding claim 3, Shelton discloses the device of claim 2, wherein: at least one of the one or more machine learning models is configured to: receive data from the one or more external devices; determine an indication of whether an event has occurred based at least in part on the received data; and output the indication of whether the event has occurred (Shelton, paras. 1561-1578, the surgical hub is configured to communicate with a surgical instrument, the surgical hub comprising: a processor; and a memory coupled to the processor, the memory storing instructions executable by the processor to: receive a first data set associated with a surgical procedure, wherein the first data set is generated at a first time; receive a second data set associated with the efficacy of the surgical procedure, wherein the second data set is generated at a second time, wherein the second time is separate and distinct from the first time; anonymize the first and second data sets by removing information that identifies a patient, a surgery, or a scheduled time of the surgery; and store the first and second anonymized data sets to generate a data pair grouped by surgery. The present disclosure further provides a surgical hub, wherein the memory stores instructions executable by the processor to reconstruct a series of chronological events based on the data pair; para. 1566, the situational awareness system of the surgical hub 5104 can be configured to derive the contextual information from the data received from the data sources 5126.
The situational awareness system includes a pattern recognition system, or machine learning system (e.g., an artificial neural network), that has been trained on training data to correlate various inputs (e.g., data from databases 5122, patient monitoring devices 5124, and/or modular devices 5102) to corresponding contextual information regarding a surgical procedure. In other words, a machine learning system can be trained to accurately derive contextual information regarding a surgical procedure from the provided inputs. In another exemplification, the situational awareness system can include a lookup table storing pre-characterized contextual information regarding a surgical procedure in association with one or more inputs (or ranges of inputs) corresponding to the contextual information. In response to a query with one or more inputs, the lookup table can return the corresponding contextual information for the situational awareness system for controlling the modular devices 5102; paras. 1580-1584, as the process 5000a continues, the control circuit of the surgical hub 5104 can derive 5006a contextual information from the data received 5004a from the data sources 5126. The contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room), or the type of tissue being operated on. The control circuit can derive 5006a contextual information according to data from either an individual data source 5126 or combinations of data sources 5126. Further, the control circuit can derive 5006a contextual information according to, for example, the type(s) of data that it receives, the order in which the data is received, or particular measurements or values associated with the data.
For example, if the control circuit receives data from an RF generator indicating that the RF generator has been activated, the control circuit could thus infer that the RF electrosurgical instrument is now in use and that the surgeon is or will be performing a step of the surgical procedure utilizing the particular instrument. As another example, if the control circuit receives data indicating that a laparoscope imaging device has been activated and an ultrasonic generator is subsequently activated, the control circuit can infer that the surgeon is on a laparoscopic dissection step of the surgical procedure due to the order in which the events occurred. As yet another example, if the control circuit receives data from a ventilator indicating that the patient's respiration is below a particular rate, then the control circuit can determine that the patient is under anesthesia; the surgical hub 5104 can receive 5002d, 5004d perioperative data from an insufflator and a medical imaging device indicating that both devices have been activated and paired to the surgical hub 5104, derive 5006d the contextual information therefrom that a video-assisted thoracoscopic surgery (VATS) procedure is being performed, determine 5008d that the displays connected to the surgical hub 5104 should be set to display particular views or information associated with the procedure type, and then control 5010d the displays accordingly). 
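The rule-based inference pattern the cited passages describe — deriving a procedural context from which devices report activity, in what order, and with what values — can be sketched as follows. This is an illustrative sketch only; the device names and the respiration threshold are hypothetical, not taken from Shelton or the claims:

```python
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device: str   # e.g., "rf_generator", "ventilator" (illustrative names)
    value: float  # reading or activation flag

def infer_context(events: list[DeviceEvent]) -> list[str]:
    """Derive contextual information in the spirit of the cited 'situational
    awareness' passages: inferences depend on which devices report, the
    order of events, and measured values."""
    context = []
    devices = [e.device for e in events]
    # Activation of an RF generator implies the RF instrument is in use.
    if "rf_generator" in devices:
        context.append("rf_instrument_in_use")
    # Laparoscope activated, then ultrasonic generator: dissection step.
    if "laparoscope" in devices and "ultrasonic_generator" in devices:
        if devices.index("laparoscope") < devices.index("ultrasonic_generator"):
            context.append("laparoscopic_dissection_step")
    # Respiration below a threshold suggests the patient is under anesthesia.
    for e in events:
        if e.device == "ventilator" and e.value < 8.0:  # breaths/min, illustrative
            context.append("patient_under_anesthesia")
    return context
```

The same dispatch could equally be backed by the lookup table Shelton describes as an alternative exemplification, mapping input combinations to pre-characterized contextual information.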
Regarding claim 4, Shelton discloses the device of claim 3, wherein: at least one of the one or more external devices is a camera; and determining an indication of whether the event has occurred comprises: determining whether a person is within a frame of the camera (Shelton, paras. 1392-1394, 1559, 1589, 1609, and 1745, the imaging device 124 can be in the form of an endoscope, including a camera and a light source positioned at a remote surgical site, and configured to provide a livestream of the remote surgical site at the primary display; image recognition algorithms can be implemented to identify features or objects in still frames of a surgical site that are captured by the frame grabber 3200. Useful information pertaining to the surgical steps associated with the captured frames can be derived from the identified features. For example, identification of staples in the captured frames indicates that a tissue-stapling surgical step has been performed at the surgical site. The type, color, arrangement, and size of the identified staples can also be used to derive useful information regarding the staple cartridge and the surgical instrument employed to deploy the staples. As described above, such information can be overlaid on a livestream directed to a primary display 119 in the operating room).

Regarding claim 7, Shelton discloses the device of claim 3, wherein: at least one of the one or more external devices is configured to gather health data associated with a patient; the received data is the health data associated with the patient; and determining an indication of whether the event has occurred comprises: determining whether the patient has had a health event based at least in part on the health data associated with the patient (Shelton, paras. 1565-1578, FIG.
81 illustrates a diagram of a situationally aware surgical system 5100; the data sources 5126 include the modular devices 5102 (which can include sensors configured to detect parameters associated with the patient and/or the modular device itself), databases 5122 (e.g., an EMR database containing patient records), and patient monitoring devices 5124 (e.g., a blood pressure (BP) monitor and an electrocardiography (EKG) monitor). The surgical hub 5104 can be configured to derive the contextual information pertaining to the surgical procedure from the data based upon, for example, the particular combination(s) of received data or the particular order in which the data is received from the data sources 5126. The contextual information inferred from the received data can include, for example, the type of surgical procedure being performed, the particular step of the surgical procedure that the surgeon is performing, the type of tissue being operated on, or the body cavity that is the subject of the procedure. This ability by some aspects of the surgical hub 5104 to derive or infer information related to the surgical procedure from received data can be referred to as "situational awareness". The surgical hub 5104 can incorporate a situational awareness system, which is the hardware and/or programming associated with the surgical hub 5104 that derives contextual information pertaining to the surgical procedure from the received data).

Regarding claim 8, Shelton discloses the device of claim 4, wherein: the one or more machine learning models is further configured to remove identifying features of the person determined to be within the frame of the camera (Shelton, paras. 1565-1578, FIG.
81 illustrates a diagram of a situationally aware surgical system 5100; the data sources 5126 include the modular devices 5102 (which can include sensors configured to detect parameters associated with the patient and/or the modular device itself), databases 5122 (e.g., an EMR database containing patient records), and patient monitoring devices 5124 (e.g., a blood pressure (BP) monitor and an electrocardiography (EKG) monitor). The surgical hub 5104 can be configured to derive the contextual information pertaining to the surgical procedure from the data based upon, for example, the particular combination(s) of received data or the particular order in which the data is received from the data sources 5126. The contextual information inferred from the received data can include, for example, the type of surgical procedure being performed, the particular step of the surgical procedure that the surgeon is performing, the type of tissue being operated on, or the body cavity that is the subject of the procedure. This ability by some aspects of the surgical hub 5104 to derive or infer information related to the surgical procedure from received data can be referred to as "situational awareness". The surgical hub 5104 can incorporate a situational awareness system, which is the hardware and/or programming associated with the surgical hub 5104 that derives contextual information pertaining to the surgical procedure from the received data).

(Claim 11 depends from claim 9 and claim 9 is not rejected in this group.)

Regarding claim 12, Shelton discloses the device of claim 1, wherein: each network interface of the plurality of network interfaces is further configured to deliver power to the respective external devices (Shelton, para. 1121, the USB network hub 300 can connect 127 functions configured in up to six logical layers (tiers) to a single computer.
Further, the USB network hub 300 can connect to all peripherals using a standardized four-wire cable that provides both communication and power distribution. The power configurations are bus-powered and self-powered modes. The USB network hub 300 may be configured to support four modes of power management: a bus-powered hub, with either individual-port power management or ganged-port power management, and the self-powered hub, with either individual-port power management or ganged-port power management. In one aspect, using a USB cable, the upstream USB transceiver port 302 of the USB network hub 300 is plugged into a USB host controller, and the downstream USB transceiver ports 304, 306, 308 are exposed for connecting USB-compatible devices, and so forth).

Regarding claim 14, Shelton discloses the device of claim 12, wherein: at least a subset of the plurality of network interfaces are universal serial bus (USB) ports configured to deliver power and transmit data to the respective external devices (Shelton, paras. 1101 and 1121, the USB network hub 300 can connect 127 functions configured in up to six logical layers (tiers) to a single computer. Further, the USB network hub 300 can connect to all peripherals using a standardized four-wire cable that provides both communication and power distribution. The power configurations are bus-powered and self-powered modes. The USB network hub 300 may be configured to support four modes of power management: a bus-powered hub, with either individual-port power management or ganged-port power management, and the self-powered hub, with either individual-port power management or ganged-port power management. In one aspect, using a USB cable, the upstream USB transceiver port 302 of the USB network hub 300 is plugged into a USB host controller, and the downstream USB transceiver ports 304, 306, 308 are exposed for connecting USB-compatible devices, and so forth).
Regarding claim 15, Shelton discloses the device of claim 1, wherein: the one or more network communication processors are further configured to: receive, from a management station, software for controlling or updating firmware of the one or more external devices; control each of the respective external devices of the one or more external devices; and update the firmware of the one or more external devices (Shelton, paras. 2108-2118, control program update).

Regarding claim 18, Shelton discloses the device of claim 15, wherein: the one or more network communication processors are further configured to transmit data associated with the device and the one or more external devices to the management station (Shelton, paras. 1051, 1578, 1655-1657, and 2274, the data generated by the various surgical devices and medical hubs about the patient and the medical procedure may be transmitted to the cloud-based medical analytics system. The control circuit can control 5010a the modular devices 5102 according to the determined 5008a control adjustment by, for example, transmitting the control adjustments to the particular modular device to update the modular device's 5102 programming).

Regarding claim 20, Shelton discloses the device of claim 1, wherein: at least one of the one or more external devices is a camera; at least one of the one or more external devices is a microphone; and the one or more network communication processors are further configured to: transmit first data collected by the camera and the microphone across a network to a receiving device configured to output the first data; receive second data collected by a second camera and a second microphone associated with the receiving device; and output the second data (Shelton, para. 1753, FIG.
131 illustrates a communication system 6370 comprising an intermediate signal combiner 6372 positioned in the communication path between an imaging module 238 and a surgical hub display 217, according to one aspect of the present disclosure. The signal combiner 6372 receives image data from an imaging module 238 in the form of short range wireless or wired signals. The signal combiner 6372 also receives audio and image data from a headset 6374 and combines the image data from the imaging module 238 with the audio and image data from the headset 6374. The surgical hub 206 receives the combined data from the combiner 6372 and overlays the data provided to the display 217, where the overlaid data is displayed. The signal combiner 6372 may communicate with the surgical hub 206 via wired or wireless signals. The headset 6374 receives image data from an imaging device 6376 coupled to the headset 6374 and receives audio data from an audio device 6378 coupled to the headset 6374. The imaging device 6376 may be a digital video camera and the audio device 6378 may be a microphone. In one aspect, the signal combiner 6372 may be an intermediate short range wireless, e.g., Bluetooth, signal combiner. The signal combiner 6372 may comprise a wireless heads-up display adapter to couple to the headset 6374 placed into the communication path of the display 217 to a console allowing the surgical hub 206 to overlay data onto the screen of the display 217. Security and identification of requested pairing may augment the communication techniques. The imaging module 238 may be coupled to a variety of imaging devices such as an endoscope 239, laparoscope, etc., for example).

Claim 23 is rejected for the same reasons set forth in claim 1. Shelton further discloses transmitting the one or more outputs to a management station over a communication network (Shelton, fig. 1, ref. 104, cloud, and ref. 113, server, and para.
1051, the data generated by the various surgical devices and medical hubs about the patient and the medical procedure may be transmitted to the cloud-based medical analytics system. This data may then be aggregated with similar data gathered from many other surgical hubs and surgical devices located at other medical facilities. Various patterns and correlations may be found through the cloud-based analytics system analyzing the collected data. Improvements in the techniques used to generate the data may be generated as a result, and these improvements may then be disseminated to the various surgical hubs and surgical devices; see also paras. 1578, 1655-1657, and 2274, showing that device management can be performed by the cloud computing system 104 and/or a control circuit of the surgical hub 5104).

Regarding claim 24, Shelton discloses the method of claim 23, wherein: at least one of the one or more external devices is a camera; the first location is a patient room within a medical facility; and determining one or more outputs associated with an event occurring at the first location comprises determining whether the patient room is occupied by a patient by: providing image data received from the camera to the one or more machine learning models; and determining, using the one or more machine learning models, one or more features of the patient room indicative of whether the patient room is occupied (Shelton, paras. 1392-1395, the imaging device 124 can be in the form of an endoscope, including a camera and a light source positioned at a remote surgical site; paras. 1424-1425, the surgical hub 106 can be configured to utilize patient data received from a heart rate monitor connected along with data regarding the location of the surgical site to assess proximity of the surgical site to sensory nerves.
An increase in the patient's heart rate, when combined with anatomical data indicating that the surgical site is in a region high in sensory nerves, can be construed as an indication of sensory nerve proximity; the surgical hub 106 may be configured to determine the type of surgical procedure being performed on a patient from data received from one or more of the operating-room monitoring devices; paras. 1426-1483, location of operating room (OR) where the surgery has occurred; paras. 1580 and 1587, the contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room); the perioperative data that can be received by the situational awareness system of the surgical hub 5104 from the patient monitoring devices 5124 can include, for example, the patient's oxygen saturation, blood pressure, heart rate, and other physiological parameters. The contextual information that can be derived by the surgical hub 5104 from the perioperative data transmitted by the patient monitoring devices 5124 can include, for example, whether the patient is located in the operating theater or under anesthesia. The surgical hub 5104 can derive these inferences from data from the patient monitoring devices 5124 alone or in combination with data from other data sources 5126 (e.g., the ventilator 5118)).
Regarding claim 25, Shelton discloses the method of claim 24, wherein determining one or more features of the patient room indicative of whether the patient room is occupied comprises: determining whether the patient is within a frame of the camera based at least in part on the image data (Shelton, paras. 1392-1395, the imaging device 124 can be in the form of an endoscope, including a camera and a light source positioned at a remote surgical site; paras. 1424-1425, the surgical hub 106 can be configured to utilize patient data received from a heart rate monitor connected along with data regarding the location of the surgical site to assess proximity of the surgical site to sensory nerves. An increase in the patient's heart rate, when combined with anatomical data indicating that the surgical site is in a region high in sensory nerves, can be construed as an indication of sensory nerve proximity; the surgical hub 106 may be configured to determine the type of surgical procedure being performed on a patient from data received from one or more of the operating-room monitoring devices; paras. 1424-1483, location of operating room (OR) where the surgery has occurred; para. 1425, the surgical hub 106 may be configured to determine the type of surgical procedure being performed on a patient from data received from one or more of the operating-room monitoring devices, such as, for example, heart rate monitors and insufflation pumps; paras. 1580 and 1587, the contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room); the perioperative data that can be received by the situational awareness system of the surgical hub 5104 from the patient monitoring devices 5124 can include, for example, the patient's oxygen saturation, blood pressure, heart rate, and other physiological parameters. The contextual information that can be derived by the surgical hub 5104 from the perioperative data transmitted by the patient monitoring devices 5124 can include, for example, whether the patient is located in the operating theater or under anesthesia. The surgical hub 5104 can derive these inferences from data from the patient monitoring devices 5124 alone or in combination with data from other data sources 5126 (e.g., the ventilator 5118)).

Regarding claim 26, Shelton discloses the method of claim 24, wherein determining one or more features of the patient room indicative of whether the patient room is occupied comprises: determining a presence in the patient room of at least one of: personal belongings of the patient, ruffled or missing sheets on a bed in the patient room, and/or one or more visitors in the patient room (Shelton, para.
1587, the perioperative data that can be received by the situational awareness system of the surgical hub 5104 from the patient monitoring devices 5124 can include, for example, the patient's oxygen saturation, blood pressure, heart rate, and other physiological parameters. The contextual information that can be derived by the surgical hub 5104 from the perioperative data transmitted by the patient monitoring devices 5124 can include, for example, whether the patient is located in the operating theater or under anesthesia. The surgical hub 5104 can derive these inferences from data from the patient monitoring devices 5124 alone or in combination with data from other data sources 5126 (e.g., the ventilator 5118); see also paras. 1392-1395, 1559, 1587, 1609, and 1745).

Regarding claim 28, Shelton discloses the method of claim 23, wherein: receiving data from one or more external devices comprises receiving health data associated with a patient from at least one external device of the one or more external devices configured to gather health data (Shelton, paras. 1561-1578, the surgical hub is configured to communicate with a surgical instrument, the surgical hub comprising: a processor; and a memory coupled to the processor, the memory storing instructions executable by the processor to: receive a first data set associated with a surgical procedure, wherein the first data set is generated at a first time; receive a second data set associated with the efficacy of the surgical procedure, wherein the second data set is generated at a second time, wherein the second time is separate and distinct from the first time; anonymize the first and second data sets by removing information that identifies a patient, a surgery, or a scheduled time of the surgery; and store the first and second anonymized data sets to generate a data pair grouped by surgery.
The present disclosure further provides a surgical hub, wherein the memory stores instructions executable by the processor to reconstruct a series of chronological events based on the data pair; para. 1566, the situational awareness system of the surgical hub 5104 can be configured to derive the contextual information from the data received from the data sources 5126. The situational awareness system includes a pattern recognition system, or machine learning system (e.g., an artificial neural network), that has been trained on training data to correlate various inputs (e.g., data from databases 5122, patient monitoring devices 5124, and/or modular devices 5102) to corresponding contextual information regarding a surgical procedure. In other words, a machine learning system can be trained to accurately derive contextual information regarding a surgical procedure from the provided inputs. In another exemplification, the situational awareness system can include a lookup table storing pre-characterized contextual information regarding a surgical procedure in association with one or more inputs (or ranges of inputs) corresponding to the contextual information. In response to a query with one or more inputs, the lookup table can return the corresponding contextual information for the situational awareness system for controlling the modular devices 5102; paras. 1580-1584, as the process 5000a continues, the control circuit of the surgical hub 5104 can derive 5006a contextual information from the data received 5004a from the data sources 5126. The contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room), or the type of tissue being operated on.
The control circuit can derive 5006a contextual information according to data from either an individual data source 5126 or combinations of data sources 5126. Further, the control circuit can derive 5006a contextual information according to, for example, the type(s) of data that it receives, the order in which the data is received, or particular measurements or values associated with the data. For example, if the control circuit receives data from an RF generator indicating that the RF generator has been activated, the control circuit could thus infer that the RF electrosurgical instrument is now in use and that the surgeon is or will be performing a step of the surgical procedure utilizing the particular instrument. As another example, if the control circuit receives data indicating that a laparoscope imaging device has been activated and an ultrasonic generator is subsequently activated, the control circuit can infer that the surgeon is on a laparoscopic dissection step of the surgical procedure due to the order in which the events occurred. As yet another example, if the control circuit receives data from a ventilator indicating that the patient's respiration is below a particular rate, then the control circuit can determine that the patient is under anesthesia; the surgical hub 5104 can receive 5002d, 5004d perioperative data from an insufflator and a medical imaging device indicating that both devices have been activated and paired to the surgical hub 5104, derive 5006d the contextual information therefrom that a video-assisted thoracoscopic surgery (VATS) procedure is being performed, determine 5008d that the displays connected to the surgical hub 5104 should be set to display particular views or information associated with the procedure type, and then control 5010d the displays accordingly; see also paras. 1249, 1292-1295, 1299, 1464, 1484-1492, and 1566-1578).
Regarding claim 29, Shelton discloses the method of claim 28, wherein: determining one or more outputs associated with an event at the first location comprises determining whether the patient has had a health event based at least in part on the health data associated with the patient (Shelton, para’s 1580 and 1587, the contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room); the perioperative data that can be received by the situational awareness system of the surgical hub 5104 from the patient monitoring devices 5124 can include, for example, the patient's oxygen saturation, blood pressure, heart rate, and other physiological parameters.). Regarding claim 30, Shelton discloses the method of claim 23, wherein: at least one of the one or more external devices is a camera; the first location is an operating room within a medical facility; and determining one or more outputs associated with an event occurring at the first location comprises: identifying, using the one or more machine learning models, one or more features indicative of an identity of a patient in the operating room in image data received from the camera; and removing the one or more features indicative of an identity of a patient in the operating room from the image data (Shelton, para’s 1561-1578, the surgical hub is configured to communicate with a surgical instrument, the surgical hub comprising: a processor; and a memory coupled to the processor, the memory storing instructions executable by the processor to: receive a first data set associated with a surgical procedure, wherein the first data set is generated at a first time; receive a second data set associated with the efficacy of the surgical procedure, wherein the second data set is generated at a second time, wherein the second time is 
separate and distinct from the first time; anonymize the first and second data sets by removing information that identifies a patient, a surgery, or a scheduled time of the surgery; and store the first and second anonymized data sets to generate a data pair grouped by surgery. The present disclosure further provides a surgical hub, wherein the memory stores instructions executable by the processor to reconstruct a series of chronological events based on the data pair; para. 1566, the situational awareness system of the surgical hub 5104 can be configured to derive the contextual information from the data received from the data sources 5126. The situational awareness system includes a pattern recognition system, or machine learning system (e.g., an artificial neural network), that has been trained on training data to correlate various inputs (e.g., data from databases 5122, patient monitoring devices 5124, and/or modular devices 5102) to corresponding contextual information regarding a surgical procedure. In other words, a machine learning system can be trained to accurately derive contextual information regarding a surgical procedure from the provided inputs. In another exemplification, the situational awareness system can include a lookup table storing pre-characterized contextual information regarding a surgical procedure in association with one or more inputs (or ranges of inputs) corresponding to the contextual information. In response to a query with one or more inputs, the lookup table can return the corresponding contextual information for the situational awareness system for controlling the modular devices 5102; para’s 1580-1584, as the process 5000a continues, the control circuit of the surgical hub 5104 can derive 5006a contextual information from the data received 5004a from the data sources 5126. 
The contextual information can include, for example, the type of procedure being performed, the particular step being performed in the surgical procedure, the patient's state (e.g., whether the patient is under anesthesia or whether the patient is in the operating room), or the type of tissue being operated on. The control circuit can derive 5006a contextual information according to data from either an individual data source 5126 or combinations of data sources 5126. Further, the control circuit can derive 5006a contextual information according to, for example, the type(s) of data that it receives, the order in which the data is received, or particular measurements or values associated with the data. For example, if the control circuit receives data from an RF generator indicating that the RF generator has been activated, the control circuit could thus infer that the RF electrosurgical instrument is now in use and that the surgeon is or will be performing a step of the surgical procedure utilizing the particular instrument. As another example, if the control circuit receives data indicating that a laparoscope imaging device has been activated and an ultrasonic generator is subsequently activated, the control circuit can infer that the surgeon is on a laparoscopic dissection step of the surgical procedure due to the order in which the events occurred. 
As yet another example, if the control circuit receives data from a ventilator indicating that the patient's respiration is below a particular rate, then the control circuit can determine that the patient is under anesthesia; the surgical hub 5104 can receive 5002d, 5004d perioperative data from an insufflator and a medical imaging device indicating that both devices have been activated and paired to the surgical hub 5104, derive 5006d the contextual information therefrom that a video-assisted thoracoscopic surgery (VATS) procedure is being performed, determine 5008d that the displays connected to the surgical hub 5104 should be set to display particular views or information associated with the procedure type, and then control 5010d the displays accordingly; see also para’s 1249, 1292-1295, 1299, 1464, and 1484-1492). Claim Rejections - 35 USC § 103 5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. 
Considering objective evidence present in the application indicating obviousness or nonobviousness. 6. Claims 5-6 and 27 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Shelton, as applied to claims 4 and 24 above, in view of Kumar et al. (US Publication 2023/0215568). Regarding claims 5-6, Shelton discloses the device of claim 4. Shelton does not explicitly disclose but Kumar discloses wherein: determining an indication of whether the event has occurred further comprises: determining whether the person within the frame of the camera is at risk of falling; and wherein: determining an indication of whether the event has occurred further comprises: determining whether the person within the frame of the camera has fallen (Kumar, para. 0058, the memory device 144 stores machine-readable instructions that are executable by the processor 142 of the control system 140. The machine-readable instructions can include the fall risk algorithm 118, algorithms for combining total risk scores, algorithms for prioritizing resources (e.g., used by prioritizing resources module 120), algorithms for selecting subjects or groups for evaluation (e.g., used by re-evaluate module 130), and information control algorithms for controlling data flow within the various modules; para. 0065, some definitions and clarification for the listed features are as follows. Function Score from section GG refers to functional ability and includes admission and discharge self-care and mobility performance. Progress Notes are the part of a medical record where healthcare professionals record details to document a patient's clinical status or achievements during the course of a hospitalization or over the course of outpatient care. PDPM refers to Patient-Driven Payment Model. MDS refers to The Minimum Data Set which is part of a federally mandated process for clinical assessment of all residents in Medicare or Medicaid certified nursing homes. 
Environmental factors include where the individual was when the fall occurred, such as on stairs, in a shower, getting out of bed, outdoors, or on a smooth/slippery surface). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kumar’s features into Shelton’s invention for enhancing an object monitoring system by estimating an object's fall risk and providing an indication of fall occurrences. Regarding Claim 27, Shelton discloses the method of claim 24. Shelton does not explicitly disclose but Kumar discloses determining a fall risk of the patient occupying the patient room by providing the received image data as input to a second machine learning model of the one or more machine learning models, and further discloses a machine learning model of the one or more machine learning models (Kumar, para. 0058, the memory device 144 stores machine-readable instructions that are executable by the processor 142 of the control system 140. The machine-readable instructions can include the fall risk algorithm 118, algorithms for combining total risk scores, algorithms for prioritizing resources (e.g., used by prioritizing resources module 120), algorithms for selecting subjects or groups for evaluation (e.g., used by re-evaluate module 130), and information control algorithms for controlling data flow within the various modules; para. 0065, some definitions and clarification for the listed features are as follows. Function Score from section GG refers to functional ability and includes admission and discharge self-care and mobility performance. Progress Notes are the part of a medical record where healthcare professionals record details to document a patient's clinical status or achievements during the course of a hospitalization or over the course of outpatient care. PDPM refers to Patient-Driven Payment Model. 
MDS refers to The Minimum Data Set which is part of a federally mandated process for clinical assessment of all residents in Medicare or Medicaid certified nursing homes. Environmental factors include where the individual was when the fall occurred, such as on stairs, in a shower, getting out of bed, outdoors, or on a smooth/slippery surface). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kumar’s features into Shelton’s invention for enhancing an object monitoring system by estimating an object's fall risk and providing an indication of fall occurrences. (Claim 27 depends from claim 24. Claim 24 is rejected under 102 and the secondary reference, Kumar, is not discussed in this rejection. Should claim 27 be rejected under 102?) 7. Claims 9-11 and 13 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Shelton, as applied to claims 1 and 12 above, in view of Amling (US Publication 2014/0108608). Regarding claims 9-10, Shelton discloses the device of claim 1. Shelton does not explicitly disclose but Amling discloses wherein: at least one network interface of the plurality of network interfaces is configured to connect to a power source; and wherein: the at least one network interface configured to connect to a power source is an ethernet port configured to receive power from the power source using a power-over-ethernet (POE) standard (Amling, para. 0017, the network interfaces for various medical and/or operating room devices may have different maximum throughputs or maximum bandwidths. However, the network interfaces employ the same network protocol for communicating over the communications network. One lower layer is used to provide all required bandwidth capabilities. Each of the medical and/or operating room devices may further include separate or common power supply couplings; for example, "Power Over Ethernet" applications). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Amling’s features into Shelton’s invention for effectively managing network power distribution by using power-over-ethernet applications. Regarding claim 11, Shelton-Amling discloses the device of claim 9, wherein: the at least one network interface configured to connect to a power source is a universal serial bus (USB) micro-B port configured to receive power from the power source (Shelton, para. 1121, the USB network hub 300 can connect 127 functions configured in up to six logical layers (tiers) to a single computer. Further, the USB network hub 300 can connect to all peripherals using a standardized four-wire cable that provides both communication and power distribution. The power configurations are bus-powered and self-powered modes. The USB network hub 300 may be configured to support four modes of power management: a bus-powered hub, with either individual-port power management or ganged-port power management, and the self-powered hub, with either individual-port power management or ganged-port power management. In one aspect of the USB network hub 300, using a USB cable, the upstream USB transceiver port 302 is plugged into a USB host controller, and the downstream USB transceiver ports 304, 306, 308 are exposed for connecting USB compatible devices, and so forth). Regarding claim 13, Shelton discloses the device of claim 12. Shelton does not explicitly disclose but Amling discloses wherein: at least a subset of the plurality of network interfaces are ethernet ports configured to deliver power and transmit data to the respective external devices using a power-over-ethernet (POE) standard (Amling, para. 0017, the network interfaces for various medical and/or operating room devices may have different maximum throughputs or maximum bandwidths. 
However, the network interfaces employ the same network protocol for communicating over the communications network. One lower layer is used to provide all required bandwidth capabilities. Each of the medical and/or operating room devices may further include separate or common power supply couplings; for example, "Power Over Ethernet" applications). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Amling’s features into Shelton’s invention for effectively managing power distribution over a connected network by using power-over-ethernet applications. 8. Claims 16-17 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Shelton, as applied to claim 15 above, in view of Paramasivan et al. (US Publication 2022/0317827). Regarding claims 16-17, Shelton discloses the device of claim 15. Shelton does not explicitly disclose but Paramasivan discloses wherein: controlling each external device of the one or more external devices comprises: determining one or more methods associated with controlling the external device from the received software; and executing, using the one or more network communication processors, at least one of the one or more methods associated with controlling the external device; and wherein: determining one or more methods associated with controlling the external device from the received software comprises: executing one or more test methods associated with controlling the one or more external devices; and determining which of the one or more test methods control the external device (Paramasivan, para. 0008, when a device is connected to the hub, the device can send identifying information to the hub so that the hub can ascertain the identity of the device. 
Based on the identification data, the hub can access an internal graphical user interface (GUI) database to determine if there are any entries in the database that correspond to the device that is now connected to the hub. If it is determined that the internal GUI database of the hub contains one or more GUIs associated with the device, then the GUI database can transmit the corresponding GUI(s) to the electronic display for rendering at the appropriate times (for instance when the surgeon issues a command at the hub to show the GUI associated with the device). However, if the internal GUI database does not include one or more GUIs associated with the device, then the hub can transmit a request to an external GUI database (that is e.g. located on a cloud computing device) to download one or more GUIs associated with the device being connected to the hub. The external cloud-based GUI database can be updated to include GUI information for new devices as they become available, and the external GUI database can thus serve as a central repository for GUI information for devices that can be accessed by multiple hubs. The, e.g., cloud-based, GUI database can be updated by an external device that is communicatively coupled to the, e.g., cloud-based, GUI database. When a hub requests GUI information (such as GUI layout information) from the cloud-based GUI database, the database can search for corresponding entries, and transmit those entries to the requesting hub. The hub, after receiving the GUI layout information from the cloud-based GUI database, can use the received information to update its own internal GUI database, and can also use the information to render GUIs for the new device. In this way, rather than requiring the hub to update its entire software to accommodate new devices, the hub can instead retrieve the new GUI information from the cloud-based GUI database without the need to update its software; see also para. 
0013, optionally, the GUI layout information comprises information associated with a location on the electronic display of one or more graphical features to be displayed when operating the device associated with the GUI layout information, i.e. information associated with a location on the electronic display where the one or more graphical features are to be displayed when operating the device associated with the GUI layout information). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Paramasivan’s features into Shelton’s invention for effectively managing networked external devices by testing for the most desired network application from among a plurality of applications. 9. Claim 19 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Shelton, as applied to claim 1 above, in view of Kim et al. (US Publication 2023/0410717). Regarding claim 19, Shelton discloses the device of claim 1. Shelton does not explicitly disclose but Kim discloses wherein: the device further comprises an infrared (IR) transmitter; at least one of the one or more external devices comprises a display having an IR receiver; and the one or more network communication processors are further configured to control the display by causing the IR transmitter to transmit an IR signal to the IR receiver of the display (Kim, para. 0052, a user may control the display modules 110, 120, 130, etc. included in the display apparatus 100 through a remote control apparatus 10. According to various embodiments, a first display module 110 may include an IR signal receiver that receives an infrared ray (IR) signal received from the remote control apparatus 10. The first display module 110 that received an IR signal from the remote control apparatus 10 through the IR signal receiver can integrally control the plurality of display modules 110, 120, 130, etc. 
included in the display apparatus 100 by transmitting the received IR signal to second and third display modules, etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kim’s features into Shelton’s invention for effectively managing networked display devices remotely. 10. Claims 21-22 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Shelton, as applied to claim 20 above, in view of Gu et al. (US Publication 2012/0281062). Regarding claims 21-22, Shelton discloses the device of claim 20. Shelton does not explicitly disclose but Gu discloses wherein: the first data comprises a plurality of frames and transmitting the first data comprises transmitting each frame of the plurality of frames; and the one or more network communications processors are configured to: determine a dropped frame rate indicative of a number of frames of the plurality of frames that were not outputted by the receiving device; and adjust a resolution of the first data until the dropped frame rate is below a threshold dropped frame rate; and wherein: the first data comprises a plurality of frames and transmitting the first data comprises transmitting each frame of the plurality of frames; and the one or more network communications processors are configured to: determine a dropped frame rate indicative of a number of frames of the plurality of frames that were not outputted by the receiving device; and adjust a resolution of the first data until the dropped frame rate is within a threshold dropped frame rate range (Gu, para’s 0041-0044, confirming that frame rate and corresponding network packet loss rate of the video sender with the initial resolution meet the preset conditions, e.g., confirming whether the frame rate of the initial resolution meets the preset value of frame rate and whether the corresponding network packet loss rate is smaller than or equal to the preset value of the network 
packet loss rate. Network bandwidth does not necessarily represent the network's available bandwidth. Thus, by increasing the frame rate of the initial resolution, to enable it to be equivalent to the required bandwidth needed by the frame rate of the target resolution, confirming whether the frame rate of the initial resolution meets the preset value of frame rate, and whether the corresponding network packet loss rate is smaller than or equal to the preset value of the network packet loss rate. When confirming that the frame rate of the initial resolution meets the preset value of frame rate and the corresponding network packet loss rate is smaller than or equal to the preset value of the network packet loss rate, it may be considered that the network bandwidth meets the requirements, then the video sender is switched from the initial resolution to the target resolution. For example, for video data, video bandwidth = video resolution × video frame rate. Thus, increasing the frame rate of 320×240 resolution enables it to be equivalent to the required video bandwidth needed by a low frame rate of 640×480 resolution. Moreover, when there is no available video bandwidth, video data packets will be lost, and network packet loss may be detected when increasing the frame rate of 320×240 resolution. When the frame rate of 320×240 resolution is greater than or equal to the preset value of frame rate, and the corresponding network packet loss rate is smaller than or equal to the preset value of network packet loss rate, the video bandwidth meets the frame rate of 640×480 resolution; see also para’s 0082-0083, video A detects packet loss simultaneously when increasing the frame rate of video with 320×240 resolution. Increasing the frame rate of 320×240 resolution enables it to be equivalent to the bandwidth needed by the frame rate of 640×480 resolution. 
If the frame rate of 320×240 resolution is greater than or equal to 20 frames, and the corresponding network packet loss rate is smaller than or equal to 3%, then the bandwidth meets the frame rate of the video with 640×480 resolution, and proceed with block 66, otherwise end the procedure). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gu’s features into Shelton’s invention for effectively managing data transfer between networked devices by adjusting frame rate with respect to frame resolution, available bandwidth, and packet loss rate. Conclusion 11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN whose telephone number is (571)270-5645. The examiner can normally be reached 8:00AM-5:00PM PST FIRST FRIDAY OF BIWEEK OFF. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LOI H TRAN/ Primary Examiner, Art Unit 2484
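The resolution-switching check quoted from Gu in rejection 10 above can be sketched as follows. The function and parameter names are illustrative assumptions, not from the reference; the bandwidth model (bandwidth proportional to resolution times frame rate) and the 20-frame and 3% presets follow the cited paragraphs.

```python
# Illustrative sketch of the Gu-style probe: raise the frame rate at the
# current (low) resolution until it consumes the bandwidth the target
# (high) resolution would need, then switch only if that rate was
# sustained and packet loss stayed under the preset threshold.

def equivalent_probe_fps(cur_res, target_res, target_fps):
    """Frame rate at the current resolution whose bandwidth equals the
    target's, using bandwidth = width * height * fps."""
    cur_bw = cur_res[0] * cur_res[1]
    tgt_bw = target_res[0] * target_res[1] * target_fps
    return tgt_bw / cur_bw


def can_switch(achieved_fps, probe_fps, loss_rate, min_fps=20, max_loss=0.03):
    """Switch up only if the probe frame rate was sustained (>= both the
    computed probe rate and the 20-frame preset) and packet loss stayed
    at or below the 3% preset from the cited paragraphs."""
    return achieved_fps >= max(probe_fps, min_fps) and loss_rate <= max_loss


# Probing 320x240 against a 640x480 target at 5 fps requires 20 fps:
fps = equivalent_probe_fps((320, 240), (640, 480), 5)
print(fps)                         # prints: 20.0
print(can_switch(20, fps, 0.02))   # prints: True  (bandwidth and loss OK)
print(can_switch(20, fps, 0.05))   # prints: False (loss above threshold)
```

The same predicate, run in the opposite direction, matches claims 21-22's behavior of stepping resolution down until the dropped-frame (loss) rate falls below the threshold.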

Prosecution Timeline

Jan 30, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598366
CONTENT DATA PROCESSING METHOD AND CONTENT DATA PROCESSING APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12593112
METHOD, DEVICE, AND COMPUTER PROGRAM FOR ENCAPSULATING REGION ANNOTATIONS IN MEDIA TRACKS
2y 5m to grant Granted Mar 31, 2026
Patent 12592261
VIDEO EDITING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12576798
CAMERA SYSTEM AND ASSISTANCE SYSTEM FOR A VEHICLE AND A METHOD FOR OPERATING A CAMERA SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12579810
SYSTEM AND METHOD FOR AUTOMATIC EVENTS IDENTIFICATION ON VIDEO
2y 5m to grant Granted Mar 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
88%
With Interview (+23.6%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
