Prosecution Insights
Last updated: April 19, 2026
Application No. 18/547,520

SURGERY DETAILS EVALUATION SYSTEM, SURGERY DETAILS EVALUATION METHOD, AND COMPUTER PROGRAM

Final Rejection (§103, §112)
Filed: Aug 23, 2023
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 (Communications)
Assignee: Anaut Inc.
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 8m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 539 granted / 657 resolved; +20.0% vs TC avg)
Interview Lift: +10.9% (moderate; among resolved cases with an interview)
Avg Prosecution: 2y 8m (typical timeline); 34 applications currently pending
Total Applications: 691 (career total, across all art units)
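The headline numbers above are simple ratios over resolved cases. A minimal sketch of how they can be computed follows; the granted/resolved counts come from this page, but the with/without-interview subgroup counts are hypothetical placeholders, since only the +10.9% lift itself is reported:

```python
# Sketch of the examiner-intelligence arithmetic. 539/657 are from the page;
# the interview subgroup counts below are HYPOTHETICAL illustrations.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

career = allow_rate(539, 657)
print(f"Career allow rate: {career:.1f}%")  # 82.0%

# Interview lift = allow rate among resolved cases WITH an examiner
# interview minus the rate among those WITHOUT one (illustrative counts).
with_interview = allow_rate(170, 190)     # hypothetical subgroup
without_interview = allow_rate(369, 467)  # hypothetical subgroup
print(f"Interview lift: {with_interview - without_interview:+.1f} pts")
```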

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 657 resolved cases.
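As a quick consistency check, each "vs TC avg" delta above is just the examiner's rate minus the Tech Center average estimate; back-solving from the displayed deltas, the implied TC average appears to be 40.0% for every statute here:

```python
# Reproduce the per-statute deltas shown above from the examiner rates and
# the single implied Tech Center average (40.0%, back-solved from the page).
examiner_rates = {"§101": 8.1, "§103": 72.5, "§102": 11.1, "§112": 7.5}
tc_avg = 40.0  # TC average estimate implied by the displayed deltas

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```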

Office Action

§103 §112
DETAILED ACTION

Response to Arguments

The amendments filed 12/17/2025 have been entered and made of record. Applicant's amendments and arguments filed 12/17/2025 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the amendment of the independent claims, and Applicant's arguments are not persuasive.

First, the similar amendments to independent claims 1, 6, and 8 filed 12/17/2025 have been considered; however, amended claims 1-6 and 8-15 are rejected under 35 U.S.C. § 112, because, as required by the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

However, no support is found in the specification (including all the paragraphs and Figures cited in Applicant's Arguments/Remarks of 12/17/2025, and a digital search of the entire specification) for the amended limitations "a change in body information indicating a state of the body" and "quality of the surgery". Furthermore, no other claims have been amended to limit how, or which characteristics or properties of, the surgical image are applied in the analysis of "a change in body information indicating a state of the body", and no other claims have been amended to limit which algorithms evaluate "quality of the surgery". Therefore, the specification lacks the required description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to make and use the above claimed limitations. Nor does the specification set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention, as required by the first paragraph of 35 U.S.C. 112(a).

Similarly, claims 1-5, 7-12, 14-18, and 20 are rejected under 35 U.S.C. § 112(b). The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The amended limitations "a change in body information indicating a state of the body" and "quality of the surgery" are also rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention: amended independent claim 1 does not define how, or which characteristics or properties of, the surgical image are applied in the analysis of "a change in body information indicating a state of the body", and it does not define which algorithms evaluate "quality of the surgery".

Applicant's amendments and arguments filed 12/17/2025 have been considered in light of the above discussion of claim interpretation, particularly of the amended limitations, and are unpersuasive. Applicant asserts (on pages 7-8 of 10 of the Arguments of 12/17/2025) that the cited references, particularly Wolf as modified by Shelton, do not disclose "analyzes a change in body information indicating a state of the body". However, the Examiner disagrees, because Wolf discloses "analyzes a change in body information indicating a state of the body" (see Wolf, e.g.:

--involve receiving video footage of a surgical procedure performed by a surgeon on a patient in an operating room and accessing at least one data structure including image-related data characterizing surgical procedures….systems, methods, and computer readable media for estimating contact force on an anatomical structure during a surgical procedure….and analyzing the received image data to determine an identity of an anatomical structure and to determine a condition of the anatomical structure as reflected in the image data. A contact force threshold associated with the anatomical structure may be selected based on the determined condition of the anatomical structure…., The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined. [0025] Consistent with disclosed embodiments, systems, methods, and computer readable media for predicting post discharge risk are disclosed.
The operations for predicting post discharge risk may include accessing frames of video captured during a specific surgical procedure on a patient, accessing stored historical data identifying intraoperative events and associated outcomes, analyzing the accessed frames, and based on information obtained from the historical data, identifying in the accessed frames at least one specific intraoperative event, determining, based on information obtained from the historical data and the identified at least one intraoperative event, a predicted outcome associated with the specific surgical procedure, and outputting the predicted outcome in a manner associating the predicted outcome with the patient. --, in [0021]-[0025];

The above analysis of "contact force", determination of an "abnormal fluid leakage" situation, and prediction of "post discharge risk" read on the claimed "analyzes a change in body information indicating a state of the body". Also see:

--to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331, the temperature of tissue 331, mechanical properties of tissue 331 and the like. To determine elastic properties of tissue 331, for example, tips 323A and 323B may be first separated by an angle 317 and applied to tissue 331. The tips may be configured to move such as to reduce angle 317, and the motion of tips may result in pressure on tissue 331. Such pressure may be measured (e.g., via a piezoelectric element 327 that may be located between a first branch 312A and a second branch 312B of instrument 301), and based on the change in angle 317 (i.e., strain) and the measured pressure (i.e., stress), the elastic properties of tissue 331 may be measured. Furthermore, based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321.
Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1.--

Wolf's teaching of the above "measure the elastic properties of tissue 331" is an example of, and reads on, the claimed "analyzes a change in body information indicating a state of the body". Further, see WOLF's disclosures of:

-- image recognition may identify when a particular organ is incised, to enable marking of that incision event. In another example, image recognition may be used to note the severance of a vessel or nerve, to enable marking of that adverse event. Image recognition may also be used to mark events by detection of bleeding or other fluid loss. In some embodiments, analyzing the video footage to identify the event location may include using a neural network model (such as a deep neural network, a convolutional neural network, etc.) trained using example video frames including previously-identified surgical events to thereby identify the event location. In one example, a machine learning model may be trained using training examples to identify locations of intraoperative surgical events in portions of videos, and the trained machine learning model may be used to analyze the video footage (or a portion of the video footage corresponding to the surgical phase) and identify the event location of the particular intraoperative surgical event within the surgical phase. An example of such training example may include a video clip together with a label indicating a location of a particular event within the video clip, or an absence of such event. [0163] Some aspects of the present disclosure may involve associating an event tag with the event location of the particular intraoperative surgical event. As discussed above, a tag may include any means for associating information with data or a portion of data.
An event tag may be used to associate data or portions of data with an event, such as an intraoperative surgical event. Similar to the phase tag, associating the event tag with the event location may include writing data to a video file, for example, to the properties of the video file. In other embodiments, associating the event tag with the event location may include writing data to a file or database associating the event tag with the video footage and/or the event location. Alternatively, associating an event tag with an event location may include recording a marker in a data structure, where the data structure correlates a tag with a particular location or range of locations in video footage. In some embodiments, the same file or database may be used to associate the phase tag to the video footage as the event tag. In other embodiments, a separate file or database may be used. [0164] Consistent with the present disclosure, the disclosed methods may include storing an event characteristic associated with the particular intraoperative surgical event. The event characteristic may be any trait or feature of the event. For example, the event characteristic may include properties of the patient or surgeon, properties or characteristics of the surgical event or surgical phase, or various other traits. 
Examples of features may include, excessive fatty tissue, an enlarged organ, tissue decay, a broken bone, a displaced disc, or any other physical characteristic associated with the event.--, in [0163]-[0164];

Again, the above "detection of bleeding or other fluid loss" also reads on the claimed "analyzes a change in body information indicating a state of the body".

Applicant also asserts (on pages 7-8 of 10 of the Arguments of 12/17/2025) that the cited references, particularly Wolf as modified by Shelton, do not disclose "evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery with respect to the content of the surgery performed by the surgeon". However, the Examiner disagrees, because WOLF's above disclosures of "detection of bleeding or other fluid loss" and of predicting post discharge risk:

--predicting post discharge risk are disclosed. The operations for predicting post discharge risk may include accessing frames of video captured during a specific surgical procedure on a patient, accessing stored historical data identifying intraoperative events and associated outcomes, analyzing the accessed frames, and based on information obtained from the historical data, identifying in the accessed frames at least one specific intraoperative event, determining, based on information obtained from the historical data and the identified at least one intraoperative event, a predicted outcome associated with the specific surgical procedure, and outputting the predicted outcome in a manner associating the predicted outcome with the patient. --, in [0021]-[0025];

read on the claimed "evaluates quality of the surgery".

Furthermore, Shelton discloses "evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery with respect to the content of the surgery performed by the surgeon" (see SHELTON: e.g., -- The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument.--, in abstract, and, -- A surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the healthcare professional. The surgical computing system may compare the actions of the healthcare professional with expected or acceptable actions.--, in [0053], and, -- [0119] A GI tract imaging/sensing system may collect images of a patient's colon. The GI tract imaging/sensing system may include an ingestible wireless camera and a receiver. The GI tract imaging/sensing system may include one or more white LEDs, a battery, radio transmitter, and antenna. The ingestible camera may include a pill. The ingestible camera may travel through the digestive tract and take pictures of the colon. The ingestible camera may take pictures up to 35 frames per second during motion. The ingestible camera may transmit the pictures to a receiver. The receiver may include a wearable device. The GI tract imaging/sensing system may process the images locally or transmit them to a processing unit. Doctors may look at the raw images to make a diagnosis.--, in [0117]-[0120]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure.
The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]).

Therefore, claims 1-6 and 8-15 are still not patentably distinguishable over the prior art reference(s). Further discussions are addressed in the prior art rejection section below.

Claim Rejections - 35 USC § 112

The similar amendments to independent claims 1, 6, and 8 filed 12/17/2025 have been considered; however, the amended claims 1-6 and 8-15 are rejected under 35 U.S.C. § 112, because, as required by the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

However, no support is found in the specification (including all the paragraphs and Figures cited in Applicant's Arguments/Remarks of 12/17/2025, and a digital search of the entire specification) for the amended limitations "a change in body information indicating a state of the body" and "quality of the surgery". Furthermore, no other claims have been amended to limit how, or which characteristics or properties of, the surgical image are applied in the analysis of "a change in body information indicating a state of the body", and no other claims have been amended to limit which algorithms evaluate "quality of the surgery". Therefore, the specification lacks the required description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to make and use the above claimed limitations. Nor does the specification set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention, as required by the first paragraph of 35 U.S.C. 112(a).

Similarly, claims 1-5, 7-12, 14-18, and 20 are rejected under 35 U.S.C. § 112(b). The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The amended limitations "a change in body information indicating a state of the body" and "quality of the surgery" are also rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention: amended independent claim 1 does not define how, or which characteristics or properties of, the surgical image are applied in the analysis of "a change in body information indicating a state of the body", and it does not define which algorithms evaluate "quality of the surgery".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-6, 8-15 are rejected under 35 U.S.C. 103 as being unpatentable over Wolf (US 20210012868 A1), in view of SHELTON (US 20220233241 A1, Date Filed: 2021-01-22). Re Claim 1, Wolf discloses a surgical content evaluation system for evaluating a content of surgery performed by a surgeon (see Wolf: e. g., Fig. 1, --systems and methods for analysis of surgical videos…systems, methods, and computer readable media related to reviewing surgical video are disclosed. The embodiments may include accessing at least one video of a surgical procedure… The embodiments may further include overlaying, on the at least one video outputted for display, a surgical timeline. The surgical timeline may include markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision making junction. The surgical timeline may enable a surgeon, while viewing playback of the at least one video to select one or more markers on the surgical timeline, and thereby cause a display of the video to skip to a location associated with the selected marker….video indexing are disclosed. The video indexing may include accessing video footage to be indexed, including footage of a particular surgical procedure, which may be analyzed to identify a video footage location associated with a surgical phase of the particular surgical procedure. 
A phase tag may be generated and may be associated with the video footage location. The video indexing may include analyzing the video footage to identify an event location of a particular intraoperative surgical event within the surgical phase and associating an event tag with the event location of the particular intraoperative surgical event. Further, an event characteristic associated with the particular intraoperative surgical event may be stored. [0007] In one embodiment, the one or more markers may include a decision making junction marker corresponding to a decision making junction of the surgical procedure. .--, in [0005]-[0009]; also see: --[0023] Some embodiments of this disclosure involve systems, methods and computer readable media for updating a predicted outcome during a surgical procedure. These embodiments may involve receiving, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. 
The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-0024], and [0402]), the system comprising: an acquisition unit that acquires a surgical image, which is a captured image of a body of a patient on which the surgery is performed by the surgeon (see WOLF: e.g., --to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument. .. 
instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. --,in [00096]-[0099]); an analysis unit that analyzes, in the surgical image, at least a change in body information indicating a state of the body and/or instrument information indicating a state of an instrument operated by the surgeon (see WOLF: e.g., [0024] Some embodiments of this disclosure involve systems methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; --involve receiving video footage of a surgical procedure performed by a surgeon on a patient in an operating room and accessing at least one data structure including image-related data characterizing surgical procedures….systems, methods, and computer readable media for estimating contact force on an anatomical structure during a surgical procedure….and analyzing the received image data to determine an identity of an anatomical structure and to determine a condition of the anatomical structure as reflected in the image data. 
A contact force threshold associated with the anatomical structure may be selected based on the determined condition of the anatomical structure…., The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined. [0025] Consistent with disclosed embodiments, systems, methods, and computer readable media for predicting post discharge risk are disclosed. The operations for predicting post discharge risk may include accessing frames of video captured during a specific surgical procedure on a patient, accessing stored historical data identifying intraoperative events and associated outcomes, analyzing the accessed frames, and based on information obtained from the historical data, identifying in the accessed frames at least one specific intraoperative event, determining, based on information obtained from the historical data and the identified at least one intraoperative event, a predicted outcome associated with the specific surgical procedure, and outputting the predicted outcome in a manner associating the predicted outcome with the patient. --, in [0021]-[0025];

The above analysis of "contact force", determination of an "abnormal fluid leakage" situation, and prediction of "post discharge risk" read on the claimed "analyzes a change in body information indicating a state of the body". Also see:

--to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331, the temperature of tissue 331, mechanical properties of tissue 331 and the like. To determine elastic properties of tissue 331, for example, tips 323A and 323B may be first separated by an angle 317 and applied to tissue 331. The tips may be configured to move such as to reduce angle 317, and the motion of tips may result in pressure on tissue 331. Such pressure may be measured (e.g., via a piezoelectric element 327 that may be located between a first branch 312A and a second branch 312B of instrument 301), and based on the change in angle 317 (i.e., strain) and the measured pressure (i.e., stress), the elastic properties of tissue 331 may be measured. Furthermore, based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1.--

Wolf's teaching of the above "measure the elastic properties of tissue 331" is an example of, and reads on, the claimed "analyzes a change in body information indicating a state of the body"; further as evidenced in: Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument. .. instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. --, in [00096]-[0099]; and see: Fig. 8A, and, -- At step 804, process 800 may include analyzing the video footage to identify a video footage location associated with a surgical phase of the particular surgical procedure.
As discussed above, the location may be associated with a particular frame, a range of frames, a time index, a time range, or any other location identifier. [0189] Process 800 may include generating a phase tag associated with the surgical phase, as shown in step 806. This may occur, for example, through video content analysis (VCA), using techniques such as one or more of video motion detection, video tracking, shape recognition, object detection, fluid flow detection, equipment identification, behavior analysis, or other forms of computer aided situational awareness. When learned characteristics associated with a phase are identified in the video, a tag may be generated demarcating that phase. The tag may include, for example, a predefined name for the phase. At step 808, process 800 may include associating the phase tag with the video footage location. The phase tag may indicate, for example, that the identified video footage location is associated with the surgical phase of the particular surgical procedure. At step 810, process 800 may include analyzing the video footage using one or more of the VCA techniques described above, to identify an event location of a particular intraoperative surgical event within the surgical phase. Process 800 may include associating an event tag with the event location of the particular intraoperative surgical event, as shown at step 812. The event tag may indicate, for example, that the video footage is associated with the surgical event at the event location.--, in [0197]-[0199]; and, -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure…. 
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]); and an evaluation unit that determines the content of the surgery performed by the surgeon based on the body information and/or the instrument information analyzed by the analysis unit, and evaluates quality of the surgery (see WOLF: e.g., --[0024] Some embodiments of this disclosure involve systems methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; and, apparently WOLF’s above disclosures of “detection of bleeding or other fluid loss.”, and, --predicting post discharge risk are disclosed. 
The operations for predicting post discharge risk may include accessing frames of video captured during a specific surgical procedure on a patient, accessing stored historical data identifying intraoperative events and associated outcomes, analyzing the accessed frames, and based on information obtained from the historical data, identifying in the accessed frames at least one specific intraoperative event, determining, based on information obtained from the historical data and the identified at least one intraoperative event, a predicted outcome associated with the specific surgical procedure, and outputting the predicted outcome in a manner associating the predicted outcome with the patient. --, in [0021]-[0025]; read on claimed “evaluates quality of the surgery”); SHELTON discloses an evaluation unit that evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery to the content of the surgery performed by the surgeon based on the body information and/or the instrument information analyzed by the analysis unit (see SHELTON: e.g., -- The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument.--, in abstract, and, -- A surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the healthcare professional. The surgical computing system may compare the actions of the healthcare professional with expected or acceptable actions.--, in [0053], and, -- [0119] A GI tract imaging/sensing system may collect images of a patient's colon. The GI tract imaging/sensing system may include an ingestible wireless camera and a receiver. The GI tract imaging/sensing system may include one or more white LEDs, a battery, radio transmitter, and antenna. The ingestible camera may include a pill. 
The ingestible camera may travel through the digestive tract and take pictures of the colon. The ingestible camera may take pictures up to 35 frames per second during motion. The ingestible camera may transmit the pictures to a receiver. The receiver may include a wearable device. The GI tract imaging/sensing system may process the images locally or transmit them to a processing unit. Doctors may look at the raw images to make a diagnosis.--, in [0117]-[0120]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure. The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]); Wolf and SHELTON are combinable as they are in the same field of endeavor: medical image processing in surgery monitoring and evaluation.
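As an aside on the mechanics the cited passages describe: the fluid leak detection of WOLF [0023]-[0024] amounts to analyzing frames of intracavitary video and flagging an abnormal leakage situation so that a remedial action can be instituted. The following is a minimal sketch of that kind of frame-thresholding step; the function names and the red-channel heuristic are illustrative assumptions, not WOLF's actual algorithm.

```python
# Minimal sketch of threshold-based fluid leakage detection over video frames,
# in the spirit of WOLF [0023]-[0024]. Frames are modeled as flat lists of
# (R, G, B) pixel tuples; the red-dominance test is a crude blood proxy.

def red_fraction(frame):
    """Fraction of pixels whose red channel strongly dominates."""
    if not frame:
        return 0.0
    reds = sum(1 for (r, g, b) in frame if r > 150 and r > 2 * max(g, b))
    return reds / len(frame)

def detect_abnormal_leakage(frames, threshold=0.3):
    """Indices of frames whose red fraction exceeds the threshold,
    i.e., frames that would trigger a remedial action."""
    return [i for i, frame in enumerate(frames) if red_fraction(frame) > threshold]

# Two synthetic frames: one normal, one 80% red-dominant.
normal = [(90, 80, 70)] * 10
bleeding = [(200, 30, 25)] * 8 + [(90, 80, 70)] * 2

print(detect_abnormal_leakage([normal, bleeding]))  # -> [1]
```

Only the second frame crosses the 30% threshold, so only its index is flagged.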
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Wolf's system using SHELTON's teachings, by including an evaluation unit that evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery with respect to the content of the surgery performed by the surgeon, based on the body information and/or the instrument information analyzed by the analysis unit, in order to determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument (see SHELTON: e.g., abstract, [0053], [0117]-[0120], and [0414]). Re Claim 2, Wolf as modified by SHELTON further discloses that the analysis unit analyzes a specific region in the surgical image (see WOLF: e.g., the fluid leak detection disclosure in [0023]-[0024], the instrument and tissue measurement disclosure in [0096]-[0099] and Fig. 3, and the process 800 phase and event tagging disclosure in [0197]-[0199] and Fig. 8A, each quoted above with respect to claim 1), and that the evaluation unit evaluates, when the specific region in the surgical image has exceeded a predetermined threshold, that a body fluid has flowed out or an organ has been damaged (see WOLF: e.g., [0023]-[0024], quoted above; also see SHELTON: e.g., abstract, [0053], and [0117]-[0120], quoted above; and, -- the edema sensing system may detect a risk of colorectal anastomotic leak based on fluid build-up. Based on the detected edema physiological conditions, the edema sensing system may generate a score for healing quality. For example, the edema sensing system may generate the healing quality score by comparing edema information to a certain threshold lower leg circumference. Based on the detected edema information--, in [0123]-[0126]; and [0414], quoted above). Re Claim 3, Wolf as modified by SHELTON further discloses that the analysis unit analyzes the instrument information including information relating to the instrument (see WOLF: e.g., [0023]-[0024], [0096]-[0099] and Fig. 3, and [0197]-[0199] and Fig. 8A, each quoted above with respect to claim 1), and that the evaluation unit evaluates an operation performance of the instrument in the surgery based on the information relating to the instrument (see WOLF: e.g., [0023]-[0024], quoted above; also see SHELTON: e.g., abstract, [0053], [0117]-[0120], [0123]-[0126], and [0414], quoted above).
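The process 800 flow of WOLF [0197]-[0199], cited throughout the rejections above, reduces to: locate footage associated with a surgical phase (step 804), generate and associate a phase tag (steps 806-808), then locate and tag a particular intraoperative event within that phase (steps 810-812). A minimal sketch of that indexing loop follows, assuming per-frame (phase, event) labels have already been produced by some upstream video content analysis step; the label values and function name are hypothetical.

```python
# Sketch of WOLF-style footage indexing (process 800, steps 804-812).
# frames: per-frame (phase, event) labels from an assumed VCA step; event may
# be None. Returns (tag, frame_index) pairs, the frame index serving as the
# "video footage location" / "event location".

def index_footage(frames):
    tags, current_phase = [], None
    for i, (phase, event) in enumerate(frames):
        if phase != current_phase:              # step 804: new phase located
            current_phase = phase
            tags.append((f"phase:{phase}", i))  # steps 806-808: phase tag
        if event is not None:
            tags.append((f"event:{event}", i))  # steps 810-812: event tag
    return tags

footage = [("access", None), ("access", None),
           ("dissection", None), ("dissection", "bleeding"),
           ("closure", None)]
print(index_footage(footage))
# -> [('phase:access', 0), ('phase:dissection', 2),
#     ('event:bleeding', 3), ('phase:closure', 4)]
```

Each tag demarcates where a phase begins or where an event occurs, which is the association between tag and footage location that the quoted passage describes.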
Re Claim 4, Wolf as modified by SHELTON further discloses that the analysis unit analyzes the body information including information relating to an anatomical structure of the body (see WOLF: e.g., the fluid leak detection disclosure in [0023]-[0024], the instrument and tissue measurement disclosure in [0096]-[0099] and Fig. 3, the process 800 phase and event tagging disclosure in [0197]-[0199] and Fig. 8A, and the camera tracking disclosure in [0307]-[0309], each quoted above with respect to claim 1), analyzes the body information and/or the instrument information including information relating to a position of the instrument with respect to the anatomical structure, and that the evaluation unit evaluates an operation performance of the instrument with respect to the anatomical structure in the surgery based on the information relating to the position of the instrument with respect to the anatomical structure (see WOLF: e.g., [0023]-[0024], [0096]-[0099], [0197]-[0199], and [0307]-[0309], quoted above; in particular, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure). Re Claim 5, Wolf as modified by SHELTON further discloses that the analysis unit analyzes, in the surgical image of the surgery including a plurality of steps, each of the plurality of steps, and that the surgical content evaluation system further includes a time measurement unit that measures an inter-step period of time, which is a period of time from one step of the plurality of steps to a next other step of the plurality of steps (see WOLF: e.g., [0023]-[0024], [0096]-[0099] and Fig. 3, [0197]-[0199] and Fig. 8A, and [0307]-[0309], quoted above; in particular, a video footage location may be associated with a particular frame, a range of frames, a time index, a time range, or any other location identifier), and that the evaluation unit evaluates a surgical skill for the one step of the plurality of steps, based on the inter-step period measured by the time measurement unit (see SHELTON: e.g., abstract, [0053], [0117]-[0120], [0123]-[0126], and [0414], quoted above).
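The claim 5 limitation at issue recites measuring an inter-step period (the time from one surgical step to the next) and evaluating skill for a step from that period. A minimal sketch of such a time measurement and evaluation pair is below, assuming step boundaries have already been tagged with timestamps (e.g., by phase tagging as in WOLF's process 800); the reference-duration rubric is purely illustrative and not from either cited reference.

```python
# Sketch of an inter-step "time measurement unit" plus a duration-based skill
# evaluation, per the claim 5 limitation. step_starts: ordered
# (step_name, start_seconds) pairs; the last entry marks the end boundary.

def inter_step_periods(step_starts):
    """Seconds elapsed from each step's start to the next step's start."""
    return {name: step_starts[i + 1][1] - t
            for i, (name, t) in enumerate(step_starts[:-1])}

def evaluate_skill(periods, reference):
    """Rate each step against a reference duration (hypothetical rubric)."""
    return {step: ("proficient" if secs <= reference[step] else "needs review")
            for step, secs in periods.items()}

steps = [("access", 0), ("dissection", 300), ("closure", 1500), ("end", 1800)]
periods = inter_step_periods(steps)
# periods == {'access': 300, 'dissection': 1200, 'closure': 300}
print(evaluate_skill(periods, {"access": 400, "dissection": 900, "closure": 600}))
# -> {'access': 'proficient', 'dissection': 'needs review', 'closure': 'proficient'}
```

Here only the dissection step exceeds its reference duration, so only that step is flagged for review.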
Re Claim 6, claim 6 is the method claim corresponding to claim 1, and is therefore rejected for reasons similar to those set forth above for claim 1; see the discussion of claim 1. Furthermore, Wolf as modified by SHELTON further discloses a method executed by a surgical content evaluation system for evaluating a content of surgery performed by a surgeon (see Wolf: e.g., Fig. 1, --systems and methods for analysis of surgical videos…systems, methods, and computer readable media related to reviewing surgical video are disclosed. The embodiments may include accessing at least one video of a surgical procedure… The embodiments may further include overlaying, on the at least one video outputted for display, a surgical timeline. The surgical timeline may include markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision making junction. The surgical timeline may enable a surgeon, while viewing playback of the at least one video, to select one or more markers on the surgical timeline, and thereby cause a display of the video to skip to a location associated with the selected marker….video indexing are disclosed. The video indexing may include accessing video footage to be indexed, including footage of a particular surgical procedure, which may be analyzed to identify a video footage location associated with a surgical phase of the particular surgical procedure. A phase tag may be generated and may be associated with the video footage location. The video indexing may include analyzing the video footage to identify an event location of a particular intraoperative surgical event within the surgical phase and associating an event tag with the event location of the particular intraoperative surgical event. Further, an event characteristic associated with the particular intraoperative surgical event may be stored.
[0007] In one embodiment, the one or more markers may include a decision making junction marker corresponding to a decision making junction of the surgical procedure.--, in [0005]-[0009]; also see: --[0023] Some embodiments of this disclosure involve systems, methods and computer readable media for updating a predicted outcome during a surgical procedure. These embodiments may involve receiving, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024], and [0402]). Re Claim 8, Wolf discloses a surgical content evaluation system for evaluating a content of surgery performed by a surgeon (see Wolf: e.g., Fig. 
1, --systems and methods for analysis of surgical videos…systems, methods, and computer readable media related to reviewing surgical video are disclosed. The embodiments may include accessing at least one video of a surgical procedure… The embodiments may further include overlaying, on the at least one video outputted for display, a surgical timeline. The surgical timeline may include markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision making junction. The surgical timeline may enable a surgeon, while viewing playback of the at least one video to select one or more markers on the surgical timeline, and thereby cause a display of the video to skip to a location associated with the selected marker….video indexing are disclosed. The video indexing may include accessing video footage to be indexed, including footage of a particular surgical procedure, which may be analyzed to identify a video footage location associated with a surgical phase of the particular surgical procedure. A phase tag may be generated and may be associated with the video footage location. The video indexing may include analyzing the video footage to identify an event location of a particular intraoperative surgical event within the surgical phase and associating an event tag with the event location of the particular intraoperative surgical event. Further, an event characteristic associated with the particular intraoperative surgical event may be stored. [0007] In one embodiment, the one or more markers may include a decision making junction marker corresponding to a decision making junction of the surgical procedure.--, in [0005]-[0009]; also see: --[0023] Some embodiments of this disclosure involve systems, methods and computer readable media for updating a predicted outcome during a surgical procedure. 
These embodiments may involve receiving, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024], and [0402]), the system comprising: an acquisition unit that acquires a surgical image, which is a captured image of a body of a patient on which the surgery is performed by the surgeon (see WOLF: e.g., --to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. 
The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument… instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. 
--, in [0096]-[0099]); an analysis unit that analyzes, in the surgical image, at least a change in body information indicating a state of the body and/or instrument information indicating a state of an instrument operated by the surgeon (see WOLF: e.g., [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument… instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. --, in [0096]-[0099]; and see: Fig. 8A, and, -- At step 804, process 800 may include analyzing the video footage to identify a video footage location associated with a surgical phase of the particular surgical procedure. As discussed above, the location may be associated with a particular frame, a range of frames, a time index, a time range, or any other location identifier. 
[0189] Process 800 may include generating a phase tag associated with the surgical phase, as shown in step 806. This may occur, for example, through video content analysis (VCA), using techniques such as one or more of video motion detection, video tracking, shape recognition, object detection, fluid flow detection, equipment identification, behavior analysis, or other forms of computer aided situational awareness. When learned characteristics associated with a phase are identified in the video, a tag may be generated demarcating that phase. The tag may include, for example, a predefined name for the phase. At step 808, process 800 may include associating the phase tag with the video footage location. The phase tag may indicate, for example, that the identified video footage location is associated with the surgical phase of the particular surgical procedure. At step 810, process 800 may include analyzing the video footage using one or more of the VCA techniques described above, to identify an event location of a particular intraoperative surgical event within the surgical phase. Process 800 may include associating an event tag with the event location of the particular intraoperative surgical event, as shown at step 812. The event tag may indicate, for example, that the video footage is associated with the surgical event at the event location.--, in [0197]-[0199]; and, -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure…. 
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]); and an evaluation unit that determines the content of the surgery performed by the surgeon based on the body information analyzed by the analysis unit, and evaluates quality of the surgery (see WOLF: e.g., --[0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]); SHELTON discloses an evaluation unit that evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery to the content of the surgery performed by the surgeon based on the body information analyzed by the analysis unit (see SHELTON: e.g., -- The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument.--, in abstract, and, -- A surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the healthcare professional. 
The surgical computing system may compare the actions of the healthcare professional with expected or acceptable actions.--, in [0053], and, -- [0119] A GI tract imaging/sensing system may collect images of a patient's colon. The GI tract imaging/sensing system may include an ingestible wireless camera and a receiver. The GI tract imaging/sensing system may include one or more white LEDs, a battery, radio transmitter, and antenna. The ingestible camera may include a pill. The ingestible camera may travel through the digestive tract and take pictures of the colon. The ingestible camera may take pictures up to 35 frames per second during motion. The ingestible camera may transmit the pictures to a receiver. The receiver may include a wearable device. The GI tract imaging/sensing system may process the images locally or transmit them to a processing unit. Doctors may look at the raw images to make a diagnosis.--, in [0117]-[0120]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure. The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]); Wolf and SHELTON are combinable as they are in the same field of endeavor: medical image processing in surgery monitoring and evaluation. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Wolf’s system using SHELTON’s teachings by including an evaluation unit that evaluates at least one of quality of the surgery, appropriateness of the surgery, or efficiency of the surgery to the content of the surgery performed by the surgeon based on the body information analyzed by the analysis unit to Wolf’s determining outcome, and the content of the surgery performed by the surgeon based on the body information and/or the instrument information analyzed by the analysis unit, in order to determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument (see SHELTON: e.g., abstract, [0053], [0117]-[0120], and [0414]). Re Claim 9, Wolf as modified by SHELTON further disclose that the analysis unit calculates a recognition degree indicating a degree of recognition of the body information in the surgical image (see WOLF: e.g., -- machine learning algorithms (also referred to as machine learning models in the present disclosure) may be trained using training examples, for example in the cases described below. 
Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recursive neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and so forth. For example, a trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recursive neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs. Further, in some examples, training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples.--, in [0079], -- camera 115 may be equipped with a laser 137 (e.g., an infrared laser) for precision tracking. In some cases, camera 115 may be tracked automatically via a computer-based camera control application that uses an image recognition algorithm for positioning the camera to capture video/image data of a ROI. 
For example, the camera control application may identify an anatomical structure, identify a surgical tool, hand of a surgeon, bleeding, motion, and the like at a particular location within the anatomical structure, and track that location with camera 115 by rotating camera 115 by appropriate yaw and pitch angles. In some embodiments, the camera control application may control positions (i.e., yaw and pitch angles) of various cameras 115, 121, 123 and 125 to capture video/image data from different ROIs during a surgical procedure. Additionally or alternatively, a human operator may control the position of various cameras 115, 121, 123 and 125, and/or the human operator may supervise the camera control application in controlling the position of the cameras.--, in [0086]; and, -- the markers may be automatically generated and included in the timeline based on information in the video at a given location. In some embodiments, computer analysis may be used to analyze frames of the video footage and identify markers to include at various locations in the timeline. Computer analysis may include any form of electronic analysis using a computing device. In some embodiments, computer analysis may include using one or more image recognition algorithms to identify features of one or more frames of the video footage. Computer analysis may be performed on individual frames, or may be performed across multiple frames, for example, to detect motion or other changes between frames.--, in [0114]); and the evaluation unit evaluates the content of the surgery performed by the surgeon according to the recognition degree (see WOLF: e.g., --[0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. 
The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; also see SHELTON: e.g., -- The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument.--, in abstract, and, -- A surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the healthcare professional. The surgical computing system may compare the actions of the healthcare professional with expected or acceptable actions.--, in [0053], and, -- [0119] A GI tract imaging/sensing system may collect images of a patient's colon. The GI tract imaging/sensing system may include an ingestible wireless camera and a receiver. The GI tract imaging/sensing system may include one or more white LEDs, a battery, radio transmitter, and antenna. The ingestible camera may include a pill. The ingestible camera may travel through the digestive tract and take pictures of the colon. The ingestible camera may take pictures up to 35 frames per second during motion. The ingestible camera may transmit the pictures to a receiver. The receiver may include a wearable device. The GI tract imaging/sensing system may process the images locally or transmit them to a processing unit. Doctors may look at the raw images to make a diagnosis.--, in [0117]-[0120]; and, -- the edema sensing system may detect a risk of colorectal anastomotic leak based on fluid build-up. Based on the detected edema physiological conditions, the edema sensing system may generate a score for healing quality. 
For example, the edema sensing system may generate the healing quality score by comparing edema information to a certain threshold lower leg circumference. Based on the detected edema information--, in [0123]-[0126]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure. The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]). Re Claim 10, Wolf as modified by SHELTON further disclose that the analysis unit calculates a confidence degree indicating a degree of confidence of an analysis result of the body information (see WOLF: e.g., --a confidence level that a desired surgical outcome will occur if a specific action is taken, and/or a confidence level that a desired outcome will not occur if a specific action is not taken. A confidence level may be based on an analysis of historical surgical procedures, consistent with disclosed embodiments, and may include a probability (i.e., likelihood) that an outcome will occur. A desired outcome may be a positive outcome, such as an improved health status, a successful placement of a medical implant, and/or any other beneficial eventuality. 
In some embodiments, a desired outcome may include an avoidance of a possible undesired situation following a decision making junction (e.g., an avoidance of a side effect, a post-operative complication, a fluid leakage event, a negative change in a health status of a patient, and/or any other undesired situation).--, in [0560], and, --A recommendation may include a description of a current surgical situation, an indication of preemptive or corrective measures, and/or danger zone mapping. In one example, as previously mentioned, a recommendation may include a recommended placement of a surgical drain to remove inflammatory fluid, blood, bile, and/or other fluid from a patient. A confidence level that a desired surgical outcome will or will not occur if a specific action is taken or not taken may be part of a recommendation. A recommendation may be based on a skill level of a surgeon, a correlation and a vital sign, and/or a surgical event that occurred in a surgical procedure prior to a decision making junction (i.e., a prior surgical event). In some embodiments, a recommendation may be based on a condition of a tissue of a patient and/or a condition of an organ of a patient. As another example, a recommendation of the specific action may include a creation of a stoma, as previously discussed by way of example. [0580] Disclosed systems and methods may involve analyzing current and/or historical surgical footage to identify features of surgery, patient conditions, and other features to estimate surgical contact force.--, in [0579]-[0580], and, --systems, methods and computer readable media may be provided for updating a predicted outcome during a surgical procedure. For example, image data may be analyzed to detect changes in a predicted outcome, and a remedial action may be communicated to a surgeon. A predicted outcome may include an outcome that may occur with an associated confidence or probability (e.g., a likelihood). 
For example, a predicted outcome may include a complication, a health status, a recovery period, death, disability, internal bleeding, hospital readmission after the surgery, and/or any other surgical eventuality. In some embodiments, a predicted outcome includes a score, such as a lower urinary tract symptom (LUTS) outcome score. More generally, a predicted outcome may include any health indicator associated with a surgical procedure.--, in [0615], and [0629]-[0631]); calculates the recognition degree based on the confidence degree (see WOLF: e.g., --a face recognition algorithm may be applied to image data to identify a known surgeon, and a corresponding level of skill may be retrieved from a data structure, such as a database. In some embodiments, a level of skill of a surgeon may be determined based on a sequence of events identified in image data (e.g., based on a length of time to perform one or more actions, based on a patient response detected in image data during surgery, and/or based on other information indicating a level of skill of a surgeon). In one example, in response to a first determined skill level, a first outcome may be predicted, and in response to a second determined skill level, a second outcome may be predicted, the second outcome may differ from the first outcome. In another example, a machine learning model may be trained using training examples to predict outcome of surgical procedures based on skill levels of surgeons, and the trained machine learning model may be used to predict the outcome based on the determined skill level. An example of such training example may include an indication of a skill level of a surgeon, together with a label indicating the desired predicted outcome. 
The desired predicted outcome may be based on an analysis of historical data, based on user input (such as expert opinion), and so forth.--, in [0624], and, --[0635] A condition of an anatomical structure may be determined in a variety of ways, such as through a machine learning model trained with examples of known conditions. In some embodiments, an object recognition model and/or an image classification model may be trained using historical examples and implemented to determine a condition of an anatomical structure. Training may be supervised and/or unsupervised. Some other non-limiting examples of methods for determining conditions of anatomical structures are described above. [0636] Embodiments may include a variety of ways of determining a predicted outcome based on a condition of an anatomical structure and/or any other input data. For example, a regression model may be fit to historical data that include conditions of anatomical structures and outcomes. More generally, using historical data, a regression model may be fit to predict an outcome based on one or more of a variety of input data, including a condition of an anatomical structure, a patient characteristic, a skill level of a surgeon, an estimated contact force, a source of fluid leakage, an extent of fluid leakage characteristic, and/or any other input data relating to a surgical procedure. An outcome may be predicted based on other known statistical analysis including, for example, based on correlations between input data relating to a surgical procedure and outcome data.--, in [0635]-[0636]). Re Claim 11, Wolf as modified by SHELTON further disclose that the evaluation unit evaluates the content of surgery performed by the surgeon according to a temporal change of the recognition degree (see WOLF: e.g., -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. 
For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure…. tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]; --a face recognition algorithm may be applied to image data to identify a known surgeon, and a corresponding level of skill may be retrieved from a data structure, such as a database. In some embodiments, a level of skill of a surgeon may be determined based on a sequence of events identified in image data (e.g., based on a length of time to perform one or more actions, based on a patient response detected in image data during surgery, and/or based on other information indicating a level of skill of a surgeon). In one example, in response to a first determined skill level, a first outcome may be predicted, and in response to a second determined skill level, a second outcome may be predicted, the second outcome may differ from the first outcome. In another example, a machine learning model may be trained using training examples to predict outcome of surgical procedures based on skill levels of surgeons, and the trained machine learning model may be used to predict the outcome based on the determined skill level. An example of such training example may include an indication of a skill level of a surgeon, together with a label indicating the desired predicted outcome. The desired predicted outcome may be based on an analysis of historical data, based on user input (such as expert opinion), and so forth.--, in [0624], and, --The variables may be endless. 
Such variables may relate to the condition of the patient, the surgeon, the complexity of the procedure, complications, the tools used, the time elapsed between two or more events, or any other variables or combination of variables that may have some direct or indirect impact on predicted outcome. One such variable may be fluid leakage (e.g., a magnitude, duration, or determined source). For example, determining a change in a predicted outcome may be based on a magnitude of bleeding. A feature of a fluid leakage event (e.g., a magnitude of bleeding, a source of bleeding) may be determined based on an analysis of image data--, in [0632]; and, --[0635] A condition of an anatomical structure may be determined in a variety of ways, such as through a machine learning model trained with examples of known conditions. In some embodiments, an object recognition model and/or an image classification model may be trained using historical examples and implemented to determine a condition of an anatomical structure. Training may be supervised and/or unsupervised. Some other non-limiting examples of methods for determining conditions of anatomical structures are described above. [0636] Embodiments may include a variety of ways of determining a predicted outcome based on a condition of an anatomical structure and/or any other input data. For example, a regression model may be fit to historical data that include conditions of anatomical structures and outcomes. More generally, using historical data, a regression model may be fit to predict an outcome based on one or more of a variety of input data, including a condition of an anatomical structure, a patient characteristic, a skill level of a surgeon, an estimated contact force, a source of fluid leakage, an extent of fluid leakage characteristic, and/or any other input data relating to a surgical procedure. 
An outcome may be predicted based on other known statistical analysis including, for example, based on correlations between input data relating to a surgical procedure and outcome data.--, in [0635]-[0636]). Re Claim 12, Wolf as modified by SHELTON further discloses the evaluation unit evaluates a difficulty level of surgery according to information relating to an anatomical structure of the body included in the body information (see SHELTON: e.g., -- the detection, prediction, and/or determination described herein may be performed by a computing system based on measured data and/or related biomarkers generated by the circadian rhythm sensing system. The circadian rhythm sensing system may process the circadian rhythm data locally or transmit the data to a processing unit. [0144] A menstrual cycle sensing system may measure menstrual cycle data including heart rate, heart rate variability, respiration rate, body temperature, and/or skin perfusion. Based on the menstrual cycle data, the menstrual cycle unit may indicate menstrual cycle-related biomarkers, complications, and/or contextual information, including menstrual cycle phase. For example, the menstrual cycle sensing system may detect the periovulatory phase in the menstrual cycle based on measured heart rate variability. Changes in heart rate variability may indicate the periovulatory phase. For example, the menstrual cycle sensing system may detect the luteal phase in the menstrual cycle based on measured wrist skin temperature and/or skin perfusion. Increased wrist skin temperature may indicate the luteal phase. Changes in skin perfusion may indicate the luteal phase. For example, the menstrual cycle sensing system may detect the ovulatory phase based on measured respiration rate. Low respiration rate may indicate the ovulatory phase.
[0145] Based on menstrual cycle-related biomarkers, the menstrual cycle sensing system may determine conditions including hormonal changes, surgical bleeding, scarring, bleeding risk, and/or sensitivity levels. For example, the menstrual cycle phase may affect surgical bleeding in rhinoplasty. For example, the menstrual cycle phase may affect healing and scarring in breast surgery. For example, bleeding risk may decrease during the periovulatory phase in the menstrual cycle. [0146] In an example, the detection, prediction, and/or determination described herein may be performed by a computing system based on measured data and/or related biomarkers generated by the menstrual cycle sensing system. The menstrual cycle sensing system may locally process menstrual cycle data or transmit the data to a processing unit.--, in [0143]-[0146]). Re Claim 13, Wolf as modified by SHELTON further discloses the analysis unit analyzes a trajectory of a distal end position of an instrument operated by the surgeon (see WOLF: e.g., --to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure. [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery.
Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument. … instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. --, in [0096]-[0099]); and the evaluation unit evaluates an operation performance of the instrument in the surgery based on the trajectory (see WOLF: e.g., -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure….
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]; and, --[0575] At step 2902, the process may include receiving video footage of a surgical procedure performed by a surgeon on a patient in an operating room, consistent with disclosed embodiments and as previously described by way of examples. FIG. 1 provides an example of an operating room, surgeon, patient, and cameras configured for capturing video footage of a surgical procedure. Video footage may include images from at least one of an endoscope or an intracorporeal camera (e.g., images of an intracavitary video). [0576] At step 2904, the process may include accessing at least one data structure including image-related data characterizing surgical procedures, consistent with disclosed embodiments and as previously described by way of examples. In some embodiments, accessing a data structure may include receiving data of a data structure via a network and/or from a device via a connection.--, [0575]-[0576]). Re Claim 14, Wolf as modified by SHELTON further discloses the analysis unit analyzes a positional relationship between a point of action, which is a portion of the instrument operated by the surgeon in contact with the anatomical structure, and a portion of the anatomical structure in contact with the instrument (see WOLF: e.g., -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure….
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]; and, --[0575] At step 2902, the process may include receiving video footage of a surgical procedure performed by a surgeon on a patient in an operating room, consistent with disclosed embodiments and as previously described by way of examples. FIG. 1 provides an example of an operating room, surgeon, patient, and cameras configured for capturing video footage of a surgical procedure. Video footage may include images from at least one of an endoscope or an intracorporeal camera (e.g., images of an intracavitary video). [0576] At step 2904, the process may include accessing at least one data structure including image-related data characterizing surgical procedures, consistent with disclosed embodiments and as previously described by way of examples. In some embodiments, accessing a data structure may include receiving data of a data structure via a network and/or from a device via a connection.--, [0575]-[0576]), and the evaluation unit evaluates the operation performance of the instrument in the surgery based on the positional relationship (see WOLF: e.g., -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure…. 
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]; and, --[0575] At step 2902, the process may include receiving video footage of a surgical procedure performed by a surgeon on a patient in an operating room, consistent with disclosed embodiments and as previously described by way of examples. FIG. 1 provides an example of an operating room, surgeon, patient, and cameras configured for capturing video footage of a surgical procedure. Video footage may include images from at least one of an endoscope or an intracorporeal camera (e.g., images of an intracavitary video). [0576] At step 2904, the process may include accessing at least one data structure including image-related data characterizing surgical procedures, consistent with disclosed embodiments and as previously described by way of examples. In some embodiments, accessing a data structure may include receiving data of a data structure via a network and/or from a device via a connection.--, [0575]-[0576]; also see SHELTON: e.g., -- The surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the operator of the surgical instrument.--, in abstract, and, -- A surgical computing system may determine, based on at least one of the usage data and/or the sensor data, an evaluation of the actions of the healthcare professional. The surgical computing system may compare the actions of the healthcare professional with expected or acceptable actions.--, in [0053], and, -- [0119] A GI tract imaging/sensing system may collect images of a patient's colon. 
The GI tract imaging/sensing system may include an ingestible wireless camera and a receiver. The GI tract imaging/sensing system may include one or more white LEDs, a battery, radio transmitter, and antenna. The ingestible camera may include a pill. The ingestible camera may travel through the digestive tract and take pictures of the colon. The ingestible camera may take pictures up to 35 frames per second during motion. The ingestible camera may transmit the pictures to a receiver. The receiver may include a wearable device. The GI tract imaging/sensing system may process the images locally or transmit them to a processing unit. Doctors may look at the raw images to make a diagnosis.--, in [0117]-[0120]; and, -- the edema sensing system may detect a risk of colorectal anastomotic leak based on fluid build-up. Based on the detected edema physiological conditions, the edema sensing system may generate a score for healing quality. For example, the edema sensing system may generate the healing quality score by comparing edema information to a certain threshold lower leg circumference. Based on the detected edema information--, in [0123]-[0126]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure. The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]).
Re Claim 15, Wolf as modified by SHELTON further discloses the acquisition unit acquires a plurality of surgical images continuously in a time series (see WOLF: e.g., [0024] Some embodiments of this disclosure involve systems, methods, and computer readable media for enabling fluid leak detection during surgery. Embodiments may involve receiving, in real time, intracavitary video of a surgical procedure. The processor may be configured to analyze frames of the intracavitary video to determine an abnormal fluid leakage situation in the intracavitary video. The embodiments may institute a remedial action when the abnormal fluid leakage situation is determined.--, in [0023]-[0024]; Fig. 3, and, --display screen 113 may show a zoomed-in image of a tip of a surgical instrument and a surrounding tissue of an anatomical structure in proximity to the surgical instrument. … instrument 301 may be configured to measure data related to various properties of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B may be used to measure the electrical resistance and/or impedance of tissue 331…based on angle 317 distance between tips 323A and 323B may be measured, and this distance may be transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by various cameras 115, 121, 123 and 125, as shown in FIG. 1. --, in [0096]-[0099]; and see: Fig. 8A, and, -- At step 804, process 800 may include analyzing the video footage to identify a video footage location associated with a surgical phase of the particular surgical procedure. As discussed above, the location may be associated with a particular frame, a range of frames, a time index, a time range, or any other location identifier. [0189] Process 800 may include generating a phase tag associated with the surgical phase, as shown in step 806.
This may occur, for example, through video content analysis (VCA), using techniques such as one or more of video motion detection, video tracking, shape recognition, object detection, fluid flow detection, equipment identification, behavior analysis, or other forms of computer aided situational awareness. When learned characteristics associated with a phase are identified in the video, a tag may be generated demarcating that phase. The tag may include, for example, a predefined name for the phase. At step 808, process 800 may include associating the phase tag with the video footage location. The phase tag may indicate, for example, that the identified video footage location is associated with the surgical phase of the particular surgical procedure. At step 810, process 800 may include analyzing the video footage using one or more of the VCA techniques described above, to identify an event location of a particular intraoperative surgical event within the surgical phase. Process 800 may include associating an event tag with the event location of the particular intraoperative surgical event, as shown at step 812. The event tag may indicate, for example, that the video footage is associated with the surgical event at the event location.--, in [0197]-[0199]; and, -- to track a surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different view angles. For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure….
tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument,… to predict future position and orientation of cameras 115-125 based on the movement of the hand of the surgeon, the movement of a surgical instrument, the movement of a body of the surgeon, historical data reflecting likely next steps, or any other data from which future movement may be derived.--, in [0307]-[0309]), and the evaluation unit identifies, among the plurality of surgical images, a surgical image having recognition degree with respect to information relating to a specific anatomical structure included in the body information that is determined to be equal to or greater than a specific threshold in the analysis unit (see WOLF: e.g., --to capture images of a surgical procedure, image data associated with a first event during the surgical procedure. The embodiments may determine, based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure, and may receive, from at least one image sensor arranged to capture images of a surgical procedure, image data associated with a second event during the surgical procedure. The embodiments may then determine, based on the received image data associated with the second event, a change in the predicted outcome, causing the predicted outcome to drop below a threshold. A recommended remedial action may be identified and recommended based on image-related data on prior surgical procedures contained in a data structure.--, in [0023]-[0024]; and, -- machine learning algorithms (also referred to as machine learning models in the present disclosure) may be trained using training examples, for example in the cases described below. 
Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regressions algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recursive neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and so forth. For example, a trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recursive neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs. Further, in some examples, training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples.--, in [0079], -- camera 115 may be equipped with a laser 137 (e.g., an infrared laser) for precision tracking. In some cases, camera 115 may be tracked automatically via a computer-based camera control application that uses an image recognition algorithm for positioning the camera to capture video/image data of a ROI. 
For example, the camera control application may identify an anatomical structure, identify a surgical tool, hand of a surgeon, bleeding, motion, and the like at a particular location within the anatomical structure, and track that location with camera 115 by rotating camera 115 by appropriate yaw and pitch angles. In some embodiments, the camera control application may control positions (i.e., yaw and pitch angles) of various cameras 115, 121, 123 and 125 to capture video/image date from different ROIs during a surgical procedure. Additionally or alternatively, a human operator may control the position of various cameras 115, 121, 123 and 125, and/or the human operator may supervise the camera control application in controlling the position of the cameras.--, in [0086]; and, -- the markers may be automatically generated and included in the timeline based on information in the video at a given location. In some embodiments, computer analysis may be used to analyze frames of the video footage and identify markers to include at various locations in the timeline. Computer analysis may include any form of electronic analysis using a computing device. In some embodiments, computer analysis may include using one or more image recognition algorithms to identify features of one or more frames of the video footage. Computer analysis may be performed on individual frames, or may be performed across multiple frames, for example, to detect motion or other changes between frames.--, in [0114]; also SHELTON: e.g., -- the edema sensing system may detect a risk of colorectal anastomotic leak based on fluid build-up. Based on the detected edema physiological conditions, the edema sensing system may generate a score for healing quality. For example, the edema sensing system may generate the healing quality score by comparing edema information to a certain threshold lower leg circumference. 
Based on the detected edema information--, in [0123]-[0126]; and, -- The systems and techniques may be employed to evaluate a healthcare professional's techniques for using a surgical instrument. The systems and techniques may monitor a healthcare professional's range-of-motions and efficiency-of-motion while conducting a surgical procedure. The system may evaluate the actions of the healthcare professional relative to the technique of others and/or accepted techniques and may offer feedback to the healthcare professional to improve his or her technique as the healthcare professional is performing a surgical procedure.--, in [0414]). Conclusion Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG whose telephone number is (571)270-5670. The examiner can normally be reached on 8:00 - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WEI WEN YANG/Primary Examiner, Art Unit 2662
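The §112 dispute turns on the claim 15 limitation of identifying, among a time series of surgical images, those whose recognition degree for a specific anatomical structure meets or exceeds a threshold. As a minimal illustration only — the examiner's point is precisely that the specification discloses no concrete scoring algorithm, so every name and score below is hypothetical — the limitation reduces to a threshold filter over per-frame recognition scores:

```python
# Hypothetical sketch of the claim 15 limitation: keep only the surgical
# images whose recognition score for a specific anatomical structure is
# equal to or greater than a threshold. The scoring model itself is not
# disclosed in the application; scores here are made up.

def select_recognized_images(scored_frames, threshold=0.8):
    """scored_frames: list of (frame_id, recognition_score) pairs."""
    return [frame for frame, score in scored_frames if score >= threshold]

frames = [("t=00:12", 0.91), ("t=00:13", 0.42), ("t=00:14", 0.80)]
print(select_recognized_images(frames))  # → ['t=00:12', 't=00:14']
```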

Prosecution Timeline

Aug 23, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §103, §112
Dec 17, 2025
Response Filed
Mar 11, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789
ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12586413
METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS
2y 5m to grant Granted Mar 24, 2026
Patent 12582359
IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12573034
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12567168
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
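The headline percentages above follow directly from the examiner's career numbers — a minimal sketch, assuming the grant probability is simply the career allow rate (539 granted of 657 resolved) and the with-interview figure adds the stated 10.9-point lift:

```python
# Reconstructing the dashboard's headline figures from the stated career
# data. Assumption: grant probability == career allow rate, and the
# with-interview figure adds the stated interview lift.
granted, resolved = 539, 657
grant_probability = round(100 * granted / resolved)         # 82 (%)
interview_lift = 10.9                                       # percentage points
with_interview = round(grant_probability + interview_lift)  # 93 (%)
print(grant_probability, with_interview)  # → 82 93
```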
