DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-21, received on 17 June 2024, are currently pending and being considered by the Examiner in this Office Action.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 25 July 2024, 23 January 2025, 18 April 2025, 29 July 2025, and 10 December 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDSs are being considered by the Examiner in this Office Action.
Claim Objections
Claims 1-21 are objected to because of the following informalities:
Regarding claims 1, 6, 12, 13, & 16, the limitation “identifying one or more surgical procedure being performed…” should instead recite “identifying one or more surgical procedures being performed…”, consistent with the plural form referenced in a limitation further down in each of the independent claims;
Regarding claims 2-11 & 14-21, these claims depend from independent claims 1, 12, & 13, and therefore inherit the deficiencies associated therewith.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claims recite subject matter within a statutory category as a process (claims 1-12) and a machine (claims 13-21) (Subject Matter Eligibility (SME) Test Step 1: Yes), which recite steps of:
identifying, using a real-time surgical context recognition module, one or more surgical procedures being performed on a patient in a sterile field, wherein the real-time surgical context recognition module receives one or more video streams of the surgical procedure being performed and one or more video streams of a back table within the sterile field;
determining, using a back table instruction processor module including a trained machine learning agent, a sequence of surgical tools that will be needed to perform the identified one or more surgical procedures;
outputting, to a monitor visible within the sterile field, each of the surgical tools within the sequence, wherein the surgical tools are presented sequentially for arrangement on the back table within the sterile field;
receiving input from a back table camera viewing the back table; and
verifying, by the back table instruction processor module, that the surgical tools within the sequence have been provided on the back table.
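For orientation only, the combination of recited steps can be summarized in the following minimal Python sketch; every module name, interface, and rule below is a hypothetical stand-in for the claim language, not Applicant’s disclosed implementation or any cited reference’s code.

# Hypothetical sketch of the recited pipeline (illustrative only; all names
# are invented and do not reflect Applicant's or any reference's code).
from dataclasses import dataclass, field

@dataclass
class BackTableState:
    required_sequence: list = field(default_factory=list)  # tools still to be placed
    observed_tools: set = field(default_factory=set)       # tools detected on the table

def run_guidance_loop(procedure_feed, back_table_feed, context_model, tool_model, monitor):
    # (1) identify the surgical procedure(s) from the two video streams
    procedures = context_model.identify(procedure_feed, back_table_feed)
    # (2) determine the needed sequence of surgical tools via a trained ML agent
    state = BackTableState(required_sequence=tool_model.sequence_for(procedures))
    for tool in state.required_sequence:
        # (3) output each tool sequentially to a monitor visible in the sterile field
        monitor.show(tool)
        # (4) receive input from the back table camera
        state.observed_tools |= tool_model.detect(back_table_feed.next_frame())
        # (5) verify the tool has been provided on the back table
        if tool not in state.observed_tools:
            monitor.flag_missing(tool)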
These steps of identifying one or more surgical procedures being performed on a patient in a sterile field, determining a sequence of surgical tools that will be needed to perform said surgical procedures, outputting each of the surgical tools within the sequence for arrangement on a back table within the sterile field, receiving input from a back table camera viewing the back table, and verifying that the surgical tools have been provided, as drafted, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting the steps as performed by generic computer components, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “identifying one or more surgical procedures” and “determining a sequence of surgical tools needed to perform said procedures” language, performing the identification and/or determination in the context of this claim encompasses a mental process of a user or a doctor/surgeon/scrub technician determining what surgery is to be performed and prepping for said surgery, such as determining which tools are necessary and in which typical order to complete the operation. Similarly, the limitation of outputting each of the surgical tools in sequence to populate/arrange said tools on the back table within the sterile field, as drafted and under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, e.g., a user or a doctor/surgeon telling or otherwise presenting which tools are needed for the surgical operation to a scrub technician/nurse/assistant in order to prep the tools for the surgery on a back table. Likewise, but for the “receiving input” and “verifying that the surgical tools have been provided on the back table” language, receiving input and verifying whether surgical tools have been provided on the back table in the context of this claim encompasses a mental process of a user, such as a doctor/surgeon, confirming whether the appropriate tools are indeed provided according to the requested items needed. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the independent claims recite an abstract idea.
These steps of identifying one or more surgical procedures being performed on a patient in a sterile field, determining a sequence of surgical tools that will be needed to perform said surgical procedures, outputting each of the surgical tools within the sequence for arrangement on a back table within the sterile field, receiving input from a back table camera viewing the back table, and verifying that the surgical tools have been provided, as drafted, under the broadest reasonable interpretation, also encompass certain methods of organizing human activity. MPEP 2106.04(a)(2)(II) sets forth various concepts relating to methods of organizing human activity, including fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). In particular, the steps recited relate heavily to managing personal behavior or relationships or interactions between people. For example, Applicant’s Specification Par [0002]-[0003] describes the background of the relevant art and the interactions between surgeons, surgical assistants, and surgical technicians (e.g., scrub technicians). That is, the steps recited substantially relate to the communicative and interpersonal relationships between various entities in a surgical environment, such as the “coordination between the scrub technician and the surgeon” in determining which tools and/or items are required to complete the surgery and populate the back table in the surgical room, as set forth in Applicant’s Specification. Accordingly, the independent claims recite an abstract idea.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-11 & 14-21, reciting particular aspects of how identifying the surgical procedure to be performed or tools/sequence of tools associated therewith may be performed in the mind but for recitation of generic computer components) (SME Test Step 2A, Prong 1: Yes).
This judicial exception is not integrated into a practical application. In particular, the additional elements, other than the abstract idea per se, do not integrate the abstract idea into a practical application because they amount to no more than limitations which:
amount to mere instructions to apply an exception (such as recitation of a real-time surgical context recognition module, a back table instruction processor, surgical tools, a trained machine learning agent, a monitor/display, a back table camera, one or more processors, and a memory, which amounts to invoking computers as a tool to perform the abstract idea; see Applicant’s Specification [0017] for a real-time surgical context recognition module; Spec [0025] for a back table instruction processor; Spec [0032]-[0033] & Figs. 2-4 for surgical tools; Spec [0036]-[0041] for a trained machine learning agent; Spec [0026] for a monitor/display; Spec [0007] & [0019] for a back table camera; Spec [0052] for one or more processors; Spec [0051] for a memory; see MPEP 2106.05(f));
add insignificant extra-solution activity to the abstract idea (such as recitation of receiving one or more video streams of the surgical procedures being performed and one or more video streams of a back table within the sterile field, and receiving input from a back table camera viewing the back table, which amounts to mere data gathering; recitation of identifying one or more surgical procedures being performed on a patient in a sterile field and determining a sequence of surgical tools that will be needed to perform the surgical procedures, which amounts to selecting a particular data source or type of data to be manipulated; recitation of outputting each of the surgical tools within the sequence, presented sequentially for arrangement on the back table within the sterile field, and storing computer-program instructions/modules for performing the steps recited, which amounts to insignificant application, see MPEP 2106.05(g); and outputting each of the surgical tools within the sequence, e.g. gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, see MPEP 2106.05(a)(II)(iii));
generally link the abstract idea to a particular technological environment or field of use (such as recitation of utilizing said steps in a surgical environment during a surgical procedure performed on a patient in a sterile field and utilizing generally-known machine learning models, see MPEP 2106.05(h)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-11 & 14-21, which recite limitations relating to a computerized method, the surgical tools, the trained machine learning agent, a second trained machine learning agent, the back table instruction processor, the real-time surgical context recognition module, and a tool recognition module, additional limitations which amount to invoking computers as a tool to perform the abstract idea, see Applicant’s Specification [0048] for a computer/computerized method; Spec [0032]-[0033] & Figs. 2-4 for surgical tools; Spec [0036]-[0041] for a trained machine learning agent; Spec [0036]-[0038] for a second trained machine learning agent; Spec [0025] for a back table instruction processor; Spec [0017] for a real-time surgical context recognition module; Spec [0019] for a tool recognition module; claims 4, 11, 14, & 21, which recite limitations relating to receiving an output to be outputted on the monitor/display, additional limitations which add insignificant extra-solution activity to the abstract idea amounting to mere data gathering; claims 6-9, 11, 16-19, & 21, which recite limitations relating to identifying surgical tools and identifying surgical procedures, such as in real-time, additional limitations which add insignificant extra-solution activity to the abstract idea by selecting a particular data source or type of data to be manipulated; claims 5 & 15, which recite limitations relating to training the machine learning agent, additional limitations which amount to insignificant application; claims 2-11 & 14-21, which generally recite limitations relating to utilizing said steps in a surgical environment during a surgical procedure and/or use of generally-known machine learning models, additional limitations which generally link the abstract idea to a particular technological environment or field of use; and claims 4 & 14, which recite limitations relating to outputting each of the surgical tools within the sequence, e.g. gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, see MPEP 2106.05(a)(II)(iii)). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application (SME Test Step 2A, Prong 2: No).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to discussion of integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as receiving one or more video streams of the surgical procedures being performed and one or more video streams of a back table within the sterile field, receiving input from a back table camera viewing the back table, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); identifying one or more surgical procedures being performed on a patient in a sterile field, determining a sequence of surgical tools that will be needed to perform the surgical procedures, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); maintaining records and/or training parameters of one or more procedures and/or tools associated therewith, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); storing computer-program instructions/modules for performing the steps recited, storing one or more received data, such as video streams, etc., e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv)).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea. Dependent claims also recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-11 & 14-21, additional limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields; claims 4, 11, 14, & 21, which recite limitations relating to receiving an output to be outputted on the monitor/display and/or clinical patient data, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); claims 5-9, 11, 15-19, & 21, which recite limitations relating to identifying surgical tools and identifying surgical procedures, such as in real-time, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); claims 5-6 & 15-16, which recite limitations relating to maintaining trained machine learning models and parameters associated therewith, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); claims 2-11 & 14-21, which recite limitations relating to storing computer-program instructions/modules for performing the steps recited and storing one or more received data, such as video streams, e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv); and claims 11 & 21, which recite limitations relating to receiving patient clinical data, which under BRI includes extraction from a physical and/or electronic document, such as a patient health record, e.g., electronic scanning or extracting data from a physical document, Content Extraction, MPEP 2106.05(d)(II)(v)). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation (SME Test Step 2B: No).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Fine et al. (U.S. Patent Publication No. 2021/0327567), hereinafter “Fine”, in view of Jogan et al. (U.S. Patent Publication No. 2022/0334787), hereinafter “Jogan”, further in view of Gerstner et al. (U.S. Patent Publication No. 2025/0331945), hereinafter “Gerstner”.
Claim 1 –
Regarding Claim 1, Fine discloses a method of providing surgical guidance to a scrub technician during a surgical procedure, the method comprising:
identifying, using a real-time surgical context recognition module, one or more surgical procedures being performed on a patient in a sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field, such that the system can leverage machine learning and artificial intelligence to determine which step of a surgical procedure the OR is currently performing, i.e. understood to be surgical context, albeit not explicitly recited for identifying the overall surgical procedure(s) itself), wherein
the real-time surgical context recognition module receives one or more video streams of the surgical procedure being performed and one or more video streams of a back table within the sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
determining, using a back table instruction processor module including a trained machine learning agent, a sequence of surgical tools that will be needed to perform the identified one or more surgical procedures (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
outputting, to a monitor visible within the sterile field, each of the surgical tools within the sequence, wherein the surgical tools are presented sequentially for arrangement on the back table within the sterile field (While not “back table” per se, see Fine Par [0047] which discloses demonstrating the potential to detect instrument use events from real-time video feeds of instrument trays on moveable carts (i.e. mayo stands) and is therefore understood to constitute a back table; See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field).
While Fine discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow, Fine does not explicitly disclose identifying the overall surgical procedure itself and/or the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, in particular.
Therefore, Jogan discloses automatically identifying the overall surgical procedure itself based on various contextual data received by the system (see Jogan Par [0105] which discloses determining or inferring information related to a surgical procedure from contextual data received from databases and/or instruments, including the type of procedure being undertaken, the type of tissue being operated on, the body cavity, etc., and thereby improve a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure; See Jogan Par [0110] which discloses the auxiliary equipment that are modular devices can automatically pair with the surgical hub that is located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing; once the surgical hub knows what specific procedure is being performed, the surgical hub can then retrieve the steps of that procedure from a memory or from the cloud and then cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing). The disclosure of Jogan is directly applicable to the disclosure of Fine because the disclosures share limitations and capabilities, such as automatically determining and/or outputting surgical workflows on an interface relating to various instruments/steps in a surgical procedure.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Fine, which already discloses automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure and/or cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
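As a purely illustrative aid, the kind of contextual cross-referencing described in Jogan Par [0105] & [0110] might be sketched as follows; the rules, data values, and names are invented for illustration and are not Jogan’s actual implementation.

# Hypothetical illustration of Jogan-style contextual inference: combining EMR
# data, the supplies list, and paired modular devices to infer the procedure
# (rules and names invented for illustration).
def infer_procedure(emr_diagnosis, supplies, paired_devices):
    if "stapler" in paired_devices and "colorectal" in emr_diagnosis:
        return "colorectal resection"    # example rule only
    if "insufflator" in paired_devices and "trocar" in supplies:
        return "laparoscopic procedure"  # example rule only
    return "unknown"

# Once a procedure is inferred, its workflow steps can be retrieved and
# cross-referenced against subsequently received device data.
WORKFLOWS = {"colorectal resection": ["incision", "mobilization", "anastomosis"]}
steps = WORKFLOWS.get(infer_procedure("colorectal cancer", {"trocar"}, {"stapler"}), [])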
Fine and Jogan generally disclose automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow (i.e. determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results). However, Fine and Jogan do not explicitly disclose the surgical tools being presented sequentially, in particular for arrangement on the back table within the sterile field.
Therefore, Gerstner discloses the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field (See Gerstner Par [0201]-[0202] which discloses various instrument set-ups including various known racks or apparatuses, such as a back table, cart, and/or Mayo stand in the operating room, i.e. sterile field; See Gerstner Par [0219]-[0221] which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments that are assigned to each surgical instrument tray; See Gerstner Par [0243] which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)). The disclosure of Gerstner is directly applicable to the combined disclosure of Fine and Jogan, because the disclosures share limitations and capabilities, such as being directed towards outputting elements on an interface relating to various instruments/steps in a surgical procedure.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine and Jogan, which already discloses determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. back-table, and thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration they are supposed to be in, given certain surgical procedures, and subsequently allowing for identifying and/or indications when instruments are missing, in a wrong location, or are in an incorrect configuration (See Gerstner Par [0202]).
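For illustration only, the sequential, planogram-driven presentation described in Gerstner Par [0219]-[0221] & [0243] might be sketched as follows; the planogram layout and function names are assumptions, not Gerstner’s actual code.

# Hypothetical sketch of presenting tools sequentially for back-table
# arrangement per a planogram (data and names invented for illustration).
def present_sequentially(planogram, display):
    for tray, tool in planogram:  # ordered per the planogram
        display.show("Place " + tool + " on " + tray)

# Example planogram: (tray, tool) pairs in the order they should be placed.
example_planogram = [("tray 1", "scalpel"), ("tray 1", "forceps"), ("tray 2", "retractor")]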
Claim 2 –
Regarding Claim 2, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine and Gerstner further disclose a method, wherein:
the one or more procedures comprises two or more procedures sharing the same back table within the sterile field (See Fine Par [0066] which discloses the system including any number of workflows specific to unique medical procedures, i.e. one or more procedures; See Gerstner Par [0129] & [0206] which discloses the vertical rack having different shelves that could be used for surgical procedures, i.e. one or more procedures, and/or communicating data to robotic devices that may be used in surgical procedures, i.e. one or more procedures, such that the robotic devices can be used to retrieve surgical instruments for the surgeon or member of the surgical team).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined disclosure of Fine, Jogan, and Gerstner to further include the one or more procedures comprising two or more procedures sharing the same back table within the sterile field, because this allows for placing all tools for the planned procedures on the rack/back-table, such that different shelves can correspond/be used for the one or more surgical procedures (See Gerstner Par [0129] & [0206]).
Claim 3 –
Regarding Claim 3, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine further discloses a method, wherein:
the one or more procedures comprises a single procedure (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field, such that the system can leverage machine learning and artificial intelligence to determine which step of a surgical procedure the OR is currently performing, i.e. singular procedure; See Fine Par [0066] which discloses the system including any number of workflows specific to unique medical procedures, i.e. one or more procedures).
Claim 4 –
Regarding Claim 4, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine and Gerstner further disclose a method, wherein:
outputting comprises outputting the surgical tools sequentially in a timed manner during the course of the one or more surgical procedures (While not “back table” per se, see Fine Par [0047] which discloses demonstrating the potential to detect instrument use events from real-time video feeds of instrument trays on moveable carts (i.e. mayo stands) and is therefore understood to constitute a back table; See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field; See Gerstner Par [0219]-[0221] which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments that are assigned to each surgical instrument tray; See Gerstner Par [0243] which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already discloses determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. back-table, and thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration they are supposed to be in, given certain surgical procedures, and subsequently allowing for identifying and/or indications when instruments are missing, in a wrong location, or are in an incorrect configuration (See Gerstner Par [0202]).
Claim 5 –
Regarding Claim 5, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine and Gerstner further disclose a method, wherein:
the trained machine learning agent is trained to identify the sequence of surgical tools based on a doctor preference (See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow; See Gerstner Par [0158] which discloses additionally receiving training sets of “backend” data including surgeon preference, such that the planogram can be based on said surgeon preferences; See Gerstner Par [0160] & [0196] which discloses additional preferences corresponding to specific instruments, sets, trays, and/or locations on the vertical rack being received over time and the system learning said preferences/modifying the planograms accordingly; See Gerstner Par [0219]-[0221] which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments that are assigned to each surgical instrument tray; See Gerstner Par [0243] which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner which already discloses determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, such as according to doctor preference, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. back-table, and thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration they are supposed to be in, given certain surgical procedures/doctor’s preferences, and subsequently allowing for identifying and/or indications when instruments are missing, in a wrong location, or are in an incorrect configuration (See Gerstner Par [0158]-[0160], [0196], & [0202]).
Claim 6 –
Regarding Claim 6, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine and Jogan further disclose a method, wherein:
identifying the one or more surgical procedures being performed comprises using a second trained machine learning agent (It should be noted that while a “second trained machine learning agent” may be specified, this is understood to represent optimization within prior art conditions or through routine experimentation (See MPEP 2144.05(II)(A)); i.e., Fine and Jogan effectively disclose identifying one or more surgical procedures based on contexts of an active OR and/or other information via an automated learning algorithm, such as machine learning, such that it would be understood by one of ordinary skill in the art before the effective filing date of the claimed invention that further training a first learning algorithm instead of employing a second learning algorithm accomplishes the same endeavor of identifying the one or more surgical procedures being performed; therefore, while not a “second trained machine learning agent” per se, see Fine Par [0051] which discloses expanding the training dataset, i.e. effectively creating a second training dataset, to deliver fully automated, role-specific workflow advancement within the context of an active OR, such that the model was optimized to handle more complex visual scenarios that are likely to occur during a procedure; see Jogan Par [0105] which discloses determining or inferring information related to a surgical procedure from contextual data received from databases and/or instruments, including the type of procedure being undertaken, the type of tissue being operated on, the body cavity, etc., thereby improving a robotic arm and/or robotic surgical tool that are connected to it and providing contextualized information or suggestions to the surgeon during the course of the surgical procedure, and which would therefore be a separate learning algorithm from the one provided in Fine; See Jogan Par [0110] which discloses the auxiliary equipment that are modular devices can automatically pair with the surgical hub that is located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing; once the surgical hub knows what specific procedure is being performed, the surgical hub can then retrieve the steps of that procedure from a memory or from the cloud and then cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Fine, Jogan, and Gerstner, which already discloses automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure and/or cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
Claim 7 –
Regarding Claim 7, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine further discloses a method, wherein:
identifying and determining are performed in real time (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow in real-time).
Claim 8 –
Regarding Claim 8, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine further discloses a method, wherein:
identifying, using the real-time surgical context recognition module, comprises identifying one or more tools already being used by the surgical procedure (See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed).
Claim 9 –
Regarding Claim 9, Fine, Jogan, and Gerstner disclose the method of claim 8 in its entirety. Fine and Gerstner further disclose a method, wherein:
determining, using the back table instruction processor, comprises identifying one or more tools present on the back table (See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field; See Gerstner Par [0201]-[0202] which discloses various instrument set-ups including various known racks or apparatuses, such as a back table, cart, and/or Mayo stand in the operating room, i.e. sterile field; See Gerstner Par [0219]-[0221] which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments that are assigned to each surgical instrument tray; See Gerstner Par [0243] which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already discloses determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. back-table, and thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration they are supposed to be in, given certain surgical procedures, and subsequently allowing for identifying and/or indications when instruments are missing, in a wrong location, or are in an incorrect configuration (See Gerstner Par [0202]).
Claim 10 –
Regarding Claim 10, Fine, Jogan, and Gerstner disclose the method of claim 9 in its entirety. Fine further discloses a method, further comprising:
using a tool recognition module (See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed).
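As a rough illustration of the enter/exit “instrument use event” logging described in Fine Par [0049], consider the sketch below; the detector interface and names are hypothetical, not Fine’s implementation.

# Hypothetical sketch of instrument-use-event logging: record an event
# whenever a detected instrument enters or exits the camera's field of view
# (names invented for illustration).
def log_use_events(prev_visible, detections, event_log):
    for tool in detections - prev_visible:
        event_log.append((tool, "entered"))  # instrument entered field of view
    for tool in prev_visible - detections:
        event_log.append((tool, "exited"))   # instrument exited field of view
    return detections  # becomes prev_visible for the next frame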
Claim 11 –
Regarding Claim 11, Fine, Jogan, and Gerstner disclose the method of claim 1 in its entirety. Fine and Jogan further disclose a method, further comprising:
identifying, using the real-time surgical context recognition module, comprises receiving patient clinical data (See Jogan Par [0110] which discloses the auxiliary equipment that are modular devices can automatically pair with the surgical hub that is located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing) in addition to the one or more video streams of the back table within the sterile field and the one or more video streams of the surgical procedure (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already discloses automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data, such as patient clinical/EMR data in particular, received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure and/or cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
Claim 12 –
Regarding Claim 12, Fine discloses a method of providing surgical guidance to a scrub technician during a surgical procedure, the method comprising:
identifying, using a real-time surgical context recognition module, one or more surgical procedures being performed on a patient in a sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field, such that the system can leverage machine learning and artificial intelligence to determine which step of a surgical procedure the OR is currently performing, i.e. understood to be surgical context, albeit not explicitly recited for identifying the overall surgical procedure(s) itself), wherein
the real-time surgical context recognition module receives one or more video streams of the surgical procedure being performed and one or more video streams of a back table within the sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
determining, using a back table instruction processor module including a trained machine learning agent, a sequence of surgical tools that will be needed to perform the identified one or more surgical procedures (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
outputting, to a monitor visible within the sterile field, each of the surgical tools within the sequence, wherein the surgical tools are presented sequentially for arrangement on the back table within the sterile field (While not “back table” per se, see Fine Par [0047] which discloses demonstrating the potential to detect instrument use events from real-time video feeds of instrument trays on moveable carts (i.e. mayo stands) and is therefore understood to constitute a back table; See Fine Par [0049] which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifying an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field);
receiving input from a back table camera viewing the back table (See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed, i.e. input, of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow); and
While Fine discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow, Fine does not explicitly disclose identifying the overall surgical procedure itself, the surgical tools being presented sequentially in particular for arrangement on the back table within the sterile field, or verifying, by the back table instruction processor module, that the surgical tools within the sequence have been provided on the back table, in particular.
Therefore, Jogan discloses automatically identifying the overall surgical procedure itself based on various contextual data received by the system (see Jogan Par [0105] which discloses determining or inferring information related to a surgical procedure from contextual data received from databases and/or instruments, including the type of procedure being undertaken, the type of tissue being operated on, the body cavity, etc., and thereby improve a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure; See Jogan Par [0110] which discloses the auxiliary equipment that are modular devices can automatically pair with the surgical hub that is located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing; once the surgical hub knows what specific procedure is being performed, the surgical hub can then retrieve the steps of that procedure from a memory or from the cloud and then cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Fine, which already discloses automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool that are connected to it and provide contextualized information or suggestions to the surgeon during the course of the surgical procedure and/or cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
Fine and Jogan generally disclose automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow (i.e. determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results). However, Fine and Jogan do not explicitly disclose the surgical tools being presented sequentially, in particular for arrangement on the back table within the sterile field.
However, Gerstner discloses the surgical tools being presented sequentially for arrangement on the back table within the sterile field (See Gerstner Par [0201]-[0202], which discloses various instrument set-ups including various known racks or apparatuses, such as a back table, cart, and/or Mayo stand in the operating room, i.e. sterile field; See Gerstner Par [0219]-[0221], which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments assigned to each surgical instrument tray; See Gerstner Par [0243], which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)), and verifying, by the back table instruction processor module, that the surgical tools within the sequence have been provided on the back table (See Gerstner Par [0200]-[0202] & [0266], which discloses the computer being able to verify the presence or absence of an instrument within an instrument set or tray even if an instrument is not in the correct place according to the planogram for a particular procedure).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine and Jogan, which already disclose determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially for arrangement on the back table within the sterile field and verifying said arrangement, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. the back table, thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration, for given surgical procedures, and subsequently allowing for identification and/or indication when instruments are missing, in a wrong location, or in an incorrect configuration (See Gerstner Par [0202]).
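Examiner's note (illustration only): a minimal, hypothetical sketch of the back-table verification concept attributed to Gerstner above, assuming an upstream camera/vision stage supplies tool detections. All function names, tool labels, and slot indices are the examiner's own and appear in no cited reference.

# Hypothetical sketch: verify that the surgical tools within a planned
# sequence have been provided on the back table, in the spirit of
# Gerstner's planogram check (Par [0200]-[0202] & [0266]).
# Detections are assumed to come from an upstream vision stage.

def verify_back_table(planned_sequence: list[str],
                      detected: dict[str, int]) -> dict:
    """Compare the planned tool sequence against detected tool positions.

    planned_sequence: tool names in the order they should appear.
    detected: mapping of tool name -> slot index observed on the table.
    Returns which tools are missing and which are out of place.
    """
    missing = [t for t in planned_sequence if t not in detected]
    misplaced = [t for i, t in enumerate(planned_sequence)
                 if t in detected and detected[t] != i]
    return {"missing": missing, "misplaced": misplaced,
            "verified": not missing and not misplaced}

if __name__ == "__main__":
    plan = ["scalpel", "forceps", "needle_driver", "scissors"]
    seen = {"scalpel": 0, "needle_driver": 1, "scissors": 3}
    print(verify_back_table(plan, seen))
    # -> forceps missing; needle_driver misplaced (slot 1 vs. planned slot 2)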
Claim 13 –
Regarding Claim 13, Fine discloses a system comprising:
one or more processors (See Fine Par [0026] & [0035]);
a memory coupled to the one or more processors (See Fine Par [0026] & [0035]), the memory storing computer-program instructions (See Fine Par [0026]) that when executed by the one or more processors, perform a computer-implemented method comprising:
identifying, using a real-time surgical context recognition module, one or more surgical procedures being performed on a patient in a sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field, such that the system can leverage machine learning and artificial intelligence to determine which step of a surgical procedure the OR is currently performing, i.e. understood to be surgical context, albeit not explicitly recited for identifying the overall surgical procedure(s) itself), wherein
the real-time surgical context recognition module receives one or more video streams of the surgical procedure being performed and one or more video streams of a back table within the sterile field (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
determining, using a back table instruction processor including a trained machine learning agent, a sequence of surgical tools that will be needed to perform the identified one or more surgical procedures (See Fine Par [0028] which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029] which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow);
outputting, to a monitor visible within the sterile field, each of the surgical tools within the sequence, wherein the surgical tools are presented sequentially for arrangement on the back table within the sterile field (While not a “back table” per se, see Fine Par [0047], which discloses demonstrating the potential to detect instrument use events from real-time video feeds of instrument trays on moveable carts (i.e. mayo stands) that are therefore understood to constitute a back table; See Fine Par [0049], which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifies an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially for arrangement on the back table within the sterile field).
While Fine discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events, automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow, Fine does not explicitly disclose identifying the overall surgical procedure itself or the surgical tools being presented sequentially for arrangement on the back table within the sterile field.
However, Jogan discloses automatically identifying the overall surgical procedure itself based on various contextual data received by the system (See Jogan Par [0105], which discloses determining or inferring information related to a surgical procedure from contextual data received from databases and/or instruments, including the type of procedure being undertaken, the type of tissue being operated on, the body cavity, etc., thereby improving a robotic arm and/or robotic surgical tool connected thereto and providing contextualized information or suggestions to the surgeon during the course of the surgical procedure; See Jogan Par [0110], which discloses that auxiliary equipment that are modular devices can automatically pair with the surgical hub located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing; once the surgical hub knows what specific procedure is being performed, the surgical hub can then retrieve the steps of that procedure from a memory or from the cloud and cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Fine, which already discloses automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool connected to the system, providing contextualized information or suggestions to the surgeon during the course of the surgical procedure, and/or cross-referencing the data subsequently received from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
Fine and Jogan generally disclose automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow (i.e. determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results). However, Fine and Jogan do not explicitly disclose the surgical tools being presented sequentially for arrangement on the back table within the sterile field.
However, Gerstner discloses the surgical tools being presented sequentially for arrangement on the back table within the sterile field (See Gerstner Par [0201]-[0202], which discloses various instrument set-ups including various known racks or apparatuses, such as a back table, cart, and/or Mayo stand in the operating room, i.e. sterile field; See Gerstner Par [0219]-[0221], which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments assigned to each surgical instrument tray; See Gerstner Par [0243], which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine and Jogan, which already disclose determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. the back table, thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration, for given surgical procedures, and subsequently allowing for identification and/or indication when instruments are missing, in a wrong location, or in an incorrect configuration (See Gerstner Par [0202]).
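Examiner's note (illustration only): a hypothetical sketch of outputting the tool sequence to a monitor visible within the sterile field, loosely analogous to the graphical tray presentation of Gerstner Par [0219]-[0221]. Rendering is reduced to plain text for brevity; all names and markers are the examiner's own assumptions and are not taken from any cited reference.

# Hypothetical sketch: present each tool in the determined sequence on a
# monitor, highlighting the next tool to be placed on the back table.

def render_sequence(sequence: list[str], placed: set) -> str:
    """Return a text rendering of the sequence, marking tools already
    placed, the next tool to place, and pending tools."""
    lines = []
    next_shown = False
    for i, tool in enumerate(sequence, start=1):
        if tool in placed:
            mark = "[placed]"
        elif not next_shown:
            mark = "[PLACE NEXT]"
            next_shown = True
        else:
            mark = "[pending]"
        lines.append(f"{i}. {tool:<15} {mark}")
    return "\n".join(lines)

if __name__ == "__main__":
    seq = ["scalpel", "forceps", "needle_driver"]
    print(render_sequence(seq, placed={"scalpel"}))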
Claim 14 –
Regarding Claim 14, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine and Gerstner further disclose a system, wherein:
outputting comprises outputting the surgical tools sequentially in a timed manner during the course of the one or more surgical procedures (While not a “back table” per se, see Fine Par [0047], which discloses demonstrating the potential to detect instrument use events from real-time video feeds of instrument trays on moveable carts (i.e. mayo stands) that are therefore understood to constitute a back table; See Fine Par [0049], which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifies an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially for arrangement on the back table within the sterile field; See Gerstner Par [0219]-[0221], which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments assigned to each surgical instrument tray; See Gerstner Par [0243], which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already disclose determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. the back table, thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration, for given surgical procedures, and subsequently allowing for identification and/or indication when instruments are missing, in a wrong location, or in an incorrect configuration (See Gerstner Par [0202]).
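Examiner's note (illustration only): a hypothetical sketch of timed, step-keyed output of tools during the procedure, in the spirit of Fine's step identification. The step-to-tool mapping, function names, and trigger mechanism are all the examiner's own assumptions; in a live system each advance would be driven by recognized video events rather than simple iteration.

# Hypothetical sketch: surface tools "in a timed manner" keyed to the
# surgical step currently identified by the workflow engine.

STEP_TOOLS = {
    "access":     ["scalpel", "trocar"],
    "dissection": ["grasper", "hook_cautery"],
    "closure":    ["needle_driver", "suture_scissors"],
}

def tools_for_step(current_step: str) -> list:
    """Return the tools to surface on the monitor for the current step."""
    return STEP_TOOLS.get(current_step, [])

def advance_workflow(steps: list):
    """Yield (step, tools) pairs as the procedure advances."""
    for step in steps:
        yield step, tools_for_step(step)

if __name__ == "__main__":
    for step, tools in advance_workflow(["access", "dissection", "closure"]):
        print(f"{step}: display {tools}")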
Claim 15 –
Regarding Claim 15, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine and Gerstner further disclose a system, wherein:
the trained machine learning agent is trained to identify the sequence of surgical tools based on a doctor preference (See Fine Par [0011]-[0012] & [0029], which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow; See Gerstner Par [0158], which discloses additionally receiving training sets of “backend” data including surgeon preference, such that the planogram can be based on said surgeon preferences; See Gerstner Par [0160] & [0196], which discloses additional preferences corresponding to specific instruments, sets, trays, and/or locations on the vertical rack being received over time and the system learning said preferences and modifying the planograms accordingly; See Gerstner Par [0219]-[0221], which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments assigned to each surgical instrument tray; See Gerstner Par [0243], which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already disclose determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially for arrangement on the back table within the sterile field, such as according to doctor preference, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. the back table, thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration, for given surgical procedures/doctor's preferences, and subsequently allowing for identification and/or indication when instruments are missing, in a wrong location, or in an incorrect configuration (See Gerstner Par [0158]-[0160], [0196], & [0202]).
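Examiner's note (illustration only): a hypothetical sketch of learning surgeon preferences and biasing the tool sequence accordingly, in the spirit of Gerstner's preference-informed planograms (Par [0158]-[0160] & [0196]). A simple counting model stands in for the trained machine learning agent; every class, surgeon identifier, and tool label is the examiner's own assumption.

# Hypothetical sketch: per-surgeon preference model that orders tools by
# how early and often each surgeon has requested them.
from collections import defaultdict

class PreferenceModel:
    def __init__(self):
        # surgeon -> tool -> accumulated weight
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, surgeon: str, sequence: list):
        """Record one observed sequence; earlier positions weigh more."""
        for pos, tool in enumerate(sequence):
            self.counts[surgeon][tool] += len(sequence) - pos

    def sequence_for(self, surgeon: str, candidate_tools: list) -> list:
        """Order candidate tools by the surgeon's learned preference."""
        weights = self.counts[surgeon]
        return sorted(candidate_tools, key=lambda t: -weights[t])

if __name__ == "__main__":
    model = PreferenceModel()
    model.observe("dr_a", ["forceps", "scalpel", "scissors"])
    model.observe("dr_a", ["forceps", "scissors", "scalpel"])
    print(model.sequence_for("dr_a", ["scalpel", "scissors", "forceps"]))
    # forceps first, reflecting dr_a's observed habit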
Claim 16 –
Regarding Claim 16, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine and Jogan further disclose a system, wherein:
identifying the one or more surgical procedures being performed comprises using a second trained machine learning agent (It should be noted that while a “second trained machine learning agent” is specified, this is understood to represent optimization within prior art conditions or through routine experimentation (See MPEP 2144.05(II)(A)); i.e. Fine and Jogan effectively disclose identifying one or more surgical procedures based on contexts of an active OR and/or other information via an automated learning algorithm, such as machine learning, such that it would have been understood by one of ordinary skill in the art before the effective filing date of the claimed invention that further training a first learning algorithm instead of employing a second learning algorithm accomplishes the same endeavor of identifying the one or more surgical procedures being performed; therefore, while not a “second trained machine learning agent” per se, see Fine Par [0051], which discloses expanding the training dataset, i.e. effectively creating a second training dataset, to deliver fully automated, role-specific workflow advancement within the context of an active OR, such that the model was optimized to handle more complex visual scenarios that are likely to occur during a procedure; See Jogan Par [0105], which discloses determining or inferring information related to a surgical procedure from contextual data received from databases and/or instruments, including the type of procedure being undertaken, the type of tissue being operated on, the body cavity, etc., thereby improving a robotic arm and/or robotic surgical tool connected thereto and providing contextualized information or suggestions to the surgeon during the course of the surgical procedure, and which would therefore be a separate learning algorithm from the one provided in Fine; See Jogan Par [0110], which discloses that auxiliary equipment that are modular devices can automatically pair with the surgical hub located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing; once the surgical hub knows what specific procedure is being performed, the surgical hub can then retrieve the steps of that procedure from a memory or from the cloud and cross-reference the data it subsequently receives from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Fine, Jogan, and Gerstner, which already disclose automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool connected to the system, providing contextualized information or suggestions to the surgeon during the course of the surgical procedure, and/or cross-referencing the data subsequently received from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
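Examiner's note (illustration only): a hypothetical sketch of the design point discussed above, namely that whether the procedure is identified by a second trained agent or by further training of a single agent is an implementation choice serving the same functional role. Both stand-in classes, their methods, and all data are the examiner's own assumptions and are not taken from any cited reference.

# Hypothetical sketch: two cooperating agents, one identifying the
# procedure from contextual signals (cf. Jogan Par [0105] & [0110]) and
# one sequencing tools for the identified procedure.

class ToolSequenceAgent:
    """Stands in for the first trained agent (tool sequencing)."""
    def predict_sequence(self, procedure: str) -> list:
        return {"appendectomy": ["scalpel", "retractor", "suture"]}.get(
            procedure, [])

class ProcedureIdAgent:
    """Stands in for the second trained agent (procedure identification)."""
    def identify(self, context: dict) -> str:
        supplies = context.get("supplies", [])
        return "appendectomy" if "retractor" in supplies else "unknown"

if __name__ == "__main__":
    context = {"supplies": ["retractor", "suture"], "devices": ["esu"]}
    procedure = ProcedureIdAgent().identify(context)  # second agent
    print(procedure, "->", ToolSequenceAgent().predict_sequence(procedure))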
Claim 17 –
Regarding Claim 17, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine further discloses a system, wherein:
identifying and determining are performed in real time (See Fine Par [0028], which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029], which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow in real time).
Claim 18 –
Regarding Claim 18, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine further discloses a system, wherein:
identifying, using the real-time surgical context recognition module, comprises identifying one or more tools already being used by the surgical procedure (See Fine Par [0049], which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifies an appropriate sequence given a certain step or benchmark in the surgical procedure being performed).
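Examiner's note (illustration only): a hypothetical sketch of recording an “instrument use event” whenever a tool enters or exits the camera's field of view, in the spirit of Fine Par [0049]. Per-frame detections are assumed to come from an upstream bounding-box detector; only the event bookkeeping is shown, and all names and data are the examiner's own.

# Hypothetical sketch: derive enter/exit "instrument use events" from
# per-frame sets of detected instrument labels.

def instrument_use_events(frames: list) -> list:
    """frames: per-frame sets of detected instrument labels.
    Returns (frame_index, instrument, 'enter'|'exit') events."""
    events, previous = [], set()
    for i, current in enumerate(frames):
        for tool in current - previous:
            events.append((i, tool, "enter"))
        for tool in previous - current:
            events.append((i, tool, "exit"))
        previous = current
    return events

if __name__ == "__main__":
    feed = [set(), {"scalpel"}, {"scalpel", "forceps"}, {"forceps"}]
    for event in instrument_use_events(feed):
        print(event)
    # (1, 'scalpel', 'enter'), (2, 'forceps', 'enter'), (3, 'scalpel', 'exit')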
Claim 19 –
Regarding Claim 19, Fine, Jogan, and Gerstner disclose the system of claim 18 in its entirety. Fine and Gerstner further disclose a system, wherein:
determining, using the back table instruction processor, comprises identifying one or more tools present on the back table (See Fine Par [0049], which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifies an appropriate sequence given a certain step or benchmark in the surgical procedure being performed, albeit not explicitly recited for the surgical tools being presented sequentially for arrangement on the back table within the sterile field; See Gerstner Par [0201]-[0202], which discloses various instrument set-ups including various known racks or apparatuses, such as a back table, cart, and/or Mayo stand in the operating room, i.e. sterile field; See Gerstner Par [0219]-[0221], which discloses presenting a graphical presentation of the selected arrangement of surgical instrument trays, such that the user interface shows various graphical elements that each represent surgical tools/instruments assigned to each surgical instrument tray; See Gerstner Par [0243], which discloses the computing system outputting the surgical instruments being arranged on the trays in arrangements specified by a particular planogram (e.g. ordered on each tray in an order, i.e. sequentially)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already disclose determining which surgical tools are most appropriate for particular steps in a surgical workflow and outputting the results, to further specifically include the surgical tools being presented sequentially for arrangement on the back table within the sterile field, as disclosed by Gerstner, because this allows for real-time confirmation of preparation trays, i.e. the back table, thereby allowing the system to automatically “know” what instruments are supposed to be in the trays, and in what configuration, for given surgical procedures, and subsequently allowing for identification and/or indication when instruments are missing, in a wrong location, or in an incorrect configuration (See Gerstner Par [0202]).
Claim 20 –
Regarding Claim 20, Fine, Jogan, and Gerstner disclose the system of claim 19 in its entirety. Fine further discloses a system, further comprising:
using a tool recognition module (See Fine Par [0049], which discloses the instrument recognition engine being trained by using an instrument preparation station typical of most ORs (i.e. a mayo stand), and live-feed video in which the instrument recognition engine detected, identified, and added bounding boxes to a plurality of surgical instruments added to a tray simulating a realistic scenario in the OR, such that any time an instrument enters or exits the field of view, the instrument recognition engine records this as an “instrument use event” and identifies an appropriate sequence given a certain step or benchmark in the surgical procedure being performed).
Claim 21 –
Regarding Claim 21, Fine, Jogan, and Gerstner disclose the system of claim 13 in its entirety. Fine and Jogan further disclose a system, wherein:
identifying, using the real-time surgical context recognition module, comprises receiving patient clinical data (See Jogan Par [0110], which discloses that auxiliary equipment that are modular devices can automatically pair with the surgical hub located within a particular vicinity of the modular devices as part of their initialization process, such that the surgical hub can then derive contextual information about the surgical procedure by detecting the types of modular devices that pair with it during this pre-operative or initialization phase; furthermore, based on the combination of the data from the patient's EMR, the list of medical supplies to be used in the procedure, and the type of modular devices that connect to the hub, the surgical hub can generally infer the specific procedure that the surgical team will be performing) in addition to the one or more video streams of the back table within the sterile field and the one or more video streams of the surgical procedure (See Fine Par [0028], which discloses receiving real-time video feed showing a surgery in the OR, i.e. sterile field; See Fine Par [0011]-[0012] & [0029], which discloses receiving real-time video feed of one or more instrument trays and/or preparation stations in an operating room, intended for collecting instrument use events and automatically advancing a surgical procedure workflow based on the real-time surgery video feed and leveraging machine learning to automatically link instrument and/or material use with steps in the surgery workflow).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Fine, Jogan, and Gerstner, which already disclose automatically advancing a surgical procedure workflow based on the real-time surgery video feed and a particular step of the surgical procedure identified, to further include overall identification of the entire surgical procedure itself based on various contextual data, such as patient clinical/EMR data in particular, received by the system, as disclosed by Jogan, because this allows for improving a robotic arm and/or robotic surgical tool connected to the system, providing contextualized information or suggestions to the surgeon during the course of the surgical procedure, and/or cross-referencing the data subsequently received from the connected data sources (e.g., modular devices and patient monitoring devices) to infer what step of the surgical procedure the surgical team is performing (See Jogan Par [0105] & [0110]).
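Examiner's note (illustration only): a hypothetical sketch of combining patient clinical/EMR data with the two video-derived signals (surgical-field stream and back-table stream) to identify the procedure, loosely following Jogan's EMR-plus-context inference (Par [0110]). All structures, signatures, and labels are the examiner's own assumptions.

# Hypothetical sketch: fuse EMR supplies and observations from both
# video streams, then pick the best-matching procedure signature.
from dataclasses import dataclass

@dataclass
class ContextInputs:
    emr_supplies: set        # from patient clinical data / EMR
    field_observations: set  # labels from the surgical-field stream
    table_observations: set  # labels from the back-table stream

def identify_procedure(ctx: ContextInputs, signatures: dict) -> str:
    """Pick the procedure whose tool signature best matches the union
    of all three evidence sources."""
    evidence = (ctx.emr_supplies | ctx.field_observations
                | ctx.table_observations)
    return max(signatures, key=lambda name: len(signatures[name] & evidence))

if __name__ == "__main__":
    sigs = {"lap_chole": {"trocar", "clip_applier", "laparoscope"},
            "hernia_repair": {"mesh", "tacker", "laparoscope"}}
    ctx = ContextInputs({"clip_applier"}, {"laparoscope"}, {"trocar"})
    print(identify_procedure(ctx, sigs))   # -> lap_chole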
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Canton et al. (U.S. Patent Publication No. 2023/0386074) discloses a system for organizing and tracking sterilizable tools and consumables through an entire use cycle in a perioperative environment, such as through the multiple steps of the use, decontamination, sterilization, assembly, and distribution workflow and thereby optimizing the collections of sterilizable tools and consumables provided for use in specific procedures or by specific members;
Stiller et al. (U.S. Patent Publication No. 2015/0332196) discloses systems for controlling a workflow in an operating room including interconnected medical devices that support surgical systems and surgical operation, such that use of a medical device at least partially determines the subsequent clinical information that is displayed on a display monitor.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNTER J RASNIC whose telephone number is (571)270-5801. The examiner can normally be reached M-F 8am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.R./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684