Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action for the 18/424389 application is in response to the communications filed December 03, 2025.
Claims 1, 5, 6, 12-14, 16 and 18-20 were amended December 03, 2025.
Claims 4 and 17 were cancelled December 03, 2025.
Claims 21 and 22 were added as new December 03, 2025.
Claims 1-3, 5-16 and 18-22 are currently pending and considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-16 and 18-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
As per claim 1,
Step 1: The claim recites subject matter within a statutory category as a process.
Step 2A is a two-prong inquiry, in which Prong 1 determines whether a claim recites a judicial exception and Prong 2 determines whether the additional limitations of the claim integrate the recited judicial exception into a practical application. If the additional elements of the claim fail to integrate the judicial exception into a practical application, the claim is directed to the recited judicial exception, see MPEP 2106.04(II)(A).
Step 2A Prong 1: The claim contains subject matter that recites an abstract idea, with the steps of a method, comprising: receiving a specification of surgical implements available for use during a surgical operation; receiving an image depicting at least a portion of the available surgical implements utilized during the surgical operation; generating a record of the utilized surgical implements depicted in the image; identifying one or more surgical implement preparation groupings including by: determining a set of candidate surgical implements based at least in part on a similarity or scoring with respect to a plurality of previously received images; and annotating each of at least a subset of utilized surgical implements based at least on the determined set of candidate surgical implements; wherein the identified one or more surgical implement preparation groupings optimizes one or more specified utilization metrics; and performing a coverage analysis to determine a representation level of a characteristic associated with the surgical operation with respect to a group of surgical operations, the group of surgical operations including at least one other surgical operation. These steps, as drafted, under the broadest reasonable interpretation recite:
certain methods of organizing human activity, specifically managing personal behavior or relationships or interactions between people (including social activities; teaching; and following rules or instructions), but for the recitation of generic computer components. That is, other than reciting steps as performed by the generic computer components, nothing in the claim elements precludes the steps from being directed to certain methods of organizing human activity. In the context of this claim, the abstract idea identified above encompasses managing personal behavior or relationships or interactions between people, because each of the limitations of the abstract idea recites rules or instructions that a person can follow in the course of their personal behavior. If a claim limitation, under its broadest reasonable interpretation, covers the recited methods of organizing human activity but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. See MPEP 2106.04(a).
Step 2A Prong 2: The claim does not recite additional elements that integrate the judicial exception into a practical application. In particular, the additional elements, considered apart from the abstract idea per se, do not integrate the abstract idea into a practical application because they amount to no more than limitations which:
amount to mere instructions to apply an exception, see MPEP 2106.05(f), such as:
“at least in part automatically” and “using computer image recognition”, which correspond to merely using a computer as a tool to perform an abstract idea. Paragraph [0021] of the as-filed specification describes that the hardware used to implement the steps of the abstract idea amounts to nothing more than a generic computer. Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.
add insignificant extra-solution activity to the abstract idea, see MPEP 2106.05(g), such as:
“storing the record in a data storage of surgical implement utilization data”, which corresponds to mere data gathering and/or output.
Accordingly, this claim is directed to an abstract idea.
Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and/or generally link the abstract idea to a particular technological environment or field of use. Additionally, the limitations identified as insignificant extra-solution activity amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields, such as:
computer functions that have been identified by the courts as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity, see MPEP 2106.05(d)(II), such as:
“storing the record in a data storage of surgical implement utilization data”, which corresponds to storing and retrieving information in memory.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 2,
Claim 2 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 2 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the specification of the surgical implements available for use during the surgical operation is received from a first source and the image depicting the at least a portion of the available surgical implements utilized during the surgical operation is received from a second source different from the first source.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 3,
Claim 3 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 3 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the specification of the surgical implements available for use during the surgical operation includes an instrument count sheet.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 5,
Claim 5 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 5 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein a total number of the plurality of previously received images utilized for the annotation is based at least on a robustness analysis of the plurality of previously received images performed using a threshold indicating a reliability of the plurality of the previously received images.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 6,
Claim 6 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 6 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein a total number of the plurality of previously received images utilized for the annotation is based at least on a coverage analysis of the plurality of previously received images performed using a threshold indicating a level of representation of the surgical operation with respect to a group of surgical operations.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 7,
Claim 7 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 7 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein … generating the record of the utilized surgical implements depicted in the image includes identifying each of at least a subset of the utilized surgical implements” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
“automatically” and “based at least on a trained machine learning model” further define additional elements that are insufficient to provide a practical application and/or significantly more. The claim with these further defining limitations still corresponds to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 8,
Claim 8 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 8 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein … generating the record of the utilized surgical implements depicted in the image includes identifying each of at least a subset of the utilized surgical implements without using the specification of the surgical implements available for use during the surgical operation.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
“automatically” further defines an additional element that is insufficient to provide a practical application and/or significantly more. The claim with this further defining limitation still corresponds to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 9,
Claim 9 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 9 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the record of the utilized surgical implements depicted in the image is based at least on the specification of the surgical implements available for use during the surgical operation.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 10,
Claim 10 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 10 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the one or more surgical implement preparation groupings are organized into one or more trays.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 11,
Claim 11 depends from claim 10 and inherits all the limitations of the claim from which it depends. Claim 11 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the one or more trays includes a core tray including a first subset of the utilized surgical implements and an accessory tray including a second subset of the utilized surgical implements.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 12,
Claim 12 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 12 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein identification the one or more surgical implement preparation groupings includes performing service-line analysis.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 13,
Claim 13 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 13 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein identification the one or more surgical implement preparation groupings includes determining a co-usage pattern of a first one of the utilized surgical implements that is used with a second one of the utilized surgical implements at a frequency that meets a usage threshold.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 14,
Claim 14 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 14 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“further comprising outputting a report based at least on the identification of the one or more surgical implement preparation groupings.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 15,
Claim 15 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 15 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the one or more specified utilization metrics includes at least one of: a corresponding usage rate for each instrument.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 16,
Claim 16 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 16 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein the identification of the one or more surgical implement preparation groupings includes performing a robustness analysis to determine a reliability of the identified the one or more surgical implement preparation groupings.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 18,
Claim 18 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 18 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein identification of the one or more surgical implement preparation groupings includes performing another coverage analysis to determine a representation level of a characteristic associated with the surgical operation based at least on the specification of the surgical implements available for use during the surgical operation.” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 19,
Claim 19 is substantially similar to claim 1. Accordingly, claim 19 is rejected for the same reasons as claim 1.
“a processor configured to:” and “a memory coupled to the processor and configured to provide the processor with instructions.” introduce additional elements that are insufficient to provide a practical application or significantly more because they amount to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 20,
Claim 20 is substantially similar to claim 1. Accordingly, claim 20 is rejected for the same reasons as claim 1.
“A computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for:” introduces additional elements that are insufficient to provide a practical application or significantly more because they amount to merely using a computer as a tool to perform an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As per claim 21,
Claim 21 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 21 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein determining the set of candidate surgical implements is based at least in part on a cosine similarity or scoring with respect to a plurality of previously received images” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amount to no more than limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
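For context on the limitation addressed above, cosine similarity is the standard measure of the angle between two feature vectors, computed as their dot product divided by the product of their magnitudes. The following is a minimal illustrative sketch only; the feature vectors shown are hypothetical and are not drawn from the application or the record:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); ranges from -1 to 1,
    # with 1 indicating identically-oriented vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical image feature vectors for illustration only.
v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]  # parallel to v1, so similarity is 1.0
print(round(cosine_similarity(v1, v2), 6))
```

Under a scheme such as that recited, candidate implements would be ranked by scores of this kind computed against features of previously received images; the mathematical comparison itself is performed the same way regardless of what the vectors represent.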
As per claim 22,
Claim 22 depends from claim 1 and inherits all the limitations of the claim from which it depends. Claim 22 merely further defines the abstract idea and/or introduces additional elements that are insufficient to provide a practical application or something significantly more:
“wherein at least one image of the plurality of previously received images is captured by a camera” further describes the abstract idea. This claim limitation is still directed to “Certain Methods of Organizing Human Activity” and therefore continues to recite an abstract idea.
Looking at the limitations of the claim as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely recite an abstract idea and/or provide conventional computer implementation, which does not impose a meaningful limit to integrate the abstract idea into a practical application and/or amounts to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 7, 9-12, 14-16, and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over Donnelly et al. (US 2022/0292815; herein referred to as Donnelly) in view of Schlabach (US 2007/0112649).
As per claim 1,
Donnelly teaches a method, comprising: receiving a specification of surgical implements available for use during a surgical operation:
(Paragraph [0019] of Donnelly. The teaching describes a mobile computer device or a computer device, either of which is in communication with at least one image data collection device configured to produce an image of the surgical tray. The application is configured to receive the image of the surgical tray or surgical tool, implant, fastener, or other object on or in the surgical tray from the image data collection device, and to communicate the image through a wired and/or wireless communication network to a server located at a site where the surgical tray is located or at a location remote from the site. The system includes a processor in communication through the wired and/or wireless communication network with the software application, as well as the server, of the system. The processor is configured to call up from a library database of the system, upon communication of the image to the server: a plurality of previously created identification model(s) comprised of previously created tensors linked to synthetic trays. The identification model(s) linked to the synthetic trays were previously uploaded by the training system previously described. The processor is configured to analyze the image and classify the type of tray in the image based on the identification model(s) linked to synthetic trays. Then, based on the classification of the tray in the image assigned by the processor, the processor calls up from the library database: (1) a plurality of identification model(s) linked to 3-dimensional synthetic items, which are linked to the classification of the tray, (2) the identification model(s) including: (a) surface texture, (b) item material composition, and (c) a size tolerance; (3) a list of items linked to the synthetic tray, and (4) a plurality of feature vector(s) created for 3-dimensional synthetic items as outlined above.
The processor then analyzes the images and proceeds to classify the type of items in the image based on the identification model(s) linked to the 3-dimensional synthetic items. The processor then compares the list of classified items to the list of items linked to the classified tray to determine if there are any missing items.)
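The comparison described above — checking the list of classified items against the list of items linked to the classified tray to find missing items — amounts to a set difference. Purely as an illustrative, non-limiting sketch (Donnelly discloses no source code, and the names here are hypothetical):

```python
def missing_items(expected, detected):
    """Items linked to the classified tray that were not
    classified in the image (a simple set difference)."""
    return sorted(set(expected) - set(detected))
```

For example, if the tray's linked list names a scalpel, forceps, and a clamp but only the scalpel and clamp are classified in the image, the forceps would be reported as missing.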
Donnelly further teaches receiving an image depicting at least a portion of the available surgical implements utilized during the surgical operation:
(Paragraphs [0019] and [0037] of Donnelly. The teaching describes that based on the tray classification, the system calls up from a database a list of all real-world items which should be on the tray, including corresponding instrument identification model(s) linked to synthetic items previously uploaded to a library database by an administrator, or their employee, contractor, or agent. Next, the system compares the instrument identification model(s) to the image of the tray to identify which items are located on the tray. Finally, the system displays a list of items that were not located in the image of the surgical tray.)
Donnelly further teaches at least in part automatically generating a record of the utilized surgical implements depicted in the image and storing the record in a data storage of surgical implement utilization data:
(Paragraphs [0043] and [0044] of Donnelly. The teaching describes that the classified tray along with a list of the detected instruments and missing instruments is then provided to the software application for display on a graphical user interface to the user. The system proceeds in two phases. First, the system is configured to analyze an image to identify and classify the type of tray in the image. Second, the system is configured then to analyze the trays to determine, what, if any, items are missing from the tray(s) and then to display the results to a user through a graphical user interface. Optionally, the system can store the results in a database for later analysis, such as, for use in auditing.)
Donnelly further teaches identifying one or more surgical implement preparation groupings using computer image recognition including by determining a set of candidate surgical implements based at least in part on a similarity or scoring with respect to a plurality of previously received images wherein the identified one or more surgical implement preparation groupings optimizes one or more specified utilization metrics:
(Paragraphs [0074], [0075] and [0140] of Donnelly. The teaching describes that the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays. Both the tray classifier 415 and the instrument identification 420 modules are trained on 2-dimensional synthetic images. In certain embodiments, the tray classifier 415 and the instrument identification 420 are trained using images of real-world objects. Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions.)
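The threshold-setting step Donnelly describes — relating confidence scores to the correctness of test predictions to establish a confidence threshold — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not Donnelly's disclosed implementation; the function name and the target-precision parameter are hypothetical:

```python
def calibrate_threshold(predictions, target_precision=0.95):
    """predictions: list of (confidence, was_correct) pairs from a
    test dataset. Return the lowest confidence cutoff at which the
    precision of the accepted predictions meets the target, or None
    if no cutoff suffices."""
    for cutoff in sorted({conf for conf, _ in predictions}):
        accepted = [ok for conf, ok in predictions if conf >= cutoff]
        if accepted and sum(accepted) / len(accepted) >= target_precision:
            return cutoff
    return None
```

This mirrors the paraphrased logic: predictions below the calibrated cutoff would not be relied upon, consistent with Donnelly's per-item minimum confidence thresholds.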
Donnelly further teaches annotating each of at least a subset of utilized surgical implements based at least on the determined set of candidate surgical implements:
(Paragraph [0038] of Donnelly. The teaching describes that the software application can recognize real-world items from images (photograph or video) because the application has been previously trained with the 3-dimensional synthetic items corresponding with real-world items. In such an embodiment for the data preparation steps, an administrator, or their employee, contractor, or agent: (1) selects an item (e.g., tray, tool, implant, fastener, or the like) and creates scenes, and (2) uploads scenes to a 3-D rendering program such as, for example, 3ds Max and/or Unity, and thereafter the system (3) renders synthetic images of the device (i.e., 2-dimensional images) and creates correlated synthetic colored masks, (4) provides a dataset to the software application to develop annotated files or to map instrument masks to real world images of instruments, and (5) splits the annotated files into subfiles for layering images.)
Donnelly does not explicitly disclose performing a coverage analysis to determine a representation level of a characteristic associated with the surgical operation with respect to a group of surgical operations, the group of surgical operations including at least one other surgical operation.
However, Schlabach teaches performing a coverage analysis to determine a representation level of a characteristic associated with the surgical operation with respect to a group of surgical operations, the group of surgical operations including at least one other surgical operation:
(Paragraphs [0044] and [0045] of Schlabach. The teaching describes that a surgical table may carry or be proximate to one or more RFID readers that periodically or constantly identify the location of tagged items. The system 400 automatically initializes the count procedure before a surgical procedure by identifying the inventory located within range of the surgical table. Periodically during the surgical procedure, the system 400 reports the location of tagged items: unused items on the sterilized carts or trays, used items on the contaminated carts or trays, items on the operating table, and/or items that are internal to a patient's anatomy. The system 400 alerts the surgical team with either a blinking screen or an audio alert, for example, of tagged items that remain within the patient. The system 400 also warns when items are not at any of the known locations (e.g., sterile cart, contaminated cart, table), allowing the surgical team to determine the status of a missing item. The system's count function may be initiated, for example, by an audio command or by pressing a foot pedal or other user command. Insofar as a coverage analysis is a determination of gaps in the data used for analysis, determining when an expected item is missing represents a gap in the implement data used for the tray analysis.)
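The claimed “representation level of a characteristic … with respect to a group of surgical operations” can be read as a simple proportion across operations. As an illustrative, non-limiting sketch only (neither reference discloses code, and the data layout below is an assumption):

```python
def representation_level(characteristic, group):
    """Fraction of operations in the group whose recorded
    characteristics include the given characteristic.
    group: list of dicts with a 'characteristics' set (hypothetical)."""
    matches = sum(1 for op in group if characteristic in op["characteristics"])
    return matches / len(group)
```

For example, if one of two operations in the group records the characteristic, the representation level would be 0.5.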
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the inventory management system of Donnelly the inventory management teachings pertaining to instrument counts of Schlabach. Paragraph [0101] of Schlabach teaches that keeping track of the number of instruments used in a procedure increases the safety of the patient undergoing surgery. One of ordinary skill in the art in possession of Donnelly would have looked to Schlabach to achieve such improvements. One of ordinary skill in the art would have added the teachings of Schlabach to Donnelly based on this incentive without yielding unexpected results.
As per claim 2,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the specification of the surgical implements available for use during the surgical operation is received from a first source and the image depicting the at least a portion of the available surgical implements utilized during the surgical operation is received from a second source different from the first source:
(Paragraphs [0009] and [0037] of Donnelly. The teaching describes an AI-enabled, dynamic, computer vision system useful for identifying surgical trays, instruments, and implants. The AI-enabled system of the invention is trained to meet a minimum recall and precision threshold previously uploaded by an administrator of the system or an employee, contractor, or agent of the administrator. Once trained, the AI-enabled system is configured to be deployed for use in a manner that permits a user to take a picture, video, or image of at least one surgical tool tray with a mobile computer device, which includes an imager (e.g., a camera) and then the system notifies the user of: (1) the types of tray(s), (2) the instruments and implants on the tray(s), and/or (3) any instruments and implants missing from tray(s). In embodiments, the system permits a user to take another picture, video, or image of the tray(s) after surgery, and then the system notifies the user, and records into a database of the system, which instruments and implants are present on the tray(s).)
As per claim 3,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Schlabach further teaches wherein the specification of the surgical implements available for use during the surgical operation includes an instrument count sheet:
(Paragraphs [0077]-[0082] and [0101] of Schlabach. The teaching describes that the second column 602 includes a count for medical equipment listed in the first column 601. The inventory items may be counted for a given point in time. The several columns 603 at the right side of the second column 602 display historical count values for the various categories and items in the first column 601. The system 400 creates a historical update to this count sheet by creating a column. The save button 604 on the right allows the user to save a version of the count sheet to the system 400. The submit button 607 at the bottom saves the entered count data into the history of the count sheet, creating a history column. The system 400 provides functionality to be used concurrently whenever the end user is performing or managing instrument counts on a count sheet during surgery. The functionality includes: pre-population of items, ability to manage a count baseline, review history of count baseline, perform counts, review the history of counts, input quantities through multiple methods, and increase patient safety by having the system 400 measure and pro-actively communicate counts that don’t match the count baseline.)
As per claim 7,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein automatically generating the record of the utilized surgical implements depicted in the image includes identifying each of at least a subset of the utilized surgical implements based at least on a trained machine learning model:
(Paragraphs [0009] and [0037] of Donnelly. The teaching describes an AI-enabled, dynamic, computer vision system useful for identifying surgical trays, instruments, and implants. The AI-enabled system of the invention is trained to meet a minimum recall and precision threshold previously uploaded by an administrator of the system or an employee, contractor, or agent of the administrator. Once trained, the AI-enabled system is configured to be deployed for use in a manner that permits a user to take a picture, video, or image of at least one surgical tool tray with a mobile computer device, which includes an imager (e.g., a camera) and then the system notifies the user of: (1) the types of tray(s), (2) the instruments and implants on the tray(s), and/or (3) any instruments and implants missing from tray(s). In embodiments, the system permits a user to take another picture, video, or image of the tray(s) after surgery, and then the system notifies the user, and records into a database of the system, which instruments and implants are present on the tray(s).)
As per claim 9,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the record of the utilized surgical implements depicted in the image is based at least on the specification of the surgical implements available for use during the surgical operation:
(Paragraph [0061] of Donnelly. The teaching describes that the labels identify the type of material, surface texture, and size tolerance information of the item. The labels are either manually or automatically confirmed, supplemented, or assigned to each bounded section of the item or holding tray. For example, the administrator, or their employee, contractor, or agent, can set the relevant bounding box(es) for a tray holder to metallic properties imitating polished aluminum for the metal holders in the tray and micarta material for other plastic holders. Indeed, each synthetic item or tray part may be manually selected via polygon selection, and onto that subset of polygons, a material ID may be manually assigned by the administrator or their employee, contractor, or agent.)
As per claim 10,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the one or more surgical implement preparation groupings are organized into one or more trays:
(Paragraph [0049] of Donnelly. The teaching describes that the invention can operate with tens of thousands of 2-dimensional synthetic images of each 3-dimensional item that must first be created to enable training of the system to dynamically identify items in an image, regardless of the location, orientation, alternate surface texture (e.g., biological material present on the surface of the item), or evidence of use of an item. The system can be taught to recognize different items by identifying and linking feature vectors to different feature vectors and/or specific trays and instruments, implants, tools, or fasteners.)
As per claim 11,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 10.
Donnelly further teaches wherein the one or more trays includes a core tray including a first subset of the utilized surgical implements and an accessory tray including a second subset of the utilized surgical implements:
(Paragraph [0100] of Donnelly. The teaching describes that the system can further include an option to audit the items on the tray. In such an embodiment, a user, such as a hospital administrator, can first view the patient's case and a list of the required trays and items. The user can then view whether multiple pieces of the same equipment are located on other trays. In this regard, the user can identify potential areas of waste.)
As per claim 12,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein identification of the one or more surgical implement preparation groupings includes performing service-line analysis:
(Paragraphs [0099] and [0100] of Donnelly. The teaching describes that in one embodiment, the ability to review the list of trays and items contained thereon and/or missing depends on the role of the user. For example, if the user signs in as a nurse, the system can restrict user access to only view a list of trays and items contained thereon for the upcoming surgery. Conversely, a hospital administrator can be permitted to not only view the upcoming surgery, but also a list of trays and equipment used in all prior surgeries by the relevant surgeon. Furthermore, certain users, such as medical device sales representatives can be restricted to see the contents of trays that are supposed to contain their products, which would allow those representatives to identify when their products are missing from trays. The system can further include an option to audit the items on the tray. In such an embodiment, a user, such as a hospital administrator, can first view the patient's case and a list of the required trays and items. The user can then view whether multiple pieces of the same equipment are located on other trays. In this regard, the user can identify potential areas of waste.)
As per claim 14,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches further comprising outputting a report based at least on the identification of the one or more surgical implement preparation groupings:
(Paragraph [0140] of Donnelly. The teaching describes that the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays.)
As per claim 15,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the one or more specified utilization metrics includes at least one of: a corresponding usage rate for each instrument:
(Paragraph [0140] of Donnelly. The teaching describes that the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays.)
As per claim 16,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the identification of the one or more surgical implement preparation groupings includes performing a robustness analysis to determine a reliability of the identified one or more surgical implement preparation groupings:
(Paragraphs [0017], [0072], [0075], [0100] and [0140] of Donnelly. The teaching describes a 3-dimensional synthetic item can be used to evaluate the amount and effectiveness of training the system has undergone. A test dataset can be provided to the system during or after the related training dataset is provided to the system to be processed. The system is provided with answers when processing the training dataset(s), but the system is not provided with answers when processing the test dataset(s). When each synthetic image in a test dataset is provided to the system, the system identifies the item in the synthetic image and provides a numeric confidence factor, which represents the confidence the system has that the identification of the item is correct. If the numeric confidence factor fails to meet, or exceed, a minimum threshold previously set in the system by the administrator of the system, or an employee, contractor, or agent of the administrator, additional training dataset(s) are provided to the system so that the system can improve the confidence factor by creating updated feature vector(s) attributable to the identified pattern to be stored on the server(s) for later deployment. Conversely, if the system identification(s) and numeric confidence value(s) are correct, and the confidence factor is equal to or greater than the confidence factor set in the system, then the system can be deployed for use with the new or updated identification model(s). During the training process, the system is provided with synthetic images, and optionally real-life images of objects, e.g., tray(s) or item(s), along with correct results linked to each synthetic, and optionally, real-world, image. The correct results are referred to as a target or a target attribute. The learning algorithm finds patterns in the training data that map the input data attributes to the target. 
Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions. The system establishes a confidence threshold for the identification model based on the determined relationship and labels. To avoid incorrect designations, the administrator of the system or an employee, contractor, or agent of the administrator designates a minimum confidence threshold and links that minimum confidence threshold to the relevant item in the database. Minimum thresholds can be universal across all items, such that the system will only identify a real-world object from an image of a tray if the system identifies the real-world object with more than 90% confidence. Conversely, unique confidence threshold(s) can be linked to individual item(s). For example, the system can be configured to identify a screw in an image with 70% confidence but may be restricted from confirming the presence of a surgical implant in an image unless the confidence value for that identification is greater than 95%. In another embodiment, the system can further include an option to audit the items on the tray. In such an embodiment, a user, such as a hospital administrator, can first view the patient's case and a list of the required trays and items. 
The user can then view whether multiple pieces of the same equipment are located on other trays. In this regard, the user can identify potential areas of waste. In another embodiment, the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays.)
As per claim 18,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
The combined teaching of Donnelly and Schlabach further teaches wherein the identification of the one or more surgical implement preparation groupings includes performing another coverage analysis to determine a representation level of a characteristic associated with the surgical operation based at least on the specification of the surgical implements available for use during the surgical operation:
(Paragraphs [0017], [0072], [0075], [0100] and [0140] of Donnelly. The teaching describes a 3-dimensional synthetic item can be used to evaluate the amount and effectiveness of training the system has undergone. A test dataset can be provided to the system during or after the related training dataset is provided to the system to be processed. The system is provided with answers when processing the training dataset(s), but the system is not provided with answers when processing the test dataset(s). When each synthetic image in a test dataset is provided to the system, the system identifies the item in the synthetic image and provides a numeric confidence factor, which represents the confidence the system has that the identification of the item is correct. If the numeric confidence factor fails to meet, or exceed, a minimum threshold previously set in the system by the administrator of the system, or an employee, contractor, or agent of the administrator, additional training dataset(s) are provided to the system so that the system can improve the confidence factor by creating updated feature vector(s) attributable to the identified pattern to be stored on the server(s) for later deployment. Conversely, if the system identification(s) and numeric confidence value(s) are correct, and the confidence factor is equal to or greater than the confidence factor set in the system, then the system can be deployed for use with the new or updated identification model(s). During the training process, the system is provided with synthetic images, and optionally real-life images of objects, e.g., tray(s) or item(s), along with correct results linked to each synthetic, and optionally, real-world, image. The correct results are referred to as a target or a target attribute. The learning algorithm finds patterns in the training data that map the input data attributes to the target. 
Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions. The system establishes a confidence threshold for the identification model based on the determined relationship and labels. To avoid incorrect designations, the administrator of the system or an employee, contractor, or agent of the administrator designates a minimum confidence threshold and links that minimum confidence threshold to the relevant item in the database. Minimum thresholds can be universal across all items, such that the system will only identify a real-world object from an image of a tray if the system identifies the real-world object with more than 90% confidence. Conversely, unique confidence threshold(s) can be linked to individual item(s). For example, the system can be configured to identify a screw in an image with 70% confidence but may be restricted from confirming the presence of a surgical implant in an image unless the confidence value for that identification is greater than 95%. In another embodiment, the system can further include an option to audit the items on the tray. In such an embodiment, a user, such as a hospital administrator, can first view the patient's case and a list of the required trays and items. 
The user can then view whether multiple pieces of the same equipment are located on other trays. In this regard, the user can identify potential areas of waste. In another embodiment, the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies, the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays.)
(Paragraphs [0044] and [0045] of Schlabach. The teaching describes that a surgical table may carry or be proximate to one or more RFID readers that periodically or constantly identify the location of tagged items. The system 400 automatically initializes the count procedure before a surgical procedure by identifying the inventory located within range of the surgical table. Periodically during the surgical procedure, the system 400 reports the location of tagged items: unused items on the sterilized carts or trays, used items on the contaminated carts or trays, items on the operating table, and/or items that are internal to a patient's anatomy. The system 400 alerts the surgical team, with either a blinking screen or an audio alert, for example, of tagged items that remain within the patient. The system 400 also warns when items are not at any of the known locations (e.g., sterile cart, contaminated cart, table), allowing the surgical team to determine the status of a missing item. The system's count function may be initiated, for example, by an audio command or by pressing a foot pedal or other user command. Insofar as a coverage analysis is a determination of gaps in the data used for analysis, determining when an expected item is missing is a representation of a gap in the implement data used for the tray analysis.)
As per claim 19,
Claim 19 is substantially similar to claim 1. Accordingly, claim 19 is rejected for the same reasons as claim 1.
As per claim 20,
Claim 20 is substantially similar to claim 1. Accordingly, claim 20 is rejected for the same reasons as claim 1.
As per claim 21,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein determining the set of candidate surgical implements is based at least in part on a cosine similarity or scoring with respect to a plurality of previously received images:
(Paragraphs [0074], [0075] and [0140] of Donnelly. The teaching describes that the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies, the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays. Both the tray classifier 415 and the instrument identification 420 modules are trained on 2-dimensional synthetic images. In certain embodiments, the tray classifier 415 and the instrument identification 420 are trained using images of real-world objects. Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions.)
As per claim 22,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein at least one image of the plurality of previously received images is captured by a camera:
(Paragraphs [0074], [0075], [0116] and [0140] of Donnelly. The teaching describes that the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies, the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays. Both the tray classifier 415 and the instrument identification 420 modules are trained on 2-dimensional synthetic images. In certain embodiments, the tray classifier 415 and the instrument identification 420 are trained using images of real-world objects. Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions. The system includes at least one image collection device for obtaining real world pictures of the relevant surgical trays. 
In certain embodiments, the image data collection device can be a camera capable of capturing photographs or video of real-world objects.)
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Donnelly and Schlabach in further view of Adidharma et al. (US 2022/0262098).
As per claim 5,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein a total number of the plurality of previously received images utilized for the annotation is based at least on a robustness analysis of the plurality of received images performed using a threshold:
(Paragraphs [0017], [0072] and [0075] of Donnelly. The teaching describes that when each synthetic image in a test dataset is provided to the system, the system identifies the item in the synthetic image and provides a numeric confidence factor, which represents the confidence the system has that the identification of the item is correct. If the numeric confidence factor fails to meet, or exceed, a minimum threshold previously set in the system by the administrator of the system, or an employee, contractor, or agent of the administrator, additional training dataset(s) are provided to the system so that the system can improve the confidence factor by creating updated feature vector(s) attributable to the identified pattern to be stored on the server(s) for later deployment. Feature vectors can be revised by adding additional scalars, matrices can be revised by adding additional feature vectors, and tensors can be revised by adding additional matrices if each have the same shape. Such revisions can be accomplished by the system automatically adding corresponding elements while training. As the system trains, it automatically amends the relevant tensor(s) in a manner that best captures the patterns (i.e., provides the highest recall and precision). The training continues with the updated learning algorithm being exposed to new 2-dimensional synthetic images without the aid of knowing the target or target attributes. Furthermore, because the 2-dimensional synthetic images are created from the 3-dimensional synthetic items, the ability to have the training data include or exclude the correct answer(s) for the system's review can be automatically excluded from being transmitted to the system when training. As a result, not only can the 2-dimensional synthetic images be used to train the system, but they can also serve as a basis to automatically evaluate the system's recall and precision. 
Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions. The system establishes a confidence threshold for the identification model based on the determined relationship and labels. To avoid incorrect designations, the administrator of the system or an employee, contractor, or agent of the administrator designates a minimum confidence threshold and links that minimum confidence threshold to the relevant item in the database. Minimum thresholds can be universal across all items, such that the system will only identify a real-world object from an image of a tray if the system identifies the real-world object with more than 90% confidence. Conversely, unique confidence threshold(s) can be linked to individual item(s). For example, the system can be configured to identify a screw in an image with 70% confidence but may be restricted from confirming the presence of a surgical implant in an image unless the confidence value for that identification is greater than 95%.)
The combined teaching of Donnelly and Schlabach does not explicitly teach indicating a reliability of the plurality of the previously received images.
However, Adidharma, in the related art of classifying, via a computational model, images of a source image stream as valid images or invalid images based on whether the images include a surgical tool or biological tissue, teaches an indication of a reliability of the plurality of the previously received images:
(Paragraphs [0006], [0026] and [0043]-[0045] of Adidharma. The teaching describes classifying input images as valid images or invalid images using: a clustering algorithm that classifies each of the input images into either a first group or a second group; and labels that indicate whether the input images include a surgical tool; and training a computational model to identify the valid images based on whether the valid images include biological tissue or a surgical tool, or whether the valid images have at least a threshold level of clarity. The method also includes training a computational model to identify the valid images based on whether the valid images include biological tissue and/or a surgical tool, and/or whether the valid images have at least a threshold level of clarity. After training, the model can be used to classify and condense unlabeled image streams for brevity and enhanced usefulness. Images that include biological tissue and/or a surgical tool, and/or that have a threshold level of clarity are more likely to be useful. The unlabeled image having some threshold level of clarity also makes it more likely that the computational model 116 will label that unlabeled image as valid. When provided surgical videos, the clustering algorithm 308 also tends to classify the input images 302 having the threshold level of clarity into the first group 310. Image clarity can be examined using an additional training layer as well. The feature detection algorithm 320 identifies the valid images 304 having at least a threshold quantity of identifiable features, such as edges, dots, forked veins, blobs, gradients, lines, spatial frequencies, textures, blur, active bleeding, etc. For example, the threshold quantity could be at least one standard deviation greater than the mean number of identifiable features of the input images 302 or the third group 311, depending on the ordering of the training layers. 
Filtering images lacking some amount of features out of the valid images 304 will tend to improve the clarity and informative nature of the valid images 304.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the combined teaching of Donnelly and Schlabach the image analysis techniques of Adidharma. Paragraphs [0002] and [0025] of Adidharma teach that the methods used for authenticating informative images in their classifier allow for more accurate and improved image analysis. One of ordinary skill in the art in possession of the combined teaching of Donnelly and Schlabach would have looked to Adidharma to achieve a better image analysis process in determining the state of their surgical trays. One of ordinary skill in the art would have added to the combined teaching of Donnelly and Schlabach the teaching of Adidharma based on this incentive without yielding unexpected results.
As per claim 6,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein a total number of the plurality of previously received images utilized for the annotation:
(Paragraph [0038] of Donnelly. The teaching describes that the software application can recognize real-world items from images (photograph or video) because the application has been previously trained with the 3-dimensional synthetic items corresponding with real-world items. In such an embodiment for the data preparation steps, an administrator, or their employee, contractor, or agent: (1) selects an item (e.g., tray, tool, implant, fastener, or the like) and creates scenes, and (2) uploads scenes to a 3-D rendering program such as, for example, 3ds Max and/or Unity, and thereafter the system (3) renders synthetic images of the device (i.e., 2-dimensional images) and creates correlated synthetic colored masks, (4) provides a dataset to the software application to develop annotated files or to map instrument masks to real world images of instruments, and (5) splits the annotated files into subfiles for layering images.)
The combined teaching of Donnelly and Schlabach does not disclose wherein the annotation is based at least on coverage analysis of the plurality of previously received images performed using a threshold indicating a level of representation of the surgical operation with respect to a group of surgical operations.
However, Adidharma, which teaches classifying, via a computational model, images of a source image stream as valid images or invalid images based on whether the images include biological tissue or surgical tools, teaches wherein the annotation is based at least on a coverage analysis of the plurality of previously received images performed using a threshold indicating a level of representation of the surgical operation with respect to a group of surgical operations:
(Paragraphs [0067], [0087] and [0088] of Adidharma. The teaching describes that they suggest a novel multi-stage video summarization procedure utilizing semantic features and video frame temporal correspondences to create a representative summarization. This was compared to commercial software, a general video summarization tool not tailored to endonasal surgical videos. Finally, this proved capable of achieving a 98.2% reduction in overall video length while preserving 87% of key medical scenes on our data set. For our implementation of video summarization, we chose to use subshot representativeness to dictate the best subshot. Once frame subshot boundaries are found, a clip length is specified by the user. The output of the CNN was taken to compare the L2 distance. First, the L2 distance was calculated from each frame to each other frame within the shot. Since low L2 distance indicates that two frames are similar, the similarity of each frame to the overall video shot is taken as the mean of the similarity of each frame to each other frame. The subshot representativeness was then the sum of the similarity of each frame in a candidate clip (see equations at lower right panel of FIG. 7), where R(k) is the “representativeness” of the clip starting at frame k, N is the total number of frames in the scene, and I is the feature vector at frame i. Finally, the best clip was chosen as the clip that had the maximum “representativeness” for each shot.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the combined teaching of Donnelly and Schlabach the image analysis techniques of Adidharma. Paragraphs [0002] and [0025] of Adidharma teach that the methods used for authenticating informative images in their classifier allow for more accurate and improved image analysis. One of ordinary skill in the art in possession of the combined teaching of Donnelly and Schlabach would have looked to Adidharma to achieve a better image analysis process in determining the state of their surgical trays. One of ordinary skill in the art would have added to the combined teaching of Donnelly and Schlabach the teaching of Adidharma based on this incentive without yielding unexpected results.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Donnelly and Schlabach in further view of Baker et al. (US 2015/0324739; herein referred to as Baker).
As per claim 8,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
The combined teaching of Donnelly and Schlabach does not explicitly teach wherein automatically generating the record of the utilized surgical implements depicted in the image includes identifying each of at least a subset of the utilized surgical implements without using the specification of the surgical implements available for use during the surgical operation.
However, Baker teaches wherein automatically generating the record of the utilized surgical implements depicted in the image includes identifying each of at least a subset of the utilized surgical implements without using the specification of the surgical implements available for use during the surgical operation:
(Paragraphs [0151] and [0202] of Baker. The teaching describes that at step 2703, if the identifier is unknown, the inventory system issues an analyze instrument command internally, and proceeds to a process of analyzing the instrument, step 2602. At step 3404, some embodiments of the inventory system check whether to train the selected instrument type with the current image. In one embodiment, the system chooses to train on the image if the score of the measurements for the selected instrument type is below a predetermined threshold. In the case where the user has selected an instrument type from the list of untrained yet known instruments, the inventory system may choose to train. In the case when the user has manually input an instrument type which was unknown to the inventory system, the inventory system may choose to train. In another embodiment, the user may select whether or not to train the instrument type on the current image.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the combined teaching of Donnelly and Schlabach the inventory management teachings of Baker. Paragraph [0202] of Baker details that when an object is unknown, the system may choose to train or retrain itself on the unknown object. This level of dynamic inventory management provides more flexibility in the workflow process. One of ordinary skill in the art in possession of the combined teaching of Donnelly and Schlabach would have seen this increased flexibility in the inventory management of Baker and looked to it to improve its own system. One of ordinary skill in the art would have added to the combined teaching of Donnelly and Schlabach the teaching of Baker based on this incentive without yielding unexpected results.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Donnelly and Schlabach in further view of Fujii et al. (Fujii, Ryo, et al. “Surgical tool detection in open surgery videos.” Applied Sciences, vol. 12, no. 20, 17 Oct. 2022, p. 10473, https://doi.org/10.3390/app122010473; herein referred to as Fuji).
As per claim 13,
The combined teaching of Donnelly and Schlabach teaches the limitations of claim 1.
Donnelly further teaches wherein the identification of the one or more surgical implement preparation groupings optimizes the one or more specified utilization metrics:
(Paragraphs [0017], [0072], [0075], [0100] and [0140] of Donnelly. The teaching describes that a 3-dimensional synthetic item can be used to evaluate the amount and effectiveness of training the system has undergone. A test dataset can be provided to the system during or after the related training dataset is provided to the system to be processed. The system is provided with answers when processing the training dataset(s), but the system is not provided with answers when processing the test dataset(s). When each synthetic image in a test dataset is provided to the system, the system identifies the item in the synthetic image and provides a numeric confidence factor, which represents the confidence the system has that the identification of the item is correct. If the numeric confidence factor fails to meet, or exceed, a minimum threshold previously set in the system by the administrator of the system, or an employee, contractor, or agent of the administrator, additional training dataset(s) are provided to the system so that the system can improve the confidence factor by creating updated feature vector(s) attributable to the identified pattern to be stored on the server(s) for later deployment. Conversely, if the system identification(s) and numeric confidence value(s) are correct, and the confidence factor is equal to or greater than the confidence factor set in the system, then the system can be deployed for use with the new or updated identification model(s). During the training process, the system is provided with synthetic images, and optionally real-life images of objects, e.g., tray(s) or item(s), along with correct results linked to each synthetic, and optionally, real-world, image. The correct results are referred to as a target or a target attribute. The learning algorithm finds patterns in the training data that map the input data attributes to the target. 
Once an identification model has been trained with expected performance, the next step is to assess the prediction results of the identification model in a controlled, close-to-real setting to gain confidence that the model is valid, reliable, and meets business requirements for use. In this step, confidence thresholds of the detector module are set (i.e., the module that identifies whether an item is or is not in an image). In identifying target classification(s) of a real-world object with the detector module, the system assigns a numeric confidence value to each output. This confidence value represents the system's confidence in the prediction. The system determines a correctness of each prediction in the set of predictions and determines a relationship between the confidence scores and the correctness of the test predictions. The system establishes a confidence threshold for the identification model based on the determined relationship and labels. To avoid incorrect designations, the administrator of the system or an employee, contractor, or agent of the administrator designates a minimum confidence threshold and links that minimum confidence threshold to the relevant item in the database. Minimum thresholds can be universal across all items, such that the system will only identify a real-world object from an image of a tray if the system identifies the real-world object with more than 90% confidence. Conversely, unique confidence threshold(s) can be linked to individual item(s). For example, the system can be configured to identify a screw in an image with 70% confidence but may be restricted from confirming the presence of a surgical implant in an image unless the confidence value for that identification is greater than 95%. In another embodiment, the system can further include an option to audit the items on the tray. In such an embodiment, a user, such as a hospital administrator, can first view the patient's case and a list of the required trays and items. 
The user can then view whether multiple pieces of the same equipment are located on other trays. In this regard, the user can identify potential areas of waste. In another embodiment, the image data collection device(s) may capture specific tendencies of the individual(s) and/or team(s) performing the procedure. The system may utilize such information to immediately or in the future suggest adjustments to the items on the tray and/or needed equipment. Indeed, the system may be a dynamic system that through use learns the preferences and tendencies of the individual(s) and/or team(s) performing the procedure. By learning such preferences and/or tendencies, the system may increase the efficiencies and/or lower the cost of the procedures. For example, the system may recommend that certain items that are never used by the team be removed from future trays.)
The combined teaching of Donnelly and Schlabach does not explicitly teach determining a co-usage pattern of a first one of the utilized surgical implements that is used with a second one of the utilized surgical implements at a frequency that meets a usage threshold.
However, Fuji teaches determining a co-usage pattern of a first one of the utilized surgical implements that is used with a second one of the utilized surgical implements at a frequency that meets a usage threshold:
(Section 3.5 Co-Occurrences of the Surgical Tools of Fuji. The teaching describes that we also study the co-occurrences in our dataset. In Figure 9a, the co-occurrence matrix of surgical tools and surgical types, we can see some tools are only used in particular surgery types. As is obvious, we can see Mouth Gag is only used in the types of surgery performed around the mouth (Posterior Pharyngeal Flap and Alveolar Bone Grafting), and Skewer is only used for Scar Revision. On the other hand, we can confirm Tweezers are used in all surgeries. Therefore, since the surgical tool type can be the main indicator of the type of surgery performed, it is important to distinguish between the types of surgical tools. In Figure 9b, the co-occurrence matrix of the surgical tools and surgical tools, we can see some sets of tools appear at the same time. For example, Cup and Skewer, Chisel and Hammer and Needle Holders and Suture, Suture Needle are often used together. On the other hand, Gauze, Suction Cannula and Tweezers appear with any type of tool. Therefore, it indicates that the information of one tool can help the detection of other tools.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add to the classification teachings of the combined teaching of Donnelly and Schlabach the commonly co-occurring instruments of Fuji. Fuji teaches that some tools in surgery are used ubiquitously while others are not. By understanding the co-occurrence of instruments among different procedures, one can better characterize and distinguish between different types of surgery. One of ordinary skill in the art in possession of the combined teaching of Donnelly and Schlabach would have looked to Fuji to achieve this improved surgical classification system. One of ordinary skill in the art would have added to the combined teaching of Donnelly and Schlabach the teaching of Fuji based on this incentive without yielding unexpected results.
Response to Arguments
Applicant's arguments filed December 03, 2025 have been fully considered.
Applicant’s arguments pertaining to rejections made under 35 U.S.C. 101 are not persuasive.
The Applicant argues that the amended "determining," "annotating," "optimizes," and "performing" limitations are examples of a technical solution to a technical problem, that a human is incapable of performing at least these limitations, and that these features are examples of integrating the abstract idea into a practical application.
The Examiner respectfully disagrees. The Applicant has failed to provide any reasoning to support these conclusions. The Applicant identifies the technical solution as a way to identify surgical implement preparation groupings using computer recognition. However, the Applicant has failed to properly identify the specific problem being addressed, merely stating that the problem is how to identify surgical implement preparation group(s) using a computer. There is no discussion of what causes these problems or of how the solution addresses them. This is merely a vague problem statement that remains detached from the technical improvement discussion. Furthermore, the Applicant has failed to explain how or why a human is incapable of performing the highlighted steps. In contrast, the Examiner has provided a detailed explanation as to why claim 1 recites an abstract idea. Please refer to the updated rejection above.
Applicant’s arguments pertaining to rejections made under 35 U.S.C. 102/103 are rendered moot in light of the new combination of references used in the current rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD A NEWTON whose telephone number is (313)446-6604. The examiner can normally be reached M-F 8:00AM-4:00PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER H. CHOI can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAD A NEWTON/Primary Examiner, Art Unit 3681