Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1-28 are objected to because of the following informalities: none of claims 1-28 contains a grammatically correct preamble. Please change "Method for detecting…" to "A method for detecting…". Appropriate correction is required.
Claim 1 is objected to because of the following informalities:
"Performing a provision of image information about at least the one medical consumable by the image recording device" is unclear to the examiner as worded. Is the image information taken of the area around the medical consumable, or is a picture simply taken of the medical consumable itself? Please clarify.
"Whereby the image information is specific to a shape of the medical consumable" is unclear to the examiner. Does the image information contain only pixels that outline the shape of the medical consumable, or is the medical consumable simply the foreground object of the image? Please clarify. The examiner has interpreted the limitation as claiming an image that contains a shape of the medical consumable within it, based on paragraph 10 of the instant disclosure.
Appropriate correction is required.
Claim Interpretation - 35 USC § 112
Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. § 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that § 112(f) (pre-AIA § 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.
Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. § 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that § 112(f) (pre-AIA § 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.
Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke § 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke § 112(f) except as otherwise indicated in an Office action.
Because claims 1-28 recite "means" modified by functional language and not by sufficient structural language, the claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claims 1-28 have therefore been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation for the claims invoking 35 U.S.C. 112(f), stated above: Fig 2-3 and Para 68-74.
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites/recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 7, 21, 27, and 28 are rejected under 35 U.S.C. 101 for the reasons set forth below.
Regarding claims 1, 3, 7, 21, 27, and 28:
Step 1:
Claims 1, 3, 7, 21, 27, and 28 are directed to a process, machine, manufacture, or composition of matter, which are statutory categories of subject matter.
Step 2A: Claim 1 is directed to a method/system for:
method for detecting at least one medical consumable, where the following steps must be carried out:
provision of an image recording device,
performing a provision of image information about at least the one medical consumable by the image recording device,
whereby the image information is specific to a shape of the medical consumable,
performing an application of an image evaluation means, for shape recognition of the shape of the medical consumable,
with the provided image information as an input of the image evaluation means to use an output of the image analyzing means a classification information about the medical consumable,
generating audio information based on the classification information in order to initiate an acoustic output of the audio information via an audio device.
Prong 1:
The limitations of: provision of an image recording device,
performing a provision of image information about at least the one medical consumable by the image recording device,
whereby the image information is specific to a shape of the medical consumable,
performing an application of an image evaluation means, for shape recognition of the shape of the medical consumable,
with the provided image information as an input of the image evaluation means to use an output of the image analyzing means a classification information about the medical consumable;
as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from being performed mentally. For example, recognizing that a certain medical consumable tool is the type of tool that it is (classification) is a mental process.
Similarly, the limitation of: generating audio information based on the classification information in order to initiate an acoustic output of the audio information via an audio device
as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, "initiate an acoustic output of the audio information" based on the classification information, in the context of this claim, is simply the mental process of speaking aloud the type of medical consumable being referenced.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Claims 3, 7, 21, 27, and 28 also recite a mental process.
Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims only recite the following additional elements: an image evaluation means (claim 1), manual operation by an operator (claim 3), a neural network (claim 7), and an 'audio processing device' and 'image processing device' (claims 21, 27, and 28). Each is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computing component / software application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception such as improvements to another technology or technical field, or other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Moreover, the claim language that may be separate from the abstract idea (i.e., additional elements) includes a computer processor and a machine-readable medium.
The additional hardware/software (e.g., processor, machine-readable medium) performs only basic functions, which would be common to any such hardware/software.
Thus, the recited generic additional hardware/software (e.g., processor, machine-readable medium) performs no more than its basic computer functions. In Alice Corp. v. CLS Bank Int'l, the Court observed that a "data processing system" with a "communications controller" and "data storage unit," for example, is purely functional and generic (page 16). In the specification of the instant application, the processor and machine-readable medium are general computer components. Generic computer implementation of a method is not a meaningful limitation that alone can amount to significantly more than an abstract idea. Moreover, when viewed as a whole with the additional elements considered as an ordered combination, the claims, modified by adding a generic computer, are nothing more than a purely conventional computerized implementation of an idea in the general field of computer processing and do not provide significantly more than an abstract idea.
Consequently, the identified additional elements, taken into consideration individually or in combination, fail to amount to significantly more than the abstract idea identified above.
The examiner notes that claims 2, 4-6, 8-20, and 22-26 include additional elements that amount to significantly more than the judicial exception, such as improvements to another technology or technical field, and those claims are therefore not rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 101 - Non-Statutory Subject Matter
Claim 28 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. See MPEP 2106 and 2106.03 for guidance. The claim does not fall within at least one of the four categories of patent-eligible subject matter: a process, machine, manufacture, or composition of matter. "A computer program" as recited is not patent-eligible subject matter because it is software/data per se. A recommended remedy is to claim the computer program as embodied within a "non-transitory" computer-readable medium. See also the USPTO's 2019 Patent Eligibility Guidance.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5, 7, 21, and 27-28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1).
Regarding claim 1, Philippe et al teaches a method for detecting at least one medical consumable, where the following steps are carried out: provision of an image recording device (Abst and Para 45, according to one aspect there is an apparatus for tracking medical products. See Para 25 regarding processor and memory. See Para 4 and 37 regarding the items being medical consumables):
performing a provision of image information about at least the one medical consumable by the image recording device (Para 45, the apparatus 10 also includes an imaging unit indicated generally as 20. The imaging unit 20 includes at least one camera 22 that is positioned and adapted to observe the storage compartments 14 to track medical products therein. In particular, the camera 22 has a detection region 24, which represents the region visible to the camera 22 and allows the imaging unit 20 to observe the contents of one or more storage compartments 14 (such as the open drawer 14a)),
whereby the image information is specific to a shape of the medical consumable (Para 32 and 45-50, where an object is present, the visual indicators could further include shapes, patterns, and/or colors that are associated with particular medical products. For example, a library of medical product shapes, patterns and/or colors could be stored in a database and then compared with an observed object to determine what particular medical product has been observed (or a reasonable estimate thereof). i.e. image information is specific to a shape of medical consumable),
performing an application of an image evaluation means, for shape recognition of the shape of the medical consumable (Para 32 and 47 and 50, where an object is present, the visual indicators could further include shapes, patterns, and/or colors that are associated with particular medical products. For example, a library of medical product shapes, patterns and/or colors could be stored in a database and then compared with an observed object to determine what particular medical product has been observed (or a reasonable estimate thereof). i.e. performing shape recognition to determine the type of medical consumable (classification)),
with the provided image information as an input of the image evaluation means to use an output of the image analyzing means a classification information about the medical consumable (Para 50, the imaging unit 20 may be adapted to determine not only whether an object is present, but what particular medical products are in the storage compartment 14. For example, shape, size and pattern recognition algorithms may be used to visually identify one or more particular medical products in one or more regions of the storage compartment 14. In some cases this may be done by comparing the observed images captured by the camera 22 to a database of known medical products. i.e. using the image information as an input of the image evaluation means to classify what type of medical consumable it is),
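For illustration only, and not as a characterization of the cited reference's actual implementation, the database shape comparison Philippe describes in Para 50 could be sketched as follows in Python. The OpenCV-based matching, the function names, and the shape_library structure are all assumptions, not drawn from the record:

```python
# Illustrative sketch only: classify a consumable by comparing its observed
# contour against a library of known product shapes (cf. Philippe, Para 50).
# All names and thresholds here are hypothetical.
import cv2

def largest_contour(gray_image):
    """Return the largest contour in a grayscale image (assumed foreground object)."""
    _, mask = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def classify_by_shape(image_path, shape_library):
    """shape_library: dict mapping product name -> reference contour."""
    observed = largest_contour(cv2.imread(image_path, cv2.IMREAD_GRAYSCALE))
    # cv2.matchShapes returns a dissimilarity score; smaller means more alike.
    scores = {name: cv2.matchShapes(observed, ref, cv2.CONTOURS_MATCH_I1, 0.0)
              for name, ref in shape_library.items()}
    return min(scores, key=scores.get)  # the classification information
```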
Philippe et al does not teach generating audio information based on the classification information in order to initiate an acoustic output of the audio information via an audio device; though Philippe does generate audio information (Para 59-60), that audio is not based on the classification information.
In a similar field of endeavor, Blendinger et al teaches, generating audio information based on the classification information in order to initiate an acoustic output of the audio information via an audio device (Para 6-7 and Para 34-45, based on the determined context, from the patient model (e.g., using or proceeding from or based on the patient model), speech data is automatically generated. This speech data describes at least a part of the determined context (e.g., the respective situation in speech form). The automatically generated speech data is then output to the medical personnel. This outputting takes place by an acoustic speech output and/or as text on a display surface. See Para 34-35 regarding taking object into account and outputting in the form of speech data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) so that the method includes generating audio information based on the classification information in order to initiate an acoustic output of the audio information via an audio device. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al).
Regarding claim 2, Philippe et al teaches the method according to claim 1, wherein the method further comprises: provision of a recording area, in order to place the one medical consumable thereon, the recording area being transparent in order to enable recording of the medical consumable by the image device through the recording area (Fig 2, drawer 14a, the apparatus 10 also includes an imaging unit indicated generally as 20. The imaging unit 20 includes at least one camera 22 that is positioned and adapted to observe the storage compartments 14 to track medical products therein. In particular, the camera 22 has a detection region 24, which represents the region visible to the camera 22 and allows the imaging unit 20 to observe the contents of one or more storage compartments 14 (such as the open drawer). i.e. the drawer area is 'recording area' and allows for image capturing of the medical consumables in the drawer and through the dividers, also see transparent drawers in Fig 12),
conducting the recording of the image information by the image recording device, the recording being initiated electronically and/or manually by an operator or automatically at regular intervals in order to provide the image information (Para 40 and 93, the imaging units may be adapted to respond to gestures (such as hand signals) to initiate tracking of one or more of these activities. In some embodiments, the imaging units may be adapted to interpret user activities and determine the corresponding action with or without the use of gestures (e.g. determining whether a medical product is being removed from or added to a storage compartment). i.e. recording initiated manually by operator).
Regarding claim 3, Philippe et al does not teach the method according to claim 1 wherein the procedure further comprises: performing of a recording of the image information by the image recording device, whereby the medical consumable being held above the image recording device for recording by the operator, the recording being initiated electronically and/or manually by the operator or is automatically initiated at regular intervals to provide the image information.
In a similar field of endeavor, Blendinger et al teaches the method according to claim 1 wherein the procedure further comprises: performing of a recording of the image information by the image recording device, whereby the medical consumable being held above the image recording device for recording by the operator, the recording being initiated electronically and/or manually by the operator or is automatically initiated at regular intervals to provide the image information (Para 8 and Fig 1, which shows the medical consumable/object being held by the doctor; Para 8 states that the recording of imaging is performed at regular intervals).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) so that the method includes performing of a recording of the image information by the image recording device, whereby the medical consumable being held above the image recording device for recording by the operator, the recording being initiated electronically and/or manually by the operator or is automatically initiated at regular intervals to provide the image information. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al).
Regarding claim 4, Philippe et al does not teach the method according to claim 1, wherein the method further comprises: conducting of speech synthesis in order to provide an artificial generation of a human speaking voice from the audio information in order to acoustically issue the audio information via the audio device based on the human speaking voice.
In a similar field of endeavor, Blendinger et al teaches the method according to claim 1, wherein the method further comprises: conducting of speech synthesis in order to provide an artificial generation of a human speaking voice from the audio information in order to acoustically issue the audio information via the audio device based on the human speaking voice (Para 49, the speech synthesis device serves or is configured to generate speech data based on the determined context from the patient model, where the speech data describes at least part of the determined context in speech form. The output device serves or is configured to output the speech data to the medical personnel by an acoustic speech output and/or as text on a display surface of the output device).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) so that the method includes conducting of speech synthesis in order to provide an artificial generation of a human speaking voice from the audio information in order to acoustically issue the audio information via the audio device based on the human speaking voice. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al).
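For illustration only, a minimal sketch of an acoustic output of the classification information via synthesized speech. The use of the offline text-to-speech library pyttsx3 is an assumption for illustration and is not drawn from the cited references:

```python
# Illustrative sketch only: announce the classification result via synthesized
# speech (cf. Blendinger, Para 49). Library choice and phrasing are assumptions.
import pyttsx3

def announce_classification(label: str) -> None:
    """Generate audio information from the classification and output it acoustically."""
    engine = pyttsx3.init()
    engine.say(f"Detected medical consumable: {label}")
    engine.runAndWait()  # blocks until the acoustic output completes
```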
Regarding claim 5, Philippe et al teaches the method according to claim 1 wherein the method further comprises: provision of an interface for communication with a merchandise management system and/or a patient data management system, transmitting at least the partial classification information and/or information resulting therefrom to the merchandise management system and/or to the patient data management system (Para 50 and 74, imaging unit 20 may be adapted to determine not only whether an object is present, but what particular medical products are in the storage compartment 14. For example, shape, size and pattern recognition algorithms may be used to visually identify one or more particular medical products in one or more regions of the storage compartment 14. In some cases this may be done by comparing the observed images captured by the camera 22 to a database of known medical products. i.e. transmitting partial classification information of the detected items to a merchandise management system).
Regarding claim 7, Philippe et al does not teach the method according to claim 1, wherein the image evaluation means is designed as a neural network.
In a similar field of endeavor, Blendinger et al teaches the method according to claim 1, wherein the image evaluation means is designed as a neural network (Para 30 - Herein, for example, a simple difference formation, a threshold value method, a pattern recognition, or a similar analysis or similarity assessment may be carried out. This may be realized, for example, by a conventional image processing algorithm or, for example, by a neural network. The context may thus be determined entirely or partially automatically, so that an effort for manual operation actions may be minimized, as may an error proneness).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) so that the image evaluation means is designed as a neural network. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al).
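For illustration only, a minimal sketch of an image evaluation means realized as a small convolutional neural network. The layer sizes, 224x224 input resolution, and class count are assumptions, not taken from Blendinger:

```python
# Illustrative sketch only: a small CNN classifier for consumable images
# (cf. Blendinger, Para 30). Architecture details are hypothetical.
import torch
import torch.nn as nn

class ConsumableClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.head(x.flatten(1))  # logits = classification information
```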
Regarding claims 21 and 27-28, these claims are rejected for the same reasons as claim 1 in the combination above.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Owsley et al (US 20170200206 A1).
Regarding claim 6, Philippe et al does not teach a method wherein the method further comprises: transmission of at least one identification characteristic of the patient through the patient data management system to the image evaluation means, in order to subsequently add at least the one identification characteristic to the audio information in order to verify at least the one medical consumable to the correct patient.
In a similar field of endeavor, Blendinger et al teaches a method wherein the method further comprises: transmission of at least one identification characteristic of the patient through the patient data management system to the image evaluation means (Para 34-35, the medical object is identified by an automatic object recognition algorithm and/or by an automatic comparison of the image data with a provided object database. Object data assigned to the identified object is then recalled from the object database or another database and processed to generate the patient model and/or to determine the context. In other words, therefore, it may be determined automatically not only where the medical object is situated, but what type of medical object or which medical object it is. For example, the medical object may be identified with regard to type or with regard to specific model. Through the identification of the medical object, for example, a more precise modeling of the medical object and/or interaction of the medical object with the patient may be enabled. i.e. the combination of object type with the patient model can be considered transmission of an identification characteristic of the patient (specific patient model); see Para 35 regarding adding the identification characteristic to the audio information).
Philippe et al and Blendinger et al do not teach subsequently adding at least the one identification characteristic to the audio information in order to verify at least the one medical consumable to the correct patient.
In a similar field of endeavor, Owsley et al teaches subsequently adding at least the one identification characteristic to the audio information in order to verify at least the one medical consumable to the correct patient (Para 47, process 100 proceeds to another optional step 108 where the controller 14 determines whether the correct patient has been identified for administration of the patient consumable 16 by accessing the patient records via the hospital network 30 as shown in FIG. 4. If, for example, the patient consumable 16 is medication, the controller 14 receives the prescription schedule via the hospital network 30 to determine whether the patient 40 is due for medication 16. An image of the patient may be displayed on the touch screen display 36 to permit the caregiver 38 to verify the patient's identity. If an incorrect patient is identified, or the patient 40 is not due for medication 16, then access to the medication 16 in a locked patient consumable container or a locked medication box is blocked as shown in step 110. i.e. subsequently adding an identification characteristic to the audio in order to verify at least the one medical consumable to the correct patient).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Owsley et al (US 20170200206 A1) so that the method further comprises: transmission of at least one identification characteristic of the patient through the patient data management system to the image evaluation means, in order to subsequently add at least the one identification characteristic to the audio information in order to verify at least the one medical consumable to the correct patient. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al). Doing so would also maintain adequate records of patient care received so that a physician can interpret, from a distance, data in the electronic medical records system to thereby modify a patient's treatment plan (Para 4, Owsley et al).
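For illustration only, a minimal sketch of appending a patient identification characteristic to the announcement so an operator can verify the consumable against the correct patient. The record fields and the lookup are hypothetical, not taken from Owsley:

```python
# Illustrative sketch only: combine classification information with a patient
# identification characteristic for verification (cf. Owsley, Para 47).
# The patient_record structure is hypothetical.
def build_announcement(consumable_label: str, patient_record: dict) -> str:
    """Return announcement text pairing the consumable with the patient identity."""
    patient_name = patient_record["name"]          # identification characteristic
    due = patient_record.get("due_for", [])
    verdict = "scheduled" if consumable_label in due else "NOT scheduled"
    return (f"{consumable_label} for patient {patient_name}: "
            f"this item is {verdict} for this patient.")
```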
Claim(s) 8-9, 13-15, 17-18, 20, and 22-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1).
Regarding claim 8, Philippe et al and Blendinger et al do not teach the method according to claim 1, wherein the method further comprises: conducting of a training of the image evaluation means, whereby the image evaluation means is trained and validated using training data, a proportion of the training data being used for training and the remaining proportion of the training data being used for validation, whereby the training data contain at least one piece of image information of at least one medical consumable.
In a similar field of endeavor, Wolf et al teaches the method according to claim 1, wherein the method further comprises: conducting of a training of the image evaluation means, whereby the image evaluation means is trained and validated using training data, a proportion of the training data being used for training and the remaining proportion of the training data being used for validation, whereby the training data contain at least one piece of image information of at least one medical consumable (Col 10 lines 30-70 and Col 118 lines 23-65, the machine-learning model may be trained using historical surgical footage of a historical surgical procedure and historical data for amounts of a medical supply used during the historical surgical procedure. In some examples, an amount of a medical supply of a particular type used in a surgical procedure may be determined by analyzing video frames captured during the surgical procedure. For example, a machine learning model may be trained using training examples to determine amounts of medical supplies of particular types used in surgical procedures from images and/or videos of surgical procedures, and the trained machine learning model may be used to analyze the video frames captured during a surgical procedure and determine the amount of the medical supply of the particular type used in the surgical procedure. An example of such training example may include an image and/or a video of at least a portion of a particular surgical procedure, together with a label indicating the amount of the medical supply of the particular type used in the particular surgical procedure. i.e. a machine learning model (image evaluation means) trained using training data containing at least an image of a medical consumable. See Col 10 lines 30-70 regarding training data and validation data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) so that the method comprises conducting of a training of the image evaluation means. Doing so would allow for approaches that efficiently and effectively analyze surgical videos to enable a surgeon to view surgical events, provide decision support, and/or facilitate postoperative activity (Col 1 lines 35-40, Wolf et al).
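For illustration only, a minimal sketch of splitting labeled consumable images into a training proportion and a remaining validation proportion, as the claim 8 limitation describes. The 80/20 ratio and function names are assumptions:

```python
# Illustrative sketch only: train/validation split of labeled image data
# (cf. Wolf, Col 10 lines 30-70). Ratio and names are hypothetical.
import random

def split_training_data(samples, train_fraction=0.8, seed=0):
    """samples: list of (image, label) pairs; returns (train_set, validation_set)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)      # deterministic shuffle for repeatability
    cut = int(len(shuffled) * train_fraction)  # proportion used for training
    return shuffled[:cut], shuffled[cut:]      # remainder used for validation
```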
Regarding claim 9, Philippe et al and Blendinger et al do not teach the method according to claim 8, wherein the conducting of the training further comprises: acquiring a training image information, conducting a detection of an image of at least the one medical consumable in the acquired training image information, whereby the acquired training image information is used as training data if the detection indicates a presence in the image, and otherwise the acquired training image information is used as a reference for the training.
In a similar field of endeavor, Wolf et al teaches the method according to claim 8, wherein the conducting of the training further comprises: acquiring a training image information, conducting a detection of an image of at least the one medical consumable in the acquired training image information, whereby the acquired training image information is used as training data if the detection indicates a presence in the image, and otherwise the acquired training image information is used as a reference for the training (Col 118 lines 23-65, the machine-learning model may be trained using historical surgical footage of a historical surgical procedure and historical data for amounts of a medical supply used during the historical surgical procedure. In some examples, an amount of a medical supply of a particular type used in a surgical procedure may be determined by analyzing video frames captured during the surgical procedure. For example, a machine learning model may be trained using training examples to determine amounts of medical supplies of particular types used in surgical procedures from images and/or videos of surgical procedures, and the trained machine learning model may be used to analyze the video frames captured during a surgical procedure and determine the amount of the medical supply of the particular type used in the surgical procedure. An example of such training example may include an image and/or a video of at least a portion of a particular surgical procedure, together with a label indicating the amount of the medical supply of the particular type used in the particular surgical procedure. i.e. a machine learning model (image evaluation means) trained using training data containing at least an image of a medical consumable. Training images used as training data contain a medical consumable; labels are used to determine whether the reference image is correct. Reference images are compared to the corresponding estimated outputs, and the model is trained based on the comparison. See Col 10 lines 30-70).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) so that the method includes acquiring a training image information with a medical consumable in the acquired training image information. Doing so would allow for approaches that efficiently and effectively analyze surgical videos to enable a surgeon to view surgical events, provide decision support, and/or facilitate postoperative activity (Col 1 lines 35-40, Wolf et al).
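For illustration only, a minimal sketch of the claim 9 style gating, in which a detection step decides whether an acquired image is used as positive training data or as a reference for the training. The detect_consumable function is hypothetical:

```python
# Illustrative sketch only: route acquired images based on detection outcome
# (cf. the claim 9 limitation). detect_consumable is a hypothetical callable
# returning True when a consumable is present in the image.
def sort_training_image(image, detect_consumable, positives, references):
    """Append the image to positives or references based on detection."""
    if detect_consumable(image):      # detection indicates a presence in the image
        positives.append(image)       # used as training data
    else:
        references.append(image)      # used as a reference for the training
```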
Regarding claim 13, Philippe et al and Blendinger et al do not teach the method according to claim 8, wherein the training data includes at least one piece of training image information in which at least two medical consumables to be recorded are present.
In a similar field of endeavor, Wolf et al teaches the method according to claim 8, wherein the training data includes at least one piece of training image information in which at least two medical consumables to be recorded are present (Col 115 lines 5-20 and Col 118 lines 23-65, for example, a machine learning model may be trained using training examples to determine amounts of medical supplies of particular types used in surgical procedures from images and/or videos of surgical procedures, and the trained machine learning model may be used to analyze the video frames captured during a surgical procedure and determine the amount of the medical supply of the particular type used in the surgical procedure. i.e. the training data includes at least one piece of training image information (frames) containing at least two medical consumables; the training data includes multiple consumables that are counted during the course of the surgery using the recognition model).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) so that the training data includes at least one piece of training image information in which at least two medical consumables to be recorded are present. Doing so would allow for approaches that efficiently and effectively analyze surgical videos to enable a surgeon to view surgical events, provide decision support, and/or facilitate postoperative activity (Col 1 lines 35-40, Wolf et al).
Regarding claim 14, Philippe et al teaches the method according to claim 8 wherein the method further comprises: making available an external database and an external data processing device in order to perform the image evaluation (Para 68 and Para 74-76, in some embodiments, the imaging unit 20 may be adapted to learn the particular layout of each particular storage compartment 14 (e.g. the size and shape of the regions 40a, 40b, 42a, 42b may be observed and stored in a database using the imaging unit 20 in a "learning" or training mode). The learning mode may also be adapted to allow for reconfiguration of storage compartment layouts. i.e. an external database is used to perform the image evaluation).
Regarding claim 15, Philippe et al teaches the method according to claim 14, wherein the external database provides at least the one classification information of at least the medical consumable, whereby the classification information of at least the one medical consumable is assigned to at least one form (Para 50, for example, shape, size and pattern recognition algorithms may be used to visually identify one or more particular medical products in one or more regions of the storage compartment 14. In some cases this may be done by comparing the observed images captured by the camera 22 to a database of known medical products. i.e. the database provides classification information to identify a consumable).
Regarding claim 17, Philippe et al does not teach the method according to claim 14 wherein the method further comprises: updating the database, whereby the updating being carried out regularly by transmitting training data, the training data being created by: conducting of a training of the image evaluation means, whereby the image evaluation means is trained and validated using training data, a proportion of the training data being used for training and the remaining proportion of the training data being used for validation, whereby the training data contain at least one piece of image information of at least one medical consumable.
Blendinger et al teaches the method according to claim 14 wherein the method further comprises: updating the database, whereby the updating being carried out regularly by transmitting training data, the training data being created by (Para 49, a system of one or more of the present embodiments for supporting medical personnel in a procedure on a patient has a capture device, a data processing device, a speech synthesis device, and an output device. The capture device serves or is configured for continuous capture of image data of the patient and of a medical object generated by a medical imaging method. The data processing device serves or is configured for continuously updating a digital patient model based on the respective current image data. The data processing device further serves or is further configured for tracking a position of the medical object and for automatic determination of a situational and/or spatial context in which the medical object is situated, by processing the image data using an image processing algorithm. i.e. continuously updating a database (the digital patient model), including recognized medical equipment).
Philippe et al and Blendinger et al do not teach conducting of a training of the image evaluation means, whereby the image evaluation means is trained and validated using training data, a proportion of the training data being used for training and the remaining proportion of the training data being used for validation, whereby the training data contain at least one piece of image information of at least one medical consumable.
In a similar field of endeavor, Wolf et al teaches conducting of a training of the image evaluation means, whereby the image evaluation means is trained and validated using training data, a proportion of the training data being used for training and the remaining proportion of the training data being used for validation, whereby the training data contain at least one piece of image information of at least one medical consumable (Col 35 line 65-Col 37 line 10, in some embodiments, the disclosed methods may further include updating the trained neural network model based on at least one of the analyzed frames. See also Col 115 lines 5-20 and Col 118 lines 23-65. i.e. conducting training of the machine learning model using training and validation data sets, which contain at least image information of one medical consumable).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) so that the image evaluation means is trained and validated using training data. Doing so would allow the medical personnel to be informed particularly reliably and with little distraction about the respective current situation (Abst, Blendinger et al). Doing so would also allow for approaches that efficiently and effectively analyze surgical videos to enable a surgeon to view surgical events, provide decision support, and/or facilitate postoperative activity (Col 1 lines 35-40, Wolf et al).
Regarding claim 18, Philippe et al teaches the method according to claim 14, wherein the method further comprises: a storage of at least the one recorded medical consumable assigned to a medical condition and/or a patient in order to enable individual recording of required medical consumables per patient and/or per medical condition (Para 3-4 and 87 and 89-90, the user A can then communicate information to the imaging unit 20 for example by making a particular facial gesture (e.g. a smile) or a hand signal (e.g. holding up two fingers). This may be done to indicate that a specific task (e.g. replenishment) will be performed, or reference a particular patient (e.g. by patient number, bed number, and so on) to link one or more picked medical products thereto. This can be useful for charge capturing to ensure that the system 100 knows which user picked which medical products, what particular user those medical products were used for, and what (if any) of those medical products were returned to the storage depot 12 after the medical treatment. i.e. enable individual recording of medical consumables per patient in a database).
Regarding claim 20, Philippe et al teaches Method according to claim 8, wherein the classification information additionally comprises a batch number of at least the one medical consumable (Para 32 and 34, in some cases, the bar code or other visual indicator may include expiry information, a serial number, and/or other details about the medical product. In some embodiments, the visual indicator may be linked to such details about the medical product (e.g. via a product database)).
Regarding claim 22, Philippe et al teaches System according to claim 21, wherein the system is designed as a stand-alone system, whereby a single, common housing is provided for all components of the system (Para 72-74 and Fig 1 and Fig 4, turning now to FIG. 4, illustrated therein is a system 100 for tracking medical products according to one embodiment. As shown, the system 100 includes one or more storage depots 12 (e.g. cabinets). Each storage depot 12 has an imaging unit 20 associated therewith, which may include an imaging processor 21, cameras 22, 30 and other elements as generally described above).
Regarding claim 23, Philippe et al teaches System according to claim 21, wherein the image recording device features at least one lamp, to illuminate a viewing area of the image recording device (Para 72-74 and Fig 1 and Fig 4, turning now to FIG. 4, illustrated therein is a system 100 for tracking medical products according to one embodiment. As shown, the system 100 includes one or more storage depots 12 (e.g. cabinets). Each storage depot 12 has an imaging unit 20 associated therewith, which may include an imaging processor 21, cameras 22, 30 and other elements as generally described above).
Regarding claim 24, Philippe et al teaches System according to claim 21, wherein the image recording device features at least one lamp, to illuminate a viewing area of the image recording device (Para 38, there are several technical challenges that make implementing computerized imaging systems in a medical environment non-trivial. The first challenge is providing proper visibility and lighting for the cameras being used. To this end, the vision algorithms selected for use with the imaging unit should be able to recognize the presence, partial presence and/or absence of many different medical products in a variety of lighting conditions, including low lighting conditions. In some embodiments, the imagining units may include lights for illuminating the medical products and/or storage compartments to assist the camera(s) in obtaining good images).
Regarding claim 25, Philippe et al teaches System according to claim 21, wherein the system features an interface for a data connection with a merchandise management system and/or a patient data management system, whereby the interface is arranged in the housing (Para 73-74, the system 100 also includes at least one server 102. Generally the imaging unit 20 of each storage depot 12 is adapted to communicate with the server 102 so the system can track medical products consumed and/or replenished for each storage depot 12).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) and Soni et al (US 11720647 B2).
Regarding claim 11, Philippe et al, Blendinger et al, and Wolf et al do not teach the method according to claim 8, wherein the training data comprises at least one piece of training image information and a brightness of at least the one piece of training image information varies in order to enable the application of the image evaluation means when the brightness of the at least one piece of image information varies.
In a similar field of endeavor, Soni et al teaches the method according to claim 8, wherein the training data comprises at least one piece of training image information and a brightness of at least the one piece of training image information varies in order to enable the application of the image evaluation means when the brightness of the at least one piece of image information varies (Col 25 lines 1-20, the modality augmentation component 114 can modify and/or vary different combinations/permutations of gamma/radiation level, brightness level, and/or contrast level of the preliminary training images 204 in order to generate the intermediate training images 504. In various aspects, the modality augmentation component 114 can retrieve from any suitable database and/or data structure (and/or can receive as input from an operator). i.e. varying brightness in different training images to enable the application of the image evaluation means when the brightness varies).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1), Wolf et al (US 10729502 B1), and Soni et al (US 11720647 B2) so that the training data comprises at least one piece of training image information and a brightness of at least the one piece of training image information varies in order to enable the application of the image evaluation means when the brightness of the at least one piece of image information varies. Doing so would accordingly add the new image characteristic/property to the list of modality-based characteristics 502 and can thus begin modifying the new characteristic/property to generate the intermediate training images 504. In this way, the list of modality-based characteristics 502 can be updated, changed, amended, edited, and/or modified as desired so as to suit different operational contexts (Col 24 lines 58-65, Soni et al).
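For illustration only, a minimal sketch of varying the brightness of training images in the manner Soni describes. The factor range and the use of PIL are assumptions:

```python
# Illustrative sketch only: brightness augmentation of training images
# (cf. Soni, Col 25 lines 1-20). Factor range is hypothetical.
import random
from PIL import Image, ImageEnhance

def augment_brightness(image: Image.Image, low=0.5, high=1.5) -> Image.Image:
    """Return a copy of the image with a randomly scaled brightness level."""
    factor = random.uniform(low, high)   # <1 darkens, >1 brightens
    return ImageEnhance.Brightness(image).enhance(factor)
```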
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Wolf et al (US 10729502 B1) and Peine et al (NPL: A Deep Learning Approach for Managing Medical Consumable Materials in Intensive Care Units via Convolutional Neural Networks: Technical Proof-of-Concept Study).
Regarding claim 12, Philippe et al, Blendinger et al, and Wolf et al do not teach the method according to claim 8, wherein the training data comprises at least one piece of training image information in which the medical consumable to be acquired is partially obscured.
In a similar field of endeavor, Peine et al teaches Method according to claim 8, wherein the training data comprises at least one piece of training image information in which the medical consumable to be acquired is partially obscured (On Site Study, (1) scenario one, where the material was presented without any visual obstruction to the detection unit; (2) scenario two, where the material was 50% covered to simulate a visual obstruction during the routine clinical workflow; and (3) scenario three, where a secondary material (skin disinfection bottle) was present in the visual field while the material was presented without visual obstruction. i.e. medical consumable is partially obscured).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1), Wolf et al (US 10729502 B1), and Peine et al (NPL: A Deep Learning Approach for Managing Medical Consumable Materials in Intensive Care Units via Convolutional Neural Networks: Technical Proof-of-Concept Study) so that the training data comprises at least one piece of training image information where the consumable is partially obscured. Doing so would allow for a cost-effective solution to determine how many materials are needed for a single patient with a specific disease […] this is particularly true for storage and investment, as suboptimal management results in unnecessarily high storage maintenance costs (Peine et al., Introduction).
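For illustration only, a minimal sketch of simulating the 50% visual obstruction scenario Peine et al describe by partially covering a training image. The patch placement and fill value are assumptions:

```python
# Illustrative sketch only: simulate partial occlusion of a consumable
# (cf. Peine et al, On Site Study, scenario two). Placement is hypothetical.
import numpy as np

def occlude_half(image: np.ndarray) -> np.ndarray:
    """Cover roughly half of an (H, W, C) image array with a flat gray patch."""
    out = image.copy()
    h = out.shape[0]
    out[h // 2:] = 127   # obscure the lower half of the frame
    return out
```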
Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Ingle (US 20190238791 A1).
Regarding claim 26, Philippe et al and Blendinger et al do not teach the system according to claim 21, wherein the system has a water-repellent and/or water-impermeable coating to be designed for use in a medical environment.
In a similar field of endeavor, Ingle teaches the system according to claim 21, wherein the system has a water-repellent and/or water-impermeable coating to be designed for use in a medical environment (Para 49 and Fig 3, surgical visualization and recording system (SVRS) 200 for capturing, communicating, and displaying images of a surgical site with up to a 4K ultrahigh definition (UHD) resolution in association with patient information in real time during a surgery. The SVRS 200 comprises the UHD camera system 201 with the optical component 203 and the image sensor 220, and the display unit 216 with the tactile user interface 217 and the embedded microcomputer 222 as disclosed in the detailed description of FIGS. 1-2. The UHD camera system 201 is waterproof. The UHD camera system 201 is made of waterproof materials comprising, for example, polypropylene, polyetherimide, polychlorotrifluoroethylene, etc. The optical component 203 is positioned at a proximal end of a surgical scope device).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Philippe et al (US 20130076898 A1) in view of Blendinger et al (US 20190050984 A1) and Ingle (US 20190238791 A1) so that the system has a water-repellent and/or water-impermeable coating to be designed for use in a medical environment. Doing so would provide access to the patient information along with visualization of the captured and communicated images of the surgical site in real time, allowing the surgeon to plan and conduct the surgery with enhanced visualization and information in real time (Para 44, Ingle).
Allowable Subject Matter
Claims 10, 16, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided that any rejection or objection given to the independent claims has been withdrawn by the examiner.
The following is a statement of reasons for the indication of allowable subject matter in claims 10, 16, and 19.
No prior art, alone or in combination, teaches or suggests the limitations of claims 10, 16, and 19. Blendinger et al teaches an audio synthesized output based on a detected medical object in an image, but does not 'acquire a confirmation of the classification information of the image evaluation means based on the output of the audio information via the audio device by an operator,' as recited in claim 10. Claims 16 and 19 are not taught by Blendinger et al. The primary reference Philippe et al does not teach the limitations of claims 10, 16, or 19. Therefore, claims 10, 16, and 19 are objected to but indicated as containing allowable subject matter.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20220296332 A1
US 9760265 B2
US 20180146919 A1
US 20070239482 A1
US 20070083286 A1
US 10831865 B2
JP-2005309702-A
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK PETER KRAYNAK whose telephone number is (703)756-1713. The examiner can normally be reached Monday - Friday 7:30 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACK PETER KRAYNAK/Examiner, Art Unit 2668
/UTPAL D SHAH/Primary Examiner, Art Unit 2668