DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
1. The information disclosure statement (IDS) submitted on 1 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Response to Amendment
2. This action is in response to the amendment filed on 11/26/2025. In applicant’s remarks, applicant states that claims 1 and 16 were amended; however, no amended claims were submitted. Claim 22 was added. Claims 1-22 remain rejected in the application. Applicant’s amendment to the specification has overcome each and every objection previously set forth in the Non-Final Office Action mailed 8/27/2025.
Response to Arguments
3. Applicant’s arguments filed on 11/26/2025 have been fully considered but they are not persuasive. First, on page 9 of applicant’s remarks, applicant states the following: “Regarding Claim 1, the cited reference Melgan [sic] fails to teach or suggest at least, "generate a plurality of composite images by replacing the occluded portion of the second image with at least one occluded portion from the at least one first image."” A similar quotation of Claim 1 is made by applicant on page 10. However, applicant’s quotations differ from Claim 1 of record: “generate a plurality of composite images by replacing the occluded portion of the second image with the at least one nonoccluded portion from the at least one first image;”. Examiner will interpret applicant’s quotations as the language from Claim 1 of record.
4. Regarding applicant’s assertion that Meglan does not teach the limitation(s): “extract at least one nonoccluded portion of the at least one first image corresponding to the occluded portions” and “generate a plurality of composite images by replacing the occluded portion of the second image with the at least one nonoccluded portion from the at least one first image”, examiner respectfully disagrees. Regarding the limitation: “extract at least one nonoccluded portion of the at least one first image corresponding to the occluded portions”, Meglan discloses and reads on the limitation as follows: (column 2, lines 49-53) “In an aspect, applying the image filter includes separating the initial image <reads on extract from image> into an initial background image <reads on nonoccluded portion> and an initial occluding layer <reads on occluded portions>, separating the plurality of images <reads on extract from image> into a plurality of background images <reads on nonoccluded portion> and a plurality of occluding layers <reads on occluded portions>…” Regarding the limitation: “generate a plurality of composite images by replacing the occluded portion of the second image with the at least one nonoccluded portion from the at least one first image”, Meglan discloses and reads on the limitation as follows: (column 2, lines 49-55) “In an aspect, applying the image filter includes separating the initial image into an initial background image <reads on nonoccluded portion> and an initial occluding layer, separating the plurality of images <reads on second image with occluded portions> into a plurality of background images and a plurality of occluding layers, and combining the initial background images and the plurality of background images <reads on replacing occluded portion region> to generate the processed image <reads on composite image>.” and (column 1, lines 57-62), “The removal algorithm includes controlling the image capture device to capture a plurality of images 
<reads on second image with occluded portions> and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed <reads on replacing the occluded portion> from the processed image <reads on composite image>.” and (column 3, lines 52-55), “The various images may then be combined to present a single image or video to a user where the occluding object is removed from the image or video <reads on generate a plurality of composite images>.” Examiner also notes the following in Meglan: (column 2, lines 49-51) “In an aspect, applying the image filter includes separating the initial image …, separating the plurality of images …”. Prior to applying the image filter, both the initial image and the plurality of “secondary” images include occluded portions. These secondary images include and read on the “occluded portion of the second image” of Claim 1. Thus, as illustrated above, Meglan discloses generating a processed image or video <reads on plurality of composite images> by acquiring an initial image and “secondary” images <reads on second image with occluded portion> and applying an image filter to combine the initial image and “secondary” images to remove an occluding object <reads on replacing the occluded portion>.
5. Regarding arguments with respect to claims 2-15 and 17-21, these claims depend from independent claims 1 and 16, respectively. Applicant presents no arguments beyond those directed to independent claim 1 and, similarly, claim 16. The limitations of those claims, in combination, have previously been established and explained.
Claim Rejections - 35 USC § 102
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
7. Claims 1, 2, 5, and 12 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Meglan (US-10849709-B2).
8. As per claim 1, Meglan discloses: An internal medical imaging system comprising:
an imaging sensor configured to obtain medical images within an internal portion of a patient; (Meglan, Fig. 1; column 1, lines 50-55, “In an embodiment of the disclosure, a system for removing an occluding object from a surgical image includes an image capture device configured to be inserted into a patient and capture an initial image of a surgical site inside the patient during a surgical procedure and a controller configured to receive the image.”)
a display device configured to display image data and related graphical information from the imaging sensor; and (Meglan, Fig. 1; column 2, line 27, “In some aspects, a display displays the processed image.”)
a computing device including a processor and a memory device, the memory device including instructions that, when executed by the processor, cause the computing device to perform operations including: (Meglan, Fig. 1; column 4, lines 12-16, “Turning to FIG. 1, a system for processing images and/or video of a surgical environment, according to embodiments of the present disclosure, is shown generally as 100. System 100 includes a controller 102 that has a processor 104 and a memory 106.” and column 7, lines 36-40, “The controller may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory.”)
receive, from the imaging sensor, at least one first image of a field of view of a patient; (Meglan, column 1, lines 50-55, “In an embodiment of the disclosure, a system for removing an occluding object from a surgical image includes an image capture device configured to be inserted into a patient and capture an initial image of a surgical site inside the patient during a surgical procedure and a controller configured to receive the image.”)
receive, from the imaging sensor, a plurality of second images of the field of view, the plurality of second images received subsequent to the at least one first image;
identify occluded portions of the plurality of second images within the field of view; (Meglan, column 1, lines 55-62, “When the controller determines that the occluding object is present in the initial image, the controller executes a removal algorithm. The removal algorithm includes controlling the image capture device to capture a plurality of images and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed from the processed image.”)
extract at least one nonoccluded portion of the at least one first image corresponding to the occluded portions;
generate a plurality of composite images by replacing the occluded portion of the second image with the at least one nonoccluded portion from the at least one first image; and (Meglan, column 2, lines 49-59, “In an aspect, applying the image filter includes separating the initial image into an initial background image and an initial occluding layer, separating the plurality of images into a plurality of background images and a plurality of occluding layers, and combining the initial background images and the plurality of background images to generate the processed image. Combining the initial background images and the plurality of background images includes registering the initial background image and the plurality of background images, and overlaying the registered initial background image and the plurality of background images.” and column 1, lines 55-62, “When the controller determines that the occluding object is present in the initial image, the controller executes a removal algorithm. The removal algorithm includes controlling the image capture device to capture a plurality of images and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed from the processed image.” and column 5, lines 34-35, “In another embodiment, a plurality of images may be obtained over time to remove occluding objects.”)
send, to the display device, the plurality of composite images in substantially real-time as the receiving the plurality of second images. (Meglan, column 2, line 27, “In some aspects, a display displays the processed image.” and column 4, lines 2-8, “The captured video is processed in real time or near real time and then displayed to the clinician as processed image. The image processing filters are applied to each frame of the captured video. Providing the processed image or video to the clinician provides the clinician with an unobscured view to a clinician.”)
9. As per claim 2, Meglan discloses: The internal medical imaging system of claim 1, further comprising an instrument, wherein the instructions further cause the computing device to perform operations including identify the occluded portions of the plurality of second images by detecting the instrument in the field of view. (Meglan, Fig. 5; column 1, lines 42-44, “There is a need for improved methods of providing a clinician with an endoscopic view that is not obscured by tool and/or local contaminants.” and column 2, lines 28-39, “In another embodiment of the present disclosure, a method for removing an occluding object from a surgical image is provided. The method includes capturing an initial image of a surgical site inside the patient during a surgical procedure with an image capture device and executing a removal algorithm when the occluding object is detected in the initial image. The removal algorithm includes controlling the image capture device to capture a plurality of images and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed from the processed image.” and column 5, line 66- column 6, line 11, “FIG. 5 depicts an image 130 of a surgical environment that is captured by the image capture device 108. Image 130 is processed by image processing filter 114, which may involve the use of image filter 120, to generate a processed image 132. As can be seen in processed image 132, the occluded object “O” that was present in image 130 is removed from the processed image 132. The above-described embodiments may also be configured to work with robotic surgical systems and what is commonly referred to as “Telesurgery.” Such systems employ various robotic elements to assist the clinician in the operating theater and allow remote operation (or partial remote operation) of surgical instrumentation.”)
10. As per claim 5, Meglan discloses: The internal medical imaging system of claim 2, wherein the instrument further comprises a sensor or controller and wherein the instructions further cause the computing device to perform operations including detect the instrument in the field of view by receiving sensor or control information from the instrument indicating that a portion of the instrument has been extended into the field of view. (Meglan, Fig. 5; column 6, lines 33-45, “The robotic arms 206 of the surgical system 200 are typically coupled to a pair of master handles 208 by a controller 210. ... The handles 206 can be moved by the clinician to produce a corresponding movement of the working ends of any type of surgical instrument 204 (e.g., probe, end effectors, graspers, knifes, scissors, etc.) attached to the robotic arms 206. For example, surgical instrument 204 may be a probe that includes an image capture device. The probe is inserted into a patient in order to capture an image of a region of interest inside the patient during a surgical procedure.” and column 2, lines 24-26, “In some aspects, the controller determines that the occluding object is present in the initial image based on an input from a user.” and column 5, line 66- column 6, line 11, “FIG. 5 depicts an image 130 of a surgical environment that is captured by the image capture device 108. Image 130 is processed by image processing filter 114, which may involve the use of image filter 120, to generate a processed image 132. As can be seen in processed image 132, the occluded object “O” that was present in image 130 is removed from the processed image 132. The above-described embodiments may also be configured to work with robotic surgical systems and what is commonly referred to as “Telesurgery.” Such systems employ various robotic elements to assist the clinician in the operating theater and allow remote operation (or partial remote operation) of surgical instrumentation.”)
11. As per claim 12, Meglan discloses: The internal medical imaging system of claim 1, wherein the instructions further cause the computing device to perform operations including identify the occluded portions of the plurality of second images by comparing a set of pixel data of the plurality of second images to a baseline value and determining the occluded portion when a subset of the set of pixel data is below the baseline value. (Meglan, column 2, lines 5-16, “In an aspect, applying the image filter includes separating the initial image into an initial background image and an initial occluding layer, separating the plurality of images into a plurality of background images and a plurality of occluding layers, and combining the initial background images and the plurality of background images to generate the processed image. Combining the initial background images and the plurality of background images includes registering the initial background image and the plurality of background images, and overlaying the registered initial background image and the plurality of background images.” and column 5, lines 21-33, “In another embodiment, in step s28, the occluding object may be removed from image “I0” leaving an empty space in the image “I0”. The image filter 120 uses corresponding pixels from images “I1” to “IM” taken at different perspectives to fill in the empty space created in image “I0” thereby producing a complete image of the surgical site without the occluding object. Specifically, image “I0” and image “I1” are compared by filter 120 to register or align the two images. Then image filter 120 uses the pixels in image “I1” that do not belong to the occluding object to fill in the empty space in image “I0”. The process is repeated for the remaining images “I2” to “IM” until the empty space in image “I0” is filled.”)
Claim Rejections - 35 USC § 103
12. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
13. Claims 3, 4, and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Meglan (US-10849709-B2) in view of Chen et al. (US-11830189-B2, hereinafter "Chen-US189").
14. As per claim 3, Meglan discloses: The internal medical imaging system of claim 2, wherein the detecting the instrument in the field of view includes [[identifying a known instrument signature within the field of view.]] (See rejection for claim 2 above.)
15. Meglan doesn't explicitly disclose but Chen-US189 discloses: identifying a known instrument signature within the field of view. (Chen-US189, column 1, line 57- column 2, line 2, “In one example aspect, systems are described for processing ultrasound images to identify objects. ... The operations may include receiving an ultrasound image of an anatomical structure from a computing device of an ultrasound imaging system, and providing the ultrasound image as input to a machine learning model that is trained to identify a plurality of objects in ultrasound images of the anatomical structure. The plurality of objects may include anatomical features, disruptive features, and/or instruments.” and column 19, lines 37-50, “The trained medical images may include a plurality of objects annotated within the image. ... The annotated objects may further include foreign bodies, such as an inflatable balloon, needle, knife, scalpel, finger, stent, intravascular device, catheter, surgical instrument etc. that may be inserted into the body as part of a procedure.”)
16. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include identifying a known instrument signature within the field of view, as taught by Chen-US189, into the teaching of Meglan. The suggestion for doing so would allow the imaging system to detect an operator’s instrument or tool during a procedure. As a result, not only could the imaging system potentially remove or limit the occlusion caused by the instrument, but it could also provide additional information about the instrument, particularly its position and orientation in relation to tissue, and potentially indicate what mode the instrument is in if it has different functions. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
17. As per claim 4, Meglan in view of Chen-US189 discloses: The internal medical imaging system of claim 3, wherein the detecting the instrument in the field of view includes applying a machine learning model trained to detect known image signatures generated by the instrument. (Chen-US189, column 1, line 57- column 2, line 2, “In one example aspect, systems are described for processing ultrasound images to identify objects. ... The operations may include receiving an ultrasound image of an anatomical structure from a computing device of an ultrasound imaging system, and providing the ultrasound image as input to a machine learning model that is trained to identify a plurality of objects in ultrasound images of the anatomical structure. The plurality of objects may include anatomical features, disruptive features, and/or instruments.” and column 19, lines 37-50, “The trained medical images may include a plurality of objects annotated within the image. ... The annotated objects may further include foreign bodies, such as an inflatable balloon, needle, knife, scalpel, finger, stent, intravascular device, catheter, surgical instrument etc. that may be inserted into the body as part of a procedure.”)
18. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include applying a machine learning model trained to detect a known instrument in an image, as taught by Chen-US189, into the teaching of Meglan. The suggestion for doing so would enable the imaging system to automatically detect an instrument based on prior machine learning training. The automatic detection would speed up the process of performing image correction, such as with occlusion, or provide real-time information to an operator during a procedure. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
19. As per claim 6, Meglan in view of Chen-US189 discloses: The internal medical imaging system of claim 2, wherein the instructions further cause the computing device to perform operations including generate the plurality of composite images by generating a graphic indicating the instrument causing the occluded portions of the plurality of second images. (Chen-US189, column 4, lines 45-49, “In interventional imaging, a physician may utilize electronic images during a procedure to e.g., visualize instruments inserted into the patient's body to assist the physician in safely guiding the instruments to an intended target area.” and column 17, lines 51-58, “For example, upon receiving the physician's selection, the balloon (and not the ganglion) may be visually emphasized (e.g., highlighted) along with the needle, where the needle may visualized in a similar but distinguishable manner from the balloon (e.g., highlighted in a different color), and thus represent to the viewer the relative positions and/or orientations of the balloon and the needle.” and column 31, lines 27-33, “In each of the first and second images 1502, 1504, a location of the distal end 1508 of the instrument predicted using the exemplary method 1100 described with reference to FIG. 11 may be visually indicated. For example, the distal end 1508 may be highlighted, shaded, or colored to distinguish the distal end 1508 from a remaining portion of the instrument 1506.” and column 23, lines 48-54, “Additionally, as described in FIGS. 13-14, another machine learning model may be trained and used to predict a trajectory of the instrument to an intended target. A visualization including the predicted location and the trajectory of the instrument overlaid on the medical image, among other information, may be displayed to the physician, as shown in FIG. 
15.” and column 5, lines 3-15, “Example observations may include identification of objects within the ultrasound images such as anatomical features, features that are not normally present in the anatomical structure that may disrupt the body's function, and/or foreign bodies (e.g., instruments) inserted in the body. The observations may also include location and/or trajectory predictions of the objects, and/or predictions of whether an optimal image of the object is being captured. Visualizations based on the predictions may be generated and provided to the physicians in real-time as they are performing diagnostic examinations on patients and/or as they are performing an ultrasound-guided procedure, which may be to treat a diagnosed disorder.”)
20. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include generating composite images with a graphic indicating the instrument causing the occlusion in the images, as taught by Chen-US189, into the teaching of Meglan. The suggestion for doing so would provide an operator with a clear visual indicator of an instrument during a procedure. This is particularly useful in displaying the end of the instrument so that an operator may be aware of its location and orientation in relation to tissue. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
21. As per claim 7, Meglan in view of Chen-US189 discloses: The internal medical imaging system of claim 6, wherein the instructions further cause the computing device to perform operations including display the plurality of composite images by displaying an indication that the instrument is extended a predetermined amount. (Chen-US189, column 26, lines 46-53, “At step 1206, a predicted location may be received as output from the trained machine learning model. The predicted location may include at least a distal end of the instrument. The predicted location may also include an orientation and/or length of the instrument. For example, the machine learning model may identify at least a portion of the instrument in the image and then determine an orientation and/or length of the instrument.” and column 25, lines 3-9, “In either the supervised or unsupervised examples, once at least a portion of the instrument is identified, a location of the distal end can be determined. In some examples, the determination of the distal end may be further facilitated by other information, such as a known length of the instrument or other images from the image sequence of the subset.” and column 31, lines 27-33, “In each of the first and second images 1502, 1504, a location of the distal end 1508 of the instrument predicted using the exemplary method 1100 described with reference to FIG. 11 may be visually indicated. For example, the distal end 1508 may be highlighted, shaded, or colored to distinguish the distal end 1508 from a remaining portion of the instrument 1506.”)
22. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include composite images that display an indication that the instrument is extended a predetermined amount, as taught by Chen-US189 into the teaching of Meglan. The suggestion for doing so would provide an operator with a clear visual indicator of an instrument during a procedure. This is particularly useful in displaying the end of the instrument so that an operator may be aware of its location and orientation in relation to tissue. In addition, the display may also indicate information about the instrument through text, numbers, colors, etc. that indicate the status of the instrument. This detailed information will assist an operator with live data and lower the likelihood of mistakes. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
23. As per claim 8, Meglan in view of Chen-US189 discloses: The internal medical imaging system of claim 7, wherein the indication is a numerical indication of an extension length of the instrument. (Chen-US189, column 24, lines 37-42, “In some examples, the prediction may include a predicted location of the distal end of the instrument. For example, the machine learning model may identify at least a portion of the instrument in the image and then determine an orientation and/or length of the instrument.” and column 25, lines 3-9, “In either the supervised or unsupervised examples, once at least a portion of the instrument is identified, a location of the distal end can be determined. In some examples, the determination of the distal end may be further facilitated by other information, such as a known length of the instrument or other images from the image sequence of the subset.” and column 31, lines 27-33, “In each of the first and second images 1502, 1504, a location of the distal end 1508 of the instrument predicted using the exemplary method 1100 described with reference to FIG. 11 may be visually indicated. For example, the distal end 1508 may be highlighted, shaded, or colored to distinguish the distal end 1508 from a remaining portion of the instrument 1506.” and column 9, lines 38-45, “In other words, the post-processing step 206 transforms the prediction into an informational format and/or display that is consumable by the physician or other healthcare professional. Exemplary informational formats and/or displays may include heatmaps, text overlays superimposed on images, numerical tabular formats, rank ordered tabular formats, text tables, highlight tables, and/or bar charts.”)
24. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include composite images that provide a numerical indication of an extension length of the instrument, as taught by Chen-US189 into the teaching of Meglan. The suggestion for doing so would provide an operator with a clear visual indicator of an instrument during a procedure. This is particularly useful in displaying the end of the instrument so that an operator may be aware of its location and orientation in relation to tissue. In addition, the display may also indicate information about the instrument through text, numbers, colors, etc. that indicate the status of the instrument. This detailed information will assist an operator with live data and lower the likelihood of mistakes. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
25. As per claim 9, Meglan in view of Chen-US189 discloses: The internal medical imaging system of claim 7, wherein the indication is a graphical indication including a color coding corresponding to a predetermined extension length of the instrument. (Chen-US189, column 24, lines 37-42, “In some examples, the prediction may include a predicted location of the distal end of the instrument. For example, the machine learning model may identify at least a portion of the instrument in the image and then determine an orientation and/or length of the instrument.” and column 25, lines 3-9, “In either the supervised or unsupervised examples, once at least a portion of the instrument is identified, a location of the distal end can be determined. In some examples, the determination of the distal end may be further facilitated by other information, such as a known length of the instrument or other images from the image sequence of the subset.” and column 31, lines 27-33, “In each of the first and second images 1502, 1504, a location of the distal end 1508 of the instrument predicted using the exemplary method 1100 described with reference to FIG. 11 may be visually indicated. For example, the distal end 1508 may be highlighted, shaded, or colored to distinguish the distal end 1508 from a remaining portion of the instrument 1506.”)
26. Chen-US189 is analogous art with respect to Meglan because they are from the same field of endeavor, namely object identification in a medical imaging environment. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include composite images that provide color coding corresponding to a predetermined extension length, as taught by Chen-US189, into the teaching of Meglan. The suggestion for doing so would provide an operator with a clear visual indicator of an instrument during a procedure. This is particularly useful in displaying the end of the instrument so that an operator may be aware of its location and orientation in relation to tissue. In addition, the display may also indicate information about the instrument through text, numbers, colors, etc. that indicate the status of the instrument. This detailed information will provide an operator with live data and lower the likelihood of mistakes. Therefore, it would have been obvious to combine Chen-US189 with Meglan.
27. Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Meglan (US-10849709-B2) in view of Chen et al. (CN-114359142-A, hereinafter "Chen-CN142").
28. As per claim 10, Meglan discloses: The internal medical imaging system of claim 1, wherein the instructions further cause the computing device to perform operations including identify the occluded portions of the plurality of second images by [[detecting an air bubble in the field of view.]] (See rejection for claim 1 above.)
29. Meglan doesn't explicitly disclose but Chen-CN142 discloses: detecting an air bubble in the field of view. (Chen-CN142, page 4, ¶ [0004], “In view of the above-mentioned deficiencies in the prior art, the present invention aims to provide a method and system for automatic bubble detection and grading based on ultrasound video, which can realize automatic bubble detection and grading based on ultrasound video ...”)
30. Chen-CN142 is analogous art with respect to Meglan because they are from the same field of endeavor, namely detecting objects in medical imaging, specifically bubbles in this case. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include detecting an air bubble in the field of view, as taught by Chen-CN142, into the teaching of Meglan. The suggestion for doing so would allow the imaging system to filter out the air bubble in the event it is obstructing the operator’s view. In addition, the ability to detect air bubbles may also help address medical conditions and situations that arise during a procedure and allow the operator to handle them appropriately. Therefore, it would have been obvious to combine Chen-CN142 with Meglan.
31. As per claim 11, Meglan in view of Chen-CN142 discloses: The internal medical imaging system of claim 10, wherein the instructions further cause the computing device to perform operations including detect the air bubble in the field of view by applying an image processing algorithm trained to detect an ultrasound signature of an air bubble. (Chen-CN142, page 9, ¶ [0080], “Step S6: Use the constructed dataset to train a 3D convolutional neural network, where the input of the 3D convolutional neural network is the bubble ROI area of the ultrasound video frame sequence ...” and page 9, ¶ [0081], “Step S7: Use the trained 3D convolutional neural network to automatically classify the ultrasound video to be detected.” and page 9, ¶ [0082], “In a preferred embodiment, an automatic bubble detection and grading system based on ultrasound video of the present invention includes: an ultrasound device for acquiring cardiac ultrasound video; a target tracking module for tracking the area to be detected in each frame of the ultrasound video; an ROI area detection module for calculating the ROI area in the area to be detected, that is, the area where bubbles may exist; and a trained 3D convolutional neural network for automatically detecting the input ultrasound video to be detected and obtaining an automatic grading result.”)
32. Chen-CN142 is analogous art with respect to Meglan because they are from the same field of endeavor, namely detecting objects in medical imaging, specifically bubbles in this case. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include detecting an air bubble by utilizing an image processing algorithm trained to detect it via ultrasound, as taught by Chen-CN142, into the teaching of Meglan. The suggestion for doing so would provide the imaging system the ability to quickly identify air bubbles through training and to point out to the operator air bubbles that are difficult to identify. It also provides the potential for automatic filtering of air bubbles when they are detected. Therefore, it would have been obvious to combine Chen-CN142 with Meglan.
33. Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Meglan (US-10849709-B2) in view of Krieger et al. (US-10089737-B2, hereinafter "Krieger").
34. As per claim 13, Meglan discloses: The internal medical imaging system of claim 1, wherein the instructions further cause the computing device to perform operations including [[track position and orientation of at least one first image and the plurality of second images.]] (See rejection for claim 1 above.)
35. Meglan doesn't explicitly disclose but Krieger discloses: track position and orientation of at least one first image and the plurality of second images. (Krieger, column 3, lines 61-63, “FIG. 8 illustrates an embodiment wherein the imagers and light source are tracked to determine position and orientation, and navigated by the workstation.” and column 5, lines 23-25, “The distortion model 4 calculates the projected distortion and location of image artifacts based on current physical models of image acquisition and reflectance.” and column 5, lines 44-48, “Knowing the surface orientation, which may be represented as a vector normal to a patch of the surface, and the angle of incoming light, the angle of reflected light can be predicted according to the law of reflection.” and column 8, lines 51-54, “The depth map 2 and optical image 3, which are obtained during the processes in FIG. 1 or 3, may be registered to one another with knowledge of the camera positions and orientations.”)
36. Krieger is analogous art with respect to Meglan because they are from the same field of endeavor, namely image artifact/abnormality correction, including for medical applications. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include tracking the position and orientation of a plurality of images, as taught by Krieger, into the teaching of Meglan. The suggestion for doing so would allow the imaging system to compensate in real time for visual changes, particularly when a camera or object moves. The position and orientation information would ensure the imaging system is properly registered in 3D space so that any image compensation is properly aligned with the image feed. Therefore, it would have been obvious to combine Krieger with Meglan.
37. As per claim 14, Meglan in view of Krieger discloses: The internal medical imaging system of claim 13, wherein the instructions further cause the computing device to perform operations including generate the plurality of composite images by deforming the nonoccluded portion of the at least one first image. (Krieger, column 3, lines 1-5, “Subsequently one may use information from the 3D depth map to correct abnormalities in the optical image. These flattened, corrected images may then be overlaid on the 3D image for visualization purposes, or used to provide corrected 2D images.” and column 3, lines 17-25, “According to one embodiment there is described a method to correct for undesired image distortion or artifacts which is informed by the 3D depth information and other knowledge of the camera arrangement and physical environment. According to one embodiment there is described a method for determining the possible deformations of the 3D image data, satisfying a multitude of relevant parameters designed to minimize the loss of relevant information.” and column 6, lines 31-40, “For example, one factor in the distortion model, which can be assessed with knowledge of the 3D surface and lighting conditions, is occlusion. If a region is occluded, the region will appear darker due to shadows and have limited depth information. The distortion model recognizes such areas, knowing the lighting conditions and 3D surface, and will be used to generate possible surface features by interpolating characteristics of the surrounding unclouded area such as color, texture, and 3D surface information.”)
38. Krieger is analogous art with respect to Meglan because they are from the same field of endeavor, namely artifact/abnormality correction, including for medical applications. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include generating a plurality of composite images by deforming the nonoccluded portion of at least the first image, as taught by Krieger, into the teaching of Meglan. The suggestion for doing so would allow the imaging system to properly compensate for any distortion or image alignment issues. This would give an operator a better representation and understanding of the displayed images, particularly during a medical procedure. Therefore, it would have been obvious to combine Krieger with Meglan.
39. As per claim 15, Meglan in view of Krieger discloses: The internal medical imaging system of claim 14, wherein the instructions further cause the computing device to perform operations including deform the nonoccluded portion of the at least one first image by interpolating to account for differences in position and orientation of the plurality of second images and the at least one first image. (Krieger, column 5, lines 27-36, “The distortion model 4 includes a 2D image with depth at each pixel, giving a 3D surface at which each pixel contains additional information relevant to the correction of imaging abnormalities such as amount of extra light due to reflection, adjustment in illumination due to surface orientation and occlusion, expected radiance and diffusion due to surface roughness and other material properties, such as irregularities in color inferred from adjacent areas. These are sensitive to the position and intensity of the light which must be known at the time of image acquisition.” and column 6, lines 31-40, “For example, one factor in the distortion model, which can be assessed with knowledge of the 3D surface and lighting conditions, is occlusion. If a region is occluded, the region will appear darker due to shadows and have limited depth information. The distortion model recognizes such areas, knowing the lighting conditions and 3D surface, and will be used to generate possible surface features by interpolating characteristics of the surrounding unclouded area such as color, texture, and 3D surface information.” and column 7, lines 50-53, “For example, if there is an area with an image artifact, the surrounding areas of acceptable quality may be extrapolated or interpolated in order to approximate the optical image 3 at the distorted region.”)
40. Krieger is analogous art with respect to Meglan because they are from the same field of endeavor, namely artifact/abnormality correction, including for medical applications. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include deforming the nonoccluded portion of at least the first image by interpolating to account for differences in position and orientation of the plurality of additional images, as taught by Krieger, into the teaching of Meglan. The suggestion for doing so would allow the imaging system to properly compensate for any distortion or image alignment issues, particularly by interpolating pixels so that a complete and unobscured image is available. This would give an operator a better representation and understanding of the displayed images, particularly during a medical procedure. Therefore, it would have been obvious to combine Krieger with Meglan.
41. Claims 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Meglan (US-10849709-B2) in view of Buckton et al. (US-9390546-B2, hereinafter "Buckton").
42. As per claim 16, Meglan discloses: A method for real-time replacement of occluded portions of an [[ultrasound]] imaging stream, the method comprising:
receiving, via the [[ultrasound]] imaging stream, at least one first image of a field of view of a patient;
receiving, via the [[ultrasound]] imaging stream, a plurality of second images of the field of view, the plurality of second images received subsequent to the at least one first image;
identifying occluded portions of the plurality of second images within the field of view;
extracting at least one nonoccluded portion of the at least one first image corresponding to the occluded portions;
generating a plurality of composite images by replacing the occluded portions of the plurality of second images with the at least one nonoccluded portion from the at least one first image; and
displaying the plurality of composite images in substantially real-time as the receiving the plurality of second images. (See rejection for claim 1 above.)
43. Meglan doesn't explicitly disclose but Buckton discloses: … ultrasound [[imaging stream]] (Buckton, column 2, lines 13-16, “The ultrasound imaging system further includes a visualization module configured generate a representation of an object of interest and remove occluded features …”)
44. Buckton is analogous art with respect to Meglan because they are from the same field of endeavor, namely removing visual occlusions from medical imaging. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include the use of an ultrasound imaging stream, as taught by Buckton, into the teaching of Meglan. The suggestion for doing so would provide an additional medical imaging modality for filtering visual occlusions, giving an operator a choice of imaging system. Therefore, it would have been obvious to combine Buckton with Meglan.
45. Claim 17, which is similar in scope to claims 2 and 16, is thus rejected under the same rationale as described above. In addition, the rationale for combining Buckton with Meglan is the same as for claim 16 above.
46. Claim 18, which is similar in scope to claims 3, 16, and 17, is thus rejected under the same rationale as described above. In addition, the rationale for combining Buckton with Meglan is the same as for claim 16 above.
47. Claim 19, which is similar in scope to claims 4, 16, 17, and 18, is thus rejected under the same rationale as described above. In addition, the rationale for combining Buckton with Meglan is the same as for claim 16 above.
48. Claim 20, which is similar in scope to claims 5, 16, and 17, is thus rejected under the same rationale as described above. In addition, the rationale for combining Buckton with Meglan is the same as for claim 16 above.
49. Claim 21, which is similar in scope to claims 6, 7, 8, 9, 16, and 17, is thus rejected under the same rationale as described above. In addition, the rationale for combining Buckton with Meglan is the same as for claim 16 above.
50. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Meglan (US-10849709-B2) in view of Cohen (US-2013/0172730-A1).
51. As per claim 22, Meglan discloses: An internal medical imaging system comprising:
an imaging sensor configured to obtain medical images within an internal portion of a patient; (Meglan, Fig. 1; column 1, lines 50-55, “In an embodiment of the disclosure, a system for removing an occluding object from a surgical image includes an image capture device configured to be inserted into a patient and capture an initial image of a surgical site inside the patient during a surgical procedure and a controller configured to receive the image.”)
a display device configured to display image data and related graphical information from the imaging sensor; and (Meglan, Fig. 1; column 2, line 27, “In some aspects, a display displays the processed image.”)
a computing device including a processor and a memory device, the memory device including instructions that, when executed by the processor, cause the computing device to perform operations including: (Meglan, Fig. 1; column 4, lines 12-16, “Turning to FIG. 1, a system for processing images and/or video of a surgical environment, according to embodiments of the present disclosure, is shown generally as 100. System 100 includes a controller 102 that has a processor 104 and a memory 106.” and column 7, lines 36-40, “The controller may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory.”)
receive, from the imaging sensor, a first image of a field of view of a patient, wherein the field of view is captured with the imaging sensor [[at a first position and a first orientation relative to the patient;]] (Meglan, column 1, lines 50-55, “In an embodiment of the disclosure, a system for removing an occluding object from a surgical image includes an image capture device configured to be inserted into a patient and capture an initial image of a surgical site inside the patient during a surgical procedure and a controller configured to receive the image.”)
receive, from the imaging sensor, a second image of the field of view, the second image received subsequent to the first image;
identify an occluded portion of the second image within the field of view; (Meglan, column 1, lines 55-62, “When the controller determines that the occluding object is present in the initial image, the controller executes a removal algorithm. The removal algorithm includes controlling the image capture device to capture a plurality of images and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed from the processed image.”)
extract a nonoccluded portion of the first image corresponding to the occluded portion of the second image;
generate a composite image by replacing the occluded portion of the second image with the nonoccluded portion from the first image; and (Meglan, column 1, lines 55-62, “When the controller determines that the occluding object is present in the initial image, the controller executes a removal algorithm. The removal algorithm includes controlling the image capture device to capture a plurality of images and applying an image filter to combine the initial image and the plurality of images and generate a processed image where the occluding object is removed from the processed image.” and column 2, lines 49-59, “In an aspect, applying the image filter includes separating the initial image into an initial background image and an initial occluding layer, separating the plurality of images into a plurality of background images and a plurality of occluding layers, and combining the initial background images and the plurality of background images to generate the processed image. Combining the initial background images and the plurality of background images includes registering the initial background image and the plurality of background images, and overlaying the registered initial background image and the plurality of background images.” and column 5, lines 34-35, “In another embodiment, a plurality of images may be obtained over time to remove occluding objects.”)
send, to the display device, the composite images in substantially real-time as the receiving the second image. (Meglan, column 2, line 27, “In some aspects, a display displays the processed image.” and column 4, lines 2-8, “The captured video is processed in real time or near real time and then displayed to the clinician as processed image. The image processing filters are applied to each frame of the captured video. Providing the processed image or video to the clinician provides the clinician with an unobscured view to a clinician.”)
52. Meglan doesn't explicitly disclose but Cohen discloses: [[receive, from the imaging sensor, a first image of a field of view of a patient, wherein the field of view is captured with the imaging sensor]] at a first position and a first orientation relative to the patient; (Cohen, [0011], “In some embodiments, the present disclosure includes an imager, a database, an anchor, a medical positioning system (MPS), a processor, and a display.” and [0013], “This common 3D position and orientation, which may determined by the MPS, allows for the association or co-registration of numerous coordinate systems. If the anchor is a physical sensor affixed to a stable location along or within the body of the patient, the MPS may determine the position and orientation of the sensor when each image is acquired during the first and second time periods.” And [0043], ”One way for the MPS 114 to determine the P/Os of the imagers is by affixing sensors to (i.e., to, within, about, etc.) the imagers.” and [0038], “Each of the first MPS 102 … may be a device that determines, among other things, the position and orientation (P/O) of at least one sensor.”)
53. Cohen is analogous art with respect to Meglan because they are from the same field of endeavor, namely medical imaging. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include position and orientation information, relative to the patient, associated with the field of view of the imaging sensor, as taught by Cohen, into the teaching of Meglan. The suggestion for doing so would help the imaging system keep track of the exact location (position/orientation) relative to the patient and use that information to register the images with the 3D spatial surroundings. Doing so may not only help remove occlusions from images but also provide a doctor or medical operator with details about where the patient’s body is in relation to a medical instrument or other objects. Therefore, it would have been obvious to combine Cohen with Meglan.
Conclusion
54. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nakamura et al. (JP-2008093287-A) discloses a medical image processing apparatus that detects an occlusion region of a target image from an image of biological tissue and obtains one or more non-target images to complete the occlusion region of the target image.
55. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
56. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW CLOTHIER/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614