DETAILED ACTION
Election/Restrictions
Applicant’s election without traverse of claims 1-12 in the reply filed on 1/7/2026 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Venkataraman et al. (US 20240106988 A1) in view of Reinstein et al. (US 20180214214 A1).
Considering claim 1, Venkataraman teaches a method for automatic cropping by a video endoscope, the method comprising:
acquiring a first image (204/first portion of the video images) from a video feed of a camera of the video endoscope ([0008], [0010] first portion of the video images, [0043] video output or feed), the first image including a first portion of a tool and patient anatomy (Fig.2A, [0017] a tip of a surgical tool is approaching a critical anatomy, [0052]);
detecting the tool in the first image (204, Fig.2A-B, [0008] monitor and track the movement of a surgical tool (e.g., the tip of the tool) within the viewing window, [0048] detect the tool);
based on the detected tool in the first image (Fig.2A-B, [0008] monitor and track the movement of a surgical tool, [0010] first portion of the video images., [0048]), automatically selecting a first display region of the first image, the first display region including the patient anatomy and a tip of the tool ([0052] show the endoscope video in full-image-view mode to provide the user with an overview of the anatomy and tool placement/status at a reduced resolution. This display mode also allows the user to view surgical-procedure-related information displayed in the border regions (e.g., borders 206 and 208) on the screen, [0048]);
acquiring a second image (second portion of the video images) from the video feed ([0008], [0010], [0043] video output or feed), the second image including a second portion of the tool and the patient anatomy (Fig.2A, [0017] a tip of a surgical tool is approaching a critical anatomy, [0052]);
detecting the tool in the second image (Fig.2A-B, [0008] monitor and track the movement of a surgical tool, [0048]); and
based on the detected tool in the second image (Fig.2A-B, [0008] monitor and track the movement of a surgical tool, [0010] second portion of the video images, [0048] detect the tool), automatically selecting a second display region of the second image, the second display region including the patient anatomy and the tip of the tool ([0052] show the endoscope video in full-image-view mode to provide the user with an overview of the anatomy and tool placement/status at a reduced resolution. This display mode also allows the user to view surgical-procedure-related information displayed in the border regions (e.g., borders 206 and 208) on the screen).
Venkataraman does not clearly teach a laryngoscope.
Reinstein teaches a laryngoscope ([0012]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Venkataraman with the above teaching of Reinstein, in order to provide a guidance controller that displays a user selection of a point of interest on an interactive planar slice of a volume image for user guidance contemplation purposes, without any movement of the oblique endoscope by the spherical robot until receiving a user confirmation of the user selection of the point of interest.
Considering claim 2, Venkataraman and Reinstein further teach wherein the patient anatomy is enlarged in the second display region relative to the first display region (Venkataraman: [0052] show the endoscope video in full-image-view mode to provide the user with an overview of the anatomy and tool placement/status at a reduced resolution…).
Considering claim 3, Venkataraman and Reinstein further teach wherein the method further includes displaying the first display region and the second display region in real time at a display of the video laryngoscope (Venkataraman: [0010], Reinstein: Fig.5, 8-9).
Considering claim 4, Venkataraman and Reinstein further teach wherein the tip of the tool in the first display region and the second display region has a same height when the first display region and the second display region are displayed, wherein the same height is a distance from an end of the tip of the tool to a bottom edge of one of the first display region or the second display region (Venkataraman: [0073]).
Considering claim 5, Venkataraman and Reinstein further teach wherein the patient anatomy includes vocal cords and the tool is an endotracheal tube (Venkataraman: [0039]-[0040], Reinstein: [0012]).
Considering claim 6, Venkataraman and Reinstein further teach wherein the tip of the tool is a distal end of the endotracheal tube positioned distally from a cuff of the endotracheal tube (Venkataraman: [0039]-[0040], Reinstein: [0012]).
Considering claim 7, Venkataraman teaches a method for automatic cropping by a video endoscope, the method comprising:
acquiring an image using a camera of the video endoscope ([0010], [0033] using video cameras and video images), the image including a portion of a tool ([0008] monitor and track the movement of a surgical tool, [0010]);
providing at least a portion of the acquired image as input into a trained machine learning (ML) model ([0033] processing and analyzing the surgical videos of the given procedure using machine-learning-based approaches);
receiving detection of the tool (214, Fig.2B) as output from the trained ML model ([0008] monitor and track the movement of a surgical tool, [0010], [0048] an image processing technique with tool detection and recognition functions (e.g., a machine-learning-based or a computer-vision-based technique) can be used to first detect the tool and subsequently determine the location of tool tip 214);
based on the tool detection (Fig.2A-B, [0048] detect the tool), zooming out (adjust the viewing window) to a portion of the acquired image including patient anatomy and a tool portion of the tool having a tool height (a distance between the current tool tip location and the edge of the viewing window) (Fig.2A-B, [0008], [0035] “viewing window,” which selectively displays different regions, [0073] a distance); and
displaying the zoomed-out portion of the acquired image at a display of the video laryngoscope (Fig.2A-B, [0008], [0035] selectively displays different regions, [0048] visualization solution for displaying, [0073]).
Venkataraman does not clearly teach a laryngoscope.
Reinstein teaches a laryngoscope ([0012]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Venkataraman with the above teaching of Reinstein, in order to provide a guidance controller that displays a user selection of a point of interest on an interactive planar slice of a volume image for user guidance contemplation purposes, without any movement of the oblique endoscope by the spherical robot until receiving a user confirmation of the user selection of the point of interest.
Considering claim 8, Venkataraman and Reinstein further teach wherein the tool height is less than 20 mm (a threshold distance) when the zoomed-out portion of the acquired image is displayed at the display of the video laryngoscope (Venkataraman: [0073] set a threshold distance).
Considering claim 9, Venkataraman and Reinstein further teach wherein displaying the zoomed-out (adjust the viewing window) portion of the acquired image includes fitting the zoomed-out portion to the display (Venkataraman: Fig.2A-B, [0008], [0035] “viewing window,” which selectively displays different regions, [0073]).
Considering claim 10, Venkataraman and Reinstein further teach wherein the aspect ratio of the zoomed-out portion and the display are the same (Venkataraman: [0004] ratio).
Considering claim 11, Venkataraman and Reinstein further teach detecting a progression of the tool towards the patient anatomy; and progressively zooming in to portions of images acquired by the camera as the tool progresses towards the patient anatomy (Venkataraman: Fig.2A-B, [0008], [0035] selectively displays different regions, [0048] visualization solution for displaying, [0073]).
Considering claim 12, Venkataraman and Reinstein further teach wherein the zoomed-in portions include the tool portion having the tool height (Venkataraman: [0073]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHAI MINH NGUYEN whose telephone number is (571)272-7923. The examiner can normally be reached 6:00 AM - 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Appiah can be reached at 571-272-7904. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHAI M NGUYEN/ Primary Examiner, Art Unit 2641