Prosecution Insights
Last updated: April 19, 2026
Application No. 18/809,358

MEDICAL SUPPORT APPARATUS, OPERATING METHOD FOR MEDICAL SUPPORT APPARATUS, AND OPERATING PROGRAM

Final Rejection — §102, §103
Filed: Aug 20, 2024
Examiner: ALDARRAJI, ZAINAB MOHAMMED
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
67% Grant Probability — Favorable
3-4 Expected OA Rounds
3y 5m To Grant
83% Grant Probability With Interview

Examiner Intelligence

Career Allow Rate: 67% — above average (81 granted / 121 resolved; -3.1% vs TC avg)
Interview Lift: +16.1% — strong (allow rate of resolved cases with an interview vs. without)
Typical Timeline: 3y 5m avg prosecution; 29 applications currently pending
Career History: 150 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 121 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The reply filed on 10/10/2025 has been entered. Claims 1-2 and 4-20 remain pending in the current application. The amendments to the claims have overcome the claim interpretation, the 35 USC 112 rejection, and the 35 USC 101 rejection.

Claim Objections

Claims 1, 5, and 19 are objected to because of the following informalities:

Claim 1 recites the limitation “derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating a position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining a position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose”, which should read “derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates a position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating the position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining the position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose”.

Claim 5 recites the limitation “acquire deformation information indicating how the internal structure in the first three-dimensional image is deformed with respect to the internal structure in the second three-dimensional image the second three-dimensional image having been prepared in advance as a preoperative three-dimensional image, deform the preparation information based on the acquired deformation information, and generate the composite image with the deformed preparation information superimposed onto the surgical field image”, which should read “acquire deformation information indicating how the internal structure in the first three-dimensional image is deformed with respect to the internal structure in the second three-dimensional image having been prepared in advance as a preoperative three-dimensional image, deform the preparation information based on the acquired deformation information, and generate the composite image with the deformed preparation information superimposed onto the surgical field image”.
Claim 19 recites the limitation “deriving first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating a position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining a position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose”, which should read “deriving first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates a position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating the position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining the position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 4, 7, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Popovic et al. (US 2019/0290247).

Regarding claim 1, Popovic teaches a medical support apparatus comprising a processor configured to (para. 0037; image-based fusion 10 of an endoscopic image and an ultrasound image of an anatomical region for a minimally invasive procedure accomplished by a controller network 20.): acquire a surgical field image obtained by optically taking, with a camera, an image of a surgical field containing a target area inside the body and an ultrasound probe inserted into the body (paras. 0039, 0041, and 0052; endoscope imaging phase 11 involves an introduction of an endoscope through a port of a patient into the anatomical region as well known in the art of the present disclosure whereby the endoscope is operated to generate an endoscopic image of an anatomical region (e.g., an endoscopic view of an organ within the anatomical region).
Probe detector 51 controls a processing of endoscopic image frames 33 to detect laparoscopic ultrasound probe 41 within field-of-view 32 of endoscope 31); acquire a first three-dimensional image illustrating an internal structure of the target area, the first three-dimensional image being based on an ultrasound image group taken by the ultrasound probe (para. 0050; LUS probe controller 40 controls a generation of one or more ultrasound images by a LUS probe 41 as well known in the art of the present disclosure. More particularly, LUS probe 41 has a transducer head 41h for transmitting a short pulse of ultrasound longitudinally within an imaging plane 42 and for receiving reflected sound (“ultrasound echo”) illustrative of an ultrasound image. LUS probe 41 may be stationary during imaging resulting in a single 2D planar ultrasound image 43, or LUS probe 41 may be pivoted relative to the port resulting in a 3D scan ultrasound image consisting of a spatial sequence of 2D planar ultrasound images 43.); derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating a position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining a position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose (paras. 0059 and 0061-0062; probe detector 51 automatically detects the distal transducer head 41h of LUS probe 41 within a field-of-view 32 of endoscope 31 by executing techniques known in the art based on a detection of a pattern of transducer head 41h, a specific shape/color/texture of transducer head 41h, or a supplemental/additional pattern on transducer head 41h. A stage S74 of flowchart 70 encompasses image transformer 52 (FIG. 2) attaching a reference frame 44 to transducer head 41h to thereby execute a 2D-3D registration as well known in the art (e.g., a RANSAC registration) for computing the image transformation .sup.ENDOSCOPET.sub.LUS PROBE between an endoscopic image space 34 of endoscope 31 and an ultrasound image space 42 of LUS probe 41. The examiner notes that the probe is located within the endoscopic image by analyzing the visual pattern attached to the probe in the endoscopic image. After determining the pose of the probe, the positional relationship between the ultrasonic image and the endoscopic image is determined by transforming and registering the endoscopic image and the ultrasonic image); acquire a second three-dimensional image illustrating the internal structure of the target area and having been generated on the basis of a tomographic image group taken in advance by a tomograph (paras. 0074-0074; stage S76 of flowchart 70 may further encompass image integration 53 executing an additional fusion of the endoscopic/ultrasound images to an anatomical model including, but not limited to, a volume image of the anatomical region and an anatomical atlas of an anatomical organ within the anatomical region. Volume Image Registration.
The endoscope view may be registered to a preoperative 3D image (e.g., CT, XperCT, MRI, PET, etc.) using methods well known in the art of the present disclosure (e.g., U.S. Pat. No. 9,095,252 B2).); derive second positional relationship information indicating a positional relationship between a position within the first three-dimensional image and a position within a second three-dimensional image by performing image analysis on the first three-dimensional image and the second three-dimensional image, the image analysis associating morphologies of an internal structure common to both images and computing a three-dimensional transformation between coordinate systems of the first and second three-dimensional images (paras. 0073-0076 and 0087-0088; a registration .sup.ENDOSCOPET.sub.CT between a volumetric space 81 of a pre-operative CT image 80 and endoscopic image space 34 of endoscope 31 whereby 3D image and structures from pre-operative CT image 80 may be concurrently displayed based on the fusion of endoscope/ultrasound images as will be further explained in connection with FIGS. 7-16. The .sup.ENDOSCOPET.sub.CT registration can be refined using an image-based registration method well known in the art of the present disclosure (e.g., mutual information) acquired at a plane of interest. From .sup.ENDOSCOPET.sub.CT and .sup.ENDOSCOPET.sub.LUS, a transformation between the ultrasound and CT image spaces can be computed. Anatomical Atlas Registration. The fusion of endoscopic/ultrasound images may be registered to an anatomical atlas of an anatomical organ that captures the shapes of the anatomical organ across the population. Such registration enables only a rough localization of LUS probe 41 in relation to the anatomical organ. For example, FIG. 6 illustrates a registration .sup.MODELT.sub.ENDOSCOPE between a model space 91 of an anatomical atlas 90 of a liver and endoscopy by using a set of pre-defined, quickly identifiable anatomical points of reference 92 shown in ultrasound images 43b-43d registered to the endoscopic image. These points of reference 92 may be organ specific large vessels bifurcations or surface landmarks. The examiner notes that the anatomical reference points are used to register the fused (ultrasound/endoscopic) image to the preoperative image by associating common anatomical reference points between the two images (preoperative and fused image); a transformation between the ultrasound and preoperative image spaces is then computed. Based on the registration and the transformation between the ultrasound and preoperative images, the second positional relationship is determined.); generate a composite image (FIGS. 10-11; reference planes 101a-101c of preoperative CT information overlaid in conjunction with ultrasound image 43g on an endoscopic image 33c; FIG. 11 illustrates reference planes 101d-101f in an endoscopic image 33d corresponding to reference planes 101a-101c of a CT image 83. The examiner notes that the composite image is created by overlaying the preoperative information at a position corresponding to the anatomical region visualized in the endoscopic image, by transforming the preoperative image data using the computed transformation between the preoperative and ultrasound image spaces (second positional relationship) and subsequently transforming into the endoscopic coordinate frame using the ultrasound/endoscope positional relationship (first positional relationship)); and control displaying of the composite image (para. 0071; the fusion image is displayed on the display).
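To trace the transform chain the examiner relies on: a probe pose estimated from the surgical field image carries the ultrasound volume into the endoscope frame (first positional relationship), and the ultrasound-to-CT registration (second positional relationship) then brings preoperative content along the same chain for overlay. The following Python sketch illustrates that composition with 4x4 homogeneous transforms; every name and numeric pose in it is an illustrative assumption, not a value from Popovic or the claims.

    import numpy as np

    def hom(R, t):
        # Build a 4x4 homogeneous transform from a 3x3 rotation and a translation.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Hypothetical stand-ins for the transforms named in the rejection:
    #   endo_T_probe: probe pose in the endoscope (surgical-field) frame,
    #                 estimated by image analysis of the surgical field image.
    #   probe_T_us:   the ultrasound volume expressed in the probe's frame
    #                 (a fixed calibration).
    #   endo_T_ct:    registration of the preoperative CT volume to the
    #                 endoscope frame (Popovic's ENDOSCOPE-T-CT).
    endo_T_probe = hom(np.eye(3), np.array([0.01, -0.02, 0.10]))
    probe_T_us = hom(np.eye(3), np.array([0.0, 0.0, 0.005]))
    endo_T_ct = hom(np.eye(3), np.array([0.03, 0.00, 0.12]))

    # First positional relationship: ultrasound volume -> surgical-field image.
    endo_T_us = endo_T_probe @ probe_T_us

    # Second positional relationship: CT volume -> ultrasound volume, obtained
    # by composing the two registrations: us_T_ct = inv(endo_T_us) @ endo_T_ct.
    us_T_ct = np.linalg.inv(endo_T_us) @ endo_T_ct

    # A CT-space point (e.g., a planned excision-line vertex) mapped into the
    # endoscope frame for overlay.
    ct_point = np.array([0.02, 0.01, 0.05, 1.0])
    endo_point = endo_T_us @ us_T_ct @ ct_point
    print(endo_point[:3])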
Regarding claim 2, Popovic teaches the medical support apparatus according to claim 1, wherein a marker formed from an optically detectable pattern is provided on an outer circumferential surface of the ultrasound probe, and the processor is configured to estimate the position and orientation of the ultrasound probe by detecting the marker from the surgical field image (para. 0059; probe detector 51 automatically detects the distal transducer head 41h of LUS probe 41 within a field-of-view 32 of endoscope 31 by executing techniques known in the art based on a detection of a pattern of transducer head 41h, a specific shape/color/texture of transducer head 41h, or a supplemental/additional pattern on transducer head 41h.).

Regarding claim 4, Popovic teaches the medical support apparatus according to claim 1, wherein the internal structure of the target area is a vascular structure, and the processor is configured to derive the second positional relationship information by comparing bifurcation points of blood vessels included in the vascular structure in each of the first three-dimensional image and the second three-dimensional image to one another (figure 6, paras. 0073-0076; FIG. 6 illustrates a registration .sup.MODELT.sub.ENDOSCOPE between a model space 91 of an anatomical atlas 90 of a liver and endoscopy by using a set of pre-defined, quickly identifiable anatomical points of reference 92 shown in ultrasound images 43b-43d registered to the endoscopic image. These points of reference 92 may be organ specific large vessels bifurcations or surface landmarks.).

Regarding claim 7, Popovic teaches the medical support apparatus according to claim 1, wherein the preparation information includes a plurality of pieces of preparation information at different depths in a depth direction proceeding from a surface layer to deep layers of the target area in the second three-dimensional image (figure 6, paras. 0073-0076; stage S76 of flowchart 70 may further encompass image integration 53 executing an additional fusion of the endoscopic/ultrasound images to an anatomical model including, but not limited to, a volume image of the anatomical region and an anatomical atlas of an anatomical organ within the anatomical region; a registration .sup.MODELT.sub.ENDOSCOPE between a model space 91 of an anatomical atlas 90 of a liver and endoscopy by using a set of pre-defined, quickly identifiable anatomical points of reference 92 shown in ultrasound images 43b-43d registered to the endoscopic image. These points of reference 92 may be organ specific large vessels bifurcations or surface landmarks. The examiner notes that the preparation data is the same as the 3D anatomical model that comprises a preoperative image and an anatomical atlas representing a target region with internal structures under the surface of the target region at different depths.).

Regarding claim 17, Popovic teaches the medical support apparatus according to claim 1, wherein the surgical field image is an image acquired by the camera in endoscopic surgery, and the camera is a camera provided at a distal end part of an endoscope (figure 1, para. 0002; endoscopes and laparoscopes are thin, elongated camera assemblies that allow clinicians to view the internal anatomy of a patient without the need to surgically expose the anatomy for a direct view. Endoscopes can fit through narrow natural orifices or small incisions in the skin, thus resulting in reduced trauma to the patient as compared to open surgery).
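The marker-based pose estimation recited in claim 2 is, in computer-vision terms, a perspective-n-point problem: known 3D marker geometry on the probe plus its detected 2D pixel locations yield the probe's pose in the camera frame. Here is a hedged sketch using OpenCV's solvePnP; the marker geometry, pixel coordinates, and camera intrinsics are placeholders, and nothing ties the references or the claims to this particular routine.

    import numpy as np
    import cv2  # OpenCV; pip install opencv-python

    # Hypothetical marker geometry: four corner points (in meters) of a pattern
    # on the probe's outer surface, expressed in the probe's own frame, and
    # their detected pixel locations in the surgical field image. Both arrays
    # are illustrative placeholders, not values from the record.
    marker_3d = np.array([[0.00, 0.00, 0.0],
                          [0.02, 0.00, 0.0],
                          [0.02, 0.01, 0.0],
                          [0.00, 0.01, 0.0]], dtype=np.float64)
    marker_2d = np.array([[410.0, 300.0],
                          [470.0, 305.0],
                          [468.0, 338.0],
                          [408.0, 332.0]], dtype=np.float64)

    # Assumed pinhole intrinsics for the endoscope camera; distortion neglected.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    # Perspective-n-Point: recover the probe's rotation and translation in the
    # camera frame from the 3D-2D marker correspondences.
    ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print(ok, R, tvec.ravel())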
Regarding claim 18, Popovic teaches the medical support apparatus according to claim 1, wherein the preparation information includes at least one from among an excision line where the target area is to be excised, a lesion, a vascular structure which is an internal structure of the target area, and reference information when determining the excision line (para. 0076; the internal structure is large vessels of the liver).

Regarding claim 19, Popovic teaches an operating method for a medical support apparatus comprising a processor, the operating method causing the processor to (para. 0037; image-based fusion 10 of an endoscopic image and an ultrasound image of an anatomical region for a minimally invasive procedure accomplished by a controller network 20.): acquiring a surgical field image obtained by optically taking, with a camera, an image of a surgical field that contains a target area inside the body and an ultrasound probe inserted into the body (paras. 0039, 0041, and 0052; endoscope imaging phase 11 involves an introduction of an endoscope through a port of a patient into the anatomical region as well known in the art of the present disclosure whereby the endoscope is operated to generate an endoscopic image of an anatomical region (e.g., an endoscopic view of an organ within the anatomical region). Probe detector 51 controls a processing of endoscopic image frames 33 to detect laparoscopic ultrasound probe 41 within field-of-view 32 of endoscope 31); acquiring a first three-dimensional image illustrating an internal structure of the target area, the first three-dimensional image being based on an ultrasound image group taken by the ultrasound probe (para. 0050; LUS probe controller 40 controls a generation of one or more ultrasound images by a LUS probe 41 as well known in the art of the present disclosure. More particularly, LUS probe 41 has a transducer head 41h for transmitting a short pulse of ultrasound longitudinally within an imaging plane 42 and for receiving reflected sound (“ultrasound echo”) illustrative of an ultrasound image. LUS probe 41 may be stationary during imaging resulting in a single 2D planar ultrasound image 43, or LUS probe 41 may be pivoted relative to the port resulting in a 3D scan ultrasound image consisting of a spatial sequence of 2D planar ultrasound images 43.); deriving first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image by estimating a position and orientation of the ultrasound probe within the surgical field from the surgical field image and determining a position and orientation of the first three-dimensional image within the surgical field image by expressing the first three-dimensional image in a coordinate system of the ultrasound probe and transforming that coordinate system into a coordinate system of the surgical field image using the estimated pose (paras. 0059 and 0061-0062; probe detector 51 automatically detects the distal transducer head 41h of LUS probe 41 within a field-of-view 32 of endoscope 31 by executing techniques known in the art based on a detection of a pattern of transducer head 41h, a specific shape/color/texture of transducer head 41h, or a supplemental/additional pattern on transducer head 41h.
A stage S74 of flowchart 70 encompasses image transformer 52 (FIG. 2) attaching a reference frame 44 to transducer head 41h to thereby execute a 2D-3D registration as well known in the art (e.g., a RANSAC registration) for computing the image transformation .sup.ENDOSCOPET.sub.LUS PROBE between an endoscopic image space 34 of endoscope 31 and an ultrasound image space 42 of LUS probe 41. The examiner notes that the probe is located within the endoscopic image by analyzing the visual pattern attached to the probe in the endoscopic image. After determining the pose of the probe, the positional relationship between the ultrasonic image and the endoscopic image is determined by transforming and registering the endoscopic image and the ultrasonic image); acquiring a second three-dimensional image illustrating the internal structure of the target area and having been generated on the basis of a tomographic image group taken in advance by a tomograph (paras. 0074-0074; stage S76 of flowchart 70 may further encompass image integration 53 executing an additional fusion of the endoscopic/ultrasound images to an anatomical model including, but not limited to, a volume image of the anatomical region and an anatomical atlas of an anatomical organ within the anatomical region. Volume Image Registration. The endoscope view may be registered to a preoperative 3D image (e.g., CT, XperCT, MRI, PET etc.) using methods well known in art of the present disclosure (e.g., U.S. Pat. No. 9,095,252 B2).); deriving second positional relationship information indicating a positional relationship between a position within the first three-dimensional image and a position within a second three-dimensional image by performing image analysis on the first three-dimensional image and the second three-dimensional image, the image analysis associating morphologies of an internal structure common to both images and computing a three-dimensional transformation between coordinate systems of the first and second three-dimensional images (paras. 0073-0076 and 0087-0088; a registration .sup.ENDOSCOPET.sub.CT between a volumetric space 81 of a pre-operative CT image 80 and endoscopic image space 34 of endoscope 31 whereby 3D image and structures from pre-operative CT image 80 may be concurrently displayed based on the fusion of endoscope/ultrasound images as will be further explained in connection with FIGS. 7-16. The .sup.ENDOSCOPET.sub.CT registration can be refined using an image-based registration method well known in art of the present disclosure (e.g. mutual information) acquired at a plane of interest. From .sup.ENDOSCOPET.sub.CT and .sup.EndoscopeT.sub.LUE, a transformation between the ultrasound and CT image spaces can be computed. Anatomical Atlas Registration. The fusion of endoscopic/ultrasound images may be registered to an anatomical atlas of an anatomical organ that captures the shapes of the anatomical organ across the population. Such registration enables only a rough localization of LUS probe 41 in relation to the anatomical organ. For example, FIG. 6 illustrates a registration .sup.MODELT.sub.ENDOSCOPE between a model space 91 of an anatomical atlas 90 of a liver and endoscopy by using a set of pre-defined, quickly identifiable anatomical points of reference 92 shown in ultrasound images 43b-43d registered to the endoscopic image. These points of reference 92 may be organ specific large vessels bifurcations or surface landmarks. 
The examiner notes that the anatomical reference points are used to register the fused (ultrasound/endoscopic) image to the preoperative image by associating common anatomical reference points between the two images (preoperative and fused image); a transformation between the ultrasound and preoperative image spaces is then computed. Based on the registration and the transformation between the ultrasound and preoperative images, the second positional relationship is determined.); generating a composite image (FIGS. 10-11; reference planes 101a-101c of preoperative CT information overlaid in conjunction with ultrasound image 43g on an endoscopic image 33c; FIG. 11 illustrates reference planes 101d-101f in an endoscopic image 33d corresponding to reference planes 101a-101c of a CT image 83. The examiner notes that the composite image is created by overlaying the preoperative information at a position corresponding to the anatomical region visualized in the endoscopic image, by transforming the preoperative image data using the computed transformation between the preoperative and ultrasound image spaces (second positional relationship) and subsequently transforming into the endoscopic coordinate frame using the ultrasound/endoscope positional relationship (first positional relationship)); and controlling displaying of the composite image (para. 0071; the fusion image is displayed on the display).

Regarding claim 20, Popovic teaches a non-transitory computer-readable storage medium storing an operating program for causing a computer to function as a medical support apparatus, the operating program of the medical support apparatus causing the computer to execute processing to perform the method of claim 19 (para. 0024; the structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer-readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s), and port(s).).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 5-6 are rejected under 35 U.S.C.
103 as being unpatentable over Popovic et al. (US 2019/0290247) in view of Shekhar et al. (US 2014/0303491).

Regarding claim 5, Popovic teaches the medical support apparatus according to claim 3; however, Popovic fails to explicitly teach wherein the processor is configured to acquire deformation information indicating how the internal structure in the first three-dimensional image is deformed with respect to the internal structure in the second three-dimensional image, the second three-dimensional image having been prepared in advance as a preoperative three-dimensional image, deform the preparation information based on the acquired deformation information, and generate the composite image with the deformed preparation information superimposed onto the surgical field image.

Shekhar, in the same field of endeavor, teaches acquiring deformation information indicating how the internal structure in the first three-dimensional image is deformed with respect to the internal structure in the second three-dimensional image, the second three-dimensional image having been prepared in advance as a preoperative three-dimensional image, deforming the preparation information based on the acquired deformation information, and generating the composite image with the deformed preparation information superimposed onto the surgical field image (paras. 0088 and 0091-0094; method of creating composite stereoscopic images through deformable registration of a surface model derived from pre-operative tomographic images and a surface model reconstructed from the endoscopic images. Organ surfaces (or surface patches) identified in pre-operative tomographic images are deformably registered with corresponding surfaces (or surface patches) obtained from either endoscopic video or intra-operative volumetric images using a deformable surface registration method implemented by the deformable surface registration module. The result of the registration is then propagated to the entire pre-operative tomographic volume using previously computed organ deformation models. The pre-operative tomographic image volume is registered with the intra-operative tomographic image volume using an intensity-based deformable image registration algorithm. Using suitable methods, anatomic structures in the preoperative tomographic image volume are segmented. The same structures are obtained in the intraoperative tomographic image volume either through image segmentation or as part of the imaging process. Once the pre-operative tomographic images are registered with intra-operative tomographic images, a triple modality combination (pre-operative tomographic data, intra-operative tomographic data, and intraoperative endoscopic video) can be performed.).
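Shekhar's deformable registration ultimately yields a deformation that can be applied to preoperative annotations before overlay. The sketch below illustrates only that final step under stated assumptions (a toy dense displacement field, nearest-voxel sampling, hypothetical excision-line points); it is a conceptual illustration, not Shekhar's actual algorithm.

    import numpy as np

    # Hypothetical deformation information: a dense displacement field (in mm)
    # defined on the preoperative (second) image grid, e.g. the output of a
    # deformable registration between intraoperative and preoperative volumes.
    shape = (64, 64, 64)
    disp = np.zeros(shape + (3,))
    disp[..., 2] = 1.5  # toy field: a uniform 1.5 mm shift along z

    spacing = np.array([1.0, 1.0, 1.0])  # mm per voxel, assumed isotropic

    def deform_points(points_mm, disp, spacing):
        # Warp preparation-information points (e.g. excision-line vertices)
        # by the displacement sampled at the nearest voxel of the field.
        idx = np.clip(np.rint(points_mm / spacing).astype(int),
                      0, np.array(disp.shape[:3]) - 1)
        return points_mm + disp[idx[:, 0], idx[:, 1], idx[:, 2]]

    excision_line = np.array([[10.0, 20.0, 30.0],
                              [12.0, 21.0, 30.5]])
    print(deform_points(excision_line, disp, spacing))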
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Shekhar to acquire deformation information indicating how the internal structure in the first three-dimensional image is deformed with respect to the internal structure in the second three-dimensional image, the second three-dimensional image having been prepared in advance as a preoperative three-dimensional image, to deform the preparation information based on the acquired deformation information, and to generate the composite image with the deformed preparation information superimposed onto the surgical field image. This modification allows creating an accurate composite image that accounts for the different spatial states and poses of anatomy that can move or deform due to factors such as respiration and surgical maneuvers. A large non-rigid misalignment is generally expected between the anatomy that appears in pre-operative tomographic images and the actual intra-operative anatomy, and this misalignment is expected to vary continuously during the surgery, as disclosed in Shekhar at paras. 0075 and 0089. Thus, determining and accounting for deformation information allows accurate composite images to be generated to support the surgeon during surgery.

Regarding claim 6, Popovic teaches the medical support apparatus according to claim 5, wherein the target area is a lung or a liver (para. 0068; an endoscopic image 33a showing an endoscopic view of an anatomical organ in the form of a liver).

Claims 8-16 are rejected under 35 U.S.C. 103 as being unpatentable over Popovic et al. (US 2019/0290247) in view of Popovic et al. (US 2017/0007350), hereinafter Popovic II.

Regarding claim 8, Popovic teaches the medical support apparatus according to claim 7; however, Popovic fails to explicitly teach wherein, when superimposing the plurality of pieces of preparation information at different depths onto the surgical field image, the processor is configured to change a display appearance of the plurality of pieces of preparation information in the composite image, depending on the depth.

Popovic II, in the same field of endeavor, teaches wherein, when superimposing the plurality of pieces of preparation information at different depths onto the surgical field image, the processor is configured to change a display appearance of the plurality of pieces of preparation information in the composite image, depending on the depth (para. 0044; an image 200 shows an overlay 107 of blood vessels 202 (e.g., arteries) from pre-operative/intra-operative 3D images over an endoscope image 134. The image 200 indicates depth of blood vessels 202 relative to the surface of an organ 204 (the heart in this example). The indication of depth permits improved planning for surgical procedures (e.g., coronary bypass) and increases safety by reducing the risk of damaging the vessel or vessels during removal of the excess tissue over the blood vessels 202 during surgery. Image 200 shows a map of coronary arteries 202.
The coronary arteries 202 may be textured, colored, spatially oriented, sized or labeled with alphanumeric characters to indicate properties of the arteries, such as depth from the surface, locations of lesions or other features, blockages, etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a change in display appearance of the plurality of pieces of preparation information in the composite image, depending on the depth. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 9, Popovic teaches the medical support apparatus according to claim 8; however, Popovic fails to explicitly teach wherein, when changing the display appearance, the processor is configured to raise visibility of the preparation information nearer the surface layer than the deep layers.

Popovic II, in the same field of endeavor, teaches wherein, when changing the display appearance, the processor is configured to raise visibility of the preparation information nearer the surface layer than the deep layers (figure 7, paras. 0044 and 0050; an image 200 shows an overlay 107 of blood vessels 202 (e.g., arteries) from pre-operative/intra-operative 3D images over an endoscope image 134. The image 200 indicates depth of blood vessels 202 relative to the surface of an organ 204 (the heart in this example). The indication of depth permits improved planning for surgical procedures (e.g., coronary bypass) and increases safety by reducing the risk of damaging the vessel or vessels during removal of the excess tissue over the blood vessels 202 during surgery. Image 200 shows a map of coronary arteries 202. The coronary arteries 202 may be textured, colored, spatially oriented, sized or labeled with alphanumeric characters to indicate properties of the arteries, such as depth from the surface, locations of lesions or other features, blockages, etc. A depth 324 is indicated between a surface of the organ 204 and the blood vessel 206 at the location of the instrument 102. The instrument 102 need not be an endoscope or the imaging device collecting the images. The depth may be indicated with or without color or other indicators. The depth 324 may be indicated with a color gradient 312 along with an indicator 325 (e.g., an arrow, etc.) showing a specific depth at or near the tool tip of instrument 102.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a change in the visibility of the preparation information nearer the surface layer than the deep layers. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 10, Popovic teaches the medical support apparatus according to claim 8; however, Popovic fails to explicitly teach wherein the depth direction is the direction along an image-taking optical axis of the camera that takes the surgical field image.
Popovic II, in the same field of endeavor, teaches wherein the depth direction is the direction along an image-taking optical axis of the camera that takes the surgical field image (figure 4, para. 0047; a depth of the vessel 306 is computed and defined. In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a depth direction that is the direction along an image-taking optical axis of the camera that takes the surgical field image. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 11, Popovic teaches the medical support apparatus according to claim 10; however, Popovic fails to explicitly teach wherein a reference position for the depth is a viewpoint position of the camera.

Popovic II, in the same field of endeavor, teaches wherein a reference position for the depth is a viewpoint position of the camera (figure 4, para. 0047; a depth of the vessel 306 is computed and defined. In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a reference position for the depth that is a viewpoint position of the camera. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 12, Popovic teaches the medical support apparatus according to claim 10; however, Popovic fails to explicitly teach wherein a reference position for the depth is a surface position of the target area.

Popovic II, in the same field of endeavor, teaches wherein a reference position for the depth is a surface position of the target area (figure 4, para. 0047; a depth of the vessel 306 is computed and defined. In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308.).
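The two depth definitions Popovic II contrasts (closest distance to the organ surface versus distance along the direction of view) differ only in the reference used. A minimal sketch of the along-the-optical-axis variant follows; the vectors and coordinates are invented placeholders, and the function is ours, not the reference's.

    import numpy as np

    def depth_along_view(point, cam_origin, view_dir, surface_point):
        # Depth measured along the camera's image-taking optical axis:
        # the structure's distance along the axis minus the distance at
        # which the organ surface sits along that same axis.
        view_dir = view_dir / np.linalg.norm(view_dir)
        d_structure = np.dot(point - cam_origin, view_dir)
        d_surface = np.dot(surface_point - cam_origin, view_dir)
        return d_structure - d_surface

    cam = np.zeros(3)                          # viewpoint position (reference)
    axis = np.array([0.0, 0.0, 1.0])           # direction of view
    vessel = np.array([0.002, 0.001, 0.085])   # buried structure (meters)
    surface = np.array([0.002, 0.001, 0.080])  # organ surface along the ray
    print(depth_along_view(vessel, cam, axis, surface))  # ~0.005 m (5 mm)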
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a reference position for the depth that is a surface position of the target area. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 13, Popovic teaches the medical support apparatus according to claim 12; however, Popovic fails to explicitly teach wherein, when rendering the preparation information as seen from the viewpoint position of the camera, the processor virtually sets a projection plane for projecting the preparation information and a plurality of projection lines each connecting a respective one of the plurality of pieces of preparation information to the projection plane, and the reference position for the depth is a surface position of the target area that intersects the projection line set for each piece of the preparation information.

Popovic II, in the same field of endeavor, teaches wherein, when rendering the preparation information as seen from the viewpoint position of the camera, the processor virtually sets a projection plane for projecting the preparation information and a plurality of projection lines each connecting a respective one of the plurality of pieces of preparation information to the projection plane, and the reference position for the depth is a surface position of the target area that intersects the projection line set for each piece of the preparation information (figure 4, paras. 0046-0047; a depth of the vessel 306 is computed and defined. In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308. In one embodiment, since the endoscope 302 provides the only visual feedback of the operating field, the depth d may be measured from the direction of view 308 of the endoscope 302. The examiner notes that if the reference position for the depth is not the surface of an organ, the reference position would be the direction of view.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a reference position for the depth that is a viewpoint position of the camera. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 14, Popovic teaches the medical support apparatus according to claim 12; however, Popovic fails to explicitly teach wherein the reference position for the depth is a surface position of the target area that is closest from the viewpoint position side of the camera.

Popovic II, in the same field of endeavor, teaches wherein the reference position for the depth is a surface position of the target area that is closest from the viewpoint position side of the camera (para. 0047; a depth of the vessel 306 is computed and defined.
In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308. In one embodiment, since the endoscope 302 provides the only visual feedback of the operating field, the depth d may be measured from the direction of view 308 of the endoscope 302.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a reference position for the depth that is a surface position of the target area. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 15, Popovic teaches the medical support apparatus according to claim 8; however, Popovic fails to explicitly teach wherein the depth direction is the direction proceeding from a surface position to the deep layers, with the surface position of the target area in a second three-dimensional image as a reference position.

Popovic II, in the same field of endeavor, teaches wherein the depth direction is the direction proceeding from a surface position to the deep layers, with the surface position of the target area in a second three-dimensional image as a reference position (para. 0047; a depth of the vessel 306 is computed and defined. In general, the depth of the vessel is not uniquely defined, and depends on a reference object from which the depth “d” is being computed (e.g., the direction of view 308 of the endoscope, the instrument direction, the surface of the heart 304, etc.). For example, the depth d can be defined as the closest distance to the organ surface, or the distance along the direction of view 308. In one embodiment, since the endoscope 302 provides the only visual feedback of the operating field, the depth d may be measured from the direction of view 308 of the endoscope 302.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide a depth direction that is the direction proceeding from a surface position to the deep layers, with the surface position of the target area in a second three-dimensional image as a reference position. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Regarding claim 16, Popovic teaches the medical support apparatus according to claim 7; however, Popovic fails to explicitly teach wherein the processor can cause the plurality of pieces of preparation information at different depths to be displayed selectively in the composite image.

Popovic II, in the same field of endeavor, teaches wherein the processor can cause the plurality of pieces of preparation information at different depths to be displayed selectively in the composite image (paras.
0055 and 0057; an image 502 shows an endoscope image 134 having an overlay 107 registered thereon based upon operative images 135 and/or model(s) 136. A surgeon selects a coronary artery 504 or a portion thereof in the endoscope view. The robot 108 (FIG. 1) moves the endoscope 102 along the selected artery 504 using the known spatial relationships between the robot 108 and an image (heart) coordinate system (e.g., the coordinate system of the 3D scanning device used to generate images 135). A 3D visualization feature 125 (FIG. 1) generates a cross section 506 (or three-dimensional fly-through images) of the vessel at a selected position or positions that can be provided for display to a user/surgeon. For example, the cross section 506 may show a virtual image 508 showing the presence of any calcifications or atherosclerotic narrowings. The interior camera view or cross-section 506 is provided from the preoperative images 135 and/or model 136, as the motion of the endoscope 102 is known from robot encoders combined with the registration information.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Popovic to incorporate the teachings of Popovic II to provide preparation information at different depths to be displayed selectively in the composite image. This modification permits improved planning of surgical procedures and increases safety by reducing the risk of damaging the vessel through removal of the excess tissue, as disclosed in Popovic II at para. 0024.

Response to Arguments

Applicant's arguments filed 10/10/2025 have been fully considered but they are not persuasive. The applicant argues that Popovic fails to disclose a step of deriving the second positional relationship information by performing image analysis on the first three-dimensional image and the second three-dimensional image, where the image analysis associates morphologies of an internal structure common to both images and computes a three-dimensional transformation between the coordinate systems of the first and second three-dimensional images, and a step of utilizing the second positional relationship information. The examiner respectfully disagrees. Popovic discloses performing image-based registration between ultrasound and preoperative images using anatomical landmarks such as vessel bifurcations, organ boundaries, and atlas-derived structures, which constitute morphologies of internal structures visible in both images. Such image-based registration necessarily associates anatomical morphology across images and computes a three-dimensional transformation between coordinate systems (ultrasound and preoperative), and Popovic further teaches composing that transformation with the ultrasound/endoscope pose to render fused preoperative information in the endoscopic view [see FIGS. 5-6 and 10-11 and their descriptions]. Under the broadest reasonable interpretation of the claim language, the combination of anatomical landmark correspondence, image-based registration, and computed coordinate transformation disclosed in Popovic meets the recited limitations. Therefore, the rejection is maintained.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAINAB M ALDARRAJI, whose telephone number is (571) 272-8726. The examiner can normally be reached Monday-Thursday, 7 AM-5 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael Carey, can be reached at (571) 270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAINAB MOHAMMED ALDARRAJI/
Patent Examiner, Art Unit 3797

/MICHAEL J CAREY/
Supervisory Patent Examiner, Art Unit 3795
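As a worked example of the reply-deadline arithmetic recited above, applied to the Jan 13, 2026 mailing date shown in the timeline below; this is a naive sketch that ignores the rule rolling a deadline that falls on a weekend or federal holiday forward to the next business day.

    from datetime import date

    def add_months(d, months):
        # Naive month arithmetic for illustration only.
        m = d.month - 1 + months
        y, m = d.year + m // 12, m % 12 + 1
        days_in_month = [31, 29 if y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
                         else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return date(y, m, min(d.day, days_in_month[m - 1]))

    mailed = date(2026, 1, 13)    # final rejection mailing date
    print(add_months(mailed, 3))  # shortened statutory period ends: 2026-04-13
    print(add_months(mailed, 6))  # absolute statutory cutoff: 2026-07-13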

Prosecution Timeline

Aug 20, 2024
Application Filed
Jul 11, 2025
Non-Final Rejection — §102, §103
Oct 10, 2025
Response Filed
Jan 13, 2026
Final Rejection — §102, §103
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599331
Hyperspectral Image-Guided Ocular Imager for Alzheimer's Disease Pathologies
2y 5m to grant — Granted Apr 14, 2026
Patent 12594038
ESTIMATION OF CONTACT FORCE OF CATHETER EXPANDABLE ASSEMBLY
2y 5m to grant — Granted Apr 07, 2026
Patent 12588887
MEDICAL DEVICE POSITION SENSING COMPONENTS
2y 5m to grant — Granted Mar 31, 2026
Patent 12582479
METHOD AND SYSTEM FOR AUTOMATIC PLANNING OF A MINIMALLY INVASIVE THERMAL ABLATION AND METHOD FOR TRAINING A NEURAL NETWORK
2y 5m to grant — Granted Mar 24, 2026
Patent 12569189
DEVICE, METHOD AND COMPUTER PROGRAM FOR DETERMINING SLEEP EVENT USING RADAR
2y 5m to grant — Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 83% (+16.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 121 resolved cases by this examiner. Grant probability derived from career allow rate.
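The projection figures above appear to follow from the examiner statistics reported earlier on this page; a minimal sketch of that arithmetic, assuming the dashboard simply adds the interview lift to the career allow rate (the variable names are ours, not the product's):

    # Reproduce the projection figures from the examiner statistics shown above.
    granted, resolved = 81, 121
    base_rate = granted / resolved      # ~0.669 -> "Grant Probability: 67%"
    with_interview = base_rate + 0.161  # ~0.830 -> "With Interview: 83%"
    print(f"{base_rate:.0%} base, {with_interview:.0%} with interview")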
