Prosecution Insights
Last updated: April 19, 2026
Application No. 18/396,888

ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM

Non-Final OA — §103, §DP
Filed
Dec 27, 2023
Examiner
PROVIDENCE, VINCENT ALEXANDER
Art Unit
2617
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
1 (Non-Final)
Grant Probability: 83% — Favorable
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (15 granted / 18 resolved) — above average, +21.3% vs TC avg
Interview Lift: strong, +25.0% on resolved cases with interview
Avg Prosecution: 2y 5m typical timeline (38 applications currently pending)
Total Applications: 56 across all art units (career history)

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 82.4% (+42.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 18 resolved cases
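
The per-statute shares and deltas above are internally consistent: backing the Tech Center baseline out of each displayed delta recovers the same 40.0% for every statute, which supports the "average estimate" caveat. A quick illustrative check in Python, using only the numbers shown above:

```python
# Arithmetic check on the statute chart: examiner share minus displayed delta
# should recover the Tech Center baseline. Every statute yields 40.0%,
# suggesting an estimated (uniform) baseline rather than per-statute data.
examiner = {"101": 0.9, "103": 82.4, "102": 14.8, "112": 0.9}
delta = {"101": -39.1, "103": 42.4, "102": -25.2, "112": -39.1}
for statute, share in examiner.items():
    print(f"§{statute}: TC avg = {share - delta[statute]:.1f}%")
```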

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of copending Application No. 18/396,990 (US 20240127531 A1).
Although the claims at issue are not identical, they are not patentably distinct from each other. The general claim correspondence between the inventions is:

Instant claim → 18/396,990 claim(s)
1 → 5, 1, and 2
2 → 1
3 → 6
4 → 1
5 → 1
6 → 7
7 → 8
8 → 9
9 → 10
10 → 11
11 → 5, 12, and 2
12 → 5, 13, and 2

Note that claims 11 and 12 of the present application are a method and a computer-readable-medium variant of claim 1, respectively. Below is a claim analysis chart comparison of claim 1 (which, as explained above, applies also to claims 11 and 12):

Instant claim 1: An endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to:
18/396,990, claim 1: An endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to:

Instant claim 1: estimate a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera;
18/396,990, claim 5: estimate a depth from endoscopic images obtained by imaging an interior of the luminal organ with the endoscope camera;

Instant claim 1: estimate a relative posture change of the endoscope camera from two endoscopic images successive in time;
18/396,990, claim 5: estimate a relative posture change of the endoscope camera from two endoscopic images successive in time

Instant claim 1: generate a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera
18/396,990, claim 5: generate a three-dimensional model of a luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Instant claim 1: detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model;
18/396,990, claim 1: detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model;

Instant claim 1: generate a display image including an information indicative of a direction of an unobserved area existing outside the endoscopic image,
18/396,990, claim 1: generate a display image including an information indicative of a direction of an unobserved area existing outside the endoscopic image

Instant claim 1: which can be changed ON-OFF depending on a position and/or direction of the endoscope camera.
18/396,990, claim 2: wherein an information indicative of a direction of an unobserved area existing outside the endoscopic image can be changed ON-OFF depending on the position and/or direction of the endoscope camera.

The steps as laid out by claim 1 in the instant application generally follow those of claim 1 in the copending application, except that some portions of the method described in dependent claims 5 and 2 of the copending application instead appear in independent claim 1 of the present application. One of ordinary skill in the art would reasonably conclude that the present application is an obvious variation of copending application 18/396,990, because claims 5 and 2 in the copending application depend on claim 1 and therefore necessarily include the limitations of claim 1 as part of their scope, and because claims 1, 2, and 5 in the copending application use the same language and phraseology as claim 1 in the present application.
Claim Objections

Claim 3 is objected to because of the following informalities: Claim 3 recites “estimate a depth” and “estimate a relative posture change”. However, the depth and relative posture change were already defined in independent claim 1, so it is unclear whether claim 3 refers to a new depth/relative posture change or to those previously defined. Appropriate correction is required.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on August 1, 2022. It is noted, however, that applicant has not filed a certified copy of the PCT/JP2022/029426 application as required by 37 CFR 1.55.

No Prior Art Rejection for Some Claims

It is noted that claims 5 and 7-8 do not have any prior art rejection. They are not indicated allowable, however, because these claims are rejected under nonstatutory obviousness-type double patenting (as indicated above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ikuma (US 20170112578 A1) in view of Ummalaneni (US 20180368920 A1) and Xing (US 20210145523 A1).

Regarding claim 1: Ikuma teaches: An endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to (Ikuma: a program for causing a computer to realize the operation method of the navigation system, a non-transitory computer-readable recording medium recording the program are also possible [0114]):

detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model (Ikuma: with a three-dimensional image of a predetermined luminal organ in the subject, […] determine, with respect to a plurality of branch conduits in the predetermined luminal organ, whether each of the plurality of branch conduits is already observed or is still unobserved by the endoscope [0012]); and

generate a display image including an information indicative of a direction of an unobserved area existing outside the endoscopic image, which can be changed ON-OFF depending on a position and/or direction of the endoscope camera (see Note 1A).

Note 1A: Ikuma teaches: “as shown in FIG. 7, the first minor calyx 55a, which is the closest unobserved branch conduit on the distal end side of the current viewpoint position 60, is set as the target branch conduit, and the guide arrow 62 is displayed in the virtual endoscopic image 42 as an example of navigation to the first minor calyx 55a” [0100]. That is, Ikuma teaches that a virtual endoscopic image may be generated that includes an informational guide arrow 62 indicative of the direction to an unobserved branch conduit, namely the first minor calyx 55a.
Ikuma then teaches: “When observation of the first minor calyx 55a ends, the first minor calyx 55a becomes observed, and thus, the second minor calyx 55b, which is the closest unobserved branch conduit on the distal end side of the current viewpoint position 60, is set as the next target branch conduit, and the guide arrow 62 as shown in FIG. 8 is displayed as navigation to the second minor calyx 55b as the target branch conduit.” [0101]. That is, Ikuma shows that once the first minor calyx 55a is shown in the image, the guide arrow changes to point to the unobserved second minor calyx 55b. The changing of the guide arrow may be considered analogous to switching ON-OFF between the two calyces.

Ikuma fails to teach: estimate a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera; estimate a relative posture change of the endoscope camera from two endoscopic images successive in time; generate a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Ummalaneni teaches: estimate a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera (Ummalaneni: the distal end of an endoscope can be provided with an imaging device, and the disclosed navigation techniques can generate a depth map based on image data received from the imaging devices [0008]; see Note 1B); estimate a relative posture change of the endoscope camera from two endoscopic images successive in time (Ummalaneni: Optical flow, another computer vision-based technique, may analyze the displacement and translation of image pixels in a video sequence in the vision data 92 to infer camera movement. [0108]); and generate a three-dimensional model of the luminal organ in which an endoscope camera is placed (Ummalaneni: The disclosed techniques can generate a 3D model of a virtual luminal network representing the patient's anatomical luminal network [0008]).

Note 1B: Ummalaneni teaches that “The light emitted from the illumination sources 310 allows the imaging device 315 to capture images of the interior of a patient's luminal network.” [0143]. That is, the endoscope retrieves data by imaging the interior surfaces of the patient's luminal network.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Ummalaneni with Ikuma. Estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera, estimating a relative posture change of the endoscope camera from two endoscopic images successive in time, and generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, as in Ummalaneni, would benefit the Ikuma teachings by enabling better tracking of the endoscopic camera.
Ikuma in view of Ummalaneni still fails to explicitly teach: generate a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Xing teaches: generate a three-dimensional model of the organ in which an endoscope camera is placed (Xing: The computing device 104 receives image data and depth data from the robotic surgical device and generates a model of the patient anatomy viewed by an endoscope of the robotic surgical device [0028]), by performing a three-dimensional restoration process (Xing: Stereo image reconstruction [0033]) on a basis of the depth (Xing: The depth sensor 244 senses or detects a depth or distance from the camera 230 to portions of the patient's anatomy. … Stereo images from the camera 230 and the second camera may be used to generate a depth map or three-dimensional image [0033]; see Note 1C) and the relative posture change of the endoscope camera (Xing: Stereo image reconstruction also relies on a computing device 204 comparing the field of view captured by each camera of the stereo camera and observing relative positions of objects in each field of view. [0033]).

Note 1C: It would be obvious to one of ordinary skill in the art to use the depth sensor 244 to generate, or at least assist in generating, a depth map.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Xing with Ikuma in view of Ummalaneni. Generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera, as in Xing, would benefit the Ikuma in view of Ummalaneni teachings by ensuring more accurate measurements of the organ, leading to a more accurate 3D model.

Regarding claim 2: Ikuma in view of Ummalaneni and Xing teaches: The endoscopic examination support apparatus according to claim 1 (as shown above), wherein the information indicative of the direction of the unobserved area existing outside the endoscopic image is displayed (see Note 1A) at at least one of positions adjacent to the upper and lower ends and the left and right ends of the endoscopic image (see Note 2A), depending on the direction in which the unobserved area is located with respect to the current position of the endoscope camera (Ikuma: In the example shown in FIG. 7, the guide arrow 62 is an arrow which starts from a center position, of the virtual endoscopic image 42, corresponding to the current viewpoint position 60, [0100]).

Note 2A: Ikuma shows in Figure 7 that the guide arrow 62 is displayed in the lower right area of the display. Therefore, it is adjacent to at least one of the upper and lower ends and the left and right ends of the endoscopic image.
Regarding claim 11: Ikuma teaches: An endoscopic examination support method comprising:

detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model (Ikuma: with a three-dimensional image of a predetermined luminal organ in the subject, […] determine, with respect to a plurality of branch conduits in the predetermined luminal organ, whether each of the plurality of branch conduits is already observed or is still unobserved by the endoscope [0012]); and

generating a display image including an information indicative of a direction of an unobserved area existing outside the endoscopic image, which can be changed ON-OFF depending on a position and/or direction of the endoscope camera (see Note 1A).

Ikuma fails to teach: estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera; estimating a relative posture change of the endoscope camera from two endoscopic images successive in time; generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Ummalaneni teaches: estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera (Ummalaneni: the distal end of an endoscope can be provided with an imaging device, and the disclosed navigation techniques can generate a depth map based on image data received from the imaging devices [0008]; see Note 1B); estimating a relative posture change of the endoscope camera from two endoscopic images successive in time (Ummalaneni: Optical flow, another computer vision-based technique, may analyze the displacement and translation of image pixels in a video sequence in the vision data 92 to infer camera movement. [0108]); and generating a three-dimensional model of the luminal organ in which an endoscope camera is placed (Ummalaneni: The disclosed techniques can generate a 3D model of a virtual luminal network representing the patient's anatomical luminal network [0008]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Ummalaneni with Ikuma. Estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera, estimating a relative posture change of the endoscope camera from two endoscopic images successive in time, and generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, as in Ummalaneni, would benefit the Ikuma teachings by enabling better tracking of the endoscopic camera.
Ikuma in view of Ummalaneni still fails to explicitly teach: generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Xing teaches: generating a three-dimensional model of the organ in which an endoscope camera is placed (Xing: The computing device 104 receives image data and depth data from the robotic surgical device and generates a model of the patient anatomy viewed by an endoscope of the robotic surgical device [0028]), by performing a three-dimensional restoration process (Xing: Stereo image reconstruction [0033]) on a basis of the depth (Xing: The depth sensor 244 senses or detects a depth or distance from the camera 230 to portions of the patient's anatomy. … Stereo images from the camera 230 and the second camera may be used to generate a depth map or three-dimensional image [0033]; see Note 1C) and the relative posture change of the endoscope camera (Xing: Stereo image reconstruction also relies on a computing device 204 comparing the field of view captured by each camera of the stereo camera and observing relative positions of objects in each field of view. [0033]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Xing with Ikuma in view of Ummalaneni. Generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera, as in Xing, would benefit the Ikuma in view of Ummalaneni teachings by ensuring more accurate measurements of the organ, leading to a more accurate 3D model.

Regarding claim 12: Ikuma teaches: A non-transitory computer-readable recording medium storing a program, the program causing a computer to execute (Ikuma: a program for causing a computer to realize the operation method of the navigation system, a non-transitory computer-readable recording medium recording the program are also possible [0114]):

detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model (Ikuma: with a three-dimensional image of a predetermined luminal organ in the subject, […] determine, with respect to a plurality of branch conduits in the predetermined luminal organ, whether each of the plurality of branch conduits is already observed or is still unobserved by the endoscope [0012]); and

generating a display image including an information indicative of a direction of an unobserved area existing outside the endoscopic image, which can be changed ON-OFF depending on a position and/or direction of the endoscope camera (see Note 1A).
Ikuma fails to teach: estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera; estimating a relative posture change of the endoscope camera from two endoscopic images successive in time; generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Ummalaneni teaches: estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera (Ummalaneni: the distal end of an endoscope can be provided with an imaging device, and the disclosed navigation techniques can generate a depth map based on image data received from the imaging devices [0008]; see Note 1B); estimating a relative posture change of the endoscope camera from two endoscopic images successive in time (Ummalaneni: Optical flow, another computer vision-based technique, may analyze the displacement and translation of image pixels in a video sequence in the vision data 92 to infer camera movement. [0108]); and generating a three-dimensional model of the luminal organ in which an endoscope camera is placed (Ummalaneni: The disclosed techniques can generate a 3D model of a virtual luminal network representing the patient's anatomical luminal network [0008]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Ummalaneni with Ikuma. Estimating a depth from endoscopic images obtained by imaging an interior of a luminal organ with an endoscope camera, estimating a relative posture change of the endoscope camera from two endoscopic images successive in time, and generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, as in Ummalaneni, would benefit the Ikuma teachings by enabling better tracking of the endoscopic camera.

Ikuma in view of Ummalaneni still fails to explicitly teach: generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera.

Xing teaches: generating a three-dimensional model of the organ in which an endoscope camera is placed (Xing: The computing device 104 receives image data and depth data from the robotic surgical device and generates a model of the patient anatomy viewed by an endoscope of the robotic surgical device [0028]), by performing a three-dimensional restoration process (Xing: Stereo image reconstruction [0033]) on a basis of the depth (Xing: The depth sensor 244 senses or detects a depth or distance from the camera 230 to portions of the patient's anatomy. … Stereo images from the camera 230 and the second camera may be used to generate a depth map or three-dimensional image [0033]; see Note 1C) and the relative posture change of the endoscope camera (Xing: Stereo image reconstruction also relies on a computing device 204 comparing the field of view captured by each camera of the stereo camera and observing relative positions of objects in each field of view. [0033]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Xing with Ikuma in view of Ummalaneni.
Generating a three-dimensional model of the luminal organ in which an endoscope camera is placed, by performing a three-dimensional restoration process on a basis of the depth and the relative posture change of the endoscope camera, as in Xing, would benefit the Ikuma in view of Ummalaneni teachings by ensuring more accurate measurements of the organ, leading to a more accurate 3D model.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ikuma (US 20170112578 A1) in view of Ummalaneni (US 20180368920 A1), Xing (US 20210145523 A1), and Levine (US 20240156325 A1).

Regarding claim 3: Ikuma in view of Ummalaneni and Xing teaches: The endoscopic examination support apparatus according to claim 1 (as shown above), wherein the processor is further configured to execute the instructions to:

Ikuma in view of Ummalaneni and Xing fails to teach: estimate a depth using an image recognition model which is a machine learning model learned in advance; and estimate a relative posture change of the endoscope camera using the image recognition model which is a machine learning model learned in advance.

Levine teaches: estimate a depth using an image recognition model which is a machine learning model learned in advance (Levine: The present disclosure provides a method for generating a depth map or a 3D point cloud using recent analytical and deep learning-based algorithms operating on a stereoscopic endoscope video stream [0006]); and estimate a relative posture change of the endoscope camera using the image recognition model which is a machine learning model learned in advance (Levine: the video processing device 56 may compute the relative motion between successive frames based on kinematics [0074]; see Note 3A).

Note 3A: Levine teaches that the video processing device processes video via a trained neural network: “In various embodiments, training of the neural network may happen on a separate system, e.g., graphic processor unit (“GPU”) workstations, high performing computer clusters, etc., and the trained algorithm would then be deployed on the video processing device.” [0007].

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Levine with Ikuma in view of Ummalaneni and Xing. Estimating a depth using an image recognition model which is a machine learning model learned in advance, and estimating a relative posture change of the endoscope camera using the image recognition model which is a machine learning model learned in advance, as in Levine, would benefit the Ikuma in view of Ummalaneni and Xing teachings by ensuring more accurate measurements of the organ, leading to a more accurate 3D model (Levine: The present disclosure combines several depth-mapping techniques together, to produce a depth map that is more reliable than one produced from any single algorithm alone. [0009]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ikuma (US 20170112578 A1) in view of Ummalaneni (US 20180368920 A1), Xing (US 20210145523 A1), and Kaufman (US 20110187707 A1; hereinafter Kaufman A).
Regarding claim 4: Ikuma in view of Ummalaneni and Xing teaches: The endoscopic examination support apparatus according to claim 1 (as shown above), wherein the processor is further configured to execute the instructions to:

Ikuma in view of Ummalaneni and Xing fails to teach: wherein the processor is further configured to execute the instructions to generate the display image including an unobserved area mask displayed in a display manner so as to cover the unobserved areas in the endoscopic image.

Kaufman A teaches: wherein the processor is further configured to execute the instructions to generate the display image including an unobserved area mask displayed in a display manner so as to cover the unobserved areas in the endoscopic image (Kaufman A: when a user is in a region that includes unobserved areas, the secondary window 725 can display these regions, preferably in real time, and alert the user that the endoscope may require flexing or repositioning in order to observe part of the lumen. The alert can be visual, such as highlighting an unviewed portion on the display, [0046]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kaufman A with Ikuma in view of Ummalaneni and Xing. Generating the display image including an unobserved area mask displayed in a display manner so as to cover the unobserved areas in the endoscopic image, as in Kaufman A, would benefit the Ikuma in view of Ummalaneni and Xing teachings by ensuring that incomplete data is not shown to the user.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ikuma (US 20170112578 A1) in view of Ummalaneni (US 20180368920 A1), Xing (US 20210145523 A1), and Kaufman (US 20070003131 A1; hereinafter Kaufman B).

Regarding claim 6: Ikuma in view of Ummalaneni and Xing teaches: The endoscopic examination support apparatus according to claim 1 (as shown above), wherein the processor is further configured to execute the instructions to:

Ikuma in view of Ummalaneni and Xing fails to teach: detect, as the unobserved area, at least one of an observation difficult area for which observation by the endoscope camera in the luminal organ is estimated to be difficult, and a missing area of the three-dimensional model.

Kaufman B teaches: detect, as the unobserved area, at least one of an observation difficult area for which observation by the endoscope camera in the luminal organ is estimated to be difficult (Kaufman B: In many virtual display environments, the object being examined, such as a virtual colon, has properties, such as folds and curves, which make it difficult to observe certain areas of the object during normal navigation and examination. Thus, in displaying regions marked as unviewed, it is desirable to "unravel" a lumen shaped object, such as the colon, by mapping the lumen to a 2D planar representation [0042]), and a missing area of the three-dimensional model (Kaufman B: By selecting a particular missed patch, the system can then automatically bring the user to a display of the original 3D volume where the virtual camera is directed to the missed patch, [0042]; see Note 6A).

Note 6A: In [0042], Kaufman B teaches that a difficult-to-observe area may be mapped to a 2D representation instead of providing a 3D representation. Kaufman B further teaches: “it is preferred that the 2D display provides the user with an effective way to visualize and select a missing patch 305 using a graphical user interface 325.” [0042].
That is, a missing area of the three-dimensional model may be represented with a two-dimensional representation instead. Representing a missing area of the three-dimensional model with a two-dimensional representation inherently requires detecting a missing area of the three-dimensional model.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kaufman B with Ikuma in view of Ummalaneni and Xing. Detecting, as the unobserved area, at least one of an observation difficult area for which observation by the endoscope camera in the luminal organ is estimated to be difficult, and a missing area of the three-dimensional model, as in Kaufman B, would benefit the Ikuma in view of Ummalaneni and Xing teachings by better identifying difficult and missing areas and better visualizing said areas later.

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Ikuma (US 20170112578 A1) in view of Ummalaneni (US 20180368920 A1), Xing (US 20210145523 A1), and Hameed (US 20190080454 A1).

Regarding claim 9: Ikuma in view of Ummalaneni and Xing teaches: The endoscopic examination support apparatus according to claim 1 (as shown above), wherein the processor is further configured to execute the instructions to:

Ikuma in view of Ummalaneni and Xing fails to teach: detect a lesion candidate area which is an area estimated to be a lesion candidate by a learned machine learning model based on the endoscopic image; and generate the display image including information indicating a direction of the lesion candidate area outside.

Hameed teaches: detect a lesion candidate area which is an area estimated to be a lesion candidate by a learned machine learning model based on the endoscopic image (Hameed: polyp detection methods may comprise applying one or more convolutional neural networks (CNNs) to an acquired image to determine whether the image contains a polyp [0032]; see Note 9A); and generate the display image including information indicating a direction of the lesion candidate area outside (Hameed: the image 504 containing the polyp 504 may appear on the display 500, and an arrow 506 may appear in the image 502 to help the practitioner navigate towards the detected polyp, as depicted in FIG. 5B [0042]).

Note 9A: Hameed teaches that a machine learning model (a CNN) may check images to estimate whether an image contains a polyp. A polyp is understood to be analogous to a lesion, as Hameed uses polyp and lesion interchangeably. For example, the Abstract of Hameed recites: “Disclosed herein are methods for identifying polyps or lesions in a colon. […] computer-implemented methods for polyp detection may be used in conjunction with an endoscope system to […] identify any polyps and/or lesions in a visual scene captured by the endoscopic system, and provide an indication to the practitioner that a polyp and/or lesion has been detected.”

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Hameed with Ikuma in view of Ummalaneni and Xing.
Detecting a lesion candidate area which is an area estimated to be a lesion candidate by a learned machine learning model based on the endoscopic image, and generating the display image including information indicating a direction of the lesion candidate area outside, as in Hameed, would benefit the Ikuma in view of Ummalaneni and Xing teachings by ensuring medical professionals are quickly able to locate lesions for removal (Hameed: it is in the interest of both the practitioner and the patient for the colonoscopy to proceed in an expedient manner. Accordingly, improvements to the accuracy of identifying polyps and/or lesions (e.g., reducing the rate of false positive or false negative results) and efficiency of colonoscopies are desirable. [0003]).

Regarding claim 10: Ikuma in view of Ummalaneni, Xing, and Hameed teaches: The endoscopic examination support apparatus according to claim 9 (as shown above), wherein the processor is further configured to execute the instructions to: generate the display image including information indicating a latest detection result of the lesion candidate area (Hameed: If a polyp is detected in one of the side-mounted imaging devices, the image 504 containing the polyp 504 may appear on the display 500 and an arrow 506 may appear in the image 502 to help the practitioner navigate towards the detected polyp, as depicted in FIG. 5B [0042]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE, whose telephone number is (571) 270-5765. The examiner can normally be reached Monday-Thursday, 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617
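
For technical context, independent claim 1 recites a pipeline of monocular depth estimation, frame-to-frame posture (pose) estimation, three-dimensional restoration, unobserved-area detection, and a direction indicator that toggles with the camera's position and direction. The sketch below is a minimal illustration of that pipeline, not NEC's claimed implementation nor any cited reference's: the depth and pose estimators are hypothetical stand-ins (a real system would use a learned model and optical flow or kinematics, per Ummalaneni and Levine), and the frontier-voxel test is only a crude proxy for unobserved-area detection.

```python
# Illustrative sketch of the claimed pipeline (hypothetical stand-ins, not an
# actual implementation): depth + relative pose -> 3D restoration ->
# unobserved-area direction indicator toggled by camera position/direction.
import numpy as np

K = np.array([[160.0, 0.0, 80.0],   # toy pinhole intrinsics for 160x120 frames
              [0.0, 160.0, 60.0],
              [0.0, 0.0, 1.0]])

def estimate_depth(frame):
    """Stand-in for a learned monocular depth model; returns fake depths."""
    h, w = frame.shape[:2]
    return np.full((h, w), 50.0) + frame[..., 0] * 0.1

def estimate_pose_change(prev_frame, curr_frame):
    """Stand-in for optical-flow pose estimation between successive frames;
    returns (R, t) of the current camera expressed in the previous frame."""
    a = 0.01  # pretend a small yaw was recovered
    R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
    return R, np.array([0.0, 0.0, 2.0])  # and 2 units of forward motion

def backproject(depth, K):
    """Pinhole back-projection of a depth map to camera-space 3-D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    return (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)

# "Three-dimensional restoration": chain poses and accumulate a point cloud.
frames = [np.random.rand(120, 160, 3) for _ in range(3)]  # synthetic video
R_w, t_w = np.eye(3), np.zeros(3)                          # world camera pose
cloud = [backproject(estimate_depth(frames[0]), K) @ R_w.T + t_w]
prev = frames[0]
for curr in frames[1:]:
    R, t = estimate_pose_change(prev, curr)
    R_w, t_w = R_w @ R, R_w @ t + t_w       # compose the relative pose change
    cloud.append(backproject(estimate_depth(curr), K) @ R_w.T + t_w)
    prev = curr
cloud = np.concatenate(cloud)

# Unobserved-area detection: a crude voxel-frontier proxy (voxels adjacent to
# the reconstructed surface that were never observed).
voxel = 5.0
observed = {tuple(v) for v in np.floor(cloud / voxel).astype(int)}
frontier = {(x + dx, y + dy, z + dz)
            for (x, y, z) in observed
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            if (x + dx, y + dy, z + dz) not in observed}

# Direction indicator, switched ON only while the nearest unobserved region
# lies outside the current (toy) field of view -- the claimed ON-OFF behavior.
cam_pos, cam_fwd = t_w, R_w[:, 2]
target = min(frontier, key=lambda v: np.linalg.norm(np.array(v) * voxel - cam_pos))
to_target = np.array(target) * voxel - cam_pos
to_target /= np.linalg.norm(to_target) + 1e-9
arrow_on = np.dot(to_target, cam_fwd) < np.cos(np.radians(35))  # outside FOV?
print("arrow", "ON ->" if arrow_on else "OFF", np.round(to_target, 2))
```

The final check mirrors the limitation the rejection maps to Ikuma's guide arrow 62: the indicator is emitted only while the nearest frontier direction falls outside the assumed viewing cone, so moving or turning the camera toggles it ON and OFF.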

Prosecution Timeline

Dec 27, 2023
Application Filed
Dec 12, 2025
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586303 — GEOMETRY-AWARE THREE-DIMENSIONAL SYNTHESIS IN ALL ANGLES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12530847 — IMAGE GENERATION FROM TEXT AND 3D OBJECT
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12530808 — Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12524946 — METHOD FOR GENERATING FIREWORK VISUAL EFFECT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12380621 — COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR GENERATING ENHANCED MOTION DATA AND RENDERING OBJECTS
Granted Aug 05, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview (+25.0%): 99%
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
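
The base figure follows directly from the examiner's record (15 granted of 18 resolved ≈ 83%). The page does not state how the interview adjustment is applied; the snippet below is one plausible reading that reproduces the displayed numbers, with the multiplicative lift and the 99% cap assumed rather than documented:

```python
# Reproduces the displayed projections under an ASSUMED formula: base rate
# from career outcomes, a multiplicative +25% interview lift, capped at 99%.
granted, resolved = 15, 18
base = granted / resolved                 # 0.833 -> displayed as 83%
with_interview = min(base * 1.25, 0.99)   # cap is an assumption, not documented
print(f"base {base:.0%}, with interview {with_interview:.0%}")
```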
