DETAILED ACTION
Summary
Claims 1-14 and 18-19 are pending in the application. Claims 1-14 and 18-19 are rejected under 35 U.S.C. 103. Claims 1-4, 8-11, and 18-19 are provisionally rejected on the ground of nonstatutory double patenting.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/2/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4, 8, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rivlin et al. (WO 2020/242949 A1) in view of Ishikake et al. (U.S. PGPub 2021/0287395 A1) and Mori (U.S. PGPub 2021/0056695 A1).
Regarding Claim 1, Rivlin teaches an endoscope system (Abstract) comprising:
one or more processors (Fig. 1, 104+142) [0049]+[0054] configured to:
acquire a first endoscope image [0009]+[0026], the first endoscope image including a detection target [0026]-[0027]+[0074] and a landmark having a positional relationship of the landmark to the detection target [0028]+[0035];
acquire position information of a position of the detection target in the first endoscope image [0026]-[0027]+[0074];
acquire position information of a position of the landmark in the first endoscope image [0028]+[0035];
store [0049] the position information of the detection target in association with the position information of the landmark [0030]+[0035];
display, on a display, the estimated position of the detection target [0030]+[0087]-[0089].
Rivlin fails to explicitly teach acquire a second endoscope image, the second endoscope image including the landmark and a position at which the detection target is obscured, determine an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target, or display the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while another organ in the image (e.g., the liver) is a landmark (Fig. 2A) [0051]+[0073]. The system determines an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target [0073] (the model uses positional relationships between the target and other organs to determine the position of an obscured target). This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Rivlin to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains position information of the landmark (Fig. 16, L) and of the detection target (a tumor) (Fig. 16, T), which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
Regarding Claim 3, the combination of references teaches the invention substantially as claimed. Rivlin further teaches wherein the one or more processors (Fig. 1, 104+142) [0049]+[0054] are configured to: perform notification using either or both of a notification sound and a displayed notification on the display [0028] (bounding boxes/visual indicators are a notification on the display); and perform the notification in at least one case of a case that the position information of the detection target is acquired or a case that the position information of the landmark is acquired [0028].
Regarding Claim 4, the combination of references teaches the invention substantially as claimed. Rivlin further teaches wherein the one or more processors (Fig. 1, 104+142) [0049]+[0054] are configured to: perform notification using either or both of a notification sound and a displayed notification on the display [0028] (bounding boxes/visual indicators are a notification on the display); and perform the notification in at least one case of a case that the position information of the detection target is stored in association with the position information of the landmark, a case that the second endoscope image is acquired, or a case that the estimated position of the detection target is determined [0028] (displaying the landmark and abnormality with a visual indicator on the heads up display is a notification that the position information of the detection target and the position information of the landmark are stored in association with one another [0030]+[0035]+[0049]).
Regarding Claim 8, the combination of references teaches the invention substantially as claimed. Rivlin further teaches wherein the landmark includes a plurality of landmarks [0035] (the intestinal twists) and the one or more processors are configured to determine the estimated position based on less than all of the plurality of landmarks [0035] (the system uses only the last twist to define the polyp position, which is less than the total number of twists).
Regarding Claim 18, Rivlin teaches a method of operating an endoscope system (Abstract) including one or more processors (Fig. 1, 104+142) [0049]+[0054] comprising:
acquiring a first endoscope image [0009]+[0026], the first endoscope image including a detection target [0026]-[0027]+[0074] and a landmark having a positional relationship of the landmark to the detection target [0028]+[0035];
acquiring position information of a position of the detection target in the first endoscope image [0026]-[0027]+[0074];
acquiring position information of a position of a landmark in the first endoscope image [0028]+[0035];
storing [0049] position information of the detection target in association with the position information of the landmark [0030]+[0035];
displaying, on a display, the estimated position of the detection target [0030]+[0087]-[0089].
Rivlin fails to explicitly teach acquiring a second endoscope image, the second endoscope image including the landmark and a position at which the detection target is obscured, determining an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target, or displaying the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while another organ in the image (e.g., the liver) is a landmark (Fig. 2A) [0051]+[0073]. The system determines an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target [0073] (the model uses positional relationships between the target and other organs to determine the position of an obscured target). This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Rivlin to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains position information of the landmark (Fig. 16, L) and of the detection target (a tumor) (Fig. 16, T), which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
Claims 2 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rivlin in view of Ishikake and Mori as applied to claim 1 above, and further in view of Meagher et al. (U.S. PGPub 2020/0297433 A1).
Regarding Claim 2, the combination of references teaches the invention substantially as claimed. The combination is silent regarding wherein the one or more processors are configured to display the position information of the detection target, the estimated position of the detection target, and the position of the landmark on the display to be distinguished from each other.
Meagher teaches a system for real time augmentations of an endoscopic image (Abstract). This system displays the position information (Fig. 2, A) [0035], the estimated position (Fig. 6) [0036], and the position of the landmark (Fig. 2, B or C) [0035] on the display to be distinguished from each other [0026] (different opacity/color are distinguishable from each other).
It would have been obvious to one of ordinary skill before the effective filing date to modify the combined system to display the position information of the detection target, the estimated position, and the position of the landmark to be distinguished from each other, as taught by Meagher, because this better allows the surgeon to navigate, even under compromised visual situations, as recognized by Meagher [0006].
Regarding Claim 19, the combination of references teaches the invention substantially as claimed. The combination is silent regarding a step of displaying the position information of the detection target, the estimated position of the detection target, and the position information of the landmark on the display to be distinguishable from each other.
Meagher teaches a system for real time augmentations of an endoscopic image (Abstract). This system displays the position information (Fig. 2, A) [0035], the estimated position (Fig. 6) [0036], and the position information of the landmark (Fig. 2, B or C) [0035] on the display to be distinguishable from each other [0026] (different opacity/color are considered distinguishable from each other).
It would have been obvious to one of ordinary skill before the effective filing date to modify the combined system to display the position information of the detection target, the estimated position, and the position information of the landmark to be distinguishable from each other, as taught by Meagher, because this better allows the surgeon to navigate, even under compromised visual situations, as recognized by Meagher [0006].
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Rivlin in view of Ishikake and Mori as applied to claim 1 above, and further in view of Taniguchi (U.S. PGPub 2013/0229503 A1).
Regarding Claim 9, the combination of references teaches the invention substantially as claimed. Rivlin further teaches wherein the landmark includes a plurality of landmarks [0028]+[0035] and the one or more processors are configured to limit the plurality of landmarks to be displayed on the display [0028]+[0035] (the system uses only the easily identifiable twists, not every possible feature in the intestine; the number of landmarks is therefore limited).
Rivlin fails to explicitly teach wherein the one or more processors are configured to select the plurality of landmarks to be displayed on the display.
Taniguchi teaches a medical imaging system (Abstract). In this system, the one or more processors are configured to select the landmark to be displayed on the display from among the landmarks [0108] (the system selects a landmark to be displayed automatically).
It would have been obvious to one of ordinary skill in the art to modify the combined system to automatically select the landmarks for display, as taught by Taniguchi, because this shortens the review of the captured images by highlighting images which contain features of interest, as recognized by Taniguchi [0007]. One of ordinary skill would further recognize the combination suggests the automatic display of the landmark, as taught by Taniguchi, would be a landmark selected from among the landmarks of Rivlin.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Rivlin in view of Ishikake and Mori as applied to claim 1 above, and further in view of Kiyuna et al. (U.S. PGPub 2022/0148182 A1).
Regarding Claim 10, the combination of references teaches the invention substantially as claimed. The combination fails to explicitly teach wherein the one or more processors are configured to receive a user operation for designating whether or not the landmark is usable to determine the estimated position of the detection target.
Kiyuna teaches a medical imaging system (Abstract). This system has one or more processors (Fig. 2, 11) [0029] configured to receive a user operation for designating whether or not the landmark is usable to determine the estimated position of the detection target [0050]-[0051] (the user needs to confirm the landmark for determining the relative position of the landmark and the attention part).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to receive a user operation, as taught by Kiyuna, because this helps to ensure the operator does not miss a region of interest, as recognized by Kiyuna [0004]-[0005].
Claims 11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Rivlin in view of Ishikake and Mori as applied to claim 1 above, and further in view of Yoshino (U.S. PGPub 2011/0254937 A1).
Regarding Claim 11, the combination of references teaches the invention substantially as claimed.
The combination fails to explicitly teach wherein the first endoscope image is based on first illumination light and the second endoscope image is based on second illumination light having a spectrum different from a spectrum of the first illumination light.
Yoshino teaches an endoscopic imaging system (Abstract). This system includes a first endoscope image based on first illumination light (Fig. 5, 320) [0076] and a second endoscope image based on second illumination light having a spectrum different from a spectrum of the first illumination light (Fig. 6, 330) [0071]+[0078], and the one or more processors are configured to perform the first detection process and the second detection process from the second endoscope image [0086]+[0118], and display the detection target actual position information, the detection target estimated position information, and the position information of the landmark on the display from the first endoscope image [0114]+[0166].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain a first illumination image with a first light and a second image with a second light, and perform the detection in the second image, as taught by Yoshino, because this reduces the burden on the doctor, as recognized by Yoshino [0166].
Regarding Claim 14, the combination of references teaches the invention substantially as claimed. The combination fails to explicitly teach wherein the landmark is any of a mucous membrane pattern, a shape of an organ, or marking by a user operation.
Yoshino further teaches wherein the landmark is a mucous membrane pattern [0240].
It would have been obvious to one of ordinary skill in the art before the effective filing date to substitute a mucous membrane pattern for the landmark, as taught by Yoshino, as the substitution of one known landmark for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of using a mucous membrane pattern as a landmark are reasonably predictable.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 14 of copending Application No. 18/463,897 in view of Ishikake and Mori.
Regarding Claim 1, the reference application teaches an endoscope system comprising: (Claim 11, line 1)
one or more processors configured to: (Claim 11, line 2)
acquire a first endoscopic image (Claim 11, line 3), the first endoscopic image including a detection target and a landmark having a positional relationship of the landmark to the detection target; (Claim 11, lines 4-9)
acquire position information of a position of the detection target in the first endoscope image; (Claim 11, lines 4-5)
acquire position information of a position of a landmark in the first endoscope image (Claim 11, lines 6-7)
store the position information of the detection target in association with the position information of the landmark (Claim 11, lines 8-11) (setting a relationship is considered “storing” the relationship);
acquire a second endoscope image (Claim 11, lines 11-12) (updating the endoscope image is obtaining a second image),
determine an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target (Claim 11, lines 9-11).
The reference application fails to explicitly teach the second endoscope image including the landmark and a position at which the detection target is obscured or display, on a display, the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image and display, on a display, the estimated position of the detection target.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while another organ in the image (e.g., the liver) is a landmark (Fig. 2A) [0051]+[0073]. The system determines an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target [0073] (the model uses positional relationships between the target and other organs to determine the position of an obscured target). This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077]. This system displays, on a display, the estimated position of the detection target [0030]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains a position information of the landmark (Fig. 16, L) and the detection target (a tumor) (Fig. 16, T) which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
Regarding Claim 18, the reference application teaches a method of operating an endoscope system (Claim 11, line 1) (while the cited claim is a system claim, the processors are necessarily performing a method) including one or more processors (Claim 11, line 2) comprising:
acquiring a first endoscopic image (Claim 11, line 3), the first endoscopic image including a detection target and a landmark having a positional relationship of the landmark to the detection target; (Claim 11, lines 4-9)
acquiring position information of a position of the detection target in the first endoscope image; (Claim 11, lines 4-5)
acquiring position information of a position of a landmark in the first endoscope image (Claim 11, lines 6-7)
storing the position information of the detection target in association with the position information of the landmark (Claim 11, lines 8-11) (setting a relationship is considered “storing” the relationship)
acquiring a second endoscope image (Claim 11, lines 11-12) (updating the endoscope image is obtaining a second image),
determining an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target (Claim 11, lines 9-11).
The reference application fails to explicitly teach the second endoscope image including the landmark and a position at which the detection target is obscured or display, on a display, the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image, and displaying, on a display, the estimated position of the detection target.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while another organ in the image (e.g., the liver) is a landmark (Fig. 2A) [0051]+[0073]. The system determines an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target [0073] (the model uses positional relationships between the target and other organs to determine the position of an obscured target). This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077]. This system displays, on a display, the estimated position of the detection target [0030]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains a position information of the landmark (Fig. 16, L) and the detection target (a tumor) (Fig. 16, T) which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
This is a provisional nonstatutory double patenting rejection.
Claims 1, 11, and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 and 15-16 of copending Application No. 18/463,966 in view of Ishikake and Mori.
Regarding Claim 1, the reference application teaches an endoscope system comprising: (Claim 1, line 1)
one or more processors configured to: (Claim 1, line 2)
acquire a first endoscopic image (Claim 1, line 3), the first endoscopic image including a detection target and a landmark having a positional relationship of the landmark to the detection target; (Claim 1, lines 4-8)
acquire position information of a position of the detection target in the first endoscope image; (Claim 1, lines 4-5)
acquire position information of a position of a landmark in the first endoscope image (Claim 1, lines 6-7)
position information of the detection target in association with the position information of the landmark (Claim 1, lines 8-9);
acquire a second endoscope image (Claim 1, lines 10-12) (an image in which the target is not visible must be a different image from the image including the target and the landmark), the second endoscope image including the landmark and a position at which the detection target is not visible (Claim 1, lines 10-12);
determine an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target (Claim 2);
and display, on a display, the estimated position of the detection target (Claim 2).
The application fails to explicitly teach storing the position information, that the detection target is obscured, or displaying the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while another organ in the image (e.g., the liver) is a landmark (Fig. 2A) [0051]+[0073]. This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077]. The relative position information is stored in a memory [0024].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains position information of the landmark (Fig. 16, L) and of the detection target (a tumor) (Fig. 16, T), which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
Regarding Claim 11, the reference application teaches an endoscope system (Claim 1, line 1) comprising:
one or more processors configured to: (Claim 1, line 2)
acquire a first endoscopic image (Claim 1, line 3), the first endoscopic image including a detection target and a landmark having a positional relationship of the landmark to the detection target; (Claim 1, lines 4-8)
acquire position information of a position of the detection target in the first endoscope image; (Claim 1, lines 4-5)
acquire position information of a position of a landmark in the first endoscope image (Claim 1, lines 6-7)
position information of the detection target in association with the position information of the landmark (Claim 1, lines 8-9)
acquire a second endoscope image (Claim 1, lines 10-12) (the image without the target must be a different image from the image with the target and the landmark), the second endoscope image including the landmark and a position at which the detection target is not visible (Claim 1, lines 10-12)
wherein the first endoscope image is based on first illumination light and the second endoscope image based on second illumination light having a spectrum different from a spectrum of the first illumination light, and the one or more processors are configured to perform the first detection process and the second detection process from the second endoscope image (Claim 6, lines 2-6).
The reference application fails to explicitly teach storing position information, that the undetected target is obscured, determining an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target, or displaying the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while the other organs in the image (e.g., the liver) are landmarks (Fig. 2A) [0051]+[0073]. The system determines an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target [0073] (the model uses positional relationships between the target and other organs to determine the position of an obscured target). This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077]. The relative position information is stored in a memory [0024].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reference application to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
Regarding Claim 18, the reference application teaches a method of operating an endoscope system including one or more processors, the method comprising: (Claim 15, lines 1-2)
acquiring a first endoscopic image (Claim 15, line 3), the first endoscopic image including a detection target and a landmark having a positional relationship of the landmark to the detection target; (Claim 15, lines 4-8)
acquiring position information of a position of the detection target in the first endoscope image; (Claim 15, lines 4-5)
acquiring position information of a position of a landmark in the first endoscope image (Claim 15, lines 6-7)
position information of the detection target in association with the position information of the landmark (Claim 1, lines 8-9)
acquiring a second endoscope image (Claim 1, lines 10-12) (the image without the target must be a different image from the image with the target and the landmark), the second endoscope image including the landmark and a position at which the detection target is not visible (Claim 1, lines 10-12)
determining an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target (Claim 16)
and displaying, on a display, the estimated position of the detection target (Claim 16)
The reference application fails to explicitly teach storing position information, that the undetected target is obscured, or displaying the estimated position of the detection target on the position at which the detection target is obscured in the second endoscope image.
Ishikake teaches a trained model for an endoscopic system (Abstract). This system acquires a second endoscope image [0019]. The second endoscope image includes a landmark and a position at which the target is obscured (Fig. 2A+2B) (the target is covered) [0032]+[0037], while the other organs in the image (e.g., the liver) are landmarks (Fig. 2A) [0051]+[0073]. This system displays the estimated position of the detection target on the position at which the detection target is obscured in the image [0025]+[0029]-[0030]+[0077]. The relative position information is stored in a memory [0024].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to perform a position information estimation process and display the estimated position on the display, as taught by Ishikake, because this makes it easier for an inexperienced physician to perform the surgery, as recognized by Ishikake [0029]-[0030].
The combination fails to explicitly teach the detection target in association with the position information of the landmark in a case where the detection target is not obscured in the first endoscope image.
Mori teaches a method of estimating the location of a landmark in an endoscope image (Abstract). This system obtains position information of the landmark (Fig. 16, L) and of the detection target (a tumor) (Fig. 16, T), which is not obscured in the endoscopic image [0085]+[0087]-[0089].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to obtain the position information of the target in association with the position information of the landmark in an unobscured image, as taught by Mori, because this allows the system to generate and present useful information to the surgeon, as recognized by Mori [0090].
This is a provisional nonstatutory double patenting rejection.
Claims 2 and 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of copending Application No. 18/463,966 in view of Ishikake, Mori, and Meagher.
Regarding Claim 2, the combination of references teaches the invention substantially as claimed. The combination is silent regarding wherein the one or more processors are configured to display the position information of the detection target, the estimated position of the detection target, and the position of the landmark on the display to be distinguished from each other.
Meagher teaches a system for real time augmentations of an endoscopic image (Abstract). This system displays the position information (Fig. 2, A) [0035], the estimated position (Fig. 6) [0036], and the position of the landmark (Fig. 2, B or C) [0035] on the display to be distinguished from each other [0026] (different opacity/color are considered different modes).
It would have been obvious to one of ordinary skill before the effective filing date to modify the combined system to display the position information of the detection target, the estimated position of the detection target, and the position of the landmark to be distinguished from each other, as taught by Meagher, because this better allows the surgeon to navigate, even under compromised visual situations, as recognized by Meagher [0006].
Regarding Claim 19, the combination of references teaches the invention substantially as claimed. The combination is silent regarding displaying the position information of the detection target, the estimated position of the detection target, and the position of the landmark on the display to be distinguishable from each other.
Meagher teaches a system for real time augmentations of an endoscopic image (Abstract). This system displays the position information (Fig. 2, A) [0035], the estimated position information (Fig. 6) [0036], and the position information of the landmark (Fig. 2, B or C) [0035] on the display to be distinguishable from each other [0026] (different opacity/color are considered different modes).
It would have been obvious to one of ordinary skill before the effective filing date to modify the combined system to display the position information of the detection target, the estimated position of the detection target, and the position of the landmark to be distinguishable from each other, as taught by Meagher, because this better allows the surgeon to navigate, even under compromised visual situations, as recognized by Meagher [0006].
Claims 3, 4, and 8 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of copending Application No. 18/463,966 in view of Ishikake, Mori, and Rivlin.
Regarding Claim 3, the combination of references teaches the invention substantially as claimed. The reference application fails to teach wherein the one or more processors are configured to: perform notification using either or both of a notification sound and notification on the display; and perform the notification in at least one case of a case where the actual position information of the detection target is detected during the first detection process or a case where the position information of the landmark is detected during the second detection process.
Rivlin teaches wherein the one or more processors (Fig. 1, 104+142) [0049]+[0054] are configured to: perform notification using either or both of a notification sound and a displayed notification on the display [0028] (bounding boxes/visual indicators are a notification on the display); and perform the notification in at least one case of a case that the position information of the detection target is acquired or a case that the position information of the landmark is acquired [0028].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reference application to provide a notification, as taught by Rivlin, because this allows the user to more easily and intuitively identify the detected features, as recognized by Rivlin [0030].
Regarding Claim 4, the combination of references teaches the invention substantially as claimed. The reference application fails to teach wherein the one or more processors are configured to: perform notification using either or both of a notification sound and notification on the display; and perform the notification in at least one case of a case that the position information of the detection target is stored in association with the position information of the landmark, a case that the second endoscope image is acquired, or a case the estimated position of the detection target is determined.
Rivlin teaches wherein the one or more processors (Fig. 1, 104+142) [0049]+[0054] are configured to: perform notification using either or both of a notification sound and a displayed notification on the display [0028] (bounding boxes/visual indicators are a notification on the display); and perform the notification in at least one case of a case that the position information of the detection target is stored in association with the position information of the landmark, a case that the second endoscope image is acquired, or a case that the estimated position of the detection target is determined [0028] (displaying the landmark and abnormality with a visual indicator on the heads up display is a notification that the position information of the detection target and the position information of the landmark are stored in association with one another [0030]+[0035]+[0049]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reference application to provide a notification, as taught by Rivlin, because this allows the user to more easily and intuitively identify the detected features, as recognized by Rivlin [0030].
Regarding Claim 8, the combination of references teaches the invention substantially as claimed. The reference application fails to teach wherein the landmark includes a plurality of landmarks, and the one or more processors are configured to determine the estimated position of the detection target based on less than all of the plurality of landmarks.
Rivlin teaches wherein the landmark includes a plurality of landmarks [0035] (the intestinal twists) and the one or more processors determine the estimated position based on less than all of the plurality of landmarks [0035] (the system only uses the last twist to define the polyp positions, which is less than the total number of twists).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reference application to determine the estimated position based on less than all of the plurality of landmarks, as taught by Rivlin, because this allows the user to more easily and intuitively identify the detected features, as recognized by Rivlin [0030].
Claim 9 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of copending Application No. 18/463,966 in view of Ishikake, Rivlin, and Taniguchi (U.S. PGPub 2013/0229503 A1).
Regarding Claim 9, the combination of references teaches the invention substantially as claimed. The reference application fails to explicitly teach wherein the landmark includes a plurality of landmarks, and the one or more processors are configured to select the landmark to be displayed on the display among the landmarks, and limit the landmarks to be displayed on the display.
Rivlin teaches wherein the landmark includes a plurality of landmarks [0028]+[0035] and the one or more processors are configured to limit the plurality of landmarks to be displayed on the display [0028]+[0035] (the system only uses the easily identifiable twists, not every possible feature in the intestine; the number of landmarks is therefore limited).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reference application to select and limit landmarks, as taught by Rivlin, because this allows the user to more easily and intuitively identify the detected features, as recognized by Rivlin [0030].
Rivlin fails to explicitly teach wherein the one or more processors are configured to select the landmark to be displayed on the display from among the plurality of landmarks.
Taniguchi teaches a medical imaging system (Abstract). In this system, the one or more processors are configured to select the landmark to be displayed on the display from among the landmarks [0108] (the system selects a landmark to be displayed automatically).
It would have been obvious to one of ordinary skill in the art to modify the combined system to automatically select the landmarks for display, as taught by Taniguchi, because this shortens the review of the captured images by highlighting images which contain features of interest, as recognized by Taniguchi [0007]. One of ordinary skill would further recognize the combination suggests the automatic display of the landmark, as taught by Taniguchi, would be a landmark selected from among the landmarks of Rivlin.
Claim 10 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of copending Application No. 18/463,966 in view of Ishikake, Mori, and Kiyuna et al. (U.S. PGPub 2022/0148182 A1).
Regarding Claim 10, the reference application teaches the invention substantially as claimed. The combination fails to explicitly teach wherein the one or more processors are configured to receive a user operation for designating whether or not the landmark is usable to determine the estimated position of the detection target.
Kiyuna teaches a medical imaging system (Abstract). This system has one or more processors (Fig. 2, 11) [0029] configured to receive a user operation for designating whether or not the landmark is usable to determine the estimated position of the detection target [0050]-[0051] (the user needs to confirm the landmark for determining the relative position of the landmark and the attention part).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combined system to receive a user operation, as taught by Kiyuna, because this helps to ensure the operator does not miss a region of interest, as recognized by Kiyuna [0004]-[0005].
Claim 14 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of copending Application No. 18/463,966 in view of Ishikake, Mori, and Yoshino.
Regarding Claim 14, the reference application teaches the invention substantially as claimed. The combination fails to explicitly teach wherein the landmark is any of a mucous membrane pattern, a shape of an organ, or marking by a user operation.
Yoshino further teaches wherein the landmark is a mucous membrane pattern [0240].
It would have been obvious to one of ordinary skill in the art before the effective filing date to substitute the landmark for a mucous membrane pattern, as taught by Yoshino, as the substitution of one known landmark for another yields predictable results to one of ordinary skill in the art. One of ordinary skill would have been able to carry out such a substitution, and the results of using a mucous membrane pattern as a landmark are reasonably predictable.
Response to Arguments
Applicant's arguments filed 2/2/2026 have been fully considered but they are not persuasive.
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Mori was brought in to teach the position information of the detection target in association with the position information of the landmark in a case that the detection target is not obscured in the first endoscope image.
Applicant argues that Ishikake fails to teach determining “an estimated position of the detection target in the second endoscope image based on the position of the landmark stored in association with the position of the detection target”. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Ishikake teaches that the estimated position of the detection target is located using the positional relationship between subjects (i.e., landmarks) and the location of the target in training images [0073]. The combination of Rivlin and Mori teaches determining the relative positions of landmarks and targets in unobscured images. When viewed together, one of ordinary skill would recognize that the positional relationship of the images of Rivlin and Mori would be used by the model of Ishikake to determine the position of the object in the obscured image. Therefore, the rejection under 35 USC 103 is maintained.
The double patenting rejection is maintained for the reasons detailed above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN D MATTSON whose telephone number is (408)918-7613. The examiner can normally be reached Monday - Friday 9 AM - 5 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN D MATTSON/Primary Examiner, Art Unit 3798