Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on Jan 16, 2026 has been entered.
Response to Arguments
Applicant's arguments filed 01/16/2026 have been fully considered but they are not persuasive.
On pages 2-3 of the “Remarks”, applicant asserts: “The Office stated Iwaki and Morita fail to teach claim 21, and asserted that Watanabe in paragraphs [0064] and [0111] teaches claim 21. However, the Office did not provide articulated reasoning on how these paragraphs read on claim 21. Therefore, clarification from the Office would be appreciated.
In addition, referring to FIGS. 2 and 7 of Watanabe … there is no treatment instrument shown in the endoscope image of FIG. 7. Therefore, Watanabe fails to disclose "recognize, from at least one of the plurality of endoscopic images, whether or not a specific action of the endoscope operator is performed on the portion to be observed according to a specific component appeared in the endoscopic image, wherein the specific action is at least one action of a use of a treatment tool to the portion to be observed, washing of the portion to be observed, length measurement of the portion to be observed, or application of pigment on the portion to be observed, and the specific component is the at least one of the treatment tool used to the portion to be observed, water to wash the portion to be observed, a marker to measure the portion to be observed, or the pigment applied to the portion to be observed" of claim 1.”
Response: Examiner respectfully disagrees with applicant’s argument. Claim 1 does not require that the treatment tool, water, marker, or pigment be explicitly visible in a particular static image example. Rather, the claim broadly requires recognition of whether a specific action is performed according to a specific component appearing in the endoscopic image, which encompasses recognition based on procedural scene classification, image content characteristics associated with treatment, and system state corresponding to treatment operations. Watanabe in paragraphs [0111]-[0112] discloses that the observation scene displayed to the operator changes depending on the procedure being performed, including a transition from magnifying observation to treatment. This constitutes processor recognition that a treatment action is being performed during endoscopy. Watanabe further discloses in paragraph [0064] that a distal end opening portion enables protrusion of a treatment instrument through the treatment instrument channel. Thus, Watanabe clearly discloses the presence and use of treatment tools during endoscopic procedures.
A person of ordinary skill in the art would readily understand that treatment scenes inherently involve the use of treatment tools, washing, measurement, or pigment application, all of which are routine and well-known endoscopic actions. Therefore, Watanabe teaches, or at least suggests, recognizing treatment actions based on components and conditions associated with treatment.
Applicant’s argument improperly imports a requirement that the treatment tool must be visually depicted in the endoscopic image, which is not recited in the claim.
Additionally, the claim limitation recites alternatives joined by “or”, and Morita teaches at least one of the options. Morita in paragraph [0132] discloses that “diagnosis or treatment may be performed while selectively displaying the normal light image or the special light image by operating the system (e.g., switch)”. The system controls whether attention area information is displayed depending on circumstances, thereby reducing operator burden during diagnosis or treatment. Further, Morita explicitly associates the attention area with treatment-relevant regions such as mucosal areas, lesion areas, bubbles, or feces. See Morita paragraph [0212]: “when the user is a doctor, and desires to perform treatment, the attention area refers to an area that includes a mucosal area or a lesion area. If the doctor desires to observe bubbles or feces, the attention area refers to an area that includes a bubble area or a feces area. Specifically, the attention area for the user differs depending on the objective of observation, but necessarily has an observation priority relatively higher than that of other areas”. Treatment-related actions correspond to attention areas such as lesions, bubbles, and feces, which are reasonably understood as involving the use of treatment tools. Indeed, in endoscopic practice, areas such as mucosal lesions, bubbles, or feces are precisely the areas where the operator performs these very actions. Therefore, Morita at least suggests the claimed specific action of “a use of a treatment tool to the portion to be observed”. Thus, the arguments are not persuasive.
Watanabe reasonably teaches or suggests the limitation, and the new reference Miyai (US 20170042407 A1) is relied upon to explicitly demonstrate that treatment instruments appear in endoscopic images and are imaged together with the surgical region, confirming that recognizing operator actions according to components appearing in endoscopic images was well known in the art.
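For clarity of the record regarding the interpretation applied above, the following is a minimal, hypothetical Python sketch (not taken from any cited reference or from applicant's disclosure) of how a processor could recognize a specific action either directly from a specific component appearing in the endoscopic image or indirectly from a procedural scene classification corresponding to a treatment operation. The helper names detect_components and classify_scene are illustrative placeholders, not functions from any reference.

# Hypothetical illustration only; the detectors are assumed placeholders.
COMPONENT_TO_ACTION = {
    "treatment_tool": "use of a treatment tool",
    "water": "washing",
    "marker": "length measurement",
    "pigment": "application of pigment",
}

def recognize_specific_action(frame, detect_components, classify_scene):
    """Return the recognized specific action, or None if none is recognized."""
    # Direct route: a specific component appears in the endoscopic image.
    for component in detect_components(frame):
        if component in COMPONENT_TO_ACTION:
            return COMPONENT_TO_ACTION[component]
    # Indirect route, per the interpretation above: a procedural scene
    # classification corresponding to a treatment operation (cf. the
    # transition to treatment in Watanabe, paras [0111]-[0112]).
    if classify_scene(frame) == "treatment":
        return "use of a treatment tool"
    return None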
Double Patenting
On page 2 of the “Remarks”, applicant asserts “Claims 1, 3-8 are rejected on the ground of nonstatutory double patenting as being unpatentable over allowed claims 1-3, 6, 11-15 of application 16813709 wherein these claims have been allowed. As to the double patenting rejection, an appropriate reply will be submitted to the Office at the conclusion of the prosecution of the pending application.”.
Response: For the reasons provided in the double patenting section of the rejection below, the rejection is maintained. As detailed in the Office action below, the currently claimed invention is not patentably distinct from the invention of applicant's allowed Application No. 16/813,709. The double patenting rejection is therefore maintained.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 3-8 are rejected on the ground of nonstatutory double patenting as being unpatentable over allowed claims 1-3, 6, and 11-15 of copending Application No. 16/813,709. Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the allowed claims anticipates the scope of the instant claims, as outlined in the chart below.
Both allowed Application No. 16/813,709 and the present application claim a processor-based endoscopic image processing device that recognizes operator actions (such as use of a treatment tool, washing, measurement, or pigment application) from the endoscopic images and conditionally controls display of region-of-interest information accordingly. The overlapping claim scope and identical inventive concept render the two claim sets sufficiently similar that one would make the other obvious to a person of ordinary skill in the art. "Anticipation is the epitome of obviousness." Realtime Data, LLC v. Iancu; see MPEP § 1207.03(a)(II)(2).
Current application 18/164,536
1. (currently amended) An endoscopic image processing device comprising a processor configured to: cause a display to display region-of-interest information about a region of interest included in a plurality of endoscopic images of a portion to be observed sequentially picked up by an endoscope operator and to be sequentially displayed on the display; cause a frame-shaped figure indicating a position of the region of interest to be displayed on the endoscopic image based on the region-of-interest information; recognize, from at least one of the plurality of endoscopic images, whether or not a specific action of the endoscope operator is performed on the portion to be observed according to a specific component appeared in the endoscopic image, wherein the specific action is at least one action of a use of a treatment tool to the portion to be observed, washing of the portion to be observed, length measurement of the portion to be observed, or application of pigment on the portion to be observed, and the specific component is the at least one of the treatment tool used to the portion to be observed, water to wash the portion to be observed, a marker to measure the portion to be observed, or the pigment applied to the portion to be observed; in a case where the specific action is not recognized, perform first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level, and in a case where the specific action is recognized, cause the display not to display the region-of-interest information
3. (original) The endoscopic image processing device according to claim 1, wherein the processor is further configured to: acquire the plurality of endoscopic images; detect the region of interest from the acquired endoscopic images; and acquire the region-of-interest information about the detected region of interest.
4. (original) The endoscopic image processing device according to claim 3, wherein the processor is further configured to cause the display to sequentially display the plurality of acquired endoscopic images.
5. (original) The endoscopic image processing device according to claim 1, wherein the processor is further configured to cause a figure to be displayed on the endoscopic image based on the region-of-interest information.
6. (original) The endoscopic image processing device according to claim 1, further comprising an emphasis method storage configured to store an emphasis method for the region-of-interest information, wherein the processor is further configured to cause the region-of-interest information to be displayed by the emphasis method stored in the emphasis method storage.
7. (original) An endoscope system comprising: the endoscopic image processing device according to claim 1; the display; an endoscope configured to be inserted into an object to be examined; and a camera configured to sequentially pick up the plurality of endoscopic images of the portion to be observed included in the object to be examined.
8. (currently amended) An endoscopic image processing device comprising a processor configured to: cause a display to display region-of-interest information about a region of interest included in a plurality of endoscopic images of a portion to be observed sequentially picked up by an endoscope operator and to be sequentially displayed on the display; cause a frame-shaped figure indicating a position of the region of interest to be superimposed and displayed on the endoscopic image as the region-of-interest information; recognize an action of the endoscope operator on the portion to be observed from at least one of the plurality of endoscopic images; switch between first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level and second emphasis display where the region-of-interest information is displayed at a second emphasis level relatively lower than the first emphasis level, according to a result of the recognition of the action of the endoscope operator; recognize, from the at least one of the plurality of endoscopic images, whether or not a specific action of the endoscope operator is performed on the portion to be observed according to a specific component appeared in the endoscopic image, wherein the specific action is at least one action of a use of a treatment tool to the portion to be observed, washing of the portion to be observed, length measurement of the portion to be observed, or application of pigment on the portion to be observed, and the specific component is the at least one of the treatment tool used to the portion to be observed, water to wash the portion to be observed, a marker to measure the portion to be observed, or the pigment applied to the portion to be observed; and perform the first emphasis display in a case where the specific action is not recognized, and cause the display not to display the region-of-interest information in a case where the specific action is recognized
Allowed claims (11/23/2022) of Application No. 16/813,709
1. (currently amended) An endoscopic image processing device comprising: a processor configured to: cause a display to display region-of-interest information about a region of interest included in a plurality of endoscopic images of a portion to be observed sequentially picked up by an endoscope operator and to be sequentially displayed on the display; recognize an endoscope operator's action on the portion to be observed from at least some endoscopic images of the plurality of endoscopic images, and recognize whether or not a specific action is performed, wherein the specific action is at least one action of a use of a treatment tool, washing, length measurement, or pigment observation; switch between first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level and second emphasis display where the region-of-interest information is displayed at a second emphasis level relatively lower than the first emphasis level, according to a recognition result of the processor; and perform the first emphasis display in a case where the specific action is not recognized, and perform the second emphasis display in a case where the specific action is recognized.
2. (previously presented) The endoscopic image processing device according to claim 1, wherein the processor is further configured to: acquire the plurality of endoscopic images; detect the region of interest from the acquired endoscopic images; and acquire the region-of-interest information about the detected region of interest.
3. The endoscopic image processing device according to claim 2, wherein the processor is further configured to cause the display to sequentially display the plurality of acquired endoscopic images.
6. (previously presented) The endoscopic image processing device according to claim 1, wherein the processor displays a figure based on the region-of-interest information.
11. (previously presented) The endoscopic image processing device according to claim 1, further comprising: an emphasis method storage section that stores an emphasis method for the region-of-interest information, wherein the processor displays the region-of-interest information by the emphasis method stored in the emphasis method storage section.
14. (previously presented) An endoscope system comprising: a display; an endoscope that is to be inserted into an object to be examined; a camera that sequentially picks up a plurality of endoscopic images of a portion to be observed included in the object to be examined; and the endoscopic image processing device according to claim 1.
15. (currently amended) An endoscopic image processing method comprising: a display control step of causing a display to display region-of-interest information about a region of interest included in a plurality of endoscopic images of a portion to be observed sequentially picked up by an endoscope operator and to be sequentially displayed on the display; and an action recognition step of recognizing an endoscope operator's action on the portion to be observed from at least some endoscopic images of the plurality of endoscopic images, and recognizing whether or not a specific action is performed, wherein the specific action is at least one action of a use of a treatment tool, washing, length measurement, or pigment observation, wherein first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level and second emphasis display where the region-of-interest information is displayed at a second emphasis level relatively lower than the first emphasis level are switched in the display control step according to a recognition result of the action recognition step, wherein the display control step further comprises performing the first emphasis display in a case where the specific action is not recognized in the action recognition step, and performing the second emphasis display in a case where the specific action is recognized in the action recognition step.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3-8, and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Iwaki (US 20180098690 A1) in view of Morita et al. (US 20120220840 A1), and further in view of Miyai (US 20170042407 A1).
Regarding claim 1, Iwaki teaches an endoscopic image processing device comprising: a processor (see Fig. 5, 300, Abstract; “An endoscope apparatus includes a processor including hardware”) configured to: cause a display (Fig. 5, 400) to display region-of-interest information (Fig. 6A, “AL1” is region-of-interest information, i.e., an alert image; see also para [0054]; “performing display control to display an alert image on the captured image in an overlaid manner based on the attention region and the motion vector, the alert image highlighting the attention region”) about a region of interest (Fig. 6A, “AA1” is a region of interest) included in a plurality of endoscopic images of a portion to be observed sequentially picked up by an endoscope operator and to be sequentially displayed on the display (Figs. 6A and 6B illustrate sequential endoscopic images of the same region of interest, “AA1” and “AA2” respectively; see also para [0087]; “The display control section 350 changes the form of the alert image in such a manner that when an attention region is detected in sequential time series images, a region hidden by the alert image AL in an earlier one of the images can be observed in a later one of the images”); recognize, from at least one of the plurality of endoscopic images, whether or not a specific action of the endoscope operator is performed on the portion to be observed (see para [0098-0100]; “The display control section 350 may perform the display control on the alert image in the second captured image to achieve the second object region that is smaller than the first object region, when the imaging section 200 is determined to have made zooming on the object, during the transition between the first captured image and the second captured image, based on the motion vector….. Thus, the user only needs to perform an operation involving zooming or a translational or rotational motion”; Note: the above statement discloses recognizing whether a zooming, translational, or rotational motion has taken place); in a case where the specific action is not recognized, perform first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level (see Fig. 7A, para [0013]; “a display control process that displays an alert image on the captured image in an overlaid manner based on the attention region and the motion vector, the alert image highlighting the attention region”; Examiner interprets that the “alert image” appears in Fig. 7A but disappears in Fig. 7B based on a motion vector and a detailed observation when a lesion is noticed); and in a case where the specific action is recognized, cause the display not to display the region-of-interest information (see Figs. 4A-4B, 6A-6B, 7A-7B, para [0095-0097]; the “alert image” appears in Fig. 7A but disappears in Fig. 7B based on a motion vector and a detailed observation when a lesion is noticed; again, the different emphasis display is that the alert image is either displayed smaller or removed in a subsequent region-of-interest image when a lesion is noticed from a prior region-of-interest image). However, Iwaki does not teach as further claimed, but
Morita et al. teach cause a frame-shaped figure indicating a position of the region of interest to be displayed on the endoscopic image based on the region-of-interest information (see Figs. 26 and 31, which disclose a frame-shaped figure; para [0213]; “display an area including the lesion area with high visibility”; see also para [0265]; “When it is desired that the candidate attention area information that corresponds to a candidate attention area 1 (see FIG. 15A) indicate a rectangular shape, the position of each pixel included in the candidate attention area 1 is calculated from the coordinates a(m, n) of each local area that belongs to the candidate attention area 1 and information about the pixels included in each local area. A rectangle that is circumscribed to the pixels is set as the candidate attention area, and the position of each pixel included in the candidate attention area is calculated, and output as the candidate attention area information that corresponds to the candidate attention area 1”); wherein the specific action is at least one action of a use of a treatment tool to the portion to be observed, washing of the portion to be observed, length measurement of the portion to be observed, or application of pigment on the portion to be observed (see para [0132]; “treatment may be performed while selectively displaying the normal light image or the special light image by operating the system (e.g., switch)”; see also para [0212]; “when the user is a doctor, and desires to perform treatment, the attention area refers to an area that includes a mucosal area or a lesion area. If the doctor desires to observe bubbles or feces, the attention area refers to an area that includes a bubble area or a feces area. Specifically, the attention area for the user differs depending on the objective of observation, but necessarily has an observation priority relatively higher than that of other areas”).
However, the combination of Iwaki and Morita et al. does not teach whether or not a specific action of the endoscope operator is performed on the portion to be observed according to a specific component appeared in the endoscopic image, and the specific component is the at least one of the treatment tool used to the portion to be observed, water to wash the portion to be observed, a marker to measure the portion to be observed, or the pigment applied to the portion to be observed.
In the same field of endeavor, Miyai teaches whether or not a specific action of the endoscope operator is performed on the portion to be observed according to a specific component appeared in the endoscopic image, and the specific component is the at least one of the treatment tool used to the portion to be observed, water to wash the portion to be observed, a marker to measure the portion to be observed, or the pigment applied to the portion to be observed (see para [0042]; “In the endoscopic surgery, for example, as illustrated in FIG. 2, an insertion portion 25 of the endoscope camera head 11 and two pairs of forceps 81 (81A and 81B) being surgical instruments are inserted into the body of a patient. The endoscope camera head 11 emits light from a tip end of the insertion portion 25, illuminates a surgery region 82 of the patient, and images a state of the two pairs of forceps 81 and the surgery region 82”; Note: the surgical instruments (forceps 81) imply the specific component/treatment tool). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general use of an endoscope apparatus to implement an image acquisition, an attention region detection, a motion vector estimation, and a display control process of Iwaki in view of the use of an image processing device, an electronic apparatus, an endoscope system, and an information storage device of Morita et al., and further in view of the use of image processing to determine a position of a distal end of an important object within a medical image of Miyai, in order to easily estimate a region of interest desired by a practitioner (see para [0043]).
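For illustration only, the following minimal Python sketch (hypothetical; not an implementation from Iwaki, Morita, or Miyai) restates the display-control behavior of claim 1 as mapped above. The helpers recognize_specific_action and draw_frame_figure are assumed placeholders, the former corresponding to the sketch in the Response to Arguments above.

def control_roi_display(frame, roi_info, recognize_specific_action, draw_frame_figure):
    """Return the frame to display, with or without the ROI overlay (hypothetical sketch)."""
    if recognize_specific_action(frame) is None:
        # No specific action recognized: first emphasis display, i.e., the
        # frame-shaped figure is drawn at the ROI position at a first
        # emphasis level (cf. Iwaki's overlaid alert image, para [0013]).
        return draw_frame_figure(frame, roi_info, emphasis_level="first")
    # Specific action recognized: the region-of-interest information is not
    # displayed, so it does not obstruct the operator during the action.
    return frame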
Regarding claim 3, the rejection of claim 1 is incorporated herein.
Iwaki in the combination further teaches wherein the processor is further configured to: acquire the plurality of endoscopic images (see Fig. 5, “300” is the processing section acquiring a series of endoscopic images from imaging section “200”; para [0077; 0079]); detect the region of interest from the acquired endoscopic images (see para [0078; 0086]; i.e., attention region detection section “320” and attention region “AA”); and acquire the region-of-interest information about the detected region of interest (see para [0080; 0086]; i.e., “alert image AL” overlaid on “detected attention region AA”).
Regarding claim 4, the rejection of claim 3 is incorporated herein.
Iwaki further teaches wherein the processor causes the display to sequentially display the plurality of acquired endoscopic images (see Figs. 6A and 6B, which illustrate sequential endoscopic images of the same region of interest, “AA1” and “AA2” respectively; see also para [0087], i.e., “sequential time series images”; see also para [0134]; “a plurality of alert images may be displayed for a single attention region”).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Iwaki in the combination further teaches wherein the processor is further configured to cause a figure to be displayed on the endoscopic image based on the region-of-interest information (see Figs. 11A-11C, e.g., “AL1” and “AL2”).
Regarding claim 6, the rejection of claim 1 is incorporated herein.
Iwaki in the combination further teaches further comprising: an emphasis method storage configured to store an emphasis method for the region-of-interest information, wherein the processor is further configured to cause the region-of-interest information to be displayed by the emphasis method stored in the emphasis method storage (see para [0080]; “The attention region detection section 320 detects an attention region in the captured image. The image storage section 330 stores (records) the captured image. The motion vector estimation section 340 estimates a motion vector based on the captured image at a processing target timing and a captured image obtained in the past (in a narrow sense, obtained at a previous timing) and stored in the image storage section 330. The display control section 350 performs the display control on the alert image based on a result of detecting the attention region and the estimated motion vector. The display control section 350 may perform display control other than that for the alert image”).
Regarding claim 7, the rejection of claim 1 is incorporated herein.
Iwaki in the combination further teaches an endoscope system comprising: a display; an endoscope that is to be inserted into an object to be examined; a camera that sequentially picks up a plurality of endoscopic images of a portion to be observed included in the object to be examined (see para [0040-0043]; “there is provided an endoscope comprising: a processor comprising hardware, the processor being configured to implement: an image acquisition process that acquires a captured image, the captured image being an image of an object obtained by an imaging section”).
Regarding claim 8, the scope of claim 8 is fully encompassed by the scope of claim 1; the rejection analysis of claim 1 is equally applicable here.
Iwaki in the combination further teaches switch between first emphasis display where the region-of-interest information is displayed at a position in the endoscopic image at a first emphasis level and second emphasis display where the region-of-interest information is displayed at a second emphasis level relatively lower than the first emphasis level (see Figs. 7A-7B, para [0095]; “a lesion detected in a past images as illustrated in FIG. 7A and around a lesion part detected in the current image as illustrated in FIG. 7B. When the motion vector is directed toward the image center, the user may be determined to have noticed the lesion and will start detailed observation. Also, in this case, the alert image displayed in the first captured image is removed in the second captured image illustrated in FIG. 7B. Also, in a state illustrated in FIG. 7B, illustrating a state corresponding to that in FIG. 3B, the alert image AL2 is hidden and thus is not overlaid on the image region R1′ corresponding to the first object region illustrated in FIG. 3B. Thus, the second image region and the second object region each have a region of 0”; see also para [0097]; “the display control section 350 may perform the display control on the alert image in the second captured image to achieve the second object region that is smaller than the first object region, when the imaging section 200 is determined to have made at least one of a translational motion and a rotational motion relative to the object, during the transition between the first captured image and the second captured image, based on the motion vector”; Examiner interprets that the “alert image” appears in Fig. 7A but disappears in Fig. 7B based on a motion vector and a detailed observation when a lesion is noticed).
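For illustration only, the Examiner's reading of Iwaki (paras [0095]-[0097], Figs. 7A-7B) can be sketched as the following hypothetical Python pseudologic; it is not code from the reference, and the function name and labels are illustrative assumptions.

def select_emphasis(motion_toward_image_center: bool) -> str:
    """Map Iwaki's motion-vector determination to an emphasis level (hypothetical)."""
    if motion_toward_image_center:
        # The user is determined to have noticed the lesion and begins
        # detailed observation: the alert image is removed or shrunk, i.e.,
        # the relatively lower second emphasis (Fig. 7B).
        return "second emphasis (alert image reduced or hidden)"
    # Otherwise the alert image remains overlaid in full (Fig. 7A).
    return "first emphasis (alert image fully overlaid)"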
Regarding claim 10, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches wherein the processor is further configured to: acquire the plurality of endoscopic images (see Fig. 5, “300” is the processing section acquiring a series of endoscopic images from imaging section “200”; para [0077; 0079]); detect the region of interest from the acquired endoscopic images (see para [0078; 0086]; i.e., attention region detection section “320” and attention region “AA”); and acquire the region-of-interest information about the detected region of interest (see para [0080; 0086]; i.e., “alert image AL” overlaid on “detected attention region AA”).
Regarding claim 11, the rejection of claim 10 is incorporated herein.
Iwaki further teaches wherein the processor causes the display to sequentially display the plurality of acquired endoscopic images (see Figs. 6A and 6B, which illustrate sequential endoscopic images of the same region of interest, “AA1” and “AA2” respectively; see also para [0087], i.e., “sequential time series images”; see also para [0134]; “a plurality of alert images may be displayed for a single attention region”).
Regarding claim 12, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches wherein the processor is further configured to cause a figure to be displayed on the endoscopic image based on the region-of-interest information (see Figs. 11A-11C, e.g., “AL1” and “AL2”).
Regarding claim 13, the rejection of claim 12 is incorporated herein.
Iwaki in the combination further teaches wherein at least one of a color, a shape, or transparency of the figure at the first emphasis level is different from that at the second emphasis level (see para [0035]; FIG. 11A to FIG. 11C illustrate the shape of “AL” changing; “a method of changing a shape of the alert image based on a pan/tilt operation” see also para [0080]; “Examples of such display control include image processing such as color conversion processing, grayscale transformation processing, edge enhancement processing, scaling processing, and noise reduction processing”; Figs. 6A-6B illustrate “AL” disappearing/hidden; Fig. 1 illustrates “AL” in transparent mode).
Regarding claim 14, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches wherein the region-of-interest information is displayed at a position different from the endoscopic image in the first emphasis display (see Figs. 9A-9B, para [0112]; “Thus, the first captured image and the second captured image have different positions of the alert image relative to the attention region. Thus, at least a part of the image region R1′ corresponding to the first object region is not overlaid on the alert image AL2 in the second captured image as illustrated in FIG. 8D. As a result, the object difficult to observe in the first captured image can be easily observed in the second captured image. Specifically, in the examples illustrated in FIG. 8B and FIG. 8D, AL2 is not overlaid on R1′ (the second image region and the second object region each have a size=0). It is a matter of course that AL2 might be overlaid on R1′, that is, the attention region might be not be visible in the first captured image and in the second captured image, depending on a relationship among P0, DRA, and DR1”).
Regarding claim 15, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches wherein the region-of-interest information is displayed at a position in the endoscopic image in the second emphasis display (see Fig. 9B, para [0013]; “a display control process that displays an alert image on the captured image in an overlaid manner based on the attention region and the motion vector, the alert image highlighting the attention region”).
Regarding claim 16, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches wherein the region-of-interest information is displayed at a position different from the endoscopic image in the second emphasis display (see Fig. 11, para [0016]; “wherein the processor implements the display control process that performs display control on the alert image in the second captured image to achieve the second object region that is smaller than the first object region”).
Regarding claim 17, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches further comprising: an emphasis method storage configured to store an emphasis method for the region-of-interest information, wherein the processor is further configured to cause the region-of-interest information to be displayed by the emphasis method stored in the emphasis method storage (see para [0080]; “The attention region detection section 320 detects an attention region in the captured image. The image storage section 330 stores (records) the captured image. The motion vector estimation section 340 estimates a motion vector based on the captured image at a processing target timing and a captured image obtained in the past (in a narrow sense, obtained at a previous timing) and stored in the image storage section 330. The display control section 350 performs the display control on the alert image based on a result of detecting the attention region and the estimated motion vector. The display control section 350 may perform display control other than that for the alert image”).
Regarding claim 18, the rejection of claim 8 is incorporated herein.
Iwaki in the combination further teaches an endoscope system comprising: a display; an endoscope that is to be inserted into an object to be examined; a camera configured to sequentially pick up the plurality of endoscopic images of the portion to be observed included in the object to be examined (see para [0040-0043]; “there is provided an endoscope comprising: a processor comprising hardware, the processor being configured to implement: an image acquisition process that acquires a captured image, the captured image being an image of an object obtained by an imaging section”).
Regarding claim 19, the scope of claim 19 is fully encompassed by the scope of claim 1; the rejection analysis of claim 1 is equally applicable here.
Regarding claim 20, the scope of claim 20 is fully encompassed by the scope of claim 1; the rejection analysis of claim 1 is equally applicable here. (See also para [0134] of Morita et al.; “the target color is superimposed on (displayed within) the normal light image (see FIG. 2)”.)
Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Iwaki and Morita et al. in view of Miyai as applied to claim 1 above, and further in view of Hayami et al. (US 20210327067 A1).
Regarding claim 2, the rejection of claim 1 is incorporated herein. The combination of Iwaki, Morita et al. and Miyai as a whole does not teach as further claimed, but
Hayami et al. teach wherein the processor is further configured to: cause the frame-shaped figure not to be displayed on the endoscopic image in a case where the application of pigment on the portion to be observed is recognized (see para [0102]; “In an endoscopic image display region AG of a display image DGF shown in FIG. 11, an endoscopic image EGF having the same shape and the same size as a shape and a size of the endoscopic image display region AG and equivalent to an image in which a lesioned part included in the lesion region LE in the endoscopic image EGE shown in FIG. 10 is framed out is displayed. According to the display image DGF shown in FIG. 11, a mark is not displayed in the mark display region AM2 corresponding to the reference region AR2 in the endoscopic image EGF”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general use of an endoscope apparatus to implement an image acquisition, an attention region detection, a motion vector estimation, and a display control process of Iwaki in view of the use of an image processing device, an electronic apparatus, an endoscope system, and an information storage device of Morita et al., the use of image processing to determine a position of a distal end of an important object within a medical image of Miyai, and further in view of the use of a recording medium of Hayami et al., in order to specify one reference region immediately before the detection of the lesion region is interrupted (see para [0102]).
Regarding claim 9, the rejection of claim 8 is incorporated herein.
Hayami et al. in the combination further teach wherein the processor is further configured to: cause the frame-shaped figure not to be superimposed and displayed on the endoscopic image in a case where the application of pigment on the portion to be observed is recognized (see para [0102]; “In an endoscopic image display region AG of a display image DGF shown in FIG. 11, an endoscopic image EGF having the same shape and the same size as a shape and a size of the endoscopic image display region AG and equivalent to an image in which a lesioned part included in the lesion region LE in the endoscopic image EGE shown in FIG. 10 is framed out is displayed. According to the display image DGF shown in FIG. 11, a mark is not displayed in the mark display region AM2 corresponding to the reference region AR2 in the endoscopic image EGF”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general use of an endoscope apparatus to implement an image acquisition, an attention region detection, a motion vector estimation, and a display control process of Iwaki, the use of an image processing device, an electronic apparatus, an endoscope system, and an information storage device of Morita et al., and the use of image processing to determine a position of a distal end of an important object within a medical image of Miyai, and further in view of the use of a recording medium of Hayami et al., in order to specify one reference region immediately before the detection of the lesion region is interrupted (see para [0102]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677